Weekends Off on Clinical Rotations? Examining Clinical Opportunity Trends on Weekdays vs Weekends During Internal Medicine Clerkship Rotations in Veterans Health Administration Inpatient Wards


Background

The Accreditation Council for Graduate Medical Education (ACGME) mandates an 80-hour weekly work limit for residents.1 In contrast, decisions regarding undergraduate medical education (UME) are made largely at the local level, with individual institutions setting academic policy for students. These differences in oversight reflect fundamental differences in residents’ and students’ roles in patient care, power, and responsibility. Regarding rotation schedules, internal medicine (IM) clerkship directors have debated the relative value of weekend vs weekday duty during inpatient rotations, a scheduling question of interest to students as well; these conversations, however, are limited by a lack of knowledge about admission patterns. Addressing this information gap would inform policy decisions.

The Veterans Health Administration (VHA) is uniquely positioned to address questions about UME clinical experiences nationwide: annually, over 118,000 students representing 97% of US medical schools train at VHA facilities.2,3 We aim to compare the number and variety of patient encounter opportunities arising during inpatient VHA IM rotations on weekdays vs weekends to inform policy decisions for UME rotation schedules.

Innovation

The VHA Corporate Data Warehouse will be queried for all admissions, diagnoses, and lengths of stay on inpatient IM services at the 420 VHA hospitals affiliated with US medical schools from 2016-2026. We will aggregate case data by day of week, floor, hospital, and Veterans Integrated Service Network (VISN), and determine the number of admissions on weekdays (Monday-Friday) and weekends (Saturday-Sunday). Weekday vs weekend admission data will be compared using generalized mixed-effects models for clustered longitudinal data. Heterogeneity across hospitals and VISNs will be explored to examine regional trends.
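The day-of-week bucketing step described above can be sketched in a few lines. This is an illustrative sketch only, not the actual Corporate Data Warehouse schema or analysis code; the function names and sample dates are hypothetical.

```python
from collections import Counter
from datetime import date

def bucket(d: date) -> str:
    # Python convention: Monday=0 ... Sunday=6;
    # Saturday (5) and Sunday (6) count as weekend
    return "weekend" if d.weekday() >= 5 else "weekday"

def admissions_by_bucket(admission_dates):
    """Tally admissions by weekday vs weekend.

    For a fair comparison, downstream models should use per-day rates:
    weekday totals span 5 days per week, weekend totals only 2.
    """
    return Counter(bucket(d) for d in admission_dates)

# Hypothetical admissions: Mon 2024-01-01, Tue 2024-01-02, Sat 2024-01-06
counts = admissions_by_bucket([date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 6)])
# counts == Counter({'weekday': 2, 'weekend': 1})
```

In practice the comparison would be run on counts clustered by hospital and VISN, as the mixed-effects plan above describes.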

Results

We have drafted strategies to query and curate relevant datasets, developed a preliminary analysis plan, and await data deployment from VHA data stewards.

Conclusions

We believe this will be the first VHA-wide evaluation of patient encounter trends on IM services to examine potential training experiences for medical students. This will increase understanding of the critical role the VHA plays in developing the nation’s healthcare workforce and how patterns of opportunities for clinical education may be distributed over time, informing decisions about rotation schedules to maximize students’ abilities to interact with, learn from, and serve our nation’s veterans.

References
  1. Dimitris KD, Taylor BC, Fankhauser RA. Resident work-week regulations: historical review and modern perspectives. J Surg Educ. 2008;65(4):290-296. doi:10.1016/j.jsurg.2008.05.011
  2. Health professions education statistics. Veterans Health Administration. Accessed March 19, 2025. https://www.va.gov/oaa/docs/OAACurrentStats.pdf
  3. Medical education at VA: It’s all about the Veterans. VA News. Updated August 16, 2021. Accessed March 19, 2025. https://news.va.gov/93370/medical-education-at-va-its-all-about-the-veterans/
Issue
Federal Practitioner 42(suppl 7)

Developing a Multi-Disciplinary Integrative Health Elective at the San Francisco VA


Background

Integrative health (IH) combines conventional and complementary medicine in a coordinated, evidence-based approach to treat the whole person. Nearly 40% of American adults have used complementary health approaches,1 yet IH exposure in medical training is limited. In 2022, the San Francisco VA (SFVA) Health Care Center launched a multidisciplinary clinical IH elective for University of California San Francisco (UCSF) internal medicine and SFVA nurse practitioner residents. A general and targeted needs assessment, including faculty and learner feedback, found that the elective was well received but relied on one-on-one, patient-based teaching. This structure created variable learning experiences and a high faculty burden. Our project aims to formalize and evaluate the IH elective curriculum to better address the needs of both faculty and learners.

Methods

We used Kern’s six-step framework for curriculum development. To reduce variability, we sought to formalize the core curricular content by: 1) reviewing existing elective components, comparing them to similar curricula nationwide, and outlining foundational knowledge based on the exam domains of the American Board of Integrative Medicine (ABOIM);2 2) creating eleven learning objectives across three themes: patient-centered care, systems-based practice, and IH-specific knowledge; 3) developing IH subspecialty experience guides to standardize clinical teaching with suggested takeaways, guided reflection, and curated resources. To reduce faculty burden, we consolidated elective resources into a centralized e-learning hub. Trainees complete a pre/post self-assessment and evaluation at the end of the elective.

Results

We identified key learning opportunities in each IH shadowing experience to enhance learners’ knowledge. We developed an IH e-Learning Hub to provide easy access to elective materials and IH clinical tools. Evaluations from the first two learners who completed the elective indicate that the learning objectives were met and that learners gained increased knowledge of lifestyle medicine, mind-body medicine, manual medicine, and botanicals/dietary supplements. Learners valued increased IH subspecialty familiarity and reported high likelihood of future practice change.

Discussion

The project is ongoing. Next steps include collecting faculty evaluations about their experience, continuing to create and refine experience guides, promoting clinical tools for learners’ future practice, and developing strategies to recruit more learners to the elective.

References
  1. Nahin RL, Rhee A, Stussman B. Use of Complementary Health Approaches Overall and for Pain Management by US Adults. JAMA. 2024;331(7):613-615. doi:10.1001/jama.2023.26775
  2. Integrative medicine exam description. American Board of Physician Specialties. Updated July 2021. Accessed December 12, 2025. https://www.abpsus.org/integrative-medicine-description
Issue
Federal Practitioner 42(suppl 7)

Harm Reduction Integration in an Interprofessional Primary Care Training Clinic


Background

Among people who use drugs (PWUD), harm reduction (HR) is an evidence-based, low-barrier approach to mitigating ongoing substance use risks and is considered a key pillar of the Department of Health and Human Services’ Overdose Prevention Strategy.1 Given their accessibility and continuity, primary care (PC) clinics are optimal sites for education about and provision of HR services.2,3

Aim

  1. Determine the impact of active and passive methods of HR supply distribution.
  2. Recognize the importance of clinician addiction education in the provision of HR services.

Methods

In January 2024, physician and nurse practitioner trainees in the West Haven Veterans Affairs (VA) Center of Education (CoE) in Interprofessional Primary Care received addiction care and HR strategy education. Initially, all patients presenting to the CoE completed a single-item substance use screening. Patients screening positive were offered HR supplies, including fentanyl and xylazine test strips (FTS, XTS), during the encounter (active distribution). Starting October 2024, HR kiosks were implemented in the clinic lobby, offering patients self-serve access to HR supplies (passive distribution). Test strip uptake was tracked through clinical encounter documentation and weekly kiosk inventory.

Results

Between January 2024 and June 2024, 92 FTS and 84 XTS were actively distributed. After implementation of the harm reduction kiosk, 253 FTS and 164 XTS were distributed between October 2024 and February 2025. In the CoE, passive kiosk distribution yielded 2.75 times as many FTS and 1.95 times as many XTS as active distribution during clinical encounters.
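The distribution ratios can be recovered directly from the raw counts reported above; note that the two periods differ slightly in length (roughly six vs five months), so per-month rates would differ somewhat from these raw ratios.

```python
# Counts from the Results above
fts_active, fts_kiosk = 92, 253    # fentanyl test strips
xts_active, xts_kiosk = 84, 164    # xylazine test strips

fts_ratio = fts_kiosk / fts_active   # 2.75: kiosk count is 2.75x the active count
xts_ratio = xts_kiosk / xts_active   # ~1.95
```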

Conclusions

HR kiosk implementation markedly increased test strip uptake in the CoE, suggesting that passive distribution is an effective, low-barrier method of increasing access to HR and substance use disorder (SUD) resources. Although this model may reduce the stigma and logistical barriers associated with presenting for a healthcare encounter, it limits the ability to track and engage patients for more intensive services. While each approach has unique advantages and disadvantages, test strip demand via both methods highlights the significant need for HR resources in PC settings. Continuing education for PC clinicians on low-barrier SUD care and HR is critical to optimizing care for this population.

References
  1. Haffajee RL, Sherry TB, Dubenitz JM, et al. Overdose prevention strategy. US Department of Health and Human Services (issue brief). Published October 27, 2021. Accessed December 11, 2025. https://aspe.hhs.gov/sites/default/files/documents/101936da95b69acb8446a4bad9179cc0/overdose-prevention-strategy.pdf
  2. Substance Abuse and Mental Health Services Administration. Advisory: low barrier models of care for substance use disorders. SAMHSA Publication No. PEP23-02-00-005. Published December 2023. Accessed December 11, 2025. https://library.samhsa.gov/sites/default/files/advisory-low-barrier-models-of-care-pep23-02-00-005.pdf
  3. Substance Abuse and Mental Health Services Administration. Harm reduction framework. Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration; 2023.
Issue
Federal Practitioner 42(suppl 7)

Building Trust: Enhancing Rural Women Veterans’ Healthcare Experiences Through Need-Supportive Patient-Centered Communication


Background

Rural women veterans often confront unique healthcare barriers that undermine trust and engagement, including geographic isolation, gender-related stigma, and limited provider cultural sensitivity. In response, we co-designed an interprofessional communication curriculum to promote relational, patient-centered care grounded in psychological need support.

Innovation

Anchored in Self-Determination Theory (SDT), this curriculum equips nurses and social workers with need-supportive communication strategies that nurture autonomy, competence, and relatedness, integrating two transformative learning methods for enhancing respectful and inclusive listening:

  • Cultural humility reflections for veteran-centered care—personal narratives, storytelling, and power-awareness discussions to build lifelong reflective practices.
  • Medical improv simulations—adaptive improvisational role plays for healthcare environments fostering presence, adaptability, empathy, trust-building, and real-time responsiveness.

Delivered via a multiday health professions learning lab, the training combines asynchronous workshops with in-person facilitated interactions. Core modules cover SDT foundations, need-supportive dialogue, veteran-centered cultural humility, and shared decision-making practices that uplift rural women veterans’ voices. Using Kirkpatrick’s Four-Level Model, we assess impact at multiple tiers:

  1. Reaction: Participant satisfaction and perceived training relevance.
  2. Learning: Pre/post assessments track gains in SDT knowledge and communication skills.
  3. Behavior: Observed simulations and self-reported changes in communication practices.
  4. Results: Qualitative satisfaction metrics and care engagement trends among rural women veterans.
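As a sketch of how the Level 2 (Learning) tier above might be quantified, the following computes the mean pre/post gain across paired participant scores. All scores, the 0-100 scale, and the function name are hypothetical illustrations, not study data or instruments.

```python
# Hypothetical Level 2 (Learning) analysis: paired pre/post knowledge
# scores per participant, summarized as the average post-minus-pre gain.
# Every number below is invented for illustration.

def mean_gain(pre_scores, post_scores):
    """Return the average post-minus-pre change across participants."""
    if len(pre_scores) != len(post_scores):
        raise ValueError("pre and post lists must be paired by participant")
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Example: five participants' SDT knowledge scores (0-100 scale, invented)
pre = [55, 60, 48, 70, 62]
post = [72, 68, 65, 78, 70]
print(round(mean_gain(pre, post), 1))  # average improvement in points
```

A paired design like this controls for each participant's baseline, which matters with a small pilot cohort.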

Results

A pilot cohort (N = 20) across two rural sites is pending implementation. Pre/post surveys will assess improvements in confidence applying need-supportive communication and identify which component is most effective in building empathetic presence. Feedback measures will also indicate whether combining medical improv with cultural humility deepens relational capacity and trust.

Discussion

This program operationalizes SDT within healthcare communication, integrating cultural humility and improvisational learning modalities to enhance care quality for rural women veterans and, ultimately, strengthen provider-patient connections. Health professions learning lab environments can foster sustained behavioral impacts. Future iterations will expand to additional rural VA sites, co-designing content with women veterans through focus groups.

Issue
Federal Practitioner 42(suppl 7)

Tai Chi Modification and Supplemental Movements Quality Improvement Program


Background

The original program consisted of 12 movements, split over 3 weeks with 4 movements taught each week. Range of mobility was the main consideration in developing this health professions education (HPE) quality improvement project: veterans who wanted to participate in Tai Chi were unable to engage in the activity because of the range of movement traditional Tai Chi requires.

Innovation

The HPE quality improvement program developed a 15-movement warm-up, the 12 coordination movements consistent with the original program, and 18 supplemental Tai Chi movements not included in the original program, all of which remain below the shoulders and can be done standing or sitting. Four advanced exercises, including “hip over heel,” were included to challenge participants’ balance when able and to improve hip strength and knee tendon/ligament strength, since Tai Chi loses its potential to increase balance when performed seated.1 The movements drew upon Fu-style Tai Chi, and the program developer received permission from Tommy Kirchoff to use his DVD Healing Exercises. The HPE program consisted of four 30- to 60-minute weekly sessions for learning the movements, followed by another 4 weekly sessions for demonstrating them. Instructors were given written and visual materials to learn from and were evaluated by the developer during the last 4 weeks.

Results

Qualitative data: Instructors noticed a difference in how they feel and appreciated having another option to offer veterans with mobility or standing limitations. Patients reported improvements in mobility related to bending, arm extension, and arm raising, as well as muscle strengthening, hip strengthening, and rotation.

Discussion

Future research should take measurements before and after patient implementation to generate quantitative data on balance, strength, and range of movement, including grip strength, timed up-and-go, and one-legged stands.

References
  1. Skelton DA, Mavroeidi A. How do muscle and bone strengthening and balance activities (MBSBA) vary across the life course, and are there particular ages where MBSBA are most important? J Frailty Sarcopenia Falls. 2018;3(2):74-84. Published June 1, 2018. doi:10.22540/JFSF-03-074
Issue
Federal Practitioner 42(suppl 7)

Improving Life-Sustaining Treatment Discussions and Order Quality in a Primary Care Clinic


Background

Veterans Health Administration Directive 1004.03(1) (Advance Care Planning) aims to establish a “system-wide, patient-centered and evidence-based approach to Advance Care Planning.”1 Life-sustaining treatment (LST) orders document patient preferences regarding interventions such as mechanical ventilation, CPR, dialysis, and artificial nutrition and hydration, and are considered part of an advance care plan. From a bioethics perspective, these orders promote patient autonomy by formalizing patient preferences around LSTs in the medical record, particularly when a patient lacks capacity or cannot make decisions on their own.2 Through consensus building, our team defined vague, inactionable, or incorrectly written LST orders as potentially problematic orders (PPOs). PPOs that cause confusion at the bedside or lack clarity around preferences pose serious risks to patient safety and autonomy by exposing patients to inappropriate initiation or withholding of LSTs. Improving the quality of LST orders and reducing the number of PPOs is crucial for safe and effective implementation of Directive 1004.03(1).

Aim

The aim of this quality improvement project was to reduce the number of PPOs in a VA Community-Based Outpatient Clinic (CBOC) by 75% by the end of 2025.

Methods

The Model for Improvement guided this quality improvement project.3 One year of LST orders was audited, and thematic analysis identified 7 PPO subtypes, including clerical errors, potentially mismatched order sets (eg, a Comfort Care order with no associated DNR order), ill-defined or vague orders, and clinically impractical orders (eg, “consents to one shock during CPR”). We considered vague, ill-defined, and impractical orders the most ethically and clinically challenging, given the possibility of confusion or error at the bedside. Baseline data were collected from October 2022 to October 2023, and post-intervention data were collected from February 2024 to September 2024. Interventions included process changes (clarifying role responsibility, documentation practices, and patient education), regular auditing with supervisor feedback, and staff education.

Results

Post-intervention analysis demonstrated that the overall proportion of PPOs remained unchanged, with 25% of patient charts containing at least one PPO. However, the proportion of PPOs in the most ethically and clinically problematic categories (vague, ill-defined, and impractical orders) decreased from 14.7% to <1%.
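The two proportions reported above (charts with at least one PPO, and charts with a PPO in the high-risk categories) reduce to simple chart-level tallies. The sketch below illustrates that arithmetic; the chart data and subtype labels are invented assumptions, not the project's audit schema.

```python
# Illustrative audit arithmetic: given the set of PPO subtypes found in
# each chart, compute (a) the share of charts with at least one PPO and
# (b) the share of charts with a PPO in the high-risk categories.
# Chart data below are invented for illustration, not project data.

HIGH_RISK = {"vague", "ill-defined", "impractical"}  # assumed labels

def ppo_proportions(charts):
    """charts: list of sets of PPO subtype labels found in each chart."""
    n = len(charts)
    any_ppo = sum(1 for c in charts if c) / n
    high_risk = sum(1 for c in charts if c & HIGH_RISK) / n
    return any_ppo, high_risk

# Example: 8 hypothetical charts, two with a PPO, one of them high-risk
charts = [set(), {"clerical"}, set(), {"vague"}, set(), set(), set(), set()]
any_rate, high_rate = ppo_proportions(charts)
print(f"any PPO: {any_rate:.1%}, high-risk PPO: {high_rate:.1%}")
```

Counting at the chart level (rather than the order level) matches the reported metric of charts containing at least one PPO.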

Conclusions

We successfully reduced the most ethically and clinically challenging PPOs to <1% in our initial intervention. To reduce the overall proportion of PPOs, we plan process automation enhancements, additional physical educational resources, and minor changes to audit criteria. Future projects will address the remaining PPO error types and prepare this project for implementation at other CBOCs.

References
  1. US Department of Veterans Affairs, Veterans Health Administration. VHA Directive 1004.03(1): Advance care planning. Published December 12, 2023. Accessed December 11, 2025. https://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=11610
  2. White DB, Curtis JR, Lo B, Luce JM. Decisions to limit life-sustaining treatment for critically ill patients who lack both decision-making capacity and surrogate decision-makers. Crit Care Med. 2006;34(8):2053-2059. doi:10.1097/01.CCM.0000227654.38708.C1
  3. Ogrinc GS, Headrick LA, Barton AJ, Dolansky MA, Madigosky WS, Miltner RS, Hall AG. Fundamentals of Health Care Improvement: A Guide to Improving Your Patients’ Care (4th edition). Joint Commission Resources and Institute for Healthcare Improvement; 2022.
Issue
Federal Practitioner 42(suppl 7)

A Health Educator’s Primer to Cost-Effectiveness in Health Professions Education


Background

Cost-effectiveness (CE) evaluations of existing and anticipated programs are common in healthcare but rarely used in health professions education (HPE). A systematic review of the HPE literature found not only few examples of CE evaluations but also unclear and inconsistent methodology.1 One proposed reason HPE has been slow to adopt CE evaluations is uncertainty over terminology and how to adapt the methodology to HPE.2 CE evaluations present a further challenge for HPE because educational outcomes are often not easily monetized. However, given the reality of constrained budgets and limited resources, CE evaluations can be a powerful tool for educators seeking to strengthen arguments for proposed innovations and for scholars seeking to conduct rigorous work that withstands critical review.
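One way to see why terminology matters: the core CE calculation is the incremental cost-effectiveness ratio (ICER), incremental cost divided by incremental effect. The sketch below adapts it to a non-monetized HPE outcome (cost per additional exam point); the programs, costs, and score gains are invented for illustration only.

```python
# Minimal sketch of the core CE arithmetic as it might be adapted to HPE:
# the incremental cost-effectiveness ratio (ICER) compares a new program
# against the status quo in cost per unit of educational outcome (here a
# test-score point, since HPE outcomes are rarely monetized).
# All figures are invented for illustration.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect."""
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("programs are equally effective; ICER is undefined")
    return (cost_new - cost_old) / delta_effect

# Example: a $12,000 simulation curriculum vs. a $4,000 lecture series,
# with mean exam gains of 14 vs. 6 points (hypothetical figures)
print(icer(12_000, 4_000, 14, 6))  # dollars per additional point gained
```

The ratio only has meaning relative to a chosen comparator and analytic perspective, which is exactly the kind of decision the glossary is meant to make explicit.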

Innovation

This project aims to make CE evaluations more understandable to HPE educators through a one-page infographic and glossary. The infographic provides a primer that operationalizes the steps involved in CE evaluations and addresses why and when they might be considered in HPE. To improve comprehension, it is being developed collaboratively by health professions educators and an economist, and it will be submitted for publication as a resource to facilitate educators’ scholarly work and conversations with fiscal administrators.

Results

The infographic includes 1) an overview of CE evaluations, 2) the inputs required for a CE evaluation, 3) guidance on interpreting results, 4) a glossary of key terminology, and 5) reasons educators might consider this type of analysis. A final draft will be pilot tested with a focus group to assess interdisciplinary accessibility.

Discussion

Discussions between health professions educators and an economist about this infographic uncovered concepts that were poorly understood or defined differently across disciplines, revealing specific knowledge gaps and misunderstandings. For example, the conversations highlighted key terms that were a common source of confusion; these were added to the glossary, creating a shared vocabulary. The process also helped clarify the steps and information necessary for conducting CE evaluations in HPE, particularly the choice of perspective for the analysis (educator, patient, learner, etc). Overall, this collaboration aimed to make CE evaluations more approachable and understandable for HPE professionals.

References
  1. Foo J, Cook DA, Walsh K, et al. Cost evaluations in health professions education: a systematic review of methods and reporting quality. Med Educ. 2019;53(12):1196-1208. doi:10.1111/medu.13936
  2. Maloney S, Reeves S, Rivers G, Ilic D, Foo J, Walsh K. The Prato Statement on cost and value in professional and interprofessional education. J Interprof Care. 2017;31(1):1-4. doi:10.1080/13561820.2016.1257255
Issue
Federal Practitioner 42(suppl 7)
Publications
Topics
Sections

Background

Cost-effectiveness (CE) evaluations, for existing and anticipated programs, are common in healthcare, but are rarely used in health professions education (HPE). A systematic review of HPE literature found not only few examples of CE evaluations, but also unclear and inconsistent methodology.1 One proposed reason HPE has been slow to adopt CE evaluations is uncertainty over terminology and how to adapt this methodology to HPE.2 CE evaluations present further challenges for HPE since educational outcomes are often not easily monetized. However, given the reality of constrained budgets and limited resources, CE evaluations can be a powerful tool for educators to strengthen arguments for proposed innovations, and for scholars seeking to conduct rigorous work that sustains critical review.

Innovation

This project aims to make CE evaluations more understandable to HPE educators, using a one-page infographic and glossary. This will provide a primer, operationalizing the steps involved in CE evaluations and addressing why and when CE evaluations might be considered in HPE. To improve comprehension, this is being developed collaboratively with health professions educators and an economist. This infographic will be submitted for publication, as a resource to facilitate educators’ scholarly work and conversations with fiscal administrators.

Results

The infographic includes 1) an overview of CE evaluations, 2) information about inputs required for CE evaluations, 3) guidance on interpreting results, 4) a glossary of key terminology, and 5) considerations for why educators might consider this type of analysis. A final draft will be pilot tested with a focus group to assess interdisciplinary accessibility.

Discussion

Discussions between health professions educators and an economist on this infographic uncovered concepts that were poorly understood or defined differently across disciplines, determining specific knowledge gaps and misunderstandings. For example, facilitating conversation between educators and economists highlighted key terms that were a source of misunderstanding. These were then added to the glossary, creating a shared vocabulary. This also helped clarify the steps and information necessary for conducting CE evaluations in HPE, particularly the issue of perspective choice for the analysis (educator, patient, learner, etc.). Overall, this collaboration aimed at making CE evaluations more approachable and understandable for HPE professionals through this infographic.

References
  1. Foo J, Cook DA, Walsh K, et al. Cost evaluations in health professions education: a systematic review of methods and reporting quality. Med Educ. 2019;53(12):1196-1208. doi:10.1111/medu.13936
  2. Maloney S, Reeves S, Rivers G, Ilic D, Foo J, Walsh K. The Prato Statement on cost and value in professional and interprofessional education. J Interprof Care. 2017;31(1):1-4. doi:10.1080/13561820.2016.1257255
Federal Practitioner 42(suppl 7)

Cost Analysis of Dermatology Residency Applications From 2021 to 2024 Using the Texas Seeking Transparency in Application to Residency Database

To the Editor:

Residency applicants, especially those in competitive specialties such as dermatology, face major financial barriers due to the high costs of applications, interviews, and away rotations.1 While several studies have examined application costs in other specialties, few have analyzed expenses associated with dermatology applications.1,2 No data exist on costs following the start of the COVID-19 pandemic in 2020; thus, our study evaluated dermatology application cost trends from 2021 to 2024 and compared them with those of other specialties to identify strategies to reduce the financial burden on applicants.

Self-reported total application costs, application fees, interview expenses, and away rotation costs from 2021 to 2024 were collected from the Texas Seeking Transparency in Application to Residency (STAR) database powered by the UT Southwestern Medical Center (Dallas, Texas).3 The mean total application expenses per year were compared among specialties, and an analysis of variance was used to determine if the differences were statistically significant.
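The comparison described above can be sketched as a one-way ANOVA on per-specialty cost samples. The snippet below is an illustrative reimplementation, not the authors' code, and the cost figures are hypothetical placeholders rather than Texas STAR data.

```python
# Illustrative one-way ANOVA F statistic for comparing mean total application
# costs across specialties. All cost figures are hypothetical placeholders.
from statistics import mean

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)                           # number of specialty groups
    n_total = sum(len(g) for g in groups)     # total observations
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical yearly total costs (USD), 2021-2024, per specialty
dermatology = [2805, 4100, 5500, 6231]
orthopedics = [2250, 3800, 5200, 6750]
neurosurgery = [1750, 4900, 8300, 11250]

f_stat = one_way_anova_f([dermatology, orthopedics, neurosurgery])
print(f"F = {f_stat:.2f}")
```

In practice a library routine such as `scipy.stats.f_oneway` would also return the P value; the hand-rolled version above only shows the arithmetic behind the test.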

The number of applicants who recorded information in the Texas STAR database was 110 in 2021, 163 in 2022, 136 in 2023, and 129 in 2024.3 The total dermatology application expenses increased from $2805 in 2021 to $6231 in 2024; interview costs increased from $404 in 2021 to $911 in 2024; and away rotation costs increased from $850 in 2021 to $3812 in 2024 (all P<.05)(Table). There was no significant change in application fees during the study period ($2176 in 2021 to $2125 in 2024 [P=.58]). Dermatology had the fourth highest average total cost over the study period compared to all other specialties, increasing from $2250 in 2021 to $5250 in 2024, following orthopedic surgery ($2250 in 2021 to $6750 in 2024), plastic surgery ($2250 in 2021 to $9750 in 2024), and neurosurgery ($1750 in 2021 to $11,250 in 2024).

CT116006216-Table

Our study found that dermatology residency application costs increased significantly from 2021 to 2024, primarily driven by rising interview and away rotation expenses (both P<.05). This trend places dermatology among the most expensive fields to apply to for residency. A cross-sectional survey of dermatology residency program directors identified away rotations as one of the top 5 selection criteria, underscoring their importance in the matching process.4 In addition, a cross-sectional analysis of 345 dermatology residents found that 26.2% matched at institutions where they had mentors, including those they connected with through away rotations.5,6 Overall, the high cost of away rotations may partially reflect the competitive nature of the specialty, as building connections at programs may enhance the chances of matching. These costs also can vary based on geography, as rotating in high-cost urban centers can be more expensive than in rural areas; however, rural rotations may be less common due to limited program availability and applicant preferences. For example, nearly 50% of 2024 Electronic Residency Application Service applicants indicated a preference for urban settings, while fewer than 5% selected rural settings.7 Additionally, the high costs associated with applying to residency programs and completing away rotations can disproportionately affect students from rural backgrounds and underrepresented minorities, who may have fewer financial resources.

In our study, the lower application-related expenses in 2021 (during the pandemic) compared to those of 2024 (postpandemic) likely stem from the Association of American Medical Colleges’ recommendation to conduct virtual interviews during the pandemic.8 In 2024, some dermatology programs returned to in-person interviews, with some applicants consequently incurring higher costs related to travel, lodging, and other associated expenses.8 A cost-analysis study of 4153 dermatology applicants from 2016 to 2021 found that the average application costs were $1759 per applicant during the pandemic, when virtual interviews replaced in-person ones, whereas costs were $8476 per applicant during periods with in-person interviews and no COVID-19 restrictions.2 However, we did not observe a significant change in application fees over our study period, likely because the pandemic did not affect application numbers. A cross-sectional analysis of dermatology applicants during the pandemic similarly reported reductions in application-related expenses during the period when interviews were conducted virtually,9 supporting the trend observed in our study. Overall, our findings taken together with other studies highlight the pandemic’s role in reducing expenses and underscore the potential for exploring additional cost-saving measures.

Implementing strategies to reduce these financial burdens—including virtual interviews, increasing student funding for away rotations, and limiting the number of applications individual students can submit—could help alleviate socioeconomic disparities. The new signaling system for residency programs aims to reduce the number of applications submitted, as applicants typically receive interviews only from the limited number of programs they signal, reducing overall application costs. However, our data from the Texas STAR database suggest that application numbers remained relatively stable from 2021 to 2024, indicating that, despite signaling, many applicants still may apply broadly in hopes of improving their chances in an increasingly competitive field. Although a definitive solution to reducing the financial burden on dermatology applicants remains elusive, these strategies can raise awareness and encourage important dialogues.

Limitations of our study include the voluntary nature of the Texas STAR survey, leading to potential voluntary response bias, as well as the small sample size. Students who choose to submit cost data may differ systematically from those who do not; for example, students who match may be more likely to report their outcomes, while those who do not match may be less likely to participate, potentially introducing selection bias. In addition, general awareness of the Texas STAR survey may vary across institutions and among students, further limiting participation. Finally, 2021 was the only presignaling year included, making it difficult to assess longer-term trends. Despite these limitations, the Texas STAR database remains a valuable resource for analyzing general residency application expenses and trends, as it offers comprehensive data from more than 100 medical schools and includes many variables.3

In conclusion, our study found that total dermatology residency application costs increased significantly from 2021 to 2024 (all P<.05), making dermatology one of the most expensive specialties to apply to. This study sets the foundation for future survey-based research among applicants and program directors on strategies to alleviate financial burdens.

References
  1. Mansouri B, Walker GD, Mitchell J, et al. The cost of applying to dermatology residency: 2014 data estimates. J Am Acad Dermatol. 2016;74:754-756. doi:10.1016/j.jaad.2015.10.049
  2. Gorgy M, Shah S, Arbuiso S, et al. Comparison of cost changes due to the COVID-19 pandemic for dermatology residency applications in the USA. Clin Exp Dermatol. 2022;47:600-602. doi:10.1111/ced.15001
  3. UT Southwestern. Texas STAR. 2024. Accessed November 5, 2025. https://www.utsouthwestern.edu/education/medical-school/about-the-school/student-affairs/texas-star.html
  4. Baldwin K, Weidner Z, Ahn J, et al. Are away rotations critical for a successful match in orthopaedic surgery? Clin Orthop Relat Res. 2009;467:3340-3345. doi:10.1007/s11999-009-0920-9
  5. Yeh C, Desai AD, Wilson BN, et al. Cross-sectional analysis of scholarly work and mentor relationships in matched dermatology residency applicants. J Am Acad Dermatol. 2022;86:1437-1439. doi:10.1016/j.jaad.2021.06.861
  6. Gorouhi F, Alikhan A, Rezaei A, et al. Dermatology residency selection criteria with an emphasis on program characteristics: a national program director survey. Dermatol Res Pract. 2014;2014:692760. doi:10.1155/2014/692760
  7. Association of American Medical Colleges. Decoding geographic and setting preferences in residency selection. January 18, 2024. Accessed October 27, 2025. https://www.aamc.org/services/eras-institutions/geographic-preferences
  8. Association of American Medical Colleges. Virtual interviews: tips for program directors. Updated May 14, 2020. https://med.stanford.edu/content/dam/sm/gme/program_portal/pd/pd_meet/2019-2020/8-6-20-Virtual_Interview_Tips_for_Program_Directors_05142020.pdf
  9. Williams GE, Zimmerman JM, Wiggins CJ, et al. The indelible marks on dermatology: impacts of COVID-19 on dermatology residency match using the Texas STAR database. Clin Dermatol. 2023;41:215-218. doi:10.1016/j.clindermatol.2022.12.001
Author and Disclosure Information

Naeha Pathak (ORCID: 0000-0002-9870-0704) is from the Icahn School of Medicine at Mount Sinai, New York, New York. Dr. Lipner (ORCID: 0000-0001-5913-9304) is from the Israel Englander Department of Dermatology, Weill Cornell Medicine, New York.

The authors have no relevant financial disclosures to report.

Correspondence: Shari R. Lipner, MD, PhD, 1305 York Ave, 9th Floor, New York, NY 10021 (shl9032@med.cornell.edu).

Cutis. 2025 December;116(6):216-217. doi:10.12788/cutis.1303

Issue
Cutis - 116(6)
Publications
Topics
Page Number
216-217
Sections
Author and Disclosure Information

Naeha Pathak (ORCID: 0000-0002-9870-0704) is from the Icahn School of Medicine at Mount Sinai, New York, New York. Dr. Lipner (ORCID: 0000-0001-5913-9304) is from the Israel Englander Department of Dermatology, Weill Cornell Medicine, New York.

The authors have no relevant financial disclosures to report.

Correspondence: Shari R. Lipner, MD, PhD, 1305 York Ave, 9th Floor, New York, NY 10021 (shl9032@med.cornell.edu).

Cutis. 2025 December;116(6):216-217. doi:10.12788/cutis.1303

Author and Disclosure Information

Naeha Pathak (ORCID: 0000-0002-9870-0704) is from the Icahn School of Medicine at Mount Sinai, New York, New York. Dr. Lipner (ORCID: 0000-0001-5913-9304) is from the Israel Englander Department of Dermatology, Weill Cornell Medicine, New York.

The authors have no relevant financial disclosures to report.

Correspondence: Shari R. Lipner, MD, PhD, 1305 York Ave, 9th Floor, New York, NY 10021 (shl9032@med.cornell.edu).

Cutis. 2025 December;116(6):216-217. doi:10.12788/cutis.1303

Article PDF
Article PDF

To the Editor:

Residency applicants, especially in competitive specialties such as dermatology, face major financial barriers due to the high costs of applications, interviews, and away rotations.1 While several studies have examined application costs of other specialties, few have analyzed expenses associated with dermatology applications.1,2 There are no data examining costs following the start of the COVID-19 pandemic in 2020; thus, our study evaluated dermatology application cost trends from 2021 to 2024 and compared them to other specialties to identify strategies to reduce the financial burden on applicants.

Self-reported total application costs, application fees, interview expenses, and away rotation costs from 2021 to 2024 were collected from the Texas Seeking Transparency in Application to Residency (STAR) database powered by the UT Southwestern Medical Center (Dallas, Texas).3 The mean total application expenses per year were compared among specialties, and an analysis of variance was used to determine if the differences were statistically significant.

The number of applicants who recorded information in the Texas STAR database was 110 in 2021, 163 in 2022, 136 in 2023, and 129 in 2024.3 The total dermatology application expenses increased from $2805 in 2021 to $6231 in 2024; interview costs increased from $404 in 2021 to $911 in 2024; and away rotation costs increased from $850 in 2021 to $3812 in 2024 (all P<.05)(Table). There was no significant change in application fees during the study period ($2176 in 2021 to $2125 in 2024 [P=.58]). Dermatology had the fourth highest average total cost over the study period compared to all other specialties, increasing from $2250 in 2021 to $5250 in 2024, following orthopedic surgery ($2250 in 2021 to $6750 in 2024), plastic surgery ($2250 in 2021 to $9750 in 2024), and neurosurgery ($1750 in 2021 to $11,250 in 2024).

CT116006216-Table

Our study found that dermatology residency application costs have increased significantly from 2021 to 2024, primarily driven by rising interview and away rotation expenses (both P<.05). This trend places dermatology among the most expensive fields to apply to for residency. A cross-sectional survey of dermatology residency program directors identified away rotations as one of the top 5 selection criteria, underscoring their importance in the matching process.4 In addition, a cross-sectional analysis of 345 dermatology residents found that 26.2% matched at institutions where they had mentors, including those they connected with through away rotations.5,6 Overall, the high cost of away rotations partially may reflect the competitive nature of the specialty, as building connections at programs may enhance the chances of matching. These costs also can vary based on geography, as rotating in high-cost urban centers can be more expensive than in rural areas; however, rural rotations may be less common due to limited program availability and applicant preferences. For example, nearly 50% of 2024 Electronic Residency Application Service applicants indicated a preference for urban settings, while fewer than 5% selected rural settings.7 Additionally, the high costs associated with applying to residency programs and completing away rotations can disproportionately impact students from rural backgrounds and underrepresented minorities, who may have fewer financial resources.

In our study, the lower application-related expenses in 2021 (during the pandemic) compared to those of 2024 (postpandemic) likely stem from the Association of American Medical Colleges’ recommendation to conduct virtual interviews during the pandemic.8 In 2024, some dermatology programs returned to in-person interviews, with some applicants consequently incurring higher costs related to travel, lodging, and other associated expenses.8 A cost-analysis study of 4153 dermatology applicants from 2016 to 2021 found that the average application costs were $1759 per applicant during the pandemic, when virtual interviews replaced in-person ones, whereas costs were $8476 per applicant during periods with in-person interviews and no COVID-19 restrictions.2 However, we did not observe a significant change in application fees over our study period, likely because the pandemic did not affect application numbers. A cross-sectional analysis of dermatology applicants during the pandemic similarly reported reductions in application-related expenses during the period when interviews were conducted virtually,9 supporting the trend observed in our study. Overall, our findings taken together with other studies highlight the pandemic’s role in reducing expenses and underscore the potential for exploring additional cost-saving measures.

Implementing strategies to reduce these financial burdens—including virtual interviews, increasing student funding for away rotations, and limiting the number of applications individual students can submit—could help alleviate socioeconomic disparities. The new signaling system for residency programs aims to reduce the number of applications submitted, as applicants typically receive interviews only from the limited number of programs they signal, reducing overall application costs. However, our data from the Texas STAR database suggest that application numbers remained relatively stable from 2021 to 2024, indicating that, despite signaling, many applicants still may apply broadly in hopes of improving their chances in an increasingly competitive field. Although a definitive solution to reducing the financial burden on dermatology applicants remains elusive, these strategies can raise awareness and encourage important dialogues.

Limitations of our study include the voluntary nature of the Texas STAR survey, leading to potential voluntary response bias, as well as the small sample size. Students who choose to submit cost data may differ systematically from those who do not; for example, students who match may be more likely to report their outcomes, while those who do not match may be less likely to participate, potentially introducing selection bias. In addition, general awareness of the Texas STAR survey may vary across institutions and among students, further limiting the number of students who participate. Additionally, 2021 was the only presignaling year included, making it difficult to assess longer-term trends. Despite these limitations, the Texas STAR database remains a valuable resource for analyzing general residency application expenses and trends, as it offers comprehensive data from more than 100 medical schools and includes many variables.3

In conclusion, our study found that total dermatology residency application costs have increased significantly from 2021 to 2024 (all P<.05), making dermatology among the most expensive specialties for applying. This study sets the foundation for future survey-based research for applicants and program directors on strategies to alleviate financial burdens.

To the Editor:

Residency applicants, especially in competitive specialties such as dermatology, face major financial barriers due to the high costs of applications, interviews, and away rotations.1 While several studies have examined application costs of other specialties, few have analyzed expenses associated with dermatology applications.1,2 There are no data examining costs following the start of the COVID-19 pandemic in 2020; thus, our study evaluated dermatology application cost trends from 2021 to 2024 and compared them to other specialties to identify strategies to reduce the financial burden on applicants.

Self-reported total application costs, application fees, interview expenses, and away rotation costs from 2021 to 2024 were collected from the Texas Seeking Transparency in Application to Residency (STAR) database powered by the UT Southwestern Medical Center (Dallas, Texas).3 The mean total application expenses per year were compared among specialties, and an analysis of variance was used to determine if the differences were statistically significant.

The number of applicants who recorded information in the Texas STAR database was 110 in 2021, 163 in 2022, 136 in 2023, and 129 in 2024.3 The total dermatology application expenses increased from $2805 in 2021 to $6231 in 2024; interview costs increased from $404 in 2021 to $911 in 2024; and away rotation costs increased from $850 in 2021 to $3812 in 2024 (all P<.05)(Table). There was no significant change in application fees during the study period ($2176 in 2021 to $2125 in 2024 [P=.58]). Dermatology had the fourth highest average total cost over the study period compared to all other specialties, increasing from $2250 in 2021 to $5250 in 2024, following orthopedic surgery ($2250 in 2021 to $6750 in 2024), plastic surgery ($2250 in 2021 to $9750 in 2024), and neurosurgery ($1750 in 2021 to $11,250 in 2024).

CT116006216-Table

Our study found that dermatology residency application costs have increased significantly from 2021 to 2024, primarily driven by rising interview and away rotation expenses (both P<.05). This trend places dermatology among the most expensive fields to apply to for residency. A cross-sectional survey of dermatology residency program directors identified away rotations as one of the top 5 selection criteria, underscoring their importance in the matching process.4 In addition, a cross-sectional analysis of 345 dermatology residents found that 26.2% matched at institutions where they had mentors, including those they connected with through away rotations.5,6 Overall, the high cost of away rotations partially may reflect the competitive nature of the specialty, as building connections at programs may enhance the chances of matching. These costs also can vary based on geography, as rotating in high-cost urban centers can be more expensive than in rural areas; however, rural rotations may be less common due to limited program availability and applicant preferences. For example, nearly 50% of 2024 Electronic Residency Application Service applicants indicated a preference for urban settings, while fewer than 5% selected rural settings.7 Additionally, the high costs associated with applying to residency programs and completing away rotations can disproportionately impact students from rural backgrounds and underrepresented minorities, who may have fewer financial resources.

In our study, the lower application-related expenses in 2021 (during the pandemic) compared to those of 2024 (postpandemic) likely stem from the Association of American Medical Colleges’ recommendation to conduct virtual interviews during the pandemic.8 In 2024, some dermatology programs returned to in-person interviews, with some applicants consequently incurring higher costs related to travel, lodging, and other associated expenses.8 A cost-analysis study of 4153 dermatology applicants from 2016 to 2021 found that the average application costs were $1759 per applicant during the pandemic, when virtual interviews replaced in-person ones, whereas costs were $8476 per applicant during periods with in-person interviews and no COVID-19 restrictions.2 However, we did not observe a significant change in application fees over our study period, likely because the pandemic did not affect application numbers. A cross-sectional analysis of dermatology applicants during the pandemic similarly reported reductions in application-related expenses during the period when interviews were conducted virtually,9 supporting the trend observed in our study. Overall, our findings taken together with other studies highlight the pandemic’s role in reducing expenses and underscore the potential for exploring additional cost-saving measures.

Implementing strategies to reduce these financial burdens—including virtual interviews, increasing student funding for away rotations, and limiting the number of applications individual students can submit—could help alleviate socioeconomic disparities. The new signaling system for residency programs aims to reduce the number of applications submitted, as applicants typically receive interviews only from the limited number of programs they signal, reducing overall application costs. However, our data from the Texas STAR database suggest that application numbers remained relatively stable from 2021 to 2024, indicating that, despite signaling, many applicants still may apply broadly in hopes of improving their chances in an increasingly competitive field. Although a definitive solution to reducing the financial burden on dermatology applicants remains elusive, these strategies can raise awareness and encourage important dialogues.

Limitations of our study include the voluntary nature of the Texas STAR survey, leading to potential voluntary response bias, as well as the small sample size. Students who choose to submit cost data may differ systematically from those who do not; for example, students who match may be more likely to report their outcomes, while those who do not match may be less likely to participate, potentially introducing selection bias. In addition, general awareness of the Texas STAR survey may vary across institutions and among students, further limiting the number of students who participate. Additionally, 2021 was the only presignaling year included, making it difficult to assess longer-term trends. Despite these limitations, the Texas STAR database remains a valuable resource for analyzing general residency application expenses and trends, as it offers comprehensive data from more than 100 medical schools and includes many variables.3

In conclusion, our study found that total dermatology residency application costs have increased significantly from 2021 to 2024 (all P<.05), making dermatology among the most expensive specialties for applying. This study sets the foundation for future survey-based research for applicants and program directors on strategies to alleviate financial burdens.

References
  1. Mansouri B, Walker GD, Mitchell J, et al. The cost of applying to dermatology residency: 2014 data estimates. J Am Acad Dermatol. 2016;74:754-756. doi:10.1016/j.jaad.2015.10.049
  2. Gorgy M, Shah S, Arbuiso S, et al. Comparison of cost changes due to the COVID-19 pandemic for dermatology residency applications in the USA. Clin Exp Dermatol. 2022;47:600-602. doi:10.1111/ced.15001<.li>
  3. UT Southwestern. Texas STAR. 2024. Accessed November 5, 2025. https://www.utsouthwestern.edu/education/medical-school/about-the-school/student-affairs/texas-star.html
  4. Baldwin K, Weidner Z, Ahn J, et al. Are away rotations critical for a successful match in orthopaedic surgery? Clin Orthop Relat Res. 2009;467:3340-3345. doi:10.1007/s11999-009-0920-9
  5. Yeh C, Desai AD, Wilson BN, et al. Cross-sectional analysis of scholarly work and mentor relationships in matched dermatology residency applicants. J Am Acad Dermatol. 2022;86:1437-1439. doi:10.1016/j.jaad.2021.06.861
  6. Gorouhi F, Alikhan A, Rezaei A, et al. Dermatology residency selection criteria with an emphasis on program characteristics: a national program director survey. Dermatol Res Pract. 2014;2014:692760. doi:10.1155/2014/692760
  7. Association of American Medical Colleges. Decoding geographic and setting preferences in residency selection. January 18, 2024. Accessed October 27, 2025. https://www.aamc.org/services/eras-institutions/geographic-preferences
  8. Association of American Medical Colleges. Virtual interviews: tips for program directors. Updated May 14, 2020. https://med.stanford.edu/content/dam/sm/gme/program_portal/pd/pd_meet/2019-2020/8-6-20-Virtual_Interview_Tips_for_Program_Directors_05142020.pdf
  9. Williams GE, Zimmerman JM, Wiggins CJ, et al. The indelible marks on dermatology: impacts of COVID-19 on dermatology residency match using the Texas STAR database. Clin Dermatol. 2023;41:215-218. doi:10.1016/j.clindermatol.2022.12.001
Issue
Cutis - 116(6)
Page Number
216-217
Display Headline

Cost Analysis of Dermatology Residency Applications From 2021 to 2024 Using the Texas Seeking Transparency in Application to Residency Database

Inside the Article

PRACTICE POINTS

  • Dermatology application costs increased from 2021 to 2024, largely due to expenses related to away rotations and, in some cases, a return to in-person interviews.
  • Away rotations play a critical role in the dermatology match; however, they also contribute substantially to financial burden.
  • The cost-saving impact of virtual interviews during the COVID-19 pandemic highlights a meaningful opportunity for future cost reduction.
  • Further interventions are needed to meaningfully reduce financial burden and promote equity.

Staff Perspectives on the VISN 20 Tele-Neuropsychology Program

There are 2.7 million (48%) rural veterans enrolled in the Veterans Health Administration (VHA).1 Many VHA-enrolled rural veterans are aged ≥ 65 years (54%), a medically complex population that requires more extensive health care.1 These veterans may live far from US Department of Veterans Affairs (VA) medical centers (VAMCs) and often receive most of their care at rural community-based outpatient clinics (CBOCs). In addition to face-to-face (F2F) services provided at these clinics, many patient care needs may be met using telehealth technology, which can connect veterans at CBOCs with remote health care practitioners (HCPs).

This technology is used across medical specialties throughout the VA and has expanded into neuropsychology services to improve access amid the shortage of rural neuropsychologists. Prior research suggests that access to neuropsychology services improves the functional outcomes of people with diverse medical conditions, including dementia, brain injury, and epilepsy, and reduces emergency department visits, hospitalization duration, and health care costs.2-6 Given that veterans unable to access neuropsychology services may be at risk for poorer outcomes, identifying ways to improve access is a priority. Tele-neuropsychology (teleNP) has been used to expand access for rural veterans in need of these services.7,8 

TeleNP is the application of audiovisual technologies to enable remote clinical encounters for neuropsychological assessments.9 TeleNP has been shown to be generally equivalent to F2F care, without significant differences compared with in-person visits.10-13 TeleNP was increasingly implemented following the COVID-19 pandemic and remains an enduring and expanding feature of neuropsychology care delivery.8,14-18 TeleNP services can increase access to care, especially for rural veterans and those with limited transportation. 

Research in non-VA samples suggests a high level of clinician satisfaction with teleNP.16 In VA samples, research has found high levels of patient satisfaction with teleNP both within Veterans Integrated Services Network (VISN) 20 and in a VA health care system outside VISN 20.7,19 Investigating staff perceptions of these services and their utility compared with non-VA F2F visits is pertinent to the overall feasibility and effectiveness of teleNP. 

TELE-NEUROPSYCHOLOGY PROGRAM 

A clinical resource hub (CRH) is a VISN-governed program that provides veteran health care when local VHA facilities have service gaps.20,21 CRH 20 serves several Pacific Northwest VISN 20 health care systems and began providing teleNP in 2015. The CRH 20 teleNP service serves older adults in rural settings, with > 570 teleNP evaluations completed over a recent 12-month period (May 2023 to May 2024). In the CRH 20 teleNP program, CRH 20 neuropsychologists provide services via telehealth to a patient’s local VAMC, larger health care clinic, or CBOC, or via Veterans Video Connect to the home. 

FIGURE. Usefulness of face-to-face and tele-neuropsychology evaluations and reports (N = 18). Abbreviations: VA, US Department of Veterans Affairs.

Referral pathways to the CRH 20 teleNP program differ across sites. For VISN 20 sites without in-house neuropsychology services, referrals are initiated by HCPs from any discipline. At 2 sites with in-house neuropsychology programs, CRH 20 teleNP referrals typically are forwarded from the in-house service whenever the veteran prefers to be seen at an outlying clinic. All sites, including the CBOCs, are fully equipped for testing, and the HCP encounters veterans in a private office via video-based telehealth technology after a telehealth technician orients them to the space. The private office minimizes environmental disruptions and uses standardized technology to ensure valid results. A limited number of evaluations (< 5%) are offered at home if the veteran is unable to come to a VHA facility, has access to reliable internet, and has a minimally distracting home setting. 

In VISN 20, teleNP is a routine practice for delivering services to rural sites, most of which lack neuropsychologists. However, there is limited information about the extent to which the referral sources find the service useful. This quality improvement (QI) project aimed to better understand how well-established teleNP services were received by referral sources/stakeholders and how services could be improved. Prior to the advent of the CRH 20 teleNP program, staff had the option of referring for F2F evaluations in the local community (outside the VA) at some sites, an option that remains. This QI project examined staff perspectives on the usefulness of CRH 20 teleNP services compared with non-VA F2F services. We administered an anonymous, confidential survey examining these factors to VISN 20 staff within 4 VA health care systems. 

METHODS 

This QI project used a mixed quantitative and qualitative descriptive survey design to elicit feedback. The authors (3 neuropsychologists, 1 geropsychologist, and 1 research coordinator) developed the survey questions. The 13-question survey was voluntary, anonymous, and confidential, and respondents were given an opportunity to ask questions, with the first author serving as the point of contact. 

The survey ascertained information about respondents and their work setting (ie, facility type, specific work setting and location, profession, and rurality of patients). First, respondents were asked whether they had referred patients to neuropsychology services in the past year. Those who had not referred patients during the past year were asked about reasons for nonreferral, with an option to provide an open-ended response. Respondents who did refer were asked how they refer for neuropsychology services and about the usefulness and timeliness of both teleNP and non-VA F2F services. Respondents were then asked, via an open-ended prompt, about their preference for teleNP vs non-VA F2F services. Finally, respondents were invited to share any feedback for improvement regarding teleNP services. 

A link to the survey, hosted on the VA Research Electronic Data Capture system, was emailed to facility and service line leaders at the 4 VISN 20 health care systems for distribution to the staff. All staff were included because in many of the facilities, particularly those that are highly rural with low staffing, it is not uncommon for technicians, nurses, and other support staff to assist with placing consults. In particular, VISN 20 nurses often have an optimal understanding of referral pathways to care for patients and are positioned to give and receive feedback about the utility of neuropsychological evaluations. The Research and Development Committee at the Boise VA Medical Center determined this project to be QI and exempt from institutional review board oversight. The VISN 20 employee labor relations HR supervisor approved this survey, with union awareness. Responses were anonymous. 

Data were imported into Microsoft Excel and IBM SPSS Statistics for further analysis. Data were summarized using descriptive statistics, frequencies, and percentages. Nonparametric χ2 and Wilcoxon signed-rank tests were used to test for differences. An inductive approach to develop codes was used for the 3 open-ended questions. Two authors (CC, CEG) independently coded the responses and reviewed discrepancies. Final code applications were based on consensus. 
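As an illustration, the nonparametric comparisons described above can be sketched in Python. This is a hypothetical reconstruction with made-up ratings and cell counts (the project used Excel and SPSS, and the underlying data are not published), intended only to show the shape of the analyses: a Wilcoxon signed-rank test on paired ordinal ratings and a χ2 test on a 2 × 2 contingency table.

```python
# Hypothetical sketch of the nonparametric analyses described above.
# All numbers below are illustrative placeholders, not the project's data.
import numpy as np
from scipy.stats import wilcoxon, chi2_contingency

# Paired usefulness ratings (1 = "very much so" ... 5 = "not at all")
# for respondents who rated both teleNP and non-VA F2F services.
telenp = np.array([1, 1, 2, 1, 3, 2, 1, 2])
f2f = np.array([2, 1, 2, 2, 3, 1, 2, 3])

# Wilcoxon signed-rank test for related (paired) ordinal samples.
stat, p = wilcoxon(telenp, f2f)
print(f"Wilcoxon statistic = {stat:.2f}, P = {p:.3f}")

# Chi-square test of independence on a 2x2 table, eg, rurality of a
# respondent's patient panel vs whether they referred to teleNP.
table = np.array(
    [[9, 1],   # rural patients:    referred / did not refer
     [3, 5]]   # nonrural patients: referred / did not refer
)
chi2, p2, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p2:.3f}")
```

Note that with small samples and tied ratings, `scipy` falls back to a normal approximation for the Wilcoxon test, and `chi2_contingency` applies Yates continuity correction to 2 × 2 tables by default; SPSS output may differ slightly depending on the options selected.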

RESULTS 

The survey was deployed for 1 month at each of the 4 health care systems between February 7, 2024, and June 15, 2024. Thirty-three staff members responded; of these, 1 person did not respond to an item on whether they referred for neuropsychology services. Eighteen of 33 respondents reported referring patients to teleNP or F2F neuropsychology services in the past year. Fourteen of the 33 respondents stated they did not refer; of these, 2 were unfamiliar with the teleNP service and 12 provided other reasons (eg, new to the VA, ordering consults not within their professional scope, no patients needing services). 

The analysis focused on the 18 respondents who referred for neuropsychology services. Thirteen were within health care system A and 5 were within health care system B (which had no nearby non-VA contracted neuropsychology services); none were in the other 2 health care systems. Ten of 18 respondents (56%) stated they practiced primarily in a rural setting. Five respondents worked in a CBOC, 12 in a main VA facility, 9 in a primary care setting, 8 in a mental health setting, and 3 in other settings (eg, domiciliary); participants could select > 1 setting. The 18 respondents who referred to neuropsychology services included 7 psychologists, 1 nurse, 2 social workers, 1 social services assistant, 4 nurse practitioners, 2 physicians, and 1 HCP of unknown profession. 

When asked to categorize the usefulness of services, more respondents characterized teleNP as very much so (1 on a 5-point scale) than F2F referrals (Figure). The mean (SD) of 1.5 (0.8) for teleNP usefulness fell between very much so and mostly and 1 respondent indicated not applicable. Similarly, the mean (SD) for non-VA F2F usefulness was 1.7 (0.9); 9 respondents rated this item as not applicable. A Wilcoxon signed-rank test of related samples indicated no significant differences between the pairs of ratings (Z = 1.50; P = .41). 

Respondents with rural patients were more likely to refer them to teleNP services compared with respondents with nonrural patients (χ2 = 5.7; P = .02). However, ratings of teleNP usefulness did not significantly differ for respondents serving rural vs nonrural patients (χ2 = 1.4; P = .49). The mean (SD) rating of teleNP usefulness was 1.3 (0.7) for the 9 rural subgroup respondents vs 1.8 (0.9) for the 8 nonrural subgroup respondents, between very much so and mostly for both groups. The mean (SD) rating for non-VA F2F usefulness was 1.8 (1.0) for the 4 rural subgroup respondents and 1.6 (0.8) for the 5 nonrural subgroup respondents, also between very much so and mostly for both groups. 

Most respondents had no preference between teleNP or F2F. Notably, the responses underlying this group were multifaceted and corresponded to multiple codes (ie, access, preference for in-person services, technology, space and logistics, and service boundaries and requirements). According to 1 respondent, “the logistics of scheduling/room availability, technological challenges, and client behavioral issues that are likely to occur could possibly be more easily addressed via in-person sessions for some clients and providers.” 

Six of 18 respondents preferred teleNP, citing timeliness, ease of access, and evaluation quality. One respondent noted that the “majority of my veterans live in extremely remote areas” and may need to take a plane for their visit. The 3 respondents who preferred in-person neuropsychology services cited veterans’ preference for in-person services. 

Open-Ended Feedback 

Thirteen respondents offered feedback on what is working well with teleNP services. Reasons mentioned were related to the service (ie, timeliness, access, quality) and the neuropsychologist (ie, communication and HCP skills). One respondent described the service and neuropsychologists positively, stating that they were “responsive, notes are readily available, clear assessments and recommendations, being available by [Microsoft] Teams/email.” 

Ten respondents provided suggestions for improvement. Suggestions focused on expanding services, such as to “all veterans with cognitive/memory concerns that desire testing,” individuals with attention-deficit/hyperactivity disorder and co-occurring mental health concerns, and those in residential programs. Suggestions included hiring psychology technicians or more staff and providing education at local clinics. 

DISCUSSION 

This QI project examines VA staff perspectives on the usefulness of CRH 20 teleNP services and non-VA F2F services. While the small sample size limits generalizability, this preliminary study suggests that VA teleNP evaluations were similar to those conducted F2F in non-VA settings. While ratings of teleNP usefulness did not differ significantly for those serving rural vs nonrural veterans, respondents serving rural patients were more likely to refer patients to teleNP, suggesting that teleNP may increase access in rural settings, consistent with other studies.7,8,13 This article also presents qualitative suggestions for improving teleNP delivery within the VHA. This is the first known initiative to report on VHA staff satisfaction with a teleNP service and expands the limited literature to date on satisfaction with teleNP services. The findings provide initial support for continued use and, potentially, expansion of teleNP services within this CRH remote hub-and-spoke model. 

Limitations 

A significant limitation of the current work is the small sample size of survey respondents. In particular, while teleNP turnaround time was perceived as faster than non-VA F2F care, only 8 respondents reported on timeliness of F2F evaluation results, which renders it difficult to draw conclusions. Interestingly, not all respondents reported referring to neuropsychology services within the previous year; the most common reasons reflect the perception that referral to neuropsychology was outside of that staff member’s role or not clinically indicated. 

One additional possible explanation for the absence of reporting on the utility of teleNP specifically is that respondents did not track whether their patients were seen by teleNP or F2F services, given that the referral process varies at each health care system. For example, in health care system C, a large number of referrals are forwarded to the service by local VA F2F neuropsychologists. This may speak to the seamlessness of the teleNP process, such that local staff and/or referring HCPs are unaware of the modality through which neuropsychology services are being delivered. It is plausible that the smaller response rate in health care systems B and C relates to how neuropsychology consults are processed at these local VAMCs. We suspect that in these settings, the HCPs referring for neuropsychological evaluations (eg, primary care, mental health) may be unaware that their referrals are being triaged to neuropsychologists in a different program (CRH 20 teleNP); therefore, they would not necessarily know that they used teleNP and did not complete the survey. 

The referral process for these 2 sites contrasts with the process for other VISN 20 sites, where there is no local neuropsychology program triaging. In those settings, referrals from local HCPs come directly to teleNP; thus, it is more likely that these HCPs are aware of teleNP services. Only 2 physicians completed the survey, which may relate to their workload and to a workflow in which other staff are increasingly asked to order consults on the physician’s behalf; this type of workflow increases the number of VHA staff involved in patient care. Ratings of usefulness were highest in health care system B, which has no neuropsychology services at the facility or in the community; this lack of local alternatives may explain the elevated teleNP satisfaction ratings. 

Further work may help identify which aspects of a teleNP service make it more useful than F2F care for this population or determine whether there were HCP- or setting-specific factors that influenced the ratings (ie, preference for VA care or comparison of favorability ratings for HCPs who conduct both teleNP and F2F within the same system). The latter comparison could not be drawn in the current systems because no HCPs provide both teleNP and F2F modalities within VISN 20. Future work could also use a previously published and validated survey measure and pilot the questions with a naive sample before implementation. 

CONCLUSIONS 

This analysis provides initial support for feasibility and acceptability of teleNP as an alternative to traditional in-person neuropsychological evaluations. The small number of survey respondents may reflect the multiple pathways through which consults are forwarded to CRH 20, which includes both direct HCP referrals and forwarded consults from local neuropsychology services. CRH 20 has completed > 570 teleNP evaluations within 1 year, suggesting that lack of awareness may not be hindering veteran access to the service. Replication with a larger sample that is more broadly representative of key stakeholders in veteran care, identification of populations that would benefit most from teleNP services, and dissemination studies of the expansion of teleNP services are all important directions for future work. The robustness and longevity of the VISN 20 teleNP program, coupled with the preliminary positive findings from this project, demonstrate support for further assessment of the potential impact of telehealth on neuropsychological care within the VHA and show that barriers associated with access to health care services in remote settings may be mitigated through teleNP service delivery.

References
  1. US Department of Veterans Affairs, Office of Rural Health. Rural veterans. Updated March 10, 2025. Accessed July 7, 2025. https://www.ruralhealth.va.gov/aboutus/ruralvets.asp
  2. Braun M, Tupper D, Kaufmann P, et al. Neuropsychological assessment: a valuable tool in the diagnosis and management of neurological, neurodevelopmental, medical, and psychiatric disorders. Cogn Behav Neurol. 2011;24(3):107-114. doi:10.1097/wnn.0b013e3182351289
  3. Donders J. The incremental value of neuropsychological assessment: a critical review. Clin Neuropsychol. 2020;34(1):56-87. doi:10.1080/13854046.2019.1575471
  4. Williams MW, Rapport LJ, Hanks RA, et al. Incremental value of neuropsychological evaluations to computed tomography in predicting long-term outcomes after traumatic brain injury. Clin Neuropsychol. 2013;27(3):356-375. doi:10.1080/13854046.2013.765507
  5. Sieg E, Mai Q, Mosti C, Brook M. The utility of neuropsychological consultation in identifying medical inpatients with suspected cognitive impairment at risk for greater hospital utilization. Clin Neuropsychol. 2019;33(1):75-89. doi:10.1080/13854046.2018.1465124
  6. Vankirk KM, Horner MD, Turner TH, et al. CE hospital service utilization is reduced following neuropsychological evaluation in a sample of U.S. veterans. Clin Neuropsychol. 2013;27(5):750-761. doi:10.1080/13854046.2013.783122
  7. Appleman ER, O’Connor MK, Boucher SJ, et al. Teleneuropsychology clinic development and patient satisfaction. Clin Neuropsychol. 2021;35(4):819-837. doi:10.1080/13854046.2020.1871515
  8. Stelmokas J, Ratcliffe LN, Lengu K, et al. Evaluation of teleneuropsychology services in veterans during COVID-19. Psychol Serv. 2024;21(1):65-72. doi:10.1037/ser0000810
  9. Bilder R, Postal KS, Barisa M, et al. Inter Organizational Practice Committee recommendations/guidance for teleneuropsychology in response to the COVID-19 pandemic. Arch Clin Neuropsychol. 2020;35(6):647-659. doi:10.1093/arclin/acaa046
  10. Brearly TW, Shura RD, Martindale SL, et al. Neuropsychological test administration by videoconference: a systematic review and meta-analysis. Neuropsychol Rev. 2017;27(2):174-186. doi:10.1007/s11065-017-9349-1
  11. Brown AD, Kelso W, Eratne D, et al. Investigating equivalence of in-person and telehealth-based neuropsychological assessment performance for individuals being investigated for younger onset dementia. Arch Clin Neuropsychol. 2024;39(5):594-607. doi:10.1093/arclin/acad108
  12. Chapman JE, Ponsford J, Bagot KL, et al. The use of videoconferencing in clinical neuropsychology practice: a mixed methods evaluation of neuropsychologists’ experiences and views. Aust Psychol. 2020;55(6):618-633. doi:10.1111/ap.12471
  13. Marra DE, Hamlet KM, Bauer RM, et al. Validity of teleneuropsychology for older adults in response to COVID-19: a systematic and critical review. Clin Neuropsychol. 2020;34:1411-1452. doi:10.1080/13854046.2020.1769192
  14. Hammers DB, Stolwyk R, Harder L, et al. A survey of international clinical teleneuropsychology service provision prior to COVID-19. Clin Neuropsychol. 2020;34(7-8):1267-1283. doi:10.1080/13854046.2020.1810323
  15. Marra DE, Hoelzle JB, Davis JJ, et al. Initial changes in neuropsychologists’ clinical practice during the COVID-19 pandemic: a survey study. Clin Neuropsychol. 2020;34(7-8):1251-1266. doi:10.1080/13854046.2020.1800098
  16. Parsons MW, Gardner MM, Sherman JC, et al. Feasibility and acceptance of direct-to-home teleneuropsychology services during the COVID-19 pandemic. J Int Neuropsychol Soc. 2022;28(2):210-215. doi:10.1017/s1355617721000436
  17. Rochette AD, Rahman-Filipiak A, Spencer RJ, et al. Teleneuropsychology practice survey during COVID-19 within the United States. Appl Neuropsychol Adult. 2022;29(6):1312-1322. doi:10.1080/23279095.2021.1872576
  18. Messler AC, Hargrave DD, Trittschuh EH, et al. National survey of telehealth neuropsychology practices: current attitudes, practices, and relevance of tele-neuropsychology three years after the onset of COVID-19. Clin Neuropsychol. 2023;39:1017-1036. doi:10.1080/13854046.2023.2192422
  19. Rautman L, Sordahl JA. Veteran satisfaction with tele-neuropsychology services. Clin Neuropsychol. 2018. doi:10.1080/13854046.2018.1453949
  20. US Department of Veterans Affairs. Patient care services: clinical resource hubs. Updated March 20, 2024. Accessed August 4, 2025. https://www.patientcare.va.gov/primarycare/CRH.asp
  21. Burnett K, Stockdale SE, Yoon J, et al. The Clinical Resource Hub initiative: first-year implementation of the Veterans Health Administration regional telehealth contingency staffing program. J Ambul Care Manage. 2023;46(3):228-239. doi:10.1097/JAC.0000000000000468
Author and Disclosure Information

Correspondence: Ana Messler (ana.messler@va.gov) 

Fed Pract. 2025;42(11):e0652. Published online November 20. doi:10.12788/fp.0652

Author affiliations 

aBoise Veterans Affairs Medical Center, Idaho 
bMontana Veterans Affairs Health Care System, Fort Harrison 
cVeterans Affairs Palo Alto Health Care System, California 
dStanford University, Palo Alto, California 

Author disclosures 

The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. 

Ethics and consent 

The Boise Veterans Affairs Medical Center Research and Development Committee determined this project to be quality improvement and exempt from institutional review board review. 

Issue
Federal Practitioner - 42(11)



Data were imported into Microsoft Excel and IBM SPSS Statistics for further analysis. Data were summarized using descriptive statistics, frequencies, and percentages. Nonparametric χ2 and Wilcoxon signed-rank tests were used to test for differences. An inductive approach to develop codes was used for the 3 open-ended questions. Two authors (CC, CEG) independently coded the responses and reviewed discrepancies. Final code applications were based on consensus. 

RESULTS 

The survey was deployed for 1 month between February 7, 2024, and June 15, 2024, at each of the 4 health care systems. Thirty-three staff members responded; of these, 1 person did not respond to an item on whether they referred for neuropsychology services. Eighteen of 33 respondents reported referring patients to teleNP or F2F neuropsychology services in the past year. Fourteen of the 33 respondents stated they did not refer; of these, 2 were unfamiliar with the teleNP service and 12 provided other reasons (eg, new to VA, not in their professional scope to order consults, did not have patients needing services). 

The analysis focused on the 18 respondents who referred for neuropsychology services. Thirteen were within health care system A, and 5 were within health care system B (which had no nearby non-VA contracted neuropsychology services) and none were in the other 2 health care systems. Ten of 18 respondents (56%) stated they practiced primarily in a rural setting. Five respondents worked in a CBOC, 12 in a main VA facility, 9 in a primary care setting, 8 in a mental health setting, and 3 in other settings (eg, domiciliary). Participants could select > 1 setting. The 18 respondents who referred to neuropsychology services included 7 psychologists, 1 nurse, 2 social workers, 1 social services assistant, 4 nurse practitioners, 2 physicians, and 1 unknown HCP. 

When asked to categorize the usefulness of services, more respondents characterized teleNP as very much so (1 on a 5-point scale) than F2F referrals (Figure). The mean (SD) of 1.5 (0.8) for teleNP usefulness fell between very much so and mostly and 1 respondent indicated not applicable. Similarly, the mean (SD) for non-VA F2F usefulness was 1.7 (0.9); 9 respondents rated this item as not applicable. A Wilcoxon signed-rank test of related samples indicated no significant differences between the pairs of ratings (Z = 1.50; P = .41). 

Respondents with rural patients were more likely to refer them to teleNP services compared with respondents with nonrural patients (χ2 = 5.7; P = .02). However, ratings of teleNP usefulness did not significantly differ for those serving rural vs with nonrural patients (χ2 = 1.4; P = .49). Mean (SD) rating of teleNP usefulness was 1.3 (0.7) for the 9 rural subgroup respondents (between very much so and mostly) vs 1.8 (0.9) for the 8 nonrural subgroup respondents (between very much so and mostly). The mean (SD) rating for non-VA F2F usefulness was 1.8 (1.0) for the 4 rural subgroup respondents and 1.6 (0.8) for the 5 nonrural subgroup, between very much so and mostly for both groups. 

Most respondents had no preference between teleNP or F2F. Notably, the responses underlying this group were multifaceted and corresponded to multiple codes (ie, access, preference for in-person services, technology, space and logistics, and service boundaries and requirements). According to 1 respondent, “the logistics of scheduling/room availability, technological challenges, and client behavioral issues that are likely to occur could possibly be more easily addressed via in-person sessions for some clients and providers.” 

Six of 18 respondents preferred teleNP, citing timeliness, ease of access, and evaluation quality. One respondent noted that the “majority of my veterans live in extremely remote areas” and may need to take a plane for their visit. The 3 respondents who preferred in-person neuropsychology services cited veterans’ preference for in-person services. 

Open-Ended Feedback 

Thirteen respondents offered feedback on what is working well with teleNP services. Reasons mentioned were related to the service (ie, timeliness, access, quality) and the neuropsychologist (ie, communication and HCP skills). One respondent described the service and neuropsychologists positively, stating that they were “responsive, notes are readily available, clear assessments and recommendations, being available by [Microsoft] Teams/email.” 

Ten respondents provided suggestions for improvement. Suggestions focused on expanding services, such as to “all veterans with cognitive/memory concerns that desire testing,” individuals with attention-deficit/hyperactivity disorder and co-occurring mental health concerns, and those in residential programs. Suggestions included hiring psychology technicians or more staff and providing education at local clinics. 

DISCUSSION 

This QI project examines VA staff perspectives on the usefulness of CRH 20 teleNP services and non-VA F2F services. While the small sample size limits generalizability, this preliminary study suggests that VA teleNP evaluations were similar to those conducted F2F in non-VA settings. While ratings of teleNP usefulness did not differ significantly for those serving rural vs nonrural veterans, respondents serving rural patients were more likely to refer patients to teleNP, suggesting that teleNP may increase access in rural settings, consistent with other studies.7,8,13 This article also presents qualitative suggestions for improving teleNP delivery within the VHA. This is the first known initiative to report on VHA staff satisfaction with a teleNP service and expands the limited literature to date on satisfaction with teleNP services. The findings provide initial support for continued use and, potentially, expansion of teleNP services within this CRH remote hub-and-spoke model. 

Limitations 

A significant limitation of the current work is the small sample size of survey respondents. In particular, while teleNP turnaround time was perceived as faster than non-VA F2F care, only 8 respondents reported on timeliness of F2F evaluation results, which renders it difficult to draw conclusions. Interestingly, not all respondents reported referring to neuropsychology services within the previous year; the most common reasons reflect the perception that referral to neuropsychology was outside of that staff member’s role or not clinically indicated. 

One additional possible explanation for the absence of reporting on utility of teleNP specifically is that respondents did not track whether their patient was seen by teleNP or F2F services, based on how the referral process varies at each health care system. For example, in health care system C, a large number of referrals are forwarded to the service by local VA F2F neuropsychologists. This may speak to the seamlessness of the teleNP process, such that local staff and/or referring HCPs are unaware of the modality over which neuropsychology is being conducted. It is plausible that the reason behind this smaller response rate in health care systems B and C relates to how neuropsychology consults are processed at these local VAMCs. We suspect that in these settings, the HCPs referring for neuropsychological evaluations (eg, primary care, mental health) may be unaware that their referrals are being triaged to neuropsychologists in a different program (CRH 20 teleNP). Therefore, they would not necessarily know that they used teleNP and didn’t complete the survey. 

The referral process for these 2 sites contrasts with the process for other VISN 20 sites where there is no local neuropsychology program triaging. In these settings, referrals from local HCPs come directly to teleNP; thus, it is more likely that these HCPs are aware of teleNP services. There were only 2 physicians who completed the survey, which may relate to their workload and a workflow where other staff have been increasingly requested to order the consults for the physician. This type of workflow results in an increase in the number of VHA staff involved in patient care. Ratings of usefulness were highest in health care system B, which does not have neuropsychology services at the facility or in the community; this may relate to elevated teleNP satisfaction ratings. 

Further work may help identify which aspects of a teleNP service make it more useful than F2F care for this population or determine whether there were HCPor setting-specific factors that influenced the ratings (ie, preference for VA care or comparison of favorability ratings for the HCPs who conduct teleNP and F2F within the same system). The latter comparisons could not be drawn in the current systems due to the absence of HCPs who provide both teleNP and F2F modalities within VISN 20. Another consideration for future work would be to use a previously published/validated survey measure and piloting of questions with a naive sample before implementation. 

CONCLUSIONS 

This analysis provides initial support for feasibility and acceptability of teleNP as an alternative to traditional in-person neuropsychological evaluations. The small number of survey respondents may reflect the multiple pathways through which consults are forwarded to CRH 20, which includes both direct HCP referrals and forwarded consults from local neuropsychology services. CRH 20 has completed > 570 teleNP evaluations within 1 year, suggesting that lack of awareness may not be hindering veteran access to the service. Replication with a larger sample that is more broadly representative of key stakeholders in veteran care, identification of populations that would benefit most from teleNP services, and dissemination studies of the expansion of teleNP services are all important directions for future work. The robustness and longevity of the VISN 20 teleNP program, coupled with the preliminary positive findings from this project, demonstrate support for further assessment of the potential impact of telehealth on neuropsychological care within the VHA and show that barriers associated with access to health care services in remote settings may be mitigated through teleNP service delivery.

There are 2.7 million (48%) rural veterans enrolled in the Veterans Health Administration (VHA).1 Many VHA-enrolled rural veterans are aged ≥ 65 years (54%), a medically complex population that requires more extensive health care.1 These veterans may live far from US Department of Veterans Affairs (VA) medical centers (VAMCs) and often receive most of their care at rural community-based outpatient clinics (CBOCs). In addition to face-to-face (F2F) services provided at these clinics, many patient care needs may be met using telehealth technology, which can connect veterans at CBOCs with remote health care practitioners (HCPs).

This technology is used across medical specialties throughout the VA and has expanded into neuropsychology services to improve access amid the shortage of rural neuropsychologists. Prior research suggests that access to neuropsychology services improves the functional outcomes of people with diverse medical conditions, including dementia, brain injury, and epilepsy, and reduces emergency department visits, hospitalization duration, and health care costs.2-6 Given that veterans unable to access neuropsychology services may be at risk for poorer outcomes, identifying ways to improve access is a priority. Tele-neuropsychology (teleNP) has been used to expand access for rural veterans in need of these services.7,8 

TeleNP is the application of audiovisual technologies to enable remote clinical encounters for neuropsychological assessments.9 TeleNP has been shown to be generally equivalent to F2F care, with no significant differences in assessment performance compared with in-person visits.10-13 TeleNP was increasingly implemented following the COVID-19 pandemic and remains an enduring and expanding feature of neuropsychology care delivery.8,14-18 TeleNP services can increase access to care, especially for rural veterans and those with limited transportation. 

Research in non-VA samples suggests a high level of clinician satisfaction with teleNP.16 In VA samples, research has found high levels of patient satisfaction with teleNP both within Veterans Integrated Services Network (VISN) 20 and in a VA health care system outside VISN 20.7,19 Investigating staff perceptions of these services and their utility compared with non-VA F2F visits is pertinent to the overall feasibility and effectiveness of teleNP. 

TELE-NEUROPSYCHOLOGY PROGRAM 

A clinical resource hub (CRH) is a VISN-governed program that provides veteran health care when local VHA facilities have service gaps.20,21 CRH 20 serves several Pacific Northwest VISN 20 health care systems and began providing teleNP in 2015. The CRH 20 teleNP service serves older adults in rural settings, with > 570 teleNP evaluations completed over a recent 12-month period (May 2023 to May 2024). In the CRH 20 teleNP program, CRH 20 neuropsychologists deliver services via telehealth to a veteran’s local VAMC, larger health care clinic, or CBOC, or to the home via VA Video Connect. 

FIGURE. Usefulness of face-to-face and tele-neuropsychology evaluations and reports (N = 18). Abbreviations: VA, US Department of Veterans Affairs.

Referral pathways to the CRH 20 teleNP program differ across sites. For VISN 20 sites that do not have any in-house neuropsychology services, referrals are initiated by HCPs from any discipline. At 2 sites with in-house neuropsychology programs, CRH 20 teleNP referrals typically are forwarded from the in-house service whenever the veteran prefers to be seen at an outlying clinic. All sites, including the CBOCs, are fully equipped for testing, and the HCP encounters veterans in a private office via video-based telehealth technology after a telehealth technician orients them to the space. The private office minimizes environmental disruptions and uses standardized technology to ensure valid results. A limited number of evaluations (< 5%) are offered at home if the veteran is unable to come to a VHA facility, has reliable internet access, and has a minimally distracting home setting. 

In VISN 20, teleNP is a routine practice for delivering services to rural sites, most of which lack neuropsychologists. However, there is limited information about the extent to which the referral sources find the service useful. This quality improvement (QI) project aimed to better understand how well-established teleNP services were received by referral sources/stakeholders and how services could be improved. Prior to the advent of the CRH 20 teleNP program, staff had the option of referring for F2F evaluations in the local community (outside the VA) at some sites, an option that remains. This QI project examined staff perspectives on the usefulness of CRH 20 teleNP services compared with non-VA F2F services. We administered an anonymous, confidential survey examining these factors to VISN 20 staff within 4 VA health care systems. 

METHODS 

This QI project used a mixed quantitative and qualitative descriptive survey design to elicit feedback. The authors (3 neuropsychologists, 1 geropsychologist, and 1 research coordinator) developed the survey questions. The 13-question survey was voluntary, anonymous, and confidential, and respondents were given an opportunity to ask questions, with the first author serving as the point of contact. 

The survey ascertained information about respondents and their work setting (ie, facility type, specific work setting and location, profession, and rurality of patients). First, respondents were asked whether they had referred patients to neuropsychology services in the past year. Those who had not referred patients during the past year were asked about reasons for nonreferral, with an option to provide an open-ended response. Respondents who did refer were asked how they refer for neuropsychology services and about the usefulness and timeliness of both teleNP and non-VA F2F services. Respondents were then asked their preference for teleNP vs non-VA F2F via an open-ended prompt. Finally, respondents were invited to share any feedback for improvement regarding teleNP services. 

A link to the survey, hosted on the VA Research Electronic Data Capture system, was emailed to facility and service line leaders at the 4 VISN 20 health care systems for distribution to the staff. All staff were included because in many of the facilities, particularly those that are highly rural with low staffing, it is not uncommon for technicians, nurses, and other support staff to assist with placing consults. In particular, VISN 20 nurses often have an optimal understanding of referral pathways to care for patients and are positioned to give and receive feedback about the utility of neuropsychological evaluations. The Research and Development Committee at the Boise VA Medical Center determined this project to be QI and exempt from institutional review board oversight. The VISN 20 employee labor relations HR supervisor approved this survey, with union awareness. Responses were anonymous. 

Data were imported into Microsoft Excel and IBM SPSS Statistics for further analysis. Data were summarized using descriptive statistics, frequencies, and percentages. Nonparametric χ2 and Wilcoxon signed-rank tests were used to test for differences. An inductive approach to develop codes was used for the 3 open-ended questions. Two authors (CC, CEG) independently coded the responses and reviewed discrepancies. Final code applications were based on consensus. 
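The quantitative analyses named above can be sketched with standard statistical library calls. The sketch below uses illustrative placeholder values, not the project's survey responses; the variable names and counts are hypothetical, chosen only to show the shape of the paired Wilcoxon signed-rank comparison and the χ2 test of independence described in the Methods.

```python
# Hedged sketch of the nonparametric tests described in the Methods.
# All data here are hypothetical placeholders, NOT the project's survey data.
import numpy as np
from scipy import stats

# Paired usefulness ratings on a 5-point scale (1 = "very much so"),
# from respondents who rated BOTH teleNP and non-VA F2F services.
telenp_ratings = np.array([1, 1, 2, 1, 3, 2, 1, 2])
f2f_ratings = np.array([2, 1, 2, 2, 3, 1, 2, 3])

# Wilcoxon signed-rank test for related samples (paired ratings).
wilcoxon_result = stats.wilcoxon(telenp_ratings, f2f_ratings)
print(f"Wilcoxon statistic={wilcoxon_result.statistic:.2f}, "
      f"p={wilcoxon_result.pvalue:.3f}")

# Chi-square test of independence on a 2x2 contingency table,
# e.g. rurality of patients (rows) vs referral to teleNP (columns).
contingency = np.array([[9, 1],   # rural: referred / did not refer
                        [3, 5]])  # nonrural: referred / did not refer
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```

Descriptive statistics (means, SDs, frequencies) would be computed the same way regardless of tool; the original analysis used Microsoft Excel and IBM SPSS Statistics rather than Python.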

RESULTS 

The survey was deployed for 1 month between February 7, 2024, and June 15, 2024, at each of the 4 health care systems. Thirty-three staff members responded; of these, 1 person did not respond to an item on whether they referred for neuropsychology services. Eighteen of 33 respondents reported referring patients to teleNP or F2F neuropsychology services in the past year. Fourteen of the 33 respondents stated they did not refer; of these, 2 were unfamiliar with the teleNP service and 12 provided other reasons (eg, new to VA, not in their professional scope to order consults, did not have patients needing services). 

The analysis focused on the 18 respondents who referred for neuropsychology services. Thirteen were within health care system A, 5 were within health care system B (which had no nearby non-VA contracted neuropsychology services), and none were in the other 2 health care systems. Ten of 18 respondents (56%) stated they practiced primarily in a rural setting. Five respondents worked in a CBOC, 12 in a main VA facility, 9 in a primary care setting, 8 in a mental health setting, and 3 in other settings (eg, domiciliary). Participants could select > 1 setting. The 18 respondents who referred to neuropsychology services included 7 psychologists, 1 nurse, 2 social workers, 1 social services assistant, 4 nurse practitioners, 2 physicians, and 1 unknown HCP. 

When asked to categorize the usefulness of services, more respondents characterized teleNP as very much so (1 on a 5-point scale) than F2F referrals (Figure). The mean (SD) of 1.5 (0.8) for teleNP usefulness fell between very much so and mostly; 1 respondent indicated not applicable. Similarly, the mean (SD) for non-VA F2F usefulness was 1.7 (0.9); 9 respondents rated this item as not applicable. A Wilcoxon signed-rank test of related samples indicated no significant differences between the pairs of ratings (Z = 1.50; P = .41). 

Respondents with rural patients were more likely to refer them to teleNP services compared with respondents with nonrural patients (χ2 = 5.7; P = .02). However, ratings of teleNP usefulness did not significantly differ for those serving rural vs nonrural patients (χ2 = 1.4; P = .49). The mean (SD) rating of teleNP usefulness was 1.3 (0.7) for the 9 rural subgroup respondents vs 1.8 (0.9) for the 8 nonrural subgroup respondents, between very much so and mostly for both groups. The mean (SD) rating for non-VA F2F usefulness was 1.8 (1.0) for the 4 rural subgroup respondents and 1.6 (0.8) for the 5 nonrural subgroup respondents, between very much so and mostly for both groups. 

Most respondents had no preference between teleNP and F2F. Notably, the responses underlying this group were multifaceted and corresponded to multiple codes (ie, access, preference for in-person services, technology, space and logistics, and service boundaries and requirements). According to 1 respondent, “the logistics of scheduling/room availability, technological challenges, and client behavioral issues that are likely to occur could possibly be more easily addressed via in-person sessions for some clients and providers.” 

Six of 18 respondents preferred teleNP, citing timeliness, ease of access, and evaluation quality. One respondent noted that the “majority of my veterans live in extremely remote areas” and may need to take a plane for their visit. The 3 respondents who preferred in-person neuropsychology services cited veterans’ preference for in-person services. 

Open-Ended Feedback 

Thirteen respondents offered feedback on what is working well with teleNP services. Reasons mentioned were related to the service (ie, timeliness, access, quality) and the neuropsychologist (ie, communication and HCP skills). One respondent described the service and neuropsychologists positively, stating that they were “responsive, notes are readily available, clear assessments and recommendations, being available by [Microsoft] Teams/email.” 

Ten respondents provided suggestions for improvement. Suggestions focused on expanding services, such as to “all veterans with cognitive/memory concerns that desire testing,” individuals with attention-deficit/hyperactivity disorder and co-occurring mental health concerns, and those in residential programs. Suggestions included hiring psychology technicians or more staff and providing education at local clinics. 

DISCUSSION 

This QI project examined VA staff perspectives on the usefulness of CRH 20 teleNP services and non-VA F2F services. While the small sample size limits generalizability, this preliminary study suggests that VA teleNP evaluations were perceived as similarly useful to those conducted F2F in non-VA settings. While ratings of teleNP usefulness did not differ significantly for those serving rural vs nonrural veterans, respondents serving rural patients were more likely to refer patients to teleNP, suggesting that teleNP may increase access in rural settings, consistent with other studies.7,8,13 This article also presents qualitative suggestions for improving teleNP delivery within the VHA. This is the first known initiative to report on VHA staff satisfaction with a teleNP service and expands the limited literature to date on satisfaction with teleNP services. The findings provide initial support for continued use and, potentially, expansion of teleNP services within this CRH remote hub-and-spoke model. 

Limitations 

A significant limitation of the current work is the small sample size of survey respondents. In particular, while teleNP turnaround time was perceived as faster than non-VA F2F care, only 8 respondents reported on timeliness of F2F evaluation results, which renders it difficult to draw conclusions. Interestingly, not all respondents reported referring to neuropsychology services within the previous year; the most common reasons reflect the perception that referral to neuropsychology was outside of that staff member’s role or not clinically indicated. 

One additional possible explanation for the absence of reporting on utility of teleNP specifically is that respondents did not track whether their patient was seen by teleNP or F2F services, because the referral process varies at each health care system. For example, in health care system C, a large number of referrals are forwarded to the service by local VA F2F neuropsychologists. This may speak to the seamlessness of the teleNP process, such that local staff and/or referring HCPs are unaware of the modality over which neuropsychology is being conducted. It is plausible that the smaller response rate in health care systems B and C relates to how neuropsychology consults are processed at these local VAMCs. We suspect that in these settings, the HCPs referring for neuropsychological evaluations (eg, primary care, mental health) may be unaware that their referrals are being triaged to neuropsychologists in a different program (CRH 20 teleNP). Therefore, they would not necessarily have known that they used teleNP and, as a result, may not have completed the survey. 

The referral process for these 2 sites contrasts with the process for other VISN 20 sites where there is no local neuropsychology program triaging. In these settings, referrals from local HCPs come directly to teleNP; thus, it is more likely that these HCPs are aware of teleNP services. Only 2 physicians completed the survey, which may relate to their workload and a workflow in which other staff have been increasingly requested to order consults for the physician. This type of workflow increases the number of VHA staff involved in patient care. Ratings of usefulness were highest in health care system B, which does not have neuropsychology services at the facility or in the community; the absence of alternatives may explain the elevated teleNP satisfaction ratings. 

Further work may help identify which aspects of a teleNP service make it more useful than F2F care for this population or determine whether there were HCP- or setting-specific factors that influenced the ratings (ie, preference for VA care or comparison of favorability ratings for the HCPs who conduct teleNP and F2F within the same system). The latter comparisons could not be drawn in the current systems due to the absence of HCPs who provide both teleNP and F2F modalities within VISN 20. Another consideration for future work would be to use a previously published and validated survey measure and to pilot questions with a naive sample before implementation. 

CONCLUSIONS 

This analysis provides initial support for the feasibility and acceptability of teleNP as an alternative to traditional in-person neuropsychological evaluations. The small number of survey respondents may reflect the multiple pathways through which consults are forwarded to CRH 20, which include both direct HCP referrals and forwarded consults from local neuropsychology services. CRH 20 has completed > 570 teleNP evaluations within 1 year, suggesting that lack of awareness may not be hindering veteran access to the service. Replication with a larger sample that is more broadly representative of key stakeholders in veteran care, identification of populations that would benefit most from teleNP services, and dissemination studies of the expansion of teleNP services are all important directions for future work. The robustness and longevity of the VISN 20 teleNP program, coupled with the preliminary positive findings from this project, support further assessment of the potential impact of telehealth on neuropsychological care within the VHA. They also suggest that barriers associated with access to health care services in remote settings may be mitigated through teleNP service delivery.

References
  1. US Department of Veterans Affairs, Office of Rural Health. Rural veterans. Updated March 10, 2025. Accessed July 7, 2025. https://www.ruralhealth.va.gov/aboutus/ruralvets.asp
  2. Braun M, Tupper D, Kaufmann P, et al. Neuropsychological assessment: a valuable tool in the diagnosis and management of neurological, neurodevelopmental, medical, and psychiatric disorders. Cogn Behav Neurol. 2011;24(3):107-114. doi:10.1097/wnn.0b013e3182351289
  3. Donders J. The incremental value of neuropsychological assessment: a critical review. Clin Neuropsychol. 2020;34(1):56-87. doi:10.1080/13854046.2019.1575471
  4. Williams MW, Rapport LJ, Hanks RA, et al. Incremental value of neuropsychological evaluations to computed tomography in predicting long-term outcomes after traumatic brain injury. Clin Neuropsychol. 2013;27(3):356-375. doi:10.1080/13854046.2013.765507
  5. Sieg E, Mai Q, Mosti C, Brook M. The utility of neuropsychological consultation in identifying medical inpatients with suspected cognitive impairment at risk for greater hospital utilization. Clin Neuropsychol. 2019;33(1):75-89. doi:10.1080/13854046.2018.1465124
  6. Vankirk KM, Horner MD, Turner TH, et al. CE hospital service utilization is reduced following neuropsychological evaluation in a sample of U.S. veterans. Clin Neuropsychol. 2013;27(5):750-761. doi:10.1080/13854046.2013.783122
  7. Appleman ER, O’Connor MK, Boucher SJ, et al. Teleneuropsychology clinic development and patient satisfaction. Clin Neuropsychol. 2021;35(4):819-837. doi:10.1080/13854046.2020.1871515
  8. Stelmokas J, Ratcliffe LN, Lengu K, et al. Evaluation of teleneuropsychology services in veterans during COVID-19. Psychol Serv. 2024;21(1):65-72. doi:10.1037/ser0000810
  9. Bilder RM, Postal KS, Barisa M, et al. Inter Organizational Practice Committee recommendations/guidance for teleneuropsychology in response to the COVID-19 pandemic. Arch Clin Neuropsychol. 2020;35(6):647-659. doi:10.1093/arclin/acaa046
  10. Brearly TW, Shura RD, Martindale SL, et al. Neuropsychological test administration by videoconference: a systematic review and meta-analysis. Neuropsychol Rev. 2017;27(2):174-186. doi:10.1007/s11065-017-9349-1
  11. Brown AD, Kelso W, Eratne D, et al. Investigating equivalence of in-person and telehealth-based neuropsychological assessment performance for individuals being investigated for younger onset dementia. Arch Clin Neuropsychol. 2024;39(5):594-607. doi:10.1093/arclin/acad108
  12. Chapman JE, Ponsford J, Bagot KL, et al. The use of videoconferencing in clinical neuropsychology practice: a mixed methods evaluation of neuropsychologists’ experiences and views. Aust Psychol. 2020;55(6):618-633. doi:10.1111/ap.12471
  13. Marra DE, Hamlet KM, Bauer RM, et al. Validity of teleneuropsychology for older adults in response to COVID-19: a systematic and critical review. Clin Neuropsychol. 2020;34:1411-1452. doi:10.1080/13854046.2020.1769192
  14. Hammers DB, Stolwyk R, Harder L, et al. A survey of international clinical teleneuropsychology service provision prior to COVID-19. Clin Neuropsychol. 2020;34(7-8):1267- 1283. doi:10.1080/13854046.2020.1810323
  15. Marra DE, Hoelzle JB, Davis JJ, et al. Initial changes in neuropsychologists’ clinical practice during the COVID-19 pandemic: a survey study. Clin Neuropsychol. 2020;34(7- 8):1251-1266. doi:10.1080/13854046.2020.1800098
  16. Parsons MW, Gardner MM, Sherman, JC et al. Feasibility and acceptance of direct-to-home teleneuropsychology services during the COVID-19 pandemic. J Int Neuropsychol Soc. 2022;28(2):210-215. doi:10.1017/s1355617721000436
  17. Rochette AD, Rahman-Filipiak A, Spencer RJ, et al. Teleneuropsychology practice survey during COVID-19 within the United States. Appl Neuropsychol Adult. 2022;29(6):1312- 1322. doi:10.1080/23279095.2021.1872576
  18. Messler AC, Hargrave DD, Trittschuh EH, et al. National survey of telehealth neuropsychology practices: current attitudes, practices, and relevance of tele-neuropsychology three years after the onset of COVID-19. Clin Neuropsychol. 2023;39:1017-1036. doi:10.1080/13854046.2023.2192422
  19. Rautman L, Sordahl JA. Veteran satisfaction with tele-neuropsychology services. Clin Neuropsychol. 2018;32:1453949. doi:10.1080/13854046.2018.1453949
  20. US Department of Veterans Affairs. Patient care services: clinical resource hubs. Updated March 20, 2024. Accessed August 4, 2025. https://www.patientcare .va.gov/primarycare/CRH.asp  
  21. Burnett K, Stockdale SE, Yoon J, et al. The Clinical Resource Hub initiative: first-year implementation of the Veterans Health Administration regional telehealth contingency staffing program. Ambul Care Manage. 2023;46(3):228-239. doi:10.1097/JAC.0000000000000468
References
  1. US Department of Veterans Affairs, Office of Rural Health. Rural veterans. Updated March 10, 2025. Accessed July 7, 2025. https://www.ruralhealth.va.gov/aboutus/ruralvets.asp
  2. Braun M, Tupper D, Kaufmann P, et al. Neuropsychological assessment: a valuable tool in the diagnosis and management of neurological, neurodevelopmental, medical, and psychiatric disorders. Cogn Behav Neurol. 2011;24(3):107-114. doi:10.1097/wnn.0b013e3182351289
  3. Donders J. The incremental value of neuropsychological assessment: a critical review. Clin Neuropsychol. 2020;34(1):56-87. doi:10.1080/13854046.2019.1575471
  4. Williams MW, Rapport LJ, Hanks RA, et al. Incremental value of neuropsychological evaluations to computed tomography in predicting long-term outcomes after traumatic brain injury. Clin Neuropsychol. 2013;27(3):356-375. doi:10.1080/13854046.2013.765507
  5. Sieg E, Mai Q, Mosti C, Brook M. The utility of neuropsychological consultation in identifying medical inpatients with suspected cognitive impairment at risk for greater hospital utilization. Clin Neuropsychol. 2019;33(1):75-89. doi:10.1080/13854046.2018.1465124
  6. Vankirk KM, Horner MD, Turner TH, et al. CE hospital service utilization is reduced following neuropsychological evaluation in a sample of U.S. veterans. Clin Neuropsychol. 2013;27(5):750-761. doi:10.1080/13854046.2013.783122
  7. Appleman ER, O’Connor MK, Boucher SJ, et al. Teleneuropsychology clinic development and patient satisfaction. Clin Neuropsychol. 2021;35(4):819-837. doi:10.1080/13854046.2020.1871515
  8. Stelmokas J, Ratcliffe LN, Lengu K, et al. Evaluation of teleneuropsychology services in veterans during COVID-19. Psychol Serv. 2024;21(1):65-72. doi:10.1037/ser0000810
  9. Bilder R Postal KS, Barisa M, et al. Inter Organizational Practice Committee recommendations/guidance for teleneuropsychology in response to the COVID-19 pandemic. Arch Clin Neuropsychol. 2020;35(6):647-659. doi:10.1093/arclin/acaa046
  10. Brearly TW, Shura RD, Martindale SL, et al. Neuropsychological test administration by videoconference: a systematic review and meta-analysis. Neuropsychol Rev. 2017;27(2):174-186. doi:10.1007/s11065-017-9349-1
  11. Brown AD, Kelso W, Eratne D, et al. Investigating equivalence of in-person and telehealth-based neuropsychological assessment performance for individuals being investigated for younger onset dementia. Arch Clin Neuropsychol. 2024;39(5):594-607. doi:10.1093/arclin/acad108
  12. Chapman JE, Ponsford J, Bagot KL, et al. The use of videoconferencing in clinical neuropsychology practice: a mixed methods evaluation of neuropsychologists’ experiences and views. Aust Psychol. 2020;55(6):618-633. doi:10.1111/ap.12471
  13. Marra DE, Hamlet KM, Bauer RM, et al. Validity of teleneuropsychology for older adults in response to COVID-19: a systematic and critical review. Clin Neuropsychol. 2020;34:1411-1452. doi:10.1080/13854046.2020.1769192
  14. Hammers DB, Stolwyk R, Harder L, et al. A survey of international clinical teleneuropsychology service provision prior to COVID-19. Clin Neuropsychol. 2020;34(7-8):1267- 1283. doi:10.1080/13854046.2020.1810323
  15. Marra DE, Hoelzle JB, Davis JJ, et al. Initial changes in neuropsychologists’ clinical practice during the COVID-19 pandemic: a survey study. Clin Neuropsychol. 2020;34(7- 8):1251-1266. doi:10.1080/13854046.2020.1800098
  16. Parsons MW, Gardner MM, Sherman, JC et al. Feasibility and acceptance of direct-to-home teleneuropsychology services during the COVID-19 pandemic. J Int Neuropsychol Soc. 2022;28(2):210-215. doi:10.1017/s1355617721000436
  17. Rochette AD, Rahman-Filipiak A, Spencer RJ, et al. Teleneuropsychology practice survey during COVID-19 within the United States. Appl Neuropsychol Adult. 2022;29(6):1312- 1322. doi:10.1080/23279095.2021.1872576
  18. Messler AC, Hargrave DD, Trittschuh EH, et al. National survey of telehealth neuropsychology practices: current attitudes, practices, and relevance of tele-neuropsychology three years after the onset of COVID-19. Clin Neuropsychol. 2023;39:1017-1036. doi:10.1080/13854046.2023.2192422
  19. Rautman L, Sordahl JA. Veteran satisfaction with tele-neuropsychology services. Clin Neuropsychol. 2018;32:1453949. doi:10.1080/13854046.2018.1453949
  20. US Department of Veterans Affairs. Patient care services: clinical resource hubs. Updated March 20, 2024. Accessed August 4, 2025. https://www.patientcare .va.gov/primarycare/CRH.asp  
  21. Burnett K, Stockdale SE, Yoon J, et al. The Clinical Resource Hub initiative: first-year implementation of the Veterans Health Administration regional telehealth contingency staffing program. Ambul Care Manage. 2023;46(3):228-239. doi:10.1097/JAC.0000000000000468
Staff Perspectives on the VISN 20 Tele-Neuropsychology Program

Federal Practitioner - 42(11)

The Role of Dermatologists in Developing AI Tools for Diagnosis and Classification of Skin Disease


Use of artificial intelligence (AI) in dermatology has increased over the past decade, likely driven by advances in machine learning, particularly deep learning algorithms, and in computing hardware.1 Studies comparing the performance of AI algorithms with that of dermatologists in classifying skin disorders have shown conflicting results.2,3 In this study, we aimed to analyze AI tools used for diagnosing and classifying skin disease and to evaluate the role of dermatologists in the creation of AI technology. We also investigated the number of clinical images used in the datasets that train AI programs and compared tools created with dermatologist input to those created without dermatologist or clinician involvement.

Methods

A search of PubMed articles indexed for MEDLINE using the terms machine learning, artificial intelligence, and dermatology was conducted on September 18, 2022. Articles were included if they described full-length trials; used machine learning for diagnosis of or screening for dermatologic conditions; and used dermoscopic or gross image datasets of the skin, hair, or nails. Articles were categorized into 4 groups based on the conditions covered: chronic wounds, inflammatory skin diseases, mixed conditions, and pigmented skin lesions. Algorithms were sorted by type: convolutional neural network, deep learning model/deep neural network, AI/artificial neural network, and other. Details regarding Fitzpatrick skin type and skin of color (SoC) inclusion in the articles or AI algorithm datasets were recorded. Univariate and multivariate analyses were performed using Microsoft Excel and SAS Studio 3.8. Sensitivity and specificity were calculated for all included AI technology. Sensitivity, specificity, and the number of clinical images were compared among the included articles using analysis of variance and t tests (α=0.05; P<.05 indicated statistical significance).
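The group comparison described above can be sketched in Python. This is a minimal illustration with hypothetical image counts, not the study data, and the function name is our own; it computes the one-way ANOVA F statistic from scratch so no third-party library is required:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA:
    ratio of between-group to within-group mean squares."""
    values = [x for g in groups for x in g]
    grand_mean = mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical clinical-image counts per study, grouped by condition
# category (illustrative values only, not the actual study data).
wounds = [1200, 800, 2500, 1900]
inflammatory = [3100, 4500, 2200]
pigmented = [60000, 45000, 120000]

print(f"F = {one_way_anova_f(wounds, inflammatory, pigmented):.2f}")
```

In practice, analyses like this would typically use scipy.stats.f_oneway and scipy.stats.ttest_ind, which also return the P values assessed against the α=0.05 threshold.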

Results

Our search yielded 1016 articles, 58 of which met the inclusion criteria. Overall, 25.9% (15/58) of the articles used AI to diagnose or classify mixed skin diseases; 22.4% (13/58), pigmented skin lesions; 19.0% (11/58), wounds; 17.2% (10/58), inflammatory skin diseases; and 5.2% (3/58) each, acne, psoriasis, and onychomycosis. Overall, 24.1% (14/58) of articles provided information about Fitzpatrick skin type, and 58.6% (34/58) included clinical images depicting SoC. Furthermore, only 20.7% (12/58) of articles on deep learning models included descriptions of patient ethnicity or race in at least 1 dataset, and only 10.3% (6/58) included any information about skin tone in the dataset. Studies with a dermatologist as the last author (the position most likely to indicate supervision of the project) were more likely to include clinical images depicting SoC than those without (82.6% [19/23] vs 16.7% [3/18]; P=.0411).

The mean (SD) number of clinical images in the study articles was 28,422 (84,050). Thirty-seven (63.8%) of the articles included gross images, 17 (29.3%) used dermoscopic images, and 4 (6.9%) used both. Twenty-seven (46.6%) articles used convolutional neural networks; 15 (25.9%), deep learning models/deep neural networks; 8 (13.8%), other algorithms; 6 (10.3%), AI/artificial neural networks; and 2 (3.4%), fuzzy algorithms. Most studies were conducted in China (29.3% [17/58]), Germany (12.1% [7/58]), India (10.3% [6/58]), multiple nations (10.3% [6/58]), and the United States (10.3% [6/58]). Overall, 82.8% (48/58) of articles included at least 1 dermatologist coauthor. Mean sensitivity and specificity of the AI models were both 0.85. On average, physicians correctly identified 76.87% of the images in the datasets vs 81.62% for AI. Average agreement between AI and physician assessment, defined as both assigning the same diagnosis, was 77.98%.
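The pooled performance metrics above can be made concrete with a short sketch. The confusion-matrix counts and diagnosis labels are hypothetical, chosen only to mirror the reported pooled values; they are not the study data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to mirror the pooled values reported above.
sens, spec = sensitivity_specificity(tp=85, fn=15, tn=85, fp=15)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")

# Percentage agreement: the fraction of images for which the AI model
# and the physician assign the same diagnosis (hypothetical labels).
ai_labels = ["melanoma", "nevus", "bcc", "nevus", "melanoma"]
md_labels = ["melanoma", "nevus", "scc", "nevus", "bcc"]
agreement = sum(a == m for a, m in zip(ai_labels, md_labels)) / len(ai_labels)
print(f"agreement = {agreement:.0%}")  # 3 of 5 labels match -> 60%
```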

Articles with dermatologists in key authorship roles contained more clinical images than those without (P<.0001)(eTable). Psoriasis-related algorithms used the fewest clinical images (mean [SD], 3173 [4203]), and pigmented skin lesion algorithms used the most (mean [SD], 53,191 [155,579]).


Comment

Our results indicated that AI studies with dermatologist authors had significantly more images in their datasets (ie, the sets of clinical images of skin lesions used to train AI algorithms to diagnose or classify lesions) than those with nondermatologist authors (P<.0001)(eTable). Similarly, in a study of AI technology for skin cancer diagnosis, AI studies with dermatologist authors (ie, dermatologists included in the development of the AI algorithm) had more images than studies without dermatologist authors.1 Deep learning textbooks have suggested that approximately 5000 training images per output category are needed to produce acceptable algorithm performance and that more than 10 million are needed to produce results superior to human performance.4-10 Despite advances in AI for dermatologic image analysis, the creation of these models often has been directed by nondermatologists1; therefore, dermatologist involvement in AI development is necessary to facilitate collection of larger image datasets and optimal performance on image diagnosis/classification tasks.

We found that only 20.7% of articles on deep learning models included descriptions of patient ethnicity or race, and only 10.3% of studies included any information about skin tone in the dataset. Furthermore, American investigators primarily trained models using clinical images of patients with lighter skin tones, whereas Chinese investigators exclusively included images depicting darker skin tones. Similarly, in a study of 52 cutaneous imaging deep learning articles, only 17.3% (9/52) reported race and/or Fitzpatrick skin type, and only 7.7% (4/52) reported both.2,6,8 Therefore, dermatologists are needed to contribute images representing diverse populations and to collaborate in AI research, as their involvement helps ensure that AI models diagnose and classify lesions accurately across all skin types.

Our study was limited in that the search included only PubMed, and real-world applications of the AI tools could not be evaluated.

Conclusion

In summary, we found that AI studies with dermatologist authors used more clinical images in their datasets and more images representing diverse skin types than studies without dermatologist authors. Therefore, we advocate for greater involvement of dermatologists in AI research, which may improve diagnostic accuracy and, in turn, patient outcomes.

References
  1. Zakhem GA, Fakhoury JW, Motosko CC, et al. Characterizing the role of dermatologists in developing artificial intelligence for assessment of skin cancer. J Am Acad Dermatol. 2021;85:1544-1556.
  2. Daneshjou R, Vodrahalli K, Novoa RA, et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci Adv. 2022;8:eabq6147.
  3. Wu E, Wu K, Daneshjou R, et al. How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nat Med. 2021;27:582-584.
  4. Murphree DH, Puri P, Shamim H, et al. Deep learning for dermatologists: part I. Fundamental concepts. J Am Acad Dermatol. 2022;87:1343-1351.
  5. Goodfellow I, Bengio Y, Courville A. Deep Learning. The MIT Press; 2016.
  6. Kim YH, Kobic A, Vidal NY. Distribution of race and Fitzpatrick skin types in data sets for deep learning in dermatology: a systematic review. J Am Acad Dermatol. 2022;87:460-461.
  7. Liu Y, Jain A, Eng C, et al. A deep learning system for differential diagnosis of skin diseases. Nat Med. 2020;26:900-908.
  8. Zhu CY, Wang YK, Chen HP, et al. A deep learning based framework for diagnosing multiple skin diseases in a clinical environment. Front Med (Lausanne). 2021;8:626369.
  9. Capurro N, Pastore VP, Touijer L, et al. A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases. Br J Dermatol. 2024;191:261-266.
  10. Han SS, Park I, Eun Chang S, et al. Augmented intelligence dermatology: deep neural networks empower medical professionals in diagnosing skin cancer and predicting treatment options for 134 skin disorders. J Invest Dermatol. 2020;140:1753-1761.
Author and Disclosure Information

Dr. Ragi is from the Warren Alpert Medical School of Brown University, Providence, Rhode Island. Dr. Desai is from Rutgers New Jersey Medical School, Newark. Drs. Hill and Lipner are from Weill Cornell Medical College, New York, New York. Dr. Lipner is from the Department of Dermatology.

The authors have no relevant financial disclosures to report.

Correspondence: Shari R. Lipner, MD, PhD, Associate Professor of Clinical Dermatology, Weill Cornell Medicine, 1305 York Ave, 9th Floor, New York, NY 10021 (shl9032@med.cornell.edu).

Cutis. 2025 November;116(5):184-185, E4. doi:10.12788/cutis.1295

Issue
Cutis - 116(5)
Publications
Topics
Page Number
184-185, E4
Sections
Author and Disclosure Information

Dr. Ragi is from the Warren Alpert Medical School of Brown University, Providence, Rhode Island. Dr. Desai is from Rutgers New Jersey Medical School, Newark. Drs. Hill and Lipner are from Weill Cornell Medical College, New York, New York. Dr. Lipner is from the Department of Dermatology.

The authors have no relevant financial disclosures to report.

Correspondence: Shari R. Lipner, MD, PhD, Associate Professor of Clinical Dermatology, Weill Cornell Medicine, 1305 York Ave, 9th Floor, New York, NY 10021 (shl9032@med.cornell.edu).

Cutis. 2025 November;116(5):184-185, E4. doi:10.12788/cutis.1295

Author and Disclosure Information

Dr. Ragi is from the Warren Alpert Medical School of Brown University, Providence, Rhode Island. Dr. Desai is from Rutgers New Jersey Medical School, Newark. Drs. Hill and Lipner are from Weill Cornell Medical College, New York, New York. Dr. Lipner is from the Department of Dermatology.

The authors have no relevant financial disclosures to report.

Correspondence: Shari R. Lipner, MD, PhD, Associate Professor of Clinical Dermatology, Weill Cornell Medicine, 1305 York Ave, 9th Floor, New York, NY 10021 (shl9032@med.cornell.edu).

Cutis. 2025 November;116(5):184-185, E4. doi:10.12788/cutis.1295

Article PDF
Article PDF

Use of artificial intelligence (AI) in dermatology has increased over the past decade, likely driven by advances in deep learning algorithms, computing hardware, and machine learning.1 Studies comparing the performance of AI algorithms to dermatologists in classifying skin disorders have shown conflicting results.2,3 In this study, we aimed to analyze AI tools used for diagnosing and classifying skin disease and evaluate the role of dermatologists in the creation of AI technology. We also investigated the number of clinical images used in datasets to train AI programs and compared tools that were created with dermatologist input to those created without dermatologist/clinician involvement.

Methods

A search of PubMed articles indexed for MEDLINE using the terms machine learning, artificial intelligence, and dermatology was conducted on September 18, 2022. Articles were included if they described full-length trials; used machine learning for diagnosis of or screening for dermatologic conditions; and used dermoscopic or gross image datasets of the skin, hair, or nails. Articles were categorized into 4 groups based on the conditions covered: chronic wounds, inflammatory skin diseases, mixed conditions, and pigmented skin lesions. Algorithms were sorted into 4 categories: convolutional/convoluted neural network, deep learning model/deep neural network, AI/artificial neural network, and other. Details regarding Fitzpatrick skin type and skin of color (SoC) inclusion in the articles or AI algorithm datasets were recorded. Univariate and multivariate analyses were performed using Microsoft Excel and SAS Studio 3.8. Sensitivity and specificity were calculated for all included AI technology. Sensitivity, specificity, and the number of clinical images were compared among the included articles using analysis of variance and t tests (α=0.05; P<.05 indicated statistical significance).

Results

Our search yielded 1016 articles, 58 of which met the inclusion criteria. Overall, 25.9% (15/58) of the articles utilized AI to diagnose or classify mixed skin diseases; 22.4% (13/58) for pigmented skin lesions; 19.0% (11/58) for wounds; 17.2% (10/58) for inflammatory skin diseases; and 5.2% (3/58) each for acne, psoriasis, and onychomycosis. Overall, 24.0% (14/58) of articles provided information about Fitzpatrick skin type, and 58.7% (34/58) included clinical images depicting SoC. Furthermore, we found that only 20.7% (12/58) of articles on deep learning models included descriptions of patient ethnicity or race in at least 1 dataset, and only 10.3% (6/58) of studies included any information about skin tone in the dataset. Studies with a dermatologist as the last author (most likely to be supervising the project) were more likely to include clinical images depicting SoC than those without (82.6% [19/23] and 16.7% [3/18], respectively [P=.0411]).

The mean (SD) number of clinical images in the study articles was 28,422 (84,050). Thirty-seven (63.8%) of the study articles included gross images, 17 (29.3%) used dermoscopic images, and 4 (6.9%) used both. Twenty-seven (46.6%) articles used convolutional/convoluted neural networks, 15 (25.9%) used deep learning model/deep neural networks, 8 (13.8%) used other algorithms, 6 (10.3%) used AI/artificial neural network, and 2 (3.4%) used fuzzy algorithms. Most studies were conducted in China (29.3% [17/58]), Germany (12.1% [7/58]), India (10.3% [6/58]), multiple nations (10.3% [6/58]), and the United States (10.3% [6/58]). Overall, 82.8% (48/58) of articles included at least 1 dermatologist coauthor. Sensitivity of the AI models was 0.85, and specificity was 0.85. The average percentage of images in the dataset correctly identified by a physician was 76.87% vs 81.62% of images correctly identified by AI. Average agreement between AI and physician assessment was 77.98%, defined as AI and physician both having the same diagnosis. 

Articles authored by dermatologists contained more clinical images than those without dermatologists in key authorship roles (P<.0001)(eTable). Psoriasis-related algorithms had the fewest (mean [SD]: 3173 [4203]), and pigmented skin lesions had the most clinical images (mean [SD]: 53,19l [155,579]).

RagiCT116005184-eTable

Comment

Our results indicated that AI studies with dermatologist authors had significantly more images in their datasets (ie, the set of clinical images of skin lesions used to train AI algorithms in diagnosing or classifying lesions) than those with nondermatologist authors (P<.0001)(eTable). Similarly, in a study of AI technology for skin cancer diagnosis, AI studies with dermatologist authors (ie, included in the development of the AI algorithm) had more images than studies without dermatologist authors.1 Deep learning textbooks have suggested that 5000 clinical images or training input per output category are needed to produce acceptable algorithm performance, and more than 10 million are needed to produce results superior to human performance.4-10 Despite advances in AI for dermatologic image analysis, the creation of these models often has been directed by nondermatologists1; therefore, dermatologist involvement in AI development is necessary to facilitate collection of larger image datasets and optimal performance for image diagnosis/classification tasks.

We found that 20.7% of articles on deep learning models included descriptions of patient ethnicity or race, and only 10.3% of studies included any information about skin tone in the dataset. Furthermore, American investigators primarily trained models using clinical images of patients with lighter skin tones, whereas Chinese investigators exclusively included images depicting darker skin tones. Similarly, in a study of 52 cutaneous imaging deep learning articles, only 17.3% (9/52) reported race and/or Fitzpatrick skin type, and only 7.7% (4/52) of articles included both.2,6,8 Therefore, dermatologists are needed to contribute images representing diverse populations and collaborate in AI research studies, as their involvement is necessary to ensure the accuracy of AI models in classifying lesions or diagnosing skin lesions across all skin types.

Our search was limited to PubMed, and real-world applications could not be evaluated.

Conclusion

In summary, we found that AI studies with dermatologist authors used larger numbers of clinical images in their datasets and more images representing diverse skin types than studies without. Therefore, we advocate for greater involvement of dermatologists in AI research, which might result in better patient outcomes by improving diagnostic accuracy.

Use of artificial intelligence (AI) in dermatology has increased over the past decade, likely driven by advances in deep learning algorithms, computing hardware, and machine learning.1 Studies comparing the performance of AI algorithms to dermatologists in classifying skin disorders have shown conflicting results.2,3 In this study, we aimed to analyze AI tools used for diagnosing and classifying skin disease and evaluate the role of dermatologists in the creation of AI technology. We also investigated the number of clinical images used in datasets to train AI programs and compared tools that were created with dermatologist input to those created without dermatologist/clinician involvement.

Methods

A search of PubMed articles indexed for MEDLINE using the terms machine learning, artificial intelligence, and dermatology was conducted on September 18, 2022. Articles were included if they described full-length trials; used machine learning for diagnosis of or screening for dermatologic conditions; and used dermoscopic or gross image datasets of the skin, hair, or nails. Articles were categorized into 4 groups based on the conditions covered: chronic wounds, inflammatory skin diseases, mixed conditions, and pigmented skin lesions. Algorithms were sorted into 4 categories: convolutional/convoluted neural network, deep learning model/deep neural network, AI/artificial neural network, and other. Details regarding Fitzpatrick skin type and skin of color (SoC) inclusion in the articles or AI algorithm datasets were recorded. Univariate and multivariate analyses were performed using Microsoft Excel and SAS Studio 3.8. Sensitivity and specificity were calculated for all included AI technology. Sensitivity, specificity, and the number of clinical images were compared among the included articles using analysis of variance and t tests (α=0.05; P<.05 indicated statistical significance).

Results

Our search yielded 1016 articles, 58 of which met the inclusion criteria. Overall, 25.9% (15/58) of the articles utilized AI to diagnose or classify mixed skin diseases; 22.4% (13/58) for pigmented skin lesions; 19.0% (11/58) for wounds; 17.2% (10/58) for inflammatory skin diseases; and 5.2% (3/58) each for acne, psoriasis, and onychomycosis. Overall, 24.0% (14/58) of articles provided information about Fitzpatrick skin type, and 58.7% (34/58) included clinical images depicting SoC. Furthermore, we found that only 20.7% (12/58) of articles on deep learning models included descriptions of patient ethnicity or race in at least 1 dataset, and only 10.3% (6/58) of studies included any information about skin tone in the dataset. Studies with a dermatologist as the last author (most likely to be supervising the project) were more likely to include clinical images depicting SoC than those without (82.6% [19/23] and 16.7% [3/18], respectively [P=.0411]).

The mean (SD) number of clinical images in the study articles was 28,422 (84,050). Thirty-seven (63.8%) of the study articles included gross images, 17 (29.3%) used dermoscopic images, and 4 (6.9%) used both. Twenty-seven (46.6%) articles used convolutional/convoluted neural networks, 15 (25.9%) used deep learning model/deep neural networks, 8 (13.8%) used other algorithms, 6 (10.3%) used AI/artificial neural network, and 2 (3.4%) used fuzzy algorithms. Most studies were conducted in China (29.3% [17/58]), Germany (12.1% [7/58]), India (10.3% [6/58]), multiple nations (10.3% [6/58]), and the United States (10.3% [6/58]). Overall, 82.8% (48/58) of articles included at least 1 dermatologist coauthor. Sensitivity of the AI models was 0.85, and specificity was 0.85. The average percentage of images in the dataset correctly identified by a physician was 76.87% vs 81.62% of images correctly identified by AI. Average agreement between AI and physician assessment was 77.98%, defined as AI and physician both having the same diagnosis. 

Articles authored by dermatologists contained more clinical images than those without dermatologists in key authorship roles (P<.0001)(eTable). Psoriasis-related algorithms had the fewest (mean [SD]: 3173 [4203]), and pigmented skin lesions had the most clinical images (mean [SD]: 53,19l [155,579]).

RagiCT116005184-eTable

Comment

Our results indicated that AI studies with dermatologist authors had significantly more images in their datasets (ie, the set of clinical images of skin lesions used to train AI algorithms in diagnosing or classifying lesions) than those with nondermatologist authors (P<.0001)(eTable). Similarly, in a study of AI technology for skin cancer diagnosis, AI studies with dermatologist authors (ie, included in the development of the AI algorithm) had more images than studies without dermatologist authors.1 Deep learning textbooks have suggested that 5000 clinical images or training input per output category are needed to produce acceptable algorithm performance, and more than 10 million are needed to produce results superior to human performance.4-10 Despite advances in AI for dermatologic image analysis, the creation of these models often has been directed by nondermatologists1; therefore, dermatologist involvement in AI development is necessary to facilitate collection of larger image datasets and optimal performance for image diagnosis/classification tasks.

We found that 20.7% of articles on deep learning models included descriptions of patient ethnicity or race, and only 10.3% of studies included any information about skin tone in the dataset. Furthermore, American investigators primarily trained models using clinical images of patients with lighter skin tones, whereas Chinese investigators exclusively included images depicting darker skin tones. Similarly, in a study of 52 cutaneous imaging deep learning articles, only 17.3% (9/52) reported race and/or Fitzpatrick skin type, and only 7.7% (4/52) of articles included both.2,6,8 Therefore, dermatologists are needed to contribute images representing diverse populations and collaborate in AI research studies, as their involvement is necessary to ensure the accuracy of AI models in classifying lesions or diagnosing skin lesions across all skin types.

Our search was limited to PubMed, and real-world applications could not be evaluated.

Conclusion

In summary, we found that AI studies with dermatologist authors used larger numbers of clinical images in their datasets and more images representing diverse skin types than studies without. Therefore, we advocate for greater involvement of dermatologists in AI research, which might result in better patient outcomes by improving diagnostic accuracy.

References
  1. Zakhem GA, Fakhoury JW, Motosko CC, et al. Characterizing the role of dermatologists in developing artificial intelligence for assessment of skin cancer. J Am Acad Dermatol. 2021;85:1544-1556.
  2. Daneshjou R, Vodrahalli K, Novoa RA, et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci Adv. 2022;8:eabq6147.
  3. Wu E, Wu K, Daneshjou R, et al. How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nat Med. 2021;27:582-584.
  4. Murphree DH, Puri P, Shamim H, et al. Deep learning for dermatologists: part I. Fundamental concepts. J Am Acad Dermatol. 2022;87:1343-1351.
  5. Goodfellow I, Bengio Y, Courville A. Deep Learning. The MIT Press; 2016.
  6. Kim YH, Kobic A, Vidal NY. Distribution of race and Fitzpatrick skin types in data sets for deep learning in dermatology: a systematic review. J Am Acad Dermatol. 2022;87:460-461.
  7. Liu Y, Jain A, Eng C, et al. A deep learning system for differential diagnosis of skin diseases. Nat Med. 2020;26:900-908.
  8. Zhu CY, Wang YK, Chen HP, et al. A deep learning based framework for diagnosing multiple skin diseases in a clinical environment. Front Med (Lausanne). 2021;8:626369.
  9. Capurro N, Pastore VP, Touijer L, et al. A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases. Br J Dermatol. 2024;191:261-266.
  10. Han SS, Park I, Eun Chang S, et al. Augmented intelligence dermatology: deep neural networks empower medical professionals in diagnosing skin cancer and predicting treatment options for 134 skin disorders. J Invest Dermatol. 2020;140:1753-1761.
Issue
Cutis - 116(5)
Page Number
184-185, E4
Display Headline

The Role of Dermatologists in Developing AI Tools for Diagnosis and Classification of Skin Disease
Inside the Article

Practice Points

  • Artificial intelligence (AI) technology is emerging as a valuable tool in diagnosing and classifying dermatologic conditions.
  • Despite advances in AI for dermatologic image analysis, the creation of these models often has been directed by nondermatologists.