For decades, pre-med students depended on the annual medical school rankings by U.S. News and World Report to decide where to apply for physician education. But after several prominent med schools pulled out of the rankings, one resident began experimenting with artificial intelligence (AI) to create an alternative.
Brandon Turner, MD, MSc, a radiation oncology resident at Massachusetts General Hospital in Boston, developed a free do-it-yourself tool using AI that allows prospective students to rank medical schools based on the considerations that matter most to them. His research was published online in JAMA Network Open.
“One of the flaws with conventional ranking systems is that the metrics used in these tools are weighted based on the preferences and views of the people who developed these rankings, but those may not work for everyone,” Dr. Turner told this news organization.
He explained that the U.S. News rankings use two types of metrics: one set for research and the other for primary care. “The research rankings carry the most prestige and are the ones that most people know about,” he explained. These metrics take into account factors such as how many grant dollars the medical school receives and the average size of those grants per faculty member, Dr. Turner said.
Admission metrics are also included – for example, the median grade point average (GPA) or MCAT scores of students who have been accepted. “These don’t tell you anything about the research output of the school, only about how selective the school is,” he said.
Primary care metrics might focus on how many graduates of a given school go into primary care, or how other schools rate the quality of primary care training at a given school – a process called peer assessment, Dr. Turner said.
But even though these might be helpful, students may be more interested in the cost of attendance, average debt, representation of minorities, and how many graduates pass their boards, he said. “U.S. News metrics don’t capture these things, but I included them in my algorithm.”
A U.S. News spokesperson said that the publication continues to help students and their families make decisions about their future education. The spokesperson cited U.S. News’ explanation of how it calculates its rankings. “A school’s overall Best Medical Schools rank should be one consideration and not the lone determinant in where a student applies and accepts,” the article states.
Dr. Turner agreed ranking systems are a good starting point when researching med schools, “but the values reflected in the ranking may not reflect an individual’s goals.”
Tyra-Lee Brett, a premed student at the University of South Florida, Tampa, believes an additional tool for students to evaluate medical schools is needed – and she could potentially see herself using Dr. Turner’s creation.
Still, Ms. Brett, a premed trustee of the American Medical Student Association, doesn’t regard any ranking tool as the “be all and end all.” Rather, she feels that the most effective tool would be based on students’ lived experiences. The AMSA is developing a scorecard in which students grade schools based on their opinions about such issues as housing, family planning, and environmental health, she said.
No prior judgments
To develop his algorithm, Dr. Turner used a branch of AI called “unsupervised learning.” It doesn’t make a prior judgment about what the data should look like, Dr. Turner explained.
“You’re just analyzing natural trends within the data.”
The algorithm tries to discover clusters or patterns within the data. “It’s like saying to the algorithm: ‘I want you to tell me what schools you think should be grouped together based on the data I feed you,’ which is the data that the user selects based on his or her personal preferences.”
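The grouping idea Dr. Turner describes can be illustrated with a minimal k-means sketch. The school names and metric values below are invented, and the published tool's actual algorithm and preprocessing are not shown in the article; this only demonstrates how an unsupervised method can group schools from the data alone, with no ranking criteria imposed up front.

```python
# Hypothetical schools described by two normalized metrics
# (research funding, debt) -- the values are made up for illustration.
schools = {
    "School A": (0.90, 0.80),
    "School B": (0.85, 0.75),
    "School C": (0.20, 0.30),
    "School D": (0.15, 0.25),
}

def kmeans(points, k, iters=20):
    """Group points into k clusters with no prior labels (basic k-means)."""
    # Deterministic init: pick k points spread across the input list.
    step = max(1, len(points) // k)
    centers = [points[i * step] for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            groups[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

centers, groups = kmeans(list(schools.values()), k=2)
```

Run on these four invented schools, the algorithm separates the two high-funding/high-debt schools from the two low-funding/low-debt schools without ever being told which metric matters more, which is the sense in which the method "makes no prior judgment" about the data.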
U.S. News has been transparent about the metrics it uses, Dr. Turner notes. “When I started looking into how rankings are developed, I saw that there was transparency, and the reasoning for choosing the metrics used to develop the ranking was pretty sound,” he said.
“But I didn’t see any justification as to why they chose the particular metrics and weighted them in the way that they did.”
Dr. Turner extracted data from the 2023 U.S. News report, which ranked 109 allopathic medical schools, and applied several scenarios to the results to create his alternative ranking system.
In one scenario, he used the same research metrics used by U.S. News, such as a peer research assessment, median federal research activity per full-time faculty member, median GPA, median MCAT, acceptance rate, and faculty-student ratio.
In another scenario, he included four additional metrics: debt, in-state cost of attendance, USMLE Step 1 passing rate, and percentage of underrepresented students with minority race or ethnicity at the school.
For example, a user can rank the importance of the diversity of the class, amount of debt students expect to incur, and amount of research funding the medical school receives. After selecting those factors, the tool generates tiered results displayed in a circle, a shape chosen to avoid the appearance of the hierarchy associated with traditional rankings, Dr. Turner said.
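A user-weighted scoring step like the one described above can be sketched as follows. The metric values, weight choices, and the simple weighted-sum scoring are all assumptions for illustration; the article does not detail how the actual tool combines user preferences before clustering schools into tiers.

```python
# Invented metrics, normalized to [0, 1]; higher is "more" of each quantity.
schools = {
    "School A": {"diversity": 0.7, "debt": 0.4, "research": 0.9},
    "School B": {"diversity": 0.9, "debt": 0.2, "research": 0.3},
    "School C": {"diversity": 0.3, "debt": 0.8, "research": 0.6},
}

# A hypothetical user who cares most about diversity and debt,
# and little about research funding.
weights = {"diversity": 0.5, "debt": 0.4, "research": 0.1}

def score(metrics, weights):
    """Weighted sum of metrics; debt is inverted since lower debt is better."""
    total = 0.0
    for name, w in weights.items():
        value = metrics[name]
        if name == "debt":
            value = 1.0 - value
        total += w * value
    return total

ranked = sorted(schools, key=lambda s: score(schools[s], weights), reverse=True)
```

With these numbers, School B comes out on top despite its modest research metric, mirroring the article's point that a student who prioritizes diversity and low debt can get a very different ordering than a research-weighted ranking would produce.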
“A prospective student might not care about acceptance rates and MCAT scores, and instead cares about diversity and debt,” Dr. Turner said. He looks forward to extending this approach to the ranking of colleges as well.
‘Imperfect measures’
“The model and interesting online tool that Dr. Turner created allows a premed [student] to generate custom rankings that are in line with their own priorities,” said Christopher Worsham, MD, MPH, a critical care physician in Mass General’s division of pulmonary and critical care medicine.
But Dr. Worsham, also a teaching associate at Harvard Medical School’s department of health care policy, expressed concern that factors figuring into the rankings by U.S. News and Dr. Turner’s alternative “are imperfect measures of medical school quality.”
For example, a student interested in research might favor federal research funding in their customized rankings with Dr. Turner’s model. “But higher research funding doesn’t necessarily translate into a better education for students, particularly when differentiating between two major research systems,” Dr. Worsham noted.
Dr. Worsham added that neither ranking system accurately predicts the quality of doctors graduating from the schools. Instead, he’d like to see ranking systems based on which schools’ graduates deliver the best patient outcomes, whether that’s through direct patient care, impactful research, or leadership within the health care system.
Michael Sauder, PhD, professor of sociology at the University of Iowa, Iowa City, said the model could offer a valuable alternative to the U.S. News ranking system. It might help users develop their own criteria for determining the ranking of medical schools, which is a big improvement over a “one-size-fits-all” approach, Dr. Sauder said.
And Hanna Stotland, an admission consultant based in Chicago, noted that most students rely on rankings because they “don’t have the luxury of advisers who know the ins and outs of different medical schools.” Given the role that rankings play, Ms. Stotland expects that every new ranking tool will have some influence on students.
This tool in particular “has the potential to be useful for students who have identified values they want their medical school to share.” For example, students who care about racial diversity “could use it to easily identify schools that are successful on that metric,” Ms. Stotland said.
Sujay Ratna, a second-year med student at Icahn School of Medicine at Mount Sinai in New York, said he considered the U.S. News ranking his “go-to tool” when he was applying to med school.
But after reading Dr. Turner’s article, the AMSA membership vice president tried the algorithm. “I definitely would have used it had it existed when I was thinking of what schools to apply to and what [schools] to attend.”
The study had no specific funding. Dr. Turner, Dr. Worsham, Dr. Sauder, Ms. Stotland, Ms. Brett, and Mr. Ratna report no relevant financial relationships.
A version of this article first appeared on Medscape.com.