Rheumatologists and their staff have been dutifully recording disease activity and patient-reported outcomes for decades. Now, all that drudgery is beginning to pay off with the introduction of artificial intelligence (AI) and natural language processing systems that can mine electronic health records (EHRs) for nuggets of research gold and accurately predict short-term rheumatoid arthritis (RA) outcomes.
“I think we have learned from our very early experiments that longitudinal deep learning models can forecast rheumatoid arthritis [RA] outcomes with actually surprising efficiency, with fewer patients than we assumed would be needed,” said Jinoos Yazdany, MD, MPH, chief of rheumatology at Zuckerberg San Francisco General Hospital and Trauma Center, and codirector of the University of California San Francisco (UCSF) Quality and Informatics Lab.
At the 2024 Rheumatoid Arthritis Research Summit (RA Summit 2024), presented by the Arthritis Foundation and the Hospital for Special Surgery in New York City, Dr. Yazdany discussed why rheumatologists are well positioned to take advantage of predictive analytics and how natural language processing systems can be used to extract previously hard-to-find data from EHRs, which can then be applied to RA prognostics and research.
Data Galore
EHR data can be particularly useful for RA research: they offer a large volume of information, rich clinical data such as notes and imaging, less selection bias than sources such as cohorts or randomized controlled trials, real-time access, and, in many records, longitudinal follow-up data.
However, EHR data may have gaps or inaccurate coding, and data such as text and images may require significant processing and scrubbing before they can be used to advance research. In addition, EHR data are subject to patient privacy and security concerns, can be plagued by incompatibility across different systems, and may not represent patients who have less access to care, Dr. Yazdany said.
She noted that most rheumatologists record some measure of RA disease activity and patient physical function, and that patient-reported outcomes have been routinely incorporated into clinical records, especially since the 1980 introduction of the Health Assessment Questionnaire.
“In rheumatology, by achieving consensus and building a national quality measurement program, we have a cohesive national RA outcome measure selection strategy. RA outcomes are available for a majority of patients seen by rheumatologists, and that’s a critical strength of EHR data,” she said.
Spinning Text Into Analytics
The challenge for investigators who want to use this treasure trove of RA data is that more than 80% of the data are in the form of text, which raises the question of how best to extract outcome measures and drug dosing information from the written record.
As described in an article published online in Arthritis Care & Research on February 14, 2023, Dr. Yazdany and colleagues at UCSF and Stanford University developed a natural language processing “pipeline” designed to extract RA outcomes from clinical notes on all patients included in the American College of Rheumatology’s Rheumatology Informatics System for Effectiveness (RISE) registry.
The model used expert-curated terms and a text processing tool to identify patterns and numerical scores linked to outcome measures in the records.
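As a concrete illustration of that term-and-pattern approach, here is a minimal Python sketch; the outcome terms and regular expressions below are hypothetical stand-ins, not the study’s curated lexicon:

```python
import re

# Illustrative only: a toy version of term-plus-pattern extraction.
# The curated terms and regular expressions are hypothetical, not the
# lexicon used in the RISE pipeline.
OUTCOME_TERMS = {
    "CDAI": re.compile(r"\bCDAI\b[^0-9]{0,20}(\d{1,2}(?:\.\d)?)", re.IGNORECASE),
    "RAPID3": re.compile(r"\bRAPID\s*3\b[^0-9]{0,20}(\d{1,2}(?:\.\d)?)", re.IGNORECASE),
}

def extract_outcomes(note_text: str) -> dict:
    """Return the first score found in a note for each outcome measure."""
    results = {}
    for measure, pattern in OUTCOME_TERMS.items():
        match = pattern.search(note_text)
        if match:
            results[measure] = float(match.group(1))
    return results

print(extract_outcomes("Disease activity today: CDAI 22, improved from 30."))
# {'CDAI': 22.0}
```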
“This was an enormously difficult and ambitious project because we had many, many sites, the data was very messy, we had very complicated [independent review board] procedures, and we actually had to go through de-identification procedures because we were using this data for research, so we learned a lot,” Dr. Yazdany said.
The model processed 34 million notes on 854,628 patients across 158 practices and 24 different EHR systems.
In internal validation studies, the models had 95% sensitivity, 87% positive predictive value (PPV), and an F1 score (a measure of predictive performance) of 91%. In external validation against the EHR of a large, non-RISE health system, the natural language processing pipeline had 92% sensitivity, 69% PPV, and an F1 score of 79%.
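The F1 score is the harmonic mean of sensitivity (recall) and PPV (precision), and a quick check in Python confirms that the reported figures are internally consistent:

```python
def f1(sensitivity: float, ppv: float) -> float:
    """F1 score: the harmonic mean of recall (sensitivity) and precision (PPV)."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)

print(round(f1(0.95, 0.87), 2))  # 0.91 -> matches the internal validation F1
print(round(f1(0.92, 0.69), 2))  # 0.79 -> matches the external validation F1
```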
The investigators also looked at the use of OpenAI large language models, including GPT-3.5 and GPT-4, to interpret complex prescription orders and found that, after training with 100 examples, GPT-4 was able to correctly interpret 95.6% of orders. But this experiment came at a high computational and financial cost, with one run costing north of $3000, Dr. Yazdany cautioned.
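The article does not detail how the models were prompted; one common setup is few-shot prompting through the OpenAI chat API. The sketch below is purely illustrative, with a hypothetical prompt, example, and order text, and is not the study’s actual protocol:

```python
# A hedged sketch of few-shot prescription parsing with the OpenAI chat API.
# The system prompt, example, and order text are hypothetical; the study's
# exact prompting and evaluation setup is not described in this article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = [
    {"role": "user", "content": "methotrexate 2.5 mg tabs: take 6 tabs once weekly"},
    {"role": "assistant", "content": '{"drug": "methotrexate", "weekly_dose_mg": 15}'},
]

def parse_order(order_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Extract the drug name and total weekly dose in mg as JSON."},
            *FEW_SHOT,  # the study reportedly used 100 examples, not one
            {"role": "user", "content": order_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(parse_order("leflunomide 20 mg: one tablet by mouth daily"))
```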
Predicting Outcomes
Experiments to see whether an AI system can forecast RA disease activity at the next clinic visit are in their early stages.
Dr. Yazdany and colleagues used EHR data from UCSF and Zuckerberg San Francisco General Hospital on patients who had two RA diagnostic codes recorded 30 days apart, at least one disease-modifying antirheumatic drug (DMARD) prescription, and two Clinical Disease Activity Index (CDAI) scores 30 days apart.
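To make those inclusion criteria concrete, here is a hedged pandas sketch of such a cohort filter; the table layout and column names are assumptions for illustration, not the study’s actual data model:

```python
import pandas as pd

# Illustrative cohort filter. The column names, and the reading of
# "30 days apart" as a span of at least 30 days, are assumptions.
def eligible_patients(dx: pd.DataFrame, rx: pd.DataFrame, cdai: pd.DataFrame) -> set:
    """Patients with two RA codes 30 days apart, a DMARD, and two CDAIs 30 days apart."""
    def spans_30_days(dates: pd.Series) -> bool:
        # dates are expected to be datetime64 values
        return len(dates) >= 2 and (dates.max() - dates.min()).days >= 30

    two_codes = dx.groupby("patient_id")["dx_date"].apply(spans_30_days)
    two_cdais = cdai.groupby("patient_id")["score_date"].apply(spans_30_days)
    on_dmard = set(rx.loc[rx["is_dmard"], "patient_id"])

    return set(two_codes[two_codes].index) & set(two_cdais[two_cdais].index) & on_dmard
```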
One model, designed to predict CDAI at the next visit by “playing the odds” based on clinical experience, showed that about 60% of patients at UCSF achieved treat-to-target goals, while the remaining 40% did not.
This model performed barely better than pure chance, with an area under the receiver operating characteristic curve (AUC) of 0.54.
A second model that included the patient’s last CDAI score also fared little better than a roll of the dice, with an AUC of 0.55.
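For context, an AUC of 0.5 is what a coin flip would achieve; the scikit-learn sketch below, run on synthetic data, shows uninformative predictions landing near that mark:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)           # observed outcome: met target or not
y_chance = rng.random(1000)                      # predictions unrelated to the outcome
print(round(roc_auc_score(y_true, y_chance), 2)) # ~0.5, i.e., no better than chance
```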
However, a neural network or “deep learning” model, designed to process data in a manner loosely analogous to the human brain, performed much better at predicting outcomes at the second visit, with an AUC of 0.91.
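The presentation did not specify the architecture, but one plausible shape for a longitudinal model is a recurrent network over each patient’s sequence of visits. The PyTorch sketch below is purely illustrative, not the study’s model:

```python
import torch
import torch.nn as nn

# Illustrative only: one plausible shape for a longitudinal model, not the
# architecture used in the study. Each patient is a sequence of visit
# feature vectors; the output is the probability that the next-visit CDAI
# meets the treat-to-target threshold.
class VisitSequenceModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (batch, n_visits, n_features)
        _, (h_n, _) = self.encoder(visits)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

model = VisitSequenceModel(n_features=20)
fake_batch = torch.randn(8, 5, 20)  # 8 patients, 5 visits, 20 features each
print(model(fake_batch).shape)      # torch.Size([8]): one probability per patient
```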
When the UCSF-trained neural network model was applied to the Zuckerberg San Francisco General Hospital population, whose patient characteristics differ from those of UCSF, the AUC was 0.74. Although this result was not as good as that seen in UCSF patients, it demonstrated that the model retains some predictive capability across different hospital systems, Dr. Yazdany said.
The next steps, she said, are to build more robust models based on vast and varied patient data pools that will allow the predictive models to be generalized across various healthcare settings.
The Here and Now
In the Q & A following the presentation, an audience member said that the study was “very cool stuff.”
“Is there a way to sort of get ahead and think of the technology that we’re starting to pilot? Hospitals are already using AI scribes, for example, to collect the data that is going to make it much easier to feed it to the predictive analytics that we’re going to use,” she said.
Dr. Yazdany replied that “over the last couple of years, one of the projects that we’ve worked on is to interview rheumatologists who are participating in the RISE registry about the ways that they are collecting [patient-reported outcomes], and it has been fascinating: A vast majority of people are still using paper forms.”
“The challenge is that our patient populations are very diverse. Technology, and especially filling out forms via online platforms, doesn’t work for everybody, and in some ways, filling out the paper forms when you go to the doctor’s office is a great equalizer. So, I think that we have some real challenges, and the solutions have to be embedded in the real world,” she added.
Dr. Yazdany’s research was supported by grants from the Agency for Healthcare Research & Quality and the National Institutes of Health. She disclosed consulting fees and/or research support from AstraZeneca, Aurinia, Bristol Myers Squibb, Gilead, and Pfizer.
A version of this article appeared on Medscape.com.
FROM RA SUMMIT 2024