Our EHRs have a drug problem
The “opioid epidemic” has become, perhaps, the most talked-about health crisis of the 21st century. It is a pervasive topic of discussion in the health literature and beyond, written about on the front pages of national newspapers and even mentioned in presidential state-of-the-union addresses.
As practicing physicians, we are all too familiar with the ills of chronic opioid use and have dealt with the implications of the crisis long before the issue attracted the public’s attention. In many ways, we have felt alone in bearing the burdens of caring for patients on chronic controlled substances. Until this point it has been our sacred duty to determine which patients are truly in need of those medications, and which are merely dependent on or – even worse – abusing them.
Health care providers have been largely blamed for the creation of this crisis, but we are not alone. Responsibility must also be shared by the pharmaceutical industry, health insurers, and even the government. Marketing practices, inadequate coverage of pain-relieving procedures and rehabilitation, and poorly conceived drug policies have created an environment where it has been far too difficult to provide appropriate care for patients with chronic pain. As a result, patients who may have had an alternative to opioids were still started on these medications, and we – their physicians – have been left alone to manage the outcome.
Recently, however, health policy and public awareness have signaled a dramatic shift in the management of long-term pain medication. Significant legislation has been enacted on national, state, and local levels, and parties who are perceived to be responsible for the crisis are being held to task. For example, in August a landmark legal case was decided in an Oklahoma district court. Johnson & Johnson was found liable for fueling opioid addiction through false and misleading marketing and was ordered to pay $572 million to the state to fund drug rehabilitation programs. This is likely a harbinger of many more such decisions to come, and the industry as a whole is bracing for the worst.
Physician prescribing practices are also being carefully scrutinized by the DEA, and a significant number of new “checks and balances” have been put in place to address dependence and addiction concerns. Unfortunately, as with all sweeping reform programs, there are good – and not-so-good – aspects to these changes. In many ways, the new tools at our disposal are a powerful way of mitigating drug dependence and diversion while protecting the sanctity of our “prescription pads.” Yet, as with so many other government mandates, we bear the onus of complying with new requirements for each and every opioid prescription, while our EHRs provide little help. This means more “clicks” for us, which can feel quite burdensome. It doesn’t need to be this way. Below are two straightforward things that can and should occur in order for providers to feel unburdened and to fully embrace the changes.
PDMP integration
One of the major ways of controlling prescription opioid abuse is through effective monitoring. Forty-nine of the 50 U.S. states have developed Prescription Drug Monitoring Programs (PDMPs), with Missouri being the only holdout (due to the politics of individual privacy concerns and conflation with gun control legislation). Most – though not all – of the states with a PDMP also mandate that physicians query the database prior to prescribing controlled substances. While noble and helpful in principle, querying a PDMP can be cumbersome, and the process is rarely integrated into the EHR workflow. Instead, physicians typically need to log in to a separate website and manually transpose patient data to search the database. While most states have offered to subsidize PDMP integration with electronic records, EHR vendors have been very slow to develop the capability, leaving most physicians with no choice but to continue the aforementioned workflow. That is, if they comply at all; many well-meaning physicians have told us that they find themselves too harried to use the PDMP consistently. This reduces the value of these databases and places physicians at significant risk. In some states, failure to query the database can lead to loss of a doctor’s medical license. It is high time that EHR vendors step up and integrate with every state’s prescription drug database.
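To make the point concrete, the logic an integrated PDMP check performs is not complicated. The sketch below is purely illustrative – the `PDMPRecord` structure and the flagging thresholds are our own assumptions for the example, not any state program’s actual interface or any validated clinical criteria – but it shows the kind of pattern surfacing (multiple prescribers or pharmacies in a lookback window) that today requires a separate login and manual transcription.

```python
# Illustrative sketch of an EHR-integrated PDMP check.
# PDMPRecord and the thresholds below are hypothetical, chosen only
# to demonstrate the workflow; they are not a real PDMP API.
from dataclasses import dataclass
from datetime import date


@dataclass
class PDMPRecord:
    drug: str
    fill_date: date
    prescriber: str
    pharmacy: str


def flag_concerns(records, lookback_days=90, today=None):
    """Surface patterns PDMP reports commonly highlight:
    several prescribers or pharmacies within the lookback window."""
    today = today or date.today()
    recent = [r for r in records if (today - r.fill_date).days <= lookback_days]
    prescribers = {r.prescriber for r in recent}
    pharmacies = {r.pharmacy for r in recent}
    flags = []
    if len(prescribers) >= 3:
        flags.append(f"{len(prescribers)} prescribers in {lookback_days} days")
    if len(pharmacies) >= 3:
        flags.append(f"{len(pharmacies)} pharmacies in {lookback_days} days")
    return flags
```

Run inside the EHR at the moment of prescribing, a check like this would take milliseconds; the barrier is integration, not computation.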
Electronic prescribing of controlled substances
The other major milestone in prescription opioid management is the electronic prescribing of controlled substances (EPCS). This received national priority when the SUPPORT for Patients and Communities Act was signed into federal law in October of 2018. Included in this act is a requirement that, by January of 2021, all controlled substance prescriptions covered under Medicare Part D be sent electronically. Taking this as inspiration, many states and private companies have adopted more aggressive policies, choosing to implement electronic prescription requirements prior to the 2021 deadline. In Pennsylvania, where we practice, an EPCS requirement goes into effect in October of this year (2019). National pharmacy chains have also taken a more proactive approach. Walmart, for example, has decided that it will require EPCS nationwide in all of its stores beginning in January of 2020.
Essentially, physicians have no choice – if they plan to continue to prescribe controlled substances, they will need to begin doing so electronically. Unfortunately, this may not be a straightforward process. While most EHRs offer some sort of EPCS solution, it is typically far from user friendly. Setting up EPCS can be costly and incredibly time consuming, and the procedure of actually submitting controlled prescriptions can be onerous and add many extra clicks. If vendors are serious about assisting in solving the opioid crisis, they need to make streamlining the steps of EPCS a high priority.
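It is worth emphasizing that the data an electronic controlled-substance prescription must carry is modest. Here is a minimal sketch expressed as a FHIR R4–style MedicationRequest; the patient ID, NDC code, and quantities are invented for illustration, and a production EPCS submission would additionally involve identity proofing and two-factor signing that we omit here.

```python
# Sketch of the core data in an electronic controlled-substance
# prescription, shaped like a minimal FHIR R4 MedicationRequest.
# All identifiers and codes below are fabricated examples.
def build_controlled_rx(patient_id, ndc_code, drug_name, quantity, days_supply):
    """Return a minimal FHIR-style MedicationRequest dictionary."""
    return {
        "resourceType": "MedicationRequest",
        "status": "active",
        "intent": "order",
        "medicationCodeableConcept": {
            "coding": [{"system": "http://hl7.org/fhir/sid/ndc", "code": ndc_code}],
            "text": drug_name,
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "dispenseRequest": {
            "quantity": {"value": quantity},
            "expectedSupplyDuration": {"value": days_supply, "unit": "days"},
        },
    }
```

The payload itself is trivial to assemble; the many extra clicks physicians experience come from the surrounding vendor workflows, which is precisely where streamlining is needed.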
A prescription for success
As with so many other topics we’ve written about, we face an ever-increasing burden to provide quality patient care while complying with cumbersome and often unfunded external mandates. In the case of the opioid crisis, we believe we can do better. Our prescription for success? Streamlined workflow, smarter EHRs, and fewer clicks. There is no question that physicians and patients will benefit from effective implementation of the new tools at our disposal, but we need EHR vendors to step up and help carry the load.
Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter @doctornotte. Dr. Skolnik is professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health.
The 21st Century Cures Act: Tearing down fortresses to put patients first
"A fortress not only protects those inside of it, but it also enslaves them to work.”
– Anthony T. Hincks
As physicians, we spend a great deal of time intending to do our best for the people we serve. We believe fundamentally in the idea that our patients come first, and we toil daily to exercise that belief. We also want our patients to feel they are driving their care as active participants along the journey. Yet time and time again, despite our greatest attempts, those efforts are stymied by the state of modern medicine.
Over the past 10 years, we have done a tremendous job of constructing expensive fortresses around patient information known as electronic health records (EHRs). Billions of dollars have been spent implementing, upgrading, and optimizing. In spite of this, physicians are increasingly frustrated by EHRs (and in many cases, long to return to the days of paper). It isn’t surprising, then, that patients are frustrated as well. We use terms such as “patient-centered care,” but patients feel like they are not in the center at all. Instead, they can find themselves feeling like complete outsiders, at the mercy of the medical juggernaut to make sure they have the appropriate information when they need it. There are several issues that contribute to the frustrations of physicians and patients, but two in particular warrant attention. The first is the diversity of health IT systems and ongoing issues with EHR interoperability. The second is a provincial attitude surrounding transparency and medical record ownership. We will discuss both of these here, as well as recent legislation designed to address both concerns.
We have written in previous columns about the many challenges of interoperability. Electronic health records, sold by different vendors, typically won’t “talk” to each other. In spite of years of maturation, issues of compatibility remain. Patient data locked inside of one EHR is not easily accessible by a physician using a different EHR. While efforts have been made to streamline information sharing, there are still many fortresses that cannot be breached.
Bridging the moat
The 21st Century Cures Act, enacted by Congress in December of 2016, seeks to define and require interoperability while addressing many other significant problems in health care. According to the legislation, true interoperability means that health IT should enable the secure exchange of electronic health information with other electronic record systems without special effort on the part of the user; the process should be seamless and shouldn’t be cumbersome for physicians or patients. It also must be fully supported by EHR vendors, but those vendors have been expressing significant concerns with the ways in which the act is being interpreted.
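The promise of standards-based exchange is easy to demonstrate. A FHIR R4 Patient resource is plain JSON with a published structure, so any system – regardless of vendor – can consume it without bespoke integration work. The sketch below uses fabricated sample data and a simple extraction function of our own devising to make the point.

```python
# Why standards matter for interoperability: a FHIR R4 Patient
# resource is plain JSON with a published shape, so parsing it
# requires no vendor-specific code. The sample data is fabricated.
import json

SAMPLE_PATIENT = json.dumps({
    "resourceType": "Patient",
    "name": [{"family": "Example", "given": ["Pat"]}],
    "birthDate": "1960-01-01",
})


def summarize_patient(resource_json):
    """Extract a display name and birth date from a FHIR Patient resource."""
    res = json.loads(resource_json)
    if res.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    name = res["name"][0]
    display = " ".join(name.get("given", []) + [name.get("family", "")]).strip()
    return display, res.get("birthDate")
```

When every fortress speaks the same published format, “exchange without special effort” stops being an aspiration and becomes ordinary engineering.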
In a recent blog post, the HIMSS Electronic Health Record Association – a consortium of vendors including Epic, Allscripts, and eClinicalWorks, among others – expressed “significant concerns regarding timelines, ambiguous language, disincentives for innovation, and definitions related to information blocking.”1 This is not surprising, as the onus for improving interoperability falls squarely on their shoulders, and the work to get there is arduous. Regardless of one’s interpretation, the goal of the Cures Act is clear: Arrive at true interoperability in the shortest period of time, while eliminating barriers that prevent patients from accessing their health records. In other words, it asks for the avoidance of “information blocking.”
Breaching the gate
Information blocking, as defined by the Cures Act, is “a practice by a health care provider, health IT developer, health information exchange, or health information network that … is likely to interfere with, prevent, or materially discourage access, exchange, or use of electronic health information.”2 This practice is explicitly prohibited by the legislation – and is ethically wrong – yet it continues to occur implicitly every day as it has for many years. Even if unintentional and solely because of the growing complexity of our information systems, it makes accessing health information incredibly cumbersome for patients. Even worse, attempts to improve patients’ ability to access their health records have only created additional obstacles.
HIPAA (the Health Insurance Portability and Accountability Act of 1996) was designed to protect patient confidentiality and create security around protected health information. While noble in purpose, many have found it burdensome to work within the parameters set forth in the law. Physicians and patients needing legitimate access to clinical data discover endless release forms and convoluted processes standing in their way. Access to the information eventually comes in the form of reams of printed paper or faxed notes that cannot be easily consumed by or integrated into other systems.
The Meaningful Use initiative, while envisioned to improve data exchange and enhance population health, did little to help. Instead of enabling documentation efficiency and improving patient access, it promoted the proliferation of incompatible EHRs and poorly conceived patient portals. It also created heavy costs for both the federal government and physicians and was largely ineffective at producing systems whose use could be considered meaningful. The federal government paid out as much as $44,000 per physician to incentivize the purchase of electronic health record systems, while physicians often spent more than that and, in many cases, wound up with EHRs that didn’t work well and had to be replaced.
Authors and supporters of the 21st Century Cures Act are hoping to avoid the shortcomings of prior legislation by attaching financial penalties to health care providers or IT vendors who engage in information blocking. While allowing for exceptions in appropriate cases, the law is clear: Patients deserve complete access to their medical records. While this goes against tradition, it has been proven to result in better outcomes.
Initiatives such as the OpenNotes movement have been pushing the value of full transparency for some time, and their website includes a long list of examples to prove it. Indeed, several studies have demonstrated increased physician and patient satisfaction when both parties have ready access to health information. We believe that we, as physicians, should fully support the idea and lobby our EHR vendors to do the same.
It is time to tear down the impenetrable fortresses of traditional medicine, then work diligently to rebuild them with our patients safely inside.
Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter @doctornotte. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
References
1. The Electronic Health Record Association blog
It is time to tear down the impenetrable fortresses of traditional medicine, then work diligently to rebuild them with our patients safely inside.
Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter @doctornotte. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
References
1. The Electronic Health Record Association blog
"A fortress not only protects those inside of it, but it also enslaves them to work.”
– Anthony T. Hincks
As physicians, we spend a great deal of time striving to do our best for the people we serve. We believe fundamentally in the idea that our patients come first, and we toil daily to exercise that belief. We also want our patients to feel they are driving their care as active participants along the journey. Yet time and time again, despite our greatest attempts, those efforts are stymied by the state of modern medicine.
Over the past 10 years, we have done a tremendous job of constructing expensive fortresses around patient information known as electronic health records (EHRs). Billions of dollars have been spent implementing, upgrading, and optimizing them. In spite of this, physicians are increasingly frustrated by EHRs (and in many cases, long to return to the days of paper). It isn’t surprising, then, that patients are frustrated as well. We use terms such as “patient-centered care,” but patients feel like they are not in the center at all. Instead, they can find themselves feeling like complete outsiders, at the mercy of the medical juggernaut to make sure they have the appropriate information when they need it.

There are several issues that contribute to the frustrations of physicians and patients, but two in particular warrant attention. The first is the diversity of health IT systems and ongoing issues with EHR interoperability. The second is a provincial attitude surrounding transparency and medical record ownership. We will discuss both of these here, as well as recent legislation designed to address both concerns.
We have written in previous columns about the many challenges of interoperability. Electronic health records, sold by different vendors, typically won’t “talk” to each other. In spite of years of maturation, issues of compatibility remain. Patient data locked inside of one EHR is not easily accessible by a physician using a different EHR. While efforts have been made to streamline information sharing, there are still many fortresses that cannot be breached.
Bridging the moat
The 21st Century Cures Act, enacted by Congress in December of 2016, seeks to define and require interoperability while addressing many other significant problems in health care. According to the legislation, true interoperability means that health IT should enable the secure exchange of electronic health information with other electronic record systems without special effort on the part of the user; the process should be seamless and shouldn’t be cumbersome for physicians or patients. It also must be fully supported by EHR vendors, but those vendors have been expressing significant concerns with the ways in which the act is being interpreted.
In a recent blog post, the HIMSS Electronic Health Record Association – a consortium of vendors that includes Epic, Allscripts, eClinicalWorks, and several others – expressed “significant concerns regarding timelines, ambiguous language, disincentives for innovation, and definitions related to information blocking.”1 This is not surprising, as the onus for improving interoperability falls squarely on their shoulders, and the work to get there is arduous. Regardless of one’s interpretation, the goal of the Cures Act is clear: Arrive at true interoperability in the shortest period of time, while eliminating barriers that prevent patients from accessing their health records. In other words, it asks for the avoidance of “information blocking.”
Breaching the gate
Information blocking, as defined by the Cures Act, is “a practice by a health care provider, health IT developer, health information exchange, or health information network that … is likely to interfere with, prevent, or materially discourage access, exchange, or use of electronic health information.”2 This practice is explicitly prohibited by the legislation – and is ethically wrong – yet it continues to occur every day, as it has for many years. Even when it is unintentional, a byproduct of the growing complexity of our information systems, it makes accessing health information incredibly cumbersome for patients. Even worse, some attempts to improve patients’ ability to access their health records have only created additional obstacles.
HIPAA (the Health Insurance Portability and Accountability Act of 1996) was designed to protect patient confidentiality and create security around protected health information. While noble in purpose, many have found it burdensome to work within the parameters set forth in the law. Physicians and patients needing legitimate access to clinical data discover endless release forms and convoluted processes standing in their way. Access to the information eventually comes in the form of reams of printed paper or faxed notes that cannot be easily consumed by or integrated into other systems.
The Meaningful Use initiative, while envisioned to improve data exchange and enhance population health, did little to help. Instead of enabling documentation efficiency and improving patient access, it promoted the proliferation of incompatible EHRs and poorly conceived patient portals. It also created heavy costs for both the federal government and physicians and was largely ineffective at producing systems whose use could be considered meaningful. The federal government paid out as much as $44,000 per physician to incentivize the purchase of electronic health record systems, while physicians often spent more than that amount and, in many cases, wound up with EHRs that didn’t work well and had to be replaced.
Authors and supporters of the 21st Century Cures Act are hoping to avoid the shortcomings of prior legislation by attaching financial penalties to health care providers or IT vendors who engage in information blocking. While allowing for exceptions in appropriate cases, the law is clear: Patients deserve complete access to their medical records. While this goes against tradition, it has been proven to result in better outcomes.
Initiatives such as the OpenNotes movement have been pushing the value of full transparency for some time, and the OpenNotes website includes a long list of examples to prove it. Indeed, several studies have demonstrated increased physician and patient satisfaction when both parties have ready access to health information. We believe that we, as physicians, should fully support the idea and lobby our EHR vendors to do the same.
It is time to tear down the impenetrable fortresses of traditional medicine, then work diligently to rebuild them with our patients safely inside.
Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter @doctornotte. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
References
1. The Electronic Health Record Association blog
Pediatric gastroesophageal reflux
In a 2018 guideline, the writing committee defined GER as reflux of stomach contents to the esophagus. GER is considered pathologic and, therefore, gastroesophageal reflux disease (GERD) when it is associated with troublesome symptoms and/or complications that can include esophagitis and aspiration.
Infants
GERD is difficult to diagnose in infants. The symptoms of GERD, such as crying after feeds, regurgitation, and irritability, occur commonly in all infants and in any individual infant may not be reflective of GERD. Regurgitation is common, frequent, and normal in infants up to 6 months of age. A common challenge occurs when families request treatment for infants with irritability, back arching, and/or regurgitation who are otherwise doing well. In this group of infants, it is important to recognize that neither testing nor therapy is indicated unless there is difficulty with feeding, growth, or acquisition of milestones, or there are red flag signs.
In infants with a history of recurrent regurgitation, the physical exam is usually sufficient to distinguish uncomplicated GER from GERD and other more worrisome diagnoses. Red flag symptoms raise the possibility of a different diagnosis. Red flag symptoms include weight loss; lethargy; excessive irritability/pain; onset of vomiting after 6 months of age or vomiting persisting past 12-18 months of age; rapidly increasing head circumference; persistent forceful, nocturnal, bloody, or bilious vomiting; abdominal distention; rectal bleeding; and chronic diarrhea. GERD that starts after 6 months of age or persists after 12 months of age warrants further evaluation, often with referral to a pediatric gastroenterologist.
When GERD is suspected, the first therapeutic steps are to institute behavioral changes. Caregivers should avoid overfeeding and modify the feeding pattern to more frequent feedings consisting of less volume at each feed. The addition of thickeners to feeds does reduce regurgitation, although it may not affect other GERD signs and symptoms. Formula can be thickened with rice cereal, which tends to be an affordable choice that doesn’t clog nipples. Enzymes present in breast milk digest cereal thickeners, so breast milk can be thickened with xanthan gum (after 1 year of age) or carob bean–based products (after 42 weeks gestation).
If these modifications do not improve symptoms, the next step is to change the type of feeds. Some infants in whom GERD is suspected actually have cow’s milk protein allergy (CMPA), so a trial of cow’s milk elimination is warranted. A breastfeeding mother can eliminate all dairy from her diet, including casein and whey. Caregivers can switch to an extensively hydrolyzed formula or an amino acid–based formula. The guidelines do not recommend soy-based formulas because they are not available in Europe and because a significant percentage of infants with CMPA also develop allergy to soy, and they do not recommend rice hydrolysate formula because of a lack of evidence. Dairy can be reintroduced at a later point. While positional changes, including elevating the head of the crib or placing the infant in the left lateral position, can help decrease GERD, the American Academy of Pediatrics strongly discourages these positions because of safety concerns, so the guidelines do not recommend positional change.
If a 2-4 week trial of nonpharmacologic interventions fails, the next step is referral to a pediatric gastroenterologist. If a pediatric gastroenterologist is not available, a 4-8 week trial of acid suppressive medication may be given. No trial has shown utility of a trial of acid suppression as a diagnostic test for GERD. Medication should only be used in infants with strongly suspected GERD and, per the guidelines, “should not be used for the treatment of visible regurgitation in otherwise healthy infants.” Medications to treat GER do not have evidence of efficacy, and there is evidence of an increased risk of infection with use of acid suppression, including an increased risk of necrotizing enterocolitis, pneumonia, upper respiratory tract infections, sepsis, urinary tract infections, and Clostridium difficile. If used, proton-pump inhibitors are preferred over histamine-2 receptor blockers. Antacids and alginates are not recommended.
Older children
In children with heartburn or regurgitation without red flag symptoms, a trial of lifestyle changes and dietary education may be initiated. If a child is overweight, it is important to inform the patient and parents that excess body weight is associated with GERD. The head of the bed can be elevated along with left lateral positioning. The guidelines do not support any probiotics or herbal medicines.
If bothersome symptoms persist, a trial of acid-suppressing medication for 4-8 weeks is reasonable. A PPI is preferred to a histamine-2 receptor blocker. PPI safety studies are lacking, but case studies suggest an increase in infections in children taking acid-suppressing medications. Therefore, as with infants, if medications are used they should be prescribed at the lowest dose and for the shortest period of time possible. If medications are not helping, or need to be used long term, referral to a pediatric gastroenterologist can be considered. Of note, the guidelines do support a 4-8 week trial of PPIs in older children as a diagnostic test; this differs from the recommendations for infants, in whom a trial for diagnostic purposes is discouraged.
Diagnostic testing
Refer to a gastroenterologist for endoscopy in cases of persistent symptoms despite PPI use or failure to wean off medication. If there are no erosions, pH-impedance monitoring or pH-metry can help distinguish between nonerosive reflux disease (NERD), reflux hypersensitivity, and functional heartburn. If it is performed when a child is off of PPIs, endoscopy can also diagnose PPI-responsive eosinophilic esophagitis. Barium contrast, abdominal ultrasonography, and manometry may be considered during the course of a search for an alternative diagnosis, but they should not be used to diagnose or confirm GERD.
The bottom line
Most GER is physiologic and does not need treatment. First-line treatment for GERD in infants and children is nonpharmacologic intervention.
Reference
Rosen R et al. Pediatric Gastroesophageal Reflux Clinical Practice Guidelines: Joint Recommendations of the North American Society for Pediatric Gastroenterology, Hepatology, and Nutrition and the European Society for Pediatric Gastroenterology, Hepatology, and Nutrition. J Pediatr Gastroenterol Nutr. 2018 Mar;66(3):516-554.
Dr. Oh is a third-year resident in the Family Medicine Residency at Abington-Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington-Jefferson Health.
Technology and the evolution of medical knowledge: What’s happening in the background
“Knowledge comes, but wisdom lingers. It may not be difficult to store up in the mind a vast quantity of facts within a comparatively short time, but the ability to form judgments requires the severe discipline of hard work and the tempering heat of experience and maturity.” – Calvin Coolidge
The information we use every day in patient care comes from one of two sources: personal experience (our own or that of another clinician) or a research study. Up until a hundred years ago, medicine was primarily a trade in which more experienced clinicians passed along their wisdom to younger clinicians, teaching them the things that they had learned during their long and difficult careers. Knowledge accrued slowly, influenced by the biased observations of each practicing doctor. People tended to remember their successes or unusual outcomes more than their failures or ordinary outcomes. Eventually, doctors realized that their knowledge base would be more accurate and accumulate more efficiently if they looked at the outcomes of many patients given the same treatment. Thus, the observational trial emerged.
As promising and important as the dawn of observational research was, it quickly became apparent that these trials had important limitations, most notably the potential for bias and confounding variables to influence the results. Bias occurs when the opinion of the researcher influences how the result is interpreted. Confounding occurs when an outcome is generated by some unexpected aspect of the patient, environment, or medication, rather than by the thing being studied. An example might be a study that finds a higher mortality rate in a city by the sea than in a city located inland. From these results, one might initially conclude that the sea is unhealthy. After realizing that more retired people live in the city by the sea, however, one would probably change one’s mind. In this example, the older age of the seaside city’s population would be the confounding variable that drove its increased mortality.
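The seaside example can be made concrete with a few lines of arithmetic. In the sketch below, every population size and mortality rate is invented for illustration; mortality is set to depend only on age group, never on location, yet the crude city-level rates still differ:

```python
# Hypothetical numbers illustrating confounding by age; mortality here
# depends only on age group, never on location.
RATE = {"young": 0.005, "old": 0.050}  # invented annual mortality rates

# The seaside city has more retirees than the inland city.
POPULATIONS = {
    "seaside": {"young": 20_000, "old": 30_000},
    "inland": {"young": 40_000, "old": 10_000},
}

def crude_mortality(city: str) -> float:
    """Deaths per person per year, ignoring age entirely."""
    pop = POPULATIONS[city]
    deaths = sum(count * RATE[age] for age, count in pop.items())
    return deaths / sum(pop.values())

# The crude comparison wrongly suggests the sea is unhealthy ...
print(f"seaside: {crude_mortality('seaside'):.3f}")  # 0.032
print(f"inland:  {crude_mortality('inland'):.3f}")   # 0.014
# ... yet within each age group the rates are identical by construction,
# so age, not location, drives the difference.
```

Stratifying the comparison by age, as the construction makes explicit, removes the apparent effect of location entirely; this is precisely what an unadjusted observational comparison cannot do on its own.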
To solve the inherent problems with observational trials, the randomized, controlled trial was developed. Our reliance on information from RCTs runs so deep that it is hard to believe that the first modern clinical trial was not reported until 1948, in a paper on streptomycin in the treatment of pulmonary tuberculosis. It followed that faith in the randomized, controlled trial reached almost religious proportions, and the belief that information that does not come from an RCT should not be relied on was held by many, until recently. Why have things changed and what does this have to do with technology?
The first reason is an increasing recognition that, for all of their advantages, randomized trials have one nagging but critical limitation – generalizability. Randomized trials have strict inclusion and exclusion criteria; we have no such criteria when we take care of patients in our offices. For example, a recent trial published in the New England Journal of Medicine (2018 Dec 4. doi: 10.1056/NEJMoa1814468), entitled “Apixaban to prevent venous thromboembolism in patients with cancer,” concluded that apixaban therapy resulted in a lower rate of venous thromboembolism than did placebo in patients starting chemotherapy for cancer. This was a large trial, with more than 500 patients enrolled, and it reached an important conclusion with significant clinical implications. Yet look at the details of the article: more than 1,800 patients were assessed to find the 500 who were eventually included. This is fairly typical of clinical trials and raises an important point: We need to be careful about how well the results of these trials generalize to the patient in front of us. That brings us to the second reason – something happening behind the scenes to which each of us has contributed.
Real-world research
As we see each patient and type information into the EHR, we add to an enormous database of medical information. That database is increasingly being used to advance our knowledge of how medicines actually work in the real world with real patients, and it has already started providing answers that supplement, clarify, and even change our perspectives. It yields observations derived from real populations that have not been selected or influenced by the way a study is conducted. This new field is called “real-world research.”
An example of the difference between randomized controlled trial results and real-world research was published in Diabetes Care. This article examined the effectiveness of dipeptidyl peptidase 4 (DPP-4) inhibitors vs. glucagonlike peptide–1 receptor agonists (GLP-1 RAs) in the treatment of patients with diabetes. The goal of the study was to assess the difference in change in hemoglobin A1c between real-world evidence and randomized-trial evidence after initiation of a GLP-1 RA or a DPP-4 inhibitor. In RCTs, GLP-1 RAs decreased HbA1c by about 1.3% while DPP-4 inhibitors decreased HbA1c by about 0.68% (i.e., DPP-4 inhibitors were about half as effective). However, when the effects of these two diabetes drugs were examined using clinical databases in the real world, the two classes of medications had almost the same effect, each decreasing HbA1c by about 0.5%. This difference between RCT and real-world evidence might have been caused by the differential adherence to the two classes of medications, one being an injectable with significant GI side effects, and the other being a pill with few side effects.
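The adherence explanation can be made quantitative with a back-of-the-envelope model: if the only gap between trial and practice were adherence, real-world effectiveness would be roughly trial efficacy scaled by the fraction of doses actually taken. The adherence figures below are hypothetical assumptions of mine, not data from the study; only the RCT HbA1c reductions come from the article.

```python
# Hedged sketch: effectiveness ≈ RCT efficacy × real-world adherence.
# RCT HbA1c reductions are from the article; the adherence fractions
# are INVENTED to show how differential adherence could close the gap.

def real_world_effect(rct_effect, adherence):
    """Expected real-world HbA1c reduction if adherence were the only gap."""
    return rct_effect * adherence

glp1_rct, dpp4_rct = 1.3, 0.68               # % HbA1c drop in RCTs
glp1_adherence, dpp4_adherence = 0.40, 0.75  # hypothetical adherence

print(round(real_world_effect(glp1_rct, glp1_adherence), 2))  # 0.52
print(round(real_world_effect(dpp4_rct, dpp4_adherence), 2))  # 0.51
```

Under these assumed adherence rates, the twofold efficacy advantage of the injectable GLP-1 RA washes out to roughly the 0.5% real-world reduction reported for both classes.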
The important take-home point is that we are now all contributing to a massive database that can be queried to give quicker, more accurate, more relevant information. Along with personal experience and randomized trials, this third source of clinical information, when used with wisdom, will provide us with the information we need to take ever better care of patients.
References
Carls GS et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1 RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017;40:1469-78.
Blonde L et al. Interpretation and impact of real-world clinical data for the practicing clinician. Adv Ther. 2018 Nov;35:1763-74.
Prevention and Treatment of Traveler’s Diarrhea
Importance
The prevention and treatment of traveler’s diarrhea (TD) is a common reason that patients consult their physician prior to foreign travel. TD can result in lost time and opportunity, as well as overseas medical encounters and hospitalization.
The guideline provides recommendations to providers regarding the use of antibiotic and nonantibiotic therapies for the prevention and treatment of TD.
Prophylaxis
The panel recommends that antimicrobial prophylaxis should not be used routinely in travelers but should be considered for travelers at high risk of health-related complications of TD (both strong recommendations, low/very low level of evidence [LOE]). High-risk individuals include those with a history of clinically significant long-term morbidity following an enteric infection or with a serious chronic illness that predisposes them to TD-related complications. Bismuth subsalicylate (BSS) may be considered for any traveler to prevent TD (strong recommendation, high LOE). Studies show that a lower dose of 1.05 g/day is preventive, although it is unclear whether it is as effective as higher doses of 2.1 g/day or 4.2 g/day. When prophylaxis is indicated, travelers should be prescribed rifaximin (strong recommendation, moderate LOE) based on the susceptibility of most enteric pathogens and the drug’s extremely favorable safety profile. Fluoroquinolones (FQs) are no longer recommended for prophylaxis (strong recommendation, low/very low LOE) because of neurologic and musculoskeletal side effects that may outweigh benefits, as well as emerging resistance among enteric pathogens (70%-80% in Campylobacter spp. from Nepal and Thailand and 65% in enterotoxigenic Escherichia coli [ETEC] and enteroaggregative E. coli [EAEC] in India).
Treatment
The following treatment recommendations are based on classifying TD by its functional effects, so the panel created new definitions of TD severity. This is a change from previous definitions, which used a traditional frequency-based algorithm, and it allows therapy to be tailored to the individual. Travelers can be prescribed antibiotics and antimotility agents to take with them, along with advice on when to use each agent.
Mild: diarrhea that is tolerable, is not distressing, and does not interfere with planned activities.
Encourage supportive measures such as rehydration and nonantibiotic, antimotility drugs, such as loperamide or BSS (both strong recommendations, moderate LOE).
Moderate: diarrhea that is distressing or interferes with planned activities.
Antibiotics may be used (weak recommendation, moderate LOE), as early and effective treatment may mitigate well-described chronic health consequences, including irritable bowel syndrome. Three options exist. FQs may be used outside of Southeast and South Asia (strong recommendation, moderate LOE), but their potential for adverse effects, including musculoskeletal consequences, must be considered. Azithromycin may be used (strong recommendation, high LOE) because studies show no significant difference in efficacy between it and FQs, limited resistance among common TD pathogens (although concerns exist in Nepal), and a good side-effect profile. A third choice is rifaximin (weak recommendation, moderate LOE), although caution is warranted with empiric therapy in regions where a high risk of invasive pathogens is anticipated.
Loperamide may be used as adjunctive therapy for moderate to severe TD (strong recommendation, high LOE) to add symptomatic relief to curative treatment, or as monotherapy in moderate TD (strong recommendation, high LOE). This holds even in children aged 2-11 years, in whom loperamide is beneficial without causing severe side effects.
Severe: diarrhea that is incapacitating or completely prevents planned activities; all dysentery (passage of grossly bloody stools).
Antibiotics should be used (strong recommendation, high LOE). Azithromycin is the preferred choice and is first-line for dysentery or febrile diarrhea (strong recommendation, moderate LOE) because of the likelihood of FQ-resistant bacteria being the cause of dysentery. FQs and rifaximin are also choices that can be used to treat severe, nondysenteric TD (both weak recommendations, moderate LOE).
Furthermore, single-dose antibiotic regimens may be used to treat moderate or severe TD (strong recommendation, high LOE), because studies have shown equivalent efficacy for treatment of watery, noninvasive diarrhea among FQs (single dose or 3 days), azithromycin (single dose or 3 days), and rifaximin (three times daily for 3 days).
Persistent: diarrhea lasting longer than 2 weeks.
Functional bowel disease (FBD) may occur after bouts of TD and may meet Rome III or IV criteria for irritable bowel syndrome. Thus, in a traveler without pretravel GI disease, in whom the evaluation for microbial etiologies and underlying GI disease is negative, postinfectious FBD must be considered.
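The severity-based algorithm above can be condensed into a simple lookup. This is an illustrative sketch only: the function name and structure are my own, the strings compress first-line options named in the text, and the LOE annotations and full guideline should of course govern any clinical decision.

```python
# Illustrative condensation of the panel's severity-based approach.
# NOT clinical software: a teaching sketch of the decision logic above.

def td_treatment(severity, dysentery=False):
    """Map a TD severity category to the first-line options named in the text."""
    if dysentery or severity == "severe":
        # Azithromycin is first-line for dysentery/febrile diarrhea;
        # loperamide may be added as adjunctive therapy.
        return "azithromycin first-line; FQ or rifaximin if nondysenteric; +/- loperamide"
    if severity == "moderate":
        return "loperamide alone, or FQ/azithromycin/rifaximin +/- loperamide"
    if severity == "mild":
        return "rehydration plus loperamide or BSS"
    if severity == "persistent":
        return "microbiological workup; consider postinfectious FBD"
    raise ValueError("unknown severity: " + severity)

print(td_treatment("mild"))  # rehydration plus loperamide or BSS
```

Note how dysentery overrides the severity category: grossly bloody stools are treated as severe regardless of functional impact, mirroring the definition above.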
Follow-up and diagnostic testing
The panel recommends microbiological testing in returning travelers with severe or persistent symptoms or bloody/mucousy diarrhea, and in those who fail empiric therapy (strong recommendation, low/very low LOE). Molecular testing, aimed at a broad range of clinically relevant pathogens, is preferred when rapid results are clinically important or when nonmolecular tests have failed to establish a diagnosis. Note, however, that molecular testing may in some cases detect colonization rather than infection.
The bottom line
The expert panel made 20 graded recommendations to help guide the provider with nonantibiotic and antibiotic prophylaxis and treatment of TD. The main take-home points include:
- Prophylaxis should be considered only in high-risk groups; rifaximin is the first choice, and BSS is a second option.
- All travelers should be provided with loperamide and an antibiotic for self-treatment if needed.
- Mild diarrhea should be treated with increased fluid intake and loperamide or BSS.
- Moderate to severe diarrhea should be treated with single-dose antimicrobial therapy (an FQ or azithromycin) or with rifaximin three times a day.
- Instead of antibiotics, loperamide may be considered as monotherapy for moderate diarrhea; loperamide can be used with antibiotics for both moderate and severe TD.
Dr. Shrestha is a second-year resident in the Family Medicine Residency Program at Abington (Pa.) - Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington - Jefferson Health.
Importance
The prevention and treatment of traveler’s diarrhea (TD) is a common reason that patients consult their physician prior to foreign travel. TD can result in lost time and opportunity, as well as overseas medical encounters and hospitalization.
to providers regarding the use of antibiotic and nonantibiotic therapies for the prevention and treatment of TD.Prophylaxis
The panel recommends that antimicrobial prophylaxis should not be used routinely in travelers, but it should be considered for travelers who are at high risk of health-related complications of TD (both strong recommendations, low/very low level of evidence [LOE]). High-risk individuals include those with a history of clinically significant long-term morbidity following an enteric infection or serious chronic illnesses that predisposes them for TD-related complications. Bismuth subsalicylate (BSS) may be considered for any traveler to prevent TD (3, strong recommendation, high LOE). Studies show that a lower dose of 1.05 g/day is preventive, although it is unclear whether it is as effective as higher doses of 2.1 g/day or 4.2 g/day. When prophylaxis is indicated, travelers should be prescribed rifaximin (strong recommendation, moderate LOE) based on susceptibility of most enteric pathogens and the drug’s extremely favorable safety profile. Fluoroquinolones (FQ) are no longer recommended for prophylaxis (strong recommendation, low/very low LOE) because of neurologic and musculoskeletal side effects that may outweigh benefits, as well as emerging resistance of enteric pathogens (70%-80% in Campylobacter spp. from Nepal and Thailand and 65% in Enterotoxigenic Escherichia coli [ETEC] and Enteroaggregative E. coli [EAEC] in India).
Treatment
The following treatment recommendations are based on the classification of TD using functional effects of severity; therefore, the panel made new definitions for TD severity. This is a change from previous definitions that utilized a traditional frequency-based algorithm in order to tailor therapy for the individual. Individuals can be prescribed antibiotics and antimotility agents to take with them during travel, along with advice regarding how to judge when to use each agent.
Mild: diarrhea that is tolerable, is not distressing, and does not interfere with planned activities.
Encourage supportive measures such as rehydration and nonantibiotic, antimotility drugs, such as loperamide or BSS (both strong recommendations, moderate LOE).
Moderate: diarrhea that is distressing or interferes with planned activities.
Antibiotics may be used (weak recommendation, moderate LOE) as early and effective treatment may mitigate the well-described chronic health consequences including irritable bowel syndrome. Three options exist. FQs may be used outside of Southeast and South Asia (strong recommendation, moderate LOE), but their potential for adverse effects and musculoskeletal consequences must be considered. Azithromycin may be used (strong recommendation, high LOE) because studies show no significant differences in efficacy between it and FQs, limited resistance to common TD pathogens (although concerns exist in Nepal), and good side effect profile. Another choice is rifaximin (weak recommendation, moderate LOE), although one should exercise caution for empirical therapy in regions in which being at high risk of invasive pathogens is anticipated.
Loperamide may be used as adjunctive therapy for moderate to severe TD (strong recommendation, high LOE) to add symptomatic relief with curative treatment or as monotherapy in moderate TD (strong recommendation, high LOE). This is specifically true in children aged 2-11 years, in whom loperamide is beneficial without causing severe side effects.
Severe: diarrhea that is incapacitating or completely prevents planned activities; all dysentery (passage of grossly bloody stools).
Antibiotics should be used (strong recommendation, high LOE). Azithromycin is the preferred choice and is first-line for dysentery or febrile diarrhea (strong recommendation, moderate LOE) because of the likelihood of FQ-resistant bacteria being the cause of dysentery. FQs and rifaximin are also choices that can be used to treat severe, nondysenteric TD (both weak recommendations, moderate LOE).
Furthermore, single-dose antibiotics may be used to treat moderate or severe TD (strong recommendation, high LOE) because studies have shown equivalent efficacy for treatment of watery noninvasive diarrhea among FQs (3 days, single dose), azithromycin (3 days, single dose), and rifaximin (3 days, three times daily).
Persistent: diarrhea lasting longer than 2 weeks.
Functional bowel disease (FBD) may occur after bouts of TD and may meet Rome III or IV criteria for irritable bowel syndrome. Thus, in a traveler without pretravel GI disease, in whom the evaluation for microbial etiologies and underlying GI disease is negative, postinfectious FBD must be considered.
Follow-up and diagnostic testing
The panel recommends microbiological testing in returning travelers with severe or persistent symptoms, bloody/mucousy diarrhea, or in those who fail empiric therapy (strong recommendation, low/very low LOE). Molecular testing, aimed at a broad range of clinically relevant pathogens, is preferred when rapid results are clinically important or nonmolecular tests have failed to establish a diagnosis. Furthermore, molecular testing may, in some cases, detect colonization rather than infection.
The bottom line
The expert panel made 20 graded recommendations to help guide the provider with nonantibiotic and antibiotic prophylaxis and treatment of TD. The main take-home points include:
- Prophylaxis should be considered only in high-risk groups; rifaximin is the first choice, and BSS is a second option.
- All travelers should be provided with loperamide and an antibiotic for self-treatment if needed.
- Mild diarrhea should be treated with increased fluid intake and loperamide or BSS.
- Moderate to severe diarrhea should be treated with single-dose antimicrobial therapy of FQ or azithromycin or with rifaximin dosing three times a day.
- Instead of antibiotics, loperamide may be considered as monotherapy for moderate diarrhea; loperamide can be used with antibiotics for both moderate and severe TD.
Dr. Shrestha is a second-year resident in the Family Medicine Residency Program at Abington (Pa.) - Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington - Jefferson Health.
Reference:
Importance
The prevention and treatment of traveler’s diarrhea (TD) is a common reason that patients consult their physician prior to foreign travel. TD can result in lost time and opportunity, as well as overseas medical encounters and hospitalization.
to providers regarding the use of antibiotic and nonantibiotic therapies for the prevention and treatment of TD.Prophylaxis
The panel recommends that antimicrobial prophylaxis should not be used routinely in travelers, but it should be considered for travelers who are at high risk of health-related complications of TD (both strong recommendations, low/very low level of evidence [LOE]). High-risk individuals include those with a history of clinically significant long-term morbidity following an enteric infection or serious chronic illnesses that predisposes them for TD-related complications. Bismuth subsalicylate (BSS) may be considered for any traveler to prevent TD (3, strong recommendation, high LOE). Studies show that a lower dose of 1.05 g/day is preventive, although it is unclear whether it is as effective as higher doses of 2.1 g/day or 4.2 g/day. When prophylaxis is indicated, travelers should be prescribed rifaximin (strong recommendation, moderate LOE) based on susceptibility of most enteric pathogens and the drug’s extremely favorable safety profile. Fluoroquinolones (FQ) are no longer recommended for prophylaxis (strong recommendation, low/very low LOE) because of neurologic and musculoskeletal side effects that may outweigh benefits, as well as emerging resistance of enteric pathogens (70%-80% in Campylobacter spp. from Nepal and Thailand and 65% in Enterotoxigenic Escherichia coli [ETEC] and Enteroaggregative E. coli [EAEC] in India).
Treatment
The following treatment recommendations are based on classifying TD by its functional effects on the traveler; the panel therefore created new definitions of TD severity. This is a change from previous definitions, which used a traditional frequency-based algorithm, and is intended to tailor therapy to the individual. Individuals can be prescribed antibiotics and antimotility agents to take with them during travel, along with advice on how to judge when to use each agent.
Mild: diarrhea that is tolerable, is not distressing, and does not interfere with planned activities.
Encourage supportive measures such as rehydration and nonantibiotic, antimotility drugs, such as loperamide or BSS (both strong recommendations, moderate LOE).
Moderate: diarrhea that is distressing or interferes with planned activities.
Antibiotics may be used (weak recommendation, moderate LOE), as early and effective treatment may mitigate the well-described chronic health consequences, including irritable bowel syndrome. Three options exist. FQs may be used outside of Southeast and South Asia (strong recommendation, moderate LOE), but their potential for adverse effects and musculoskeletal consequences must be considered. Azithromycin may be used (strong recommendation, high LOE) because studies show no significant differences in efficacy between it and FQs, limited resistance among common TD pathogens (although concerns exist in Nepal), and a good side effect profile. Another choice is rifaximin (weak recommendation, moderate LOE), although caution should be exercised with empiric therapy in regions where the risk of invasive pathogens is anticipated to be high.
Loperamide may be used as adjunctive therapy for moderate to severe TD (strong recommendation, high LOE) to add symptomatic relief to curative treatment, or as monotherapy in moderate TD (strong recommendation, high LOE). This is particularly true in children aged 2-11 years, in whom loperamide is beneficial without causing severe side effects.
Severe: diarrhea that is incapacitating or completely prevents planned activities; all dysentery (passage of grossly bloody stools).
Antibiotics should be used (strong recommendation, high LOE). Azithromycin is the preferred choice and is first-line for dysentery or febrile diarrhea (strong recommendation, moderate LOE) because of the likelihood of FQ-resistant bacteria being the cause of dysentery. FQs and rifaximin are also choices that can be used to treat severe, nondysenteric TD (both weak recommendations, moderate LOE).
Furthermore, single-dose antibiotics may be used to treat moderate or severe TD (strong recommendation, high LOE) because studies have shown equivalent efficacy for treatment of watery noninvasive diarrhea among FQs (3 days, single dose), azithromycin (3 days, single dose), and rifaximin (3 days, three times daily).
Persistent: diarrhea lasting longer than 2 weeks.
Functional bowel disease (FBD) may occur after bouts of TD and may meet Rome III or IV criteria for irritable bowel syndrome. Thus, in a traveler without pretravel GI disease, in whom the evaluation for microbial etiologies and underlying GI disease is negative, postinfectious FBD must be considered.
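The panel’s severity-based approach is, at heart, a small decision algorithm. The sketch below paraphrases the recommendations above as code; the severity labels and therapy strings are simplifications for illustration, not a clinical decision tool.

```python
# Illustrative sketch of the panel's severity-based approach to TD treatment.
# Categories and therapies are paraphrased from the guideline summary above;
# this is NOT a clinical decision tool.

def td_treatment(severity: str, dysentery: bool = False) -> str:
    """Map a traveler's diarrhea severity category to suggested therapy."""
    if severity == "mild":
        return "supportive care plus loperamide or bismuth subsalicylate"
    if severity == "moderate":
        # Loperamide monotherapy is also an option for moderate TD.
        return ("antibiotic (azithromycin; FQ outside South/Southeast Asia; "
                "or rifaximin) with or without loperamide")
    if severity == "severe":
        if dysentery:
            return "azithromycin (preferred for dysentery or febrile diarrhea)"
        return ("antibiotic (azithromycin; FQ or rifaximin for nondysenteric TD) "
                "plus loperamide for symptomatic relief")
    if severity == "persistent":
        return ("microbiological workup; consider postinfectious "
                "functional bowel disease if negative")
    raise ValueError(f"unknown severity: {severity}")
```

Framing the recommendations this way also makes the key branch explicit: dysentery pushes therapy toward azithromycin regardless of other considerations.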
Follow-up and diagnostic testing
The panel recommends microbiological testing in returning travelers with severe or persistent symptoms, bloody or mucoid diarrhea, or failure of empiric therapy (strong recommendation, low/very low LOE). Molecular testing, aimed at a broad range of clinically relevant pathogens, is preferred when rapid results are clinically important or when nonmolecular tests have failed to establish a diagnosis. Note, however, that molecular testing may in some cases detect colonization rather than infection.
The bottom line
The expert panel made 20 graded recommendations to help guide the provider with nonantibiotic and antibiotic prophylaxis and treatment of TD. The main take-home points include:
- Prophylaxis should be considered only in high-risk groups; rifaximin is the first choice, and BSS is a second option.
- All travelers should be provided with loperamide and an antibiotic for self-treatment if needed.
- Mild diarrhea should be treated with increased fluid intake and loperamide or BSS.
- Moderate to severe diarrhea should be treated with single-dose antimicrobial therapy (an FQ or azithromycin) or with rifaximin dosed three times a day.
- Instead of antibiotics, loperamide may be considered as monotherapy for moderate diarrhea; loperamide can be used with antibiotics for both moderate and severe TD.
Electronic health records and the lost power of prose
“Don’t tell me the moon is shining; show me the glint of light on broken glass.” – Anton Chekhov
In March 2006, four programmers turned entrepreneurs launched Twitter. This revolutionary tool experienced monumental growth over the next 10 years, from a handful of users sharing a few thousand messages (known as “tweets”) each day to a global social network of over 300 million users valued at over $25 billion. In fact, on Election Day 2016, Twitter was the No. 1 source of breaking news,1 and it has been used as a launchpad for everything from social activism to national revolutions.
When Twitter was first conceived, it was designed to operate through wireless phone carriers’ SMS messaging functionality (aka “via text message”). SMS messages are limited to just 160 characters, so Twitter’s creators decided to restrict tweets to 140 characters, allowing 20 characters for a username. This decision created a necessity for communication efficiency that harks back to the days of the telegraph. From the liberal use of contractions and abbreviations to the tireless search for the shortest synonyms possible, Twitter users have employed countless techniques to enable them to say more with less. While clever and creative, this extreme verbal austerity has pervaded other media as well, becoming the hallmark literary style of the current generation.
Contemporaneous with the Twitter revolution, the medical field has allowed technology to dramatically change its style of communication as well, but in the opposite direction: we have become far less efficient in our use of words, and we seem to be doing a poorer job of expressing ourselves.
Saying less with more
I was once asked to provide expert testimony in a medical malpractice lawsuit. Working in support of the defense, I endured question after question from the plaintiff’s legal team as they picked apart every aspect of the case. Of particular interest was the physician’s documentation. Sadly – yet perhaps unsurprisingly – it was poor. The defendant had clearly used an EHR template and clicked checkboxes to create his note, documenting history, physical exam, assessment, and plan without having typed a single word. While adequate for billing purposes, the note was missing any narrative that could communicate the story of what had transpired during the patient’s visit. Sure, the presenting symptoms and vital signs were there, but no description of the patient’s appearance had been recorded. What had the physician been thinking? What unspoken messages had led him to make the decisions he had made?
Like Twitter, the dawn of EHRs created an entirely new form of communication, but instead of limiting the content of physicians’ notes it expanded it. Objectively, this has made for more complete notes. Subjectively, this has led to notes packed with data, yet devoid of meaningful narrative. While handwritten notes from the previous generation were brief, they included the most important elements of the patient’s history and often the physician’s thought process in forming the differential. The electronically generated notes of today are quite the opposite; they are dense, yet far from illuminating. A clinician referring back to the record might have tremendous difficulty discerning salient features amidst all of the “note bloat.” This puts the patient (and the provider, as in the case above) at risk. Details may be present, but the diagnosis will be missed without the story that ties them all together.
Writing a new chapter
Physicians hoping to create meaningful notes are often stymied by the technology at their disposal or the demands placed on their time. These issues, combined with an ever-growing number of regulatory requirements, are what led to the decay of narrative in the first place. As a result, doctors are looking for alternative ways to buck the trend and bring patients’ stories back to their medical records. These methods are often expensive or involved, but in many cases they dramatically improve quality and efficiency.
An example of a tool that allows doctors to achieve these goals is speech recognition technology. Instead of typing or clicking, physicians dictate into the EHR, creating notes that are typically richer and more akin to a story than a list of symptoms or data points. When voice-to-text is properly deployed and utilized, documentation improves along with efficiency. Alternatively, many providers are now employing scribes to accompany them in the exam room and complete the medical record. Taking this step leads to more descriptive notes, better productivity, and happier providers. The use of scribes also seems to result in happier patients, who report better therapeutic interactions when their doctors aren’t typing or staring at a computer screen.
The above-mentioned methods for recording information about a patient during a visit may be too expensive or complicated for some providers, but there are other simple techniques that can be used without incurring additional cost or resources. Previsit planning is one such possibility. By reviewing patient charts in advance of appointments, physicians can look over results, identify preventive health gaps, and anticipate follow-up needs and medication refills. They can then create skeleton notes and prepopulate orders to reduce the documentation burden during the visit. While previsit planning is time consuming at first, physicians report that it actually saves time in the long run and allows them to focus on recording the patient narrative during the visit.
Another strategy is even simpler in concept, though it may seem counterintuitive at first: get better acquainted with the electronic records system. That is, take the time to really learn and understand the tools designed to improve productivity that are available in your EHR, then use them judiciously; take advantage of templates and macros when they’ll make you more efficient yet won’t inhibit your ability to tell the patient’s story; embrace optimization but don’t compromise on narrative. By carefully choosing your words, you’ll paint a clearer picture of every patient and enable safer and more personalized care.
Reference
1. “For Election Day Influence, Twitter Ruled Social Media” New York Times. Nov. 8, 2016.
Who is in charge here?
My first patient of the afternoon was a simple hypertension follow-up, or so I thought as I walked into the room. She was a healthy 50-year-old woman with no medical problems other than her blood pressure, which was measured at 130/76 in the office. Her heart and lungs were normal, she had no chest pain or shortness of breath, and she was taking her medications without any problem. All simple enough. I complimented her on how she was doing and told her to continue her medications and return in 6 months.
She put up her hand and said, “Wait a minute.”
Then she pulled out her smartphone, tapped open an app, and handed it to me so I could look at a graph of her home blood pressures. The graph had all of her readings from the last 4 months, taken 2-3 times a day. The app had automatically labeled each blood pressure green, yellow, or red to indicate whether it was normal, higher than normal, or elevated, respectively.
Of course, the app creators had determined that a “green” (normal) systolic pressure was less than 120 mm Hg. Values above that were yellow (higher than normal) until a systolic pressure of 130, at which point they became red (elevated). This is consistent with the most recent American Heart Association guidelines, but those guidelines have been the subject of considerable controversy. There are many, myself included, who believe that the correct systolic pressure to define hypertension should be 140 for many patients, rather than 130. The app disagrees, and patients using it see the app’s definition of hypertension every time they enter a blood pressure. In the case of my patient, since normal was indicated only by a systolic of less than 120 (a relatively rare event), I had to explain the difference between normal blood pressure and her blood pressure goal, and why the two were not the same.
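The app’s color coding amounts to a fixed threshold rule, which can be set side by side with a clinician-chosen goal to make the mismatch explicit. A minimal sketch, using the thresholds described above and a 140 mm Hg goal reflecting the author’s stated preference (the function names are illustrative, not from any real app):

```python
def app_color(systolic: int) -> str:
    """Color coding as described for the app:
    <120 green (normal), 120-129 yellow, >=130 red."""
    if systolic < 120:
        return "green"
    if systolic < 130:
        return "yellow"
    return "red"

def at_goal(systolic: int, goal: int = 140) -> bool:
    """Clinician's goal check; the default of 140 mm Hg reflects the
    author's preferred threshold, not the app's."""
    return systolic < goal

# The patient's office reading of 130 is "red" in the app,
# yet at goal for a clinician using a 140 mm Hg threshold.
```

The conversation in the exam room is, in effect, reconciling these two functions: the app hard-codes one definition, while the clinician applies another.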
Later that afternoon I was seeing a 60-year-old male who had electrical cardioversion of his atrial fibrillation 2 weeks prior to the visit. He had been sent home, as is usually the case, on an antiarrhythmic and an oral anticoagulant. He was feeling fine and had not noticed any palpitations, chest discomfort, or shortness of breath. I listened to his heart and lungs, which sounded normal, and I told him it sounded like he was doing well. Then he said, “I have an Apple Watch.” I had a feeling I knew what was coming next.
He handed me his iPhone and asked me if I could review his rhythm strips. For those unacquainted with the new Apple Watch, all he had to do to obtain those strips was open an EKG app and touch the crown of his watch with a finger from his other hand. This essentially made an electrical connection from his left to right arm, allowing the watch to generate a single-lead EKG tracing. The device then provides a computer-generated rhythm strip and sends that image and an interpretation of it to an iPhone, which is connected to the watch via Bluetooth. These results can then be shared or printed out as a PDF document.
The patient wanted to know if the smartphone’s interpretation of those rhythm strips was correct, and if he was really having frequent asymptomatic recurrence of his atrial fibrillation. Unsurprising to me or anyone who has used one of these (or other) phone-based EKG devices, the watch-generated rhythm strips looked clean and clear and the interpretation was spot on. It correctly identified his frequent asymptomatic episodes of atrial fibrillation. This was important information, which markedly affected his medical care.
These two very different examples are early indications that the way that we will be collecting information will rapidly and radically change over the next few years. It has always been clear that making long-term decisions about the treatment of hypertension based on a single reading in the office setting is not optimal. It has been equally clear that a single office EKG provides a limited snapshot into the frequency of intermittent atrial fibrillation. Deciding how to treat patients has never been easy and many decisions are plagued with ambiguity. Having limited information is a blessing and a curse; it’s quick and easy to review a small amount of data, but there is a nagging recognition that those data are only a distant representation of a patient’s real health outside of the office.
As we move forward we will increasingly have the ability to see a patient’s physiologic parameters where and when those values are most important: during the countless hours when they are not in our offices. The new American Heart Association hypertension guideline, issued in late 2017, has placed increased emphasis on ambulatory blood pressure monitoring. Determining how to use all this new information will be a challenge. It will take time to become comfortable with interpreting and making sense of an incredible number of data points. For example, if a patient checks his blood pressure twice a day for 3 months, his efforts will generate 180 separate blood pressure readings! You can bet there is going to be a good deal of inconsistency in those readings, making interpretation challenging. There will also probably be a few high readings, such as the occasional 190/110, which are likely to cause concern and anxiety in patients. There is little question that the availability of such detailed information holds the potential to allow us to make better decisions. The challenge will be in deciding how to use it to actually improve – not just complicate – patient care.
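One plausible way to tame 180 home readings is to summarize them rather than react to single values – report the mean and the proportion of elevated readings, and surface the outliers separately. A sketch of that idea (the 135 mm Hg home-monitoring cutoff here is illustrative, not a guideline value):

```python
from statistics import mean

def summarize_home_bp(systolics: list[int], cutoff: int = 135) -> dict:
    """Summarize home systolic readings instead of reacting to single spikes.

    The 135 mm Hg cutoff is illustrative, not a guideline value.
    """
    elevated = [s for s in systolics if s >= cutoff]
    return {
        "n": len(systolics),
        "mean": round(mean(systolics), 1),
        "pct_elevated": round(100 * len(elevated) / len(systolics), 1),
        "max": max(systolics),
    }

# A short, hypothetical run of home readings including one alarming spike:
readings = [128, 132, 124, 190, 130, 126, 134, 129]
summary = summarize_home_bp(readings)
# A single 190 barely moves the mean but still surfaces in "max",
# which is exactly the distinction a clinician needs to discuss.
```

The point is not the specific statistics but the workflow: deciding in advance how a flood of ambulatory data will be condensed before it reaches the exam room.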
What are your thoughts on this? Feel free to email us at info@ehrpc.com.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter (@doctornotte).
My first patient of the afternoon was a simple hypertension follow-up, or so I thought as I was walking into the room. She was a healthy 50-year-old woman with no medical problems other than her blood pressure, which was measured at 130/76 in the office. Her heart and lungs were normal, she had no chest pain or shortness of breath, and she was taking her medications without any problem. All simple enough. I complimented her on how she was doing, told her to continue her medications, and return in 6 months.
She put up her hand and said, “Wait a minute.”
Then she pulled out her smartphone. She tapped open an app, and handed it to me so I could look at a graph of her home blood pressures. The graph had all of her readings from the last 4 months, taken 2-3 times a day. It had automatically labeled each blood pressure in green, yellow, or red to indicate whether they were normal, higher than normal, or elevated, respectively.
Of course, the app creators had determined that a ‘green’ (normal) systolic pressure was less than 120 mm Hg. Values above that were yellow (higher than normal), until a systolic pressure of 130, at which point they became red (elevated). This is consistent with the most recent American Heart Association guidelines, but these guidelines have been the subject of a lot of controversy. There are many, including myself, who believe that the correct systolic pressure to define hypertension should be 140 for many patients, rather than 130. The app disagrees, and patients using the app see the app’s definition of hypertension every time they enter a blood pressure. In the case of my patient, since normal was indicated only by a systolic of less than 120 (which is a relatively rare event), I had to explain the difference between normal blood pressure and her blood pressure goal, and why the two were not the same.
Later that afternoon I was seeing a 60-year-old male who had electrical cardioversion of his atrial fibrillation 2 weeks prior to the visit. He had been sent home, as is usually the case, on an antiarrhythmic and an oral anticoagulant. He was feeling fine and had not noticed any palpitations, chest discomfort, or shortness of breath. I listened to his heart and lungs, which sounded normal, and I told him it sounded like he was doing well. Then he said, “I have an Apple watch.” I had a feeling I knew what was coming next.
He handed me his iPhone and asked me if I could review his rhythm strips. For those unacquainted with the new Apple Watch, all he had to do to obtain those strips was open an EKG app and touch the crown of his watch with a finger from his other hand. This essentially made an electrical connection from his left arm to his right, allowing the watch to generate a single-lead EKG tracing. The device then provides a computer-generated rhythm strip and sends that image, along with an interpretation, to an iPhone connected to the watch via Bluetooth. The results can then be shared or printed as a PDF document.
The patient wanted to know whether the watch’s interpretation of those rhythm strips was correct, and whether he was really having frequent asymptomatic recurrences of his atrial fibrillation. To no surprise of mine, or of anyone who has used one of these (or other) phone-based EKG devices, the watch-generated rhythm strips looked clean and clear, and the interpretation was spot on: It correctly identified his frequent asymptomatic episodes of atrial fibrillation. This was important information, and it markedly affected his medical care.
These two very different examples are early indications that the way we collect information will change rapidly and radically over the next few years. It has always been clear that making long-term decisions about the treatment of hypertension based on a single reading in the office is not optimal. It has been equally clear that a single office EKG provides only a limited snapshot of the frequency of intermittent atrial fibrillation. Deciding how to treat patients has never been easy, and many decisions are plagued with ambiguity. Having limited information is a blessing and a curse; it is quick and easy to review a small amount of data, but there is a nagging recognition that those data are only a distant representation of a patient’s real health outside the office.
As we move forward we will increasingly have the ability to see a patient’s physiologic parameters where and when those values are most important: during the countless hours when they are not in our offices. The new American Heart Association hypertension guideline, issued in late 2017, has placed increased emphasis on ambulatory blood pressure monitoring. Determining how to use all this new information will be a challenge. It will take time to become comfortable with interpreting and making sense of an incredible number of data points. For example, if a patient checks his blood pressure twice a day for 3 months, his efforts will generate 180 separate blood pressure readings! You can bet there is going to be a good deal of inconsistency in those readings, making interpretation challenging. There will also probably be a few high readings, such as the occasional 190/110, which are likely to cause concern and anxiety in patients. There is little question that the availability of such detailed information holds the potential to allow us to make better decisions. The challenge will be in deciding how to use it to actually improve – not just complicate – patient care.
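Taming 180 readings is less about eyeballing each one than about summarizing them. As a sketch of what that might look like, the snippet below generates a hypothetical (simulated, not real-patient) 3-month home log, then reduces it to a mean, a measure of spread, and a count of alarming outliers:

```python
import random
import statistics

# Hypothetical home log: twice-daily systolic readings for ~3 months.
# The values are simulated for illustration, not drawn from any patient.
random.seed(0)
readings = [round(random.gauss(134, 12)) for _ in range(180)]
readings[47] = 190  # the occasional alarming outlier mentioned above

mean_sbp = statistics.mean(readings)
spread = statistics.stdev(readings)
high_flags = [r for r in readings if r >= 180]

print(f"n={len(readings)}, mean={mean_sbp:.0f} mm Hg, "
      f"sd={spread:.0f}, readings >=180: {len(high_flags)}")
```

A summary like this turns a wall of numbers into three clinically digestible facts: where the pressure usually sits, how much it bounces around, and how often it spikes.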
What are your thoughts on this? Feel free to email us at info@ehrpc.com.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on twitter (@doctornotte).
Breaking down blockchain: How this novel technology will unfetter health care
One evening in 2016, my 9-year-old son suggested we use Bitcoin to purchase something on the Microsoft Xbox store. Surprised by his suggestion, I was suddenly struck with two thoughts: 1) Microsoft, by accepting Bitcoin, was validating cryptocurrency as a credible form of payment, and 2) I was getting old. My 9-year-old seemed to have a better understanding of a new technology than I did, hardly the first time – or the last time – that happened. In spite of my initial feelings of defeat, I resolved not to cede victory to my son without a fight. I immediately set out to understand cryptocurrencies and, more importantly, the technology underpinning them known as blockchain.
Even just a few years ago, my ignorance of how blockchains work may have been acceptable, but it hardly seems acceptable now. Much more than just cryptocurrency, blockchain technology is beginning to affect every industry that values information sharing and security, and it is about to usher in a revolution in health care. But what are blockchains, and why are they so important?
Explaining blockchains
Blockchains were first conceptualized almost 3 decades ago, but the first blockchain as we know it today was created in 2008 by Satoshi Nakamoto, the inventor of Bitcoin. Blockchains can be thought of as a way to store and communicate information while ensuring its integrity and security. Admittedly, the technology can be a bit confusing, but we’ll attempt to simplify it by focusing on a few fundamental elements.
As the name indicates, the blockchain model relies on a chain of connected blocks. Each block contains some data (which can be financial, medical, legal, or anything else) and bears a unique fingerprint known as a “hash.” Each hash is different and depends entirely on the data stored in the block. In other words, if the contents of the block change, the hash changes, creating an entirely new fingerprint. Each block on the chain also keeps a record of the hash of the previous block. This “links” the chain together, and is the first key to its robust security: If any block is tampered with, its fingerprint will change and it will no longer be linked, thus invalidating all following blocks on the chain.
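The data-plus-fingerprint structure described above can be made concrete with a toy sketch. This is a deliberately minimal illustration of the linking idea, not how Bitcoin or any production blockchain is actually implemented (real chains add proof-of-work, Merkle trees, and much more); the function and field names are ours:

```python
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    """A block's 'fingerprint' depends on its data and the previous hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # the first block links to a fixed value
    for data in records:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain) -> bool:
    """Recompute every fingerprint; tampering breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["visit: BP 130/76", "rx: lisinopril 10 mg", "lab: LDL 95"])
assert is_valid(chain)
chain[1]["data"] = "rx: lisinopril 40 mg"  # tamper with the middle block
assert not is_valid(chain)  # its fingerprint changed, so the chain breaks
```

The last two lines are the whole point: altering a single record silently changes its hash, and every block after it no longer links, so the tampering is immediately detectable.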
Ensuring the integrity of the blockchain doesn’t stop there. Just as actual fingerprints can be spoofed by enterprising criminals, hash technology isn’t enough to provide complete security. Thus, several other security features are built into blockchains, with the most noteworthy and important being “decentralization.” This means that blockchains are not stored on any single computer. On the contrary, duplicate copies of every blockchain exist on thousands of computers around the world, creating redundancy and minimizing the vulnerability that any single chain could be tampered with. Before any change in the blockchain can be made and accepted, it must be validated by a majority of the computers storing the chain.
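The majority-validation idea can also be sketched in a few lines. This simplification has each node report only the fingerprint of the last block in its copy of the chain; a real network compares full chains (and, in Bitcoin’s case, accumulated proof-of-work), and the placeholder hash strings here are purely illustrative:

```python
from collections import Counter

def majority_view(reported_tips):
    """Accept whichever version of the chain most nodes agree on.

    Each entry is the fingerprint of the final block that one node
    reports. Returns None if no version has a strict majority.
    """
    tip, votes = Counter(reported_tips).most_common(1)[0]
    return tip if votes > len(reported_tips) / 2 else None

# Five nodes store copies of the chain; one copy has been tampered with.
honest_tip = "a3f..."    # placeholder fingerprint shared by honest nodes
tampered_tip = "9c1..."  # placeholder fingerprint of the altered copy
print(majority_view([honest_tip] * 4 + [tampered_tip]))
```

Because the tampered copy is outvoted four to one, the network simply ignores it; an attacker would have to alter a majority of the thousands of copies simultaneously, which is what makes decentralization so powerful.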
If this all seems perplexing, that’s because it is. Blockchains are complex and difficult to visualize. (If you’d like a deeper understanding, there are many YouTube videos that do a great job of explaining them.) For now, just remember this: Blockchains are very secure yet highly accessible, and they will be essential to how data – especially health data – are stored and communicated in the future.
Blockchains in health care
On Jan. 24, 2019, five major companies (Aetna, Anthem, Health Care Service Corporation, IBM, and PNC Bank) “announced a new collaboration to design and create a network using blockchain technology to improve transparency and interoperability in the health care industry.”1 This team of industry leaders is hoping to build the engine that will power the future and shape how health records are created, maintained, and communicated. They’ll achieve this by taking advantage of blockchain’s inclusiveness and decentralization, storing records in a manner that is safe and accessible anywhere a patient seeks care. Because of the redundancy built into blockchains, they can also ensure data integrity. Physicians will benefit from information that is easy to obtain and always accurate; patients will benefit by gaining greater access to, and ownership of, their personal medical records.
The collaboration mentioned above is the latest, but certainly not the first, attempt to exploit the benefits of blockchain for health care. Other major players have already entered the game, and the field is growing quickly. While it’s easy to find their efforts admirable, corporate involvement also means there is money to be saved or made in the space. Chris Ward, head of product for PNC Treasury Management, alluded to this as he commented publicly in the press release: “This collaboration will enable health care–related data and business transactions to occur in [a] way that addresses market demands for transparency and security, while making it easier for the patient, payer, and provider to handle payments. Using this technology, we can remove friction, duplication, and administrative costs that continue to plague the industry.”
Industry executives recognize that interoperability is still the greatest challenge facing the future of health care and are particularly sensitive to the costs of not facing the challenge successfully. Clearly, they see an investment in blockchains as an opportunity to be part of a financially beneficial solution.
Why we should care
As we’ve now covered, there are many advantages of blockchain technology. In fact, we see it as the natural evolution of the patient-centered EHR. Instead of siloed and proprietary information spread across disparate EHRs that can’t communicate, the future of data exchange will be more transparent, yet more secure. Blockchain represents a unique opportunity to democratize the availability of health care information while increasing information quality and lowering costs. It is also shaping up to be the way we’ll exchange sensitive data in the future.
Don’t believe us? Just ask any 9-year-old.
Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter, @doctornotte. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
Reference
1. https://newsroom.ibm.com/2019-01-24-Aetna-Anthem-Health-Care-Service-Corporation-PNC-Bank-and-IBM-announce-collaboration-to-establish-blockchain-based-ecosystem-for-the-healthcare-industry
American Heart Association guideline on the management of blood cholesterol
The purpose of this guideline is to provide direction for the management of patients with high blood cholesterol to decrease the incidence of atherosclerotic vascular disease. The update was undertaken because new evidence has emerged since the publication of the 2013 ACC/AHA cholesterol guideline about additional cholesterol-lowering agents including ezetimibe and PCSK9 inhibitors.
Measurement and therapeutic modalities
In adults aged 20 years and older who are not on lipid-lowering therapy, measurement of a lipid profile is recommended as an effective way to estimate atherosclerotic cardiovascular disease (ASCVD) risk and to document baseline LDL-C.
Statin therapy is divided into three categories: High-intensity statin therapy aims to lower LDL-C levels by 50% or more, moderate-intensity therapy by 30%-49%, and low-intensity therapy by less than 30%.
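The three intensity categories form a simple mapping from expected LDL-C reduction to category. The sketch below is a schematic restatement of that mapping for clarity, not a dosing or clinical decision tool, and the function name is ours:

```python
def statin_intensity(expected_ldl_reduction_pct: float) -> str:
    """Map an expected LDL-C reduction to the guideline's intensity category.

    Cut points per the guideline summary above: >=50% high, 30%-49%
    moderate, <30% low. Schematic only; not a clinical tool.
    """
    if expected_ldl_reduction_pct >= 50:
        return "high-intensity"
    elif expected_ldl_reduction_pct >= 30:
        return "moderate-intensity"
    return "low-intensity"

print(statin_intensity(55))  # high-intensity
```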
Cholesterol management groups
Individuals of all ages should be advised to follow a heart-healthy lifestyle, meaning appropriate diet and exercise, to decrease the risk of developing ASCVD.
Individuals fall into groups with distinct risk of ASCVD or recurrence of ASCVD and the recommendations are organized according to these risk groups.
Secondary ASCVD prevention: Patients who already have ASCVD by virtue of having had an event or established diagnosis (MI, angina, cerebrovascular accident, or peripheral vascular disease) fall into the secondary prevention category:
- Patients aged 75 years and younger with clinical ASCVD: High-intensity statin therapy should be initiated with the aim of reducing LDL-C levels by 50% or more. In patients who experience statin-related side effects, a moderate-intensity statin should be initiated with the aim of reducing LDL-C by 30%-49%.
- In very high-risk patients with an LDL-C above 70 mg/dL on maximally tolerated statin therapy, it is reasonable to consider the use of a nonstatin cholesterol-lowering agent with an LDL-C goal of under 70 mg/dL. Ezetimibe (Zetia) can be used initially and, if LDL-C remains above 70 mg/dL, consideration can then be given to adding a PCSK9 inhibitor (strength of recommendation: ezetimibe – moderate; PCSK9 inhibitor – strong). The guideline notes that, even though the evidence supports the efficacy of PCSK9 inhibitors in reducing the incidence of ASCVD events, the expense of PCSK9 inhibitors gives them low value in relation to their cost.
- For patients older than 75 years with established ASCVD, it is reasonable to continue high-intensity statin therapy if the patient is tolerating treatment.
Severe hypercholesterolemia:
- Patients with LDL-C above 190 mg/dL do not need a 10-year risk score calculated. These individuals should receive maximally tolerated statin therapy.
- If the patient is unable to achieve a 50% reduction in LDL-C, or has an LDL-C level of 100 mg/dL or higher, the addition of ezetimibe therapy is reasonable.
- If LDL-C is still greater than 100 mg/dL on a statin plus ezetimibe, the addition of a PCSK9 inhibitor may be considered. It should be recognized that the addition of a PCSK9 inhibitor in this circumstance is classified as a weak recommendation.
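For readers who find it helpful, the stepwise escalation in the bullets above can be summarized as simple decision logic. This is only an illustrative sketch of the sequence described in the text (the function and its names are invented here); it is a reading aid, not clinical decision software.

```python
# Illustrative sketch of the stepwise escalation described above for
# severe hypercholesterolemia (LDL-C above 190 mg/dL). The function
# and its names are invented; this is not clinical software.

def next_step(regimen, ldl_reduction_pct, ldl_mg_dl):
    """Suggest the next escalation given the current regimen (a set of
    drug names), the percent LDL-C reduction achieved, and the current
    LDL-C level in mg/dL."""
    if "statin" not in regimen:
        return "start maximally tolerated statin"
    if "ezetimibe" not in regimen:
        # reasonable if <50% reduction and/or LDL-C of 100 mg/dL or higher
        if ldl_reduction_pct < 50 or ldl_mg_dl >= 100:
            return "add ezetimibe (reasonable)"
        return "continue statin"
    if ldl_mg_dl > 100:
        # classified as a weak recommendation in the guideline
        return "consider adding a PCSK9 inhibitor (weak recommendation)"
    return "continue current therapy"
```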
Diabetes mellitus in adults:
- Patients aged 40-75 years with diabetes should be prescribed a moderate-intensity statin, regardless of 10-year ASCVD risk (strong recommendation).
- In adults with diabetes mellitus and multiple ASCVD risk factors, it is reasonable to prescribe high-intensity statin therapy with the goal of reducing LDL-C by more than 50%.
- In adults with diabetes mellitus and 10-year ASCVD risk of 20% or higher, it may be reasonable to add ezetimibe to maximally tolerated statin therapy to reduce LDL-C levels by 50% or more.
- In patients aged 20-39 years with diabetes that is either of long duration (at least 10 years, type 2 diabetes mellitus; at least 20 years, type 1 diabetes mellitus), or with end-organ damage including albuminuria, chronic renal insufficiency, retinopathy, neuropathy, or ankle-brachial index below 0.9, it may be reasonable to initiate statin therapy (weak recommendation).
Primary prevention in adults: In adults with an LDL-C of 70-189 mg/dL, the 10-year risk of a first ASCVD event (fatal and nonfatal MI or stroke) should be estimated using the pooled cohort equation. Adults should be categorized according to calculated risk of developing ASCVD: low risk (less than 5%), borderline risk (5% to less than 7.5%), intermediate risk (7.5% to less than 20%), and high risk (20% and higher) (strong recommendation).
- An individualized discussion of risk and treatment options should take place between the clinician and the patient.
- Adults in the intermediate-risk group (7.5% to less than 20%) should be placed on a moderate-intensity statin with a goal LDL-C reduction of more than 30%; for optimal risk reduction, especially in high-risk patients, aim for an LDL-C reduction of more than 50% (strong recommendation).
- Risk-enhancing factors can favor initiation or intensification of statin therapy.
- If a decision about statin therapy is uncertain, consider measuring a coronary artery calcium (CAC) score. If the CAC score is zero, statin therapy may be withheld or delayed, except in patients with diabetes as above, smokers, and those with a strong family history of premature ASCVD. If the CAC score is 1-99, it is reasonable to initiate statin therapy for patients older than age 55 years; if the CAC score is 100 or higher, or in the 75th percentile or higher, it is reasonable to initiate a statin.
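The risk categories used for primary prevention reduce to simple threshold checks. As a reading aid only (the function is invented here; the thresholds are those quoted in the text), the categorization can be sketched as:

```python
# Map a pooled-cohort 10-year ASCVD risk estimate (in percent) to the
# guideline's named category, using the thresholds quoted in the text.
# Illustrative sketch only, not a clinical tool.

def risk_category(ten_year_risk_pct):
    if ten_year_risk_pct < 5:
        return "low"
    if ten_year_risk_pct < 7.5:
        return "borderline"
    if ten_year_risk_pct < 20:
        return "intermediate"
    return "high"
```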
Statin safety: Prior to initiation of a statin, a clinician-patient discussion is recommended detailing ASCVD risk reduction and the potential for side effects and drug interactions. In patients with statin-associated muscle symptoms (SAMS), a detailed assessment for secondary causes is recommended. In patients with true SAMS, it is recommended to check a creatine kinase level and a hepatic function panel; routine measurements, however, are not useful. In patients with statin-associated side effects that are not severe, reassess and rechallenge the patient to achieve maximal lowering of LDL-C with a modified dosing regimen.
The bottom line
Lifestyle modification is important at all ages, with specific population-guided strategies for lowering cholesterol in the subgroups discussed above. Major changes in the AHA/ACC cholesterol guideline include new nonstatin agents for lowering cholesterol and the use of CAC scoring to aid risk prediction.
Reference
Grundy SM et al. 2018 AHA/ACC/AACVPR/AAPA/ABC/ACPM/ADA/AGS/APhA/ASPC/NLA/PCNA Guideline on the management of blood cholesterol: Executive Summary: A report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation. 2018 Nov 10.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Palko is a second-year resident in the family medicine residency program at Abington Jefferson Hospital.
Martin Buber, deep learning, and the still soft voice beyond the screen
Life is short, art long, opportunity fleeting. – Hippocrates
The new year provides an opportunity to reflect on old things: to decide what to keep and what to toss out, to contemplate the habits to which we choose to rededicate ourselves, and those we choose to let wane. Over the last few years, while some older physicians have expressed a yearning for the comfort of paper charts, most of us have come to embrace the benefits of the electronic health record. That is a good thing. The EHR offers many advantages over paper, and, like it or not, it’s here to stay.
In 1923, the German philosopher Martin Buber published the book for which he is best known, “I and Thou.” In that book Buber says that there are two ways we can approach relationships: “I-Thou” or “I-It.” In I-It relationships, we view the other person as an “it” to be used to accomplish a purpose or to be experienced without his or her full involvement. In an I-Thou relationship, we appreciate the other person for all their complexity, in their full humanness. We acknowledge and approach the person as a unique individual who has dreams, goals, fears, and wishes that may be different than ours but to which we can still relate.
While the importance and benefits of the electronic record are clear, we must constantly remind ourselves that the EHR is a tool of care and not the goal of care. While the people we see have health needs that must be diagnosed, treated, and recorded, and their illnesses are an important part of their being, they do not define their being. Nor should they define our relationship with them. Patients agree; when surveyed about the attributes of a good physician, they regularly respond that they want their physicians to have a sense of them as people, not just patients.
Recently, I was reminded of the challenge of keeping this simple task in the forefront of care while on hospital service. I had occasion to sit and talk with one of my patients without a computer in the room. This was unusual for me, as I typically fill out the EHR as I am seeing the patient. As I listened to the individual in his gown, lying on his hospital bed and describing the symptoms that brought him to the hospital, I was reminded of the subtle pauses and nuances that occur during focused conversations, during deep listening.
We have written in previous columns about exciting applications of technology that are in the pipeline. Artificial intelligence with “deep learning” is predicted to change the way we diagnose and treat disease. Deep learning is a term that has been used to describe a type of machine analysis where data are interpreted and analyzed in layers, allowing the computer to detect patterns. In the first layer of learning, the computer may identify the way pixels of the same color form a line or a curve. In the next layer it might detect the way that curve resembles a face. Peeling away layer after layer, the computer might eventually recognize whose face is being represented. This is the type of programming that has allowed computers to interpret mammograms and retina scans, detecting patterns that represent cancer or small retinal hemorrhages. While deep learning will be the subject of much excitement over the next few years, at the start of this new year we think it is equally important to be reminded of an essential quality of the excellent physician – deep listening.
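As a toy illustration of the layering described above (the stages and data here are entirely invented; real deep-learning systems learn their layers from data rather than having them written by hand):

```python
# Each stage consumes the previous stage's output and detects a
# slightly higher-level pattern, mimicking the layered analysis
# described in the text. Entirely illustrative.

def detect_curves(pixels):
    # layer 1: runs of same-colored pixels become "curve" features
    return ["curve" if p == "dark" else "flat" for p in pixels]

def detect_face(features):
    # layer 2: enough curves begin to resemble a face outline
    return "face-like" if features.count("curve") >= 3 else "no face"

def recognize(pixels):
    # peel away layer after layer
    return detect_face(detect_curves(pixels))
```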
Deep listening requires a lifetime of practice. We have all experienced it, both as listeners and as those being listened to. When we are in the presence of someone who is truly interested in what we are saying – in our story and in our life – we feel reaffirmed and refreshed. Regardless of the topic of our discussion, we feel a sense of trust, for we believe that the person with whom we are speaking understands us, and, in that understanding, cares about us. We have a sense that we could trust the listener with our lives.
A lifetime of practice – that is the promise of our jobs as physicians. Every time we enter the exam room we have the opportunity to carry out the sacred skill of hearing others, while trying in some way to improve their lives. With each visit we have the opportunity to perfect our craft. Chaucer, the medieval English poet, observed, “the life so short, the craft so long to learn.” It seems he borrowed that idea from a physician, Hippocrates.
Hippocrates opened his medical text with the words, “Vita brevis, ars longa, occasio praeceps,” which means, “Life is short, the art long, opportunity fleeting.” Hippocrates recognized the challenge involved in learning all that is necessary to take care of our fellow man. This challenge has only become more difficult as the quantity of information required to practice competent medicine has increased. In addition, we now need to record data into the EHR to be used for record keeping, billing, and the further advancement of knowledge. Hippocrates’ medical text continued, “The physician must not only be prepared to do what is right himself, but also to make the patient, the attendants, and externals cooperate.”
On the occasion of this New Year, it is a perfect time to reflect and rededicate ourselves to listening to our patients, to being interested in them and their stories. We just may find that in deep listening, and in the trust that comes from that singular focus, lie solutions to many of the largest problems we face in medicine today: burnout, poor adherence, and regaining the moral authority that comes with truly caring for those in need.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington Jefferson Health. Follow him on twitter (@doctornotte).
Life is short, art long, opportunity fleeting. – Hippocrates
The new year provides an opportunity to reflect on old things: to decide what to keep and what to toss out, to contemplate the habits to which we choose to rededicate ourselves, and those we choose to let wane. Over the last few years, while some older physicians have expressed a yearning for the comfort of paper charts, most of us have come to embrace the benefits of the electronic health record. That is a good thing. The EHR offers many advantages over paper, and, like it or not, it’s here to stay.
In 1923, the German philosopher Martin Buber published the book for which he is best known, “I and Thou.” In that book Buber says that there are two ways we can approach relationships: “I-Thou” or “I-It.” In I-It relationships, we view the other person as an “it” to be used to accomplish a purpose or to be experienced without his or her full involvement. In an I-Thou relationship, we appreciate the other person for all their complexity, in their full humanness. We acknowledge and approach the person as a unique individual who has dreams, goals, fears, and wishes that may be different than ours but to which we can still relate.
While the importance and benefits of the electronic record are clear, we must constantly remind ourselves that the EHR is a tool of care and not the goal of care. While the people we see have health needs that must be diagnosed, treated, and recorded, and their illnesses are an important part of their being, they do not define their being. Nor should they define our relationship with them. Patients agree; when surveyed about the attributes of a good physician, they regularly respond that they want their physicians to have a sense of them as people, not just patients.
Recently, I was reminded of the challenge of keeping this simple task in the forefront of care while on hospital service. I had occasion to sit and talk with one of my patients without a computer in the room. This was unusual for me, as I typically fill out the EHR as I am seeing the patient. As I listened to the individual in his gown, lying on his hospital bed and describing the symptoms that brought him to the hospital, I was reminded of the subtle pauses and nuances that occur during focused conversations, during deep listening.
We have written in previous columns about exciting applications of technology that are in the pipeline. Artificial intelligence with “deep learning” is predicted to change the way we diagnose and treat disease. Deep learning is a term that has been used to describe a type of machine analysis in which data are interpreted and analyzed in layers, allowing the computer to detect patterns. In the first layer of learning, the computer may identify the way pixels of the same color form a line or a curve. In the next layer it might detect the way that curve resembles a face. Peeling away layer after layer, the computer might eventually recognize whose face is being represented. This is the type of programming that has allowed computers to interpret mammograms and retinal scans, detecting patterns that represent cancer or small retinal hemorrhages. While deep learning will be the subject of much excitement over the next few years, at the start of this new year we think it is equally important to be reminded of an essential quality of the excellent physician – deep listening.
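For readers curious about what that layer-by-layer analysis looks like in practice, the idea can be sketched in a few lines of toy code. This is a deliberate simplification, not an actual deep-learning model: real systems learn their layers from data, while here each “layer” is written by hand. The function names and the tiny 3x4 “image” are invented for illustration only.

```python
# Toy illustration of layered ("deep") feature detection.
# Each layer builds only on the previous layer's output,
# mirroring the pixels -> edges -> shapes progression described above.

def detect_edges(image):
    """Layer 1: mark spots where brightness changes sharply between
    neighboring pixels in a row (a crude 'edge' detector)."""
    edges = []
    for row in image:
        edges.append([abs(a - b) > 0.5 for a, b in zip(row, row[1:])])
    return edges

def detect_vertical_lines(edges):
    """Layer 2: if every row shows an edge at the same position,
    those edges together suggest a vertical line."""
    return [all(column) for column in zip(*edges)]

# A tiny 3x4 'image': dark (0.0) on the left, bright (1.0) on the right.
image = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
]

edges = detect_edges(image)           # layer 1 output: per-pixel edges
lines = detect_vertical_lines(edges)  # layer 2 output: whole-image pattern
print(lines)  # prints [False, True, False]: a line between columns 1 and 2
```

Notice that the second layer never looks at the raw pixels at all; it reasons only about the first layer's edge map. Stacking many such layers, each trained rather than hand-written, is what lets real systems go from pixels all the way to “this mammogram contains a suspicious mass.”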
Deep listening requires a lifetime of practice. We have all experienced it, both as listeners and as those being listened to. When we are in the presence of someone who is truly interested in what we are saying – in our story and in our life – we feel reaffirmed and refreshed. Regardless of the topic of our discussion, we feel a sense of trust, for we believe that the person with whom we are speaking understands us, and, in that understanding, cares about us. We have a sense that we could trust the listener with our lives.
A lifetime of practice – that is the promise of our jobs as physicians. Every time we enter the exam room we have the opportunity to carry out the sacred skill of hearing others, while trying in some way to improve their lives. With each visit we have the opportunity to perfect our craft. Chaucer, the medieval English poet, observed, “the life so short, the craft so long to learn.” It seems he borrowed that idea from a physician, Hippocrates.
Hippocrates opened his “Aphorisms” with words later rendered in Latin as “Vita brevis, ars longa, occasio praeceps,” which means, “Life is short, the art long, opportunity fleeting.” Hippocrates recognized the challenge involved in learning all that is necessary to take care of our fellow man. This challenge has only become more difficult as the quantity of information required to practice competent medicine has increased. In addition, we now need to record data in the EHR to be used for record keeping, billing, and the further advancement of knowledge. Hippocrates’ text continued, “The physician must not only be prepared to do what is right himself, but also to make the patient, the attendants, and externals cooperate.”
The occasion of this New Year is a perfect time to reflect and rededicate ourselves to listening to our patients, to being interested in them and their stories. We just may find that in deep listening, and in the trust that comes from that singular focus, lie solutions to many of the largest problems we face in medicine today: burnout, poor adherence, and regaining the moral authority that comes with truly caring for those in need.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington Jefferson Health. Follow him on Twitter (@doctornotte).