Where to go with wearables
On Sept. 14 of this year, Apple executives took to the stage to tout the incredible benefits of their new Apple Watch Series 4. While impressively presented in typical Apple fashion, the watch appeared to be only an evolution – not a revolution – in wearable technology. Still, there were a few noteworthy aspects of the new model that seemed to shine a light on the direction of the industry as a whole, and these were all focused on health care.
Like products from Fitbit, Garmin, and others, the new Apple Watch can monitor a user’s heart rate and notify the wearer if it goes too high or too low. In addition, the watch now includes “fall detection” and can automatically call for help if its wearer has taken a spill and become unresponsive. Soon it will even be capable of recording a single-lead ECG and detecting atrial fibrillation. While this all sounds fantastic, it also raises an important question in the minds of many physicians (including us): What do we do with all of these new data?
Findings from a Digital Health Study published by the American Medical Association in 2016¹ reveal that most doctors are aware of growing advances in mobile health (mHealth). Interestingly, however, while 85% see potential advantages in mHealth, fewer than 30% have begun employing it in their practices. This speaks to an adoption divide and highlights the many barriers to overcome before we can bridge it.
First and foremost, providers need confidence in the accuracy of the monitoring equipment, and, thus far, that accuracy has been questionable. Heart rate measurement, for example, is a staple of all currently available fitness wearables, yet it is replete with technological pitfalls. This is because most consumer devices rely on optical sensors to measure heart rate. While inexpensive and noninvasive, these sensors can be thrown off by sweat, movement, and even the patient’s skin condition – so much so that Fitbit is currently embroiled in a class action lawsuit² over the issue, in spite of providing disclaimers that a Fitbit is “not a medical device.” To improve heart-monitoring capability, Apple has changed to a new sensor technology for this latest generation of Apple Watch. Its accuracy has yet to be proven, however, and Apple’s delay in releasing the ECG features until “later this year” suggests there may still be bugs to work out.
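On the software side, vendors typically try to salvage noisy optical readings by filtering out implausible values before anything is displayed. The Python sketch below illustrates the general idea – plausibility bounds plus a median filter – with invented numbers; it is not any manufacturer’s actual algorithm:

```python
def clean_heart_rate(samples, window=5, max_bpm=220, min_bpm=30):
    """Reject physiologically implausible readings, then median-filter
    to suppress isolated motion artifacts. A toy illustration only."""
    # Discard values outside a plausible human range (sensor glitches).
    plausible = [s for s in samples if min_bpm <= s <= max_bpm]
    smoothed = []
    for i in range(len(plausible)):
        # Take a small neighborhood around each sample...
        lo = max(0, i - window // 2)
        hi = min(len(plausible), i + window // 2 + 1)
        neighborhood = sorted(plausible[lo:hi])
        # ...and keep its median, which ignores isolated spikes.
        smoothed.append(neighborhood[len(neighborhood) // 2])
    return smoothed

# A 250-bpm glitch and a 180-bpm motion artifact amid resting readings:
readings = [72, 74, 250, 73, 180, 75, 74]
print(clean_heart_rate(readings))  # -> [73, 74, 74, 74, 75, 75]
```

Even this crude filter removes both artifacts; real devices combine accelerometer data with far more sophisticated signal processing, which is exactly where accuracy claims become hard to verify.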
Another significant concern raised by the onslaught of wearable health data is how to incorporate it into the electronic health record. Physicians care about efficient data integration, and, when asked in the aforementioned AMA study, physicians named this as their No. 1 functional requirement. EHR vendors have made some strides to allow patients to upload monitoring data directly through an online portal, but the large variety of available consumer devices has made standardizing this process difficult. Doctors have also made it clear that they want it to be straightforward to access and use the information provided by patients, and don’t want it to require special training. These are considerable challenges that will require collaboration between EHR vendors and wearable manufacturers to solve.
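One plausible path toward that standardization is HL7’s FHIR specification, which many EHR vendors already support for patient-facing data exchange. As an illustration (the patient identifier and values here are hypothetical, and this is a sketch rather than a complete, validated resource), a single wearable heart-rate sample could be expressed as a FHIR-style Observation:

```python
def wearable_hr_to_fhir(bpm, timestamp, patient_id):
    """Map one wearable heart-rate sample to a FHIR-style Observation
    dictionary. Illustrative sketch, not a full FHIR profile."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",          # LOINC code for heart rate
            "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": timestamp,
        "valueQuantity": {
            "value": bpm,
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min"},
    }

obs = wearable_hr_to_fhir(72, "2018-09-14T09:30:00Z", "example-123")
print(obs["valueQuantity"]["value"], obs["code"]["coding"][0]["display"])
```

If every device vendor emitted data in a shared shape like this, EHRs would need only one import pathway instead of one per manufacturer – which is the heart of the standardization problem.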
The introduction of additional players into the health care space also evokes questions of who owns this new health data set, and who is accountable for its integrity. If history is any indicator, device manufacturers will try their best to eschew any liability, and shift culpability onto patients and physicians. This is causing malpractice insurers to rethink policy coverage and forcing doctors to face a new reality of having “too much information.” While we are excited about the potential for better access to patient monitoring data, we agree that physicians need to understand where their responsibility for these data begins and ends.
Likewise, patients need to understand who has access to their personal health information, and how it’s being used. Privacy concerns will only become more evident as our society becomes ever more connected and as technologies become more invasive. The term “wearable” may soon become antiquated, as more products are coming to market that cross the skin barrier to collect samples directly from the blood or interstitial fluid. Devices such as Abbott’s new FreeStyle Libre continuous blood glucose monitor can be worn for weeks at a time, with its tiny sensor placed just under the skin. It constantly monitors trends in blood sugar and produces enough data points to determine the eating, sleeping, and activity habits of its wearer. This is all uploadable to Abbott’s servers, allowing patients and their providers to review it, thereby further expanding their personal health information footprint.
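To see how easily dense glucose data can expose daily habits, consider this toy Python sketch, which flags sharp rises between consecutive readings as likely mealtimes. The readings and threshold are invented for illustration and have no clinical validity:

```python
def infer_meal_times(readings, rise_threshold=30):
    """Given (hour, mg_dL) glucose samples, flag hours where glucose
    rises sharply -- a crude proxy for meals. The threshold is invented;
    this only illustrates how habit patterns fall out of dense data."""
    meals = []
    for (t0, g0), (t1, g1) in zip(readings, readings[1:]):
        if g1 - g0 >= rise_threshold:
            meals.append(t1)
    return meals

# One invented day of sparse samples (a real CGM reads every few minutes):
day = [(6, 95), (7, 98), (8, 145), (9, 120), (12, 110), (13, 160), (14, 125)]
print(infer_meal_times(day))  # -> [8, 13]: breakfast and lunch hours
```

A few dozen lines of analysis already reveal when this wearer eats; weeks of continuous data reveal far more, which is precisely why the privacy stakes rise with sensor density.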
One encouraging aspect of the expansion of mobile health technology is its organic, patient-led adoption. This is quite different from the era of electronic health records, whose adoption was motivated largely by government financial incentives and resulted in expensive, inefficient software. Patients are expressing a greater desire to take ownership of their health and have a growing interest in personal fitness. The size of the consumer marketplace is also forcing vendors to create competitive, high-value, and user-friendly mHealth devices. These products may seem to offer endless possibilities, but patients, vendors, and providers must fully acknowledge existing limitations in order to truly spark a revolution in wearable technology and actually improve patient care.
Dr. Notte is a family physician and clinical informaticist for Abington (Pa.) Memorial Hospital. He is a partner in EHR Practice Consultants, a firm that aids physicians in adopting electronic health records. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
References
1. Digital Health Study: Physicians’ motivations and requirements for adopting digital clinical tools. American Medical Association, 2016.
2. Kate McLellan et al. v. Fitbit, Inc. Fitbit Heart Rate Monitors Fraud & Defects Lawsuit.
Real-world data, machine learning, and the reemergence of humanism
As we relentlessly enter information into our EHRs, we typically perceive that we are just recording information about our patients to provide continuity of care and have an accurate representation of what was done. While that is true, the information we record is now increasingly being examined for many additional purposes. A whole new area of study has emerged over the last few years known as “real-world data,” and innovators are beginning to explore how machine learning (currently employed in other areas by such companies as Amazon and Google) may be used to improve the care of patients. The information we are putting into our EHRs is being translated into discrete data and is then combined with data from labs, pharmacies, and claims databases to examine how medications actually work when used in the wide and wild world of practice.
Let’s first talk about why real-world data are important. Traditionally, the evidence we rely upon in medicine has come from randomized trials, which give us an unbiased assessment of the safety and efficacy of the medications we use. The Achilles’ heel of randomized trials is that, by their nature, they enroll a carefully defined group of patients – with specific inclusion and exclusion criteria – who may not be like the patients in our practices. Randomized trials are also conducted at sites that are different from most of our offices. The clinics where randomized trials are conducted have dedicated personnel to follow up on patients, make sure they take their medications, and ensure they remember their follow-up visits. What this means is that the results of those studies might not reflect the results seen in the real world.
A nice example of this was reported recently in the area of diabetes management. Randomized trials have shown that the glucagonlike peptide–1 (GLP-1) class of medications lowers hemoglobin A1c about twice as much as the dipeptidyl peptidase–4 (DPP-4) inhibitor class, but that difference in efficacy is not seen in practice. In real-world studies, the two classes of medications have about the same glucose-lowering efficacy. Why might that be? It may be that compliance with GLP-1s is lower than with DPP-4s because of side effects of nausea and GI intolerance. When patients miss more doses of their GLP-1, they do not achieve the HbA1c lowering seen in trials, in which compliance is far better.¹
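The arithmetic behind this explanation is simple. In the naive model sketched below – with numbers invented purely for illustration, not drawn from the cited study – a drug that is twice as effective in trials but taken only half as reliably ends up roughly tied in the real world:

```python
def real_world_drop(trial_drop_pct, adherence):
    """Naive model: observed HbA1c lowering scales linearly with the
    fraction of doses actually taken. Purely illustrative."""
    return trial_drop_pct * adherence

# Invented numbers: the GLP-1 lowers HbA1c twice as much in trials,
# but suppose GI side effects halve its real-world adherence.
glp1 = real_world_drop(1.2, 0.5)   # 1.2% trial drop, 50% adherence
dpp4 = real_world_drop(0.6, 0.9)   # 0.6% trial drop, 90% adherence
print(round(glp1, 2), round(dpp4, 2))  # -> 0.6 0.54: nearly identical
```

Real adherence effects are not this linear, but the toy model captures why trial efficacy and real-world effectiveness can diverge so sharply.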
This exploration of real-world outcomes is just a first step in using the information documented in our charts. The exciting next step will be machine learning, including its “deep learning” variants.² In this process, computers examine an enormous number of data points and find relationships that would otherwise go undetected. Imagine a supercomputer analyzing every blood pressure after any medication is changed across thousands, or even millions, of patients, and linking the outcome of that medication choice with the next blood pressure.³ Then imagine the computer meshing millions of data points that include all patients’ weights, ages, sexes, family histories of cardiovascular disease, renal function, etc., and matching those parameters with the specific medication and follow-up blood pressures. While much has been discussed about using genetics to advance personalized medicine, one can imagine these machine-based algorithms discovering connections about which medications work best for individuals with specific characteristics – without the need for additional testing. When the final loop of this cascade is connected, the computer could present recommendations to the clinician about which medication is optimal for the patient and then refine those recommendations, based on outcomes, to optimize safety and efficacy.
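As a deliberately simplified sketch of how such an algorithm might work, the following Python code recommends a blood pressure medication by averaging follow-up readings among the most similar past patients. The records, drug names, and distance measure are all invented; a real system would use far richer features, validated models, and vastly more data:

```python
import math

# Synthetic records: (age, baseline_systolic, drug, follow_up_systolic).
# Entirely invented -- a sketch of the idea, not a clinical tool.
RECORDS = [
    (45, 150, "A", 132), (50, 155, "A", 135), (70, 160, "A", 155),
    (47, 152, "B", 148), (52, 150, "B", 146), (68, 158, "B", 139),
    (72, 162, "B", 136),
]

def recommend_drug(age, baseline, k=3):
    """For each drug, average follow-up blood pressure among the k most
    similar past patients, then pick the drug with the lowest average."""
    best_drug, best_bp = None, math.inf
    for drug in ("A", "B"):
        pool = [r for r in RECORDS if r[2] == drug]
        # Similarity: crude distance on age and baseline pressure.
        pool.sort(key=lambda r: abs(r[0] - age) + abs(r[1] - baseline))
        nearest = pool[:k]
        avg = sum(r[3] for r in nearest) / len(nearest)
        if avg < best_bp:
            best_drug, best_bp = drug, avg
    return best_drug

print(recommend_drug(48, 151))  # -> A: similar younger patients did best on it
print(recommend_drug(71, 161))  # -> B: similar older patients did best on it
```

The point of the sketch is the loop itself: outcomes feed back into the record set, so every new follow-up reading sharpens the next recommendation.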
Some have argued that there is no way a computer will be able to perform as well as an experienced clinician who uses a combination of data and intuition to choose the best medication for his or her patient. This argument is similar to the controversy over self-driving cars. Many have asked how we can be assured that the cars will never have an accident. That is, of course, the wrong question. The correct question, as articulated very nicely by one of the innovators in that field, George Hotz, is how we can make a car that is safer than the way cars are currently being driven (which means fewer deaths than the 15,000 that occur annually with humans behind the wheel).⁴
Our current method of providing care often leaves patients without appropriate guideline-recommended medications, and many don’t reach their HbA1c, blood pressure, cholesterol, and asthma-control goals. The era of machine learning with machine-generated algorithms may be much closer than we think, which will allow us to spend more time talking with patients, educating them about their disease, and supporting them in their efforts to remain healthy – an attractive future for both us and our patients.
References
1. Carls GS et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017 Nov;40(11):1469-78.
2. Naylor CD. On the prospects for a (deep) learning health care system. JAMA. 2018 Sep 18;320(11):1099-100.
3. Wang YR et al. Outpatient hypertension treatment, treatment intensification, and control in Western Europe and the United States. Arch Intern Med. 2007 Jan 22;167(2):141-7.
4. Super Hacker George Hotz: “I can make your car drive itself for under $1,000.”
As we relentlessly enter information into our EHRs, we typically perceive that we are just recording information about our patients to provide continuity of care and have an accurate representation of what was done. While that is true, the information we record is now increasingly being examined for many additional purposes. A whole new area of study has emerged over the last few years known as “real-world data,” and innovators are beginning to explore how machine learning (currently employed in other areas by such companies as Amazon and Google) may be used to improve the care of patients. The information we are putting into our EHRs is being translated into discrete data and is then combined with data from labs, pharmacies, and claims databases to examine how medications actually work when used in the wide and wild world of practice.
Let’s first talk about why real-world data are important. Traditionally, the evidence we rely upon in medicine has come from randomized trials to give us an unbiased assessment about the safety and the efficacy of the medications that we use. The Achilles’ heel of randomized trials is that, by their nature, they employ a carefully defined group of patients – with specific inclusion and exclusion criteria – who may not be like the patients in our practices. Randomized trials are also conducted in sites that are different than most of our offices. The clinics where randomized trials are conducted have dedicated personnel to follow up on patients, to make sure that patients take their medications, and ensure that patients remember their follow up visits. What this means is that the results in of those studies might not reflect the results seen in the real world.
A nice example of this was reported recently in the area of diabetes management. Randomized trials have shown that the glucagonlike peptide–1 (GLP-1) class of medications have about twice the effectiveness in lowering hemoglobin A1c as do the dipeptidyl peptidase–4 (DPP-4) inhibitor class of medications, but that difference in efficacy is not seen in practice. When looked at in real-world studies, the two classes of medications have about the same glucose-lowering efficacy. Why might that be? In reality, it might be that compliance with GLP-1s is less than that of DPP-4s because of the side effects of nausea and GI intolerance. When patients miss more doses of their GLP-1, they do not achieve the HbA1c lowering seen in trials in which compliance is far better.1
This exploration of real-world outcomes is just a first step in using the information documented in our charts. The exciting next step will be machine learning – of which deep learning is one powerful form.2 In this process, computers look at an enormous number of data points and find relationships that would otherwise not be detected. Imagine a supercomputer analyzing every blood pressure after any medication is changed across thousands, or even millions, of patients, and linking the outcome of that medication choice with the next blood pressure.3 Then imagine the computer meshing millions of data points that include all patients’ weights, ages, sexes, family histories of cardiovascular disease, renal function, etc., and matching those parameters with the specific medication and follow-up blood pressures. While much has been discussed about using genetics to advance personalized medicine, one can imagine these machine-based algorithms discovering connections about which medications work best for individuals with specific characteristics – without the need for additional testing. When the final loop of this cascade is connected, the computer could present recommendations to the clinician about which medication is optimal for the patient and then refine these recommendations, based on outcomes, to optimize safety and efficacy.
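To make the "match characteristics to outcomes" idea concrete, here is a toy sketch. Everything in it – the field names, the sample records, the similarity measure, the two medication names – is hypothetical and vastly simplified; real systems would use far richer models and validated data. It only illustrates how linking patient characteristics and medication choices to observed outcomes could drive a recommendation.

```python
# Toy sketch (hypothetical data): recommend the antihypertensive whose past
# recipients most similar to the new patient saw the largest average
# blood-pressure drop. Illustrative only; not a clinical tool.

# Each record: (age, weight_kg, medication, bp_drop_mmHg after the change)
records = [
    (55, 80, "lisinopril", 12), (60, 85, "lisinopril", 10),
    (58, 82, "amlodipine", 8),  (62, 90, "amlodipine", 14),
    (45, 70, "lisinopril", 6),  (47, 72, "amlodipine", 9),
]

def recommend(age, weight_kg, k=3):
    """Average observed outcome per medication among the k most similar past patients."""
    by_med = {}
    for med in {r[2] for r in records}:
        # Rank this medication's past patients by a crude similarity distance.
        similar = sorted(
            (r for r in records if r[2] == med),
            key=lambda r: abs(r[0] - age) + abs(r[1] - weight_kg),
        )[:k]
        by_med[med] = sum(r[3] for r in similar) / len(similar)
    # Recommend the medication with the best average observed drop.
    return max(by_med, key=by_med.get), by_med

med, averages = recommend(age=59, weight_kg=84)
print(med, averages)
```

In a real deployment, the "similarity" step would be replaced by a learned model over millions of data points, and the output would be a refinable recommendation rather than a single answer.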
Some have argued that there is no way a computer will be able to perform as well as an experienced clinician who utilizes a combination of data and intuition to choose the best medication for his or her patient. This argument is similar to the controversy over autonomous cars. Many have asked how we can be assured that the cars will never have an accident. That is, of course, the wrong question. The correct question, as articulated very nicely by one of the innovators in that field, George Hotz, is how we can make a car that is safer than the way cars are currently being driven (which means fewer deaths than the 15,000 that occur annually with humans behind the wheel).4
Our current method of providing care often leaves patients without appropriate guideline-recommended medications, and many don’t reach their HbA1c, blood pressure, cholesterol, and asthma-control goals. The era of machine learning with machine-generated algorithms may be much closer than we think, which will allow us to spend more time talking with patients, educating them about their disease, and supporting them in their efforts to remain healthy – an attractive future for both us and our patients.
References
1. Carls GS et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017 Nov;40(11):1469-78.
2. Naylor CD. On the prospects for a (deep) learning health care system. JAMA. 2018 Sep 18;320(11):1099-100.
3. Wang YR et al. Outpatient hypertension treatment, treatment intensification, and control in Western Europe and the United States. Arch Intern Med. 2007 Jan 22;167(2):141-7.
4. Super Hacker George Hotz: “I can make your car drive itself for under $1,000.”
Updated AHA recommendations favor nonstatin therapy for cholesterol control
Importance
While statins remain the foundation of treatment for high cholesterol to reduce cardiovascular risk, new evidence has led to important revisions in the American Heart Association’s recommendations for the treatment of hypercholesterolemia in patients at very high cardiovascular risk (secondary prevention), with the addition of specific nonstatin agents. We will briefly review the 2013 AHA guideline recommendations, the relevant new information, and the updated AHA recommendations.
American Heart Association 2013 guidelines
The 2013 American College of Cardiology/AHA cholesterol guidelines recommend either high- or moderate-intensity statin therapy for patients in the four statin benefit groups:
1. Adult patients older than 21 years of age with clinical atherosclerotic cardiovascular disease (ASCVD).
2. Adults older than 21 years of age with low-density lipoprotein cholesterol (LDL-C) above 190 mg/dL.
3. Adults aged 40-75 years without ASCVD but with diabetes and with LDL-C 70-189 mg/dL.
4. Adults aged 40-75 years without either ASCVD or diabetes, with LDL-C 70-189 mg/dL and an estimated 10-year risk for ASCVD of over 7.5% as determined by the Pooled Cohort Equations.
At the time of the 2013 guidelines, there was little evidence to recommend the use of medications other than statins.
Recent evidence
The IMPROVE-IT trial1 was a double-blind, randomized trial involving 18,144 men and women who were older than 50 years and had been hospitalized for an acute coronary syndrome within the preceding 10 days. They were randomized to either simvastatin plus ezetimibe or simvastatin plus placebo. The primary endpoint was a composite of death from cardiovascular disease, a major coronary event (nonfatal MI, unstable angina requiring admission, or coronary revascularization), or nonfatal stroke. At 1 year, the mean LDL was 69.9 mg/dL in the simvastatin-monotherapy group and 53.2 mg/dL in the simvastatin-ezetimibe group (P less than .001), representing a 24% difference in LDL between the two groups. The rate of the primary endpoint was significantly lower in the simvastatin plus ezetimibe group, with a hazard ratio of 0.936 (P = .016). The risk of MI was significantly decreased, with an HR of 0.87 (P = .002), and the risk of ischemic stroke was significantly decreased, with an HR of 0.79 (P = .008). Prespecified safety endpoints showed no significant difference between the two groups.
The FOURIER trial2 examined the PCSK-9 inhibitor evolocumab. FOURIER was a randomized, double-blind, placebo-controlled study involving 27,564 patients with atherosclerotic cardiovascular disease and LDL levels of 70 mg/dL or higher who were receiving statin therapy (at least atorvastatin 20 mg or equivalent, with or without ezetimibe). Patients were between 45 and 80 years old with a history of MI, nonhemorrhagic stroke, or symptomatic peripheral artery disease. Patients were randomized to receive subcutaneous injections of evolocumab or matching placebo. The primary endpoint was similar to that of IMPROVE-IT: a composite of cardiovascular death, myocardial infarction, stroke, unstable angina hospitalization, and coronary revascularization. The median LDL on entry was 92 mg/dL for both groups. At 48 weeks, the evolocumab group showed a 59% decrease in LDL, compared with placebo, with a decrease in median LDL from 92 mg/dL to 30 mg/dL. The primary endpoint occurred in 9.8% of the evolocumab group and 11.3% of the placebo group, for a hazard ratio of 0.85 (P less than .001), representing a relative risk reduction of 13.2%. The risks of MI, stroke, and need for revascularization were significantly lower in the evolocumab group, compared with placebo. Cardiovascular death did not show a significant change. There was no significant difference in the rate of serious adverse events.
The ODYSSEY trial3 reported on another PCSK-9 inhibitor, alirocumab, in a randomized, double-blind, placebo-controlled trial involving 18,924 patients who had had an ACS in the prior 12 months. At the median follow-up (2.8 years), the LDL of the alirocumab group was 53.3 mg/dL, compared with 101.4 mg/dL in the placebo group. The primary endpoint was similar to that of the FOURIER trial; it occurred in 9.5% of the alirocumab group and 11.1% of the placebo group, for a relative risk reduction of 14.4%. Taken together, these trials suggest that PCSK-9 inhibitors as a class reduce LDL levels by 54%-59% and reduce major adverse cardiac events by 13%-15%.
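For readers who want to check the percentages quoted in these trial summaries, the relative risk reductions are simple arithmetic on the reported event rates. The sketch below reproduces them; small rounding differences from the figures in the source papers are expected, since the published rates are themselves rounded.

```python
# Reproduce the relative-risk-reduction arithmetic from the trial summaries
# above, using the event rates and LDL values quoted in the text.

def relative_reduction(treated, control):
    """Relative reduction = (control - treated) / control, as a percentage."""
    return 100 * (control - treated) / control

fourier = relative_reduction(9.8, 11.3)    # evolocumab vs. placebo event rates
odyssey = relative_reduction(9.5, 11.1)    # alirocumab vs. placebo event rates
improve_it_ldl = relative_reduction(53.2, 69.9)  # 1-year mean LDL, IMPROVE-IT arms

print(round(fourier, 1))         # ~13.3 (quoted as 13.2% in the text)
print(round(odyssey, 1))         # ~14.4
print(round(improve_it_ldl, 1))  # ~23.9, i.e., the ~24% LDL difference
```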
Recommendations
The American College of Cardiology released a focused update that integrated the new evidence regarding the use of nonstatin therapy. The current focused update recommends an overall 50% or greater reduction in LDL for patients with clinical ASCVD. If this reduction is not achieved, ACC suggests that one consider the addition of nonstatin therapy with either ezetimibe or a PCSK-9 inhibitor.4 If a patient requires less than 25% additional LDL reduction, consider ezetimibe; if a patient requires more than 25% additional LDL reduction, consider a PCSK-9 inhibitor. Specifically, the guideline states: “If the patient still has less than 50% reduction in LDL-C (and may consider LDL-C above 70 mg/dL or non–HDL-C above 100 mg/dL), the patient and clinician should enter into a discussion focused on shared decision making regarding the addition of a nonstatin medication to the current regimen.”
The other group that is mentioned in the recommendations, with an acknowledgment that the evidence for benefit in primary prevention is not available, is individuals who have an LDL above 190 mg/dL even while compliant with a maximally effective statin regimen. The guidelines make further but less strong recommendations about a number of risk groups, but the largest and strongest change, based on strong evidence, is the recommendation to consider nonstatin therapy in individuals with established ASCVD, as described above.
Bottom line
Recent trials show significant reductions in LDL, leading to significant reductions in cardiovascular endpoints with ezetimibe and PCSK-9 inhibitors. This has led to an additional ACC recommendation to consider the use of nonstatin therapy in addition to maximal statin therapy in selected patients with established cardiovascular disease.
References
1. Cannon C et al. Ezetimibe added to statin therapy after acute coronary syndromes. N Engl J Med. 2015;372:2387-97.
2. Sabatine M et al. Evolocumab and clinical outcomes in patients with cardiovascular disease. N Engl J Med. 2017;376:1713-22.
3. ODYSSEY Outcomes: Results suggest use of PCSK9 inhibitor reduces CV events, LDL-C in ACS patients. Article from American College of Cardiology. ACC News Story. 2018 Mar 10.
4. Lloyd-Jones DM et al. 2017 Focused update of the 2016 ACC expert consensus decision pathway on the role of nonstatin therapies for LDL-cholesterol lowering in the management of atherosclerotic cardiovascular disease risk: a report of the American College of Cardiology task force on expert consensus decision pathways. J Am Coll Cardiol. 2017 Oct 3;70(14):1785-1822. Epub 2017 Sep 5.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Plako is a second-year resident in the family medicine residency program at Abington Jefferson Hospital.
Screening for osteoporosis to prevent fractures: USPSTF recommendation statement
The U.S. Preventive Services Task Force commissioned a systematic evidence review of 168 fair- to good-quality articles to examine newer evidence on screening for and treatment of osteoporotic fractures in women and men and to update its 2011 guideline.
Importance
Osteoporosis leads to increased bone fragility and risk of fractures – specifically hip fractures – that are associated with limitations in ambulation, chronic pain, disability, loss of independence, and decreased quality of life: 21%-30% of those who suffer hip fractures die within 1 year. Osteoporosis is usually asymptomatic until a fracture occurs; thus, preventing fractures is the main goal of an osteoporosis screening strategy. With the increasing life expectancy of the U.S. population, the potential preventable burden is likely to increase in future years.
Screening tests
The most commonly used test is central dual energy x-ray absorptiometry (DXA), which provides measurement of bone mineral density (BMD) of the hip and lumbar spine. Most treatment guidelines already use central DXA BMD to define osteoporosis and the threshold at which to start drug therapies for prevention. Other lower-cost and more accessible alternatives include peripheral DXA, which measures BMD at lower forearm and heel, and quantitative ultrasound (QUS), which also evaluates peripheral sites like the calcaneus. QUS does not measure BMD. USPSTF found that the harms associated with screening were small (mainly radiation exposure from DXA and opportunity costs).
Population and risk assessment
The review included adults older than 40 years of age, mostly postmenopausal women, without a history of previous low-trauma fractures, without conditions or medications that may cause secondary osteoporosis, and without increased risk of falls.
Patients at increased risk of osteoporotic fractures include those with parental history of hip fractures, low body weight, excessive alcohol consumption, and smokers. For postmenopausal women younger than 65 years of age with at least one risk factor, a reasonable approach to determine who should be screened with BMD is to use one of the various clinical risk assessment tools available. The most frequently studied tools in women are the Osteoporosis Risk Assessment Instrument (ORAI), Osteoporosis Index of Risk (OSIRIS), Osteoporosis Self-Assessment Tool (OST), and Simple Calculated Osteoporosis Risk Estimation (SCORE). The Fracture Risk Assessment (FRAX) tool calculates the 10-year risk of a major osteoporotic fracture (MOF) using clinical risk factors. For example, one approach is to perform BMD in women younger than 65 years with a FRAX risk greater than 8.4% (the FRAX risk of a 65-year-old woman of mean height and weight without major risk factors).
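The example approach described above amounts to a simple decision rule, sketched below. This is an illustration of the text’s example only: the function and variable names are ours, the rule applies to women, and in practice the 10-year major osteoporotic fracture risk would come from the validated FRAX calculator, not from code like this.

```python
# Sketch of the example screening rule from the text (illustrative only,
# not a clinical recommendation): screen all women 65 and older; for
# postmenopausal women younger than 65, screen with BMD when the FRAX
# 10-year major osteoporotic fracture (MOF) risk exceeds 8.4% - the risk
# of a 65-year-old woman of mean height and weight without major risk factors.

FRAX_MOF_THRESHOLD_PCT = 8.4

def should_screen_with_bmd(age, postmenopausal, frax_mof_risk_pct):
    """Apply the example rule for a female patient."""
    if age >= 65:
        return True  # USPSTF: screen all women 65 years and older
    return postmenopausal and frax_mof_risk_pct > FRAX_MOF_THRESHOLD_PCT

print(should_screen_with_bmd(58, True, 9.1))  # younger, elevated FRAX risk
print(should_screen_with_bmd(58, True, 5.0))  # younger, low FRAX risk
```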
In men, the prevalence of osteoporosis (4.3%) is generally lower than in women (15.4%). In the absence of other risk factors, it is not until age 80 that the prevalence of osteoporosis in white men starts to reach that of a 65-year-old white woman. While men share the risk factors described above for women, men with hypogonadism, a history of cerebrovascular accident, or a history of diabetes are also at increased risk of fracture.
Preventive measures to reduce osteoporotic fractures
Approved drug therapies. The majority of studies were conducted in postmenopausal women. Bisphosphonates, the most commonly used and studied agents, significantly reduced vertebral and nonvertebral fractures but not hip fractures (possibly because of underpowered studies). Raloxifene and parathyroid hormone reduced vertebral fractures but not nonvertebral fractures. Denosumab significantly reduced all three types of fractures. A 2011 review found that estrogen reduced vertebral fractures, but no new studies were identified for the current review. Data from the Women’s Health Initiative show that women receiving estrogen with or without progesterone had an elevated risk of stroke, venous thromboembolism, and gallbladder disease; their risk for urinary incontinence was increased during the first year of follow-up. In addition, women receiving estrogen plus progestin had a higher risk of invasive breast cancer, coronary heart disease, and probable dementia. The risk of serious adverse events, upper-gastrointestinal events, or cardiovascular events associated with bisphosphonates, the most commonly used class of medications, is small. Evidence on the effectiveness of medications to treat osteoporosis in men is lacking (only two studies have been conducted).
Exercise. Engagement in 120-300 minutes of weekly moderate-intensity aerobic activity can reduce the risk of hip fractures, and performance of weekly balance and muscle-strengthening activities can help prevent falls in older adults.
Supplements. In a separate recommendation, USPSTF recommends against daily supplementation with less than 400 IU of vitamin D and less than 1,000 mg of calcium for the primary prevention of fractures in community-dwelling, postmenopausal women. They found insufficient evidence on supplementation with higher doses of vitamin D and calcium in postmenopausal women, or at any dose in men and premenopausal women.
Recommendations from others
The National Osteoporosis Foundation and the International Society for Clinical Densitometry recommend BMD testing in all women older than 65 years, all men over 70 years, postmenopausal women younger than 65 years, and men aged 50-69 years with increased risk factors. The American Academy of Family Physicians recommends against DXA screening in women younger than 65 years and men younger than 70 years with no risk factors.
The bottom line
For all women older than 65 years and postmenopausal women younger than 65 years who are at increased risk, screen for and treat osteoporosis to prevent fractures. For men, there is insufficient evidence to screen.
Dr. Shrestha is a second-year resident in the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
References
1. U.S. Preventive Services Task Force. JAMA. 2018 Jun 26;319(24):2521-31.
2. U.S. Preventive Services Task Force. JAMA. 2018 Jun 26;319(24):2532-51.
Countdown to launch: Health care IT primed for disruption
On Friday, June 29, at 5:42 a.m., I stood with my family on a Florida shore overlooking Kennedy Space Center. We had gathered with about a hundred other people to watch a rocket launch and were overwhelmed with excitement as the coastline erupted in fire, and the spacecraft lifted off toward the heavens. Standing there watching the spectacle, I couldn’t help but be caught up in the irony of the moment. Here we were, at the place where NASA sent the first Americans into space – on the very shores where the Apollo astronauts set off for the moon to plant our nation’s flag in the lunar dust in July of 1969. Yet now, almost 50 years later, this launch was profoundly different. The rocket wasn’t built by NASA, and the intention of its builders wasn’t exploration. This was a Falcon 9, built by SpaceX, a for-profit company founded by an enterprising billionaire. Most surprisingly, this relatively routine launch was intended to accomplish something that NASA – the United States’ own space agency – currently can’t do on its own: Launch rockets.
Since retiring the Space Shuttle in 2011, the United States has had to rely on others – including even Roscosmos (the Russian space agency) – to ferry passengers, satellites, and cargo into space. Seeing this opportunity in a multibillion-dollar industry, private enterprise has risen to the challenge, innovating more quickly and at a lower cost than “the establishment” has ever been capable of. As a result, space travel has been disrupted by corporations competing in a new “space race.” Instead of national pride or scientific dominance, this race has been fueled by profit and is quite similar to one being run in another industry: health care.
Just 1 day prior to watching the launch – on June 28 – we learned that Amazon had purchased PillPack, a prescription drug home delivery service. The stock market responded to the news, and the establishment (in this case CVS, Walgreens, and Walmart, among others) collectively lost $17.5 billion in one day. This isn’t the first time Amazon has disrupted the health care world; in January of this year, it, along with Berkshire Hathaway and JPMorgan Chase, announced a health care partnership to cut costs and improve care delivery for their employees. This move also sent shivers through the market, as health insurers and providers such as Aetna and UnitedHealth lost big on expectations that Amazon et al. wouldn’t stop with their own employees. Those of us watching this play out from the sidelines realized we were witnessing a revolution that would mean the end of health care delivery as we know it – and that’s not necessarily a bad thing, especially in the world of electronic health records (EHRs).
As you’ve probably noticed, it is quite rare nowadays to find physicians who love computers. Once an exciting novelty in health care, PCs have become a burdensome necessity, and providers often feel enslaved to the EHRs that run on them. There are numerous reasons for this, but one primary cause is that the hundreds of disparate EHRs currently available sprouted out of health care – a centuries-old and very provincial industry – prior to the development of technical and regulatory standards to govern them. As they’ve grown larger and larger from their primitive underpinnings, these EHRs have become more cumbersome to navigate, and vendors have simply bolted on additional features without significant changes to their near-obsolete software architecture.
It’s worth noting that a few EHR companies, such as industry giant Epic, purport to be true innovators in platform usability. According to CEO Judy Faulkner, Epic pours 50% of its revenue back into research and development (though, as Epic is a privately held company, this number can’t be verified). If accurate, Epic is truly an exception, as most electronic record companies spend about 10%-30% on improving their products – far less than they spend on recruiting new customers. Regardless, the outcome is this: Physician expectations for user interface and user experience have far outpaced the current state of the art of EHRs, and this has left a gap that new players outside the health care establishment are apt to fill.
Like Amazon, other software giants have made significant investments in health care over the past several years. According to its website, Apple has been working with hospitals, scientists, and developers to “help health care providers streamline their work, deliver better care, and conduct medical research.” Similarly, Google claims to be “making a number of big bets in health care and life sciences” by leveraging its artificial intelligence technology to assist in clinical diagnosis and scientific discovery. In spite of a few false starts in the past, these companies are poised to do more than simply disrupt health care. As experts in user interface and design, they could truly change the way physicians interact with health care technology, and it seems it’s no longer a question of if, but when, we’ll see that happen.
The effort of SpaceX and others to change the way we launch rockets tells a story that transcends space travel – it’s a story of how new thinking, more efficient processes, and better design can disrupt the establishment. It’s worth pointing out that NASA hasn’t given up – it is continuing to develop the Space Launch System, which, when completed, will be the most powerful rocket in the world and will be capable of carrying astronauts into deep space. In the meantime, however, NASA is embracing the efforts of private industry to help pave a better way forward and make space travel safer and more accessible for everyone. We are hopeful that EHR vendors and other establishment health care institutions are taking note, adapting to meet the needs of the current generation of physicians and patients, and innovating a better way to launch health care into the future.
Dr. Notte is a family physician and associate chief medical information officer for Abington (Pa.) Jefferson Health. Follow him on Twitter (@doctornotte). Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
USPSTF: Fall prevention in the elderly? Think exercise
The United States Preventive Services Task Force (USPSTF) commissioned a systematic evidence review of 62 randomized clinical trials with a total of 35,058 patients to gather evidence on the effectiveness and harms of primary care–relevant interventions to prevent falls in community-dwelling adults 65 years or older.1
It thereby has updated its 2012 statement, in which exercise or physical therapy and vitamin D supplementation were recommended to prevent falls.
Importance
Falls are the leading cause of injury-related morbidity and mortality among older adults in the United States. In 2014, almost a third of community-dwelling adults 65 years or older reported falling, resulting in 29 million falls. More than 90% of hip fractures are caused by falls, and 25% of older adults who sustain a hip fracture die within 6 months. Of note, USPSTF has issued two related but separate recommendation statements on the prevention of fractures. Reducing the incidence of falls would not only decrease morbidity burden but also improve the socialization and functioning of older adults.
Scope of review
Of the 62 randomized clinical trials, 65% targeted patients at high risk of falls; these patients were most commonly identified by a history of prior falls, though mobility, gait, and balance impairment also were often considered. Studies of populations with specific medical diagnoses that could affect fall-related outcomes (osteoporosis, visual impairment, neurocognitive disorders) were excluded. The review also did not look at outcomes in populations who were vitamin D deficient because, in that population, vitamin D supplementation would be considered treatment rather than prevention. Of note, women constituted the majority in most studies.
Exercise interventions
USPSTF found five good-quality and 16 fair-quality studies that reported on various exercise interventions to prevent falls; altogether, these studies included a total of 7,297 patients. Of the studies, 57% recruited populations at high risk for falls, with mean ages ranging from 68 to 88 years. Exercise interventions included supervised individual classes, group classes, and physical therapy. The most common exercise component was gait, balance, and functional training; other common components, in order of frequency, were resistance training, flexibility training, and endurance training. The most common frequency and duration were three sessions per week for 12 months. Exercise interventions reduced the number of persons experiencing a fall (relative risk, 0.89; 95% confidence interval, 0.81-0.97) and the number of injurious falls (incidence rate ratio, 0.81; 95% CI, 0.73-0.90), and showed a statistically nonsignificant reduction in the total number of falls. Reported adverse events were minor and most commonly included pain or bruising related to exercise.
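The review's two endpoints differ in a way that is easy to miss: relative risk compares the proportion of persons who fell at least once, while the incidence rate ratio compares total falls per unit of follow-up, so repeat falls by the same person count more than once. A toy calculation (all counts here are invented for illustration, not drawn from the trials) makes the distinction concrete:

```python
# Relative risk (RR): compares the proportion of PERSONS with at least one fall.
fallers_tx, n_tx = 178, 500      # intervention arm: people with >=1 fall / arm size
fallers_ctl, n_ctl = 200, 500    # control arm (counts invented for illustration)

rr = (fallers_tx / n_tx) / (fallers_ctl / n_ctl)

# Incidence rate ratio (IRR): compares the RATE of falls per person-year,
# so one frequent faller contributes many events to the numerator.
falls_tx, py_tx = 320, 500.0     # total falls / person-years of follow-up
falls_ctl, py_ctl = 400, 500.0

irr = (falls_tx / py_tx) / (falls_ctl / py_ctl)

print(round(rr, 2))   # 0.89
print(round(irr, 2))  # 0.8
```

Because a handful of frequent fallers can drive the fall rate without changing the number of fallers, an intervention can move one measure while leaving the other unchanged, which is why the review reports both.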
Multifactorial interventions
USPSTF found seven good-quality and 19 fair-quality studies that reported on multifactorial interventions; altogether, these studies included a total of 15,506 patients. Of the studies, 73% recruited populations at high risk for falls, and the mean age ranged from 71.9 to 85 years. Multifactorial interventions had two components:
- Initial assessment to screen for modifiable risk factors for falls (multidisciplinary comprehensive geriatric assessment or specific assessment that evaluated various factors, such as balance, gait, vision, cardiovascular health, medication, environment, cognition, and psychological health).
- Subsequent customized interventions (group or individual exercise, cognitive-behavioral therapy, nutrition, environmental modification, physical or occupational therapy, social or community services, and referral to specialists).
While studies found that multifactorial interventions reduced the number of falls (IRR, 0.79; 95% CI, 0.68-0.91), they did not reduce the number of people who experienced a fall (RR, 0.95; 95% CI, 0.89-1.01) or an injurious fall (RR, 0.94; 95% CI, 0.85-1.03). Four studies reported minor harm, mostly bruising, from exercise. Therefore, USPSTF has recommended that clinicians take into consideration a patient's medical history (including prior falls and comorbidities) when deciding whether to selectively offer multifactorial interventions.
Vitamin D supplementation
USPSTF found four good-quality and three fair-quality studies that reported on the effect of vitamin D supplementation on the prevention of falls; altogether, these studies included a total of 7,531 patients. Of the studies, 43% recruited populations at high risk for falls. The mean age ranged from 71 to 76.8 years, and mean serum 25-OH vitamin D levels ranged from 26.4 to 31.8 ng/mL. Vitamin D formulations and dosages varied among trials, from 700 IU/day to 150,000 IU every 3 months to 500,000 IU/year. Pooled analyses did not show a significant reduction in falls (IRR, 0.97; 95% CI, 0.79-1.20) or in the number of persons experiencing a fall (RR, 0.97; 95% CI, 0.88-1.08). Only two trials reported on injurious falls; one reported an increase and the other reported no statistically significant difference. One study using high-dose vitamin D supplementation (500,000 IU per year) showed a statistically significant increase in all three endpoints.
Recommendations of others for fall prevention
The National Institute on Aging has emphasized exercise for strength and balance, monitoring for environmental hazards, and hearing and vision care, as well as medication management. The American Geriatrics Society (AGS) has recommended asking about prior falls annually and assessing gait and balance in those who have experienced a fall. The AGS also has recommended strength and gait training, environmental modification, medication management, and vitamin D supplementation of at least 800 IU/day for those who are vitamin D deficient or at increased risk of falls. The Centers for Disease Control and Prevention recommends STEADI (Stopping Elderly Accidents, Deaths & Injuries), a coordinated approach to implementing the AGS's clinical practice guidelines. The American Academy of Family Physicians recommends exercise or physical therapy and vitamin D supplementation.
The bottom line
Regarding reduction of falls, the USPSTF found adequate evidence that exercise interventions confer a moderate net benefit, multifactorial interventions have a small net benefit, and vitamin D supplementation offers no net benefit in preventing falls.
References
1. Guirguis-Blake JM et al. JAMA. 2018 Apr 24;319(16):1705-16.
2. U.S. Preventive Services Task Force et al. JAMA. 2018 Apr 24;319(16):1696-1704.
Dr. Shrestha is a first-year resident in the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington Jefferson Health.
What is an old doctor to do?
I was in Miami recently to give a talk on diabetes when a physician, Pablo Michel, MD, asked me whether we could address an issue that’s important to him and many of his colleagues. His question was, Do we have any suggestions about how to help “older doctors” such as himself deal with electronic health records?
One of the problems with his question was that he didn’t really look “old”; he looked like he was about 50 years of age and in good shape. This physician had come on a Saturday morning to spend 4 hours learning about diabetes, which made it clear that he cared about his patients, his craft, and staying current with the medical literature.
Further discussion revealed that he also was bothered by what he saw in many of the consult notes he received, as well as by the undermining of history and physical notes by copy and paste; the inclusion of so much meaningless information made it hard to find what was relevant. He said that he had become used to completing his old SOAP notes efficiently and now felt he was slogging through mud, having to reproduce large parts of the chart in every note he wrote.
I was struck by his questions, as well as his concern for both the quality of care for his patients and the issues he and his colleagues were facing. And it is not just him. Increased computerization of practices has been listed among the top five causes of physician burnout.1
A recent article in Annals of Internal Medicine showed that physicians spent only a quarter of their total time directly talking with patients and 50% of their time on the EHR and other administrative tasks.2 It is likely that, among older physicians, the EHR takes proportionally more time and is an even larger cause of burnout. Given the importance of the EHR, it seems time to revisit the dilemma and propose some solutions for this common problem.
One of the core issues for many older physicians is an inability to type. If you don’t type well, then entering a patient’s history or documenting the assessment and plan is unduly burdensome. Ten years ago, we might have suggested learning to type, which was an unrealistic recommendation then and, fortunately, is unnecessary now.
Now, solutions ranging from medical scribes to voice recognition have become commonplace. Voice recognition technology has advanced remarkably over the past 10 years, so much so that it is now part of everyday life. The most familiar example may be Siri, Apple's voice assistant, which makes it easy to dictate texts and look up information. Similar voice technologies are available with the Amazon Echo and Google Assistant.
We now also have the advantage of well-developed medical voice recognition technology that can be used with most EHRs. Some doctors object that the software is expensive: roughly $1,500 for the software and another $200-$300 for a good microphone, plus the time needed to train on it. But that expense needs to be weighed against the lost productivity of not using such software. A common complaint we hear from older doctors is that they are spending 1 to 2 hours a night completing charts. If voice recognition software could shave off half that time, decrease stress, and increase satisfaction, then it would pay for itself in 2 weeks.
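The payback claim holds up to simple arithmetic. In the sketch below, the costs come from the estimates above, but the hourly value of physician time and the time saved per night are illustrative assumptions, not figures from the column:

```python
# Back-of-envelope payback period for medical dictation software.
# software_cost uses the upper-end estimates quoted above; the other
# two inputs are assumptions chosen for illustration.
software_cost = 1500 + 300        # software plus a good microphone, in dollars
hours_saved_per_night = 1.0       # half of a 2-hour nightly charting burden
value_per_hour = 150.0            # assumed value of one physician-hour

nights_to_break_even = software_cost / (hours_saved_per_night * value_per_hour)
print(nights_to_break_even)  # 12.0
```

Under these assumptions the purchase pays for itself in about 12 charting nights, consistent with a rough 2- to 3-week estimate.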
Another issue is that, because the EHR enables so many tasks to be done from a single platform, many doctors find themselves doing all the work. It is important to work as a team and let each member contribute to making the process more efficient. It turns out that this usually ends up being satisfying for everyone who contributes to patient care. It requires standing back from the process periodically and thinking about areas of inefficiency and how things can be done better.
One clear example is medication reconciliation: A nurse or clinical pharmacist can go over medicines with patients, and while the physician still needs to review the medications, it takes much less time to review medications than it does to enter each medication with the correct dose. Nurses also can help with preventive health initiatives. Performing recommended preventive health activities ranging from hepatitis C screening to colonoscopy can be greatly facilitated by the participation of nursing staff, and their participation will free up doctors so they can have more time to focus on diagnosis and treatment. Teamwork is critical.
Finally, if you don't know something that is important to your practice – learn it! We are accustomed to going to CME conferences and spending our time learning about diseases like diabetes, asthma, and COPD. Each of these diseases accounts for 5%-10% of the patients we see in our practice, and it is critically important to stay current and learn about them. We use our EHR for 100% of the patients we see; therefore, we should allocate time to learning how to navigate the EHR and work more efficiently with it.
These issues are real, and the processes continue to change, but by standing back and acknowledging the challenges, we can thoughtfully construct an approach to maximize our ability to continue to have productive, gratifying careers while helping our patients.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington Jefferson Health. Follow him on Twitter @doctornotte.
References
1. Medscape Physician Lifestyle Report 2015. Accessed April 27, 2018. https://www.medscape.com/slideshow/lifestyle-2015-overview-6006535#1.
2. Sinsky C et al. Ann Intern Med. 2016;165(11):753-60.
I was in Miami recently to give a talk on diabetes when a physician, Pablo Michel, MD, asked me whether we could address an issue that’s important to him and many of his colleagues. His question was, Do we have any suggestions about how to help “older doctors” such as himself deal with electronic health records?
One of the problems with his question was that he didn’t really look “old”; he looked like he was about 50 years of age and in good shape. This physician had come on a Saturday morning to spend 4 hours learning about diabetes, which made it clear that he cared about his patients, his craft, and staying current with the medical literature.
Further discussion revealed that he also was bothered about what he saw happening on many consult notes that he received, as well as the undermining of history and physical notes by copy and paste; the inclusion of a lot of meaningless information made it hard to find information that was relevant. He said that he had become used to doing his old SOAP notes in a really efficient manner and found he was now slogging through mud having to reproduce large parts of the chart in every note that he did.
I was struck by his questions, as well as his concern for both the quality of care for his patients and the issues he and his colleagues were facing. And it is not just him. Increased computerization of practices has been listed among the top five causes of physician burnout.1
A recent article in Annals of Internal Medicine showed that physicians spent only a quarter of their total time directly talking with patients and 50% of their time on EHR and other administrative tasks.2 It is likely that, among older physicians, the EHR takes proportionally more time and is an even larger cause of burnout. Given the importance of EHR, it seems time to revisit both the dilemma of, and propose some solutions for, this common problem.
One of the core issues for many older physicians is an inability to type. If you don’t type well, then entering a patient’s history or documenting the assessment and plan is unduly burdensome. Ten years ago, we might have suggested learning to type, which was an unrealistic recommendation then and, fortunately, is unnecessary now.
Now, solutions ranging from medical scribes to voice recognition have become commonplace. Voice recognition technology has advanced incredibly over the past 10 years, so much so that it is used now in our everyday life. The most well-known voice technology in everyday life might be Siri, Apple’s voice technology. It is easy now to dictate texts and to look up information. Similar voice technologies are available with the Amazon Echo and Google Assistant.
We now also have the advantage of well-developed medical voice recognition technology that can be used with most EHRs. Although some doctors say that the software is expensive, it can cost about $1,500 for the software and another $200-$300 for a good microphone, as well as the time to train on the software. But that expense needs to be weighed against the lost productivity of not using such software. A common complaint we hear from older doctors is that they are spending 1 to 2 hours a night completing charts. If voice recognition software could shave off half that time, decrease stress, and increase satisfaction, then it would pay for itself in 2 weeks.
Another issue is that, because the EHR enables so many things to be done from the EHR platform, many doctors find themselves doing all the work. It is important to work as a team and let each member of the team contribute to making the process more efficient. It turns out that this usually ends up being satisfying for everyone who contributes to patient care. It requires standing back from the process periodically and thinking about areas of inefficiency and how things can be done better.
One clear example is medication reconciliation: A nurse or clinical pharmacist can go over medicines with patients, and while the physician still needs to review the medications, it takes much less time to review medications than it does to enter each medication with the correct dose. Nurses also can help with preventive health initiatives. Performing recommended preventive health activities ranging from hepatitis C screening to colonoscopy can be greatly facilitated by the participation of nursing staff, and their participation will free up doctors so they can have more time to focus on diagnosis and treatment. Teamwork is critical.
Finally, if you don’t know something that is important to your practice – learn it! We are accustomed to going to CME conferences and spending our time learning about diseases like diabetes, asthma, and COPD. Each of these disease accounts for 5%-10% of the patients we see in our practice, and it is critically important to stay current and learn about them. We use our EHR for 100% of the patients we see; therefore, we should allocate time to learning about how to navigate the EHR and work more efficiently with it.
These issues are real, and the processes continue to change, but by standing back and acknowledging the challenges, we can thoughtfully construct an approach to maximize our ability to continue to have productive, gratifying careers while helping our patients.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington Jefferson Health. Follow him on twitter @doctornotte.
References
1. Medscape Physician Lifestyle Report 2015. Accessed April 27, 2018. https://www.medscape.com/slideshow/lifestyle-2015-overview-6006535#1.
2. Sinsky C et al. Ann Intern Med. 2016;165(11):753-60.
I was in Miami recently to give a talk on diabetes when a physician, Pablo Michel, MD, asked me whether we could address an issue that’s important to him and many of his colleagues. His question was, Do we have any suggestions about how to help “older doctors” such as himself deal with electronic health records?
One of the problems with his question was that he didn’t really look “old”; he looked like he was about 50 years of age and in good shape. This physician had come on a Saturday morning to spend 4 hours learning about diabetes, which made it clear that he cared about his patients, his craft, and staying current with the medical literature.
Further discussion revealed that he was also troubled by what he saw in many of the consult notes he received, as well as by the way copy and paste was undermining history and physical notes; the inclusion of so much meaningless information made it hard to find what was relevant. He said he had become used to writing his old SOAP notes in a really efficient manner and now felt he was slogging through mud, having to reproduce large parts of the chart in every note he wrote.
I was struck by his questions, as well as his concern for both the quality of care for his patients and the issues he and his colleagues were facing. And it is not just him. Increased computerization of practices has been listed among the top five causes of physician burnout.1
A recent article in Annals of Internal Medicine showed that physicians spent only a quarter of their total time directly talking with patients and 50% of their time on the EHR and other administrative tasks.2 It is likely that, among older physicians, the EHR takes proportionally more time and is an even larger contributor to burnout. Given the importance of the EHR, it seems time to revisit the dilemma and propose some solutions for this common problem.
One of the core issues for many older physicians is an inability to type. If you don’t type well, then entering a patient’s history or documenting the assessment and plan is unduly burdensome. Ten years ago, we might have suggested learning to type, which was an unrealistic recommendation then and, fortunately, is unnecessary now.
Now, solutions ranging from medical scribes to voice recognition have become commonplace. Voice recognition technology has advanced remarkably over the past 10 years, so much so that it is now part of everyday life. The best-known example may be Siri, Apple's voice assistant, which makes it easy to dictate texts and look up information; similar voice technologies are available with the Amazon Echo and Google Assistant.
We now also have well-developed medical voice recognition software that can be used with most EHRs. Some doctors balk at the cost: about $1,500 for the software and another $200-$300 for a good microphone, plus the time required to train on it. But that expense needs to be weighed against the lost productivity of going without it. A common complaint we hear from older doctors is that they spend 1-2 hours a night completing charts. If voice recognition software could shave off half that time, decrease stress, and increase satisfaction, it would pay for itself in 2 weeks.
Another issue is that, because so much can now be done from the EHR platform, many doctors find themselves doing all of the work themselves. It is important to work as a team and let each member contribute to making the process more efficient; doing so usually ends up being satisfying for everyone who contributes to patient care. It requires standing back periodically, identifying areas of inefficiency, and thinking about how things could be done better.
One clear example is medication reconciliation: A nurse or clinical pharmacist can go over medicines with patients, and while the physician still needs to review the medications, it takes much less time to review medications than it does to enter each medication with the correct dose. Nurses also can help with preventive health initiatives. Performing recommended preventive health activities ranging from hepatitis C screening to colonoscopy can be greatly facilitated by the participation of nursing staff, and their participation will free up doctors so they can have more time to focus on diagnosis and treatment. Teamwork is critical.
Finally, if you don’t know something that is important to your practice – learn it! We are accustomed to going to CME conferences and spending our time learning about diseases like diabetes, asthma, and COPD. Each of these diseases accounts for 5%-10% of the patients we see in our practice, and it is critically important to stay current on them. We use our EHR for 100% of the patients we see; therefore, we should allocate time to learning how to navigate the EHR and work with it more efficiently.
These issues are real, and the processes continue to change, but by standing back and acknowledging the challenges, we can thoughtfully construct an approach to maximize our ability to continue to have productive, gratifying careers while helping our patients.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Notte is a family physician and associate chief medical information officer for Abington Jefferson Health. Follow him on twitter @doctornotte.
References
1. Medscape Physician Lifestyle Report 2015. Accessed April 27, 2018. https://www.medscape.com/slideshow/lifestyle-2015-overview-6006535#1.
2. Sinsky C et al. Ann Intern Med. 2016;165(11):753-60.
Clinical Guidelines: Testosterone therapy in men with hypogonadism
Guidelines issued jointly by the Endocrine Society and the European Society for Endocrinology provide clinicians with a clear consensus approach to male hypogonadism, commonly referred to by patients as “low T.” Hypogonadism results from “the failure of the testes to produce physiological concentrations of testosterone and/or a normal number of spermatozoa due to pathology at one or more levels of the hypothalamic-pituitary-testicular axis,” according to the definition that serves as the basis for the guidelines.
Primary hypogonadism is caused by abnormalities at the testicular level, and secondary hypogonadism by a defect of the hypothalamic-pituitary axis. The two can be distinguished by gonadotropin levels (LH and FSH), which are elevated in primary hypogonadism because they rise in response to low testosterone levels; in secondary hypogonadism, gonadotropin levels are low or inappropriately normal. Causes of secondary hypogonadism include hyperprolactinemia; severe obesity; iron overload syndromes; use of opioids, glucocorticoids, or androgen-deprivation therapy; androgenic-anabolic steroid withdrawal syndrome; idiopathic hypogonadotropic hypogonadism; hypothalamic or pituitary tumors or infiltrative disease; head trauma; and pituitary surgery or irradiation.
The causes of hypogonadism can also be divided into irreversible and reversible disorders. Irreversible causes include congenital, structural, or destructive disorders that lead to permanent organ dysfunction. Reversible causes, such as obesity, opioid use, or systemic illness, can suppress gonadotropin and testosterone concentrations but may resolve when the underlying condition is addressed.
Diagnosis
The signs and symptoms of hypogonadism are often nonspecific and include decreased energy, depressed mood, poor concentration and memory, sleep disturbance, mild normocytic normochromic anemia, reduced muscle bulk and strength, increased body fat, reduced libido, decreased erections, gynecomastia, low-trauma fractures, and loss of body hair. The diagnosis of hypogonadism is made when there are signs and symptoms of testosterone deficiency together with unequivocally and consistently low serum total testosterone and/or free testosterone concentrations.
Serum testosterone concentrations have diurnal variations, with values peaking in the morning. In addition, food intake suppresses testosterone concentrations. Therefore, testosterone levels should be measured in the morning after an overnight fast. Low testosterone concentrations need to be confirmed before making the diagnosis of hypogonadism because 30% of men with an initial testosterone concentration in the hypogonadal range have a normal testosterone concentration on repeat measurement. In addition, testosterone concentrations are not accurate in patients recovering from acute illness or taking medications known to suppress testosterone.
Testing of free testosterone and sex hormone–binding globulin (SHBG) may be considered in patients at risk for increased or decreased SHBG, including the obese, men with diabetes, the elderly, those with HIV or liver disease, or those taking estrogens and medications that may affect SHBG.
In individuals with low testosterone levels, a serum FSH and LH should be ordered to differentiate primary from secondary hypogonadism. Middle-aged and older men with secondary hypogonadism have a low prevalence of hypothalamic/pituitary abnormalities.
Treatment
In patients found to have low testosterone with signs and symptoms of testosterone deficiency, testosterone therapy is recommended to induce and maintain secondary sex characteristics and correct the symptoms of testosterone deficiency. Testosterone-replacement therapy in men with low testosterone levels leads to a small but statistically significant improvement in libido, erectile function, sexual activity or satisfaction, muscle mass, and bone density but does not lead to improvements in energy and mood.
Testosterone replacement should not be given to patients planning fertility in the near future or to those with prostate or breast cancer, a palpable prostate nodule, a prostate-specific antigen (PSA) level greater than 4 ng/mL, a PSA greater than 3 ng/mL plus a high risk for prostate cancer, an elevated hematocrit, untreated obstructive sleep apnea, severe lower urinary tract symptoms, uncontrolled heart failure, MI or stroke within the last 6 months, or thrombophilia.
In men undergoing therapy, there is a higher frequency of erythrocytosis (hematocrit greater than 54%; relative risk, 8.14) but no increase in lower urinary tract symptoms. The benefit and risk of regular prostate cancer screening should be discussed prior to starting therapy with men aged 40-69 years with an increased risk of prostate cancer and with all men aged 55-69 years. For those who desire prostate cancer screening, PSA levels should be checked prior to starting therapy, and a digital prostate examination should be done at baseline and at 3-12 months after starting testosterone treatment. After 1 year, prostate cancer screening can be done per standard guidelines.
The decision about therapy in men older than 65 years is challenging because testosterone levels normally decline with age. It is not necessary to prescribe testosterone routinely to men older than 65 years with only low testosterone, and treatment should be reserved for those with symptoms along with low testosterone concentrations.
Men with HIV with low testosterone concentrations and weight loss can be treated with testosterone to induce and maintain body weight and lean muscle mass.
Monitoring
Patients should be evaluated 3-6 months after initiating treatment to see whether symptoms have improved, to see whether there have been adverse reactions, and to check labs.
Serum testosterone should be checked and the dose of testosterone replacement should be adjusted to maintain the serum testosterone level in the mid-normal range for healthy young men. Serum testosterone should be drawn at different times for different formulations – for instance, it should be checked 2-8 hours following a gel application.
Hematocrit should be checked at baseline and 3-6 months into treatment. If hematocrit is greater than 54%, therapy should be held until hematocrit decreases and then restarted at a reduced dose. Screening for prostate cancer should be done if that was decided upon during discussion with the patient. Further urologic evaluation is indicated in men who, during the first year of treatment, develop an increase from baseline PSA greater than 1.4 ng/mL, have a repeat PSA over 4 ng/mL, or have a prostatic abnormality on digital rectal exam.
The bottom line
Hypogonadism is common and presents diagnostic challenges because of nonspecific signs and symptoms. Serum testosterone should be checked on a first-morning fasting specimen. Low testosterone concentrations need to be confirmed before making the diagnosis and should be followed by checking FSH and LH. For those with signs and symptoms of hypogonadism and persistently low testosterone, testosterone replacement therapy can be beneficial, with a goal of maintaining serum testosterone in the mid-range of normal.
Reference
Bhasin S et al. Testosterone therapy in men with hypogonadism: An Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2018 May;103(5):1-30.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Hurchick is a third-year resident in the family medicine residency program at Abington Jefferson Health.
Early management of patients with acute ischemic stroke
The guidelines address the early management of patients with acute ischemic stroke (AIS). Conceptually, early management can be separated into initial triage and decisions about intervention to restore blood flow with thrombolysis or mechanical thrombectomy. If reperfusion therapy is not appropriate, the focus shifts to management that minimizes further damage from the stroke, decreases the likelihood of recurrence, and lessens secondary problems related to the stroke.
All patients with AIS should receive noncontrast CT to determine whether there is evidence of a hemorrhagic stroke; if such evidence exists, then the patient is not a candidate for thrombolysis. Intravenous alteplase should be considered for patients who present within 3 hours of stroke onset and for selected patients presenting between 3 and 4.5 hours after stroke onset (for more details, see Table 6 in the guidelines). Selected patients with AIS who present within 6-24 hours of the last time they were known to be normal and who have a large vessel occlusion in the anterior circulation may be candidates for mechanical thrombectomy in specialized centers. Patients who are not candidates for acute interventions should then be managed according to early stroke management guidelines.
Early stroke management for patients with AIS admitted to medical floors involves attention to blood pressure, glucose, and antiplatelet therapy. For patients with blood pressure lower than 220/120 mm Hg who did not receive IV alteplase or thrombectomy, treatment of hypertension in the first 48-72 hours after an AIS does not change the outcome. When patients have a BP of 220/120 mm Hg or higher, it is reasonable to lower blood pressure by 15% during the first 24 hours after onset of stroke. Starting or restarting antihypertensive therapy during hospitalization in patients with blood pressure higher than 140/90 mm Hg who are neurologically stable improves long-term blood pressure control and is considered a reasonable strategy.
For patients with noncardioembolic AIS, the use of antiplatelet agents rather than oral anticoagulation is recommended. Patients should be treated with aspirin 160-325 mg within 24-48 hours of presentation; in patients who cannot swallow safely, rectal or nasogastric administration is recommended. In patients with minor stroke, 21 days of dual-antiplatelet therapy (aspirin and clopidogrel) started within 24 hours can decrease stroke recurrence for the first 90 days after a stroke. This recommendation is based on a single study, the CHANCE trial, conducted in a homogeneous population in China, and its generalizability is not known. If a patient had an AIS while already taking aspirin, there is some evidence of a decreased risk of major cardiovascular events and recurrent stroke with switching to an alternative antiplatelet agent or to combination antiplatelet therapy. Because of methodologic issues in those studies, however, the guideline concludes that, for those already on aspirin, it is of unclear benefit to increase the dose of aspirin, switch to a different antiplatelet agent, or add a second antiplatelet agent. Switching to warfarin is not beneficial for secondary stroke prevention. For patients with AIS in the setting of atrial fibrillation, oral anticoagulation can be started within 4-14 days after the stroke; one study showed a hazard ratio of 0.53 for starting anticoagulation at 4-14 days, compared with less than 4 days, suggesting that anticoagulation should not begin before day 4.
Hyperglycemia should be controlled to a range of 140-180 mg/dL, because higher values are associated with worse outcomes. Oxygen should be used if needed to maintain oxygen saturation greater than 94%. High-intensity statin therapy should be used, and smoking cessation is strongly encouraged for those who use tobacco, with avoidance of secondhand smoke whenever possible.
Patients should be screened for dysphagia before taking anything by mouth, including medications. A nasogastric tube may be considered within the first 7 days if patients are dysphagic. Oral hygiene protocols may include antibacterial mouth rinse, systematic oral care, and decontamination gel to decrease the risk of pneumonia.
For deep vein thrombosis prophylaxis, intermittent pneumatic compression in addition to aspirin is reasonable; the benefit of prophylactic-dose subcutaneous heparin (unfractionated heparin or low-molecular-weight heparin) in immobile patients with AIS is not well established.
In the poststroke setting, patients should be screened for depression and, if appropriate, treated with antidepressants. Regular skin assessments are recommended with objective scales, and skin friction and pressure should be actively minimized with regular turning, good skin hygiene, and use of specialized mattresses, wheelchair cushions, and seating until mobility returns. Early rehabilitation for hospitalized stroke patients should be provided, but high-dose, very-early mobilization within 24 hours of stroke should not be done because it reduces the odds of a favorable outcome at 3 months.
Completing the diagnostic evaluation for the cause of stroke and decreasing the chance of future strokes should be part of the initial hospitalization. While MRI is more sensitive than CT for detecting AIS, routine use of MRI in all patients with AIS is not cost effective and therefore is not recommended. For patients with nondisabling AIS in the carotid territory who are candidates for carotid endarterectomy or stenting, noninvasive imaging of the cervical vessels should be performed within 24 hours of admission, with plans for carotid revascularization between 48 hours and 7 days if indicated. Cardiac monitoring for at least the first 24 hours of admission should be performed, primarily to look for atrial fibrillation as a cause of stroke. In some patients, prolonged cardiac monitoring may be reasonable. With prolonged cardiac monitoring, atrial fibrillation is newly detected in nearly a quarter of patients with stroke or TIA, but the effect on outcomes is uncertain. Routine use of echocardiography is not recommended but may be done in selected patients. All patients should be screened for diabetes. It is not clear whether screening for thrombophilic states is useful.
All patients should be counseled about stroke and provided education about how it will affect their lives. Following the acute medical stay, all patients will benefit from rehabilitation, with the greatest benefit associated with a program tailored to their needs and outcome goals.
The bottom line
Early management of stroke involves first determining whether someone is a candidate for reperfusion therapy with alteplase or thrombectomy and then, if not, admitting them to a monitored setting to screen for atrial fibrillation and evaluate for carotid stenosis. Patients should be evaluated for both depression and swallowing function, and there should be initiation of deep vein thrombosis prevention, appropriate management of elevated blood pressures, antiplatelet therapy, and statin therapy, as well as plans for rehabilitation services.
Reference
Powers WJ et al. on behalf of the American Heart Association Stroke Council. 2018 Guidelines for the early management of patients with acute ischemic stroke: A guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2018 Mar;49(3):e46-e110.
Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and an associate director of the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Callahan is an attending physician and preceptor in the family medicine residency program at Abington Jefferson Health.
(AIS). Conceptually, early management can be separated into initial triage and decisions about intervention to restore blood flow with thrombolysis or mechanical thrombectomy. If reperfusion therapy is not appropriate, then the focus is on management to minimize further damage from the stroke, decrease the likelihood of recurrence, and lessen secondary problems related to the stroke.
All patients with AIS should receive noncontrast CT to determine if there is evidence of a hemorrhagic stroke; if such evidence exists, then the patient is not a candidate for thrombolysis. Intravenous alteplase should be considered for patients who present within 3 hours of stroke onset and for selected patients presenting between 3-4.5 hours after stroke onset (for more details, see Table 6 in the guidelines). Selected patients with AIS who present within 6-24 hours of the last time they were known to be normal and who have large vessel occlusion in the anterior circulation may be candidates for mechanical thrombectomy in specialized centers. Patients who are not candidates for acute interventions should then be managed according to early stroke management guidelines.
Screening for adolescent idiopathic scoliosis
The United States Preventive Services Task Force (USPSTF) has issued recommendations on screening for idiopathic scoliosis in asymptomatic children and adolescents aged 10-18 years.1 This recommendation concluded that the current evidence on the benefits and harms of screening is insufficient (I statement) and updated its 2004 recommendation against routine screening, in which it had concluded that the harms of screening exceeded the potential benefits (D recommendation).
Importance
Screening methods
The USPSTF concluded that currently available screening tests can accurately detect adolescent idiopathic scoliosis. Screening methods include visual inspection using the forward bend test; scoliometer measurement of the angle of trunk rotation during the forward bend test, with a rotation of 5-7 degrees prompting referral for radiography; and Moiré topography, which enumerates asymmetric contour lines on the back, with values greater than 2 prompting referral for radiography.
The USPSTF reviewed seven fair-quality observational studies (n = 447,243) and concluded that screening with a combination of the forward bend test, scoliometer measurement, and Moiré topography had the highest sensitivity (93.8%) and specificity (99.2%), a low false-negative rate (6.2%), the lowest false-positive rate (0.8%), and the highest positive predictive value (81%). Sensitivity was lower when screening programs used only one or two screening tests, and single screening tests were associated with the highest false-positive rates.
In general, the potential harms associated with false-positive results include psychological harm, chest radiation exposure, and other unnecessary treatment, but the USPSTF did not find evidence on the direct harms of screening.
Effectiveness of intervention or treatment
Bracing: The USPSTF found five studies (n = 651) that evaluated the effectiveness of treatment with three different types of braces. The average ages of participants ranged from 12 to 13 years, and their curvature severity varied from Cobb angles of 20 degrees to 30 degrees. The largest study (n = 242) was a good-quality, international, controlled clinical trial known as the Bracing in Adolescent Idiopathic Scoliosis Trial; it demonstrated significant benefit and quality-of-life outcomes associated with bracing for 18 hours/day. In this study, the rate of treatment success in the as-treated analysis was 72% in the intervention group and 48% in the control group. The rate of treatment success in the intention-to-treat analysis was 75% in the intervention group and 42% in the control group. The number needed to treat was three to prevent one case of curvature progression past 50 degrees.
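The number needed to treat follows directly from the intention-to-treat success rates reported above, as the reciprocal of the absolute risk reduction:

```latex
\text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.75 - 0.42} = \frac{1}{0.33} \approx 3
```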
Exercise: The USPSTF found just two trials (n = 184) that evaluated the effectiveness of tailored physiotherapeutic, scoliosis-specific exercise treatments. The participants were older than 10 years and had Cobb angles ranging from 10 degrees to 25 degrees. At the 12-month follow-up, the studies showed significant improvement, including those in quality-of-life measures. In one of the trials, the intervention group had a Cobb angle reduction of 4.9 degrees while the control group had an increase of 2.8 degrees.
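In that trial, the between-group difference in curve magnitude at 12 months is the sum of the reduction seen with exercise and the progression seen with observation:

```latex
4.9^{\circ} + 2.8^{\circ} = 7.7^{\circ} \text{ in favor of the exercise group}
```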
Harms: Only one good-quality study (n = 242) reported harms of bracing, which include skin problems, body pain, physical limitations, anxiety, and depression. The USPSTF did not find any studies that assessed the harms of treatment with exercise or surgery.
Association between spinal curvature severity and adult health outcomes
The USPSTF did not find any studies that directly addressed whether changes in the severity of spinal curvature in adolescence resulted in changes in adult health outcomes. The USPSTF did review two fair-quality retrospective, observational, long-term, follow-up analyses (n = 339) of adults diagnosed with idiopathic scoliosis in adolescence and treated with either bracing or surgery. Quality of life measurements, pulmonary consequences, and pregnancy outcomes were not significantly different between the two treatment groups or between those treated and those simply observed. However, those treated with bracing did report more negative treatment experience and body distortion.
Recommendation of others
The Scoliosis Research Society, American Academy of Orthopedic Surgeons, Pediatric Orthopedic Society of North America, and American Academy of Pediatrics issued a joint position statement in September 2015 recommending that screening examinations for scoliosis should be performed for females at ages 10 and 12 years and for males at either 13 or 14 years.2
Their rationale, articulated in the statement and in an editorial in JAMA accompanying the publication of the USPSTF statement, is primarily based on findings in the Bracing in Adolescent Idiopathic Scoliosis Trial that showed a 56% decrease in the rate of progression of moderate curves to greater than 50 degrees. The evidence that intervention works – along with concerns about costs, family burden, loss of school time, risks of surgical complications, and the 22% rate of long-term revision surgery – makes avoidance of curve progression in scoliosis a high-value issue. In addition, they reasoned, the screening trials from which the false-positive values were derived were primarily school-based screening programs, not screening done in physician offices.
The Bottom Line
All organizations that weigh in on screening for scoliosis now agree on the benefits of bracing to slow curvature progression. They differ on the value they assign to avoiding surgery, to the effectiveness of screening programs in identifying scoliosis, and to the long-term effects of avoiding curvature progression.
Although the joint statement made by pediatric orthopedic societies and the American Academy of Pediatrics had recommended screening examinations, the USPSTF concluded that the current evidence is insufficient and that the balance of benefits and harms of screening for adolescent (aged 10-18 years) idiopathic scoliosis (Cobb angle greater than 10 degrees) cannot be determined, giving an “I” recommendation.
Dr. Aarisha Shrestha is a first-year resident in the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and associate director of the family medicine residency program at Abington Jefferson Health.
References
1. US Preventive Services Task Force. JAMA. 2018;319(2):165–72.
2. Hresko MT et al. SRS/POSNA/AAOS/AAP position statement: Screening for the early detection of idiopathic scoliosis in adolescents. 2015. Accessed December 8, 2017.
The United States Preventive Services Task Force (USPSTF) has issued recommendations on screening for idiopathic scoliosis in asymptomatic children and adolescents aged 10-18 years.1 This recommendation concluded that the current evidence on the benefits and harms of screening is insufficient (I statement) and updated its 2004 recommendation against routine screening, in which it had concluded that the harms of screening exceeded the potential benefits (D recommendation).
Importance
Screening methods
The USPSTF concluded that currently available screening tests can accurately detect adolescent idiopathic scoliosis. Screening methods include visual inspection using the forward bend test, use of scoliometer measurement of the angle of trunk rotation during forward bend test with a rotation of 5 degrees–7 degrees recommended to be referred for radiography, and Moiré topography that enumerates asymmetric contour lines on the back (values greater than 2 are referred to radiography).
The USPSTF reviewed seven fair-quality observational studies (n = 447,243) and concluded that screening with a combination of forward bend test, scoliometer measurement and that Moiré topography had the highest sensitivity (93.8%) and specificity (99.2%), a low false-negative rate (6.2%), the lowest false-positive rate (0.8%), and the highest positive predictive value (81%). Sensitivity was lower when screening programs used only one or two screening tests, and single screening tests were associated with highest false-positive rates.
In general, the potential harms associated with false-positive results include psychological harm, chest radiation exposure, and other unnecessary treatment, but the USPSTF did not find evidence on the direct harms of screening.
Effectiveness of intervention or treatment
Bracing: The USPSTF found five studies (n = 651) that evaluated the effectiveness of treatment with three different types of braces. The average ages of participants ranged from 12 to 13 years, and their curvature severity varied from Cobb angle of 20 degrees to 30 degrees. The largest study (n = 242) was a good-quality, international, controlled clinical trial known as the Bracing in Adolescent Idiopathic Scoliosis Trial; it demonstrated significant benefit and quality-of-life outcomes associated with bracing for 18 hours/day. In this study, the rate of treatment success in the as-treated analysis was 72% in the intervention group and 48% in the control group. The rate of treatment success in the intention-to-treat analysis was 75% in the intervention group and 42% in the control group. The number needed to treat was three to prevent one case of curvature progression past 50%.
Exercise: The USPSTF found just two trials (n = 184) that evaluated the effectiveness of tailored physiotherapeutic, scoliosis-specific exercise treatments. The participants were older than 10 years and had Cobb angles ranging from 10 degrees to 25 degrees. At the 12-month follow-up, the studies showed significant improvement, including those in quality-of-life measures. In one of the trials, the intervention group had a Cobb angle reduction of 4.9 degrees while the control group had an increase of 2.8 degrees.
Harms: Only one good-quality study (n = 242) reported harms of bracing, which include skin problems, body pain, physical limitations, anxiety, and depression. The USPSTF did not find any studies that assessed the harms of treatment with exercise or surgery.
Association between spinal curvature severity and adult health outcomes
The United States Preventive Services Task Force (USPSTF) has issued recommendations on screening for idiopathic scoliosis in asymptomatic children and adolescents aged 10-18 years.1 The new statement concludes that the current evidence on the benefits and harms of screening is insufficient (I statement); it updates the 2004 recommendation against routine screening, in which the USPSTF had concluded that the harms of screening exceeded the potential benefits (D recommendation).
Importance
Screening methods
The USPSTF concluded that currently available screening tests can accurately detect adolescent idiopathic scoliosis. Screening methods include visual inspection using the forward bend test; scoliometer measurement of the angle of trunk rotation during the forward bend test, with rotation of 5 degrees to 7 degrees prompting referral for radiography; and Moiré topography, which counts asymmetric contour lines on the back, with values greater than 2 prompting referral for radiography.
The USPSTF reviewed seven fair-quality observational studies (n = 447,243) and concluded that screening with a combination of the forward bend test, scoliometer measurement, and Moiré topography had the highest sensitivity (93.8%) and specificity (99.2%), a low false-negative rate (6.2%), the lowest false-positive rate (0.8%), and the highest positive predictive value (81%). Sensitivity was lower when screening programs used only one or two screening tests, and single screening tests were associated with the highest false-positive rates.
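The link between a test's sensitivity and specificity and its positive predictive value can be made concrete with a short calculation. Note that PPV depends on the prevalence of scoliosis in the screened population; the prevalence figures below are illustrative assumptions, not values reported in the evidence review:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence            # screen-positive with scoliosis
    false_pos = (1 - specificity) * (1 - prevalence)  # screen-positive without it
    return true_pos / (true_pos + false_pos)

# Combined screen from the review: sensitivity 93.8%, specificity 99.2%.
# The 2% and 4% prevalence values are assumed for illustration only.
print(f"PPV at 2% prevalence: {ppv(0.938, 0.992, 0.02):.1%}")  # 70.5%
print(f"PPV at 4% prevalence: {ppv(0.938, 0.992, 0.04):.1%}")  # 83.0%
```

Even with identical test accuracy, PPV rises with the prevalence of disease in the screened group, which is one reason results from broad school-based screening programs may not carry over directly to patients evaluated in physician offices.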
In general, the potential harms associated with false-positive results include psychological harm, chest radiation exposure, and other unnecessary treatment, but the USPSTF did not find evidence on the direct harms of screening.
Effectiveness of intervention or treatment
Bracing: The USPSTF found five studies (n = 651) that evaluated the effectiveness of treatment with three different types of braces. The average ages of participants ranged from 12 to 13 years, and their curvature severity varied from a Cobb angle of 20 degrees to 30 degrees. The largest study (n = 242) was a good-quality, international, controlled clinical trial known as the Bracing in Adolescent Idiopathic Scoliosis Trial; it demonstrated significant benefit, including in quality-of-life outcomes, associated with bracing for 18 hours/day. In this study, the rate of treatment success in the as-treated analysis was 72% in the intervention group and 48% in the control group. The rate of treatment success in the intention-to-treat analysis was 75% in the intervention group and 42% in the control group. The number needed to treat to prevent one case of curvature progression past 50 degrees was three.
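The reported number needed to treat follows directly from the intention-to-treat success rates above. A minimal sketch of the arithmetic (rounding to the nearest whole patient is our assumption about the reporting convention):

```python
# Intention-to-treat success rates from the Bracing in Adolescent
# Idiopathic Scoliosis Trial, as summarized above.
success_braced = 0.75    # treatment success with bracing
success_control = 0.42   # treatment success with observation alone

# The absolute risk reduction in curve progression equals the
# absolute difference in treatment success rates.
arr = success_braced - success_control           # 0.33
nnt = 1 / arr                                    # ~3.03

print(f"Absolute risk reduction: {arr:.0%}")     # 33%
print(f"Number needed to treat:  {round(nnt)}")  # 3
```

Running the same arithmetic on the as-treated rates (72% vs. 48%) gives a somewhat higher figure, illustrating why the analysis population matters when quoting an NNT.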
Exercise: The USPSTF found just two trials (n = 184) that evaluated the effectiveness of tailored physiotherapeutic, scoliosis-specific exercise treatments. The participants were older than 10 years and had Cobb angles ranging from 10 degrees to 25 degrees. At the 12-month follow-up, the studies showed significant improvement, including in quality-of-life measures. In one of the trials, the intervention group had a Cobb angle reduction of 4.9 degrees, while the control group had an increase of 2.8 degrees.
Harms: Only one good-quality study (n = 242) reported harms of bracing, which included skin problems, body pain, physical limitations, anxiety, and depression. The USPSTF did not find any studies that assessed the harms of treatment with exercise or surgery.
Association between spinal curvature severity and adult health outcomes
The USPSTF did not find any studies that directly addressed whether changes in the severity of spinal curvature in adolescence resulted in changes in adult health outcomes. The USPSTF did review two fair-quality retrospective, observational, long-term, follow-up analyses (n = 339) of adults diagnosed with idiopathic scoliosis in adolescence and treated with either bracing or surgery. Quality-of-life measurements, pulmonary consequences, and pregnancy outcomes were not significantly different between the two treatment groups or between those treated and those simply observed. However, those treated with bracing did report more negative treatment experiences and greater body distortion.
Recommendation of others
The Scoliosis Research Society, American Academy of Orthopaedic Surgeons, Pediatric Orthopaedic Society of North America, and American Academy of Pediatrics issued a joint position statement in September 2015 recommending that screening examinations for scoliosis be performed for females at ages 10 and 12 years and for males once at age 13 or 14 years.2
Their rationale, articulated in the statement and in an editorial in JAMA accompanying the publication of the USPSTF statement, rests primarily on findings from the Bracing in Adolescent Idiopathic Scoliosis Trial, which showed a 56% decrease in the rate of progression of moderate curves to greater than 50 degrees. The evidence that intervention works – along with concerns about costs, family burden, loss of school time, risks of surgical complications, and a 22% rate of long-term revision surgery – makes avoiding progression of scoliotic curves a high-value issue. In addition, they reasoned, the trials from which the false-positive values were derived involved primarily school-based screening programs rather than screening in physician offices.
The Bottom Line
All organizations that weigh in on screening for scoliosis now agree on the benefits of bracing to slow curvature progression. They differ on the value they assign to avoiding surgery, to the effectiveness of screening programs in identifying scoliosis, and to the long-term effects of avoiding curvature progression.
Although the joint statement made by pediatric orthopedic societies and the American Academy of Pediatrics had recommended screening examinations, the USPSTF concluded that the current evidence is insufficient and that the balance of benefits and harms of screening for adolescent (aged 10-18 years) idiopathic scoliosis (Cobb angle greater than 10 degrees) cannot be determined, giving an “I” recommendation.
Dr. Aarisha Shrestha is a first-year resident in the family medicine residency program at Abington (Pa.) Jefferson Health. Dr. Skolnik is a professor of family and community medicine at Jefferson Medical College, Philadelphia, and associate director of the family medicine residency program at Abington Jefferson Health.
References
1. US Preventive Services Task Force. Screening for adolescent idiopathic scoliosis: US Preventive Services Task Force recommendation statement. JAMA. 2018;319(2):165-72.
2. Hresko MT et al. SRS/POSNA/AAOS/AAP position statement: Screening for the early detection for idiopathic scoliosis in adolescents. 2015. Accessed December 8, 2017.