How Extreme Rainfall Amplifies Health Risks
Climate change is intensifying precipitation variability, driven by extreme daily and overall rainfall events. Awareness of the effects of these events is crucial for understanding the complex health consequences of climate change. Physicians have long advised patients to move to a more favorable climate, though such recommendations were rarely based on precise scientific knowledge; the benefits of a change of environment were often so evident as to be indisputable.
Today, advanced models, satellite imagery, and biological approaches such as environmental epigenetics are enhancing our understanding of health risks related to climate change.
Extreme Rainfall and Health
The increase in precipitation variability is linked to climate warming, which leads to higher atmospheric humidity and more extreme rainfall events. These events can cause rapid weather changes, increase interactions with harmful aerosols, and raise the risk for various cardiovascular and respiratory conditions. However, a full understanding of the association between rain and health has been hindered by conflicting results and methodological limitations in existing studies, including narrow geographical coverage and short observation periods.
The association between rainfall intensity and health effects is likely nonlinear. Moderate precipitation can mitigate summer heat and help reduce air pollution, effects that may lower some environmental health risks. Conversely, intense, low-frequency, short-duration rainfall events can be particularly harmful: They can trigger rapid weather changes, promote the proliferation of pathogens, and increase exposure to various pollutants, potentially exacerbating existing health conditions.
Rain and Mortality
Using an intensity-duration-frequency model of three rainfall indices (high intensity, low frequency, short duration), a study published in October 2024 combined these with mortality data from 34 countries or regions. Researchers estimated associations between mortality (all cause, cardiovascular, and respiratory) and rainfall events with different return periods (the average time expected before an extreme event of a certain magnitude occurs again) and crucial effect modifiers, including climatic, socioeconomic, and urban environmental conditions.
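The return-period concept lends itself to a simple numerical sketch. As a purely illustrative aside (not drawn from the study, whose intensity-duration-frequency model is more elaborate), a return period is the reciprocal of the annual probability that an event of at least that magnitude occurs:

```python
def return_period(annual_exceedance_prob: float) -> float:
    """Return period (years) = 1 / annual exceedance probability.

    Illustrative only: the study's intensity-duration-frequency
    model estimates these thresholds far more rigorously.
    """
    if not 0.0 < annual_exceedance_prob <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    return 1.0 / annual_exceedance_prob

# A rainfall magnitude exceeded in 20% of years recurs,
# on average, once every 5 years.
print(return_period(0.20))  # -> 5.0
```

So the study's "5-year return period" events are, roughly, rainfall magnitudes with about a 20% chance of being exceeded in any given year.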
The analysis included 109,954,744 deaths from all causes; 31,164,161 cardiovascular deaths; and 11,817,278 respiratory deaths. During the study period, from 1980 to 2020, a total of 50,913 rainfall events with a 1-year return period, 8362 events with a 2-year return period, and 3301 events with a 5-year return period were identified.
The most striking finding was a global positive association between all-cause mortality and extreme rainfall events with a 5-year return period. One day of extreme rainfall with a 5-year return period was associated with a cumulative relative risk (RRc) of 1.08 (95% CI, 1.05-1.11) for daily mortality from all causes. Rainfall events with a 2-year return period were associated with increased daily respiratory mortality (RRc, 1.14), while no significant effect was observed for cardiovascular mortality at that return period. Rainfall events with a 5-year return period were associated with an increased risk for both cardiovascular mortality (RRc, 1.05) and respiratory mortality (RRc, 1.29), with the increase in respiratory mortality being significantly larger.
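For readers who prefer these figures as percentage increases, a cumulative relative risk translates directly into excess daily risk (an illustrative calculation, not part of the paper):

```python
def excess_risk_pct(rr: float) -> float:
    """Convert a relative risk into the percentage increase
    in risk relative to baseline (RR 1.0 = no change)."""
    return (rr - 1.0) * 100.0

# Cumulative relative risks reported for 5-year return-period events
for label, rr in [("all-cause", 1.08),
                  ("cardiovascular", 1.05),
                  ("respiratory", 1.29)]:
    print(f"{label}: +{excess_risk_pct(rr):.0f}% daily mortality risk")
```

An RRc of 1.29 for respiratory mortality thus corresponds to a 29% higher daily risk following one day of such extreme rainfall.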
Points of Concern
According to the authors, moderate to high rainfall can exert protective effects through two main mechanisms: Improving air quality (rainfall can reduce the atmospheric concentration of particulate matter 2.5 μm or less in diameter) and behavioral changes in people (more time spent in enclosed environments, reducing direct exposure to outdoor air pollution and nonoptimal temperatures). As rainfall intensity increases, these initial protective effects may be overshadowed by a cascade of negative impacts, including:
- Critical resource disruptions: Intense rainfall can cause severe disruptions to access to healthcare, infrastructure damage including power outages, and compromised water and food quality.
- Physiological effects: Increased humidity levels facilitate the growth of airborne pathogens, potentially triggering allergic reactions and respiratory issues, particularly in vulnerable individuals. Rapid shifts in atmospheric pressure and temperature fluctuations can lead to cardiovascular and respiratory complications.
- Indirect effects: Extreme rainfall can have profound effects on mental health, inducing stress and anxiety that may exacerbate pre-existing mental health conditions and indirectly contribute to increased overall mortality from nonexternal causes.
The intensity-response curves for the health effects of heavy rainfall showed a nonlinear trend, transitioning from a protective effect at moderate levels of rainfall to a risk for severe harm when rainfall intensity became extreme. Additionally, the significant effects of extreme events were modified by various types of climate and were more pronounced in areas characterized by low variability in precipitation or sparse vegetation cover.
The study demonstrated that local factors, such as climate type and vegetation cover, can influence precipitation-related cardiovascular, respiratory, and all-cause mortality. The findings may help physicians convey to their patients the impact of climate change on their health.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
When Childhood Cancer Survivors Face Sexual Challenges
Childhood cancers represent a diverse group of neoplasms, and thanks to advances in treatment, survival rates have improved significantly. Today, 80%-85% of children diagnosed with cancer in developed countries survive into adulthood.
This increase in survival has brought new challenges, however. Compared with the general population, childhood cancer survivors (CCS) are at a notably higher risk for early mortality, developing secondary cancers, and experiencing various long-term clinical and psychosocial issues stemming from their disease or its treatment.
Long-term follow-up care for CCS is a complex and evolving field. Despite ongoing efforts to establish global and national guidelines, current evidence indicates that the care and management of these patients remain suboptimal.
The disruptions caused by cancer and its treatment can interfere with normal physiological and psychological development, leading to issues with sexual function. This aspect of health is critical as it influences not just physical well-being but also psychosocial, developmental, and emotional health.
Characteristics and Mechanisms
Sexual functioning encompasses the physiological and psychological aspects of sexual behavior, including desire, arousal, orgasm, sexual pleasure, and overall satisfaction.
As CCS reach adolescence or adulthood, they often face sexual and reproductive issues, particularly as they enter romantic relationships.
Sexual functioning is a complex process that relies on the interaction of various factors, including physiological health, psychosexual development, romantic relationships, body image, and desire.
Despite its importance, the impact of childhood cancer on sexual function is often overlooked, even though cancer and its treatments can have lifelong effects.
Sexual Function in CCS
A recent review aimed to summarize the existing research on sexual function among CCS, highlighting assessment tools, key stages of psychosexual development, common sexual problems, and the prevalence of sexual dysfunction.
The review included 22 studies published between 2000 and 2022: two qualitative, six cohort, and 14 cross-sectional studies.
Most CCS reached all key stages of psychosexual development at an average age of 29.8 years. Although some milestones were achieved later than is typical, many survivors felt they reached these stages at the appropriate time. Sexual initiation was less common among those who had undergone intensive neurotoxic treatments, such as those diagnosed with brain tumors or leukemia in childhood.
In a cross-sectional study of CCS aged 17-39 years, about one third had never engaged in sexual intercourse, 41.4% reported never experiencing sexual attraction, 44.8% were dissatisfied with their sex lives, and many rarely felt sexually attractive to others. Another study found that common issues among CCS included a lack of interest in sex (30%), difficulty enjoying sex (24%), and difficulty becoming aroused (23%). However, comparing and analyzing these problems was challenging due to the lack of standardized assessment criteria.
The prevalence of sexual dysfunction among CCS ranged from 12.3% to 46.5%. For males, the prevalence ranged from 12.3% to 54.0%, while for females, it ranged from 19.9% to 57.0%.
Factors Influencing Sexual Function
The review identified four categories of factors influencing sexual function in CCS: Demographic, treatment-related, psychological, and physiological.
Demographic factors: Gender, age, education level, relationship status, income level, and race all play roles in sexual function.
Female survivors reported more severe sexual dysfunction and poorer sexual health than did male survivors. Age at cancer diagnosis, age at evaluation, and the time since diagnosis were closely linked to sexual experiences. Patients diagnosed with cancer during childhood tended to report better sexual function than those diagnosed during adolescence.
Treatment-related factors: The type of cancer and intensity of treatment, along with surgical history, were significant factors. Surgeries involving the spinal cord or sympathetic nerves, as well as a history of prostate or pelvic surgery, were strongly associated with erectile dysfunction in men. In women, pelvic surgeries and treatments to the pelvic area were commonly linked to sexual dysfunction.
The association between treatment intensity and sexual function was noted across several studies, although the results were not always consistent. For example, testicular radiation above 10 Gy was positively correlated with sexual dysfunction. Women who underwent more intensive treatments were more likely to report issues in multiple areas of sexual function, while men in this group were less likely to have children.
Among female CCS, certain types of cancer, such as germ cell tumors, renal tumors, and leukemia, present a higher risk for sexual dysfunction. Women who had CNS tumors in childhood frequently reported problems like difficulty in sexual arousal, low sexual satisfaction, infrequent sexual activity, and fewer sexual partners, compared with survivors of other cancers. Survivors of acute lymphoblastic leukemia and those who underwent hematopoietic stem cell transplantation (HSCT) also showed varying degrees of impaired sexual function, compared with the general population. The HSCT group showed significant testicular damage, including reduced testicular volumes, low testosterone levels, and low sperm counts.
Psychological factors: These factors, such as emotional distress, play a significant role in sexual dysfunction among CCS. Symptoms like anxiety, nervousness during sexual activity, and depression are commonly reported by those with sexual dysfunction. The connection between body image and sexual function is complex. Many CCS with sexual dysfunction express concern about how others, particularly their partners, perceived their altered body image due to cancer and its treatment.
Physiological factors: In male CCS, low serum testosterone levels and low lean muscle mass are linked to an increased risk for sexual dysfunction. Treatments involving alkylating agents or testicular radiation, and surgery or radiotherapy targeting the genitourinary organs or the hypothalamic-pituitary region, can lead to various physiological and endocrine disorders, contributing to sexual dysfunction. Despite these risks, there is a lack of research evaluating sexual function through the lens of the hypothalamic-pituitary-gonadal axis and neuroendocrine pathways.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Childhood cancers represent a diverse group of neoplasms, and thanks to advances in treatment, survival rates have improved significantly. Today, more than 80%-85% of children diagnosed with cancer in developed countries survive into adulthood.
This increase in survival has brought new challenges, however. Compared with the general population, childhood cancer survivors (CCS) are at a notably higher risk for early mortality, developing secondary cancers, and experiencing various long-term clinical and psychosocial issues stemming from their disease or its treatment.
Long-term follow-up care for CCS is a complex and evolving field. Despite ongoing efforts to establish global and national guidelines, current evidence indicates that the care and management of these patients remain suboptimal.
The disruptions caused by cancer and its treatment can interfere with normal physiological and psychological development, leading to issues with sexual function. This aspect of health is critical as it influences not just physical well-being but also psychosocial, developmental, and emotional health.
Characteristics and Mechanisms
Sexual functioning encompasses the physiological and psychological aspects of sexual behavior, including desire, arousal, orgasm, sexual pleasure, and overall satisfaction.
As CCS reach adolescence or adulthood, they often face sexual and reproductive issues, particularly as they enter romantic relationships.
Sexual functioning is a complex process that relies on the interaction of various factors, including physiological health, psychosexual development, romantic relationships, body image, and desire.
Despite its importance, the impact of childhood cancer on sexual function is often overlooked, even though cancer and its treatments can have lifelong effects.
Sexual Function in CCS
A recent review aimed to summarize the existing research on sexual function among CCS, highlighting assessment tools, key stages of psychosexual development, common sexual problems, and the prevalence of sexual dysfunction.
The review included 22 studies published between 2000 and 2022: two qualitative, six cohort, and 14 cross-sectional studies.
Most CCS reached all key stages of psychosexual development at an average age of 29.8 years. Although some milestones were achieved later than is typical, many survivors felt they reached these stages at the appropriate time. Sexual initiation was less common among those who had undergone intensive neurotoxic treatments, such as those diagnosed with brain tumors or leukemia in childhood.
In a cross-sectional study of CCS aged 17-39 years, about one third had never engaged in sexual intercourse, 41.4% reported never experiencing sexual attraction, 44.8% were dissatisfied with their sex lives, and many rarely felt sexually attractive to others. Another study found that common issues among CCS included a lack of interest in sex (30%), difficulty enjoying sex (24%), and difficulty becoming aroused (23%). However, comparing and analyzing these problems was challenging due to the lack of standardized assessment criteria.
The prevalence of sexual dysfunction among CCS ranged from 12.3% to 46.5%. For males, the prevalence ranged from 12.3% to 54.0%, while for females, it ranged from 19.9% to 57.0%.
Factors Influencing Sexual Function
The review identified the following four categories of factors influencing sexual function in CCS: Demographic, treatment-related, psychological, and physiological.
Demographic factors: Gender, age, education level, relationship status, income level, and race all play roles in sexual function.
Female survivors reported more severe sexual dysfunction and poorer sexual health than did male survivors. Age at cancer diagnosis, age at evaluation, and the time since diagnosis were closely linked to sexual experiences. Patients diagnosed with cancer during childhood tended to report better sexual function than those diagnosed during adolescence.
Treatment-related factors: The type of cancer and intensity of treatment, along with surgical history, were significant factors. Surgeries involving the spinal cord or sympathetic nerves, as well as a history of prostate or pelvic surgery, were strongly associated with erectile dysfunction in men. In women, pelvic surgeries and treatments to the pelvic area were commonly linked to sexual dysfunction.
The association between treatment intensity and sexual function was noted across several studies, although the results were not always consistent. For example, testicular radiation above 10 Gy was positively correlated with sexual dysfunction. Women who underwent more intensive treatments were more likely to report issues in multiple areas of sexual function, while men in this group were less likely to have children.
Among female CCS, certain types of cancer, such as germ cell tumors, renal tumors, and leukemia, present a higher risk for sexual dysfunction. Women who had CNS tumors in childhood frequently reported problems like difficulty in sexual arousal, low sexual satisfaction, infrequent sexual activity, and fewer sexual partners, compared with survivors of other cancers. Survivors of acute lymphoblastic leukemia and those who underwent hematopoietic stem cell transplantation (HSCT) also showed varying degrees of impaired sexual function, compared with the general population. The HSCT group showed significant testicular damage, including reduced testicular volumes, low testosterone levels, and low sperm counts.
Psychological factors: These factors, such as emotional distress, play a significant role in sexual dysfunction among CCS. Symptoms like anxiety, nervousness during sexual activity, and depression are commonly reported by those with sexual dysfunction. The connection between body image and sexual function is complex. Many CCS with sexual dysfunction express concern about how others, particularly their partners, perceive their body image as altered by cancer and its treatment.
Physiological factors: In male CCS, low serum testosterone levels and low lean muscle mass are linked to an increased risk for sexual dysfunction. Treatments involving alkylating agents or testicular radiation, and surgery or radiotherapy targeting the genitourinary organs or the hypothalamic-pituitary region, can lead to various physiological and endocrine disorders, contributing to sexual dysfunction. Despite these risks, there is a lack of research evaluating sexual function through the lens of the hypothalamic-pituitary-gonadal axis and neuroendocrine pathways.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Updated Guideline Reflects New Drugs for Type 2 Diabetes
Type 2 diabetes (T2D) is the most common form of diabetes, representing more than 90% of all cases worldwide. The prevalence of T2D is increasing globally, mainly because of behavioral and social factors related to obesity, diet, and physical activity. The International Diabetes Federation estimated in its 2021 report that 537 million adults aged between 20 and 79 years have been diagnosed with diabetes worldwide. The organization predicts an increase to 643 million by 2030 and 743 million by 2045.
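As a back-of-envelope check, the projections quoted above imply a compound annual growth rate of roughly 2% per year to 2030, slowing to about 1% per year thereafter. The short Python sketch below makes that arithmetic explicit; the function and variable names are illustrative only.

```python
# Back-of-envelope check of the IDF projections cited above:
# 537 million (2021) -> 643 million (2030) -> 743 million (2045).
# Function and variable names are illustrative only.
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two point estimates."""
    return (end / start) ** (1 / years) - 1

r_2030 = implied_annual_growth(537, 643, 2030 - 2021)
r_2045 = implied_annual_growth(643, 743, 2045 - 2030)
print(f"2021-2030: {r_2030:.1%}/yr, 2030-2045: {r_2045:.1%}/yr")
```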
The main therapeutic goals for patients with T2D include adequate glycemic control and primary and secondary prevention of atherosclerotic cardiovascular and renal diseases, which together account for nearly half of all deaths among adults with T2D. Despite the multiple treatment options available, 16% of adults with T2D have inadequate glycemic control, with hemoglobin A1c levels greater than 9%, even though glycemic control was the focus of the 2017 guidelines of the American College of Physicians (ACP).
Therefore, the ACP deemed it necessary to update the previous guidelines, considering new evidence on the efficacy and harms of new pharmacologic treatments in adults with T2D with the goal of reducing the risk for all-cause mortality, cardiovascular morbidity, and progression of chronic kidney disease (CKD) in these patients.
New Drugs
The pharmacologic treatments that the ACP considered while updating its guidelines include glucagon-like peptide 1 (GLP-1) receptor agonists (that is, dulaglutide, exenatide, liraglutide, lixisenatide, and semaglutide), a GLP-1 receptor agonist and a glucose-dependent insulinotropic polypeptide receptor agonist (that is, tirzepatide), sodium-glucose cotransporter 2 (SGLT-2) inhibitors (that is, canagliflozin, dapagliflozin, empagliflozin, and ertugliflozin), dipeptidyl peptidase 4 (DPP-4) inhibitors (that is, alogliptin, linagliptin, saxagliptin, and sitagliptin), and long-acting insulins (that is, insulin glargine and insulin degludec).
Recommendations
The ACP recommends adding an SGLT-2 inhibitor or a GLP-1 agonist to metformin and lifestyle modifications in adults with inadequately controlled T2D (strong recommendation, high certainty of evidence). Use an SGLT-2 inhibitor to reduce the risk for all-cause mortality, major adverse cardiovascular events (MACE), CKD progression, and hospitalization resulting from heart failure, according to the document. Use a GLP-1 agonist to reduce the risk for all-cause mortality, MACE, and strokes.
SGLT-2 inhibitors and GLP-1 agonists are the only newer pharmacologic treatments for T2D shown to reduce all-cause mortality compared with placebo or usual care. In indirect comparisons, SGLT-2 inhibitors probably reduce the risk for hospitalization resulting from heart failure, while GLP-1 agonists probably reduce the risk for stroke.
Neither class of drugs causes severe hypoglycemia, but both are associated with various harms, as reported in specific warnings. Both classes of drugs lead to weight loss.
Compared with long-acting insulins, SGLT-2 inhibitors can reduce, and GLP-1 agonists probably reduce, all-cause mortality. Compared with DPP-4 inhibitors, GLP-1 agonists probably reduce all-cause mortality.
SGLT-2 inhibitors probably reduce MACE compared with both DPP-4 inhibitors and sulfonylureas.
The ACP recommends against adding a DPP-4 inhibitor to metformin and lifestyle modifications in adults with inadequately controlled T2D to reduce morbidity and all-cause mortality (strong recommendation, high certainty of evidence).
Compared with usual therapy, DPP-4 inhibitors do not result in differences in all-cause mortality, MACE, myocardial infarction, stroke, hospitalization for chronic heart failure (CHF), CKD progression, or severe hypoglycemia. Compared with SGLT-2 inhibitors, DPP-4 inhibitors may increase hospitalization caused by CHF and probably increase the risk for MACE and CKD progression. Compared with GLP-1 agonists, they probably increase all-cause mortality and hospitalization caused by CHF and the risk for MACE. Metformin is the most common usual therapy in the studies considered.
Considerations for Practice
Metformin (unless contraindicated) and lifestyle modifications represent the first step in managing T2D in most patients, according to the ACP.
The choice of additional therapy requires a risk/benefit assessment and should be personalized on the basis of patient preferences, glycemic control goals, comorbidities, and the risk for hypoglycemia. SGLT-2 inhibitors can be added in patients with T2D and CHF or CKD, according to the ACP. GLP-1 agonists can be added in patients with T2D at increased risk for stroke or for whom total body weight loss is a significant therapeutic goal.
The A1c target should be considered between 7% and 8% in most adults with T2D, and de-escalation of pharmacologic treatments should be considered for A1c levels less than 6.5%. Self-monitoring of blood glucose may not be necessary in patients treated with metformin in combination with an SGLT-2 inhibitor or a GLP-1 agonist, according to the ACP.
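The thresholds described above can be condensed into a small decision sketch. The numeric cutoffs (a target of 7%-8% for most adults, de-escalation below 6.5%) come from the text; the function name and returned strings are illustrative, not guideline language.

```python
# A minimal sketch of the ACP A1c guidance summarized above. The numeric
# cutoffs come from the text; the function name and returned strings are
# illustrative, not guideline language.
def a1c_action(a1c_percent: float) -> str:
    if a1c_percent < 6.5:
        return "consider de-escalating pharmacologic treatment"
    if a1c_percent > 8.0:
        return "above target: reassess therapy"
    if a1c_percent >= 7.0:
        return "within the 7%-8% target for most adults with T2D"
    return "6.5%-7% gray zone: individualize"

print(a1c_action(6.2))  # consider de-escalating pharmacologic treatment
```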
The document also holds that, in cases of adequate glycemic control with the addition of an SGLT-2 inhibitor or a GLP-1 agonist, existing treatment with sulfonylureas or long-acting insulin should be reduced or stopped due to the increased risk for severe hypoglycemia.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
What Health Risks Do Microplastics Pose?
The annual production of plastic worldwide has increased exponentially from about 2 million tons in 1950 to 460 million tons in 2019, and current levels are expected to triple by 2060.
Plastic contains more than 10,000 chemicals, including carcinogenic substances and endocrine disruptors. Plastic and associated chemicals are responsible for widespread pollution, contaminating aquatic (marine and freshwater), terrestrial, and atmospheric environments globally.
Atmospheric concentrations of plastic particles are on the rise, to the extent that in a remote station in the Eastern Alps in Austria, the contribution of micro- and nanoplastics (MNPs) to organic matter was comparable to data collected at an urban site.
The ocean is the ultimate destination for much of the plastic. All oceans, on the surface and in the depths, contain plastic, which is even found in polar sea ice. Many plastics seem to resist decomposition in the ocean and could persist in the environment for decades. Macro- and microplastic (MP) particles have been identified in hundreds of marine species, including species consumed by humans.
The quantity and fate of MP particles (> 10 µm) and smaller nanoplastics (< 10 µm) in aquatic environments are poorly understood, but what is most concerning is their ability to cross biologic barriers and the potential harm associated with their mobility in biologic systems.
MNP Exposure
MNPs can originate from a wide variety of sources, including food, beverages, and food product packaging. Water bottles represent a significant source of ingestible MNPs in daily life. Recent estimates using stimulated Raman scattering imaging documented a concentration of approximately 2.4 ± 1.3 × 10⁵ MNP particles per liter of bottled water. Around 90% of these are nanoplastics, a count two to three orders of magnitude higher than previously reported for larger MPs.
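For a sense of scale, the figures quoted above translate into the following rough per-liter split. The 90% nanoplastic fraction is applied to the mean estimate only; the reported uncertainty (± 1.3 × 10⁵) is ignored for simplicity.

```python
# Rough split of the bottled-water estimate quoted above (2.4 x 10^5
# particles/L mean, ~90% nanoplastics); the uncertainty (+/- 1.3 x 10^5)
# is ignored for simplicity.
mean_particles_per_liter = 2.4e5
nano_fraction = 0.90

nanoplastics = round(mean_particles_per_liter * nano_fraction)
microplastics = round(mean_particles_per_liter) - nanoplastics
print(nanoplastics, microplastics)  # 216000 24000
```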
MNPs enter the body primarily through ingestion or inhalation. For example, MNPs can be ingested by drinking liquids or eating food that has been stored or heated in plastic containers from which they have leaked or by using toothpaste that contains them. Infants are exposed to MPs from artificial milk preparation in polypropylene baby bottles, with higher levels than previously detected and ranging from 14,600 to 4,550,000 particles per capita per day.
MNP and Biologic Systems
The possible formation of hetero-aggregates between nanoplastics and natural organic matter has long been recognized as a potential challenge in the analysis of nanoplastics and can influence toxicologic results in biologic exposure. The direct visualization of such hetero-aggregates in real-world samples supports these concerns, but the analysis of MNPs with traditional techniques remains challenging. Unlike engineered nanoparticles (prepared in the laboratory as model systems), the nanoplastics in the environment are label-free and exhibit significant heterogeneity in chemical composition and morphology.
A systematic analysis of evidence on the toxic effects of MNPs on murine models, however, showed that 52.78% of biologic endpoints (related to glucose metabolism, reproduction, oxidative stress, and lipid metabolism) were significantly affected by MNP exposure.
Between Risk and Toxicity
MNPs can enter the body in vivo through the digestive tract, the respiratory tract, and skin contact. On average, humans could ingest 0.1-5 g of MNPs per week through various exposure routes.
MNPs are a potential risk factor for cardiovascular disease, as suggested by a recent study of 257 patients with carotid atheromatous plaques. In 58.4% of cases, polyvinyl chloride was detected in the carotid artery plaque, at an average level of 5.2 ± 2.4 μg/mg of plaque. Patients with MNPs inside the atheroma had a higher risk (relative risk, 4.53) for a composite outcome of myocardial infarction, stroke, or death from any cause at 34 months of follow-up than participants in whom MNPs were not detectable in the plaque.
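For readers unfamiliar with the measure, a relative risk such as the 4.53 reported above is simply the ratio of event rates between exposed and unexposed groups. The counts below are hypothetical, chosen only to illustrate the formula; the study reports the RR itself, not these raw numbers.

```python
# How a relative risk (RR) such as the 4.53 reported above is derived
# from a 2x2 table. The counts here are hypothetical, for illustration
# only; the study reports the RR, not these raw numbers.
def relative_risk(events_exposed: int, n_exposed: int,
                  events_unexposed: int, n_unexposed: int) -> float:
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

rr = relative_risk(30, 150, 5, 107)  # hypothetical counts
print(round(rr, 2))  # 4.28
```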
The potential link between inflammatory bowel disease (IBD) and MPs has been hypothesized by a study that reported a higher fecal MP concentration in patients with IBD than in healthy individuals. Fecal MP level was correlated with disease severity.
However, these studies have not demonstrated a causal relationship between MNPs and disease, and the way MNPs may influence cellular functions and induce stress responses is not yet well understood.
Future Scenarios
Current evidence confirms the fragmentation of plastic beyond the micrometer level and has unequivocally detected nanoplastics in real samples. As with many other particle-size distributions in the natural world, nanoplastics vastly outnumber particles larger than a micron, even though they remain invisible to conventional imaging techniques.
The initial results of studies on MNPs in humans will stimulate future research on the amounts of MNPs that accumulate in tissue over a person’s lifetime. Researchers also will examine how the particles’ characteristics, including their chemical composition, size, and shape, can influence organs and tissues.
The way MNPs can cause harm, including through effects on the immune system and microbiome, will need to be clarified by investigating possible direct cytotoxic effects, consistent with the introductory statement of the Organization for Economic Cooperation and Development global policy forum on plastics, which states, “Plastic pollution is one of the great environmental challenges of the 21st century, causing wide-ranging damage to ecosystems and human health.”
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
The annual production of plastic worldwide has increased exponentially from about 2 million tons in 1950 to 460 million tons in 2019, and current levels are expected to triple by 2060.
Plastic contains more than 10,000 chemicals, including carcinogenic substances and endocrine disruptors. Plastic and associated chemicals are responsible for widespread pollution, contaminating aquatic (marine and freshwater), terrestrial, and atmospheric environments globally.
Atmospheric concentrations of plastic particles are on the rise, to the extent that at a remote station in the Eastern Alps in Austria, the contribution of micro- and nanoplastics (MNPs) to organic matter was comparable to that measured at an urban site.
The ocean is the ultimate destination for much of the plastic. All oceans, on the surface and in the depths, contain plastic, which is even found in polar sea ice. Many plastics seem to resist decomposition in the ocean and could persist in the environment for decades. Macro- and microplastic (MP) particles have been identified in hundreds of marine species, including species consumed by humans.
The quantity and fate of MP particles (> 10 µm) and smaller nanoplastics (< 10 µm) in aquatic environments are poorly understood, but what is most concerning is their ability to cross biologic barriers and the potential harm associated with their mobility in biologic systems.
MNP Exposure
MNPs can originate from a wide variety of sources, including food, beverages, and food product packaging. Water bottles represent a significant source of ingestible MNPs in daily life. Recent estimates using stimulated Raman scattering imaging documented an MNP concentration of approximately 2.4 ± 1.3 × 10⁵ particles per liter of bottled water. Around 90% of these were nanoplastics, a concentration two to three orders of magnitude higher than previously reported for larger MPs.
MNPs enter the body primarily through ingestion or inhalation. For example, MNPs can be ingested by drinking liquids or eating food that has been stored or heated in plastic containers from which they have leached, or by using toothpaste that contains them. Infants are exposed to MPs from formula prepared in polypropylene baby bottles, at levels higher than previously detected, ranging from 14,600 to 4,550,000 particles per capita per day.
MNP and Biologic Systems
The possible formation of hetero-aggregates between nanoplastics and natural organic matter has long been recognized as a potential challenge in the analysis of nanoplastics and can influence toxicologic results in biologic exposure. The direct visualization of such hetero-aggregates in real-world samples supports these concerns, but the analysis of MNPs with traditional techniques remains challenging. Unlike engineered nanoparticles (prepared in the laboratory as model systems), the nanoplastics in the environment are label-free and exhibit significant heterogeneity in chemical composition and morphology.
A systematic analysis of evidence on the toxic effects of MNPs in murine models, however, showed that 52.78% of biologic endpoints (related to glucose metabolism, reproduction, oxidative stress, and lipid metabolism) were significantly affected by MNP exposure.
Between Risk and Toxicity
MNPs can enter the body through the digestive tract, the respiratory tract, and skin contact. On average, humans may ingest 0.1-5 g of MNPs per week through various exposure routes.
MNPs are a potential risk factor for cardiovascular diseases, as suggested by a recent study of 257 patients with carotid atheromatous plaques. In 58.4% of cases, polyvinyl chloride was detected in the carotid artery plaque, at an average level of 5.2 ± 2.4 μg/mg of plaque. Patients with MNPs inside the atheroma had a higher risk (relative risk, 4.53) for a composite cardiovascular event of myocardial infarction, stroke, or death from any cause at 34 months of follow-up than participants in whom MNPs were not detectable inside the atheromatous plaque.
The potential link between inflammatory bowel disease (IBD) and MPs has been hypothesized by a study that reported a higher fecal MP concentration in patients with IBD than in healthy individuals. Fecal MP level was correlated with disease severity.
However, these studies have not demonstrated a causal relationship between MNPs and disease, and the way MNPs may influence cellular functions and induce stress responses is not yet well understood.
Future Scenarios
Current evidence confirms the fragmentation of plastic beyond the micrometer level and has unequivocally detected nanoplastics in real samples. As with many other particle-size distributions in the natural world, nanoplastics, despite being invisible to conventional imaging techniques, substantially outnumber particles above the micron scale.
The initial results of studies on MNPs in humans will stimulate future research on the amounts of MNPs that accumulate in tissue over a person’s lifetime. Researchers also will examine how the particles’ characteristics, including their chemical composition, size, and shape, can influence organs and tissues.
The way MNPs can cause harm, including through effects on the immune system and microbiome, will need to be clarified by investigating possible direct cytotoxic effects, consistent with the introductory statement of the Organization for Economic Cooperation and Development global policy forum on plastics, which states, “Plastic pollution is one of the great environmental challenges of the 21st century, causing wide-ranging damage to ecosystems and human health.”
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Understanding and Promoting Compassion in Medicine
In most Western countries, professional standards dictate that physicians should practice medicine with compassion. Patients also expect compassionate care from physicians because it represents a model capable of providing greater patient satisfaction, fostering better doctor-patient relationships, and enabling better psychological states among patients.
The etymology of the term “compassion” derives from the Latin roots “com,” meaning “together with,” and “pati,” meaning “to endure or suffer.” When discussing compassion, it is necessary to distinguish it from empathy, a term generally used to refer to cognitive or emotional processes in which the perspective of the other (in this case, the patient) is taken. Compassion implies or requires empathy and includes the desire to help or alleviate the suffering of others. Compassion in the medical context is likely a specific instance of a more complex adaptive system that has evolved, not only among humans, to motivate recognition and assistance when others suffer.
Compassion Fatigue
Physicians’ compassion is expected by patients and the profession. It is fundamental for effective clinical practice. Although compassion is central to medical practice, most research related to the topic has focused on “compassion fatigue,” which is understood as a specific type of professional burnout, as if physicians had a limited reserve of compassion that dwindles or becomes exhausted with use or overuse. This is one aspect of a much more complex problem, in which compassion represents the endpoint of a dynamic process that encompasses the influences of the physician, the patient, the clinic, and the institution.
Compassion Capacity: Conditioning Factors
Chronic exposure of physicians to conflicting work demands may be associated with the depletion of their psychological resources and, consequently, emotional and cognitive fatigue that can contribute to poorer work outcomes, including the ability to express compassion.
Rates of professional burnout in medicine are increasing. The driving factors of this phenomenon are largely rooted in organizations and healthcare systems and include excessive workloads, inefficient work processes, administrative burdens, and lack of input or control by physicians regarding issues concerning their work life. The outcome is often early retirement of physicians, an increasingly widespread phenomenon and a critical issue not only for the Italian National Health Service but also for other healthcare systems worldwide.
Organizational and Personal Values
There is no clear empirical evidence supporting the hypothesis that working in healthcare environments experienced as discrepant with one’s own values has negative effects on key professional outcomes. However, a study published in the Journal of Internal Medicine highlighted the overall negative effect of misalignment between system values and physicians’ personal values, including impaired ability to provide compassionate care, as well as reduced job satisfaction, burnout, absenteeism, and consideration of early retirement. Results from 1000 surveyed professionals indicate that physicians’ subjective competence in providing compassionate care may remain high, but their ability to express it is compromised. From data analysis, the authors hypothesize that when working in environments with discrepant values, occupational contingencies may repeatedly require physicians to set aside their personal values, which can lead them to refrain from using available skills in order to keep their performance in line with organizational requirements.
These results and hypotheses are not consistent with the notion of compassion fatigue as a reflection of the cost of care resulting from exposure to repeated suffering. Previous evidence shows that expressing compassion in healthcare facilitates greater understanding, suggesting that providing compassion does not impoverish physicians but rather supports them in the effectiveness of interventions and in their satisfaction.
In summary, this study suggests that what prevents compassion is the inability to provide it when hindered by factors related to the situation in which the physician operates. Improving compassion does not simply depend on motivating individual professionals to be more compassionate or on promoting fundamental skills, but probably on the creation of organizational and clinical conditions in which physician compassion can thrive.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
How Does Snoring Affect Cardiovascular Health?
Snoring is a common disorder that affects 20%-40% of the general population. The mechanism of snoring is the vibration of anatomical structures in the pharyngeal airways. The flutter of the soft palate explains the harsh aspect of the snoring sound, which occurs during natural sleep or drug-induced sleep. The presentation of snoring may vary throughout the night or between nights, with a subjective, and therefore inconsistent, assessment of its loudness.
Objective evaluation of snoring is important for clinical decision-making and predicting the effect of therapeutic interventions. It also provides information regarding the site and degree of upper airway obstruction. Snoring is one of the main features of sleep-disordered breathing, including hypopnea events, which reflect partial upper airway obstruction.
Obstructive sleep apnea (OSA) is characterized by episodes of complete (apnea) or partial (hypopnea) collapse of the upper airways with associated oxygen desaturation or awakening from sleep. Most patients with OSA snore loudly almost every night. However, in the Sleep Heart Health Study, one-third of participants with OSA reported no snoring, while one-third of snoring participants did not meet the criteria for OSA. Therefore, subjective assessments of snoring (self-reported) may not be sufficiently reliable to assess its potential impact on cardiovascular (CV) health outcomes.
CV Effects
OSA has been hypothesized as a modifiable risk factor for CV diseases (CVD), including hypertension, coronary artery disease (CAD), atrial fibrillation, heart failure, and stroke, primarily because of the results of traditional observational studies. Snoring is reported as a symptom of the early stage of OSA and has also been associated with a higher risk for CVD. However, establishing causality based on observational studies is difficult because of residual confounding from unknown or unmeasured factors and reverse causality (i.e., the scenario in which CVD increases the risk for OSA or snoring). A Mendelian randomization study, using the natural random allocation of genetic variants as instruments capable of producing results analogous to those of randomized controlled trials, suggested that OSA and snoring increase the risk for hypertension and CAD, with associations partly driven by body mass index (BMI). Conversely, no evidence was found that CVD causally influenced OSA or snoring.
Snoring has been associated with multiple subclinical markers of CV pathology, including high blood pressure, and loud snoring can interfere with restorative sleep and contribute to the risk for hypertension and other adverse outcomes in snorers. However, evidence on the associations between snoring and CV health outcomes remains limited and is primarily based on subjective assessments of snoring or small clinical samples with objective assessments of snoring for only 1 night.
Snoring and Hypertension
A study of 12,287 middle-aged patients (age, 50 years) who were predominantly male (88%) and generally overweight (BMI, 28 kg/m²) determined the prevalence of snoring and its association with the prevalence of hypertension using objective evaluation of snoring over multiple nights and multiple daytime blood pressure measurements. The findings included the following observations:
An increase in snoring duration was associated with a 3 mmHg increase in systolic blood pressure (SBP) and a 4 mmHg increase in diastolic blood pressure (DBP) in patients with frequent and regular snoring, compared with those with infrequent snoring, regardless of age, BMI, sex, and estimated apnea/hypopnea index.
The association between severe OSA alone and blood pressure had an effect size similar to that of the association between snoring alone and blood pressure. In a model where OSA severity was classified and snoring duration was stratified into quartiles, severe OSA without snoring was associated with 3.6 mmHg higher SBP and 3.5 mmHg higher DBP, compared with the absence of snoring or OSA. Participants without OSA but with intense snoring (4th quartile) had 3.8 mmHg higher SBP and 4.5 mmHg higher DBP compared with participants without nighttime apnea or snoring.
Snoring was significantly associated with uncontrolled hypertension. There was a 20% increase in the probability of uncontrolled hypertension in subjects aged > 50 years with obesity and a 98% increase in subjects aged ≤ 50 years with normal BMI.
Duration of snoring was associated with an 87% increase in the likelihood of uncontrolled hypertension.
Implications for Practice
This study indicates that 15% of a predominantly overweight male population snore for > 20% of the night and about 10% of these subjects without nighttime apnea snore for > 12% of the night.
Regular nighttime snoring is associated with elevated blood pressure and uncontrolled hypertension, regardless of the presence or severity of OSA.
Physicians must be aware of the potential consequences of snoring on the risk for hypertension, and these results highlight the need to consider snoring in clinical care and in the management of sleep problems, especially in the context of managing arterial hypertension.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Obstructive sleep apnea (OSA) is characterized by episodes of complete (apnea) or partial (hypopnea) collapse of the upper airways with associated oxygen desaturation or awakening from sleep. Most patients with OSA snore loudly almost every night. However, in the Sleep Heart Health Study, one-third of participants with OSA reported no snoring, while one-third of snoring participants did not meet the criteria for OSA. Therefore, subjective assessments of snoring (self-reported) may not be sufficiently reliable to assess its potential impact on cardiovascular (CV) health outcomes.
CV Effects
OSA has been hypothesized as a modifiable risk factor for CV diseases (CVD), including hypertension, coronary artery disease (CAD), atrial fibrillation, heart failure, and stroke, primarily because of the results of traditional observational studies. Snoring is reported as a symptom of the early stage of OSA and has also been associated with a higher risk for CVD. However, establishing causality based on observational studies is difficult because of residual confounding from unknown or unmeasured factors and reverse causality (i.e., the scenario in which CVD increases the risk for OSA or snoring). A Mendelian randomization study, using the natural random allocation of genetic variants as instruments capable of producing results analogous to those of randomized controlled trials, suggested that OSA and snoring increase the risk for hypertension and CAD, with associations partly driven by body mass index (BMI). Conversely, no evidence was found that CVD causally influenced OSA or snoring.
Snoring has been associated with multiple subclinical markers of CV pathology, including high blood pressure, and loud snoring can interfere with restorative sleep and contribute to the risk for hypertension and other adverse outcomes in snorers. However, evidence on the associations between snoring and CV health outcomes remains limited and is primarily based on subjective assessments of snoring or small clinical samples with objective assessments of snoring for only 1 night.
Snoring and Hypertension
A study of 12,287 middle-aged patients (mean age, 50 years) who were predominantly males (88%) and generally overweight (BMI, 28 kg/m²) determined the prevalence of snoring and its association with the prevalence of hypertension using objective evaluation of snoring over multiple nights and multiple daytime blood pressure measurements. The findings included the following observations:
An increase in snoring duration was associated with a 3 mmHg increase in systolic blood pressure (SBP) and a 4 mmHg increase in diastolic blood pressure (DBP) in patients with frequent and regular snoring, compared with those with infrequent snoring, regardless of age, BMI, sex, and estimated apnea/hypopnea index.
The association between severe OSA alone and blood pressure had an effect size similar to that of the association between snoring alone and blood pressure. In a model where OSA severity was classified and snoring duration was stratified into quartiles, severe OSA without snoring was associated with 3.6 mmHg higher SBP and 3.5 mmHg higher DBP, compared with the absence of snoring or OSA. Participants without OSA but with intense snoring (4th quartile) had 3.8 mmHg higher SBP and 4.5 mmHg higher DBP compared with participants without nighttime apnea or snoring.
Snoring was significantly associated with uncontrolled hypertension. There was a 20% increase in the probability of uncontrolled hypertension in subjects aged > 50 years with obesity and a 98% increase in subjects aged ≤ 50 years with normal BMI.
Duration of snoring was associated with an 87% increase in the likelihood of uncontrolled hypertension.
Implications for Practice
This study indicates that 15% of a predominantly overweight male population snore for > 20% of the night and about 10% of these subjects without nighttime apnea snore for > 12% of the night.
Regular nighttime snoring is associated with elevated blood pressure and uncontrolled hypertension, regardless of the presence or severity of OSA.
Physicians must be aware of the potential consequences of snoring on the risk for hypertension, and these results highlight the need to consider snoring in clinical care and in the management of sleep problems, especially in the context of managing arterial hypertension.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Communicating Bad News to Patients
Communicating bad news to patients is one of the most stressful and challenging clinical tasks for any physician, regardless of his or her specialty. This task is more frequent for physicians caring for oncology patients and can also affect the physician’s emotional state. The manner in which bad news is communicated plays a significant role in the psychological burden on the patient, and various communication techniques and guidelines have been developed to enable physicians to perform this difficult task effectively.
Revealing bad news in person whenever possible, to address the emotional responses of patients or relatives, is part of the prevailing expert recommendations. However, it has been acknowledged that in certain situations, communicating bad news over the phone is more feasible.
Since the beginning of the COVID-19 pandemic, the disclosure of bad news over the phone has become a necessary substitute for in-person visits and an integral part of clinical practice worldwide. It remains to be clarified what the real psychological impact on patients and their closest relatives is when delivering bad news over the phone compared with delivering it in person.
Right and Wrong Ways
The most popular guideline for communicating bad news is SPIKES, a six-phase protocol with a special application for cancer patients. It is used in various countries (eg, the United States, France, and Germany) as a guide for this sensitive practice and for training in communication skills in this context. The SPIKES acronym refers to the following six recommended steps for delivering bad news:
- Setting: Set up the conversation.
- Perception: Assess the patient’s perception.
- Invitation: Ask the patient what he or she would like to know.
- Knowledge: Provide the patient with knowledge and information, breaking it down into small parts.
- Emotions: Acknowledge and empathetically address the patient’s emotions.
- Strategy and Summary: Summarize and define a medical action plan.
The lesson from SPIKES is that when a person experiences strong emotions, it is difficult for them to continue discussing anything or to take in new information. Allowing for silence is fundamental. In addition, empathy allows the patient to express his or her feelings and concerns and allows the physician to provide support. The aim is not to argue but to allow the expression of emotions without criticism. However, these recommendations are primarily based on expert opinion and less on empirical evidence, owing to the difficulty of assessing patient outcomes in the various phases of these protocols.
A recent study analyzed the differences in psychological distress between patients who received bad news over the phone vs those who received it in person. The study was a systematic review and meta-analysis.
The investigators screened 5944 records and ultimately included 11 qualitative analysis studies, nine meta-analyses, and four randomized controlled trials.
In a set of studies ranging from moderate to good quality, no difference in psychological distress was found when bad news was disclosed over the phone compared with in person, regarding anxiety, depression, and posttraumatic stress disorder.
There was no average difference in patient satisfaction levels when bad news was delivered over the phone compared with in person. The risk for dissatisfaction was similar between groups.
Clinical Practice Guidelines
The demand for telemedicine, including the disclosure of bad news, is growing despite the limited knowledge of potential adverse effects. The results of existing studies suggest that the mode of disclosure may play a secondary role, and the manner in which bad news is communicated may be more important.
Therefore, it is paramount to prepare patients or their families for the possibility of receiving bad news well in advance and, during the conversation, to ensure first and foremost that they are in an appropriate environment. The structure and content of the conversation may be relevant, and adhering to dedicated communication strategies can be a wise choice for the physician and the interlocutor.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
How Do Anogenital Injuries Relate to Rape Accusations?
Violence against women by partners is a serious human rights violation and a significant global public health issue. Overall, an estimated 27% of women aged 15-49 years who have been in a relationship have experienced physical or sexual violence (SV) at the hands of a partner. According to 2019 data from the US Department of Justice, SV in the United States occurs every 73 seconds, with child victims every 9 minutes. Lifetime rates of SV are around 17%-18% for women and 3% for men.
The emergency department remains the most common place where patients who have experienced SV seek comprehensive care, including emergency contraception, prophylaxis against sexually transmitted infections, forensic evidence collection for rape cases, and treatment for injuries.
Physical injuries from SV are not always detectable. Studies report variable percentages, ranging from 30% to 80% of patients with traumatic SV injuries. Evidence regarding their severity is conflicting.
The presence or absence of anogenital injuries following SV is a factor that can influence both victims’ willingness to report a crime and the judicial decision-making process regarding accusations and convictions.
Rape Myths
The mythology of rape has been under discussion for more than 50 years, encompassing concerns that rape myths reinforce ideas about what does and does not constitute SV and who is a credible victim.
Rape myths, classically defined in the 1980s, are “prejudiced, stereotyped, and false beliefs about rape, rape victims, and rapists,” designed to “deny or minimize perceived harm or blame victims for their victimization.” The concept remains relevant to contemporary societal beliefs and concerns.
A systematic review analyzed elements of rape myths related to victim characteristics and their impact on credibility and blame attribution in the investigative process. Victims who knew the (male) perpetrator and were deemed provocative based on attire were assigned greater blame. In addition, detail and consistency in victims’ statements and the presence of physical evidence and injuries increased credibility. However, in certain situations, rape myths may lead to blaming victims who do not fit the “real victim” stereotype, thus resulting in secondary victimization or revictimization.
Anogenital Injuries
Anogenital injuries can occur in relation to consensual sexual activity (CSA), and SV may not be associated with injuries. Therefore, the presence of anogenital injuries does not “prove” SV nor does their absence exclude rape.
This statement is supported by a systematic review and meta-analysis investigating the prevalence of anogenital injuries in women following SV and CSA, using consistent examination techniques for better forensic evidence evaluation in criminal proceedings.
The following two groups were defined for comparison: SV, indicating any nonconsensual sexual contact with the survivor’s anogenital area, and CSA, representing the same type of sexual contact with participants’ consent.
The outcome measure was the presence of anogenital injury (defined as any genital, anal, or perineal injury detected using described techniques in each study). With no universal definition of genital trauma, the result assessment was dichotomous: The presence or absence of injury.
The systematic search yielded 1401 results, and 10 cohort studies published from 1997 to 2022 met the inclusion criteria. The study participants were 3165 women, with 59% (1874/3165) surviving SV.
Anogenital injuries were found in 48% of women who experienced SV (901/1874) and in 31% of those with CSA (394/1291). Anogenital injuries were significantly more likely in women who had experienced SV, compared with those with CSA (risk ratio, 1.59; P < .001). However, both groups had cases where anogenital injuries were either detected or not.
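As a quick plausibility check, the crude risk ratio can be recomputed from the raw counts reported above. This is only a sketch: the published figure (1.59) is the meta-analytic estimate, which reflects pooling across studies and need not equal the ratio computed directly from the aggregate counts.

```python
def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Crude risk ratio: injury risk in the SV group divided by risk in the CSA group."""
    risk_exposed = events_exposed / n_exposed      # e.g., 901/1874 ≈ 0.48
    risk_unexposed = events_unexposed / n_unexposed  # e.g., 394/1291 ≈ 0.31
    return risk_exposed / risk_unexposed

# Counts from the meta-analysis: 901 of 1874 SV survivors and 394 of 1291
# women after CSA had anogenital injuries.
rr = risk_ratio(901, 1874, 394, 1291)
print(round(rr, 2))  # ≈ 1.58 from the crude counts, close to the pooled 1.59
```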
Some SV survivors had no identified anogenital injuries, and women examined after CSA had detectable anogenital injuries. Subgroup analysis for high-quality studies showed no significant differences between groups. These data support the hypothesis that the presence of anogenital injuries does not prove SV, and the absence of injuries does not disprove it.
Point for Practice
Numerous myths reinforce cultural attitudes toward reporting SV. One myth suggests that physical violence, and thus injuries, are inevitable accompaniments to rape. If the victim does not react physically, it might be argued that it was not really rape, or without physical trauma, one might be less inclined to believe that a rape occurred.
Physicians and healthcare professionals involved in the care and support of SV survivors must explicitly reassure them that the lack of anogenital injury evidence does not diminish the credibility of their account.
This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.
Violence against women by partners is a serious human rights violation and a significant global public health issue. Overall, an estimated 27% of women aged 15-49 years who have been in a relationship have experienced physical or sexual violence (SV) at the hands of a partner. According to 2019 data from the US Department of Justice, SV in the United States occurs every 73 seconds, with child victims every 9 minutes. Lifetime rates of SV are around 17%-18% for women and 3% for men.
The emergency department remains the most common place where patients who have experienced SV seek comprehensive care, including emergency contraception, prophylaxis against sexually transmitted infections, forensic evidence collection for rape cases, and treatment for injuries.
Physical injuries from SV are not always detectable. Studies report variable percentages, ranging from 30%-80% of patients with traumatic SV injuries. Evidence regarding their severity is conflicting.
The presence or absence of anogenital injuries following SV is a factor that can influence both victims’ willingness to report a crime and the judicial decision-making process regarding accusations and convictions.
Rape Myths
The mythology of rape has been under discussion for more than 50 years, encompassing concerns that rape myths reinforce ideas about what does and does not constitute SV and who is a credible victim.
Rape myths, classically defined in the 1980s, are “prejudiced, stereotyped, and false beliefs about rape, rape victims, and rapists,” designed to “deny or minimize perceived harm or blame victims for their victimization.” The concept remains relevant to contemporary societal beliefs and concerns.
A systematic review analyzed elements of rape myths related to victim characteristics and their impact on credibility and blame attribution in the investigative process. Victims who knew the (male) perpetrator and were deemed provocative based on attire were assigned greater blame. In addition, detail and consistency in victims› statements and the presence of physical evidence and injuries increased credibility. However, in certain situations, rape myths may lead to blaming victims who do not fit the “real victim” stereotype, thus resulting in secondary victimization or revictimization.
Anogenital Injuries
Anogenital injuries can occur in relation to consensual sexual activity (CSA), and SV may not be associated with injuries. Therefore, the presence of anogenital injuries does not “prove” SV nor does their absence exclude rape.
This statement is supported by a systematic review and meta-analysis investigating the prevalence of anogenital injuries in women following SV and CSA, using consistent examination techniques for better forensic evidence evaluation in criminal proceedings.
The following two groups were defined for comparison: SV, indicating any nonconsensual sexual contact with the survivor’s anogenital area, and CSA, representing the same type of sexual contact with participants’ consent.
The outcome measure was the presence of anogenital injury (defined as any genital, anal, or perineal injury detected using described techniques in each study). With no universal definition of genital trauma, the result assessment was dichotomous: The presence or absence of injury.
The systematic search yielded 1401 results, and 10 cohort studies published from 1997 to 2022 met the inclusion criteria. The study participants were 3165 women, with 59% (1874/3165) surviving SV.
Anogenital injuries were found in 48% of women who experienced SV (901/1874) and in 31% of those with CSA (394/1291). Anogenital injuries were significantly more likely in women who had experienced SV, compared with those with CSA (risk ratio, 1.59; P < .001). However, both groups had cases where anogenital injuries were either detected or not.
Some SV survivors had no identified anogenital injuries, and women examined after CSA had detectable anogenital injuries. Subgroup analysis for high-quality studies showed no significant differences between groups. These data support the hypothesis that the presence of anogenital injuries does not prove SV, and the absence of injuries does not disprove it.
Point for Practice
Numerous myths reinforce cultural attitudes toward reporting SV. One myth suggests that physical violence, and thus injuries, are inevitable accompaniments to rape. If the victim does not react physically, it might be argued that it was not really rape, or without physical trauma, one might be less inclined to believe that a rape occurred.
Physicians and healthcare professionals involved in the care and support of SV survivors must explicitly reassure them that the lack of anogenital injury evidence does not diminish the credibility of their account.
This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.
Violence against women by partners is a serious human rights violation and a significant global public health issue. Overall, an estimated 27% of women aged 15-49 years who have been in a relationship have experienced physical or sexual violence (SV) at the hands of a partner. According to 2019 data from the US Department of Justice, SV in the United States occurs every 73 seconds, with child victims every 9 minutes. Lifetime rates of SV are around 17%-18% for women and 3% for men.
The emergency department remains the most common place where patients who have experienced SV seek comprehensive care, including emergency contraception, prophylaxis against sexually transmitted infections, forensic evidence collection for rape cases, and treatment for injuries.
Physical injuries from SV are not always detectable. Studies report variable percentages, ranging from 30%-80% of patients with traumatic SV injuries. Evidence regarding their severity is conflicting.
The presence or absence of anogenital injuries following SV is a factor that can influence both victims’ willingness to report a crime and the judicial decision-making process regarding accusations and convictions.
Rape Myths
The mythology of rape has been under discussion for more than 50 years, encompassing concerns that rape myths reinforce ideas about what does and does not constitute SV and who is a credible victim.
Rape myths, classically defined in the 1980s, are “prejudiced, stereotyped, and false beliefs about rape, rape victims, and rapists,” designed to “deny or minimize perceived harm or blame victims for their victimization.” The concept remains relevant to contemporary societal beliefs and concerns.
A systematic review analyzed elements of rape myths related to victim characteristics and their impact on credibility and blame attribution in the investigative process. Victims who knew the (male) perpetrator and were deemed provocative based on attire were assigned greater blame. In addition, detail and consistency in victims› statements and the presence of physical evidence and injuries increased credibility. However, in certain situations, rape myths may lead to blaming victims who do not fit the “real victim” stereotype, thus resulting in secondary victimization or revictimization.
Anogenital Injuries
Anogenital injuries can occur in relation to consensual sexual activity (CSA), and SV may not be associated with injuries. Therefore, the presence of anogenital injuries does not “prove” SV nor does their absence exclude rape.
This statement is supported by a systematic review and meta-analysis investigating the prevalence of anogenital injuries in women following SV and CSA, using consistent examination techniques for better forensic evidence evaluation in criminal proceedings.
The following two groups were defined for comparison: SV, indicating any nonconsensual sexual contact with the survivor’s anogenital area, and CSA, representing the same type of sexual contact with participants’ consent.
The outcome measure was the presence of anogenital injury (defined as any genital, anal, or perineal injury detected using described techniques in each study). With no universal definition of genital trauma, the result assessment was dichotomous: The presence or absence of injury.
The systematic search yielded 1401 results, and 10 cohort studies published from 1997 to 2022 met the inclusion criteria. The study participants were 3165 women, with 59% (1874/3165) surviving SV.
Anogenital injuries were found in 48% of women who experienced SV (901/1874) and in 31% of those with CSA (394/1291). Anogenital injuries were significantly more likely in women who had experienced SV, compared with those with CSA (risk ratio, 1.59; P < .001). However, both groups had cases where anogenital injuries were either detected or not.
Some SV survivors had no identified anogenital injuries, and women examined after CSA had detectable anogenital injuries. Subgroup analysis for high-quality studies showed no significant differences between groups. These data support the hypothesis that the presence of anogenital injuries does not prove SV, and the absence of injuries does not disprove it.
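The effect size reported above can be sanity-checked against the pooled counts. A minimal sketch follows; note that the published risk ratio (1.59) comes from a formal meta-analysis, so this crude, unweighted calculation is only an approximation of it:

```python
# Crude pooled risk ratio from the counts reported in the review.
# The published RR (1.59) is meta-analytic, so this unweighted
# calculation is approximate by design.
sv_injured, sv_total = 901, 1874    # sexual violence (SV) group
csa_injured, csa_total = 394, 1291  # consensual sexual activity (CSA) group

risk_sv = sv_injured / sv_total     # ~0.48, i.e., the 48% reported
risk_csa = csa_injured / csa_total  # ~0.31, i.e., the 31% reported
crude_rr = risk_sv / risk_csa

print(f"Risk of anogenital injury (SV):  {risk_sv:.2f}")
print(f"Risk of anogenital injury (CSA): {risk_csa:.2f}")
print(f"Crude risk ratio: {crude_rr:.2f}")  # ~1.58, near the reported 1.59
```

The point of the arithmetic is the clinical message itself: even with a risk ratio above 1.5, roughly half of SV survivors had no detectable injury, and nearly a third of the CSA group did.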
Point for Practice
Numerous myths reinforce cultural attitudes toward reporting SV. One myth holds that physical violence, and thus injuries, inevitably accompany rape. If the victim did not resist physically, it might be argued that it was not really rape; absent physical trauma, one might be less inclined to believe that a rape occurred.
Physicians and healthcare professionals involved in the care and support of SV survivors must explicitly reassure them that the lack of anogenital injury evidence does not diminish the credibility of their account.
This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.
Which factors predict primary nonadherence to medications?
Poor adherence to medication is a real challenge in health care. Despite evidence of the therapeutic benefit of adhering to a prescribed regimen, it is estimated that around 50% of patients worldwide do not take their medications as prescribed, and some do not take them at all.
Nonadherence to medication can be primary or secondary. Primary medication nonadherence (PMN) occurs when a new medication is prescribed for a patient, but the patient does not obtain the medication or an appropriate alternative within an acceptable period after it was prescribed. Secondary nonadherence measures prescription refills among patients who previously filled their first prescriptions. With most medication adherence research to date focused on secondary nonadherence, PMN has been identified as a major research gap.
Growth in electronic prescribing has partially resolved this issue, and new measures have emerged linking electronic prescribing databases with pharmacy dispensing databases. A Canadian study used such linked data to examine which patients, and which drugs, could be at greatest risk of primary nonadherence when prescribed by a primary care physician.
Adherence measures
Measuring medication adherence is challenging but can be done using the following approaches:
- Subjective measurements obtained by asking patients, family members, caregivers, and physicians about the patient’s medication use
- Objective measurements obtained by counting pills, examining pharmacy refill records, or using electronic medication event monitoring systems
- Biochemical measurements obtained by adding a nontoxic marker to the medication and detecting its presence in blood or urine, or by measuring serum drug levels
Determining factors
A myriad of factors contributes to poor medication adherence. Some are related to patients (e.g., suboptimal health literacy and lack of involvement in the treatment decision-making process), others are related to physicians (e.g., prescription of complex drug regimens, communication barriers, ineffective communication of information about adverse effects, and provision of care by multiple physicians), and still others are related to health care systems (e.g., office visit time limitations, limited access to care, and lack of health information technology).
Primary nonadherence
The literature has reported substantial variation in primary nonadherence, with estimates ranging from as little as 1.9% of incident prescriptions never filled to as much as 75%.
Investigators for the Canadian study estimated the rate of primary nonadherence, defined as failure to dispense a new medication or its equivalent within 6 months of the prescription date, using data from 150,565 new prescriptions issued to 34,243 patients.
Rate of nonadherence
The following patterns of primary nonadherence were observed:
- Primary nonadherence was lowest for prescriptions issued by prescribers aged 35 years or younger (17.1%) and male prescribers (15.1%).
- It was similar among patients of both sexes.
- It was lowest in the oldest subjects, decreasing with age (odds ratio, 0.91 for each additional 10 years).
- It was highest for drugs prescribed mostly on an as-needed basis, including topical corticosteroids (35.1%) and antihistamines (23.4%).
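The age effect in the list above (OR, 0.91 for each additional 10 years) compounds multiplicatively across larger age gaps. A brief sketch of that arithmetic; the age differences chosen here are illustrative, not figures from the study:

```python
# How a per-decade odds ratio compounds across larger age differences.
# OR = 0.91 per additional 10 years of patient age (from the study);
# the age gaps below are chosen purely for illustration.
OR_PER_DECADE = 0.91

def odds_ratio_for_gap(years: float) -> float:
    """Odds ratio of primary nonadherence for a patient `years` older."""
    return OR_PER_DECADE ** (years / 10)

for gap in (10, 20, 40):
    print(f"{gap} years older -> OR {odds_ratio_for_gap(gap):.2f}")
# 10 years -> 0.91; 20 years -> 0.83; 40 years -> 0.69
```

That is, under this model a patient 40 years older has roughly two thirds the odds of never filling a new prescription, consistent with the study's finding that nonadherence was lowest in the oldest subjects.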
Predictors of nonadherence
The odds of primary nonadherence exhibited the following patterns:
- Lower for prescriptions issued by male clinicians (OR, 0.66)
- Significantly greater, compared with anti-infectives, for dermatological agents (OR, 1.36) and lowest for cardiovascular agents (OR, 0.46).
- Lower across therapeutic drug categories (except for respiratory agents) for those aged 65 years and older than for those younger than age 65.
In conclusion, in a general medicine setting, the odds of primary nonadherence were higher for younger patients, those who received primary care services from female prescribers, and older patients who were prescribed more medications. Across therapeutic categories, the odds of primary nonadherence were lowest for cardiovascular system agents and highest for dermatological agents.
To date, the lack of a standardized terminology, operational definition, and measurement methods of primary nonadherence has limited our understanding of the extent to which patients do not avail themselves of prescriber-ordered pharmaceutical treatment. These results reaffirm the need to compare the prevalence of such nonadherence in different health care settings.
This article was translated from Univadis Italy. A version appeared on Medscape.com.
How useful are circulating tumor cells for early diagnosis?
Treatment options for patients whose cancer is detected at a late stage are severely limited, and the prognosis for such patients is usually unfavorable. Indeed, the options available for patients with metastatic solid cancers are rarely curative. Early diagnosis of neoplasia therefore remains a mainstay of improving outcomes for cancer patients.
Histopathology is the current gold standard for cancer diagnosis. Biopsy is an invasive procedure that provides physicians with further samples to test but that furnishes limited information concerning tumor heterogeneity. Biopsy specimens are usually obtained only when there is clinical evidence of neoplasia, which significantly limits their usefulness in early diagnosis.
Around 20 years ago, it was discovered that the presence of circulating tumor cells (CTC) in patients with metastatic breast cancer who were about to begin a new line of treatment was predictive of overall and progression-free survival. The prognostic value of CTC was independent of the line of treatment (first or second) and was greater than that of the site of metastasis, the type of therapy, and the time to metastasis after complete primary resection. These results support the idea that the presence of CTC could be used to modify the system for staging advanced disease.
Since then, research has increasingly explored CTC as a minimally invasive biomarker within the broader field of liquid biopsy.
Liquid vs. tissue
Liquid biopsy is a minimally invasive tool that is easy to use. It is employed to detect cancer, to assess treatment response, or to monitor disease progression. Liquid biopsy produces test material from primary and metastatic (or micrometastatic) sites and provides a more heterogeneous picture of the entire tumor cell population, compared with specimens obtained with tissue biopsy.
Metastasis
The notion that metastatic lesions are formed from cancer cells that have disseminated from advanced primary tumors has been substantially revised following the identification of disseminated tumor cells (DTC) in the bone marrow of patients with early-stage disease. These results have led researchers to no longer view cancer metastasis as a linear cascade of events but rather as a series of concurrent, partially overlapping processes, as metastasizing cells assume new phenotypes while abandoning older behaviors.
The initiation of metastasis is not simply a cell-autonomous event but is heavily influenced by complex tissue microenvironments. Although colonization of distant tissues by DTC is an extremely inefficient process, at times, relatively numerous CTC can be detected in the blood of cancer patients (> 1,000 CTC/mL of blood plasma), whereas the number of clinically detectable metastases is disproportionately low, confirming that tumor cell dissemination can happen at an early stage but usually occurs later on.
Early dissemination
Little is currently known about the preference of cancer subtypes for distinct tissues or about the receptiveness of a tissue as a metastatic site. What endures as one of the most confounding clinical phenomena is that patients may undergo tumor resection and remain apparently disease free for months, years, and even decades, only to experience relapse and be diagnosed with late-stage metastatic disease. This course may be a result of cell seeding from minimal residual disease after resection of the primary tumor or of preexisting clinically undetectable micrometastases. It may also arise from early disseminated cells that remain dormant and resistant to therapy until they suddenly reawaken to initiate proliferation into clinically detectable macrometastases.
Dormant DTC could be the main reason for delayed detection of metastases. It is thought that around 40% of patients with prostate cancer who undergo radical prostatectomy present with biochemical recurrence, suggesting that it is likely that hidden DTC or micrometastases are present at the time of the procedure. The finding is consistent with the detection of DTC many years after tumor resection, suggesting they were released before surgical treatment. Nevertheless, research into tumor cell dormancy is limited, owing to the invasive and technically challenging nature of obtaining DTC samples, which are predominantly taken from the bone marrow.
CTC metastases
Cancer cells can undergo epithelial-to-mesenchymal transition to facilitate their detachment from the primary tumor and intravasation into the blood circulation (step 1). Dissemination of cancer cells from the primary tumor into circulation can involve either single cells or cell clusters containing multiple CTC as well as immune cells and platelets, known as microemboli. CTC that can survive in circulation (step 2) can exit the bloodstream (step 3) and establish metastatic tumors (step 4), or they can enter dormancy and reside in distant organs, such as the bone marrow.
Use in practice
CTC were discovered over a century ago, but only in recent years has technology been sufficiently advanced to study CTC and to assess their usefulness as biomarkers. Recent evidence suggests that not only does the number of CTC increase during sleep and rest phases, but these CTC are also better able to metastasize than those generated during periods of wakefulness or activity.
CTC clusters (microemboli) are defined as groups of two or more CTC. They can consist of CTC alone (homotypic) or can include various stromal cells, such as cancer-associated fibroblasts or platelets and immune cells (heterotypic). CTC clusters (with or without leukocytes) seem to have greater metastatic capacity, compared with individual CTC.
A multitude of characteristics can be measured in CTC, including genetics and epigenetics, as well as protein levels, which might help in understanding many processes involved in the formation of metastases.
Quantitative assessment of CTC could indicate tumor burden in patients with aggressive cancers, as has been seen in patients with primary lung cancer.
Early cancer diagnosis
Early research into CTC didn’t explore their usefulness in diagnosing early-stage tumors because it was thought that CTC were characteristic of advanced-stage disease. This hypothesis was later rejected following evidence of local intravascular invasion of very early cancer cells, even over a period of several hours. This feature may allow CTC to be detected before the clinical diagnosis of cancer.
CTC have been detected in various neoplastic conditions: in breast cancer, seen in 20% of patients with stage I disease, in 26.8% with stage II disease, and 26.7% with stage III disease; in nonmetastatic colorectal cancer, including stage I and II disease; and in prostate cancer, seen in over 50% of patients with localized disease.
The presence of CTC has been shown to be an unfavorable prognostic factor for overall survival among patients with early-stage non–small cell lung cancer. CTC detection also distinguishes patients with pancreatic ductal adenocarcinoma from those with noncancerous pancreatic diseases, with a sensitivity of 75% and a specificity of 96.3%.
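Sensitivity and specificity figures like those above translate directly into patient counts once a cohort is specified. A hedged sketch: the 1,000-patient cohort and its 20% cancer prevalence below are invented for illustration; only the 75% sensitivity and 96.3% specificity come from the text:

```python
# Translate 75% sensitivity / 96.3% specificity into counts.
# Cohort size and cancer prevalence are hypothetical; only the two
# operating characteristics are taken from the reported study.
sensitivity = 0.75   # true positives / all patients with cancer
specificity = 0.963  # true negatives / all patients without cancer

n_cancer, n_benign = 200, 800  # hypothetical 1,000-patient cohort

true_pos = sensitivity * n_cancer   # cancers flagged by the CTC test
false_neg = n_cancer - true_pos     # cancers missed
true_neg = specificity * n_benign   # benign disease correctly cleared
false_pos = n_benign - true_neg     # benign disease wrongly flagged

ppv = true_pos / (true_pos + false_pos)  # positive predictive value
print(f"Detected {true_pos:.0f}/{n_cancer} cancers, "
      f"missed {false_neg:.0f}, with {false_pos:.0f} false alarms")
print(f"PPV in this hypothetical cohort: {ppv:.2f}")
```

Note that the positive predictive value depends on prevalence, so the same test would look far weaker in a low-prevalence screening population; this is why such figures should be read alongside the intended clinical setting.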
CTC positivity scoring (appropriately defined), combined with serum prostate-specific antigen level, was predictive of a biopsy diagnosis of clinically significant prostate cancer.
All these data support the utility of CTC in early cancer diagnosis. Their link with metastases, and thus with aggressive tumors, gives them an advantage over other (noninvasive or minimally invasive) biomarkers in the early identification of invasive tumors for therapeutic intervention with better cure rates.
This article was translated from Univadis Italy. A version appeared on Medscape.com.
Treatment options for patients with cancer that is detected at a late stage are severely limited, which usually leads to an unfavorable prognosis for such patients. Indeed, the options available for patients with metastatic solid cancers are scarcely curative. Therefore, early diagnosis of neoplasia remains a fundamental mainstay for improving outcomes for cancer patients.
Histopathology is the current gold standard for cancer diagnosis. Biopsy is an invasive procedure that provides physicians with further samples to test but that furnishes limited information concerning tumor heterogeneity. Biopsy specimens are usually obtained only when there is clinical evidence of neoplasia, which significantly limits their usefulness in early diagnosis.
Around 20 years ago, it was discovered that the presence of circulating tumor cells (CTC) in patients with metastatic breast cancer who were about to begin a new line of treatment was predictive of overall and progression-free survival. The prognostic value of CTC was independent of the line of treatment (first or second) and was greater than that of the site of metastasis, the type of therapy, and the time to metastasis after complete primary resection. These results support the idea that the presence of CTC could be used to modify the system for staging advanced disease.
Since then,
Liquid vs. tissue
Liquid biopsy is a minimally invasive tool that is easy to use. It is employed to detect cancer, to assess treatment response, or to monitor disease progression. Liquid biopsy produces test material from primary and metastatic (or micrometastatic) sites and provides a more heterogeneous picture of the entire tumor cell population, compared with specimens obtained with tissue biopsy.
Metastasis
The notion that metastatic lesions are formed from cancer cells that have disseminated from advanced primary tumors has been substantially revised following the identification of disseminated tumor cells (DTC) in the bone marrow of patients with early-stage disease. These results have led researchers to no longer view cancer metastasis as a linear cascade of events but rather as a series of concurrent, partially overlapping processes, as metastasizing cells assume new phenotypes while abandoning older behaviors.
The initiation of metastasis is not simply a cell-autonomous event but is heavily influenced by complex tissue microenvironments. Although colonization of distant tissues by DTC is an extremely inefficient process, at times, relatively numerous CTC can be detected in the blood of cancer patients (> 1,000 CTC/mL of blood plasma), whereas the number of clinically detectable metastases is disproportionately low, confirming that tumor cell diffusion can happen at an early stage but usually occurs later on.
Early dissemination
Little is currently known about the preference of cancer subtypes for distinct tissues or about the receptiveness of a tissue as a metastatic site. What endures as one of the most confounding clinical phenomena is that patients may undergo tumor resection and remain apparently disease free for months, years, and even decades, only to experience relapse and be diagnosed with late-stage metastatic disease. This course may be a result of cell seeding from minimal residual disease after resection of the primary tumor or of preexisting clinically undetectable micrometastases. It may also arise from early disseminated cells that remain dormant and resistant to therapy until they suddenly reawaken to initiate proliferation into clinically detectable macrometastases.
Dormant DTC could be the main reason for delayed detection of metastases. It is thought that around 40% of patients with prostate cancer who undergo radical prostatectomy present with biochemical recurrence, suggesting that it is likely that hidden DTC or micrometastases are present at the time of the procedure. The finding is consistent with the detection of DTC many years after tumor resection, suggesting they were released before surgical treatment. Nevertheless, research into tumor cell dormancy is limited, owing to the invasive and technically challenging nature of obtaining DTC samples, which are predominantly taken from the bone marrow.
CTC metastases
Cancer cells can undergo epithelial-to-mesenchymal transition to facilitate their detachment from the primary tumor and intravasation into the blood circulation (step 1). Dissemination of cancer cells from the primary tumor into circulation can involve either single cells or cell clusters containing multiple CTC as well as immune cells and platelets, known as microemboli. CTC that can survive in circulation (step 2) can exit the bloodstream (step 3) and establish metastatic tumors (step 4), or they can enter dormancy and reside in distant organs, such as the bone marrow.
Use in practice
CTC were discovered over a century ago, but only in recent years has technology been sufficiently advanced to study CTC and to assess their usefulness as biomarkers. Recent evidence suggests that not only do the number of CTC increase during sleep and rest phases but also that these CTC are better able to metastasize, compared to those generated during periods of wakefulness or activity.
CTC clusters (microemboli) are defined as groups of two or more CTC. They can consist of CTC alone (homotypic) or can include various stromal cells, such as cancer-associated fibroblasts or platelets and immune cells (heterotypic). CTC clusters (with or without leukocytes) seem to have greater metastatic capacity, compared with individual CTC.
A multitude of characteristics can be measured in CTC, including genetics and epigenetics, as well as protein levels, which might help in understanding many processes involved in the formation of metastases.
Quantitative assessment of CTC could indicate tumor burden in patients with aggressive cancers, as has been seen in patients with primary lung cancer.
Early cancer diagnosis
Early research into CTC didn’t explore their usefulness in diagnosing early-stage tumors because it was thought that CTC were characteristic of advanced-stage disease. This hypothesis was later rejected following evidence of local intravascular invasion of very early cancer cells, even over a period of several hours. This feature may allow CTC to be detected before the clinical diagnosis of cancer.
CTC have been detected in various neoplastic conditions: in breast cancer, seen in 20% of patients with stage I disease, in 26.8% with stage II disease, and 26.7% with stage III disease; in nonmetastatic colorectal cancer, including stage I and II disease; and in prostate cancer, seen in over 50% of patients with localized disease.
The presence of CTC has been proven to be an unfavorable prognostic predictor of overall survival among patients with early-stage non–small cell lung cancer. It distinguishes patients with pancreatic ductal adenocarcinoma from those with noncancerous pancreatic diseases with a sensitivity of 75% and a specificity of 96.3%.
CTC positivity scoring (appropriately defined), combined with serum prostate-specific antigen level, was predictive of a biopsy diagnosis of clinically significant prostate cancer.
All these data support the utility of CTC in early cancer diagnosis. Their link with metastases, and thus with aggressive tumors, gives them an advantage over other (noninvasive or minimally invasive) biomarkers in the early identification of invasive tumors for therapeutic intervention with better cure rates.
This article was translated from Univadis Italy. A version appeared on Medscape.com.
Treatment options for patients with cancer that is detected at a late stage are severely limited, which usually leads to an unfavorable prognosis for such patients. Indeed, the options available for patients with metastatic solid cancers are scarcely curative. Therefore, early diagnosis of neoplasia remains a fundamental mainstay for improving outcomes for cancer patients.
Histopathology is the current gold standard for cancer diagnosis. Biopsy is an invasive procedure that provides physicians with further samples to test but that furnishes limited information concerning tumor heterogeneity. Biopsy specimens are usually obtained only when there is clinical evidence of neoplasia, which significantly limits their usefulness in early diagnosis.
Around 20 years ago, it was discovered that the presence of circulating tumor cells (CTC) in patients with metastatic breast cancer who were about to begin a new line of treatment was predictive of overall and progression-free survival. The prognostic value of CTC was independent of the line of treatment (first or second) and was greater than that of the site of metastasis, the type of therapy, and the time to metastasis after complete primary resection. These results support the idea that the presence of CTC could be used to modify the system for staging advanced disease.
Since then, CTC have been investigated as biomarkers across a growing range of tumor types and clinical settings.
Liquid vs. tissue
Liquid biopsy is a minimally invasive tool that is easy to use. It is employed to detect cancer, to assess treatment response, or to monitor disease progression. Liquid biopsy produces test material from primary and metastatic (or micrometastatic) sites and provides a more heterogeneous picture of the entire tumor cell population, compared with specimens obtained with tissue biopsy.
Metastasis
The notion that metastatic lesions are formed from cancer cells that have disseminated from advanced primary tumors has been substantially revised following the identification of disseminated tumor cells (DTC) in the bone marrow of patients with early-stage disease. These results have led researchers to no longer view cancer metastasis as a linear cascade of events but rather as a series of concurrent, partially overlapping processes, as metastasizing cells assume new phenotypes while abandoning older behaviors.
The initiation of metastasis is not simply a cell-autonomous event but is heavily influenced by complex tissue microenvironments. Although colonization of distant tissues by DTC is an extremely inefficient process, relatively numerous CTC (> 1,000 CTC/mL of blood plasma) can at times be detected in the blood of cancer patients while the number of clinically detectable metastases remains disproportionately low, confirming that tumor cells can disseminate early even though overt metastases usually develop later.
Early dissemination
Little is currently known about the preference of cancer subtypes for distinct tissues or about the receptiveness of a tissue as a metastatic site. One of the most confounding clinical phenomena is that patients may undergo tumor resection and remain apparently disease free for months, years, or even decades, only to relapse with late-stage metastatic disease. This course may result from cell seeding by minimal residual disease after resection of the primary tumor or from preexisting, clinically undetectable micrometastases. It may also arise from early disseminated cells that remain dormant and resistant to therapy until they suddenly reawaken and proliferate into clinically detectable macrometastases.
Dormant DTC could be the main reason for delayed detection of metastases. An estimated 40% of patients with prostate cancer who undergo radical prostatectomy experience biochemical recurrence, suggesting that hidden DTC or micrometastases were likely present at the time of the procedure. This finding is consistent with the detection of DTC many years after tumor resection, indicating that these cells were released before surgical treatment. Nevertheless, research into tumor cell dormancy is limited, owing to the invasive and technically challenging nature of obtaining DTC samples, which are predominantly taken from the bone marrow.
CTC metastases
Cancer cells can undergo epithelial-to-mesenchymal transition to facilitate their detachment from the primary tumor and intravasation into the blood circulation (step 1). Dissemination of cancer cells from the primary tumor into circulation can involve either single cells or cell clusters containing multiple CTC as well as immune cells and platelets, known as microemboli. CTC that can survive in circulation (step 2) can exit the bloodstream (step 3) and establish metastatic tumors (step 4), or they can enter dormancy and reside in distant organs, such as the bone marrow.
Use in practice
CTC were discovered over a century ago, but only in recent years has technology been sufficiently advanced to study CTC and to assess their usefulness as biomarkers. Recent evidence suggests not only that the number of CTC increases during sleep and rest phases but also that these CTC are better able to metastasize, compared with those generated during periods of wakefulness or activity.
CTC clusters (microemboli) are defined as groups of two or more CTC. They can consist of CTC alone (homotypic) or can include various stromal cells, such as cancer-associated fibroblasts or platelets and immune cells (heterotypic). CTC clusters (with or without leukocytes) seem to have greater metastatic capacity, compared with individual CTC.
A multitude of characteristics can be measured in CTC, including genetics and epigenetics, as well as protein levels, which might help in understanding many processes involved in the formation of metastases.
Quantitative assessment of CTC could indicate tumor burden in patients with aggressive cancers, as has been seen in patients with primary lung cancer.
Early cancer diagnosis
Early research into CTC did not explore their usefulness in diagnosing early-stage tumors because CTC were thought to be characteristic of advanced-stage disease. This hypothesis was later rejected following evidence of local intravascular invasion by very early cancer cells, sometimes within a period of only a few hours. This feature may allow CTC to be detected before the clinical diagnosis of cancer.
CTC have been detected in various neoplastic conditions: in breast cancer, seen in 20% of patients with stage I disease, in 26.8% with stage II disease, and 26.7% with stage III disease; in nonmetastatic colorectal cancer, including stage I and II disease; and in prostate cancer, seen in over 50% of patients with localized disease.
The presence of CTC has been shown to be an unfavorable prognostic factor for overall survival among patients with early-stage non–small cell lung cancer. CTC detection also distinguishes patients with pancreatic ductal adenocarcinoma from those with noncancerous pancreatic diseases, with a sensitivity of 75% and a specificity of 96.3%.
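To put the reported sensitivity and specificity in concrete terms, the sketch below shows how those two figures determine the predictive values of a CTC-based test. The prevalence value used is an illustrative assumption, not a figure from the article.

```python
# Illustrative sketch (not from the article): translating the reported
# sensitivity (75%) and specificity (96.3%) of CTC detection for pancreatic
# ductal adenocarcinoma into predictive values at an assumed prevalence.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence              # true-positive fraction of the population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Assumed 1% disease prevalence in the tested population (hypothetical value).
ppv, npv = predictive_values(0.75, 0.963, 0.01)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 17.0%, NPV = 99.7%
```

Even a highly specific test yields a modest positive predictive value when disease prevalence is low, which is one reason such markers are generally evaluated in higher-risk or symptomatic populations rather than for population-wide screening.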
CTC positivity scoring (appropriately defined), combined with serum prostate-specific antigen level, was predictive of a biopsy diagnosis of clinically significant prostate cancer.
All these data support the utility of CTC in early cancer diagnosis. Their link with metastases, and thus with aggressive tumors, gives them an advantage over other (noninvasive or minimally invasive) biomarkers in the early identification of invasive tumors for therapeutic intervention with better cure rates.
This article was translated from Univadis Italy. A version appeared on Medscape.com.