Home Visitors May Need More Training to Spot Delays
A home visit program designed to identify early childhood language delays not only failed to spot most delayed children, but also failed to refer the vast majority of identified children for further evaluation or intervention.
The results suggest that the home visitors didn't get enough training to properly screen children and that the visitors lacked the skills necessary to communicate concerns about developmental delays to parents, according to Tracy M. King, M.D., and colleagues (J. Dev. Behav. Pediatr. 2005;26:293–303).
“This study argues for prudence in the ongoing proliferation of home visiting programs and for caution in setting expectations regarding child development outcomes,” said Dr. King of Johns Hopkins University School of Medicine, Baltimore, and her coinvestigators.
The researchers compared language delay identification rates for 304 children enrolled in the Hawaii Healthy Start Program with rates in a control group of 209 children. All of the children were at high risk of developmental delay, child abuse, or neglect.
The Hawaii Healthy Start Program (HHSP) provides a regular home visitor, who teaches parents about child development, models good parental behavior, and links parents to a medical provider. The visitor also performs childhood developmental testing, including language testing, when the child is 3 years old. The control group did not receive any home visitation services.
The home visitors identified only 24% of children with severe language delay. Parents and primary care providers in the HHSP group each identified 31% of such children, while parents in the control group identified almost twice as many (56%).
The higher identification rate among parents in the control group raises the concern that the home visitors actually interfered with identification. This could be because the visitors lacked sufficient training and gave parents false reassurance about their child's language development.
Among children with any language delay, home visits identified 17%. Parents and primary care providers also did poorly in this group, identifying 26% and 24%, respectively. Parents in the control group identified 20% of children with any language delay and primary care providers, 25%.
Particularly concerning were the low referral rates after children were identified, the investigators said. Among the 72 children identified as having delays, only 2 were referred to their primary care provider, and none were referred to local early intervention programs.
Poor parental identification rates could be related to the high-risk communities in which the families lived, the investigators said. “It may be that language delays have become so prevalent in certain at-risk communities that it is no longer possible for parents to make accurate assessments of their child's development based on comparisons with the child's peers.”
Poor home visitor and medical provider identification rates are probably due to inadequate training in child development.
In an accompanying editorial, Shirley Russ, M.D., and Neal Halfon, M.D., said identification rates could be improved by using trained nurses as home visitors. Similar programs employing nurses have higher family retention rates and much better identification and referral rates (J. Dev. Behav. Pediatr. 2005;26:304–5).
“Professional nurses would be more likely to have knowledge of early childhood systems and resources in the community and would also have had more training in communicating about health and development issues to parents,” said Dr. Russ and Dr. Halfon of the University of California, Los Angeles.
Dr. King and colleagues replied in a second commentary that unfortunately visiting nurse programs are costly and difficult to staff in areas such as Hawaii (J. Dev. Behav. Pediatr. 2005;26:307).
Hepatic Encephalopathy Treatments Remain Unproven
CAMBRIDGE, MD. — Two existing medications—an antibiotic and a hypoglycemic agent—may add some strength to the poorly outfitted armamentarium for hepatic encephalopathy, Steve Solga, M.D., said at a hepatobiliary update sponsored by Johns Hopkins University.
The altered brain function of hepatic encephalopathy appears to be related to increased ammonia levels in the blood, although controversy remains on this issue. Intestinal dysmotility, common in cirrhosis, causes an overgrowth of urease-positive bacteria and increased nitrogen absorption. The impaired liver is unable to process this extra load, so ammonia levels increase.
Generally, treatment is aimed at decreasing ammonia production and absorption; neomycin and lactulose are the most common therapies. Neomycin directly decreases the gut flora, whereas lactulose decreases gut bacteria load by promoting elimination and tilts the bacterial balance toward nonammoniagenic types.
The problem, Dr. Solga said, is that while lactulose is safe, it is not as effective in resolving symptoms as is neomycin. But neomycin may not be safe for many patients.
“Some literature suggests that long-term use is associated with irreversible ototoxicity and nephrotoxicity, and that it shouldn't be given for longer than 2 weeks for hepatic encephalopathy in patients with preexisting renal impairment,” he said.
Importantly, neither treatment has been adequately studied in well-designed randomized trials, he added.
Rifaximin, another poorly absorbed antibiotic often used for “traveler's diarrhea,” is being studied for use in hepatic encephalopathy. “Most of the trials indicate that safety is relatively well established, but we don't have solid efficacy data yet for hepatic encephalopathy,” he said.
But according to a 2005 review of 15 studies, rifaximin was at least as effective as lactulose and neomycin in improving neurologic symptoms and in reducing blood ammonia levels (Rev. Gastroenterol. Disord. 2005;5[suppl. 1]:10–8).
The hypoglycemic agent acarbose might have some benefit for hepatic encephalopathy patients who are diabetic, he added. The drug promotes the growth of saccharolytic bacteria. An Italian study of 107 patients found that acarbose significantly decreased blood ammonia and improved intellectual function, while controlling blood sugar (Clin. Gastroenterol. Hepatol. 2005;3:184–91).
Finally, gut flora therapy, in the form of either prebiotics or probiotics, has potential. However, this treatment is still in its infancy. There are also regulatory issues to contend with, inasmuch as it remains unclear whether probiotics are drugs or supplements.
Avoid Surgery in Cases of Severe Hepatitis, Advanced Cirrhosis
CAMBRIDGE, MD. — The increased risk of mortality in patients who undergo surgery for serious liver disease is reason to postpone an operation until the disorder responds to treatment or resolves, Adrian Reuben, M.B., said at a hepatobiliary update sponsored by Johns Hopkins University.
“Surgery is contraindicated in those with acute hepatitis—especially alcoholic hepatitis—and severe chronic hepatitis and advanced cirrhosis,” said Dr. Reuben of the Medical University of South Carolina, Charleston.
“In cardiac surgery, patients with Child-Turcotte-Pugh [CTP] Class A scores do well, but for anyone with CTP Class B or C, surgery may be prohibitively dangerous,” he said.
Patients with severe liver disease are more susceptible to infection, which aggravates vasodilation and exacerbates the hyperdynamic circulation.
“This can precipitate hepatorenal syndrome or convert existing hepatorenal syndrome from stage II to stage I,” he explained.
Other reasons for adverse outcomes include concomitant renal dysfunction; reduced hepatic drug metabolism; poor nutrition, which is common in those with advanced liver disease; and ascites. Ascites carries the risks of infection, poor wound closure, and dehiscence, and it impairs respiration.
Mortality risk is much greater in patients with cirrhosis and increases steadily with higher CTP score.
Dr. Reuben reviewed five studies of abdominal surgery in patients with cirrhosis conducted from 1984 to 2004. Among a total of 391 patients, overall mortality ranged from 16% to 28%, with a range of 8%–19% for elective surgery and 32%–50% for emergency surgery. Rates were much lower among those with CTP Class A (3%–10%) than those with CTP Class C (55%–100%).
Other variables predictive of mortality in these studies were encephalopathy, ascites, infection, coagulopathy (high international normalized ratio [INR]), high creatinine, and gastrointestinal and pulmonary operations.
The risk of postsurgical mortality is increased in both viral and alcoholic hepatitis. “With acute viral hepatitis, the increased risk is about 10%–15%. With alcoholic hepatitis, it's vastly increased: 55%–100%,” Dr. Reuben said.
“You must also be very aware of alcoholic hepatitis; sometimes it mimics acute cholangitis,” he added.
An increased mortality risk has also been associated with nonalcoholic fatty liver disease (NAFLD). A 1998 study that looked at hepatic resection for cancer showed a 3% mortality rate for those with nonfatty livers. Mortality increased to 7% for those with mild NAFLD and to 14% for those with moderate to severe disease (J. Gastrointest. Surg. 1998;2:292–8).
Biliary tract surgery is also risky for the cirrhotic patient. Only those with very low scores (less than 8) on the Model for End-Stage Liver Disease (MELD) scale are at minimal or no risk. Laparoscopic surgery is recommended for cirrhotic patients, because it reduces blood loss, postoperative complications, anesthetic and surgical times, and length of hospital stay.
Arthroplasties are also dangerous for the patient with cirrhosis, he said, with combined mortality and complication rates increasing with liver disease severity. The rates are about 11% in those with CTP Class A disease, almost 50% among those with CTP Class B, and 100% in those with CTP Class C.
If surgery is necessary in patients with cirrhosis, all nephrotoxic drugs should be avoided, and opiates should be limited. Opiates can cause sedation and lead to constipation, a contributing factor to hepatic encephalopathy.
Cirrhotic patients undergoing transurethral prostatectomy had a 7% mortality rate, compared with 2% in controls, he said.
β-Blockers Cut Risk of First Bleed From Esophageal Varices by 50%
CAMBRIDGE, MD. — β-Blockers remain the best choice for primary prevention of bleeding from esophageal varices in patients with end-stage liver disease.
Variceal banding, while at least as effective as β-blockers in preventing a first bleed, should be reserved for those who don't respond to or can't tolerate β-blockers, or who are noncompliant with drug therapy, Sergey Kantsevoy, M.D., said at a hepatobiliary update sponsored by Johns Hopkins University.
Esophageal varices develop in up to 60% of patients with cirrhosis. If varices rupture, they carry a significant mortality risk of 20%–40%, depending on the severity of the liver disease. Therefore, all patients with end-stage liver disease should undergo upper endoscopy to screen for varices, said Dr. Kantsevoy of Johns Hopkins University, Baltimore.
Unselected patients don't benefit from primary prevention strategies for esophageal varices, but there is great benefit for high-risk patients, he said. However, despite the mortality risk of bleeds and the proven benefit of treatment, only 46% of those referred for liver transplantation had been screened for esophageal varices (Am. J. Gastroenterol. 2001;96:833–7).
If the initial endoscopy does not identify varices, the patient should have a repeat endoscopy every 2 years. If the varices are small, a repeat endoscopy every 1–2 years is indicated, depending on the severity of liver disease.
Patients with large varices should be offered prophylactic therapy. β-Blockers are the medical therapy of choice. They reduce portal pressure by reducing cardiac output and increasing resistance in collateral veins. The drugs have been shown to reduce the risk of a first variceal bleed by half and to reduce mortality by up to 45%, compared with placebo.
Unfortunately, Dr. Kantsevoy said, β-blockers are contraindicated in up to 20% of end-stage liver disease patients. In addition, “despite adequate β-blockage, at least 30% will not achieve reduction in portal pressure sufficient to prevent bleeding, and about 30% will have side effects including heart failure, hypotension, bronchoconstriction, fatigue, and impotence.”
Endoscopic variceal banding may be considered for these patients. Band ligation has been shown to be as effective as β-blockade at reducing the incidence of bleeding, but the procedure carries no significant mortality advantage over medical therapy.
Endoscopic sclerotherapy has been investigated in these patients, but it is not recommended for primary prevention because it is associated with a high rate of adverse events.
Postsclerotherapy complications occur in up to 20% of patients and include ulceration, stricture formation, and esophageal perforation, Dr. Kantsevoy said.
Aerobic Fitness Cuts Death Risk by 54% in Hypertensive Women
NASHVILLE, TENN. — Higher cardiorespiratory fitness is associated with lower all-cause mortality in hypertensive women, Carolyn E. Barlow said at the annual meeting of the American College of Sports Medicine.
In a poster, Ms. Barlow, director of data management at the Cooper Institute, Dallas, presented the results of an open cohort study of almost 13,000 women who were followed for up to 26 years as part of the Cooper Aerobics Center Longitudinal Study, a prospective observational study of lifestyle and health.
All the women were examined at the Cooper Aerobics Center in Dallas from 1971 to 1998, and followed up yearly for mortality.
At baseline, the women received a comprehensive medical examination and an exercise prescription, and they took a treadmill test that was used to determine their fitness level. The lowest 20% in each age group were considered “unfit,” while the upper 80% were considered “fit.” The women's average age at baseline was 43 years. Of the cohort, 51% were normotensive, 31% were prehypertensive (120/80 mm Hg or higher but below 140/90 mm Hg), and 18% were hypertensive (140/90 mm Hg or higher).
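That classification scheme is simple to reproduce in code. Below is a minimal sketch, assuming a pandas DataFrame with one row per woman and hypothetical column names (age_group, treadmill_minutes, systolic, diastolic); the study's actual variables may differ.

```python
import pandas as pd

def classify_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Label fitness and blood pressure groups as described in the article.

    Hypothetical columns: age_group, treadmill_minutes, systolic, diastolic.
    """
    # "Unfit" = lowest 20% of treadmill performance within each age group;
    # everyone else is "fit."
    cutoff = df.groupby("age_group")["treadmill_minutes"].transform(
        lambda s: s.quantile(0.20)
    )
    df["fitness"] = (df["treadmill_minutes"] <= cutoff).map(
        {True: "unfit", False: "fit"}
    )

    # Blood pressure groups, using the cut points cited in the article.
    def bp_group(row):
        if row["systolic"] >= 140 or row["diastolic"] >= 90:
            return "hypertensive"
        if row["systolic"] >= 120 or row["diastolic"] >= 80:
            return "prehypertensive"
        return "normotensive"

    df["bp_group"] = df.apply(bp_group, axis=1)
    return df
```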
There were 298 deaths during the study period. After adjustment for age, exam year, and smoking, a trend toward lower mortality risk was seen in fit women compared with unfit women in each blood pressure group, but only in the hypertensive group was the difference statistically significant. Fit hypertensive women were 54% less likely to die than unfit hypertensive women. Compared with the unfit women, the decreased risk of death was 19% for normotensive fit women and 5% for prehypertensive fit women.
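The article does not say which statistical model produced these adjusted estimates. As a purely illustrative sketch, not the authors' method, an adjusted comparison of this kind is often obtained from a proportional hazards model fit within each blood pressure group; the column names below are hypothetical.

```python
from lifelines import CoxPHFitter

def adjusted_fitness_effect(df, bp_group):
    """Hazard ratio for fit vs. unfit women within one blood pressure group,
    adjusted for age, exam year, and smoking (column names are hypothetical)."""
    sub = df[df["bp_group"] == bp_group].copy()
    sub["fit"] = (sub["fitness"] == "fit").astype(int)

    cph = CoxPHFitter()
    cph.fit(
        sub[["followup_years", "died", "fit", "age", "exam_year", "smoker"]],
        duration_col="followup_years",
        event_col="died",
    )
    hr = cph.hazard_ratios_["fit"]
    # A hazard ratio of 0.46, for example, corresponds to a 54% lower
    # adjusted risk of death for fit women.
    return hr, 1.0 - hr
```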
Infant Wheezing Linked to Stop-and-Go Traffic Proximity
Infants who live near roads with lots of stop-and-go bus and truck traffic are significantly more likely to develop wheezing than those who live near steady traffic or those who aren't exposed to much traffic, Patrick Ryan and his associates reported.
The association may be related to increased amounts of diesel exhaust particles (DEP) emitted when the vehicles accelerate from a stop, said Mr. Ryan, of the University of Cincinnati, and his colleagues. Other studies have shown that accelerating from a stop increases this particulate matter.
The researchers examined wheezing without a cold over 1 year in 622 infants (median age 7.5 months). The infants were part of the Cincinnati Childhood Allergy and Air Pollution Study; all had at least one atopic parent (J. Allergy Clin. Immunol. 2005;116:279–84).
Most (374) of the infants were unexposed to traffic; 176 lived near moving bus and truck traffic, and 99 lived near stop-and-go traffic. Infants exposed to stop-and-go traffic were more likely to be black, have out-of-home care, and have a father with asthma, and they were less likely to have been breast-fed. The researchers adjusted for these variables.
Wheezing without a cold was reported for 8% (50) of the 622 infants. The prevalence of wheezing in the unexposed infants was 5.8%. The prevalence was 7.4% in infants exposed to moving traffic, and 17.2% in infants exposed to stop-and-go traffic.
The prevalence of wheezing was three times higher (19%) in infants who lived less than 50 meters from moving traffic compared with the unexposed group. The prevalence of wheezing in those who lived 200–300 meters from moving traffic was 12%.
Use of Proteomics for Ovarian Ca Spurs Debate
Systematic bias in the design of several underlying studies raises doubt over whether a serum proteomics test based on those studies can accurately identify ovarian cancer, two independent biostatisticians have argued.
The researchers, both of the University of Texas M.D. Anderson Cancer Center, Houston, have been unable to reproduce the high sensitivity and specificity rates reported in a 2003 study of the technique (J. Natl. Cancer Inst. 2005;97:307–9).
The problem, said Keith A. Baggerly, Ph.D., and Kevin R. Coombes, Ph.D., lies not in the fundamental concept—that cancer-shed proteins in serum may be able to identify patients who have even very early-stage cancer—but in the way the data sets were processed in both the 2003 study and the original 2002 National Cancer Institute (NCI) study upon which it was based.
“We're not saying proteomics doesn't work,” Dr. Baggerly said in an interview. “It may very well work. But these data sets can't be used to say this approach works.”
The method involves using mass spectrometry to display proteins in serum as a series of peaks and valleys of varying strength. A computer-driven mathematical algorithm finds unique patterns expressed in the serum of patients with the disease. Several researchers are investigating proteomics' application in ovarian cancer, using different algorithms and spectrometers. All of the decoding work is being performed on three publicly available sets of spectral data, which were processed as part of the original proof-of-concept study by NCI researchers led by Emmanuel F. Petricoin III, M.D. (Lancet 2002;359:572–7).
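The NCI group used its own pattern-discovery software; the sketch below is only a generic stand-in for that step, showing how a matrix of peak intensities can be fed to an off-the-shelf classifier and scored by cross-validation. The data here are random placeholders, not the published spectra.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: one row per serum sample, one column per m/z peak intensity.
# y: 1 = ovarian cancer, 0 = control.  Random placeholders for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```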
Dr. Baggerly and Dr. Coombes reanalyzed the data used in a 2003 paper by Wei Zhu, Ph.D., and associates, of the State University of New York at Stony Brook. By using the same NCI data sets—samples from women with ovarian cancer, women with benign ovarian cysts, and healthy controls—but a new protein-recognition pattern, Dr. Zhu achieved perfect discrimination (100% sensitivity, 100% specificity) of patients with ovarian cancer, including early-stage disease, from normal controls (PNAS 2003;100:14666–71). Dr. Zhu's results were even better than those originally reported by Dr. Petricoin and colleagues in their 2002 study.
When Dr. Baggerly reanalyzed the Zhu data, he was unable to arrive at the same results. The Zhu study identified a pattern involving 18 protein peaks that separated controls from cancers. In Dr. Baggerly's reanalysis, the pattern discriminated accurately in the first data set, which contained serum from all three groups, but not in the second data set, which contained only serum from cancer patients and healthy controls.
In the second data set, 13 of the 18 peak differences changed signs—that is, peaks associated with cancer in the first group were associated with controls in the second group, and peaks first associated with controls switched to cancers. “This reversal isn't consistent with a persistent difference between cancer samples and control samples,” Dr. Baggerly said.
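That consistency check is easy to state computationally: for each of the 18 peaks, compute the mean cancer-minus-control intensity difference in both data sets and count how many differences change sign. A minimal sketch, assuming NumPy arrays of peak intensities (rows are samples, columns are peaks):

```python
import numpy as np

def count_sign_flips(cancer1, control1, cancer2, control2, peak_idx):
    """Number of selected peaks whose cancer-minus-control mean difference
    points one way in data set 1 and the other way in data set 2."""
    diff1 = cancer1[:, peak_idx].mean(axis=0) - control1[:, peak_idx].mean(axis=0)
    diff2 = cancer2[:, peak_idx].mean(axis=0) - control2[:, peak_idx].mean(axis=0)
    return int(np.sum(np.sign(diff1) != np.sign(diff2)))
```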
The researchers then chose 18 random protein peaks from the same regions of spectral data as Dr. Zhu's peaks. The random peaks separated cancer samples from controls up to 56% of the time, depending on the strength of the signals used. Because the pattern of protein expression was inconsistent between the data sets, they concluded, the values did not represent biologically important changes in the serum of cancer patients.
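The random-peak exercise amounts to a negative control: if arbitrarily chosen peaks also appear to separate cancers from controls, the reported pattern may reflect processing artifacts rather than biology. A hedged sketch of the idea follows; the classifier and the 90% accuracy cutoff are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def random_peak_control(X, y, n_peaks=18, n_trials=100, seed=0):
    """Fraction of trials in which a classifier built on randomly chosen
    peaks still appears to separate cancer from control samples."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        idx = rng.choice(X.shape[1], size=n_peaks, replace=False)
        acc = cross_val_score(
            LogisticRegression(max_iter=1000), X[:, idx], y, cv=5
        ).mean()
        hits += acc > 0.90  # arbitrary "good separation" threshold
    return hits / n_trials
```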
The problem, Dr. Baggerly asserts, is that the serum samples were processed in a nonrandomized way when the spectra were acquired in the initial study by Dr. Petricoin and his colleagues.
“They ran all the controls on one day and all the cancers on the next day,” Dr. Baggerly said. “This is the worst kind of design when you are using a machine that can be subject to external factors,” such as changes in calibration or mechanical breakdown.
In fact, he said, a June 2004 study in which Dr. Petricoin participated also suffered from just such a problem (Endocr. Relat. Cancer 2004;11:163–78). This study used a different mass spectrometer, which began to break down on day 3 of running the samples. In a letter to the editor, Dr. Petricoin acknowledged the problem, but said, “We cannot detect whether the cancer data acquired on the previous day were convincingly negatively affected by the spectrometer failure.”
Dr. Baggerly contends that a better design, in which sample processing is randomized, would allow differences due to biology to be separated from those due to external factors.
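The remedy is a design change rather than an analysis change: mix cancer and control samples in the processing queue so that run day is not systematically tied to disease status. A minimal sketch of one way to build such a schedule (purely illustrative):

```python
import random

def randomized_run_order(cancer_ids, control_ids, runs_per_day, seed=0):
    """Shuffle cancer and control samples together and split them into daily
    batches, so that run day is no longer confounded with disease status."""
    samples = [(sid, "cancer") for sid in cancer_ids] + \
              [(sid, "control") for sid in control_ids]
    random.Random(seed).shuffle(samples)
    return [samples[i:i + runs_per_day]
            for i in range(0, len(samples), runs_per_day)]

# Example: 6 cancer and 6 control samples, 4 spectrometer runs per day.
schedule = randomized_run_order(
    [f"ca{i}" for i in range(6)], [f"ct{i}" for i in range(6)], runs_per_day=4
)
```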
His failure to find reproducibility does not surprise Dr. Petricoin and his colleague, Lance A. Liotta, M.D., both of whom participated in the 2002 and 2004 studies; their commentary appears in the same journal. Each of the data sets, all of which are available online without restriction, was generated with different machines and methods, precisely to test those machines and methods.
“We would be surprised if the experimentally designed process changes between these two studies did not result in altered spectra. In fact, a goal of these experiments was to study the spectral alterations produced by changing the process,” they said.
Systematic bias in the design of several underlying studies raises doubt over whether a serum proteomics test based on those studies can accurately identify ovarian cancer, two independent biostatisticians have argued.
The researchers, both of the University of Texas M.D. Anderson Cancer Research Center, Houston, have been unable to reproduce the high sensitivity and specificity rates reported in a 2003 study of the technique (J. Natl. Cancer Inst. 2005;97:307–9).
The problem, said Keith A. Baggerly, Ph.D., and Kevin R. Coombes, Ph.D., lies not in the fundamental concept—that cancer-shed proteins in serum may be able to identify patients who have even very early-stage cancer—but in the way the data sets were processed in both the 2003 study and the original 2002 National Cancer Institute (NCI) study upon which it was based.
“We're not saying proteomics doesn't work,” Dr. Baggerly said in an interview. “It may very well work. But these data sets can't be used to say this approach works.”
The method involves using mass spectroscopy to display proteins in serum as a series of peaks and valleys of varying strength. A computer-driven mathematical algorithm finds unique patterns expressed in the serum of patients with the disease. Several researchers are investigating proteomics' application in ovarian cancer, using different algorithms and spectrometers. All of the decoding work is being performed on three publicly available sets of spectral data, which were processed as part of the original proof-of-concept study by NCI researchers led by Emmanuel F. Petricoin III, M.D. (Lancet 2002;359:572–7).
Dr. Baggerly and Dr. Coombes reanalyzed the data used in a 2003 paper by Wei Zhu, Ph.D., and associates, of the State University of New York at Stony Brook. By using the same NCI data sets—samples from women with ovarian cancer, women with benign ovarian cysts, and healthy controls—but a new protein-recognition pattern, Dr. Zhu achieved perfect discrimination (100% sensitivity, 100% specificity) of patients with ovarian cancer, including early-stage disease, from normal controls (PNAS 2003;100:14666–71). Dr. Zhu's results were even better than those originally reported by Dr. Petricoin and colleagues in their 2002 study.
When Dr. Baggerly reanalyzed the Zhu data, he was unable to arrive at the same results. The Zhu study identified a pattern involving 18 protein peaks that separated controls from cancers. For Dr. Baggerly, the pattern resulted in significant accuracy in the first data set, which contained serum from all three groups, but not in the second data set, which contained only serum from cancer patients and healthy controls.
In the second data set, 13 of the 18 peak differences changed signs—that is, peaks associated with cancer in the first group were associated with controls in the second group, and peaks first associated with controls switched to cancers. “This reversal isn't consistent with a persistent difference between cancer samples and control samples,” Dr. Baggerly said.
The researchers then chose 18 random protein peaks from the same regions of spectral data as Dr. Zhu's peaks. The random peaks separated cancer samples from controls up to 56% of the time, depending on the strength of the signals used. Because the pattern of protein expression was inconsistent between the data sets, they concluded, the values did not represent biologically important changes in the serum of cancer patients.
The problem, Dr. Baggerly asserts, is that the serum samples were processed in a nonrandomized way when the spectra were acquired in the initial study by Dr. Petricoin and his colleagues.
“They ran all the controls on one day and all the cancers on the next day,” Dr. Baggerly said. “This is the worst kind of design when you are using a machine that can be subject to external factors,” such as changes in calibration or mechanical breakdown.
In fact, he said, a June 2004 study in which Dr. Petricoin participated also suffered from just such a problem (Endocr. Relat. Cancer 2004;11:163–78). This study used a different mass spectrometer, which began to break down on day 3 of running the samples. In a letter to the editor, Dr. Petricoin admitted the problem, but said, “We cannot detect whether the cancer data acquired on the previous day were convincingly negatively affected by the spectrometer failure.”
Dr. Baggerly contends that a better design involving randomizing sample processing would allow separation of differences due to biology from those due to external factors.
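As a rough illustration of why run-order randomization matters, here is a small simulation sketch with made-up numbers, not data from any of the studies discussed: spectra with no biological signal acquire an instrument drift on the second run day, and a simple classifier appears highly accurate when controls and cancers are run on separate days but performs near chance once run order is randomized.

```python
# Simulation sketch: when run day is confounded with disease status, a classifier
# can "detect cancer" using only instrument drift; randomizing run order removes
# that shortcut. All numbers below are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_peaks, drift = 100, 200, 0.5

def make_spectra(confounded):
    labels = np.array([0] * n_per_group + [1] * n_per_group)  # 0 control, 1 cancer
    x = rng.normal(size=(2 * n_per_group, n_peaks))           # no biological signal
    if confounded:
        day = labels.copy()                                   # controls day 1, cancers day 2
    else:
        day = rng.permutation(labels)                         # run order randomized
    x[day == 1] += drift                                      # machine drift on day 2
    return x, labels

def holdout_accuracy(x, y, test_frac=0.3):
    """Nearest-centroid classifier evaluated on a random train/test split."""
    idx = rng.permutation(len(y))
    n_test = int(test_frac * len(y))
    test, train = idx[:n_test], idx[n_test:]
    c0 = x[train][y[train] == 0].mean(axis=0)
    c1 = x[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(x[test] - c1, axis=1) <
            np.linalg.norm(x[test] - c0, axis=1)).astype(int)
    return (pred == y[test]).mean()

for design in (True, False):
    x, y = make_spectra(confounded=design)
    print(f"{'confounded' if design else 'randomized'} run order: "
          f"accuracy = {holdout_accuracy(x, y):.2f}")
```

The nearest-centroid classifier and the drift size are arbitrary; any method that can exploit a batch shift would show the same effect, which is the point of randomizing sample processing.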
Dr. Baggerly's failure to reproduce the results does not surprise Dr. Petricoin and his colleague, Lance A. Liotta, M.D., who participated in the 2002 and 2004 studies; their commentary appears in the same journal. Each of the data sets, all of which are available online without restriction, was generated with different machines and methods in order to test those machines and methods.
“We would be surprised if the experimentally designed process changes between these two studies did not result in altered spectra. In fact, a goal of these experiments was to study the spectral alterations produced by changing the process,” they said.
Gene Predicts MRSA-Related Pulmonary Complications
Children with methicillin-resistant Staphylococcus aureus infections are more likely to show abnormal pulmonary imaging than those with methicillin-susceptible S. aureus infections.
The presence of genes encoding Panton-Valentine leukocidin (PVL), which are much more common in methicillin-resistant S. aureus (MRSA), may be a factor in MRSA-associated pulmonary complications, said Blanca Gonzalez, M.D., and her colleagues.
The gene has been associated with severe necrotizing pneumonia and osteomyelitis, said Dr. Gonzalez of the Texas Children's Hospital, Houston, and her associates (Clin. Infect. Dis. 2005;41:583–90).
The investigators examined pulmonary complications in 70 pediatric patients with MRSA and 43 with methicillin-susceptible S. aureus (MSSA). Pulmonary complications were much more common in the MRSA group than the MSSA group (67% vs. 28%). Two patients with MRSA died, as did one with MSSA.
Of the 47 MRSA patients with abnormal pulmonary imaging, 21 (45%) received a primary diagnosis of pneumonia. Four of these had bacteremia; 14 had empyema; 3 had uncomplicated pneumonia with bacteremia; and 4 had lung abscess. A total of 20 patients (43%) received a primary diagnosis of osteomyelitis; most (85%) had bacteremia. Imaging showed atelectasis in four; eight had pneumonia (three with effusions); and four had pneumatoceles. Six patients had septic emboli, and the rest had multifocal air space disease or interstitial disease.
Patients with a primary diagnosis of pneumonia were significantly younger than those with other invasive MRSA disease (3.5 years vs. 10 years). Similarly, patients with MSSA who had a primary diagnosis of pneumonia were significantly younger than those with other invasive disease (7 months vs. 12 years). Only 10 patients with MSSA had pulmonary complications: 2 had a primary diagnosis of pneumonia and also had loculated empyema, 6 had bone or joint infections, and 2 had endocarditis.
Isolates from 103 children were tested for genes encoding PVL. All but one of the MRSA isolates were positive for PVL, compared with only 2 (26%) of the MSSA isolates. Among the 80 PVL-positive isolates, 51 came from children with abnormal chest radiographs, compared with 2 of the 23 PVL-negative isolates.
In an accompanying editorial, Jerome Etienne, M.D., argued for routine testing for PVL.
“Regardless of the localization of the infection, the presence of PVL appears to be associated with increased severity, ranging from cutaneous infection requiring surgical drainage to severe chronic osteomyelitis and deadly necrotizing pneumonia,” said Dr. Etienne of the National Reference Center of Staphylococcus, Lyon, France. “With the increased prevalence of community-acquired MRSA, which usually contain the genes encoding PVL, it is important that clinical laboratories test for detection of this toxin in routine S. aureus isolates” (Clin. Infect. Dis. 2005;41:591–93).
West Nile Outbreak in Gulf States Seen as Unlikely
A mosquito-eradication program is underway in the storm-ravaged Gulf Coast states, and federal officials hope that such an effort, combined with the hurricane's impact on the vector cycle, will prevent a surge in West Nile virus and other mosquito-borne diseases.
The aerial spray program began in mid-September and will be continued as long as it is needed to control mosquito populations, according to the Louisiana State Department of Health.
Although the huge expanses of standing floodwaters are conducive to a mosquito population explosion, the total disruption of the region's normal ecology may discourage mosquito-borne epidemics, said Jennifer Morcone, a spokesperson for the Centers for Disease Control and Prevention.
“Historically, we have not seen increases in these diseases after a storm like this,” she said. “You need a bird population to fuel the transmission cycle and, right now, the bird population in these areas is almost nonexistent.”
However, she said, the CDC has deployed entomologists to monitor mosquito populations and to assist with vector control in the affected areas.
The Louisiana Department of Health and Hospitals—in coordination with the Louisiana Department of Agriculture and Forestry, the CDC, the Agency for Toxic Substances and Disease Registry, the U.S. Environmental Protection Agency, the Department of Defense, and local mosquito control districts—is implementing a plan to reduce mosquitoes and flies in the areas affected by Hurricane Katrina.
The health and hospitals department had developed a management plan in anticipation of the hatching of mosquitoes and flies brought on by the flooding in the area. Mosquito control is needed to protect public health from the nuisance and the diseases mosquitoes transmit; flies will also be monitored. The plan will continue, based on field monitoring of mosquitoes and flies in the region.
People face two types of increased risk for mosquito-borne diseases in the region: a rise in the number of mosquitoes, and increased exposure to the insects. “People are spending a lot more time outside, and even when inside, they may have broken windows and screens that let mosquitoes into the house,” Ms. Morcone said.
It's too soon to predict what impact Hurricane Katrina will have on West Nile virus in the Gulf region, she added. “What we do know is that the virus did exist in every one of these states before the storm and that it is still there. We want people to take precautions against exposure, and we will facilitate that as much as possible.”
As of early September, 821 cases of West Nile virus—of which 18 cases were fatal—had been reported in the United States, marking this as the slowest West Nile season since 2002. By early September 2002, 737 cases had been reported, with 35 fatalities. Numbers soared in 2003 to almost 1,900, with 37 fatalities, and stayed high last year, with 1,191 cases and 30 fatalities.
As in previous years, the highest number of cases (268) occurred in California. Of those, 7 have been fatal; 93 showed neurologic complications (West Nile meningitis, encephalitis, or myelitis). Other hard-hit states include South Dakota (138 cases; 1 fatality; 25 neuroinvasive illnesses); Illinois (89 cases; 1 fatality; 52 neuroinvasive); and Louisiana (52 cases; 4 fatalities; 40 neuroinvasive). Texas has reported only 27 cases, but almost all of them (24) were neuroinvasive; there was 1 fatality.
The reason for the decline this year is unclear, Ms. Morcone said. “If there's one thing we know about West Nile, it's that there's no such thing as a typical season. We have seen areas with large epidemics one year and very small case counts the next. Weather and ecology are among the factors that play a part in West Nile prevalence.”
Although case counts are relatively low, physicians should still stress prevention to their patients. Repellents with DEET (N,N-diethyl-m-toluamide) are most effective for those who are outdoors for extended periods. Repellents with oil of lemon eucalyptus or picaridin are probably sufficient for “backyard exposure,” she said.
West Nile virus has also been identified in blood from 163 blood donors, according to the CDC. Of these donors, 3 subsequently developed West Nile neuroinvasive illness, 38 developed West Nile fever, and 3 developed other illnesses.
Pain Expectations Linked to Pain Perception
Decreased expectation of pain diminishes pain perception by 28%—more than a shot of morphine.
Not only do people who expect less pain report feeling less pain, but their brains respond similarly, with functional MRI (fMRI) showing less activation of pain-related areas, according to Tetsuo Koyama, M.D., Ph.D., and colleagues at Wake Forest University, Winston-Salem, N.C.
The team trained 10 healthy volunteers (aged 26–46 years) to associate tones of different durations with increasingly painful heat stimulation (Proc. Natl. Acad. Sci. 2005;102:12950–5).
Subjects then underwent 30 trials that were monitored with fMRI. About a third of the time, the researchers mixed the signals, so that participants were expecting one temperature, but received a different one. When they expected moderate pain but received severe pain, all 10 subjects reported decreased pain intensity. Findings from fMRIs supported these perceptions, Dr. Koyama and associates said.
Expectations of decreased pain significantly reduced activation in pain intensity-related brain regions; severe pain delivered when subjects expected moderate pain evoked activation patterns similar to those of expected moderate pain.
“These data provide a neural mechanism that can, in part, explain the positive impact of optimism in chronic disease states,” the investigators wrote.