Influence of house‐staff experience on teaching‐hospital mortality: The “July Phenomenon” revisited

Alan J. Forster, MD, MSc, FRCPC

The July Phenomenon is a commonly used term referring to poor hospital‐patient outcomes when inexperienced house‐staff start their postgraduate training in July. In addition to being an interesting observation, the validity of the July Phenomenon has policy implications for teaching hospitals and residency training programs.

Twenty‐three published studies have tried to determine whether the arrival of new house‐staff is associated with increased patient mortality (see Supporting Appendix A in the online version of this article).1–23 While those studies make an important attempt to determine the validity of the July Phenomenon, they have some notable limitations. All but four of these studies2, 4, 6, 16 limited their analysis to patients with a specific diagnosis, within a particular hospital unit, or treated by a particular specialty. Many studies limited data to those from a single hospital.1, 3, 4, 10, 11, 14, 15, 20, 22 Nine studies did not include data from the entire year in their analyses,4, 6, 7, 10, 13, 15–17, 23 and one did not include data from multiple years.22 One study conducted its analysis on death counts alone and did not account for the number of hospitalized people at risk.6 Finally, the analysis of several studies controlled for no severity of illness markers,6, 10, 21 whereas that from several other studies contained only crude measures of comorbidity and severity of illness.14

In this study, we analyzed data at our teaching hospital to determine if evidence exists for the July Phenomenon at our center. We used a highly discriminative and well‐calibrated multivariate model to calculate the risk of dying in hospital, and quantify the ratio of observed to expected number of hospital deaths. Using this as our outcome statistic, we determined whether or not our hospital experiences a July Phenomenon.

METHODS

This study was approved by The Ottawa Hospital (TOH) Research Ethics Board.

Study Setting

TOH is a tertiary‐care teaching hospital with two inpatient campuses. The hospital operates within a publicly funded health care system, serves a population of approximately 1.5 million people in Ottawa and Eastern Ontario, treats all major trauma patients for the region, and provides most of the oncological care in the region.

TOH is the primary medical teaching hospital at the University of Ottawa. In 2010, there were 197 residents starting their first year of postgraduate training in one of 29 programs.

Inclusion Criteria

The study period extended from April 15, 2004 to December 31, 2008. We used this start time because our hospital switched to new coding systems for procedures and diagnoses in April 2002. Since these new coding systems contributed to our outcome statistic, we allowed a long interval (ie, two years) for coding patterns to stabilize, ensuring that any changes seen were not a function of coding practices. We ended our study in December 2008 because this was the last date of complete data at the time we started the analysis.

We included all medical, surgical, and obstetrical patients admitted to TOH during this time except those who were: younger than 15 years old; transferred to or from another acute care hospital; or obstetrical patients hospitalized for routine childbirth. These patients were excluded because they were not part of the multivariate model that we used to calculate risk of death in hospital (discussed below).24 These exclusions accounted for 25.4% of all admissions during the study period (36,820 younger than 15 years old; 12,931 transferred to or from the hospital; and 44,220 uncomplicated admissions for childbirth).

All data used in this study came from The Ottawa Hospital Data Warehouse (TOHDW). This is a repository of clinical, laboratory, and administrative data originating from the hospital's major operational information systems. TOHDW contains information on patient demographics and diagnoses, as well as procedures and patient transfers between different units or hospital services during the admission.

Primary Outcome: Ratio of Observed to Expected Number of Deaths per Week

For each study day, we measured the number of hospital deaths from the patient registration table in TOHDW. This statistic was collated for each week to ensure numeric stability, especially in our subgroup analyses.

We calculated the weekly expected number of hospital deaths using an extension of the Escobar model.24 The Escobar model is a logistic regression model estimating the probability of death in hospital; it was derived and internally validated on almost 260,000 hospitalizations at 17 hospitals in the Kaiser Permanente Health Plan. It included six covariates measurable at admission: patient age; patient sex; admission urgency (ie, elective or emergent) and service (ie, medical or surgical); admission diagnosis; severity of acute illness as measured by the Laboratory‐based Acute Physiology Score (LAPS); and chronic comorbidities as measured by the COmorbidity Point Score (COPS). Hospitalizations were grouped by admission diagnosis. The final model had excellent discrimination (c‐statistic 0.88) and calibration (P value of the Hosmer–Lemeshow statistic for the entire cohort, 0.66). This model was externally validated in our center with a c‐statistic of 0.901.25
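As an aside on the discrimination statistics quoted above: the c‐statistic is the probability that, for a random pair of patients in which one died and one survived, the model assigned the higher predicted risk to the patient who died. The short sketch below implements this standard pairwise definition; it is illustrative and is not code from the study.

```python
def c_statistic(predictions):
    """Concordance (c-statistic) for a binary outcome.

    predictions: list of (predicted_risk, died) tuples, where died is
    True if the patient died in hospital.  O(n^2) pairwise version,
    fine for illustration.
    """
    died = [risk for risk, d in predictions if d]
    survived = [risk for risk, d in predictions if not d]
    concordant = ties = 0.0
    for r_died in died:
        for r_surv in survived:
            if r_died > r_surv:
                concordant += 1   # death ranked higher: concordant pair
            elif r_died == r_surv:
                ties += 1         # ties conventionally count as half
    return (concordant + 0.5 * ties) / (len(died) * len(survived))

# A perfectly discriminating model scores 1.0; chance performance scores 0.5.
```

A c‐statistic of 0.88, as reported for the Escobar model, therefore means the model correctly ranks 88% of such death/survival pairs.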

We extended the Escobar model in several ways (Wong et al., Derivation and validation of a model to predict the daily risk of death in hospital, 2010, unpublished work). First, we modified it into a survival (rather than a logistic) model so it could estimate a daily probability of death in hospital. Second, we included the same covariates as Escobar except that we expressed LAPS as a time‐dependent covariate (meaning that the model accounted for changes in its value during the hospitalization). Finally, we included other time‐dependent covariates including: admission to intensive care unit; undergoing significant procedures; and awaiting long‐term care. This model had excellent discrimination (concordance probability of 0.895, 95% confidence interval [CI] 0.889–0.902) and calibration.

We used this survival model to estimate the daily risk of death for all patients in the hospital each day. Summing these risks over hospital patients on each day returned the daily number of expected hospital deaths. This was collated per week.

The outcome statistic for this study was the ratio of the observed to expected weekly number of hospital deaths. Ratios exceeding 1 indicate that more deaths were observed than were expected (given the distribution of important covariates in those people during that week). This outcome statistic has several advantages. First, it accounts for the number of patients in the hospital each day. This is important because the number of hospital deaths will increase as the number of people in hospital increases. Second, it accounts for the severity of illness in each patient on each hospital day. This accounts for daily changes in risk of patient death, because the calculation of the expected number of deaths per day was done using a multivariate survival model that included time‐dependent covariates. Therefore, each individual's predicted hazard of death (which was summed over the entire hospital to calculate the total expected number of deaths in hospital each day) took into account the latest values of these covariates. Previous analyses only accounted for risk of death at admission.
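This collation can be made concrete with a short sketch. The code below is an illustrative reimplementation, not the study's code; the input layout (one record per hospital day, carrying the observed death count and the model‐estimated daily death risks of all patients in hospital that day) is a hypothetical simplification of the TOHDW extract.

```python
from collections import defaultdict
from datetime import date

def weekly_oe_ratios(daily_records):
    """Collate daily observed deaths and summed predicted risks into
    weekly observed-to-expected ratios, keyed by (ISO year, ISO week)."""
    observed = defaultdict(int)
    expected = defaultdict(float)
    for day, deaths, predicted_risks in daily_records:
        week = tuple(day.isocalendar())[:2]     # (ISO year, ISO week)
        observed[week] += deaths                # observed deaths that day
        expected[week] += sum(predicted_risks)  # expected deaths = sum of daily risks
    return {wk: observed[wk] / expected[wk]
            for wk in expected if expected[wk] > 0}

# Two hospital days in the same ISO week: 4 observed vs 3.0 expected deaths.
ratios = weekly_oe_ratios([
    (date(2008, 1, 7), 3, [0.5, 0.5, 1.0]),
    (date(2008, 1, 8), 1, [0.5, 0.5]),
])
```

Summing each patient's model‐estimated daily risk gives the expected count; a weekly ratio above 1 flags more deaths than the case mix predicted.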

Expressing Physician Experience

The latent measure26 in all July Phenomenon studies is collective house‐staff physician experience. This is quantified by a surrogate date variable in which July 1 (the date that new house‐staff start their training in North America) represents minimal experience and June 30 represents maximal experience. We expressed collective physician experience on a scale from 0 (minimum experience) on July 1 to 1 (maximum experience) on June 30. A similar approach has been used previously13 and has advantages over the other methods used to capture collective house‐staff experience. In the stratified, incomplete approach,4–7, 9–11, 13, 15–17 periods with inexperienced house‐staff (eg, July and August) are grouped together and compared to times with experienced house‐staff (eg, May and June), while ignoring all other data. The specification of cut‐points for this stratification is arbitrary, and the method ignores large amounts of data. In the stratified, complete approach, periods with inexperienced house‐staff (eg, July and August) are grouped together and compared to all other times of the year.8, 12, 14, 18–20, 22 This is potentially less biased because no data are lost. However, the cut‐point for determining when house‐staff transition from inexperienced to experienced is arbitrary, and the model assumes that the transition is sudden. This is suboptimal because acquisition of experience is a gradual, constant process.

The pattern by which collective physician experience changes between July 1st and June 30th is unknown. We therefore expressed this evolution using five different patterns varying from a linear change to a natural logarithmic change (see Supporting Appendix B in the online version of this article).
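For illustration, the experience scale and the general shape of these patterns can be sketched as below. The exact curves are defined in Supporting Appendix B, which is not reproduced here, so the five transformations in this snippet (particularly the scaling of the logarithmic form) are assumptions.

```python
import math
from datetime import date

def academic_fraction(d):
    """Fraction of the academic year elapsed: 0.0 on July 1, 1.0 on June 30."""
    start_year = d.year if (d.month, d.day) >= (7, 1) else d.year - 1
    start = date(start_year, 7, 1)
    end = date(start_year + 1, 6, 30)
    return (d - start).days / (end - start).days

# Hypothetical functional forms mapping the linear fraction t in [0, 1]
# onto the five experience patterns (each fixed so 0 -> 0 and 1 -> 1).
PATTERNS = {
    "linear": lambda t: t,
    "square": lambda t: t ** 2,
    "square_root": lambda t: math.sqrt(t),
    "cubic": lambda t: t ** 3,
    "natural_log": lambda t: math.log1p((math.e - 1) * t),
}

def experience(d, pattern="linear"):
    """Collective house-staff experience on the 0-to-1 scale for date d."""
    return PATTERNS[pattern](academic_fraction(d))
```

All five patterns agree at the end points (0 on July 1, 1 on June 30) and differ only in how quickly experience is assumed to accrue in between.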

Analysis

We first tested for autocorrelation in our outcome variable using Ljung‐Box statistics at lags 6 and 12 in PROC ARIMA (SAS 9.2, Cary, NC). If significant autocorrelation was absent, linear regression modeling was used to associate the ratio of the observed to expected number of weekly deaths (the outcome variable) with collective first‐year physician experience (the predictor variable). Time‐series methodology was to be used if significant autocorrelation was present.
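The original analysis ran in SAS; purely to illustrate the two steps (an autocorrelation check, then a least‐squares fit of the weekly ratio on the experience score), here is a minimal pure‐Python sketch. The resulting Q statistic would be compared against a chi‐squared distribution with `max_lag` degrees of freedom, a step omitted here.

```python
def ljung_box_q(series, max_lag):
    """Ljung-Box Q statistic: Q = n(n+2) * sum_k r_k^2 / (n - k),
    where r_k is the lag-k sample autocorrelation."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    denom = sum(d * d for d in dev)
    total = 0.0
    for k in range(1, max_lag + 1):
        r_k = sum(dev[i] * dev[i + k] for i in range(n - k)) / denom
        total += r_k * r_k / (n - k)
    return n * (n + 2) * total

def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b
```

In the study's setting, `x` would be the weekly collective‐experience score under one of the five patterns and `y` the weekly observed‐to‐expected ratio; absent significant autocorrelation, the fitted slope `b` estimates the change in the ratio from minimal to maximal experience.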

In our baseline analysis, we included all hospitalizations together. In stratified analyses, we categorized hospitalizations by admission status (emergent vs elective) and admission service (medicine vs surgery).

RESULTS

Between April 15, 2004 and December 31, 2008, The Ottawa Hospital had a total of 152,017 inpatient admissions and 107,731 same day surgeries (an annual rate of 32,222 and 22,835, respectively; an average daily rate of 88 and 63, respectively) that met our study's inclusion criteria. These 259,748 encounters included 164,318 people. Table 1 provides an overall description of the study population.

Table 1. Description of the Study Cohort

Patients/hospitalizations, n: 164,318/259,748
Deaths in‐hospital, n (%): 7,679 (3.0)
Length of admission in days, median (IQR): 2 (1–6)
Male, n (%): 124,848 (48.1)
Age at admission, median (IQR): 60 (46–74)
Admission type, n (%):
  Elective surgical: 136,406 (52.5)
  Elective nonsurgical: 20,104 (7.7)
  Emergent surgical: 32,046 (12.3)
  Emergent nonsurgical: 71,192 (27.4)
Elixhauser score, median (IQR): 0 (0–4)
LAPS at admission, median (IQR): 0 (0–15)
At least one admission to intensive care unit, n (%): 7,779 (3.0)
At least one alternative level of care episode, n (%): 6,971 (2.7)
At least one PIMR procedure, n (%): 47,288 (18.2)
First PIMR score,* median (IQR): −2 (−5 to 2)

Abbreviations: IQR, interquartile range; LAPS, Laboratory‐based Acute Physiology Score; PIMR, Procedural Independent Mortality Risk (van Walraven et al., The Procedural Independent Mortality Risk [PIMR] score can use administrative data to quantify the independent risk of death in hospital after procedures, 2010, unpublished work).

* Among admissions where at least one PIMR procedure was performed during the hospitalization.

Weekly Deaths: Observed, Expected, and Ratio

Figure 1A presents the observed weekly number of deaths during the study period. There was an average of 31 deaths per week (range 15–51). Some large fluctuations in the weekly number of deaths were seen; in 2007, for example, the number of observed deaths went from 21 in week 13 up to 46 in week 15. However, no obvious seasonal trends in the observed weekly number of deaths were seen (Figure 1A, heavy line), nor were trends between years obvious.

Figure 1
The weekly number of observed deaths (top plot) and expected deaths (middle plot) for each week of the year (horizontal axis). The bottom plot presents the ratio of weekly observed to expected number of deaths. Each plot presents results for individual study years (light lines) as well as an overall summary for all years (heavy line). The first week of July (when new house‐staff start their training) is represented by the vertical line in the middle of each plot.

Figure 1B presents the expected weekly number of deaths during the study period. The expected weekly number of deaths averaged 29.6 (range 22.2–38.7). The expected weekly number of deaths was notably less variable than the observed number of deaths. However, important variations in the expected number of deaths were seen; for example, in 2005, the expected number of deaths increased from 24.1 in week 41 to 29.6 in week 44. Again, we saw no obvious seasonal trends in the expected weekly number of deaths (Figure 1B, heavy line).

Figure 1C illustrates the ratio of observed to the expected weekly number of deaths. The average observed to expected ratio slightly exceeded unity (1.05) and ranged from 0.488 (week 24, in 2008) to 1.821 (week 51, in 2008). We saw no obvious seasonal trends in the ratio of the observed to expected number of weekly deaths. In addition, obvious trends in this ratio were absent over the study period.

Association Between House‐Staff Experience and Death in Hospital

We found no evidence of autocorrelation in the ratio of observed to expected weekly number of deaths. The ratio of observed to expected number of hospital deaths was not significantly associated with house‐staff physician experience (Table 2). This conclusion did not change regardless of which house‐staff physician experience pattern was used in the linear model (Table 2). In addition, our analysis found no significant association between physician experience and patient mortality when analyses were stratified by admission service or admission status (Table 2).

Table 2. Absolute Differences in the Ratio of Observed to Expected Number of Hospital Deaths from Minimal to Maximal Experience

All patients: Linear −0.03 (−0.11, 0.06); Square −0.02 (−0.10, 0.07); Square root −0.04 (−0.15, 0.07); Cubic −0.01 (−0.10, 0.08); Natural logarithm −0.05 (−0.16, 0.07)
Admitting service
  Medicine: Linear 0.0004 (−0.09, 0.10); Square 0.01 (−0.08, 0.10); Square root −0.01 (−0.13, 0.11); Cubic 0.02 (−0.07, 0.11); Natural logarithm −0.03 (−0.15, 0.09)
  Surgery: Linear −0.10 (−0.30, 0.10); Square −0.11 (−0.30, 0.08); Square root −0.12 (−0.37, 0.14); Cubic −0.11 (−0.31, 0.08); Natural logarithm −0.09 (−0.35, 0.17)
Admission status
  Elective: Linear −0.09 (−0.53, 0.35); Square −0.10 (−0.51, 0.32); Square root −0.11 (−0.66, 0.44); Cubic −0.10 (−0.53, 0.33); Natural logarithm −0.11 (−0.68, 0.45)
  Emergent: Linear −0.02 (−0.11, 0.07); Square −0.01 (−0.09, 0.08); Square root −0.03 (−0.14, 0.08); Cubic 0.003 (−0.09, 0.09); Natural logarithm −0.04 (−0.16, 0.08)

NOTE: This table summarizes the association between collective physician experience and the weekly ratio of observed to expected number of hospital deaths. Each row indicates the patient population included in the analysis; the five patterns of collective house‐staff experience (illustrated in Supporting Appendix B in the online version of this article) are listed within each row. Each entry presents the absolute change in the weekly ratio of observed to expected number of hospital deaths (with its 95% confidence interval [CI] in parentheses) when experience changes from the minimal to the maximal value. For example, in the model containing all patients and expressing house‐staff experience in a linear pattern, an increase in house‐staff experience from 0 to 1 was associated with an absolute decrease in the ratio of observed to expected numbers of deaths per week of 0.02 (or 2%). Negative values indicate that patient outcomes improve (ie, the ratio of observed to expected number of hospital deaths decreases) with an increase in house‐staff experience.

DISCUSSION

It is natural to suspect that physician experience influences patient outcomes. The commonly discussed July Phenomenon explores changes in teaching‐hospital patient outcomes by time of the academic year. This serves as an ecological surrogate for the latent variable of overall house‐staff experience. Our study used a detailed outcome (the ratio of observed to expected number of weekly hospital deaths) that adjusted for patient severity of illness. We also modeled collective physician experience using a broad range of patterns. We found no significant variation in mortality rates during the academic year; therefore, the risk of death in hospital does not vary by house‐staff experience at our hospital. There is no evidence of a July Phenomenon for mortality at our center.

We were not surprised that the arrival of inexperienced house‐staff did not significantly change patient mortality, for several reasons. First year residents are but one group of treating physicians in a teaching hospital. They are surrounded by many other, more experienced physicians who also contribute to patient care and outcomes. Given these other physicians, the influence of the relatively small number of first year residents on patient outcomes will be minimized. In addition, the role that these more experienced physicians play in patient care will vary with the experience and ability of the residents. The influence of new and inexperienced house‐staff in July will be blunted by an increased role played by attending staff, fellows, and more experienced house‐staff at that time.

Our study was a methodologically rigorous examination of the July Phenomenon. We used a reliable outcome statistic (the ratio of observed to expected weekly number of hospital deaths) that was created with a validated, discriminative, and well‐calibrated model predicting risk of death in hospital (Wong et al., Derivation and validation of a model to predict the daily risk of death in hospital, 2010, unpublished work). This statistic is inherently understandable and controlled for patient severity of illness. In addition, our study included a very broad and inclusive group of patients over almost five years at two campuses.

Twenty‐three other studies have quantitatively sought a July Phenomenon for patient mortality (see Supporting Appendix A in the online version of this article). The studies contained a broad assortment of research methodologies, patient populations, and analytical methodologies. Nineteen of these studies (83%) found no evidence of a July Phenomenon for teaching‐hospital mortality. In contrast, two of these studies found notable adjusted odds ratios for death in hospital (1.41 and 1.34) in patients undergoing either general surgery13 or complex cardiovascular surgery,19 respectively. Blumberg22 also found an increased risk of death in surgical patients in July, but that study used indirect standardized mortality ratios as the outcome statistic and was based on only 139 cases at Maryland teaching hospitals in 1984. Only Jen et al.16 showed an increased risk of hospital death with new house‐staff in a broad patient population. However, this study was restricted to two arbitrarily chosen days (one before and one after house‐staff change‐over) and showed an increased risk of hospital death (adjusted OR 1.05, 95% CI 1.00–1.15) whose borderline statistical significance could have been driven by the large sample size of the study (n = 299,741).

Therefore, the vast majority of data (including those presented in our analyses) show that the risk of teaching‐hospital death does not significantly increase with the arrival of new house‐staff. This prompts the question as to why the July Phenomenon is commonly presented in popular media as a proven fact.27–33 We believe this is likely because the concept of the July Phenomenon is understandable and has a rather morbid attraction to people, both inside and outside of the medical profession. Given the large amount of data refuting the true existence of a July Phenomenon for patient mortality (see Supporting Appendix A in the online version of this article), we believe that this term should be used only as an example of an interesting idea that is refuted by a proper analysis of the data.

Several limitations of our study are notable. First, our analysis is limited to a single center, albeit with two hospital campuses. However, ours is one of the largest teaching centers in Canada with many new residents each year. Second, we only examined the association of physician experience with hospital mortality. While it is possible that physician experience significantly influences other patient outcomes, mortality is an important and reliably tallied statistic that is used as the primary outcome in most July Phenomenon studies. Third, we excluded approximately a quarter of all hospitalizations from the study. These exclusions were necessary because the Escobar model does not apply to these people and therefore cannot be used to predict their risk of death in hospital. However, the vast majority of excluded patients (those less than 15 years old, and women admitted for routine childbirth) have a very low risk of death (the former because they are almost exclusively newborns, and the latter because the risk of maternal death during childbirth is very low). Since these people will contribute very little to either the expected or observed number of deaths, their exclusion will do little to threaten the study's validity. The remaining excluded patients, those transferred to or from other hospitals (n = 12,931), make up a small proportion of the total sampling frame (5% of admissions). Fourth, our study did not identify any significant association between house‐staff experience and patient mortality (Table 2). However, the confidence intervals around our estimates are wide enough, especially in some subgroups such as patients admitted electively, that important changes in patient mortality with house‐staff experience cannot be excluded. For example, although our study found that a decrease in the ratio of observed to expected number of deaths exceeding 30% is very unlikely, a decrease of up to 30% (the lower limit of the confidence interval in Table 2) remains possible. By the same logic, the ratio could also increase by up to 10% (Table 2). Finally, we did not directly measure individual physician experience. New residents can vary extensively in their individual experience and ability. Incorporating individual measures of physician experience and ability would let us measure more reliably the association of new residents with patient outcomes. Without this, we had to rely on an ecological measure of physician experience, namely calendar date. Again, this method is an industry standard, since all studies quantify physician experience ecologically by date (see Supporting Appendix A in the online version of this article).

In summary, our data, consistent with most studies on this topic, show that the risk of death in teaching hospitals does not change with the arrival of new house‐staff.

REFERENCES
  1. Rich EC, Gifford G, Dowd B. The effects of scheduled intern rotation on the cost and quality of teaching hospital care. Eval Health Prof. 1994;17:259–272.
  2. Rich EC, Hillson SD, Dowd B, Morris N. Specialty differences in the “July Phenomenon” for Twin Cities teaching hospitals. Med Care. 1993;31:73–83.
  3. Rich EC, Gifford G, Luxenberg M, Dowd B. The relationship of house staff experience to the cost and quality of inpatient care. JAMA. 1990;263:953–957.
  4. Buchwald D, Komaroff AL, Cook EF, Epstein AM. Indirect costs for medical education. Is there a July phenomenon? Arch Intern Med. 1989;149:765–768.
  5. Alshekhlee A, Walbert T, DeGeorgia M, Preston DC, Furlan AJ. The impact of accreditation council for graduate medical education duty hours, the July phenomenon, and hospital teaching status on stroke outcomes. J Stroke Cerebrovasc Dis. 2009;18:232–238.
  6. Aylin P, Majeed FA. The killing season—fact or fiction? BMJ. 1994;309:1690.
  7. Bakaeen FG, Huh J, LeMaire SA, et al. The July effect: impact of the beginning of the academic cycle on cardiac surgical outcomes in a cohort of 70,616 patients. Ann Thorac Surg. 2009;88:70–75.
  8. Barry WA, Rosenthal GE. Is there a July phenomenon? The effect of July admission on intensive care mortality and length of stay in teaching hospitals. J Gen Intern Med. 2003;18:639–645.
  9. Bruckner TA, Carlo WA, Ambalavanan N, Gould JB. Neonatal mortality among low birth weight infants during the initial months of the academic year. J Perinatol. 2008;28:691–695.
  10. Claridge JA, Schulman AM, Sawyer RG, Ghezel‐Ayagh A, Young JS. The “July Phenomenon” and the care of the severely injured patient: fact or fiction? Surgery. 2001;130:346–353.
  11. Dhaliwal AS, Chu D, Deswal A, et al. The July effect and cardiac surgery: the effect of the beginning of the academic cycle on outcomes. Am J Surg. 2008;196:720–725.
  12. Englesbe MJ, Fan ZH, Baser O, Birkmeyer JD. Mortality in Medicare patients undergoing surgery in July in teaching hospitals. Ann Surg. 2009;249:871–876.
  13. Englesbe MJ, Pelletier SJ, Magee JC, et al. Seasonal variation in surgical outcomes as measured by the American College of Surgeons–National Surgical Quality Improvement Program (ACS‐NSQIP). Ann Surg. 2007;246:456–465.
  14. Finkielman JD, Morales IJ, Peters SG, et al. Mortality rate and length of stay of patients admitted to the intensive care unit in July. Crit Care Med. 2004;32:1161–1165.
  15. Highstead RG, Johnson LC, Street JH, Trankiem CT, Kennedy SO, Sava JA. July—as good a time as any to be injured. J Trauma‐Injury Infect Crit Care. 2009;67:1087–1090.
  16. Jen MH, Bottle A, Majeed A, Bell D, Aylin P. Early in‐hospital mortality following trainee doctors' first day at work. PLoS ONE. 2009;4.
  17. Peets AD, Boiteau PJE, Doig CJ. Effect of critical care medicine fellows on patient outcome in the intensive care unit. Acad Med. 2006;81:S1–S4.
  18. Schroeppel TJ, Fischer PE, Magnotti LJ, Croce MA, Fabian TC. The “July Phenomenon”: is trauma the exception? J Am Coll Surg. 2009;209:378–384.
  19. Shuhaiber JH, Goldsmith K, Nashef SAM. Impact of cardiothoracic resident turnover on mortality after cardiac surgery: a dynamic human factor. Ann Thorac Surg. 2008;86:123–131.
  20. Smith ER, Butler WE, Barker FG. Is there a “July Phenomenon” in pediatric neurosurgery at teaching hospitals? J Neurosurg Pediatr. 2006;105:169–176.
  21. Soltau TD, Carlo WA, Gee J, Gould J, Ambalavanan N. Mortality and morbidity by month of birth of neonates admitted to an academic neonatal intensive care unit. Pediatrics. 2008;122:E1048–E1052.
  22. Blumberg MS. Measuring surgical quality in Maryland: a model. Health Aff. 1988;7:62–78.
  23. Inaba K, Recinos G, Teixeira PG, et al. Complications and death at the start of the new academic year: is there a July phenomenon? J Trauma‐Injury Infect Crit Care. 2010;68(1):19–22.
  24. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk‐adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232–239.
  25. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798–803.
  26. McCutcheon AL. Introduction: the logic of latent variables. In: Latent Class Analysis. Newbury Park, CA: Sage; 1987:5–10.
  27. July Effect. Wikipedia. Available at: http://en.wikipedia.org/wiki/July_effect. Accessed April 1, 2011.
  28. Study proves “killing season” occurs as new doctors start work. September 23, 2010. Herald Scotland. Available at: http://www.heraldscotland.com/news/health/study-proves-killing-season-occurs-as-new-doctors-start-work-1.921632. Accessed April 1, 2011.
  29. The “July effect”: worst month for fatal hospital errors, study finds. June 3, 2010. ABC News. Available at: http://abcnews.go.com/WN/WellnessNews/july-month-fatal-hospital-errors-study-finds/story?id=10819652. Accessed April 1, 2011.
  30. “Deaths rise” with junior doctors. September 22, 2010. BBC News. Available at: http://news.bbc.co.uk/2/hi/health/8269729.stm. Accessed April 1, 2011.
  31. Raloff J. July: when not to go to the hospital. June 2, 2010. Science News. Available at: http://www.sciencenews.org/view/generic/id/59865/title/July_When_not_to_go_to_the_hospital. Accessed April 1, 2011.
  32. July: a deadly time for hospitals. July 5, 2010. National Public Radio. Available at: http://www.npr.org/templates/story/story.php?storyId=128321489. Accessed April 1, 2011.
  33. Brayer T. Medical errors and patient safety: beware the “July effect.” June 4, 2010. Better Health. Available at: http://getbetterhealth.com/medical-errors-and-patient-safety-beware-of-the-july-effect/2010.06.04. Accessed April 1, 2011.
Journal of Hospital Medicine. 6(7):389–394.

The July Phenomenon is a commonly used term referring to poor hospital‐patient outcomes when inexperienced house‐staff start their postgraduate training in July. In addition to being an interesting observation, the validity of July Phenomenon has policy implications for teaching hospitals and residency training programs.

Twenty‐three published studies have tried to determine whether the arrival of new house‐staff is associated with increased patient mortality (see Supporting Appendix A in the online version of this article).123 While those studies make an important attempt to determine the validity of the July Phenomenon, they have some notable limitations. All but four of these studies2, 4, 6, 16 limited their analysis to patients with a specific diagnosis, within a particular hospital unit, or treated by a particular specialty. Many studies limited data to those from a single hospital.1, 3, 4, 10, 11, 14, 15, 20, 22 Nine studies did not include data from the entire year in their analyses,4, 6, 7, 10, 13, 1517, 23 and one did not include data from multiple years.22 One study conducted its analysis on death counts alone and did not account for the number of hospitalized people at risk.6 Finally, the analysis of several studies controlled for no severity of illness markers,6, 10, 21 whereas that from several other studies contained only crude measures of comorbidity and severity of illness.14

In this study, we analyzed data at our teaching hospital to determine if evidence exists for the July Phenomenon at our center. We used a highly discriminative and well‐calibrated multivariate model to calculate the risk of dying in hospital, and quantify the ratio of observed to expected number of hospital deaths. Using this as our outcome statistic, we determined whether or not our hospital experiences a July Phenomenon.

METHODS

This study was approved by The Ottawa Hospital (TOH) Research Ethics Board.

Study Setting

TOH is a tertiary‐care teaching hospital with two inpatient campuses. The hospital operates within a publicly funded health care system, serves a population of approximately 1.5 million people in Ottawa and Eastern Ontario, treats all major trauma patients for the region, and provides most of the oncological care in the region.

TOH is the primary medical teaching hospital at the University of Ottawa. In 2010, there were 197 residents starting their first year of postgraduate training in one of 29 programs.

Inclusion Criteria

The study period extended from April 15, 2004 to December 31, 2008. We used this start time because our hospital switched to new coding systems for procedures and diagnoses in April 2002. Since these new coding systems contributed to our outcome statistic, we used a very long period (ie, two years) for coding patterns to stabilize to ensure that any changes seen were not a function of coding patterns. We ended our study in December 2008 because this was the last date of complete data at the time we started the analysis.

We included all medical, surgical, and obstetrical patients admitted to TOH during this time except those who were: younger than 15 years old; transferred to or from another acute care hospital; or obstetrical patients hospitalized for routine childbirth. These patients were excluded because they were not part of the multivariate model that we used to calculate risk of death in hospital (discussed below).24 These exclusions accounted for 25.4% of all admissions during the study period (36,820less than 15 years old; 12,931transferred to or from the hospital; and 44,220uncomplicated admission for childbirth).

All data used in this study came from The Ottawa Hospital Data Warehouse (TOHDW). This is a repository of clinical, laboratory, and administrative data originating from the hospital's major operational information systems. TOHDW contains information on patient demographics and diagnoses, as well as procedures and patient transfers between different units or hospital services during the admission.

Primary Outcome: Ratio of Observed to Expected Number of Deaths per Week

For each study day, we measured the number of hospital deaths from the patient registration table in TOHDW. This statistic was collated for each week to ensure numeric stability, especially in our subgroup analyses.

We calculated the weekly expected number of hospital deaths using an extension of the Escobar model.24 The Escobar model is a logistic regression model estimating the probability of death in hospital; it was derived and internally validated on almost 260,000 hospitalizations at 17 hospitals in the Kaiser Permanente Health Plan. It included six covariates measurable at admission: patient age; patient sex; admission urgency (ie, elective or emergent) and service (ie, medical or surgical); admission diagnosis; severity of acute illness as measured by the Laboratory-based Acute Physiology Score (LAPS); and chronic comorbidities as measured by the COmorbidity Point Score (COPS). Hospitalizations were grouped by admission diagnosis. The final model had excellent discrimination (c-statistic 0.88) and calibration (P value of the Hosmer-Lemeshow statistic for the entire cohort 0.66). This model was externally validated at our center with a c-statistic of 0.901.25
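The c-statistic cited above measures discrimination: the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. As a rough illustration (this is not the authors' code; the function and variable names are invented), it can be computed directly from predicted risks and observed outcomes:

```python
def c_statistic(risks, died):
    """Concordance (c-) statistic for a set of predicted risks.

    For every (death, survivor) pair, count the pair as concordant when
    the patient who died was assigned the higher predicted risk; ties
    count as half-concordant. The result is the fraction of concordant
    pairs (0.5 = chance, 1.0 = perfect discrimination).
    """
    cases = [r for r, d in zip(risks, died) if d]         # patients who died
    controls = [r for r, d in zip(risks, died) if not d]  # survivors
    concordant = 0.0
    for c in cases:
        for s in controls:
            if c > s:
                concordant += 1.0
            elif c == s:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))
```

On this reading, a c-statistic of 0.88 means that 88% of such death/survivor pairs are ranked correctly by the model.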

We extended the Escobar model in several ways (Wong et al., Derivation and validation of a model to predict the daily risk of death in hospital, 2010, unpublished work). First, we modified it into a survival (rather than a logistic) model so it could estimate a daily probability of death in hospital. Second, we included the same covariates as Escobar, except that we expressed LAPS as a time-dependent covariate (meaning that the model accounted for changes in its value during the hospitalization). Finally, we included other time-dependent covariates: admission to the intensive care unit; undergoing significant procedures; and awaiting long-term care. This model had excellent discrimination (concordance probability of 0.895, 95% confidence interval [CI] 0.889-0.902) and calibration.

We used this survival model to estimate the daily risk of death for all patients in the hospital each day. Summing these risks over hospital patients on each day returned the daily number of expected hospital deaths. This was collated per week.
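This collation can be sketched as follows (a minimal illustration, not the study's actual code; the data structures, such as per-day lists of predicted risks, are assumed):

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_oe_ratio(daily_risks, daily_deaths):
    """Sum each day's predicted risks of death into an expected count,
    collate observed and expected counts by ISO week, and return the
    observed/expected ratio per week.

    daily_risks: dict mapping date -> list of predicted daily death
                 risks, one entry per patient in hospital that day.
    daily_deaths: dict mapping date -> observed number of deaths.
    """
    observed = defaultdict(float)
    expected = defaultdict(float)
    for day, risks in daily_risks.items():
        week = day.isocalendar()[:2]  # (ISO year, ISO week number)
        expected[week] += sum(risks)
        observed[week] += daily_deaths.get(day, 0)
    return {wk: observed[wk] / expected[wk]
            for wk in expected if expected[wk] > 0}
```

For example, a week in which two patients each carry a daily death risk of 0.5 yields an expected count of 7.0 over seven days; seven observed deaths that week would give a ratio of 1.0.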

The outcome statistic for this study was the ratio of the observed to expected weekly number of hospital deaths. Ratios exceeding 1 indicate that more deaths were observed than expected (given the distribution of important covariates in those people during that week). This outcome statistic has several advantages. First, it accounts for the number of patients in the hospital each day. This is important because the number of hospital deaths will increase as the number of people in hospital increases. Second, it accounts for the severity of illness in each patient on each hospital day. This captures daily changes in risk of patient death, because the expected number of deaths per day was calculated using a multivariate survival model that included time-dependent covariates. Therefore, each individual's predicted hazard of death (which was summed over the entire hospital to calculate the total expected number of deaths each day) took into account the latest values of these covariates. Previous analyses only accounted for risk of death at admission.

Expressing Physician Experience

The latent measure26 in all July Phenomenon studies is collective house-staff physician experience. This is quantified by a surrogate date variable in which July 1 (the date that new house-staff start their training in North America) represents minimal experience and June 30 represents maximal experience. We expressed collective physician experience on a scale from 0 (minimum experience) on July 1 to 1 (maximum experience) on June 30. A similar approach has been used previously13 and has advantages over the other methods used to capture collective house-staff experience. In the stratified, incomplete approach,4-7, 9-11, 13, 15-17 periods with inexperienced house-staff (eg, July and August) are grouped together and compared to times with experienced house-staff (eg, May and June), while ignoring all other data. The specification of cut-points for this stratification is arbitrary, and the method ignores large amounts of data. In the stratified, complete approach, periods with inexperienced house-staff (eg, July and August) are grouped together and compared to all other times of the year.8, 12, 14, 18-20, 22 This is potentially less biased because no data are lost. However, the cut-point for determining when house-staff transition from inexperienced to experienced is arbitrary, and the model assumes that the transition is sudden. This is suboptimal because acquisition of experience is a gradual, constant process.

The pattern by which collective physician experience changes between July 1st and June 30th is unknown. We therefore expressed this evolution using five different patterns varying from a linear change to a natural logarithmic change (see Supporting Appendix B in the online version of this article).
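These patterns can be expressed as simple monotone functions of the fraction of the academic year elapsed. The sketch below illustrates the idea only; the exact functional forms are given in Supporting Appendix B, and the scaling of the logarithmic curve here is an assumption:

```python
import math

def experience(days_since_july1, pattern="linear"):
    """Collective house-staff experience on a 0-1 scale.

    t is the fraction of the academic year elapsed (0 on July 1,
    1 on June 30, day 364). Every pattern is anchored so that
    experience(0) == 0 and experience(364) == 1.
    """
    t = days_since_july1 / 364.0
    if pattern == "linear":
        return t
    if pattern == "square":
        return t ** 2                          # experience accrues late
    if pattern == "square root":
        return math.sqrt(t)                    # experience accrues early
    if pattern == "cubic":
        return t ** 3
    if pattern == "natural log":
        return math.log(1 + (math.e - 1) * t)  # scaled to map 0 -> 0, 1 -> 1
    raise ValueError(f"unknown pattern: {pattern}")
```

At mid-year the square-root curve assigns more accumulated experience than the linear one, which in turn assigns more than the square; fitting all five lets the analysis avoid committing to any one learning trajectory.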

Analysis

We first tested for autocorrelation in our outcome variable using Ljung-Box statistics at lags 6 and 12 in PROC ARIMA (SAS 9.2, Cary, NC). If significant autocorrelation was absent, linear regression modeling was used to associate the ratio of observed to expected number of weekly deaths (the outcome variable) with collective first-year physician experience (the predictor variable). Time-series methodology was to be used if significant autocorrelation was present.
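The analysis itself was run in SAS; a stdlib-only Python sketch of the same two steps (a Ljung-Box Q statistic to check for autocorrelation, then an ordinary least-squares slope of the weekly ratio on experience) might look like this. The cutoff 12.59 is the chi-square 0.05 critical value for 6 degrees of freedom:

```python
import math

def ljung_box_q(series, max_lag):
    """Ljung-Box Q statistic: Q = n(n+2) * sum_k rho_k^2 / (n-k).

    A large Q (versus a chi-square cutoff with max_lag degrees of
    freedom) indicates autocorrelation up to lag max_lag.
    """
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    denom = sum(d * d for d in dev)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = sum(dev[t] * dev[t - k] for t in range(k, n)) / denom
        q += rho_k * rho_k / (n - k)
    return n * (n + 2) * q

def ols_slope(x, y):
    """Slope of the ordinary least-squares line of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

CHI2_CUTOFF_DF6 = 12.59  # chi-square, alpha = 0.05, 6 degrees of freedom
```

If Q stays below the cutoff (as it did in this study), plain linear regression of the weekly ratio on the experience variable is justified; otherwise time-series methods would be needed.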

In our baseline analysis, we included all hospitalizations together. In stratified analyses, we categorized hospitalizations by admission status (emergent vs elective) and admission service (medicine vs surgery).

RESULTS

Between April 15, 2004 and December 31, 2008, The Ottawa Hospital had a total of 152,017 inpatient admissions and 107,731 same day surgeries (an annual rate of 32,222 and 22,835, respectively; an average daily rate of 88 and 63, respectively) that met our study's inclusion criteria. These 259,748 encounters included 164,318 people. Table 1 provides an overall description of the study population.

Description of the Study Cohort
Characteristic 
  • Abbreviations: IQR, interquartile range; LAPS, Laboratory‐based Acute Physiology Score; PIMR, Procedural Independent Mortality Risk (van Walraven et al., The Procedural Independent Mortality Risk [PIMR] score can use administrative data to quantify the independent risk of death in hospital after procedures, 2010, unpublished work).

  • Among admissions where at least one PIMR procedure was performed during the hospitalization.

Patients/hospitalizations, n: 164,318/259,748
Deaths in-hospital, n (%): 7,679 (3.0)
Length of admission in days, median (IQR): 2 (1-6)
Male, n (%): 124,848 (48.1)
Age at admission, median (IQR): 60 (46-74)
Admission type, n (%):
  Elective surgical: 136,406 (52.5)
  Elective nonsurgical: 20,104 (7.7)
  Emergent surgical: 32,046 (12.3)
  Emergent nonsurgical: 71,192 (27.4)
Elixhauser score, median (IQR): 0 (0-4)
LAPS at admission, median (IQR): 0 (0-15)
At least one admission to intensive care unit, n (%): 7,779 (3.0)
At least one alternative level of care episode, n (%): 6,971 (2.7)
At least one PIMR procedure, n (%): 47,288 (18.2)
First PIMR score,* median (IQR): 2 (-5 to 2)

Weekly Deaths: Observed, Expected, and Ratio

Figure 1A presents the observed weekly number of deaths during the study period. There was an average of 31 deaths per week (range 15-51). Some large fluctuations in the weekly number of deaths were seen; in 2007, for example, the number of observed deaths went from 21 in week 13 up to 46 in week 15. However, no obvious seasonal trends in the observed weekly number of deaths were seen (Figure 1A, heavy line), nor were trends between years obvious.

Figure 1
The weekly number of observed deaths (top plot) and expected deaths (middle plot) for each week of the year (horizontal axis). The bottom plot presents the ratio of weekly observed to expected number of deaths. Each plot presents results for individual study years (light lines) as well as an overall summary for all years (heavy line). The first week of July (when new house‐staff start their training) is represented by the vertical line in the middle of each plot.

Figure 1B presents the expected weekly number of deaths during the study period. The expected weekly number of deaths averaged 29.6 (range 22.2-38.7). The expected weekly number of deaths was notably less variable than the observed number of deaths. However, important variations in the expected number of deaths were seen; for example, in 2005, the expected number of deaths increased from 24.1 in week 41 to 29.6 in week 44. Again, we saw no obvious seasonal trends in the expected weekly number of deaths (Figure 1B, heavy line).

Figure 1C illustrates the ratio of observed to the expected weekly number of deaths. The average observed to expected ratio slightly exceeded unity (1.05) and ranged from 0.488 (week 24, in 2008) to 1.821 (week 51, in 2008). We saw no obvious seasonal trends in the ratio of the observed to expected number of weekly deaths. In addition, obvious trends in this ratio were absent over the study period.

Association Between House‐Staff Experience and Death in Hospital

We found no evidence of autocorrelation in the ratio of observed to expected weekly number of deaths. The ratio of observed to expected number of hospital deaths was not significantly associated with house‐staff physician experience (Table 2). This conclusion did not change regardless of which house‐staff physician experience pattern was used in the linear model (Table 2). In addition, our analysis found no significant association between physician experience and patient mortality when analyses were stratified by admission service or admission status (Table 2).

Absolute Differences in the Ratio of Observed to Expected Number of Hospital Deaths from Minimal to Maximal Experience
House-Staff Experience Pattern (95% CI)
Patient Population | Linear | Square | Square Root | Cubic | Natural Logarithm

  • NOTE: This table summarizes the association between collective physician experience and the weekly ratio of observed to expected number of hospital deaths. The first column indicates the patient population included in the analysis. The five patterns of collective house-staff experience (illustrated in Supporting Appendix B in the online version of this article) are listed across the top. Each entry presents the absolute change in the weekly ratio of observed to expected number of hospital deaths (with its 95% CI in parentheses) when experience changes from the minimal to the maximal value. For example, in the model containing all patients and expressing house-staff experience in a linear pattern (top left), an increase in house-staff experience from 0 to 1 was associated with an absolute decrease in the ratio of observed to expected numbers of deaths per week of 0.03 (or 3%). Negative values indicate that patient outcomes improve (ie, the ratio of observed to expected number of hospital deaths decreases) with an increase in house-staff experience.

  • Abbreviations: CI, confidence interval.

All | -0.03 (-0.11, 0.06) | -0.02 (-0.10, 0.07) | -0.04 (-0.15, 0.07) | -0.01 (-0.10, 0.08) | -0.05 (-0.16, 0.07)
Admitting service
  Medicine | 0.0004 (-0.09, 0.10) | 0.01 (-0.08, 0.10) | -0.01 (-0.13, 0.11) | 0.02 (-0.07, 0.11) | -0.03 (-0.15, 0.09)
  Surgery | -0.10 (-0.30, 0.10) | -0.11 (-0.30, 0.08) | -0.12 (-0.37, 0.14) | -0.11 (-0.31, 0.08) | -0.09 (-0.35, 0.17)
Admission status
  Elective | -0.09 (-0.53, 0.35) | -0.10 (-0.51, 0.32) | -0.11 (-0.66, 0.44) | -0.10 (-0.53, 0.33) | -0.11 (-0.68, 0.45)
  Emergent | -0.02 (-0.11, 0.07) | -0.01 (-0.09, 0.08) | -0.03 (-0.14, 0.08) | -0.003 (-0.09, 0.09) | -0.04 (-0.16, 0.08)

DISCUSSION

It is natural to suspect that physician experience influences patient outcomes. The commonly discussed July Phenomenon explores changes in teaching-hospital patient outcomes over the academic year, which serves as an ecological surrogate for the latent variable of overall house-staff experience. Our study used a detailed outcome (the ratio of observed to expected number of weekly hospital deaths) that adjusted for patient severity of illness. We also modeled collective physician experience using a broad range of patterns. We found no significant variation in mortality rates during the academic year; therefore, the risk of death in hospital does not vary with house-staff experience at our hospital. There is thus no evidence of a July Phenomenon for mortality at our center.

We were not surprised that the arrival of inexperienced house-staff did not significantly change patient mortality, for several reasons. First-year residents are but one group of treating physicians in a teaching hospital. They are surrounded by many other, more experienced physicians who also contribute to patient care and outcomes. Given these other physicians, the influence that the relatively small number of first-year residents have on patient outcomes will be minimized. In addition, the role that these more experienced physicians play in patient care will vary with the experience and ability of the residents. The influence of new and inexperienced house-staff in July will be blunted by an increased role played by attending staff, fellows, and more experienced house-staff at that time.

Our study was a methodologically rigorous examination of the July Phenomenon. We used a reliable outcome statistic (the ratio of observed to expected weekly number of hospital deaths) that was created with a validated, discriminative, and well-calibrated model predicting risk of death in hospital (Wong et al., Derivation and validation of a model to predict the daily risk of death in hospital, 2010, unpublished work). This statistic is inherently understandable and controlled for patient severity of illness. In addition, our study included a very broad and inclusive group of patients over almost five years at two hospitals.

Twenty-three other studies have quantitatively sought a July Phenomenon for patient mortality (see Supporting Appendix A in the online version of this article). The studies contained a broad assortment of research methodologies, patient populations, and analytical methods. Nineteen of these studies (83%) found no evidence of a July Phenomenon for teaching-hospital mortality. In contrast, two of these studies found notable adjusted odds ratios for death in hospital (1.41 and 1.34) in patients undergoing either general surgery13 or complex cardiovascular surgery,19 respectively. Blumberg22 also found an increased risk of death in surgical patients in July, but used indirect standardized mortality ratios as the outcome statistic and based the analysis on only 139 cases at Maryland teaching hospitals in 1984. Only Jen et al.16 showed an increased risk of hospital death with new house-staff in a broad patient population. However, this study was restricted to two arbitrarily chosen days (one before and one after house-staff change-over), and the increased risk of hospital death it showed (adjusted OR 1.05, 95% CI 1.00-1.15) had borderline statistical significance that could have been driven by the study's large sample size (n = 299,741).

Therefore, the vast majority of data, including those presented in our analyses, show that the risk of teaching-hospital death does not significantly increase with the arrival of new house-staff. This prompts the question of why the July Phenomenon is commonly presented in popular media as a proven fact.27-33 We believe this is likely because the concept of the July Phenomenon is understandable and holds a rather morbid attraction for people both inside and outside of the medical profession. Given the large amount of data refuting the existence of a July Phenomenon for patient mortality (see Supporting Appendix A in the online version of this article), we believe that this term should be used only as an example of an interesting idea that is refuted by a proper analysis of the data.

Several limitations of our study are notable. First, our analysis is limited to a single center, albeit one with two hospitals. However, ours is one of the largest teaching centers in Canada, with many new residents each year. Second, we only examined the association of physician experience with hospital mortality. While it is possible that physician experience significantly influences other patient outcomes, mortality is an important and reliably tallied statistic that is used as the primary outcome in most July Phenomenon studies. Third, we excluded approximately a quarter of all hospitalizations from the study. These exclusions were necessary because the Escobar model does not apply to these people and therefore cannot be used to predict their risk of death in hospital. However, the vast majority of excluded patients (those less than 15 years old, and women admitted for routine childbirth) have a very low risk of death: the former because they are almost exclusively newborns, and the latter because the risk of maternal death during childbirth is very low. Since these people contribute very little to either the expected or observed number of deaths, their exclusion does little to threaten the study's validity. The remaining excluded patients, those transferred to or from other hospitals (n = 12,931), make up a small proportion of the total sampling frame (5% of admissions). Fourth, our study did not identify any significant association between house-staff experience and patient mortality (Table 2). However, the confidence intervals around our estimates are wide enough, especially in some subgroups such as patients admitted electively, that important changes in patient mortality with house-staff experience cannot be excluded.
For example, although our study found that a decrease in the ratio of observed to expected number of deaths exceeding 30% is very unlikely, a decrease of up to 30% remains possible (the lower range of the confidence interval in Table 2). By the same logic, the ratio could also increase by up to 10% (Table 2). Finally, we did not directly measure individual physician experience. New residents can vary extensively in their individual experience and ability. Incorporating individual measures of experience and ability would let us measure more reliably the association of new residents with patient outcomes. Without these, we had to rely on an ecological measure of physician experience, namely calendar date. This approach is nonetheless standard: all studies to date quantify physician experience ecologically by date (see Supporting Appendix A in the online version of this article).

In summary, our datasimilar to most studies on this topicshow that the risk of death in teaching hospitals does not change with the arrival of new house‐staff.

The July Phenomenon is a commonly used term referring to poor hospital‐patient outcomes when inexperienced house‐staff start their postgraduate training in July. In addition to being an interesting observation, the validity of July Phenomenon has policy implications for teaching hospitals and residency training programs.

Twenty‐three published studies have tried to determine whether the arrival of new house‐staff is associated with increased patient mortality (see Supporting Appendix A in the online version of this article).123 While those studies make an important attempt to determine the validity of the July Phenomenon, they have some notable limitations. All but four of these studies2, 4, 6, 16 limited their analysis to patients with a specific diagnosis, within a particular hospital unit, or treated by a particular specialty. Many studies limited data to those from a single hospital.1, 3, 4, 10, 11, 14, 15, 20, 22 Nine studies did not include data from the entire year in their analyses,4, 6, 7, 10, 13, 1517, 23 and one did not include data from multiple years.22 One study conducted its analysis on death counts alone and did not account for the number of hospitalized people at risk.6 Finally, the analysis of several studies controlled for no severity of illness markers,6, 10, 21 whereas that from several other studies contained only crude measures of comorbidity and severity of illness.14

In this study, we analyzed data at our teaching hospital to determine if evidence exists for the July Phenomenon at our center. We used a highly discriminative and well‐calibrated multivariate model to calculate the risk of dying in hospital, and quantify the ratio of observed to expected number of hospital deaths. Using this as our outcome statistic, we determined whether or not our hospital experiences a July Phenomenon.

METHODS

This study was approved by The Ottawa Hospital (TOH) Research Ethics Board.

Study Setting

TOH is a tertiary‐care teaching hospital with two inpatient campuses. The hospital operates within a publicly funded health care system, serves a population of approximately 1.5 million people in Ottawa and Eastern Ontario, treats all major trauma patients for the region, and provides most of the oncological care in the region.

TOH is the primary medical teaching hospital at the University of Ottawa. In 2010, there were 197 residents starting their first year of postgraduate training in one of 29 programs.

Inclusion Criteria

The study period extended from April 15, 2004 to December 31, 2008. We used this start time because our hospital switched to new coding systems for procedures and diagnoses in April 2002. Since these new coding systems contributed to our outcome statistic, we used a very long period (ie, two years) for coding patterns to stabilize to ensure that any changes seen were not a function of coding patterns. We ended our study in December 2008 because this was the last date of complete data at the time we started the analysis.

We included all medical, surgical, and obstetrical patients admitted to TOH during this time except those who were: younger than 15 years old; transferred to or from another acute care hospital; or obstetrical patients hospitalized for routine childbirth. These patients were excluded because they were not part of the multivariate model that we used to calculate risk of death in hospital (discussed below).24 These exclusions accounted for 25.4% of all admissions during the study period (36,820less than 15 years old; 12,931transferred to or from the hospital; and 44,220uncomplicated admission for childbirth).

All data used in this study came from The Ottawa Hospital Data Warehouse (TOHDW). This is a repository of clinical, laboratory, and administrative data originating from the hospital's major operational information systems. TOHDW contains information on patient demographics and diagnoses, as well as procedures and patient transfers between different units or hospital services during the admission.

Primary OutcomeRatio of Observed to Expected Number of Deaths per Week

For each study day, we measured the number of hospital deaths from the patient registration table in TOHDW. This statistic was collated for each week to ensure numeric stability, especially in our subgroup analyses.

We calculated the weekly expected number of hospital deaths using an extension of the Escobar model.24 The Escobar is a logistic regression model that estimated the probability of death in hospital that was derived and internally validated on almost 260,000 hospitalizations at 17 hospitals in the Kaiser Permanente Health Plan. It included six covariates that were measurable at admission including: patient age; patient sex; admission urgency (ie, elective or emergent) and service (ie, medical or surgical); admission diagnosis; severity of acute illness as measured by the Laboratory‐based Acute Physiology Score (LAPS); and chronic comorbidities as measured by the COmorbidity Point Score (COPS). Hospitalizations were grouped by admission diagnosis. The final model had excellent discrimination (c‐statistic 0.88) and calibration (P value of Hosmer Lemeshow statistic for entire cohort 0.66). This model was externally validated in our center with a c‐statistic of 0.901.25

We extended the Escobar model in several ways (Wong et al., Derivation and validation of a model to predict the daily risk of death in hospital, 2010, unpublished work). First, we modified it into a survival (rather than a logistic) model so it could estimate a daily probability of death in hospital. Second, we included the same covariates as Escobar except that we expressed LAPS as a time‐dependent covariate (meaning that the model accounted for changes in its value during the hospitalization). Finally, we included other time‐dependent covariates including: admission to intensive care unit; undergoing significant procedures; and awaiting long‐term care. This model had excellent discrimination (concordance probability of 0.895, 95% confidence interval [CI] 0.8890.902) and calibration.

We used this survival model to estimate the daily risk of death for all patients in the hospital each day. Summing these risks over hospital patients on each day returned the daily number of expected hospital deaths. This was collated per week.

The outcome statistic for this study was the ratio of the observed to expected weekly number of hospital deaths. Ratios exceeding 1 indicate that more deaths were observed than were expected (given the distribution of important covariates in those people during that week). This outcome statistic has several advantages. First, it accounts for the number of patients in the hospital each day. This is important because the number of hospital deaths will increase as the number of people in hospital increase. Second, it accounts for the severity of illness in each patient on each hospital day. This accounts for daily changes in risk of patient death, because calculation of the expected number of deaths per day was done using a multivariate survival model that included time‐dependent covariates. Therefore, each individual's predicted hazard of death (which was summed over the entire hospital to calculate the total expected number of deaths in hospital each day) took into account the latest values of these covariates. Previous analyses only accounted for risk of death at admission.

Expressing Physician Experience

The latent measure26 in all July Phenomenon studies is collective house‐staff physician experience. This is quantified by a surrogate date variable in which July 1the date that new house‐staff start their training in North Americarepresents minimal experience and June 30 represents maximal experience. We expressed collective physician experience on a scale from 0 (minimum experience) on July 1 to 1 (maximum experience) on June 30. A similar approach has been used previously13 and has advantages over the other methods used to capture collective house‐staff experience. In the stratified, incomplete approach,47, 911, 13, 1517 periods with inexperienced house‐staff (eg, July and August) are grouped together and compared to times with experienced house‐staff (eg, May and June), while ignoring all other data. The specification of cut‐points for this stratification is arbitrary and the method ignores large amounts of data. In the stratified, complete approach, periods with inexperienced house‐staff (eg, July and August) are grouped together and compared to all other times of the year.8, 12, 14, 1820, 22 This is potentially less biased because there are no lost data. However, the cut‐point for determining when house‐staff transition from inexperienced to experienced is arbitrary, and the model assumes that the transition is sudden. This is suboptimal because acquisition of experience is a gradual, constant process.

The pattern by which collective physician experience changes between July 1st and June 30th is unknown. We therefore expressed this evolution using five different patterns varying from a linear change to a natural logarithmic change (see Supporting Appendix B in the online version of this article).

Analysis

We first examined for autocorrelation in our outcome variable using Ljung‐Box statistics at lag 6 and 12 in PROC ARIMA (SAS 9.2, Cary, NC). If significant autocorrelation was absent in our data, linear regression modeling was used to associate the ratio of the observed to expected number of weekly deaths (the outcome variable) with the collective first year physician experience (the predictor variable). Time‐series methodology was to be used if significant autocorrelation was present.

In our baseline analysis, we included all hospitalizations together. In stratified analyses, we categorized hospitalizations by admission status (emergent vs elective) and admission service (medicine vs surgery).

RESULTS

Between April 15, 2004 and December 31, 2008, The Ottawa Hospital had a total of 152,017 inpatient admissions and 107,731 same day surgeries (an annual rate of 32,222 and 22,835, respectively; an average daily rate of 88 and 63, respectively) that met our study's inclusion criteria. These 259,748 encounters included 164,318 people. Table 1 provides an overall description of the study population.

Description of the Study Cohort
Characteristic 
  • Abbreviations: IQR, interquartile range; LAPS, Laboratory‐based Acute Physiology Score; PIMR, Procedural Independent Mortality Risk (van Walraven et al., The Procedural Independent Mortality Risk [PIMR] score can use administrative data to quantify the independent risk of death in hospital after procedures, 2010, unpublished work).

  • Among admissions where at least one PIMR procedure was performed during the hospitalization.

Patients/hospitalizations, n164,318/259,748
Deaths in‐hospital, n (%)7,679 (3.0)
Length of admission in days, median (IQR)2 (16)
Male, n (%)124,848 (48.1)
Age at admission, median (IQR)60 (4674)
Admission type, n (%) 
Elective surgical136,406 (52.5)
Elective nonsurgical20,104 (7.7)
Emergent surgical32,046 (12.3)
Emergent nonsurgical71,192 (27.4)
Elixhauser score, median (IQR)0 (04)
LAPS at admission, median (IQR)0 (015)
At least one admission to intensive care unit, n (%)7,779 (3.0)
At least one alternative level of care episode, n (%)6,971 (2.7)
At least one PIMR procedure, n (%)47,288 (18.2)
First PIMR score,* median (IQR)2 (52)

Weekly Deaths: Observed, Expected, and Ratio

Figure 1A presents the observed weekly number of deaths during the study period. There was an average of 31 deaths per week (range 1551). Some large fluctuations in the weekly number of deaths were seen; in 2007, for example, the number of observed deaths went from 21 in week 13 up to 46 in week 15. However, no obvious seasonal trends in the observed weekly number of deaths were seen (Figure 1A, heavy line) nor were trends between years obvious.

Figure 1
The weekly number of observed deaths (top plot) and expected deaths (middle plot) for each week of the year (horizontal axis). The bottom plot presents the ratio of weekly observed to expected number of deaths. Each plot presents results for individual study years (light lines) as well as an overall summary for all years (heavy line). The first week of July (when new house‐staff start their training) is represented by the vertical line in the middle of each plot.

Figure 1B presents the expected weekly number of deaths during the study period. The expected weekly number of deaths averaged 29.6 (range 22.238.7). The expected weekly number of deaths was notably less variable than the observed number of deaths. However, important variations in the expected number of deaths were seen; for example, in 2005, the expected number of deaths increased from 24.1 in week 41 to 29.6 in week 44. Again, we saw no obvious seasonal trends in the expected weekly number of deaths (Figure 1B, heavy line).

Figure 1C illustrates the ratio of observed to the expected weekly number of deaths. The average observed to expected ratio slightly exceeded unity (1.05) and ranged from 0.488 (week 24, in 2008) to 1.821 (week 51, in 2008). We saw no obvious seasonal trends in the ratio of the observed to expected number of weekly deaths. In addition, obvious trends in this ratio were absent over the study period.
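The observed and expected tallies behind Figure 1 reduce to a simple aggregation: count deaths per week, and sum each patient's model-predicted death risk per week. A minimal sketch follows; the field names (`week`, `died`, `pred_risk`) are hypothetical, not taken from the study dataset.

```python
from collections import defaultdict

def weekly_oe_ratios(patients):
    """Compute the weekly ratio of observed to expected deaths.

    `patients` is an iterable of dicts with (hypothetical) keys:
      'week'      -- study week of the hospitalization
      'died'      -- 1 if the patient died in hospital, else 0
      'pred_risk' -- model-predicted probability of in-hospital death
    """
    observed = defaultdict(int)    # deaths actually seen per week
    expected = defaultdict(float)  # sum of predicted risks per week
    for p in patients:
        observed[p['week']] += p['died']
        expected[p['week']] += p['pred_risk']
    # O/E ratio per week; a value near 1 means mortality matched prediction
    return {wk: observed[wk] / expected[wk] for wk in observed}

# Tiny illustration with fabricated patients over two weeks
demo = [
    {'week': 1, 'died': 1, 'pred_risk': 0.5},
    {'week': 1, 'died': 0, 'pred_risk': 0.5},
    {'week': 2, 'died': 0, 'pred_risk': 0.25},
    {'week': 2, 'died': 1, 'pred_risk': 0.75},
]
ratios = weekly_oe_ratios(demo)
```

A ratio near 1 for a given week indicates that mortality matched the severity-adjusted prediction; a sustained departure around the July changeover would be the signature of a July Phenomenon.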

Association Between House‐Staff Experience and Death in Hospital

We found no evidence of autocorrelation in the ratio of observed to expected weekly number of deaths. The ratio of observed to expected number of hospital deaths was not significantly associated with house‐staff physician experience (Table 2). This conclusion did not change regardless of which house‐staff physician experience pattern was used in the linear model (Table 2). In addition, our analysis found no significant association between physician experience and patient mortality when analyses were stratified by admission service or admission status (Table 2).

Absolute Differences in the Ratio of Observed to Expected Number of Hospital Deaths from Minimal to Maximal Experience

House‐Staff Experience Pattern (95% CI)
Patient Population | Linear | Square | Square Root | Cubic | Natural Logarithm
All | −0.03 (−0.11, 0.06) | −0.02 (−0.10, 0.07) | −0.04 (−0.15, 0.07) | −0.01 (−0.10, 0.08) | −0.05 (−0.16, 0.07)
Admitting service
  Medicine | 0.0004 (−0.09, 0.10) | 0.01 (−0.08, 0.10) | −0.01 (−0.13, 0.11) | 0.02 (−0.07, 0.11) | −0.03 (−0.15, 0.09)
  Surgery | −0.10 (−0.30, 0.10) | −0.11 (−0.30, 0.08) | −0.12 (−0.37, 0.14) | −0.11 (−0.31, 0.08) | −0.09 (−0.35, 0.17)
Admission status
  Elective | −0.09 (−0.53, 0.35) | −0.10 (−0.51, 0.32) | −0.11 (−0.66, 0.44) | −0.10 (−0.53, 0.33) | −0.11 (−0.68, 0.45)
  Emergent | −0.02 (−0.11, 0.07) | −0.01 (−0.09, 0.08) | −0.03 (−0.14, 0.08) | 0.003 (−0.09, 0.09) | −0.04 (−0.16, 0.08)

  • NOTE: This table summarizes the association between collective physician experience and the weekly ratio of observed to expected number of hospital deaths. The first column indicates the patient population included in the analysis. The five patterns of collective house‐staff experience (illustrated in Supporting Appendix B in the online version of this article) are listed across the top. Each entry presents the absolute change in the weekly ratio of observed to expected number of hospital deaths (with its 95% confidence interval in parentheses) when experience changes from the minimal to the maximal value. For example, in the model containing all patients expressing house‐staff experience in a linear pattern (top left), an increase in house‐staff experience from 0 to 1 was associated with an absolute decrease in the ratio of observed to expected numbers of deaths per week of 0.02 (or 2%). Negative values indicate that patient outcomes improve (ie, the ratio of observed to expected number of hospital deaths decreases) with an increase in house‐staff experience.

  • Abbreviations: CI, confidence interval.
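The analysis behind Table 2 can be sketched as an ordinary least-squares fit of the weekly observed-to-expected ratio on transformed house-staff experience. This is only an illustration: it assumes experience is scaled to [0, 1] over the academic year, and it uses a closed-form simple-regression slope in place of the authors' actual statistical software.

```python
import math

# The five experience patterns from Table 2, applied to experience
# scaled to [0, 1] over the academic year (an assumption).
PATTERNS = {
    'linear': lambda x: x,
    'square': lambda x: x ** 2,
    'square_root': math.sqrt,
    'cubic': lambda x: x ** 3,
    'natural_log': lambda x: math.log(x + 1e-6),  # offset avoids log(0)
}

def fit_slope(xs, ys):
    """Closed-form simple least-squares slope of ys on xs
    (xs must contain at least two distinct values)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def min_to_max_effect(experience, oe_ratios, pattern):
    """Absolute change in the O/E ratio from minimal to maximal
    experience, as reported in Table 2: the fitted slope times the
    width of the transformed experience range."""
    xs = [PATTERNS[pattern](e) for e in experience]
    return fit_slope(xs, oe_ratios) * (max(xs) - min(xs))
```

For example, with `experience = [0.0, 0.25, 0.5, 0.75, 1.0]` and weekly O/E ratios drifting linearly from 1.1 down to 1.0, the linear-pattern effect is an absolute change of −0.1 over the year.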

DISCUSSION

It is natural to suspect that physician experience influences patient outcomes. The commonly discussed July Phenomenon explores changes in teaching‐hospital patient outcomes by time of the academic year, which serves as an ecological surrogate for the latent variable of overall house‐staff experience. Our study used a detailed outcome, the ratio of observed to expected weekly number of hospital deaths, that adjusted for patient severity of illness. We also modeled collective physician experience using a broad range of patterns. We found no significant variation in mortality rates during the academic year; therefore, the risk of death in hospital does not vary by house‐staff experience at our hospital. There is thus no evidence of a July Phenomenon for mortality at our center.

We were not surprised that the arrival of inexperienced house‐staff did not significantly change patient mortality, for several reasons. First‐year residents are only one group of treating physicians in a teaching hospital. They are surrounded by many other, more experienced physicians who also contribute to patient care and outcomes. Given these other physicians, the influence of the relatively small number of first‐year residents on patient outcomes will be minimized. In addition, the role that these more experienced physicians play in patient care will vary with the experience and ability of the residents. The influence of new and inexperienced house‐staff in July will be blunted by an increased role played by staff physicians, fellows, and more experienced house‐staff at that time.

Our study was a methodologically rigorous examination of the July Phenomenon. We used a reliable outcome statistic, the ratio of observed to expected weekly number of hospital deaths, created with a validated, discriminative, and well‐calibrated model that predicted the risk of death in hospital (Wong et al., Derivation and validation of a model to predict the daily risk of death in hospital, 2010, unpublished work). This statistic is inherently understandable and controlled for patient severity of illness. In addition, our study included a very broad and inclusive group of patients over five years at two hospitals.

Twenty‐three other studies have quantitatively sought a July Phenomenon for patient mortality (see Supporting Appendix A in the online version of this article). The studies contained a broad assortment of research methodologies, patient populations, and analytical methods. Nineteen of these studies (83%) found no evidence of a July Phenomenon for teaching‐hospital mortality. In contrast, two studies found notable adjusted odds ratios for death in hospital (1.41 and 1.34) in patients undergoing general surgery13 or complex cardiovascular surgery,19 respectively. Blumberg22 also found an increased risk of death in surgical patients in July, but that analysis used indirect standardized mortality ratios as the outcome statistic and was based on only 139 cases at Maryland teaching hospitals in 1984. Only Jen et al.16 showed an increased risk of hospital death with new house‐staff in a broad patient population. However, that study was restricted to two arbitrarily chosen days (one before and one after house‐staff change‐over), and its increased risk of hospital death (adjusted OR 1.05, 95% CI 1.00–1.15) was of borderline statistical significance that could have been driven by the study's large sample size (n = 299,741).

Therefore, the vast majority of data, including those presented in our analyses, show that the risk of teaching‐hospital death does not significantly increase with the arrival of new house‐staff. This prompts the question of why the July Phenomenon is commonly presented in popular media as a proven fact.27–33 We believe this is likely because the concept of the July Phenomenon is understandable and holds a rather morbid attraction for people both inside and outside the medical profession. Given the large amount of data refuting the existence of a July Phenomenon for patient mortality (see Supporting Appendix A in the online version of this article), we believe that this term should be used only as an example of an interesting idea that is refuted by a proper analysis of the data.

Several limitations of our study are notable. First, our analysis is limited to a single center, albeit one with two hospitals. However, ours is one of the largest teaching centers in Canada, with many new residents each year. Second, we only examined the association of physician experience with hospital mortality. While it is possible that physician experience significantly influences other patient outcomes, mortality is an important and reliably tallied statistic that is used as the primary outcome in most July Phenomenon studies. Third, we excluded approximately a quarter of all hospitalizations from the study. These exclusions were necessary because the Escobar model does not apply to these people and therefore cannot be used to predict their risk of death in hospital. However, the vast majority of excluded patients (those less than 15 years old, and women admitted for routine childbirth) have a very low risk of death (the former because they are almost exclusively newborns, and the latter because the risk of maternal death during childbirth is very low). Since these people contribute very little to either the expected or observed number of deaths, their exclusion will do little to threaten the study's validity. The remaining excluded patients, those transferred to or from other hospitals (n = 12,931), make up a small proportion of the total sampling frame (5% of admissions). Fourth, our study did not identify any significant association between house‐staff experience and patient mortality (Table 2). However, the confidence intervals around our estimates are wide enough, especially in some subgroups such as patients admitted electively, that important changes in patient mortality with house‐staff experience cannot be excluded.
For example, while our analysis makes a decrease in the ratio of observed to expected number of deaths exceeding 30% very unlikely, a decrease of up to 30% (the lower limit of the confidence interval in Table 2) remains possible; by the same logic, an increase of up to 10% is also possible (Table 2). Finally, we did not directly measure individual physician experience. New residents can vary extensively in their individual experience and ability. Incorporating individual measures of physician experience and ability would let us measure the association of new residents with patient outcomes more reliably. Without these, we had to rely on an ecological measure of physician experience, namely calendar date. This method is nonetheless standard, since all published studies quantify physician experience ecologically by date (see Supporting Appendix A in the online version of this article).

In summary, our datasimilar to most studies on this topicshow that the risk of death in teaching hospitals does not change with the arrival of new house‐staff.

References
  1. Rich EC, Gifford G, Dowd B. The effects of scheduled intern rotation on the cost and quality of teaching hospital care. Eval Health Prof. 1994;17:259–272.
  2. Rich EC, Hillson SD, Dowd B, Morris N. Specialty differences in the “July Phenomenon” for Twin Cities teaching hospitals. Med Care. 1993;31:73–83.
  3. Rich EC, Gifford G, Luxenberg M, Dowd B. The relationship of house staff experience to the cost and quality of inpatient care. JAMA. 1990;263:953–957.
  4. Buchwald D, Komaroff AL, Cook EF, Epstein AM. Indirect costs for medical education. Is there a July phenomenon? Arch Intern Med. 1989;149:765–768.
  5. Alshekhlee A, Walbert T, DeGeorgia M, Preston DC, Furlan AJ. The impact of accreditation council for graduate medical education duty hours, the July phenomenon, and hospital teaching status on stroke outcomes. J Stroke Cerebrovasc Dis. 2009;18:232–238.
  6. Aylin P, Majeed FA. The killing season—Fact or fiction. BMJ. 1994;309:1690.
  7. Bakaeen FG, Huh J, LeMaire SA, et al. The July effect: Impact of the beginning of the academic cycle on cardiac surgical outcomes in a cohort of 70,616 patients. Ann Thorac Surg. 2009;88:70–75.
  8. Barry WA, Rosenthal GE. Is there a July phenomenon? The effect of July admission on intensive care mortality and length of stay in teaching hospitals. J Gen Intern Med. 2003;18:639–645.
  9. Bruckner TA, Carlo WA, Ambalavanan N, Gould JB. Neonatal mortality among low birth weight infants during the initial months of the academic year. J Perinatol. 2008;28:691–695.
  10. Claridge JA, Schulman AM, Sawyer RG, Ghezel‐Ayagh A, Young JS. The “July Phenomenon” and the care of the severely injured patient: Fact or fiction? Surgery. 2001;130:346–353.
  11. Dhaliwal AS, Chu D, Deswal A, et al. The July effect and cardiac surgery: The effect of the beginning of the academic cycle on outcomes. Am J Surg. 2008;196:720–725.
  12. Englesbe MJ, Fan ZH, Baser O, Birkmeyer JD. Mortality in Medicare patients undergoing surgery in July in teaching hospitals. Ann Surg. 2009;249:871–876.
  13. Englesbe MJ, Pelletier SJ, Magee JC, et al. Seasonal variation in surgical outcomes as measured by the American College of Surgeons–National Surgical Quality Improvement Program (ACS‐NSQIP). Ann Surg. 2007;246:456–465.
  14. Finkielman JD, Morales IJ, Peters SG, et al. Mortality rate and length of stay of patients admitted to the intensive care unit in July. Crit Care Med. 2004;32:1161–1165.
  15. Highstead RG, Johnson LC, Street JH, Trankiem CT, Kennedy SO, Sava JA. July—As good a time as any to be injured. J Trauma‐Injury Infect Crit Care. 2009;67:1087–1090.
  16. Jen MH, Bottle A, Majeed A, Bell D, Aylin P. Early in‐hospital mortality following trainee doctors' first day at work. PLoS ONE. 2009;4.
  17. Peets AD, Boiteau PJE, Doig CJ. Effect of critical care medicine fellows on patient outcome in the intensive care unit. Acad Med. 2006;81:S1–S4.
  18. Schroeppel TJ, Fischer PE, Magnotti LJ, Croce MA, Fabian TC. The “July Phenomenon”: Is trauma the exception? J Am Coll Surg. 2009;209:378–384.
  19. Shuhaiber JH, Goldsmith K, Nashef SAM. Impact of cardiothoracic resident turnover on mortality after cardiac surgery: A dynamic human factor. Ann Thorac Surg. 2008;86:123–131.
  20. Smith ER, Butler WE, Barker FG. Is there a “July Phenomenon” in pediatric neurosurgery at teaching hospitals? J Neurosurg Pediatr. 2006;105:169–176.
  21. Soltau TD, Carlo WA, Gee J, Gould J, Ambalavanan N. Mortality and morbidity by month of birth of neonates admitted to an academic neonatal intensive care unit. Pediatrics. 2008;122:E1048–E1052.
  22. Blumberg MS. Measuring surgical quality in Maryland: A model. Health Aff. 1988;7:62–78.
  23. Inaba K, Recinos G, Teixeira PG, et al. Complications and death at the start of the new academic year: Is there a July phenomenon? J Trauma‐Injury Infect Crit Care. 2010;68(1):19–22.
  24. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk‐adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232–239.
  25. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798–803.
  26. McCutcheon AL. Introduction: The logic of latent variables. In: Latent Class Analysis. Newbury Park, CA: Sage; 1987:5–10.
  27. July Effect. Wikipedia. Available at: http://en.wikipedia.org/wiki/July_effect. Accessed April 1, 2011.
  28. Study proves “killing season” occurs as new doctors start work. September 23, 2010. Herald Scotland. Available at: http://www.heraldscotland.com/news/health/study‐proves‐killing‐season‐occurs‐as‐new‐doctors‐start‐work‐1.921632. Accessed April 1, 2011.
  29. The “July effect”: Worst month for fatal hospital errors, study finds. June 3, 2010. ABC News. Available at: http://abcnews.go.com/WN/WellnessNews/july‐month‐fatal‐hospital‐errors‐study‐finds/story?id=10819652. Accessed April 1, 2011.
  30. “Deaths rise” with junior doctors. September 22, 2010. BBC News. Available at: http://news.bbc.co.uk/2/hi/health/8269729.stm. Accessed April 1, 2011.
  31. Raloff J. July: When not to go to the hospital. June 2, 2010. Science News. Available at: http://www.sciencenews.org/view/generic/id/59865/title/July_When_not_to_go_to_the_hospital. Accessed April 1, 2011.
  32. July: A deadly time for hospitals. July 5, 2010. National Public Radio. Available at: http://www.npr.org/templates/story/story.php?storyId=128321489. Accessed April 1, 2011.
  33. Brayer T. Medical errors and patient safety: Beware the “July effect.” June 4, 2010. Better Health. Available at: http://getbetterhealth.com/medical‐errors‐and‐patient‐safety‐beware‐of‐the‐july‐effect/2010.06.04. Accessed April 1, 2011.
Issue
Journal of Hospital Medicine - 6(7)
Page Number
389-394
Display Headline
Influence of house‐staff experience on teaching‐hospital mortality: The “July Phenomenon” revisited
Copyright © 2011 Society of Hospital Medicine

Correspondence Location
Ottawa Health Research Institute, ASB1‐003, 1053 Carling Ave., Ottawa, ON, Canada K1Y 4E9

Information Continuity on Outcomes

Display Headline
The independent association of provider and information continuity on outcomes after hospital discharge: Implications for hospitalists

Hospitalists are common in North America.1, 2 They have been associated with a range of beneficial outcomes, including decreased length of stay.3, 4 A primary concern with the hospitalist model is its potential detrimental effect on continuity of care,5 partly because patients are often not seen by their hospitalists after discharge.

Continuity of care6 is primarily composed of provider continuity (an ongoing relationship between a patient and a particular provider over time) and information continuity (availability of data from prior events for subsequent patient encounters).6 The association between continuity of care and patient outcomes has been quantified in many studies.7–20 However, the relationship of continuity and outcomes is especially relevant after discharge from the hospital since this is a time when patients have a high risk of poor patient outcomes21 and poor provider22 and information continuity.23–25

The association between continuity and outcomes after hospital discharge has been directly quantified in 2 studies. One found that patients seen by a physician who treated them in the hospital had a significant adjusted relative risk reduction in 30‐day death or readmission of 5% and 3%, respectively.22 The other study found that patients discharged from a general medicine ward were less likely to be readmitted if they were seen by physicians who had access to their discharge summary.23 However, neither of these studies concurrently measured the influence of provider and information continuity on patient outcomes.

Determining whether and how continuity of care influences patient outcomes after hospital discharge is essential to improve health care in an evidence‐based fashion. In addition, the influence that hospital physician follow‐up has on patient outcomes can best be determined by measuring provider and information continuity in patients after hospital discharge. This study sought to measure the independent association of several provider and information continuity measures on death or urgent readmission after hospital discharge.

Methods

Study Design

This was a multicenter prospective cohort study of consecutive patients discharged to the community from the medical or surgical services of 11 Ontario hospitals (6 university‐affiliated hospitals and 5 community hospitals) in 5 cities after an elective or emergency hospitalization. Patients were invited to participate in the study if they were cognitively intact, had a telephone, and provided written informed consent. Patients were excluded if they were less than 18 years old, were discharged to nursing homes, or were not proficient in English and did not have someone to help communicate with study staff. Enrolled patients were excluded from the analysis if they had fewer than 2 physician visits prior to one of the study's outcomes or the end of patient observation (6 months postdischarge). This final exclusion criterion was necessary because 2 continuity measures (postdischarge physician continuity and postdischarge information continuity) were incalculable with fewer than 2 physician visits during follow‐up (Supporting Information). The study was approved by the research ethics board of each participating hospital.

Data Collection

Prior to hospital discharge, patients were interviewed by study personnel to identify their baseline functional status, their living conditions, all physicians who regularly treated the patient prior to admission (including both family physicians and consultants), and chronic medical conditions. The latter were confirmed by a review of the patient's chart and hospital discharge summary, when available. Patients also provided principal contacts whom we could contact in the event patients could not be reached. The chart and discharge summary were also used to identify diagnoses in hospitalincluding complications (diagnoses arising in the hospital)and medications at discharge.

Patients or their designated contacts were telephoned 1, 3, and 6 months after hospital discharge to identify the date and the physician of all postdischarge physician visits. For each postdischarge physician visit, we determined whether the physician had access to a discharge summary for the index hospitalization. We also determined the availability of information from all previous postdischarge visits that the patient had with other physicians. The methods used to collect these data were previously detailed.26 Briefly, we used three complementary methods to elicit this information from each follow‐up physician. First, patients gave the physician a survey on which the physician listed all prior visits with other doctors for which they had information. If this survey was not returned, we faxed the survey to the physician. If the faxed survey was not returned, we telephoned the physician or their office staff and administered the survey over the telephone.

Continuity Measures

We measured components of both provider and information continuity. For the posthospitalization period, we measured provider continuity for physicians who had provided patient care during three distinct phases: the prehospital period, the hospital period, and the postdischarge period. Prehospital physicians were those classified by the patient as their regular physician(s) (defined as physicians, both family physicians and consultants, whom they had seen in the past and were likely to see again in the future). Hospital provider continuity was divided into 2 components: hospital physician continuity (ie, the most responsible physician in the hospital) and hospital consultant continuity (ie, another physician who consulted on the patient during admission). Information continuity was divided into discharge summary continuity and postdischarge visit information continuity.

We quantified provider and information continuity using Breslau's Usual Provider of Continuity (UPC)27 measure. It is a widely used and validated continuity measure whose values are meaningful and interpretable.6 The UPC measures the proportion of visits with the physician of interest (for provider continuity) or the proportion of visits having the information of interest (for information continuity). The UPC was calculated as: $${\rm UPC} = {\rm n}_{\rm i} / {\rm N}$$ where UPC is the Usual Provider of Continuity; ni is the number of postdischarge visits to the physician type of interest (eg, prehospital, hospital, postdischarge) or the number of visits at which the information of interest (eg, discharge summary) was available; and N is the total number of postdischarge visits. The UPC ranges from 0 to 1, where 0 is perfect discontinuity and 1 is perfect continuity. Details regarding the provider and information continuity measures are given in the supporting information and were discussed in greater detail in a previous study.28
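A minimal sketch of the UPC calculation follows, using hypothetical provider identifiers; the guard mirrors the study's exclusion of patients with fewer than 2 postdischarge visits.

```python
def upc(visits, of_interest):
    """Breslau's Usual Provider of Continuity: the fraction of
    postdischarge visits matching the provider (or information)
    of interest.

    `visits` is a sequence of per-visit attributes (eg, provider IDs);
    `of_interest` is the set of target values.
    """
    if len(visits) < 2:
        # The measure is incalculable with fewer than 2 visits,
        # matching the study's exclusion criterion.
        raise ValueError("UPC needs at least 2 postdischarge visits")
    n_i = sum(1 for v in visits if v in of_interest)
    return n_i / len(visits)

# Example: 4 postdischarge visits, 3 with the (hypothetical) hospital
# physician "Dr A" -- a UPC of 0.75 toward perfect continuity (1.0)
continuity = upc(["Dr A", "Dr A", "Dr B", "Dr A"], {"Dr A"})
```

The same function computes information continuity if each visit is represented by whether the information of interest (eg, the discharge summary) was available at that visit.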

As the formulae in the supporting information suggest, all continuity measures were incalculable prior to the first postdischarge visit and all continuity measures changed value at each visit during patient observation. In addition, a particular physician visit could increase multiple continuity measures simultaneously. For example, a visit with a physician who was the hospital physician and who regularly treated the patient prior to the hospitalization would increase both hospital and prehospital provider continuity. If the patient had previously seen the physician after discharge, the visit would also increase postdischarge physician continuity.

Study Outcomes

Outcomes for the study included time to all‐cause death and time to all‐cause urgent readmission. To be classified as urgent, a readmission could not have been arranged when the patient was originally discharged from hospital, nor more than 4 weeks prior to the readmission. All hospital admissions meeting these criteria during the 6‐month study period were labeled as urgent readmissions, even if they were unrelated to the index admission.
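The urgency rule can be expressed as a small predicate; the function name and arguments are illustrative, not taken from the study protocol.

```python
from datetime import date, timedelta

def is_urgent(readmit_date, arranged_at_discharge, arranged_date=None):
    """Classify a readmission under the study's rule: urgent unless it
    was arranged at the original discharge, or arranged more than
    4 weeks before the readmission itself.

    `arranged_date` is None when the readmission was unplanned.
    """
    if arranged_at_discharge:
        return False  # planned at discharge -> not urgent
    if arranged_date is None:
        return True   # unplanned readmissions are always urgent
    # Arranged later: urgent only if booked within the 4 weeks
    # preceding the readmission.
    return readmit_date - arranged_date <= timedelta(weeks=4)
```

For instance, a readmission booked 12 days beforehand would still count as urgent, whereas one booked 2 months in advance would not.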

Principal contacts were called if we were unable to reach the patient to determine their outcomes. If the patient's vital status remained unclear, we contacted the Office of the Provincial Registrar to determine if and when the patient died during the 6 months after discharge from hospital.

Analysis

Outcome incidence densities and 95% confidence intervals [CIs] were calculated using PROC GENMOD in SAS to account for clustering of patients in hospitals. We used multivariate proportional hazards modeling to determine the independent association of provider and information continuity measures with time to death and time to urgent readmission. Patient observation started when patients were discharged from the hospital. Patient observation ended at the earliest of the following: death; urgent readmission to the hospital; end of follow‐up (which was 6 months after discharge from the hospital) or loss to follow‐up. Because hospital consultant continuity was very highly skewed (95.6% of patients had a value of 0; mean value of 0.016; skewness 6.9), it was not included in the primary regression models but was included in a sensitivity analysis.

To adjust for potential confounders in the association between continuity and the outcomes, our model included all factors that were independently associated with either the outcome or any continuity measure. Factors associated with death or urgent readmission were summarized using the LACE index.29 This index combines a patient's hospital length of stay, admission acuity, patient comorbidity (measured with the Charlson Score30 using updated disease category weights by Schneeweiss et al.),31 and emergency room utilization (measured as the number of visits in the 6 months prior to admission) into a single number ranging from 0 to 19. The LACE index was moderately discriminative and highly accurate at predicting 30‐day death or urgent readmission.29 In a separate study,28 we found that the following factors were independently associated with at least one of the continuity measures: patient age; patient sex; number of admissions in previous 6 months; number of regular treating physicians prior to admission; hospital service (medicine vs. surgery); and number of complications in the hospital (defined as new problems arising after admission to hospital). By including all factors that were independently associated with either the outcome or continuity, we controlled for all measured factors that could act as confounders in the association between continuity and outcomes. We accounted for the clustered study design by using conditional proportional hazards models that stratified by hospitals.32 Analytical details are given in the supporting information.
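As an illustration, the LACE index described above can be computed as follows. The point assignments below reflect our reading of the published index29 and should be verified against that paper before reuse.

```python
def lace_score(los_days, acute_admission, charlson, ed_visits_6mo):
    """LACE index (0-19): Length of stay, Acuity of admission,
    Comorbidity (Charlson score), Emergency-room use.

    Point assignments follow the published index (reference 29)
    as we read it; verify against the original paper.
    """
    # L: hospital length of stay, in days
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = int(los_days)      # 1, 2, or 3 points
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    # A: emergent (acute) admission
    a = 3 if acute_admission else 0
    # C: Charlson comorbidity score, capped at 5 points
    c = charlson if charlson <= 3 else 5
    # E: emergency-department visits in the prior 6 months, capped at 4
    e = min(ed_visits_6mo, 4)
    return l + a + c + e
```

The maximum score (14+ day stay, emergent admission, Charlson of 4 or more, and 4 or more ED visits) is 7 + 3 + 5 + 4 = 19, matching the 0 to 19 range quoted above.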

Results

Between October 2002 and July 2006, we enrolled 5035 patients from 11 hospitals (Figure 1). Of the 5035 patients, 274 (5.4%) had no follow-up interview with study personnel. A total of 885 (17.6%) had fewer than 2 postdischarge physician visits and were not included in the continuity analyses. This left 3876 patients for this analysis (77.0% of the original cohort), of whom 3727 had complete follow-up (96.1% of the study cohort). A total of 531 patients (10.6% of the original cohort) had incomplete follow-up: 342 (6.8%) were lost to follow-up; 172 (3.4%) refused participation; and 24 (0.5%) were transferred into a nursing home during the first month of observation.

Figure 1
Patient follow‐up. Creation of the study cohort (n = 3876) from the original cohort is illustrated.
Figure 2
Time to death or urgent readmission. This figure summarizes outcomes for the study cohort. The horizontal axis presents days from discharge. The vertical axis presents proportion of the cohort without death or urgent readmission. The gray line presents time to death; the black line presents time to urgent readmission. Dotted lines present the 95% CI for each survival curve.

The 3876 study patients are described in Table 1. Overall, these people had a mean age of 62 and most commonly had no physical limitations. Almost a third of patients had been admitted to the hospital in the previous 6 months. A total of 7.6% of patients had no regular prehospital physician while 5.8% had more than one regular prehospital physician. Patients were evenly split between acute and elective admissions and 12% had a complication during their admission. They were discharged after a median of 4 days on a median of 4 medications.

Description of Study Cohort

| Factor | No Death or Urgent Readmission (n = 3491) | Death or Urgent Readmission (n = 385) | All (n = 3876) |
| --- | --- | --- | --- |
| Mean patient age, years (SD) | 61.59 (16.16) | 67.70 (15.53) | 62.19 (16.20) |
| Female (%) | 1838 (52.6) | 217 (56.4) | 2055 (53.0) |
| Lives alone (%) | 791 (22.7) | 107 (27.8) | 898 (23.2) |
| # activities of daily living requiring aids (%): 0 | 3277 (93.9) | 354 (91.9) | 3631 (93.7) |
| 1 | 125 (3.6) | 20 (5.2) | 145 (3.7) |
| >1 | 89 (2.5) | 11 (2.8) | 100 (2.8) |
| # physicians who see patient regularly (%): 0 | 241 (6.9) | 22 (5.7) | 263 (6.8) |
| 1 | 3060 (87.7) | 333 (86.5) | 3393 (87.5) |
| 2 | 150 (4.3) | 21 (5.5) | 171 (4.4) |
| >2 | 281 (8.0) | 31 (8.0) | 312 (8.0) |
| # admissions in previous 6 months (%): 0 | 2420 (69.3) | 222 (57.7) | 2642 (68.2) |
| 1 | 833 (23.9) | 103 (26.8) | 936 (24.1) |
| >1 | 238 (6.8) | 60 (15.6) | 298 (7.7) |
| *Index hospitalization description* | | | |
| Number of discharge medications, median (IQR) | 4 (2-7) | 6 (3-9) | 4 (2-7) |
| Admitted to medical service (%) | 1440 (41.2) | 231 (60.0) | 1671 (43.1) |
| Acute diagnoses: CAD (%) | 238 (6.8) | 23 (6.0) | 261 (6.7) |
| Neoplasm of unspecified nature (%) | 196 (5.6) | 35 (9.1) | 231 (6.0) |
| Heart failure (%) | 127 (3.6) | 38 (9.9) | 165 (4.3) |
| Acute procedures: CABG (%) | 182 (5.2) | 14 (3.6) | 196 (5.1) |
| Total knee arthroplasty (%) | 173 (5.0) | 10 (2.6) | 183 (4.7) |
| Total hip arthroplasty (%) | 118 (3.4) | 2 (0.5) | 120 (3.1) |
| Complication during admission (%) | 403 (11.5) | 63 (16.4) | 466 (12.0) |
| LACE index: mean (SD) | 8.0 (3.6) | 10.3 (3.8) | 8.2 (3.7) |
| Length of stay in days: median (IQR) | 4 (2-7) | 6 (3-10) | 4 (2-8) |
| Acute/emergent admission (%) | 1851 (53.0) | 272 (70.6) | 2123 (54.8) |
| Charlson score (%): 0 | 2771 (79.4) | 241 (62.6) | 3012 (77.7) |
| 1 | 103 (3.0) | 17 (4.4) | 120 (3.1) |
| 2 | 446 (12.8) | 86 (22.3) | 532 (13.7) |
| >2 | 171 (4.9) | 41 (10.6) | 212 (5.5) |
| Emergency room use (# visits/year) (%): 0 | 2342 (67.1) | 190 (49.4) | 2532 (65.3) |
| 1 | 761 (21.8) | 101 (26.2) | 862 (22.2) |
| >1 | 388 (11.1) | 94 (24.4) | 482 (12.4) |

Abbreviations: CABG, coronary artery bypass graft; CAD, coronary artery disease; IQR, interquartile range; SD, standard deviation.

Patients were observed in the study for a median of 175 days (interquartile range [IQR] 175‐178). During this time they had a median of 4 physician visits (IQR 3‐6). The first postdischarge physician visit occurred a median of 10 days (IQR 6‐18) after discharge from hospital.

Continuity Measures

Table 2 summarizes all continuity scores. Since continuity scores varied significantly over time,28 Table 2 provides continuity scores on the last day of patient observation. Preadmission provider, postdischarge provider, and discharge summary continuity all had similar values and distributions, with median values ranging between 0.444 and 0.571. A total of 1797 patients (46.4%) had a hospital physician provider continuity score of 0.

Ranges of Continuity Measures on Last Day of Patient Observation

| Measure | Minimum | 25th Percentile | Median | 75th Percentile | Maximum |
| --- | --- | --- | --- | --- | --- |
| *Provider continuity* | | | | | |
| A: Pre-admission physician | 0 | 0.143 | 0.444 | 0.667 | 1.000 |
| B: Hospital physician | 0 | 0 | 0.143 | 0.400 | 1.000 |
| C: Post-discharge physician | 0 | 0.333 | 0.571 | 0.750 | 1.000 |
| *Information continuity* | | | | | |
| D: Discharge summary | 0 | 0.095 | 0.500 | 0.800 | 1.000 |
| E: Post-discharge information | 0 | 0 | 0.182 | 0.500 | 1.000 |

Study Outcomes

During a median of 175 days of observation, 45 patients died (event rate 2.6 events per 100 patient‐years observation [95% CI 2.0‐3.4]) and 340 patients were urgently readmitted (event rate 19.6 events per 100 patient‐years observation [95% CI 15.9‐24.3]). Figure 2 presents the survival curves for time to death and time to urgent readmission. The hazard of death was consistent through the observation period but the risk of urgent readmission decreased slightly after 90 days postdischarge.
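For intuition, a crude incidence density and an approximate (unclustered) 95% CI can be computed as below. The ~1730 patient-years figure is back-calculated for illustration only, and the paper's reported intervals additionally account for hospital clustering (via PROC GENMOD), so they will not match exactly.

```python
import math

def incidence_density(events: int, patient_years: float):
    """Events per 100 patient-years with an approximate 95% CI computed on
    the log scale. This is a simplification: the study's intervals also
    accounted for clustering of patients within hospitals."""
    rate = 100.0 * events / patient_years
    half_width = 1.96 / math.sqrt(events)  # SE of log(rate) ~ 1/sqrt(events)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)
```

For 45 deaths over roughly 1730 patient-years this yields about 2.6 (1.9-3.5) events per 100 patient-years, close to the clustered estimate reported above.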

Association Between Continuity and Outcomes

Table 3 summarizes the association of provider and information continuity with study outcomes. No continuity measure was associated with time to death, either by itself (Table 3, column A) or after adjusting for the other continuity measures (Table 3, column B). Preadmission physician continuity was associated with a significantly decreased risk of urgent readmission: when the proportion of postdischarge visits with a prehospital physician increased by 10%, the adjusted risk of urgent readmission decreased by 6% (adjusted hazard ratio [adj-HR] 0.94; 95% CI, 0.91-0.98). None of the other continuity measures, including hospital physician continuity, was significantly associated with urgent readmission, either by themselves (Table 3, column A) or after adjusting for the other continuity measures (Table 3, column B).

Association of Provider and Information Continuity With Post‐Discharge Outcomes

| Variable | Death, A (95% CI) | Death, B (95% CI) | Urgent Readmission, A (95% CI) | Urgent Readmission, B (95% CI) |
| --- | --- | --- | --- | --- |
| *Provider continuity* | | | | |
| A: Pre-admission physician | 1.03 (0.95, 1.12) | 1.06 (0.95, 1.18) | 0.95 (0.92, 0.98) | 0.94 (0.91, 0.98) |
| B: Hospital physician | 0.87 (0.74, 1.02) | 0.86 (0.70, 1.03) | 0.98 (0.94, 1.02) | 0.97 (0.92, 1.01) |
| C: Post-discharge physician | 0.97 (0.89, 1.06) | 0.93 (0.84, 1.04) | 0.98 (0.95, 1.01) | 0.98 (0.94, 1.02) |
| *Information continuity* | | | | |
| D: Discharge summary | 0.96 (0.89, 1.04) | 0.94 (0.87, 1.03) | 1.01 (0.98, 1.04) | 1.02 (0.99, 1.05) |
| E: Post-discharge information | 1.01 (0.94, 1.08) | 1.03 (0.95, 1.11) | 1.00 (0.97, 1.03) | 1.03 (0.95, 1.11) |
| *Other confounders* | | | | |
| Patient age in decades* | | 1.43 (1.13, 1.82) | | 1.18 (1.10, 1.28) |
| Female | | 1.50 (0.81, 2.77) | | 1.16 (0.94, 1.44) |
| # physicians who see patient regularly: 1 | | | | 1.46 (0.92, 2.34) |
| 2 | | | | 2.17 (1.11, 4.26) |
| >2 | | | | 3.71 (1.55, 8.88) |
| Complications during admission: 1 | | 1.38 (0.61, 3.10) | | 0.81 (0.55, 1.17) |
| >1 | | 1.01 (0.28, 3.58) | | 0.91 (0.56, 1.48) |
| # admissions in previous 6 months: 1 | | 1.27 (0.59, 2.70) | | 1.34 (1.02, 1.76) |
| >1 | | 1.42 (0.55, 3.67) | | 1.78 (1.26, 2.51) |
| LACE index* | | 1.16 (1.06, 1.26) | | 1.10 (1.07, 1.14) |

  • NOTE: The adjusted hazard ratio with 95% CI is presented. In columns A, each continuity measure was included in a model without the other continuity measures but with the other confounders. Because this resulted in 5 separate models, adjusted hazard ratios for the other confounders are not given in columns A. In columns B, the model includes all continuity measures and covariates. The hazard ratio for provider and information continuity scores expresses the change in the risk of the outcome when the continuity score increases by 0.1. A hazard ratio could not be estimated in the death model for number of regular physicians because of empty cells (ie, no one who died was without a regular physician).

  • Abbreviation: CI, confidence interval.

  • * Hazard ratio expresses the influence of an increase in the variable's unit by 1.

  • Confounder variables were included in each of the 5 survival models (one for each continuity measure); results varied between the models.

  • For categorical variables, the comparator group is 0.

Increased patient age and increased LACE index score were both strongly associated with an increased risk of death (adj‐HR 1.43 [1.13‐1.82] and 1.16 [1.06‐1.26], respectively) and urgent readmission (adj‐HR 1.18 [1.10‐1.28] and 1.10 [1.07‐1.14], respectively). Hospitalization in the 6 months prior to admission significantly increased the risk of urgent readmission but not death. The risk of urgent readmission increased significantly as the number of regular prehospital physicians increased.

Sensitivity Analyses

Our study conclusions did not change in the sensitivity analyses. The number of postdischarge physician visits (expressed as a time‐dependent covariate) was not associated with either death or with urgent readmission and preadmission physician continuity remained significantly associated with time to urgent readmission (supporting information). Adding consultant continuity to the model also did not change our results (supporting information). In‐hospital consultant continuity was associated with an increased risk of urgent readmission (adj‐HR 1.10, 95% CI, 1.01‐1.20). The association between pre‐admission physician continuity and time to urgent readmission did not interact significantly with patient age, LACE index score, or number of previous admissions.

Discussion

This large, prospective cohort study measured the independent association of several provider and information continuity measures with important outcomes in patients discharged from hospital. After adjusting for potential confounders, we found that increased continuity with physicians who regularly cared for the patient prior to the admission was significantly and independently associated with a decreased risk of urgent readmission. Our data suggest that continuity with the hospital physician did not independently influence the risk of patient death or urgent readmission after discharge.

Although hospital physician continuity did not significantly change patient outcomes, we found that follow‐up with a physician who regularly treated the patient prior to their admission was associated with a significantly decreased risk of urgent readmission. This could reflect the important role that a patient's regular physician plays in their health care. Other studies have shown a positive association between continuity with a regular physician and improved outcomes including decreased emergency room utilization7, 8 and decreased hospitalization.10, 11

We were somewhat disappointed that information continuity was not independently associated with improved patient outcomes. Information continuity is likely more amenable to modification than is provider continuity. Of course, our findings do not mean that information continuity cannot improve patient outcomes; other studies have found such benefits.23, 33 Instead, our results could reflect that we measured only the availability of information to physicians. Future studies that measure the quality, relevance, and actual utilization of patient information will be better able to discern the influence of information continuity on patient outcomes.

We believe that our study was methodologically strong and unique. We captured continuity in a large group of representative patients using a broad range of measures that reflected its diverse components, including both provider and information continuity. The continuity measures were expressed and properly analyzed as time-dependent variables in a multivariate model.34 Our analysis controlled for important potential confounders. Our follow-up and data collection were rigorous, with 96.1% of our study group having complete follow-up. Finally, the analysis used multiple imputation to appropriately handle missing data in the one incomplete variable (post-discharge information continuity).35-37

Several limitations of our study should be kept in mind. We are uncertain how our results might generalize to patients discharged from obstetrical or psychiatric services, or to people in other health systems. Our analysis had to exclude patients with fewer than two physician visits after discharge, since this was the minimum required to calculate postdischarge physician and information continuity. Data collection for postdischarge information continuity was incomplete, with data missing for 19.0% of all 15 401 visits in the original cohort.38 However, a response rate of 81.0% is very good39 when compared to other survey-based studies,40 and we accounted for the missing data using multiple imputation methods. The primary outcomes of our study (time to death or urgent readmission) may be relatively insensitive to modification of quality of care, which is presumably improved by increased continuity.41 For example, Clarke found that the majority of readmissions in all patient groups were unavoidable, with 94% of medical readmissions within 1 month of discharge judged to be unavoidable.42 Future studies regarding the effects of continuity could focus on its association with other outcomes that are more reflective of quality of care, such as the risk of adverse events or medical error.21 Such outcomes would presumably be more sensitive to improved quality of care from increased continuity.

We believe that our study's major limitation was its inability to establish a causal association between continuity and patient outcomes. Our finding that increased consultant continuity was associated with an increased risk of poor outcomes highlights this concern. Presumably, patient follow-up with a hospital consultant indicates a disease status with a high risk of bad patient outcomes, a risk that is not entirely accounted for by the covariates used in this study. If we accept that unresolved confounding explains this association, the same could also apply to the association between preadmission physician continuity and improved outcomes: perhaps patients who are doing well after discharge from hospital are simply more able to return to their regular physician, in which case our analysis would identify an association between increased preadmission physician continuity and improved patient outcomes. Future analyses could also incorporate more discriminative measures of severity of hospital illness, such as those developed by Escobar et al.43 Since patients may experience health events after their discharge from hospital that could influence outcomes, recording these and expressing them in the study model as time-dependent covariates will be important. Finally, similar to the classic study by Wasson et al.44 in 1984, a proper randomized trial that measures the effect of a continuity-building intervention on both continuity of care and patient outcomes would help determine how continuity influences outcomes.

In conclusion, after discharge from hospital, increased continuity with physicians who routinely care for the patient is significantly and independently associated with a decreased risk of urgent readmission. Continuity with the hospital physician after discharge did not independently influence the risk of patient death or urgent readmission in our study. Further research is required to determine the causal association between preadmission physician continuity and improved outcomes. Until that time, clinicians should strive to optimize continuity with physicians their patients have seen prior to the hospitalization.

References
  1. Society of Hospital Medicine. 2009. Internet communication.
  2. Kralovec PD, Miller JA, Wellikson L, Huddleston JM. The status of hospital medicine groups in the United States. J Hosp Med. 2006;1:75-80.
  3. Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287:487-494.
  4. Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Hospitalists and the practice of inpatient medicine: results of a survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999;130:343-349.
  5. Pantilat SZ, Lindenauer PK, Katz PP, Wachter RM. Primary care physician attitudes regarding communication with hospitalists. Am J Med. 2001;111:15S-20S.
  6. Reid R, Haggerty J, McKendry R. Defusing the Confusion: Concepts and Measures of Continuity of Healthcare. Ottawa: Canadian Health Services Research Foundation; 2002:1-50.
  7. Brousseau DC, Meurer JR, Isenberg ML, Kuhn EM, Gorelick MH. Association between infant continuity of care and pediatric emergency department utilization. Pediatrics. 2004;113:738-741.
  8. Christakis DA, Wright JA, Koepsell TD, Emerson S, Connell FA. Is greater continuity of care associated with less emergency department utilization? Pediatrics. 1999;103:738-742.
  9. Christakis DA, Mell L, Koepsell TD, Zimmerman FJ, Connell FA. Association of lower continuity of care with greater risk of emergency department use and hospitalization in children. Pediatrics. 2001;107:524-529.
  10. Gill JM, Mainous AG. The role of provider continuity in preventing hospitalizations. Arch Fam Med. 1998;7:352-357.
  11. Mainous AG, Gill JM. The importance of continuity of care in the likelihood of future hospitalization: is site of care equivalent to a primary clinician? Am J Public Health. 1998;88:1539-1541.
  12. Baker R, Mainous AG, Gray DP, Love MM. Exploration of the relationship between continuity, trust in regular doctors and patient satisfaction with consultations with family doctors. Scand J Prim Health Care. 2003;21:27-32.
  13. Beattie P, Dowda M, Turner C, Michener L, Nelson R. Longitudinal continuity of care is associated with high patient satisfaction with physical therapy. Phys Ther. 2005;85:1046-1052.
  14. Chang FC, Donald MS, Anthony L, Maureen F, Elizabeth AS. Provider continuity and outcomes of care for persons with schizophrenia. Ment Health Serv Res. 2000;2:201-211.
  15. Christakis DA, Wright JA, Zimmerman FJ, Bassett AL, Connell FA. Continuity of care is associated with well-coordinated care. Ambul Pediatr. 2003;3:82-86.
  16. Flocke SA, Stange KC, Zyzanski SJ. The impact of insurance type and forced discontinuity on the delivery of primary care. J Fam Pract. 1997;45:129-135.
  17. Flocke SA. Measuring attributes of primary care: development of a new instrument. J Fam Pract. 1997;45:64-74.
  18. Flynn SP. Continuity of care during pregnancy: the effect of provider continuity on outcome. J Fam Pract. 1985;21:375-380.
  19. Kerse N, Buetow S, Mainous AG, Young G, Coster G, Arroll B. Physician-patient relationship and medication compliance: a primary care investigation. Ann Fam Med. 2004;2:455-461.
  20. Litaker D, Ritter C, Ober S, Aron D. Continuity of care and cardiovascular risk factor management: does care by a single clinician add to informational continuity provided by electronic medical records? Am J Manag Care. 2005;11:689-696.
  21. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138:161-167.
  22. van Walraven C, Mamdani MM, Fang J, Austin PC. Continuity of care and patient outcomes after hospital discharge. J Gen Intern Med. 2004;19:624-645.
  23. van Walraven C, Seth R, Austin PC, Laupacis A. Effect of discharge summary availability during post-discharge visits on hospital readmission. J Gen Intern Med. 2002;17:186-192.
  24. Bell CM, Schnipper JL, Auerbach AD, et al. Association of communication between hospital-based physicians and primary care providers with patient outcomes. J Gen Intern Med. 2009;24(3):381-386.
  25. Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297:831-841.
  26. van Walraven C, Taljaard M, Bell C, et al. Information exchange among physicians caring for the same patient in the community. Can Med Assoc J. 2008;179:1013-1018.
  27. Breslau N, Reeb KG. Continuity of care in a university-based practice. J Med Educ. 1975:965-969.
  28. van Walraven C, Taljaard M, Bell CM, et al. Provider and information continuity after discharge from hospital: a prospective cohort study. 2009. Unpublished work.
  29. van Walraven C, Dhalla IA, Bell CM, et al. Derivation and validation of the LACE index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. In press.
  30. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373-383.
  31. Schneeweiss S, Wang PS, Avorn J, Glynn RJ. Improved comorbidity adjustment for predicting mortality in Medicare populations. Health Serv Res. 2003;38(4):1103-1120.
  32. Glidden DV, Vittinghoff E. Modelling clustered survival data from multicentre clinical trials. Stat Med. 2004;23:369-388.
  33. Stiell A, Forster AJ, Stiell IG, van Walraven C. Prevalence of information gaps in the emergency department and the effect on patient outcomes. CMAJ. 2003;169:1023-1028.
  34. van Walraven C, Davis D, Forster AJ, Wells GA. Time-dependent bias due to improper analytical methodology is common in prominent medical journals. J Clin Epidemiol. 2004;57:672-682.
  35. Raghunathan TE. What do we do with missing data? Some options for analysis of incomplete data. Annu Rev Public Health. 2004;25:99-117.
  36. van Dijk MR, Steyerberg EW, Stenning SP, Habbema JD. Survival estimates of a prognostic classification depended more on year of treatment than on imputation of missing values. J Clin Epidemiol. 2006;59:246-253.
  37. Gorelick MH. Bias arising from missing data in predictive models. J Clin Epidemiol. 2006;59:1115-1123.
  38. van Walraven C, Taljaard M, Bell CM, et al. Information exchange among physicians caring for the same patient in the community. CMAJ. 2008;179:1013-1018.
  39. Fowler FJ. Survey Research Methods. 2nd ed. Beverly Hills: Sage; 1993.
  40. Asch DA, Jedrziewski K, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50:1129-1136.
  41. Hasan M. Readmission of patients to hospital: still ill defined and poorly understood. Int J Qual Health Care. 2001;13:177-179.
  42. Clarke A. Are readmissions avoidable? Br Med J. 1990;301:1136-1138.
  43. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239.
  44. Wasson JH, Sauvigne AE, Mogielnicki RP, et al. Continuity of outpatient medical care in elderly men. A randomized trial. JAMA. 1984;252:2413-2417.
Journal of Hospital Medicine - 5(7): 398-405
Keywords: continuity, death, readmission

Hospitalists are common in North America.1, 2 Hospitalists have been associated with a range of beneficial outcomes, including decreased length of stay.3, 4 A primary concern with the hospitalist model is its potential detrimental effect on continuity of care,5 partly because patients are often not seen by their hospitalists after discharge.

Continuity of care6 is primarily composed of provider continuity (an ongoing relationship between a patient and a particular provider over time) and information continuity (availability of data from prior events for subsequent patient encounters).6 The association between continuity of care and patient outcomes has been quantified in many studies.7-20 However, the relationship between continuity and outcomes is especially relevant after discharge from the hospital, since this is a time when patients have a high risk of poor outcomes21 as well as poor provider22 and information continuity.23-25

The association between continuity and outcomes after hospital discharge has been directly quantified in 2 studies. One found that patients seen by a physician who treated them in the hospital had a significant adjusted relative risk reduction in 30‐day death or readmission of 5% and 3%, respectively.22 The other study found that patients discharged from a general medicine ward were less likely to be readmitted if they were seen by physicians who had access to their discharge summary.23 However, neither of these studies concurrently measured the influence of provider and information continuity on patient outcomes.

Determining whether and how continuity of care influences patient outcomes after hospital discharge is essential to improve health care in an evidence‐based fashion. In addition, the influence that hospital physician follow‐up has on patient outcomes can best be determined by measuring provider and information continuity in patients after hospital discharge. This study sought to measure the independent association of several provider and information continuity measures on death or urgent readmission after hospital discharge.

Methods

Study Design

This was a multicenter prospective cohort study of consecutive patients discharged to the community from the medical or surgical services of 11 Ontario hospitals (6 university-affiliated hospitals and 5 community hospitals) in 5 cities after an elective or emergency hospitalization. Patients were invited to participate in the study if they were cognitively intact, had a telephone, and provided written informed consent. Patients were excluded if they were less than 18 years old, were discharged to nursing homes, or were not proficient in English and did not have someone to help communicate with study staff. Enrolled patients were excluded from the analysis if they had fewer than 2 physician visits prior to one of the study's outcomes or the end of patient observation (which was 6 months postdischarge). This final exclusion criterion was necessary since 2 continuity measures (postdischarge physician continuity and postdischarge information continuity) were incalculable with fewer than 2 physician visits during follow-up (supporting information). The study was approved by the research ethics board of each participating hospital.

Data Collection

Prior to hospital discharge, patients were interviewed by study personnel to identify their baseline functional status, their living conditions, all physicians who regularly treated the patient prior to admission (including both family physicians and consultants), and chronic medical conditions. The latter were confirmed by a review of the patient's chart and hospital discharge summary, when available. Patients also provided principal contacts whom we could reach in the event that patients could not be contacted. The chart and discharge summary were also used to identify diagnoses in hospital, including complications (diagnoses arising in the hospital), and medications at discharge.

Patients or their designated contacts were telephoned 1, 3, and 6 months after hospital discharge to identify the date and the physician of all postdischarge physician visits. For each postdischarge physician visit, we determined whether the physician had access to a discharge summary for the index hospitalization. We also determined the availability of information from all previous postdischarge visits that the patient had with other physicians. The methods used to collect these data were previously detailed.26 Briefly, we used three complementary methods to elicit this information from each follow‐up physician. First, patients gave the physician a survey on which the physician listed all prior visits with other doctors for which they had information. If this survey was not returned, we faxed the survey to the physician. If the faxed survey was not returned, we telephoned the physician or their office staff and administered the survey over the telephone.

Continuity Measures

We measured components of both provider and information continuity. For the posthospitalization period, we measured provider continuity for physicians who had provided patient care during three distinct phases: the prehospital period; the hospital period; and the postdischarge period. Prehospital physicians were those classified by the patient as their regular physician(s), defined as physicians (both family physicians and consultants) that they had seen in the past and were likely to see again in the future. Hospital provider continuity was divided into 2 components: hospital physician continuity (ie, the most responsible physician in the hospital) and hospital consultant continuity (ie, another physician who consulted on the patient during admission). Information continuity was divided into discharge summary continuity and postdischarge visit information continuity.

We quantified provider and information continuity using Breslau's Usual Provider of Continuity (UPC)27 measure. It is a widely used and validated continuity measure whose values are meaningful and interpretable.6 The UPC measures the proportion of visits with the physician of interest (for provider continuity) or the proportion of visits having the information of interest (for information continuity). The UPC was calculated as $${\rm UPC} = {\rm n}_{\rm i} / {\rm N}$$ where ni is the number of postdischarge visits to the physician type of interest (eg, prehospital, hospital, or postdischarge physician) or the number of visits at which the information of interest (eg, discharge summary) was available, and N is the total number of postdischarge visits. The UPC ranges from 0 to 1, where 0 is perfect discontinuity and 1 is perfect continuity. Details regarding the provider and information continuity measures are given in the supporting information and were discussed in greater detail in a previous study.28

As the formulae in the supporting information suggest, all continuity measures were incalculable prior to the first postdischarge visit and all continuity measures changed value at each visit during patient observation. In addition, a particular physician visit could increase multiple continuity measures simultaneously. For example, a visit with a physician who was the hospital physician and who regularly treated the patient prior to the hospitalization would increase both hospital and prehospital provider continuity. If the patient had previously seen the physician after discharge, the visit would also increase postdischarge physician continuity.
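A minimal sketch of the UPC as a running, time-dependent score follows: it recomputes n_i / N at each postdischarge visit, as described above. The function and predicate names are illustrative, not the authors' code.

```python
def upc_series(visits, of_interest):
    """Breslau's Usual Provider of Continuity, recomputed at each
    postdischarge visit: UPC = n_i / N, where n_i counts visits matching
    the predicate `of_interest` and N is the number of visits so far.
    Returns the running UPC after each visit (undefined before the first)."""
    n_i, scores = 0, []
    for total, visit in enumerate(visits, start=1):
        if of_interest(visit):
            n_i += 1
        scores.append(n_i / total)
    return scores
```

For example, for visits to physicians ["drA", "drB", "drA"], prehospital-physician continuity with respect to "drA" evolves as 1.0, 0.5, then 2/3, illustrating how each visit changes the measure. Several measures can be tracked at once by calling the function with different predicates on the same visit list, mirroring how one visit can increase multiple continuity scores.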

Study Outcomes

Outcomes for the study included time to all‐cause death and time to all‐cause, urgent readmission. To be classified as urgent, readmissions could not be arranged when the patient was originally discharged from hospital or more than 4 weeks prior to the readmission. All hospital admissions meeting these criteria during the 6 month study period were labeled in this study as urgent readmissions even if they were unrelated to the index admission.
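The urgency rule above can be expressed as a simple predicate. This is a sketch; the parameter names and the exact 4-week comparison convention are assumptions.

```python
from datetime import date, timedelta

def is_urgent(readmit_date, arranged_at_discharge=False, arranged_date=None):
    """Study-definition sketch: a readmission counts as 'urgent' unless it
    was arranged at the original discharge, or arranged more than 4 weeks
    before the readmission itself. `arranged_date` is when the readmission
    was booked (None if it was not planned at all)."""
    if arranged_at_discharge:
        return False
    if arranged_date is not None and (readmit_date - arranged_date) > timedelta(weeks=4):
        return False
    return True
```

An unplanned readmission is urgent; one booked 2 months in advance, or booked at discharge, is not.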

Principal contacts were called if we were unable to reach the patient to determine their outcomes. If the patient's vital status remained unclear, we contacted the Office of the Provincial Registrar to determine if and when the patient died during the 6 months after discharge from hospital.

Analysis

Outcome incidence densities and 95% confidence intervals [CIs] were calculated using PROC GENMOD in SAS to account for clustering of patients in hospitals. We used multivariate proportional hazards modeling to determine the independent association of provider and information continuity measures with time to death and time to urgent readmission. Patient observation started when patients were discharged from the hospital. Patient observation ended at the earliest of the following: death; urgent readmission to the hospital; end of follow‐up (which was 6 months after discharge from the hospital) or loss to follow‐up. Because hospital consultant continuity was very highly skewed (95.6% of patients had a value of 0; mean value of 0.016; skewness 6.9), it was not included in the primary regression models but was included in a sensitivity analysis.

To adjust for potential confounders in the association between continuity and the outcomes, our model included all factors that were independently associated with either the outcome or any continuity measure. Factors associated with death or urgent readmission were summarized using the LACE index.29 This index combines a patient's hospital length of stay, admission acuity, patient comorbidity (measured with the Charlson score30 using the updated disease category weights of Schneeweiss et al.31), and emergency room utilization (measured as the number of visits in the 6 months prior to admission) into a single number ranging from 0 to 19. The LACE index was moderately discriminative and highly accurate at predicting 30-day death or urgent readmission.29 In a separate study,28 we found that the following factors were independently associated with at least one of the continuity measures: patient age; patient sex; number of admissions in the previous 6 months; number of regular treating physicians prior to admission; hospital service (medicine vs. surgery); and number of complications in hospital (defined as new problems arising after admission). By including all factors that were independently associated with either the outcome or continuity, we controlled for all measured factors that could act as confounders in the association between continuity and outcomes. We accounted for the clustered study design by using conditional proportional hazards models stratified by hospital.32 Analytical details are given in the supporting information.
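As an illustration, a LACE-style score is a simple sum of four component scores. The point assignments below follow our reading of the published index (van Walraven et al.) and should be verified against that source before any reuse:

```python
def lace_index(los_days, acute_admission, charlson, ed_visits):
    """Sketch of the LACE index (range 0-19). Point assignments reflect
    our reading of the published index; verify against the source paper.
    los_days: hospital length of stay in days
    acute_admission: True if the admission was acute/emergent
    charlson: Charlson comorbidity score (Schneeweiss weights)
    ed_visits: emergency department visits in the prior 6 months
    """
    # L: length of stay (capped at 7 points)
    if los_days < 1:
        L = 0
    elif los_days <= 3:
        L = los_days
    elif los_days <= 6:
        L = 4
    elif los_days <= 13:
        L = 5
    else:
        L = 7
    # A: acuity of admission
    A = 3 if acute_admission else 0
    # C: comorbidity (Charlson score, capped at 5 points)
    C = charlson if charlson < 4 else 5
    # E: emergency department visits (capped at 4 points)
    E = min(ed_visits, 4)
    return L + A + C + E

# 14-day acute stay, Charlson 4, 5 prior ED visits: maximum score of 19.
print(lace_index(14, True, 4, 5))
```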

Results

Between October 2002 and July 2006, we enrolled 5035 patients from 11 hospitals (Figure 1). Of these, 274 (5.4%) had no follow-up interview with study personnel, and 885 (17.6%) had fewer than 2 postdischarge physician visits and were excluded from the continuity analyses. This left 3876 patients for this analysis (77.0% of the original cohort), of whom 3727 (96.1% of the study cohort) had complete follow-up. A total of 531 patients (10.6% of the original cohort) had incomplete follow-up: 342 (6.8%) were lost to follow-up; 172 (3.4%) refused participation; and 24 (0.5%) were transferred to a nursing home during the first month of observation.

Figure 1
Patient follow‐up. Creation of the study cohort (n = 3876) from the original cohort is illustrated.
Figure 2
Time to death or urgent readmission. This figure summarizes outcomes for the study cohort. The horizontal axis presents days from discharge. The vertical axis presents proportion of the cohort without death or urgent readmission. The gray line presents time to death; the black line presents time to urgent readmission. Dotted lines present the 95% CI for each survival curve.

The 3876 study patients are described in Table 1. Overall, these patients had a mean age of 62 years, and most had no physical limitations. Almost a third had been admitted to the hospital in the previous 6 months. A total of 7.6% of patients had no regular prehospital physician, while 5.8% had more than one regular prehospital physician. Admissions were roughly evenly split between acute and elective, and 12% of patients had a complication during their admission. Patients were discharged after a median of 4 days, on a median of 4 medications.

Description of Study Cohort

| Factor | Value | No Death or Urgent Readmission (n = 3491) | Death or Urgent Readmission (n = 385) | All (n = 3876) |
|---|---|---|---|---|
| Mean patient age, years (SD) | | 61.59 (16.16) | 67.70 (15.53) | 62.19 (16.20) |
| Female (%) | | 1838 (52.6) | 217 (56.4) | 2055 (53.0) |
| Lives alone (%) | | 791 (22.7) | 107 (27.8) | 898 (23.2) |
| # activities of daily living requiring aids (%) | 0 | 3277 (93.9) | 354 (91.9) | 3631 (93.7) |
| | 1 | 125 (3.6) | 20 (5.2) | 145 (3.7) |
| | >1 | 89 (2.5) | 11 (2.8) | 100 (2.8) |
| # physicians who see patient regularly (%) | 0 | 241 (6.9) | 22 (5.7) | 263 (6.8) |
| | 1 | 3060 (87.7) | 333 (86.5) | 3393 (87.5) |
| | 2 | 150 (4.3) | 21 (5.5) | 171 (4.4) |
| | >2 | 281 (8.0) | 31 (8.0) | 312 (8.0) |
| # admissions in previous 6 months (%) | 0 | 2420 (69.3) | 222 (57.7) | 2642 (68.2) |
| | 1 | 833 (23.9) | 103 (26.8) | 936 (24.1) |
| | >1 | 238 (6.8) | 60 (15.6) | 298 (7.7) |
| Index hospitalization description | | | | |
| Number of discharge medications, median (IQR) | | 4 (2-7) | 6 (3-9) | 4 (2-7) |
| Admitted to medical service (%) | | 1440 (41.2) | 231 (60.0) | 1671 (43.1) |
| Acute diagnoses | | | | |
| CAD (%) | | 238 (6.8) | 23 (6.0) | 261 (6.7) |
| Neoplasm of unspecified nature (%) | | 196 (5.6) | 35 (9.1) | 231 (6.0) |
| Heart failure (%) | | 127 (3.6) | 38 (9.9) | 165 (4.3) |
| Acute procedures | | | | |
| CABG (%) | | 182 (5.2) | 14 (3.6) | 196 (5.1) |
| Total knee arthroplasty (%) | | 173 (5.0) | 10 (2.6) | 183 (4.7) |
| Total hip arthroplasty (%) | | 118 (3.4) | 2 (0.5) | 120 (3.1) |
| Complication during admission (%) | | 403 (11.5) | 63 (16.4) | 466 (12.0) |
| LACE index, mean (SD) | | 8.0 (3.6) | 10.3 (3.8) | 8.2 (3.7) |
| Length of stay in days, median (IQR) | | 4 (2-7) | 6 (3-10) | 4 (2-8) |
| Acute/emergent admission (%) | | 1851 (53.0) | 272 (70.6) | 2123 (54.8) |
| Charlson score (%) | 0 | 2771 (79.4) | 241 (62.6) | 3012 (77.7) |
| | 1 | 103 (3.0) | 17 (4.4) | 120 (3.1) |
| | 2 | 446 (12.8) | 86 (22.3) | 532 (13.7) |
| | >2 | 171 (4.9) | 41 (10.6) | 212 (5.5) |
| Emergency room use, # visits/year (%) | 0 | 2342 (67.1) | 190 (49.4) | 2532 (65.3) |
| | 1 | 761 (21.8) | 101 (26.2) | 862 (22.2) |
| | >1 | 388 (11.1) | 94 (24.4) | 482 (12.4) |

Abbreviations: CABG, coronary artery bypass graft; CAD, coronary artery disease; IQR, interquartile range; SD, standard deviation.

Patients were observed in the study for a median of 175 days (interquartile range [IQR] 175‐178). During this time they had a median of 4 physician visits (IQR 3‐6). The first postdischarge physician visit occurred a median of 10 days (IQR 6‐18) after discharge from hospital.

Continuity Measures

Table 2 summarizes all continuity scores. Since continuity scores varied significantly over time,28 Table 2 provides continuity scores on the last day of patient observation. Preadmission provider, postdischarge provider, and discharge summary continuity all had similar values and distributions, with median values ranging between 0.444 and 0.571. A total of 1797 patients (46.4%) had a hospital physician provider continuity score of 0.

Ranges of Continuity Measures on Last Day of Patient Observation
| | Minimum | 25th Percentile | Median | 75th Percentile | Maximum |
|---|---|---|---|---|---|
| Provider continuity | | | | | |
| A: Pre-admission physician | 0 | 0.143 | 0.444 | 0.667 | 1.000 |
| B: Hospital physician | 0 | 0 | 0.143 | 0.400 | 1.000 |
| C: Post-discharge physician | 0 | 0.333 | 0.571 | 0.750 | 1.000 |
| Information continuity | | | | | |
| D: Discharge summary | 0 | 0.095 | 0.500 | 0.800 | 1.000 |
| E: Post-discharge information | 0 | 0 | 0.182 | 0.500 | 1.000 |

Study Outcomes

During a median of 175 days of observation, 45 patients died (event rate 2.6 events per 100 patient-years of observation [95% CI 2.0-3.4]) and 340 patients were urgently readmitted (event rate 19.6 events per 100 patient-years of observation [95% CI 15.9-24.3]). Figure 2 presents the survival curves for time to death and time to urgent readmission. The hazard of death was constant throughout the observation period, but the risk of urgent readmission decreased slightly after 90 days postdischarge.
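These incidence densities are simple ratios of events to person-time. The sketch below back-calculates an approximate person-time figure from the cohort size and typical follow-up; that figure is an assumption for illustration, not a number reported by the study:

```python
def incidence_density(events, person_years, per=100):
    """Events per `per` person-years of observation."""
    return events / person_years * per

# Total person-time is not reported directly; roughly 1730 person-years
# (cohort size x typical follow-up) is back-calculated for illustration.
person_years = 3876 * 163 / 365.25
print(round(incidence_density(45, person_years), 1))   # close to the reported 2.6
print(round(incidence_density(340, person_years), 1))  # close to the reported 19.6
```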

Association Between Continuity and Outcomes

Table 3 summarizes the associations of provider and information continuity with the study outcomes. No continuity measure was associated with time to death, either by itself (Table 3, column A) or after adjusting for the other continuity measures (Table 3, column B). Preadmission physician continuity was associated with a significantly decreased risk of urgent readmission: when the proportion of postdischarge visits with a prehospital physician increased by 10%, the adjusted risk of urgent readmission decreased by 6% (adjusted hazard ratio [adj-HR] 0.94; 95% CI, 0.91-0.98). None of the other continuity measures, including hospital physician continuity, was significantly associated with urgent readmission, either by itself (Table 3, column A) or after adjusting for the other continuity measures (Table 3, column B).

Association of Provider and Information Continuity With Post‐Discharge Outcomes

| | Death A (95% CI) | Death B (95% CI) | Urgent Readmission A (95% CI) | Urgent Readmission B (95% CI) |
|---|---|---|---|---|
| Provider continuity | | | | |
| A: Pre-admission physician | 1.03 (0.95, 1.12) | 1.06 (0.95, 1.18) | 0.95 (0.92, 0.98) | 0.94 (0.91, 0.98) |
| B: Hospital physician | 0.87 (0.74, 1.02) | 0.86 (0.70, 1.03) | 0.98 (0.94, 1.02) | 0.97 (0.92, 1.01) |
| C: Post-discharge physician | 0.97 (0.89, 1.06) | 0.93 (0.84, 1.04) | 0.98 (0.95, 1.01) | 0.98 (0.94, 1.02) |
| Information continuity | | | | |
| D: Discharge summary | 0.96 (0.89, 1.04) | 0.94 (0.87, 1.03) | 1.01 (0.98, 1.04) | 1.02 (0.99, 1.05) |
| E: Post-discharge information | 1.01 (0.94, 1.08) | 1.03 (0.95, 1.11) | 1.00 (0.97, 1.03) | 1.03 (0.95, 1.11) |
| Other confounders | | | | |
| Patient age in decades* | | 1.43 (1.13, 1.82) | | 1.18 (1.10, 1.28) |
| Female | | 1.50 (0.81, 2.77) | | 1.16 (0.94, 1.44) |
| # physicians who see patient regularly: 1 | | | | 1.46 (0.92, 2.34) |
| # physicians who see patient regularly: 2 | | | | 2.17 (1.11, 4.26) |
| # physicians who see patient regularly: >2 | | | | 3.71 (1.55, 8.88) |
| Complications during admission: 1 | | 1.38 (0.61, 3.10) | | 0.81 (0.55, 1.17) |
| Complications during admission: >1 | | 1.01 (0.28, 3.58) | | 0.91 (0.56, 1.48) |
| # admissions in previous 6 months: 1 | | 1.27 (0.59, 2.70) | | 1.34 (1.02, 1.76) |
| # admissions in previous 6 months: >1 | | 1.42 (0.55, 3.67) | | 1.78 (1.26, 2.51) |
| LACE index* | | 1.16 (1.06, 1.26) | | 1.10 (1.07, 1.14) |

NOTE: Adjusted hazard ratios with 95% CIs are presented. In columns A, each continuity measure was included in a model without the other continuity measures but with the other confounders; because this resulted in 5 separate models, adjusted hazard ratios for the other confounders are not given in columns A. In columns B, the model includes all continuity measures and covariates. The hazard ratio for provider and information continuity scores expresses the change in the risk of the outcome when the continuity score increases by 0.1. Each confounder was included in each of the 5 column A survival models (one for each continuity measure); results varied between the models. For categorical variables, the comparator group is 0. A hazard ratio could not be estimated in the death model for number of regular physicians because of empty cells (ie, no one who died was without a regular physician).

Abbreviation: CI, confidence interval.

* Hazard ratio expresses the influence of an increase of 1 in the variable.

Increased patient age and increased LACE index score were both strongly associated with an increased risk of death (adj‐HR 1.43 [1.13‐1.82] and 1.16 [1.06‐1.26], respectively) and urgent readmission (adj‐HR 1.18 [1.10‐1.28] and 1.10 [1.07‐1.14], respectively). Hospitalization in the 6 months prior to admission significantly increased the risk of urgent readmission but not death. The risk of urgent readmission increased significantly as the number of regular prehospital physicians increased.
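Because each of these hazard ratios is expressed per unit increase in its variable (0.1 for continuity scores, 1 for age in decades and the LACE index), effects compound multiplicatively over larger contrasts, a standard property of proportional hazards models. The arithmetic below is illustrative only, not an additional study result:

```python
def scaled_hr(hr_per_unit, n_units):
    """In a proportional hazards model, a coefficient compounds
    multiplicatively: the HR for an n-unit increase is HR ** n."""
    return hr_per_unit ** n_units

# adj-HR 0.94 per 0.1 increase in preadmission physician continuity
# implies, for the full 0-to-1 contrast (10 units), about 0.94**10.
print(round(scaled_hr(0.94, 10), 2))
```

Under this reading, a patient with perfect preadmission physician continuity would have roughly half the readmission hazard of a patient with none, assuming the per-unit effect holds across the whole range.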

Sensitivity Analyses

Our study conclusions did not change in the sensitivity analyses. The number of postdischarge physician visits (expressed as a time-dependent covariate) was not associated with either death or urgent readmission, and preadmission physician continuity remained significantly associated with time to urgent readmission (supporting information). Adding consultant continuity to the model also did not change our results (supporting information), although in-hospital consultant continuity itself was associated with an increased risk of urgent readmission (adj-HR 1.10; 95% CI, 1.01-1.20). The association between preadmission physician continuity and time to urgent readmission did not interact significantly with patient age, LACE index score, or number of previous admissions.

Discussion

This large, prospective cohort study measured the independent association of several provider and information continuity measures with important outcomes in patients discharged from hospital. After adjusting for potential confounders, we found that increased continuity with physicians who regularly cared for the patient prior to the admission was significantly and independently associated with a decreased risk of urgent readmission. Our data suggest that continuity with the hospital physician did not independently influence the risk of patient death or urgent readmission after discharge.

Although hospital physician continuity was not significantly associated with patient outcomes, we found that follow-up with a physician who regularly treated the patient prior to admission was associated with a significantly decreased risk of urgent readmission. This finding could reflect the important role that a patient's regular physician plays in their health care. Other studies have shown a positive association between continuity with a regular physician and improved outcomes, including decreased emergency room utilization7, 8 and decreased hospitalization.10, 11

We were somewhat disappointed that information continuity was not independently associated with improved patient outcomes, since information continuity is likely more amenable to modification than provider continuity. Our findings do not mean that information continuity cannot improve patient outcomes, as other studies have suggested.23, 33 Instead, our results could reflect the fact that we measured only the availability of information to physicians. Future studies that measure the quality, relevance, and actual utilization of patient information will be better able to discern the influence of information continuity on patient outcomes.

We believe that our study was methodologically strong and unique. We captured continuity in a large group of representative patients using a broad range of measures spanning its diverse components, including both provider and information continuity. The continuity measures were expressed and properly analyzed as time-dependent variables in a multivariate model.34 Our analysis controlled for important potential confounders. Our follow-up and data collection were rigorous, with 96.1% of the study group having complete follow-up. Finally, the analysis used multiple imputation to appropriately handle missing data in the one incomplete variable (post-discharge information continuity).35-37

Several limitations of our study should be kept in mind. We are uncertain how our results might generalize to patients discharged from obstetrical or psychiatric services, or to people in other health systems. Our analysis had to exclude patients with fewer than two physician visits after discharge, since this was the minimum required to calculate postdischarge physician and information continuity. Data collection for postdischarge information continuity was incomplete, with data missing for 19.0% of all 15,401 visits in the original cohort.38 However, a response rate of 81.0% compares very well39 with other survey-based studies,40 and we accounted for the missing data using multiple imputation methods. The primary outcomes of our study, time to death or urgent readmission, may be relatively insensitive to modification of quality of care, which is presumably what increased continuity improves.41 For example, Clarke found that the majority of readmissions in all patient groups were unavoidable, with 94% of medical readmissions within 1 month of discharge judged to be unavoidable.42 Future studies of the effects of continuity could focus on its association with outcomes that are more reflective of quality of care, such as the risk of adverse events or medical error.21 Such outcomes would presumably be more sensitive to improved quality of care from increased continuity.

We believe that our study's major limitation was its inability to establish a causal association between continuity and patient outcomes. Our finding that increased consultant continuity was associated with an increased risk of poor outcomes highlights this concern. Presumably, patient follow-up with a hospital consultant indicates a disease status with a high risk of bad outcomes, a risk that is not entirely accounted for by the covariates used in this study. If unresolved confounding explains this association, the same could apply to the association between preadmission physician continuity and improved outcomes: perhaps patients who are doing well after discharge are the ones able to return to their regular physician, in which case our analysis would identify an association between increased preadmission physician continuity and improved outcomes. Future analyses could incorporate more discriminative measures of severity of hospital illness, such as those developed by Escobar et al.43 Since patients may experience health events after discharge that could influence outcomes, recording these and expressing them as time-dependent covariates in the study model will also be important. Finally, similar to the classic study by Wasson et al.44 in 1984, a proper randomized trial that measures the effect of a continuity-building intervention on both continuity of care and patient outcomes would help determine how continuity influences outcomes.

In conclusion, after discharge from hospital, increased continuity with physicians who routinely care for the patient is significantly and independently associated with a decreased risk of urgent readmission. Continuity with the hospital physician after discharge did not independently influence the risk of patient death or urgent readmission in our study. Further research is required to determine the causal association between preadmission physician continuity and improved outcomes. Until that time, clinicians should strive to optimize continuity with physicians their patients have seen prior to the hospitalization.

Hospitalists are common in North America1, 2 and have been associated with a range of beneficial outcomes, including decreased length of stay.3, 4 A primary concern with the hospitalist model is its potential detrimental effect on continuity of care,5 partly because patients are often not seen by their hospitalist after discharge.

Continuity of care6 is primarily composed of provider continuity (an ongoing relationship between a patient and a particular provider over time) and information continuity (the availability of data from prior events for subsequent patient encounters).6 The association between continuity of care and patient outcomes has been quantified in many studies.7-20 However, the relationship between continuity and outcomes is especially relevant after hospital discharge, since this is a time when patients are at high risk of both poor outcomes21 and poor provider22 and information continuity.23-25

The association between continuity and outcomes after hospital discharge has been directly quantified in 2 studies. One found that patients seen after discharge by a physician who had treated them in the hospital had significant adjusted relative risk reductions in 30-day death and readmission of 5% and 3%, respectively.22 The other found that patients discharged from a general medicine ward were less likely to be readmitted if they were seen by physicians who had access to their discharge summary.23 However, neither study concurrently measured the influence of provider and information continuity on patient outcomes.

Determining whether and how continuity of care influences patient outcomes after hospital discharge is essential for improving health care in an evidence-based fashion. In addition, the influence of hospital physician follow-up on patient outcomes can best be determined by measuring both provider and information continuity after hospital discharge. This study therefore sought to measure the independent association of several provider and information continuity measures with death or urgent readmission after hospital discharge.

Methods

Study Design

This was a multicenter prospective cohort study of consecutive patients discharged to the community from the medical or surgical services of 11 Ontario hospitals (6 university-affiliated hospitals and 5 community hospitals) in 5 cities after an elective or emergency hospitalization. Patients were invited to participate in the study if they were cognitively intact, had a telephone, and provided written informed consent. Patients were excluded if they were younger than 18 years, were discharged to nursing homes, or were not proficient in English and did not have someone to help communicate with study staff. Enrolled patients were excluded from the analysis if they had fewer than 2 physician visits prior to one of the study's outcomes or the end of patient observation (6 months postdischarge). This final exclusion criterion was necessary because 2 continuity measures (postdischarge physician continuity and postdischarge information continuity) were incalculable with fewer than 2 physician visits during follow-up (supporting information). The study was approved by the research ethics board of each participating hospital.
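The eligibility rules can be summarized as a single filter. The field names below are hypothetical, and the function is a sketch of the stated criteria, not the study's actual screening code:

```python
def eligible(patient):
    """Summary sketch of the study's inclusion/exclusion criteria.
    `patient` is a dict with hypothetical boolean/numeric fields."""
    if patient["age"] < 18:
        return False                      # adults only
    if patient["discharged_to_nursing_home"]:
        return False                      # community discharges only
    if not patient["cognitively_intact"] or not patient["has_telephone"]:
        return False
    if not patient["english_proficient"] and not patient["has_interpreter"]:
        return False
    if not patient["consented"]:
        return False
    # the analysis additionally required >= 2 postdischarge physician
    # visits so that all continuity measures were calculable
    return patient["postdischarge_visits"] >= 2

example = {"age": 70, "discharged_to_nursing_home": False,
           "cognitively_intact": True, "has_telephone": True,
           "english_proficient": True, "has_interpreter": False,
           "consented": True, "postdischarge_visits": 3}
print(eligible(example))  # True
```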

Data Collection

Prior to hospital discharge, patients were interviewed by study personnel to identify their baseline functional status, their living conditions, all physicians who regularly treated them prior to admission (including both family physicians and consultants), and chronic medical conditions. The latter were confirmed by a review of the patient's chart and hospital discharge summary, when available. Patients also provided principal contacts whom we could call in the event the patient could not be reached. The chart and discharge summary were also used to identify diagnoses in hospital, including complications (diagnoses arising in the hospital), and medications at discharge.

Patients or their designated contacts were telephoned 1, 3, and 6 months after hospital discharge to identify the date and the physician of all postdischarge physician visits. For each postdischarge physician visit, we determined whether the physician had access to a discharge summary for the index hospitalization. We also determined the availability of information from all previous postdischarge visits that the patient had with other physicians. The methods used to collect these data were previously detailed.26 Briefly, we used three complementary methods to elicit this information from each follow‐up physician. First, patients gave the physician a survey on which the physician listed all prior visits with other doctors for which they had information. If this survey was not returned, we faxed the survey to the physician. If the faxed survey was not returned, we telephoned the physician or their office staff and administered the survey over the telephone.

Continuity Measures

We measured components of both provider and information continuity. For the posthospitalization period, we measured provider continuity for physicians who had provided patient care during three distinct phases: the prehospital period, the hospital period, and the postdischarge period. Prehospital physicians were those classified by the patient as their regular physician(s) (defined as physicians, both family physicians and consultants, whom they had seen in the past and were likely to see again in the future). Hospital provider continuity was divided into 2 components: hospital physician continuity (ie, the most responsible physician in the hospital) and hospital consultant continuity (ie, another physician who consulted on the patient during admission). Information continuity was divided into discharge summary continuity and postdischarge visit information continuity.

We quantified provider and information continuity using Breslau's Usual Provider of Continuity (UPC) measure.27 It is a widely used and validated continuity measure whose values are meaningful and interpretable.6 The UPC measures the proportion of visits with the physician of interest (for provider continuity) or the proportion of visits at which the information of interest was available (for information continuity). The UPC was calculated as $${\rm UPC} = {\rm n}_{\rm i} / {\rm N}$$ where ni is the number of postdischarge visits with the physician type of interest (eg, prehospital, hospital, or postdischarge physician) or the number of visits at which the information of interest (eg, discharge summary) was available, and N is the total number of postdischarge visits. The UPC ranges from 0 to 1, where 0 is perfect discontinuity and 1 is perfect continuity. Details regarding the provider and information continuity measures are given in the supporting information and were discussed in greater detail in a previous study.28
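In code, the UPC is a simple proportion; a minimal sketch of the formula above (ours, for illustration):

```python
def upc(n_i, total_visits):
    """Usual Provider of Continuity: the proportion of postdischarge
    visits with the physician of interest (provider continuity) or at
    which the information of interest was available (information
    continuity). Incalculable before the first postdischarge visit."""
    if total_visits == 0:
        raise ValueError("UPC is incalculable before the first visit")
    return n_i / total_visits

# e.g. 4 of 7 postdischarge visits were with a prehospital physician:
print(round(upc(4, 7), 3))  # 0.571; a value of 1.0 is perfect continuity
```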

As the formulae in the supporting information suggest, all continuity measures were incalculable prior to the first postdischarge visit and all continuity measures changed value at each visit during patient observation. In addition, a particular physician visit could increase multiple continuity measures simultaneously. For example, a visit with a physician who was the hospital physician and who regularly treated the patient prior to the hospitalization would increase both hospital and prehospital provider continuity. If the patient had previously seen the physician after discharge, the visit would also increase postdischarge physician continuity.

Study Outcomes

Outcomes for the study included time to all‐cause death and time to all‐cause, urgent readmission. To be classified as urgent, readmissions could not be arranged when the patient was originally discharged from hospital or more than 4 weeks prior to the readmission. All hospital admissions meeting these criteria during the 6 month study period were labeled in this study as urgent readmissions even if they were unrelated to the index admission.

Principal contacts were called if we were unable to reach the patient to determine their outcomes. If the patient's vital status remained unclear, we contacted the Office of the Provincial Registrar to determine if and when the patient died during the 6 months after discharge from hospital.

Analysis

Outcome incidence densities and 95% confidence intervals [CIs] were calculated using PROC GENMOD in SAS to account for clustering of patients in hospitals. We used multivariate proportional hazards modeling to determine the independent association of provider and information continuity measures with time to death and time to urgent readmission. Patient observation started when patients were discharged from the hospital. Patient observation ended at the earliest of the following: death; urgent readmission to the hospital; end of follow‐up (which was 6 months after discharge from the hospital) or loss to follow‐up. Because hospital consultant continuity was very highly skewed (95.6% of patients had a value of 0; mean value of 0.016; skewness 6.9), it was not included in the primary regression models but was included in a sensitivity analysis.

To adjust for potential confounders in the association between continuity and the outcomes, our model included all factors that were independently associated with either the outcome or any continuity measure. Factors associated with death or urgent readmission were summarized using the LACE index.29 This index combines a patient's hospital length of stay, admission acuity, patient comorbidity (measured with the Charlson Score30 using updated disease category weights by Schneeweiss et al.),31 and emergency room utilization (measured as the number of visits in the 6 months prior to admission) into a single number ranging from 0 to 19. The LACE index was moderately discriminative and highly accurate at predicting 30‐day death or urgent readmission.29 In a separate study,28 we found that the following factors were independently associated with at least one of the continuity measures: patient age; patient sex; number of admissions in previous 6 months; number of regular treating physicians prior to admission; hospital service (medicine vs. surgery); and number of complications in the hospital (defined as new problems arising after admission to hospital). By including all factors that were independently associated with either the outcome or continuity, we controlled for all measured factors that could act as confounders in the association between continuity and outcomes. We accounted for the clustered study design by using conditional proportional hazards models that stratified by hospitals.32 Analytical details are given in the supporting information.

Results

Between October 2002 and July 2006, we enrolled 5035 patients from 11 hospitals (Figure 1). Of the 5035 patients, 274 (5.4%) had no follow up interview with study personnel. A total of 885 (17.6%) had fewer than 2 post discharge physician visits and were not included in the continuity analyses. This left 3876 patients for this analysis (77.0% of the original cohort), of which 3727 had complete follow up (96.1% of the study cohort). A total of 531 patients (10.6% of the original cohort) had incomplete follow‐up because: 342 (6.8%) were lost to follow‐up; 172 (3.4%) refused participation; and 24 (0.5%) were transferred into a nursing home during the first month of observation.

Figure 1
Patient follow‐up. Creation of the study cohort (n = 3876) from the original cohort is illustrated. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]
Figure 2
Time to death or urgent readmission. This figure summarizes outcomes for the study cohort. The horizontal axis presents days from discharge. The vertical axis presents proportion of the cohort without death or urgent readmission. The gray line presents time to death; the black line presents time to urgent readmission. Dotted lines present the 95% CI for each survival curve.

The 3876 study patients are described in Table 1. Overall, these people had a mean age of 62 and most commonly had no physical limitations. Almost a third of patients had been admitted to the hospital in the previous 6 months. A total of 7.6% of patients had no regular prehospital physician while 5.8% had more than one regular prehospital physician. Patients were evenly split between acute and elective admissions and 12% had a complication during their admission. They were discharged after a median of 4 days on a median of 4 medications.

Description of Study Cohort
FactorValueDeath or Urgent ReadmissionAll (n = 3876)
No (n = 3491)Yes (n = 385)
  • Abbreviations: CABG, coronary artery bypass graft; CAD, coronary artery disease; IQR, interquartile range; SD, standard deviation.

Mean patient age, years (SD) 61.59 16.1667.70 15.5362.19 16.20
Female (%) 1838 (52.6)217 (56.4)2055 (53.0)
Lives alone (%) 791 (22.7)107 (27.8)898 (23.2)
# activities of daily living requiring aids (%)03277 (93.9)354 (91.9)3631 (93.7)
 1125 (3.6)20 (5.2)145 (3.7)
 >189 (2.5)11 (2.8)100 (2.8)
# physicians who see patient regularly (%)0241 (6.9)22 (5.7)263 (6.8)
 13060 (87.7)333 (86.5)3393 (87.5)
 2150 (4.3)21 (5.5)171 (4.4)
 >2281 (8.0)31 (8.0)312 (8.0)
# admissions in previous 6 months (%)02420 (69.3)222 (57.7)2642 (68.2)
 1833 (23.9)103 (26.8)936 (24.1)
 >1238 (6.8)60 (15.6)298 (7.7)
Index hospitalization description    
Number of discharge medications (IQR) 4 (2‐7)6 (3‐9)4 (2‐7)
Admitted to medical service (%) 1440 (41.2)231 (60.0)1671 (43.1)
Acute diagnoses:    
CAD (%) 238 (6.8)23 (6.0)261 (6.7)
Neoplasm of unspecified nature (%) 196 (5.6)35 (9.1)231 (6.0)
Heart failure (%) 127 (3.6)38 (9.9)165 (4.3)
Acute procedures    
CABG (%) 182 (5.2)14 (3.6)196 (5.1)
Total knee arthoplasty (%) 173 (5.0)10 (2.6)183 (4.7)
Total hip arthroplasty (%) 118 (3.4)(0.5)120 (3.1)
Complication during admission (%) 403 (11.5)63 (16.4)466 (12.0)
LACE index: mean (SD) 8.0 (3.6)10.3 (3.8)8.2 (3.7)
Length of stay in days: median (IQR) 4 (2‐7)6 (3‐10)4 (2‐8)
Acute/emergent admission (%) 1851 (53.0)272 (70.6)2123 (54.8)
Charlson score (%)02771 (79.4)241 (62.6)3012 (77.7)
 1103 (3.0)17 (4.4)120 (3.1)
 2446 (12.8)86 (22.3)532 (13.7)
 >2171 (4.9)41 (10.6)212 (5.5)
Emergency room use (# visits/ year) (%)02342 (67.1)190 (49.4)2532 (65.3)
 1761 (21.8)101 (26.2)862 (22.2)
 >1388 (11.1)94 (24.4)482 (12.4)

Patients were observed in the study for a median of 175 days (interquartile range [IQR] 175‐178). During this time they had a median of 4 physician visits (IQR 3‐6). The first postdischarge physician visit occurred a median of 10 days (IQR 6‐18) after discharge from hospital.

Continuity Measures

Table 2 summarizes all continuity scores. Because continuity scores varied significantly over time,28 Table 2 reports each score on the last day of patient observation. Preadmission provider, postdischarge provider, and discharge summary continuity had similar values and distributions, with median values ranging between 0.444 and 0.571. A total of 1797 patients (46.4%) had a hospital physician provider continuity score of 0.
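The provider continuity scores reported here are proportions of visits attended by a particular set of physicians. As an illustrative sketch only (the function name and data layout are assumptions for illustration, not the study's code), such a score can be computed as:

```python
def provider_continuity(visit_providers, target_providers):
    """Fraction of a patient's visits attended by any of a set of target
    providers (e.g., the pre-admission physicians). Returns 0.0 when the
    patient had no visits."""
    if not visit_providers:
        return 0.0
    hits = sum(1 for p in visit_providers if p in target_providers)
    return hits / len(visit_providers)

# Hypothetical patient: 4 of 7 post-discharge visits were with
# pre-admission physician "A", giving a score of 4/7 ~ 0.571.
score = provider_continuity(["A", "B", "A", "C", "A", "A", "D"], {"A"})
```

A score of 0 therefore means none of the patient's visits involved the target providers, matching the 46.4% of patients with a hospital physician continuity score of 0.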

Ranges of Continuity Measures on Last Day of Patient Observation
| Measure | Minimum | 25th Percentile | Median | 75th Percentile | Maximum |
| Provider continuity |  |  |  |  |  |
| &nbsp;&nbsp;A: Pre-admission physician | 0 | 0.143 | 0.444 | 0.667 | 1.000 |
| &nbsp;&nbsp;B: Hospital physician | 0 | 0 | 0.143 | 0.400 | 1.000 |
| &nbsp;&nbsp;C: Post-discharge physician | 0 | 0.333 | 0.571 | 0.750 | 1.000 |
| Information continuity |  |  |  |  |  |
| &nbsp;&nbsp;D: Discharge summary | 0 | 0.095 | 0.500 | 0.800 | 1.000 |
| &nbsp;&nbsp;E: Post-discharge information | 0 | 0 | 0.182 | 0.500 | 1.000 |

Study Outcomes

During a median of 175 days of observation, 45 patients died (event rate 2.6 events per 100 patient-years of observation [95% CI 2.0-3.4]) and 340 patients were urgently readmitted (event rate 19.6 events per 100 patient-years of observation [95% CI 15.9-24.3]). Figure 2 presents the survival curves for time to death and time to urgent readmission. The hazard of death was consistent throughout the observation period, but the risk of urgent readmission decreased slightly after 90 days postdischarge.
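Event rates of this form are the event count divided by total person-time. The sketch below is a generic illustration (the function name, the assumed person-time figure, and the log-rate normal approximation for the CI are assumptions, not necessarily the method used in the study, which may have used exact Poisson limits):

```python
import math

def event_rate_per_100py(events, total_days):
    """Crude event rate per 100 patient-years, with an approximate 95% CI
    from a normal approximation on the log of a Poisson rate."""
    patient_years = total_days / 365.25
    rate = 100 * events / patient_years
    se_log_rate = 1 / math.sqrt(events)  # SE of log(rate) for a Poisson count
    lower = rate * math.exp(-1.96 * se_log_rate)
    upper = rate * math.exp(1.96 * se_log_rate)
    return rate, (lower, upper)

# Hypothetical person-time: 45 deaths over ~1731 patient-years of
# follow-up gives a rate of about 2.6 per 100 patient-years.
rate, ci = event_rate_per_100py(45, 1731 * 365.25)
```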

Association Between Continuity and Outcomes

Table 3 summarizes the association of provider and information continuity with the study outcomes. No continuity measure was associated with time to death, either by itself (Table 3, column A) or after adjustment for the other continuity measures (Table 3, column B). Preadmission physician continuity was associated with a significantly decreased risk of urgent readmission: when the proportion of postdischarge visits with a prehospital physician increased by 10%, the adjusted risk of urgent readmission decreased by 6% (adjusted hazard ratio [adj-HR] 0.94; 95% CI, 0.91-0.98). None of the other continuity measures, including hospital physician continuity, was significantly associated with urgent readmission, either by themselves (Table 3, column A) or after adjusting for the other continuity measures (Table 3, column B).
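Because a Cox model is log-linear in its covariates, a hazard ratio quoted per 0.1-unit increase in a continuity score compounds multiplicatively for larger increases. A worked example of this arithmetic (illustrative only, using the adj-HR of 0.94 reported above):

```python
# adj-HR per 0.1-unit (10 percentage point) increase in pre-admission
# physician continuity, from Table 3:
hr_per_01 = 0.94

# Under the log-linear Cox model, a 0.5-unit (50 percentage point)
# increase compounds multiplicatively:
hr_for_05 = hr_per_01 ** 5       # ~0.73
risk_reduction = 1 - hr_for_05   # ~27% lower hazard of urgent readmission
```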

Association of Provider and Information Continuity With Post‐Discharge Outcomes
A: adjusted for other confounders only; B: adjusted for other confounders and continuity measures.

| Variable | Death, A (95% CI) | Death, B (95% CI) | Urgent Readmission, A (95% CI) | Urgent Readmission, B (95% CI) |
| Provider continuity |  |  |  |  |
| &nbsp;&nbsp;A: Pre-admission physician | 1.03 (0.95, 1.12) | 1.06 (0.95, 1.18) | 0.95 (0.92, 0.98) | 0.94 (0.91, 0.98) |
| &nbsp;&nbsp;B: Hospital physician | 0.87 (0.74, 1.02) | 0.86 (0.70, 1.03) | 0.98 (0.94, 1.02) | 0.97 (0.92, 1.01) |
| &nbsp;&nbsp;C: Post-discharge physician | 0.97 (0.89, 1.06) | 0.93 (0.84, 1.04) | 0.98 (0.95, 1.01) | 0.98 (0.94, 1.02) |
| Information continuity |  |  |  |  |
| &nbsp;&nbsp;D: Discharge summary | 0.96 (0.89, 1.04) | 0.94 (0.87, 1.03) | 1.01 (0.98, 1.04) | 1.02 (0.99, 1.05) |
| &nbsp;&nbsp;E: Post-discharge information | 1.01 (0.94, 1.08) | 1.03 (0.95, 1.11) | 1.00 (0.97, 1.03) | 1.03 (0.95, 1.11) |
| Other confounders |  |  |  |  |
| &nbsp;&nbsp;Patient age in decades* |  | 1.43 (1.13, 1.82) |  | 1.18 (1.10, 1.28) |
| &nbsp;&nbsp;Female |  | 1.50 (0.81, 2.77) |  | 1.16 (0.94, 1.44) |
| &nbsp;&nbsp;# physicians who see patient regularly: 1 |  |  |  | 1.46 (0.92, 2.34) |
| &nbsp;&nbsp;&nbsp;&nbsp;2 |  |  |  | 2.17 (1.11, 4.26) |
| &nbsp;&nbsp;&nbsp;&nbsp;>2 |  |  |  | 3.71 (1.55, 8.88) |
| &nbsp;&nbsp;Complications during admission: 1 |  | 1.38 (0.61, 3.10) |  | 0.81 (0.55, 1.17) |
| &nbsp;&nbsp;&nbsp;&nbsp;>1 |  | 1.01 (0.28, 3.58) |  | 0.91 (0.56, 1.48) |
| &nbsp;&nbsp;# admissions in previous 6 months: 1 |  | 1.27 (0.59, 2.70) |  | 1.34 (1.02, 1.76) |
| &nbsp;&nbsp;&nbsp;&nbsp;>1 |  | 1.42 (0.55, 3.67) |  | 1.78 (1.26, 2.51) |
| &nbsp;&nbsp;LACE index* |  | 1.16 (1.06, 1.26) |  | 1.10 (1.07, 1.14) |

  • NOTE: The adjusted hazard ratio with 95% CI is presented. In columns A, each continuity measure was included in a model without the other continuity measures but with the other confounders. Because this resulted in 5 separate models, adjusted hazard ratios for the other confounders are not given in columns A. In columns B, the model includes all continuity measures and covariates. The hazard ratio for provider and information continuity scores expresses changes in the risk of the outcome when the continuity score increases by 0.1. A hazard ratio could not be estimated in the death model for number of regular physicians because of empty cells (ie, no one who died was without a regular physician).

  • Abbreviation: CI, confidence interval.

  • * Hazard ratio expresses the influence of an increase in the variable's unit by 1.

  • Variable included in each of the 5 survival models (one for each continuity measure). Results varied between the models.

  • Comparator group is 0.

Increased patient age and increased LACE index score were both strongly associated with an increased risk of death (adj‐HR 1.43 [1.13‐1.82] and 1.16 [1.06‐1.26], respectively) and urgent readmission (adj‐HR 1.18 [1.10‐1.28] and 1.10 [1.07‐1.14], respectively). Hospitalization in the 6 months prior to admission significantly increased the risk of urgent readmission but not death. The risk of urgent readmission increased significantly as the number of regular prehospital physicians increased.

Sensitivity Analyses

Our study conclusions did not change in the sensitivity analyses. The number of postdischarge physician visits (expressed as a time-dependent covariate) was not associated with either death or urgent readmission, and preadmission physician continuity remained significantly associated with time to urgent readmission (Supporting Information). Adding consultant continuity to the model also did not change our results (Supporting Information). In-hospital consultant continuity was associated with an increased risk of urgent readmission (adj-HR 1.10; 95% CI, 1.01-1.20). The association between preadmission physician continuity and time to urgent readmission did not interact significantly with patient age, LACE index score, or number of previous admissions.
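Analyzing the number of postdischarge visits as a time-dependent covariate requires splitting each patient's follow-up into intervals within which the covariate is constant, i.e. the counting-process (start, stop] layout used for Cox models. The sketch below is a hypothetical illustration of that layout (the function and data structures are not taken from the study):

```python
def counting_process_rows(visit_days, end_day, event):
    """Split one patient's follow-up into (start, stop] intervals whose
    time-dependent covariate (cumulative number of post-discharge visits)
    is constant within each interval.

    visit_days: sorted days after discharge on which visits occurred.
    end_day: day follow-up ended (outcome or censoring).
    event: 1 if follow-up ended with the outcome, else 0.
    Returns rows of (start, stop, n_visits_so_far, event_flag).
    """
    rows, start, n_visits = [], 0, 0
    for day in visit_days:
        if day >= end_day:
            break
        if day > start:
            rows.append((start, day, n_visits, 0))
            start = day
        n_visits += 1
    rows.append((start, end_day, n_visits, event))
    return rows

# Hypothetical patient seen on days 10 and 30, readmitted on day 50:
rows = counting_process_rows([10, 30], 50, 1)
# -> [(0, 10, 0, 0), (10, 30, 1, 0), (30, 50, 2, 1)]
```

Rows in this format can then be fed to any survival routine that accepts start/stop times, which avoids the time-dependent bias described in reference 34.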

Discussion

This large, prospective cohort study measured the independent association of several provider and information continuity measures with important outcomes in patients discharged from hospital. After adjusting for potential confounders, we found that increased continuity with physicians who regularly cared for the patient prior to the admission was significantly and independently associated with a decreased risk of urgent readmission. Our data suggest that continuity with the hospital physician did not independently influence the risk of patient death or urgent readmission after discharge.

Although hospital physician continuity did not significantly change patient outcomes, we found that follow‐up with a physician who regularly treated the patient prior to their admission was associated with a significantly decreased risk of urgent readmission. This could reflect the important role that a patient's regular physician plays in their health care. Other studies have shown a positive association between continuity with a regular physician and improved outcomes including decreased emergency room utilization7, 8 and decreased hospitalization.10, 11

We were somewhat disappointed that information continuity was not independently associated with improved patient outcomes, since information continuity is likely more amenable to modification than is provider continuity. Of course, our findings do not mean that information continuity cannot improve patient outcomes, as other studies suggest it can.23, 33 Instead, our results could reflect that we measured only the availability of information to physicians. Future studies that measure the quality, relevance, and actual utilization of patient information will be better able to discern the influence of information continuity on patient outcomes.

We believe that our study was methodologically strong and unique. We captured continuity in a large group of representative patients using a broad range of measures spanning its diverse components, including both provider and information continuity. The continuity measures were expressed and properly analyzed as time-dependent variables in a multivariate model.34 Our analysis controlled for important potential confounders. Our follow-up and data collection were rigorous, with 96.1% of the study group having complete follow-up. Finally, the analysis used multiple imputation to appropriately handle missing data in the one incomplete variable (post-discharge information continuity).35-37
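Multiple imputation produces one estimate per imputed dataset; these are then pooled. A generic sketch of the standard pooling step (Rubin's rules; this is an illustration, not the study's code, and the example numbers are invented):

```python
import math

def pool_rubins(estimates, variances):
    """Combine m multiply-imputed estimates with Rubin's rules:
    pooled estimate is the mean; total variance is the mean
    within-imputation variance plus (1 + 1/m) times the
    between-imputation variance. Returns (estimate, standard error)."""
    m = len(estimates)
    q_bar = sum(estimates) / m                                  # pooled estimate
    w = sum(variances) / m                                      # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)      # between-imputation variance
    total_var = w + (1 + 1 / m) * b
    return q_bar, math.sqrt(total_var)

# Invented example: log hazard ratios from 5 imputed datasets
est, se = pool_rubins([0.10, 0.12, 0.09, 0.11, 0.13], [0.02] * 5)
```

The extra (1 + 1/m) term inflates the standard error to reflect uncertainty due to the missing data, which is why imputation is preferable to single-fill approaches.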

Several limitations of our study should be kept in mind. We are uncertain how our results might generalize to patients discharged from obstetrical or psychiatric services, or to people in other health systems. Our analysis had to exclude patients with fewer than two physician visits after discharge, since this was the minimum required to calculate postdischarge physician and information continuity. Data collection for postdischarge information continuity was incomplete, with data missing for 19.0% of all 15 401 visits in the original cohort.38 However, a response rate of 81.0% is very good39 when compared to other survey-based studies,40 and we accounted for the missing data using multiple imputation methods. The primary outcomes of our study (time to death or urgent readmission) may be relatively insensitive to modification of quality of care, which is presumably improved by increased continuity.41 For example, Clarke found that the majority of readmissions in all patient groups were unavoidable, with 94% of medical readmissions 1 month postdischarge judged to be unavoidable.42 Future studies regarding the effects of continuity could focus on its association with other outcomes that are more reflective of quality of care, such as the risk of adverse events or medical error.21 Such outcomes would presumably be more sensitive to improved quality of care from increased continuity.

We believe that our study's major limitation was its inability to establish a causal association between continuity and patient outcomes. Our finding that increased consultant continuity was associated with an increased risk of poor outcomes highlights this concern. Presumably, patient follow-up with a hospital consultant indicates a disease status with a high risk of bad patient outcomes, a risk that is not entirely accounted for by the covariates used in this study. If we accept that unresolved confounding explains this association, the same could also apply to the association between preadmission physician continuity and improved outcomes: perhaps patients who are doing well after discharge from hospital are the ones able to return to their regular physician. Our analysis would then identify an association between increased preadmission physician continuity and improved patient outcomes. Future analyses could also incorporate more discriminative measures of severity of hospital illness, such as those developed by Escobar et al.43 Since patients may experience health events after their discharge from hospital that could influence outcomes, recording these and expressing them in the study model as time-dependent covariates will be important. Finally, similar to the classic study by Wasson et al.44 in 1984, a proper randomized trial that measures the effect of a continuity-building intervention on both continuity of care and patient outcomes would help determine how continuity influences outcomes.

In conclusion, after discharge from hospital, increased continuity with physicians who routinely care for the patient is significantly and independently associated with a decreased risk of urgent readmission. Continuity with the hospital physician after discharge did not independently influence the risk of patient death or urgent readmission in our study. Further research is required to determine whether the association between preadmission physician continuity and improved outcomes is causal. Until then, clinicians should strive to optimize continuity with the physicians their patients saw prior to the hospitalization.

References
  1. Society of Hospital Medicine. 2009. Ref Type: Internet Communication.
  2. Kralovec PD, Miller JA, Wellikson L, Huddleston JM. The status of hospital medicine groups in the United States. J Hosp Med. 2006;1:75-80.
  3. Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287:487-494.
  4. Lindenauer PK, Pantilat SZ, Katz PP, Wachter RM. Hospitalists and the practice of inpatient medicine: results of a survey of the National Association of Inpatient Physicians. Ann Intern Med. 1999;130:343-349.
  5. Pantilat SZ, Lindenauer PK, Katz PP, Wachter RM. Primary care physician attitudes regarding communication with hospitalists. Am J Med. 2001;111:15S-20S.
  6. Reid R, Haggerty J, McKendry R. Defusing the Confusion: Concepts and Measures of Continuity of Healthcare. Ottawa: Canadian Health Services Research Foundation; 2002:1-50. Ref Type: Report.
  7. Brousseau DC, Meurer JR, Isenberg ML, Kuhn EM, Gorelick MH. Association between infant continuity of care and pediatric emergency department utilization. Pediatrics. 2004;113:738-741.
  8. Christakis DA, Wright JA, Koepsell TD, Emerson S, Connell FA. Is greater continuity of care associated with less emergency department utilization? Pediatrics. 1999;103:738-742.
  9. Christakis DA, Mell L, Koepsell TD, Zimmerman FJ, Connell FA. Association of lower continuity of care with greater risk of emergency department use and hospitalization in children. Pediatrics. 2001;107:524-529.
  10. Gill JM, Mainous AG. The role of provider continuity in preventing hospitalizations. Arch Fam Med. 1998;7:352-357.
  11. Mainous AG, Gill JM. The importance of continuity of care in the likelihood of future hospitalization: is site of care equivalent to a primary clinician? Am J Public Health. 1998;88:1539-1541.
  12. Baker R, Mainous AG, Gray DP, Love MM. Exploration of the relationship between continuity, trust in regular doctors and patient satisfaction with consultations with family doctors. Scand J Prim Health Care. 2003;21:27-32.
  13. Beattie P, Dowda M, Turner C, Michener L, Nelson R. Longitudinal continuity of care is associated with high patient satisfaction with physical therapy. Phys Ther. 2005;85:1046-1052.
  14. Chang FC, Donald MS, Anthony L, Maureen F, Elizabeth AS. Provider continuity and outcomes of care for persons with schizophrenia. Ment Health Serv Res. 2000;2:201-211.
  15. Christakis DA, Wright JA, Zimmerman FJ, Bassett AL, Connell FA. Continuity of care is associated with well-coordinated care. Ambul Pediatr. 2003;3:82-86.
  16. Flocke SA, Stange KC, Zyzanski SJ. The impact of insurance type and forced discontinuity on the delivery of primary care. J Fam Pract. 1997;45:129-135.
  17. Flocke SA. Measuring attributes of primary care: development of a new instrument. J Fam Pract. 1997;45:64-74.
  18. Flynn SP. Continuity of care during pregnancy: the effect of provider continuity on outcome. J Fam Pract. 1985;21:375-380.
  19. Kerse N, Buetow S, Mainous AG, Young G, Coster G, Arroll B. Physician-patient relationship and medication compliance: a primary care investigation. Ann Fam Med. 2004;2:455-461.
  20. Litaker D, Ritter C, Ober S, Aron D. Continuity of care and cardiovascular risk factor management: does care by a single clinician add to informational continuity provided by electronic medical records? Am J Manag Care. 2005;11:689-696.
  21. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138:161-167.
  22. van Walraven C, Mamdani MM, Fang J, Austin PC. Continuity of care and patient outcomes after hospital discharge. J Gen Intern Med. 2004;19:624-645.
  23. van Walraven C, Seth R, Austin PC, Laupacis A. Effect of discharge summary availability during post-discharge visits on hospital readmission. J Gen Intern Med. 2002;17:186-192.
  24. Bell CM, Schnipper JL, Auerbach AD, et al. Association of communication between hospital-based physicians and primary care providers with patient outcomes. J Gen Intern Med. 2009;24(3):381-386.
  25. Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297:831-841.
  26. van Walraven C, Taljaard M, Bell C, et al. Information exchange among physicians caring for the same patient in the community. Can Med Assoc J. 2008;179:1013-1018.
  27. Breslau N, Reeb KG. Continuity of care in a university-based practice. J Med Educ. 1975:965-969.
  28. van Walraven C, Taljaard M, Bell CM, et al. Provider and information continuity after discharge from hospital: a prospective cohort study. 2009. Ref Type: Unpublished Work.
  29. van Walraven C, Dhalla IA, Bell CM, et al. Derivation and validation of the LACE index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. In press.
  30. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373-383.
  31. Schneeweiss S, Wang PS, Avorn J, Glynn RJ. Improved comorbidity adjustment for predicting mortality in Medicare populations. Health Serv Res. 2003;38(4):1103-1120.
  32. Glidden DV, Vittinghoff E. Modelling clustered survival data from multicentre clinical trials. Stat Med. 2004;23:369-388.
  33. Stiell A, Forster AJ, Stiell IG, van Walraven C. Prevalence of information gaps in the emergency department and the effect on patient outcomes. CMAJ. 2003;169:1023-1028.
  34. van Walraven C, Davis D, Forster AJ, Wells GA. Time-dependent bias due to improper analytical methodology is common in prominent medical journals. J Clin Epidemiol. 2004;57:672-682.
  35. Raghunathan TE. What do we do with missing data? Some options for analysis of incomplete data. Annu Rev Public Health. 2004;25:99-117.
  36. van Dijk MR, Steyerberg EW, Stenning SP, Habbema JD. Survival estimates of a prognostic classification depended more on year of treatment than on imputation of missing values. J Clin Epidemiol. 2006;59:246-253.
  37. Gorelick MH. Bias arising from missing data in predictive models. J Clin Epidemiol. 2006;59:1115-1123.
  38. van Walraven C, Taljaard M, Bell CM, et al. Information exchange among physicians caring for the same patient in the community. CMAJ. 2008;179:1013-1018.
  39. Fowler FJ. Survey Research Methods. 2nd ed. Beverly Hills: Sage; 1993.
  40. Asch DA, Jedrziewski K, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50:1129-1136.
  41. Hasan M. Readmission of patients to hospital: still ill defined and poorly understood. Int J Qual Health Care. 2001;13:177-179.
  42. Clarke A. Are readmissions avoidable? Br Med J. 1990;301:1136-1138.
  43. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239.
  44. Wasson JH, Sauvigne AE, Mogielnicki RP, et al. Continuity of outpatient medical care in elderly men: a randomized trial. JAMA. 1984;252:2413-2417.
Issue
Journal of Hospital Medicine - 5(7)
Page Number
398-405
Display Headline
The independent association of provider and information continuity on outcomes after hospital discharge: Implications for hospitalists
Legacy Keywords
continuity, death, readmission

Copyright © 2010 Society of Hospital Medicine

Correspondence Location
ASB1-003 1053 Carling Ave, Ottawa ON, K1Y 4E9, Canada