Nevertheless, there is a general assumption that the teaching hospitals provide better care than nonteaching hospitals. They have a greater concentration of clinical expertise, a focus on clinical research, and technological superiority. They also score better in the national analysis of the quality of hospital care performed each year by the respected National Opinion Research Center at the University of Chicago and published as "America's Best Hospitals" in U.S. News & World Report. (1) In these analyses, the teaching hospitals regularly head the list.
Yet nonteaching hospitals have changed considerably over the past several decades. Trainees from the teaching hospitals, most of them board-certified, now constitute the clinical staffs of nonteaching hospitals, and with only a few exceptions (such as transplantation), the nonteaching hospitals have narrowed the technological gap. Thus, even without extensive data, one would guess that any differences in the quality of care between teaching and nonteaching hospitals would be rather small and difficult to quantify. Compounding the difficulty of such an analysis are the still unresolved problems of accurately measuring the quality of care received by hospitalized patients.
Comparing one type of hospital with another by measuring the quality of health care is an important but still rather elusive goal. Even though the field of quality measurement is nearly 20 years old, experts disagree about how adequately the quality of care can be measured today. In a six-part series on the quality of care published in the Journal two years ago, some experts asserted that although the methods of assessing quality are far from perfect, some approaches are sufficiently reliable for current use. (2,3,4) Despite the complexity of making such judgments, the public is intensely interested in assessments of hospital quality. In the 1998 "America's Best Hospitals" report, academic hospitals dominated the list, and several teaching hospitals in Massachusetts scored among the best. (1)
This issue of the Journal contains two interesting studies, both of which compare the quality of care in teaching hospitals with that in nonteaching hospitals. (5,6) In these studies, as well as in one recently published elsewhere, (7) the teaching hospitals again come out at the top.
The study by Chen et al. from Yale compared the outcomes of nearly 150,000 elderly hospitalized Medicare beneficiaries with acute myocardial infarction treated at the 60 top-ranked hospitals in the 1995, 1996, and 1997 lists of "America's Best Hospitals" with the outcomes of patients at all other hospitals. (5) As it turned out, 59 of the top 60 hospitals are major teaching hospitals. Using an independent data base -- namely, that of the Cooperative Cardiovascular Project -- Chen et al. found that 30-day mortality was significantly lower in the top 60 hospitals than in the other 4612 hospitals. The most interesting aspect of the finding was the apparent reason. Most of the survival advantage could be attributed to a rather low-technology intervention: greater use of beta-blockers and aspirin after acute infarction in the top hospitals.
The study by Taylor et al. from Duke compared major teaching hospitals with other types of hospitals with regard to the outcomes (and costs) for nearly 2700 Medicare patients with hip fracture, congestive heart failure, coronary heart disease, or stroke. (6) Although costs were higher in the major teaching hospitals than in other hospitals, patients with these conditions in major teaching hospitals had significantly lower mortality rates. Although the trend favored the major teaching hospitals for all the conditions studied, when the conditions were analyzed individually, only patients with hip fracture had a significant survival advantage in major teaching hospitals.
A recent study by investigators from Harvard also concludes that the care in teaching hospitals is superior. (7) In ratings by physicians and nurses who applied "process" criteria to data in the medical records of more than 1700 Medicare beneficiaries with congestive heart failure and pneumonia, the overall quality of care for both conditions was better in teaching hospitals than in nonteaching hospitals. As in the study from Yale, simple low-technology measures seem to have made the difference. The adequacy of diagnostic assessment, the use of certain drugs, and changes in therapy in response to new information were significantly better in the teaching hospitals. Interestingly, nonteaching hospitals scored better on measures of nursing care.
Of course, measures of morbidity and mortality are only part of the picture, and a recent report on the quality of 48 Massachusetts hospitals presents still another important part. In a study by the Picker Institute sponsored by a consortium of hospitals, health maintenance organizations, businesses, and the Massachusetts Medical Society, the nonteaching hospitals came out on top. (8) The major teaching hospitals, which had such an exemplary record in the study of "America's Best Hospitals," scored substantially lower than many hospitals with a nonteaching or a minor teaching role.
Although a few media reports missed it, the explanation of the difference in results was immediately apparent. The Picker study assessed exclusively how patients viewed their hospitals. It sampled patients' perceptions of how well hospitals handled their emotional needs, whether they received adequate information, whether their discomfort was adequately treated, whether care was coordinated, whether continuity of care was maintained, and whether their families were appropriately involved in their care. By contrast, "America's Best Hospitals" is based on an extensive data base of "structure, process, and outcome" variables including staff-to-bed ratios, availability of high-technology equipment, mortality rates, and nominations by randomly selected board-certified physicians. (1,5) These divergent reports show quite clearly that both the teaching hospitals and the nonteaching hospitals still have much to learn about providing high-quality care, and they put into sharp focus the kinds of improvements in quality that each must achieve.
Teaching hospitals must deal with the adverse consequences to patients of concentrating too exclusively on their special responsibilities in research, teaching, and technological development. If the Picker Institute's analysis of Massachusetts hospitals is relevant to other parts of the country (and I am inclined to believe it is), then nonteaching hospitals have a substantial edge over teaching hospitals in many of the human dimensions of care. The short rotations of both house staff and attending physicians in teaching hospitals often result in discontinuity of care and deficiencies in patient education and emotional support. The involvement of medical students, residents, and multiple consultants takes time and sometimes interferes with an orderly decision-making process. Appointments with specialists may be delayed for weeks or even months. Patients still endure long waits for procedures and ungracious treatment by rushed attendants in understaffed day-surgery areas and emergency rooms.
At the same time, nonteaching hospitals cannot be satisfied with their high levels of patient satisfaction when their standards of clinical practice are substantially below those of teaching hospitals. Nonteaching hospitals have much to learn from teaching hospitals about these essential aspects of care. The studies in this issue of the Journal and elsewhere show that improving the quality of care in nonteaching hospitals does not necessarily require more equipment or even more specialists. Simply giving the right drug, or even starting treatment at the right time, can mean the difference between suffering and health, life and death. If on-the-spot house staff contribute to the excellence of such decisions in teaching hospitals, increasing the number of full-time physicians in nonteaching hospitals might make up for this difference. (9)
We must redouble our efforts to give optimal care within the constraints of our budgets. Simple changes in our practices and procedures are often all that are needed. We must continue to polish our methods of assessing all dimensions of the quality of care, be willing to make the results public, and act on them decisively. We still have a long way to go.
Jerome P. Kassirer, M.D.