Section 17 of Final Report of the Advisory Committee on Human Radiation Experiments. This is a LibriVox recording. All LibriVox recordings are in the public domain. For more information or to volunteer, please visit LibriVox.org. Recording by Melanie Young. Final Report of the Advisory Committee on Human Radiation Experiments. Ethics of Human Subjects Research. A Historical Perspective. Chapter 2, Part 2. The Real World of Human Experimentation. It would be historically irresponsible, however, to rely solely on records related directly to the Nuremberg Medical Trial in evaluating the post-war scene in American medical research. The panorama of American thought and practice in human experimentation was considerably more complex than Ivy acknowledged on the witness stand in Nuremberg. In general, it does seem that most American medical scientists probably sought to approximate the practices suggested in the Nuremberg Code and the AMA principles when working with healthy volunteers. Indeed, a subtle yet pervasive indication of the recognition during this period that consent should be obtained from healthy subjects was the widespread use of the term volunteer to describe such research participants. Yet, as Advisory Committee member Susan Lederer has recently pointed out, the use of the word volunteer cannot always be taken as an indication that researchers intended to use subjects who had knowingly and freely agreed to participate in an experiment. It seems that researchers sometimes used volunteer as a synonym for research subject, with no special meaning intended regarding the decision of the participants to join in an experiment. Even with this ambiguity, it is quite clear that a strong tradition of consent has existed in research with healthy subjects, research that generally offered no prospect of medical benefit to the participant. 
In the United States, much of this tradition has rested on the well-known example of Walter Reed's turn-of-the-century experiments, in which he employed informed volunteers to establish the mosquito as the vector of transmission for yellow fever. Indeed, it seems that a tradition of research with consenting subjects has been particularly strong among Reed's military descendants in the field of infectious disease research, which has frequently required the use of healthy subjects. For example, Dr. Theodore Woodward, a physician researcher commissioned in the Army, conducted vaccine research during the 1950s with healthy subjects under the auspices of the Armed Forces Epidemiological Board. In a recent interview conducted by the Advisory Committee, Woodward recalled that the risk of exposure to diseases such as typhus was always fully disclosed to potential healthy subjects and that their consent was obtained. Since some of these studies were conducted in other countries with non-English speakers, the disclosure was given in the volunteer's language. Of his own values during this time, Woodward stated, If I gave someone something that could make them sick or kill them and hadn't told them, I'm a murderer. Similarly, Dr. John Arnold, a physician who conducted Army-sponsored malaria research on prisoners from the late 1940s through the mid-1950s, recalled that he always obtained written permission from his subjects. Not all the evidence on consent and healthy subjects comes from the military tradition. A particularly compelling general characterization of research with normal volunteers during this period comes from the analytic summary of a conference on the concept of consent in clinical research, which the Law-Medicine Research Institute, or LMRI, of Boston University convened on April 29, 1961. 
At this conference, 21 researchers from universities, hospitals, and pharmaceutical companies across the country were brought together to explore problems arising from the legal and ethical requirements of informed consent of research subjects. The LMRI project was what one might now call a fact-finding mission. The LMRI staff was attempting to define and to analyze the actual patterns of administrative practice governing the conduct of clinical research in the United States during the early 1960s. N. S. Harris, an LMRI staff member and author of the conference's final report, offered a simple but significant assessment of the handling of healthy participants in non-therapeutic research as expressed by the researchers at the meeting, whose careers spanned the decade and a half since the end of World War II: the conferees indicated that normal subjects are usually fully informed. Even so, researchers who almost certainly knew better sometimes employed unconsenting healthy subjects in research that offered them no medical benefits. For example, Dr. Louis Lasagna, who has since become a respected authority on bioethics, stated in an interview conducted by the Advisory Committee that between 1952 and 1954, when he was a research fellow at Harvard Medical School, he helped carry out secret Army-sponsored experiments in which hallucinogens were administered to healthy subjects without their full knowledge or consent. The idea was that we were supposed to give hallucinogens or possible hallucinogens to healthy volunteers and see if we could worm out of them secret information. And it went like this. A volunteer would be told, now we're going to ask you a lot of questions. But under no circumstances tell us your mother's maiden name or your social security number, or I forget what. 
I refused to participate in this because it was so mindless: a psychologist did the interviewing, and they'd give them a drug and ask them a number of questions, and, sure enough, one of the questions was, what is your mother's maiden name? Well, it was laughable in retrospect. The subjects weren't informed about anything. Lasagna, reflecting on the episode with no pride, offered the following explanation. It wasn't that we were Nazis and said, if we ask for consent, we lose our subjects. It was just that we were so ethically insensitive that it never occurred to us that you ought to level with people that they were in an experiment. This might have been true for Lasagna, the young research fellow, but the explanation is harder to understand for the director of the research project, Henry Beecher. Beecher was a Harvard anesthesiologist who, as we will see later in this chapter and in Chapter 3, would emerge as an important figure in biomedical research and ethics during the mid-1960s. If American researchers experimenting on healthy subjects sometimes did not strive to follow the standards enunciated at Nuremberg, research practices with sick patients seem even more problematic in retrospect. Advisory Committee member Jay Katz has recently argued that this type of research still gives rise to ethical difficulties for physicians engaged in research with patients, and he has offered an explanation. In conflating clinical trials and therapy, as well as patients and subjects, as if both were one and the same, physician-investigators unwittingly become double agents with conflicting loyalties. It is likely that such confusion and conflict would have been as troublesome several decades ago as it is today, if not more so. The immediate post-war period was a time of vast expansion and change in American medical science. See Introduction. Clinical research was emerging as a new and prestigious career possibility for a growing number of medical school graduates. 
Most of these young clinical researchers almost certainly would have absorbed in their early training a paternalistic approach to medical practice that was not seriously challenged until the 1970s. This approach encouraged physicians to take the responsibility for determining what was in the best interest of their patients and to act accordingly. The general public allowed physicians to act with great authority in assuming this responsibility because of an implicit trust that doctors were guided in their actions by a desire to help their patients. This paternalistic approach to medical practice can be traced to the Hippocratic admonition to help, or at least to do no harm. Another long-standing medical tradition that can be found in Hippocratic medicine is the belief that each patient poses a unique medical problem calling for a creative solution. Creativity in the treatment of individuals, which was not commonly thought of as requiring consent, could be and often was called experimentation. This tradition of medical tinkering without explicit and informed consent from a patient was intended to achieve proper treatment for an individual's ailments. But it seems also to have served, often unconsciously, as a justification for some researchers who engaged in large-scale clinical research projects without particular concern for consent from patients. Members of the medical profession and the American public have today come to better understand the intellectual and institutional distinctions between organized medical research and standard medical practice. There were significant differences between research and practice in the 1950s, but these differences were harder to recognize because they were relatively new. For example, randomized, controlled, double-blind trials of drugs, which have brought so much benefit to medical practice by greatly decreasing bias in the testing of new medicines, were introduced in the 1950s. 
The post-war period also brought an unprecedented expansion of universities and research institutes. Many more physicians than ever before were no longer solely concerned, or even primarily concerned, with aiding individual patients. These medical scientists instead set their sights on goals they deemed more important: expanding basic knowledge of the natural world; curing a dread disease for the benefit of many, not one; and, in some cases, helping to defend the nation against foreign aggressors. At the same time, this new breed of clinical researchers was motivated by more pragmatic concerns, such as getting published and moving up the academic career ladder. But these differences between medical practice and medical science, which seem relatively clear in retrospect, were not necessarily easy to recognize at the time. And coming to terms with these differences was not especially convenient for researchers. Using readily available patients as clinical material was an expedient solution to a need for human subjects. As difficult and inconvenient as it might have been for researchers in the boom years of American medical science following World War II to confront the fundamental differences between therapeutic and non-therapeutic relationships with other human beings, it was not impossible. Otto E. Guttentag, a physician at the University of California School of Medicine in San Francisco, directly addressed these issues in a 1953 article in the journal Science. Guttentag's article, and three others that appeared with it, originated as presentations in a symposium on The Problem of Experimentation on Human Beings held in 1951 at Guttentag's home institution. Guttentag constructed his paper around a comparison between the traditional role of the physician as healer and the relatively new role of physician as medical researcher. Guttentag referred to the former as physician friend and the latter as physician experimenter. 
He explicitly laid out the manner in which medical research could conflict with the traditional doctor-patient relationship. Historically, one human being is in distress, in need, crying for help, and another fellow human being is concerned and wants to help, and the desire for it precipitates the relationship. Here both the healthy and the sick persons are fellow companions, partners to conquer a common enemy who has overwhelmed one of them. Objective experimentation to confirm or disprove some doubtful or suggested biological generalization is foreign to this relationship, for it would involve taking advantage of the patient's cry for help and of his insecurity. Guttentag worried that a physician experimenter could not resist the temptation to take advantage of the patient's cry for help. To prevent the experimental exploitation of the sick that he envisioned or knew about, Guttentag suggested the following arrangement: research and care would not be pursued by the same doctor for the same person but would be kept distinct. The physician friend and the physician experimenter would be two different persons as far as a single patient is concerned. The responsibility for the patient as patient would rest during the experimental period with the physician friend, unless the patient decided differently. Retaining his original physician as personal advisor, the patient would at least be under less conflict than he is at present when the question of experimentation arises. Guttentag was nearly unique among physicians in those days in raising such problems in print. Another example of concern about the moral issues raised by research at the bedside comes from what might be an unexpected source, a Catholic theologian writing in 1945. In the course of a general review of issues in moral theology, John C. Ford, a prominent Jesuit scholar, devoted several pages to the matter of experimentation with human subjects. 
Ford was not a physician, but his thoughts on this topic, published a year before the beginning of the Nuremberg Medical Trial, suggest that a thoughtful observer could recognize even decades ago serious problems with conducting medical research on unconsenting hospital patients. The point of getting the patient's consent before conducting an experiment is increasingly important, I believe, because of reports which occasionally reach me of grave abuses in this matter. In some cases, especially charity cases, patients are not provided with a sure, well-tried and effective remedy that is at hand but instead are subjected to other treatment. The purpose of delaying the well-tried remedy is not to cure this patient but to discover experimentally what the effects of the new treatment will be, in the hope, of course, that a new discovery will benefit later generations and that the delay in administering the well-tried remedy will not harm the patient too much. This sort of thing is not only immoral but unethical from the physician's own standpoint, and is illegal as well. The transcripts and reports produced in the Law-Medicine Research Institute's effort during the early 1960s to gather information on ethical and administrative practices in research in medical settings suggest that by this time more researchers had come to recognize the troubling issues associated with using sick patients as subjects in research that could not benefit them. The body of evidence from the LMRI project also suggests that problems with this type of human experimentation had been widespread before the early 1960s and remained common at that time. The transcript of a May 1, 1961, closed-door meeting of medical researchers organized by LMRI to explore issues in pediatric research shows a medical scientist from the University of Iowa offering a revealing generalization from which none of his colleagues dissented. 
In order to understand this transcript excerpt, one must know that item A1 on the meeting agenda related to research primarily directed toward the advancement of medical science and item A2 referred to clinical investigation primarily directed toward diagnostic, therapeutic, and/or prophylactic benefit to patients. We have done a thousand things with an implied feeling of consent. We wear two hats. Item A2 allows us to do A1, but we feel uncomfortable about it. The responsibility of the physician includes responsibility to advance knowledge. Things are different now in this problem of a secondary role; i.e., to advance knowledge is increasingly in front stage. This researcher acknowledged that many physicians during the period let themselves slide into non-therapeutic research with patients. He provided the additional and significant assessment that he and his colleagues felt guilty about this behavior even though it was quite common. An even more probing analysis of these issues had taken place two days earlier, at the April 29, 1961, LMRI conference on the concept of consent referred to above in our discussion of research with healthy subjects. The participants at this meeting recognized that research with sick patients could be both therapeutic and non-therapeutic. Interestingly, they suggested that patients employed for research in which there was the possibility of therapeutic benefit with minimal or moderate risk were usually informed of the proposed study. The author of the conference report offered the plausible explanation that informing subjects in potentially beneficial research is psychologically more comfortable for investigators because the therapeutic expectations of potential subjects coincide with the purpose and expected results of the experiment. The conferees identified research in which patients are used for studies unrelated to their own disease, or in studies in which therapeutic benefits are unlikely, as the most problematic. 
Those at the meeting indicated that it is most often subjects in this category to whom disclosure is not made. The conference report outlined an approach employed by many researchers, including some at the meeting, in which, rather than seeking consent from patients for research that offers them no benefit, the therapeutic illusion is maintained, and the patient is often not even told he is participating in research. Instead, he is told he is just going to have a test. If the experimental procedure involves minimal risk but some discomfort, such as hourly urine collection, all you do is tell the patient, we want you to urinate every hour. We merely let them assume that it is part of the hospital work that is being done. Again, it is important to note that the conference participants displayed some moral discomfort with this pattern of behavior, as can be seen from the following exchange. Dr. X: There is a matter here of whether the patient is not informed because the risk is too trivial or because it's too serious. Dr. Y: I think you're getting right at it. There's a great difference between not telling the patient because you're afraid he won't participate and not telling him because you don't think there is a conceivable risk and it's so trivial you don't bother to inform him. Dr. Z: On the question of whether it's acceptable not to tell, we would say that it is not permissible on the grounds of refusal potential. It is also important to draw out of this transcript excerpt the general point that most researchers in this period appear not to have had great ethical qualms about enrolling an uninformed patient in a research project if the risk was deemed low or non-existent. Of course, the varying definitions of low risk could lead to problems with this approach. Indeed, the participants at the concept of consent conference grappled at length with this very issue without ever reaching consensus. 
A minority steadfastly asserted that participants in an experiment should be asked for consent even if the risk would be extremely low, such as taking only a small clipping of hair. The Advisory Committee's Ethics Oral History Project has provided extensive additional evidence that medical researchers sometimes, perhaps even often, took liberties with sick patients during the decades immediately following World War II. The element of opportunism was recounted in several interviews. Dr. Lasagna, who was involved in pain-management studies in post-operative patients at Harvard in the 1950s, explained rather bluntly: Mostly, I'm ashamed to say, it was as if, and I'm putting this very crudely purposely, as if you'd ordered a bunch of rats from a laboratory and you had experimental subjects available to you. They were never asked by anybody. They might have guessed they were involved in something because a young woman would come around every hour and ask them how they were and quantify their pain. We never made any efforts to find out if they guessed that they were part of it. Other researchers told similar tales with a similar mixture of matter-of-fact reporting and regretful recollection. Dr. Paul Beeson remembered a study he conducted in the 1940s, while a professor at Emory University, on patients with bacterial endocarditis, an invariably fatal disease at the time. He recalled that he thought it would be interesting to use the new technique of cardiac catheterization to compare the number of bacteria in the blood at different points in circulation. This is something I wouldn't dare do now. It would do no good for the patient. They had to come to the lab and lie on a fluoroscopic table for a couple of hours. A catheter was put into the heart, a femoral needle was put in so we could get femoral arterial blood, and so on. 
All I could say at the end was that these poor people were lying there and we had nothing to offer them, and it might have given them some comfort that a lot of people were paying attention to them for this one study. I don't remember ever asking their permission to do it. I did go around and see them, of course, and said, we want to do a study on you in the x-ray department. We'll do it tomorrow morning. And they said yes. There was never any question. Such a thing as informed consent, that term didn't even exist at that time. If I were ever on a hospital ethics committee today, I wouldn't ever pass on that particular study. Radiologist Leonard Sagan recalled an experiment in which he assisted during his training on a metabolic unit at Moffitt Hospital in San Francisco in 1956 and 1957. At the time, the adrenal gland was hot stuff. ACTH, adrenocorticotropic hormone, had just become available, and it was an important tool for exploring the function of the adrenal gland. This was the project I was involved in during that year, the study of adrenal function in patients with thyroid disease, both hypo- and hyperthyroid disease. So what did we do? I'd find some patients in the hospital and I'd add a little ACTH to their infusion and collect urine and measure output of urinary corticoids. I didn't consider it dangerous, but I didn't consider it necessary to inform them either. So far as they were concerned, this was part of their treatment. They didn't know, and no one had asked me to tell them. As far as I know, informed consent was not practiced any place in that hospital at the time. Sagan viewed the above experiment as conforming not only with the practices of the particular hospital but also with the high degree of professional autonomy and respect that was granted to physicians in this era. In 1945 or 1950, the doctor was king or queen. It never occurred to a doctor to ask for consent for anything. 
People say, oh, injection with plutonium, why didn't the doctor tell the patient? Doctors weren't in the habit of telling the patients anything. They were in charge, and nobody questioned their authority. Now that seems egregious, but at the time that's the way the world was. Another investigator, Dr. Stuart Finch, who was a professor of medicine at Yale during the 1950s and 1960s, recalled instances when oncologists there were overly aggressive in pursuing experimental therapies with terminal patients. It's very easy to talk a terminal patient into taking that medication or to try that compound or whatever the substance is. Sometimes the oncologists got way overenthused using it. It's very easy when you have a dying patient to say, look, you're going to die. Why don't you let me try this substance on you? I don't think, whether they have informed consent or not, it makes much difference at that point. Economically disadvantaged patients seem to have been perceived by some physicians as particularly appropriate subjects for medical experimentation. Dr. Beeson offered a frank description of a quid pro quo rationale that was probably quite common in justifying the use of poor patients in medical research. We were taking care of them and felt we had a right to get some return from them, since it wouldn't be in professional fees and since our taxes were paying their hospital bills. Another investigator, Dr. Thomas Chalmers, who began his career in medical research during the 1940s, identified sick patients as the most vulnerable type of experimental subjects, more vulnerable even than prisoners. One of the real ludicrous aspects of talking about a prisoner being a captive and therefore needing more protection than others is that there's nobody more captive than a sick patient. You've got pain. You feel awful. You've got this one person who's going to help you. You do anything he says. You're a captive. 
You can't, especially if you're sick and dying, discharge the doctor and get another one without a great deal of trauma and possible loss of life-saving measures. Thus, as compared with prisoners, who are now generally viewed to be vulnerable to coercion, those who are sick may be even more compromised in their ability to withstand subtle pressure to be research subjects. Who might be candidates for medical research proved to be an especially troublesome issue in the era following Nuremberg. End of Section 17. Section 18. Chapter 2, Part 3. Nuremberg and Research with Patients. The record of conducting non-therapeutic research on unconsenting sick patients during the post-war period discussed above seems to stand in particularly sharp contrast with the claims about the conduct of research involving human subjects in the United States that Andrew Ivy made during his testimony in Nuremberg. We have seen how some observers, even before Nuremberg, recognized that employing uninformed, vulnerable sick patients solely as a means to a scientific end was simply wrong. We must, however, also acknowledge that the particulars of the Nuremberg Medical Trial did not call for careful attention to the issues surrounding research with sick patients. None of the German physicians at Nuremberg stood accused of exploiting patients for experimental purposes. Nonetheless, it is likely that Andrew Ivy would have argued that consent was appropriate in virtually all instances of medical research. Dr. 
Herman Wigodsky, who worked closely under Ivy at Northwestern in the late 1930s and early 1940s, explicitly commented during an Ethics Oral History Project interview that he did not believe that his mentor drew any sort of ethical line between various types of clinical research. I don't think he made any distinction between research with sick patients and research with healthy subjects. Research was research. It didn't make any difference. Additional evidence that Ivy would have supported standards of consent for research with ill as well as with healthy subjects comes from his response to a set of rules for human experimentation issued by the German Ministry of the Interior in 1931, presented to him after he had prepared his written report for the AMA in the fall of 1946. These rules appear to be considerably more comprehensive and sophisticated than the Nuremberg Code itself. Most significantly, the 1931 German standards cover both therapeutic and non-therapeutic research, calling for consent in both types of medical investigation. For reasons that are not clear, the prosecution team at Nuremberg did not choose to place much emphasis on these German standards in constructing its case. Ivy did, however, attempt, without much help from the prosecution, to initiate a discussion of the 1931 standards during his testimony. It is clear from the trial transcript that Ivy saw a rough equivalence between the more detailed and extensive German rules and those formulated by the AMA with his assistance. Shortly after discussing the AMA principles on the witness stand, Ivy had the following exchange with prosecutor Alexander G. Hardy. Question: Do you have any further statements to make concerning the rules of medical ethics concerning experimentation on human beings? Answer: 
Well, I find that since making my report to the American Medical Association that a decree of the Minister of Public Welfare [Ivy should have said the Minister of the Interior] of Germany in 1931, on the subject of regulations for modern therapy and for the performance of scientific experiments in human beings, contains all the AMA principles which I have read. Hardy did not take what now seems an obvious opportunity to allow Ivy to expand further on these rules. However, a few minutes later, Ivy brought up the German standards again on his own. And again, Hardy did not pursue the topic further. At this point, Ivy stated his general agreement with the 1931 standards even more firmly. I cited the principles from the Reich Minister of the Interior, dated February 28, 1931, to indicate that the ethical principles for the use of human beings as subjects in medical experiments in Germany in 1931 were similar to these which I have enunciated and which have been approved by the House of Delegates of the American Medical Association. Ivy's assertion of similarity between the AMA principles and those in the 1931 German document may not meet with agreement among those who compare the two. Though they may be viewed as similar in philosophy and intent, the German Interior Ministry document is far more detailed and comprehensive than that of the AMA. Contrary to Ivy's claims at Nuremberg and the positioning of Ivy by the prosecution, he cannot in any full sense be taken as the embodiment of the entire American medical profession in the years immediately following World War II. Again, Dr. Wigodsky spoke to this point in his recent interview. Well, I've always felt that that stuff that Ivy wrote up during the time of the trials was pretty much an expression of his personal philosophy about research, and it was the kind of understanding that we had in working with him about how he felt. 
Voluntariness being number one: you had to volunteer and had to be in a situation where you could volunteer; and consent in the sense that you didn't do anything to anybody that they didn't know what you were doing, that you explained to people what it was you were going to do and why you were going to do it, and that sort of thing. Even if it is true that Andrew Ivy would have wholeheartedly endorsed the notion of obtaining consent from any research subject, whether an experiment held the possibility of personal benefit or not, and whether the subjects were sick or healthy, it seems likely that the AMA House of Delegates would have been hesitant to endorse a condensation of Ivy's principles of research ethics if they had been explicitly extended to cover all categories of clinical investigation. Obtaining consent from patients within the normal clinical relationship was not a common practice in late 1946. At that time, and for many years to come, patient trust and medical beneficence were viewed as the unshakable moral foundations on which meaningful interactions between professional healers and the sick should be built. In fact, it was not until 1981 that the AMA's Judicial Council specifically endorsed informed consent as an appropriate part of the therapeutic doctor-patient relationship. But in the end it must be acknowledged that the facts of the Nuremberg Medical Trial did not force Andrew Ivy, the AMA House of Delegates, the Nuremberg prosecutors, or the judges to grapple with the distinctions between research with sick patients and research with healthy subjects, or between therapeutic and non-therapeutic research. The Nuremberg defendants stood accused of ghastly experimental acts that were absolutely without therapeutic intent. And their unfortunate subjects were never under any illusion that they were receiving medical treatment. 
To rebut the claims of some of the medical defendants that obtaining consent from research subjects was not a clearly established principle, Ivy could and did offer a variety of examples on the witness stand from a long tradition of human experimentation on consenting healthy subjects. Ivy and the members of the prosecution team were not faced with what might have been a more troubling process: finding examples of well-organized non-therapeutic experiments on sick patients in which the subjects had clearly offered consent. Simply put, the Nuremberg Medical Trial did not demand it. American Medical Researchers' Reactions to News of the Nuremberg Medical Trial. It is important to have some understanding of the extent to which American medical scientists paid attention to the events of the Nuremberg Medical Trial and made connections with the messages that emanated from the courtroom in Germany. The Nuremberg Medical Trial received coverage in the American popular press, but it would almost certainly be an exaggeration to refer to this attention as exhaustive. Historian David Rothman has provided the following summary of the trial's coverage in the New York Times: Over 1945 and 1946, fewer than a dozen articles appeared in the New York Times on the Nazi medical research. The indictment of 42 doctors in the fall of 1946 was a page 5 story, and the opening of the trial a page 9 story. The announcement of the guilty verdict in August 1947 was a front-page story, but the execution of 7 of the defendants was again relegated to the back pages. The Advisory Committee's Ethics Oral History Project suggests that American medical researchers, perhaps like the American public generally, were not carefully following the daily developments in Nuremberg. For example, Dr. 
John Arnold, a researcher who during the Medical Trial was involved in malaria experiments on prisoners at Stateville Prison in Illinois, offered a particularly vivid, if somewhat anachronistic, recollection of the scant attention paid to the Nuremberg Medical Trial among American medical scientists: We were dimly aware of it, and as you ask me now, I'm astonished that we were not hanging on the TV at the time, watching for each twist and turn of the argument to develop, but we weren't. It might have been expected that the researchers at Stateville would have been particularly concerned with the events at Nuremberg because some of the medical defendants claimed during the trial that the wartime malaria experiments at the Illinois prison were analogous to the experiments carried out in the Nazi concentration camps. The strongest statement of awareness came from Dr. Herbert Abrams, a radiologist who was in his residency at Montefiore Hospital in the Bronx throughout most of the trial: The Nuremberg Medical Trial was part of the history of the day, and there was extensive coverage, so that the manner of human experimentation as it had been done by the Nazis was very much in the news. We were all aware of it. I think that people experienced the kind of revulsion about it that one might anticipate. It was surely something, at least in the environment I was in, that we were aware of, and that affected the thinking of everyone who was involved in clinical investigation. It seems likely, however, that the environment this young physician was in would have caused a heightened awareness of a trial dealing with Nazi medical professionals. Montefiore is a traditionally Jewish hospital that was home to many Jewish refugee physicians who had fled the terror and oppression of the Nazi regime. A trial of German physicians almost certainly would have been of particular interest in this setting. 
Even among American medical researchers who might have been aware of events at Nuremberg, it seems that many did not perceive specific personal implications in the Medical Trial. Rothman has enunciated this historical view most fully. He asserts that the prevailing view was that the Nuremberg medical defendants were Nazis first and last; by definition, nothing they did and no code drawn up in response to them was relevant to the United States. Jay Katz has offered a similar summation of the immediate response of the medical community to the Nuremberg Code: it was a good code for barbarians but an unnecessary code for ordinary physicians. Several participants in the Ethics Oral History Project affirmed the interpretations of Rothman and Katz using similar language. Said one physician, there was a disconnect between the Nuremberg Code and its application to American researchers: The interpretation of these codes by American physicians was that they were necessary for barbarians, but not for fine upstanding people. This same physician later acknowledged that, in a sense, some American researchers did not pay attention to the lessons of the Nuremberg Medical Trial because it was not convenient to do so: The connection between those horrendous acts carried out by German medical scientists in the concentration camps and our everyday investigations was not made by American medical researchers, for reasons of self-interest, to be perfectly frank. As I see it now, I'm saddened that we didn't see the connection, but that's what was done. It's hard to tell you now how we rationalized, but the fact is we did. The popular press mirrored the view that human experimentation in the United States was not a morally troubling enterprise; it was as American as apple pie. Between 1948 and 1960, magazines such as the Saturday Evening Post, Reader's Digest, and the American Mercury ran human interest stories on human guinea pigs. These stories generally focused on specific groups of healthy subjects. 
For example, doctors, conscientious objectors, medical students, and soldiers, and described them as volunteers. The articles explained the ordeals to which the volunteers had submitted themselves. Among these men and women, the New York Times informed its Sunday readership in 1958, you will find those who will take shots of the new vaccines, who will fly higher than anyone else, who will watch malaria-infected mosquitoes feed on their bare arms. The articles assured the public that the volunteers had plausible, often noble, reasons for volunteering for such seemingly gruesome treatment. The explanations included social redemption, especially in the case of prisoners; religious or other beliefs, particularly for conscientious objectors; the advancement of science; service to society; and thrill seeking. In sum, most articles in the popular press were uncritical toward experimentation on humans and assumed that those involved had freely volunteered to participate. However, a smaller number of press reports in the late 1940s and 1950s did suggest some tension between the words at Nuremberg and the practices in America. As early as 1948, for example, Science News reported the Soviet claim that Americans were using Nazi methods in the conduct of prison experiments. Concern also began to be voiced about the dangers to volunteer guinea pigs. In October 1954, for another example, the magazine Christian Century called on the Army to halt, at the first sign of danger, experiments at the Fitzsimmons hospital in Denver where soldiers were called upon to eat foods exposed to cobalt radiation. It is also possible that press accounts of experiments with patients, rather than healthy subjects, were more inclined to be critical, even in the late 1940s. 
A Saturday Evening Post article from the January 15, 1949, issue describes how a VA physician kept quiet about streptomycin trials involving the medical departments of the Army, Navy, and VA because of the risk of congressional chastisement from publicity-conscious members of the House and Senate, who might have screamed, You can't experiment on our heroes, had it been known that Army and Navy veterans of former wars were being used in the medical investigation. This was a real worry of the doctors who formulated the clinical program. Evidence suggests that some American researchers were genuinely and deeply concerned with the issues surrounding human experimentation during the years immediately following World War II. One source of insight into the thinking of American physicians engaged in clinical research during the 1950s is found in the groundbreaking work of medical sociologist Renee C. Fox. For two five-month periods between September 1951 and January 1953, Fox spent long days in continuous, direct, and intimate contact with the physicians and patients in a metabolic research ward that she anonymously called Ward F-Second. In 1959, Fox reported with remarkable sensitivity and eloquence on the ethical dilemmas faced by the physicians conducting research on this ward. She did not suggest that the scientists under her observation were unaware of the Nuremberg Code. Instead, she offered a point-by-point paraphrasing of the code, identified as the basic principles governing research on human subjects, which the physicians of the metabolic group, her collective term for the researchers whom she studied, were required to observe. Rather than being unconscious or contemptuous of a set of principles intended for barbarians, Fox reported that the researchers on Ward F-Second were sometimes troubled by their inability to apply the high and essentially unquestioned standards enunciated at the Nuremberg Medical Trial. 
The physicians of the metabolic group were deeply committed to these principles and conscientiously tried to live up to them in the research they carried out on patients. However, like most norms, the basic principles of human experimentation are formulated on such an abstract level that they only provide general guides to actual behavior. Partly as a consequence, the physicians of the metabolic group often found it difficult to judge whether or not a particular experiment in which they were engaged kept within the bounds delineated by these principles. Sometimes private discussions among researchers about the ethical aspects of human experimentation led to public events. A good example from the early 1950s is the symposium held on October 10, 1951, at the University of California School of Medicine in San Francisco, at which Otto Gutentag made the presentation discussed earlier. One of Gutentag's colleagues, Dr. Michael B. Shimpkin, organized the symposium in response to some confidential criticism that he had received for research carried out under his direction with patients at the University of California's Laboratory of Experimental Oncology. The exact nature of this criticism is unclear from the records that remain of the episode, but Shimpkin reported in a memoir that remedial steps were taken, including written protocols for all new departures in clinical research, which we asked the cancer board of the medical school to review. In his memoirs, Shimpkin also recalls that patients were screened carefully before they were admitted to the Laboratory of Experimental Oncology: They had to understand the experimental nature of our work, and every procedure was again explained to them. The initial release form even included agreement to an autopsy. The understanding did not absolve us of negligence nor deprive patients of recourse to legal action, but it did set the tone and nature of our relationships. 
In all our five years of operations, not a single threat or implied threat of action against us was voiced. Two patients did instruct us to terminate our attempts at therapy. The criticism Shimpkin experienced also demonstrated to him that a more open discussion of clinical research might be of benefit to his colleagues. According to his recollection, there was an almost visible thawing of attitude by the airing of the problem at the symposium. Less than a year after Shimpkin's 1951 San Francisco symposium, the organizers of the First International Congress of the Histopathology of the Nervous System, which was held in Rome, were sufficiently concerned with ethical issues that they invited Pope Pius XII to address the moral limits of medical methods of research and treatment. In a speech before 427 medical researchers from around the world, including 62 Americans, the Pope firmly endorsed the principle of obtaining consent from research subjects, whether sick or healthy. He also pointed his audience to the relatively recent lessons of the Nuremberg Medical Trial, which he summed up as teaching that man should not exist for the use of society; on the contrary, the community exists for the good of man. In an interview in 1961, Dr. Thomas Rivers, a prominent American virus researcher, recalled that the Pope's words had been influential among medical scientists working during the 1950s: In September 1952, Pope Pius XII had given a speech at the First International Congress on the Histopathology of the Nervous System in which he outlined the Roman Catholic Church's position on the moral limits of human experimentation for purposes of medical research. That speech had a very broad impact on medical scientists both here and abroad. A further showing of the influence of the Nuremberg Medical Trial can be seen by looking at two editions of the best-known textbook of American medical jurisprudence in the mid-twentieth century. In the 1949 edition of Doctor and Patient and the Law, Louis J. 
Regan, a physician and lawyer, offered very little under the heading experimentation, and what he did offer discouraged the effort: The physician must keep abreast of medical progress, but he is responsible if he goes beyond usual and standard procedures to the point of experimentation. If such treatment is considered indicated, it should not be undertaken until consultation has been had and until the patient has signed a paper acknowledging and assuming the risk. However, in the next edition of Regan's text, published in 1956, his few lines on human experimentation had been expanded to three pages. He presented a lengthy paraphrasing of the Nuremberg Code, and he repeated verbatim, without quotation marks, the judges' preamble to the code stating that all agree about these principles. Regan characterized the standards enunciated by the judges at Nuremberg as the most carefully developed set of precepts specifically drawn to meet the problem of human experimentation. Immediately following his discussion of Nuremberg, Regan laid out the 1946 standards of the American Medical Association, which, as he put it, researchers needed to meet in order to conform with the ethics of the American Medical Association. End of section 18. Ethics of Human Subjects Research, A Historical Perspective. Chapter 2, Part 4: New Times, New Codes. In the spring of 1959, the National Society for Medical Research, NSMR, an organization that Andrew Ivy had helped to found in 1946, sponsored a national conference on the legal environment of medicine at the University of Chicago. 
Human experimentation was one of the major topics presented for discussion by the 148 conference participants, primarily medical researchers from around the country. The published report of this conference reveals that the many researchers who gathered in Chicago understood the Nuremberg Code well enough to use it as a point of departure for discussion. As a group, the conferees acknowledged that the 10 principles of the Nuremberg Code have become the principal guidepost to the ethics of clinical research in the Western world. Not all those in attendance, however, seem to have been entirely pleased with this state of affairs. A committee on the reevaluation of the Nuremberg experimental principles reported general agreement with the spirit of these precautions, but discomfort with a number of particulars. For example, they suggested that the absolute requirement for consent in the code's first principle might be softened by inserting either explicit or reasonably presumed before the word consent. They also added a clause that would allow for third-party permission for those not capable of personal consent. The 1959 NSMR conference strongly suggests that by the late 1950s many, and perhaps even most, American medical researchers had come to recognize the Nuremberg Code as the most authoritative single answer to an important question: What are the rules for human experimentation? The same conference also provides evidence that many researchers who were giving the ethical issues surrounding human experimentation serious attention at this time were not entirely happy with the prospect of living by the letter of the code. The sources of discomfort with the Nuremberg Code can be grouped retrospectively into three broad categories. First, some recognized the discrepancies between what they had come to know as real practices in research on patient subjects and what they read in the lofty, idealized language of the code. Others simply disagreed with some elements of the code. 
Still others disliked the very idea of a single concrete set of standards to guide behavior in such a complex matter as human experimentation. Henry Beecher, the Harvard-based medical researcher who was a mentor in the early 1950s, published a paper, Experimentation in Man, in the Journal of the American Medical Association only a few months before the NSMR conference in Chicago. In this lengthy piece, Beecher addressed a mixture of all three sources of discomfort with the Nuremberg Code. Beecher offered the assertion that it is unethical and immoral to perform potentially dangerous experiments without the subject's knowledge and consent as the central conclusion of his paper. But even with this strong statement, he was not entirely happy with the first clause of the code. He viewed the Nuremberg consent clause as too extreme and as not squaring with the realities of clinical research: It is easy enough to say, as point one of the Nuremberg Code does, that the subject should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision. Practically, this is often quite impossible. For the complexities of essential medical research have reached the point where the full implications and possible hazards cannot always be known to anyone and are often communicable only to a few informed investigators, and sometimes not even to them. Certainly, the full implications of work to be done are often not really communicable to lay subjects. Point one states a requirement very often impossible of fulfillment. Beecher's second form of difficulty with the code can be found in his opinion of another Nuremberg clause, which states in part that experiments should not be random and unnecessary in nature. Beecher cited anesthesia, X-rays, radium, and penicillin as important medical breakthroughs that had resulted from random experimentation. 
He further stated that he would not know how to define experiments unnecessary in nature. Finally, Beecher expressed skepticism in general that any code could provide effective moral guidance for researchers working with human subjects. Near the beginning of his paper, he wrote that the problems of human experimentation do not lend themselves to a series of rigid rules. Later in the piece, he expanded on this thought: It is not my view that many rules can be laid down to govern experimentation in man. In most cases, these are more likely to do harm than good. Rules are not going to curb the unscrupulous. Such abuses as have occurred are usually due to ignorance and inexperience. The most effective protection for all concerned depends upon a recognition and understanding of the various aspects of the problem. Another episode involving Henry Beecher further clarifies the medical profession's dissatisfaction with the construction of the Nuremberg Code. In the fall of 1961, Beecher and other members of the Harvard Medical School's administrative board, the school's governing body, were presented with a set of rigid rules that had begun to appear in Army medical research contracts. The members of the board quickly recognized the principles, policies, and rules of the Surgeon General, Department of the Army, relating to the use of human volunteers in medical research, as worded by the Army, as little more than a restatement of the Nuremberg Code. The Army Office of the Surgeon General's provisions, as we discussed in Chapter 1, originally appeared in 1954. Given what we have just read of Beecher, it is not surprising that he was uncomfortable with the prospect of working in strict accordance with the Nuremberg Code if he were to receive funding from the Army; nor, as we see from the minutes of the administrative board meetings in which this matter came up for discussion, was Beecher alone in his opposition. 
At the October 6, 1961, meeting of the board, when the Army contract insertion was first mentioned, some members felt that with minor changes the regulations were acceptable, while others described the regulations as vague, ambiguous, and in many instances impossible to fulfill. One of Beecher's fellow board members, Assistant Medical School Dean Joseph W. Gardella, M.D., produced, following the October 1961 meeting, a thoroughgoing written critique of the principles, policies, and rules of the Surgeon General, and thus of the Nuremberg Code, for the consideration of the other board members. Gardella opened his analysis with some general comments on the intended meaning of the Nuremberg Code: The Nuremberg Code was conceived in reference to Nazi atrocities and was written for the specific purpose of preventing brutal excesses from being committed or excused in the name of science. The Code, however admirable in its intent, and however suitable for the purpose for which it was conceived, is in our opinion not necessarily pertinent to or adequate for the conduct of medical research in the United States. After questioning the pertinence of the Nuremberg Medical Trial to American medical science, Gardella went on to raise a general question about the scope of the Nuremberg Code. He strongly suggested that the code was not meant to cover what he perceived as the morally distinct enterprise of conducting potentially therapeutic research with sick patients: Does it refer only to healthy volunteers who have nothing to gain in terms of their health by participating as research subjects? Or does it include the sick, whose physicians foresee for them the possibility of personal benefit through their participation? The distinction is important in that we believe that it would be difficult and might prove to be impossible to devise one set of guiding principles that would apply satisfactorily to both of these two different categories. 
Gardella offered a variety of specific objections to the Army Surgeon General's principles, but several of these points related directly to the general questions raised above. The first rule of the Army principles stated, in a clear example of borrowing from the Nuremberg Code, that the voluntary consent of the human subject is absolutely essential. Gardella, like Beecher, did not question the general spirit of this stricture; he worried about the practical application of this seemingly simple idea. Some of Gardella's worries arose specifically in the context of research with sick patients: The concept of voluntary consent is of central importance in any code relating to experimentation on humans. And yet the concept of consent is not satisfactorily defined in the Army principles. The quality of the subject's consent depends upon an interpretation of a factual situation which will frequently be complex. Could the subject comprehend what he was told? Did he in fact comprehend? How far was his consent influenced by his condition or by his trust in his physician? These questions may be easily answered in the case of the healthy volunteer. They may be more difficult for the sick. Perhaps the most significant addition to the Nuremberg Code found in the Army principles was the requirement for written consent from research subjects. Gardella objected to this requirement in research on patients in a firm and revealing fashion: This condition is inappropriate except in connection with healthy normal volunteers. The legal overtones and implications attendant to such a requirement have no place in a patient-physician relationship based on trust. Here such faith and trust serve as the primary basis of the subject's consent. Moreover, being asked to sign a somewhat formal paper is likely to provoke anxiety in the subject, i.e., the patient, who can but wonder at the need for so much protocol. Dr. 
Gardella presented his analysis of the Army principles to the other members of the Harvard Medical School Administrative Board on March 23, 1962. The minutes of that meeting document that Gardella's views were not extreme or exceptional among leading medical scientists in the early 1960s, at least at Harvard University. The members of the Board were in general agreement with the objections and criticisms expressed in Gardella's critique. At this same meeting, Henry Beecher agreed in an expansive moment to attempt to capture in a paragraph or so the broad philosophical and moral principles that underlie the conduct of research on human beings at the Harvard Medical School. The members of the Board hoped that such a statement might satisfy the Army and that it would allow Harvard, as Gardella put it, to avert the catastrophic impact of the Surgeon General's regulation. A few months later, Beecher had completed a two-and-a-half page statement outlining the philosophy and ethical principles governing the conduct of research on human beings at Harvard Medical School. At the June 8, 1962 Board meeting, Beecher's colleagues commended and reaffirmed the views expressed in Beecher's document. In this statement, as in his 1959 published paper, Beecher emphasized the significance of consent, but he also asserted that it is folly to overlook the fact that valid informed consent may be difficult to the point of impossible to obtain in some cases. More than consent, Beecher believed in the significance of a special relationship of trust between subject or patient and the investigator. In the end, Beecher concluded that the only reliable foundation for this relationship was a virtuous medical researcher with virtuous peers. 
It is this writer's point of view that the best approach to research with human subjects concerns the character, wisdom, experience, honesty, imaginativeness, and sense of responsibility of the investigator, who in all cases of doubt or where serious consequences might remotely occur will call in his peers and get the benefit of their counsel. Rigid rules will jeopardize the research establishments of this country, where experimentation in man is essential. Available evidence suggests that by offering Henry Beecher's replacement for the Nuremberg Code, representatives of Harvard Medical School were able to extract a clarification, during a meeting with Army Surgeon General Leonard D. Heaton on July 12, 1962, that the principles being inserted into Harvard's research contracts with the Army were guidelines rather than rigid rules. While the Harvard Medical School discussion of the Army's principles took place behind closed doors and involved a policy of limited applicability, the leaders of the international medical community were simultaneously engaged in a far more visible and global attempt to bring the standards enunciated in the Nuremberg Code into line with the realities of medical research. The 1964 statement by the World Medical Association, WMA, commonly known as the Declaration of Helsinki, created two separate categories in laying out rules for human experimentation: clinical research combined with professional care, and non-therapeutic clinical research. In the former category, physicians were required to obtain consent from patient subjects only when consistent with patient psychology. In the latter type of research, the consent requirements were more absolute: Clinical research on a human being cannot be undertaken without his free consent after he has been fully informed. Another noteworthy deviation from the Nuremberg Code is Helsinki's allowance, in both therapeutic and non-therapeutic research, for third-party permission from a legal guardian. 
As one might predict from the similarity between the changes introduced by the Declaration of Helsinki and the changes to the Nuremberg Code suggested by the American participants at the NSMR conference in 1959, the WMA document met with widespread approval among researchers in this country. American medical organizations, including the American Society for Clinical Investigation, the American Federation for Clinical Research, and the American Medical Association, offered their quick and enthusiastic endorsements. Compared with the lofty, idealized language of the Nuremberg Code, the Helsinki Declaration may have seemed more sensible to many researchers in the early 1960s because it offered rules that could be used in the clinical setting. Conclusion. In the late 1940s, American medical researchers seldom recognized that research with patient subjects ought to follow the same principles as those applied to healthy subjects. Yet, as we have seen in this chapter, some of those few who asked themselves hard questions about their research work with patients concluded that people who are ill are entitled to the same protections as those who are not. That some did in fact reach this conclusion is evidence that it was not beyond the horizon of moral insight at that time. Nevertheless, they were a minority of the community of physician researchers, and the organized medical profession did not exhibit a willingness to reconsider its responsibilities to patients in the burgeoning world of post-war clinical research. While a slowly increasing number of investigators reflected on the ethical treatment of human subjects during the 1950s, it was not until the 1960s, and a series of highly publicized events with names like thalidomide, Willowbrook, and Tuskegee, that it became apparent that a professional code, whether it originated in Nuremberg or Helsinki, did not provide sufficient protection against exploitation and abuse of human subjects of research. 
In the next chapter we examine how the federal government became intimately, extensively, and visibly involved in the regulation of research with human subjects. End of section 19. 1974 marks the upper bound for the period of the Advisory Committee's historical investigation. That year, two landmark events in the history of government policy on research involving human subjects took place: the promulgation by the Department of Health, Education, and Welfare, DHEW, of comprehensive regulations for oversight of human subject research, and passage by Congress of the National Research Act. The DHEW regulations set rules for oversight of human subject research supported by the single largest funding source for such research, and the National Research Act authorized the establishment of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, also known as the National Commission, which was charged with examining the conduct of research involving human subjects. In the years following 1974, many of the rules promulgated by DHEW were subsequently adopted by various other government agencies, culminating in government-wide regulations under the Common Rule in 1991. In the first part of this chapter, we trace the developments in the 1960s and early 1970s that influenced and led up to the DHEW regulations and the National Research Act. These developments included congressional hearings on the practices of the drug industry and the thalidomide tragedy, critical scholarly writings, interim policies at DHEW, public outcry over controversial cases of medical research, and the congressional hearings these cases occasioned. People were surprised and shocked to learn about practices and behaviors they knew to be wrong. While the ethical principles such practices violated may not have been well articulated specific to the enterprise of human research, they were part of individuals' moral consciousness. 
The history of these events has been well told before, and we only summarize it here, drawing heavily on the previous work of other authors. The 1974 regulations were promulgated by DHEW and applied only to that agency. Likewise, the National Research Act authorized the establishment of the National Commission and directed it to make recommendations to the secretary of DHEW. In the latter part of this chapter we review developments in policies governing human research during this period in agencies other than DHEW. This is a history that has received comparatively little scholarly attention. In the 1970s, just as DHEW was moving ahead with broad new regulations, scandal rocked the Department of Defense and the CIA. It was revealed that, with cooperation from university researchers, these agencies had engaged in secret experimentation on military and civilian subjects without their knowledge, sometimes with tragic results. The discovery of the existence of these secret programs led to further congressional investigations and to a 1975 Department of the Army review of the effectiveness of the 1953 Secretary of Defense Wilson memorandum adopting the Nuremberg Code. This Army review led to the eventual declassification of the Wilson memorandum, which had been top secret upon its issuance and remained classified until 1975. It also led, much later, to litigation in which justices of the U.S. Supreme Court for the first time commented on the applicability of the Nuremberg Code to actions undertaken by the U.S. government. The chapter concludes with a discussion of these important events. The Development of Human Subject Research Policy at DHEW. As the largest funding source in the federal government for human subject research, DHEW led the way in developing regulations aimed at protecting the rights and interests of research subjects.
The evolution of the regulations, which would eventually be adopted on a government-wide basis, was influenced by revelations of unethical research, congressional reaction to these revelations, and concern over public perception of such research. That regulations were eventually adopted at all by DHEW was influenced by the political realities of the time: congressional support for a standing regulatory body to oversee human subject research, as had been recommended by an influential federally appointed panel, the Tuskegee Syphilis Study Ad Hoc Panel. In a trade-off that would have major influence on the future of human subject research oversight, the proposed bill creating the standing regulatory body was withdrawn in exchange for the National Research Act establishing the National Commission and an understanding that DHEW would promulgate the aforementioned regulations. This historical backdrop is outlined in the remainder of this chapter. The Thalidomide Tragedy and Congressional Requirement for Patient Consent. In 1959 a Senate subcommittee chaired by Senator Estes Kefauver of Tennessee began hearings into the conduct of pharmaceutical companies. Testimony revealed that it was common practice for drug companies to provide samples of experimental drugs, whose safety and efficacy had not been established, to physicians, who were then paid to collect data on their patients taking these drugs. Physicians throughout the country prescribed these drugs to patients without their knowledge or consent as part of this loosely controlled research. These practices and others prompted calls by Kefauver and other senators for an amendment to the Food, Drug and Cosmetic Act of 1938 to address the injuriousness and ineffectiveness of certain drugs. In 1961 the dangers of new drug uses were vividly exemplified by the thalidomide disaster in Europe, Canada, and, to a lesser degree, the United States.
Starting in late 1957 the sedative thalidomide was given to countless pregnant women and caused thousands of birth defects in newborn infants, most commonly missing or deformed limbs. The thalidomide disaster was widely covered by the television networks, and the visual impact of these babies stunned viewers and caused Americans to question the protections afforded those receiving investigational agents. It is in large measure because of the thalidomide episode that the 1962 Kefauver-Harris amendments to the Food, Drug and Cosmetic Act were passed, requiring that informed consent be obtained in the testing of investigational drugs. Such testing occurred mainly with patients in the doctor-patient relationship, however, and consent was not required when it was not feasible or was deemed not to be in the best interests of the patient, both judgments left to the discretion of the doctors involved; these exceptions severely reduced the effectiveness of the requirement. Despite being limited in scope, the Kefauver-Harris amendments were influential in advancing considerations of protections of research subjects, first within DHEW and later throughout the rest of the government. NIH and PHS Develop a Uniform Policy to Protect Human Subjects. In late 1963 concerns were raised within NIH by Director James Shannon after disturbing revelations about two research projects funded in part by the Public Health Service and NIH. One was the unsuccessful transplantation of a chimpanzee kidney into a human being at Tulane University, a procedure that promised neither benefit to the recipient nor new scientific information. The transplant was reportedly done with the consent of the patient but without consultation or review by anyone other than the medical team involved. The second was research undertaken in mid-1963 at the Brooklyn Jewish Chronic Disease Hospital.
There investigators had undertaken a research project in which they injected live cancer cells into indigent elderly patients without their consent; the chief investigator, Dr. Chester M. Southam, a physician at the Sloan-Kettering Cancer Research Institute, had received permission to proceed with the work from the hospital's medical director, Dr. Emanuel E. Mandel. The research went forward without review by the hospital's research committee and over the objections of three physicians consulted, who argued that the proposed subjects were incapable of giving adequate consent to participate. The disclosure of the experiment served to make both PHS officials like Shannon and the Board of Regents of the University of the State of New York, which had jurisdiction over licensure of physicians, aware of the shortcomings of procedures in place to protect human subjects. They were further concerned over the public's reaction to disclosure of the research and the impact it would have on research generally and on the institutions in particular. After a review of the researchers' conduct, the Board of Regents suspended the licenses of Drs. Mandel and Southam but subsequently stayed the suspension and placed the physicians on probation for one year. There were no immediate repercussions for the hospital, Sloan-Kettering, the university, or PHS, but the case nonetheless profoundly affected the subsequent development of federal guidelines to protect research subjects. To add to the ferment, NIH officials had closely followed the work of the Law Medicine Research Institute at Boston University, which issued survey findings in 1962 showing that few institutions had procedural guidelines covering clinical research, and in the year after both the above-mentioned cases came to light the World Medical Association issued its Declaration of Helsinki, which set standards for clinical research and required that subjects give informed consent prior to enrolling in an experiment.
Thus national and world opinion on matters related to the ethics of human subject research created a climate ripe for changes in policies and approaches toward research ethics. Concern over disturbing cases and the growing attention paid to research ethics prompted NIH Director James Shannon to create a committee in late 1963 under the direction of the NIH Associate Chief for Program Development, Robert B. Livingston, whose office supported centers at which NIH-funded research took place. The internal committee was charged with studying problems of inadequate consent and the standards of self-scrutiny involving research protocols and procedures. The committee was also to recommend a suitable set of controls for the protection of human subjects in NIH-sponsored research. The Livingston committee recognized that ethically questionable research, exemplified by the research at the Jewish Chronic Disease Hospital, could wreak havoc on public perception, increase the likelihood of liability, and inhibit research. These problems made it worthwhile to consider central oversight, or the lack thereof, for research contracted out. However, the committee expressed concern over NIH taking too authoritarian a posture toward research oversight and so argued that it would be difficult for the agency to assume responsibility for ethics and research practices. When it issued its report in late 1964 the committee did not recommend any changes in the current NIH policies and, moreover, cautioned that whatever NIH might do by way of designating a code or stipulating standards for acceptable clinical research would be likely to inhibit, delay, or distort the carrying out of clinical research. In deference to physician autonomy and traditional regard for the sanctity of the doctor-patient relationship, the report concluded that NIH was not in a position to shape the educational foundations of medical ethics.
Director Shannon did not think the conclusions of the Livingston Committee went far enough, feeling as he did that NIH should take a position of increased responsibility for research ethics. Especially in light of the Jewish Chronic Disease Hospital case and its implications for NIH, both internally and in terms of public perception, he felt that a stronger reaction was needed. Thus, despite the committee's limited conclusions, Shannon and Surgeon General Luther Terry together decided in 1965 to propose to the National Advisory Health Council, NAHC, an advisory committee to the Surgeon General of the Public Health Service, that in light of recent problems NIH should assume responsibility for formal controls on individual investigators. At the NAHC meeting Shannon argued for impartial prior peer review of the risks research posed to subjects and questioned the adequacy of the protections of the rights of subjects. The council's members mostly agreed with Shannon's concerns and three months later issued a resolution concerning research on humans, following Shannon's broad recommendations and endorsing the importance of obtaining informed consent from individuals. Be it resolved that the National Advisory Health Council believes that Public Health Service support of clinical research and investigation involving human beings should be provided only if the judgment of the investigator is subject to prior review by his institutional associates to assure an independent determination of the protection of the rights and welfare of the individual or individuals involved, of the appropriateness of the methods used to secure informed consent, and of the risks and potential medical benefits of the investigation. What this statement did not do, however, was explain what would count as informed consent. The NAHC recommendations were accepted by the new Surgeon General, William H.
Stewart, and in February 1966 he issued a policy statement requiring PHS grantee institutions to subject all proposed research involving human subjects to prior committee review addressing three topics. This review should assure an independent determination (1) of the rights and welfare of the individual or individuals involved, (2) of the appropriateness of the methods used to secure informed consent, and (3) of the risks and potential medical benefits of the investigation. The 1966 PHS policy required that institutions give the funding agency a written assurance of compliance, but like the NAHC recommendations the policy spoke strictly to the procedural aspects of informed consent and not to its meaning or criteria. Substantive informed consent criteria were established for research at the NIH Clinical Center shortly after the PHS policy was issued, but this new policy applied only to intramural research, that is, to research undertaken at the Clinical Center. The Clinical Center policy was important as the first federal research policy with a specific definition of what constituted informed consent requirements in the research context. The inclusion of specific consent requirements in policies applying to extramural research would not occur, however, until the mid-1970s. The 1966 PHS policy is significant both for its recognition that patient subjects, like healthy subjects, should be included in the consent provisions for federally sponsored human experimentation and for its attempt to strike a balance between federal regulation and local control, which continues to this day. Such a balancing continued the work begun by the AEC, in its provision for local human use committees as a condition for the use of AEC-supplied isotopes, and the Department of Defense, in its provision for a high-level review of proposed experimentation.
Although a landmark in the government regulation of biomedical research, the 1966 policy was to be revised and changed throughout the decade as biomedical research drew greater attention and informed consent grew in importance. While the PHS policy was revised periodically from the outset, site visits by PHS employees to randomly selected institutions revealed a wide range of compliance. These site visits found widespread confusion about how to assess risks and benefits, refusal by some researchers to cooperate with the policy, and in many cases indifference by those charged with administering research and its rules at local institutions. Complaints of overworked review committees and requests for clarification and guidance came from research institutions all over the country. In response to continued questions about the scope and meaning of the policy, DHEW in 1971 produced the Institutional Guide to DHEW Policy on Protection of Human Subjects. Better known as the yellow book because of its cover's color, this substantial guide contained both the requirements and commentary on how the requirements were to be understood and implemented. The guide provided that informed consent was to be obtained from anyone who may be at risk as a consequence of participation in research, including both patients and healthy volunteers. As the 1960s progressed, increased discussion of research practices appeared in both professional literature and the popular press. One person who advanced the debate in both arenas was Henry Beecher of Harvard Medical School. Henry Beecher, the Medical Insider, Speaks Out. Henry Beecher, as noted in Chapter 2, was an active participant in professional discussions of ethics in research during the late 1950s and early 1960s. In March 1965 Beecher focused attention on the issues at a conference for science journalists sponsored by the Upjohn Pharmaceutical Company.
There Beecher presented a paper discussing 22 examples of potentially serious ethical violations in experiments that he had found in recent issues of medical journals. Among them was the Brooklyn Jewish Chronic Disease Hospital study. He explained that this research had not taken place in a remote corner but in leading medical schools, university hospitals, governmental military departments, governmental institutes, and industry. He also acknowledged that his own conscience was not entirely clear. Lest I seem to stand aside from these matters, I am obliged to say that in years gone by work in my laboratory could have been criticized. Beecher also explained the consciousness-raising purpose of these revelations with stark clarity. It is hoped that blunt presentation of these examples will attract the attention of the uninformed or the thoughtless and careless, the great majority of offenders. In making this presentation to a group of journalists, Beecher was clearly breaking with a professional expectation that such matters should be addressed within the biomedical community. After some reservations on the part of medical journals, the March 1965 paper having been rejected by at least the Journal of the American Medical Association, JAMA, Beecher published a revised version in the New England Journal of Medicine in June 1966. That article, like his presentation at the conference, indicted the entire biomedical research community and the journals that published biomedical research results. Beecher's efforts to focus professional, press, and therefore public awareness on the conduct of research involving human subjects met with some success. A July 1965 article in the New York Times Magazine was headlined, Doctors must experiment on humans, but what are the patients' rights? In February 1966, as the PHS issued its first uniform policy for biomedical research, more headlines, this time in the Saturday Review, asked, Do we need new rules for experimentation on people?
In July 1966, following Beecher's article in the New England Journal of Medicine and an editorial in JAMA, another article declared, Experiments on people, the growing debate. By the mid-to-late 1960s, professional, governmental, and public attention was all being drawn to issues of research on human subjects. Revelations of purportedly unethical treatment of research subjects were far from over, but changes in policy, largely driven by attention from so many corners, were beginning to move toward a more comprehensive approach to oversight.