Section 21 of Final Report of the Advisory Committee on Human Radiation Experiments. Ethics of Human Subjects Research, A Historical Perspective, Chapter 3, Part 2: Public Attention is Galvanized, Willowbrook and Tuskegee.

From 1956 to 1972, Dr. Saul Krugman of New York University led a study team at the Willowbrook State School for the Retarded on Staten Island, New York. The study was not secret or hidden; it was one of the twenty-two projects Beecher discussed as ethically troublesome in his 1966 article. The Willowbrook study drew media attention beginning in the late 1960s and was discussed separately by Beecher, theologian Paul Ramsey, and physician Stephen Goldby. Noting the high incidence of hepatitis among the residents of the school, nearly all of whom were profoundly mentally impaired children and adolescents, Krugman and his colleagues injected some of them with a mild form of hepatitis serum. The researchers justified their work on the grounds that the subjects probably would have become infected anyway, and they hoped to find a means of prophylaxis against the virus by studying it from the earliest stages of infection. Before beginning the work, Krugman discussed it with many physician colleagues and sought approval from the Armed Forces Epidemiological Board, which approved and funded the research, and from the executive faculty of the New York University School of Medicine, which also approved it. A review committee for human experimentation did not exist in 1955, but later, when such a committee was formed, it too approved the research. According to Krugman, the parents of each subject signed a consent form after receiving a detailed explanation of the research, without any pressure to enroll their child.
Some critics argued that the content of the consent form was itself deceptive, since it seemed to say that the children were to receive a vaccine against the virus. Moreover, charges of coercion arose. It is alleged that the parents who enrolled their children in the study were initially offered more rapid admission to the school through the Hepatitis Unit, and later found that, due to overcrowding, the only route for admission of new patients was through the Hepatitis Unit. Commentators further argued that the fault in the doctors' study lay in their deliberate attempt to infect the children, with or without parental consent, as opposed to studying the course of the disease in children who became sick naturally. Soon after Willowbrook, another research project, the Tuskegee syphilis study, provoked widespread public outcry when it was revealed that the study had exposed people to unnecessary and serious harm with no prospect of direct benefit to them. Beginning in 1932, Public Health Service physicians sought to trace the natural history of syphilis by observing some four hundred African-American men affected by the disease, along with a control group of approximately two hundred African-American men without syphilis. All the subjects lived in or around Tuskegee, Alabama. Although the research was originally designed as a short-term study of six to eight months, some researchers successfully argued that the potential scientific value of a longer-term study was so great that the research ought to go on indefinitely. The subjects were enticed into the study with offers of free medical examinations. Many of those who came from around the area to be tested by government doctors had never had a blood test before and had no idea what one was. Once selected as subjects, the men were not informed of the nature of their disease or of the fact that the research held no therapeutic benefit for them.
Subjects were asked to appear for special free treatments, which in fact included purely diagnostic procedures such as lumbar punctures. By the mid-1940s it was becoming clear that the death rate for the infected men in the study was twice as high as for those in the control group. This was also the period in which penicillin became available and soon began to be used to treat syphilis, at least in its primary stage. The study was reviewed by Public Health Service officials and medical societies and reported in a number of journals from the early 1930s to 1970. In the 1960s a growing number of criticisms began to appear, although the study was not stopped until 1973. Thus men with a confirmed disease were not told of their diagnosis and were deceived into participating in the study under the guise of its being therapeutic for unspecified maladies. In addition to exposing the subjects to the additional harms of participation in the study, the false belief that treatment was being administered prevented the subjects from seeking medical care for their disease elsewhere. As at Willowbrook, a justification given after the fact for the research was that the disease had appeared in a way that was natural and inevitable, and that the study would be of immense benefit to future patients. Over this forty-year history at least twenty-eight participants died, and approximately one hundred more suffered blindness and insanity from untreated syphilis before the study was stopped. In 1972 an account of the study was published on the front page of the New York Times. In response, the Department of Health, Education, and Welfare appointed the Tuskegee syphilis study ad hoc panel to review the Tuskegee study, as well as the department's policies and procedures for the protection of human subjects.
The work of the ad hoc panel, which consisted of physicians, a university president, a theologian, an attorney, and a labor representative, contributed in large measure to the passage of the first comprehensive regulations for federally sponsored human subjects research. One member of the ad hoc panel who is also a member of the Advisory Committee, Jay Katz, expressed his dismay over the unwillingness or incapacity of society to mobilize the necessary resources for treatment at the beginning of the study, and over the deliberate efforts of the investigators to obstruct the opportunity for treatment. Despite the fact that the Public Health Service policy for the protection of human subjects had been in place for six years by the time the Tuskegee study was revealed, the study was exposed by a journalist rather than by a review committee. Although an institutional committee had allegedly reviewed the Tuskegee study, the study was not discontinued until after the recommendation of the ad hoc panel. The human rights abuses of the Tuskegee study demonstrated the need for both prior and ongoing review: the study had been undertaken before prior review requirements were in place, and the prevailing review policies during the period of the study were so flawed that the study was allowed to continue. As a result of its deliberations, the ad hoc panel found that neither the Department of Health, Education, and Welfare nor any other agency of the government had adequate policies for oversight of human subjects research. The panel recommended that the Tuskegee study be stopped immediately and that the remaining subjects be given the medical care made necessary by their participation. The panel also recommended that Congress establish a permanent body with the authority to regulate at least all federally supported research involving human subjects.
In summary, the panel concluded that despite the lessons of Nuremberg, the Jewish Chronic Disease Hospital case, and the Declaration of Helsinki, human subject research oversight and mechanisms to ensure informed consent were still inadequate, and new approaches were needed to adequately protect the rights and welfare of human subjects.

Congressional Response to the Abuses of Human Subjects: The National Research Act

Public attention to abuses such as those inflicted on the subjects of the Tuskegee study increased during the late 1960s and early 1970s. Following the initial revelations about the Tuskegee syphilis study, several bills were introduced in Congress to regulate the conduct of human experimentation. In February 1973 Senator Edward Kennedy held hearings on these bills, the Tuskegee study, experimentation with prisoners, children, and poor women, and a variety of other issues related to biomedical research and the need for a national body to consider the ethics of research and advancing medical technology. After the hearings, Senator Kennedy introduced a bill to create a National Human Experimentation Board, as recommended by the Tuskegee syphilis study ad hoc panel. When it became clear that this bill would not succeed, Senator Kennedy introduced the bill that would become the National Research Act, endorsing the regulations about to be promulgated by the Department of Health, Education, and Welfare, and establishing the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in return for DHEW's issuance of human subject research regulations. The trade-off was clear: no national regulatory body, in return for regulations applying to the research funded or performed by the government agency responsible for the greatest proportion of human subject research.
This meant that the goal of oversight of all federally funded research would not be achieved, and that whatever oversight did exist was left to the funding agencies rather than to an independent body. On May 30, 1974, DHEW published the final regulations for the use of human subjects in the Federal Register. These regulations required that each grantee institution form a committee, what became known as an Institutional Review Board, or IRB, to approve all research proposals before they were passed to DHEW for funding consideration. These committees were charged with reviewing the safety of the proposals brought before them as well as the adequacy of the informed consent obtained from each subject prior to participation in the research. Additionally, the regulations defined not only the procedure for obtaining informed consent but substantive criteria for it as well. Shortly after the announcement of the DHEW regulations, in July 1974, the National Research Act was passed, and with it came the establishment of the National Commission. The National Commission, charged with advising the Secretary of DHEW, though the National Research Act did not require the Secretary to follow the commission's recommendations, existed over the next four years and published seventeen reports and appendix volumes. During its tenure, the commission did pioneering work as it addressed issues of autonomy, informed consent, and third-party permission, particularly in relation to research involving vulnerable subjects such as prisoners, children, and people with cognitive disabilities. It was also charged with examining the IRB system and procedures for informed consent as background for proposing guidelines that would ensure that basic ethical principles were instituted in the research oversight system and in research involving vulnerable populations.
In the course of its deliberations, the commission identified three general moral principles, respect for persons, beneficence, and justice, as the appropriate framework for guiding the ethics of research involving human subjects. These three are known as the Belmont principles, because they appeared in the Belmont Report, one of the commission's major publications. The National Commission was required to examine the nature and definition of informed consent, as well as the adequacy of current practices. In its reports, the commission decisively argued that the basic justification for the obligation to obtain informed consent is the moral principle of respect for persons. This emphasis on respect for persons meant that a great premium was put on autonomous decision making by the research subject, an emphasis that continues to the current day. While it may not have been the intent of those who sponsored it, the National Research Act, because it was limited to DHEW-funded research, did not ensure that all federally sponsored research would be subject to requirements for informed consent and prior review. Nonetheless, by this time, as is described below, published policies within the Department of Defense, the Atomic Energy Commission, the Veterans Administration, and NASA did meet these requirements. The passage of the National Research Act and the promulgation of DHEW's regulations were important milestones in the development of federal standards for the protection of human subjects of research. They represented the first national recognition of the need to protect human subjects. Moreover, they attempted to provide for that protection through the IRB requirement and the establishment of the National Commission. The Advisory Committee's charter requires that it examine the standards for research between 1944 and 1974.
These two landmark events in 1974 ushered in a new era in which the conduct and oversight of biomedical experimentation with humans remained a topic of national scrutiny and debate. Eventually the approaches required by the 1974 DHEW regulations would be applied to nearly all federally funded human research, as described in Chapter 14.

The Development of Requirements for Human Subject Research in Other Federal Agencies

The history and evolution of human subject research policy in the federal government is well documented for the Department of Health, Education, and Welfare. However, many other agencies, most notably the military services, have important but less well-documented and less well-studied histories. Some of this history is described in Chapter 1 of this report. Here we continue with a brief treatment of that history in the context of the evolution of human subject research policy.

Army Policy

In 1962 the Army, for the first time, issued as a formal regulation, Army Regulation AR-70-25, the 1953 policy embodied in the Wilson Memorandum. The regulation made explicit, as the 1953 Department of Defense and Army policies had left only implicit, basic issues about the scope of the DoD's rules. Unlike the Wilson Memorandum, the new regulation applied to all types of research, not simply that related to atomic, biological, or chemical warfare. However, the regulation specifically excluded clinical research, that is, the research likely to be performed with patients at the Army's many hospitals. In 1963 an ad hoc committee of Army and civilian personnel concluded that the rule applied where research was done by contractors. However, tracer research, which arguably posed minimal risk, was excluded. Despite the committee's recommendations, no immediate changes were made to the regulation.
In 1963, however, the Army issued a regulation for radioisotope use that required local institutions to convene review committees and obtain approval from the Secretary of the Army, pursuant to AR-70-25, when radioisotopes were to be used with volunteer experimental subjects. The regulatory void apparently persisted until 1973, when another rule, AR-40-38, Medical Services Clinical Investigation Program, closed the gap. That rule clearly applied to any person who might be at risk because of participation in clinical investigation, including both patients and normal individuals. It required that subjects of research be given an explanation of the proposal in understandable language and sign a volunteer agreement. Moreover, clinical research with patients as well as healthy people was to be reviewed by a human use committee.

Navy Policy

As we saw in Chapter 1, the Navy had required oral consent from research volunteers since at least 1951. Some evidence suggests that written consent was required by the mid-1960s. A 1964 proposal to study the effects of hypoxia on service personnel indicates that a signed consent to voluntarily participate in the research experiment (NMRI Form 3) would be used. In 1967 a clear requirement for written consent appeared in the Navy's Medical Department Manual. It is unclear whether the policy drew a distinction between research on patients and research on healthy subjects. In any event, in 1969 the Secretary of the Navy issued a comprehensive policy requiring written informed consent of research subjects, which appeared to cover both groups.

Air Force Policy

In 1965 the Air Force promulgated AFR 169-8, Medical Education and Research: Use of Volunteers in Aerospace Research, which required voluntary and written informed consent from all subjects in any research, development, test, and evaluation that might involve distress, pain, damage to health, physical injury, or death. As such, it seems inclusive of both healthy and patient subjects.
Updating the language of the Nuremberg Code's first principle, the policy was based on the idea that the voluntary informed consent of the human subject is absolutely essential. Additionally, the regulation provided for the appointment of a committee to review all human research proposals at each originating facility.

NASA Policy

The National Aeronautics and Space Administration (NASA), created in 1958, inherited staff and research expertise from the Department of Defense and other federal agencies. Before 1968, local centers at which research using isotopes was conducted, notably the Ames Research Center and the Manned Spacecraft Center (MSC), were essentially autonomous. Each center established medical use subcommittees as required by AEC rules. A reorganization within NASA in 1968 combined the medical operations functions and the medical research functions at MSC into one medical research and operations directorate, headed by Dr. Charles A. Berry. By 1968 Ames had a policy of requiring informed consent. By definition, of course, the work of astronauts is frequently risky and experimental, and the proper boundary between experimental and occupational activities could not be drawn easily. Consequently the policy authorized the director of Ames to waive the consent requirement in several instances, including when obtaining consent would seriously hamper the research or when test pilots or astronauts were involved. Between 1968 and 1970, prior review for risk and subject consent was adopted at Ames in the form of the Human Research Experiments Review Board, and indirectly at the MSC in accordance with the AEC requirement for a medical use committee. In 1972 the prior review provisions and consent requirements of Ames and the MSC were reformulated in a NASA-wide policy. This policy required voluntary and written informed consent from subjects prior to participation.
The policy continued to provide waivers for exceptional cases, as in the Ames policy, and did not apply to research conducted by NASA contractors or grantees. The development of NASA's policy, like those at the Public Health Service, the National Institutes of Health, and the Department of Defense, came at a time when the public was becoming increasingly interested in biomedical research. In contrast with the 1940s and 1950s, bureaucratic developments during the 1960s and 1970s were mirrored by growing public debate about the adequacy of protections for human subjects.

End of Section 21. Recording by Maria Casper.

Section 22 of Final Report of the Advisory Committee on Human Radiation Experiments. This is a LibriVox recording. All LibriVox recordings are in the public domain. For more information or to volunteer, please visit LibriVox.org. Final Report of the Advisory Committee on Human Radiation Experiments. Ethics of Human Subjects Research, A Historical Perspective, Chapter 3, Part 3: Supreme Court Dissents Invoke the Nuremberg Code; CIA and Department of Defense Human Subjects Research Scandals.

As we have seen, the development of federal legislation for government-sponsored research with human subjects arose in part because of institutional and governmental concern and public reaction to perceived abuses and failures by the government. Around the same time that the 1974 National Research Act was enacted, a scandal arose surrounding the discovery of secret Cold War chemical experiments conducted by the CIA and the Department of Defense. The review of these experiments led to the rediscovery of the previously secret 1953 Wilson Memorandum, and later to the first Supreme Court decision in which comment was made, in dissent, on the application of the Nuremberg Code to the conduct of the U.S. government. In December 1974 the New York Times reported that the CIA had conducted illegal domestic activities, including experiments on U.S. citizens, during the 1960s.
That report prompted investigations into the domestic activities of the CIA, the FBI, and intelligence-related agencies of the military by both Congress, in the form of the Church Committee, and a presidential commission known as the Rockefeller Commission. In the summer of 1975, congressional hearings and the Rockefeller Commission report revealed to the public for the first time that the CIA and the Department of Defense had conducted experiments on both cognizant and unwitting human subjects as part of an extensive program to influence and control human behavior through the use of psychoactive drugs, such as LSD and mescaline, and other chemical, biological, and psychological means. They also revealed that at least one subject had died after administration of LSD. Frank Olson, an Army scientist, was given LSD without his knowledge or consent in 1953 as part of a CIA experiment, and apparently committed suicide a week later. Subsequent reports would show that another person, Harold Blauer, a professional tennis player in New York City, died as a result of a secret Army experiment involving mescaline. The CIA program, known principally by the code name MKULTRA, began in 1950 and was motivated largely in response to alleged Soviet, Chinese, and North Korean uses of mind-control techniques on U.S. prisoners of war in Korea. Because most of the MKULTRA records were deliberately destroyed in 1973 by order of then Director of Central Intelligence Richard Helms, it is impossible to have a complete understanding of the more than one hundred fifty individually funded research projects sponsored by MKULTRA and the related CIA programs. Central Intelligence Agency documents suggest that radiation was part of the MKULTRA program and that the Agency considered and explored uses of radiation for these purposes. However, the documents that remain from MKULTRA, at least as currently brought to light, do not show that the CIA itself carried out any of these proposals on human subjects. The congressional committee investigating the CIA research, chaired by Senator Frank Church, concluded that prior consent was obviously not obtained from any of these subjects. The committee noted that the experiments sponsored by these researchers call into question the decision by the agencies not to fix guidelines for experiments. Documents show that the CIA participated in at least two of the Department of Defense committees whose discussions in 1952 led up to the issuance of the Wilson Memorandum. Following the recommendations of the Church Committee, President Gerald Ford in 1976 issued the first Executive Order on Intelligence Activities, which, among other things, prohibited experimentation with drugs on human subjects except with the informed consent, in writing and witnessed by a disinterested party, of each such human subject and in accordance with the guidelines issued by the National Commission. Subsequent orders by Presidents Carter and Reagan expanded the directive to apply to any human experimentation. Following on the heels of the revelations about CIA experiments were similar stories about the Army. In response, in 1975 the Secretary of the Army instructed the Army Inspector General to conduct an investigation. Among the findings of the Inspector General was the existence of the then still classified 1953 Secretary of Defense Wilson Memorandum. In response to the Inspector General's investigation, the Wilson Memorandum was declassified in August 1975. The Inspector General also found that the requirements of the 1953 memorandum had, at least in regard to Army drug testing, been essentially followed as written. The Army used only volunteers for its drug testing program, with one or two exceptions.
However, the Inspector General concluded that the volunteers were not fully informed, as required, prior to their participation, and that the methods of procuring their services in many cases appeared not to have been in accord with the intent of the Department of the Army policies governing the use of volunteers in research. The Inspector General also noted that the evidence clearly reflected that every possible medical consideration was observed by the professional investigators at the Medical Research Laboratories. This conclusion, if accurate, is in striking contrast to what took place at the CIA. The revelations about the CIA and the Army prompted a number of subjects or their survivors to file lawsuits against the federal government for conducting illegal experiments. Although the government aggressively, and sometimes successfully, sought to avoid legal liability, several plaintiffs did receive compensation through court order, out-of-court settlement, or acts of Congress. Previously the CIA and the Army had actively and successfully sought to withhold incriminating information, even as they secretly provided compensation to the families. One subject of Army drug experimentation, James Stanley, an Army sergeant, brought an important, albeit unsuccessful, suit. The government argued that Stanley was barred from suing it under a legal doctrine known as the Feres doctrine, after a 1950 Supreme Court case, Feres v. United States, that prohibits members of the armed forces from suing the government for any harms inflicted incident to service. In 1987 the Supreme Court affirmed this defense in a five-to-four decision that dismissed Stanley's case. The majority argued that a test for liability that depends on the extent to which particular suits would call into question military discipline and decision making would itself require judicial inquiry into, and hence intrusion upon, military matters.
In dissent, Justice William Brennan argued that the need to preserve military discipline should not protect the government from liability and punishment for serious violations of constitutional rights: The medical trials at Nuremberg in 1947 deeply impressed upon the world that experimentation with unknowing human subjects is morally and legally unacceptable. The United States Military Tribunal established the Nuremberg Code as a standard against which to judge German scientists who experimented with human subjects. In defiance of this principle, military intelligence officials began surreptitiously testing chemical and biological materials, including LSD. Justice Sandra Day O'Connor, writing a separate dissent, stated: No judicially crafted rule should insulate from liability the involuntary and unknowing human experimentation alleged to have occurred in this case. Indeed, as Justice Brennan observes, the United States played an instrumental role in the criminal prosecution of Nazi officials who experimented with human subjects during the Second World War, and the standards that the Nuremberg Military Tribunals developed to judge the behavior of the defendants stated that the voluntary consent of the human subject is absolutely essential to satisfy moral, ethical, and legal concepts. If this principle is violated, the very least that society can do is to see that the victims are compensated, as best they can be, by the perpetrators. This is the only Supreme Court case to address the application of the Nuremberg Code to experimentation sponsored by the U.S. government. And while the suit was unsuccessful, the dissenting opinions put the Army, and by association the entire government, on notice that the use of individuals without their consent is unacceptable. The limited application of the Nuremberg Code in U.S.
courts does not detract from the power of the principles it espouses, especially in light of the stories of failure to follow these principles that appeared in the media and professional literature during the 1960s and 1970s, and the policies eventually adopted in the mid-1970s.

Conclusion

The 1960s and early 1970s witnessed an extraordinary growth in government, institutional, and public awareness of issues in the use of human subjects, fueled by scandals and an increasing emphasis on individual expression. The branches of the military had articulated policies during this period, in spite of numerous problems in implementation. By 1974 the Department of Health, Education, and Welfare had established a set of regulations and a system of local review, and Congress had established a commission to issue recommendations for further change to DHEW. Together these advances created a model and laid the groundwork for human subjects protections for all federal agencies. Many conditions coalesced into the framework for the regulation of the use of human subjects in federally funded research that is the basis for today's system. Described further in Chapter 14, this framework is undergirded by the three Belmont principles that the National Commission identified as governing the ethics of research with human subjects: respect for persons, beneficence, and justice. The federal regulations and the conceptual framework built on the Belmont principles became so widely adopted and cited that it might be argued that their establishment marked the end of serious shortcomings in federal research ethics policies. Whether this position is well supported is evaluated in light of the Advisory Committee's contemporary studies in Part 3. By 1974 DHEW had extensive policies to protect human subjects within its purview; policies were more variable among other government agencies.
By 1975 the branches of the military had set about adopting their own more comprehensive policies for human subject research, and the CIA was required by Executive Order to comply with consent requirements in human subject research in light of its scandalous past practices. In order to evaluate the adequacy of the efforts taken to protect people before these policies were established, we must take into account both the government's policies and rules and the norms and practices of medicine reviewed in Chapters 1 through 3. The Advisory Committee's framework for the consideration of these factors is presented in the next chapter.

End of Section 22. Recording by Maria Casper.

Section 23 of Final Report of the Advisory Committee on Human Radiation Experiments. Recording by Elsie Selwyn. Final Report of the Advisory Committee on Human Radiation Experiments. Ethics of Human Subjects Research, A Historical Perspective, Chapter 4, Part 1.

According to the mission set out in our charter, the Advisory Committee is in essence a national ethics commission. In this capacity we were obliged to develop an ethical framework for judging the human radiation experiments. This proved to be one of our most difficult tasks, for we were dealing not only with complex events that occurred decades ago, but also with some of the most controversial issues in moral philosophy. This chapter sets out the standards that we believe are appropriate for evaluating human radiation experiments, and offers reasons for relying on them. It then applies these standards to the results of the historical research we have conducted and draws ethical conclusions. Fulfilling our charge to determine the ethical and scientific standards and criteria to evaluate human radiation experiments that took place between 1944 and 1974 requires consideration of a complex question.
Is it correct to evaluate the events, policies, and practices of the past, and the agents responsible for them, against ethical standards and values that we accept as valid today, but that may not have been widely accepted then? Or must we limit our ethical evaluation of the past to those standards and values that were widely accepted at the time? This is the problem of retrospective moral judgment. Quite apart from the issue of the validity of projecting current standards onto the past, there is another question that this chapter must address. In a pluralistic society such as ours, is there at present a sufficiently broad consensus on ethical standards to make possible a public evaluation that is not simply the arbitrary imposition of one particular moral point of view among several or even many? This is the problem of value pluralism. The ethical framework the advisory committee employs takes both these issues into account. This chapter is divided into two parts. In the first part we present and defend the ethical framework adopted by the committee for the evaluation of human radiation experiments conducted from 1944 to 1974, and the agents responsible for them. We begin by identifying the types of moral judgments with which the committee is concerned, and the different kinds of ethical standards against which these judgments can be made. We next address two challenges to the position that the advisory committee can use these or any other standards to make valid ethical judgments. These challenges are, one, that the diversity of views about ethics in American society invalidates any effort by a public body such as the advisory committee to make moral judgments, and, two, that the diversity of views about ethics across time similarly invalidates our making defensible moral judgments about the past. Although the committee does not accept these challenges as definitive, we discuss these as well as other factors that influence or limit ethical evaluation. 
We include here a discussion of an issue of particular relevance to our charge: what role, if any, considerations of national security should play in the committee's ethical framework. We also consider factors that can mitigate the blame we would otherwise place on agents, whether individuals or collective entities, for having conducted morally wrong actions. In the second part of the chapter, we explore how the committee's ethical framework can be used to evaluate both experiments conducted in the past and the people and institutions that sponsored and conducted them. Drawing on the history presented in chapters one through three, we illustrate how, when applied, the framework is specified by context and detail. This specification of the framework continues in part two of the report, where the framework is used to evaluate specific cases. An ethical framework. Two types of moral judgment. For purposes of the committee's charge, there are two main types of moral judgment: judgments about the moral quality of actions, policies, practices, institutions, and organizations, and judgments about the praiseworthiness or blameworthiness of individual agents and, in some cases, entities such as professions and governments, insofar as these can be viewed as collective agents with powers and responsibilities. The first type contains several kinds of judgments. Actions may be judged to be obligatory, wrong, or permissible. Institutions, policies, and practices can be characterized as just or unjust, equitable or inequitable, humane or inhumane. Organizations can be said to be responsible or negligent, fair dealing or exploitative. The second type of judgment, about the praiseworthiness or blameworthiness of agents, also contains a diversity of determinations. 
Agents, whether individual or collective, can be judged to be culpable or praiseworthy for this or that action or policy, to be generous or mean-spirited, responsible or negligent, to respect the moral equality of people or to discriminate against certain individuals or groups, and so on. Three kinds of ethical standards. A recognized way to make moral judgments is to evaluate the facts of a case in the context of ethical standards. The committee identified three kinds of ethical standards as relevant to the evaluation of the human radiation experiments. 1. Basic ethical principles that are widely accepted and generally regarded as so fundamental as to be applicable to the past as well as the present. 2. The policies of government departments and agencies at the time. And 3. Rules of professional ethics that were widely accepted at the time. Basic ethical principles. Basic ethical principles are general standards or rules that all morally serious individuals accept. The advisory committee has identified six basic ethical principles as particularly relevant to our work. One ought not to treat people as mere means to the ends of others. One ought not to deceive others. One ought not to inflict harm or risk of harm. One ought to promote welfare and prevent harm. One ought to treat people fairly and with equal respect. And one ought to respect the self-determination of others. These principles state moral requirements. They are principles of obligation, telling us what we ought to do. Every principle on this list has exceptions, because every moral principle can justifiably be overridden by other basic principles in circumstances when they conflict. To give priority to one principle over another is not a moral mistake; it is a reality of moral judgment. The justifiability of such judgments depends on many factors in the circumstance. It is not possible to assign priorities to these principles in the abstract. 
Far more social consensus exists about the acceptability of these basic principles than exists about any philosophical, religious, or political theory of ethics. This is not surprising given the central social importance of morality and the fact that its precepts are embraced in some form by virtually all major ethical theories and traditions. These principles are at the deepest level of any person's commitment to a moral way of life. It is important to emphasize that the validity of these basic principles is not typically thought of as limited by time. We commonly judge agents in the past by these standards. For example, the passing of 50 years in no way changes the fact that Hitler's extermination of millions of people was wrong, nor does it erase or even diminish his culpability, nor would the passing of 100 years, or a thousand, do so. This is not to deny that it might be inappropriate to apply to the distant past some ethical principles to which we now subscribe. It is only to note that there are some principles so basic that we ordinarily assume, with good reason, that they are applicable to the past as well as the present, and will be applicable in the future as well. We regard these principles as basic because any minimally acceptable ethical standpoint must include them. Policies of government departments and agencies. The policies of departments and agencies of the government can be understood as statements of commitment on the part of those governmental organizations, and hence of individuals in them, to conduct their affairs according to the rules and procedures that constitute those policies. In this sense, policies create ethical obligations. When a department or agency adopts a particular policy, it in effect promises to make reasonable efforts to abide by it, at least where participation in the organization is voluntary and where the organization's defining purpose is morally legitimate. It is not, for example, a criminal organization. 
To assume a role in the organization is to assume the obligations that attach to that role. Depending upon their roles in the organization, particular individuals may have a greater or lesser responsibility for helping to ensure that the policy commitments of the organization are honored. For example, high-level managers who formulate organizational policies have an obligation to take reasonable steps to ensure that these policies are effectively implemented. If they fail to discharge those obligations, they have done wrong and are blameworthy, unless some extenuating circumstance absolves them of responsibility. One sort of extenuating circumstance is that the policy in question is unethical. In that case, we would hold an individual blameless for not attempting to implement it, at least if the individual did so because of recognition that the policy was unethical. Moreover, we might praise the individual for attempting an institutional reform at some professional or personal risk. Different types of organizations have different defining purposes, and these differences determine the character of the department's or agency's role-derived obligations. All government organizations have special responsibilities to act impartially and to fairly protect all citizens, including the most vulnerable ones. These special obligations constitute a standard for evaluating the conduct of government officials. Rules of professional ethics. Professions traditionally assume responsibilities for self-regulation, including the promulgation of certain standards to which all members are supposed to adhere. These standards are of two kinds: technical standards that establish the minimum conditions for competent practice, and ethical principles that are intended to govern the conduct of members in their practice. In exchange for exercising this responsibility, society implicitly grants professions a degree of autonomy. 
The privilege of this autonomy in turn creates special obligations for the profession's members. These obligations function as constraints on professionals to reduce the risk that they will use their special power and knowledge to the detriment of those whom they are supposed to serve. Thus, physicians whose special knowledge gives them opportunities for exploiting patients or breaching confidentiality are obligated to act in the patient's best interests in general and to follow various prescriptions for minimizing conflicts of interest. Unlike basic ethical principles that speak to the whole of moral life, rules of professional ethics are particularized to the practices, social functions, and relationships that characterize a profession. Rules of professional ethics are often justified by appeal to basic ethical principles. For example, as we discuss later in this chapter, the obligation to obtain informed consent, which is a rule of research and medical ethics, is grounded in principles of respect for self-determination, the promotion of others' welfare, and the non-infliction of harm. In one respect, rules of professional ethics are like the policies of institutions and organizations. They express commitments to which their members may be rightly held by others. That is, rules of professional ethics express the obligations that collective entities impose on their members and constitute a commitment to the public that the members will abide by them. Absent some special justification, failure to honor the commitment to fulfill these obligations constitutes a wrong. To the extent that the profession as a collective entity has obligations of self-regulation, failure to fulfill these obligations can lead to judgments of collective blame. Ethical pluralism and the convergence of moral positions. 
Although we have argued that there is broad agreement about and acceptance of basic ethical principles in the United States, such as principles that enjoin us to promote the welfare of others and to respect self-determination, people nevertheless disagree about the relative priority or importance of these principles in the moral life. For example, although any minimally acceptable ethical standpoint must include both these principles, some approaches to morality emphasize the importance of respecting self-determination, while others place a higher priority on duties to promote welfare. These differences in approaches to morality pose a problem for public moral discourse. How can a public body, such as the advisory committee, purport to speak on behalf of society as a whole, and at the same time respect this diversity of views about ethics? The key to understanding how this is possible is to appreciate that different ethical approaches can and often do converge on the same ethical conclusions. People can agree about what ought to be done without necessarily appealing to the same moral arguments to defend their common position. This phenomenon of convergence has been observed in the work of other public bodies whose charge was to make ethical evaluations of research involving human subjects, including the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. For example, both those who take the viewpoint that emphasizes obligations to promote welfare and to refrain from inflicting harm, and those who accord priority to self-determination, can agree that law and medical and research practice should recognize a right to informed consent for competent individuals. 
The argument for requirements of informed consent based on promoting welfare and refraining from inflicting harm assumes that individuals are generally most interested in and knowledgeable about their own well-being. Individuals are thus in the best position to discern what will promote their welfare, and generally when deciding about participation in research or medical care, allowing physicians or others to decide for them runs too great a risk of harm or loss of benefits. By contrast, an approach based on self-determination assumes that, at least for competent individuals, being able to make important decisions concerning one's own life and health is intrinsically valuable, independent of its contribution to promoting one's well-being. The most compelling case for recognizing a right of informed consent for competent subjects and patients draws upon both lines of justification, emphasizing that this requirement is necessary from the perspective of self-determination considered as valuable in itself, and from the standpoint of promoting welfare and refraining from doing harm. Therefore, although people may have different approaches to the moral life, which reflect different priorities among basic moral principles, these differences need not result in a lack of consensus on social policy, or even on particular moral rules such as the rule that competent individuals ought to be allowed to accept or refuse participation in experiments. On the contrary, the fact that the same moral rules or social policies can be grounded in different basic moral principles and points of view greatly strengthens the case for their public endorsement by official bodies charged to speak for society as a whole. The three kinds of ethical standards upon which the committee relies for our ethical evaluations, the basic moral principles, government policies, and rules of professional ethics also enjoy a broad consensus. They are not idiosyncratic to a particular ethical value system. 
Thus, it would be a mistake to think that in order to fulfill our charge of ethical evaluation, the advisory committee must assume that there is only one uniquely correct ethical standpoint. A broad range of views can acknowledge that the medical profession should be held accountable for moral rules it publicly professes, and that individual physicians can be held responsible for abiding by these rules of professional ethics. Likewise, regardless of whether one believes that the ultimate justification for government policies is the goal of promoting welfare and minimizing harms or respect for self-determination, one can agree that policies represent commitments to actions and hence generate obligations. Moreover, any plausible ethical viewpoint will recognize that when individuals assume roles in organizations, they thereby undertake role-derived obligations. We have already argued that the basic ethical principles that we employ in evaluating experiments are widely accepted and command significant allegiance not only from our contemporaries, but also from reflective and morally sensitive individuals and ethical traditions in the past. It would be very implausible to construe any of them as parochial or controversial. Retrospective moral judgment and the challenge of relativism. Some may still have reservations about the project of evaluating the ethics of decisions and actions that occurred several decades ago. The worry is that it is somehow inappropriate, if not muddled, to apply currently accepted standards to earlier periods when they were not accepted, recognized, or viewed as matters of obligation. This is an important worry, but one that does not apply to our framework. The position that the values and principles of today cannot be validly applied to past situations in which they may not have been accepted is called historical ethical relativism. 
This is the thesis that moral judgments across time are invalid because moral judgments can be justified only by reference to a set of shared values, and the values of a society change over time. According to this view, one historical period differs from another by virtue of lacking the relevant values contained in the other historical period, namely those that support or justify the particular moral judgments in question. Understood in this way, historical ethical relativism, if true, would explain why some retrospective moral judgments are not valid, namely where the past society about which the judgments are made lacks the values that in our time support our judgments. In other words, the claim is that moral judgments made about actions and agents in one period of history cannot be made from the perspective of the values of another historical period. The question of whether historical ethical relativism limits the validity of retrospective moral judgment is not a mere theoretical puzzle for moral philosophers. It is an eminently practical question, since how we answer it has direct and profound implications for what we ought to do now. Most obviously, the position we adopt on the validity of retrospective moral judgment will determine whether we should honor claims that people now make for remedies for historical injustices allegedly perpetrated against themselves or their ancestors. Similarly, we must know whether there is any special circumstance resulting from the historical context in which the responsible parties acted that mitigates whatever blame would otherwise be appropriate. We return to this question later in the chapter. In addition, something even more fundamental is at stake in the debate over retrospective moral judgment: the possibility of moral progress. The idea of moral progress makes sense only if it is possible to make moral judgments about the past and to make them by appealing to some of the same moral standards that we apply to the present. 
Unless we can apply the same moral yardstick to the past and the present, we cannot meaningfully say either that there has been moral progress or that there has not. For example, unless some retrospective moral judgments are valid, we cannot say whether the abolition of slavery is a case of moral progress, moral regression, or neither. More specifically, unless we can say that slavery was wrong, we cannot say that the abolition of slavery was a moral improvement. For these and other reasons, the acceptance of historical ethical relativism has troubling implications. But even if we were to accept historical ethical relativism as the correct position, it would not follow from this alone that there is anything improper about making judgments about radiation experiments conducted decades ago based on the three kinds of ethical standards the committee has identified. Two of these standards, government policies and rules of professional ethics, are standards used at the time the experiments were conducted. Neither of these kinds of standards involves projecting current cultural values onto a different cultural milieu. We have already argued that basic ethical principles, the third kind of standard adopted by the committee, are not temporally limited. Although there have been changes in ethical values in the United States between the mid-1940s and the present, it is implausible that these changes involve the rejection or affirmation of principles so basic as that it is wrong to treat people as mere means, wrong to inflict harm, or wrong to deceive people. Thus the advisory committee's evaluation of the human radiation experiments in light of these basic principles is based on a simple and, we think, reasonable assumption: that even 50 years ago these principles were pervasive features of moral life in the United States that were widely recognized and accepted, much as we recognize and accept them today. End of Section 23, Recording by Elsie Selwyn. 
Section 24 of Final Report of the Advisory Committee on Human Radiation Experiments. Recording by Elsie Selwyn. Ethics of Human Subjects Research, A Historical Perspective, Chapter 4, Part 2. Factors that influence or limit ethical evaluation. Several considerations influence and can limit the ability to reach ethical conclusions about rightness and wrongness and praise and blame. Some of these may be more likely to be present in efforts to evaluate the past, but all can arise when attempts are made to evaluate contemporary events as well. The most important such limitations relevant to the advisory committee's evaluations are these. One, lack of evidence as to whether ethical standards were followed or violated, and if so by whom, and two, the presence of conflicting obligations. The three kinds of ethical standards adopted by the committee can yield the conclusion that an individual or collective agent had or has a particular obligation. But this conclusion is not by itself sufficient to determine in any particular case whether anything wrong was done or whether any individual or collective agent deserves blame. Lack of evidence. Sound evaluations cannot be made without sufficient evidence. Sometimes it cannot be determined if anything wrong was done because key facts about a case are missing or unclear. Other times there may be sufficient evidence that a wrong was done, but insufficient evidence to determine who performed the action that was wrong, or who authorized the policy that was wrong, or who was responsible for a practice that was wrong. This is why the advisory committee strove during our tenure to reconstruct the details of the circumstances under which the human radiation experiments themselves took place. 
However, these records are incomplete, and even the copious documentation we have gathered does not tell as complete a story as sometimes was needed to make ethical evaluations. Conflicting obligations. Because we all have more than one obligation, because they can conflict with one another, and because some obligations are weightier than others, a particular obligation that is otherwise morally binding may not be binding in a particular circumstance, all things considered. For example, a government official might be obligated to follow certain routine procedures, but in a time of dire emergency he or she might have a weightier obligation to avert great harm to many people by taking direct action that disregards the procedures. Similarly, a physician is obligated to keep his patient's condition confidential, but in some cases it is permissible and even obligatory to breach this confidence, for example in order to prevent the spread of deadly infectious diseases. In such cases, the agent has done nothing wrong in failing to do what he or she would ordinarily be morally obligated to do. That obligation has been validly overridden by what is in the particular circumstances a weightier obligation. The presence of conflicting obligations may limit our ability to make moral judgments when, for example, it is difficult to determine in a particular case which obligation should take precedence. At the same time, however, if it can be determined which obligation is weightier, then the presence of this factor does not serve as an impediment to evaluation; rather it can lead to the conclusion that nothing morally wrong was done and that no one should be blamed. 
An example of a potentially overriding obligation that is especially important for the advisory committee's work is the possibility that during the period of the radiation experiments, obligations to protect national security were sometimes more morally weighty than obligations to comply with standards for human subjects research. If the threat were great enough, considerations of national security grounded in the basic ethical principle that one ought to promote welfare and prevent harm could justifiably override the basic ethical principle of not using people as mere means to the ends of others, as well as the more specific rule of research ethics requiring the voluntary consent of human subjects. Had such an overriding obligation to protect national security existed during the period we studied, it also would have relieved responsible individuals of any blame otherwise attributable to them for using individuals in experiments that were crucial to the national defense. Especially during the late 1940s and early 1950s, and then again in the early 1960s, our country was engaged in an intense competition with the Soviet Union. A high premium was placed upon military superiority, not only in conventional warfare but also in atomic, biological, and chemical warfare. The DoD's Wilson Memorandum, when originally promulgated in 1953, declared that it was directed toward the need to pursue atomic, biological, and chemical warfare experiments for defensive purposes in these fields. It would not be surprising, therefore, to discover that, in the government's policies and rules for human subject research, provisions had been made for the possibility that obligations to protect national security might conflict with and take priority over obligations to protect human subjects, and thus that such policies would have included exceptions for national security needs. 
The moral justification would also not be surprising that, in order to preserve the American way of life with its precious freedoms, some sacrifices of individual rights and interests would have to be made for the greater good. The very phrase Cold War expressed the conviction that we already were engaged in a life or death struggle, and that in war actions may be permissible that would be impermissible in peacetime. Survival in the treacherous and heavily armed post-World War II era might demand no less, repugnant as those actions otherwise might be to many Americans. The advisory committee did not undertake an inquiry to determine whether during either World War II or the Cold War there were ever circumstances in which considerations of national security might have justified infringements of the rights and protections that would otherwise be enjoyed by American citizens in the context of human experimentation. Our sources for answering this question were limited to materials pertinent to specific human radiation experiments and declassified defense-related memorandums and transcripts. With regard to the experiments, particular cases of which are reviewed in Part II of this report, even in those experiments that took place under circumstances most closely tied to national security considerations, such as the plutonium injections (see chapter 5), it does not appear that such considerations would have barred satisfying the basic elements of voluntary consent. Thus, for instance, although the word plutonium was classified until the end of World War II, subjects could still have been asked their permission after having been told that subjects in the experiment would be injected with a radioactive substance with which medical science had had little experience and which might be dangerous, that it would not help them personally, but that the experiment was important to protecting the health of the people involved in the war effort or safeguarding the national defense. 
With regard to defense-related documents, in none of the memorandums or transcripts of the various agencies did we encounter a formal national security exception to the conditions under which human subjects may be used. In none of these materials does an official, military or civilian, argue for the position that individual rights may be justifiably overridden owing to the needs of the nation in the Cold War. In none of them is an official position expressed that the Nuremberg Code or other conventions concerning human subjects could be overridden because of national security needs. Some government officials, military and civilian, may have personally advocated the view that obligations to protect national security were more important than obligations to protect the rights and interests of human subjects. It is, of course, possible that the priority placed on national security was so great in some circles of the government that the ability of security interests to override other national interests was implicitly assumed rather than explicitly articulated. It is a matter of historical record that some initiatives undertaken by government officials at some agencies during this period adopted the view that greater national purposes justified the exploitation of individuals. Notorious examples are the CIA's MKULTRA project and the Army's psychochemical experiments, which subjected unsuspecting people to experiments with LSD and other substances. However, even the internal investigation of the Department of Defense into these incidents in the 1970s concluded that these incidents were violations of government policy, not recognized legitimate exceptions to it. During the era of the Manhattan Project, the United States and its allies were engaged in a declared and just war against the Axis powers. 
Regarding the possibility of a wartime exception, it is well documented that during World War II the Committee on Medical Research, CMR, of the Executive Office of the President funded research on various problems confronting U.S. troops in the field, including dysentery, malaria, and influenza. This research involved the use of many subjects whose capacity to consent to be a volunteer was questionable at best, including children who were mentally retarded, and prisoners. However, when the CMR considered proposed gonorrhea experiments that would have involved deliberately exposing prisoners to infection, the resulting discussion about the ethics of research exhibited a cautious attitude. The conclusion was that only volunteers could be used and that they had to be carefully informed about the risks and benefits of participation. In these and other classified conversations, the CMR took the position that care was to be taken with human subjects, including conscientious objectors and military personnel. It is difficult to reconcile these deliberations with the fact that many subjects of CMR-funded research were not true volunteers. Whether the CMR believed that the needs of a country at war justified the use of people who could not be true volunteers as research subjects is not known. It would, however, be an error to conclude that even in contexts where important national security interests are at stake, such as during wartime, a conflict between obligations to protect the national defense and obligations to protect human subjects ought always to be resolved in favor of national security. The question of whether any and all means are morally acceptable for the sake of national security and the national defense is a complex one. Even in the case of a representative democracy that is not an aggressor, it would be wrong to assume that there are no moral constraints in time of war. 
All of the major religious and secular traditions concerning the morality of warfare recognize that there are substantial limitations upon the manner in which even a just war is conducted. The issue of the morality of total warfare for a just cause, including the use of medical science, was beyond the scope of the Advisory Committee's charter, deliberations, and expertise. Distinguishing between the wrongness of actions and policies and the blameworthiness of agents. Factors that influence or limit judgments about blame. The factors we have just discussed, lack of evidence and the presence of conflicting obligations, place limits on our ability to make judgments about both the rightness and wrongness of actions and the blameworthiness of the agents responsible for them. Some factors, however, place limits only on our ability to make judgments about the blameworthiness of agents. Even in cases where actions or policies are clearly morally wrong, it may be uncertain how blameworthy the agents who conducted or promulgated them are, or indeed whether they are blameworthy at all. Some factors make it difficult to affix blame; other factors can mitigate or lessen the blame actors deserve. Four such factors are of particular concern to the committee: (1) factual ignorance; (2) culturally induced ignorance about relevant moral considerations; (3) evolution in the interpretation and specification of moral principles; and (4) indeterminacy in an organization's division of labor, with the result that it is unclear who has responsibility for implementing the commitments of the organization. Factual ignorance. Factual ignorance refers to circumstances in which some information relevant to the moral assessment of a situation is not available to the agent.
There are many reasons that this may be so, including that the information in question is beyond the scope of human knowledge at the time, or that there was no good reason to think that a particular item of information was relevant or significant. However, just because an agent's ignorance of morally relevant information leads him or her to commit a morally wrong act, it does not follow that the person is not blameworthy for that act. The agent is blameworthy if a reasonably prudent person in that agent's position should have been aware that some information was required prior to action, and the information could have been obtained without undue effort or cost on his or her part. Some people are in positions that obligate them to make special efforts to acquire knowledge, such as those who are directly responsible for the well-being of others. Determinations of culpable and nonculpable factual ignorance often turn on whether the competent person in the field at that time had that knowledge or had the means to acquire it without undue burdens. Culturally induced moral ignorance. Sometimes cultural factors can prevent individuals from discerning what they are morally required to do, and can therefore mitigate the blame we would otherwise place on individuals for failing to do what they ought to do. In some cases, these factors may have been at work in the past but are no longer operative in the present because of changes in culture over time. An individual may, like other members of the culture, be morally ignorant because of features of his or her deeply enculturated beliefs. The individual may be unable to recognize, for example, that certain people, such as members of another race, deserve equal respect, or even that they are people with rights. Moral ignorance can impair moral judgment, and hence may result in a failure to act morally. In extreme cases, a culture may instill a moral ignorance so profound that we speak of cultural moral blindness.
In some societies, the dominant culture may recognize that it is wrong to exploit people, but fail to recognize certain classes of individuals as being people. Some of those committed to the ideology of slavery may have been morally blind in just this way, and their culture may have induced this blindness. Here it is crucial to distinguish between culpable and nonculpable moral ignorance. The fact that one's moral ignorance is instilled by one's culture does not by itself mean that one is not responsible for being ignorant, nor does it necessarily render one blameless for actions or omissions that result from this ignorance. What matters is not whether the erroneous belief that constitutes the moral ignorance was instilled by one's culture. What matters is the extent to which the individual can be held responsible for maintaining this belief, as opposed to correcting it. Where opportunities for remedying culturally induced moral ignorance are available, a person may rightly be held responsible for remaining in ignorance, and for the wrongful behavior that issues from his or her mistaken beliefs. People who maintain their culturally induced moral ignorance in the face of repeated opportunities for correction typically do so by indulging in unjustifiable rationalizations, such as those associated with racist attitudes. They show an excessive partiality to their own opinions and interests, a willful rejection of facts that they find inconvenient or disturbing, an inflated sense of their own self-worth relative to others, a lack of sensitivity to the predicament of others, and the like. These moral failings are widely recognized as such across a broad spectrum of cultural values and ethical traditions, both religious and secular. Only if an agent could not reasonably be expected to remedy his or her culturally induced moral ignorance would such ignorance exculpate his or her conduct.
But even in cases in which the individual could not be blamed for persisting in ignorance, this would do nothing to show that the actions or omissions resulting from his or her ignorance were not wrong. Nonculpable moral ignorance only exculpates the agent; it does not make the wrong acts right. Evolution in the interpretation of ethical principles. There is another respect in which the dependence of our perceptions of right and wrong on our cultural context has a bearing on the Advisory Committee's evaluations. While basic ethical principles do not change, interpretations and applications of basic ethical principles, as they are expressed in more specific rules of conduct, do evolve over time through processes of cultural change. Recognizing that more specific moral rules do change has implications for how we judge the past. For example, the current requirement of informed consent is the result of evolution: acceptance of the simple idea that medical treatment requires the consent of the patient, at least in the case of competent adults, seems to have preceded by a considerable interval the more complex notion that informed consent is required. Furthermore, the notion of informed consent itself has undergone refinement and development through common-law rulings, through analyses and explanations of these rulings in the scholarly legal literature, through philosophical treatments of the key concepts emerging from legal analyses, and through guidelines and reports by government and professional bodies. For example, as early as 1914, the duty to obtain consent to medical treatment was established in American law: "Every human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient's consent commits an assault."
However, it was not until 1957 that the courts decreed that consent must be informed, and this 1957 ruling was only the beginning of a long debate about what it means for consent to be informed. Thus it is probably fair to say that the current understanding of informed consent is more sophisticated, and what is required of physicians and scientists more demanding, than both the preceding requirement of mere consent and earlier interpretations of what counts as informed consent. As the content of the concept has evolved, so has the scope of the corresponding obligation on the part of professionals. For this reason, it would be inappropriate to blame clinicians or researchers of the 1940s and 1950s for not adhering to the details of a standard that emerged through a complex process of cultural change spanning decades. At the same time, however, it remains appropriate to hold them to the general requirements of the basic moral principles that underlie informed consent: not treating others as mere means, promoting the welfare of others, and respecting self-determination. Indeterminacy in bureaucratic responsibilities. It is often unclear in complex organizations, such as government agencies, who has the responsibility for implementing the organization's policies and rules. This is particularly common in new and changing organizations, where it is more likely than in stable organizations that there will be interconnecting lines of authority among employees and officials, and job descriptions that are not explicit with respect to responsibility for implementation of policies and initiatives. When policies are not properly implemented in organizations that fit this description, it often is difficult to assign blame to particular individuals.
An employee or official of an agency cannot fairly be blamed for a failed or poorly executed policy unless it can be determined with confidence that the person had responsibility for implementing that policy and should have known that he or she had this responsibility. The importance of distinguishing wrongdoing from blameworthiness. Judgments of wrongdoing and judgments of blameworthiness have very different implications. Even where a wrong was done, it does not follow that anyone should be blamed for the wrong. This is because there are factors, including the four we have just described, that can lessen or remove blame from an agent for a morally wrong act, but that cannot in any way make the wrong act right. If experiments violated basic ethical principles, institutional or organizational policies, or rules of professional ethics, then they were and will always be wrong. Whether and how much anyone should be blamed for these wrongs are separate questions. The distinction between the moral status of experiments and that of the individuals who were involved with conducting, funding, or sponsoring them also has important implications for our own time. For a society to make moral progress, individuals must be able to exercise moral judgment about their actions. It is important for social actors to be critical about their activities, even those in which they have been engaged for some time. It is important for them to be able to step back and analyze their actions as right or wrong. If we did not distinguish between actions and agents, then people might feel that, once they have perceived their moral error, it is too late for them to change their ways, to object to the ongoing activity, and to try to rally others in support of reform. For any generation to initiate morally indicated reforms, it must be able to take this critical stance.
As we see in part three of this report, even now there are aspects of our society's use of human subjects that should be critically examined. The actions we ourselves have performed do not condemn us as moral agents unless we refuse to open ourselves to the possibility that we have in some ways been in error. As we have said, even if we are exculpated by our own culturally induced moral ignorance, that does not make our wrong acts right; and even if we must accept a measure of blame for our actions, we are free to achieve a critical assessment and to initiate and participate in needed change. The significance of judgments about blameworthiness. The committee believes that its first task is to evaluate the rightness or wrongness of the actions, practices, and policies involved in the human radiation experiments that occurred from 1944 to 1974. However, it is also important to consider whether judgments ascribing blame to individuals, groups, or organizations can responsibly be made and whether they ought to be made. There are three main reasons for judging culpability as well as wrongness. First, a crucial part of the committee's task is to make recommendations that will reduce the risk of errors and abuses in human experimentation in the future on the basis of its diagnoses of what went wrong in the past. A complete and accurate diagnosis requires not only stating what wrongs were done but also explaining who was responsible for the wrongs occurring. To do this is likely to yield the judgment that some individuals were morally blameworthy. Second, unless judgments of culpability are made about particular individuals, one important means of deterring future wrongs will be precluded. People contemplating unethical behavior will presumably be more likely to refrain from it, other things being equal, if they believe that they, as individuals, may be held accountable for wrongdoing, than if they can assure themselves that at most their government, their particular government agency, or their profession may be subject to blame. Third, ethical evaluation generally involves both evaluation of the rightness or wrongness of actions and the praiseworthiness or blameworthiness of agents. In the absence of any explicit exclusion of the latter sorts of judgments in our mandate, the committee believes it would be arbitrary to exclude them. Having made a case for judgments of culpability as well as wrongness, the committee believes it is very important to distinguish carefully between judging that an individual is culpable for a particular action and judging that he or she is a person of bad moral character. Justifiable judgments of character must be based on accurate information about long-standing and stable patterns of action in a number of areas of a person's life, under a variety of different situations. Such patterns cannot usually be inferred from information about a few isolated actions a person performs in one particular department of his or her life, unless the actions are so extreme as to be on the order of heinous crimes. End of Section 24. Section 25 of Final Report of the Advisory Committee on Human Radiation Experiments. Ethics of Human Subjects of Research, A Historical Perspective, Chapter 4, Part 3: Applying the Ethical Framework. The three kinds of standards presented in this chapter provide a general framework for evaluating the ethics of human radiation experiments.
In this section of the chapter, we revisit those standards in the specific context of human radiation experiments conducted between 1944 and 1974 and what we have learned about the policies and practices involving human subjects during that period. Basic ethical principles. Earlier in this chapter, we identified six basic ethical principles as particularly relevant to our work: one ought not to treat people as mere means to the ends of others; one ought not to deceive others; one ought not to inflict harm or risk of harm; one ought to promote welfare and prevent harm; one ought to treat people fairly and with equal respect; and one ought to respect the self-determination of others. These principles are central to our analysis of the cases we present in part two of the report, although not every case we evaluate engages every principle. Two of the principles, however, recur repeatedly as we consider the ethics of past experiments. These are: one ought not to treat people as mere means to the ends of others, and one ought not to inflict harm or risk of harm. Whether an experiment involving human subjects violates the principle not to use people as mere means generally depends on two factors: consent and therapeutic intent. An individual may give his or her consent to being treated as a means to the ends of others. If a person freely consents, then he or she is no longer being used as a mere means, that is, as a means only. Thus, if a person is used as a subject in an experiment from which the person cannot possibly benefit directly, but the person's consent to that use is obtained, the person is not being used as a mere means to the ends of others. By contrast, if a person is used as a subject in such an experiment but the person's consent is not obtained for that use, the person is being used as a mere means to the ends of the investigator conducting the experiment and the institutions funding or sponsoring the experiment.
If an action that involves the use of a person is undertaken in whole or in part for that person's benefit, then the person is not being used as a mere means toward the ends of others. Thus, if a person is used as a subject in an experiment that is intended to offer the subject a prospect of direct benefit, then even if the subject's consent has not been obtained, the subject is not being used as a mere means to the ends of others. This is because the experiment is intended to serve the subject's interests as well as the interests of the investigator and funding agency. It may be wrong not to obtain the subject's consent in this case, but the wrong does not stem from a violation of the principle not to use people as mere means. Instead, the wrong reflects the violation of other basic principles, such as the principles enjoining us to respect self-determination and to promote welfare and prevent harm. These two factors, the obtaining of consent and an intention to benefit, also can transform the moral quality of an action that involves the imposition of harm or risk of harm. One important way to make the imposition of a risk of harm justifiable is to obtain the person's permission for the imposition. The imposition of risk on a person also is more justifiable when the risk is imposed to secure a benefit for that person, although even in the presence of a prospect of offsetting benefit, the imposition of a risk on another without the person's consent is morally questionable because it appears to violate the principle of respect for self-determination. Consider the following example of how the factors of therapeutic intent and consent can transform a morally questionable act into a morally acceptable one. Patients are enrolled in an experiment in which they are given a new drug that is unproven in humans, induces substantial discomfort or even suffering, and may produce irreversible damage to vital organs.
There is, however, no effective treatment for the condition from which these patient subjects suffer, and the condition is life-threatening. The drug is theoretically promising compared with related drugs used in similar diseases, and it has proven effective in animals. Further, the opportunity to participate in the experiment is offered to patients while they are lucid, comfortable, and at ease. Under these circumstances, the imposition of harm may be transformed into a caring and respectful act. Policies of government agencies. Where agencies of the government have policies on the conduct of research involving human subjects, and where the policies include requirements or rules that are morally sound, these policies constitute standards against which the conduct of the agencies and the people who work there, as well as the experiments the agencies sponsored or conducted, can be evaluated. Government agencies must be held responsible for failures to implement their own policies; to do otherwise is to break faith with the American people, who have a reasonable expectation that an agency will conduct its affairs in accord with the agency's stated policies. As we noted in Chapter 1, it is not always clear, however, whether statements made in letters or memorandums constitute agency policy. When there is little evidence that a statement by a government official was ever implemented, it is often difficult to determine whether this was an instance of an agency failing to implement its own policies or an instance where a statement by a government official was not perceived as agency policy in the first place.
Among the general conclusions that can be drawn from the discussions about policies during the late 1940s and early 1950s is that the AEC, DOD, and NIH required investigators to obtain the consent of the healthy or normal subject, and that prior group review was required in the AEC for research using radioisotopes, whether privately or publicly financed, and in the NIH for all hazardous procedures. Also, in 1953, the Department of Defense adopted the Nuremberg Code as the policy for research related to atomic, biological, and chemical warfare, and the NIH Clinical Center articulated a consent requirement for patient subjects in intramural research. See Chapter 1. Two questions that arise at this juncture are whether an experiment was wrong if it violated one of these policies but took place at another government agency, and whether an experiment was wrong if it took place under the auspices of an agency before it promulgated the policy. The answer to both questions is the same: even if such an experiment was not wrong according to the policy of the agency sponsoring the experiment at the time, the experiment may nevertheless have been unethical based on one or more basic ethical principles or rules of professional ethics. As is the case today, decades ago government officials had obligations to take reasonable steps to see that policies were adequately implemented. Policies constitute organizational commitments, and organizational commitments generate obligations on the part of the organization and its members. In some cases, however, it is not clear that conditions stated by individual officials rise to a level that all would be comfortable calling policies. Accordingly, it is not clear whether corresponding obligations to implement can be inferred. The two letters signed by AEC General Manager Carroll Wilson in April and November 1947 are the best examples of this problem.
Nevertheless, if it is correct to say that high officials have an obligation to exert due efforts to implement and communicate the rules they are empowered to establish, then they may reasonably be blamed for failures in this regard. Further, if they do not even attempt to articulate rules that are indicated by basic ethical principles and that are clearly relevant to organizational activities that fall under their authority, they are also subject to moral blame. The mitigating condition of culturally induced moral ignorance does not apply to government officials who failed to exercise their responsibilities to implement or communicate requirements that clearly fell within the ambit of their office and of which they were aware; the very fact that these requirements were articulated by the agencies in which they worked is evidence that officials could not have been morally ignorant of them. We have observed, however, that, especially with regard to research involving patients, policies were frequently unclear. When this research offered patient subjects a chance to benefit medically, the widespread discretion granted physicians to make decisions on behalf of their patients is a mitigating factor in judging the blameworthiness of government officials for failing to impose consent requirements on physician investigators. This failure could be attributed to a cultural moral ignorance concerning the proper limits to the authority of physicians over their patients. The same cannot be said of government officials who failed to impose consent requirements on physician investigators who used patient subjects in research from which the patients could not benefit medically. This use of human subjects took place outside of the therapeutic context that defines the doctor-patient relationship and therefore also was outside of the authority then ceded to physicians.
In this case, responsible agency officials had a ready analogue in healthy subjects, for whom there was a lengthy tradition of policies and rules requiring the use of volunteers and the obtaining of consent. Government officials could and should have perceived the morally identical nature of these cases: without consent, both cases involved a violation of the principle not to use people as mere means to the ends of others. Those who were ill should have been granted the same protection as those who were well. In contrast to requirements for consent, requirements intended to ensure that risks to experimental subjects were acceptable were far more clearly stated. Government officials are blameworthy if they permitted research to continue that was known to entail unusual risks to the subjects, in direct violation of agency policy. Finally, some lessons that can be drawn from the experience of the human radiation experiments we considered speak to the conduct of government itself as a collective agent, rather than simply to individual government officials. In too many instances, as we saw in Chapter 1, we found a lack of clarity about the status within an agency of specific declarations by responsible officials. Particularly when agencies are engaged in activities that may compromise the rights or interests of citizens, it is critically important that agencies be clear about their commitments and policies, and that they not remain passive in the face of questionable practices for which they may bear some responsibility. In Chapter 3 we saw an effective response to such a situation in the 1960s by the PHS. This example attests to the fact that institutional clarity and active reform measures can succeed, and that when they do, they can be great forward strides. Rules of professional ethics.
Even if the federal government had adopted no formal human research ethics policies whatsoever, the medical profession and its members would still have moral obligations to those who entrust themselves to their care. The successes of modern medical research, regardless of its funding source, are ultimately due to the efforts of talented and dedicated medical scientists. These investigators bear a profound ethical burden in their work with human subjects. Society entrusts them with the privilege of using other human beings to advance their important work. Although society must not discourage them from the pursuit of new information, it must also diligently pursue signs that medical scientists have not exercised their ethical responsibility with the care and sensitivity that society has good reason to expect from them. Without reference to the policies adopted by federal agencies, what rules of professional ethics were seen by the medical profession during the 1944 to 1974 period as relevant to the conduct of its members engaged in human subjects research? The answer to this question depends upon which kind of experimental situation is under discussion: an experiment on a healthy subject; an experiment on a patient subject without a scientific or clinical basis for an expectation of benefit to the patient subject; or an experiment on a patient subject with a scientific or clinical basis for an expectation of benefit to the patient subject. Experiments on healthy subjects. By the mid-1940s, it was common to obtain the voluntary consent of healthy subjects who were to participate in biomedical experiments that offered no prospective medical benefit to them. Sophisticated philosophical analysis is not required to reach the conclusion that using a human being in a medical experiment that offers the person no prospective personal benefit, without that person's consent, is wrong.
As we have already noted, such conduct violates the basic ethical principle that one ought not to use people as mere means to the ends of others. Experiments on patient subjects without a scientific or clinical basis for an expectation of benefit to the patient subject. The Hippocratic tradition of medical ethics inherited by physicians in the 1940s holds that unless the physician is reasonably sure that his or her treatment is, on balance, likely to do the patient more good than harm, the treatment should not be introduced. The heart of the Hippocratic ethic is the physician's commitment to putting the interests of the patient first. Subjecting one's patient to experimentation that offers no prospective benefit to the patient, without his or her consent, is a direct repudiation of this commitment. If the patient consents to this use, the moral warrant for proceeding with the experiment comes from the patient's permission, not from the Hippocratic ethic. Experiments on patient subjects with a scientific or clinical basis for an expectation of benefit to the patient subject. Even in Hippocratic medicine, it is recognized that physicians may attempt to use unproven or experimental methods to benefit the patient, whether through efforts at cure or palliation, but only so long as there is no efficacious standard therapy available and the innovative measures are compatible with the obligation to avoid doing harm without the prospect of offsetting benefit. Interventions in this category should be based on scientific reasoning and conservative clinical judgment. Arguably, so long as these conditions prevailed, it was not thought morally necessary within the medical profession to obtain the patient's consent to such experimentation prior to the 1960s; but the physician assumed a corresponding obligation to base his or her deviation from standard practice on the reasonable likelihood of a patient benefit sufficient to outweigh the risks associated with being in the experiment.
This type of reasoning, too, has been available to and accepted by physicians for many years, even though the ability to assess and calculate risks has developed greatly. Although the professional ethics of the period thus had relevant moral rules for each of these three experimental situations, compliance with these rules is a separate matter. There may be many reasons for specific failures by physicians to adhere to the requirements of their ethical tradition, some of which may render them nonculpable, and there are various limitations on our ability to assign blame for particular cases of a physician's failure to adhere to professional ethics. However, any use of human subjects that did not proceed in accordance with these rules of professional ethics was wrong, in the sense that it was a violation of sound professional ethical standards. Moreover, even if there was then, or is now, a lack of clarity about the rules of professional ethics, recognition by morally serious individuals of basic ethical principles is enough to identify certain sorts of human experiments as morally unacceptable. The special moral responsibilities of the medical profession as a whole, whether decades ago or in our own time, deserve careful consideration, especially insofar as previous experience can help formulate lessons for the future. Like the government, the medical profession as a whole must be held to a higher standard than individuals in society. Confidence in the medical profession is important because individuals put their very lives, and the lives of their loved ones, in the hands of those whom the profession has certified as competent to practice. Unlike government officials, members of the medical profession are explicitly bound to a moral tradition in their professional relations, based on which society grants the medical profession the privilege of largely policing itself.
This authority is part of what constitutes the medical profession as a profession, but the authority is granted by society on the condition that the profession will adhere to the high moral rules it professes and that, if necessary, the medical profession will reform, or encourage the reform of, relevant institutions to ensure that those rules will be honored in practice. Moreover, many of the privileges that devolve on the medical profession are granted on the condition that it is sufficiently well organized to police itself with minimal intervention by the government and the legal system. Therefore, members of the medical profession are further legitimately expected to engage in organizational conduct that constitutes sound moral practice. Implicit in this arrangement is also the assumption that the profession will be self-critical, even about its relatively well-entrenched attitudes and beliefs, so that it will be prepared to undertake reforms. Without this commitment to self-criticism, self-regulation cannot be effective, and the public's trust in the profession's ability to self-regulate would be unwarranted. Today we regard subjects of biomedical research whose consent was not obtained to have been wronged. Under conditions of significant risk, the wrong is greater; in the absence of the potential of offsetting medical benefit, greater still. The historical silence of the medical profession with respect to nontherapeutic experiments was perhaps based on the rationale that those who are ill, and perhaps dying, may be used in experiments because they will not be harmed, even though they will not benefit. But this rationale overlooks the principle that people should never be used as mere means, and the principle of respect for self-determination. It may also provide insufficient protection against harm, given the position of conflict of interest in which the physician researcher may find him- or herself.
Nevertheless, until the mid-1960s, medical conventions were silent on experiments with patient subjects that offered no direct benefit but that physicians believed to pose acceptable risk. This silence was a failure of the profession. One defense of the profession in this regard is that it was as subject to the phenomenon we have called cultural moral ignorance as any other group in society at the time, including the government and possibly the public at large, both of which showed arguably excessive deference to physician authority. However, the medical profession was in a wholly different position from the others in several respects. First, it insisted upon, and was given, the privilege of policing its own behavior. Second, the profession was the direct beneficiary of the deference paid to it. Third, there were already examples of experiments that had involved subject consent that could have served as models of reform. Under these conditions the profession had an obligation to be self-critical concerning the norms and rules that it thought appropriate to govern its members' conduct.

The medical profession could and should have seen that healthy subjects and patient subjects in non-therapeutic experiments were in similar moral positions: neither was expected to benefit medically. Just as physicians had no moral license to determine an acceptable risk for healthy subjects without their voluntary consent, they had no moral license to do so in the case of other subjects who also could not benefit from being in research, even if they were patients. The prevailing standards for healthy subject groups could easily have been applied to patient subjects for whom there was no expectation of medical benefit. The moral equivalence of the use of healthy people and of ill people as subjects of experiments from which no subject could possibly benefit directly was perceptible at the time.
This moral equivalence would have made it clear that no one, whether well or sick, should be used as a mere means to advance medical science without voluntary consent. Thus this moral ignorance could have, and should have, been remedied at the time. Indeed, it is arguably the case that physicians could and should have seen that using patients in this way was morally worse than using healthy people, for in doing so one was violating not only the basic ethical principle not to use people as mere means but also the basic ethical principle to treat people fairly and with equal respect. American physicians are members of a society that values these basic moral principles still more highly than the advancement of medical science. These principles are as easily known to physicians as to anyone else, and it is unacceptable to single oneself out as an exception to them simply because one is a member of an esteemed profession. Someone who is ill deserves to be treated with the same respect as someone who is well. Accordingly, a physician who failed to tell a patient that what was proposed was an experiment with no therapeutic intent was, and is, blameworthy. To the extent that the experiment entailed significant risk, the physician is more blameworthy; where it was reasonable to assume that the experiment imposed no risk, or only minimal risk or inconvenience, the blame is less.

We argue here that the use of patients in non-therapeutic experiments without their consent was not only a violation of these basic moral principles but also a violation of the Hippocratic principle that was the cornerstone of professional medical ethics at that time. That principle enjoins physicians to act in the best interests of their patients and thus would seem to prohibit subjecting patients to experiments from which they could not benefit.
It might be argued that a widespread practice that does not conform to a principle of professional ethics invalidates the principle, since the practice shows that the profession is not really committed to the principle in the first place. This, however, is a misunderstanding of what it means for a profession to adopt and espouse a moral principle. Even if many or most physicians sometimes, or even often, fail to comply with the principle, it is still coherent to say that the principle is accepted by the profession when the principle has been publicly pronounced and affirmed by the profession, as was clearly the case with the Hippocratic ethic.

To characterize a great profession as having engaged over many years in unethical conduct, years in which massive progress was being made in curbing some of mankind's greatest ills, may strike some as arrogant and unreasonable. A fair assessment, however, indicates that this was one of those times in history in which wrongs were committed by very decent people who were in a position to know that a specific aspect of their interactions with others should be improved. Wrongs are not less egregious because they were committed by members of a certain profession or by people who are very decent in their relationships with other parties. It is common for us to look back at such conduct in amazement that so many otherwise good and decent people could have engaged in it without a high level of self-awareness. Moral consistency requires the Advisory Committee to conclude that if the use of healthy subjects without consent was understood to be wrong at the time, then the use of patients without consent in non-therapeutic experiments should also have been discerned as wrong at the time, no matter how widespread the practice. It should be emphasized, however, that often these non-therapeutic experiments on unconsenting patients constituted only minor wrongs.
Often there was little or no risk to patient subjects and no inconvenience. Although it is always morally offensive to use a person as a means only, as the burden on the patient subject decreased, so too did the seriousness of the wrong. Much the same can be said of experiments that were conducted on patient subjects without their consent but that offered a prospect of medical benefit. To the extent that such experiments were conducted within the moral environment of the doctor-patient relationship, that is, on the basis of the physician's considered and informed judgment that it was in the patient's best interests to be enrolled in the research, the physician was less blameworthy for failing to obtain consent. However, where the risks were great or where there were viable alternatives to participation in research, the physician was more blameworthy for failing to obtain consent.

It is often difficult to establish standards and make judgments about right and wrong and about blame and exculpation. Our charge was all the more difficult because the context of the actions and agents we were asked to evaluate differs from our own. In arriving at this moral framework for evaluating human radiation experiments, we have tried to be fair to history, to considerations of ethics, and above all to the people affected by our analysis: former subjects, physician investigators, and government officials.