All right, good evening, everyone. Can you hear me OK? Yeah? All right. So my name is Bob Truog. I'm the director of our Center for Bioethics here at Harvard Medical School. And I'd like to welcome you to the George W. Gay Lecture, the oldest endowed lectureship at HMS. Since 1922, the lecture has featured many of the most astute commentators on the social issues of their time. And certainly this year will be no exception. But before introducing Professor Danielle Allen, I'd like to use this opportunity to give you a bit of background on the history of ethics at Harvard University and how Danielle came to be here. In the mid-1980s, then-Harvard President Derek Bok recognized the need for making the work of academia more relevant and useful to solving practical ethical problems in society. In his view, few philosophers and other academicians knew much about professional life, and few professionals knew enough about philosophy to teach and write effectively on ethical issues in the real world. So to address this issue, in 1986, President Bok recruited Dennis Thompson from Princeton to be the founding director of what is now the Edmond J. Safra Center for Ethics, with the goal of bringing professionals together with academicians to address real and practical ethical issues in public life. Now, I would not be standing here tonight if it were not for the opportunity that I had to be both a fellow and then a faculty member at the Safra Center. It was transformative in giving me both the intellectual tools that I needed, as well as the credibility I needed, to make bioethics a major focus of my career in medicine. And I know that this is true for many of my colleagues who are also here in the room this evening and across the professional schools, from our sister program at the law school, the Petrie-Flom Center, under the direction of Glenn Cohen, to the business school, the Ed School, and other departments and programs.
So I encourage you to visit our website at the Center for Bioethics here. In addition to teaching ethics and professionalism in the medical school, we offer a master's degree in bioethics, a fellowship program, and postdoctoral positions. And every week, we have one or more conferences for the entire HMS community addressing issues in clinical ethics, organizational ethics, research ethics, public policy, and the law. So please pick up a postcard or a bookmark if you don't already have one. They're in the back there. And in particular, notice the new monthly seminar series that we've begun, as well as our annual conference, which will be this March, both of which focus on ethical issues in emerging technologies. But back to our story. So in 2015, Harvard University again turned to Princeton, to recruit Danielle Allen to lead the Center, appointed as the James Bryant Conant University Professor at Harvard University. Her intellectual roots are as a classicist, drawing lessons from ancient Athens to inform issues in current American political and social life. She has a rare and remarkable ability to speak to diverse audiences, from scholarly academic work, to books that are accessible for a general readership, to articles like the one she published in The New Yorker, poignantly relating the tragic story of her cousin and illuminating the injustices of race, crime, and our prison system in America today. Since her arrival at Harvard, she has continued the tradition of using the Edmond J. Safra Center as a bridge between academia and the real world. One of her major initiatives, for example, has been leading the Democratic Knowledge Project, a multifaceted program that seeks to disseminate the skills that democratic citizens need in order to succeed in operating our democracy. For me, I thought that Danielle's spirit was perhaps best captured by an interview that she gave several years ago about her book Talking to Strangers.
When asked, at this time in our history, what should democratic citizens do, she answered: explore political questions by trying to make the best possible argument on any given question from the perspective of someone with whom they disagree or whose experience of life in America differs fundamentally from their own. In short, she said, ask themselves, when they interact with strangers, whether they have treated them as they would a friend. So when I think about the debate that we heard last night among the Democrats, or the impeachment hearings that are perhaps going on even as I speak, I thought that her comments captured the very essence of wisdom. Now, as we all know, one of the great transformational challenges of the past couple of decades has been the insertion, or indeed, we might even say the invasion, of technology into virtually every aspect of our lives. In her typically timely and relevant fashion, this is the subject that Danielle chose to address this evening in her talk, "Human Choice in a Hyper-Technological Age: How We Can Keep Our Wits About Us in the World of Medicine and Beyond." So please join me in welcoming Professor Danielle Allen. Thank you so much to the Center for Bioethics, to Bob, to Christine, to Becca, to the fabulous team that's organized this beautiful event, for the invitation. It's a pleasure to be here, and I want to thank also my colleagues and fellows at the Edmond J. Safra Center for Ethics for turning out tonight. I appreciate it. It's great to have you guys here also. So I am presuming that everybody here has some sort of emotional relationship to the rapid emergence of new technologies across the fields: digital, genetic, biotech, et cetera. And so I'm just curious to know how many of you have a feeling of, I'll give you two choices, excitement on the one hand and anxiety on the other. And so pick the one that predominates for you. So how many of you feel basically excited about innovation? That's pretty good, I'm impressed.
And how many of you feel anxious? Okay, you're gonna, all right, okay. That's a pretty good split though, that's terrific. So I'm going to start tonight by really trying to address some of the sources of anxiety in ways that I hope also help us be appropriate in relationship to our excitement. I think I'm supposed to put that away. Sorry, I'm getting confused about technology. In fact, I'm very confused about technology because I don't know how I'm supposed to use this. All right, okay, good. There we go. So for the last year, oops, I'm getting ahead of myself. See how I, there we go, okay. For the last year at the Edmond J. Safra Center for Ethics, we've been working hard with colleagues across the university to develop a tech and human values collaborative. We've been very glad to partner this year through our fellows program with the Center for Bioethics with a joint fellow. It's been a wonderful relationship to forge, extending our other forms of collaboration. We've done the same with the Berkman Klein Center for Internet and Society. One of my slides is missing, oops. Sorry, I really don't know what I'm doing. I lost a slide, okay, that's what happened at any rate. Delighted to be partnering with Berkman Klein, with the Petrie-Flom Center, with the Center for Bioethics, as we heard, and with the Center for Research on Computation and Society in the Computer Science Department. So the fellows program is currently focused intensively on the ethical consequences of technological and biomedical innovation. And the large purpose of this tech and human values collaborative is to leverage the university's tremendous, exciting intellectual resources to further build capacity and collaborations that, as my colleague in philosophy, Alison Simmons, who's here somewhere, likes to say, should empower humankind to shape technology rather than having us be shaped by it. So the goal is for human beings to shape technology rather than be shaped by it.
The issue of technology's relationship to shaping human society, and our anxiety about it, is well captured by a couple of very straightforward historical examples. And I was gonna ask you to identify the piece of technology on the left, but you probably already saw it when I dashed ahead. So that's the cotton gin, and then the coal-powered steam engine on the right, easier to recognize. Both of these were technologies that obviously transformed the world, with massively significant consequences. We're still grappling with the consequences of both. How so? It's the case that at the founding of this country, a lot of the people who participated in designing the political institutions genuinely believed that enslavement was on its way out. It looked to them like an economically unsustainable mode of production. The cotton gin transformed that, entrenched the enslaved economy, and completely changed the trajectory of race relations and racial justice in American history. So a simple invention, totally transformative of even our current situation. The coal-powered steam engine, the consequences there are obvious. Pollution was the first major sign of the magnitude of the negative change, as opposed to the positive unleashing of productivity. Of course, the world learned how to roll back the impact of pollution, but to this day we grapple with the consequences of fossil fuels and what technology unleashed in that regard. So the question is then, given the power of emergent technology, the ways it can take us by surprise with unintended consequences, how do we put ourselves in a position to shape rather than be shaped? And so that was the slide I was looking for when I was talking about our program at the Safra Center. But we will go ahead and move on. Probably all of you have heard a famous prayer written by the theologian and philosopher Reinhold Niebuhr.
Grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference. Tonight I'm going to argue that we need a somewhat different version of that approach to thinking about human power. So the modified text that I propose, the modified hope or wish, is: grant me the serenity to accept the things I should not change, and the courage to change the things I can, and wisdom to know the difference. So I'll pursue my argument with reference to two specific cases: algorithms used in the criminal justice context, and gene editing. So let's begin. My argument will be that there are two key features of human choice, objective setting and limits to the power of objective setting, that we should pay attention to as we think about how to address emergent technologies. So I'm going to start with criminal justice algorithms. You may have heard about the controversy surrounding the risk assessment tool called COMPAS, developed by a firm called Northpointe. What this tool does is process a whole lot of data about the histories of criminal justice offenders, educational history, economic history, psychological history, social histories, and so forth, and offer a predictive score, a risk score, concerning the likelihood of recidivism. And the criminal justice system across the country has begun to use this tool to make judgments about whether or not people should be let out on early parole or otherwise have their sanctions adjusted. The main purpose of the tool is to think about issues of safety, public risk, while also trying to facilitate faster return of offenders to society. The tool has come under a lot of criticism for bias, and it's basically the standard conventional case for thinking about bias in algorithmic decision-making. And the way in which the bias is often presented, there is a ProPublica investigation that brought out the relationship between false positives and false negatives.
So down here in the chart at the bottom: for white defendants who were labeled higher risk but didn't reoffend, 23.5% of white defendants are in that category, whereas for African-Americans, 44.9% of those who were labeled higher risk didn't reoffend. In other words, African-Americans are getting labeled higher risk and not reoffending. And in some sense, they're not getting the chance to have that earlier release that the white offenders are getting. And conversely, of white offenders labeled lower risk who did reoffend, the number is 47.7%; for African-Americans, 28%. And so that looks like a pretty straightforward picture of unfairness and bias in the algorithm, insofar as the white offenders are getting more of a chance for early release than African-Americans who would apparently do equally well by it. There has been a pitched battle over this data and over whether or not the concept of bias is the right way to capture what it represents. And one of the reasons for the heated nature of the argument is the simple fact that the field of statistics has multiple possible ways of defining parity or fairness. And so then, the point that is also relevant: the ones that have been focused on for this kind of algorithmic decision-making are what are called precision parity, a ratio between true positives and predicted positives; and true positive parity and false positive parity, where the ratio is between true positives and actual positives on the one hand, and false positives and actual negatives on the other hand.
And the difficulty with these three definitions of fairness or parity, and I'm not gonna pretend to be able to represent the math, although Cynthia, sitting here, could help us out if anybody wants this actually explained, the trouble is that these kinds of parity cannot be simultaneously achieved if you have an underlying population data set that has within it statistically distinguishable subpopulations, or what is here described as different base rates within the population in the data set. In other words, with these three concepts of fairness, it is mathematically impossible to build an algorithm that can render all of them simultaneously. So when you're building an algorithm on the kind of data that exists about criminal justice offenders, you have to pick: which kind of unfairness do you want to live with, and which kind of fairness do you want your algorithm to try to optimize for, all right? So this is the first limit on what algorithms can do, or something they can't do. They can't set the objective for the decision made by the algorithm. It has to be a human being who's choosing, again, which kind of fairness to make the objective for the decision-making algorithm, and which kind of unfairness we should all just decide to live with, okay? A human being has to set that objective for the decision being made by the algorithm. So that's the first thing to recognize about what algorithms can't do and what human choice-making does do, okay? So the second thing to pay attention to is another place where human beings set the objective. So COMPAS is considered to be a second variation or an improvement on a preexisting tool that had been developed in Canada that goes by the acronym LSI. The COMPAS tool, the acronym that that stands for is Correctional Offender Management Profiling for Alternative Sanctions, did you all follow that? Correctional Offender Management Profiling is the core concept.
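To make the tension among the three parities concrete, here is a minimal sketch. The counts below are invented toy values, not COMPAS data, and the function is my own illustration; it just shows that when two groups have different base rates of reoffending, an imperfect classifier can equalize precision and the true-positive rate across groups, yet the false-positive rate still comes apart.

```python
def rates(tp, fp, tn, fn):
    """Precision (PPV), true-positive rate, and false-positive rate."""
    precision = tp / (tp + fp)   # true positives / predicted positives
    tpr = tp / (tp + fn)         # true positives / actual positives
    fpr = fp / (fp + tn)         # false positives / actual negatives
    return precision, tpr, fpr

# Toy confusion matrices for two groups of 100 people each.
# Group A's base rate of reoffending is 50/100; Group B's is 20/100.
group_a = rates(tp=40, fp=10, tn=40, fn=10)
group_b = rates(tp=16, fp=4, tn=76, fn=4)

# Precision matches (0.8 vs 0.8), the TPR matches (0.8 vs 0.8),
# but the FPR differs (0.2 vs 0.05): with unequal base rates, an
# imperfect classifier cannot equalize all three parities at once.
```

The point the talk is making falls out of the arithmetic: fixing any two of the parities pins down the third via the base rate, so someone has to choose which disparity to accept.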
The Canadian tool had as its core concept a level of service inventory. What's at stake in that difference? In the American context, the US context, the data is used to assess, for the most part, the public safety risk to society of early release for an offender, okay? The question of whether they're likely to recidivate is a question about what's the right kind of public safety balance for society. In the Canadian case, the algorithm is built on the same categories of data, all right? So it tracks the same underlying correlations between socioeconomic status and race, between levels of education and race, between likelihood of coming from a broken family and race, as in the American case. But the Canadian tool was designed to ask and answer the question: what services does a person need to succeed on reentry? Okay, it's the same tool, completely different purposes. And the different nature of the purpose transforms what the algorithm is and what it's doing for us. So one would think about differential rates of treatment for different subpopulations quite differently for a tool whose purpose was to facilitate successful reentry than one thinks about it for a tool whose purpose is to determine who is too much of a risk to society to be released, all right? So the second thing algorithms can't do is set the objective for the social practice in which the algorithm is used. Human beings have to do that, okay? So human beings have to set the objective for the actual decision being made by the algorithm, and human beings have to set the objective for the social practice within which the algorithms are used. All right, so there's a third thing human beings have to do that algorithms can't. So there's a lot of discussion in the world of algorithms about bias in judges, and there is a lot of bias in judging.
There's empirical evidence that the kind of decision that you'll get if a judge has missed lunch is very different from the kind of decision you'll get if the judge has had lunch, okay? So there's definitely variation and variability, and algorithms are a tool for trying to protect against that and achieve fairness and equality. But it's also the case that the thing a judge can do that an algorithm can't do is make decisions in the middle of the process about which thing should be the objective. So criminal justice has multiple values structuring it. There is the value of security for the community and public safety. There is also the value of rehabilitation, and you can go down the line. There's the value of restorative justice, which is about restoring relationships in a community where something has gone wrong. And judges are in a position to shift course in a proceeding and decide in one instance that rehabilitation ought to be more to the fore, and in another instance, that public safety ought to be more to the fore. But once you've designed an algorithm and picked an objective for it, you've locked in that objective. You lose the capacity to be flexible in choosing the goal that you're after. So the other thing that algorithms can't do is exercise flexibility in relation to which objective to prioritize in a particular moment. It may not be, anyway, we'll come back to that. Okay, so the key points then are that human decision-making has within it, and the human power of choice is distinguished by, our job as objective setters, goal setters, people who select the ends for our human decisions, our human practices, and then also shift in our choices among them, recalibrate the weighting of different values, different end goals, as we go. Okay, so there are also limits to this objective-setting capacity of human beings.
So having seen what we should do, the powers we should assert, I wanna turn to the question of where we might hold back in relationship to that objective setting. And for this, I wanna focus on the example of gene editing, okay? So you all probably already know the compelling argument that Michael Sandel has made against enhancement. And as I think about gene editing now, I'm gonna focus specifically on enhancement. I'm gonna leave aside the use of gene editing for curing disease or reducing disease risk. I'm gonna think specifically about the question of whether or not we should use gene editing for purposes of, say, improving memory or muscle strength or other forms of human capacity. Height would be another possibility. And Michael Sandel has made a strong case against pursuing this kind of perfection. His argument for this is that the grounds of our humanity require accepting that there are things that we cannot control, that there are limits to our will. In his view, accepting the limits to our will is a necessary part of anchoring humility and social solidarity, among other key virtues, which are not virtues only, but also, in his view, definitional of our humanity. So accepting limits to our will is a matter of protecting things that make us the kinds of creatures we are, human beings, again, with humility and social solidarity. I think Michael Sandel is right about this, but I think that there's more that we can say about where and why we should think about limits to our will, limits to our power of objective setting. And so at this point, I'm going to use an informal idea that I refer to as the doodling thesis, okay? So if you study the work of any human creator, philosopher, novelist, poet, artist, one finds that over the course of the lifetime of that individual, there is immense continuity in what their mind creates.
In essence, each of these creators has some sort of doodle, an intellectual or aesthetic shape or form that they iterate on and elaborate over time to a remarkable extent until it reaches great sophistication. But the seed of the last, most sophisticated production that comes at the end of the career is typically already visible in the first, earliest efforts. And just as individual human beings doodle, so do human societies. We repeat over and over again recognizable patterns in our social forms, most notably domination and inequality. That's our generally shared human doodle. So even when we constrain domination by setting up democracies, we have as of yet not been able to create democratic social forms not marked by significant inequality. We should expect, then, that human doodling carried out via enhancement editing of the germline will in meaningful ways further extend the reach of our power to create inequality. Moreover, we can also assume that domesticating the human germline through submission to our will will reduce what is available to us in our world to affirm or behold, as Michael Sandel puts it. However powerful the human mind may be, the human mind is not in fact as complex and multifaceted as the natural world. And to seek to domesticate the germline would be to reduce its complexity and variation to the limitations of the human mind, limitations evident in the human tendency to doodle inequality over and over and over again. For the sake of preserving the existence of more than ourselves and our powerful but limited range of intentionalities, we should accept that we not pursue general alteration of the germline. Our power of obliteration is too immense. Just as we have accepted limits on the use of nuclear power because of its capacity to obliterate, so too should we accept limits on editing of our germline.
So what about this idea of accepting limits because of our power to obliterate, accepting limits to where we use our power of setting objectives? This pyramid graph comes from a completely different context from germline editing. It comes from the context of thinking about the impacts of digital technology on our social world. And it's the very recent work of the German Data Ethics Commission, as they call themselves, which has recently released some recommendations. And with this figure, they offer what they call a criticality pyramid and risk-adapted regulatory system for the use of algorithmic systems, all right? And it's a way of separating out how we should think about different kinds of algorithms. With the algorithms at the very bottom, in the green, being algorithms with zero or negligible potential for harm, and up top, in the red, the idea is that there are applications with an untenable potential for harm, for which the response should be a complete or partial ban of that algorithmic system, okay? So in other words, the recommendation is that in the space of algorithms, we recognize that there might be things where the power involved is so immense and the potential for harm is untenable, such that we would accept, again, a limit on our capacity to exercise our objective-setting power in relationship to those algorithms and accept a ban on them. So my point about germline editing is to suggest that it is in that same red space, a sort of power to use our objective-setting capacity that is too significant, that has too much potential for obliteration, and that we should accept a limit there, okay? So, in summary, for this part: if we accept that the relation between human choice-making and technology should be that we assert our power to establish objectives for technology, while also accepting that there are some domains where we should not employ our power to set objectives, two further questions emerge. Who's the we?
Who gets to set the objectives? And what is the range of acceptable objectives, okay? So that's what we need to work on next. So to answer these questions, I'll carry on with my two cases, returning back again first to gene editing and then after that to algorithms. So this slide comes from your dean, George Daley, who's been doing a lot of work on the question of the ethics of gene editing. And it's a terrific effort to start to frame a decision-making structure for gene editing with three categories: the use of gene editing for disease prevention, the use for modifying disease risk, and then a potential use for enhancements. Of course, we might all want things like low odor production as an enhancement for the human species, but it's in the red category here, right? And why is it in the red category, whereas Huntington's is in the green-light category straightforwardly and Alzheimer's is in the middle space? It's a pretty straightforward framework, ultimately. It comes just directly out of medical ethics and the question of what the job of the medical profession is. So the job of the medical profession is straightforwardly to cure disease. And so the use of gene editing for things like Huntington's, cystic fibrosis, and so forth is well within the boundaries of what the profession understands to be its core purpose. Modifying disease risk is also in that space, but that's where we get to more of a gray area, of course. But the profession itself, the medical profession, doesn't, in terms of its own understanding of what it does, have a reason to endorse enhancements, okay? So in other words, if one uses a framework for thinking about gene editing that comes directly from medical ethics and the professional ethics of this universe, it seems as if the questions about gene editing are relatively simple and straightforward. The difficulty, of course, is that it's not only doctors, medical professionals, who might have an interest in using the technology of gene editing.
So the question is then, are other people going to be interested, for example, in the category of enhancements? And in particular, would state actors be interested in the category of enhancements? And what about consumer markets? Are consumer markets gonna try to drive use of enhancements? And that's where you get the sort of rhetoric and discussion of things like designer babies and the like. So it's interesting that this framework aligns quite well with a statement put out in November 2018 by the Second International Summit on Human Genome Editing. This was the summit that took place just immediately, sort of co-timed, with the announcement that a Chinese scientist was claiming to have produced the first viable pregnancies resulting from gene-edited embryos implanted for the pregnancy. And the statement that the summit put out included the following remarks recommending limits and focusing again on that medical framework. As part of their commitment to fostering in-depth and international discussion about human genome editing, the Academy of Sciences of Hong Kong, the Royal Society of the United Kingdom, and the US National Academy of Sciences and US National Academy of Medicine organized the summit to assess the evolving scientific landscape, possible clinical applications, and attendant societal reactions to human genome editing. While we, the organizing committee of the second summit, applaud the rapid advance of somatic gene editing into clinical trials, we continue to believe that proceeding with any clinical use of germline editing remains irresponsible at this time. So a very strong statement. I have highlighted the relevant two phrases, however, to bring out the fact that the single application under discussion is an application in the context of the clinical medical setting.
And the difficulty with emergent technologies, of course, is that their potential use escapes the boundaries of the profession that may have brought them into existence. So this is a good and reasonable statement for how one should think about germline editing in the context of the medical profession, but the question is, what's left out? What other questions are still on the table? And to some extent, the statement recognizes what's left out: the question of where state actors fit in, for example, and consumer markets. But again, there's a certain kind of implicit presumption in how to think about, in particular, state actors. So reading a little bit more from the statement: in addition to the establishment of an international forum, the organizing committee calls upon national academies and learned societies of science and medicine around the world to continue the practice of holding international summits to review clinical uses of genome editing, to gather diverse perspectives to inform decisions by policymakers, to formulate recommendations and guidelines, and to promote coordination among nations and jurisdictions. And so with this invocation of promoting coordination among nations and jurisdictions, the statement is really tying that coordination again to this question of clinical use. So there's an aspiration to tether the world's communities, the world's nations, to the limit in relationship to clinical use that the community here, the Summit on Human Genome Editing, is establishing. The difficulty with this, of course, is that the world is full of a diversity of kinds of regimes and polities. And it's not entirely clear that they would all want to set objectives in the same way that the medical profession, dominated by US medicine, would want to set objectives. So this typology provides one way of thinking about the world's state actors. It's a sort of evolved version of Rawls' basic structuring for how to think about governments from his book, The Law of Peoples.
All right, so you can divide the world into well-ordered regimes and outlaw regimes. And you can think of the category of well-ordered regimes as including constitutional democracies; rights-protecting autocracies, where people don't get to participate, but nonetheless the government protects their rights at some basic level; and then material-well-being-providing autocracies, which don't necessarily protect rights, but do provide material well-being. And then there are the non-decent, rights-violating regimes. And if you wanna understand in finer detail the differences among these categories of regime type, one pays attention to what categories of rights they're protecting and what they're delivering with regard to material questions. So negative liberties and rights are those that are about freedom from interferences of various kinds. So freedom from government interference with regard to your religion, with regard to expression, with regard to rights of association. They're about being left alone to live your life autonomously. Positive liberties and rights are about the rights to participate in democratic processes, to participate in steering the institutions of your society. Social rights are all the rights that we capture with health and education, economic and welfare rights, and so forth. And then there's a fourth category that I think we might use to distinguish among regime types, which is in addition to the typology that political philosophers have conventionally used. And that would be a category for whether or not the given regime is achieving social equality and/or non-discrimination, actually establishing equality in social relations among subpopulations in the community, so that discrimination and inequality at the social level don't undermine the securing of rights that's being done at the institutional level.
And so you can think of the constitutional democracies of Europe and the US as protecting negative liberties, protecting positive liberties, and protecting social rights to varying degrees, Scandinavia much more so than the US, but with this question about whether we've achieved social equality being very much TBD for all of these constitutional democracies. It's the piece of work we have not figured out how to do yet; we're still trying to figure out how to do it. Rights-protecting autocracies sometimes do protect negative liberties, but they don't give out positive liberties at all. They may give out social rights; they may support health, for example, quite well. Material well-being providing autocracies would be, roughly speaking, the kind of place that China fits: doesn't really give much in the way of negative rights or liberties, definitely doesn't provide positive rights or liberties, but does provide social rights, strong support for health, for example. Again, social equality is a real problem, as we're currently seeing. So the point is that when we want to ask the question of how a state actor is going to respond to the guidance that germline editing be limited to clinical settings, one actually has to think about that question in relationship to the diversity of regime types in the world, and the very different ways they're likely to think about their own powers in relationship to their populations. And so one might begin to wonder whether, say, an outlaw regime (that would be your North Korea, for example) might think: well, actually, muscle enhancement and height enhancement, those might be things that help us achieve more of the kind of power that we want. 
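The typology just described can be pictured as a small data structure: regime types scored against the four categories of rights. Here is a toy sketch in Python; the category names and the rough scorings are my own reading of the discussion above, not anything canonical from the lecture or from Rawls.

```python
# Toy encoding of the regime typology discussed above.
# The scorings ("yes"/"no"/"sometimes"/"varies"/"tbd") are an
# illustrative reading of the talk, not an authoritative taxonomy.

REGIMES = {
    "constitutional_democracy": {
        "negative_liberties": "yes", "positive_liberties": "yes",
        "social_rights": "varies", "social_equality": "tbd",
    },
    "rights_protecting_autocracy": {
        "negative_liberties": "sometimes", "positive_liberties": "no",
        "social_rights": "maybe", "social_equality": "tbd",
    },
    "material_wellbeing_autocracy": {  # roughly where China fits, per the talk
        "negative_liberties": "no", "positive_liberties": "no",
        "social_rights": "yes", "social_equality": "tbd",
    },
    "rights_violating_regime": {  # the non-decent, outlaw category
        "negative_liberties": "no", "positive_liberties": "no",
        "social_rights": "no", "social_equality": "no",
    },
}

def well_ordered(regime_name):
    """Crude proxy: a regime counts as well-ordered here if it secures
    either negative liberties or social rights to some degree."""
    r = REGIMES[regime_name]
    return r["negative_liberties"] != "no" or r["social_rights"] in ("yes", "maybe", "varies")

print([name for name in REGIMES if well_ordered(name)])
```

The point of the sketch is only that the categories are separable and checkable one by one, which is what lets the talk ask, regime type by regime type, how each would respond to a clinical-use-only limit.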
Then if there's a realistic possibility that one of the regime types would start to use the technology for purposes that the scientific and medical community think are problematic, one has to ask a question about the release of those technologies into the world generally and how to think about that control and dissemination. And that's a difficult project to undertake, not one that sits easily with the skill set and expertise of the scientific and medical community. The easiest way to understand what I mean and the kind of work I'm trying to get at is to think about the example of nuclear power, okay? So nuclear power, of course, is the product of collaboration between universities and the government, and it was developed in the first instance for the purpose of military power. So in that regard, human beings set the objective. The objective was one of power, projection of power and destruction. And then the scientists who were involved in that work also very quickly regretted it, regretted the nature of the technology that they had produced, its power to obliterate. But the world, and not just the scientists, did quickly recognize the magnitude of the danger and consequently developed incredibly strict controls for the release of the technology. I'm not advocating this for germline editing; that's not the point of the parallel. The point is simply to get us to think about what it means to imagine how different societies around the globe might use a new technology, and to think about whether that changes how we want to think about our use of it at home. Because the issue of nuclear power has that shape. As we worry about the use of nuclear power around the globe, it changes how we use it here at home as well. And the way that shows up is that the world has five official nuclear weapons states. Okay? So at some level the world agreed to control the dissemination of this technology. 
So as I said, there are five official nuclear weapons states. Of course that control is imperfect, and so there are, in reality, eight states with nuclear weapons, and of course more trying to join that club. And so that is a constant element of friction in world politics. But at the same time that this is what we decided on as how state actors should engage with nuclear technology, we cast a broader net for commercialization of nuclear technology. So 45 states have nuclear power to support the provision of electricity, all right? So that's a different way of thinking about the dissemination of the technology when the relevant actors are using it for electricity, and it's both commercial actors and state actors, of course. But to remind you of the degree of control involved: California has a 1976 law that prohibits the construction of new nuclear power plants until approval of a means to dispose of spent fuel. That continues to be a problem for the nuclear industry in California, with the result that California imports most of its electricity, okay? And so when we think about electricity in California, you can see the really tight interconnection between decisions about how to regulate technology and broader questions of the political-economic structure of a society and its social structure. But the point is that as people thought about different actors using nuclear technology, they developed different protocols of control for how to put in the hands of specific actors that power of objective setting that I started out with. Okay, so where does that leave us exactly? The first point I've really tried to underscore is that human beings set objectives. That is the most important feature of our choice-making, all right? So as opposed to the kinds of ways algorithms work to deliver on an objective, we are the ones who set the objectives. 
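That distinction between delivering on an objective and setting one can be made concrete with a toy example. The sketch below is not from the lecture; the names, scores, thresholds, and both "objectives" are invented purely to illustrate that the same data and the same machinery produce different decisions depending on which objective a human hands the algorithm.

```python
# Toy illustration: an algorithm only optimizes whatever objective a
# human chooses for it. All data and objectives here are invented.

candidates = [
    {"name": "A", "risk_score": 0.2, "hardship_if_detained": 0.9},
    {"name": "B", "risk_score": 0.6, "hardship_if_detained": 0.8},
]

def detain_to_minimize_risk(c):
    # Hypothetical objective 1: detain anyone whose risk score
    # exceeds a fixed threshold.
    return c["risk_score"] > 0.5

def detain_weighing_hardship(c):
    # Hypothetical objective 2: weigh risk against the hardship
    # detention would impose on the person.
    return c["risk_score"] - 0.5 * c["hardship_if_detained"] > 0.3

decisions_1 = [c["name"] for c in candidates if detain_to_minimize_risk(c)]
decisions_2 = [c["name"] for c in candidates if detain_weighing_hardship(c)]

# Same data, same machinery; only the human-chosen objective differs.
print(decisions_1)  # ['B']
print(decisions_2)  # []
```

The choice between the two objective functions is exactly the choice the algorithm cannot make for us.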
And we do that again in the decisions, in the social practices in which we use technology, and in being able to move between different kinds of objectives over time. And the people to whom we allocate the power, authority, and responsibility to set objectives for new technologies include members of professions, such as the medical profession, consumer markets in some cases, and state actors, all right? And so a first job in thinking about our choice-making power as involving this objective setting entails ascertaining which of these actors should have what kinds of powers, authorities, and responsibilities in relationship to any given technology. But once we have clarity about this issue of connecting the actors who have the authority to set objectives to a given technology, there's still another question that has to be answered, right? And this is the question about the criteria that we would expect these different actors to use in order to make their judgments, to set objectives for the use of any particular technology. So here I'm gonna switch back to the space of thinking about algorithms and data science, and what we can learn from that context for how we might set objectives. I'm gonna propose, in fact, that it's time for us to change some of our criteria of assessment. So as we think about what overarching objectives should guide each of these categories of actors, professions, state actors, consumer markets, in making decisions about the purposes to which technology should be directed, I think it's time for us to revisit some of our basic criteria. The contrast I wanna draw is between 20th century liberalism and 21st century egalitarian participatory democracy, okay? 
So as we have been used to thinking, I think, often in the context of bioethics, we have focused on core concepts of liberalism: autonomy, fairness, individual rights, the difference principle or principle of equity, so that we're paying attention to the distributive consequences, too, of how new technologies will operate. These are important criteria, all of which I think are worth carrying over, but if I had one criticism of 20th century liberalism, it would be that it has routinely undervalued democracy and the communitarian values necessary to support democracy. It has prioritized justice at the expense of democracy, in my view and in arguments I make elsewhere, not recognizing that democracy is in fact one of the best means to justice, that the two concepts align with each other and needn't be set in counterpoise to each other. And so I myself have been happy to find that the same German report that I mentioned has developed a new set of criteria that line up with the picture of an egalitarian participatory democracy that I would generally advocate. It's important that the basic criteria of liberalism are also transported over into the criteria of egalitarian participatory democracy. So one's not losing anything in expanding the criteria set, expanding what's relevant to our decision making as we set objectives for the use of emerging technology. So the criteria as listed here are human dignity, self-determination, privacy, security, democracy, justice and solidarity, participation, and social cohesion. And I wanna note that some of the criteria I've been talking about as I've moved through the course of these remarks are very much incorporated here, folded in. So the medical profession's commitment, for example, to curing disease, to mitigating or preventing disease risk, folds into the concept of human dignity, the questions of the kinds of things that are responsive to the human need for well-being and human flourishing. 
Relatedly, questions of economic justice are folded both into justice and solidarity and into the notion that we're trying to build a democracy with social cohesion. Oh, and sustainability, sorry, I missed that one; that was an important one to have left out, I apologize. So, for example, as a footnote, both the justice and solidarity concept and the participation and cohesion concept include distributive justice and the question of how to build egalitarian and empowering economies that support social cohesion. So some of the common criteria that we have used for thinking about decision-making around technology are very much folded into this framework coming from the German context. But what this framework gives us is in effect a rubric for allocating this power of objective setting that I've tried to describe, okay? Where, again, the question is which actor is the one who gets to set these objectives, and then in relationship to which criteria will this actor be likely to succeed. So if we take the case of gene editing again, I wanna show you how I think such a rubric might help us understand what we need to do from a policy point of view in this space. So the reason Dean Daley's framework for decision-making around gene editing makes sense, is successful, is tidy, is because the profession is committed to this list of criteria. The profession's commitment to curing disease, to the basic norms of medical ethics, ensures that it will consider the use of this new technology in alignment with this full group of criteria. With regard to state actors, we have no such immediate assurance, right? We could consider them likely to be using gene editing in the right way only if they too are ready to limit use to medical purposes through the clinical setting. 
With consumer markets, when it comes to gene editing, we don't have any reason to think that consumer marketing of gene editing techniques, again for enhancements of a variety of kinds, is likely to be protective of human dignity, or likely to be protective of the kinds of social cohesion that underpin democracy. To the contrary, the consumerization of gene editing techniques is much more likely to drive further stratification and abandonment of human dignity as a cultural matter. So taking this chart, what one would take away from it is something like where the summit got implicitly but not explicitly: namely, that the medical profession has a good decision-making frame for thinking about gene editing; that it's a reasonable technology for state actors to be engaged with, presuming or provided that those state actors limit their engagement to the criteria of the medical profession; and that it is not reasonable to support consumerization, consumer marketing, of gene editing. If one agreed with that general framework, one would then have a very specific view about how gene editing technology that moves into the world ought to be regulated: that there ought to be quite strict controls, licensing requirements and so forth, around its use and its translational structures, in order to limit its use to that medical context. Again, the example of nuclear power provides you an analogy for the way that can be done. So even with nuclear power in its commercialized forms, the licensing requirements are among the most strict and onerous of any industry in the country. And as a result, the transition of nuclear power into the commercialized space is actually a very slow process. 
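The rubric being described, which actor gets objective-setting authority, judged against which criteria, can be sketched as a small table. The criteria names, the verdicts per actor, and the decision rule below are my own illustrative reading of the argument above, not the summit's or the lecture's actual chart.

```python
# Hypothetical rubric: which actors should hold objective-setting
# authority for gene editing, judged against the criteria discussed.
# All verdicts here are illustrative, not an official chart.
# True = meets the criterion, False = does not, None = only conditionally
# (for state actors: only if limited to clinical use).

actor_meets = {
    "medical_profession": {"human_dignity": True,  "social_cohesion": True,  "democracy": True},
    "state_actors":       {"human_dignity": None,  "social_cohesion": None,  "democracy": None},
    "consumer_markets":   {"human_dignity": False, "social_cohesion": False, "democracy": False},
}

def verdict(actor):
    vals = actor_meets[actor].values()
    if all(v is True for v in vals):
        return "grant objective-setting authority"
    if all(v is False for v in vals):
        return "do not support"
    return "conditional: limit to the medical profession's criteria"

for actor in actor_meets:
    print(actor, "->", verdict(actor))
```

The sketch is just the chart's logic made explicit: the profession clears every criterion, consumer markets clear none, and state actors sit in between, which is why their engagement is reasonable only on the clinical-use condition.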
It's also worth saying out loud that there's more commercialized nuclear power in the US than in any other country in the world, which is another way of underscoring the fact that different states have very different judgments about which actors should get these kinds of authorities and responsibilities for objective setting around new technologies. We are a society that is much more likely to be open to commercialization. I am suggesting that for germline editing, we should try to draw a strict firewall against doing that. And I'm also suggesting that we do have examples of how to do that, specifically in the context of nuclear technology. All right, so we've been through a lot. This is complicated terrain. I am, just like most other people wandering this planet, somebody who feels buffeted by technology, who doesn't have herself any particular training in biology, in computation, et cetera, but who nonetheless feels it's necessary for all of us to try to master this space enough to help shape it, to contribute to how we will collectively allocate those authorities to set objectives and how we will collectively articulate the criteria that we expect our objective setters to use in moving new technologies into the world of application and translation. So it's a hard, complicated space, lots of puzzles to think through, but nothing could really be more urgent. And I return just to the point that I started with, that serenity prayer. Oops, rats, there we go. Okay, not rats, but I really do think that it captures what it is we need collectively in this moment: the serenity to accept the things I should not change, the courage to change the things I can, and the wisdom to know the difference. So accepting that there are some things we shouldn't change is not really where our cultural ethos is at the moment. We are definitely in the break-it-and-figure-out-what-happens-next sort of mode. 
And so I am suggesting that we need to do some cultural work to be more accepting of the fact that there are things that we should not change. And we can remind ourselves that the story of nuclear power shows us that we are capable of accepting those limits, of recognizing them and understanding them. The challenge really comes in the wisdom part, how to know the difference between the red part of that pyramid and the rest. So I am trying to recover for us the notion that there is a red part of the pyramid for technological change. There are powers we will acquire that we should not exercise. And then there are other powers that we will and should exercise, and for those we need to do that with courage, working together on objective setting. But the hardest work of all, the hardest work of all, is going to lie in knowing the difference between those categories, the red category and the rest of it. There I think our powers of discernment are somewhat weak at the moment, and we could all collectively work together to rebuild the capacity to make that kind of judgment in particular. Thank you for your time and attention. Thank you so much, Danielle. We're gonna open it up here for questions in a moment and go for a few minutes. If I could ask the first one. Absolutely. I thought you made a very compelling case about germline editing and all the reasons why this is something that we should not do. So how do you respond, though, to what we know has already happened? I think it was pretty much a year ago this week that the Chinese scientist did do germline gene editing on two embryos. Now that was in the face of, for years, our saying that there should be a moratorium on these things, but that was ignored. And then once he did it, there was outrage. There were all these international societies that got together again, saying, this is wrong, there's this moratorium. 
And now in the last couple of weeks a Russian scientist has said he's moving forward with germline editing to prevent congenital deafness. Again, there was uproar about this. He has since said he won't move forward without approval. But then we look at what they're doing in Russia. They've created a commission to look at this, and the person who's heading it, I understand, is a Russian endocrinologist who also happens to be Putin's daughter. And so we really don't know how this is gonna go. So I'm just wondering, how does the structure that you've painted for us work in a world where these constraints are only as good as the governments that are able to regulate them? Great, I think that's a perfect question. Thank you, Bob. I wonder if I can have the slides back. Would that be all right? It'll just be slightly easier to answer if I can. Ah, beautiful, there they come. I can see them; you guys can't see them. Okay, great. I'm gonna zip back to, okay. So there's something I didn't say about the moratorium concept that I should have said out loud, and it's a relevant first part of the answer to your question. The reason the summit offered for the moratorium is not, for example, that the disease prevention approaches shouldn't be used, but that at this point there's no way of in fact ensuring that the technology is safe for the specific patients being treated with it. So in the case of the Chinese scientist who did this, for instance, the folks who have read the paper suggest that not only is there always some imprecision in gene editing, but that in his case it was quite significant imprecision, and people really do not know what changes he actually effected in the genes of the embryos and what the long-term consequences will be for the children born in that way. 
And that's where the level of technology is: you can't at this point control the editing precisely enough to guarantee the kind of curative result that you're promising. So that's a key reason for thinking that there should just be a complete ban at the moment, because we're not actually masters of the relevant technology. So then the second thing that I'm putting on the table, which is what your question is really about, is how to think about these different categories of use, with enhancements being a third category, and the interaction among these categories of use with different kinds of actors, so Russia, China as states and what their intentions might be. And that's where, from my point of view, this question of whether or not this is something that we should truly not do, and think of as like nuclear power, is of urgent importance. We who are fine with this [disease prevention], but think this [enhancement] is probably a red, need to come to a view about how seriously we think it is a red. Because if we are serious about thinking this is a red, then we actually do have to treat this like nuclear power. And then we have to think about the ways that international controls were established around nuclear power as being relevant to how we think about the translation of germline editing technology into the world more generally. But the first step to doing something like that is actually answering, with a sense of clarity amongst ourselves, whether we do genuinely think that this enhancement category should be completely off limits. Yeah, so going off of his question, today we have people known as biohackers, people that are selling these kits; I think the guy is Josiah Zayner. I've actually purchased one of the kits and had it delivered to my house. And there are people that are doing germline editing on dogs to make them glow in the dark, or doing this for other things. 
So their argument there is to democratize this. And when you talk about nuclear energy and stuff like that, the history is very dark there as well, with the research done on pregnant women, soldiers, and so forth. So the idea of democratizing germline editing is something that may be a slippery slope, because even in America there's not really any grasp on how to keep this out of the hands of scientists like Dr. He in China, let alone out of the hands of amateurs in America. So there's no question but that if you decided that you wanted to try to control germline editing, you would have a massive challenge, just a plain logistical challenge of enforcement, on your hands. But at this point, we haven't actually grappled with what such an enforcement effort would require enough to answer the question of whether we think we could feasibly do it. My guess is we could feasibly do it. There are all kinds of controlled substances in the world. We have varying degrees of success in relationship to those controlled substances with regard to actually controlling them, right? There are reasons that, for example, marijuana is very hard to control, but something else, like stolen televisions, is much easier to control. And so the question of where this would fall on a spectrum of things that you would try to control remains to be answered. So in that regard, I don't think we can just presume it's going to escape and be automatically usable in ways that escape social control until we've fully investigated that question, and we really haven't done that yet. The reason we haven't done that yet is because we haven't answered this question about whether we think it should really be off limits. Hi, thank you. Hello, my name is Anna, and I'm the joint fellow between these two centers. So with genetic editing, there's two distinctions that usually get made. Sorry, I'm just gonna say that again; it wasn't heard. 
There's two distinctions that normally get drawn, and you've mentioned both of them. One of them is therapy versus enhancement. And the other one is germline versus everything else, which we'll call existing humans, right? And so my question is about what goes into that red triangle. If your concern, as you made the argument, was about inequalities and domination, then why are you concerned just about germline editing? Why aren't you concerned about the editing of existing humans? And also, we have a problem in general with the therapy-enhancement divide, right? Everybody thinks it's a very hard one to maintain. Yep. I recognize that it's a hard one to maintain, and so I was putting that particular problem aside. I wanted to really force the question, the conversation, about whether we think that there should be an effort to treat something as off limits, on the notion that one can come back to that hard problem later and make some arbitrary decisions about which things fall on which side of the line, and we'll live with that arbitrariness as a secondary feature of a prior, more important decision. But it's a fair point: I was talking mostly about germline, but somatic enhancement is relevant too to the question of what we think should be off limits. So I gave two reasons for thinking that enhancement was problematic. One was the worry about inequality and domination. But the other was about, over time, restricting the human germline to what we can imagine, which is a reduction, in some sense an impact on ourselves of the same kind that we've seen in our impact on biodiversity generally, right? And because the latter is an actual obliteration of things that exist, a continuous reshaping over time, I do think that's in a different category than enhancement of existing people that would not be perpetuated in the germline. 
That said, it seems to me that if you think that enhancement is problematic simply for the inequality reasons that I argued for, then there are lots of ways to think about regulation around it that are mitigating of, or responsive to, that piece. So I do think we should separate the germline question from the enhancement-of-existing-humans question. Lachlan Forrow, Beth Israel Deaconess, proud former fellow at the Center, before it was the Safra Center. So thank you so much for this wonderful overview. A couple of comments and then a question. First, thanks for the Serenity Prayer, or version two. As a palliative care doctor, I truly recite the Serenity Prayer many times a day, but I think you're halfway to what we want, because if the first "can" becomes "the serenity to accept the things that I should not change," then it should also be "the courage to change the things that I should." But thank you for that, because we're going in the right direction. Second, on gene editing, I was reminded of the 10th anniversary of the Hastings Center; and, plugging the Center for Bioethics, the president of the Hastings Center now is Dr. Mildred Solomon, a leader here in ethics for years. At their 10th anniversary, Alasdair MacIntyre gave a closing talk at the dinner on designing our descendants. And he talked about the characteristics that we would all want to design in our descendants. I think this was 1979. And they were intelligence and all these other kinds of skills, and the last one was humility. And when he got to the end and he talked about humility, he said, but there's a really interesting question, because if we succeeded in designing humility into our descendants, they might not want to ever design their own descendants. Maybe we could learn something from that. And then a plug, for people who are interested when Danielle talks about democracy: her book Our Declaration, a reading of the Declaration of Independence in defense of equality. 
And I always thought Thomas Jefferson wrote these brilliant words, and it's not just the content; there were also ideas percolating democratically throughout that were captured there. And it's really an inspiration to those of us who think about how we think about these things. And now the hard question. Nuclear. I knew there was something else coming. Nuclear power. I have spent, starting as a Harvard medical student, 40 years of my life working with doctors, two Nobel Peace Prizes, concerned about nuclear weapons. Those colleagues believe that the existential threat to humanity is nuclear weapons. It is proven by history that nuclear power inevitably increases the risk of obtaining nuclear weapons. If we want the human race to survive, we cannot have nuclear power. I have other colleagues who believe the existential threat to human civilization is the environmental damage climate change causes, that there'll be increased war and other conflict, including temptation to use nuclear weapons, unless we can have continued economic development, equality, security, those things which require nuclear power in the mix. And I can't get them to talk to each other. So, not to solve the problem nationally about what we do about this, but what might the Safra Center or the HMS Center for Bioethics do to convene people to talk about how we can get out of the ideological boxes, back and forth, and say how are we gonna work together to figure out what place, if any, nuclear power can have in a world where nuclear war is an existential threat? What might we here at HMS or the Safra Center do so that people could actually figure out how to start applying what Dean Daley has done about gene editing to issues like nuclear power? 
So what we do routinely at the Safra Center is try to convene people working on specific concrete problems, in structures that give them the chance to bring the values guiding their decisions to the surface, to workshop those values, sometimes re-weight the priorities among them, and then revisit the practical decisions that they would make in relationship to them. And I think that's the kind of structured conversation that you're describing. We do often convene in this way. We always invite faculty members to come propose things, so you can come propose things; you've also got to do the organizing, so if we have a faculty member willing to pull the conversation together and to lead and guide that conversation, we are delighted to structure those conversations. I'd be happy to talk with you about that. I wanna say, though, in addition, that what you've done is put a spotlight on another thing that is important to recognize about algorithms specifically. So, going back to the criminal justice algorithms for a moment: any given use of an algorithm has a kind of fairness objective of some kind baked into it, and some sort of objective for the social practice in which the algorithm is being used. And those things are baked in unless you revisit them. And so, from my point of view, an important feature of algorithmic governance would be structures internal to whatever organization is using the algorithm that permit routine revisiting of the objective setting. Not just revisiting of the data sets and so forth, but routine revisiting of the objective setting. And you're describing, in the context of nuclear power and thinking about environmental questions, the need for some sort of revisiting of objective setting. 
And that's what I mean by bringing values to the surface and scrutinizing them, and thinking about whether we're at a moment where we want to reweight them and judge the practicalities flowing from embracing either of them in refreshed ways. So that element of human choice-making, that you continually revisit your objectives, is another important feature of the argument I'm making for how we should be thinking about governance of technology. Hello, doctor. Hello. I'm in south of Pichardo. Great. And I'm coming from MGH; I'm a patient navigator. And I really enjoyed what you said about the cultural work needed. So my question would be, what do you think would be a good place to start, and what would be some of your suggestions for things we could probably start doing now? What do you think? I'm serious. Honestly, I think that's a question whose best answer would most likely come from all of us asking that question, looking around the worlds that we are in, the organizations that we're in and so forth, paying attention to where the culture around technology is getting shaped in our own organizations, and looking for places to intervene, places to raise questions, or to bring new conversations into the mix. So your question is a big and hard question. I don't have a good answer for it. That's why I'm turning it back to you. So, what do you think? What should we do? Well, the ideal part would be that we all can understand each other, and I know that sounds extremely naive, but I think the main part would be to actually challenge, going back to the limits that you mentioned earlier, going back to those limits, challenge the norms that we already have in place, and not just the norms that we have in organizations, but even ourselves, within our cultures, and how we embrace certain things, especially medicine in our hemisphere, how Western medicine is actually treated. 
Because it's actually one of the challenges that encompasses everything, especially what we can see every day. Not everybody's able to assimilate medicine to the same degree, and the same goes for education and for any other social service available. Okay, thank you. That's super helpful. I appreciate that, and we'll think on that. I guess the other thing I would wanna say in answer to your question is to invoke some new educational efforts: both the embedded ethics program that philosophy has built with computer science and, it sounds like, the revision to the medical school curriculum, which are working to put ethics education inside the context of professional education of various kinds. Not as a sort of siloed thing, where 364 days of the year you learn medicine or you learn computer science, and then on one day you learn ethics and then you never think about it again. Instead, having a model where every day, at some point in the day, you're going to encounter something that asks you to raise questions of values, normative reasoning, ethical reasoning to the surface and engage with it. And so I think just plain restoring the capacity for people in professions, and with technical capacity, to see ethical questions and know how to reason in relationship to them would already help us in the direction of the kind of cultural adjustments that we need. Thank you very much for an excellent lecture. Wendy Parcell, School of Public Health. I'm interested in your positioning around who decides. You talk about the individuals, that it all comes back to human beings and they set the objectives, but who's at that table, particularly when you talked about your doodle and what's baked in? Yep, that's a great question. So, oh sorry, I should have done that in this slide. So this is limited, and I recognize that; I knew as I was preparing these slides that this was limited, and I was a little stuck on how to address its limits. 
The reason I started with these categories of deciders is that, in general, technological innovation emerges in one or another professional context. In the middle of the twentieth century we didn't think of computer science as one of the professions, but really we should at this point, and it too needs to develop a professional ethics in the same way the other professions have. So that gives us an initial starting point. Then, if you're trying to understand the larger-scale impacts of a technology, the other most immediately relevant deciders are the state and markets, and you have to pay attention to how they would be most likely to use these emergent technologies as part of assessing the technologies' likely impact. So it's a practical structure for thinking about what an emerging new technology means. But you're absolutely right that if we think a technology should be used in a relevant profession, or delivered via the market, or used by state actors, then I would absolutely endorse a deliberative-democracy and inclusion principle: one that says you should structure deliberation around those objectives in ways that bring in fully representative constituencies of the society. I didn't talk about this, and there are people sitting in the audience who are much more expert than I am on how democratic accountability is relevant to how we govern technology, but I would absolutely endorse that kind of approach to the who-decides question.
Hi. You drew an analogy between nuclear weapons and nuclear power on the one hand and genetic medicine on the other. One of the things that strikes me about nuclear power and nuclear weapons is that, while the underlying science is very similar, the materials involved are difficult to procure and are very different, and the facilities are very different, for nuclear weapons versus nuclear power. Whereas for something like genome editing, the tools are identical for enhancement and for medical intervention, and those tools are also very accessible. I'm a graduate student in biological engineering, and I'm very familiar with the accessibility of some of the technologies that would be involved, and that accessibility is increasing rapidly. So I'm curious: if we assume that this difference in accessibility makes a fundamental difference in how we could regulate such things compared to nuclear power, what considerations would you bring to your description of how they could, or should, given practical constraints, be regulated?

You are absolutely right that there is a huge difference in accessibility between the two technologies. So when I said that we could use nuclear power as an analogy for protocols of control that shape how you think about state use, commercialization, and so forth, I didn't mean at the level of the specific mechanisms of control. The way you have to license a nuclear plant, for example, is not relevant in the same way in this context. What I meant, rather, is that those policy protocols embody a view about the degree of control to aspire to in relation to different actors, and then there is a second question about the mechanics of control.
So I am recommending, as an analogy, the degree of control but not the mechanics of control. As for the mechanics of control, that is a very hard question for which I don't have the answer. It would go back again to the question of how we think about controlled substances generally, and it's complicated, because there are lots of ways of thinking about that. Lots of people do analyses of whether given controls actually work or instead make distribution problems worse, with illegal narcotics being the key case. There are also tools like pricing and taxes that can affect control; it's not always about criminalized enforcement. There's a whole range of approaches, and I do not know the practical realities of the materials and technologies well enough to have a view at this point. In general, though, there are not many substances for which a fully libertarian argument wins at the end of the day, if you see what I mean. In other words, the accessibility of a given set of materials and relevant technologies tends not, in itself, to determine whether control succeeds. We could also think about the music business and the way it struggled with questions of control around intellectual property, and how that has evolved over time. So there's a big field of learning around the question of control that one would dive into to make sense of this.

Thank you to all of you for coming this evening, and a special thanks to all of our staff, who did such a wonderful job putting this together. Thanks so much.