Good morning, everybody. Welcome to the sixth in our seminar series. I'd like to start by acknowledging the traditional custodians of country throughout Australia and their continuing connection to culture, community, land, sea and sky. So today we have another Harvard PhD. Elettra Bietti is going to be talking to us about her paper "From Ethics Washing to Ethics Bashing". As usual, we're going to keep the introductions short and get straight into the talk. We'll have about half an hour for the talk, then we'll go to Q&A. We'll have a link up in the chat so you can join the Slack for discussion afterwards. But for now, Elettra, if you wouldn't mind taking it away. Hello, everyone. I'm going to try and get this presentation opened. In the meantime, thank you so much to Seth and Shal for organizing and for inviting me. So hopefully you can now see my presentation. So I'm going to be talking about a paper that was published recently, which I presented at FAT* 2020 in Barcelona. And there's a new version of that paper that I'm still working on that will be part of an edited collection of essays on the ethics of technology more generally. So the argument might slightly diverge from the one you can see in the published version on the ACM repository. So I guess the idea for this paper starts from a reflection on the hype that has been surrounding the ethics of AI and the ethics of technology. Lots of funding, lots of attention, lots of people claiming expertise in the field. And yet very little conceptual clarity around the notion of ethics: what we all mean by ethics, what ethics can do for us, whether ultimately we can rescue it from corporate appropriation of the term for potentially unethical practices. And so one of the things that I see happening is that there's a sort of separation or gap between the way ethics is being talked about and what ethics is said to be.
So often it's understood as this neutral and acontextual methodology that produces truth no matter where the people producing those truths are situated. Or at other times ethics is seen as a self-interested rhetoric being deployed inside companies, or by companies, or through company funding. On the other hand, I believe ethics can be something else. I think ethics can be a capacious methodology for evaluating, in principled ways, political disagreements about technology but also about society and our political institutions more generally. So the aims of this paper are mainly to acknowledge the possibilities of moral and political philosophy and to understand it as a capacious, contextually meaningful method that can allow us to do three things. First, it can help us understand what is currently wrong with ethics as deployed, used or understood today. Secondly, it can help us articulate or reimagine the institutions and the institutional framework that would allow us to move past the notion of corporate ethics, and towards governance of technology and of artificial intelligence in ways that are socially meaningful and potentially more just. And third, by recognizing the possibilities of moral and political philosophy, this paper aims to celebrate methodological plurality. So I'm not attempting to say that moral philosophers are truth providers and that we all need to listen to them. I'm actually saying moral philosophy is one methodology, one method for truth seeking, that must be combined with other kinds of knowledge and understandings of the world and of the social good. So the outline for this talk is: I'm first going to very briefly define ethics and moral philosophy, or at least the way that I conceive of them in the paper. I'm not going to spend a lot of time on this because we could spend a whole day on this question, or perhaps years.
Then I'm going to provide some background on the rise of technology ethics and the critiques of ethics washing. Thirdly, I am going to elaborate on those critiques and explore some of their limits in the form of what I call ethics bashing. I will then ask what moral philosophy can actually do for us and what its limits are. So how should we understand the special role and the special place of moral philosophy in thinking about the role of technology in society? I will then use those insights to try and assess ethics washing and corporate ethics through a moral philosophical lens. And I will finally point the way towards a renewed understanding of moral philosophy and offer a general reflection on where we should go next. So first, what is ethics and what is moral philosophy? Very often philosophers distinguish between ethics and morality. We could spend a lot of time thinking about what these two notions actually mean and how they differ, but in the paper I take them as co-extensive. And I largely want to focus on what I call moral philosophy, but also political philosophy, as one broad understanding of a methodology that some would say is rational, others would say principled. But it's a mode of thinking about and evaluating disagreements that are value-based, so grounded in understandings about the world that relate to technology, politics, institutions and, more generally, human life. So now let's move to the rise of tech ethics. The awareness that technology is embedded in society, and somehow affects society and is affected by society, is absolutely not new. Political economists in the 19th century started grappling with the role of technologies and technological development in the economy during the first and second industrial revolutions. People like Lewis Mumford theorized the ways in which techniques and technologies were influencing and determining certain practices in society.
And then later in the 20th century, people like Langdon Winner but also Bruno Latour and many other STS thinkers started reflecting on the dual relationship between technology and society. So on the one hand, society shapes the kinds of technologies that we have, and on the other hand, the technologies we have shape the society we will have. And what AI has done is bring those STS reflections back to the center, along with a renewed curiosity and awakening to the idea that there are values that we encode in those algorithms, in those machine learning models. And that those models that we think are neutral and scientific and apolitical actually have real-world effects on people. And so basically we started thinking again about those questions. I'm not saying AI is the trigger that first prompted us to think those thoughts, but AI has definitely been an awakening, a kind of resonating bell, for us to rethink some of the biases and some of the power structures in our society and how tech could perpetuate them and encode them into our futures. And that acknowledgement prompted the need for a reintroduction of the social into thinking about technology. And then a big question there is: why did we start talking about ethics so much, and why the ethics hype, when we could also talk about sociology, anthropology, history, psychology and so on? And I want to say, obviously, ethics is one amongst a huge constellation of ways of thinking about the social good. But critics have justifiably asked: why ethics, and is there a political motive behind the prioritization and the importance that has been placed on ethics in relation to AI? And so some of the critiques have taken the following forms. First, ethics is a mere rhetorical move aimed at reputation washing, and from there derives the notion of ethics washing, which is generally said to have been coined by Ben Wagner.
Second, ethics enables deep-pocketed, well-funded companies or governments or private actors to shape discourse by fueling a language of principled thinking that is actually political. Third, ethics can be a way of preempting and avoiding law and regulation. Fourth, ethics is an excessively formalistic methodology that enables the legitimation and normalization of certain preexisting power structures: by using the language of ethics, by hiring a philosopher, by making a philosopher infer or rationalize certain norms, we are acting as if those norms were true or ethical when in fact we are just normalizing power structures. And I think what we can take away from those critiques is that they're really important and legitimate, and we need to take them seriously. But one reason that prompted me to write this paper was that very frequently some of those critiques are taken from very articulate and nuanced thinkers and transposed into a political context in which people then have an over-broad tendency to bash everything that is ethics. And I call this ethics bashing, and I think what it is is a failure to recognize that moral philosophy can have a special role, a special meaning, a special methodology to contribute to thinking about technology in society. And that special role actually is important within a constellation of other methodologies and should not just be dismissed because of its potential to be instrumentalized by corporate actors or others. And what I call ethics bashing comes in two main forms, or at least those are the forms that I've identified. One is the tendency to conflate the ought and the is. So moral philosophy, for a moral philosopher or for a participant in the exercise of thinking about moral principles, is an ought. It's aspirational. It's about trying to understand society and trying to posit ground truths.
And sometimes that attitude, which I think is extremely valuable and should be preserved, is conflated with a descriptive practice of ethics: ethics as an institution, as a corporate or political strategy, or ethics as a self-regulatory strategy. So that's one way in which ethics bashing comes about: through the conflation of the is and the ought. And secondly, another thing that can be seen in the literature, or on Twitter, or in general, is sharp dichotomies between ethics and other things such as law, justice or politics. And the failure to recognize that ethics is actually a method and a principled way of informing and guiding law, justice and politics, and that ethics is intrinsically related to law, to justice and to politics. And if we don't understand the role of moral philosophy or political philosophy in thinking about law, justice or politics, we're not really going anywhere. So yeah, ethics can actually be something meaningful, and we should recognize the special role of ethics and moral philosophy in informing law, justice and politics, and how connected ethics is to these reflections around legal and political institutions. And the second insight that we can take away, and one reason why we need to resist ethics bashing, is that ethics is everywhere. When thinking about the place and role of technology and AI in society, we should always understand and situate ourselves within some ethical conception of the world, some view that is part of a vision of what morality or society entails. And so we cannot actually criticize ethics without taking a stance that is itself an ethical stance. All this said, I am now going to focus on some of the benefits and limits of moral philosophy. So, what moral philosophy can do for us and what it cannot do for us.
And this is partly aimed at an audience that is not an audience of philosophers and moral philosophers, but it's also aimed at trying to create a dialogue between non-philosophers and philosophers about the potential boundaries and limits of philosophy, but also its potential. So, what a philosopher can bring to the table if you invite him or her. So, the limits of moral philosophy: there are lots of limits. And the way that I want to frame this is: how should we criticize moral philosophy as a discipline instead of bashing it? How can we be critical of moral philosophical work in ways that can help and can be constructive for ethicists and moral philosophers themselves? So first, moral philosophy, or political philosophy, or philosophy in general, is often said to be abstract, to be inaccessible, to be unsuited to technological environments that are instead fast-paced, dynamic and political. And I think that is a fair criticism. But it's also very important, especially today as we grapple with huge questions around the meanings of race, gender, justice and inequality, to actually stop and think more about what we mean by these concepts and notions. And that's what philosophers do. And so perhaps we actually need more of that slow-paced thinking, and more of that abstract or not directly politicized thinking, in technology and technology policy. A second criticism is that moral philosophy is often formulated in terms of high-level principles but doesn't go far enough prescriptively; it's not concrete enough. Those are criticisms that are often leveled at work around the trolley problem and how the trolley problem can apply to or inform policy around AI and autonomous vehicles, for example, or this criticism is leveled against codes of principles on AI.
And I think this criticism actually has value, and I do think philosophers should be more ready to engage with the actual implications of the principles that they formulate, because it is true that high-level principles can be misused. So then my third point is exactly this: when applied in context, philosophical ideas that are otherwise abstract can actually cause harm, and context matters. And so I think we cannot take the conclusions or prescriptions of a philosopher as true per se, because philosophers are humans embedded in contexts. And so when we think about the role of philosophy in producing real-world political effects, I think we need to think about philosophy as we think about all other things: it is contextual, and it is situated. A fourth kind of criticism of philosophy, and moral philosophy in particular, which is widespread, is that they normalize power. So they entrench certain background norms and make them normal. And I think here we come to a point where we can see the potential tension between disciplines like anthropology and sociology and how those disciplines understand ethics, and the way philosophers, on the other hand, understand ethics. For philosophers, norms might be in the background, but the exercise of moral philosophy is precisely to go rescue those background norms, bring them to the foreground and contest them. So for philosophers, norms are malleable and inherently contestable. There is nothing rigid or normalizing about those values and norms. They are things that need to be discussed, things that need to be understood, things that we need to be able to disagree about in an open and clear way, so as to actually resolve some of these problems surrounding the question of power as well. Fifthly, a limit of moral philosophy is that it can be seen to create appearances of objectivity for ideas or doctrines that are actually subjective.
And here I think it's very true that that can happen, but it's also true that good philosophy is humble and is clear about its porosity and its limits, about what it can do and what it cannot do. So all these criticisms can become encouragement directed at philosophers to engage in their work in a way that makes it more helpful for policymakers to then incorporate their insights. So now let's turn to the benefits of philosophy and what moral philosophy can actually do for us. I think it can do four things, and I'm going to try and run through them. I don't know how much time I have. I'm no longer seeing Seth, so I don't know. But yeah, I'll press on. So, four benefits. First, moral philosophy is a clarificatory methodology. It enables a meta-level viewpoint from which one can observe and articulate the values or disputes at stake without necessarily taking part in those disagreements, or taking sides in favor of certain values or against others. Secondly, moral philosophy is an explanatory mode of argument. In that sense, it's different from manipulation or emotional persuasion. It's supposed to be a principled, dialectical, rational method for adjudicating disputes. I say rational in brackets because I know that some people resist the rationality idea, and I'm not necessarily committed to the idea of rationality. I just want to say that it's a way of bridging different modes of thinking and trying to reason through those different modes of thinking. Thirdly, and I think importantly, moral philosophy is not just about processes, about scrutinizing and evaluating, for example, how representative or diverse an ethics board within Google might be. It's also a methodology that allows us to question notions of substantive equality. What is substantive equality, and what does substantive equality actually require?
Does it require an ethics board, or does it require a completely different institution, or might it require a completely different governance framework? So I think philosophy can move us from procedural matters to substantive matters, and I think that's a really important role that philosophy can play. And finally, in a polarized world riddled with ideological conflict, philosophy can encourage the building of common ground and empathy, and can allow us to bridge some disagreements that might otherwise seem unbridgeable. So, taking these insights on what moral philosophy can actually bring to the table, and its special role and value in thinking about, deliberating, discussing and evaluating the role of technology in society: one of the things I do in my paper is use the methodology of moral philosophy to assess corporate ethics practices. Obviously at a relatively high level, but I try to formulate some principles for thinking about the wrongness or the acceptability of corporate ethics practices. And the main way in which I do this is by looking at two ways of valuing ethics and ethics efforts. First, ethics can be valued for its effects. That's the instrumental value of ethics efforts. For example, ethics can bring about a potentially better society, a better understanding of the issues at stake, or profit for a company. So it brings something, and it is valuable because of the things it can bring. Secondly, ethics can be valued in itself, intrinsically, and that is the value that ethics has for the participants in the exercise of moral deliberation, of moral commitment: the value in the exercise of reasoning and thinking itself being meaningful, independent of the effects that it has.
So, thinking about these ways of framing the value of ethics in relation to corporate practices, I believe that corporate ethics has very little instrumental value insofar as its main impact is often, and I don't want to say always, to benefit the companies that fund and commission those efforts, and the reputation and profits of those companies, and only secondarily and incidentally do these efforts actually benefit the public. And insofar as that is the case, we are not using ethics in an instrumental way to benefit society, because there might be other ways of doing ethics that would benefit society more. Secondly, corporate ethics has very little intrinsic value if performed within the walls of a corporation, for example, and that is because the individuals who are engaging in those efforts are actually limited in what they can do, and in the kinds of conclusions that they can formulate, by their hiring contracts and by the kinds of constraints that are placed on their mandates. And so it cannot be a disinterested, capacious exercise insofar as it's situated within a political context that has very strong interests in affecting the way technology policy actually develops. And finally, even if we were to recognize that corporate ethics has a lot of instrumental and intrinsic value for society, I think there is a final concern that we need to take into account, and that is an epistemic concern: private actors are using the appearance of something that seems sacred, that seems good. They're using the language of something that is valuable to hide or legitimate practices that might in fact not be ethical at all. And that has diluting or corrupting effects on the notion of ethics overall. And so that means that if ethics is misused too many times, it ends up becoming counterproductive for society to use that term. So we should resist corporate ethics in practice; that is one of my conclusions.
So finally, my last slide. We need to move beyond ethics bashing, but we also need to move beyond ethics washing, and we need to be able both to criticize ethics washing and at the same time to understand and recognize the value of ethics and moral philosophy as capacious, principled methodologies for evaluating, contesting and disagreeing about AI-related laws, policies and institutions, and not only AI-related ones, right: it could be any technologies and their role in society. Secondly, we need to acknowledge that moral philosophy and its value are contextual, and that there is no such thing as a neutral exercise that leads to absolute universal truths. We need to consider the political nature or the political role that some philosophers might be playing, but at the same time recognize that what they do has, and can have, value. Thirdly, we have to recognize that there's no escape from ethics, because we're always doing ethics: we're always thinking about some way or other in which our behavior or choices might affect society, or might be ethical or unethical, both in our personal lives and in our jobs and in society more broadly. But we also need to recognize that ethics is not an end-all; it's not the only way of thinking about the social good, and so we need to embrace methodological pluralism. And finally, and this is my only prescriptive demand in this paper, we need to try and insulate ethics, the notion of ethics, ethics practices, engagement and thinking in moral philosophy, from corporate influences and corporate funding. And we need to try to reinvent institutional structures and configurations that could enable more meaningful, capacious and humble reflection and thinking around the impacts and the future of AI and other technologies. Thanks. Yeah, that was a sort of version of a round of applause. That was fantastic. Thank you. That was really great.
I have so many thoughts and so many questions, most of which amount to saying "yes, and...", because there's something I think you really nailed beautifully. I noticed among the attendees there are loads of people with a lot of expertise here. So let me just emphasize to folks in the attendees part of this: please do put your questions in the Q&A and I'll make sure to get to them. We're going to start, in the spirit of methodological pluralism, with a question from a sociologist. Jenny, if you wouldn't mind kicking us off. Hi, I'm Jenny. Thank you so much for an excellent talk that was fantastically delivered; the points really resonated with a lot of what I think many of us here think about. So thank you for that. My question has to do with your point of differentiation for philosophy from other disciplines. I'm a sociologist, and one of the points that you made was that what distinguishes philosophy is your treatment of norms, and in particular that you highlight them, bring them to the fore for the purpose of adjustment, because you view them as malleable. And as you were making that point, and your broader points, you sounded a lot to me like a sociologist. And so I wonder if maybe there's more shared space in that kind of disciplinary Venn diagram rather than distinctions. And if so, I wonder if there's more value in thinking about transdisciplinary collaboration, as opposed to spending a lot of energy defining why one discipline is more effective, or differently effective, than the others. Yeah, so it's an excellent question, and I would be fascinated to try and find ways where methodologies can converge and can be combined. I'm not a sociologist, and I have very limited knowledge of the literature on ethics by sociologists. But from what I've seen, for example, I've been reading Zigon, which to me was absolutely fascinating.
And I remember that one of the ideas was that there's a moment of ethical breakdown, and that is the moment we need to focus on, because that's when values are being reconsidered in society and that's when ethics is actually happening. But then, what happens after the ethical breakdown is that everything goes back to normal. And I think the difference between the way philosophers and the way sociologists would think about this is that for a philosopher there is no normal moment when everything goes back to settled normalization. It's always an effort of trying to think about what we are not thinking about. And I can totally see what you say about the fact that a lot of anthropology and sociology is also about trying to render visible the structures within which we exist, the power structures, the biases that we have. So in many ways it's extremely similar. I guess one differentiation, historically, and I don't think it needs to be the case in the future, is perhaps the emphasis on power that has been missing from a lot of moral and political philosophy. Because there is this understanding that everything is forward-looking: take what we have and just move forward. And I think that actually is a limit of a lot of philosophy. But the thing I want to push back on is criticism of formalization as necessarily being bad per se, because I do think that we need some form of formalization sometimes; not always, but sometimes. And I also think we need some forward-looking thinking, right? So. Yeah, I just have a quick thought on that as well. Fundamentally, philosophers who work on this stuff start out from assuming that what they're doing is going to be normative; they're trying to figure out what we ought to do, like you said, Elettra.
And in most of the approaches taken by the other disciplines, the normative content is assumed, and then the rest is descriptive. So although there are often deep normative commitments, for example in sociology, it's not that there's an extensive part where the normative commitments are justified, followed by the elicitation of the descriptive part. The normative commitments are shared (and you and I have talked a bit about this before, the sort of reflexive left-wing-ishness of so much sociology), and they're not really expanded upon. And I think you get a lot of that in the critical data studies literature, where, because one knows the audience shares the same normative views, one doesn't need to defend them. But that leads to some really interesting phenomena; it leads to a really widespread absolutism, for example. And one of the values of deep normative inquiry into why we should believe these things is that it also gives you a sense of how much they're worth. Which means that if you then have to make a trade-off, you can do so in a principled way, rather than just table-thumping and not really being able to make those sorts of trade-offs. So I think that's where the stuff that philosophers do is something that needs to be done. It can be done, and it is done, by everybody; it's not just done by philosophers. But hopefully we're particularly good at it; otherwise, what are we for? So let me just raise a pair of questions for you from the discussion in the chat and the Q&A box. I'll give you them both and then I'll tie them together for you. One is from Joe Ford, who's a lawyer who's worked a lot on different modes of regulation, the relationship between self-regulation and legal regulation.
And he's asking if you're assuming that the legitimation motive is always disingenuous when it comes to corporate ethics routines. And, and I think it's a bit of a leading question, he's asking whether there are some sectors and products where there is actually a competitive advantage in showing engagement in principled methodologies. So where it's not just a matter of ethics washing; it's a matter of things actually improving, and your profits going up, if you are more trustworthy and reliable. That's one part. Another question, from David Danks at Carnegie Mellon: you mentioned this idea of having ethics free from corporate influences. How do you think we can influence the design of new technologies, especially around data and AI, when so much of the cutting-edge scientific research is within those companies? It may be a challenge in this context to make a substantive difference to what tech is being built if we're working only from within the academic sector, without those sorts of corporate partnerships. Thanks for these questions. Great questions. So, to the first question, Joe Ford's: maybe I haven't been as nuanced in my presentation as I am in the paper, but I recognize the possible value of engaging in ethical practices, both for a company and for society, right? And so that's also what I talk about when I say the instrumental value of ethics. It has effects that might be positive or negative, but can be broadly positive. The one question I ask in the paper is whether it's good enough to engage in ethical efforts only for instrumental reasons. So only for what ethics might lead to, instead of for ethics in itself as a valuable exercise for the participants: to enable their participation, to enable them to better understand their position, their role, what they're doing, and to formulate truths that might be helpful to society.
But it's that understanding of ethics from the inside, as a practice to be engaged in, that I think is missing from the way ethics is used in corporate settings. So I don't know if that answers the question, but that would be my primary response, and obviously we could talk about it for ages. To David: so, how to think about an ethics that is free from corporate influence. I don't necessarily want to say that companies should not get involved in the development of new technologies. Of course, that would be impractical. But I'm saying that a lot of what causes harm to individuals at the end of the production process is perhaps an excessive influence of certain kinds of financial incentives on the process, or ways in which certain logics, primarily capitalist logics, get baked into the process of ethics and of the development of technology, and end up resulting in the production of certain kinds of tech that we might not want, or might not have had otherwise. So I'm not saying that technology is bad per se, that we should not develop it, or that companies should not get involved in the development of technologies. But I'm saying we need to rethink the pipeline, the practices and the incentives of actors in that pipeline, and how each of them is being influenced by certain financial or other incentives that partly have to do with the way that corporations are structured, and that are also embedded in a larger context, obviously. And I'm not saying that the evil guys are either the CEOs or the companies themselves. I just think the problem is the system, or the capitalist system as a whole. So I think my critique is pretty broad. I think there's some really interesting work to be done just on the sociology of academic engagement with tech companies.
Because if we actually were to look at it in terms of, you know, who tech companies have been funding over the last 15 years, obviously first of all artificial intelligence researchers; there are about seven million of those for every social scientist. But then even within the social sciences, you know, I can count the number of philosophers working with tech companies pretty much on the fingers of one hand, whereas a lot of the major work in STS and in law has actually been directly funded by these companies. So if we actually looked at it from a purely sociological perspective, I think it would be quite interesting to see what the motivations are, where the companies have got what they wanted out of these kinds of interactions, and which fields are doing the legitimating. So, the next question. Thank you very much. So I wanted to press you a little bit on this criticism of moral philosophy and ethics; let's take them to be the same thing. It seems to me that these disciplines are not rigid things, and they're not rigid kinds of people who have always done the same thing. So in ethics we have normative ethics, we have metaethics, and we have applied ethics, and there are people who move from normative ethics to applied ethics. One very good example is Peter Singer. So it seems to me that the problem is not really with the limits of moral philosophy. It's more how we think about, or what we take, moral philosophy to be. And so if we adopt a very general perspective on what moral philosophy is, and that's what many ethicists of technology have done (they do STS and they do moral philosophy), then the limit would not really arise, or the criticism would not really arise, for moral philosophy. It's more about the way we do moral philosophy. I completely agree with everything you've said. So I've talked with many philosophers.
I was a fellow last year at the Center for Ethics at Harvard, and I was embedded in an environment full of moral philosophers. So I had the chance to talk to many of them, and I've also obviously cultivated a strong interest in moral philosophy for, well, pretty much forever. But one of the really interesting reactions that I got to this paper from philosophers was to my trying to say that philosophers are political actors embedded in contexts, and that therefore philosophy does not have objective universal applicability as a discipline and is not something that we can just take as ground truth. And my reaction to that was that I don't think philosophers take themselves to be formulating universal truths for everyone. I think moral philosophers understand perfectly the limits of their work, the relativity of their conclusions, how possibly wrong they might be, and how everything is contestable by colleagues who might write the exact opposite of what they're writing. And so I see them as being in this dialogue, which is exactly what you were describing about good philosophy. And yet very often when someone tries to say, oh, but there is a moral philosopher embedded at Apple, and I say I don't want to take what they're saying as ground truth because of where they are, people respond, oh, but you're discrediting moral philosophy. And I want to say, no, I'm not discrediting moral philosophy as a discipline. I think there might be value to what that person is doing and saying. I just don't want to take it as a political statement that will then have real-world effects on people, given the huge intermediary role of Apple in that equation. So I don't know if that responds to your question, but I think it's a tricky one, in the sense that there is a specificity to what philosophy has and is.
And at the same time it is an exercise that is human and, in the end, political. So I've got a question, sort of channeling some of the stuff that has come up so far. I've got a first point about how we typically construe the nature of ethics in this context, which is just a comment, and then another point about the role of voluntary self-regulation. So the first point is just that I think there's an interesting parallel between some of the AI ethics stuff and some of the literature on the ethics of war, in the following way. The ethics of war is typically taken to be divided into two types of questions you can ask. One is about the justice of the resort to war, the justice of the war as a whole: whether you had a just cause, whether you had the right intention, proper authority, those sorts of things. The other is about the means by which you fight: whether you are ensuring that you're not targeting civilians, whether you're using only force that is necessary and proportionate. And I think a lot of the discussions around ethics have been very much in the realm of the second kind of question, discussion about the appropriate means to use, given that you're going to do what you're going to do with data and AI. And that's, you know, the formal constraints on fairness, for example, or explainability. And fewer of the discussions, when ethics is talked about, have to do with the questions of when you ought to use data in the first place, when you ought to use AI, what are the purposes for which it's legitimate to use it, and when is it appropriately authorized by the people whose data you're using. And, you know, both of those types of discussions have happened in the academic literature since day one.
But only one type of discussion has really been elevated and given prominence by the corporate partners, right, because they obviously are much more interested in hearing about how, you know, Google can develop ethics as a service, which is what they've just done, as distinct from starting out with, you know, what we should not use it for, and under what circumstances gathering data, for example, is appropriate. That's just a more challenging question for the corporate partner. So that little analogy is the first part. The second part is, you know, often I think that even the bashing of corporate ethics (setting aside the frustrating use of the word ethics when it really means voluntary self-regulation, used in a way that criticizes a whole field of research going back to Aristotle and before), even the criticism of corporate ethics, I think, is naive about the way in which we achieve beneficial social outcomes generally, because it relies on putting everything into the law. We need regulation. Yes, we need regulation, absolutely. But you also need to have good responsible practices. You know, we need to regulate doctors, but we also need professional codes and professional practice, because otherwise you're going to get really bad outcomes, because you can't legislate for everything. The law should provide a minimum standard of conduct, and we generally want to go much beyond that. So I do think that there are genuine benefits to having, you know, robust codes of practice that exceed what the law requires. I think we can walk and chew gum at the same time: we should be advocating for regulation while at the same time making sure that we establish positive soft norms.
I do think that one of the errors that has been made by a lot of these codes of practice is that they put into a code of practice soft norms covering conduct that is already provided for by the law. So if you see a code of practice as in some way supplanting or replacing the law, then that's a problem. And some of these ethics codes, including ours here in Australia, have done that. They've made it look as though something is a matter of aspirational conduct where really you have to do it. Some of them even have acting in accordance with the law as one of their principles. You have to do that anyway; the law is something you have to follow, not an extra aspirational principle. So anyway, the two parts of the thought: one is that the means/ends distinction is also important, and the other is that there will always be a role for some element of codes of practice and soft norms as well as hard laws. And that's all. These are comments, but I'll respond briefly to say I completely agree about the jus ad bellum and jus in bello distinction. And I do think it connects very well with this self-regulation comment. Because I do think we need to think of moral philosophy as a tool, as a methodology, as I've said, that allows us to move beyond what we have today, right? And so I agree that we need a mix of regulatory practices. We need the state to potentially oversee certain things. We need companies to behave in certain ways. We need international bodies or NGOs to play particular roles. We need academia to also be active. So there are roles for different actors, and all of these actors can be theorized in a variety of ways. And I think philosophy has a role to play in rethinking all of these structures and all of these contexts within which all of these players are acting in shaping the technologies that we have.
Whether or not I'm completely in favor of codes of conduct and self-regulation: not necessarily, but not necessarily not. So I'm not committed against codes of conduct and codes of practice. But, for example, in relation to questions such as disinformation in Europe, the approach has been a code of conduct. And consider the result of having something like that code of conduct or, for example, something like the Facebook oversight board, which is a completely self-regulatory initiative that Facebook has decided to undertake and fund. I think the result of doing it, despite the huge steps taken to make it work, which are admirable, is that somehow it maintains a level of debate on governance that remains completely dependent on pre-existing actors. So we are assuming that the people who will have a role to play in misinformation, or who will have a role to play in content moderation on Facebook, are Facebook plus Twitter, Google, and whoever else has signed the code on misinformation. So I'm talking about two different things, right, the Facebook oversight board and the code of practice on disinformation. But overall, it leads to an understanding of governance that is pretty limited, because it relies on pre-existing paradigms and on incumbents. And I think that what philosophy can allow us to do is actually take a step back and act more as a critical methodology, like sociology, like anthropology, and allow us to think about why we need those companies in the first place. Do we really need them? What do we need them for? And what other kinds of worlds can we imagine, where we would have more competitors fulfilling those roles? Where governments fulfill some of those roles? Where consumers, consumer organizations, or bottom-up governance perform some of those tasks, right? And so I do think we need to rethink a lot of our governance structures, and I do want to see more philosophy in that kind of capacity.
Well, I'm going to throw to Sylvie in just a moment. I should say to the audience that Elektra has a paper, I know, which does exactly some of the work that you were just describing, and which I heartily encourage everybody to read. I'm sure it's available from one of your websites or pages. Sylvie, if you wouldn't mind asking the last question for us. Yeah. So, I'm not sure my question is really super related to your talk, but I'll try to formulate it and ask it anyway. I should say first that I'm a computer scientist. So you talked about the limits of ethics and about how to make the best use of ethics in developing AI systems. I want to ask a question about this, but not about the best way for ethicists and computer scientists to collaborate to build AI systems that are morally acceptable. So I'm not focusing on that kind of cooperation in building AI systems; I'm focusing more on, let's say, a project like ours, which is done at the university level. So you said, and I kind of agree, that a lot of ethics is seen as very abstract. That's my perception: either you have very abstract theories, or you have philosophers who are excellent at analyzing a very, very concrete situation. But what's missing is a bit of what's in the middle. And that's problematic, because a lot of AI systems are made of components that sit between these two levels, where we're trying to have algorithms that solve a class of problems.
And so, basically, my question is: what do you think would need to happen for this gap to be bridged, to basically be able to understand what needs to be represented in these algorithms to capture the ethics of the situation at hand, but at that kind of mid level? Sorry, it's very difficult to articulate my question, but I hope some people got it, and if not, well, that's alright. No, I think you did a very good job, and I think it's a very interesting and good question. So actually, today I was at a talk where they were talking about a multidisciplinary group of computer scientists and lawyers who came together to talk about privacy tools. And one of the things that they discuss regularly, like once every two weeks, is differential privacy and how it could be used to entrench privacy defaults in structures that are technological but also in policy more generally. And so I think that is a good example for thinking about how lawyers and computer scientists can interact, and about that interface, right? And my first response to your question is that I think we need to clarify why we're using a certain type of tool. Why do we need differential privacy, for example? Why do we need encryption in certain circumstances? I think it's great that computer scientists are working on developing certain new technologies, but when it comes to policy, when it comes to real-world implications, I think people need to come together and ask the normative questions of what these technologies are actually performing, what they are doing for us. And I think actually both lawyers and computer scientists can be very bad at this, because lawyers have a tendency to say: this is what the law says, this is the standard of privacy that is required by law, and so what computer scientists should do is just match that requirement of privacy, right?
But it might be the case that the law itself is very watered down and is a minimal standard that needs to be increased significantly in future laws, right? But what happens is that once the technical standard has become a certain kind of privacy standard, it's really difficult for the community to change that standard. And so I think that's where ethics should come into play: to scrutinize the law itself and prevent the interaction of, say, lawyers and computer scientists from being a purely status quoist view of the world, and instead try to unite every discipline to move beyond current paradigms in trying to make the world better for everyone, right? And obviously that's a completely idealized vision of the world that I have as an academic, and that I'm allowed to have as an academic. But I do think that there are risks in those cross-disciplinary synergies if everyone thinks of their own discipline, or the other one, as static. So if a computer scientist thinks of the law as a static thing, and the lawyer thinks of computer science, and an algorithm, as a thing that cannot change, then each of these people is taking things as given, instead of thinking together about how to change them. No, I completely agree with you; in fact we also have a number of lawyers in our project, like Will. And every time we're having a conversation I find myself thinking, oh, but we should change this, you know. So I completely agree that one of the issues is that people take their own side, and even the other side, for granted, and that we need to elevate ourselves together. Thanks. That's a lovely point to finish on. So let me just encourage folks who didn't get to ask their questions to jump over into the Slack channel and ask them there, and Elektra will be hopping over there as well to answer some questions. Actually, Sylvie, just put the one you mentioned up there and add a little comment on it.
And look, for now, let's all thank Elektra for a wonderful talk and for joining us this evening from Harvard. Thank you.