Hi. My phone says it's 2:15, so we'll go ahead and get started. I'm sorry that the podium is so far back. I would rather be closer and out there near you. This is Exciting Engineering Ethics, and not only is that alliterative, it's true. I'm Nikki. I'm a tech lead at Canopy Studios, which is why I'm wearing their hoodie. Also, they're full of nice people; I'd probably wear it anyway. I'm also, incidentally, a PhD student at Arizona State University, thinking about engineering ethics as part of that work. I'm on Twitter as Dr. Nikki, which I've had for years before I went to PhD school. I'm not really a doctor yet. A quick trigger warning: we are going to be talking a little bit, just in passing, about the philosophy of death and killing versus letting die. If that's stuff that stresses you out, just know that it's coming.

This is a lightning talk, so we're going to go real fast, hopefully maybe even end a minute or two early, through my 50 slides. And second of all, it's a choose-your-own-adventure type situation. I'm not going to tell you in this talk what to believe or how to believe it. I'm just going to present some things and encourage you to maybe think about them. I'm also not going to tell you what I believe, except that everything we do is laden with our beliefs, but I won't make it too explicit. So I'm not going to tell you where to draw the lines, just encourage you to do so.

When people think about ethics, a lot of what happens is they get ethics and values and morality all mixed up and they're not sure which one they're talking about. So I just want to get some clarity on those. The first is just defining what our values are. And values are literally that: things that we value. Some of us value family over work. We value free time over money, or the opposite. We value health over beauty and cats over dogs. Our values are just that system that we have. And it's important here to point out that nothing is value neutral. Literally not a single thing in the world is devoid of values. We are all people with values, and in all the things that we make, our values go into that process. I think anyone who's ever sat in a stakeholder room negotiating homepage real estate knows that even something as simple as a website is really political, right? And the product that we make at the end has all those politics in it. Even if we think, oh, it's not that important, it's just an ad site, it's just a this, it's still a political, value-laden process. And the objects in our lives also have values associated with them. Things like seatbelts: we value your safety over your freedom to move about the car, right? Even in these objects that exist in our lives, these values are everywhere.

Morality is whether something's good or bad. We can keep it pretty simple. Morality and ethics are often used synonymously in common vernacular, and we maybe will too here. But morality is the goodness or badness of a thing, and ethics is the system of goodness or badness, the systems of morality, that people use. When people talk about ethics in academic situations, they talk about three kinds of ethics. We won't get too far into them, but I want to make sure that you know about them. The first one is metaethics, where people think about: what is the meaning of goodness? What is the meaning of rightness? What is the meaning of truth? Lots of smart people don't even take those words for granted. When we say, well, what's the right thing to do? Well, what does right even mean?
And metaethics helps us to think about that. There's a field called normative ethics, which is based on the word norms: what are the expected things in a situation? It helps us ask questions like, how should we act in a situation? What are the standards of right behavior, these normative sorts of claims? And the one that we're really going to be talking about today is applied ethics, which helps us answer the questions I think we think about a lot: what should we do when? What should we do if, in these kinds of situations?

So why are we even here? One, because I think it's important to know that other people are thinking about this stuff. It's easy to feel like, oh my gosh, I'm the only one worried about big data, the only one thinking about this; maybe if you're not on Twitter, you think you're the only one, right? But there's a whole community of people who are thinking about how to be ethical technologists, and it's important to connect with each other. A smart philosopher says that if you don't have someone to talk about ethics with, it's really hard to be ethical. You need other people to bounce those things off of and to help you situate yourself in what the good and the right is. Another reason that we're talking about this is so that you have a door into all of the smart writing that exists about this. Because I know when I started reading about ethics, I was like, this is old and boring and doesn't apply to my life at all. And while that might be true, it's not 100% true. A smart guy named Hickman said that if we can't talk about our experiences, we can't do ethics at all. So all of us already have inroads into ethical thinking. And we're going to come back to this in a minute, but ethics is not the same as justice. Just because we might be doing something ethically, it does not mean we're being just.

So here's another equation: knowledge equals power. This is both an after-school-special, the-more-you-know sort of thing, and also something I think we all know to be intrinsically true, right? People fight for education. And by virtue of having education, we have more power to do things. By virtue of going to school, civil engineers learn how to build bridges. By virtue of having that power, we then also have more responsibility. Because civil engineers know how to build bridges, they have the responsibility to build safe bridges. And associations of civil engineers have codes saying exactly this. Because you know these things, you're on the hook for doing them well and for taking care of people. So because we know how to make technology and we have the power to do so, we have a greater responsibility. And that, by extension, asks this question: what is our ethical responsibility? Do we even have one? Obviously I'm saying yes, we do.

So here's a famous thought experiment that I like a lot. It's called the trolley problem. You might have seen it in a meme; it got really popular a couple of years ago. You are a trolley operator. You're that guy with the unhappy face. And here's a trolley coming. On one side of the track, the direction you were already going, there are five people tied to the trolley track. So if you do nothing, five people will be hit and killed by the trolley. If you divert it, only one person will be killed. I'm not going to ask you which you would choose, but if you know which one you would choose, raise your hand. Okay, that's about 50, 60% of you who are like, yes, I know which one I would pick.
Would your answer change if on either one of those sides it was your child or loved one or mother, right? How many of you would change your answer based on that? Okay, a couple of you. Yeah, totally normal. There's nothing right or wrong about that. But this is a classic ethical problem asking, well, what do our rules say we should do when we're in this sticky, artificial, but sticky situation? And one of the things that taps into is this notion of actively killing versus letting die. And we're going to understand maybe why we're talking about this in a minute, right? In one direction, we're actively killing when we're hitting that one person. And in the other, when we're hitting the five, we're just not doing anything, and oops, some people get hit. So lots of smart people have thought about the answer to the trolley problem and have opinions about what the right answer to the trolley problem is.

And we are both physically and ideologically in a system of Western thought, so a lot of our most popular ethical theories are Western. But before we get into some of those: there are great books about Buddhist ethics, great books about Black womanist ethics, great books about Islamic ethics (the famous Islamic thinkers actually started medical ethics), great books about queer ethics, about Jewish ethics, about African ethics, both in the diaspora and in specific countries. So if you identify with any of those over the stuff we're about to talk about, go check it out. There's lots of good stuff there.

When people here in the West, in North America, talk about ethics, they always start with Aristotle. They always go back to the Greeks. Aristotle said lots of stuff. The main thing is that he advocated a system of virtue ethics, which means we don't need rules that say, okay, when you have one and when you have five, here's which one you always choose. Rather, he said, just be a good person. And he outlined some virtues: prudence, temperance, courage, justice. And he said, you know, if you're a person who has these, you're going to do all right, more or less. Then another guy said, hang on, we need to do our duty. These are called deontological ethics, I think because calling things duty-based sounds funny, so they call them deontological ethics, which is just a fancy word for duty or rules. A guy named Kant, who you might have heard of, was a big fan of this. His main thing was: if you're going to act in a certain way, it should be a way that's good enough that everyone could act that way. So only act in a way where everyone could do the same thing and the world would still be good. That's the core of the idea. So the focus of virtue ethics is on the character of the person, and the focus of deontological ethics is on the character of the behavior: is your behavior, the character of your behavior, so good that everyone could do it? And the focus of the third family, consequentialism, is on the outcome. All that matters is the outcome of your action, the consequences. So in that case of the trolley problem, which consequences are better? A subset of that is utilitarianism, and I know, we're getting back to the exciting part, hang on. Utilitarianism says we just need to maximize value. So in that trolley problem, one dead person is better than five dead people, no matter what. So we always have to choose that one, based on this system.
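Just to make the "system" part concrete, here's a minimal sketch (not something from the slides) of what a strictly utilitarian rule looks like once somebody writes it down as code. Every name and number in it is made up for illustration:

```typescript
// Hypothetical illustration: a strictly utilitarian trolley rule.
// The only thing this framework counts is how many people get hurt.

type Track = { label: string; peopleOnTrack: number };

// Utilitarian rule: pick whichever option harms fewer people; nothing else matters.
function utilitarianChoice(stayCourse: Track, divert: Track): Track {
  return divert.peopleOnTrack < stayCourse.peopleOnTrack ? divert : stayCourse;
}

const chosen = utilitarianChoice(
  { label: "do nothing", peopleOnTrack: 5 },
  { label: "pull the lever", peopleOnTrack: 1 },
);

console.log(chosen.label); // "pull the lever": one death counts as better than five
```

Notice the rule has no idea who is on either track, which is exactly where the "what if it's your child" question starts to strain it.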
And part of the reason that we're talking about these systems is because I think, as technologists, we're used to thinking in systems even if we don't agree with them. We're going to use React as a JavaScript framework. We all know the rules even if we don't like them. We're going to work within them and then leave them behind. And then we're going to try on another system, another framework, and we're going to work within that. So we're already primed to do this. We've been doing this for our whole careers: trying on frameworks, thinking through whether or not they fit us, and setting them aside or keeping them. And so there's a ton of ethical frameworks that you can pick up, asking: do I follow rules? Do I follow virtue? Do I weigh consequences?

So here's a question for you: is Facebook ethical? You don't have to answer yes or no, and don't everyone rush to answer. But if you have an opinion on it, raise your hand. Okay, almost all the hands have gone up. Great. Is there a difference, then, between actively tracking someone and inserting a company's tracking pixel on a client site? Is there a difference between actively killing someone and letting someone die? Is there a difference between active participation in something that might be unethical and bystander or sideline participation? The choices we make there are part of our ethical framework.

Here's another question for you: are driverless cars ethical? If you haven't seen one, I live in Phoenix, where they are everywhere, because Arizona is the testing bed for driverless cars. They are cars that have human people in them, but those human people are not controlling the vehicle at that moment. The car is being driven by software written by humans at a previous time. So if you have an opinion on whether or not driverless cars are ethical, raise your hand. Oh, a couple of hands shot up. Okay, so fewer of you. I would guess that whether or not you think driverless cars are ethical affects your life less, maybe, than Facebook. Recently an article came out titled "Autonomous Vehicles Must Be Programmed to Kill." That's both a clickbait title and a true statement. And here, unlike the trolley problem, which is hypothetical and artificial (most of us maybe won't ever be trolley drivers; I probably won't, but who knows), all of us have written or participated in the writing or the design of software. All of us are here because this is what we're doing, I think. So at some point, all of us have been a participant in the creation of an algorithm. And for those of you who don't know what that is, an algorithm is just steps to solve a problem. Just a bunch of steps to get to a destination. This image, which I took from that article, shows three scenarios. In the first, we are swerving to avoid a group of people and, as a result, choosing to hit one. In the middle, we're swerving to avoid one and hitting nobody. And on the far right, we're swerving to avoid a group. An algorithm made this choice because a person made this choice: a person, or maybe a team, looked at their own ethical code and put it into the software. Does that make sense? So we can't pretend that we do not do political things and that we do not make things that affect real people.
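And just to spell out what "putting your ethical code into the software" can literally look like, here's a hypothetical sketch. It is not from the article and not from any real vehicle; every name, weight, and number in it is invented to show that somebody has to choose them:

```typescript
// Hypothetical illustration: a team's values, written down as constants.
// Nobody "discovers" these numbers; somebody chooses them.

interface Outcome {
  pedestriansHarmed: number;
  passengersHarmed: number;
}

// These weights ARE the ethical decision: how the team trades a passenger's
// safety against a pedestrian's. Change the weights and you change the ethics.
const PEDESTRIAN_WEIGHT = 1.0;
const PASSENGER_WEIGHT = 1.0;

function harmScore(outcome: Outcome): number {
  return (
    outcome.pedestriansHarmed * PEDESTRIAN_WEIGHT +
    outcome.passengersHarmed * PASSENGER_WEIGHT
  );
}

// "Steps to solve a problem": score each maneuver, pick the lowest-harm one.
function chooseManeuver(options: Map<string, Outcome>): string {
  let best = "";
  let bestScore = Infinity;
  for (const [name, outcome] of options) {
    const score = harmScore(outcome);
    if (score < bestScore) {
      bestScore = score;
      best = name;
    }
  }
  return best;
}

const decision = chooseManeuver(
  new Map([
    ["stay in lane", { pedestriansHarmed: 5, passengersHarmed: 0 }],
    ["swerve left", { pedestriansHarmed: 1, passengersHarmed: 0 }],
    ["hit the barrier", { pedestriansHarmed: 0, passengersHarmed: 1 }],
  ]),
);

console.log(decision); // "swerve left" under these weights; different weights, different answer
```

The point is that the weights and the scoring rule are decisions a person or a team made in advance, which is exactly the sense in which the car was "programmed to kill."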
Here is another exciting but unfortunate thing. For those of you who maybe don't know, this is a picture of the Challenger. In 1986, the Challenger went up, and it broke apart about 73 seconds later, and everyone on it was killed. But before that happened, a guy named Roger Boisjoly knew that it was going to happen. This is a classic case in engineering ethics. Roger Boisjoly knew it was going to happen. It was a problem with the O-rings, which are fascinating things on the shuttle. And he said, hey, y'all, if we launch when it's cold, those aren't going to work and there might be problems. And he said something, and he said something, and he said something, and eventually the vendor and NASA pressured him: you know what, it's going to be fine. And we all know, as a result, that it wasn't fine. So he did something that was ethical. He did something that I think most of us would say, yeah, that's the right thing to do: take a stand. But it happened anyway. Whose fault was that? Don't answer; there's not really one answer here. But it's a question I think we ask ourselves: well, who's to blame? Who's to blame for the autonomous vehicle that hit someone in Arizona two or three weeks ago? Who's to blame for Facebook? I bet you have opinions about that one. Is it Roger Boisjoly's fault that the Challenger exploded, or is it the vendor's fault?

Roger Boisjoly is what we would nowadays call a whistleblower. And I think a lot of what ethics is, is tied really closely to that. When we have an ethical problem and we say, okay, well, I'm going to take a stand, I'm going to stand up for something that I believe is correct and I'm going to protect people, then we're like, well, there are going to be repercussions. And there are some protections for whistleblowers, but very few, and it's irresponsible to pretend that's not the case. Dr. Jean Maria Arrigo is a very smart woman and one of the whistleblowers who exposed psychologist involvement in military torture. She recently said that in order to be a whistleblower nowadays, you need to have a second career and an offshore bank account. And she was already an affluent adult when she blew the whistle.

So here we come back to this idea that ethics is not justice: just because you're doing something ethical, it doesn't necessarily mean you're doing something just. And the real truth is that justice is not compatible with capitalism. I see some faces back there that are like, I don't know what to do with that. But I think it's true, because if justice were compatible with capitalism, the Challenger wouldn't have gone up. The driverless car wouldn't have hit somebody. Capitalism is driving a lot of these choices, and capitalism acts as a barrier to a lot of our freedom to take ethical stands. I think that's something we have to acknowledge. We can't just pretend that we all have the freedom to say, I'm going to say no to this and walk away, and not have career repercussions, financial repercussions, other sorts of challenges. And yet there's a little footnote down there that says: yet here we are, doing our darndest to engage ethically and to engage with justice on our minds in the ways that we make technology.

So I have a couple of questions. The first is to ask what's most important to you. This is a values-based question, and most people answer it with my family, my children, my pets, my career, whatever. You can take that answer; doing good is a common one. I want to do good. And then you say, okay, well, here's the thing that's most important to me.
The second is to ask: does it support the code of ethics I already have? Because all of us already have one; a lot of us just haven't articulated it. A lot of us are already acting as though money is the most important thing, or safety, or social change, or whatever. And I think it's absolutely worth taking a moment to articulate that. Some engineering schools make engineers take a small, business-card-sized piece of paper and write their ethics on it: three or four bullet points. Because you already know them. I see a lot of stressed-out faces, like, I can't do that. But you already know what goes on that card. And I think what people find is that they either don't like what they honestly see on that card, or that having it written down helps them make better choices when the moment comes up. And this is something that we advocate in a lot of contexts: if you decide before you need to, then when you need to, it makes the moment easier. So here at DrupalCon (hopefully you don't have any major ethical conundrums this week, but maybe you will, and if you do, I would love to chat about them), decide: okay, I'm going to be someone who, hypothetically, values social justice first, money second, and family third. Great. Then when you're in a situation, you can use that rubric to make the choice. Or: I'm someone who believes that the end justifies the means, which is a consequentialist or utilitarian-based approach. That approach gets a bad rap, but people use it all the time. I'm going to do whatever it takes to feed my kids: that's a consequentialist approach. And then you have that.

And then a couple of important reminders. The first is that you are not value neutral either. Just like the technology that you make, your values are with you all the time. It's easy to forget, as we are encouraged to be production machines, creation machines. We're here in a community that tends to value technological contributions over human contributions. It's easy to forget that we are humans contributing human value, which sometimes takes the form of code, but maybe not even most of the time for most of us. So we are not value neutral. And creating technology is a political act, every single time we do it, for a zillion reasons. It's political because you're the only person like you doing it. It's political because you're political. And it's political because that technology goes into the world and becomes political. The websites you make are used to make money; that's a political thing. Or they're used for literal politics; that's also political. So every day that we go to work, we are doing political things, and we have the option to be political in ways that support our ethical frameworks or in ways that do not.

All of the things that I cited, all of the references, will be on my website in a couple of days. I'm on Twitter as Dr. Nikki. And before I stop talking, I'm going to remind you that contribution sprints are Friday. Sprinting is still fun. Sprinting is also not value neutral; it is a huge value-added activity. And there's a bunch of ways that you can get involved. And if you have feedback, you can do that over there. And that's it. Thank you very much. With three minutes to spare, I made it through 50 slides. If anyone has any questions, we have three minutes. If not, take a break.

Come to the mic, please, because they're recording it. You said ethics is not justice? Not always. Not always. Yeah.
You mean justice like human justice or justice like legal justice? I guess I mean both. Academically, we talk about three kinds of justice: distributive justice, getting everybody what they need; compensatory justice, making up for bad things; and then a third kind that's escaping me. So ethics, what's that? Fair. So I mean all kinds. There are some, you know, in virtue ethics, for example, Aristotle explicitly calls out justice as one of the virtues. That's what I meant, actually, because justice is one of the virtues. Yes. But then, without talking about it, you're talking about ethical justice and legal justice. For sure. Yeah. I smushed them together for the lightning talk. Because you say that whistleblowers are punished. Often. Yeah. By the legal system. Yes. And those standing by are not always saying that, okay, the punishment is by the legal system, but intrinsically, what the person did is okay from the ethical point of view. Yes, absolutely. Yeah. That's what I wanted to clarify. Yeah, no, I appreciate that. And I was smushing all the kinds of justice into one word, and there's absolutely intrinsic justice and ethical justice that exists. Thank you for that. Anyone else? Cool. Stay for the next talk. It's up next.