So we'll get started. First, thank you, thank all of you, thanks so much for coming to my talk. I really appreciate it. I know there are multiple tracks and lots of ways to spend your time, so I really do appreciate every single one of you for choosing to spend your time here. Thanks also to RubyConf, to all of our fantastic sponsors, and to the venue. I've been in LA now for about three years, and I don't make it downtown a lot. LA is kind of like a weird archipelago, and when you're on your own little island you tend to stay there, but this is a fantastic venue, so thank you so much. Let's get started. I tend to speak very quickly, and I'm working very hard right now to not talk a mile a minute, because it gets worse when I'm excited, and talking about software engineering and Ruby and writing ethical programs is, I think, really, really exciting. So if at any point I start going way too fast, feel free to make some kind of, yeah, thank you Michael, exactly, some kind of very visible gesture and I will slow it down. I'll also check in a couple of times to make sure that this pace is good. I'm going to talk for about 35 minutes, so we'll have a little bit of time at the end for questions. If for any reason I go a little bit long, feel free to come find me after the show; I'm happy to talk about this stuff forever. My name is Eric Weinstein. I'm a software engineer here in LA, currently CTO of a little startup that I co-founded called Aux. We are building auction infrastructure for blockchain technology, so a lot of price discovery, a lot of game theory. People in other programming communities sometimes check in with me to see if I've secretly gone crazy because I'm doing all of this blockchain stuff, but it really is fascinating, and that's another thing I'm happy to talk people's ears off about after the show. My undergraduate work was in philosophy, mostly philosophy of physics, but also some applied ethics.
My graduate work is in creative writing and also, separately, computer science. So ethics in software engineering has been on my mind for a while now. The rest of my information is in this human hash I felt compelled to make, and it'll be up again at the end if you'd like to get in touch. I'm ericqweinstein, Q as in queue, Q-U-E-U-E. I guess Quebec is probably a better Q. Please feel free to reach out on Twitter, feel free to find me, and I'm happy to have conversations. A few years ago I wrote a book called Ruby Wizardry that teaches Ruby to eight, nine, ten, eleven year olds, so if you want to talk about that, I'm happy to chat about that too. Oftentimes the good folks at No Starch Press will put together a discount code, so if you would like the book but can't afford it, please let me know and we'll make something work. This is not a long talk, but I still think we benefit from some kind of agenda. We're going to start with what it means to be good, because it is a fuzzy idea, right, to say we should be good when we write software, that our software should be good. We're going to look at three mini case studies in roboethics and then three mini case studies in machine ethics. Roboethics concerns itself with humans being ethical when building robots, and I'm going to be a little bit loose with "robot" and include programs: for any piece of software or hardware that a human designs, roboethics is about behaving ethically and having ethical considerations top of mind when building these machines. Machine ethics is more of a science-fictiony field, and we'll see why in a little bit, but this is more about how we design artificial moral agents that are themselves ethical. If a machine makes a decision, how do we know if the machine is being good? How do we assess the ethical state of the machine's decisions? And how does that compare to how humans make ethical decisions? And then, as I said, we'll hopefully have a little bit of time at the end for questions.
So please note, this talk contains stories about real people. No one who was injured or killed is specifically named. There are no images or descriptions of death or gore. However, there is going to be a lot of discussion of how software and hardware can fail and what those failure modes entail, and we're going to touch on some topics that are potentially upsetting for members of the audience. There are a couple that are medically oriented. So it will not hurt my feelings in the slightest if you need to leave. Do not worry about it, but I want this to be up front so that everyone is aware of the content and the stakes involved. And the stakes involved are actually quite personal to me. This is my son, who was born in March. He was born 12 weeks prematurely, so we decided to skip the third trimester. I do not recommend this, but luckily there are, I think, kind geniuses, which is how we describe the folks in the NICU where he spent seven weeks: unbelievably hardworking, kind, passionate, genius people. His doctors, nurses, therapists, everyone who took care of him is phenomenal. He was two pounds, 13 ounces when he was born. It's a little hard to see, but if you see that black and that red wire, right below that is my wife's hand. He was about two handfuls. I also like this image, and I will show this at every possible family gathering forever, because he appears to not appreciate me taking a picture of him. If you're curious, he's wearing a bilirubin mask: as part of his treatment, he required very bright lights to break up the excess bilirubin in his blood, which can harm his eyes, so he's wearing a mask to protect himself. I like to think of it as a little spa treatment. But for those in the audience who are concerned, do not worry. He is a happy, healthy, 17-pound, eight-month-old baby. Thank you. He is finally bigger than the dog, which is great. She is a Chihuahua mix, so, impressive.
Like I said, we are unbelievably lucky to have had the team that we did at Mattel here in Los Angeles, and I cannot overstate the unbelievable gift that was every single member of the NICU staff. But we think a lot about the people in the room, and we don't always think about the people not in the room. Everything from his heart rate to his O2 saturation, every single metric: somebody wrote software or built hardware to measure it. Every treatment he required, everything from the way his food was administered, required a machine to do the right thing. And it's easy to forget, in the whirlwind of something like this, where you have a medical emergency, where something very scary happens to you, that there's a lot of hardware and software running in the background that is critical for maintaining people's well-being and their lives. And so this talk is in part about that. So, like I said, the stakes are high in more ways than one. And part of the fuzziness that I want to dispel early on in this talk is what it means to be good. There are a number of ethical theories. We're going to be thinking and talking mostly about applied ethics in this talk, which is what to do in concrete situations: how to live ethical lives and make ethical decisions, rather than reasoning in the abstract about what if this were the case, how would we behave, imagining various thought experiments. So there are a number of schools of thought, and I'd like to focus on three. The first is utilitarianism, the idea that we're trying to produce the most good for the most people, whatever that means. And you can imagine that if there were some way to measure goodness, and some way to figure out how much good each person has received, then we could do some kind of calculus and say: great, this optimizes, this is the most good for the most people, and this is what we should do. Clearly, there are scenarios where that is not going to work.
There are also deontological ethical theories, and these are rule-based. You can think of the Code of Hammurabi, the Ten Commandments, Kant's categorical imperative, which basically say: we're going to have general rules slash laws, and they're going to describe what we should do in broad circumstances. Things like "it is never acceptable to kill anyone," or "it is acceptable to kill someone to preserve your life, but in no other case." Things like that. The middle ground that we're going to pursue in this talk, insofar as we talk about how to be good, is what's called casuistry. Which is funny, actually, because I looked this up earlier, and those of you with broader vocabularies than I have might know that casuistry also means sophistry: things that appear to be informative but are not. In this context, what we're talking about is case-based ethical reasoning: extracting rules in sort of a deontological way, but from specific cases. What do I do in case A, B, C, all the way to Z, and then trying to abstract a rule system from there. So as you can see, I'm going to take something of a casuist approach. And I'm going to break one of my own rules briefly and just read this to you: for the purposes of this talk, being good means safeguarding the well-being of moral agents that interact with our software by deriving best practices from specific instances. And that's why we're going to look at these mini case studies, to figure out what we should be doing. So we'll start with roboethics. And roboethics, again, is how humans are ethical when they design robots or design programs. We're going to look at three cases: the Therac-25, the Volkswagen scandal, which Caleb mentioned in his talk, and the Ethereum DAO hack, which seems different from the first two if you're familiar with them.
Sort of a financial thing rather than people's lives or health being endangered, but it is, I think, illustrative of the broad spectrum of ethical concerns that we should be thinking about. So, the Therac-25. Raise your hand, actually, if you're familiar with this. Okay, cool. Please indulge me if you know this story, because I think this one is important. The Therac-25 is a machine that was used to perform radiation therapy for cancer patients. It was designed in 1982, and it had two modes: one provided megavolt slash X-ray therapy, often called photon therapy, and the other provided direct electron beam therapy. So, two different modes. The earlier models had hardware interlocks and were thoroughly tested in terms of the hardware that was used. The Therac-25 gave up some of these hardware constraints in favor of software constraints, and so what happened was that between 1985 and 1987, the machine caused massive overdoses of radiation for a number of cancer patients, and I believe at least three people died as a result of their injuries. So what happened here, right? We have a machine that has two modes of delivering radiation to ostensibly make people better, but it malfunctions, and it actually causes severe injuries and, in some cases, death. The review at the time blamed concurrent programming errors. We know that concurrency is hard; it is also potentially lethal when we get things wrong. What happened is that if you, as a technician, selected one mode, realized you had made a mistake, and switched to the other mode inside of an eight-second window, the software locks would fail to engage, and the patient would receive a dose of radiation roughly 100 times greater than was intended. And what allowed this to happen? As I mentioned, the hardware interlocks were replaced by software ones, so we now have a less robust way of preventing people from being harmed. The code was not independently reviewed.
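The check-then-act hazard described above can be sketched in a few lines of Ruby. To be clear, this is a deliberately simplified illustration, not the actual Therac-25 implementation (which was assembly); the class, method names, and the energy/target split are assumptions made for the sketch.

```ruby
# A simplified sketch of the Therac-25-style race: setup latches the
# beam configuration, an operator edit lands inside the setup window,
# and firing never re-validates. Names here are illustrative only.
class BeamController
  attr_reader :delivered

  def initialize(mode)
    @selected_mode = mode
    @delivered = nil
  end

  # The operator can keep editing the prescription on screen.
  def edit_mode(mode)
    @selected_mode = mode
  end

  # Setup latches the beam energy at the moment it starts...
  def begin_setup
    @latched_energy = @selected_mode
  end

  # ...but firing never re-checks, so an edit inside the setup window
  # yields mismatched beam parameters, and with the hardware interlock
  # gone, nothing is left to catch the inconsistency.
  def fire
    @delivered = { energy: @latched_energy, target: @selected_mode }
  end
end

beam = BeamController.new(:xray)
beam.begin_setup           # setup begins with x-ray selected
beam.edit_mode(:electron)  # the correction arrives within the window
beam.fire
beam.delivered # { energy: :xray, target: :electron }: a lethal mismatch
```

The fix is the same lesson the incident teaches: re-validate state at the point of action, and keep an independent (ideally hardware) interlock that does not trust the software's bookkeeping.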
It apparently was reviewed by the engineers on the team, but there was no independent third-party review of the code involved. It was also written entirely in assembly, so we did not have any kind of high-level language to help. The failure modes were not thoroughly understood; not just incorrectly programmed, but not understood. And the hardware and software combination was not tested at all until the machine was assembled on site. All of these played into the injuries and deaths that resulted in the mid-1980s. And the underlying bugs were everything from arithmetic overflows to race conditions to truly cargo-culted code: code that was just copied over from earlier versions of the machine, some of which did not work or do anything in the current machine. How is this speed? Is this good? Thank you. So what do we learn from this, right? We have a machine that is meant to heal people and does the opposite. Well, I think we see a parallel between the medical profession and our own. In medicine there's something called a standard of care: the sequence of treatments that you're supposed to offer, the bounds within which you operate as a fully trained, up-to-date medical professional. Deviations from the standard of care do happen, but generally speaking, when you hear about a malpractice lawsuit, it is because someone has deviated from the standard of care and caused injury or death. And in this case, engineers deviated from their standard of care, and we endangered people who depend on us. Here, deviating from the standard of care meant not following best practices, not testing our code, and not being sure of the hardware while off-loading things to software because software is a solved problem, which as we know is not true. Our second mini case study is Volkswagen.
So as Caleb mentioned, starting in 2008, Volkswagen vehicles used what is called a defeat device. The idea is that the vehicle will operate differently under test mode than in real life: you will have fewer emissions, different emissions, different behavior when the vehicle is being tested by regulators than when the vehicle is out on the road performing in, quote unquote, production, right? It's harder to quantify this one. As I mentioned, there were a concrete number of deaths with the Therac-25; it was clear exactly who was harmed and who was injured. Less so here. Certainly there was a lot of money that was spent, a lot of fines that were paid. Estimates are that this resulted in 59 premature deaths due to things like emphysema, COPD, and general respiratory illness as the result of exposure to the increased fumes from the vehicles. Again, difficult to quantify, but certainly financial and human costs are involved. Now, this was allowed to happen because everything from the speed to the steering wheel position to the ambient air pressure could be used to determine whether or not the vehicle was in test mode. And as software engineers, that sounds like a thing we should have, right? We have test, we have staging, we have production. We have different environments, and the code is tested differently in them. And it may on the surface not sound unreasonable for someone to say, let's have a test mode, or let's detect when our vehicle is being tested. That in and of itself might not be bad. Maybe there are metrics we want to report back to Volkswagen; maybe there are other things we can learn from having the ability to detect when the vehicle is under test. But we have a moral hazard here of this quote unquote victimless crime that was not so victimless. The idea here is: well, we don't want to have to follow a bunch of really stringent regulations. We're going to lose money for the company.
We're going to have to do this and that. No one's going to be hurt if we just have lower emissions in tests, and we pass all of these regulations with flying colors. Everyone's happy, we make a bunch of money, and, quote unquote, nobody gets hurt. So what I think we get from this one is that we must always ask ourselves what our code is going to be used for. And I think it's not only our right but our obligation to refuse to write programs that will harm people or other moral agents who interact with them. We'll talk more later about what this means and how hard it can be, but I think the takeaway is that we, as people who write software, who build things that people depend on, have to be able to say no. And as Caleb mentioned, engineers have gone to prison as a result of this. For those of you who are familiar with the Goldman Sachs high-frequency trading debacle, stealing code can send you to prison. Back in the early days of cryptography, there were questions about freedom of speech, because cryptography was generally understood to be a weapon, sort of part of the top-secret sauce that makes the US military work. And so there are always going to be powerful forces that tell us what we can and can't use software for, and I think we need to be empowered to disagree. This third micro case study in roboethics concerns the DAO. The DAO existed in 2016, and it was the first attempt at a large-scale, decentralized, autonomous venture capital vehicle. The idea is that people put cryptocurrency, in this case ether, into the DAO; members can vote on how those funds are deployed; and you have this decentralized VC fund that can be used to fund various projects or initiatives. However, the problem was that there was a smart contract, and smart contracts in Ethereum are code deployed to the Ethereum blockchain. They're elaborate state machines.
They can receive input, they can change state, they can provide output. A vending machine is the classic example. I tend to think of things like escrow: I want to sell you something, my item goes in, your money goes in, and once both parties have fulfilled their obligations, they're exchanged, right? Disintermediation, decentralization, these are the utopian themes involved. However, because of a bug in this contract, about 4 million ether, which was about $70 million at the time and as of this morning about $840 million, were siphoned out of the DAO: out of the smart contracts, out of the organization. And this reminds me of a panel I spoke on a little while ago for attorneys who are trying to figure out how blockchain technologies work and what the liabilities involved are. One attorney raised his hand and said, right, but who do I sue? Do I sue the software engineers? Do I sue the hacker? Do I sue the miners who are confirming these transactions and, quote unquote, shouldn't be? This is a theme that's going to come up more and more as we get further into the talk, but liability is a huge concern. Now, how could this happen? I actually just realized I have the remote, which is why I've been moving around; thank you so much for dealing with me moving around. Ethereum smart contracts are Turing complete, so they can do anything a Turing machine can do. And that means there are lots of smart contracts you can implement that do nothing, that throw errors, that just spin endlessly and waste all of your funds for computation, which in Ethereum is called gas. This means that there are state machines with invalid states that are now possible. And the particular bug in this smart contract was a re-entrancy bug: effectively, the order of two lines should have been transposed.
Rather than perform a transaction and then update state, the contract should have updated state and then performed the transaction. But because the transaction was performed first, it was possible to repeatedly siphon funds out of the contract. And this is something that could have been caught with more robust third-party testing, with more comprehensive tooling, maybe theorem provers, or with more powerful programming languages or tools that would allow us to say: this contract can get into a state that we don't want. Now, the result was not as bad, quote unquote, as the Therac-25, as people dying; not as bad as people losing money and potentially getting sick and dying, as in the Volkswagen scandal. But at the same time, this one bothers me in some ways more than the others, because I don't think we learned anything from this one. The result was a hard fork of the Ethereum blockchain. The people who said, well, that's the cost of doing business, that's how things work, people steal your money sometimes, continued on Ethereum Classic, ETC; ETH is the ether you probably know now, if you're familiar with Ethereum, and that was the result of making things right. So now there are two parallel universes, there are two ethers, and people who held both got more money. Certainly the people who invested money in the DAO lost their money, but I think there was no serious introspection that resulted from this. And as Uncle Ben said, and RIP Stan Lee, I definitely did not finish these slides this morning or yesterday: with great power comes great responsibility, right? I think we are obligated to make programs powerful enough to do what they need to do and no more powerful. We don't need to write programs that can do things just because, or because maybe we'll need that functionality, or maybe we need Turing completeness. I think we're obligated to make programs that do what they do and do not have the ability to do more.
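The shape of that re-entrancy bug is easy to show in Ruby. This is a sketch of the pattern, not the actual DAO contract (which was Solidity); the class and method names here are hypothetical.

```ruby
# A minimal sketch of a re-entrancy bug: the external call happens
# before the state update, so the callee can call back in and be
# paid from a balance that has not yet been zeroed.
class VulnerableWallet
  def initialize(balances)
    @balances = balances
  end

  # BUG: these two steps are in the wrong order. Swapping them
  # (zero the balance, then pay out) closes the hole.
  def withdraw(account)
    amount = @balances[account]
    account.receive(amount, self) # perform the transaction first...
    @balances[account] = 0        # ...update state second (too late)
  end
end

class Attacker
  attr_reader :stolen

  def initialize
    @stolen = 0
    @reentered = false
  end

  def receive(amount, wallet)
    @stolen += amount
    return if @reentered
    @reentered = true
    wallet.withdraw(self) # re-enter before the balance is zeroed
  end
end

attacker = Attacker.new
wallet = VulnerableWallet.new(attacker => 100)
wallet.withdraw(attacker)
attacker.stolen # 200: a 100-ether balance was withdrawn twice
```

The attacker's balance is 100, but re-entering `withdraw` before the balance is set to zero lets them collect it twice; the real exploit simply looped this until the contract was drained.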
I think that one is probably a little bit more contentious. So those are our three case studies in roboethics: how we should think about writing software, and what should cross our minds. Who's going to lose money? Who's going to jail? Who's going to be harmed or potentially killed if I make a mistake? Now we're moving more into the realm of machine ethics. And this got kind of heavy, so I put in this cartoon pirate robot to lighten things up a little bit; it's an illustration from the Ruby Wizardry book. The three case studies we'll look at here are a little bit more hypothetical, although they rely on technology that exists now: facial recognition, the use of police data in predictive policing, and autonomous vehicles. As I said, there's going to be sort of a theme here. It turns out Minority Report, or Philip K. Dick more generally, just kind of predicted all of these things to some extent, so there are going to be a lot of Minority Report references in the last half of the talk. For those of you who are not familiar with machine learning, the idea is simply producing programs that can do work without being explicitly programmed. It's effectively pattern recognition. You say: this is a picture of a car, this is a picture of a car, this is a picture of a car. And then you say: okay, what is this a picture of? You're teaching the machine, in some way that replicates human learning, to derive a generality from a series of instances. There are a number of different ways of doing machine learning; we're going to talk mostly about supervised learning, and in particular decision trees and neural networks. So here's our first one: this is Apple's Face ID. And what I think is interesting about this is not simply being provided access to something based on our biometric data, but what it means for the machine to recognize us, right?
What does it mean for us to provide our biometric data to Apple for Face ID, to have our faces and information about us derived from photographs if we're using Google Photos, tools like that? It was fascinating to me that Google knows what my baby son looks like, because it has gone through and tagged all these photos. It's like, I don't know who this is, but this is all the same guy. Which is fascinating and horrifying to me. And again, here's the obligatory Minority Report reference: there's that scene where John Anderton goes into the mall, he's recognized by his retinal scan, and he receives these ads that are tailored to him, right? So who owns these biometric data? Do they belong to us? Do they belong to Apple? What happens if somebody uses my biometric data, or it's sold to a third party? What happens if there's a colossal privacy invasion? I almost said piracy invasion because of the robot earlier, which would be much more entertaining. Above and beyond this, what does it mean for the machine to make decisions using my data, my face, my fingerprint? It's really interesting, and I think some people will, as I did at first, kind of gloss over this, thinking: well, how is this different from the machine accepting my password, right? It hashes the password, looks it up in the database, the hashes match, great, this is you. I think the difference here, above and beyond ownership, above and beyond the potential for identity theft, is that the machine is making a decision in a very rudimentary way, and we don't know why it makes the decision that it makes. We know that if our password is rejected, something's wrong. Maybe there's something going on in the server; maybe we typed in the wrong password. But there's a deterministic reason for this, modulo network hiccups and other things. There's not necessarily a deterministic reason why the machine would not recognize us. Is it because I turned my head to the side? Is it the lighting?
Is it because the machine has been trained on some data and does not recognize faces that look like mine for some reason? Men versus women, adults versus children, people of various backgrounds, ethnicities, races? When we entrust machines with the power to do human things, to decide, to recognize, to permit and to deny, we're implicitly giving their actions moral weight. And I think this is the beginning of machines having moral dimensions to the choices that they make. Now, I wouldn't say that any machine we've created so far is sentient, right? It's not able to make decisions in a classical sense, and I would not ascribe an inner life to any machine we've created so far. But we're getting there, at least in a superficial way, and it's something we don't, I think, treat carefully enough. This is how you know you've made it, by the way: when you get to reference one of your other talks from a talk that you're giving. So this one, again, is referencing the Department of Precrime from Minority Report. I gave a talk at EuroClojure, I think a little over two years ago, that focused on the use of policing data in Los Angeles and in LA County. Los Angeles actually has a very robust, very complete set of open data for all kinds of things, and one of those data sets has to do with police investigations. The one I looked at, effectively, was: who got pulled over, and when someone was pulled over, were they arrested? I used a couple of different machine learning approaches here, and this one is a decision tree. Decision trees are nice because the machine can tell you, effectively, why it's chosen to do what it's done. There are some relatively straightforward ways for it to divide the tree at a certain point and say: hey, the most information gain is here, I can make two relatively equal-sized groups. And then we keep doing it, and keep doing it, and keep doing it.
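That splitting criterion can be sketched in a few lines of Ruby. This is a toy version of entropy-based information gain, the quantity many decision tree learners maximize at each split; the feature names and labels below are made up for illustration, not drawn from the LA data set.

```ruby
# Shannon entropy of a set of labels: 0 when all labels agree,
# 1 bit when a binary label set is split 50/50.
def entropy(labels)
  labels.tally.values.sum do |count|
    p = count.to_f / labels.size
    -p * Math.log2(p)
  end
end

# Information gain: how much does splitting the rows on `feature`
# reduce label entropy, weighting each subset by its size?
def information_gain(rows, feature, labels)
  groups = rows.zip(labels).group_by { |row, _| row[feature] }
  entropy(labels) - groups.values.sum do |pairs|
    subset = pairs.map { |_, label| label }
    (subset.size.to_f / labels.size) * entropy(subset)
  end
end

# Hypothetical data: a feature that perfectly separates the labels
# yields the maximum possible gain, so the tree splits on it first.
rows   = [{ prior: true }, { prior: true }, { prior: false }, { prior: false }]
labels = [:arrest, :arrest, :release, :release]
information_gain(rows, :prior, labels) # 1.0 bit: a "perfect" split
```

This is exactly why biased data produces biased trees: if a sensitive attribute happens to correlate strongly with the historical outcomes, it offers the most information gain, and the algorithm will happily split on it.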
So, unsurprisingly, the machine looked at the data, built a decision tree, and 77 or 78% of the time it was right. And it learned to be right by saying: is this person African American? Is this person a man? Then this person should go to jail. It's very obvious. Now, I think this is something that is surprising to people when they hear it. You think about this dispassionate machine making recommendations. The DOJ and the NIJ have launched initiatives to explore what they call predictive policing, which is much closer to this scary idea of pre-crime. And the idea here is to figure out who is likely to commit a crime, and, when it comes to sentencing, who is likely to commit crimes again. What people don't realize is that if you train a machine on racist or biased data, the machine is going to be racist or biased. And it's particularly dangerous when we say: I didn't do this, a human didn't do this, this is a dispassionate machine. The machine can't be racist, so this is the right answer, and this person goes to jail, and that's the end. Now, in the decision tree example, the machine can sort of explain its decisions. But you can imagine something like a support vector machine, a neural network, a much more elaborate black-box machine learning algorithm that cannot; there's no explanatory power there. Again: biased data gets you biased machines. And the worst, most dangerous part is people saying, humans didn't have a role in this, so this must be the right answer. Absolutely not. The machine has learned from human decisions. And just as Conway's law tells us that organizations are constrained to produce software that mirrors their communication structure, I think that when we're using machine learning models, predictive models, we are constrained to mirror the biases present in the data. Now, I don't know if this is a law already. I'm going to call this Weinstein's conjecture, because apparently I can cite my own talks and name my own conjectures and laws.
If Weinstein's conjecture is already a thing, we'll call it something else. Actually, we can take a vote later, because I think there are cooler names than Weinstein's conjecture. Anyway, the critical thing again: it's bad enough to have bias in "is this a tumor or not?" We would like to be right when the machine is making that decision. But if it's who gets a loan, who goes to jail, who gets arrested? These are critical problems that we have to confront before we can allow machines to act as moral agents. The last one here, self-driving cars, autonomous vehicles, probably needs relatively little introduction. Many of you are probably aware of the high-profile deaths associated with Uber and Tesla when these machines fail. And the question again, calling back to the attorney in the blockchain discussion, is: who's liable, right? Is it the machine, the car, that's liable? Is it Tesla or Uber or Google? Is it the people who drove around in the car for hours and hours and hours and taught the car how to drive? It's very unclear who's at fault. And again, the machine is not able to explain to us why it chose to do what it did. This is really dangerous when we start trusting machines to make these decisions on our behalf. And again, if you remember, in Minority Report there's that scene where he slides into the car and the car self-drives down a highway. So again, I don't know how Philip K. Dick predicted my entire talk, but here we are. How many of you are familiar with the trolley problem? Cool, so I won't spend a lot of time on it. The idea is you see a trolley heading toward, I don't know, five people; you can pull the rail switch and it will hit three people instead. Do you do it? From a distance you can't tell anything about these people, and even if you could, you wouldn't know much about their lives, their behaviors, who is good and who is not. It's a hard enough problem for humans.
But now the trolley problem is something that we're asking machines to solve. Do we swerve to avoid an obstruction and preserve everybody in the car, but kill somebody on the sidewalk? Do we try to figure out the number of people who will be harmed or injured, or the gravity, the severity, of their injuries, and make a calculation based on that? None of these sound like good answers. When I say "how do we teach our robot children well," this notion, calling back to the moral hazards in the Volkswagen scandal, is: how do we perform mechanism design? How do we design these reward functions so the machine does what we want and does not do what we don't? This is not an easy problem. And again: who is ultimately liable? I think the takeaway here is not only that machines' actions are imbued with moral dimension when we get to this point, but that the need for explanation is critical, and so is the capacity to accept blame. We have to know why the decision was made and whose fault it was. Because in order for things to function legally, in order for us to learn and move forward, we have to be able to say: this is the reason this happened, and here's what we're going to do to fix it. I think that explanatory power, the capacity to accept blame, is critical in the humanization of our robots. Actually, just a quick thing: I meant to ask this earlier and I totally forgot. How many of you have some kind of formal computer science education? And here I mean a bootcamp or a CS degree or any classroom experience. Great. Keep your hand up if there was an ethical component to your coursework, if somebody made you take an ethics class. OK, it looks like a little bit less than half. I think we have to, have to, have to start teaching ethics in CS courses, in universities, in bootcamps. And now we're getting to the TL;DPA, which is what I call the "too long, didn't pay attention." So if you were not paying attention for the first part, that's great. That's totally fine.
We can get it all in this one slide. We have to have a standard of care, and that means best practices. We have to have a structure or framework for writing software: when we are writing programs, we have to know what's acceptable and what's not. We need the right to refuse, and here I'm thinking of a Hippocratic oath, something like that, where you make a promise. This is true for attorneys, for doctors, for engineers, for all these professions that we pretend to be. But then we're like, oh, we don't need to be licensed. We don't need to take an oath. We don't need ethics. We can sit down, we can do "git push heroku master," and our software is out in the world. Not to pick on Heroku: git push anybody master. And I think we have to imbue these artificial agents, as we build them and make them more complex, with the sound moral bases that we lay out for ourselves. Because it's one thing for us to figure out how to be ethical when we write programs. It's another for them, once they're out in the world doing their own thing, whether they're smart contracts, neural networks, autonomous vehicles, robots, what have you, to have some way to say: here's why I chose to do this, here's the reasoning, here's who made that decision, and here's how we learned from it. Now, I had not really intended for this to turn into a call to action, but the recent election cycle has got me all amped up. I'm very excited, I think because people can be injured, people can be killed when we screw up. And people deny that. I can't tell you how many places I've worked where the answer was, "hey, don't worry about it, we had an outage, nobody's dying, nobody's going to jail." Sometimes people do die. They do go to jail. People are harmed when we make mistakes. What I think we need is an organization of software developers to fight for these things. Now, as you can see, I'm a white man with a beard and a wedding ring.
So if I walk into a conference room as CTO, senior engineer, whatever, and I say, we're not doing this, it's not ethical, and we'll have no part of it, there's a non-zero, a reasonable, chance the business will back off and say, OK, let's at least talk about it and figure out why. But if I have just come out of a boot camp, if I am not a straight white man with a wedding ring and a beard, if I am someone who has substantially less social capital, less privilege, less power in the organization, and I say, I don't want to do this because I think it's unethical, and the answer is, do it or you're fired... And you have an elderly parent, a new baby, someone who's ill. You've just made a career change, and for the first time financial stability is on the horizon for you, and the difference between you having a semblance of a life and not is taking that stand. The pressure is unbelievable to write that invasive code, to share that third-party data, to gloss over some error in the training data because it will probably shake out with enough training. What I think we need, in the absence of ethics bodies inside our computer science programs and inside our organizations (and I think we do need those too), is somebody, I don't know if it's a union, I don't know if it's the EFF, somebody we can go to and say: I need you to have my back, because I am not going to write this code, and it is not acceptable for me to be fired for taking that stand. Now, I'm happy to talk about what that means, what that looks like, who does it. But it reminds me of a quote by Elie Wiesel, who was a Holocaust survivor. He said we must take sides. He said neutrality helps the oppressor, never the victim, and silence encourages the tormentor, never the tormented. And that has a lot of parallels, I think, in the current political climate. And I'm happy to talk about that later too.
But most importantly, what that means is: if we don't say something, we agree that we don't need these guidelines. And I think we have to, have to have them. So that's all I've got. Thank you so much for coming to my talk. I really do appreciate it. So the question is, how do we operationalize all this in our code reviews, in our day-to-day work? I do think that having an explicit code of conduct for the organization is a great place to start, where someone says: we have RuboCop for linting, we have these ways of building our process, here's our continuous delivery pipeline, and here, too, is how we address these questions. I think having a wiki, having case studies, being able to anonymize data and say, somebody came to us and said, you know what, they were not comfortable writing this ad. When I worked at Condé Nast, there was an ad called the guillotine. It did what it sounds like. It was extremely hard to deal with; I started running an ad blocker so I didn't have to look at it, even though our team was tasked with building it. Even these smaller questions are things you can build into a wiki, into an internal document, so someone can say, I don't feel comfortable, what do I do? Now, again, it's tough when you handle it within your organization and you don't have a third party that has your back. But I think that's a great place to start. That was an excellent question. Sorry, yes, the question is: what name are we thinking of for this group that will defend our right to not write unethical software? I have, for now, settled on the Legion of Benevolent Software Developers, but I'm always open to input. And if we build this thing together, I think we can arrive at something great. All right, I think that's all the time I have. Thank you so much. Please come find me if you have questions. Thank you.