Hi everybody, my name is Richard, and I'm here to talk to you today about what is best in life. I originally wrote this talk about a year ago for an audience of web developers, but it applies to anyone who works in tech or engineering disciplines, or anyone who cares about ethics in tech. It's not particularly high-level. There are no coding examples, but there are some bad coding jokes. You've been warned. Anyway, I named this talk "The Good Life", but actually it's about ethics in software engineering.

A bit about my background. I started out at university studying engineering (mechanical engineering), which I hated, and I switched to studying philosophy, which I loved. And now, ten years later, my official job title is front-end engineer. Go figure. So I'm a front-end engineer these days, but I usually refer to myself as a developer. I've always felt a bit uncomfortable with the title of engineer, though I couldn't really put my finger on why. Maybe it felt like I was claiming some sort of prestige I hadn't earned. I didn't complete my engineering degree or study computer science; I'm mostly self-taught. I like to think I'm all right at my job, but I still feel like I'm hacking things together most of the time.

Now, back in the day, many of us who build websites used to call ourselves webmasters or web mistresses, which I still think is an amazing job title, by the way. I'm kind of sad that I didn't start my career early enough to have a webmaster role in the job history section of my CV. It's a bit like "social justice warrior": people use it as an insult, but frankly, I think it sounds badass.

So we were webmasters and web mistresses, and then that title fell out of fashion and we became web developers, and now many of us have the very professional-sounding title of engineers. And I don't think that's a bad thing, or that we should stop calling ourselves engineers. Call yourself whatever you want: you can be a techno Viking, or an agile pirate ninja riding a full-stack glitter unicorn, if that makes you feel better.

But we used to be working on dinky little websites that were kind of thrown together, and now we're working on massive professional operations that form the financial backbone of multi-billion-dollar corporations. We've come a long, long way from being webmasters and web mistresses. But a lot of us are still self-taught and didn't go through any standardized training process. And don't get me wrong, this is great: it lowers barriers to entry, which I think is a very good thing. But as the industry gets more professional, it might be worth thinking about the other trappings of professionalism. If we're going to call ourselves engineers, then there are a lot of ethical duties and codes of responsibility that go along with that title.

A hundred years ago, civil engineering was in a similar situation to how the tech industry is now. As the industrial revolution receded behind them, engineers found new ways to use all the fancy new technologies they had developed. They grew more sophisticated in their approach, and their projects ballooned in scale and complexity. But as these projects became more ambitious, there was an accompanying problem: a rise in major engineering disasters. The turn of the 20th century saw a wave of epic structural failures, including some massive bridge collapses, and also the great Boston molasses flood, which you can see here. If I had to name my favorite disaster of all time, this would have to be it, just for the mental image of a tsunami of liquid sugar 50 feet high, traveling 35 miles an hour, consuming everything in its path. It's terrifying, but also kind of delicious.

Anyway, these disasters had a profound effect on the way the public saw engineering, and they forced engineers to confront their shortcomings. As a result, they began to regulate themselves more intensely, and established standardized industry codes of ethics.

So what is ethics? Ethics is a branch of philosophy devoted to answering questions about what is best in life. Questions like these. And I know what you're all thinking. I can see the cogs in your software developer minds turning over. You're thinking: that's easy. What is best in life? Both spaces and tabs, on alternating lines. What is the good life? It's when the client is banned from making feature requests. How should we live? By outsourcing our jobs to China and spending all day on Reddit. How should you behave towards other people? By interrupting them when they have their headphones on. And the purpose of life is clearly replacing everything with JavaScript. You're all monsters.

Philosophers like to do things called thought experiments, which are like real experiments, but even better, because you never have to get out of your armchair. One of the most famous of them is the trolley problem. I'm sure many of you are already familiar with it; it's a pretty common one these days. But for those who aren't, I'll go through it quickly.

There is a runaway trolley barreling down the railway tracks, and ahead on the tracks there are five people tied up and unable to move. The trolley is heading straight for them. You're standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: one, do nothing, and the trolley kills the five people on the main track; or two, pull the lever, diverting the trolley onto the side track, where it will kill one person. So which is the most ethical choice?
So now, here's the audience interaction part of the talk. Quick show of hands, no wrong answers. Which of you would do nothing? Okay. And which of you would pull the lever? All right, cool. So we've got mostly pulling the lever.

Now imagine that instead of a switch, you're standing on a bridge over the tracks, next to an extremely large man. The trolley is coming, and the only way you can stop it is to push the large man onto the tracks. He's the only one big enough to slow down the trolley; if you jump on yourself, it's not going to do anything, the train's going straight through you. He's looking you right in the eyes. He can see what you're thinking. He's terrified. He's begging you not to do it. What do you do? So how many of you would push the large man onto the tracks? A bit fewer this time. Okay, and how many of you would do nothing? All right, a few more.

The trolley problem has been the subject of many surveys, which tend to find that approximately nine out of ten respondents would throw the switch to kill the one and save the five. However, in the large man situation, the result reverses, and only one in ten people would push him onto the tracks. So that mostly corresponds with what we got here. Maybe this crowd is slightly more bloodthirsty than that, but it's pretty similar. Incidentally, a 2009 survey of professional philosophers found that only 68% of them would throw the switch, 8% would not switch, and the remaining 24% had another view or could not answer. So if you're ever tied to a train track by a cartoon villain, you'd better hope that the person by the switch isn't a moral philosopher.

So why the difference in the two outcomes? One theory is that it's because two different parts of your brain are fighting with each other. Some researchers looked at people's brains using fMRI machines, and demonstrated that personal dilemmas, like pushing a man off a footbridge, engage brain regions associated with emotion, whereas impersonal dilemmas, like diverting the trolley by flipping a switch, engage regions associated with controlled reasoning. These different brain processes essentially compete with each other whenever you have to make a tough moral decision.

Basically, inside your brain you've got a monkey and a robot. Literally, a monkey and a robot, fighting over the controls. Every time you have to make a moral decision, they duke it out. The monkey understands something simple, like pushing someone off a bridge, and it's horrified. But it doesn't understand something complex, like a mechanical switch. So in that situation the gut response is reduced, and we're able to throw the lever without feeling such a crushing sense of moral horror.

Now, some people have a stronger monkey and some people have a stronger robot, as we've seen today. And that's great, because they're both useful in different situations. But this is tricky for us programmers, because we usually work on fairly complex problems, which might make it harder for our monkey brains to trigger some kinds of moral responses.

By the way, if you think it's hard for programmers to experience the full range of ethical responses, then spare a thought for autonomous vehicles. Self-driving cars don't have meat brains, and you can't make a perfectly ethical algorithm. You can only make it as good as the humans who programmed it, and we can't even agree on whether to use tabs or spaces. There are some really tricky problems here that self-driving cars will face. We'd prefer a self-driving car to swerve into a pile of trash rather than hit someone, just like a person might. But computers can make these kinds of decisions quicker than we can, so if we decide in advance what we want them to do, they'll follow our instructions. So we probably want to program our car to hit a single adult rather than a busload of school children, right? But what if the adult is a Nobel Prize-winning cancer researcher? What if the adult is driving the car?
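[Editor's note: a toy sketch, entirely invented for illustration and not from the talk or any real autonomous-vehicle system. It shows what "deciding in advance" might look like: humans rank the outcomes ahead of time, and the car just applies the ranking faster than any person could deliberate. The function name and the harm scores are made-up assumptions.]

```python
# Toy illustration only: a hypothetical harm ranking, decided in
# advance by humans, then applied instantly by the car's software.

def choose_swerve_target(options):
    """Given (description, harm_score) pairs, return the description
    of the least-harmful option. Harm scores are hypothetical."""
    return min(options, key=lambda option: option[1])[0]

# Made-up scores: property damage ranks below any injury.
scenario = [
    ("pile of trash", 1),
    ("single adult pedestrian", 50),
    ("busload of school children", 1000),
]

print(choose_swerve_target(scenario))  # prints: pile of trash
```

The code is trivial; the hard part is agreeing on the scores in the first place, which is exactly the ethical question the talk is circling.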
Would we want the car to sacrifice its driver? And would you choose to buy a self-driving car that's designed to sacrifice your life to save others?

So some researchers at MIT came up with a nice solution for this. They built an app to mine data on people's answers to different trolley problems, so they can use it to help decide how autonomous cars should behave in different scenarios. The website is called Moral Machine, and you can go there right now and start judging scenarios like this one. We have to choose between a male athlete driving a car, and a jaywalking baby. On the one hand, the baby probably doesn't know not to cross against the signal. But on the other hand, it might grow up to be Hitler, you know? So it's a tough call.

Anyway, Moral Machine is cool, but it doesn't help us most of the time, because we can't outsource all our ethical decision-making to the internet. We're just individuals working on our laptops, and we have these ridiculous meat brains, and we have to make our own decisions about whether to kill baby Hitler. So of course we sometimes make the wrong call.

So let's shift gears for a bit and consider the Volkswagen emissions scandal. You might recall that VW added special software to millions of diesel cars that would detect when their exhaust fumes were being checked by emissions regulators, and change performance to pass those tests. As a result, they managed to completely bypass emissions standards in the US, EU and elsewhere, for a period of about five years. Their workaround allowed them to emit up to 40 times more nitrogen oxide than US emissions standards allow. By some estimates, air pollution causes around 40,000 early deaths per year in the UK alone. So it's pretty safe to assume that Volkswagen's technical hack has likely resulted in several thousand, or at least several hundred, premature deaths, plus thousands more cases of asthma and other lung disease. And as someone who recently started experiencing asthma symptoms for the first time since I was a child, probably because of London's air pollution, I take this a little bit personally.

So when I heard last year that one of the engineers at VW was imprisoned for his role in the scandal, I thought: good. But on the other hand, I've kind of got to give credit where it's due. VW's defeat device is a pretty brilliant technical hack. It's ingenious. I can imagine that the engineers who created it must have felt pretty proud of themselves at the time. But at the same time, you also wonder why nobody spoke up at one of their internal meetings to say: hey pals, do you think maybe we're assholes? How did they get it so wrong? Are they just inherently bad people?

Maybe it's because the monkey part of their brain was completely unable to deal with the complexity of the problem. You've got cars, and software hacks, and air pollution, and then decades later some people you don't know might die. It's all a bit much for the poor monkey to handle.

We established earlier that ethical reasoning involves an internal struggle for control. And a weird thing about humans is that we can sometimes actually forget to act ethically. We get so focused on achieving a goal that we forget to think about the consequences of our actions. Or else we justify it to ourselves in ways that don't stand up to scrutiny, because we never stop to properly reflect. I'm sure we've all done this at some point. I definitely have, and it's led to some of my biggest screw-ups. When you're looking at a wall of code, it's very easy to forget about the humans who will be affected by your decisions IRL. And unlike civil engineering, it's usually easy to fix mistakes: just roll out a patch or an upgrade. In tech, we like to move fast and break things. But we don't want to move fast in oncoming traffic and break people.

I think the monkey brain is a factor in many of our biggest ethical lapses in tech today. Whether it's Facebook enabling fake news, or Equifax with their criminally sloppy security, or just JavaScript developers being too lazy to bother making their websites accessible for disabled people and keyboard users. I'm looking at all of you. And me too. But I want to believe that the people making these decisions are doing so because they're not thinking hard enough about the consequences, and the people affected by their actions.

However, there are also those who say: hey, I don't know about all that ethics stuff, I'm just an engineer, it's not my responsibility. Like Mr. von Braun here, who knew he couldn't be at the forefront of rocket research in Nazi Germany if he didn't go along, and he didn't care what crimes he had to turn a blind eye to, as long as he was allowed to play with his rockets. So, to be clear: nobody is exempt from having to behave ethically. Scientists and engineers aren't a special group that gets to be amoral and doesn't have to think about this stuff. Ethics contaminates everything. Whether you're building rockets or designing algorithms to help police identify gang members, you have a duty to consider how they might be used.

With so many examples of ethically compromised decision-making in tech, it's easy to get pessimistic. There's some good news, though. If it's easy for people to act unethically when they don't think about it, then the flip side is that people tend to behave ethically when you remind them to. And it can happen even when you do it in subtle ways. For instance, some researchers in Newcastle found that just hanging up posters of staring human eyes in a cafeteria was enough to significantly change people's behavior, making people twice as likely to clean up after themselves. If just a poster of eyes can achieve that much, then imagine what we could accomplish with a few well-placed reminders. We want to establish an organizational culture where people tend to act morally, and where there are lots of positive examples for us to emulate. And I think that reminders are a powerful tool to help us achieve this.

I mentioned before that many engineering industry bodies introduced formal codes of ethics in the early 20th century. These came along with more legal regulation and barriers to entry, which I don't think would be good for our industry, but ethical codes are a great idea, and they've already been touched on in one talk this weekend. They're a great way to remind people to act ethically, because basically, when you tell people "don't be a dick", they'll be less likely to be a dick. We already do this with codes of conduct at conferences and other events, including EMF Camp, and in open source GitHub repositories. We can do this at our organizations too.

They don't have to be complicated; in fact, the simpler the better. This one is from the American Society of Civil Engineers, and you can see it fits on one slide. It's pretty simple. The important thing is to set appropriate expectations for ethical behavior. There are loads of other codes of ethics around that you can use as inspiration, and they're quite hard to write well from scratch. So read over some of the different codes, discuss them with your colleagues, and think about what sort of ethical principles you'd choose for your own work, your team, and your company. You can use an existing code of ethics, or borrow different aspects, or make your own.

Once you've chosen an ethical code, communicate it within your team. How you communicate it is up to you. For example, you could include it in your onboarding for new starters, or add ethical checks to the checklists and documentation for new projects. Or you could run internal publicity campaigns, maybe with posters on the wall. Maybe with some eyes on top of them. The important thing is that it becomes part of your team and company culture. This act of communicating expectations is important for empowering team members to speak up if they're uncomfortable, before it's too late.

A few years ago, I was working for a consultancy, who assigned me to a website project for a client that I didn't really approve of. But I got so invested in solving the technical aspects of the project that I didn't stop to think about whether I was morally okay with working for this client, until I was already deeply invested. I moaned about the client a little bit to some colleagues, and they told me: hey, if you didn't want to work for this client, that's fine, but you should have said something before we started this project. It made me realize that it was okay to say no to client projects at this employer, but also that the appropriate time to do that is before you start work. The later you leave it, the harder it is to do. So the next time a dodgy client came along, I felt more comfortable expressing my concerns up front, rather than just putting it off for later. And we ended up turning down the client.

If we establish policies ahead of time that say it's okay to speak up if you're uncomfortable, we can avoid these kinds of awkward situations. I tend to think of it as being a little like encouraging developers to submit bug reports and point out problems in your applications or your processes. If everyone feels empowered to speak up, then you're all better off.

On a related side note: if you speak up about ethically dubious practices at your workplace and your employer doesn't listen, you may have a duty to report it to the authorities, or otherwise make it public. A basic dilemma in engineering ethics is that an engineer has a duty to their client or employer, but an even greater duty to warn others of a possible risk arising from a client or employer failing to follow the engineer's advice. A classic example of this is the Challenger space shuttle disaster. Engineers raised warnings about the faulty O-rings in the boosters, and the dangers posed by the low temperatures on the morning of the launch. But managers disregarded these warnings, and failed to adequately report these technical concerns to their supervisors. It was later argued that in these circumstances, the engineers had a duty to circumvent their managers and shout about the dangers until they were heard.

I mentioned building ethics checks into processes, as a regular reminder to encourage ethical thinking as early as possible. A friend of mine who works as a psychotherapist tells me that their training includes ethics checks as a core part of the process. So whenever they're trying to make a tough decision, there are questions they can use, which are designed to trigger different types of emotional responses. Here are a few examples.

The first one, which I think is quite a monkey-brain question, is good for triggering emotional reactions like shame: would you be happy for everyone to know the decision you've made? So, for example, if you're considering being lazy about making your site accessible, imagine there's a disabled person sitting next to you, and think about whether you'd be comfortable explaining your code choices to them.

The second one seems designed to trigger more of a consequentialist response; it's maybe a bit more of a rationalist, robot-brain approach: do you think the consequences are acceptable?
The last one reminds me of Immanuel Kant's categorical imperative: that you should only do something if you'd be okay with it being a universal law. If there are any professional philosophers here, please don't judge me if I've got this wrong, but that's my take. So that's one for you, if Kant's your thing.

I think these are a great start, but feel free to build on them, or tailor them to your own work.

Finally, we can encourage developers to develop more empathy for their users by encouraging them to meet those users in person. A great way to do this is to get developers to sit in on user testing sessions. A nice additional benefit is that empathy for your users helps you design better user-centered solutions. So, win-win.

These ideas are just a start; they won't fix everything. They won't put a stop to the fact that a small handful of mega-corporations own our digital lives and strip-mine them for profit. But, you know, one step at a time.

By the way, if you have any questions or suggestions, please come chat to me afterwards, or get in touch using any of my imaginative online handles. Thank you.