I ran through a hallway, and in front of me there was a nurse, and between us this giant plastic box with a small human being inside. We ran past corners and through doors and down an elevator and through more corners, and bang, through a door, into the neonatal intensive care unit, where a horde of nurses descended on us and started asking questions. Morten, he needs an IV right now. He needs a feeding tube inserted. He needs all these things. You need to make decisions right now. And I stood there, my hands shaking, looking down into this box at this person who had just emerged into the world five weeks too soon, and all I could think was: I have questions. What is this? What are the consequences of every action I take now? How do these people come to their decisions about what needs to be done? Who are they? What are their values? How do I trust them, and why are they doing this and not something else?

These questions stand at the center of everything. We are 20 years into an enormous social experiment that was started by accident, one that created an entire new world of communication, an entire new way of talking to people and sending information around. And it's amazing: it allows us to share our ideas and our dreams and our creations with everyone, all around the world, with no limitations. And it's scary, because it allows us to mislead, to manipulate, to destroy people, ideas, democracies, all because of this little thing. And now what we have to think about is how not to destroy the world.

I was at a conference a month ago where a large tech company demonstrated some fancy technology that could call you and pretend to be a human. We were sitting in the audience, and I remember there were people who were very excited, because this is a technological breakthrough, and there were other people saying: I don't know, this doesn't feel right at all, but I don't know what to do about it. And it got me thinking, because we have one of those thermos-can things in our house, these things you can talk to that talk back to you. Google, turn on the lights. Alexa, buy me more diapers. The thing is, those things are connected to services that know a lot about us, and I started thinking: if you have enough information, you should be able to do interesting things with it. For example, I am certain that right now there's a tech company trying to figure out how it can collect all the data that sits on your phone, that you put onto social media networks, that you share with your connected devices; look at that data and how it changes; and detect whether or not you're moving into a dark space. And what happens if technology can detect that? What happens if technology can tell, before you can, that you're entering into depression or something else? Should that technology then step up and try to do something about it? What if your device could diagnose the emergence of clinical depression?

It's not like this isn't being done in other fields. How many of you have a thing on your arm that counts how many steps you take? Those things are used not only to count steps; they're used to figure out how people move and how that impacts their health. And these companies are already using that technology to try to detect diseases that may occur.
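To make that concrete: the detection itself doesn't have to be exotic. Here is a minimal, purely illustrative TypeScript sketch, entirely my own invention rather than any vendor's actual method, that flags a sustained drop in activity by comparing a recent average of daily step counts against a longer baseline. The function names and thresholds are hypothetical.

```typescript
// Purely illustrative sketch: flag a sustained drop in daily activity.
// This is not any company's real algorithm; names and thresholds are invented.

function average(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Returns true if the mean of the last `recentDays` of step counts has
// fallen below `threshold` (a fraction, e.g. 0.5) of the mean of the
// `baselineDays` that preceded them.
function sustainedActivityDrop(
  dailySteps: number[],
  recentDays = 14,
  baselineDays = 60,
  threshold = 0.5
): boolean {
  if (dailySteps.length < recentDays + baselineDays) return false;
  const recent = dailySteps.slice(-recentDays);
  const baseline = dailySteps.slice(-(recentDays + baselineDays), -recentDays);
  return average(recent) < threshold * average(baseline);
}
```

The point is that twenty lines of code can raise the flag; everything that matters ethically is in what you decide to do once it's raised.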
Facebook has an AI that detects whether people are drifting toward suicidal thoughts, and it actually runs on Facebook right now. If you go far enough, it will start talking to you and try to point you in the direction of help. All of this is happening, and I was thinking, this depression thing would probably happen at some point; someone's going to do this. And then, right after I had this idea and made the slide and everything, the story of the Google Selfish Ledger came out. I don't know if you've heard about this. It was an idea someone at Google came up with a couple of years ago: what if we took all the information we have about someone and tried to guide society based on it, toward better things? In that video the person says: as cycles of collection and comparison extend, it may be possible to develop a species-level understanding of complex issues such as depression, health, and poverty. It's one of those horrible serendipitous moments where you go: I have an idea. I think it's terrible. Oh no, someone else thought of it before, and they actually have the ability to do it.

I am certain you've all been here in the last couple of years. You see these things happening in your society, in your field, everywhere, and you go: this doesn't feel right, but I don't know how to articulate it. And because I can't articulate it, I can't take it anywhere; I can't do anything about it. If you look at media now and just search for the word ethics, you'll be bombarded with news stories about tech companies doing things that turn out to cause ethical problems. Every time you read one of those stories you go: yeah, that's an issue, and that's an issue, that's definitely an issue. But you didn't really know it until someone pointed it out. And once you know it, you ask: how do I avoid this problem? How do I not become part of this? And then you say: hmm, there are some smart people thinking about this. There's Mike Monteiro, who has published a designer's code of ethics. There's a Hippocratic oath for tech workers. Universities are teaching classes on CS ethics. And there are things like the GDPR, which is a response to a lack of ethics in our communities, forced upon us by governments. All of this is trying to solve problems we haven't solved on our own.

We just want to build great tools for the web. We thought we were doing great things that would really change everything, and then reality comes down on us: a lot of the stuff we've been doing has inadvertently caused things we didn't expect, didn't want, and didn't intend at all. So how do I know if I'm doing the right thing? I should add a caveat here: I'm not a paragon of good ideas. But there are ways we can judge our actions and figure out how to do the right thing, and those ways are enveloped in this concept called ethics. So that's what I want to talk to you about. How do we judge whether what we do is good or bad, right or wrong, in a way that we can actually use? This brings us to the old philosophical concept of ethics.
Ethics are rules of behavior based on ideas about what is morally good or bad. That immediately makes you ask: what are morals? And then you discover that morals are things concerning or relating to what is right and wrong in human behavior, and you go: okay, that sounds like the same thing. And then you get a definition that literally says ethics are the science of morals, and morals are the practice of ethics. Thank you, philosophy, for your horrible ambiguity. But there's a point to this. We use ethics to quantify, and put into a system, the moral ideas we have, so that we can use them in our everyday practice. We have beliefs about what is right and wrong, but we need to make a system so that other people can agree with those ideas, and so we can use those ideas to further humankind and further ourselves. That's where ethics comes in. So if we talk about design ethics, we end up with something like this: design ethics are rules of behavior based on ideas about what is morally good or bad design, and they are also tools to help make and explain moral judgments about design decisions. Nice and heavy, right? Good for the end of the day.

So: what if your device could diagnose the emergence of clinical depression? You'll get this question. Someone, at some point in time, is going to ask you this. What is your response? Well, I have some questions we need to discuss first, and those questions are what, how, who, and why. These questions map to a very old idea Aristotle had thousands of years ago: that all decisions and all judgments rest on four core causes. The material cause: what the thing is made of. The formal cause: the form or design used to build it. The efficient cause: the person or agent that built it. And the final cause: the reason why it was built to begin with. If we take these four causes and map them to questions, we end up with: What are the consequences? How do we uphold our duty of care when we make decisions? Who do we become by doing it? And why are we doing it in the first place? Keep these in mind; we'll go through them.

So, what are the consequences? Whenever you make a decision, you always have to think about the consequences. When you don't, weird stuff happens. Have any of you used Strava, the running app that tracks where you're running? There are a ton of these apps; Strava is one of them. Earlier this year they released this really fancy interactive website showing heat maps of where people run around the world. It looked amazing, and everyone went in and started searching: oh, can I see myself running in here? I did that. It's kind of neat. But when you release that kind of information, there's a chance someone might use it for things you didn't intend. For example, it turns out a lot of military staff in the United States Army who are abroad on operations wear fitness trackers that log where they go. So Strava inadvertently published maps of where military installations are, out in the desert, in countries where operations are under way. Not exactly what was intended, but in hindsight obvious. There's a consequence to doing this, and it wasn't thought through properly. This is about consequentialism, the old theory of morality that says the morality of an action is to be judged solely by its consequences. You say an action is good or bad based on the outcome alone.
So it doesn't matter what you intended: if the outcome is good, the act was good. And there's a variety of this called utilitarianism, which specifically says an action is right if the outcome is useful for the majority of people. Sound familiar? That is literally how we design software. Consequentialism, this old philosophical theory, is the baseline of open-source software. You know that thing, the 80/20 principle: design for the 80 percent, to hell with the rest. That's the official definition of 80/20, by the way. Design for the majority is the thing we talk about all the time, and the question I would immediately ask as a philosopher is: who defines that majority, and what about the rest? This is the problem with consequentialism, and this is the problem with ethics in general. When someone comes to you and says, I have an ethical theory you can use, there's this challenge: ethics tries to define a specific way of thinking about a problem, and then there are all these edge cases where it doesn't work. In consequentialism, thinking about the consequences of our actions is great, but it results in excluding people. The idea of consequentialism is that you map out all the consequences in advance so you can steer around them and arrive at good decisions. In reality, the way we approach the world with consequentialism is to say: hey, there are all these problems, I'm not sure we can fix them. And then you smash into every single one of those problems and end up somewhere you weren't intending.

That's what you see in all these news stories. When Facebook opened up their API so that people could use their data to build things, they didn't intend for Cambridge Analytica to start skewing elections in the United States and other countries. That wasn't the intent, but because they didn't think of the possibility, it happened, and now they have to answer for it. The GDPR is the response to us doing this. The GDPR literally says: you had your chance to do this right, but you messed up, so we're going to shut down your playground and control it. That's what it is.

So how do we use consequentialism to actually make better design decisions? There are four things you can ask. What are the consequences of every design decision you make? Does this improve the common good of those affected? How do we measure the utility of our actions? And finally, who decides which users matter, and why? These are the questions you bring up in the design team meeting to make sure you're thinking about the consequences.

Which brings us to number two: how do we uphold our duty of care? Every time we make a decision, we're making decisions on behalf of other people, and we have a duty to those people to make sure our decisions don't harm them in some way. Sometimes people make interesting decisions that have unintended consequences and could obviously harm people. Take a research project on whether you can determine people's sexual orientation from their photos: an interesting academic exercise, but one with obvious negative consequences that should have been considered before the research paper was published and everyone was told this is possible. The same goes when a company decides to use facial recognition for things because it's cool, and then a government buys the technology and uses it to pick out of crowds people who might have committed crimes ages ago.
So duty ethics allows us to think about how the acts we perform impact the world by defining what is now normal and what is acceptable behavior. It's based on the categorical imperative, which has this very clunky definition: act only in accordance with that maxim through which you can at the same time will that it become a universal law. Wonderful text. What it basically means is: act the way you would want every other person to act in the same situation. Not "do the thing you want other people to do to you," but "do the thing you believe every person should do." The rightness of an act, according to deontology, or duty ethics, is judged by whether it is done out of duty to that principle as a whole. You're not acting a certain way just because you want people to act like that; you act that way because you believe that is how people should act, and that everyone should do it. So you do the right thing because your action sets a precedent, and you believe it's your duty to do the right thing so that your action sets a precedent, and it continues in that circle endlessly.

This brings us to the duty of care principle: you are responsible not only for what you put into the world, but for what it does to the world and its people. Once you start thinking about it, that changes a lot of the conversation. All these tech veterans are now going out in the media saying things like: you know, Facebook was designed to be addictive when I worked there. Or: you don't realize it, but you're being programmed by the thing I built. Or my favorite, and I'll quote from a newspaper article here: Justin Rosenstein has tweaked his laptop's operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. He was particularly aware of the allure of Facebook likes, which he describes as bright dings of pseudo-pleasure that can be as hollow as they are seductive. And he's the one who created the like button. So the guy who made the thing we're all addicted to says: it's addictive, so I don't use it. That's great, but you set a precedent that says we should have like buttons on everything.

So how do we use this? Once we know there's an ethical theory that says we have a duty of care, how do we actually put it to work? We ask: What norms are established when we make decisions? What do we signal to other people about what they can do? What duties of care do we have, and how do we uphold them? And should every other person or company in this position do the same thing I'm doing right now? If you can't answer that, maybe you shouldn't be doing it.

Which brings us to the third question: who do we become by doing this? This is probably the hardest one to understand unless you put it into some context. You may remember a couple of years ago there was an incident in the United States where Apple was asked to hack an iPhone to provide access to its contents. Apple said: we are not doing this. Not because we don't believe it's important to find out what happened here, but because if we hack this one iPhone, we set a precedent for every other iPhone to be hacked as well, and we can't do that. That decision was made on principle, not on the merits of this particular case. This is what's called virtue ethics. In virtue ethics we judge acts by what they do to the actor's character. When I do something, it changes me.
And my actions should be judged by whether they make me a better person. So what's a better person? Well, it depends on who you ask, which is what makes this tricky. But we can kind of cheat our way into it by saying: I think I know who I want to become, so I'm going to model the behavior of the person I want to be. Otherwise known as fake it till you become it. And it turns out that's actually a very effective way of making better decisions. Always ask: will I be comfortable in two years when I look back at the decision I just made? What about 10 years from now? Is it still comfortable to me? Do I become the person I want to be by making this decision?

Aristotle had a list of virtues, the things he said we should adhere to, and some of them make sense: courage and temperance and thankfulness and modesty and intelligence and logic, all this stuff. Today we have philosophers trying to make new lists of virtues. Shannon Vallor has made a list she calls the technomoral virtues: honesty and self-control and humility and justice, among others. That sounds great, but it's very grandiose. A better example of current virtues that we actually use and aspire to are these designers' codes of ethics and these Hippocratic oaths for people who work in tech, because they specify the type of people we want to be. You can read it right in the texts. They say things like: a designer is first and foremost a human being. A designer is responsible for the work they put into the world. A designer values impact over form. A designer takes time for self-reflection. These are virtues we can aspire to. And if you adopt that kind of mentality, you automatically have a way of judging your own decisions.

If this feels a little foreign, Fred Meyer from WPShout has made something very similar. He calls it the small business WordPress developer code of honor, and it says things like: I will not mislead my clients. I will treat my clients with respect. They seem a little obvious, but the reality is that unless we agree with ourselves that this is what we're going to do, there's a line somewhere that we haven't defined, and we might accidentally cross it.

The GPL is a great example of a virtue that everyone in this room aspires to. The GPL specifically says: I believe all people should be able to use this software and create from it whatever they want, and I'll build that in as a requirement. Matt talked about that a couple of hours ago here on this stage, this idea that no one can buy it up and no one can take it away from the people who use it. There are lots of examples of this. Accessibility is another. When the Norwegian designer Ida Aalen wanted to build accessibility into everything, she said: to me and the rest of the team I work with, accessibility is simply the right thing to do. I believe accessibility is the right thing to do, so that's why we do it.

And to do all this, we ask ourselves: what person or company do we become by doing this, by any act? What behaviors are we modeling, for others and for ourselves? And what virtues do we believe in and promote? So then we have consequences: what happened when we did something? We have deontology, or duty ethics: what duties of care do we have? And finally we have virtues: who do we become?
Which leaves the obvious question: what about the end user? That's where the last question comes in. Why are we doing this? There's this website called Have I Been Pwned. Have you been on it? It's always disturbing. You go in, put in your email address, and see how many people have your password, basically. When it was published, people went: wait a second. Someone hacks a bunch of email addresses, and this guy publishes the list? That sounds odd, right? That sounds like the opposite of what we want to do here. In reality, it's a great example of what's called the capability approach. The capability approach looks at ethics in terms of what happens to the end user: what capabilities is that person granted by doing something?

Google and Apple are both rolling out this new thing now that allows you to tell your phone how much time you want to spend with your phone. When you reach that limit, your phone says: you are spending too much time with me, please put me down and go live your life instead. That grants the user the capability to take back control from the thing that was designed to make them addicted. It comes out of the project called Time Well Spent, which looks at the addiction component of cell phones and all these other devices. This is a good example of the capability approach: the idea that we design things around the capabilities we grant to the people who use them, rather than around what we get out of it as service providers or companies or sellers of products.

Now, there's a very famous book by Simon Sinek called Start With Why, the one with the three circles: the why, the how, and the what. He says: why is not about making money, that's a result. Why is a purpose, a cause, or a belief. And this maps to a very old idea: asking why is a way of clarifying your desire to shape the world to your vision. So for every decision you make, when you ask why am I doing this, you have to think about the world you build as you're doing it. The capability approach is a newer school of ethics, which emerged in the 1970s, that judges the rightness and goodness of an act by whether it grants the end user the freedom to achieve well-being, and whether it grants people capabilities: real opportunities to do and be what they have reason to value. You're offloading the entire judgment onto what happens to the people you're designing for. In other words, an action or a design decision is right if it grants or enables, in those acted upon, your end users, capabilities in the form of real opportunities to do and be what they have reason to value.

So this concept of user-centered design needs an upgrade. Instead of thinking about user-centered design, with these users we think about, we should think about capability-centered design: what capabilities do we grant the users through our designs? Because every design decision we make carves a path that our users follow into their future, and right now the paths we've carved with our social experiment have not led where we intended. So instead of that mantra everyone keeps repeating, move fast and break things, I think we should adopt something new. When someone says move fast and break things, we should respond: give everyone the capability to do and be what they have reason to value. That should be our design principle.
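If you want something you can actually bring into a design review, here is one way the four questions could be written down as a checklist a proposal has to answer before it ships. This is a minimal sketch of my own in TypeScript; the EthicsReview shape and its field names are hypothetical illustrations, not a formal method from the talk.

```typescript
// Hypothetical sketch: the four ethical questions captured as a review
// artifact attached to a design decision. Field names are invented.

interface EthicsReview {
  decision: string;              // the proposal being evaluated
  why: string;                   // why are we doing this?
  whoWeBecome: string;           // virtue: who does this make us?
  dutiesOfCare: string[];        // norms we establish and must uphold
  consequences: string[];        // foreseeable outcomes, good and bad
  capabilitiesGranted: string[]; // real opportunities given to end users
}

// A review is ready to discuss (not automatically "ethical") only when
// every one of the four questions has an explicit answer on record.
function isReadyToDiscuss(review: EthicsReview): boolean {
  return (
    review.why.trim().length > 0 &&
    review.whoWeBecome.trim().length > 0 &&
    review.dutiesOfCare.length > 0 &&
    review.consequences.length > 0 &&
    review.capabilitiesGranted.length > 0
  );
}
```

Nothing here automates judgment; the structure just forces the conversation the talk describes to leave a written trace before anything ships.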
Heather Burns wrote an article for Smashing Magazine where she exemplifies this in describing privacy-centered design. She says: as the creators of applications and the data flows they create, we can play a critical and positive role in protecting our users from attacks on their privacy. We grant users the capability to control their own privacy, instead of doing it for them. So, applying the capability approach to our design practice, we ask: What capabilities are we granting and enabling in the end users? Do these capabilities give them the freedom to achieve well-being? And finally, what future are we building for them?

So, when I was in that hospital, with that box with the tiny human inside, and I asked those questions, what, how, who, and why, the reason I could be there and be comfortable with whatever happened next is that I could answer every question. What are the consequences? They want to treat the baby to make him better, and they know how that's done. How do they uphold their duty of care? They're doctors and nurses; they have the Hippocratic oath; they believe in what they're doing. I know what they believe in because I watch medical TV, because my dad is a doctor, because I understand what they're trying to do. Who are they, and who do they become? They're doctors and nurses, they're experts at this, they know what they're doing, and they always aspire to be something better. And finally, why are they doing this? They are trying to save a life.

The interesting thing is that when they make decisions, they ask the exact same set of questions, only in reverse. They start by asking: why am I doing this? Well, I'm trying to fix a problem. Who am I, and who do I become by doing this? Well, I try to meet my own standards and strive to do better every time. How do I uphold my duty of care? I follow the rules, I establish new best practices, and I do the right thing at all times. And finally, what are the consequences? Well, I should know, because I've studied this and I think carefully about it. It is this system that makes it possible for us to trust other people.

So rather than think of this as the onion I've been showing you, think of it as a bridge. When you're making a decision, you start by asking why: why am I doing this? Then, who do I become? Then, what duties of care do I have? And finally, what are the consequences? If you're examining an existing decision, you go the other direction. What are the consequences? You can see them. What best practices were established here? Who did I become by doing this? And finally, why did we do it? It is by crossing this bridge, in either direction, that you get the knowledge you need to have a firm conversation about what you're doing. This is what allows you to go to your client and question the ethics of something they want you to do, and not just say, this feels uncomfortable to me, but explain why, and start a conversation that moves you forward. The reality is that by asking these questions, you can take an uncomfortable situation and turn it into a great opportunity. Because once you turn a proposal or a design question or anything else into a question of what capabilities are we granting the end user, and how does this benefit them, and who do we become by doing this, and what best practices are we setting, and how are we protecting the end user, and finally, what are the consequences?
It becomes a different conversation from: I want to put the ad there because then people subscribe to my newsletter.

I ran through a hallway, and in front of me was a small child, running very fast and trying not to trip, because he knew that outside the door was a beach and an ocean. I took his hand and looked at him, and I realized that what I do when I design things is build a future. It's not merely designing things that look nice. It's not merely making something that works. It carves a path into the future that every person who interacts with it will follow. And this is what we all do. You all build the future with everything you do. Just think about that. You build the future. That is an awesome power, and an awesome responsibility. Standing there, holding Leo's hand in the water, I could see the traces of our footsteps as we walked down, and they are the literal path that leads into his future. I look at him and think: how can I do this? Seriously, who am I to do this? How do I know what's right? And somewhere in the back of my mind I can hear Robert M. Pirsig say: the place to improve the world is first in one's own heart and head and hands, and then work outward from there. Thank you very much.

Somehow we have time for questions. Does anybody have a question? Come on, be brave. I do not have cookies, I'm sorry. I have some. Do you want to give them away when people ask you questions? Go to the mic, please. Okay, go ahead.

Hi, Morten. I don't have a question. I just want to thank you. Thank you. Sorry, I'm feeling it too. I think that says everything. I work with WordPress, but I'm originally a psychologist, so I really connected with your whole talk and I'm really... I can't say anything else. Sorry. Just thank you very much. Thank you. Okay, we have a question over there. Let's go there first.

Hi, Morten. Hello. Thank you for all the work that you do. I remember when you published the first article about this ethics talk, and I assumed you were probably going to talk about it, because these are really good things, and thank you for speaking and sharing this with us and telling the story of your son. I also have a social science background, and it's funny to me, because that's the approach you want to take, right? We care about users. We care about people. We care about flesh and bones, right? They're human beings in front of us. And I would be remiss not to mention that we, the people on the privacy core component, are trying to make things happen. We spent a bunch of time yesterday informing new users about what privacy looks like and what data means. There's also an accessibility team, a design team, a community team, and a marketing team trying to talk about all these things within the WordPress space. So: practical next steps. We're at a WordCamp. How do we take some of these things and turn them into immediate action, in ways that are a little more practical?

Right, so this is the challenge, right? I give you all this information and you go: what am I supposed to do with all this? This is so much information. So here's what you do. All you have to remember are the four questions. Nothing else. You don't need to go to university and study eight years of philosophy to do this. You need to ask the four questions: Why am I doing this? Who do I become? What are my duties of care, and how do I uphold them? And what are the consequences?
By asking these four questions any time you have a complex decision to make, you'll discover that as you start discussing the answers, new avenues open up. People often hear "ethics" and think of it as a moralistic blanket that kills creativity. I think of it more as a hearth: something that helps you control the creative flames and direct them so they produce the most heat possible without burning the house down. And I can tell you from experience that implementing this method, just asking those four questions, in my life, in design, in everything else, helps bring about change, because you're not just thinking about the immediacy of what you're trying to solve; you're thinking about what it does. And when I say every design decision carves a path into the future, I truly mean it. You're not simply solving a problem. You're actually building the future. When you think about it that way, it seems daunting at first, and then you realize: I can build a future I'm going to live in, one I want other people to join me in. Is that a good answer to your question? It's not really practical next steps, but it is something: I gave you four questions to ask. It's the best I can do.

Well, at the end of the day, you're leading us toward, I think, a world where we have tools to be able to respond. Yes. And that's the key thing, right? If you give people a simple answer that says remember four things, it's a way to navigate these kinds of issues in a larger context, and I really appreciate the effort you put forward here, because it is a very good framework for navigating that. And what is your name? I'm Leo. Exactly, I remembered: it's the same name as my son. It's funny. And we've had a couple of conversations about it. It's funny. Your son looks very cute on screen. Thank you. Thank you. All right. More questions.

My name is Kate. I think when we talk about ethics and how they impact people, it's really important to think about the intersectionality aspects. So, for example, the Google telephone thing: I found it extremely creepy, but for people who have problems communicating on the phone, either because they can't hear, or they can't speak, or they have crippling levels of anxiety, it's actually life-changing, right? Similarly, you had the example about the army bases and predicting depression, but at least five years ago Target was able to very accurately predict whether women were pregnant, which can also carry a degree of physical risk. So I guess I have some feedback and an opportunity for you. The feedback is: you did not cover intersectionality at all in your talk. And the opportunity is: do you want to take the chance now to elaborate on that a bit?

So this is a really interesting thing, because my point in the talk is that when you start asking questions about things, doors open up for a further conversation. Take the phone-talking machine you just mentioned. That technology is amazing. The fact that you can tell a machine to make a call for you when you can't, for whatever reason, is actually a good thing in itself, because it allows access to things that previously weren't accessible, for a myriad of reasons. The problem with it isn't that you can do it; the problem is how it's presented, when the machine pretends to be a human being.
The problem is that you, at the other end of that conversation, don't know whether you're talking to a human or not. And the purpose of using ethics in design is to identify these issues and figure out how we work around them, so we get the good parts of this technology, the well-intended parts, without accidentally causing harm to someone else. Because as you say, there are a lot of people who would see this as a benefit; there are also a lot of people who would be directly harmed by this technology: the people at the call centers who no longer know whether they're talking to a human being or to a machine pretending to be one, or the companies that get a ton of calls from machines, because anyone can program any machine to do this.

This talk is not specifically about how we solve the world's problems; it's about how we start thinking differently, about starting conversations around those problems so that we identify issues like intersectionality, like the complexity of trying to serve a varied user base with varied needs, and identify the problems before they happen. Accessibility, and a lot of what surrounds intersectionality, often comes down to not understanding the full breadth of your user base, and then literally drawing a line and saying: anyone who falls outside of this defined area is irrelevant to me. That's what a lot of design decisions do. They say design for the majority, as in: the other people don't matter. And the people who fall outside are often the people who sit at the intersections of multiple different groups, so they are marginalized two or three or ten times more than other people. And because they fall not just outside the average user but well outside the average user, they become something that is ignored, until you introduce a process where you actively seek that out and ask the questions early on: why are we excluding these people? Is that okay? Who made that decision? And can't we make a solution that also takes these people into account?

In my city, Vancouver, we're having this conversation right now. The city of Vancouver wants to ban plastic straws, because we all know plastic straws are terrible: we get thousands of them, they go into the ocean, whales eat them, it's terrible. So the city of Vancouver just decided: we're banning plastic straws. And then immediately the accessibility community came and said: hey, plastic straws are the only way a lot of people are able to drink. And no, a steel straw won't solve that problem, because then you have to clean it, and there are a bunch of other problems; and paper straws don't work, because they get soggy. There are a multitude of levels to this conversation that go beyond plastic straws being bad for nature, so we need to come up with another solution. The councillors who came up with the ban then had to roll it all the way back and say: okay, we clearly didn't ask all the questions here. We need to start this entire conversation over, bring in more people, and get a better understanding of why we excluded these people. That's what this is about: giving you the tools to start those conversations.

All right, more conversations with Kate later. Yes, definitely. It feels like it. Last question, let's go.

Hello, thank you for your talk. I'm basically new to this topic of ethics in technology, and you got me really interested. So where do I go from here? Are there any books, apart from Start With
Why, or resources you can recommend? There's a lot being written about this, so I'll be self-serving and say I wrote an article that works through the same material in more detail, relating it specifically to web design. It's on Smashing Magazine, and it's called Using Ethics in Web Design. Heather has written a ton of content on GDPR compliance, which touches on the same subjects. Smashing also released an article last week, I think, about how we use psychology in design to manipulate or coerce people into doing things, why that's not okay, and how we can use the same ways of thinking without harmful consequences. And there are many books being published right now on this topic. There's one called Re-Engineering Humanity, which looks at the same kinds of ideas. There's Shannon Vallor's book on the technomoral virtues, Technology and the Virtues; it's very technical, but it's actually really good. And if you just come to me later, I have an endless list of reading material I can give you that looks at all these problems. Because this is complex, and the challenge is to not be overwhelmed when you start. Start with something really simple, try to find some way of getting a path into it, and then incrementally move your way forward. That's where the Pirsig quote came from: the way we change the world is by starting with ourselves, doing it small and doing it simple, and then slowly building outward.

Awesome, thanks a lot. Thank you. Thank you, everyone, that's all the questions. Thank you, Morten.