It's very good to see you today. I'm very honored to be invited to spend the day with you. We're going to start off, I guess, a little bit more on the thinking side of things, because the talk I pitched and got accepted is about ethics in tech. But first, a little bit about me. My name is Patricia Aas. I am a C++ programmer, which I guess means I'm in the wrong room. I'm currently teaching C at a college in Oslo, but it's C on Linux, so I'm still in the wrong room. I specialize in application security, or where programming meets security. That means anything from secure programming in C++ to security culture, and I do that in my own company, called TurtleSec. But previously I've worked in many different places. My first real job was at Opera, where I started in 2005 as the first full-time employee working on the Unix/Linux desktop browser. So, who used Opera back in the day? Yay, awesome. And who used Opera on FreeBSD? Awesome. And who actually ever had plugins work? OK, that one guy? OK, I did that. I have to be honest, it was never a priority of the Opera CEOs that Opera should work on FreeBSD. It was never actually part of anybody's plan. If we had actually done Agile back then, this would have been totally off the books. But there was this really awesome volunteer called Ariane van Leeuwen, who decided to take it upon himself to do the FreeBSD port of Opera, doing the packaging and everything. Later on we hired him, and he became one of my close colleagues, and we ended up being a small team of people working on Unix/Linux-y things. So FreeBSD Opera was definitely a labor of love, and I'm glad people used it. We did weirder things. We did a version of Opera for Debian on SPARC. It had one user. We never told management about that one either. We also had Solaris on x86, also one user.
Probably the same guy. But anyway, moving on. So I've been doing this for a while; I've been programming now for 20 years. But I have a hobby. I read election documents. Then I petition for more election documents. And then I imagine how I can hack elections. Norwegian elections, specifically, are my hobby. It's a weird hobby, and I've been doing it for a while now. It was an accident: my master's thesis was building a prototype of the Norwegian election system, so I had to read all of the laws and everything to figure out how elections are actually done in Norway. Other people knit or do photography; I do this. So people ask me, why are you so cynical? Why are you always saying people are evil? Are you paranoid? The thing is, I'm not cynical at all. I'm ridiculously idealistic. I wouldn't do this if I wasn't really idealistic. I believe in democracy. I believe in the democratic process. And I believe that my work, or my hobby, helps that process. And it takes a lot of time, unpaid time, and a lot of things I hate doing, like talking to strangers. There have been strides in the time I've been doing this. But the reason I do it is that I believe election processes all over the world, not only in Norway, have become more fragile, even broken, through the introduction of computers. And that's really weird coming from me, a professional programmer; all I do is computers. I have a whole talk on why this is the case, and if you want to understand why Patricia says these crazy things, go to my Twitter: there's a whole talk in my pinned tweet. So I'm not going to talk about that today. But I am going to talk a little bit about how I do this work and what I've learned doing it.
And that's going to map onto some of the things that I hope we can change as an industry. Because you'd think that I do this by writing good petitions and having meetings and being serious and having people take me seriously. That's absolutely not true. Nobody has actually taken me seriously. Well, that's not entirely true: the Minister of Justice did, in 2017. And I have to be precise, because we've had a bunch of Ministers of Justice here in Norway since then. At the time, he said to Reuters that I was basically a threat to national security. So no, not really popular. The Ministry in charge of this and the Directorate in charge of this have probably played darts with my picture. Because the thing is, I have achieved what I've achieved not by being taken seriously, but by being really annoying. So I consider myself to be annoying as a service. Today, we are going to talk about embedded ethics. And embedded here means embedded in our culture, embedded in the products that we make, and embedded in us. What does it mean to be responsible for the things that we make? I think we sometimes need to think a little bit about the world that we've made. Because we made a digital world. We didn't design it that way; it kind of just ended up that way. We made a world that is totally digitized, from birth until death. People can't live their lives off-grid anymore. Some services are only available online; even public services are only available online. And we put all sorts of things online: our power services, our water services, our dams. There is a search engine dedicated just to finding installations that are online, and the weirdest things show up there. And we made this digital world piecemeal over time. It's a legacy system. So we made a digital world.
And then we made everybody live in it. Not just us: everybody. And the problem is, they never had a choice. We built a digital world around everyone in the world, and we have totally failed to make them literate in it. We made a world they don't even understand, a world where they don't understand how anything works. Basically, we taught them that it's magic. We made a magic world for you, where you press magic buttons and magic things happen. Most people don't even understand how a web page is rendered on their device. They don't know what that is. They just go, yeah, I have internet on my thing. In 2005, it was really, really hard to explain to people what I did for a living. People were like, so what do you do? I work at Opera. Oh, so you work at the opera? No, no, I work at Opera. We make a browser. What is a browser? So, you know that little E? Oh yeah, the internet? Yeah, I make that, just with an O. It's 2005, OK? We've gotten further than that now, but this is not long ago. And this is the world we've created, right? And we have failed them so many times by creating things that are harmful to them. And when we fail them, they don't even understand how we failed them. We can't even explain to them how we failed them. If you ask a regular, normal person what the whole Snowden thing was about, they have no idea. If you ask them about the Cambridge Analytica thing, they don't understand that either. We manage to fail them in ways that regular people don't even understand. I talk to a lot of journalists about election security. You wouldn't think so, because hardly anything gets written, and the reason is that most of them never write anything, because they don't understand what I'm saying.
I spend literally hours and hours explaining computers, elections, security, all of these things to journalists. And they find it very difficult. When you think about it, this is the simplest thing ever. We have paper ballots in Norway, physical paper, so this should be very understandable. We have document scanners connected to Windows machines that run software downloaded from the internet; of course these machines are connected to the internet. And this is how we count the ballots using computers. Of all the complicated things we do, this is quite simple. And they don't understand what I'm saying. And if they do understand, after a lot of explaining, they don't understand the implications. If I find something that is wrong, they'll go, yeah, yeah, but I talked to them and it was a mistake. And I'm like, yeah, I understand it's a mistake. Most of the time, when software doesn't do what we wanted it to do, it's a mistake; we didn't mean it to. But they generally don't understand the implications for democracy. They don't understand the implications for the voting process. And if I manage to get them that far, they don't know how to explain it to anybody. Because I just spent a few hours explaining it to a journalist; how are they going to explain it in a short article, to normal people? We made a digital world, and we are struggling to protect it. We are putting our users, the general population, our families at risk. And those who need to make the decisions, politicians, people with power, don't understand it either. I talk to people who have political power, social power, and they don't understand what I'm talking about. I'm trying to get people to change laws and regulations in Norway, and I'm talking to people who don't understand what I'm saying. And I didn't think it was that bad until I had to do it.
We sold a story to the general population that all of these magical devices they carry around are benign little monsters that do their bidding. They don't understand that these things are built by people who are powerful, who have motivations, who want something. That technology isn't neutral. And now we can't explain how we failed, because we sold this story, and they believe it. When I try to talk to people about the problems we're having with security and privacy, people think I'm paranoid. Because that is how these people have been portrayed. If you go back and look at movies from the early 2000s, the people saying Snowden-like things are always crazy, the hair-all-over-the-place, drawing-on-the-wall type of crazy. And then suddenly, there we are. So how are you going to say this to people? OK, you know all of the movies about how terrible mass surveillance was, and all of the crazy people and everything they said? It was worse. We can't explain how we sold their privacy. We can't explain it because it is mind-boggling how tracked regular people are, how the personal information they thought they were sharing with their families or close friends has been sold many times over to private companies, specifically to target them as individuals. We can't explain it; they don't understand it. We can't explain how we broke democracy by creating platforms for mass influence, for propaganda machines. We can't explain it because they don't understand it. They think Facebook is private. We can't explain how we embed devices in their bodies that we don't even fully understand and that we are not fully protecting. You can't explain to a person: yes, I have to embed this pacemaker in your body, but the thing is, it has a Bluetooth vulnerability. But you know, without it, you're going to die anyway.
So, you know, what are you going to do? We can't explain how our white-male-dominated industry keeps on creating things that are unsuited for people of color or for women. We have been doing this for decades, and it's only now occurring to us that this is actually a problem, because, surprise, the majority of the population of the world is not white males. A good chunk, 50%, is not male at all, and a really large portion of the world is not white. So we need to start thinking about how we can make things for everyone, because we are actually making things for the entire world, for people who are not like us at all. You have people with high-speed internet making things for people who have hardly any internet at all. You have websites made in Silicon Valley that don't even load in large parts of the world, because they're massive. But we can't explain these things. We can't explain the problems of our industry, and we need to explain them, because we need their help. But we can't, because they don't understand what we're saying. And that's such a problem. How can I push my little agenda, election security, forward when I'm talking to people who don't understand what I'm saying? They don't understand what we're saying, and they don't even believe it when we tell them. When I describe how Norwegian elections are run and the security problems, people think I'm crazy. So, because I've had some conversations about this, I'm going to talk a little bit about my mental model of the ethics in all of us. First you have the innate: the things you just can't help, the things you do automatically without even thinking about them. These are the things you see in your workplace culture, the things you just don't do. Nobody taught you these things. They weren't written anywhere. It was just how we do things around here. Or what we don't do.
Back in the day when I worked at Opera, one of those things, which I didn't even realize until I left, was: we don't compromise on our users' privacy. That was a thing. We never said it. It wasn't on a poster on the wall. It wasn't something somebody told me when I started at the company. It was just a thing. If somebody proposed a solution and somebody else saw a privacy implication and said, yeah, but that would expose some privacy thing for the user, then we'd go, okay, we can't do that. It was just: we don't do that around here. Those are the things you don't even argue about, don't even think about; you just do them. And those are highly influenced by your national culture as well. And that's a weird thing if you come from Norway. We have a very non-hierarchical society, and that translates to our businesses. We have a very flat structure, but we also have historically strong unions in Norway, which have introduced a lot of workplace legislation. It's really, really hard to fire somebody in Norway. Ridiculously difficult. The combination of that flat structure and knowing that it's really hard to fire you means that in Norway, people will actually speak up and say: I think this is wrong. I think this is stupid. They can actually say that to their superiors. And this is something Norwegians take for granted; we don't even think about the fact that we have that freedom at work. But when I was at Cisco, suddenly meeting a lot of people working in the States who have contracts where they can be fired for no reason the next day, then you start to think: I'm not going to bring up anything that's going to rock this boat. I have a mortgage. I have kids. I'm not going to risk losing my job to tell my boss that this thing is stupid. They'll have to deal with that later.
And then you have the more conscious ethical decisions. I've been talking to people in the medical field, and they have the big moral dilemmas: it's life or death, so I will do this because a life is at stake. And those are great, in a way, because it's very clear. It's delineated. You can make conscious decisions. You feel very powerful in the moment. But the problem is all of the stuff in between. Here you have right versus wrong, but then you have all of the gray. What do we do there? And what you also see, overlapping with both the innate and the conscious, are all of these social dynamics, which are very difficult to define. But there is some really interesting research in this area, and I want to talk about two related findings. The first one is the principle of social proof. This is the tendency people have, when they're not really sure how to react to something, to look around and see how other people are reacting, and to assume that the reaction the other people are having is the right reaction to have. And this is very interesting, because it means that as a social group, we have a tendency to react as a group. Which means that if you are in a room where everybody else seems to think something is okay, and you don't feel it's okay, then you automatically think: all of these people, who I respect and think are decent human beings, think this is okay. So it probably is okay, and I'm just misunderstanding something, or I'm just wrong. And then we don't speak up. So you get: nobody else is saying anything, so I guess it's okay. Or: we've always done it this way, so it must be okay, right? Or: all of these people seem to think it's fine, so it must be fine, right? But as you've probably guessed, this very quickly leads to something else: pluralistic ignorance.
And this is when everybody in the room is doing the same thing: looking around at everybody else and thinking, nobody else is saying anything, so I guess it's fine. This is a situation where everybody has a bad feeling about the thing, but nobody actually says anything. And it leads to some interesting phenomena. The bystander effect comes from this. The bystander effect is very interesting, and they've done lots of studies on it. Basically, for your information, and this might be useful to you: if you fall down, or get sick, or have some crisis in public, you are much more likely to get help if only one person saw you. If more than one person saw you, the chance that you will get help at all plummets. And this is really surprising. You would think: there are more people here, so there's a better chance I'll get help. But because of pluralistic ignorance, those people will look at each other, see that nobody else is doing anything, and conclude it's probably fine. You can break out of pluralistic ignorance, and this might be useful to you if you ever fall down and have a stroke on the street. Look somebody straight in the eye and say: you, call an ambulance, I'm sick. That works because you are calling out that it is an actual emergency, and somebody has been given responsibility. And what you see then is that the whole bubble kind of bursts, and everybody starts coming in to help. It's really crazy. But this is also what you see in many ethical situations. When it's investigated afterwards, you realize: there were so many checks and balances, there were so many people who should have said no, but nobody did. And that's because everybody was waiting for somebody else to say something. So sometimes you have to be that person.
You have to be the one who raises their hand and says: you know, I was wondering, is this really a good idea? The idea that if it was bad, somebody would have said something, is just not true. If you are told by your boss to do something unethical but legal, what recourse do you have? This is a problem we have in tech, because many other industries, like medicine, law, or accounting, actually have ethics boards. They have licensing. You can lose your license if you do something unethical, even if it is legal. We have no such protections. If your boss tells you to do something, what recourse do you have? Well, in Norway, we argue. We have that privilege. We can argue a lot, and oftentimes we get away with it; we might actually avoid doing the thing. But if our boss still tells us to do it, then we basically either do it or we quit. Those are our options. In the Volkswagen case, an engineer went to jail. It wasn't an executive that went to jail; an engineer went to jail. "I was just following orders." Yeah, we have a lot of history with that one. And the thing is, it's happening right now. In the US, ICE is being aided by technology companies to put people in camps. And the same thing happened with tech companies and the Nazis: tech companies helped the Nazis build databases of Jews. This is not new. We've been doing this for a while. But the thing I came back to over and over again, when thinking about how we can fix this, is: how are we going to protect whistleblowers? Because the thing is, you are all by yourself. What if you see something that is really bad inside of your company? Who is going to protect you? Seriously, how are we going to protect them? Because we are under NDAs, right? We are in organizations where we are not even allowed to tell anybody what the problem is.
So how did other disciplines solve this problem, or try to? Basically, they made codes of ethics. We did that too; there have been a bunch, and they've been at it for decades. But the other disciplines enforced theirs through unions and professional associations, and some even got them put into law. We don't have that. We are also really bad at being in unions in tech. And this is something that we need to change. Because we have no enforced code of ethics. We don't even know what is right and what is wrong. We have no body to even evaluate it, and no consequences. Besides, you know, maybe somebody doesn't want to hire you, but then you just remove that company from your CV and move on. But the thing is, do we even know if we're harming people, or democracy? Because half of the time, what we're making is a component of something else. But sometimes you have things like this: we make products to "protect" children, and I say protect in quotes, because that's also debatable, and those products are used to control and abuse intimate partners. And when you talk to the people who work at these companies and tell them this, they're shocked. Nobody thought it could be used this way. We're making image recognition software that is used to identify protesters. I'm sure nobody who was making that image recognition software ever thought it would be used to identify and jail protesters demonstrating for the right to speak their minds. We are building the infrastructure of our countries on hardware we can't inspect, and on binary-blob drivers and firmware. Even if we think our own work is okay, we're putting devices inside people's houses, in children's rooms, inside people's bodies, without fully knowing what is in them or what they do. So people say: oh, we need regulation. We need more regulation.
But I've been talking to the people who make laws, and they have no idea how to regulate us. They believe our propaganda. When I talk to them and say, okay, but we have security problems around these computers, they're Windows machines, the most popular operating system for malware, they look at me and go: yeah, but it's a machine. You know, machines, they can count. I don't know how to talk to them. They say: I have a calculator; if I do two plus two, I get four. And I'm like, okay, I'm a programmer; if you do two plus two, I can make it be five, every time. They also tell me: yeah, but the computer gives the same answer every time. And I go: yeah, we pride ourselves on that. That's sort of what we do. If it wasn't deterministic, we'd probably call it a bug. Even the simplest things, we can't explain. But there are some lights in the dark. I really love the GDPR. It has cost our industry a ton of money, but it actually puts the focus where it should be: on the end users and their data and their right to control that data. But then we have weird things, like the repeal of net neutrality, or the EU copyright directive, which is strange. So basically, we're trapped. We're incapable of regulating ourselves, because we have basically no power; we're all individuals. And we are unable to be regulated, because the people who are supposed to regulate us don't even understand what we're doing. They don't even understand what we're saying. We're not able to have the public debate that we really should be having with the population. We should be telling them: these are our problems. What should we do? How can we best serve the general population?
But we're unable to have a public debate, because informed reporting is practically non-existent. When you read articles about computers, they hardly ever make any sense at all. Why didn't we teach the population? We made a digital world and made everybody live in it, but we left them illiterate in that world. They don't understand the world we made them live in. Why didn't we teach them? Could we even? Could you scale that? Could you say: we're going to upskill the whole population of a country? Could we do that now? In the 90s, Norway made an attempt at this, with two things called hjemme-PC-ordningen and Datakortet. Hjemme-PC-ordningen was a tax rebate you could get for buying a computer to have in your house, which made it possible for a lot of people to have a computer in their homes. Datakortet was basically a certification in basic computer skills for regular people. Basic computer skills here meant things like how to use Word, and how to turn it off and on again. So any sysadmins here can bless Datakortet for people being able to turn it off and on again. But the thing is, it stopped there. These were the Norwegian government's attempts to make the population computer literate. But we interpreted computer literacy too narrowly. We interpreted it as being able to operate a computer with a GUI, and, if we're being truthful, our interpretation of computer literacy was: can you use Windows? Today, most people own not one but many computerized devices. Most people have many different operating systems running in their houses. Many of them are Unixes or Linuxes that they don't even know are there, inside their fridge or their oven or their induction cooktop. We have so many computerized devices in our homes, and now we're putting them all online, because, you know, internet is good, right?
But they don't understand them. Regular people don't even understand what an operating system is. I've tried to explain to people what Linux is, and they're like: okay, so is that different buttons? I don't know, it has something to do with computers, right? The problems we have are fundamental. They are fundamental to our discipline; they are fundamental ethical questions. Who are we responsible for? Because what we are making is for people, and those people become subject to our creations. We have fundamental problems, but they are really difficult for regular people to grasp, and we can't communicate them. But there are attempts. This is one of the earliest: the ACM Code of Ethics and Professional Conduct. They tried to make a code of ethics. And that's great, you can write a paper, but... I've tried to read all of these codes of conduct, and I think the most important part of this one is 1.2: Avoid harm. Avoid harm. It sounds a lot like Google's "don't be evil", and we all know how that went. So: avoid harm. Can we do that? Can we, as individuals and as professionals in this business, say that that is something we can do? Something we can stand for? That when we're in the office and somebody says, let's collect all of this personal information about our users and sell it to an ad agency, we can say no? I'm an IT professional, and we don't do that. Can we create that culture, where there are certain things we just don't do? Avoid harm. So what about that regulation the other professions built? How did that work? How did they build it? Like I said before, Norway has a history of powerful unions, and we don't appreciate that enough, especially in tech. The powerful unions in Norway created so much of what is taken for granted when it comes to workers' rights here. You don't have to be in a union, because these things have slowly but surely been pushed into law and regulation.
Even if you're not in a union, you are privileged by the work these unions have put in to make sure your worker protections are in place. And that's not true for many places in the world. We could ask the unions that focus on tech to get together and make a common ethics board, because that's what happened with lawyers, with medicine, and with accounting in Norway: they had ethics boards connected to their unions and associations. The lawyers even, relatively recently, managed to get their ethics rules put into actual regulation, so it is now law. And that provides protection for them as individuals: if someone asks you to do something unethical, you are protected when you say, no, I can't do that, that is against my ethical obligations. We don't have that today, and we have no place to go. What if someone asks you to do something unethical and you don't want to do it? What do you do? You are trapped inside your business. The unions could protect whistleblowers, give them access to lawyers, give them access to more than what they are by themselves. And we could have the unions be the body that concentrates on the ethics of our industry, not the profitability of our industry. Because the thing is, if you talk to people in tech, we all want to do the right thing. We just don't know how to do that and still get paid. I'm sure you all feel that, because I'm sure very few of you actually manage to do BSD things and get paid for it. So it would be nice if we could do the right thing and get paid at the same time. But I want to give you something a little more hands-on that you can use today, because all that is more of a future vision. You have the problem of social proof. If everyone is looking to everyone else to see what is okay and what is not, and you are the one person in the room feeling that this is not okay, what do you do in the meeting?
In the meeting where somebody says we are going to sell all of our end users' personal data to the ad company, what do you say in that room to break through this seeming consensus that this is fine? I ask a very simple question: could you justify this to a journalist? What would this look like on the front page of a newspaper? Would you be perfectly comfortable sitting down and explaining to a journalist exactly why this is fine? Some people might actually get past that one: yeah, I can probably trick a journalist, because, as we already discussed, they don't understand what we're saying. Just use some techy terms, and they'll have no idea what I said. But then: what will the experts say? Because they will also read this in the newspaper. And the moment you lift your head above this particular contract and start to think, how will this affect our brand, how will this affect how people view us, that is often enough to snap people out of it. People say: you know what, it's not worth it. If this becomes front-page news or goes on Hacker News, we're basically going to lose all of our customers. Nobody's ever going to trust us again. And that might be enough to break through and stop something, without having to risk too much. But just a pro tip from someone who's been doing this for a while: if all else fails, if rationality really doesn't work, then perhaps try being annoying as a service. Thank you. We do have some time for questions, and there's a microphone there, so if you have questions, you can go to the mic. Nobody? Ah, sorry. Okay, go get some coffee then. Thank you so much. Yes? You have a question, there's a question. I mean, thank you for this. I like it a lot, and actually I came independently to your advice, and the thing with the press works perfectly. I tried it. Yeah. And with the GDPR especially in the background, I see.
So, and I don't want to signal any disrespect, because what you're saying is very important, but I have to ask you about the artwork. Yes. I've looked at your website as well, and I want to ask, how do you... I think I have... give me a second. Okay. The photos are from Pixabay, and they're Creative Commons. Okay, good. But have you ever had graffiti painted for you? No, not yet, but that's a goal for the future, definitely. Okay, thank you. Thank you. Nobody else? Okay, thank you. Bye.