Yes, so today my talk is about hacking ethics in education. I made the title deliberately ambiguous, so it has several meanings. First I'll talk about hacking in education: some background on why we do hacking and what kind of hacking we do. Then I'll talk about the hacking ethics in education: how people feel about the hacking and how they work on it. And then I'll talk about getting hacking ethics into education, from different viewpoints: the students, the teachers, and, most importantly, the administration of the university.

Hacking in education: we have a System and Network Engineering master's programme, where I am a teacher. It started at the University of Amsterdam in 2003, so we have been doing this for eleven years. It deals with various subjects in system and network engineering: we have networking experts, but we also educate forensics experts, and we have various security-related courses throughout the year. There are many very intensive research projects, where the students work on a single topic for one month and really delve deep into a particular subject. Some examples of what they do within a month, or six weeks if it is during a course: one of the first hacks of the OV-chipkaart, the Dutch public transport chip card, was done by students in our programme. We did a hack on Tinder, which I may talk about if I have some time. Students found vulnerabilities in mobile banking apps. And the chip in the Dutch passport was cloned by a student of ours.

Now, hacking ethics in education: what students think they do when they're hacking. They feel very powerful and think they can hack anything.
They want to break stuff; they think they have the knowledge, they are really into hacking, and they want to scrutinize the security of products. Their first aim is to break stuff: they don't want to see whether something is secure or not, they want to break it.

If you look at what the teachers think the students are doing, it's shooting peas. Most of their attempts are not very convincing yet, and most of these products are reasonably secure and well thought out. It's interesting that the students learn to look at the security of different products, and it's good that they start shooting at them and see what that feels like, but most of the attacks aren't really effective. That doesn't mean the hacking itself is never effective, but above all the experience is very valuable for the students.

What the administration thinks they do is different. They're very scared. They think that these students are all very leet hackers who are breaking stuff in very nasty ways that the teachers don't see.

Now, the problem is that even if you're shooting peas, sometimes you hit the right spot and you break stuff. That is what happened with the mobile banking security. We thought, well, we can let the students look at mobile banking security; the banks probably have their act together. The mobile banking app had even been audited by a very professional company. But the students looked at it for a day or two and it was completely broken. During the demonstration they scared the banking official by showing him his own PIN.

But even if you're not breaking stuff, if you're shooting peas at somebody, they can get very annoyed. And when they get annoyed, very often the old boys' network kicks in: the upper staff of the company contacts the upper staff of the university administration, and the administration panics and freaks out. That's when we started working on the hacking ethics in education.
So, the hacking ethics in education: we wanted to show the administration that our students really are ethical hackers, that they think about what they're doing. The administration freaked out and did what university administrations always do when they freak out: they formed a committee. They formed an ethics committee for the computer science faculty. It covered the whole faculty and all non-standard research. The standard research that each of the research groups does doesn't have to go through the whole ethical evaluation procedure, but if you do anything out of the ordinary, then you have to, or at least should, go through the ethics committee.

The problem with this committee was that it consisted of eight group leaders of the computer science faculty, and group leaders have very busy agendas. So the committee doesn't meet very often and doesn't have much time to review projects. And as I mentioned earlier, we have a one-year, very intensive programme: when students are doing projects, we don't have much time to review those projects, certainly not if the review time is measured in months. That doesn't work with our education. Finally, there was a lawyer on the committee, but no ethicist, which I find a bit strange, because I wonder what makes it an ethics committee. But they consider it one. We had to deal with it, because it was the ethics committee for our computer science faculty, and they demanded that all of our projects go through them. So we said: maybe we can hack the process and form a committee ourselves. We formed an ethics committee for the master's programme.
And we wrote a procedure to make the students think about the ethical aspects of their security research. When they do a research or security project, the first thing they do is write a proposal of what they're planning. That way we know that they know what they're doing, that we consider the project valuable, and that it's interesting enough for the course that they can get a passing grade at the end. But we changed this so that they now also have to write an ethics paragraph in that project plan. At the end of the plan they have to say: what are the ethical impacts of this project? Who is going to get harmed if you actually break stuff? Are you impacting the privacy of user data? Are you getting access to user data? These kinds of things they have to write down and defend to the teacher.

Then the teacher of the course, together with the ethics committee of the master's programme, a very, very small committee, meets and evaluates all of the project plans that the students handed in. First of all, we check whether the ethics paragraph is complete: that they identified all the major issues, everything that could go wrong, and all the actors that may get involved. Then we give the project a traffic-light status, from green to red. Green means there are basically no ethical considerations involved: an evaluation of a security tool, or an offline analysis where they don't come into contact with any user data and there is no impact on user privacy, and so on. Yellow means they may come into contact with personal data, but in a very confined way; or we don't think they will break stuff, and if they do, it will not have a high impact on society.
Orange is where things get a bit serious: they possibly have access to a lot of privacy-sensitive user data, or if they actually manage to break things, it will have a very high security impact. And red is the status we use for projects that actually cross the line, but may be important to do anyway. Or not. Sometimes you have a project that crosses an ethical line, but the problem is so important or so politically relevant that you think it has to be done anyway. We agreed with the faculty ethics committee that any project classified as red is reported to them before we start, and we get their permission to do it. This has worked very well over the last year.

From the students' standpoint, we have now forced them to think about ethics while doing their research. This has led to very valuable insights for them, and it has been a very good experience. It also means that they think about the ethics before they start: they do ethics by design in their research plan and in how they set up experiments, and they think about privacy when setting up experiments with user data. Before, we had some ethics courses and classes during the year, but those were mostly abstract discussions: can you take an old company computer home with you? Can you look into users' mailboxes? You have the discussion, but at that abstract level it doesn't really touch the students at that moment, and it's hard to see whether you really made an impression about the ethics of the issue.
By incorporating it completely into the research, they are forced to think about it, and it affects them in a very profound way, because they have to think about their own research, the research they are doing right now: how do I do this in the right way, and what is the right way? Of course, they're still students, so we work with them to help them see these kinds of issues. The traffic-light status also helps: orange projects get more supervision by the ethics advisor, a little closer interaction, so that we monitor what they do and take care that they don't cross the line.

Finally, at the end of the line, if they do find issues, we do coordinated disclosure, or responsible disclosure as it's also called. This is done by the teachers, which is a conscious choice: we find that coordinated disclosure is not that widely accepted yet. It's getting better, but it's not there yet, and it usually takes a very long time. We had some projects before the summer, and it took more than two months to finish all of the coordinated disclosure procedures. That is time the students simply don't have, and if they're doing a project at the end of the year, they're basically gone, so we have to make sure the procedure still gets finished.

I have some time for the extra slides. Like I said: Tinder. This was a very interesting piece of research, and it showed very well how privacy by design and ethics by design worked in practice. A little bit of background. Everybody knows Tinder, right? Who doesn't know Tinder? Oh, okay. Tinder is a dating app. A very shallow dating app.
This is the interface. You swipe left or right to say whether you like a person; if you both like each other, you get matched, you can start chatting, and maybe go on a date or something. The power of Tinder is that it uses GPS to find matches near you. A match in America doesn't make much sense: the chance that you actually meet is very slim. So Tinder needs the GPS coordinates of its users in order to find matches near you.

The problem was that in July 2013, somebody looked at the traffic Tinder was sending and found that Tinder was sending the exact GPS coordinates of other users to your phone, in order to compute the distance between the two users. That is, of course, a bit of a privacy problem. So they fixed it by sending only the exact distance to the other person. But in February 2014, somebody looked at it again. The API that Tinder exposes is open, so you can just write a script that queries it: you take measurements from three different points and do trilateration, and you still find the exact location of the user.

That was all external research. Then in May 2014, our students started looking into this, because Tinder had tried to fix the trilateration by rounding: if a match is close to you, it just says that this user is within one kilometer. But the API also exposes a lower bound on the distance that you can set much lower than one kilometer, and if you do so, you get different matches. The students found that the API may say "within one kilometer", but by tuning the lower bound you can narrow that down to within 500 meters, or within 300 meters, and so on. It takes a bit more querying, but you can still recover the user's location.
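The trilateration step can be sketched as follows. This is an illustrative reconstruction, not the students' actual code: the three distance readings are assumed to come from somewhere (in the real attack, from narrowed-down API responses), and the planar geometry is an approximation that is fine at city scale. The `record_sighting` helper is likewise a hypothetical name, showing the hash-only bookkeeping idea that the students used to avoid storing raw identities.

```python
import hashlib
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Recover (x, y) from three probe positions and their measured
    distances. Subtracting the circle equations pairwise cancels the
    quadratic terms and leaves a 2x2 linear system, solved by Cramer's
    rule. Uses a flat-plane approximation, reasonable at city scale."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linearized system A * [x, y]^T = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("probe positions must not be collinear")
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

def record_sighting(log, user_id, location):
    """Privacy-preserving bookkeeping: store only a hash of the user ID,
    never the raw identity, next to the recovered location."""
    log[hashlib.sha256(user_id.encode()).hexdigest()] = location
```

With exact distances, three non-collinear probes pin the target down uniquely; with the rounded distances the API actually returns, each reading is an annulus rather than a circle, so in practice you need a few more queries to shrink the intersection, as the speaker notes.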
How did they research this? They applied privacy by design to their research plan. The naive approach is to just match everything, store everything, and then start comparing results. We didn't want that, because we wanted to preserve the privacy of the users. They used a test user, but students at the university are very enthusiastic Tinder users, so there were too many real users around and it was very hard to find their own test user among them. So I worked with the students to think about the problem and how to solve it in a privacy-friendly way, and what we ended up doing was storing only the hashed IDs of the users found at each location, and no other results. That way you can find the matches and verify the results without impinging on the privacy of the users. So this was a very good example of doing privacy by design and ethics by design in security research. And that is actually what I wanted to tell you about our ethics committee.

Thank you very, very much for this interesting talk. Do we have some questions? Please line up at the microphones now; we have some time for Q&A. Microphone 2, please.

Did your institution, I guess in this case the school, see any value in what you were doing, or did they only perceive you as a threat?

It took a while. We started this last year, and it took a while for them to see the value of this approach. But now that we are enthusiastic and actually getting very good results, they do see some value in it. But it took a very long time, yes.

Microphone 3, please.

Do you have any tips for organizations that are refreshing their security curriculum?

If you're refreshing your curriculum and you're doing projects, then I would very much recommend taking this procedure into account.
So that the students start with ethics in their projects immediately.

There are some questions from the IRC.

Yes, I actually have two. Is the ethics process neutral: to what extent does the ethics committee impose its norms and values on the students? And the second question: to what extent can students act if their norms and values are not in line with the committee's?

Okay. We try not to impose too much of our own values on the students. But the university is a very public organization and has its reputation to think about, so we have to be careful with that. We do have a very open debate about the norms and about the evaluations that the students make, and that happens in the discussion. So we try not to impose it. And the second question was?

To what extent can students act if their norms and values are not in line with the committee's?

In the end, the ethics committee and the teacher decide what we can do. If that doesn't align, and we don't agree that this is an important enough subject to do anyway, then the student is not allowed to do it within the curriculum. They are free to do it in their own time, but it won't be graded work.

At microphone 2, please.

Are you considering an ethical code of conduct, an oath students could take when they graduate from the university? Something like this is happening at the University of Groningen, at the physics department: students can take an oath that they won't work in the arms industry.

Well, you know, something similar: I think the EFF has a code of ethics. The EFF... I forgot the name; there is a code of ethics for system administrators, and we educate the students about that one. But we don't enforce it on the students after the curriculum, no.

I'm not saying enforcing. In Groningen this is just an optional thing you can do when you receive your diploma.

Okay.
No, well, we haven't really thought about it. It is something that we present, and present as a good thing, because it also helps you a little: if you're forced to make hard choices, you can point to this public document and say, this is an oath that I abide by, and it's not something I thought up myself, it's something that exists and that other people have thought about. So that's what we do, yes.

Okay, thank you. Microphone 3.

Hi. First, let me say that it's really great that you're doing this. I was at the University of Amsterdam ages ago, and we had some extracurricular classes on ethics at the faculty of computer science. But what I'm wondering is: why don't you expand the ethics question into the design phase as well? This is just security research, but there are also ethics questions in picking libraries, or in making design decisions on networking equipment or software. I think it would be great if it could be expanded. A lot of the flaws we've seen presented here at the CCC come from poor design decisions, or because people think: well, I can get away with it.

That's something we've already been working on subconsciously, but not something we have been doing explicitly. It may be something for the future.

Ah, it's working again. So there are some more questions from the internet. Yes, one: did the lawyer ever intervene or do anything, and did he understand the problems he was presented with?

The lawyer on the faculty ethics committee is actually very constructive, and she helps with discussing the law and where its boundaries are. I'm not completely sure whether she has actually stopped anything yet, but it is sometimes helpful to understand what the actual boundaries are and what you have to think about.

At microphone 2, please.

You talked about the traffic light, with red, green, and yellow.
The parameters you use to classify which project is red, yellow, or green: do you have those public somewhere? Can I have a look at them? That sounds interesting.

This is still very much a work in progress, but the whole procedure is online on the website. If you click through there, under info and then ethics, you'll find the evaluation procedure and some of the examples we have for each of the different classes.

As I can see no more questions: thanks a lot again, and give her another round of applause.