So good morning, everyone. My name is Arne, and today I hope to entertain you a bit with some GPG usability issues. Thanks for being here this early in the morning; I know some of you have had a short night. In short, for the impatient ones: why is GnuPG damn near unusable? Well, actually, I don't know. More research is needed, as always, because this isn't like using a thermometer; we're doing something between social science and security. But I will present some interesting perspectives, or at least what I hope you will find interesting perspectives. This talk is about some possible explanations that usable security research can offer for that question. Now, some context, something about myself, so you have a bit of an idea of where I'm coming from and what colored glasses I have on. My background is in mathematics, computer science, and, strangely enough, international relations. Professionally, I've been doing embedded system security evaluations and training, and I've also been a PhD student studying the usability of security. Currently, I teach the new generation, hoping to bring some new blood into the security world. Now I want to do some expectation setting and say what this talk is not about, and I'll also give some helpful pointers for those of you who are interested in those other areas. I will not go into too much detail about the issue of truth in security science; here are some links to interesting papers that cover this in a lot of detail. Neither will I be giving a security primer (there are some nice links to books on the slide), a cryptography primer, a history lesson, or an introduction to PGP. And interestingly enough, even though the talk is titled "why is GPG damn near unusable", I will not really be doing much PGP bashing. 
I actually think it's quite a wonderful effort, and other people have already done the PGP and GnuPG bashing for me. And as I've already mentioned, I will not be giving any definite answers; a lot of it depends. But then you might ask: it depends? Well, what does it depend on? For one, which users you're looking at, which goals they have in mind, and in what environment, what context, they're doing these things. So instead, I want to kindle your inspiration. I want to offer you a new view on the security environment, and I will also give you some concrete exercises that you can try out at home or at the office, plus some do's and don'ts and pointers for further exploration. So this is a short overview of the talk. I'll start with the background story to why I'm giving this talk, then an overview of the usable security research area, some principles and methods for usability, some case studies, and then some open questions that remain. So, the story. Well, it all started with this book. When I was reading about the Snowden revelations, I read how Snowden tried to contact Glenn Greenwald. On December 1st, Snowden sent an email to Glenn, writing: if you don't use PGP, some people will never be able to contact you. So please install this helpful tool, and if you need any help, please say so. Three days later, Glenn Greenwald replies: OK, well, sorry, I don't know how to do that, but I'll look into it. And Snowden writes back: OK, well, sure, and again, if you need any help, I can facilitate contact. Now, a mere seven weeks later, Glenn's like: OK, well, I'll do it within the next few days or so. And Snowden's like: OK, sure, my sincerest thanks. But actually, in the meantime, Snowden was growing a bit impatient: OK, why are you not encrypting? So he sends an email to Micah Lee saying: hello, I'm a friend. Can you help me get in contact with Laura Poitras? 
In addition to that, he made a 10-minute video for Glenn Greenwald, describing how to use GPG. I'll actually show quite a lot of screenshots of that video, and it's quite entertaining, because of course Snowden was getting increasingly bothered by the whole situation. Now, this is the video that Snowden made, "GPG for Journalists, for Windows". I'll just click through it, because I think the slides speak for themselves; take note of all the usability issues you can identify. So: just click through the wizard, generate a new key, but we need to enable expert settings, because we want 3,000-bit keys. We want a very long password, et cetera. And now, of course, we also want to go and find keys on the key server. And we need to make sure that we don't write our draft messages in Gmail, or in Thunderbird with Enigmail for that matter, although that issue has since been solved. So I think you can start to see why Glenn Greenwald, even if he did open this video, was like: OK, well, I'm not going to bother. And Snowden is so kind as to say, after 12 minutes: if you have any remaining questions, please contact me. So at this year's HOPE conference, Snowden actually did a call to arms. He said: OK, we need people to evaluate our security systems, we need people to go and do red-teaming, but in addition to that, we also need to look at the user experience issue. This is a transcript of his kind of manifesto. And he says: well, GPG is really damn near unusable, because, well, if you know the command line, then OK, you might be OK, but grandma at home is never going to be able to use GPG. He also notes that we're part of a technical elite, and he calls on us to work on the technical literacy of people, because what he explicitly warns against is a high priesthood of technology. So, OK, that's a nice call to arms. But are we actually up for a new dawn? Well, I want to go into the background of usable security. 
And I want to show you that we've actually been in a pretty dark time. Back in 1999, there was this paper, "Why Johnny Can't Encrypt", which described mostly the same broken interface. If you go back to the video of which I showed some screenshots and compare it with these screenshots from 1999, well, is there a lot of difference? Not really. Nothing much has changed: there are still the same conceptual barriers and the same crappy defaults. And most astonishingly, the paper describes a user study where users were given 90 minutes to encrypt an email, and most were unable to do so. I think that pretty much defines damn near unusable. So, a timeline of usable security research, from well before 1999 to now. Quite a lot has happened, although it's still a growing field. The idea of usable security was first explicitly defined in 1975, but it was only in 1989 that the first usability tests of security software were carried out, and only in 1996 that the concept of user-centered security was described. An interesting paper, also from 1999, shows how, contrary to the general description of users as lazy and basically as the weakest link in security, users are pretty rational beings who see security as an overhead and don't understand the usefulness of what they're doing. Then there's the study of PGP 5.0, which I've talked about already. There was also a study of the Kazaa network in 2002, and it found that a lot of users were accidentally sharing personal files, from personal pictures to, well, who knows, maybe credit card details. You never know, right? In 2002, a lot of the knowledge of usable security design was concretized in ten key principles, and if you're interested, I do recommend you look at the paper. Then a solution to the PGP problem was proposed in 2004, or actually it was proposed earlier, but it was tested in 2005. 
And it was found that if we automate encryption and automate key exchange, then things are pretty workable, except that users still fall for phishing attacks, of course. But last year, another study identified that making security transparent is all nice and well, but it's also dangerous, because users are then less likely to trust the system and also less likely to understand what's really happening. A paper this year identified another issue: users generally have a very bad understanding of the email architecture. Email goes from point A to point B, and what happens in between is unknown. So before I go on to the general usability principles that form the founding pillar of the usable security field, I want to give some examples of usability failures. You might be familiar with Project Venona. This was an effort by the US intelligence agencies to try and decrypt Soviet communication, and they were actually pretty successful; they uncovered a lot of spying. Well, how did they do this? The Soviets were using one-time pads, and if you reuse a one-time pad, then you leak a lot of information about the plaintext. What we also see happening a lot is low password entropy: we have people choosing "password", "123456", et cetera. And then there's what I just described, the study looking into users' mental models of the email architecture and how it works. At the top, we have a still pretty simplified description of how things actually work, and at the bottom, we have a drawing by a research participant who was asked: can you draw how an email goes from point A to point B? And it's like: well, it goes from one place to the other. OK. So these are two screenshots of an email. If I hadn't marked them as the plaintext and the encrypted email that will be sent, you probably wouldn't have spotted which was which. This is a pretty big failure in the visibility of the system: you don't see anything. 
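As an aside, the Venona weakness described above can be shown in a few lines: if the same one-time pad is used twice, XORing the two ciphertexts cancels the key entirely and yields the XOR of the two plaintexts. A minimal sketch (the messages are invented for illustration):

```python
# Why one-time pad reuse is fatal: c1 XOR c2 == p1 XOR p2,
# so the key drops out and the plaintexts leak relative to each other.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT TODAY!"
key = os.urandom(len(p1))   # a proper pad: truly random, message-length

c1 = xor(p1, key)
c2 = xor(p2, key)           # the mistake: the same pad used twice

# The attacker never sees the key, and yet:
assert xor(c1, c2) == xor(p1, p2)

# Knowing (or guessing) one plaintext now reveals the other outright.
print(xor(xor(c1, c2), p1))  # b'RETREAT TODAY!'
```

In practice the analysts did not even need a full known plaintext; statistical patterns in the XOR of two natural-language messages were enough to peel both apart.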
Ah, that's the point. Yes. So on the left, we have a screenshot of GnuPG. And as I've already described: command-line people, well, we like the command line, but normal people don't. What we also see is a lot of jargon being used, even in GUI applications; on the right, there is PGP 10.0. Now, I want to close these examples with... oh, you might be wondering, what is this? Well, this is actually an example of a security device from, I think, around 4,000 years ago. People could use this. So why can't we get it right today? Here's a little homework exercise: take a laptop to your grandma and show her PGP. Can she use it, yes or no? Probably not, but who knows? So now I want to go into the usability cornerstones of usable security. I want to start with heuristics. Some people call them rules of thumb, other people call them the ten holy commandments. For example, there are the ten commandments of Dieter Rams, and those of Jakob Nielsen and Don Norman, and it really depends on who you believe in, et cetera. But the cornerstone of all of these is that design is made for people. And actually, Google says it quite well in their guiding mission: focus on the user and all else will follow. Or, as a usability maxim: thou shalt test with thy user. Don't just give them the thing. But there is one problem with these heuristics and with this advice to go and test with your user: it's pretty abstract advice. So what do you do? Well, you go out into the world to get practice. You start observing people. One nice exercise to try: go to a vending machine, for example the ones at the S-Bahn, just stand next to it, and observe people buying tickets. It's quite entertaining, actually. Something else you can do is search for usability failures. This is what you already do when you're observing people, but even just Google for "usability failure", "GUI fail", et cetera. 
And you will find lots of entertaining stuff. So those were some heuristics, but what about the principles that lie behind them? Usability, or interaction design, is a cycle between the user and the system, between the user and the world: the user acts on the world, gets feedback, and interprets that. So one important concept is for things to be visible, for the underlying system state to be visible, and for the user to get appropriate feedback from the system. These are Don Norman's gulfs of evaluation and execution, a sort of yin and yang. There are two concrete problems to illustrate this. First, the button problem: how do you know what happens when you push a button, and how do you know how to push it? I unfortunately don't have a picture of it, but at Oxford station, the taps in the bathrooms say "push", and you need to turn. Then there's the toilet door problem: how do you know what state the system is in? How do you know whether an email will be encrypted? So this is a picture, and I'm not sure how well it displays, but basically there are two locks, one of which is actually broken. By pushing the button on the door handle, you would usually lock the door. But, well, it broke. So that must have been an entertaining accident. Another important concept, as I've already mentioned, is that of mental models. It's the question of what idea the user forms of the system by interacting with it. How do they acquire knowledge? For example, how do we achieve discoverability of the system, and how do we ensure that while a user is discovering the system, they are less likely to make mistakes? This is the concept of poka-yoke. Here's an example; you also see it with floppy disks, with USB sticks, et cetera: it's engineered such that users are less likely to make a mistake. Then there's also the idea of enabling knowledge transfer. How can we do this? One thing is metaphors. 
And I'm not sure how many of you recognize this, but this is Microsoft Bob. Traditionally, PC systems have been built on the desktop metaphor, but of course, you shouldn't take metaphors too far; Microsoft Bob took it a little too far. To enable knowledge transfer, you can also standardize systems, and one important tool for this is design languages. So if you're designing for iOS, go look at its design language, the iOS Human Interface Guidelines. The same for Windows: go look at the Metro design guidelines. And for Android, look at Material Design. Another interesting exercise relating to design languages, and to getting familiar with how designers try to communicate with users, is to look at an interface and try to decode what the designer is saying to the user. Yet another interesting exercise is to look not at usability, but at unusability. There's a pretty interesting book called Evil by Design, and it goes into the various techniques designers use to fool users into buying an extra hotel, car, et cetera. Ryanair is pretty much the worst offender, so a good example to study. So what if you want to go out into the world and apply these principles, these heuristics? The first thing to know is that design tends to be a process, whereby the first part is actually defining your problem: you first brainstorm, then you try to narrow down to concrete requirements, after which you go and try out the various approaches and test them. So what materials do usability experts actually use? Of course, there are expensive tools, Axure RP, et cetera, but I think the most-used materials are still Post-it notes, just paper and pens. And OK, where do you want to go and test? Well, actually go out into the field, to the ticket machine of the S-Bahn, but also test in the lab so that you have a controlled environment. And then you ask: yeah, how do I test? 
Well, the first thing is to go and get some real people. Of course, it can be difficult to actually get people into the lab, but it's not impossible. Once you have people in the lab, here are some methods. There are so many usability evaluation methods that I'm not going to list them all, and I encourage you to go and look them up yourself, because what works is also very personal: what works for you, and what works in your situation. When using these methods, you want to evaluate how well a solution works, so you look at some metrics, so that at the end of your evaluation you can say: OK, we've done a good job here, this can go better, and maybe we can remove that. These are the standardized ones: how effective people are, et cetera, you can read them. For a quick-start guide on how to perform usability studies, this is quite a nice one. The most important thing to remember is that preparation is half the work: first check that everything's working, make sure that you have everyone you need in the room, et cetera. And maybe most importantly: usable security is still a growing field, but usability is a very large field, and most likely all the problems that you are going to face, or at least a large percentage, other people have faced before. This book describes a lot of the stories of user experience professionals and the things they've come up against. So a homework exercise, if you feel like it, is basically analyzing who your user is and where they are going to use the application. Something else to think about is how you might involve your user, not just during usability testing, but also afterwards. Now I want to go into some case studies of encryption systems. There are quite a lot, and this is not all of them, just a small selection. I want to focus on three: the OpenPGP standard, CryptoCat, and TextSecure. So, OpenPGP. Well, email is now almost 50 years old. 
We have an encryption standard, S/MIME; it's widely available, but it's not widely used. And well, GnuPG is widely available too, but it's not installed by default. And if usability teaches us one thing, it's that defaults rule, because users don't change defaults. Now you might ask: OK, PGP is not installed by default, so is there actually still a future for OpenPGP? I would argue yes. We have browser plugins which make things easier for users. And you might say: OK, JavaScript crypto; I'll come back to that later. But when we look at Mailvelope, we see that its EFF scorecard rating is pretty decent, actually, compared to that of native PGP implementations. Also, Google has announced, and has been working on for quite some time, their own plugin for end-to-end encryption, and Yahoo is also involved in that. After the Snowden revelations, there has been a widespread surge of interest in encrypted communications, and this is one website where a lot of these projects are listed. One project that I would especially like to emphasize is Mailpile, because I think it takes a very interesting approach, where the question is: can we use OpenPGP as a stepping stone? OpenPGP is not perfect: metadata is not protected, headers are not protected, et cetera. But maybe once we get people into the ecosystem, we can try to gradually move them to more secure options. Now, what about CryptoCat? CryptoCat is an online chat platform that, yes, uses JavaScript. And of course, JavaScript crypto is bad, but it can be made better, and I think JavaScript crypto is not its worst problem. CryptoCat had a pretty disastrous problem whereby all messages that were sent were pretty easily decryptable. But actually, this is just history repeating itself, because PGP 1.0 used something called the BassOmatic cipher, which was also pretty weak. And CryptoCat is improving, which is the important thing. There's now a browser plugin, and of course, there's an app for that. 
And actually, CryptoCat is doing really, really well in the EFF benchmarks. And CryptoCat is asking the one question that a lot of other applications are not asking: how can we actually make crypto fun? When you start CryptoCat, there are silly noises, and there are interesting facts about cats. It depends on whether you like cats, but still, it keeps you busy. Now, the last case, TextSecure, also gets pretty good marks. And actually, just like with CryptoCat, the app store distribution model is something that I think is a valuable one for usability: it makes installation easier. Something that TextSecure is also looking at is synchronization options for your address book. And I think the most interesting developments are, on the one side, the CyanogenMod integration, so that people will have encryption enabled by default, because, as I mentioned, people don't change defaults. And, a bit more controversially, there's also the WhatsApp partnership. Of course, people are saying: yeah, it's not secure. We know, we know; the EFF knows. But at least it's more secure than nothing at all, because doesn't every little bit help? Well, I'd say yes, and at least it's one stepping stone. And all of these are open source, so you can think for yourself: how can I improve these? Now, there are still some open questions remaining in the usable security field, and in the wider security field as well. I won't go into all of these; I want to focus on the issues that developers have, on issues of end-user understanding, and on identity management. In the development environment, there's the crypto plumbing problem, as some people call it: how do we standardize on a cryptographic algorithm? How do we make everyone use the same system? Because, again, it's history repeating itself. With PGP, we had RSA exchanged for DSA because of patent issues, and IDEA exchanged for CAST5 because of patent issues. And now we have something similar. 
Because for PGP, the question now is: which curve do we choose? This is from Bernstein, who has a list analyzing the security of a large selection of curves. But how do you make pretty much the whole world agree on a single standard? Also: can we move towards safer languages? I've been talking about the usability of encryption systems for users, but what about for developers? So, API usability, and, as I've mentioned, language usability. On top of that, it's not just a technical issue, because of course we want to secure microchips, but we also want to secure social systems, because in principle we live in an open system, in an open society, and a system cannot audit itself. So, OK, what do we do? I don't know; I mean, that's why it's an open question. Because how do we ensure the authenticity of the Intel processor in my laptop? How do I know that the random number generator isn't borked? Well, I know it's borked. Then there's the issue of identity management related to key management, and who has the keys to the kingdom. One approach, as I've already mentioned, is key continuity management, whereby we automate both key exchange and encryption. One principle there is trust on first use, whereby, well, one approach would be to attach your key to any email you send out, and anyone who receives that email just assumes it's the proper key. Of course, it's not fully secure, but at least it's something. And this is really, I think, the major question in interoperability: how do you ensure that you can access your email from multiple devices? Now, of course, there's metadata leakage. PGP doesn't protect metadata, and your friendly security agency knows where you went last summer. So what do we do? We do anonymous routing; we send it over Tor. But how do we roll that out? 
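Going back to trust on first use for a moment, the idea can be sketched very concretely: remember the first key fingerprint seen for each address, accept silently on first contact, and raise an alarm only when a later message arrives under a different key. The class and return values below are illustrative, not any real library's API:

```python
# Minimal trust-on-first-use (TOFU) key store, as described in the talk:
# the first key seen for an address is assumed correct; a later change
# is the one event worth bothering the user about.

class TofuStore:
    def __init__(self):
        self._seen = {}  # address -> pinned key fingerprint

    def check(self, address: str, fingerprint: str) -> str:
        known = self._seen.get(address)
        if known is None:
            self._seen[address] = fingerprint
            return "first-use"    # silently pin the key
        if known == fingerprint:
            return "match"        # same key as before, stay quiet
        return "MISMATCH"         # key changed: warn loudly

store = TofuStore()
print(store.check("glenn@example.org", "AB12"))  # first-use
print(store.check("glenn@example.org", "AB12"))  # match
print(store.check("glenn@example.org", "FF99"))  # MISMATCH
```

The design choice is exactly the trade-off mentioned above: a man-in-the-middle present from the very first message wins, but every later attack is detected, and the user is only interrupted when something actually changed.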
And I think the approach that Mailpile is trying is interesting, and of course still an open question, but interesting research nonetheless. Then there's the introduction problem: I meet someone here after the talk, and they tell me who they are. Either I get their card, which is nice, or they say what their name is, but they're not going to spell out their fingerprint. The idea of Zooko's triangle is that identifiers are either human-meaningful, secure, or decentralized: pick two. So here are some examples of identifiers. For Bitcoin: lots of random garbage. For OpenPGP: lots of random garbage. For miniLock: lots of random garbage. So I think an interesting research problem is: can we actually make these things memorable? I can memorize email addresses, I can memorize phone numbers; I cannot memorize these. I can try. Then the last open question I want to focus on is that of end-user understanding. Of course, we all know that all devices are monitored, but does the average user know what worms can do? Have they read these books? Do they know where GCHQ is? Do they know that Cupertino has pretty much the same powers? Do they know they're living in a panopticon? Do they know that people are killed based on metadata? Well, I think not. And actually, this is a poster from the university where I did my master's, and interestingly enough, it was founded by a guy who made a fortune selling sugar pills. Now, snake oil: we also have this in crypto. And how is a user to know whether something is secure or not? Of course, we have the secure messaging scorecard, but can users find it? I think there are three aspects to end-user understanding: knowledge acquisition, knowledge transfer, and the verification and updating of this knowledge. As I've already mentioned, we can do dummy-proofing, and we can create transparent systems. For knowledge transfer, we can look at appropriate metaphors and design languages. 
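As an aside on the memorability problem just mentioned: one line of attack is to render fingerprint bytes as words rather than hex, in the spirit of the PGP word list. The tiny eight-word vocabulary below is invented purely for illustration; a real scheme would map every byte value to a unique word (often alternating word lists per position to catch transposition errors) so the encoding is reversible:

```python
# Sketch: render key-fingerprint bytes as words instead of hex digits.
# The vocabulary here is a toy stand-in for a real 256-entry word list.

WORDS = ["apple", "bridge", "cactus", "dolphin",
         "ember", "falcon", "glacier", "harbor"]

def words_from_fingerprint(fp: bytes) -> str:
    # Each byte is reduced modulo the toy vocabulary size; with a full
    # 256-word list the reduction disappears and no information is lost.
    return " ".join(WORDS[b % len(WORDS)] for b in fp)

print(words_from_fingerprint(bytes([0, 9, 18, 3])))
# apple bridge cactus dolphin
```

"apple bridge cactus dolphin" is still not a phone number, but it is far closer to something a person can read out loud after a talk than a 40-character hex string.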
And for verification, we can try to approach truth in advertising. And last but not least, we can do user testing. Because all these open questions that I've described, and all this research that's been done, I think it's missing one key issue, which is that the security people and the usability people tend not to really talk to one another. The open source developers and the users: are they talking enough? And if we want a new dawn, that's something that I think we should work on. Yeah, so from my side, that's it. I'm open for any questions. Arne, thank you very much for your brilliant talk. Now, if you have any questions to ask, would you please line up at the microphones in the aisles. Those who would like to leave now, I'd ask you kindly to please leave very quietly, so we can hear what the people asking questions will tell us. And those at the microphones: if you could talk slowly, then those translating will have no problems translating what has been asked. Thank you very much. And I think we'll start with microphone number four, on the left-hand side. Yes, so if you've been to any successful cryptoparty lately, you know that cryptoparties very quickly turn into not a discussion about software or how to use software, but into a threat model discussion, to actually get users to think about what they're trying to protect themselves from. And if a certain messaging app is secure, that still means nothing, because there's lots of other stuff going on. Can you talk a little bit about that, and how that blends into this model of how we need to educate users, what we're actually educating them about, and what they actually need to be using? Well, I think that's an interesting point. And I think one big issue is: OK, sure, we can throw lots of cryptoparties, but we're never going to be able to throw enough parties. With one party, if you're very lucky, you're going to educate 100 people. 
Just imagine how many parties you would need to throw. It's going to be a heck of a party, but yeah. And secondly, on the question of threat modeling: sure, that's helpful to do, but I think users first need an understanding of, for example, the email architecture, because how can they do threat modeling when they think that an email magically pops from one computer to the next? I think that's pretty much impossible. I hope that answers it. Thank you very much. So, microphone number three, please. Arne, thank you very much for your talk. There's one aspect that I didn't see in your slides, and that's the language that we use to describe concepts in PGP, and GPG for that matter. I know there was a paper last year, "Why King George III Can Encrypt", that tried to propose a new language. Do you think that such initiatives are worthwhile, or are we stuck with this language, and should we make as good use of it as we can? I think that's a good point, and actually the question is: OK, what metaphors do we want to use? I think we're pretty much stuck with the language we're using for the moment, but I think it does make sense to look into the future at alternative models. I actually wrote a paper that also goes into that, looking at the metaphor of handshakes to exchange keys. So, for example, you could have an embedded device on you, as a ring or a wristband; it could even be a smartwatch, for that matter. Could you use that shaking of hands to build trust relationships? That might be a better metaphor than key signing, webs of trust, et cetera, because I think that concept is horribly broken when you try to explain it to users. Thank you. And at the back, in the middle. Thanks, a question from the internet. Mold from the internet wants to know if you are aware of the pEp project, Pretty Easy Privacy, and whether you have any opinions on it. 
And another question is: how important is the trust level of the crypto to you? Yeah, so actually, there's a screenshot of the pEp project in the slides, in the part about why WhatsApp is horribly insecure. And of course, I agree. I've looked at the pEp project for a bit, and I think it's an interesting approach, but I still have to read up on it a bit more. Then, for the second question: how important is the trust in the crypto? I think that's an important one, especially the question of how we create social systems to ensure reliable cryptography. One example is the Advanced Encryption Standard competition: everyone was free to send in entries, and the design principles were open. This is in complete contrast to the Data Encryption Standard, whose design principles are, I think, still top secret. So yeah, I think the crypto is something we need to build on, but actually, the crypto is, again, built on other systems, where the trust in those systems is even more important. OK, thank you. Now microphone number two, please. Yes, Arne, thank you very much for your excellent talk. I wonder what to do with feedback about unusability in open source software. You publish something on GitHub, and you're with a group of people who don't know each other, and one publishes something, and another publishes something. How do we then know this software is usable? In commercial software, there are all kinds of hooks on the website or in the app that send feedback to the commercial vendor. But in open source software, how do you gather this information? How do you use it? Is there any way to do this in an anonymized way? I haven't seen anything related to this. Is this one of the reasons why open source software is maybe less usable than commercial software? It might be. It might be. But regarding your question of how you know whether the commercial software is usable: 
Well, one way is looking at what kind of statistics you get back. But if you push out something totally unusable, then you can expect that the statistics will come back looking like shit. So the best approach is to design usability in from the start, the same as with security. And I think that even if you want privacy for end users, it's still possible to get people into the lab and look at: OK, how are they using the systems? What things can we improve, and what things are working well? So you're saying you should only publish open source software for users if you also test it in a lab? Well, I think this is a bit like the discussion of: do we just allow people to build houses however they want, or do we have building codes? And this proposal of holding software developers responsible for what they produce, if it's commercial software, I mean, that proposal was made a long time ago. The question is: how would that work in an open source software community? Well, actually, I don't have an answer to that, but I think it is an interesting question. Thank you. Thank you very much. Number one, please. You said that every little bit helps, so we can have systems that don't provide almost any security, that provide just a bit of security, and they're still better than no security. My question is: isn't that actually worse? Because this promotes a false sense of security, and that makes people just use the insecure, broken systems, which will go and say: we have some security with us. I completely agree, but from the studies and interviews that I've done, I've met quite some people who still think that an email goes from one system to the other directly, who still think an email is secure. So of course, you might give them a false sense of security when you give them a more secure program, but at least it's more secure than email, right? Thank you. Thank you. 
There's another question from the internet. Yes, thank you. A question from the internet: what crypto would you finally recommend your grandma to use? Well, unfortunately, my grandma has already passed away, so her secrets will be safe. But actually, I think something where crypto is enabled by default, say, iMessage. Of course, there are backdoors, et cetera, but at least it's more secure than plain SMS. So I would advise my grandma to use, well, actually, I would first analyze what she has available, and then look at which option is the most secure while still being usable. Thank you very much. Microphone number three, please. So, just wondering: you said that there is a problem with GPG missing from the default installation of operating systems. But I think this is more a problem of which operating system you choose, because I don't know any Linux system today which doesn't have GPG installed by default, at least if you use the normal workstation setup. Yes, I think you've answered your own question: Linux. Unfortunately, Linux is not yet widely the default. I would love it to be, but if I send an email to, let's say, Microsoft and say, well, install GPG by default, they're not going to listen to me. And I think all of us should do a lot more of that, even with Microsoft, even if Microsoft is the devil for most of us. Thank you for that. We should be doing more of what? Making more demands to integrate GPG by default, in Microsoft products, for example. Yes, I completely agree. What you already see happening, well, it's not very high profile yet, but for example, I've referred to the EFF scorecard a couple of times, and that is some pressure to encourage developers to include security by default. But as I think I've also mentioned, one of the big problems for users at the moment is that developers can say, OK, my system is secure.
I mean, what does that mean? Do we hold developers and commercial entities to truthful advertising standards or not? I would say: let's go and look at what companies are claiming and whether they actually live up to that. And if not, can we sue them? Can we make them tell the truth about what is happening and what is not? So, we've got about two more minutes left, so a maximum of two more questions. Number two, please. Yeah, so every security system fails, and I'm interested in what sort of work has been done on how users recover from failures of this kind. Everything will get subverted: your best friend will sneak your key off your computer, something will go wrong with it, your kids will grab it. Has somebody looked at these sorts of issues in general? Is there research on it? Well, I think there is research on various aspects of the problem, but as far as I'm aware, not on the general issue, and there are no field studies specifically looking at what happens with key compromise, et cetera. We do have certain cases of things happening, but no structured studies, as far as I'm aware. OK, thank you. Thank you. Number three? Yeah. You mentioned Mailpile as a stepping stone for people to start using GnuPG and such. But you also said that an average user actually just sees a mail coming from one place and ending up at another place. Shouldn't we actually talk about how to make encryption transparent for the users? Why should they care about these things? Shouldn't it be embedded in the protocols? Shouldn't we stop using unsecured protocols? You talked a little about putting it in the defaults, but shouldn't we emphasize that a lot more? Yeah, I think we should certainly be working towards security by default.
But I think I've mentioned briefly that making things transparent also has a danger. Saying a system should be transparent is a bit like marketing speak, because actually, we don't want systems to be completely transparent. We also want to be able to engage with our systems and check that they are working as they should. So this is a difficult balance to find, but it's something you achieve through usability studies, security analysis, et cetera. Thanks. All right, Arne, thank you very much for giving us your very inspiring talk. Thank you for sharing your information with us. Let's give him a round of applause. Thank you very much.