Next speaker is Wilfried, and he will talk about the hairy issues of end-to-end encryption in instant messaging. Let's welcome Wilfried.

So, can everybody hear me okay? Yes. Right, then to properly introduce myself: I've got about 20 years of experience with instant messaging in healthcare. I'm also a member of the XMPP Standards Foundation, and I've seen about four standards for end-to-end encryption come by. And I can already tell you I'm not happy with any of them. What I'm going to say is not the opinion of the XMPP Standards Foundation, it's mine. So if you are angry, don't go to the XMPP Standards Foundation, come to me. This talk has two parts. First I will take a look at the threat model behind end-to-end encryption, and then I will look at some practical issues with end-to-end encryption.

But first, let's set the stage a bit to make the difference clear. With connection encryption, you have a connection that is encrypted to the server. There, everything is decrypted, processed in plain text, passed on to another client or to another server, encrypted again, decrypted again there, and so on. Those servers process the messages in plain text, and they also need routing information. End-to-end encryption takes a different approach. You encrypt and decrypt at the endpoints, so the servers only see encrypted messages. But it's important to note that the server still needs routing information; it still needs to know where to send the messages. So what's the added value? The content is never decrypted in transit. And that's very useful if you don't trust your servers. But again, for the routing metadata, you still have to trust your servers. So what are the attack scenarios usually named, the things end-to-end encryption is supposed to be used against? Very often the secret services are named, because they love to do large-scale monitoring, love to tap everything. So end-to-end encryption could be a powerful weapon against that.
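[Editor's note] The hop-by-hop versus end-to-end difference the speaker describes can be sketched in a few lines of deliberately toy Python. The XOR "cipher" and the envelope format here are illustrative assumptions, not any real protocol; the point is only what each party gets to read.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from SHA-256 -- illustration only, not a real cipher.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def hop_by_hop(sender_key: bytes, receiver_key: bytes, plaintext: bytes) -> bytes:
    # Connection encryption: the server decrypts on arrival...
    at_server = xor_crypt(sender_key, xor_crypt(sender_key, plaintext))
    # ...processes the plain text, then re-encrypts for the next hop.
    return xor_crypt(receiver_key, at_server)

def end_to_end(shared_key: bytes, route_to: str, plaintext: bytes) -> dict:
    # E2E: the server only ever sees the routing header plus ciphertext.
    return {"to": route_to, "body": xor_crypt(shared_key, plaintext)}

msg = b"meet at 10"
env = end_to_end(b"alice+bob", "bob@example.org", msg)
assert env["body"] != msg                            # server cannot read the content
assert xor_crypt(b"alice+bob", env["body"]) == msg   # but the recipient can
assert env["to"] == "bob@example.org"                # the routing metadata stays visible
```

The last assertion is the speaker's point: even with end-to-end encryption, the `"to"` field is in the clear, so the server (and anyone watching it) still learns who talks to whom.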
And the other ones are the Facebooks and the Googles and the Amazons. They really like to analyze everything you send so they can profile you and show you nice advertisements. Well, we know both of them exist and both of them do nasty things, so those are plausible attack scenarios. But let's have a closer look at the secret service attack first. How is it performed? Well, the first step is that in some way you attract the attention of a secret service. The next step is that they start to analyze your network, and they use metadata for that. And when they think you are interesting enough, they have some options: they can start a tap, or they can hack your devices. Interpol published some very interesting reports this year. In only 15% of the cases were they interested in the content of the messages. In 80% of the cases the most important thing for the investigation was the metadata, not the content. So end-to-end encryption hardly protects against this type of attack. And hacking your devices is attractive anyway, because there's lots more data to find there. And the big irony, I hope you can read the slide, is the United Nations banning WhatsApp: that nice end-to-end encrypted messenger could itself be used as an attack vector. It had some nice holes in it that could be used to hack your phone. A very ironic combination.

But let's move on to the big companies. What does a Facebook do? Well, they map a social graph. Then, based on the social graph, they infer some properties about you. And then they sell advertisement based on those inferred properties. A nice example, from the line of work I'm active in: patients of the same practice all got friend suggestions for each other. Very nice for medical secrecy. But this attack is done purely with metadata, not with content at all. So end-to-end encryption is useless here again. So we have to come to the hard conclusion that end-to-end encryption hardly protects against either of these attack scenarios.
But it does have a really good use case, and I've been there myself: it works for the server operators. Because when a police agent comes to me with a warrant saying they want the content of the conversations, and it's end-to-end encrypted, I can say: I don't have it. I don't know it. I can't give it to you. Or when I want to put part of my infrastructure in the cloud, say Google Cloud, and I think, well, maybe I don't trust Google: then I do the key handling on a server I trust, mine, and all processor-intensive operations, all routing, all things like that, I place on the Google servers, without Google ever getting access to the content. And in that way, I use end-to-end encryption to protect myself against claims that I handed the content of the conversations to Google. So end-to-end encryption without metadata protection does not protect the end users; it protects the server operators. Think about that again before you start implementing end-to-end encryption.

So let's get to the second part of my talk; I'm quite nicely on schedule. There are practical issues. Some of them are not solved well right now, or are often done wrong. And some of them may be solvable, but need some very new techniques. To start with the first one: store and forward. It's a well-known situation. You are at a conference, and you have a bad Wi-Fi connection, and you want to send an instant message. Sending it just about works on that Wi-Fi, but the other person is also at the conference and doesn't have Wi-Fi at the same moment. So the message is stored for some time at the server, and forwarded later on. Many modern systems for end-to-end encryption like to offer something called forward secrecy. That means that even if your keys are compromised later on, past messages are still safe. And that's done by using rotating keys, rotated quite often.
Well, the faster you rotate, the shorter the usable lifetime of a message becomes, because after a rotation you can't read it anymore. So you have to make a choice: rotate slowly, and allow messages to be stored and forwarded for a long time, maybe weeks or months, or rotate quickly, and messages become unreadable after a short time. There's always a trade-off. You can't have both perfectly at the same time. Well, that's workable; that's not the biggest issue.

It really becomes interesting when you look at audit trails and archiving. And this is really a question of what you are protecting against, which scenarios you use your end-to-end encryption for. A human rights activist, for example, would say: it's very possible that I'm searched, that my devices are seized. If there's anything on them that can be traced to human rights activity, they will prosecute me, torture me. So please, no traces at all. Well: process everything in memory only, use fast-rotating keys, and so on. And you can get quite far in that direction. But when you're a medical doctor, the people I'm talking about all the time, they say: security means having a good archive, so that we can see what happened, so that we have an audit trail, so that we know who did what. And no storage on the device, please. There was a nice tweet by one of my colleagues, working for another company. He said: the devices used in healthcare are far less secure than the servers. So maybe you should trust the servers more and the devices less. And that is a problem when you want to create an archive and use end-to-end encryption. Because when you create the archive at the endpoints, you have to trust the mobile device of the nurse who lost it. And when you create the archive at the server, you have to trust that the server is not misusing it.
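[Editor's note] The rotation trade-off the speaker describes can be made concrete with a toy hash ratchet, a minimal sketch, assuming a throwaway XOR cipher and SHA-256 as the key-derivation step; real protocols use proper ratchets and AEAD ciphers:

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    # Derive the next key by hashing the current one. Once the old key
    # is deleted, past ciphertexts can no longer be decrypted: that is
    # forward secrecy.
    return hashlib.sha256(b"ratchet" + key).digest()

def crypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR cipher, illustration only (messages under 32 bytes).
    ks = hashlib.sha256(b"stream" + key).digest()
    return bytes(a ^ b for a, b in zip(data, ks))

key = hashlib.sha256(b"initial shared secret").digest()
ct_monday = crypt(key, b"offline message, Monday")
key = ratchet(key)              # Tuesday: rotate; the old key is gone
ct_tuesday = crypt(key, b"message from Tuesday")

# The current key still decrypts Tuesday's message...
assert crypt(key, ct_tuesday) == b"message from Tuesday"
# ...but Monday's ciphertext is now unreadable, even for its owner.
assert crypt(key, ct_monday) != b"offline message, Monday"
```

The second assertion is exactly the store-and-forward problem: if the recipient was offline across the rotation, the queued Monday message dies with the old key. Rotate slowly and you weaken forward secrecy; rotate fast and you shorten the delivery window.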
And that more or less defeats the idea of end-to-end encryption, because you wanted it precisely for the case where you don't trust the server. Well, of course, some smart people here will say: let's re-encrypt it, create an encrypted archive. Yes, but then you have to re-encrypt it to a static key, and that defeats the idea of perfect forward secrecy. The next thing, once you have your encrypted archive, is access management. Who can get access to that archive? Is it still the secret service you wanted to lock out? And a very important property of such archives is that they can't be tampered with. If you can change somebody's medical history, you can maybe kill somebody. So you really want to keep such records intact. It's not that easy, and certainly not easy when you want to combine it with end-to-end encryption. Or, as Dave Cridland nicely put it: imagine my fun. No storage on the device, end-to-end encryption, and a searchable archive. Good luck. I've been sketching possible solutions to this today with some people from the XSF. It takes lots of cryptography to get there. It's not impossible, I would say, but it takes lots of state-of-the-art cryptography, and we're not yet at the point where we can combine those things the right way.

And then there's the next hairy issue when you start implementing end-to-end encryption, because it's quite easy when you send from one device to one other device. But that's not the most common case. There are lots of group chats. And it's very nice if you can use multiple devices and have the same chat transparently across all of them, switching from your mobile phone to your laptop. There are things that are quite often done to get that running. Mistake one: share one single secret key across everybody, or across all devices. Well, even if you manage to distribute that one secret key securely enough, the moment you stop trusting one person or one device, you can't get that one person out.
You need to do it all over again. So you really get yourself into trouble there, and there's also no way to securely communicate that there's a problem. Another thing that's often done: well, one-to-one was quite easy, so if I want to send a message to you, I encrypt it for you, and for you, and for you, and for you, and then I send it to each of you. With just you here on the front row, that's quite easy. When I have to do it for everybody in this room, I really get into scalability problems, certainly when I want to send or receive on a mobile phone. Good luck. So what you really want is some magic box that can run on the server and re-encrypts the message to every one of you, without being able to see what's inside. Imagine that. It's possible. It's great. It's really lovely. You can create group keys with the Diffie-Hellman exchange. I don't know if you know the details: you combine keys in a clever way, so there's some kind of shared key that messages can be encrypted to, and everybody can use their own secret key to decrypt them. And the Diffie-Hellman exchange isn't limited to two keys; you can put in as many keys as you like. That's really great. And also nice: you can add keys to it, and you can remove keys from it. So if I stop trusting you, I can remove your key, and if I want to include you too, I add your key. And right now the IETF is writing a nice standard for this. It's not a messaging standard but an encryption standard, like TLS. It's called Messaging Layer Security (MLS). Just this month they published a draft that is complete enough to start implementing, but it hasn't been fully reviewed yet, so it's not an operational standard. But we may get there. So this is one of the heavy issues where there may be light at the end of the tunnel. But then we have another quite nasty, hairy issue.
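[Editor's note] The multi-party Diffie-Hellman idea can be sketched with three members, Alice, Bob and Carol, are hypothetical names, and the tiny parameters are for illustration only; real systems use much larger groups or elliptic curves. Each member publishes partial values, and each one can complete the shared key with their own secret:

```python
import secrets

# Toy parameters: 2**127 - 1 is prime, but far too small for real use.
p, g = 2**127 - 1, 5

a = secrets.randbelow(p - 2) + 2   # Alice's secret
b = secrets.randbelow(p - 2) + 2   # Bob's secret
c = secrets.randbelow(p - 2) + 2   # Carol's secret

# Publicly shared partial values (no single secret is revealed):
g_a  = pow(g, a, p)
g_b  = pow(g, b, p)
g_c  = pow(g, c, p)
g_ab = pow(g_a, b, p)   # Bob raises Alice's value
g_bc = pow(g_b, c, p)   # Carol raises Bob's value
g_ac = pow(g_c, a, p)   # Alice raises Carol's value

# Each member combines the partial that is missing only their own secret,
# and all arrive at the same group key g^(abc) mod p:
key_alice = pow(g_bc, a, p)
key_bob   = pow(g_ac, b, p)
key_carol = pow(g_ab, c, p)
assert key_alice == key_bob == key_carol

# "Removing" Carol means redoing the chain without her secret --
# the new key is just the two-party exchange again:
key_without_carol = pow(g_a, b, p)
assert key_without_carol != key_alice
```

Doing this naively still costs a full re-exchange on every membership change; roughly speaking, MLS arranges these combinations in a tree (TreeKEM) so that adding or removing a member only touches a logarithmic number of values.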
Regular visitors to FOSDEM will recognize this picture: the key signing party. Tomorrow at lunchtime there will be one again, and you will be signing each other's PGP keys to make sure the right key belongs to the right person. That's the second bullet, the web of trust. And I also remember a FOSDEM where there was a large discussion about somebody who had a false passport, went around, and got his key signed using that false passport. There was a big discussion about the value of it all. Good question. It's hard to know what you are really verifying there. Something else that's used quite often is what we call, in XMPP, trust on first use, a leap of faith. The first time you see a key, you decide to trust it. And when the key changes afterwards, you say: well, it changed, this is suspect. That sounds very bad, because you didn't do any real verification at all. But it may not be as bad as it sounds, because the leap of faith happens in a limited window of time, with a person you expect to be communicating with. Still, it's no verification of the key. A trusted third party is also very popular. You all know the SSL certificates and Let's Encrypt and everything around them. But wasn't the idea of end-to-end encryption precisely that we didn't want to rely on a third party? Now we still put our faith in the hands of an organization that may do good verification, or may not. Then there's verification in person. I know some very nice end-to-end encrypted apps where the only way to start communicating with a person is to scan their QR code with your phone while they scan yours, so you verify the key properties in person. It's nice, really. If you can trust your app, then you can also trust the verification. But it's not scalable. And then there are some new projects going on around identity-based cryptography. That's a very nice idea: for example, can you use your email address as your public key?
The idea is to make them equivalent in one way or another. A very nice idea. The issue is that to get it running, you need something that looks like a trusted third party "light", with some nice cryptography and some extra features, but you're still dependent on a third party. So how happy are you with that? Well, better than a leap of faith, but still. And then, going on with your keys: how do you handle key changes? What happens when you revoke a key? Do I handle it well if I start chatting on both my laptop and my phone? How do I make sure they stay consistent? That's probably some clumsy procedure. And fail at this, and your end-to-end encryption is useless, because a man in the middle can be slipped into it. So that's quite an issue.

Well, I'm nicely on time, so I come to my conclusion. Ian Goldberg, one of the authors of OTR, once nicely said: bad encryption is better than no encryption. But in the case of end-to-end encryption, I really dare to doubt that. Given the way end-to-end encryption is presented to users right now, I think we can safely state that bad encryption is a false sense of security. Here are some links and resources for the things I've talked about. There are some nice experimental projects going on that want to create metadata-analysis-resistant messengers. They are all experimental, but they're doing nice things. And with that I would like to open the floor for questions.

Hi. One of the things I would like to add is that trust changes over time. If you trust me today, ten years down the line you might not trust me anymore, because our relationship might have changed. That's something quite missing in encryption right now.

Yeah, there is no TTL. Well, with forward secrecy you have the rotating keys, and there's also revocation and the bits around it, but those are often forgotten or not implemented well. Absolutely true. And it's also a question of digital trust versus relational trust.

Hi.
At the beginning of your talk, you mentioned that the secret services are mostly interested in metadata. And the first thought I had is: well, aren't they interested in metadata exactly because most message data is encrypted now, so it's more difficult to get at?

No. It's very interesting: when you look at World War II, or even before that, at the activities of secret services, they also started with graphing contacts, not with wiretapping. And that still holds: metadata is really much more interesting and much easier to analyze.

To your right. So the question is: with your reasoning, basically you say that hop-by-hop encryption, which is done in email and XMPP, is perfectly fine and there's no need for end-to-end encryption. But if we look at TLS, that does precisely end-to-end encryption. So basically your argument would be that if we came up with a TLS version where every router on the Internet would first decrypt traffic and then re-encrypt it for the next hop, it would be as secure as TLS today. And I think people would just run away screaming if you presented that as an idea. I think there's a disconnect between how you present end-to-end encryption and how end-to-end encryption currently works in TLS.

TLS is not end-to-end encryption in this sense. My web browser connects to your server, they set up a connection, and the entire path between the web browser and the server is encrypted. The fun thing is: you trust your browser, hopefully. You trust the server, hopefully. And you trust that it isn't a Facebook server, hopefully. Then you have a nice encrypted connection. But with end-to-end messaging, the server is not the endpoint. The endpoints are the two users, and there may be one, two, three servers in between. So that's the first part of the answer: yes, there is a difference between the use case of end-to-end encryption and the use case of web browsing.
The second part is: I don't say that TLS is enough. Because, well, look at WhatsApp, or whatever instant messenger: they can still do lots of metadata analysis. My point is that to defeat the attacks we usually say end-to-end encryption is a solution for, we really need to do more than just end-to-end encryption. So thank you for your questions.

Thank you for your talk.