We've got a great talk to wrap this up for the first day. John is going to talk ‑‑ or Justin is going to talk about secure messaging. And this is a pretty important topic. I think we all ‑‑ everybody in this room generally understands the need for this. But I think, in talking to them, that this is going to help us make the case for the people that we know at home, right, the muggles that we know. So I've got a couple of secure messaging apps on my phone. And right now the secure messaging apps do a great job of secure messaging and also giving me a list of all the hackers that I know and none of the muggles that I know. So hopefully this can help us get that address book in Wickr a little bit bigger, right? So let's give Justin a big hand. My name is Justin Engler. I'm with NCC Group. I'm here to talk about secure messaging for normal people. So this is not a talk for crypto geeks. The idea here is to give people who don't already know about this topic something to start with, to spin up their knowledge, so they can look at all the different apps that are available and make their own decisions about which ones to use. So before we really dive into this, I'd like to take an informal poll. How many people in here are crypto geeks? Okay. The door is over there. Any journalists? A couple? Lawyers? A couple? Normal people who do not fit any of those categories? Quite a few. All right. Good. So the goal of this talk is to lay out the foundations of what a crypto app or secure messaging app does without getting into the really heavy stuff that will scare away the noobs. We're not going to cover any new research. There's not going to be any math. We're not going to do any cryptology or cryptanalysis. We're not going to talk about operational security at all. We're not going to talk about specific applications. I'm going to get to that in a second. And we're not going to talk about any crypto things in really, really fine detail.
So this is my slide to try to scare you crypto geeks out the door. What we are going to talk about: really basic crypto stuff, often oversimplified. I'm kind of going for 80% right, just to try to make sure it doesn't get too crazy. That's a short list of the things we're going to cover in the rest of the talk: the different types of threats that will go after the messages that you send, how you can defend yourself, and then how those threats will try to counter the defenses that you put in. And at the end, we're going to talk about a list of things that crypto apps say they do that don't really do what they sound like. So my job with NCC is application penetration testing. My job is to break applications. And as it turns out, I've broken quite a few crypto messaging apps, both the kind that are advertised as secure and also messaging apps that are part of some other larger platform. Because I do this stuff for so many customers, I can't talk about any of the apps specifically. There's a couple of times where I'll talk about maybe a piece of software, and I'll talk about larger things like OSs or platforms. But the actual messaging apps, I'm not going to say anything about any of them. If you ask a question about one, I'm not going to answer. So that's where we're at with that. We've got one more thing to cover before we dive into the deep stuff. I'm going to say the word government a lot. I don't necessarily mean this government or that government. If I have a specific government in mind, I will name it. So whenever I say government, just assume a government of some type. And it's important to deal with the standard argument: I'm not doing anything wrong, so why do I have to hide anything?
There are a lot of people in the world who live under a system of government where they are being censored or oppressed, and these kinds of apps are useful for people in those areas to do what they need to do to try to get themselves a better life. So even if you think that you don't need these things, other people need these things. Furthermore, in the U.S., there are even legally enshrined cases where you are allowed to keep secrets from the government. For example, attorney-client privilege is one where the courts pretty much say you're supposed to keep this stuff away from them. So this is another good counter-argument for the whole "you have nothing to hide" stuff. All right. So we're going to talk about messages. For the purposes of this talk, a message is just when two people, in this case Alice and Bob, want to send some sort of data from one person to the other. We're not going to focus on messages between two computers or system updates or anything like that. We're really talking about person A wanting to communicate with person B. And until we get a little deeper, the actual type of network that's involved doesn't matter. The devices that are being used don't matter. I don't care if it's a desktop. I don't care if it's a phone. I don't care if it's a landline telephone. I don't care if it's a postcard going through the mail. They all follow this general pattern. So if you don't encrypt anything, then there are eavesdroppers, and those eavesdroppers can read everything that you say in any of those cases. And they can do that passively. What that means is they don't have to do any extra work. They just sit and record, and everything that goes across whatever this medium is, they'll pick up and can analyze later. If there's no encryption, that means they get both the message content and the metadata. And we'll talk a little bit about the difference between those two.
Most of this talk we'll be talking about the content specifically, and later we'll have a separate section about metadata. So the first question is: which app should I use? I don't know which app you should use, because I don't know who you are. What we really need to do is think about why you want to use these apps and who you think is out to read your stuff, and then use that to determine what kinds of features you need in an app so that you can make a good choice. So these are examples of people who might need to use a secure messaging app. I fall under the first category. I don't really have anything specific to hide. I have things like financial data that are important, and I also just think in general that it's wrong for people to be listening in on my stuff. So I try to encrypt what I can. Businesses obviously have a huge need to protect things like financial data, business plans, trade secrets, that kind of thing. Activists will want to protect what they're doing from whatever government might be able to move against them. And that might not even be the government where they live. You could certainly imagine cases where an activist lives in country A but is doing something that country B doesn't like, and country B acts against that activist even though they don't live there. The next one was actually added fairly recently. I talked to someone who told me that she used encryption because she had a lot of online harassers, and she was worried that if she didn't encrypt things, they would be able to get hold of that data and use it against her. So that's why she used it. Journalists, if they're dealing with sources who have important information, will often want to protect the communications between the journalist and their source. And lawyers, as we talked about, have good reason to protect their communication with their clients. So once we know who you are, then we can talk about the people who are out to listen for your stuff.
So we've divided our threats into two axes. An opportunistic attacker is just interested in collecting as much as they can about everybody. They don't necessarily have a particular person in mind. A targeted attacker is obviously the opposite. They already know "we are after Justin," and so they're going to look at that person specifically. The other axis is the attacker's resources. A low-resources attacker might be something like a single hacker or a group or a small company, whereas a high-resources attacker might be a large company or a government, things like that. These different types of attackers have different means available, and as we go through the different security methods and the countermeasures against them, we'll try to highlight who can do what. So I lied earlier when I said this wasn't a crypto-babble talk; I have to cover just a tiny bit before we can talk about everything else. But this shouldn't be too painful. For the rest of this talk, if I say something is encrypted, what I mean is it's impossible to read or modify if you don't have the key. In real life that's not necessarily true; there are all kinds of other things that could go wrong when something is supposedly encrypted. But we're not going to cover any of that in this talk, and we're still going to come up with plenty of reasons why things could go poorly when you send a secure message. So, public key and private key. This is kind of a difficult concept because the naming isn't great. You can think of a public key as the blueprints for a lock. I send you the blueprints for a lock. You can build the lock, lock something with it, and send it back to me, and I have the key that can open it. No one else can open it. So in this way, everyone would be able to send me something that no one else could read except for me. Signatures are kind of the inverse of that, using the same public and private keys.
I can sign something, and then you can use my public key to verify that I'm the one who actually wrote that document or made that file or whatever it is. You can think of signatures like a signature sample that somebody wrote in a book: you're comparing the signature on the new package, or the check that was written, to this existing signature to see if they match. Except it's all math, so it can't be done wrong, for the purposes of this talk. Fingerprints: a lot of people who are privacy-sensitive start learning about cryptography, and then people start asking them for their fingerprints, and they get freaked out. When we're talking about fingerprints here, what we mean is something that serves as a kind of shortened version of something else. Usually we use this because keys are really long, so we'll take a fingerprint of the key, and then we can share that fingerprint with someone, and they can know whether they got the right key or not. Lastly, the word trust. That doesn't mean that I trust you to drive my car or do my laundry or anything like that. What trust means in this case is that I am confident in your identity, and I'm also willing to let you vouch for someone else's identity. This one's tricky because I might accidentally say the word trust and it won't be clear during the talk. If it's not clear, somebody raise your hand and ask what I meant, and I'll tell you. Okay, transport layer security, also known as TLS. The old version is SSL, and you'll often hear people use the terms interchangeably. They are slightly different, but for this talk they're the same. The problem with the very first step of sending a message to someone else is that you need to know how to get that message to them.
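Before we get into TLS, here's the fingerprint idea in code: a fingerprint is typically just a cryptographic hash of the key, truncated so that humans can compare it. This is a minimal sketch, not any particular app's format, and the key bytes are invented for illustration:

```python
import hashlib

def fingerprint(public_key_bytes):
    """Return a short, human-comparable fingerprint of a key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Keep the first 16 hex characters, grouped for easy reading aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

alice_copy = b"...bobs-public-key..."  # the key Alice received
bob_real = b"...bobs-public-key..."    # the key Bob actually has
# If the short fingerprints match, the (long) keys almost certainly match.
print(fingerprint(alice_copy) == fingerprint(bob_real))
```

If a man in the middle had swapped in his own key, the two fingerprints would differ, and the people comparing them would notice.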
The easiest way for that to happen on the Internet as it's built now is to have some server somewhere that both sides know about. I can send a message to the server, and then either the server will send it on to the right person, or the other person will connect to the server too and get the message from it. Transport layer security is one way we can try to secure this type of message as it goes across. So the eavesdroppers there can't listen to this particular kind of traffic, and this is also the little lock in your browser; that's TLS. A passive attack doesn't work against TLS, because the traffic is encrypted between the server and the party that's sending the message. So no eavesdropping. However, there's a naive way to do TLS, and a lot of people did this a long time ago; it's not as bad as it used to be. If you just say, yeah, I'll accept an encrypted connection, and you don't bother to find out who you connected to, then the bad guy, instead of just passively eavesdropping, can pretend to be the server you were trying to talk to. You send the message, it's encrypted, and so you think it's fine. On the other side, the attacker makes another encrypted connection to the real server and sends your data along, but now he's seen it, because it was only encrypted in that first step between you and the attacker. So now the attacker can read the traffic, modify it, whatever. This is harder to do than a passive attack, because you have to be there actively man-in-the-middling the connection to make it work, but it's not that hard. In fact, it actually scales pretty well. If any of you work at businesses that are fairly large, almost all of them do this.
When you're on the corporate network, they'll actually man-in-the-middle your traffic. They've already installed a certificate that says, oh yeah, you can trust this server, and you never talk to the real server. You talk to their middle server, it decrypts everything, tries to check it for security or whatever the heck they're doing, and then passes it along. Governments can do this too. Again, it's harder than a passive attack, but it still scales well. So to solve this problem, you need to verify that the server you're talking to is the server that you actually wanted to talk to. The overall TLS system has this thing called certificate authorities that handles this problem, and all your browsers do this by default already. If we were to go back to one of these guys, it is likely that at this point Bob would be getting a warning in his browser saying we don't think this connection is secure, and then Bob would probably just click through it. But if you're using an app on your phone, hopefully the authors of that app made it so that there's no way for the user to bypass that: if it's not a secure connection, it just stops. So let's explain how all that works. When you make a TLS connection to a server, that server sends you back a certificate. The certificate is essentially just a list of identifying information about what the server is, and it's signed by a certificate authority. Your browser or your operating system has a list of all the certificate authorities it trusts. So if this certificate was signed by a certificate authority, then you know that a certificate authority you trust is vouching for the identity of the server. The problem here is that there are a whole bunch of certificate authorities, and all of them can vouch for anyone. I looked at Firefox yesterday; there were 90 different CAs in there.
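As a concrete aside, this verify-the-server behavior is the default in mainstream TLS libraries. In Python's standard ssl module, for instance, the default client context checks both the certificate chain and the hostname, and you have to deliberately opt out to get the naive accept-anything-encrypted behavior described earlier:

```python
import ssl

# The safe default: verify the certificate chain against trusted CAs
# AND check that the certificate matches the hostname we asked for.
safe = ssl.create_default_context()
assert safe.check_hostname is True
assert safe.verify_mode == ssl.CERT_REQUIRED

# The naive mode: still encrypted, but we'll accept ANY certificate,
# so an active man in the middle walks right through.
naive = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
naive.check_hostname = False       # must be disabled first...
naive.verify_mode = ssl.CERT_NONE  # ...before turning verification off
```

Apps that fail open are effectively running with something like the `naive` context here.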
And those are everything from private entities in the US that essentially do this as a business, all the way up to the Hong Kong Post Office. There are a couple of other ones that are clearly government CAs. The weird thing is that it's not likely that these kinds of attacks would be used against you while you're doing online banking. But if a government is more interested in you specifically, they might find a way to forge a certificate for you and inject it into that stream to mount a man-in-the-middle attack against you. So there's a way around that too. It's called certificate pinning. On the client side, instead of just trusting any certificate as long as it was signed by somebody, you say: I already know which certificate I'm expecting. I've talked to the server before, or maybe this server is part of some app that I'm already using, and that app knows which servers it's supposed to talk to. So we'll just mark those, and we'll know that this certificate or this public key is the one we're supposed to see. And then if something else comes up, it's just the same as if the man in the middle didn't have any kind of signature; it just fails. This is great because now, instead of having to trust all 90 of those CAs, you have shifted your trust risk. So now, let's say you're using Android; it works the same on iOS. You get this crypto messaging app from the Google Play Store. Maybe Google could have modified it and sent it to you, and the same goes for Apple if it came through their App Store. But now the only parties you have to trust are just Apple or just Google, plus the app developer, instead of all those other CAs and everyone who has access to them and so on. So back to TLS again.
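Certificate pinning is simple enough to sketch in a few lines. This is the idea only, not any real client's implementation; the key bytes and pin values are invented for illustration:

```python
import hashlib

# Pins the app ships with: SHA-256 hashes of the public keys of the
# servers it expects to talk to. (Values invented for illustration.)
PINNED = {hashlib.sha256(b"expected-server-public-key").hexdigest()}

def connection_allowed(presented_key_bytes):
    """Accept only a pinned key, no matter which CA vouched for it."""
    return hashlib.sha256(presented_key_bytes).hexdigest() in PINNED

print(connection_allowed(b"expected-server-public-key"))  # the real server
print(connection_allowed(b"ca-signed-but-forged-key"))    # a forged cert fails
```

Note that a forged certificate fails here even if it carries a perfectly valid CA signature; the pin check doesn't care who signed it.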
If we pretend that an app is doing TLS totally correctly, they're pinning, they're validating certs, everything is going well, there's still a huge problem with TLS that makes it totally insecure for a lot of secure messaging applications. And that problem is this. We're encrypting between Bob and the server, and between Alice and the server. But in between, when it's on the server, it's not encrypted; it's totally in the open. And that means that whoever runs the server, or whoever can bring some force to bear against the people who run the server, or whoever can hack that server, those people can all still read the cleartext of the communications. Also, let's say that someone runs a messaging service because they want to learn about your habits so they can sell targeted ads. This is another way they would do that, right? Even though your communications to their server are encrypted, they would still be able to read all the stuff you're talking about, and then they'll start serving you ads based on those things. Most instant messaging and email that has some security on it stops here. Because so many of these things are run by people who are interested in targeted ads, this is where we end up. Often, if something is actually advertised as a secure messaging thing, then it goes a little further. So let's talk about the next step in the process. What we'd really like to see is that instead of the encryption going between Alice and the server and Bob and the server, we have encryption that ends at Alice and Bob. Now all the server sees is garbage as it goes through, and they can't read anything. The server's still in the middle; they just can't read the text, because all they see is encrypted text, and they just have to pass it along.
So how do we make this happen? Before, we had the CA system, and we could download an app that shipped with a certificate; that's how we got all the keys between the two parties. But now, since we've got Alice and Bob communicating directly, they need to have their own keys, and they need ways to exchange those keys so that they can encrypt messages to each other. The easiest way to do that is to ask the server. Alice wants to send a message to Bob, she doesn't have Bob's key, so she says, hey server, what's Bob's key? The server gives back the key, and then she starts up the encrypted channel with Bob and sends the message. Anybody see what the problem is here? How about the server just says, yep, this is definitely Bob's key, but it's not, and gives back a key that the server knows. Then, on the other side, it sends the wrong one back again, and now the server is the endpoint of the encryption instead of Bob, and all the stuff can be read again. This is definitely tougher than the other man-in-the-middle attack, because now, instead of just some random person in the middle, it pretty much has to be that server that does it, or someone who can coerce that server, or someone who's hacked that server, et cetera. But it's still possible. So to prevent this one, in a similar fashion to how you need to verify that the server you're talking to is really the server you wanted, we now need a way for Alice to look at a key and determine if it's really Bob's key or not. This process is called key validation, and the idea is to prove the ownership of a key. Ideally, whatever app you're using to send messages will do this one time. So once Alice has sent one message to Bob and they've done their key validation scheme, as long as that key doesn't change, everything's fine; you don't have to redo this tedious process every time.
That is, until some event happens that causes you to have to rekey. There are a couple of different ways we can make key validation happen. Kind of the simplest one is TOFU (yes, like the bean curd): trust on first use. It simply means that the first time Alice sends a message to Bob, she'll just say, okay, the key that I got must be his key, save it, and we're done. This is really simple. Alice doesn't really have to do any work. If any of you use SSH, it kinda looks like this model. But the bad news is, if the adversary that's trying to eavesdrop on you was already there for the first connection, then you just stored their key instead of the one you wanted. The other, probably bigger, problem here is that if Bob drops his phone in the toilet and has to generate a new key, then later on Alice tries to send a message and she gets this unrecognized key back. There's no way to resolve the problem now; all she sees is that there's a key mismatch. She could ask him over the messaging thing, hey, did you drop your phone in the toilet? And the eavesdropper says, yes, I dropped the phone in the toilet. So you're kinda stuck at this point. A little bit better is out-of-band validation. Here's the fingerprint we were talking about before. Keys are really long, so instead we're gonna take a fingerprint of the key, and we're gonna share it over some medium besides whatever messaging app we're trying to secure right now. You could maybe do that in person. You could do it over the phone. You could do it over SMS. You could use some social media. You could post a billboard with your public key. Anything you want that gets the fingerprint across without using the thing you're trying to secure, because it's not secure yet. One of the nice things about this is that it doesn't have to happen during the communications process. You could do it some other time, then set up the keys, and then later you'll know that the communication is secure. If you do in-person verification, that is pretty good.
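Stepping back to trust on first use for a moment: the whole scheme fits in a few lines, in the spirit of SSH's known_hosts file. This is a sketch of the concept, not any real app's implementation:

```python
# Trust-on-first-use key store, in the spirit of SSH's known_hosts.
known_keys = {}

def check_key(contact, presented_key):
    if contact not in known_keys:
        known_keys[contact] = presented_key  # first contact: just trust it
        return "trusted-on-first-use"
    if known_keys[contact] == presented_key:
        return "ok"
    # The key changed: maybe a new phone, maybe a reinstall...
    # or maybe a man in the middle. TOFU alone can't tell you which.
    return "mismatch"

print(check_key("bob", b"key-1"))  # trusted-on-first-use
print(check_key("bob", b"key-1"))  # ok
print(check_key("bob", b"key-2"))  # mismatch
```

The weakness is visible right in the code: the first branch trusts whatever shows up, and the last branch can't distinguish a toilet-drowned phone from an attacker.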
If Alice and Bob already know each other, then it's pretty easy for them to just show each other their keys, or maybe your app has a way to import keys via the camera or something. That's really tough to beat. Also, the fact that it can be ad hoc is nice. If it were part of a protocol, then the attacker might be able to see when it's going to happen and get in the middle and so on. If I just randomly SMS my buddy and say, hey, here's my key, the attacker would have to be waiting for that, be able to understand that my SMS actually has a public key in it, and then intercept it, change it, and send it on. It's hard. The bad news is that you're limited to the security of whatever other band you're using. For example, there are a lot of secure messaging apps that'll do things like: here's a number, now read it over the phone to the other person. And if that channel is secure, then the thing on their screen will match what you just read, and we're good. But that assumes that whoever your adversary is isn't able to fake you reading numbers. That sounds hard, but there's been some research in the past year or so that makes it sound doable. In addition, if you're talking about voice over the internet, people are used to weird choppy audio and stuff, so there's a good chance that it might be believable even if the attacker has to synthesize a fake voice for you. The other problem here, obviously, is that you have to have that second channel already there. If someone wants to talk to me over the internet, and I've never talked to them before and don't know them at all, it's really hard for me to find some other way to validate their key. Another way to do this is to rely on the trust of others to build your own trust. Alice wants to send a message to Bob. Alice doesn't know Bob's key, but Alice knows Carol, Carol knows Dave, and Dave knows Bob.
So now, by chaining those keys along, as long as all those people can vouch for each other, Alice can build a trusted connection to Bob by getting the key. There are a couple of different ways that transitive trust can work. The web of trust is the most common one. It's what you see in PGP. It's very old and well established. The web of trust is very convenient, but essentially it's a server somewhere that shows who knows whom. You can automatically look it up, which means that if someone's interested, they could build this graph of everyone who knows whom and start to do analysis based on it. Furthermore, let's go back and look at this here. Frank, up in the corner, doesn't know anybody. So if Frank wants to send a message to Bob, there's no way for the web of trust to help him out, because he has no connections. He'll have to use some other method, like TOFU or out-of-band validation, to bootstrap his way into the web of trust. Crypto parties are the usual way you do that. You go to a crypto party, you meet a bunch of people, hopefully those people know other people, and now you're hooked into the web. The other, slightly more private way to do this is a trusted introduction. We use the same graph we just did, but instead of it being automated, where you look it up on the server, it just happens organically: Alice asks Carol, who asks Dave, who gives the information back. It's all ad hoc. This means that there's no server that has this map of everybody's metadata. Speaking of metadata, let's talk a little bit about metadata. Everything we talked about so far was pretty much message content: what did I actually write to the other person? Metadata is all the stuff that you can learn about a message without knowing the content. Who is the sender? Who is the recipient? What time was it sent? What was the duration of the communication? How big was the communication? And what app was used to communicate?
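To make that concrete, here's roughly what a server-side log entry could capture even when the message body is end-to-end encrypted. Every field and value here is invented for illustration:

```python
# What a message server could log even when the content is
# end-to-end encrypted. Every field is metadata, not message content.
# (All values invented for illustration.)
log_entry = {
    "sender": "+1-555-0101",
    "recipient": "+1-555-0199",
    "timestamp": "2015-08-07T00:00:00Z",
    "size_bytes": 4096,
    "sender_ip": "198.51.100.23",
    "app": "ExampleSecureMessenger/2.1",
}

# The one thing end-to-end encryption denies them is the body itself.
assert "content" not in log_entry
```

Everything in that record survives end-to-end encryption, which is exactly why metadata deserves its own section.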
This stuff alone is enough to get you in trouble. If you are a high-level government official and metadata shows you communicating with a muckraking whistleblower journalist, that alone is enough to get you in big trouble, even if they can't actually see what you said. So this stuff is important. And it's even worse if they then see, oh, he's also using this encrypted app to do the communications, right? Metadata can be picked up in a few different places. That server in the middle that we've been showing doing all the connections for you could easily be logging all this traffic. Even if it's all end-to-end encrypted and they can't log the message content, they still know who sent messages when, who they went to and who they came from, all that stuff. They probably know the IP addresses the messages came from, so if you did this from your house or from your phone, they might be able to tie that to an actual identity in the real world. A lot of messaging services require a phone number, and that makes it even more difficult to be anonymous. You can go buy a burner phone with cash, wear a hat, whatever, but a phone number is still making it much easier to track you. And finally, a lot of these apps will ask you to upload your contact list. Some of them do it automatically; some of them ask you first. The idea is that it's going to pull up all of your regular contacts and send them up to that server so you can easily use this new app to send messages to all the people you know. But now the server has just gotten this new social graph of all the people you know, even if you never send them a message. And furthermore, even if the software this service runs is supposedly open source, you don't have any way to prove that the server is not doing all this stuff. Even if they say that they're running some open source whatever, you don't know what they're actually running on the server side.
So they could easily be logging when they're not supposed to be. Also, they might say that they're going to obfuscate your contact list: we're going to take all the phone numbers you know, and we're going to hash them so that no one can figure them out. (I'll wait for the laugh from the crypto geeks who stayed.) The space of available phone numbers is small enough that this doesn't really work out: you could easily brute-force that hash and come back with all the phone numbers without much problem. (No, that's not going to be enough. He said they'd salt them. That's way outside the scope of this conversation.) Okay, so even if the server operators are the good guys, we still have to worry about everybody else trying to collect metadata. Your ISP could be doing it. If the government has taps on various lines, they could be doing it. Again, they're going to have the IP addresses of what connects. They might have the phone numbers or other phone identifiers for that kind of stuff. And they could definitely force the servers involved to give it to them, but they can also often infer what's going on. If you pretend that the server is not colluding with these government attackers who are trying to steal your stuff, you end up with this weird situation. Okay, Alice sends a message to Bob at midnight. What the government eavesdropper sees is: Alice sent a message to the messaging server at midnight, and the messaging server sent a message to Bob at midnight, or maybe slightly after midnight. By doing those kinds of tricks, they can infer what was going on even if they don't actually have access to the server. Another way you can do this is with size. You might be able to tell that Alice and Bob were communicating because you saw Bob send a message of a particular size, and then the server happened to send a message of that particular size out to Alice.
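That size-matching inference is almost embarrassingly simple to sketch. The traffic records here are invented for illustration; a real attack would correlate timing as well as size:

```python
# Traffic records an eavesdropper could collect WITHOUT reading any
# content: (endpoint, size_in_bytes, time). All values invented.
into_server = [("bob", 3012, "00:00:00"), ("frank", 9104, "00:00:05")]
out_of_server = [("alice", 3012, "00:00:01"), ("carol", 555, "00:00:06")]

# Pair messages entering the server with messages leaving it, by size.
pairs = [(sender, receiver)
         for sender, size_in, _ in into_server
         for receiver, size_out, _ in out_of_server
         if size_in == size_out]
print(pairs)  # [('bob', 'alice')] -- inferred without reading a byte
```

This is why some designs pad every message to a fixed size and add random delays: it destroys exactly the correlations this sketch relies on.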
So now we can assume that that's who was communicating, and not that Bob was talking to Frank or Alice was talking to Carol or any of those things. To do that inference, you can look at connections, timing, size, and the existence of traffic. This is a traffic confirmation attack. If the app allows someone to send you a message, let's pretend that the government has a van parked outside your house. They wanna see if this alias they have, some fake name, is really you. They send a message of a certain size at a certain time to that alias, and then they're watching your Wi-Fi from their government van to see if something comes through that's the same size. If it does, they can link those two things together. This is probably an okay time to talk about Tor. Tor will protect you from the first thing only. Instead of the government knowing that you've connected to this secure messaging server, they know you connected to Tor. But there have been a bunch of different timing and size correlation attacks against Tor, and it gets into really deep stuff about how many different pieces of the Tor puzzle you have to control to be able to do it. Just know that it's not infallible. It's possible in some cases. Not to mention the fact that in some cases, just using Tor is evidence enough. There was a kid who called in a bomb threat to Stanford because he didn't wanna take his exams. I think it was Stanford; it might have been an East Coast school. And he did this over the internet. He used Tor, and so he thought he was safe, because Tor was protecting his IP address so no one could see who sent the message. But he sent it from his campus dorm room. So the IT department said, oh, well look, there are only like five people using Tor on this whole campus, so let's just go ask all of them. And they just brought each one of them into the room and said, you were the bomb threat person, right?
And then one of them said, yeah, that was me. So just the fact that you're using some of these things is often evidence enough. Until we get to the point where everyone uses this stuff ubiquitously all the time, we can't give those kinds of people cover. Well, not necessarily bomb-threat callers, but people who are using it for a legitimate reason. All right, another thing that we need to talk about if we're gonna talk about secure messaging is what happens when the device that has the app that does the messaging gets taken by your adversary, whoever that may be. If there were any logs stored in the app that show who you were talking to and what you were talking about, the adversary gets all of those things. If you had a contact list in the app, the adversary gets that, and they can start using it to build metadata. They also most likely get your keys, and if they get your keys, that means they can impersonate you to all of your friends. There's another kind of interesting wrinkle to what happens when your adversary steals your keys. So we talked before about how a passive attacker can't read encrypted data. But they can still record it. They can just sit there and record all this encrypted garbage; they don't know what it says, but they can keep it. Later on, they find out who you are and they take your phone. If they have all this recorded data, they might be able to use the keys that they just took off your phone to decrypt all that stuff they've been storing all this time. The way to prevent that is to have forward secrecy. There's a lot of crypto mumbo-jumbo involved, but the idea is you stack another key, a temporary one, on top of the key you're already using. And since the attacker was passive and not active, they weren't able to man-in-the-middle that extra key, and so they won't be able to read this stuff even after they steal the long-term key. It's sometimes also called perfect forward secrecy.
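The "stack a temporary key on top" idea can be sketched as a toy hash ratchet. This is only an illustration of the concept, not a real protocol; real apps use ephemeral Diffie-Hellman exchanges for this:

```python
import hashlib

def ratchet(chain_key: bytes):
    """Derive a one-time message key, then advance the chain and
    forget the old state. The hash only runs forward, never backward,
    so old message keys can't be recomputed from the current state."""
    message_key = hashlib.sha256(chain_key + b"msg").digest()
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain

chain = b"shared secret from the initial handshake"
k1, chain = ratchet(chain)   # key for message 1; old chain key is discarded
k2, chain = ratchet(chain)   # key for message 2
# An attacker who seizes the phone now only gets the current `chain`
# value, which cannot be run backward to recover k1 or k2.
```

Both sides run the same ratchet in lockstep, so they derive the same per-message keys without ever sending them.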
For this talk, they're the same thing. So besides all the things that we just talked about being important, you need to think about things like: does my app use perfect forward secrecy? Does my app use end-to-end encryption? Does my app use some sort of key validation? Those things are all really important, but they're only a piece of the puzzle. So far we kind of waved away the crypto stuff and said, oh yeah, if we say it's encrypted, it's secure, right? In the real world, that's not how it works. There's a bunch of things that could go wrong when the application developers are writing the crypto code. They could use the wrong things. They could have other non-crypto-related vulnerabilities in the app, where the attacker can just take over the app and then read messages as the app decrypts them. So if you really want to be confident in the app that you're using, you need to have somebody audit the app. Now, since I know that all of you are normal people and not crypto auditors, you probably can't do it yourself. So you kind of have a couple of options. You could use an open source app, where anyone can audit it and tell you what they found. If you're using an app that is closed source, then you're gonna have to have somebody else audit it and then look at the results. If someone else audited it but you can't see the results, then that audit wasn't of much use to you. The audit could have said everything is broken, and all you see is that it was audited. So you need some assurance not only that it was tested by somebody, but also that either it did well in the test, or it did poorly in the test but the problems are now fixed, there's been a retest, and now it's fine. A lot of crypto geeks will tell you that you have to have open source or you cannot have a secure app. I'm going to probably get beat up later, but I'm going to say that's not true.
Just because anyone could audit an app doesn't mean that anyone is going to actually audit the app. There are plenty of things that are open source that no one has time to look at. And because we're talking to normal people here, it's not like I can just tell you, well, you need to audit the app yourself. So you have to be using an app that has been audited. Even if you assume that the app you're using is good, then you have to think about the OS. Is the OS you're using open source? Because if not, then who knows what backdoors are in it? All the same arguments that a crypto geek would make about the closed source app also apply to the OS. So, the iPhone has a little bit of open source in it, and Android has more open source in it, but both of them have a whole bunch of closed-source binary junk that you can't audit. So you can't even say, well, I'm using Android, therefore it's open source and I'm good. The OS is important, but even if you assume you had some magic OS with no crazy binaries that no one can audit, you still also have the firmware on the phone, which is going to be closed source. The things that run the radio on the phone, those are closed source. And often those can automatically receive an update from the phone network and apply it. So again, if you're thinking about attackers that have a lot of power, they could attack that directly, and even though the rest of your phone is fine, it all becomes irrelevant. The same goes for the hardware. So just having your open source messaging program isn't enough by itself. Furthermore, we have another problem. Let's pretend that somebody that you trust has audited the app, whether it's open source or closed source, and whatever they said, you're happy that this is an app that you want to use. So now, where did you get the app? If you're a normal person, you didn't build it yourself.
Even if you did build it yourself, you didn't read all the source code and make sure it matched what was audited. At least I've never met anyone who does that. Did you download it from some website somewhere? Did you get it from an app store? Every one of those sources is now a new attack vector against you, because they could have added stuff to the code that you're gonna run, stuff with all kinds of bad things in it that was never audited. This is a really hard problem to solve and there's not a good solution yet. What you need is something called a deterministic build. A deterministic build means that your auditor will get this particular version of stuff, give their audit report, and then say: and here's the hash. I guess I didn't talk about hashes; a hash is like a fingerprint of the build that they audited. And then later, when you download that app from the website or the app store or whatever, you can verify that you get the same fingerprint that the auditor got, and then you know exactly what you're getting. We're not really there yet. We'll probably never be able to do that on iOS because of the way the ecosystem is there. On Android, we're almost there; you can kind of do it manually. Maybe later we'll have an automated way to do it, and that would be nice. All right, almost done. There's a whole bunch of things that crypto apps will tell you that they do that aren't quite what they say. So, auto delete: there's a bunch of apps that have these features that are like, well, you send this message and it has a time limit and after that nobody can read it anymore. That's total bull. The problem is that the person on the other side isn't necessarily using the same client that you are. They could have modified their client or used a third-party client that can still receive these messages but doesn't follow the rules about when it's supposed to delete them.
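Going back to deterministic builds for a second: once the ecosystem supports them, the verification step itself is trivial. Here is a minimal sketch; the bytes and names are made up for illustration:

```python
import hashlib

# The auditor publishes the fingerprint of the exact build they reviewed.
# (These bytes stand in for a real app binary.)
AUDITED_SHA256 = hashlib.sha256(b"audited app binary").hexdigest()

def verify_download(artifact: bytes, expected: str) -> bool:
    """Check that the bytes you downloaded are the bytes that were audited."""
    return hashlib.sha256(artifact).hexdigest() == expected

print(verify_download(b"audited app binary", AUDITED_SHA256))   # True
print(verify_download(b"tampered app binary", AUDITED_SHA256))  # False
```

The hard part is not this check; it's making the build process reproducible enough that the auditor's compile and the store's download produce the same bytes in the first place.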
Once you've sent a message to someone, it's theirs; you can't enforce what they're going to do with it. I mean, in the worst case, they could just take a photo of their phone, right? Related to this, any apps that say they notify you about or prevent screenshots: same story. Someone could be using a third-party version of that app that doesn't do those things and can still receive the messages, so they could take all the screenshots they want and you'll never know. One-time pad: this is a little bit of a deep topic for crypto, but essentially a one-time pad is an unbreakable form of cryptography. The bad news is that when you're using it in an app like this, it is totally not unbreakable. It's not very good. What makes a one-time pad good is that you have a long sequence of random stuff and you use that to encode all of the data that you're going to send, but you're never gonna have a long enough pad for all the things you're going to want to send. So then you're gonna have to get more one-time pad data from somewhere and send it to the other person, and whatever you're using to get and send it, you've just collapsed all your security down to however good those things are, because you can't use a one-time pad to send more one-time pad. Hardware crypto: there's a few devices out there where you plug this thing into your phone and it does magic crypto, and the phone can't read it, because you don't trust your phone; it might have been hacked. The problem is the phone could instead just turn on its microphone and listen to what's being said, even though the other thing is sending encrypted data. So not necessarily a huge help. Geofencing: there are a lot of apps that say you send a message and it'll only go to people within one mile or whatever; there's a bunch of different ones that do this. That all happens on the server side, based on what the client reports.
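A geofence like that is only as trustworthy as the client's self-reported location. A minimal sketch of why — the coordinates, distance math, and function names here are all made up for illustration:

```python
# Toy geofenced message server: it only knows the location a client
# *claims*, so a modified client can lie its way into any fence.
DEF_CON = (36.1170, -115.1760)   # roughly Las Vegas

def within_one_mile(a, b):
    """Crude flat-earth distance check; 1 degree is about 69 miles."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 69.0 <= 1.0

def server_delivers(claimed_location, fence_center=DEF_CON):
    # The server has no way to verify the claim; it just trusts it.
    return within_one_mile(claimed_location, fence_center)

honest_client_in_washington = (47.6062, -122.3321)
lying_client_in_washington  = DEF_CON   # modified client reports DEF CON

print(server_delivers(honest_client_in_washington))  # False
print(server_delivers(lying_client_in_washington))   # True
```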
So if I was at home in Washington state, I could have my client say I'm at DEF CON, and then I would start getting all the messages from people who are at DEF CON. There's nothing that enforces it. Mesh networks are related: instead of using the server connection, you're connecting to other people nearby and sending data that way. These aren't any more secure than any other app. They still need all the same crypto stuff on top of that mesh network, because any adversary who's there listening can pick up that traffic just like anybody else can. Military grade is a fun one. There are a lot of things that advertise themselves as military grade. This usually means we're talking about a specific type of crypto algorithm, but that doesn't address any of the stuff we talked about in this talk at all. So even if it's military grade, all the things that we just talked about could still be totally wrong. A good way to think about military grade is like saying this car is safe because it has a bulletproof windshield. So yes, it is safe against being shot in the windshield, but that doesn't tell you anything else about the rest of the car. Bespoke crypto: generally, if someone's using a secret magic crypto method that no one's heard of before, it's probably not well tested, which means it might not work as well. There are a lot of crypto people who will argue about this, but that's generally how it works. You wanna use things that are well understood and broadly used. Multiple devices is a tough problem. So if you've got an iPad and a computer and an iPhone and an Android device, and you want someone to send a message to you from their crypto app and be able to receive it on any of those devices, it's actually a pretty tricky problem, because now you have to have devices sign each other's keys, or you have to have multiple identities, or the server has to manage it all, and then the server can add new devices that aren't really yours. It turns out to be a really hard problem. All right.
So even with an app that does everything right and solves all the things that we talked about, you're still not gonna be totally effective against all the different types of adversaries. The low-resource people? Yeah, no problem. The high-resource opportunistic people: you can stop kind of the bulk message collection, so they can't necessarily read all of the data. But metadata is probably still on the table, and that's very difficult to handle. And as for targeted high-resource people, you're never gonna win against that by choosing the correct crypto app. You need to do things like go to spy school and learn tradecraft and make sure that they never steal your phone. And they could buy 0-days that work against your phone and use them; you're not gonna win against this. So the choice of your crypto app is not going to solve the problem of a really powerful entity coming after you specifically. So what can you do? You need to understand who you are, why you're trying to secure things, and who you're trying to secure them from. You need to understand the features that the apps you're using have. You need to decide if the app is doing the things that it says it's going to do, and you need to find a way to get that app in a secure manner. I can't tell you what the best thing for you to use is, because that's something you have to decide for yourself. Key validation is probably the biggest; if you can only take away one thing from this, it's key validation. If you're using an app where you can't figure out how to do key validation, assume that it doesn't do key validation and use it appropriately. The EFF took a lot of flak for its scorecard for some reason, but I thought it was great for the everyday person to learn what the different apps can do. So hopefully they're going to update it soon, but that's a great starting point for looking at, out of all the different secure apps out there, what do they do? So that's all we've got.
Thanks to Kara for doing all the cool diagrams with the little hacker guy. And Tom, he's kind of my crypto guru for all the big stuff; he's back in the corner. If you have questions that are crazy deep crypto math stuff, I'm not going to answer them, but he can. For more information, the white paper that covers most of this stuff is on your DEF CON CD; it'll also be on the website, and we'll be putting up an updated one as well as the slides, probably next week. All right, so I got one question. On your phone, what is it that you use for secure messaging? I can't answer that. I said at the very beginning I couldn't talk about specific ones; I can't do it. Is there anything that you won't use that people might be tempted to use? If you are worried about real attackers, you need to use something that, at a bare minimum, does key validation. That's the best I can tell you, and there's a good list of different ones that are at least popular in this crowd that all can do key validation. I wish they would all get along so we wouldn't all have to have five different apps, but you can't win them all. All right, well that was very good. That was very good. Let's give them a hand.