So, we'll have a very special session today, an extended lightning talk session under the name of Now I Sprinkle Thee With Crypto Dust, basically the Internet re-engineering session, where the speakers are going to present the latest developments and new approaches in applying cryptography to make the Internet safe again and a happy place for everyone. So, please let me introduce the first speaker, Mr. Ryan Lackey from Cloudflare. Hello, everyone. I'm going to talk about trusting servers you can't touch. Just a quick background: I've been interested in trusted computing technology for about two decades now. Back when this first became commercial, the Palladium and TCPA stuff from Intel and Microsoft, I thought it was a huge threat to individual freedom, that you wouldn't be able to run software on desktop general-purpose computers, that they would lock you out, and I was really against it. And then I realized a few years ago that system security is so weak that it's already impossible to run applications safely on general-purpose computers, and that people are already using completely locked-down platforms like cell phones all the time. So the battle is already lost, and we might as well get the benefit of trusted computing. I started a company about three years ago to try to build this for servers, I sold it to Cloudflare in 2014, and I've been working for Cloudflare since then on interesting stuff like this. So why should you care about trusting anything that you can't touch? If it's your server and you've got physical custody of it, it's pretty easy to at least know that that's the server you're talking to, by connecting to it directly and things like that. But there are a lot of good reasons. You might not want to have your server in your house; there are good reasons to use colo, lots of reasons. You might want to run a legally challenging application.
I ran a remailer for a long time around 9/11, and that was a very challenging thing. I ran a remailer while I was working in Iraq, and I got all sorts of exciting people contacting me who were actually like 30 feet away from me, so it was very interesting. You might want multi-site redundancy, applications on multiple coasts of a country or around the world, and you're only in one place at one time. Then there are of course CDNs; there are big companies that have lots of servers around the world. Any of the big consumer web apps have servers all around the world, and then of course there's the cloud, where everyone wants to host their applications these days. This is a really hard problem. There isn't a single solution that's going to solve it, and nobody has solved the really hard case of this problem. They've only solved some easy cases and some adjacent cases, but I believe there are ways to solve it in a more thorough and general-purpose way. So there are two forms of this problem. There's protecting an application on a server that you had custody of at one time and then sent out to someplace: you have them all in a central depot, you configure them, and you send them out. That's a very hard problem. And then there's the extremely hard version, where you never had custody of the server; you're just signing up to some cloud service that you've never actually had direct physical access to, and you have to trust those servers. There's a variety of solutions to this, and they range on a spectrum from reasonably good security and really expensive, down through reasonably good security sometimes, to not-so-great security but very cheap. The government solution to this is to build some crazy infrastructure with multiple sites, where you've got security guards at each site, and policies and everything else, and multiple parties at each site, and we've seen with Snowden how well this really worked.
So there are a lot of very regulated industries that use the same model, that are almost governmental, but this isn't really an interesting technique. It's not really scalable, it's not going to work for new applications, and it's not terribly exciting, so we'll sort of gloss over it. Then there's the computer-in-a-safe technique, which is the enterprise model: you have a secure cage in a data center somewhere and you trust the security of the cabinet. Cabinets are just chicken wire; anybody with lock-picking tools, anybody with the ability to bribe employees, or the ability to legally compel access has access. But it's a pretty good model, it works pretty well, it works for most commercial applications today, and you can do it anywhere from very lax all the way to very secure; it's generally what people use today. It's just very expensive. It's very hard for me as an individual to deploy servers in hundreds of sites around the world, VPS-style, with the same level of security that I would get if I co-located servers myself in dedicated cages, so it's not so attractive from that perspective. Then there's chip- and package-level security. You've got everything from smart cards all the way up to hardware security modules, so from about a dollar up to $30,000. They use the same kind of security model: you've got a secure processor element inside some sort of tamper-resistant package, the idea being that if anybody tampers with the package it erases the keys inside, and there's supposed to be some way you can guarantee that attacks on the package are necessarily detected, and so on. As with anything, you can do a very good job with this or a very bad job. It's one of the better techniques, but it's hard to develop for, there are a lot of problems, and it's usually for very static applications where you've got a single function.
Probably the most widespread deployment is GSM SIMs, which is a smart card deployment where they do a very limited number of functions. So this is interesting, but not so much for general-purpose applications. Then there was the whole DRM technology base that was created in the 90s, and they just kept developing it. The DRM use case was to protect data from the owner of a system on behalf of a remote rights holder, so somebody who wanted to lease out a movie to an end user would know that the user can't copy the data. That use case really wasn't very successful. It was in fact so unsuccessful, and such a big PR fiasco, that they killed it completely, except they kept developing the technology and switched it over to a more enterprise-oriented world. Intel vPro, which is used for desktop management, is a direct outgrowth of this. There's Intel TXT on servers. It's interesting, and it's definitely an area to explore. However, there are a lot of problems. It's very complicated. It was designed around 8-bit microcontrollers; it was designed 20 years ago, basically, and extended. So it's not very easy to develop for, and it's not very well documented. And as we've seen with the EFI attacks and all sorts of other stuff during this conference, and really over the past 20 years, there's not a high level of security that you can gain from this technology. It has the advantage that it's commodity; it's already in a lot of the hardware out there. So you can do some interesting stuff, but as a standalone protection for a server that someone else has custody of, it's not terribly great. And then there are of course hardware security modules, the embedded version on the high end. They're so expensive that they're not really a great solution for most of these things. You can pay about five grand for one now, but usually they're $20,000 to $30,000. And usually you have to sign an NDA to get access to an API.
Very, very expensive and difficult to do. They also don't generally work in the cloud. There is one exception that I know about, the Amazon CloudHSM, where they charge you $5,000 upfront and then about $1,200 a month to rent access to an HSM. There's a very limited number of these vendors, and if I were building a really interesting application, I'd be really afraid to use one of these, because these things are inherently black boxes. There's no way that you, as an end user, can easily audit quantity one of these things; to do a real audit you have to tear them down and do all sorts of destructive analysis. And there's another interesting problem with using these things in the cloud that I'll get to in a second. Plus, they're really slow. There's usually something like a 486 or a low-end ARM inside this $30,000 envelope, so they're slower than the hosts they're attached to. So they're also not really a great solution. The best practice that most people follow for applications these days is to segment their application: they have an untrusted part and a trusted part. They allow users to access the front-end portion, and if you have a distributed front end, you might have lots of front-end servers all around the world that aren't very highly trusted. Then you have your back-end servers that are highly trusted, in a smaller number of locations where you can protect them. It's a pretty good model: splitting applications across multiple machines, actively monitoring them and responding when an event happens, and using HSMs in the limited number of cases where that's possible. This is really the best thing people do today. However, it's not perfect, and there are some serious pitfalls to the model. I respect that Amazon was first to deploy HSMs in a large commercial cloud environment, but there were some issues.
The HSM model was generally designed around someone having direct physical access to the HSM, so you knew you were talking to this HSM. Unfortunately, when you're talking to it in the cloud, you have never seen this HSM, and you don't necessarily know that it's the HSM you're talking to, or that you're even talking to an HSM at all. You're just talking to something over some sort of API. In the first version of this, there was no way you could prove that you were actually talking to an HSM; it could have been virtualized. The obvious solution was to build a key into it at manufacture time that can do an attestation operation to prove that you're talking to a real device. The problem is that the current model is to attest from within an Amazon VM inside your VPC, and that VM could also be tampered with at the same time. So the real lesson here is that you have to build these systems to have external auditing, or auditability. However, there are some solutions that are really exciting and might solve this. There's a company called PrivateCore, bought by Facebook earlier this year, that was doing something really interesting as an extension of TRESOR, or TreVisor, where you run a hypervisor entirely inside the L1, L2, L3 cache of an Intel Xeon, which is maybe 20 or 30 megabytes of storage, and then use the crypto operations that are inside the CPU. So you have AES-NI, you have some public-key operations, and so on. This is all within the CPU die, and it's all pinned inside the CPU die. The CPU die has the property that it's much harder to extract keys from; you can't just plug in a PCIe card like a SLOTSCREAMER and pull memory out like you could if it were main memory. So it's pretty cool. There's a thing called Intel SGX that's coming out in probably 2016. The exact date is under NDA, and I don't actually know what the date is, so I can't tell you for two reasons.
And there's an ARM equivalent of it, which is basically like an HSM on a chip, which will be pretty awesome. However, that's at least a year away, probably two years away, for all this stuff. The other solution would be low-end HSMs. There's no fundamental reason why these HSMs have to cost $20,000 to $30,000; it's just that they only sell a couple thousand of them a year, and the companies that make them sell to banks that don't really care how much they pay. So... oops. Okay, so that's the solution. And then there are cloud hosts that are using keys embedded in the host system. So, there are some solutions in the pipeline, and we should be optimistic about them and be working on them. Thank you very much. Thank you very much for your talk. So, just another announcement: due to this tightly packed schedule, we won't have time for Q&A at the end, but if you're interested and you're here in the room, just walk up to the speakers and ask them, or if you're from the internet, then you probably know how to use the internet and you can look the contact info up. So, let me just start the next talk then. Ah, geez, this Mac hardware. So, it's like this, and then you have to do this one. So, where's the Pos1 key, the Home key, on the Mac? You didn't see anything. So, please welcome Andres Erbsen and Daniel Ziegler. Hello. We are presenting a public-key infrastructure project that uses a consensus-based system to map usernames to public keys. Now, there are good security solutions out there, some of which we have already heard about, that are painful to use even for technical people. It doesn't have to be like that. Most of us can manage with SSH, but do we actually always check the host key fingerprint? Alternatively, you can trust a certificate authority. These systems tend to be much more usable.
However, trusting a certificate authority means you're trusting a certificate authority, which might not always be a good idea. Now, incrementally building on top of that and improving the situation are systems like Certificate Transparency and its messaging counterpart used in End-to-End, which allow users to verify what keys have been reported to be theirs. Yet when something malicious is done, the user will not have proof that something bad happened; just resetting the account would have had the same result. Now, we want to do better than that. We want a verifiable public mapping from usernames to public keys. And we want to do so without having a central trusted party, without requiring the user to regularly check in with the system, and we want to do this on low-end devices that fit into your pocket. Right. So the way we've designed the semantics of our system, Dename, is pretty comparable to Namecoin. There's one single global namespace where users can register names of their choice, first come, first served. That does mean that, as with any such system, for example Twitter, if you look up @nsa, you might not get exactly who you're looking for. And crucially, unlike some systems, changing the public key associated with a name that already exists requires a signature from the old key. The way we've implemented this in Dename is by storing all of the state in a Merkle prefix tree, where the public keys are in the leaves. That it's a prefix tree means that the hash of the name determines what path you take to find the right leaf: the first bit says whether you go left or right from the root, and you go down the tree like that. And that it's a Merkle tree means that every node contains the hashes of its children, so the root hash effectively summarizes the tree. So if our hash is collision resistant, there's no way to create a different mapping with the same root hash.
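The commitment property described here can be shown with a toy sketch in Go (a hypothetical simplification for illustration, not Dename's actual tree or wire encoding): every interior node hashes its children, so the root hash commits to every leaf, and changing any one mapping changes the root.

```go
// Toy Merkle tree: leaves hold name->key mappings, interior nodes hash
// their children, and the root hash summarizes the whole tree. Domain
// separation ("leaf:"/"empty") keeps leaf and interior hashes distinct.
package main

import (
	"crypto/sha256"
	"fmt"
)

type node struct {
	left, right *node
	leaf        []byte // nil for interior nodes
}

// hashNode computes the Merkle hash of a subtree.
func hashNode(n *node) [32]byte {
	if n == nil {
		return sha256.Sum256([]byte("empty"))
	}
	if n.leaf != nil {
		return sha256.Sum256(append([]byte("leaf:"), n.leaf...))
	}
	l, r := hashNode(n.left), hashNode(n.right)
	return sha256.Sum256(append(l[:], r[:]...))
}

func main() {
	// Two-leaf tree: one mapping on each side of the root.
	t := &node{
		left:  &node{leaf: []byte("alice -> pk1")},
		right: &node{leaf: []byte("bob -> pk2")},
	}
	root1 := hashNode(t)

	t.right.leaf = []byte("bob -> pk-evil") // tamper with one mapping
	root2 := hashNode(t)

	fmt.Println("roots differ:", root1 != root2) // roots differ: true
}
```

In the real system the path to a leaf is fixed by the bits of the name's hash, which is what lets a server prove a single lookup without revealing the rest of the tree.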
And the reason this data structure is nice is that if a client has somehow gotten the right root hash, a server can efficiently prove to it that a particular mapping is correct. The client just has to download the path, verify that all the hashes are correct, in particular the root hash, and verify that that path actually corresponds to that name. Now, to make this data structure actually represent the state of the world in the way we want, we will have a bunch of servers run by independent organizations and people around the world, with some of them being designated leader servers, which select what operations will be applied to the state. Note that these servers are not trusted to verify that operations are correct, or even to check signatures; this will be done independently by each verifier. The clients send all operations they want to perform to the leader servers, the leaders broadcast them to the rest of the servers, and by the end of the round the servers sign the new state, which is distributed to clients. Now, to look up a name, a client can contact any of the servers, download the signatures, and check that there are enough signatures; just one server asserting that a Merkle tree is the current state of the world should not be sufficient. Then it performs the Merkle tree lookup algorithm that Daniel just described to find the public key. If clients follow this process, they get a strong anytrust guarantee about the correctness of the public key that they found. Specifically, if two clients accept lookups from overlapping sets of good verifiers, which means they only need one good verifier in common, then they see a consistent view of the mapping where all the semantics have been preserved correctly.
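The client-side acceptance rule sketched above might look like this in Go (hypothetical function and verifier names; in the real protocol the signatures are over the signed Merkle root for the round): a client accepts a state only if every verifier it personally trusts has signed it, which is why two clients sharing even one honest verifier must see the same mapping.

```go
// Sketch of the anytrust acceptance rule: a client refuses any root
// hash that is missing a signature from one of its trusted verifiers.
package main

import "fmt"

// accept reports whether signedBy covers every verifier this client trusts.
func accept(trusted []string, signedBy map[string]bool) bool {
	for _, v := range trusted {
		if !signedBy[v] {
			return false // a trusted verifier did not sign this state
		}
	}
	return true
}

func main() {
	// Verifiers that signed this round's root (toy names).
	sigs := map[string]bool{"mit": true, "eff": true, "ccc": true}

	fmt.Println(accept([]string{"mit", "ccc"}, sigs))     // true
	fmt.Println(accept([]string{"mit", "unknown"}, sigs)) // false
}
```

If an honest verifier "mit" is in both clients' trusted sets, any state either client accepts carries mit's signature, and mit only signs states where the update semantics (like old-key signatures) were checked.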
So, in practice, this means that Laura Poitras could upload her key, verify that the correct key is in the system, and publish her username in her articles, and we hope that we can make Dename good enough that the next Snowden could just look up her username and use that to establish first contact. That is an ambitious goal, but given that Tor relies on nine directory authorities, we hope that our comparable approach can be made similarly solid. Now, we have coded up everything we just talked about, three times. The final version is just about 2,000 lines of Go code; Go is a memory-safe language, and we believe our code is reasonably readable. It's under a permissive license, available online, and the installation and setup process is really as simple as the slides describe. You run the Go package manager to download the client, you initialize your account, which currently requires an invite, about which more later, and then you upload your PGP fingerprint or your SSH public key. Now, how would somebody access that? Well, to create a PGP-encrypted message to a Dename user, the command is up there. We also have a GUI applet that does this. Dename can also be used to store SSH host keys, so instead of clicking yes to the fingerprint question every time you connect to a new machine, you could download those from Dename; again, you can wrap this command in a nice wrapper around SSH. We also experimented with patching Pond to use Dename keys for contact initialization, but Pond is designed for strong anonymity, and even though you wouldn't need a secret key, you would still need both users to add each other as a contact, and that is appropriate where it is appropriate, but it's not as usable as email. However, we are prototyping a new application using the same protocol right now. Right, so there's lots of work left for us to do. First of all, we have several ideas about how we want to improve the protocol.
For one, we don't really have a good solution for name hoarding or spam right now; currently we require email verification against a really strict whitelist. We also think it should be possible to run our consensus protocol without designating a particular set of leaders; really, any large quorum of verifiers should suffice, and that would lead to better availability for updates and a more egalitarian system. We also want to integrate more applications, and most importantly, we need our code to get reviewed and we need to get people running independent verifiers, and that's what you can help with. So if you're interested in trying out Dename, looking at the code, and maybe even running a server, you can go to that URL on GitHub or contact us. We'd like to thank Professor Nickolai Zeldovich at MIT for collaborating with us, and Yan Zhu, Adam Langley, and Jelle van den Hooff for useful discussions, and of course MIT for paying us. Thank you. Yeah, thank you very much. So we'll just get the next slides, which are somewhere over here. New developments in OTR. All the way over here again. Okay, then please welcome Jurre van Bergen. By the way, that's the best pronunciation of my name ever by foreign people, so, yeah, clap for that. Great. So I would like to talk for 10 minutes about OTR, which is all the fault of Ian Goldberg and co, and which has helped to keep people safe over the last decade or so. My name is Jurre van Bergen, this is my email address, and I confirm that this is my PGP fingerprint, if you would like to contact me afterwards. So let's have a short introduction to OTR and why it's great that these things exist. OTR is about a decade young; it's about 10 years old by now. Mostly, it's protocol agnostic. What we mean by that is that you could use it over Jabber, you could use it over Yahoo Messenger, you could use it over MSN, if there is still such a thing. And it offers a great amount of security.
There might be some issues because it's 10 years old, and there are some things that we have to consider when we switch to, for example, elliptic curve cryptography, or if we would like to bump the key sizes of, for example, the DSA keys. But these things are probably coming over the next year, in 2015. And most of all, it's a peer-reviewed design, and it has withstood 10 years of scrutiny. Of course there have been some issues, and those issues have mostly been fixed. There are great things like being able to authenticate somebody using the socialist millionaire protocol, things like using a shared secret that you can discuss with people, or just verifying somebody out of band by their fingerprint over another channel, like, say, Twitter. And most of all, it's open source. So anybody can inspect the code, anybody can compile it, and everybody can submit patches back. So I would like to think of OTR as an ecosystem. By that I mean that OTR is much more than just a protocol, much more than a specification; it's also the implementations of this code and the reference implementations, for example in JavaScript, or in Python, or in Go. So it's much bigger than that, and we can't just say that, you know, OTR probably isn't broken by the NSA; it's much more. One of the problems is that some of the implementations, say in Python or in other languages, are incomplete. Some of them have only implemented a certain version of the OTR specification, like only version two, and not version three, which is the most recent, up-to-date version of OTR. We would really like to fix this. So if you are a Python programmer, or if you are a Go programmer, I would really like to ask you to contribute to these projects.
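As a tiny illustration of the out-of-band fingerprint check mentioned above (a hedged sketch, not libotr's actual API; the function names and the normalization rule are made up for this example), the client only needs to normalize what the user read out over the other channel and compare it in constant time:

```go
// Sketch: comparing a locally computed key fingerprint against one the
// user received out of band (e.g. read over the phone or via Twitter).
package main

import (
	"crypto/subtle"
	"fmt"
	"strings"
)

// normalize strips spaces and upcases, so "3b6e 73a1" equals "3B6E73A1".
func normalize(fp string) string {
	return strings.ToUpper(strings.ReplaceAll(fp, " ", ""))
}

// fingerprintsMatch compares two hex fingerprints in constant time.
func fingerprintsMatch(local, outOfBand string) bool {
	a, b := []byte(normalize(local)), []byte(normalize(outOfBand))
	if len(a) != len(b) {
		return false
	}
	return subtle.ConstantTimeCompare(a, b) == 1
}

func main() {
	fmt.Println(fingerprintsMatch("3b6e 73a1", "3B6E73A1")) // true
	fmt.Println(fingerprintsMatch("3b6e 73a1", "3B6E73A2")) // false
}
```

The constant-time comparison is not strictly necessary for a human-verified fingerprint, but it is a cheap habit that avoids leaking how many leading characters matched if the same helper is ever reused in an automated path.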
And then there's, you might have heard of it, Pidgin and libpurple, and I see some people laughing. But if it takes them six months to update all the Windows libraries that have at least one remote exploit, where the oldest one was six years old, we might want to reconsider using these things. It might be the case that you are using Pidgin and you are probably not an active target of whatever group, but it's something to consider. And then there's the fact that crypto is often bypassed rather than broken, meaning that if you are using Pidgin, we might want to reconsider that and maybe write a new client. And then there's usability. Usability is the name of the game, no matter what. I've seen a lot of people struggling with setting up OTR, and OTR is probably one of the simplest protocols to understand; I've heard a lot of good things, that journalists, users, and activists find OTR relatively easy to use compared to something like PGP, for example. So this is very important, and we should make the clients as easy to use as possible. Pidgin or Jitsi isn't one of them, but Cryptocat, for example, is one of the projects that makes it easier for users to use strong cryptography. So, the state of OTR: we have gone from desktop clients and moved to a paradigm of, you know, often-offline messaging, where if I send you a message right now, there will be a reply like 10 minutes later from somebody, because we have moved to mobile devices. So we have kind of moved from the desktop to the smartphone. And we have seen, you know, great stuff like Jitsi here, which is being done by the Guardian Project; I think Cryptocat is also working on the Android and iPhone platforms. And, you know, that is really great, and we need more of those.
So this is, you know, mostly open source stuff, and some of it is better and some of it could use more work, and I would encourage you to work on that, once again. So, the state of OTR: there's a bunch of reference implementations. The most popular, or most well-known one, is probably libotr, which is written in C. There's a bug tracker these days, which isn't on SourceForge anymore; it's on bugs.otr.im. So if you have found any issues recently that you would like to see fixed, also for pidgin-otr, please send your patches there, or please open a bug if you found something weird, and we will try to fix it as soon as possible. There's also pure-python-otr, and there have been people who have come up to me over the past few days who have reported a few issues and patched them. I would really like to thank them for that, which is pretty awesome, because one of the issues was that somebody might have been dropping to plain text instead of encrypting. So these things need a lot more scrutiny. And there's a Java implementation being used by ChatSecure, which is called OTR for Java. So if you're a Java hacker and you would like to help on strong, secure crypto, you should consider helping the OTR for Java guys. There's also a Go implementation of OTR, by Adam Langley. So, IM clients. We have been bitching, and I'm sorry to use the word bitching, but we've been bitching a lot about Pidgin, and it's time to take action. It's time to get rid of it and write something in a more secure fashion, because people have been relying on it. I work on Tails as well, and it bothers me that Tails ships Pidgin at the moment, because the user deserves better. So we really, really, really have to fix this as soon as possible, with OTR support, Tor support by default, and hopefully also (n+1)sec, which is sort of a group encryption protocol.
So then you have the chat paradigm, as I mentioned before, that we have switched to: we have to deal with both low-latency and high-latency messaging, and it could use a lot more help, and maybe someday we will implement something like asynchronous OTR. So maybe there will be something like a ratchet, maybe not, maybe it will be something different. We might not implement it, we don't know, but it's something to consider when making new protocols. So, how can you help? Please, please, please work on better IM clients. Audit the software that we, the community, the ecosystem, have been working on over the years. And most of all, donate to these projects: if you use them, consider giving them a few dollars to buy a coffee. This is really important, and these people deserve a lot more love. Thank you. So this is the end, but this is not the end for now: this is the end of the talk, but it is not the end of OTR. I happily invite you to subscribe to our mailing list, the OTR development mailing list, as you can see here. And there's also an IRC channel on OFTC for short communication cycles, and that's my talk. Thank you very much. Thank you. So, the next ones have a more special presentation style. I have to close this one. Esc, Esc, Esc. I'm escaping. Ah, it's escaping. Okay, gee. There it is, for the screen. It seems to work, but it always ends up with something. Ah, there it is. So then please welcome Elijah Sparrow and Christoph Glinter. No? No, I'm sorry, I mixed that up. Are you with LEAP? All right, okay. So then please welcome Equinox. No, no, I'm Elijah. Elijah, I just asked. Okay, nevermind. My name's Elijah, from the LEAP Encryption Access Project. I'll talk a little bit about what we're doing, and then Christoph will talk a little bit about a related project called Pixelated. Our goal is to bring back the 1990s.
Not all of the 90s, but specifically the part of the 90s that involved not having all of our communication routed through a couple of global monopolists bent on world domination. So specifically we're looking at bringing back unencumbered open protocols among federated service providers. Now, what is federation? Typically it's user to provider to provider to user, like XMPP or email, but I think there's a broader definition that might include the way Pond works, where you kind of cut out one of those providers. It's very useful to have a stable server somewhere that can act as a gatekeeper and prevent Sybil attacks. These days the cool kids are really into peer-to-peer, and that's cool; more power to you with your blockchain. But we think that we can do federation right, and there are a lot of specific cases where federation has certain advantages. Our two goals with federation are these. We need to update it for the 21st century; the federation of the 1990s had some problems, and most important is that the user should never have to trust the provider for storing their content, and ideally not with any of the content in transit or any of the metadata. But from a provider's perspective, and LEAP was started by a lot of people who have a long history of trying to run service providers, it's equally important not to have the liability of storing clear text for users, and to be able to deal with abusive users. There's some tension between those two, which is a lot of what we've tried to resolve. So federation is not dead. These are not projects that we're working on, but they are interesting new projects that were announced in the last couple of weeks. Dissent is a pretty cool, provably anonymous chat routing protocol that doesn't necessarily have to use a traditional service provider model, but it is designed with that in mind.
And then CONIKS is like Certificate Transparency, but designed around a service provider model. So, specifically the activities of LEAP; I will go faster. We have three things. We create the LEAP platform, for automating sysadmin drudgery; a bunch of new protocols, to make it so that the user doesn't have to trust the provider; and the Bitmask client, to try to make the whole experience for the end user equivalent to what they might be used to, and as seamless as possible. This is an example of using the LEAP platform: with these commands, you would become a VPN provider. Now, there's a lot more to it, but you get the basic idea. It also includes a whole testing and monitoring framework. Again, the idea is to take all the incredibly boring shit work out of being a sysadmin, if you've ever been a sysadmin for a while, and try to make maintaining a medium-sized provider an actually fun experience. So, some of the new protocols, just a few of them. The mainstay of what we do is that everything's built on this thing we call Soledad, which allows us to store all the data in the cloud but also make it searchable locally and synchronized among devices; it presents a database API to locally running code. Then we also have a whole set of protocols to manage user registration and password stuff. And the third major component is handling keys in a way that is invisible to the user and that encodes all the possible best practices of OpenPGP, which is what we're currently using, but we could swap it out for something else. So the third thing, the Bitmask client. This is a screenshot of what it looks like on Linux currently. The stable version doesn't actually have email working, but it will in the coming month, and if you have a Google listening device in your pocket right now, you can go to the Play Store and install the Bitmask app. I'll just say one thing about it.
Our goal is to have very, very minimal UI in general, so in this case, if you want to use email with Bitmask, you connect a traditional mail user agent to a locally running IMAP or SMTP proxy, and later we'll look at an alternative to that. Regarding the email: we set out two years ago, maybe longer, and our goal was to attain all the possible security properties we could think of for a better next-generation email, but make it super easy to use, and this is an incredibly fucking insane goal and somebody should have shot us. So we had it working a year ago, but we still haven't released it because there are so many little things to work out. We're very close; I think we're pretty close to attaining all or most of these properties, and I'd like to show you exactly how, but I can't in the time we have. Briefly: we use Soledad to support all of our storage, which is synchronized among devices but also protects all the metadata while it's stored on the server, and in some cases while in transit. We've started with a very simple system of federated automatic discovery and key validation that will be forward compatible with all the cool new things people are working on, like maybe DANE or CONIKS. We've also started to do some hidden-service relay of SMTP; obviously it's very limited, between providers, and the next step is a Pond-like relay from user to provider. Then, to make the user experience as seamless and as close as possible to what people are used to, we rely heavily on Secure Remote Password (SRP), so that the provider never has the password and we can use it for all kinds of other things, like decrypting the local secrets that are used for Soledad. Oh, I guess since DIME is going after us, let me just say that we're in this category of infrastructure approaches, which I think are harder to implement and offer more security potential.
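The SRP mention above can be made concrete. The sketch below is a simplified SRP-6a exchange over the RFC 5054 1024-bit group, showing the core property the talk relies on: the server stores only a password-derived verifier and never sees the password, yet both sides end up with the same session secret. It is illustrative only; it omits the proof messages (M1/M2), the safety checks a real implementation needs (e.g. rejecting A mod N == 0), and uses a simplified derivation of x.

```python
import hashlib
import os

# RFC 5054 1024-bit group: N is a safe prime, g a generator.
N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576"
    "D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD1"
    "5DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC"
    "68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3", 16)
g = 2

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def to_bytes(n: int) -> bytes:
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

k = H(to_bytes(N), to_bytes(g))  # multiplier parameter (unpadded, simplified)

# Registration: the client derives the verifier v; the server stores only
# (salt, v), never the password itself.
password = b"correct horse"
salt = os.urandom(16)
x = H(salt, password)          # simplified; real SRP hashes username too
v = pow(g, x, N)

# Login, client side: ephemeral a, public A.
a = int.from_bytes(os.urandom(32), "big")
A = pow(g, a, N)

# Login, server side: knows only v, computes B.
b = int.from_bytes(os.urandom(32), "big")
B = (k * v + pow(g, b, N)) % N

u = H(to_bytes(A), to_bytes(B))  # scrambling parameter

# Both sides derive the same secret g^(b*(a + u*x)) mod N independently.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)
```

Because the server holds only `v`, a database leak does not hand over the password, which is what lets LEAP reuse the password client-side for decrypting local secrets.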
Sometimes the client approaches are more appropriate, depending on the context. DIME is similar to what we're working on; they took a slightly different strategy. And then there's also a bunch of interesting client approaches. Okay, so we thought LEAP is a cool project, but what we wanted to do was increase the cost of dragnet surveillance, and we can't really do that without mass adoption. We really wanted as many people as possible to use GPG and use encryption, and if you really want everybody encrypting everything every time, today you need a web interface. People use Gmail because it has a web interface. Nobody really wants to install any software on their computer anymore. So we thought LEAP does a cool job of encrypting everything, but a poor job of having a solution that works for everybody, so we thought we'd extend it a little and build a web interface for LEAP. So what we did is build a web interface that encrypts everything, but is also good-looking and has everything you need from a web interface for email, which is mostly search and tagging. The problem is, and this is a tradeoff we really, really made on purpose: now that you have everything on the server and there is no crypto in the browser, you also have the private key material on the web server. We thought this is okay in our situation, because we really didn't want to be a solution for the Snowdens of this world; we wanted to be something that is as usable as possible, and we thought having a provider manage your key material might be a usable tradeoff, because then the provider can keep backups of your key so you can't lose it. Maybe when crossing borders you don't want to have the keys with you anyway. And that's how the web interface looks at the moment.
So we are trying to make this as transparent as possible: as soon as you start typing a mail and sending one, it gets encrypted, and every mail that you receive gets decrypted without you even noticing. We have these little orange indicators that tell you the status of the encryption, but that's it; you don't have to do anything. You can't forget to click the icon to encrypt mail when you're sending it. Yeah, I think that's it. To add to what Kristoff said about the keys being stored on the server with the Pixelated approach: it might be the server, it might be some little embedded device in your house, it might be your friend's server. It's designed to provide maximal flexibility, so you get to decide whom to trust, and you can move that trust around. And LEAP is federated, so it's not Google that knows every key on the planet; it's, I don't know, your private server at home, or your company or something like that, and we thought that was a good tradeoff. Thanks. Thanks a lot, Elijah and Kristoff. So how do I escape the full screen? Okay, there it is. The next talk is going to be Equinox, this time for real. So again, full screen and you are ready to go. Thank you. Hi, I am Equinox. I'm going to present to you about using the crypto that you already have. So this talk is not going to be about algorithms, protocols, schemes, or anything like that. This is about making the best use of the stuff we already have, and I think that's something that has been a little bit neglected. So there's DANE and DNS in the title of this talk, there's TLS, and I guess you're wondering why I'm saying this is not a crypto talk. I'm going to try to show you why we should use these protocols and how we can combine them in a useful way to get to a point where we have a better internet. So let's do the quick intro so you know what I'm talking about. This is just your plain old DNS system.
You have delegations from the various zones down. This is the data for the Tor Project's website. Then we have recently tried to add security to that by pushing key material along the same lines; this is the DNSSEC part. And even more recently we're trying to push data into DNS that ties this key infrastructure to the web server certificate. As you can see at the bottom, there's the Tor Project's certificate for the website. And this is the very, very abbreviated introduction to DANE, which is DNS-based Authentication of Named Entities, I think. But this is not new and this is not anything special, and it's not something that has received a lot of positive feedback, because the protocol is really annoying. You can do denial-of-service attacks with it. You need to trust ICANN, because it's tied to the domain name infrastructure: if they are broken, then everything's broken. And I guess the NSA is really in a good place there, because they can just go to .net, ask them for the keys, sign your domain with a different key, and push another certificate than the one your web server uses. So this doesn't seem to be useful. But there's Tor in the name of this talk, so hopefully you're wondering where this is going. The point is that this system of DANE and DNSSEC is one single hierarchy, and it's an online system. So we can use Tor. We can try to use the DNS system not on its own but by including Tor, to get a common base. Basically, we have a layer here that allows us to mix the two and break the isolation. So we can ask Tor: what is the current DNSKEY for my own domain? That's the bottom-left part here. And as a user, I can ask Tor: hey, I've got this domain name, can you fetch me the DNSKEY for it? You can do that more than once. You can check if the replies are consistent. And this is only possible because there's only one system here; you can't have two CAs issuing certificates for the same domain name.
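The consistency check described above (fetch the same DANE data over several independent Tor circuits and compare the answers) can be sketched as follows. The record layout follows the RFC 6698 TLSA fields, but the certificate bytes are a placeholder and the actual fetch-over-Tor step is out of scope here.

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0, matching: int = 1):
    """Build RFC 6698 TLSA record data: usage 3 (DANE-EE, end-entity cert),
    selector 0 (full certificate), matching type 1 (SHA-256 digest)."""
    assert matching == 1  # this sketch only implements the SHA-256 matching type
    return (usage, selector, matching, hashlib.sha256(cert_der).hexdigest())

def consistent(answers) -> bool:
    """True if every vantage point returned identical TLSA data."""
    return len(set(answers)) == 1

# Hypothetical answers fetched for _443._tcp.torproject.org via different
# Tor circuits. A mismatch would suggest someone is serving us a forged record.
cert = b"...DER-encoded certificate bytes..."
expected = tlsa_rdata(cert)
answers = [expected, expected, expected]
print(consistent(answers))  # True: all vantage points agree
```

The idea is exactly what the talk describes: because DNSSEC is a single online hierarchy, repeated queries over independent paths either agree or expose a targeted attack.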
It's an online system; it needs to happen live, so you can't do offline attacks. And suddenly this technology, which so far has been bug-ridden and error-prone, becomes interesting. It makes it possible to have a common layer of trust that establishes another chain besides the root domain infrastructure, so we can go and check. And that's actually something I guess we should do. We may need to improve Tor to do this in a more efficient way; there's the problem that DNS requests are usually UDP and you don't want to establish a session for them, so it's not well suited to the current infrastructure. We need to do a lot of pushing if we want to take advantage of this scheme, because most software doesn't even support DANE yet. If you look at the major browsers, they are taking a long time to implement it, with really bad excuses. We also need to deploy DANE and DNSSEC on our own domains, and we need to make sure our registrars actually support it, which is taking longer than it should. But in the end, getting to this picture would, I think, make the internet more secure for anyone using it in the plain old way of going to a website and establishing a secure connection. And well, I think that's already it. You may have noticed this talk doesn't really contain anything new. It doesn't talk about algorithms; there's no stuff that needs to go to the ISO for standardization. I think we have been neglecting to make use of the tools we already have. We need to watch out for these combinations and see if we can make the existing systems more secure by combining them. And that was actually what I wanted to leave you with. Yeah, and go screw the CAs. Okay, thank you. So, last talk, I have to put it in here. Hopefully it shows up somewhere here. Cue Jeopardy melody. There it is, finally. Flash. Yeah, it's behind "disk not ejected properly". So, screen presentation mode. Please welcome Ladar Levison. Where's that clicker? Hi, folks. I'm here to talk about dying.
It's kind of a big day for me. I just pushed this massive document out to the internet: the architecture, specifications, threat model. Pretty much a year's worth of work; at 108 pages it is guaranteed to put you to sleep. That's where you can go to find it. I'm actually putting a call out, because I want feedback. It's still early enough that if people make good suggestions about little things we can do to make the system better, more secure, more reliable, more user friendly, I certainly want to hear them. We set up a forum on darkmail.info so we can engage electronically and talk about what's in that massive PDF file, and hopefully come up with some good ideas about how to improve this. And then of course we posted some code to GitHub. All of the key management stuff is kind of working, most of the time, when it feels like it. The message parsing and generation library is almost done; we're gonna push that out next month. We've already started the server integration, and right now the client is a command-line tool. Whoops. I broke it. It overheated. So that's my basic model. If you see these little icons on the outside, those are your keys. We actually call them signets. We've kind of given up on the X.509 format. The signet format is just a very simple binary format that carries your cryptographic information along with some signatures. But I also built it in a very flexible way, because I realized I can't solve everyone's problems right out of the gate. So I wanted to build a format that would be a gateway to even more secure protocols: a way for you to take your key, stick it out on the internet, use this more traditional service provider model, but also advertise that if somebody starts up a conversation with you, they can click a button and all of a sudden you're talking peer to peer, after a Diffie-Hellman handshake, through Tor. Those were our goals. Put simply, we wanted to make encryption automatic for the masses.
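The actual signet wire format is defined in the spec just mentioned; purely to illustrate the "simple binary format carrying keys plus signatures" idea, here is a hypothetical length-prefixed container in Python. The field type numbers and field names are invented for this sketch and are not the real signet field identifiers.

```python
import struct

def pack_fields(fields):
    """Hypothetical signet-style container: each field is serialized as
    1-byte type + 2-byte big-endian length + payload."""
    out = b""
    for ftype, payload in fields:
        out += struct.pack(">BH", ftype, len(payload)) + payload
    return out

def unpack_fields(blob):
    """Walk the buffer and recover the (type, payload) pairs."""
    fields, i = [], 0
    while i < len(blob):
        ftype, length = struct.unpack_from(">BH", blob, i)
        i += 3
        fields.append((ftype, blob[i:i + length]))
        i += length
    return fields

# Invented field type numbers for illustration only.
SIGNING_KEY, ENCRYPTION_KEY, SELF_SIG = 1, 2, 160
signet = pack_fields([
    (SIGNING_KEY, b"\x01" * 32),     # e.g. a 32-byte public signing key
    (ENCRYPTION_KEY, b"\x02" * 32),  # e.g. a 32-byte public encryption key
    (SELF_SIG, b"\x03" * 64),        # e.g. a 64-byte self-signature
])
```

A type-length-value layout like this is what makes the format extensible: unknown field types can be skipped, which is how new capabilities (such as the Tor fields mentioned later) can be added without breaking old parsers.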
We wanted to make it much more difficult to track your social graph. And basically, whenever possible, we wanted to bring the security of the system down to two things: the strength of your password and the strength of your endpoint defenses, two things that we can control most of the time. That was our goal. Now, I'm an American, I tend to like the Second Amendment. I know it has a bad rap over here, but if you go back 200 years, what the Second Amendment is really about is giving people the ability to defend themselves. And I just feel like right now, we're all wandering around the internet buck naked. What we need is a new generation of protocols that allows us to do everything we do today on the internet, but without having to trust the infrastructure to protect us, without having to trust our service providers. I want us to go back to a world where the service provider is just hauling a bunch of empty, nameless containers, like we do with ships. Now, the way I built it, it's relatively transport agnostic. I created this bastardized version of SMTP that I call DMTP, but there isn't anything preventing it from going over a completely different transport, like Tor or carrier pigeons. The cornerstone of the entire system, and I'm not gonna be able to go through everything in 10 minutes, but I'm gonna give it a shot, is this DNS record that a service provider posts with their signing key. Everything ties back to that. And I think I actually posted the wrong property. No, sorry, I posted the right property with the wrong name. So this is a slightly more complex DNS record, and it actually links up with the previous talk: if you are using DNSSEC, you can stick a signed fingerprint for your TLS certificate in the DNS system, and you don't even need to get your cert signed by a CA. Of course, if you're not using DNSSEC, you still need to go pay for a cert. We're not stupid.
I just thought I'd cover briefly some of the fields I'm adding to the organizational-level signet. You know, I'm not just about improving security, I'm about improving accessibility and usability, and part of that is making it easy for people to configure their clients, and easy for their web browser to figure out where to go to let them log in. But like I said, the format is flexible. Whoops. Which means I put in fields from the get-go, so if you happen to be running Tor and you would like to let people access your system over Tor, or deliver messages through Tor, all you gotta do is populate that field with the right information. Now, that's a message. What I want you to take away from this, and I can't go through all the details right now, you probably can't even see all of the details, is that each one of those blue boxes is encrypted completely separately. And if you can see what's after the dash, you'll see who can access which particular box. Now, one of the very avant-garde things I did was move the envelope information out of the protocol and into the encrypted message object. You've actually got two chunks here that are relevant, the origin and the destination chunk, which means the sending service provider can see what domain the message needs to go to, but has no idea who on that domain it's going to. The destination domain gets the message, knows where it came from, but doesn't know who sent it. They need to know where it came from in case they don't like it and want to send it back. It's Christmas time, right? Nobody returned any packages? So when I started this project, I wanted to not just fix the current system; I wanted to rethink email from the ground up with a focus on security. Because one of the big problems we have with email today is the long tail: everybody wants to be backwards compatible all the way back to those PDP-11s from 40 years ago that were running buggy versions of Sendmail.
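The per-chunk encryption model described above (the sending provider can read only the destination domain, the receiving provider only the origin domain, and only the recipient the body) can be sketched with one key per chunk: whoever holds a chunk's key can read that chunk and nothing else. The toy HMAC-based stream cipher and the domain names are illustrative only, not the actual DMTP message format or cryptography.

```python
import hashlib
import hmac
import os

def _stream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy PRF stream cipher for illustration; not a vetted AEAD.
    ks, c = b"", 0
    while len(ks) < len(data):
        ks += hmac.new(key, nonce + c.to_bytes(8, "big"), hashlib.sha256).digest()
        c += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def seal(key: bytes, data: bytes) -> bytes:
    nonce = os.urandom(16)
    return nonce + _stream_xor(key, nonce, data)

def open_chunk(key: bytes, blob: bytes) -> bytes:
    return _stream_xor(key, blob[:16], blob[16:])

# One key per chunk; distributing a key grants visibility into that chunk only.
k_origin, k_dest, k_body = (os.urandom(32) for _ in range(3))

message = {
    # The sending provider holds k_dest: it learns where to route the
    # message, but not who on that domain it is for.
    "destination": seal(k_dest, b"domain: example.net"),
    # The receiving provider holds k_origin: it learns which domain the
    # message came from (so it can bounce it), but not who sent it.
    "origin": seal(k_origin, b"domain: example.org"),
    # Only the recipient holds k_body, which covers the real envelope
    # (sender and recipient mailbox names) and the content.
    "body": seal(k_body, b"from: alice  to: bob  subject: hi"),
}
```

Each party decrypts only the chunk its key covers, which is how the mailbox names stay out of both providers' view.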
So I started from the get-go and said, we're gonna have certain security requirements. I was gonna use AES-CBC because I wasn't sure I trusted GCM, but I kind of got talked out of that. Made it simple: if you're running a DMTP-only server, stick it on port 26. Use SNI, so if you're hosting multiple domains as a service provider, you know via that which TLS certificate to return. Don't feel like giving up port 26, or don't think you could convince your sysadmin to open up that port on the firewall? No problem. Just run it on port 25 and execute the specially crafted STARTTLS command that says, yes, I really wanna upgrade my connection to DMTP, and all the same rules apply. In fact, there's no guarantee that when you execute that command you're even staying on the same server. It would be very simple and very easy to write a plugin for a current mail implementation that sees that command and just tunnels it out to somewhere else, possibly even to a box in somebody's home. So, just briefly, what does it look like when you send a message? If you can see that, you'll see there are no mailbox names. I snipped the fingerprints there, but in that conversation, I set it up so that the servers know they have the correct signets before they transfer the message. If they don't, they just give back a temporary error, and the sender goes out and grabs the signet, because presumably the domain that's sending also supports darkmail (if it doesn't, it probably shouldn't be sending encrypted messages), and the receiver grabs the signet so it can verify the signature on the inbound message. And then you get a bunch of gobbledygook. So that's the spec. It's 108 pages. Like I said, I could use your feedback, everything from the 327 typos I know about to the 30 or so sections that just say TBD, but there's a lot of information in there. There are a lot of things that I'm doing at a very low level that need to be reviewed by people who know more than me.
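The port 26 and STARTTLS fallback rules above can be restated as a small decision function. This is just the talk's logic summarized for clarity; the function and its return values are invented here, not code from the darkmail repository.

```python
DMTP_PORT = 26  # dedicated DMTP port, TLS with SNI from the first byte
SMTP_PORT = 25  # legacy SMTP port, upgradeable via a DMTP STARTTLS variant

def pick_transport(open_ports: set, supports_dmtp_starttls: bool):
    """Choose how to reach a peer, following the fallback order described
    in the talk: prefer native DMTP on port 26; otherwise, if the peer's
    port 25 advertises the specially crafted STARTTLS upgrade, use that;
    otherwise there is no DMTP path at all. Purely illustrative."""
    if DMTP_PORT in open_ports:
        return ("dmtp", DMTP_PORT)
    if SMTP_PORT in open_ports and supports_dmtp_starttls:
        return ("dmtp-over-smtp", SMTP_PORT)
    return ("no-dmtp", None)
```

Note that, as the talk points out, the upgrade path on port 25 may be handled by a plugin that tunnels the connection to an entirely different machine.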
I mean, I know a lot, but I also know I don't know everything. For example, I can't figure out how to keep my servers from running off with my best friend. So that's it, folks. I was gonna say, it takes about three days to go through this, and I only had 10 minutes, but after closing ceremonies, if you wanna keep talking, you can come find me at the bar over at the Radisson. Okay, thank you very much. Please give a big hand for all of the speakers of this session again. And now please enjoy the rest of the Congress.