So I'm Yan Zhu. I'm a security engineer at Yahoo by day, and by other days, because I don't really work at night, I'm a technology fellow at the Electronic Frontier Foundation. I'm Peter Eckersley. I lead the technology projects team at EFF. I'm James Kasten, and I'm a technology fellow at the EFF and a Ph.D. student at the University of Michigan. Great. So who here is excited to encrypt the entire web? I like the energy. Okay. So what are some problems in the world today, other than global warming, child hunger and all that? Another problem: TLS is not ubiquitous. And it's 2015. For instance, last summer when I went to Quora, which was actually the last time I ever logged into Quora, I noticed their login page was served over plain HTTP, which is already bad. But I also opened up dev tools, and lo and behold, the passwords were actually being sent in cleartext. This is pretty bad if you are a site with millions of daily active users. But actually, in Quora's case, their purpose is kind of to spread misinformation about various topics, so maybe it's not bad if you have a man in the middle. I don't know. But there's also a little site called Google. I don't know if you've heard of it; raise your hands if you have. So Google has been pretty good at SSL, but some pages, like this one, which is the landing page for Google Ads, are still over HTTP by default. You might say, okay, that's fine, it's just a static page, it's all public information, no user data. But a man in the middle, such as myself, can inject a button that says login and make it look really lifelike and Googly and all that. An unsuspecting user will click it, whereupon I will redirect them to my phishing site and they'll enter their credentials. So because they don't serve this page over HTTPS, this is still a problem. That's why we can't have nice things yet. Second big problem of the world is that setting up TLS is still really tedious. Even in 2015. Who here has done this process recently? Yeah, a lot of you. 
So you know how bad it is, right? What's that? For instance, if you want to do this on DreamHost, you go to their wiki and it's a 12-step process, and this isn't Alcoholics Anonymous. But it's still a 12-step process. Ridiculous. So I'm pretty experienced in doing this. But how long does it take a total newbie to set up SSL for the first time? Well, I made a little video with some of my coworkers from EFF. I basically went around the office and asked some people, can you set up TLS? And none of them had done it before. So hopefully this will work. Oh, hello Parker. What are you doing today? I'm just going to try to set up HTTPS on my website. That sounds fun. Yeah, maybe. Can we film you for a DEF CON video? That's okay, yes. Great. Okay, okay. Wow, I didn't think that was going to be clickable. No kidding, it's 100% free. So how do I do this? Free, click the wizard. I guess I'll open that in a new tab. So their site was down that day. Well, I guess we're not going to do that today. Okay. You can probably stop rolling. That's a wrap. So then I went to someone else. Now here we have Noah making sure he can get email at webmaster at catplanet.cat, which we'll need later to set up the SSL certificate, except he forgot his password. I don't want to be audio on this thing. Probably not. Alright, three minutes later. What's up, Noah? I don't know if my email works. You don't know if your email works? We started already, is this the thing? Yeah, we're totally filming this. Yeah. Why? So because Noah has not figured out how to get mail, he is going to get coffee instead. And we'll come back to him later in the video, when he figures out what's going on with his email. This is the website you need to get secure. It's free. No kidding. Free. That's not clickable. That's not, click this. Oh, express lane. I want the express lane. All fields are required. Noah Swartz. So I really have to give them my real home address. High grade. 
Medium grade. Let's go with high grade. How long does it take them to sign it? Congratulations. Okay, catplanet.com. No, catplanet.cat. Where is .cat on this list? Webmaster, catplanet.cat. Okay, taking a while. Taking a while. What does this mean? Generally free of charge. Handling fee. But where's my, is it attached? Where's my certificate? Why do I have a random congratulations email? Where is it? Is it in my browser? Here I am, still waiting for the email with my certificate. I got a thank-you email which maybe points me to an account, which maybe has my certificate in the link. But I get a proxy error from their website when I try to go through it. Yeah, after an hour and multiple tries. No certificate. I'm sorry that was such a sad video. I hope people have tissues and are able to cope with this after some therapy. So the whole process of doing this took us several hours, due to various mistakes we were making and all that. That's pretty unacceptable. All right, so let's assume that went perfectly and we got our certificate. Now we have our certificate and we want to set up SSL on our server. But SSL configuration is really confusing. A few years ago people were saying RC4 is fine. It's very efficient. It's fast. But now, in 2015, experts like Nick from Cloudflare are saying we need to kill RC4. Another example: Chrome is sunsetting SHA-1 because it's not secure, and sooner or later, if your site uses SHA-1 hashes in the certificate chain, it'll be displayed as insecure in Chrome and Firefox. So I think we should make a movie called The SHA-256 Redemption. It's about a man who is mistakenly accused of using SHA-1 on his website, gets fired from his sysadmin job, meets Morgan Freeman, and spends all this time convincing people he actually used SHA-256. In theaters next fall. Other examples. Even later in the fall, people said we should disable SSLv3 now. Okay, so now SSLv3 is off the table. And then there's Logjam. 
Logjam means you have to generate your own Diffie-Hellman group. And, you know, the point is, if you're not paying attention to this stuff, you can fall behind, and your SSL configuration will be horribly insecure. And if you use a config audit tool like SSL Labs, it'll just give you an F as cryptanalysis attacks get better and so forth. But you'll notice that letsencrypt.org is getting an A plus. It's actually one of the best sites that SSL Labs has seen recently by their metrics. What's that? Okay. Yeah, sorry, I did that a few days ago. Anyway, so we used the latest recommended ciphers from Ivan Ristić's book, which I'll put a plug in for right here. But the problem is most people don't have the capacity to keep track of when they should be changing their SSL configurations. And so we end up with kind of a broken encryption on the internet. Problem four: mixed content blocking. This is keeping a lot of people from transitioning to full HTTPS. Mixed content blocking is when your site is over SSL but you're loading all these resources over HTTP. So the browser says, okay, we need to keep the user at the HTTPS security level, so we're going to block HTTP resources that you try to load. And so your site's just broken if you load scripts over HTTP. In the case of Lenovo, which I checked out a few nights ago, it's available over HTTPS, but they can't load their fonts over HTTPS yet. So by default you're going to get HTTP. So who here uses HTTPS Everywhere? Wow, awesome. Peter and I work on maintaining that browser extension. So if you use HTTPS Everywhere in Chrome, you can go to a website and actually see what resources could potentially be rewritten to HTTPS. So this is a pretty useful developer tool if you're trying to convert your site from insecure to secure and you have a lot of third parties and you don't know which of them support SSL. So open up DevTools and there's a tab you can play with where it helps you rewrite stuff. 
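[Editor's sketch] HTTPS Everywhere's rulesets are, at their core, from/to URL rewrite rules. Here is a minimal Python sketch of that idea; the rules and function name below are illustrative, not taken from the extension's actual code or any real ruleset:

```python
import re

# A couple of rules in the spirit of HTTPS Everywhere's XML rulesets,
# which map "from" URL patterns to "to" replacements.
# These example patterns are made up for illustration.
RULES = [
    (r"^http://(www\.)?example\.com/", r"https://www.example.com/"),
    (r"^http://cdn\.example\.net/",    r"https://cdn.example.net/"),
]

def rewrite(url: str) -> str:
    """Return the first matching HTTPS rewrite, or the URL unchanged."""
    for pattern, repl in RULES:
        new = re.sub(pattern, repl, url)
        if new != url:
            return new
    return url
```

URLs with no matching rule pass through untouched, which is also how the extension behaves for sites it doesn't know about.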
And the W3C is also going to help you all out. There's a new directive in CSP called upgrade-insecure-requests. When a browser sees the header, it's going to say, oh, this site wants us to upgrade all sub-resources and links to HTTPS even though they're written as HTTP. So it will try the HTTPS request, and if that fails, it just gets blocked; it won't make the insecure network request. So that's also useful. And I think the final problem is that there are just too many certificate authorities. It's a lot. So Peter and some of his colleagues made this very complicated, scary-looking graph a few years ago. Peter, can you tell me what it means? So this is not the whole map. It's actually a little portion, zoomed in, of the whole map that we presented at DEF CON in 2010 from the SSL Observatory project. And when we set out to do that project, we thought that there would be about 66 certificate authorities in Firefox and maybe 150 in IE. But then when we scanned the internet, we realized that they'd all been signing and delegating to other certificate authorities that weren't in the official trust root, but which would actually be trusted by browsers. And we concluded there were thousands of CAs, operated by at least many hundreds of organizations, and a compromise of any one of these CAs could basically compromise any domain on the web. So, kind of terrifying. That's really scary. Not going to be able to sleep. And in fact, earlier this year, Google found misissued certificates from a Chinese certificate authority. So this is not just a theoretical attack. We've actually detected this in the wild. So Peter, this sounds pretty bleak. How are we going to make a world that we want to live in in the future? So our solution to the problem of there being far too many certificate authorities, and to all of these other problems, is actually to start another certificate authority. 
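[Editor's sketch] To make the upgrade-insecure-requests behavior described above concrete, here is a rough Python sketch of the rewrite a CSP-aware browser performs before fetching a subresource. This simplifies the actual W3C algorithm, and the function name is mine:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_insecure(url: str) -> str:
    """Rewrite an http:// subresource URL to https:// before fetching,
    roughly as a browser does for a page served with the header
      Content-Security-Policy: upgrade-insecure-requests
    Other schemes pass through unchanged."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

If the upgraded HTTPS fetch then fails, the browser blocks the load rather than falling back to plain HTTP, which is the crucial security property.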
But really, in more detail, as I'm going to explain in the next few minutes, we need a detailed vision both for security, a way that every website can get the certificates it needs and nobody can get certificates they're not supposed to have, and also a solution for usability. So that humans who are just web developers, and don't want to go all the way down the insane rabbit hole of all the strangely named SSL vulnerabilities and the animals they're named after and things, don't need to know about that stuff. So the biggest question we need to answer for this project is: if we're going to issue certificates, how do we decide whether to do so? And you can think of this as being a little bit like that scene from the Holy Grail where someone shows up and says, I want a certificate, and you say, bring me a shrubbery. And then you go off on your quest, or you have your software go off on a quest, and come back with a shrubbery, and then you're like, it's nice, but I think I'd like another shrubbery as well. And so this dialogue, hopefully not quite so comical, happens in this new protocol we've got called the ACME protocol. Our CA will speak it as a server, and we'll have a client as well, but you can write your own client if you want. And the shrubberies are called challenges. A challenge is a particular task that the client needs to perform to prove that it deserves a particular certificate. And there's this fundamental issue you have to deal with in these challenges, which is that you're bootstrapping from non-cryptographic authentication somehow up to crypto. How do you know what key to use when you didn't start with keys? The typical answer, for at least the bulk-issuing certificate authorities, the ones that are comparatively cheap and will not charge you $1,000 or whatever, is to just send an e-mail to an address at that domain name, maybe admin or root or webmaster or something. Just send off that e-mail, totally insecure. 
If a link in the e-mail gets clicked, then whoever asked for it to happen gets the cert. A smaller number of CAs will do this thing where they go and inspect an HTTP URL and see that you put up a nonce that they gave you at that URL. So we are going to do some variants of this type of domain validation. The types we're going to support at launch: there's a new DV protocol we've invented called DVSNI, which works at the TLS layer. The aim is to prove not just that you're a user on the destination machine who happened to register the name admin, but that you actually have administrative control over the web server, and can configure it to answer for synthetic fake virtual hosts that we ask you to answer for. And we do that in the TLS handshake. We ask for that name using the SNI extension, if you know it. And then we inspect the results and make sure that you're able to customize those. We also support simple HTTP, which has a little bit less of that security in it, but it is also going to be necessary for some people who are behind proxies or CDNs. And, you know, in the wild, we'll get certain numbers of attacks against these things that succeed, and we'll monitor the statistics and see how that's going, whether we can keep doing both. But probably down the pipeline, people are asking us for extra things. One that we get a lot of requests for is DNS-based validation, especially for larger deployments: just have the nonce posted in a special DNS record. And another one we may do is an upgrade of the DVSNI protocol to do a whole lot of domains in a single handshake, so that if you're virtual hosting a thousand domains, you can just do one fancy set of challenges and not a thousand challenges over and over again. And we might do that on a different port, one extra port in addition to 443, for people who want to keep their firewall's 443 going the way it is and then point a special port somewhere else. 
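[Editor's sketch] As a rough illustration of the simple HTTP flavor of challenge described above, here is a sketch of both sides of the exchange. Real ACME challenges are more involved (the published value also binds in the account key, for instance), all the names here are made up, and a dict stands in for the web server's document root:

```python
import secrets

def new_challenge() -> dict:
    """CA side: mint a random token the applicant must publish."""
    return {"token": secrets.token_urlsafe(32)}

def provision(token: str, webroot: dict) -> None:
    """Client side: 'publish' the token at the well-known path.
    A dict plays the role of the served filesystem here."""
    webroot[f"/.well-known/acme-challenge/{token}"] = token

def validate(token: str, webroot: dict) -> bool:
    """CA side: 'fetch' the well-known path over the network
    and check that the published value matches the token."""
    return webroot.get(f"/.well-known/acme-challenge/{token}") == token
```

The security argument is that only someone who controls the web server answering for that domain can make the token appear at that URL.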
We'll have to do a lot of auditing on that port before we pick which one we're going to use. Probably that will involve Internet-wide scans and a call for comment. But fundamentally, all of this domain validation stuff is a bit terrifying. Basically, you can imagine the Internet as being like a very dark hallway, and you're flinging some packets down that hallway. You can't see where they go, and something comes back and says, yeah, I'm really this domain. And it could have been eaten by monsters or modified; you have no way of knowing, in general. And so you will get attacks: if people compromise routers, compromise DNS servers, they can defeat these methods. Not very reassuring. We can do slightly better than that. We can do multi-path DV, where we use servers in multiple data centers in multiple parts of the world to make several versions of the validation requests, or several versions of the DNS query. This doesn't completely protect you. A very powerful adversary might compromise each of those places, or someone might just compromise a router near the victim. So there are multiple dark paths through the Internet, but they all wind up in the same room, being eaten by the same monster. So this probably isn't good enough for us to build the whole Internet security infrastructure on top of yet. But we can do better than that, actually. What we need to do is ensure that this leap of faith down the dark hallway really only happens once. How do we do that? We were talking before about the SSL Observatory project. We spoke at DEF CON five years ago about this. Since then, there have been a number of these cert-gathering projects besides that centralized observatory we talked about. We have a decentralized observatory, with about a million Firefox clients who opted into sending us certificates. There are the certificate transparency databases run by Google and others. And there's the ZMap project that James and his colleagues at the University of Michigan have done. 
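[Editor's sketch] The multi-path DV idea can be sketched in a few lines. Everything here is hypothetical: each vantage point is modeled as a callable returning whatever token it observed, and the policy shown (unanimity) is just one possible choice:

```python
def multipath_validate(vantage_checks) -> bool:
    """Run the same domain-validation check from several vantage
    points and accept only if every vantage point saw the same,
    non-empty answer. A single hijacked network path then can't
    win on its own; it has to fool every vantage point at once."""
    results = [check() for check in vantage_checks]
    return results[0] is not None and len(set(results)) == 1

# Hypothetical vantage points: two honest data centers and one
# sitting behind a router an attacker has compromised.
honest = lambda: "expected-token"
hijacked = lambda: "attacker-token"
```

As the talk notes, this still fails against an adversary close to the victim server, since all paths converge there.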
And these build giant databases of all the public certificates in existence. And so we know the entire SSLiverse, at least the public portion of it, at any given moment. And that lets us do this thing where, when someone comes through the door and asks for a domain name, like a bank in New Zealand we've never heard of or a corporate web mail system somewhere, we can look in the database and say, oh, there is already a valid certificate for this domain name that you're asking for. We're not going to just do non-crypto domain validation. We're actually going to ask you to prove possession of the private key in the existing valid certificate. So that way, you can only get a certificate from us if you've already got a cert, by signing something or decrypting something with the key in that cert you have. This is going to be a little bit less usable. It may mean that you have to go chasing around to figure out where your existing key is for your cert. In the worst case, if you've lost it, you might have to go to another certificate authority and buy a cert. But we will ensure that we never robo-issue to a bank or a valuable web mail site or anything that has a certificate right now, just because a router got compromised. So you might notice, if you've heard of TOFU authentication, this mechanism lets us get TOFU. TOFU is trust on first use. You're probably most familiar with it from SSH. It's the model where, whatever trust you establish on your first insecure connection, if anything changes later, if the key at the other end changes, you're going to be protected against it. So we think that's pretty nice. Now, the next problem we're going to have to deal with is basically the horrible complexity of TLS configuration. As Yan was showing, there are POODLEs and Logjams and Heartbleeds and all these things that can get you. 
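[Editor's sketch] A trust-on-first-use check, in the SSH known_hosts spirit just described, can be sketched like this. It is a minimal illustration, not Let's Encrypt's actual code; pinning a SHA-256 fingerprint of the certificate is my simplification:

```python
import hashlib

class TofuStore:
    """Trust-on-first-use pin store, known_hosts style: the first
    sighting of a name is a one-time leap of faith, and any later
    change of key material is rejected."""

    def __init__(self):
        self.pins = {}  # domain name -> hex fingerprint

    def check(self, name: str, cert_der: bytes) -> bool:
        fp = hashlib.sha256(cert_der).hexdigest()
        if name not in self.pins:
            self.pins[name] = fp   # leap of faith, taken exactly once
            return True
        return self.pins[name] == fp  # later changes fail the check
```

The CA-scale version in the talk replaces the leap of faith with a lookup in the global certificate databases and a proof of possession of the existing key.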
And what we want to have is basically a client, an agent, that runs on your server and magically figures out the right way of doing things for you. At least if you want that. And so what this does is it takes the current situation, where every webmaster, every web developer out in the world is like a giant crowd of millions of people, and we're sitting up here as a security community kind of yelling at them, saying: here, understand this incredibly complicated corpus of knowledge. All of you need to understand this incredibly complicated body of knowledge to customize each one of your sites correctly. And it would be way saner to have a world where we can have a small team of people, maybe just the people in this room, the people who want to contribute on GitHub to the project, focus in on how we actually do TLS deployment correctly, do it once correctly, and then give everyone else a tool that just sorts out the details for them. So that's the aim with the fancier client that we're supporting here. And the plan, for when someone gets this and installs it in six months or a year's time, is that it'll support tweaking their existing server, Apache, Nginx or anything else. We have a little API that you can use to add support for new server software, to pass the challenges and then install the resulting certificate, or certificates if you need lots of them. And then tweak all of the security parameters and options so that they have good values, either maximizing security, or maximizing security subject to the constraints of compatibility with older clients, depending on which one you want. And automating tasks like renewal and response to security incidents that right now cause massive problems for HTTPS deployments. So some of you are probably a little terrified if I'm saying automate security tasks. So let me talk about what we mean by doing that. Because there's a spectrum. 
There's easy stuff, like tuning the cipher suites on a server: we go and look up some lists of recommended cipher suites, debate them for a while, and then pick a good value. We turn on OCSP stapling so that everyone can actually tell whether the certificates have been revoked. We turn on the upgrade-insecure-requests header the W3C just specified, because it's basically a no-brainer to turn that on. More tricky is redirecting from HTTP to HTTPS: because of mixed content blocking, even with upgrade-insecure-requests, sometimes this can cause breakage. We can maybe do a fancier version where we look to see if you've got a client that's modern enough to know about the upgrade-insecure-requests mechanism, do a differential upgrade for the modern clients, and leave the old ones on HTTP. Similarly, auto-renewal and rekeying: we've actually got these largely implemented. But they're a little tricky. There are a lot of corner cases. What happens if you fail to renew a domain? Something went wrong with one of your domains, so now you have an old cert for more names, but it's about to expire, and a new cert for fewer names. And so at some point you have to transition. You want to try and tell the admin, hey, please pay attention to me, I'm a program on your server, I'm trying to tell you there's an issue, but the admin is not reading their e-mail. We want to get these corner cases right, but we think we can do this pretty well. And then the hard stuff is full redirects for everyone and turning on HSTS. You should all know about the HSTS header. If you don't set it, your site is totally insecure. But it's also the kind of secret sauce that can break sites if it's not done correctly. So we'll need to be very hand-holdy and give the admin good tools and advice to turn these on when they're ready and not beforehand. 
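[Editor's sketch] For a Python server, the "easy stuff" above might look something like this: a TLS context with the legacy pieces the talk mentions (SSLv3, RC4 and friends) ruled out. This is a sketch of the idea, not an authoritative recommendation; the right settings drift over time, which is exactly the point being made:

```python
import ssl

def hardened_context() -> ssl.SSLContext:
    """Build a server-side TLS context with the legacy protocols and
    ciphers discussed in the talk disabled. The cipher string here is
    one plausible 2015-era-style choice, not a canonical list."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Rules out SSLv3 (POODLE) along with TLS 1.0/1.1.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Forward-secret AEAD suites only; explicitly exclude RC4, 3DES, MD5.
    ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5:!RC4:!3DES")
    return ctx
```

The point of the Let's Encrypt client is that an agent updates choices like these for you as recommendations change, instead of every admin tracking them by hand.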
And the hardest stuff, which we may build but it won't be there straight away necessarily, is HPKP pinning, which lets you lock out all the other certificate authorities but can really break sites easily. And full mixed content auditing, done automatically somewhere via a proxy in the server or via CSP report-back. This stuff is theoretically possible, but it's a big engineering task that's down the road. But fundamentally, you know, CAs are terrifying things, because they control the security of the whole web, and we're trying to build a giant automated certificate authority, a giant crazy machine, and we have to be a little bit afraid of the thing we're building. So how do we design for safety as we build this giant robot authentication machine? One part of the answer is defense in depth, and the things I've been talking about are in fact forms of that: trying to ensure that we have multiple tests in place and we don't fail because one of them was attacked. But also we need to plan to detect and survive really serious kinds of compromise events, because we're going to have a giant target painted on us. So, protecting against ourselves, basically. And we have a few cards up our sleeve for this. One is to be essentially incredibly transparent. We're going to publish the logs of the transactions that we have when people come and ask us for a certificate. A public server asking for a public certificate, we believe, is actually a totally open public event. So we'll publish the logs: what IP addresses are asking for what certs, and what happened when we tried to verify that. We'll publish a full verifiable history of every certificate we issue. So you can go and look at the logs, see that they have an incrementing portion in the serial numbers, they're all signed, and you can collect the set of Let's Encrypt certificates really easily if you want. And we'll also push that data into the certificate transparency logs. 
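[Editor's sketch] A verifiable issuance history with incrementing serials can be sketched as a hash-chained, append-only log. This is a toy illustration of the transparency idea, not the real Let's Encrypt or Certificate Transparency data structures (which use signed Merkle trees):

```python
import hashlib, json

class IssuanceLog:
    """Append-only issuance log: each entry carries an incrementing
    serial and a hash link to the previous entry, so anyone mirroring
    the log can detect gaps, rewrites, or deletions."""

    def __init__(self):
        self.entries = []

    def append(self, domain: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"serial": len(self.entries), "domain": domain, "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for i, e in enumerate(self.entries):
            body = {k: e[k] for k in ("serial", "domain", "prev")}
            if e["serial"] != i or e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

Any tampering with an earlier entry breaks the chain for every later entry, which is what makes the history verifiable by outsiders.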
So people can validate that every cert they're seeing, if they want to, is there. Now, we'll also help you with HPKP at some point. If you're in power-user mode and you're brave and crazy enough to lock out the other thousand CAs, you probably will still have a couple of CAs of your choice as backup options. You should never just pin to one CA, because if you break that pin, your site will become unreachable for months. And it's happened to people. And then we also need to plan what happens if we do get compromised: what happens if an employee of our organization is working for someone else, what happens if there's 0-day in our systems, what happens if we just screw up and there's a bug in our code, or one of our systems gets compromised. And what happens if our keys get factored; we keep seeing crypto attacks that are very powerful, so what happens if those affect us? The plan there is to have some mechanisms that allow really fast server-initiated responses to security incidents. So if a Heartbleed event happens, we should be able to put up a flag on a million certs and get them all rekeyed within 24 hours, at least if the client is polling us and saying, hey, do you have an emergency for us to respond to? We can tell them, and that can happen before the admin gets out of bed, if they want their site working that way. And then we can also do recertification: if one of our intermediate CAs were to be compromised, we're not too big to fail. We could actually switch it out for a different one, go to the cold storage, the bank vault, get out the key, make a new one and then roll it out. So these are structural protections that we think help make this safer. Now, some institutional and kind of organizational details. This project began as a merger of an EFF and UMich project to do this and a Mozilla project, so it's now all of those organizations teaming up together. 
It's housed in its own non-profit called the Internet Security Research Group, or ISRG. And it has major sponsors: EFF, Mozilla, Cisco and Akamai, all putting a lot of resources in. Others really helping out: IdenTrust, Automattic, and the Linux Foundation, which is helping to do the administration for ISRG, and a couple more sponsors on the way. Roughly, the breakdown of which bits are being done by which teams: operations of the actual CA's servers and so forth, ISRG and Mozilla; server code, Mozilla and EFF; client code, EFF and UMich; and the complicated policy and legal and bureaucratic tasks happen here. The current schedule we have, as of this week (we had a slight revision): we're going to have our first cert issued during the week of the 7th of September. These certs will be publicly valid, so default browsers will trust them, from sometime in mid-October roughly, and there'll be a beta program to start actually deploying them on a wider basis. In the meantime, we have a lot of work to do. There are both the bureaucratic tasks of passing the crazy audits and producing the insane documents, and then the documentation of your documentation about all of your backup plans for everything. That's one of the giant tasks that makes starting a certificate authority expensive and time-consuming, so we have a couple of people who are incredibly valiant and tenacious in getting those audits passed. And then code. If you guys are interested, our code for both the server and client pieces is on GitHub, along with the spec. You can come and hack on it, help us break it, help us fix it, help us implement some of the cool features that we talked about but haven't got yet, and help us ultimately encrypt the entire web. Now I'm going to leave it to James to give a demo of the way the client works and some of the stuff we have running right now. All right, so I'm James. Hopefully we can do a live demo here and nothing will go wrong, thanks to the demo gods. 
Nothing ever does. There we go. Can you increase your font size? Can people see that? Or bigger? I think that's about as big as we can go. All right. So right now we're using virtual environments, but hopefully we'll get into, sorry, a little bit tall here, hopefully we'll get into package managers. Right now, when you download the code off of GitHub, it gives you instructions to set up a virtual environment, and our client runs in Python right now. But let's go through an example. So pretend we have an enthusiast who owns encryptionexample.com. He likes to teach people about crypto; unfortunately he can't set it up himself. And he's also interested in finance, making money. So he registered the site tlstrust.us. It has everything you need to be secure. It has all the logos, and it has the lock icon up there. But it doesn't actually run over TLS. Which is unfortunate. It looks secure to me. You know how that goes. So luckily Let's Encrypt came out. He has an Apache server. Sorry, I'm not used to Macs. James' first time using OS X. So when you run the client, it asks you to go through an end-user agreement right now, because it's a preview release. But basically the client will actually go through your server configuration and figure out what names you're hosting. So it goes through the config files, and you can select which names you'd like to use. For the first example we'll just do encryptionexample.com. And it actually will go through and solve the challenges for you. Right, all the shrubberies. The UI can use a little tune-up here. But in that time frame we've actually completely solved the challenges and set up TLS on the server. That was 20 seconds rather than 3 hours. This is still self-signed for the demo. Yeah. There we go. So we have an HTTPS server. Now mind you, it's self-signed, since we don't have the actual CA up and running yet. But, yeah, if you trust the Happy Hacker root, which I don't advise, because the private keys are public. 
You too can get that green bar up there right now. Where's the logo? The logo? The Let's Encrypt logo? Are you talking about extended validation certificates? Your logo. Okay. So we can also, you know, it's probably advised that for the tlstrust.us site we're always going to want to run over HTTPS. So we can run it again, and, I didn't mention, you can actually specify everything on the command line and not use any prompts at all. But once again it will quickly set up the server, solve the challenges, and it also will create a redirect from the original HTTP host to the new host. So that's all great. We're going to add a nice little ncurses UI in there that asks you if you want easy mode or secure mode or custom mode, and it'll try to figure out the dozen or two dozen security features, like redirects, HSTS, OCSP stapling, et cetera, you want configured for you. So now you're safe. tlstrust.us works. Now, some people obviously don't want us to actually touch their configurations, which, you know, makes sense. And if you do want to simply remove everything, or we mess up your configuration, which hopefully we won't, you can roll back everything that I just did. There are three checkpoints. Roll back everything, and now HSTS is no longer enabled on anything. It goes back to the original state, as if we hadn't touched anything. Finally, you know, we... Not right now, it doesn't revoke the certificate. We will have a management system where you can manually revoke all the certificates. I was trying to get that ready for the demo, but I couldn't quite code fast enough, especially with the spotty internet. Finally, if we don't support your server right now, or you simply want to use another technique, you can specify; we have a manual authenticator which does not require root. 
It simply gives you a file to post to your server. And there's a standalone one, which you just run and it will automatically get your cert and drop it in the current working directory. That one listens on port 443, so you have to turn off your existing web server if you have one. Yeah, that's it. Do we have time for questions? 10 minutes for questions. Awesome, let's go. I think there's a mic, so you should go up and ask questions at the mic. How hard was it to become a certificate authority and get accepted by the browser vendors? We didn't talk about this. Okay, how many people here think that in order to become a certificate authority you need to be accepted by the browser vendors? Can I see hands? You need to get into the thing? So, as many of you may have realized, you don't need to be accepted by the browser vendors at all. Actually, all you need to do is get one existing certificate authority to promise by contract to cross-sign you, and then you're in all of the browsers instantly, if that existing CA was. So the crucial thing for us was getting an agreement with a certificate authority saying, yes, if you pass some audits, we will cross-sign you. Once we had that, we could talk publicly about the fact we were going to do this, because we had a reliable path to being a browser-trusted CA. Passing the audits is a lot of work. There's an incredible amount of bureaucracy there. Documents that are hundreds of pages long, with requirements that began as sensible things that we would all think, oh yeah, you should probably have a backup plan and then an emergency plan. You should have a way of vetting your personnel and a way of revoking their credentials. You write all of those things down and it becomes a really long list. Then you write down another entry saying, okay, now document your answers to all of the previous questions, and document what your documentation is. And so it costs a fair bit of money and takes a lot of time to do that stuff, and we're close to having all of it done. 
We're going to have a cross-signature from a CA called IdenTrust.

So one of your tenets is to make it very easy to get these certs out there, which is totally awesome, but it's still aimed at a technical crowd; you run the command line. The demo is awesome, by the way. Are there plans to get hosting providers involved so that end users who are not so savvy can easily get certs?

We want to have those tools available. The Let's Encrypt client, the fancy Python one, can be used by those hosting environments, or they can code up their own clients. I think there will be a trade-off for different people: some will do their own coding and some will just deploy our code.

So there's an API? Yes, there are two APIs, actually. The ACME spec is itself an API. There's another API for our client: say you have a new kind of server, so you might want to write a Postfix or an XMPP server module or an IMAP server module to obtain or deploy certificates for all those different things, and we have a simple API against our Python client that helps it understand new types of servers.

So obviously with certificates you have to think about revocation lists, and for the entire internet you're going to have a pretty big revocation list. I guess a popular strategy currently is purging it every now and then, but that can cause security issues for certificates that were actually revoked because of compromise, if they get pushed off the CRL with things like ZD signatures. So what is your plan for dealing with that?
We're going to do OCSP as well. I can't remember our latest plan for CRLs; basically the main reason for doing CRLs is to make sure that Google and other people who bake in revoked-certificate lists have a fresh way of knowing which of our certs are still valid. We're also going to be launching with a three-month validity window, so structurally we're going to have a little bit less risk from unrevoked certificates than most CAs. In the long run, perhaps we could aim for an OCSP must-staple kind of environment, but I think that's a little bit more speculative; there are a lot of missing technical pieces and unsolved technical questions, because revocation on the web right now is broken. Thanks.

So on the server side, when you're actually updating configs on behalf of users, like for Nginx or Apache, do you have any plans for integrating with configuration management tools that might also be vying to update those?

Yeah, we'd like to write an installer for Puppet and Chef. I'm well aware that that is a major need; we just haven't gotten around to it yet, but it should be possible, and if people want to write their own clients that would be great too. Actually, these are exactly the kind of tasks that make a lot of sense for volunteers, because they're very separable, and we have a fairly clean API for extending client functionality. Come find us on GitHub.

Yeah, I work for Puppet, so I would love to see a good integration there. So I will totally check that out, thanks.

Hi. As we know, a lot of web content is paid for through ads, and as you push to encrypt all these sites and everybody goes encrypted, you remove the ability to inject ads dynamically. How do you think that will impact content generation on the web, and do you think it will push toward a more paid-for style?

Well, we also just launched Privacy Badger, and you could ask us a more pointed version of that question about Privacy Badger, but the answer is I
don't think there's any technical reason why ad tech companies can't all just switch to HTTPS. There's nothing that you can do over HTTP that you really can't do over HTTPS. In my conversations with those companies, we just get referrers raised, much to my sadness. Actually, you can still get referrers: either you can pass them along yourself, as ad companies typically do, or if it's HTTPS to HTTPS they're largely still there. It's complicated; mostly they get blocked when it's an HTTP destination and an HTTPS source. So I think there has just been this attitude among some ad companies saying, why should we do this, we don't see a reason. I've been in a room full of ad tech people yelling at each other, where some of them were saying you need to do this, why are you not doing it, and others were saying I don't see a reason. I think the answer will be that everyone just ends up doing it.

You have a lot of plans that seem pretty lofty, and very large goals. You said you have two APIs publicly available; do you have any other APIs planned, and if so, when, and what will they do?

No other APIs planned for this project in the short term, and one of those two APIs is very small. The big one is the ACME protocol, which is at the IETF; the Mozilla team is largely shepherding it through there. That's going to be a big new web protocol for doing this kind of stuff. The other API is much smaller; it's for our Python client, and it's basically a way that you can write some Python code for your particular server that slots in neatly. You can think of it as much more of a plug-in infrastructure for the client. Thank you very much.

I was one of the guys that said that not every site should have SSL. You've talked about your plans for avoiding direct collisions, so site.com versus site.com; you're not going to reissue to a different actor. What's your plan if I go and get site-secure.com, a sort of very low-tech, homoglyph-type attack? How are you going to deal with that?

I think this is actually still an open question. We are talking
a lot internally about two different plans, or two different types of plans. I think from a first-principles basis we agree that we need protections against phishing; the internet needs to not phish people. But there's also a question about whether the X.509 layer is going to continue to be the best layer at which to do this. It's traditionally been the layer at which this occurs because of the lock icon, and the fetishization of the lock icon, which made sense when that was a mark of a trustworthy e-commerce site versus a random site.

It's not trust, it's identity. It gives you identity and transport security.

So let me finish. If we were going to do it outside of X.509, the two places it could go would be into the domain name system itself, or into the client, so that there's a richer API. This already exists in clients with Safe Browsing, which comes from Google and is used by other browsers. So one option for us is to take our data sets, do our homoglyph detection and our maximum protection against phishing, and just pass that data set over to Safe Browsing or over to the DNS registrars and say: we have a flag going up about this domain, here's all the evidence, you make a decision about how to show the user the right UI to warn them off being phished. The other option is that we do that ourselves inside our own infrastructure, but there are some costs that come with doing that. I'll tell you a couple of them. One is, say we deploy HSTS and auto-renewal for people, and then their site gets onto a watch list because they host 10,000 software downloads and three of them turn out to be malware, so they get put on a Safe Browsing list. If we were to respond to that by revoking their cert, and they've deployed HSTS because we helped them, we cause an irreparable outage at their site. It's dangerous to do this kind of detection, with its false positive rates, inside a basic protocol layer that affects communication. So we
haven't made a choice about this yet, but these are the factors that we're weighing up. There are also people who just say, politically: I run that site with a thousand downloads and Google blocks me sometimes; I don't want to be denied a cert and the ability to communicate with people because I have this blacklisting problem. So I think there are arguments on both sides there.

I think you're also looking at identity validation versus domain validation. Domain validation already exists; you're looking more for identity validation.

Hi. Say your new SSL certificate authority is wildly successful and everyone on the internet is now using TLS and has your certificates, seeing as the primary goal of TLS is to prevent man-in-the-middle attacks. Say I go to your example site: right now my web browser is going to make a connection on port 80 first. If I'm a man in the middle, I'll just respond to that connection and bypass TLS completely. How are you going to address this?

So the answer to that one is HSTS, and HSTS does two things: it deals with that, and it deals with the fact that people don't know that they need to set the secure flag on their cookies. Huge numbers of websites set auth cookies, don't flag them as secure, and are totally cookie-jackable. So ultimately, for a site to be secure it needs to have HSTS set. We will try to help sites do that, but it's in the category of things that can cause a lot of breakage, so we need to have good tooling around turning it on for just a few minutes at first and then gradually increasing the TTL, and good tests for breakage, so that we can tell the admin which pages on their site are breaking because of it and let them roll it back. So the plan is definitely to go in there and actually fix this stuff for people, but it's going to take work to smooth out all of those rough edges.

But even if every site is sending an HSTS header, the man in the middle is going to intercept the connection before
the client receives that HSTS header. There are preload lists in the browsers, and we could auto-submit. I think they probably haven't engineered for the sheer number of domains that would wind up on the preload list if we robo-submitted everyone, so that's probably a bridge we'll have to cross with the browsers when we have enough deployment to cause a problem for them.

Do you think maybe a better solution would be to convince the browser makers to start with a port 443 request and try HTTPS by default?

It doesn't help, because the attacker can drop the packets and then wait for you to fall back and try HTTP. That's a good point, yeah, thanks. Thank you.

Will you eventually support wildcard certificates, or is it okay to just hit the API for each name?

We will not support them at first, and then we'll look at it; who knows what we'll do later. I can tell you why it's hard: the people who are really mad about identity and phishing. If you give people wildcard certificates, they can go and get paypal.theirdomain.com or whatever, and they can do that without limits. So until we get a good answer to the phishing debate, wildcards will be politically sensitive for that reason.

To follow up on that, is there any way at all that you could use this for .local domains?

.local domains in TLS don't entirely make sense. I think the thing one should aim for there is to try to actually get namespaces that are not colliding, or to make up new browser UI for .local; maybe there should be an explicit TOFU model for those local namespaces, but it needs to be engineered separately from public web naming. Alright, thanks. Thank you very much.
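The wildcard worry in that last exchange, that a single *.theirdomain.com cert silently covers a paypal label, follows from how wildcard matching works: the asterisk stands in for exactly one DNS label. A minimal sketch of that rule (simplified; real hostname validation, per RFC 6125, has more cases):

```python
# Sketch of the usual wildcard-matching rule for certificate names:
# "*" matches exactly one DNS label, so *.theirdomain.com covers
# paypal.theirdomain.com but not a.b.theirdomain.com.
def wildcard_matches(pattern, hostname):
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):           # "*" never spans multiple labels
        return False
    return all(pl == "*" or pl == hl for pl, hl in zip(p, h))

m1 = wildcard_matches("*.theirdomain.com", "paypal.theirdomain.com")
m2 = wildcard_matches("*.theirdomain.com", "a.b.theirdomain.com")
```

Because m1 holds, anyone issued a wildcard gets a browser-valid "paypal" subdomain for free, with no per-name check by the CA; that is exactly the phishing concern the speakers say makes wildcards politically sensitive.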