Hi, everyone, and thanks for sticking around for the last session of talks. Of course, you should also stick around for the closing ceremony afterwards. But I'm going to talk today about a project that we've been working on at EFF, in collaboration with Mozilla and a number of other teams, for the past couple of years. The aim is to encrypt the entire web, because that still hasn't happened.

This is actually a goal we've been working towards at EFF for a number of years. We began by just hassling big companies, like Google and Twitter and Facebook and Wikipedia, trying to get them to offer HTTPS at all, and then trying to get them to make it the default. And of course we have our HTTPS Everywhere browser extension that tries to drive greater adoption of HTTPS from the client side. But there's more we need to do.

If you browse around the web today, you'll find lots of pretty important websites that, alarmingly, are still HTTP. If you go and do a search on one of the biggest search engines, Microsoft's, it's unencrypted. If you go and read the news on our most important news sites, almost all of them expose the articles you are reading to anyone who is watching your network connection, and sometimes the topics you choose to read about are sensitive. If you go over to Amazon and start looking at books, Amazon is exposing what books you're looking at to anyone who watches your network connection. Even Google, which in many respects has been a leading actor in deploying HTTPS well, still has an enormous part of its services, its ad services, that is not fully encrypted.

This little example here is the Google Ads landing page. That sign-in button at the bottom is not real. It was injected as a demo by one of our team members, Yan Zhu. She was like, oh look, I can inject a fake sign-in button onto the Google Ads landing page for someone who's about to go and buy ads from Google. This sign-in button will take you to a phishing page. It'll steal your credentials. And then someone else can come along with your account and post malware through your advertising account, and it'll be shown to huge numbers of users. So this is really bad, and we need to fix these problems, the obstacles to HTTPS deployment.

The second problem is that even if you have decided you're definitely doing HTTPS, and you set out to try and do it, getting an actual certificate and getting it installed correctly is horrifically difficult work. You'll find yourself trying to follow crazy sets of instructions like this, which even if you're an experienced systems administrator will boggle your mind and make no sense. Anyone who ever tells you to go and read the OpenSSL man page has just sort of cast a hex upon you.

So we did some experiments with our own colleagues, who are fairly technically knowledgeable but hadn't set up HTTPS before, to see how long it would take them. There's a chance this video will play here. If it doesn't, I'll just tell you what it looked like in practice. This was recorded by some of my teammates, the people on our team working on this. And this is Yan going around our office and finding people and saying, hey, my colleague Parker, he's pretty savvy, he has a website, what happens if we ask him to make his website HTTPS? Oh, the audio is missing here. Let's try and get that. Since we don't have audio, I'm going to talk you through it.
He's going to the StartCom website. StartCom is a free certificate authority. You can see this information about things, but he gets very confused by all the buttons. Things that look like a way to get a free certificate turn out to not do anything. Oh, we have audio now. And so he's sitting there, trying to find instructions. Eventually what happened to Parker is he read three different pages of instructions and got to a page that said, sorry, we're down for maintenance today. And I'm actually going to skip the rest of the video so we have more time for Q&A.

You've probably tried to do this before. The end result amongst our team was that it takes about an hour. And if you spend an hour, maybe you get a certificate installed correctly at the end. Maybe you don't. It's about 50-50. All sorts of weird technical problems get in the way. The other person was having trouble getting email to go to their domain because they had Gandi hosting the email, and they were trying to make Gandi create the special magic webmaster@ email address, and it wasn't working, et cetera.

So even once you've got your certificate, getting these configurations correct is extremely difficult. There are these profound uncertainties about the best cryptography to use. As we've seen this wave of attacks, CRIME, BEAST, POODLE, being deployed against older versions of the SSL and TLS protocols, the best advice about what people should be doing keeps changing. Should they be using RC4? Should they be deprecating the old protocols entirely? Can they afford to? This stuff basically requires a PhD in computer science, or you need to be following along, reading the news and reading mailing lists every day, to understand the best settings for your TLS server.

We have these giant transitions that are happening. A year or two ago, all the certificates on the web used SHA-1. We now believe that SHA-1 is likely to be subject to imminent compromise, and so it has been sunset, and we want to get rid of it. You can imagine we could have this sort of SHA-256 redemption where we rescue the whole web from these insecure ciphers and insecure signatures. But how are we going to do this if we have to go and tell every website in the world to change out their certificates? It's an insane amount of work. And this goes for basically all of the attacks you keep reading about, maybe one a month for the last couple of years. Each one of them requires new expert advice about what to do next. Logjam is another really big example of this. This was probably the attack that the NSA was using to break a third of our messages on the web.

If you know what you're doing, you can go and use tools like SSL Labs to reconfigure your server until you get an A plus. Actually, let's see, the site you can see up there has a really good score. But even figuring out how to go from having an F, which is probably what you get by default, takes a lot of work; the big websites tend to get bad scores when they first deploy and then gradually have to improve. So you want to deploy something like this in your config, but you don't know about that until someone tells you. Just a little name drop for Bulletproof SSL; they do great work here.
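To make "something like this in your config" concrete, here is a minimal sketch, in Python purely for illustration and not taken from the project's code, of the kind of settings a hardened server is aiming for: modern protocol versions only, forward-secret AEAD ciphers, no RC4. The cipher string and file names are assumptions, not a definitive recommendation.

    import ssl

    # A minimal sketch of hardened server-side TLS settings (illustrative only).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2          # drop SSLv3/TLS 1.0/1.1 (POODLE et al.)
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")        # forward secrecy, AEAD only, no RC4
    ctx.load_cert_chain("fullchain.pem", "privkey.pem")   # hypothetical file names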
Another giant problem we have is mixed content blocking. This is a recent example from Lenovo.com. You'll see the website at the top is encrypted. It's HTTPS. Everything's working. But there's a little subtle visual indicator up here on the right-hand side that no one would notice, saying there's something wrong. There's a shield here. And it turns out, if you open the developer tools, that a whole lot of scripts and fonts and CSS in this page are being loaded over HTTP URLs. The browser doesn't fix those; it just blocks them. And so the user gets a working page, well, not really: it's a kind of semi-working page with bits of it broken, and it doesn't look right, and no one knows why.

We have all these big news websites where we're gradually trying to make progress. Oh, actually, so what's going on here? This is a little tip. If you run a website like The New York Times and you're trying to encrypt it, you can use HTTPS Everywhere's advice mode. This is a special little developer tool that will tell you rewrites you can make to your pages to fix the mixed content problem we saw here. The New York Times has the same problem as Lenovo, and you can use our tools to try and figure out a fix.

There's actually a really cool thing coming down the line which possibly nobody has heard of, only people who sit around on W3C mailing lists. There's going to be a new feature, already implemented in Chrome and about to land in Firefox, called Upgrade-Insecure-Requests. This is a huge improvement. It's a single header that a website like Lenovo or nytimes.com could set once, and instead of having the browser just block all the HTTP content, it forces the browser to try the HTTPS URL first. So you no longer need to change a million lines of code to find every HTTP URL; you can change it once in a header. So all of this stuff exists, but webmasters don't know to use it. There are great tools and we have no way of telling people about them.
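Here is a tiny sketch of what that looks like in practice: a toy Python server (not anything from the project; the example.com image is hypothetical) that sets the single Content-Security-Policy: upgrade-insecure-requests header so that, on an HTTPS page, the browser rewrites http:// subresources to https:// before fetching them instead of blocking them.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class UpgradeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # One header, set once for the whole site, instead of rewriting every URL in the markup.
            self.send_header("Content-Security-Policy", "upgrade-insecure-requests")
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            # On an HTTPS page, the browser fetches this image over https:// despite the http:// URL.
            self.wfile.write(b'<img src="http://example.com/logo.png">')

    HTTPServer(("", 8000), UpgradeHandler).serve_forever()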
Next problem. In fact, some of you may remember I've actually been to CCC Congress before to give talks about the problems that are caused by having too many certificate authorities. There are in fact thousands of these, probably controlled by a few hundred organizations, and they create massive structural insecurity in the web, because any one of those certificate authorities can sign a certificate for github.com or debian.org or whatever valuable domain you want, addons.mozilla.org, and then it only takes a compromise at one organization like DigiNotar and suddenly the whole web is broken. This is a map from that data set we got with the SSL Observatory, where we did the first scan of the whole port 443 internet. We found those hundreds of CAs, and this is not just a theoretical problem. We had DigiNotar, and we also had CNNIC issuing a subordinate CA to someone who then turned around and used it to attack Google. These attacks keep happening.

So that's the problem we set out to solve, and now I'm going to talk about what we're doing to solve it. There are kind of two parts to the solution. The first is an actual security solution: how do we fix these vulnerabilities in the encrypted web? The second part is how we actually make the results usable, particularly for web developers, the people who actually build the ten or a hundred million websites that make up the web. So our solution to the problem of there being too many certificate authorities is to make another one. N was not secure, but N plus one will be great. And I jest, but actually I don't jest. I think we can actually make a big difference by building one more of these things and doing it right in a bunch of ways that it hasn't been done right before.

So if we're going to build a giant certificate authority and issue certificates to every website that needs one, where the developers are struggling with the StartCom website trying to get this set up today, we have this fundamental hard question to answer: how do you decide, for 100,000 domains at once or a million domains at once, whether to issue each of them a certificate or not? Which ones are the real owners of those sites and which ones are the attackers? And so in our protocol (we have a little bit of acoustic feedback suddenly) the answer is basically like a scene from Monty Python. Someone comes to us and asks for a certificate and we say, bring us a shrubbery. And then maybe they come back with a shrubbery, and then we'll ask them, okay, that one's nice, but bring us another shrubbery. And eventually maybe we'll be satisfied. The dialogue that's happening here is in this new protocol that's under specification at the IETF, called ACME, and the shrubberies map to ACME challenges: the specific kinds of requests that the server, the certificate authority, makes to your client, which is actually probably a web server, saying prove that you are really example.com.

Fundamentally there's a hard aspect to this, which is that we're trying to create a cryptographic system but we're beginning with no crypto. On day one, we have no idea what key to use to authenticate the other side. The traditional answer for existing CAs is this thing called domain validation, where you just fling an email to an account like admin or webmaster at that domain name, or maybe in some cases you make a request to a specific URL over HTTP, totally insecure, and check to see if there's a nonce there. So we are gonna do a variant of the same kind of domain validation that already exists for CAs.

The types we're going to support initially at launch: we're gonna do this one called DVSNI. DVSNI is designed to prove that you have control of the web server, the Apache or Nginx or lighttpd or IIS, whatever that process is; that you are able to create arbitrary virtual hosts that don't exist by default, synthetic ones with fake certs. If you can make those on demand, that's the shrubbery you're bringing us, and we'll believe that you control that web server. We'll also do a Simple HTTP challenge, which is I think a little bit less secure but possibly easier for some people to deploy, particularly behind proxies and CDN layers; you can put up that nonce over HTTP. And then, proposed for later, we're thinking about doing DNS-based validation, which is very useful for large infrastructure deployments. And then there's a variant of DVSNI we're talking about where you are asked to prove control of maybe a hundred or a thousand names at once and we actually only check five or six of them, randomly chosen. So if you have a machine with a thousand virtual hosts, rather than making a thousand TCP connections to verify every single one, we do this statistical thing where we get you to thoroughly prove, beyond all reasonable probability, that you can make anything you want there.
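As a rough illustration of the Simple HTTP flavour of shrubbery (the path and response format here are simplified sketches, not the exact ACME draft wire format): the CA hands your client a random token, and the client proves control of the site by publishing a response derived from it at a well-known URL that the CA then fetches.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical token handed to the client by the CA during the ACME dialogue.
    TOKEN = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"
    RESPONSE = TOKEN + ".account-key-thumbprint"   # simplified key authorisation

    class ChallengeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The CA fetches http://example.com/.well-known/acme-challenge/<token>
            if self.path == "/.well-known/acme-challenge/" + TOKEN:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(RESPONSE.encode())
            else:
                self.send_error(404)

    HTTPServer(("", 80), ChallengeHandler).serve_forever()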
But fundamentally, all of these types of domain validation are terrifying. We're basically flinging some packets down the dark hallway of the internet, and we can't see what the hell is happening there. And then some message comes back saying, yes, I control these domain names. But all sorts of terrible things could have happened to that traffic in that dark hallway. And that means in practice anyone who can pop a router, anyone who can pop a DNS server, can get a certificate via these domain validation processes. That's not really very reassuring when you're gonna try to build this infrastructure for the whole internet. Sometimes routers get hacked. Sometimes DNS servers get hacked. We don't wanna amplify those attacks.

So we can do slightly better by not going down one dark hallway but by going down several. If we have our domain validation systems operating in multiple data centers, maybe in one place we do DNS of the usual type where we walk down from the root, and in some other place we hide behind Google DNS, and we check that the answers match and the domain validation works both ways. There's still a chance we're gonna get eaten by a monster in the dark hallway, but only if the monster is really very big and in all the hallways at once, or if it can operate basically near the destination, the victim website, and compromise a router right next to it.

So actually we can still do better than this. What we're going to do is make sure that pure domain validation happens only once for a given domain name. What do I mean by this? Well, we can use this data set that I've already told you about, which we get from port-scanning the whole internet, or from HTTPS Everywhere clients using the decentralized SSL Observatory, or from Certificate Transparency. We have these giant databases of basically all the certificates in existence, at least the public ones, from multiple sources, centralized and decentralized. And we use this to demand a different kind of shrubbery, a different kind of challenge, if we see in our database that there already exists a certificate for your domain name. We go, addons.mozilla.org, there's already a valid certificate for addons.mozilla.org or *.mozilla.org, so we're not gonna just give you a certificate for this name based on domain validation. We'll say, please prove that you possess the private key, and we have a challenge protocol where you can use that private key to do a decryption or a signature, and it comes back with proof that you actually have the key. And then we've protected ourselves.

Now, if you're paying close attention, you'll realize that what we've protected ourselves against is essentially misissuing for some bank in New Zealand that we've never heard of, or some corporate webmail system that we have never heard of but which is very important. Obviously in the case of addons.mozilla.org we'd notice, we have a blacklist, but we can't have a blacklist of every valuable domain on the internet.

You'll notice there's a problem here, which is what happens if you lost all your keys: you had a certificate, your server crashed, you lost the keys. (You want me to use this one because the acoustics are better? Okay.) Then you can wind up stuck, because we see a certificate for you, but you don't have the key anymore, and so you don't have a way of passing this challenge. The answer in that case is that you will need to go to one of the existing certificate authorities and pay them money for a certificate in order to prove it to us. And we think that's a way of not getting rid of those businesses, because they're actually doing something valuable in terms of manual inspection. There's an important role for them to play in these kinds of cases. Okay.
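A toy illustration of that proof-of-possession idea (not the actual ACME message format): the CA sends a nonce, the site signs it with the private key matching the certificate already on file, and the CA verifies the signature against that certificate's public key. This sketch assumes the pyca/cryptography library, an RSA key, and hypothetical file names.

    import os
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # CA side: pick a fresh nonce and load the certificate we already have on record.
    nonce = os.urandom(32)
    known_cert = x509.load_pem_x509_certificate(open("known-cert.pem", "rb").read())

    # Site side: sign the nonce with the long-lived private key behind that certificate.
    key = serialization.load_pem_private_key(open("privkey.pem", "rb").read(), password=None)
    signature = key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

    # CA side: verify against the public key in the certificate on file.
    known_cert.public_key().verify(signature, nonce, padding.PKCS1v15(), hashes.SHA256())
    print("proof of possession OK")   # verify() raises InvalidSignature otherwise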
Wow, you guys can hear me much better, I guess. I was wondering why everyone was looking semi-confused at one point. All right. So we have the existing certificate authorities as a fallback mechanism for when the super secure thing we're trying to do automatically doesn't work. And as you've noticed, if you're paying attention, what we've actually managed to design here is a CA system that does TOFU. TOFU, of course, is trust on first use. It's the method you're all familiar with if you use SSH, where the first time you connect to a server the key is believed, and the next time, if it's changed, there's a warning. It wouldn't be practical to do TOFU directly for HTTPS, if you think about what would happen when the cert changes; everyone would get very confused. But we've effectively managed to get the same result, the same type of security, via a CA here.

So that's the big answer to the question of how we hope to even do this: build a big robotic CA, issue to everyone, do it securely. But as I was mentioning before, there are all these problems of, okay, I've got a certificate now, but I'm not gonna configure my server correctly, I'm gonna get beaten by Logjam or POODLE or whatever it is. So the aim here is to provide a client, or agent software, that you run on your server and that gets all of these things right automatically for you. If you want it; obviously you don't have to use it. But for people who are just random web developers, this will be a much better experience than being given a raw SSL configuration file.

You can think of it like this: the problem right now is that every web developer in the world, and there are millions of them, needs to know all this stuff about TLS. Here instead we can imagine centralizing this a little bit and having a team of people who can all work together, basically the people in this room can come and work in our GitHub repo, and figure out what the best configuration options are, both for a completely secure server and for one that's maximally secure subject to the constraint of backwards compatibility with old clients like Android 2 or Windows XP. And those are the two options you offer the web developer: maximum security, maximum compatibility, and that's all.

So the default plan for the client is that it goes in, it tweaks your Apache or Nginx server to pass those challenges, makes the shrubberies automatically, and installs the resulting certificate in whatever server you have. By the way, Apache and Nginx are the ones we'll support at first, but we actually have a plugin API here, so you can make a plugin for any other type of server, whether it's Dovecot, Exim, Postfix, Qmail, your XMPP server, anything that needs a TLS certificate. You can just go and write against our Python API and we'll drop the cert in for you. You can tweak the options to get good results, both security- and compatibility-wise.
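To give a feel for what such a plugin might look like, here is a hypothetical sketch; the class and method names are illustrative inventions, not the client's actual Python API.

    from abc import ABC, abstractmethod

    class InstallerPlugin(ABC):
        """Hypothetical shape of an installer plugin: anything that can take a
        freshly issued certificate and wire it into some TLS-speaking daemon."""

        @abstractmethod
        def get_all_names(self):
            """Return the domain names this server answers for."""

        @abstractmethod
        def deploy_cert(self, domain, cert_path, key_path, chain_path):
            """Point the daemon's configuration at the new certificate and key."""

        @abstractmethod
        def restart(self):
            """Reload the daemon so the new certificate takes effect."""

    class PostfixInstaller(InstallerPlugin):
        def get_all_names(self):
            return ["mail.example.com"]          # hypothetical hostname

        def deploy_cert(self, domain, cert_path, key_path, chain_path):
            print(f"would set smtpd_tls_cert_file = {cert_path} in main.cf")

        def restart(self):
            print("would run: systemctl reload postfix")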
And this is the other super important thing: of course, if you've ever deployed TLS, you know you spend that horrible hour getting a certificate, and then a year later your site goes offline because you forgot to renew it, and the renewal emails your CA was sending you were going into your spam folder along with all the other junk they send you. So here, by default, we can just install a cron job for the user that, a month or two before the certificate expires, says, oh, you have a certificate that's about to expire, I'm gonna go and get you a new one, does that automatically in the background, and then only emails you if there's actually a problem with that process.

And in some cases, we should be able to automate the response to security incidents as well. If you had Heartbleed on your server, we may be able to do something where your cron job polls the server and the server just says, ah, there's a problem, please re-key, and that just happens automatically. You don't need to be answering a pager alert in the middle of the night. If you're configured to do it, you can just re-key in case of security incidents, and if we're compromised in various ways, we can also recover really fast.

So what kinds of automation are we really talking about when we're talking about tweaking your TLS configuration? Because I know everyone in this room is probably a little bit nervous about the thought of having some code base just going in and editing it. The answer is that there are some pieces we're gonna do for everyone, really, unless you ask us not to, because they're basically free wins: getting the ciphers right, taking one of those bulletproof cipher lists and copying it in; OCSP stapling, so that if your cert is revoked, people know quickly; and the Upgrade-Insecure-Requests header I mentioned before, which fixes mixed content problems. These are all easy to fix.

There are some harder ones that you really wanna do, but which require some hand-holding for the sysadmin. Redirecting HTTP to HTTPS for an arbitrary site is actually dangerous because of that mixed content problem; you have a fair chance of getting a page that has some third-party scripts that just can't be upgraded, and so in those cases you need to check that you're not breaking things as you deploy the feature. Automatic renewal and re-keying is a little bit tricky to get right; we think we've actually done that one okay. The hardest things are rewriting everything, totally fixing your HTTPS deployment, or turning on features like HSTS, which if you get them wrong can cause outages. And then the absolute hardest stuff: HPKP, so pinning, totally protecting you against all the other CAs. That one's kind of black magic. Some people will really wanna do it for a very secure website, but it also means that you need to have an operational plan and know who you're gonna switch to. If you don't like working with us anymore, you need to plan in advance which CAs you're gonna use in the future. And so that's a dangerous thing to deploy automatically.

The other thing we may be able to do in the future, with a lot of work, is automatically use the Content Security Policy header to detect those mixed content cases, get browsers to send reports about them, collect them in a standardized way, and then go in via a proxy or something and fix the mixed content. Oh, jQuery was at the wrong URL, but there's a different URL on a different domain name that has a secure copy of jQuery; let's use that instead. That kind of transformation to a site is, in theory, automatable, but we won't be able to do it by default.
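A minimal sketch of the kind of check that renewal cron job performs (the example.com host and the 30-day window here are assumptions, and the real client works from its own records rather than a live connection; this just shows the idea):

    #!/usr/bin/env python3
    # Toy cron-style expiry check: exit non-zero (i.e. trigger renewal or an email)
    # when the certificate a host is serving is within 30 days of expiring.
    import socket
    import ssl
    import sys
    import time

    HOST = "example.com"          # hypothetical site to check
    RENEW_WINDOW_DAYS = 30

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    days_left = int((expires - time.time()) // 86400)

    if days_left < RENEW_WINDOW_DAYS:
        print(f"{HOST}: certificate expires in {days_left} days, time to renew", file=sys.stderr)
        sys.exit(1)
    print(f"{HOST}: certificate is fine for another {days_left} days")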
But fundamentally, I'm describing building a terrifying thing. We're building a giant robot. Any CA is terrifying, but we're building a giant robot CA that's supposed to deploy authentication results for the entire internet. And if this thing goes wrong, it'll crash into the stuff around it and cause massive problems. So, given that we're fallible humans and we use fallible computers, how do we try to protect ourselves against ourselves, and you guys against us?

Part of the answer is always to defend in depth, to have multiple layers, so if one system gets compromised, you don't get compromised everywhere all at once. But just as fundamentally, we have to plan to be able to detect a compromise event straight away and not have it totally break the system we built. So the main protection we have against CAs, which in this case now includes us, is to publish full logs of all of the interactions with our servers. Whenever there's an event, the server says bring me a shrubbery, and then someone comes back and a shrubbery is presented, we just publish that stuff so that we can audit it, but you guys can audit it too, and tell us if we fucked up. Sorry, I shouldn't swear, but the reality is that if you're running a system like this and it goes wrong, you can cause serious problems for other people.

In addition to that, we also publish a cryptographically verifiable log of what we actually signed. So logs of all the conversations where we decide whether to issue, and logs of all the certificates, each one with an incrementing serial number and a signature over it, so it's an append-only data structure. And then we'll also publish that to the Certificate Transparency logs run by Google and other organizations. And if you want really strong protection against the hundreds or thousands of CAs that are out there, you can do HPKP; we'll support it in the client, probably in a super advanced mode, --black-magic, and you can turn that on.

And then the last kind of protection is to have a really fast rollover mechanism if we have problems. Today, widely used CAs are essentially too big to fail. If one of the CAs that signs hundreds of thousands of domains had a private key compromise event, there is nothing you could do short of starting to ship 100,000 whitelisted certs in browsers; there's no other way you could recover from that compromise event. So the plan we have for these kinds of situations, whether it's a problem on your end, like your web server, or a problem on our CA end, is to have a really rapid automatic re-keying or re-certification protocol. Within 24 hours, we could switch off a compromised CA and switch on a new one, and all the sites that were dependent on it could just be polling, watching for the event where they need to upgrade, and we can roll over automatically.
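The signed, append-only issuance log mentioned above is what makes that kind of auditing and fast detection workable. As a toy illustration of the append-only idea (hash-chained entries; this is not the real Boulder or Certificate Transparency structure, just the concept):

    import hashlib
    import json

    class IssuanceLog:
        """Toy append-only log: each entry commits to the previous head hash,
        so altering or removing an old entry breaks every later hash."""

        def __init__(self):
            self.entries = []
            self.head = "0" * 64                      # hash representing the empty log

        def append(self, serial, domains, cert_fingerprint):
            entry = {
                "serial": serial,                     # incrementing serial number
                "domains": domains,
                "cert": cert_fingerprint,
                "prev": self.head,                    # chain to the previous entry
            }
            self.head = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self.entries.append(entry)
            return self.head                          # publish this; auditors can recompute it

    log = IssuanceLog()
    log.append(1, ["example.com"], "ab:cd:ef")        # hypothetical fingerprints
    log.append(2, ["example.net"], "12:34:56")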
So that's the plan. This project, as I mentioned before, began as a merger of a project at EFF and the University of Michigan with a similar project that had existed independently at Mozilla. It's now housed in a new nonprofit called the Internet Security Research Group, or ISRG. It's being sponsored, in addition to the original founders, by Cisco, Akamai, IdenTrust and Automattic, and there should be more sponsors in the works soon. If you work for a company and you'd like to help us build this thing and scale it faster, please come and sponsor us. The pieces are roughly broken down like this: the actual CA operations are being done by ISRG and Mozilla staff; the server-side code base, the CA that decides whether to issue or not, is an EFF and Mozilla collaboration. That Python client I mentioned, the one that does the fancy configuration, is EFF and the University of Michigan, and everyone's been chipping in on the policy, legal and bureaucracy stuff you need to deal with to become a CA, which is actually incredibly complex and labor-intensive.

The current schedule is to have the first certificate on the 7th of September, or that week at least. Those certificates will become valid in web browsers via a cross-signature from IdenTrust in mid-October. And then everyone in this audience, everyone on the internet, will be able to use it from the week of November 16th. In the meantime, if you'd like to help: we're dealing with the audits and the bureaucratic requirements for getting CA validity, and that's largely our job, but we're also coding on GitHub, building out both clients and servers, and so there are lots of places where, if you're interested in hacking on this stuff, you can come and work with us. There are three main repos: there's a specification repo, which is pretty much frozen at this point and is there for documentation; there's the Python client, which I'll show you in a second; and there's the server, the CA, which is written in Go and called Boulder. And if you come and work with us, you can help us build a fully encrypted web.

Now, because I'm a little bit crazy, I'm gonna try and actually give you a live demo of this thing running. You should never do live demos at hacker conferences, it's a disaster, but I'm gonna try it anyway. What I have here is a test website, just a little toy website. It's running on an EC2 instance, it's got the default Apache configuration, and it only does HTTP. I've just checked out the repo from GitHub and run the setup commands, and I'm gonna try and run it. And of course it's warning me, because this is not a real certificate yet and this is pre-release code, so we agree to that. It asks me, hey, I see these three domain names on your system, are they the ones you want? And the answer is yes. And so you can see it's creating shrubberies, bringing them to the server, and the server is testing them. It's gonna take a while to do it from multiple places. It's deploying it, and it's done. Now, I didn't ask for a redirect, so the default is still HTTP, but if I put in an HTTPS URL, we should get a not-yet-valid certificate, but it is a certificate for this website. So we can go and click through. There you go.

So I think with this code we get the hour that it took at the beginning down to about 20 seconds. Right now you have to git clone and run a couple of virtualenv commands, but we should be able to get to: apt-get install letsencrypt, run letsencrypt, and you're done. So we're very excited about this, and we're also looking forward to helping fix all the crazy problems with TLS that have been plaguing us to date. And it's time for questions.

Thanks, Peter. So we'll go left and right with the questions. We'll start on this side.

Hi, what is the possibility of having wildcard or multi-domain certificates?

So this one is a multi-domain certificate. Up to probably hundreds of domain names at once will be fine with a certificate with multiple subject alternative names. Wildcards will not be available at launch. If at some point in the future that becomes a practical thing to do, we may do it, but at the moment we want to get the first case right.

Hi, thanks for your talk and for your initiative. I wonder, do you plan to do the same thing for email certificates maybe?
So we will definitely do this for domain name certificates for many parts of the email infrastructure. If you run a POP client, sorry, a POP server, or an IMAP server or an SMTP server, we will give you a certificate and we'll give you an API to deploy it. If you're talking about S/MIME certificates for email addresses, we don't currently plan to do that. We wanna solve problem A first, and then we'll look at future problems, and that could conceivably be one of them. Thanks.

Hello. Right now there are many companies making money from issuing certificates, and now you're destroying their business. Have any of those tried to take revenge, like pulling your certificates out of browsers or smartphones that are sold currently?

For one thing, we aren't trying to put the other certificate authorities out of business. We just think it's inappropriate to be charging for the basic kind of HTTPS that the whole web needs to exist. When it was first created, HTTPS was considered this weird special thing that only banking websites or online credit card processors would have, and maybe it was appropriate to do expensive manual verification in those days. But at this point we know the whole web needs to be HTTPS by default, so that case just honestly needs to be free. We will probably generate a lot of business for the other CAs. As I pointed out, there are cases where for security reasons we can't auto-issue, and in those cases the manual, more expensive type of verification actually makes sense. And there are of course other businesses around things like extended validation certificates, where you get the green glowing address bar that's actually supposed to be associated with a real-world organization, and we currently have no plans to do anything like that.

Hi, thank you for your talk. I have two questions. First of all, will commercial companies be able to use your certificates? Yes. And will you provide more than one certificate for one domain? Yes. So you can do public key pinning rollover with us? Yes, absolutely. Thank you.

So what is the process for adding an additional domain name?

Yeah, so in this case where I demonstrated it, we had three example domain names, and the certificate I got should be a certificate for all three of them. So if we go and look at this certificate, we go to details. I should have checked this before doing it live on stage, but I think if we look at the subject alternative names field here, down here. I actually can't see the value because my font is too big. This is not working, I'm sorry. Oh, you can just barely see it, but you have the three names in there. So when you renew, there's actually an interesting concept of, okay, you're renewing a certificate and you had 10 or 20 domain names in it beforehand, and now one of them is no longer controlled by you, the DNS registration has expired. Or you deployed a new domain name in the middle of the validity period. We're gonna have three-month validity for these certificates by default, so in the middle of the three months you deployed a new domain name and you wanted a certificate for that. And so we have this internal concept inside our client of a thing called a lineage, which is a succession of certificates that are related to each other. The common case is that you should have a lineage of certificates where you basically have all your names in one certificate, and then as a new name is added, you make a new certificate in that lineage.
If you lose a name at renewal time, you'll have a new certificate that doesn't have that name. You can also use the client in a different mode where it makes one certificate per domain, and then each of those is a separate lineage, and you might deploy them with SNI. And in the end, we wanna support people doing both at once. If you want totally optimized performance, you wanna use the small SNI certificate for a client that knows how to understand it, but you want the very big certificate with every domain name for the older clients like Android 2 and Windows XP, which don't know how to speak SNI. And if you don't know what SNI is, you can just ask me that question and I'll tell you all about it.

Why is it taking you so long? Is it the bureaucratic process or is it the software development? What's the...

I think the big complications have been, one, the operational infrastructure for doing things super securely, where your keys are in HSMs and every single step that you take with those HSMs is meticulously documented, so you have a complete auditable record of exactly where every system was at every moment in time, the exact state it was in, and all the security measures you took. That takes a while to get right. And then there's the bureaucracy layer that goes over the top of it, which is a lot of "document all the things you did in case A, and then document the fact that you've got the documentation". If you go and look at the WebTrust SSL rules and the Baseline Requirements, it's basically just a lot of human hours to go through all of those steps. Most of them begin as sensible security requirements, but when you then try to formalize all the things you would do to build a secure system, you end up with a lot of time spent on paperwork. Thanks.

You said that you can verify a domain by an existing key. Will you support free SSL CAs like StartSSL or CAcert?

We'll support StartSSL but not CAcert. We also won't block you if you have a CAcert certificate: because it's not in the standard trust roots, we won't say, oh, you can't have a Let's Encrypt cert because you have a CAcert cert. So you won't be asked for that proof-of-possession challenge. And our sense is, I mean, I don't know, I'd be curious: if the CAcert community really, really wanted us to do the opposite, and start issuing those challenges and demanding proof from CAcert certs, we could consider it, but I suspect that's not likely to happen.

Apparently your certificates are signed by an existing CA. Yep. And how much influence do they have on this project, and how much influence will they have on the certificates that you issue?

So, they do not have influence. Basically, we passed our own WebTrust audit, and as a result of that we are our own CA, and we announced this project basically at the moment when we were sure that we would have our own CA that wasn't controlled essentially by another CA. It's much easier to start a CA if you're willing to live by another CA's existing certification practice statements, because then you don't need to have a separate WebTrust audit, but we didn't want to do that; we wanted it to be an independent operation.

Any more questions? Come on down. Where can people get in touch with you, or how can people get in touch with you?

All right. You can get in touch with us. I'm pde@eff.org. You can find the project on GitHub. I had those URLs up before; I'll put them up again. And thank you everyone for coming and staying around to the end of this magnificent Camp.

Dr. Peter Eckersley.