I'm Josh Aas. I'm excited to talk to you about Let's Encrypt today. I'm one of the co-founders and the current head of Let's Encrypt. I spent a long time at Mozilla before this, but one of my first real jobs was working on the Linux kernel at SGI. So let's get started. I think a lot of people know the answer to this question, but it's so central to what we're talking about that I'm just going to say it one more time: what is HTTPS? It's just HTTP over a TLS connection, so that it's wrapped in some nice encryption. So why is HTTPS important for everyone? Because we think it is important for everyone, not just if you have a credit card or some other particular piece of data. It's important for everyone because users are not in control of their data anymore. The web has become very complicated. When you visit a web page, there's a huge amount of data in cookies and headers and the page itself and connections to other third parties, and it's impossible to keep track of all that no matter how technically talented you are. This complexity on the web comes with a lot of advantages. It lets the web do a lot more for us. But this is one of the prices we pay: we lose some control of data. And one of the best ways we can deal with that is to assume that everything is sensitive and that everything needs to be protected. It's a fool's errand to try to figure out which exact pieces of information need to be protected and protect those in particular. So we need to protect everything. On the developer side, developers expect integrity. Unencrypted traffic can be modified: you can inject things like ads or scripts, various malicious things. You can change content for mobile devices, which can be well intended, for example resizing images for a mobile device. That might make sense in certain contexts, but a lot of times it ends up showing a website that's not what developers intended to be shown.
So how many of you believe that your cell phone provider actively mines unencrypted data on your connection? Yeah, so most people. How about your home ISP? How about your government? Yeah? I'm guessing that this isn't something you would opt into if it was a choice. Luckily there are ways to deal with that, but right now you're sending a lot of data in the clear. So we use Firefox telemetry to track secure page loads. We use that because the browser has a really great vantage point on the internet to tell what people are looking at, and the Firefox data is made public in an anonymized form. As of August, so this month, 45.5% of Firefox page loads are HTTPS, which is a lot of data. But there's also a lot of data that's not HTTPS, and we need to get this number to 100%. There is way too much sensitive, personal, and business information flowing over the web in a totally unencrypted way. So why? We have all this great technology. We know how encryption works pretty well. We have a bunch of good authentication mechanisms. So why is the web not secure, when we know why it needs to be secure? It really comes down to the fact that being secure is just too difficult. The system that we set up is too hard to use. So we're going to talk about making that easier today. When you set up a secure connection, there are really two parts to it. There's encryption, scrambling the bits on the wire, and then there's making sure that you're having an encrypted communication with the person or entity that you think you are. On the web, these are tied together. You can't have one without the other, really. There are some proposals for having opportunistic encryption without authentication, but they haven't gone very far. It's not a very widely deployed thing, and we're not really sure how far that will go. But even if it went somewhere, you really want both of these things in order to be secure. The encryption part is relatively easy. It's a software stack.
A lot of people use OpenSSL, and some people use NSS. It comes on most operating systems by default and just needs to be configured. Most web servers tie into it directly and take care of things for you. Your biggest challenge is really just protecting your private key. The authentication part, on the other hand, is a bit of a nightmare, and has been for a while. If you want to authenticate, the way this works on the web is that you need to get a certificate from a certificate authority. And it's complicated, even for really smart people like my friend Cullen here at Cisco. This is what the process looks like, for the most part. You've got to figure out that you need a cert, which really isn't obvious to a lot of people when they start. You need to figure out where to get a certificate from, and trying to make that decision can be really difficult. You need to figure out what kind of cert you need, and of course the CAs have come up with a million different marketing buzzwords for the different types of certificates, the super secure, and security plus, and blah, blah, blah. Good luck figuring that out. You've got to figure out how to request a certificate. A lot of certificate authorities will ask you for a CSR, and it is totally crazy that they would do that. A CSR is an arcane format that takes a long time to understand and get right. That sort of stuff just doesn't need to be happening. Then you go through a painful manual verification process. Often you have to set up a new email address, maybe fax some stuff. It's not very much fun. And you've got to pay, which might not be so bad if it's just you sitting in your home office taking your credit card out of your back pocket. But in a corporate environment, you've got to go find who's got the corporate credit card, get approval, explain the expenses, and all this stuff.
And that's going to really discourage you from setting up HTTPS on every single site that you set up, internally or otherwise. You've got to figure out how to install your cert, and of course that's server-specific, so you've got to have particular knowledge about a server and how this works. And you've got to remember to renew it on time. I'm sure everyone's run into a site with an expired cert just because people forgot. So this really came to my attention in the summer of 2012 when I was doing some work on HTTP/2, which I'm just going to call H2. One of my big interests was in this debate about whether TLS should be required for H2. And my feeling is pretty strongly that yes, it should, because we know all the reasons why protecting data is important and how encryption can help us, and it is craziness to even consider making a new protocol in 2016 without really basic security guarantees baked in. It's irresponsible. On the other hand, there was one really good argument against requiring TLS for H2, and that's that getting certs is too hard, and it usually costs money. So on the money end of things, if we required TLS, we'd essentially be making H2 pay to play. That's the world we lived in in 2012: anything that requires TLS is pretty much going to cost you money, or at least going to cost most people money. And the technical difficulties of dealing with certificates are going to cost the people who want to bother with it too much. That was a pretty valid argument against requiring TLS in H2, and it made me pretty sad that there was this roadblock. And I felt like if I was going to continue to insist in the IETF that H2 needs to require TLS, I had to deal with this. I had to take some responsibility for it. So my friend and co-worker Eric and I started spending a lot of time thinking about this problem.
And we thought about every possible thing we could do that would resolve the issue and do it relatively quickly, so we didn't need to wait like 10 or 20 years for some new standard to trickle out into deployment on the internet. And the solution we came to was not really what we wanted to do in the beginning. We were looking for something a little easier. We thought about whether we could come up with some new standards or write some new software and put that out there, but we didn't think that was going to work. You can't just throw another standard or another piece of software out there and expect the whole web to switch because you did that. It just wasn't going to be enough. And we came to this somewhat disappointing conclusion at the time that if we wanted to do this, we had to set up a whole new CA, which is a lot of work. And it turned out to be even more work than I thought it was at the time, when I already thought it was scary. But it was really the only way we were going to do it. Without controlling a CA, we could write software, but we couldn't necessarily convince the other CAs to deploy it and continue to deploy it in the way we wanted and on the time scale we wanted. We had very little confidence that we were going to get existing CAs to do what we wanted them to do. So we accepted this conclusion when we realized there was no other option, and set out to build a CA. So these are the four cornerstones of the new CA that we wanted to build. We wanted it to be automated. Automation has a couple of important properties. One, it's the only way you're going to be web scale. It's the only way you're going to issue certificates to a huge percentage of the web. If you've got to have manual intervention for issuance, it's not going to work. Automation is also really good for ease of use.
When systems are automated, which computers are very good at, and there's nothing particularly special about certs here, people don't have to remember how to do things. They don't have to remember when to do things. It just happens. Free is another important cornerstone. It's important because not everybody has money. But even if certs cost a penny, you've still got to have a billing interaction for that, and billing interactions introduce a lot of complexity. They might also limit where we can issue certificates in the world if we've got to exchange money. So free is important not just for the monetary aspect, but for the ease of use aspect. There's no need to go to the accounting department at your company and get credit card approval to get a cert. The third one is transparency and openness. CAs are asking the public to trust them, so they need to give you a reason to trust them. Saying "we built a CA, just trust us," while not giving you very much information about how they operate, what certs they issue, or anything like that, is a pretty unreasonable request or demand of the public. You are letting CAs decide issues of identity on the web, and they should be transparent. You should have the ability to look at what they're doing and decide whether they're doing a good job. The final one is being global. The web is global, so we need a CA that serves everyone in every single country. We took a lot of care when we set up Let's Encrypt to make sure that we could do that. For one thing, not having a billing interaction removed a lot of obstacles around the exchange of money. Nothing about our API is specific to a country. We did do some work, since we are a US-based organization, to deal with US legal compliance, and there are a few organizations out there in the world that we're not allowed to issue to. But it's a very small number. So we issue globally, and we issue to everyone.
And there were a lot of places in the world where you couldn't get a cert a year ago before we started, even if you wanted one, even if you had money to pay. That's not the case anymore. So the first thing we started doing when we decided to set up a CA was building a foundation. I mean this metaphorically, in addition to a foundation as an entity. It took us a couple of years to find the time and the people to get this set up. We needed some initial sponsors, and we also needed a plan for getting trusted. Sponsors are important because if you're going to set up a CA, you need money. And we found some great initial backers in Akamai, Mozilla, Cisco, and the EFF. I want to give them some particular credit, because this isn't just about supporting us. It's about having the vision to understand why this is important and that this plan had a real chance of success. And they supported us before we were a CA, when we were just a pitch. So we're really grateful for those initial sponsors. And we found a great CA partner. If you want to be trusted as a CA on the web, and you want to do that yourself, you've got to generate some private keys, some root keys, and you've got to get the public counterpart accepted into all the browsers and root programs in the world. And once you're done with that, you've got to wait for them to propagate around the world, for devices to update, and things like that. We calculated that it would take us between five and ten years to do that. But we wanted to issue certificates immediately. We didn't want to wait five to ten years for our own keys to propagate. So we found a CA that would cross-sign for us, or give us a sub-CA, depending on how you look at it, and that would allow us to issue certificates that are publicly trusted immediately. We found a great partner in IdenTrust. So ISRG was founded, the Internet Security Research Group. That's the organization behind Let's Encrypt. It's a California nonprofit founded in May 2013.
We got 501(c)(3) status about a year later. This organization's mission is to reduce financial, technological, and educational barriers to secure communication over the internet. Our main project towards this end right now is Let's Encrypt, but in the future there may be other things that help us promote this mission. A little while after we set up ISRG, when we were starting to really ramp up and start hiring and things like that, we found that the challenges of setting up an organization that runs really well were getting to be quite a burden. Someone suggested that we look at the Linux Foundation as an organizational home. So we started talking to them, and it turned out to be a great fit. In April of 2015 we became a collaborative project. This collaboration with the Linux Foundation has really allowed us to focus on setting up a CA and doing what we really set out to do, while having a well-running organization behind that. There's a lot of work involved in building a CA. It's a pretty complicated compliance environment, so there's a lot of policy to write. We decided to write our own software from scratch so that it could be open source and work with our fully automated system. We had to order and install hardware, configure the environment, and hire a team. Security is a really big priority for us, and we can't skimp on that, so a lot of work goes into being as secure as possible from the beginning. We had to go through some audits, and we needed some more sponsors. So 2015 was a really busy year for us doing all of these things, but somehow we got most of it done pretty quickly. I can see some of our employees in the audience remembering back to that time of late nights. So let's talk about how Let's Encrypt actually works. At the center of everything, there's a protocol called ACME. It's a bit like DHCP for certificates: an automated way to obtain the certificates you need to have an online presence.
And you can also manage those certificates once you've got them. I'll talk a little bit more about that later. All that's safe to say for now is that it's a protocol that we brought to the IETF and are getting standardized there, and that we hope other CAs start using. We put a lot of work into this, have had a lot of people audit it, and put a lot of work into making the crypto solid, and we think it's a pretty safe, publicly documented protocol. So it's being standardized in the IETF, and we'd like to see some adoption of that. We've already seen some: there are a few other CAs out there that have already either committed to using ACME or are exploring the possibility. So we've got this protocol, and we've got some software called Boulder that implements the server side of the protocol. Boulder is the main piece of software that runs on Let's Encrypt's infrastructure. It's open source, it's on GitHub, and anyone can look at it. We don't tweak it or anything; we deploy what you see on GitHub. The third part here is the clients, the client side of the ACME protocol. The people getting certificates from Let's Encrypt have a pretty diverse set of environments, so we don't produce our own clients anymore. We've let the community decide what needs to be built, and build it. There are many dozens of ACME clients out there now that work with Let's Encrypt. They range from clients that work with Apache and Nginx, to standalone clients, to support built into some servers, to integration with popular hosting provider software like cPanel, things like that. So there are a lot of options out there for clients, and our community is doing a much better job with this than we ever could have done trying to predict what's needed and build all this stuff with our limited resources. Since this is a little more technical audience, I thought I'd throw in a slide about what's actually running Let's Encrypt.
We've got a little over 40 rack units of hardware between two secure sites. The stuff in those racks is pretty standard: compute, storage, switches, firewalls. Maybe the only slightly uncommon thing is HSMs, hardware security modules. Those are special computers that protect our private keys for us and do all of our signing. There's a lot of physical and logical redundancy. Uptime is pretty important to us, because if, for example, our OCSP service goes down, we're going to take down millions of websites when that happens. Linux is our primary operating system. We make pretty heavy use of config management, and we automate as much as possible. We're big believers in automation as a way to avoid making mistakes and avoid having to remember to do important things. So automation is an important part of being efficient, but also of being secure. Our API and OCSP go through Akamai. We're a pretty small group, and it really helps to have a big CDN in front of us that can handle traffic and help us deal with threats. So now, in order to go further into how this works, I need to explain a few different types of certificates. There are three basic types out there. Domain validation: DV asserts control of a domain and ties that to a public key. It basically says, this is the public key for the domain that you're trying to talk to. If you encrypt to this public key, the idea is that this domain is the only one that'll be able to decrypt. That's what Let's Encrypt uses; I'll come back to why in a second. Organizational validation is much like DV, except that it includes the name of an entity. So if you want to get an OV cert, you need to say what your company name is, or what your personal name is, or whatever, and you need to submit some evidence of that. They'll tie an organizational name to a cert. Extended validation just takes the OV concept further and includes some more information about your entity.
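For a concrete sense of the DV/OV distinction, you can look at a certificate's subject. Here's a hedged sketch using standard OpenSSL commands; the domain and organization names are made-up placeholders, and the self-signed certificate generated here just stands in for a CA-issued DV cert:

```shell
# Generate a throwaway self-signed cert with a DV-style subject:
# just a domain name (CN), no organization fields.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout dv.key -out dv.pem -subj "/CN=example.com" 2>/dev/null

# A DV-style subject carries only the domain:
openssl x509 -in dv.pem -noout -subject

# An OV or EV subject would additionally carry entity fields, e.g.:
#   subject=C = US, O = Example Corp, CN = example.com
```

The point of the example is that nothing in a DV subject requires a human to check paperwork, which is what makes DV automatable.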
So we only issue DV, for a really important reason, which is that DV is the only one that can be automated. We don't have an automated way to verify that your organization is who you say you are, that the legal name on the cert is correct. But we can assert control over domains in an automated way using ACME. DV is also the only option we have for internet scale, because DV is the only thing we can automate. It's the only way we're going to issue 10 million, 100 million, a billion certs. We want every web server on the internet to have a cert, and that's not going to work with OV and EV. In the long run, those are going to remain somewhat niche, because they require, for the most part, manual intervention. They're little special snowflakes, and that's not going to work at internet scale. Here's a quick overview of how the ACME issuance process works. The client sends a certificate request to the ACME server, which would be Let's Encrypt's servers. The server sends back some challenges and says, you need to demonstrate these things; you need to complete these challenges in order to get a cert. The client sets up completion of the challenges and sends a message back saying, I've completed these challenges, come check. So Let's Encrypt will come back and make sure you were able to do all the things that we asked you to do and that you properly demonstrated control of the domain. If you do it right, then we'll give you a certificate at the end. If not, we'll give you a denial. There are three types of challenges that we offer. The first is the most common one, the HTTP challenge. This is where we give you a special file and tell you to put it at a special place, on a particular path, on your web server. We pick a path that is not likely to be in use for other things, and the file is not a file that's likely to already be on your web server anywhere.
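Concretely, the HTTP challenge boils down to something like this sketch; the token and account thumbprint below are made-up placeholder values, since real ones come from the ACME server during the exchange:

```shell
# Values handed to the client by the ACME server (placeholders here).
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"
THUMBPRINT="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA"

# The client serves the "key authorization" (token.thumbprint) at a
# well-known path under the web root:
mkdir -p webroot/.well-known/acme-challenge
printf '%s.%s' "$TOKEN" "$THUMBPRINT" \
  > "webroot/.well-known/acme-challenge/$TOKEN"

# The CA then fetches and verifies:
#   http://example.com/.well-known/acme-challenge/<TOKEN>
cat "webroot/.well-known/acme-challenge/$TOKEN"
```

Being able to place that file at a path the CA dictated, on the server the domain resolves to, is the demonstration of control.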
So if you can demonstrate to us that you can place this special file at a particular predetermined location on your server, that's a demonstration of control. The second option is probably the least commonly used, but it can be very helpful in certain situations. This is where, instead of putting a file on your web server, we ask you to essentially provision a virtual host at your domain's IP address in such a way that it demonstrates proper control. The third way is DNS validation. It's a lot like HTTP validation, but instead of putting a file on your server, you can think of it as taking that file and sticking it in a DNS record. Because if you can demonstrate control of DNS for your domain, then we're going to assume that you control the server, because you could point DNS wherever you want. DNS validation is fairly popular, and I believe it's growing. It's growing because DNS is the only challenge that doesn't require us to actually contact your server for verification. So sometimes people have, for example, an internal web server that's not on the public web, but they have a publicly resolvable DNS record. They can use the DNS challenge and prove control over the server without having Let's Encrypt actually go back to the server and talk to it to check for a file. DNS validation is also used a lot for devices, which we'll talk about a little later. On the client side, there are basically three categories of clients. There are simple clients that do the most basic thing possible: you request a cert, complete the challenges, get the cert, and it dumps it in the local directory or something like that. The second type of client is what I would call a full-featured client. It gets the cert, and maybe it goes out and configures your web server to use it properly for you. Certbot is an example of a full-featured client, and one of the most popular clients for Let's Encrypt.
And if you want, Certbot will install the certificate chain for Apache and modify your Apache configs to use that cert chain and set up HTTPS for you. This is really valuable, because people have a hard time remembering how to do this stuff, and it takes a while to learn. Just getting the cert onto your server is not enough. The third type is the type that I find the most exciting. This is where the stuff is just built into your web server, and it just happens for you. You don't need a third-party client to reach in and reconfigure your server. Not a lot of servers have done this yet, but I really think it's the bright future for this. I have an example here. Here's a short video of a server called Caddy, which has Let's Encrypt support, ACME support pointed at Let's Encrypt, built in. What's going to happen here happens really fast, so I'm going to tell you quickly what you're going to see. He's going to create a config file, and he's going to put in a domain name that he wants to set up. He's going to start the server, and you're going to see some output from the server. In a few seconds of output here, the server is requesting a cert from Let's Encrypt, completing the challenge, getting the cert, and installing it. Within seconds, you have a brand new website that's up and running with HTTPS. And you don't even need to know that certs exist, or that you have one, or that you need one, or what kind it is, or where it came from. It just happens. When this experience becomes more popular, many, many millions of people are going to be much more secure on the web, and the web is going to respect their privacy in a much better way. This is where PKI really needs to go. HTTPS is the new HTTP. So devices are an increasingly popular consumer of our certs. It kind of surprised me in the beginning.
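To go back to the Caddy demo for a moment: the config file created in the video is roughly this, sketched in 2016-era Caddyfile syntax with a placeholder domain (current Caddy releases may differ in details):

```
# Caddyfile: naming a real domain you control is essentially all it takes.
# Caddy then obtains and renews the certificate via ACME automatically.
example.com

root /var/www/example
```

Starting the server with that file on disk is what triggers the issuance you see in the video's output.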
I didn't think too much about devices when we were setting up Let's Encrypt, except that having an automated API does make things easier for devices. But the uptake has been surprisingly quick. Synology is a really great example. They make some nice NAS products. If you get a Synology device, you go into the management interface and you just click a button. You click that button, it'll go and get a cert from Let's Encrypt, and now your device management interface for your NAS runs over HTTPS. They issue a lot of certs that way. I'm not able to put names out there right now, but we've been talking to a lot of device manufacturers, and a lot more of this is coming. Home routers, corporate routers: devices are really taking this up. I'm going to talk about a few different aspects of Let's Encrypt before we have some Q&A time at the end here. So we made the decision to issue 90-day certificates. We could have made it anything we want. I mean, it would take us one minute to change this to one year, two years, three years. But we made it 90 days for some good reasons. First of all, shorter lifetimes are better for security. Things like Heartbleed happen. Not only do private keys get stolen, they get stolen en masse in 48 hours. It's happened before, and it will probably happen again. When that happens, if you have a cert that's valid for a long time, your best option is to go revoke that cert immediately. And the problem is that revocation doesn't work very well unless you are important enough to be on Google's radar. Google Chrome, for example, is not going to check; they're not going to check OCSP, and they're not going to update Chrome to know about your revocation. So for anybody visiting a website with Chrome, your revocation is worthless. So now someone's got your private key, your cert is valid for another year or two, and revocation does nothing for you. You're just stuck.
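Ninety-day certificates only work in practice if renewal is automated. A hedged sketch of the usual setup with Certbot, one popular ACME client (the cron schedule and deploy hook below are illustrative choices, not requirements):

```shell
# `certbot renew` checks every certificate it manages and only renews
# those close to expiry, so running it on a timer is safe and idempotent.
CRON_LINE='0 */12 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"'
echo "$CRON_LINE"

# Install it with something like:
#   (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```

Once something like this is in place, the certificate lifetime stops mattering to the operator, which is exactly the point made next.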
The best thing that could happen is for your cert to expire relatively quickly. 90 days is on the shorter end of when certs typically expire. For some people, even 90 days is a little uncomfortable, so we didn't want to go below 90 days right now. But we did it to encourage automation and to limit damage from key compromise. If your system is entirely automated, it really doesn't matter what the lifetime is. It could be 24 hours, and it doesn't matter to you, because it's automated. It just renews. So over time, we'd like to bring this down from 90 days. But right now, we just need to get people comfortable with 90 days, get the clients in better shape, and get people comfortable with automating this part of their infrastructure. Another place where we differ from a number of CAs is that we don't think that CAs are the right place to be policing content. When you request a certificate from us, one of the first things we do is take the domains that you want a cert for and ask the Google Safe Browsing API whether there are any red flags on those domains, any issues of phishing or malware. If we get some red flags, we'll refuse to issue the certificate. But once we issue the certificate, our policy is not to revoke on suspected phishing or malware. This has been a little controversial, to be honest, but it makes a lot of sense. First of all, you've got to ask: do you really want CAs, knowing what you know about CAs, to be the content police of the web? And in a world where HTTPS is ubiquitous and required, where HTTPS is existential, taking away a cert becomes censorship. It takes you off the web. So we don't really want to be in that position, and we don't have the data to be in that position, even if we wanted to. Google and Microsoft and the browser makers have a lot of data about content on the web, and this stuff changes really fast. Phishing sites come up, they're up for three hours, and they're gone.
We don't have that data, and we're not going to get it, certainly not through any means that you or I would be comfortable with. But there are places that have that data. I think the industry is realizing that things like SmartScreen and Safe Browsing built into the browsers are the best and most effective way to protect people who want to be protected from this stuff. They have a great view of content on the web as it changes. They can deliver updates directly to clients and explain exactly what's going on and why there's a concern about a site, and you can make your own choices about whether to keep going or not. That's the way to protect people. Phishing and malware enforcement at the CA level is almost impossible to do right. You can't respond fast enough, and even if you wanted to respond by revoking something, again, revocation is really ineffective. It really doesn't do very much. Phishing and malware sites make use of a lot of things: server software, ISPs, computers. There are a lot of things that go into running a phishing or malware campaign, and a cert can be a part of it. But there's not much that we can do about that, given how ineffective revocation is, and we really just have a tough time policing it. So we felt that it was best to rely on Google for their insights on this and not take much action beyond that. So Certificate Transparency is a really important part of how we try to be transparent. Again, we're asking the public to trust us to deal with identity on the web. So every single time we issue a certificate, with no exceptions whatsoever, that cert is immediately published to Certificate Transparency logs, which let the public look at exactly what we're issuing, right as we're issuing it. And if you ever find something that we issued that's not in a CT log, you know there's a problem, and please tell us about it. So CT is an important part of improving the PKI ecosystem.
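One easy way to see CT in action is a public log search frontend. For example, crt.sh lets you look up every logged certificate for a domain; the query URL below reflects how it's commonly used, but treat the exact interface as an assumption rather than a guarantee:

```shell
# Build a CT search query for a domain (placeholder domain shown).
DOMAIN="example.com"
URL="https://crt.sh/?q=${DOMAIN}&output=json"
echo "$URL"

# Fetching it returns a JSON list of certs logged for that domain, e.g.:
#   curl -s "$URL" | head -c 400
```

Anyone can run a query like this against their own domains and check that every cert they see was one they actually requested.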
Our commitment to transparency goes beyond that. All the software that runs our CA is open source on GitHub, and we use it as is. We really strive for quick and complete incident disclosure: when things happen, we tell you about them quickly, and we tell you as much as we can about them. And we have set up public-benefit governance. We want the oversight and the reporting requirements that are involved in being a public nonprofit. I'm going to mention this one more time because I think it's really important: we're available in every country in the world. Everywhere. No exceptions. We issue to every TLD and ccTLD except for .mil. That's a contractual issue with IdenTrust, and quite frankly, it might be changing at some point here. So where are we now? We have a little over 5.3 million active certificates at this point, and we've been live for eight or nine months by now. Because you can protect multiple domains with a single certificate, those 5.3 million active certs cover about 8.5 million unique domains. But it's hard to say what those numbers mean. I mean, is that a lot of certs or a few? So this is the number that we really pay attention to. When we launched in December of 2015, 39.5% of page loads were HTTPS. By April, that was up to 42%. And this month we're at 45.5%. So that's 6% in about eight months. That is an incredible rate of change for the web. Not many technologies get taken up that fast. And imagine how much data and how many people are protected every time we go up 1%. It's massive. We've done 6% in eight months, and I don't know exactly how much of that Let's Encrypt can take credit for. But if you look at the trend line for this uptake, there's a pretty significant upswing the day we launched. Also, 92% of the certificates that we issue go to domains that didn't have certs before. And that's what we're really out to do, right? We want to add new domains to the list of those that are protected.
And a really exciting thing is that at this pace, we have a chance to get to over 50% encrypted page loads in 2016. For the web to be majority encrypted by the end of this year would be a huge milestone. It's huge. So we're getting pretty close to the end here. If you want to help us out, the best way to do that is to champion Let's Encrypt at work. Make sure your employer is deploying HTTPS by default on every site. It's 2016. You know the dangers. You have the tools. Do it the right way every time. If you have partners like ad networks that are holding you back because they don't offer HTTPS, get on them about that. It's not acceptable to be HTTP-only anymore. The second way to help us out is to get your employer to sponsor us. The way sponsorships typically happen is that we get a strong champion or two inside a company who understands the mission, communicates it well, and can explain why supporting Let's Encrypt is so important to a company that depends on the web and has a vested interest in the web being a safe and secure resource. So we've got some great sponsors, and since we started, we've added a few more big ones. Again, I really want to thank them for understanding how important this is and helping us out early on. I think that I've run us out of time here, but I'm happy to take a couple of questions until we're told that we can't do that anymore. If you have a question, just try to speak loudly, because I don't think we have a mic. I know you have some sort of rate limiting in place, and there have been issues with that for educational institutions, where the owner of the domain name is not synonymous with the authority for various entities within that domain name. You've made some changes to accommodate that. So within the past couple of weeks, we've done something that should really help with that.
So we updated our rate limiting documentation to be much more complete and understandable, and we've also added a link to a form where, if you really do need a rate limit adjustment, you can fill it out and request one. Give us the domains and some other information, and we'll try to adjust it. And actually, the only people we've done that for so far are educational institutions, just because they're the ones who applied relatively quickly when we put that up. Yeah, subdomains are still an open issue. We don't rate limit the number of registered domains you get certs for, since you can't really DoS us very hard by buying new domains all the time. It's subdomains that we actually rate limit, and educational institutions in particular tend to have a lot of subdomains off of foo.edu. The question is, what kind of pushback are we getting from other big companies, by which I assume you mean other CAs? Not as much as you might think. We have a pretty good relationship with a lot of the other CAs out there, and I think they understand why this is a necessary part of the evolution. Like I said, the vast majority of our certs go to sites that weren't getting certs before, and there's a good chance they weren't going to unless we offered them something like this. So I think in general, the other companies understand why this is an important part of moving the web forward. Oh, sorry. Yeah, TLS-SNI is just the new name for DVSNI. It's the same thing. Sorry. The protocol is relatively young, so things have changed as it's stabilized. But yeah, DVSNI and TLS-SNI are basically the same thing. Any other questions? Yeah. So it was really important that we came up with a solution that worked for large deployments and large hosting providers, but that also took care of the long tail of the web.
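Coming back to the rate-limit exchange above for a moment, the subdomain issue can be made concrete with a sketch. This is a hypothetical sliding-window limiter keyed on the registered domain, not Let's Encrypt's actual implementation (real code would use the Public Suffix List and different limits), but it shows why many subdomains of one foo.edu share a single budget while unrelated domains do not:

```python
import time
from collections import defaultdict, deque

class RegisteredDomainLimiter:
    """Hypothetical sliding-window issuance limiter keyed on the
    registered domain: certs for a.foo.edu and b.foo.edu share one
    budget, while bar.org gets its own."""

    def __init__(self, max_certs: int, window_seconds: float):
        self.max_certs = max_certs
        self.window = window_seconds
        self.issued = defaultdict(deque)  # registered domain -> issuance times

    @staticmethod
    def registered_domain(name: str) -> str:
        # Naive: keep the last two labels. Real code would consult the
        # Public Suffix List so that e.g. example.co.uk is handled right.
        return ".".join(name.split(".")[-2:])

    def allow(self, name: str, now=None) -> bool:
        if now is None:
            now = time.monotonic()
        q = self.issued[self.registered_domain(name)]
        while q and now - q[0] > self.window:
            q.popleft()  # forget issuances older than the window
        if len(q) >= self.max_certs:
            return False  # budget for this registered domain is spent
        q.append(now)
        return True
```

A university issuing certs for dozens of department subdomains hits the shared foo.edu cap quickly, which is exactly the case the adjustment form is meant to handle.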
The long tail of the web is a lot of websites, and it's a lot of stuff that matters to people. So I'm really glad that the solution works for individuals just as well as it does for large companies. Yeah, so the question here is: I showed an example with Caddy, with ACME built in, and do we have an idea about what might happen with Apache and Nginx? I have lots of ideas about that, because I think about it all the time. I think one of the most important things that could happen is that Apache and Nginx build this in. Apache and Nginx are servers built in a world where HTTP was the standard, and that is a world we need to leave as quickly as possible. Apache and Nginx need to move to a model where HTTPS is the default, and the sooner we can get them to build this stuff in and do it by default, the better. The main barrier right now is, first of all, that the software hasn't been written, or if it has been, it's not that good. So the software needs to improve, but the real issue is turning it on by default. There's a lot of resistance, for example, in the Apache community. Right now, when you start a server, it just makes a listening connection; it just listens on whatever port. But if you want to do this, it needs to make some outbound connections: at the very least an outbound connection to get a cert, and probably also an outbound connection to get the OCSP response for OCSP stapling if you're really going to do it right. So there's some resistance to turning this on by default, because it changes how things have worked for twenty years, where starting a server doesn't make any outbound connections. I understand that point of view from an HTTP-only world, but I think it's becoming increasingly irresponsible not to take the steps necessary to turn on HTTPS, so hopefully we can get them to come around on that. And in the meantime, we can at least offer software or modules for those servers that can be turned on very easily. Does that make sense?
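For a sense of what "building this in" would involve, here is a sketch of one small piece of the ACME flow a server would perform: computing the key authorization it serves at /.well-known/acme-challenge/&lt;token&gt; during the HTTP challenge. The hashing follows the standard JWK-thumbprint construction (RFC 7638); the token and key field values below are illustrative, not real ones:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding throughout
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    """Build the HTTP-challenge response string: token.thumbprint.

    The thumbprint is the SHA-256 of the canonical JSON of the key's
    required public fields (shown here for an RSA key: e, kty, n),
    with sorted keys and no whitespace, per RFC 7638.
    """
    canonical = json.dumps(
        {k: account_jwk[k] for k in ("e", "kty", "n")},
        separators=(",", ":"), sort_keys=True,
    ).encode()
    thumbprint = b64url(hashlib.sha256(canonical).digest())
    return f"{token}.{thumbprint}"
```

The CA fetches that string over plain HTTP to prove the requester controls the domain, which is exactly the kind of outbound-plus-inbound exchange a web server would have to automate to make HTTPS the zero-configuration default.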
Anything else? All right, thanks a lot. Have a good day.