So, my name is Jacob Hoffman-Andrews, I'm a staff technologist at the Electronic Frontier Foundation. If you were at the EFF panel, you heard our saying: we like to solve problems by suing. We also like to solve problems by writing code and writing blog posts, and that's what I do as a technologist. Specifically, my biggest project these days is Let's Encrypt, which is part of EFF's ongoing project to encrypt the web. Pardon me, it's going to be a little awkward because I couldn't get my speaker notes on the screen and the slides up there, so I'll check back to see what page I'm on occasionally. So to start off with, I'd like to explain a little bit about the project to encrypt the web and why we care so much. You're probably familiar with HTTP versus HTTPS. HTTPS is the more secure version of HTTP: it protects your data in transit and ensures that you're talking to the website you think you're talking to. So the first and most obvious reason why we care about encrypting the web is our friends at the NSA, who are hoovering up everything, and they especially love plain text. HTTP is plain text, and it's easy to look at, interrogate, find selectors in, find data in. We also know that they actually care about SSL, which is the protocol underlying HTTPS, because they try to find ways around it. This famous slide from the Snowden leaks shows a little smiley face with "SSL added and removed here." So when they find encryption, when they find SSL, HTTPS, they try to bypass it by going to the inter-data-center links. But we've actually started to see a bunch more commercial-level attacks recently. For instance, about a year and a half ago, EFF sounded the alarm about Verizon's new tracking program using this UIDH header. And the way they were doing this tracking was actually really novel: they would take HTTP traffic passing through their network, modify it, and inject a unique header for every user.
So if you were a website that had visitors from Verizon, every one of those visitors would have a unique cookie tagging them that doesn't respond to their browser's privacy controls or to deleting cookies; even if you have Privacy Badger installed, that wouldn't help. The only thing that helps with this type of injection and traffic modification is HTTPS. We've also seen Comcast using HTTP vulnerabilities, like the fact that HTTP is so trivially man-in-the-middle-able, to insert ads into webpages. And the funny thing is they're doing all this just to insert ads about themselves, telling you: you are on an authentic Comcast Wi-Fi hotspot; we're man-in-the-middling your traffic, but you can be sure you're safe, you're on a Comcast hotspot. The other really important thing for HTTPS is HTTP/2. HTTP/2 was standardized, I believe, last year, but has already seen a lot of deployment, and it's got a lot of performance improvements over HTTP/1.1. But what the authors of the spec found is that in practical deployment, these middleboxes, you know, caching layers and caching proxies and, of course, Blue Coat boxes, munge HTTP so badly that if you tried to change any protocol aspect of HTTP without adding encryption, you would have a bad time. Some middlebox somewhere would munge it and break the protocol. So in practice, if you want HTTP/2, you need HTTPS to go with it. This was one of the really fascinating attacks of the last year or so: the Great Cannon. This came to light through some great research by Citizen Lab. Basically what happened here was GitHub was being used to host some content for greatfire.org, and China didn't like that content. They wanted GitHub to take it down. GitHub wasn't taking it down, so they started DDoSing GitHub. And this was one of the biggest DDoSes seen on the Internet, I think. I'm not sure if the full numbers have been released, but it was a remarkably large DDoS and remarkably hard to counter.
And the reason was the unique way it worked. China has a man-in-the-middle position on a huge amount of Internet traffic, not just from users inside China, but from users outside China. Baidu is, of course, a huge Internet company, and they offer a lot of services, including Baidu Analytics. Baidu Analytics is like Google Analytics, but by Baidu. It's included by millions of websites around the world, not even just Chinese websites, and visitors from all over the world visit those websites. Normally their browser makes a request for the Baidu Analytics JavaScript; that request goes to Baidu servers in China and comes back with regular JS. During the attack, when China turned on this tool that Citizen Lab dubbed the Great Cannon, the bytes for the Baidu Analytics JavaScript would instead come back with an attack payload, which would basically say: fetch this URL on GitHub over and over and over again. And at that scale, with the huge number of people that load webpages that include Baidu Analytics, that added up to a tremendous amount of traffic. It was very hard to block and filter because it didn't come from any one place. It was essentially an instant botnet made out of browsers. And the thing that made this possible was HTTP rather than HTTPS. If we had an all-HTTPS internet, this type of attack would no longer be possible. I mentioned PKI in the slide title; I realize it's a jargonny acronym, and feel free to raise your hand if I say a jargonny acronym and forget to define it. PKI is public key infrastructure. It's how you know which key goes where. With PGP, for instance, you manually verify each other's keys. On the web, we have some help from the web PKI, which is composed of CAs, or certificate authorities. So there's a problem with CAs, which is there's too many of them and they're too big. They're too decentralized and they're too centralized. This is a graph from Berkeley ICSI's scan and evaluation of the CA ecosystem.
There are literally hundreds of CAs, or certificate authorities, and each one of them is trusted to vouch for any website on the internet. So if you run a website and you use HTTPS, there are hundreds of entities who could sign a valid certificate for you. Now, they're supposed to follow correct procedures to ensure that they only sign certificates for the right entities, but those procedures can fail, and there have been a number of examples of CA failures over the last few years. That brings us to the other problem. Look at some of the largest circles here: RapidSSL, GeoTrust. I don't see Symantec on there, but they're definitely somewhere in that graph, and they're big. If one of those biggest CAs fails and misissues a cert, you might say, well, forget them, I don't want to trust them anymore. But you can't, because for Symantec, say, some 30% of the websites on the internet use their certs. If you turn them off, you would break the internet for yourself. More importantly, the browsers can't turn them off either, because they would break a huge portion of the internet for their users. And this is sort of an open, unsolved problem. There are a few alternatives people have worked on, especially in the hacker community. DANE, which stands for DNS-based Authentication of Named Entities, is a protocol based on DNSSEC that allows the DNS system to deliver the trusted keys for a given website, hierarchically rooted in the trust you have in the root zone. But that's not actually seeing any deployment in browsers today, so it's not yet a practical alternative. Sovereign Keys was a proposal by my colleague at EFF, Peter Eckersley, along with some others, to put identities in an immutable public log that could be used not only for vouching for identities, but also for censorship resistance. But again, we're not seeing practical deployment of this in browsers.
Convergence was a proposal by Moxie Marlinspike to use a concept of notaries: you would have multiple vantage points across the internet that could reach out to the site you're trying to visit and tell you what key it should have. Not available in browsers today. TACK was a proposal to use the certificate authority ecosystem to bootstrap trust in your site, and from there you would essentially use self-signing; you take care of your own key. This is similar to what's available now as key pinning. It's a way to say: I want to trust the CA system minimally. But again, not available in browsers. And DNSChain is a blockchain-based approach to naming and delivering keys. It hasn't seen any practical deployment either, or I should say it hasn't seen large-scale browser-based deployment. So what are we left with? If we want HTTPS for the whole internet, we're going to make another CA. They say you don't go to war with the army you want; you go with the one you have. You don't encrypt the internet with the PKI you want; you encrypt the internet with the PKI you have. So this was a big project. EFF came together with Mozilla and the University of Michigan and sponsors Cisco and Akamai to create the Internet Security Research Group. ISRG is a nonprofit independent from each of these entities; it runs Let's Encrypt and owns its roots and keys. ISRG has a small staff, and it also gets volunteer labor from EFF and Mozilla employees like myself and some of my colleagues. So the big question when you want to start a CA is: how do you get trusted? How do you become one of those hundreds of CAs I showed you in the previous slide? The decision of which CAs are trusted is to some extent up to the browsers and to some extent up to the operating systems. It's a question of what are called trust stores.
A trust store is a list of CAs where you say: these are the CAs that are allowed to sign in my world. So you have essentially three options. You can buy a root from somebody who's already in one of the trust stores, and this happens all the time in industry. There are pluses and minuses to it; you could say trust isn't necessarily transitive across purchases, although there is an auditing layer to prevent terrible malfeasance, in theory. Another alternative is to pursue something called a cross-signature, which is where another CA in that graph of CAs signs your root or your intermediate certificate and essentially delegates that power, saying: this entity, ISRG, is trusted to issue certs. And thirdly, you can actually just start from scratch. You roll your own key, make sure you store it in an HSM, apply to the browser and OS root programs, and six months to a year later, the very newest browsers will trust your cert. But we wanted to be trusted by a really wide variety of browsers and other programs right out the door, so that people could start encrypting right away. So we wound up going with the cross-signature approach: we're cross-signed by another CA called IdenTrust, who's also one of our sponsors. So, as I said, we've got this kind of broken ecosystem, and we're participating in it anyhow because we think the win is bigger than the loss. But we are doing some things within that ecosystem to try and improve it. The first is CAA, Certificate Authority Authorization. This is a DNS specification, a newish DNS record type from the last few years, where you can publish a CAA record for a given domain that says: this is the list of CAs that I want to allow to issue for my domain. So maybe you don't trust some CA; you can say only these three, or only Let's Encrypt, or none at all: I don't want any TLS certificates, go away.
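To make CAA concrete, here's a minimal sketch of the record's wire format as the CAA RFC describes it: a flags byte, a tag-length byte, the tag, then the value. The domain and CA name below are just examples.

```python
import struct

def encode_caa(flags: int, tag: str, value: str) -> bytes:
    """Encode a CAA record's RDATA: flags, tag length, tag, value."""
    tag_b = tag.encode("ascii")
    return struct.pack("!BB", flags, len(tag_b)) + tag_b + value.encode("ascii")

def decode_caa(rdata: bytes):
    """Parse RDATA back into (flags, tag, value)."""
    flags, tag_len = struct.unpack("!BB", rdata[:2])
    tag = rdata[2:2 + tag_len].decode("ascii")
    value = rdata[2 + tag_len:].decode("ascii")
    return flags, tag, value

# A zone file might say:  example.com. IN CAA 0 issue "letsencrypt.org"
rdata = encode_caa(0, "issue", "letsencrypt.org")
assert decode_caa(rdata) == (0, "issue", "letsencrypt.org")
```

A CA that honors CAA looks up this record before issuing and refuses if its own name isn't in the allowed list.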
Let's Encrypt is actually the first CA to implement the CAA spec... oh, no. OK, we have a correction. Is that Rob in front? Yeah, hey. So we have Rob Stradling from Comodo in front, and he corrected me: Comodo actually invented CAA. And is Comodo enforcing CAA for issuances? Ah, that's the operations side. OK. So anyway, we consider CAA implementation one of our important security fixes. We also implement Certificate Transparency. Certificate Transparency is an attempt to improve the trustworthiness of the CA ecosystem by ensuring that every certificate issued is logged publicly and can be analyzed by the public and by browsers. So when there's a mistake, when there's a misissuance, we can catch it right away and take the appropriate measures. Let's Encrypt has, from day one, been voluntarily logging every certificate we issue to CT, and we plan to take further steps to ensure that CT proofs are included with either our OCSP responses or the certificates we issue. Sometime in the future, the goal of the CT project is to enforce that every certificate trusted by a browser is present in a CT log. Next, serial number entropy. The CA ecosystem recently went through a transition from the SHA-1 hash to the SHA-2 family, and the reason is that SHA-1 is no longer considered strong enough: it's too easy to produce collisions. And if you can produce a collision in the hash used by a certificate, you get to mint your own certificates. We saw this before with the MD5 hash. We migrated from MD5 to SHA-1 many years ago, and while MD5 was on its last legs, a group of researchers actually produced a live collision and minted their own fake CA certs using it.
As a response to that, the CAs' self-regulating organization, the CA/Browser Forum, added language saying serial numbers should include 20 bits of entropy, which was a great step, and I think most CAs in practice do that now. But there's still this bug: this critical security feature, entropy in the serial number, isn't required. One of the nice things about being a CA is that we get to participate in the CA/Browser Forum, so we recently helped push through a requirement for serial number entropy at a 64-bit level. Now that's a "must" rather than a "should." One of the longstanding problems with certificates in general is, as they say, revocation doesn't work. Revocation doesn't work because the method we use to deliver revocation information, OCSP, is not inherently reliable. It fails some percentage of the time, a percentage that's unacceptable to end users. If you're visiting a website and the website is up, but the OCSP responder is down, you generally don't want that to stop you from visiting the website. What that means in practice is that if a browser fails to get revocation information via OCSP, if it gets a timeout or it's blocked in some way, it'll fail open. It'll say: you know what, it's probably not revoked, I'm not gonna worry about it. But you can imagine, if you revoked your cert because somebody stole your key, that the attacker using that stolen key from a network position on their victim would just block the OCSP response. So OCSP as deployed by browsers is a tool that fails when you need it most. Now there's a recently standardized extension called OCSP Must-Staple. It's officially called TLS Feature, but before that it was most commonly known as Must-Staple. It basically says: when I make a handshake with a web server, if this extension is in the certificate, I should always expect to see a stapled response attached.
And if I don't, the handshake should fail. When I say stapled, that means the server has gone to the OCSP responder itself, fetched a signed response that's up to date and has a timestamp, and attached that to the handshake. This actually makes revocation work if you turn it on, and Let's Encrypt deployed it pretty early on. I think, again, Comodo actually beat us to the punch on that. But this is us trying to make revocation actually work on the web. So, takedowns. One of the things that doesn't get talked about a lot is the power of your CA to take down your website through revocation. I said OCSP is a tool that fails just when you need it most, but it also works when you don't want it to work. If your CA unilaterally decides to revoke your certificate, your visitors are gonna see a warning that your site is unsafe. Now, I haven't seen this actually happen in practice, but because not all certs are logged to CT yet, it's not actually possible to do the comprehensive research necessary to find out how often such revocations are happening. So when we develop Let's Encrypt policies, we try to take a very content-neutral approach and say we don't want to revoke certificates based on the content of the site. We revoke them if the key is compromised, or for a certain number of other requirements that we have. Now, we've had to make some compromises there based on the requirements placed on us by root programs and the CA/Browser Forum. So we do some amount of phishing check at issuance time, where we talk to the Google Safe Browsing API. But we try to keep those checks as minimal as possible, to avoid the temptation to do content-based takedowns of sites, which I think is going to become a bigger issue as HTTPS deploys further.
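The fail-open problem and the fix Must-Staple provides can be modeled in a few lines. This is a toy sketch of the browser-side decision, not any browser's actual code:

```python
def revocation_decision(ocsp_response, must_staple: bool) -> str:
    """
    Toy model of how a browser treats revocation information.
    ocsp_response: "good", "revoked", or None (responder unreachable/blocked).
    must_staple: the certificate carries the TLS Feature (OCSP Must-Staple)
    extension, demanding a stapled response in the handshake.
    """
    if ocsp_response == "revoked":
        return "block"
    if ocsp_response == "good":
        return "proceed"
    # No response at all. Browsers historically fail open here, unless the
    # certificate itself demands a staple, in which case the handshake fails.
    return "block" if must_staple else "proceed"

# An attacker who stole the key just blocks the OCSP responder:
assert revocation_decision(None, must_staple=False) == "proceed"  # fails open
assert revocation_decision(None, must_staple=True) == "block"     # hole closed
```

The point is that Must-Staple moves the fetch-and-prove burden onto the server, whose connection to the OCSP responder an attacker near the victim can't block.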
And we have some community forums that started out as a support tool for Let's Encrypt itself and for our many clients, but I think these are quickly becoming one of the best places to get advice on TLS configuration on the web today. We work pretty hard to keep them a friendly place and always welcome more people; come check them out and either contribute advice or ask questions. So one of the other really cool things about Let's Encrypt is this new protocol, ACME, the Automated Certificate Management Environment, which is clearly a backronym. I'd like to say it's named after the Acme Bread Company in San Francisco, because certificates should be like bread in that everybody should have all that they need. But it's actually named after the ACME company in the Road Runner cartoons that produces boulders and anvils, and that's where the name of our server software comes from: it's Boulder. It's an ACME boulder. Here's the URL for the IETF process; it's an IETF standards-track document, and we'd love to get more feedback and more analysis. Right now, Let's Encrypt is the only CA implementing ACME, but StartCom, who runs StartSSL, has recently committed to implementing ACME. They had rolled out their own kind of ACME-like protocol with a custom client in a service called StartEncrypt. Some researchers found vulnerabilities in how it does validation, so they've since decided to use the standard ACME. SSLMate is not technically a CA, but a kind of front end to a number of CAs, with a really convenient tool for setup. Andrew, who runs SSLMate, is also considering implementing ACME. One of the valuable things about ACME relative to previous protocols that tried to do the same thing is that it covers both validation and issuance.
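To give a flavor of the validation half, here's a hedged sketch of the HTTP-01 key authorization from the ACME draft: the challenge token from the CA, joined with an RFC 7638 thumbprint of the account key. The JWK values below are hypothetical placeholders, not a real key.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME and JOSE use throughout."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638: SHA-256 over the JWK's required members, lexicographically
    ordered and serialized with no whitespace."""
    required = {k: jwk[k] for k in ("e", "kty", "n")}  # RSA keys
    canonical = json.dumps(required, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode("utf-8")).digest())

def key_authorization(token: str, jwk: dict) -> str:
    """The content the client serves at
    /.well-known/acme-challenge/<token> for an HTTP-01 challenge."""
    return token + "." + jwk_thumbprint(jwk)

# Hypothetical account key (values truncated for illustration):
account_jwk = {"kty": "RSA", "n": "0vx7agoebGcQ...", "e": "AQAB"}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ", account_jwk))
```

The CA then fetches that well-known URL itself; seeing the right token bound to the right account key is what proves control of the domain.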
So ACME includes not only instructions for how to ask a CA for a cert, but for how to prove to that CA that you own the domain that's listed in the cert you're asking for. As I said, we have Boulder, our implementation, and we also have probably dozens of clients for Let's Encrypt that speak the ACME protocol. The idea here is that once we have broader deployment in the ecosystem, you can have one client that could potentially interoperate with a number of CAs. Hopefully that will reduce the too-big-to-fail problem by lowering switching costs and making it easier for people to choose a CA that meets their policy goals and their cost goals. So this is our server software, Boulder. It's written in Go. You can visit our code on GitHub, take a look, do a security audit if you want. We have paid for a professional security audit, we're about to go through our second one, and we've also gotten a volunteer one from the community. We're always interested in making this more secure. CAs are a core part of internet security, and we do our best to make sure we live up to that very high standard. Certbot is the first ACME client developed; it was developed at EFF. And Certbot's goal is not only to solve the issuance problem, but to solve what we saw as perhaps the even bigger skills gap. If you wanted to buy a certificate, the cost is one thing: say it was $15. That's not that bad if you're based in the U.S.; it's pretty bad if you have a bad exchange rate on the dollar. But even beyond the monetary cost is the time and skills cost. Most people just don't know how to start getting a certificate. You have to create a CSR, a certificate signing request. You have to upload it to the CA. You have to follow their instructions to prove you own the domain. You have to download the certificate, and you have to install it.
You have to remember to install the certificate chain, which is a common mistake; your site won't work properly without it. There are a number of mistakes you can make. Even professional sysadmins sometimes take hours to set up a cert or renew it. So we wanted a tool that could handle all that for you: it does the proof, it gets the cert, it installs it, and suddenly you have HTTPS. And Certbot does this. It has auto-configuration for Apache, and we're working right now on adding nginx support, so if you use nginx, it can do that automatic install too. Also, one of the common misconceptions about Certbot is that it requires root. Certainly the smoothest flow is if you have root: it can write keys with only root privileges, reconfigure your Apache, write logs with only root privileges. But with a little bit of config, it works just fine as a non-root user. I mentioned we have a broad client ecosystem; this is just a small sampling. lego is a client in Go; it's pretty small and straightforward. Caddy is a really unique web server: it acts either as a file-based web server or as a reverse proxy, and it just handles the Let's Encrypt certificate issuance for you. So you can stand up a Caddy server, tell it what your host name is, and you have HTTPS. You don't even need a separate client. acme-tiny is designed as a small Python client that doesn't have many dependencies. And ACMESharp is your option for Windows. I haven't tried ACMESharp out, but try it out and let me know what you think. And if you don't like the idea of installing any extra software on your web server, if you're familiar with the existing certificate issuance process and you just wanna do that again, you can visit gethttpsforfree.com.
This is a third-party piece of software that uses the ACME API; it allows you to submit a CSR, a certificate signing request, and gives you instructions to put a certain file on your website, and you can do the whole thing without installing any software. It takes more time, but it's zero-touch in terms of installing software on your servers. So, Let's Encrypt has limited resources. We try to make them stretch as far as possible and make sure that everybody gets the certificates they need. And I think it's a commonplace of running a service on the internet that if you give somebody something for free, they'll use all of it. So we have some basic rate limits in place to ensure fair distribution of certificates to everyone who needs them. We do it based on the registered domain. Say you have www.example.com: the example.com part is the registered domain, and that takes into account public suffixes like .co.uk. So within one registered domain, you can issue up to 20 certificates per week. The certificate is the basic unit of what consumes resources for us, but each one of those certificates can contain up to 100 subject alternative names, so you can have 100 names on that cert. They don't all have to be for the same registered domain, but if you're trying to get as many subdomains as you can, you can do 100 subdomains in each of the 20 certs a week. So each week you can do 2,000 new subdomains in certs. The other thing we really care about is renewal. Obviously we never want a renewal to fail because you hit rate limits, or because somebody else who's sharing a domain with you hit the rate limits. So we have an exception in place: if you've issued a certificate before and then you go to issue a certificate with the same set of names, you get a free pass on this particular rate limit. What that means in practice is two things. It means you have a defense against hitting a rate limit while you're trying to renew.
But it also means in practice you can continually grow the set of names you can issue for your domain. In week one, you issue 20 certs. In week two, you issue 20 new certs and potentially renew your previous ones, although you actually have many days before you have to renew. So, 90-day certificates. This has definitely been one of the most controversial decisions that Let's Encrypt has made, but so far I think it's working out fairly well. There are two main reasons we decided to go for 90 days, even though more traditional CAs tend to go for a year. One is key lifetime. We saw with the Heartbleed attacks of 2014 that it's possible to have roughly internet-wide potential key compromise. Any server running OpenSSL in 2014 might have had its private key compromised, and there wasn't really a good way to be sure. It was a year before all of those keys were rotated off, and not necessarily even all of them. With 90-day certificates, if people are issuing their certificate each time with a new key, the window of exposure to that type of attack is smaller. But I think the even more important effect is that it's allowed us to build expertise much faster with automated issuance and especially automated renewal. Let's Encrypt's goal is not just to reproduce the existing CA structure but make it free; we're actually trying to do something new and special, which is to make this stuff easier, make it more automated. If we went with a one-year schedule, it would have been a year from our first launch before we saw any experience with renewal at all, and we would only get a chance to improve our systems, and for clients to improve their code, once a year. This way, we get about six opportunities through the year for each client to renew and make sure that renewal is working.
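The arithmetic behind "about six opportunities a year" can be sketched directly. This assumes a Certbot-style policy of renewing when fewer than 30 days of validity remain; the 30-day margin is an assumption here, not something fixed by the CA.

```python
from datetime import date, timedelta

CERT_LIFETIME = timedelta(days=90)
# Assumed client policy: renew once fewer than 30 days remain,
# i.e. roughly every 60 days.
RENEW_MARGIN = timedelta(days=30)

def needs_renewal(issued: date, today: date) -> bool:
    """True once the certificate is inside its renewal window."""
    return today >= issued + CERT_LIFETIME - RENEW_MARGIN

issued = date(2016, 1, 1)
assert not needs_renewal(issued, date(2016, 2, 15))  # day 45: still fine
assert needs_renewal(issued, date(2016, 3, 5))       # day 64: time to renew

# ~60-day renewal cycles give about six exercises of the pipeline per year:
assert 365 // (90 - 30) == 6
```

Running a check like this daily from cron, rather than renewing at a fixed date, is what makes a single failed attempt harmless: there are still weeks of retries before expiry.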
And the goal here is that if everybody, or at least a large majority of sites, can do automated renewal, we can get rid of the "this certificate has expired" message. Which is kind of counterintuitive: you'd think longer lifetimes would help more with that, but we think this is actually contributing to a stronger auto-renewal ecosystem. And it's also encouraging hosting providers to build in integration rather than relying on their users to go to sites like gethttpsforfree.com and paste in the certificate. So how are we doing by the numbers? This to me is one of the most interesting ones. My colleague J.C. Jones did some analysis, and this is again based on the Certificate Transparency project I talked about. Some large fraction of the certificates on the web are now in the CT logs, and beyond its benefits for transparency and auditability of CAs, Certificate Transparency is a godsend for researchers, because you have a large set of certificates you can analyze. What J.C. found, based on CT data and also on data from censys.io (that's C-E-N-S-Y-S dot I-O, a scanning project from the University of Michigan), is that of all the certificates issued by Let's Encrypt, if you look at the host names in those certificates, 94% of them had never before been seen in a certificate by Censys or Certificate Transparency. And this is really the number we're trying to affect. As much as we want people to be able to save money on their certificates, we're really trying to get new sites onto HTTPS, and so far it seems like we're doing that fairly well. This is an up-and-to-the-right graph: number of currently unexpired certificates. We're currently at four and a half million. You can see in the graph here there's a pretty sharp spike upwards.
I believe that's when WordPress.com turned on automatic integration of Let's Encrypt certs, and they issued for about a million host names over the course of a week or so. That's an example of the great power of hosting-provider integrations: you can turn on HTTPS for people who wouldn't even necessarily know what HTTPS was or why they should turn it on. They don't have to think about it. That's the level of easy encryption we really want to get to. This next graph, I don't know how well you can read it from back there, is not quite as reassuring. This is the percentage of page loads over SSL, a.k.a. HTTPS, and it comes from Firefox telemetry. Firefox has code in it that will collect various data that helps Mozilla improve the browser, and report it upstream to Mozilla if you let it. The great thing about Firefox telemetry is that they actually make it public. It's anonymized, so you can't find data about a particular user, but it will tell us the number of page loads over SSL versus the number of non-SSL page loads. And you can see this is also up and to the right, but not quite so up and not quite so to the right. We were at approximately 37% in September of 2015, and we're now hovering right around 45%. So we've come up about eight points over the course of about a year. If you extrapolate that linearly, and it's almost certainly not gonna be linear, we'll get to 100% HTTPS in seven to eight years. Personally, I think that's not nearly fast enough, and we need to step up this rate a little, so we're gonna be looking for more ways to increase the rate of adoption. You can see there's a little bit of a kink in the graph when Let's Encrypt launches, which brings us to this next graph: the percentage of validations using Let's Encrypt certificates. This is also from Firefox telemetry. And what you can see here is, we start out small, we get big fast.
This is kind of proportional to our number-of-issued-certificates graph, but on the right is the percentage, and it's 0.15% of validations. So Let's Encrypt is issuing a lot of certs, but they're mostly for what we'd call the long tail: relatively small sites without a huge amount of traffic. And honestly, that's fine. The goal here is not just to get the Googles and the Facebooks of the world encrypted; partly through EFF activism, they decided to go HTTPS some years ago. We also wanna get the smaller sites, the ones where people would say, it's just not worth my time to encrypt the site. It's not worth three hours. Five minutes? It might be worth five minutes. Zero minutes, because my hosting provider did it for me? Sure, it's totally worth zero minutes. So obviously, like any project, we wanna see our fraction of validations go up, and this is a number we'll work on. So the big question posed in the title is: how many certificates are we gonna issue? I kind of gave away the punchline, which is that by some estimates there are about a billion websites. If you look at the number of certs Let's Encrypt has issued to date, right about four and a half million, plus some tens of millions of other certs from other CAs in the logs, that's about two orders of magnitude smaller than where we need to be. So we really need to accelerate and amplify the rate of certificate issuance and the rate of getting sites to start using HTTPS. The billion-websites number, by the way, is from internetlivestats.com. So what do we need to get there? First off, read the code. You're a bunch of hackers; we would love you to take a look and try to find any vulnerabilities. Like I said, we pay for professional audits, but we are very welcoming of community contributions.
If you see a minor, non-critical bug, file a GitHub issue. If you see a security flaw, visit letsencrypt.org; we have our security contact on there, and we have a PGP key so you can email us details. We also definitely love more contributions to the code. We've gotten some great participation on both the Boulder and Certbot repos from volunteer developers that has really made the product better. We always want more clients. As I said, we have dozens of clients, but there's always room for more. Our attitude is let a thousand flowers bloom and let people find the clients that are right for them. I think we're definitely still in the early days of experimenting with how automated certificate issuance can make things better. We especially want documentation improvements. I led a workshop yesterday morning and walked some of you through the process of getting a certificate from Let's Encrypt, and there are definitely places where our docs are lacking, or could be more user-focused or more to the point. We would love pull requests on our docs for any problem you've had: if you don't find the answer in the docs, or you think it's in the wrong place, send us a pull request or a ticket.

A lot of the work on both Let's Encrypt and ISRG comes from staff like myself, funded by EFF memberships. We have a booth downstairs. We've been working the booth some of the time, and it's been great to see how enthusiastic everyone is here. Could I see a show of hands? How many people here are EFF members already? Nice, that's a lot of you. By the way, I have some special Let's Encrypt stickers that we haven't had at the booth till this morning, because I forgot them in my luggage. If you wanna go down to the booth after this talk, you can get a Let's Encrypt sticker. Membership funds EFF, and funds not only Let's Encrypt but our broader activism for the future of technology.
And especially, if you become a member, that adds to the numbers we can put to paper when we write our legislators and say, EFF is a membership-based organization with 25,000-plus members. ISRG is also an independent nonprofit and has staff of its own, both operational and developers, and you can give to ISRG on the website, letsencrypt.org. The other thing ISRG can really use is sponsors. A lot of our money comes from large hosting companies and providers who look at their current bill for CA services and say, well, we could spend a fraction of that on a sponsorship for Let's Encrypt and use their certificates. So if you belong to an organization that has some money and you'd like to see them sponsor Let's Encrypt, definitely get in touch about that.

And especially, hosting provider integrations. I showed on that earlier graph where we saw a big spike from just one hosting provider integration. We've seen a number of others too, and all of those have been extremely good for users, in that sites are just easily available over HTTPS. So if you know someone at a hosting provider, or if you work at a hosting provider, see if you can integrate Let's Encrypt for automatic free encryption for your end users.

And it's a little off the bottom of the slide here, but: CT implementations. I've mentioned a few times just how key Certificate Transparency is to the health of the CA ecosystem. But CT itself is in its early days, and it needs a bunch of things. In particular, we have about one implementation of the CT log server, and we need at least one more to really build confidence in the spec and get it standardized. We also need more people deploying CT logs, which is a fairly big task: you need a pretty solid institution to back it up, run multiple data centers, and make sure it's gonna be reliable.
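For a sense of what a CT log actually serves: RFC 6962 defines the log's HTTP API, and a log's current signed tree head (STH) is published at `<log_url>/ct/v1/get-sth`. Here is a minimal sketch of decoding such a response; the hash and signature values below are made-up placeholders standing in for a live network fetch:

```python
import base64
import json
from datetime import datetime, timezone

# A CT log serves its current signed tree head at
#   <log_url>/ct/v1/get-sth          (RFC 6962, section 4.3)
# A live fetch would be e.g.:
#   body = urllib.request.urlopen(log_url + "/ct/v1/get-sth").read()
# A hardcoded sample response stands in for that here; the hash and
# signature values are placeholders, not real log data.
sample_body = json.dumps({
    "tree_size": 1234567,
    "timestamp": 1472828400000,  # milliseconds since the Unix epoch
    "sha256_root_hash": base64.b64encode(b"\x00" * 32).decode(),
    "tree_head_signature": base64.b64encode(b"placeholder").decode(),
})

def parse_sth(body):
    """Decode an STH response into (tree_size, UTC time, root hash bytes)."""
    sth = json.loads(body)
    when = datetime.fromtimestamp(sth["timestamp"] / 1000, tz=timezone.utc)
    root = base64.b64decode(sth["sha256_root_hash"])
    return sth["tree_size"], when, root

size, when, root = parse_sth(sample_body)
print(size, when.isoformat(), len(root))  # a SHA-256 root hash is 32 bytes
```

Monitors poll this endpoint and verify consistency proofs between successive tree heads, which is what makes a log's append-only promise checkable.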
What we've seen over the last few months is that some of the initial CT logs have been disqualified, at least from Google Chrome's program, because they didn't meet the reliability requirements. So that leaves us in this awkward position where the only CT logs we have are run by Google, and we really want a lot more organizational diversity in this system that keeps tabs on internet security. So if you feel like you can write a CT log implementation or deploy a CT log instance, please do so. The world needs you.

And now I'd like to open it up for questions. The mic is up front; please come up and speak into the mic.

How you doing? Thank you for your work. I think we all appreciate it. Thank you. I'm wondering if you or anybody could speak to the history of the CA system, and confirm a memory from my old days: that when VeriSign and Thawte were the only game in town, I feel like you had to be listed in Standard & Poor's just to get a certificate...

So I wasn't there in the early days, but I've read up a little on the history. Certainly when the CA ecosystem was just getting started, validation standards were higher and based on real identities, and I think over time, competition has driven down the level of real-world identity validation required. But we've also seen an expansion in the types of sites we want to encrypt on the web. So I think we do see a lot of division between people who feel like HTTPS certificates should reflect real identity and trustworthiness, like: this is a company, they're at this address, and they aren't scammers; versus people who see HTTPS certificates as a trusted introducer for domain names. In other words, a certificate just says: for this domain name, this is the key. That's all it says. It doesn't vouch for the identity or trustworthiness of the person behind that site.
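That domain binding is visible in the certificate itself: the subjectAltName extension lists exactly the DNS names the key is certified for, and nothing more. A small sketch using the dict shape Python's `ssl` module returns from `getpeercert()`; the sample certificate data here is made up for illustration:

```python
# getpeercert() on an ssl-wrapped socket returns a dict in this shape;
# the values below are a made-up sample standing in for a live handshake.
sample_cert = {
    "subject": ((("commonName", "example.org"),),),
    "subjectAltName": (("DNS", "example.org"), ("DNS", "www.example.org")),
    "notAfter": "Sep  1 12:00:00 2017 GMT",
}

def dns_names(cert):
    """The DNS names a certificate binds to its key -- all a DV cert asserts."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

# A live check would look like (not run here):
#   ctx = ssl.create_default_context()
#   with ctx.wrap_socket(socket.create_connection(("example.org", 443)),
#                        server_hostname="example.org") as s:
#       print(dns_names(s.getpeercert()))

print(dns_names(sample_cert))
```

Nothing in that structure speaks to who operates the site or whether they're trustworthy, which is the "trusted introducer" view in a nutshell.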
I'm in the latter camp, and I think generally speaking, Let's Encrypt is in the latter camp: CA certificates are for this trusted cryptographic binding, as opposed to being a trustworthiness measure.

And do you think that the flaws in the current system are kind of rooted in its connection to old-world institutions? I feel like the original assumption was that the only use case was financial transactions.

Right, I think the flaws are inherent in the structure of the system. PKI is really hard. We have essentially three trust models: TOFU, trust on first use, which SSH uses; web of trust, which PGP nominally uses, though most people just verify the keys of the people they're talking to; and a hierarchical CA-based system, which HTTPS uses. Each of them has major flaws. I don't think anybody has synthesized them all into something really good that has all the properties you want. Fundamentally, this type of hierarchical system has these types of problems, although we can patch some of them. Thank you. Thanks.

Are you supporting wildcard certificates?

We don't currently support wildcards. We might in the future. We're still working out, in the ACME spec, what types of validations would have to be done. We actually recently landed a significant change to the ordering of ACME validations, to better accommodate both paid CAs and wildcard issuance. Okay, thanks.

How about TCPINC? Are you following that at all? Sorry, say it again. TCPINC, increased TCP security. So the idea these people have is, every time you start up a TCP connection, you do an unauthenticated ephemeral Diffie-Hellman exchange. You get encryption, and you're vulnerable to a man-in-the-middle attack, but the idea is that they want to drop this deep into the stack so the encryption happens automatically. Yeah, I love the idea in principle.
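The tradeoff in that question can be sketched concretely: unauthenticated ephemeral Diffie-Hellman defeats a passive eavesdropper, but an active middlebox can simply run two handshakes. A toy illustration (the prime here is a deliberately insecure toy group, chosen only so the sketch is self-contained; real deployments use groups like the RFC 3526 MODP groups):

```python
import hashlib
import secrets

# Toy DH group for illustration only: 2**127 - 1 is prime, but far too
# small to be secure against discrete-log attacks.
P = 2**127 - 1
G = 5

def keypair():
    """Ephemeral private exponent and the public value g^priv mod p."""
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def session_key(priv, peer_pub):
    """Hash the shared DH secret down to a symmetric session key."""
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).hexdigest()

# Passive case: Alice and Bob agree on a key an eavesdropper who only
# sees the public values cannot compute.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert session_key(a_priv, b_pub) == session_key(b_priv, a_pub)

# Active case: a middlebox (Mallory) intercepts both public values and
# substitutes her own. Neither side can tell, because nothing is signed:
# Alice shares a key with Mallory, Bob shares a different key with
# Mallory, and Mallory re-encrypts traffic between them.
m_priv, m_pub = keypair()
key_alice_side = session_key(a_priv, m_pub)   # Alice thinks this is Bob's
key_bob_side = session_key(b_priv, m_pub)     # Bob thinks this is Alice's
assert key_alice_side == session_key(m_priv, a_pub)
assert key_bob_side == session_key(m_priv, b_pub)
```

This is exactly why the answer below argues that when the adversary sits on the path, like an ISP, encryption without authentication gets stripped or intercepted routinely.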
I think HTTPS is becoming probably one of the biggest crypto systems deployed on the planet, but that does mean we're neglecting other systems; SMTP, for example, still has horrible vulnerabilities and very often isn't encrypted. In terms of ephemeral, non-authenticated encryption, I'm not sure it'll get us where we need to be. We've had similar discussions in the HTTPS realm with HTTP/2, and a lot of discussion about whether we can automatically get everybody onto HTTPS if we accept unauthenticated self-signed certs. Personally, my feeling on the matter is that when the enemy is the ISP, as we see with Verizon and Comcast, any protocol that's man-in-the-middle-able is gonna get man-in-the-middled, potentially all the time. We have middleboxes and optimizers who want to cache stuff, or who want to track you or sell ads, and I think we'll see that happen consistently, to the point where it's just normal. So I think we definitely need authentication in our crypto.

I was surprised to see StartCom on the list of people using ACME. As I remember, they had their StartEncrypt project about a month ago, with all of its fundamental architectural issues, and then they replaced versions without changing version numbers. Was it hard to convince them to use ACME, or did public outcry change it?

So this is the cool thing about public standards. As far as I know, nobody from Let's Encrypt actually talked to them or tried to sell them on ACME. There was some discussion on the public lists where other people were like, why didn't you just use ACME? And then they just announced it; it was as much of a surprise to us as anyone else. But yeah, you mentioned their launch, and they had some bugs with their validation.

How do you proceed with all the stuff that doesn't like Let's Encrypt certificates? Like this phone, or the proxy in the place where I work, stuff like that? So the question is, how do you handle devices that don't trust Let's Encrypt?
As I mentioned earlier, we did our best to be trusted as broadly as possible by getting a cross-signature from the IdenTrust root. But actually, the landscape of who trusts what is really fragmented and complicated, and different devices have different trust stores. You will probably never find a CA that can offer 100% validation across all devices; usually it's in the 99-plus percent range. Depending on your device, you may or may not have options. For instance, older Android devices, before I think 2.2, don't trust Let's Encrypt, and I believe, I'd have to check my docs, but I believe that's actually because of the SHA-1 to SHA-2 transition. All right, so I might have time for one more question when I finish this one up, if anyone has one. But essentially, you need to be able to update your software. With Android in particular, there's this problem that cell carriers and manufacturers don't always offer OTA updates, so we have a lot of old, insecure phones, and that plays into a lot of security problems besides just trusting Let's Encrypt. One of the big categories where we're not trusted is Java. The root that Let's Encrypt is cross-signed by, DST Root X3, which is an IdenTrust root, was recently included in the most recent Java release. So I'm afraid I don't have any easy answers there, except upgrade, and if you can configure root stores, you can add DST Root X3 and ISRG Root X1 to those stores.

Thank you all very much for coming. I appreciate your support. And come by the EFF booth on the vendor level.