 you have been using HTTP today? OK, quite a few. How many of you have been using HTTPS today? Many more. So among those who have been using HTTP: did you want to use it, or did you have to? So, not too many people who have to use HTTP despite wanting encryption. That's interesting. Well, Joona over here seems to feel that this problem is more urgent than it looks in this crowd, so he's going to talk to us about moving towards a fully encrypted web. Please welcome him with a final applause, Joona.

Is the mic on? Yeah. So welcome to my talk about web encryption: where we are today, how we got here, and what is about to happen in the web PKI. So who am I? My name is Joona. I have been working in backend development and system administration for 15 years. For the last two years I have also been contributing to the Certbot project, and I will be working full-time on Certbot starting next month.

So what's up, why am I standing here today? Web encryption: pretty much everybody knows at least something about it. So let's travel a few years back. Everybody, not only the technical people, expects a certain level of privacy in all digital communications today: instant messaging, email, phone calls, whatever. Of course that's not always true, but it is what we expect: we expect the communication to be private, at least at some level. And for most services today we're in a pretty good situation. Instant messaging is mostly encrypted, at least on some level; phone calls and email somewhat. We can actually thank Google and blame Google at the same time for the email part, because they have taken such a large chunk of email traffic that they can force people to use STARTTLS and other authenticity checks like DKIM and so on. And Apple is forcing applications distributed through the App Store to use HTTPS only for their API communications.

But our beloved web, that's something we had somewhat settled on: it's just this web stuff, it's plain old HTTP, it's all fine, and we don't think more about it. Well, some time back, at least. It's like, we're not a bank. "Telnet is fine for remote administration, right?" Said no system administrator in the last 20 years. But the web hasn't reached that same level yet, for some reason.

So, SSL, TLS: it's nothing new. It's actually 22-year-old tech, introduced by Netscape back in 1995, and it has gone through many iterations. SSL 1.0 never went public because of security flaws. Then there were SSL 2.0 and 3.0, and after that TLS 1.0 in 1999, which was based on SSL 3 and could actually downgrade connections to SSL 3. Then we had TLS 1.1 in 2006 and TLS 1.2 in 2008, and we are in the process of moving towards TLS 1.3, which is currently at the draft stage at the IETF.

So we have had this tech for 20-plus years. Why am I standing here today talking about web encryption and its adoption? Well, it has been pricey, painful, and the whole process has been hard, and people tend to skip the hard, not completely required stuff when it's not absolutely necessary. The configuration has been tedious, and acquiring a signed certificate from a CA required you to jump through many hoops. But today we are in a bit different world. This is the state we are currently in.
Statistics collected from Firefox telemetry show that just last week, I think, the share of requests made by Firefox over HTTPS peaked at 60%. The graph itself doesn't look so impressive, but note the scale, if you can see it: it spans less than two years, and in that time the amount of encrypted requests on the web has grown by 50%. That's huge, because after all, this is 20-year-old tech.

So what has happened? Why are we moving more rapidly today? Deploying HTTPS has become a lot more accessible. You don't necessarily need technical knowledge of all the underlying stuff: we have client tools that acquire the certificates, get them signed, and do the configuration of your server software for you, with decent defaults at least. There has also been pressure from browser vendors. Google and Mozilla (I'm not sure if Apple does it, but at least those two) currently show users warnings on sites that are plain HTTP and have login forms or payment forms of some kind. Browsers are also limiting the media APIs, meaning things like microphone and webcam access, to secure origins. That drives developers to bring HTTPS in from the get-go. So: Let's Encrypt removing the financial constraints, the ACME protocol behind it helping with automation, the clients doing the configuration, and browser vendors pushing people towards HTTPS through end-user questions and requests. When an end user sees "not secure" on a login page, they are prone to ask why.

So, about the actors here. You are most likely familiar with Let's Encrypt. It's a project by the Internet Security Research Group, and it gives you short-lived certificates, 90 days, because of the problems with revocation and so on; short-lived certificates are a good thing. Only domain-validated certificates: there are no extended validation or organization validation certificates. The whole purpose is just to get the web encrypted, nothing more. There are some rate limits in place, so that you can't request 2,000 certificates from one machine in a short time frame, but the limits are high enough that nobody should actually hit them. And if you have a legitimate reason to need more, there is a system for exceptions for large ISPs and so on.

So, the importance of transparency and automation; this acts as a prelude to talking about ACME, the protocol behind all this. Why do automation and openness actually matter? What's wrong with the old ways of doing things? The major CAs that have been around for a long time have their own processes, and they are audited regularly, I think once a year or once per two years, by third parties. The CA/Browser Forum requires the audits. So what could go wrong?

What could go wrong, part one: 404. GoDaddy's domain validation for certificates was affected by a software bug that was in place for half a year. Their code picked the validation string from any part of the response body, which meant that if the web server was configured in a way that echoed the request URI in its 404 page, pretty much anyone could get a certificate signed for that domain. That's pretty bad. They revoked almost 9,000 certificates as a precautionary measure, because those were potentially mis-issued. And this is an example of a response that would actually have validated, if "my_valid_token" here were the real validation token: whoever requested a certificate for this domain would have had it signed and issued.
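Since the slide itself isn't reproduced here, a hypothetical reconstruction of that failure mode: the CA requests a URL containing the token over plain HTTP, and a server whose 404 page echoes the requested path hands the token straight back, so a body-wide search for the token succeeds. The path and token below are made up for illustration.

    GET /validation/my_valid_token HTTP/1.1
    Host: victim.example

    HTTP/1.1 404 Not Found
    Content-Type: text/html

    <html><body>Error: the page /validation/my_valid_token
    was not found on this server.</body></html>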
So this was GoDaddy; of course it's fixed, and life goes on. What could go wrong, part two: OCR. The .eu, .be and .at registries don't provide the admin email address in the WHOIS information; instead, they show it as an image in the web-based WHOIS search. And some CAs use email as a validation channel for a domain: they pick the admin email from the WHOIS response and send a validation email to that address, and after a few steps the certificate gets signed, because whoever requested it was able to prove they control the domain. But, as said, these registries don't provide the admin email as text. So Comodo, doing automation, used OCR to dig the actual textual address out of the image. What could go wrong, right? There is this large ISP in Austria called A1 Telekom, and the researchers Florian Heinz and Martin Kluge registered the lookalike domain "Al Telekom", with a lowercase L, and requested a certificate for the real a1telekom.at. The Comodo OCR read the A1 Telekom address in the WHOIS image as "Al Telekom", the validation email went to the researchers' domain, and they got the certificate signed. And that's a pretty big thing, because this is a large ISP, right?

What could go wrong, part three. Symantec had this registration authority program, which meant that third-party companies could issue certificates for their customers using the Symantec infrastructure. That's not a problem in itself, but the responsibility still lies with Symantec, and they weren't policing their registration authority program members that well, so they had to answer for the members' mistakes, in a way. There was this Korean company that signed a lot of certificates they shouldn't have, like a test certificate for google.com and things like that, and they didn't abide by the CA/Browser Forum rules regarding the ownership information in the certificates and so on. Potentially, 30,000 certificates were affected. But there is this slight problem: Symantec is too big to fail, or at least was. In 2015, 30% of the certificates in the world were issued by Symantec, so you would break way too many things if you distrusted the whole CA. So instead, Google took the step of lowering the maximum accepted validity period for Symantec certificates with every Chrome release; I think it's going to go down to nine months as the maximum. And I think they also stopped recognizing Symantec's EV certificates completely.

And that's not all with Symantec. Just a few weeks ago, Hanno Böck registered two test domains and got certificates for them from Symantec; he also got certificates from Comodo, but Comodo didn't fall for what came next. He created fake private keys, posted them to Pastebin, and reported to the CAs that the private keys for these certificates had been compromised and posted online, so would they please revoke the certificates. Of course, those private keys weren't the real ones; he wanted to see whether the CAs would actually validate the cryptographic authenticity of the private keys before revoking the certificates.
So Symantec revoked the certificates, and then tried to hide the reason, or did not answer directly, when he contacted them in the role of the original certificate and private key owner, asking why the certificate got revoked; after all, the real private key was not in the wild, so they didn't actually have valid grounds to revoke it. Be sure to check Hanno's talk; I think he's talking about fuzzing tonight here at SHA as well.

So, did you notice: these examples involved three CAs, Comodo, Symantec and GoDaddy. Those are actually the biggest commercial CAs out there, and all of this happened during the last two years. I think that's good reasoning for why we need openness and automation. These are all human errors, in a way, or errors in the process, and all of these companies have been audited by third parties, so it should all be OK, right?

So, there's one more. What could go wrong, part four: StartCom and WoSign. StartCom was a pretty nice CA; they had been handing out free certificates for a long time. But they got acquired by WoSign, a Chinese CA, and they tried to hide the fact that they had been bought. That's a whole different story, but they had a few technical flaws as well. They allowed the use of any port for domain validation, which in reality means that in a shared hosting environment, any unprivileged user who is able to open sockets on unprivileged ports above 1024 was able to validate any domain that pointed to the IP address of the shared hosting server in question. They also failed at validating certificates with multiple names (SAN certificates): if your certificate request had multiple domains in it, they would validate only the first one. This actually resulted in Stephen Schrauger getting a certificate for github.com, because the first domain in his request was an old-style GitHub Pages domain, username.github.com, and that was the only one that got validated, even though the certificate request had github.com, the main domain, in it as well. He also got a certificate for his university's main domain the same way. You were also able to add new domains to the certificate request after the validation had already happened. Pretty bad. And they issued backdated SHA-1 certificates after the cutoff date when no CA was allowed to issue SHA-1 certificates anymore, for obvious reasons: SHA-1 is broken.

But automation, done correctly, would fix all of these problems, and the ACME protocol is the answer to that. ACME stands for Automated Certificate Management Environment. It's currently in draft phase at the IETF, in its seventh iteration, and will become an RFC at some point. It talks signed JSON over HTTPS with the CA, and it has a lot of features that a commercial CA would need as well, so you could be a commercial CA and use ACME automation. ACME has different ways to validate your ownership of a domain. The HTTP validation means that you place a resource in your webroot for the CA to request; if the CA gets the expected token back, your ownership of the domain is confirmed.
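As a rough sketch of what that requires on the server side (the webroot path here is my assumption, and normally your ACME client writes the token file for you), the only thing that matters is that the file the client wrote is reachable over plain HTTP on port 80:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // The ACME client writes the challenge response to
        // <webroot>/.well-known/acme-challenge/<token>.
        // Serving the webroot directly makes that file reachable
        // for the CA's validation request.
        http.Handle("/", http.FileServer(http.Dir("/var/www/html")))
        log.Fatal(http.ListenAndServe(":80", nil))
    }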
Then there is TLS-SNI, which means you create a self-signed certificate containing the validation token, and the CA does the same kind of thing: it resolves the DNS and connects to the IP address of your domain. Let's Encrypt is prioritizing IPv6, by the way: the AAAA record first, and if that's not found, the A record, and so on. Currently Let's Encrypt is still doing this from one endpoint, but it's moving towards using multiple request endpoints to avoid problems with DNS poisoning and so on. So, that's TLS-SNI. Then there's DNS validation, which is basically just adding a TXT record with the validation token to your DNS zone, under the magic subdomain _acme-challenge.yourdomain.tld. And there is an out-of-band challenge as well, which is not used by Let's Encrypt; it's more of a manual process.

There are many client implementations that use ACME to request and get certificates from Let's Encrypt, which is currently the only CA using ACME, and a lot of the clients are really good. Then there are a few awesome complete integrations. Eventually, of course, we want the HTTPD itself to handle the whole HTTPS thing from the get-go: you would just tell it which domains you want to serve to the users, and the HTTPD would handle the rest. There's Caddy, which does this already. It's a really good HTTPD written in Go by Matt Holt and, it being an open source project, a lot of contributors as well. It does HTTPS by default, and the configuration is super simple. Then there is a new project that is not production-ready yet but moving fast, Apache mod_md, which will do somewhat the same thing: you have to add one new configuration parameter, but that's all. And that's where we actually want to be at some point: every HTTPD handling this for the user. There's really no reason to require technical knowledge of the inner workings of TLS and certificate requests; that's something that should just work, something users and administrators should not need to worry about today.

So yeah, I'm going to talk a bit about Certbot, because that's the client I'm most familiar with. It's an ACME client by the EFF. It does the same thing as the other clients, of course: it creates the certificate requests and gets them signed by the Let's Encrypt CA. But I think it's the only client with more advanced HTTPS configuration management as well. Additionally, if you want, it will configure your server for you. Currently that works for Apache and Nginx, and there is ongoing development so that it won't only configure your HTTPD but other server software as well, like email servers and so on. And it gives users sane configuration defaults; not perfect, of course, but it's always a struggle between backward compatibility and security in this domain.

So yeah, of course, nothing goes without issues. We are moving rapidly towards the fully encrypted web, but there are still problems. I think the main one is the lack of the HSTS header. HSTS pretty much means that upon the first response, the server tells the browser: only communicate with me using HTTPS for a time period that's also given in the header. If that header is not there, and your domain is not on the HSTS preload list shipped with the user agents, downgrade attacks are possible on that first request. You might know sslstrip, which gives a man-in-the-middle attacker the ability to serve only unencrypted pages to the actual client, while talking HTTPS to the actual server where needed, so it doesn't visibly break anything.
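A minimal Go sketch of setting that header; the one-week max-age is my own cautious assumption, not a value from the talk:

    package main

    import (
        "log"
        "net/http"
    )

    // hsts tells conforming browsers to talk to this host over HTTPS only,
    // for max-age seconds after they last saw the header.
    func hsts(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Start with a short max-age (one week here) and raise it only
            // once you are sure nothing on the site needs plain HTTP.
            w.Header().Set("Strict-Transport-Security", "max-age=604800")
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over HTTPS\n"))
        })
        // Certificate and key paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", hsts(mux)))
    }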
So, a bit more about HSTS: it's a cached value in the browser, so there are many situations where it could actually break your existing site when you're migrating from HTTP to HTTPS, and that is also one of the reasons why it has not been widely deployed yet. You can't just tell people to add the HSTS header to their web server configuration, because that could potentially break their site for however long they set the max-age to be. This slide is a demonstration of the effectiveness of a bad HSTS deployment: it is effective, but things can go horribly wrong. That's why we can't recommend it for everybody. Good intentions, yeah.

OK, another issue today is broken revocation. It's completely broken. Certificate revocation lists are not used, because browser vendors value UX over security: they cut the number of requests this way, so the user agent doesn't have to do a revocation check for every domain. Google uses CRLSets, and Mozilla has a similar thing in place, I actually can't remember the name. It's basically a list of high-priority domains that have had their certificates revoked. So if you're not a big player and your private key gets compromised and you revoke your certificate, browsers will most likely be happy to accept the revoked certificate and make it work just like that.

We do have tech in place for this: there's a certificate extension called OCSP Must-Staple. It makes the HTTPD itself fetch signed timestamps from the OCSP server and staple them to the handshake, so when the server sends the certificate to the client, the client can verify that it comes with a recent enough signed timestamp. That pretty much handles, or at least shortens, the impact of broken revocation. But the server implementations are somewhat bad. HTTPDs usually work event-based, so the moment a client makes a connection is the only point when the server checks whether the stapled timestamp is recent enough, and if it's not, it then tries to get a new one signed. This results in longer round-trip times for the end user, and if there are issues in the connection between the HTTPD and the OCSP server, it can leave the site completely unusable for clients until the server gets a connection again and obtains a new timestamp.

So yeah, the future: where are we heading? I already talked about TLS 1.3; that's going to be good. But there's more. The browser vendors are going to pressure administrators through the end users even more. This image shows the current state and how things will look after Chrome 62 gets released. Currently, Chrome only shows the "not secure" notification if your site has a login form or a credit card or payment form of some kind, but after Chrome 62 it will show "not secure" on every site that has any kind of form. So that's good. And there will be a "not secure" notification for every HTTP site if you're running incognito mode. You can actually turn this behavior on today by going to chrome://flags; there's an option called "mark non-secure origins as non-secure".

So yeah, there's still more. There are some new requirements from the CA/Browser Forum: starting next month, CAs are forced to respect the CAA record from DNS. A CAA record allows the DNS administrator, the domain owner, to restrict which CAs are allowed to issue certificates for the domain, so you can restrict the issuers to whatever CA you like.
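For illustration, a CAA policy in a zone file could look like the following; the domain and contact address are hypothetical, "issue" names the only CA allowed to issue, and "iodef" gives a contact for violation reports:

    ; Only Let's Encrypt may issue certificates for example.com.
    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"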
Then there will be Certificate Transparency requirements for all certificates that get signed after April 2018. This is good; Certificate Transparency is awesome. It will be available in user agents as an option as well: the user agent can check whether the certificate comes with a signed certificate timestamp and require one. You can try this soon-ish. Google is going to ship a Chrome version, I don't actually know which version, that gives administrators the ability to add a new HTTP header called Expect-CT, which tells the user agent to actually require a valid signed certificate timestamp for the certificate.

There are going to be wildcard certificates issued by Let's Encrypt starting January 2018, so in half a year we will be able to get wildcard certificates from Let's Encrypt. Individual domain-validated certificates are better for obvious reasons, but of course there are still use cases where you need a wildcard. You will be able to request wildcard certificates only through the DNS challenge, and that's because if the HTTP challenge were allowed for a wildcard certificate, then just by validating some random subdomain you could claim ownership over every subdomain, including ones that might be managed by other entities. So, only the DNS challenge for wildcards.

The DNS challenge is pretty good, but combined with automation it has some downsides as well. If you want to automate DNS challenges, first you need to use a DNS server that has an API, and second, you have to store the API keys on your box. So if any of the boxes that use DNS challenges gets compromised, your whole DNS zone is compromised, in a sense, and that could mean losing your whole digital identity as well: whoever holds your DNS zone can pick up your email, every email sent to that domain, by changing the MX records and so on. So that's pretty bad, something to avoid. Luckily, ACME doesn't mandate doing it that way, and Let's Encrypt follows CNAMEs. This means you could use some throwaway domain for the DNS validation by pointing CNAMEs at that throwaway domain. Of course, you most likely won't be registering a second domain for every actual domain you want to use, and if you use one central throwaway domain for validation, a compromise of that domain could let an attacker request certificates for every domain that points to it.

So, some ten months ago I created a small piece of software called acme-dns. It's a simplified DNS server that acts as a sub-delegated DNS zone, and it restricts the updates done through its API to TXT records only. The way you use it: you first create an account, which is done with an HTTPS POST request. It generates a random subdomain, a random username and a random password. There are no password reset options or anything like that; if you lose your credentials for one reason or another, you just get a new account. So you get your unique subdomain, and you point the magic _acme-challenge subdomain of your actual domains at it. And it has a few features that raise its level of security: you can define IP ranges that are allowed to make the update requests for a given subdomain, and so on. It's written in Go. And while with many DNS servers you would need to wait for an update to propagate through the secondary name servers and so on, this one has a TTL of one second.
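To make the API concrete: going by the acme-dns README, the record update is one authenticated POST request; the endpoint URL, credentials and token below are placeholders, and in practice a Certbot manual-mode hook script does essentially this.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // The credentials and subdomain come from the one-time /register
        // call; the TXT value is the validation token handed out by the CA.
        payload, _ := json.Marshal(map[string]string{
            "subdomain": "d420c923-bbd7-4056-ab64-c3ca54c9b3cf",
            "txt":       "___validation_token_received_from_the_ca___",
        })
        req, err := http.NewRequest(http.MethodPost,
            "https://auth.example.org/update", bytes.NewReader(payload))
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("X-Api-User", "eabcdb41-d89f-4580-826f-3e62e9755ef2")
        req.Header.Set("X-Api-Key", "pbAXVjlIOE01xbut7YnAbkhMQIkcwoHO0ek2j4Q0")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("update status:", resp.Status)
    }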
So with that one-second TTL it's always fresh, anyway. And it's only one instance, anyway. I'll show you a short video about the inner workings of this. If you can see it, I don't know, it's a bit small. No, you can't see it; I won't show it. It shows the functionality: from the server you get the account, so your subdomain and your credentials, you point the CNAME at the subdomain, and you use the credentials with the HTTP API to do the actual TXT record updates. If you use Certbot to do the updating, it's a pretty simple script that you run through Certbot's manual mode, and this is an example of such a script here: it's pretty much just one POST request with the data that's needed. You will, of course, need to self-host the acme-dns sub-delegated DNS software, because you wouldn't want to hand the authority to request certificates for your domains to a third party. But this is good for a bit larger infrastructures. So yeah, I think that's pretty much it.

All right then, thank you very much for your talk, Joona. And we still have time for a few questions; please line up at the microphones. And we have a first question at the first microphone.

"So, does acme-dns..." Please get a little closer to the microphone. "Is this better? Does acme-dns support DNSSEC?" No, no. "Oh, that's a bummer." And it's a sub-delegated DNS anyway; it's just for the challenge records. "Yeah, OK. And the other one I see you've listed, Boulder. Is that the server component for ACME?" Sorry, Boulder? "Yeah." That's the server part of ACME, yes. And it's all open, and written in Go as well. "Thank you."

Any other questions? Don't be shy. Or actually, to answer your earlier question about acme-dns: it does support DNSSEC in the sense that you are able to define static, in-memory DNS records in the configuration; those don't have any kind of update API, you define them one-shot.

"So what is the future in the field of the CA business? Let's Encrypt will be the largest one, so what space is left for the other CAs? Because you are doing everything better than them, and you are giving it away for free. It looks like you are winning the whole business; Let's Encrypt will be the largest and the easiest to use." Yeah, that's actually a good question that I don't know the answer to. The CAs will have to renew themselves, in a way. If the business model is broken, what can you do? "Great work."

"OK, a question related to that one. If Let's Encrypt becomes sort of too big to fail, and that could happen, is anyone else thinking about using the same infrastructure to have a similar CA, but with a different trust root? Of course not the same infrastructure, but the same ACME and so on, reusing everything." Yeah, actually, there was an initiative by StartCom before they got distrusted. They released a client software and automation system called StartEncrypt, which had most of these horrible problems as well. When they got a lot of backlash for it, they announced that they were going to move to ACME. But the rest is history: they got distrusted, though not because of that announcement. "Yeah, thanks."

So yeah, thank you. OK, then please, one more applause. Thank you, Joona.