Okay, so I don't want to put any pressure on Ilya, but one of the things I learned in scheduling the Velocity program is to always put really good speakers first. Really good speakers and important topics as well. I don't know if you've read the description of Ilya's talk. If you haven't checked it out, it's kind of humorous, but it gets the point across: HTTPS is the future, and we all need to get on board. It's really important, and HTTPS has this reputation of being slow and bad for performance, but it doesn't have to be that way. If we all just learn more about it, learn how to look at it and how to apply the appropriate fixes, we can make HTTPS fast. So it's great to have Ilya here to explain that to us. Ilya Grigorik is a developer advocate at Google, works closely with the Chrome team, and just recently produced a fantastic book, High Performance Browser Networking. I really recommend you check that out. You're going to be doing a book signing? I think so. Yeah. So please help me welcome Ilya Grigorik. Awesome. Thanks, Steve. Thanks, everybody, for coming. I think Steve already outlined the premise for this talk, which is that HTTPS is incredibly important. It's the way forward for the web. And one of the most common pieces of feedback I get when we bring up the topic of HTTPS is: but performance. Isn't it really slow? The answer is, of course, it adds extra overhead, but it turns out that when you actually dig in and try to unravel all the different pieces of it, you'll find that it's an unoptimized frontier. There are certainly many ways you can get it wrong, in which case it will be very slow. But if you do the right things the right way, it can actually be quite performant. So first things first: why do we care about HTTPS?
And oftentimes we think about the security, which is the encryption part, but there are two other pieces that often go unnoticed. There's authentication, which is: I'm talking to the site that I'm actually intending to talk to. So if it's your bank, it's really your bank, not somebody else. There's data integrity, so the data is not being modified or manipulated by somebody else. And finally there's encryption, which provides the privacy angle of it. Because of these three properties, within Google we actually have a goal, which we call HTTPS 100, which is an explicit goal to get all of our services on HTTPS all the time. This is an internal goal and we're very rapidly making progress. Of course, many of our services started doing this way back; think of Gmail, circa 2007, when it went HTTPS-only. But we still have some services that we need to bring onto HTTPS or enable HTTPS on. And this applies to all data being transferred within data centers and outside of data centers, all the data we store for your accounts, and all the rest. So this is something we strongly believe in at Google. And if you've been watching the news, you've probably also noticed that for search in particular, the search team recently announced that we are using HTTPS, that is, whether your site supports HTTPS, as a ranking signal. It's a lightweight ranking signal, and, as the team put it, we may decide to strengthen the signal, how much it affects the ranking, and all the rest. But the premise is that a secure site provides a better experience to users, because it provides a few more guarantees about the quality of the content: we know it's not being intercepted, among other things. So, something to consider. And from that perspective, one thing I want to leave you with is that there's really only one performance problem with TLS, and that is that it's not used widely enough on the web.
Sadly, the majority of the web is not TLS-enabled, and everything else can, in fact, be optimized. There is work we can do today to fix existing problems, and there's also a lot of work being done for the future, for things like TLS 1.3, which further optimizes performance from all kinds of angles. So I want you to do two things, basically. If you don't learn anything else from this talk in particular, do these two things. First, go to the SSL Labs test and just punch in your site, if you have HTTPS on your site, and run it. It runs a battery of tests for things like which cipher suites the site is using, what the key strengths are, and all the rest, and it gives you a score at the end, like an A, B, or C. You should have a pretty good score, and we'll talk about this in a second. The second one is optimizing performance, and the best tool I have for that today is to just open up WebPageTest; we're going to talk quite a bit about latency. You want to select a profile like the 3G one with a 300-millisecond RTT. This will make the handshakes look that much longer on the timeline, which makes them very easy to diagnose; that's the primary reason. So let's talk about the first one. I'm actually not going to spend a lot of time talking about which protocol or which cipher suites you should support. The performance of different protocols and cipher suites does vary quite a bit, so if you're pushing a ton of traffic, like you're serving static content and saturating your disks and your network cards, then you probably want to investigate which cipher suites and all the rest you want to support. There's a really good book that was recently published by Ivan Ristić, who is also the author of the SSL Labs test. It goes into great detail for things like: if you need to support older clients, these are the cipher suites you should support.
If you can limit yourself to newer clients, then this is the list. And the good news is that the stronger, more up-to-date ciphers are actually faster than the old ones. So being aggressive in that respect is actually good for performance, and that's great. You'd think that stronger crypto requires more work; that's not necessarily the case. So I'll just leave that here, and I encourage you to check it out. If nothing else, run the SSL Labs test. We're going to talk a bit more about the latency part, which is how you set up the server bits and what the common pitfalls are. So first things first: we're talking about networking, and when you talk about networking, you have to start with things like optimizing your kernel, your TCP stack, and all the rest. You want to get a couple of things right. First of all, you should be running the latest kernel, something like Linux 3.7, because quite a few improvements have landed for TCP performance. If you're running something prior, literally just upgrading your kernel to 3.7 will give you a fairly significant boost in performance right out of the gate, without any modifications whatsoever to your application or anything else. Of course, you also want to be running the latest OpenSSL version, for a variety of reasons, security being the most important one, but performance has actually improved significantly as well. For example, if you use the default OpenSSL installation or package that comes with your Ubuntu server or whatnot, chances are it's an older version. If you download the latest one and run the speed benchmarks, you'll frequently find a significant difference, sometimes a difference of 2x, where the newer versions just perform that much better. So you probably want to recompile your servers against the newer version, and all the rest.
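One quick way to see which OpenSSL build your stack is actually linked against, sketched here in Python, since its ssl module exposes the linked version:

```python
import ssl

# Report the OpenSSL build this Python runtime is linked against.
# If it's an old release, recompiling your stack against a newer
# OpenSSL can yield the performance wins mentioned above.
print(ssl.OPENSSL_VERSION)       # full version string, e.g. "OpenSSL 3.x ..."
print(ssl.OPENSSL_VERSION_INFO)  # numeric version tuple
```

The same idea applies to whatever runtime terminates TLS for you: check what it was compiled against, not just what the OS package manager reports.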
And then, of course, whatever server you're using, use the latest build, because it probably has some performance improvements as well. So let's talk about two things. Computational costs: that's usually the first topic that comes up. We're doing crypto, so obviously we need to spend more CPU cycles to actually encrypt the data and do the handshake. And there are two parts to TLS, or HTTPS in particular. First, we have the handshake, the public/private key negotiation, where we negotiate the symmetric key. That part is the expensive part. If you run a benchmark on your hardware, you'll probably find that each handshake is in the neighborhood of a millisecond, depending on your CPUs and all the rest. For the symmetric crypto, which is what's actually being used to encrypt the data in transit on both client and server, you're looking at something like 150 megabits per second for SHA-256. So once again, if you're saturating your 10-gig NIC, you need to start thinking about how to optimize for this sort of thing. But perhaps the most important thing to take away is that the most expensive part of TLS, computationally, is the handshake. So anything we can do to reduce the number of handshakes is going to make a huge impact on your operational costs for TLS, and on performance as well, as we'll see. And second: don't trust any benchmarks that you see. Run them on your own hardware. It's very simple to do. Upgrade to the latest OpenSSL version, because performance has improved significantly, and then you can just run the speed tool. OpenSSL comes with a toolkit of different tools, and there's one that's specifically designed for benchmarking.
So you can just run openssl speed with sha, ecdh, or whatever suite you're using, and figure out, on your particular hardware, what throughput you're going to get per core for handshakes and symmetric crypto. So do that. And this is an important point, because historically TLS has gotten a reputation of being really slow and needing special hardware and offloading and all this stuff to handle the crypto. That is no longer the case. It was certainly true perhaps a decade ago, when TLS was still young and our CPUs didn't have support for the right operations, but it's no longer true. For example, Facebook, and this is a quote from Doug Beaver at Facebook, is running TLS at 100%, and everything is done on commodity CPUs: no offloading, no special load balancers or anything like that. Same thing for Google. We've had TLS enabled on things like Gmail since 2007, and likewise we don't use any dedicated or special hardware to terminate TLS; it's all purely in software. And the last quote I'll show you is from Twitter, and I think this is a very important one. Jacob is saying that HTTP keep-alives and session resumption mean that most requests do not require a full handshake. If you think back to what I said earlier, the most expensive part of TLS, computationally, is the actual handshake. So making sure that you get the most out of each connection, and that you're not opening too many connections, is very important. Let's talk about that. The first tool in your toolkit for optimizing this is, of course, keep-alives: just don't close the connections, and reuse the same connection. The second one is TLS session resumption. What happens here is that the handshake is expensive, so we have a shortcut where we can say: remember the parameters that we negotiated in the last session?
Then, when you come back, you just send me those parameters and we can reestablish the connection without doing the expensive asymmetric crypto part. So this is TLS resumption. And the other benefit is that it also removes a full round trip from the negotiation of the TLS connection. By default, a textbook TLS handshake is two round trips; with resumption, it comes down to one round trip. So that's a big benefit. And we have two mechanisms for TLS resumption, an older one and a newer one. Session identifiers were the first mechanism introduced into the protocol. The idea here is that it's basically like a cookie: the server remembers the negotiated parameters and gives the client an opaque string to remember. Then, when the client wants to resume a session, it sends that ID to the server and says, hey, you gave me this opaque token, here, have it. The server has to look it up in its cache, and then they can reuse those parameters. So that works. Session tickets are a variation on this where the negotiated parameters are actually encrypted by the server and sent as an opaque blob to the client. The client can't actually decrypt it, but the data is stored on the client as opposed to the server. And for operational reasons, this may actually be the preferred route, because if you think about it, session identifiers require some shared cache if you're running more than one server. Now you have to synchronize those things, keep them in the cache, and all the rest. With session tickets, all the state is stored on the client. You still need some coordination for things like the shared ticket key, which you still need to circulate to each server, but it lessens the burden on the servers, especially if you're running a cluster. And there's a very simple way to test whether your server supports this today.
If you have HTTPS enabled on your site, or against any HTTPS site really, you can use openssl s_client to connect to a particular site. What you're looking for are these two tokens. There's a Session-ID: this is the opaque token I told you about. I should also have said that you can actually support both: you can enable both session IDs and session tickets, and there are well-defined mechanisms for how the protocol should behave when you enable both, so that's a safe thing to do. So here you can see a session ID token and also a session ticket, which is just an opaque blob of data to the client. If you see this coming back from your server, good news: it's working. The other thing I'll mention, and we'll come back to this later, is that for tickets you can specify a timeout for how long the ticket should persist. So when I send you the ticket, I can say: keep this for half an hour, keep it for a day, keep it for two days. And it's up to you to determine what a relevant timeout is. For session identifiers, there is no timeout, because the state is stored on the server, so it's up to you to figure out: how big is my cache? Am I evicting sessions too frequently? Are they persisting, and all the rest? So this is more of an operational challenge now. And for obvious reasons, you want to optimize session reuse, because it allows you to eliminate the expensive part of the TLS handshake. This is just a summary of what we talked about: session identifiers require a shared cache, so it becomes an operational challenge for you to figure out whether you're getting a good cache hit rate and all the rest; tickets are stored on the client. One gotcha, or not a gotcha, but a disclaimer you should be thinking about when implementing this stuff: with either of these strategies, you need to be very careful about how this data is stored.
For example, when you're sharing session identifiers between multiple servers, do you have multiple distinct applications reusing the same cache? Because then you're mixing private data between applications, and security-wise that may be a bad practice. This is where you also need to think about the security requirements of your applications. For session tickets, you only need the shared secret key between the servers, so how do you make sure that the secret key is not logged somewhere, and that it's distributed in a safe manner between all the servers? If you want to read more about this, there's a link at the bottom (I'll share the slides at the end) to a post by Adam Langley, who's a security engineer at Google. He provides some good guidance on things to think about when deploying this stuff in a safe and secure manner. So, some takeaways to try at home on your own service. Do you know the answers to these questions if you have HTTPS enabled on your site today? How big is your cache? What is your cache hit rate? There are some good tools: for example, if you're using Apache, mod_status actually provides some debug output that will tell you exactly what the hit rate on your SSL cache is. For something like nginx, you don't have an existing tool, but you can record in your logs whether the session was resumed, then process those logs and figure out your resumption ratio and all the rest. For ticket timeouts, most servers set a very conservative value, about 300 seconds, which is what the OpenSSL documentation recommends, and which is why most servers default to that value. But in practice, if you look at all the large sites, Google, Facebook, Twitter, we all use a value of about one day, and we actually recommend that as a best practice. So you can safely increase that timeout to a much higher value.
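For reference, here's roughly what those resumption knobs look like in nginx; this is a sketch, and the cache size and the one-day timeout are just the values discussed above, not universal recommendations:

```nginx
# Server-side cache for session identifiers, shared across worker
# processes (roughly 4,000 sessions fit in each megabyte).
ssl_session_cache   shared:SSL:10m;

# Session tickets store the encrypted state on the client instead.
ssl_session_tickets on;

# How long sessions and tickets remain resumable; the large sites
# quoted above use about a day, not the common 300-second default.
ssl_session_timeout 1d;
```

If you run more than one server behind a balancer, you'd also need to share the session cache or distribute the ticket key between them, as discussed above.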
Maybe it's different for your particular site, but 300 seconds is definitely very, very aggressive. So, we already talked about latency in the context of using session resumption to eliminate a full RTT. But you also need to do an in-depth analysis of your handshake, because there are many things that can go wrong in practice. The textbook version of the TLS handshake is two RTTs, and I'm willing to bet that if you run a WebPageTest analysis and look at the Wireshark trace, or just look at the bars here, you will find that your TLS handshake is taking way longer than you would have expected. So at the top here we have a regular non-encrypted session: DNS, TCP, I send the HTTP request, I get the response. There are four RTTs in there. This is with TLS enabled, and this big purple bar is the TLS negotiation. And I know, because I set my round-trip time to 300 milliseconds, just eyeballing it, that this is taking way longer than two round trips. So there's something wrong in there. Ideally, what you should be seeing, once you've done all of the optimizations, is this: one extra RTT for establishing your TLS connection. If you're seeing anything above that, you have room for improvement. So, we've talked about this a few times: the textbook TLS handshake requires two round trips. The client says, I need your certificate. The server provides the certificate. The client says, OK, your certificate is valid (assuming it validates), and I want to use this cipher suite. The server confirms the cipher suite, and then they can start exchanging application data. So the full handshake takes two round trips. This is TLS 1.2. We can actually optimize it: there's an optimization, proposed by Adam Langley, who's at Google as well, called TLS False Start.
With TLS False Start, the observation is that we don't actually have to wait two round trips to start sending application data. As a client, after I've told you what cipher I want to use, I can start encrypting data with it optimistically, assuming that you will accept that cipher. So TLS False Start basically changes when the data is sent: right after the client chooses its cipher, it starts sending encrypted application data, and the server then confirms and can return a reply. This gives you a one-RTT handshake for non-resumed sessions. So we can do two things: one-RTT resumed sessions, and one-RTT new sessions as well. This is why I said earlier that any TLS negotiation should have at most one RTT of overhead. Now, in practice, we've tried rolling this out by default in Chrome a couple of times, and we ran into issues where some older servers or load balancers misbehaved when data was sent before the server confirmed the cipher suite. So we couldn't enable it by default, and because of that, False Start is actually an opt-in feature. Basically, this is our way to protect against bad old servers. When we perform the negotiation with the server, we check for a couple of things. In Chrome and Firefox, we check whether the server supports the NPN or ALPN extension, which is just an extension to TLS that allows us to negotiate different protocols. And second, we require forward-secrecy cipher suites. If your server is capable of those two things, we will use TLS False Start, and the client will do a one-round-trip handshake. Safari does not require NPN, but it requires forward secrecy. And Internet Explorer uses kind of an optimistic strategy: it assumes you support False Start, but keeps a blacklist of certain bad sites where it won't use it. So between all of these, there's one common thing to do to get the best coverage across all the clients.
You need NPN or ALPN support in your server's TLS stack, and you need to enable forward secrecy. If you have those two things, you will get consistent one-RTT handshakes across all the different browsers, which is definitely something you want on your site. So, just a summary of what we talked about: you can deliver a reliable one-RTT handshake for TLS. You don't have to incur two RTTs or worse, and for many sites it's much more than two RTTs. Speaking of unoptimized handshakes, when I debug different sites, services, and servers, I find common problems where it's not only that it takes two RTTs; sometimes it takes way more. I've seen it as bad as five RTTs. And it's usually a combination of several things. One is missing intermediate certificates, which we'll talk about in detail. Then there are revocation checks for the certificates: you gave me a certificate, and I want to know if it's still valid. Has it been revoked? Was there a breach, and all the rest? The next one is large TLS records. And the last one, and this is, I guess, not particular to TLS: after all of this damage has been done, the server tells me that I'm actually connecting to the wrong place, so please go somewhere else and repeat the whole thing over a few times. So let's talk about these in detail. First one: missing certificates. When the TLS handshake is done, you have a couple of things. The server sends the certificate for the site, which is signed by your CA. Today it's not best practice for CAs to sign site certificates with their root certificates; instead there's an intermediate certificate, which you typically need to include alongside your site certificate.
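As a concrete sketch of what "include it alongside" means in practice, here's the nginx form; the file names and paths are hypothetical:

```nginx
# fullchain.pem is the site (leaf) certificate followed by the
# intermediate certificate(s), e.g. built with:
#   cat example.com.crt intermediate.crt > fullchain.pem
ssl_certificate     /etc/nginx/certs/fullchain.pem;
ssl_certificate_key /etc/nginx/certs/example.com.key;
```

The order matters: the leaf certificate comes first, then each intermediate up the chain.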
If you forget to do that, the client basically has to stop, go fetch that certificate from the CA, and only once that has finished can it complete validating the chain and resume the handshake. As you can imagine, this can add quite a bit of overhead for your client, and because of that, you want to include the intermediate cert as part of your chain. You don't need to include the root certificate, because we already have those; and just because you included a random root, we're not going to trust it. We just check whether the browser or the operating system already trusts it. So that's number one. Number two: great, we've received your certificate, we have your CA certificate, but is the certificate still valid? It turns out that different browsers have a variety of different rules for when they will do these checks. For many interesting reasons that we won't go into here, revocation checks in general are mostly a broken mechanism, but they're still useful. For example, in Chrome we don't do online revocation checks unless it's an EV certificate. Other browsers have different rules; Firefox, for example, does revocation checks on all sites. So the gotcha here is, let's say we have a TLS handshake. Here I'm connecting to Wells Fargo, which is a bank, and it's an EV certificate. In this case, Firefox says: OK, I got the certificate, but I'm not sure if it's still valid; maybe it has been compromised and revoked. So what I'm going to do is connect to VeriSign and send them a query asking whether this is still a valid certificate. If they tell me it's OK, then I can continue. And that's what happens here. And it just so happens that Wells Fargo actually uses two different providers, two different certificates, so there's another OCSP check that goes against GeoTrust. So this is quite expensive.
And the way to work around this, to eliminate this problem and also to provide a consistent experience for all certificates, not just EV, is to use OCSP stapling. The idea here is that instead of forcing the client to go fetch the status, the server is responsible for going to the CA and asking: what is the current status of this certificate? The server gets a signed response from the CA, and that response is then stapled to the certificate; it just comes alongside it. The client receives both the certificate and the stapled response, can validate that the chain is correct and that the certificate hasn't been revoked, and can continue with the connection. So this eliminates that gap. In fact, if you ever see these weird gaps in your waterfalls, where you have a TLS connection and then a period of inactivity, chances are this is what's happening, so it's something you should look into. One quick way to check whether your server has this configured correctly is, once again, openssl s_client. When you append the -status flag, you're asking the server to also give you the status of the certificate; in other words, you're asking for the OCSP staple. And here you can see, for some particular site's certificate, that the stapled response from the CA is "successful", meaning it's a valid certificate, yada, yada, yada, so you can proceed without checking with the CA. The next one is redirect chains. This is a really, really painful thing for a lot of HTTPS sites. We have a lot of unnecessary redirects, and they become incredibly expensive with TLS, because of course each one incurs the TLS handshake overhead, which takes extra RTTs, and it also consumes a lot of resources on your site. So it's very typical today to say: oh, you're visiting the HTTP site, let me redirect you to the HTTPS site. Oh wait, you're coming to the non-www site, let me redirect you to the www site.
Oh, and you're also on mobile? Well then, you should be heading that way. And then you repeat the same cycle all over again. I have seen cases where large sites incur three or four redirects just to get to the final destination, and with TLS that can easily add up to a second, especially on mobile phones. In fact, I'm willing to bet that even if you've consciously thought about this and optimized it, you still have room for improvement. A very common pattern is to say: you're coming in over HTTP, I'm going to send you to the www version of our site, because we don't want the naked origin. And what frequently happens, if you've migrated to HTTPS, is that the flow becomes: you come to HTTP, we send you to HTTPS, then we send you to HTTPS www. Instead, you should be going straight from HTTP to HTTPS www. It just requires a little more thought in the rewrite rules for your site, but it's a very nice way to optimize this particular case, because a lot of users, if they're just typing in your domain, are not typing the www part, so they incur this redirect when connecting to your site. Just something to think about. Another cool little trick, if you're not familiar with it, is HSTS. HSTS is a policy you can set, once your site is 100% TLS, that tells the browser to remember that this is an HTTPS-only site. If you think about it, until HSTS came along there was no way for a site to signal that it's HTTPS-capable. HSTS is that mechanism. It says: hey, I support HTTPS. And from then on, even if the user explicitly types http://mysite.com, the browser will automatically rewrite that request to HTTPS before it's even sent to the server. So this is great, because now when the user types in yoursite.com, they're immediately sent to the HTTPS site; they don't have to incur that first redirect.
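To make both ideas concrete, the single-hop redirect and the HSTS header, here's a hedged nginx sketch; example.com is a placeholder, and the one-year max-age is illustrative, not a number from the talk:

```nginx
# Collapse the redirect chain: any HTTP request, www or naked, goes
# straight to the canonical HTTPS www origin in a single hop.
server {
    listen      80;
    server_name example.com www.example.com;
    return      301 https://www.example.com$request_uri;
}

server {
    listen      443 ssl;
    server_name www.example.com;

    # HSTS: the browser remembers this site is HTTPS-only and rewrites
    # future http:// navigations before they ever hit the network.
    add_header Strict-Transport-Security "max-age=31536000" always;

    # certificate, key, and the rest of the TLS config go here
}
```

Only add the includeSubDomains and preload tokens once every subdomain really is HTTPS-capable and you've submitted the site to the preload list.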
So it's just a nice performance optimization, and obviously there are other benefits as well: you know that all of your requests are always routed over HTTPS, so there are no downgrade attacks or other things. This is supported by many browsers. And then finally, if you've gone all the way, you can also add your site to the preload list. If you think about it, in order to register the HSTS policy, the visitor has to come to your site at least once. With the preload list, you basically tell the browser: please add my site to the preload list, and the browser will just know that your site is HTTPS-only, so even on the first visit users are sent directly to the HTTPS site. This is more of a security feature, since even on the first visit there's no possibility of a downgrade attack, but it's also a nice performance optimization. And then we have TLS record size. This is kind of a fun one; if you're not familiar with TLS, it may be surprising. Let's look at this example. On the top, we have the regular non-encrypted version of a site: DNS, TCP handshake, then we send the request, and we're getting 20 kilobytes of data back, just an HTML page being streamed back. What happens is that 20 kilobytes is actually larger than the initial congestion window of TCP, so it's returned in two round trips, and that's what you're seeing here. After about one round trip, we get part of that data, and this is our time to first byte (I'm using the legend from WebPageTest). After that, the browser can start parsing the first 16 kilobytes of HTML while it's waiting to receive the remainder of the document. And that's exactly what you want: the data streamed to the client such that the client can consume and process it as quickly as possible. Then we do the same thing, but with TLS, so we add another round trip for the TLS negotiation. This is an optimized handshake, one RTT, all is good.
But notice that the first round trip of data is now taking two RTTs instead of one, plus some transfer time for the rest of the document. So what's the issue here? This is a two-RTT time to first byte, and the problem is that we're delaying processing of the data: even though at some point here we already have some of the data on the client, we can't process it. The issue is TLS records. The way TLS works under the hood is that the application hands some data to the TLS layer; the TLS layer takes that data, packages it into a record, and encrypts it. Once it's encrypted, it also adds a checksum at the end, and then sends that data to the client. And TLS allows those records to be up to 16 kilobytes in size. So what's happening here is that the client gets partial data for a TLS record, but it can't decrypt it, because it doesn't yet have the remainder of the record, and it needs the remainder to verify the checksum. Because of that, all the processing is delayed, and that's where we're incurring the extra RTT before we can start parsing the HTML. This sounds like, oh, now we're optimizing kilobytes, but it's actually a big performance problem for new connections, because it just so happens that the record size most servers use today, 16 kilobytes, the maximum TLS allows, is exactly large enough to exceed the first TCP congestion window, such that you will see this on your site if you're sending a lot of HTML. And I'm guessing you are sending more than 15 kilobytes of HTML, so this is a very common problem. A simple solution is to just make the record size smaller: instead of sending 16-kilobyte records, send less data per record. On Google servers, we've had this deployed for a very long time: when the connection starts, every TLS record fits into one TCP packet, so if a TCP packet is lost or delayed, previous data is not blocked on it.
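The arithmetic behind that collision, assuming the common Linux initial congestion window of ten segments and a typical ~1460-byte MSS (both are assumptions on my part, not numbers from the talk):

```python
# A fresh TCP connection can deliver roughly initcwnd * MSS bytes in
# the first round trip; a maximum-size TLS record doesn't fit, so the
# client waits an extra RTT before it can decrypt anything.
INITCWND_SEGMENTS = 10   # common Linux default (assumed)
MSS_BYTES = 1460         # typical Ethernet MSS (assumed)

first_window_bytes = INITCWND_SEGMENTS * MSS_BYTES  # 14600
max_tls_record_bytes = 16 * 1024                    # 16384

print(first_window_bytes)
print(max_tls_record_bytes)
print(max_tls_record_bytes > first_window_bytes)    # True: the record spills into round trip 2
```

With older kernels (initcwnd of 3 or 4 segments) the overshoot is even worse, which is another reason the kernel-upgrade advice from earlier matters.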
And then after a while, because small records add extra framing overhead in terms of bytes and all the rest, after we've transferred a lot of data we actually increase the size of the record dynamically in the server. So you don't have to pick one static value. What's the optimal value? Is it four kilobytes, eight kilobytes, or 16? The answer is: it depends on your application. If you're streaming large videos, you want large records to minimize the overhead of the TLS layer. If you're streaming interactive data, things like HTML and CSS bytes, you want small records. So my recommendation is to find a server that can do this for you, and if you can't, at least set it to be smaller than 16K. Depending on which server you use, one of those strategies is likely available. Some servers, unfortunately, like Apache, don't allow you to customize this at all. I'm not sure if there's a bug open about that, but it's certainly a feature that's missing.

So, two things to check for your own deployments. One is: do you support False Start? For False Start, in particular, you need NPN or ALPN support, and you need forward secrecy. If you support those two things, you can claim back a full RTT. And once you've done that, just run WebPagetest against your index file or any file on your service that's TLS enabled, look at that handshake, eyeball it, and see: if it's anywhere longer than one RTT, you have room for optimization. And of course, eliminate redirects.

Another interesting and important optimization is terminating TLS as close as possible to the client. Typically, we think about CDNs in terms of delivering static content, but they're also very, very useful for optimizing the routing of dynamic content as well. I'm getting a thumbs up from Steve here; yes, Fastly does a great job of it. And the win here is that you can terminate your handshake, both TCP and TLS, much closer to the client. So let's do some math.
Let's say we have a server in London and a client in New York City, and over that transatlantic hop, the latency is about 50 milliseconds, which is roughly what it is in practice. If you have to do the handshake with the origin server, TCP plus an optimized TLS handshake is going to take two round trips, which is going to be 100 milliseconds. This is before any application data can be sent; this is just our ceremony for setting up a new connection. With a CDN, if you have an edge node, and let's say it's really close, about 10 milliseconds away, that full handshake is about 20 milliseconds. So this is a significant performance win in a lot of cases, and because of that, you probably should be terminating TCP and TLS as close to the user as possible.

Finally, speaking of CDNs and servers, you definitely do need to do your homework on which server you're using and what features it supports. Sadly, this is mostly an unoptimized frontier today; there are just many hidden gotchas. So I did a survey, and I've been maintaining this table over time, trying to monitor which servers support which features. And I'm really happy to say that, actually, yesterday, I updated this table, and we finally have one server that is capable of delivering all the right things, which is Apache Traffic Server. And I guess the second best one is nginx, which allows most of these things, except for the dynamic record size, but it does let you set a static record size, which you have to set manually, because by default it'll use 16 kilobytes. So depending on what you're using, chances are many of the servers that you use in production today may not be on this list. If nothing else, use this grid to check your own implementation. Same thing for CDNs.
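For nginx specifically, that static record size is controlled by the `ssl_buffer_size` directive (available since nginx 1.5.9, default 16k). A minimal sketch, with placeholder hostname and certificate paths:

```nginx
# Sketch: cap TLS record size in nginx so records decrypt sooner
# on the client. Default is 16k; pick a value for your traffic mix.
server {
    listen 443 ssl;
    server_name example.com;                  # placeholder hostname

    ssl_certificate     /path/to/cert.pem;    # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    ssl_buffer_size 4k;   # smaller records: earlier first-byte processing
}
```

Note this is a static value, not the dynamic small-then-large strategy described above, so it's a trade-off between interactive latency and bulk-transfer overhead.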
And I have to say, I was very disappointed when I did this for CDNs, because I would have thought that CDNs are in the business of making everything really, really fast. And sadly, for TLS performance, a lot of them are not performing that well. You can see here that there are some that are fairly good and some that have a lot of red. So we can definitely do a better job here. Depending on what you use, talk to your CDN, bug them about it, ask them why they don't support a particular feature, because at least in my experience, even since I first published this table, I've had a number of CDNs come back to me and say, we've enabled this feature based on the feature requests and all the rest. So ask them. Don't just assume that because you're using a CDN, everything's going to be that much faster. And you can learn more about this, and see those tables plus some additional information, at istlsfastyet.com. So check it out. The site is also up on GitHub, so if you find bugs, or want to add your particular server or a different CDN, that would be great as well.

And then finally, in conclusion: you've enabled TLS on your site, and perhaps an unexpected benefit is that HTTPS can actually be faster, and operationally better, than unencrypted traffic. I know this sounds kind of crazy, but let me tell you why. First of all, if you've enabled HTTPS, you're basically there for enabling things like SPDY and HTTP/2. For a variety of reasons, practically speaking, you need HTTPS to deploy new protocols on the web, because there are intermediaries and proxies and other things that, whenever they see something other than HTTP/1.1, just abort connections. So because of that, we need an end-to-end encrypted tunnel to deploy things like SPDY and HTTP/2. If you've enabled HTTPS, you're basically there; you're just a flag away from enabling it on your server, assuming your server supports SPDY or HTTP/2.
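"A flag away" was close to literal at the time. For instance, on nginx builds compiled with the SPDY module (`--with-http_spdy_module`), SPDY was one extra token on the `listen` directive once HTTPS was already configured; hostname and paths here are placeholders:

```nginx
# Sketch: with an SPDY-enabled nginx build, the protocol upgrade is
# one token on an already-working HTTPS server block.
server {
    listen 443 ssl spdy;                      # "spdy" is the one extra flag
    server_name example.com;                  # placeholder hostname

    ssl_certificate     /path/to/cert.pem;    # placeholder paths
    ssl_certificate_key /path/to/key.pem;
}
```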
This is data for a couple of Google services where we're comparing SPDY performance from the client's perspective, so think page load time, with SPDY enabled versus regular HTTPS. You can see that there's significant improvement, and in fact, the biggest improvements are for slower clients: things like mobile clients with high latency benefit the most from SPDY. And in fact, we have examples where Google search is actually faster with HTTPS enabled than in plain text, which is pretty awesome, and of course that's exactly what we want to see.

So that's important for the client, but there are also operational benefits for the server. One of the things we talked about is that handshakes are very expensive on the server; the server actually has to do a lot more work than the client. And there's also an encryption overhead. But we don't have to use as many connections. In fact, SPDY and HTTP/2 are explicitly designed to have just one connection open. Today, most clients will open up to six connections per origin, so you have to maintain a lot of parallel connections for each client. With SPDY and HTTP/2, all of that gets collapsed into one, which means we have to perform fewer handshakes, which saves CPU, memory buffers, and other resources. So it once again reduces operational cost, and better connection reuse means we can actually get the best performance out of each TCP connection. All of this combined: we have actually talked to organizations where, after they deployed HTTPS and enabled SPDY, their overall operational load has gone down, because there are just fewer connections coming to their servers, which is exactly what you want to see. So that's a huge savings. And if you've enabled HTTPS, you're basically there with SPDY. SPDY has very good support: Chrome supports it, Firefox supports it, new Safari supports it, and IE supports it.
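The connection-collapsing arithmetic above can be sketched roughly. The client count here is purely illustrative; the six-connections-per-origin figure is the typical browser limit mentioned above:

```python
# Illustrative only: how collapsing six HTTP/1.x connections per
# client into one multiplexed SPDY/HTTP2 connection cuts handshakes.

CLIENTS = 10_000               # made-up number of concurrent clients
CONNS_PER_CLIENT_H1 = 6        # typical browser limit per origin
CONNS_PER_CLIENT_H2 = 1        # SPDY / HTTP/2 multiplex over one connection

h1_handshakes = CLIENTS * CONNS_PER_CLIENT_H1
h2_handshakes = CLIENTS * CONNS_PER_CLIENT_H2

print(h1_handshakes - h2_handshakes)   # 50000 fewer TLS handshakes to perform
print(h1_handshakes // h2_handshakes)  # 6x fewer connections to keep in memory
```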
So basically it's available in all the modern clients today. And HTTP/2 is basically there; the spec is in its final stages. As of about a month ago, HTTP/2 is also now available and supported by Chrome stable and Firefox stable, and IE also now has support. So once the spec is finalized, you'll see very rapid adoption of HTTP/2 across all the clients. Basically, I think in a year's time we'll be saying: it's here, it's available in all the clients, why haven't you enabled it yet? And to do that, you'll need HTTPS.

And with that, as mentioned, you can find all of this at istlsfastyet.com, so definitely check that out. The slides are available at that link there. And if you have more questions, I'm around all day today and tomorrow as well. Thank you.

Questions? Do we have time? Maybe two. No questions? No, there we go.

So your question is: at Google, we're transitioning everything to HTTPS, even inside, and you're asking about load balancers. Yeah, we haven't seen that to be an issue for us. We don't use any special hardware for enabling HTTPS. At the end of the day, we want an end-to-end encrypted tunnel, whether it's inside or outside of Google. So yeah, no difference.

You mentioned offloading the TLS at the CDN level. That means you have to expose your private key at the CDN. For some organizations that's against their policies; if you have PCI compliance requirements, you cannot expose or store your private key outside your premises.

Yes, yes, that's true. Depending on the requirements of your organization, you may or may not be able to do that. You're right: if you're using a CDN, you may have to hand over, not may, you have to hand over your private key. There are some interesting developments in this space, though. For example, CloudFlare has enabled some new capabilities where the traffic is routed through the CDN but the private key stays on your own servers; it's kind of its own protocol. So that seems promising.
I'm not sure if that's going to be adopted more widely, or whether other CDNs will support it. The other route, of course, is to deploy your own servers. I know some organizations that have built their own CDNs just using the public cloud: you have Amazon EC2, Azure, and Google Cloud, where you can deploy servers in different data centers and use those as your proxies. You control them, so you're not handing over the keys, but you're still getting the benefits of close termination. So depending on how much effort you're willing to put into it, and how important this latency is to you, which I'm going to claim should be fairly important, there are different workarounds, yeah. Thank you.

Anybody else? Nope, all right. Well, I'm around all day if you guys have any more questions. Thanks.