Is TLS fast yet? And I'm going to give the answer right at the get-go: yes, it is fast enough. Over the past year I've spent quite a bit of time looking into the various performance bottlenecks that are out there, and my overall conclusion is that there's still quite a bit of work we need to do to improve the performance of TLS, but it is certainly possible to make it fast. Before we get to the nuts and bolts, let me state the premise, which is that I believe all communication on the web should be secure by default, which is to say HTTPS should be everywhere. And this is for a variety of reasons. Part of it is protecting the privacy of the users, of the visitors coming to a site, but it's also about protecting our sites and services against things like man-in-the-middle and other sorts of attacks. I won't name any names, but I think we've seen enough news over the last year and a bit to motivate why that is so important. So when we talk about HTTPS and secure by default, we mean three things: authentication, data integrity, and encryption. A lot of the time when we think of HTTPS we focus on encryption, which is making sure the data is obscured from passive eavesdroppers. But authentication is just as important, because it protects our services from being impersonated and makes sure the client is, in fact, talking to the server it claims to be. And data integrity protects the data in flight, such that nobody can modify it. So all of this is well and good, but the common question I get is: great, TLS is good, but this is a performance conference after all, so what about the overhead that TLS adds? First, there's the computational cost: there's crypto work, so doesn't that make everything slower? And second, there are all these extra round trips. The bottom line being: this is just going to make my site slower, and there's nothing to gain here. My answer is that there's only one performance problem TLS has today, and that is that it's not deployed widely enough; everything else can, in fact, be optimized. There are even cases where TLS, believe it or not, can yield faster-loading pages, and we'll see how and why in a second. So let's start from the top: the CPU and memory concerns. The first one is the computational cost. Crypto work is obviously extra CPU cycles on top of serving the web page, and there are two important parts to establishing a secure tunnel: the asymmetric crypto and the symmetric crypto. The asymmetric crypto is what we do for certificate authentication, the public/private key exchange, and on modern hardware it is on the order of one millisecond. Don't trust the numbers you see in blog posts from 2005 or 2012; actually boot up your machine and run the openssl speed RSA and ECDH benchmarks, and you'll get the numbers for your specific hardware, for how many handshakes you can actually do on your machine. In my experience the asymmetric crypto is on the order of one millisecond, and that is the expensive part of the handshake. The symmetric crypto is not a problem: you can saturate your full NIC on a single core, and that is nice and fast.
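If you want to try this yourself, here's a minimal sketch; the key size and cipher are just examples, and the -evp flag depends on your OpenSSL version:

```sh
# Asymmetric (handshake) cost: signatures and key agreements per second
openssl speed rsa2048 ecdh

# Symmetric (bulk transfer) cost: throughput at various block sizes
openssl speed -evp aes-128-gcm
```

Divide one second by your signing time and you get a rough handshakes-per-core ceiling for your hardware.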
So if you look at the experience of the big sites, the Facebooks, the Googles, the Twitters of the world, you find that all of them nowadays run their TLS stacks purely in software. There's no longer a need for dedicated hardware or TLS offloading, frankly, because commodity CPUs are by now fast enough for this sort of work. Same experience at Google when we enabled TLS; this is a quote from Adam Langley, originally about Gmail, and now of course more and more Google services are HTTPS only. You can see that it caused a negligible increase in CPU, and we'll talk about how we got there. But more importantly, and this is something I didn't appreciate when I first got into this space, there's the memory overhead. If you're juggling a lot of connections on your service, this is definitely something to pay attention to, and the 10-kilobyte number is interesting because, once I started doing some research into this, I realized there's actually quite a bit of overhead per connection. For example, if you have TLS compression enabled, each connection can require up to an extra megabyte of memory, which is of course a huge amount of overhead. The secret is, you shouldn't have TLS compression enabled at all: it's a security problem if you do, and it adds a lot of memory overhead. So the first takeaway is: disable TLS compression if you have it enabled. We already have gzip compression at the HTTP layer, so you're not gaining much; you're just double-compressing anyway. Once compression is off, on an optimized setup (say, the latest version of OpenSSL, with some care in how your server integrates it) you'll incur about 100 kilobytes of extra memory per TLS connection. That's an order of magnitude less than one megabyte, but still quite a bit if you think about it. Now, on Google servers, as Adam's quote said, we're at about 10 kilobytes. That was achieved through a lot of very careful optimization, taking out things we don't need, but it shows there's a lot more to be gained by optimizing things like the OpenSSL library, and we know it's possible because we've done it at Google. Now, I don't know if you've seen the news, but last week we announced BoringSSL, which I think is an awesome name. Super exciting. This is the OpenSSL fork that we will be maintaining at Google. In the past we've always maintained our own set of patches on top of OpenSSL for things like improving security and performance, and BoringSSL is basically that, done in public. It's a new project; our goal is to start using it in Chrome very soon and to incorporate it into Android as well. I'm not sure whether some of the memory optimizations will make it into the project, but I hope they will, and hopefully this helps everybody else too, in terms of performance and of improving OpenSSL in general. So the other big thing is, of course, keep-alive and session resumption. Part of the reason you see a negligible increase in CPU usage on the server is that you optimize for things like keep-alive and session resumption.
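Going back to the compression takeaway for a second, here's a quick way to check that it's actually off, with example.com standing in for your host:

```sh
# Look for "Compression: NONE" in the handshake summary;
# anything else means TLS-level compression is still enabled.
openssl s_client -connect example.com:443 </dev/null 2>/dev/null | grep -i 'compression'
```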
Once again, recall that the expensive part is the asymmetric crypto, the original handshake, so you want to minimize the number of handshakes. You do that with HTTP keep-alive, which means you use the same connection for multiple requests, and with session resumption, which lets you reuse previously negotiated parameters and avoid that handshake entirely. So let's do a quick recap of what resumption is. First you establish a session: you verify the certificate, and the client and server negotiate which cipher suite they're going to use. We can remember those settings, and the next time you come along you say: hey, we've talked before, and these are the parameters we used last time, so let's just start with that. And the server can resume the session. The important part is that we skip the asymmetric crypto entirely, which saves an entire round trip and eliminates that CPU overhead. This matters because it allows one-RTT connection establishment with TLS. There are two mechanisms for doing this: session identifiers and session tickets. Session identifiers are the older standard, if you will. The idea is that the server assigns an ID to the client, and the client just sends that ID back; it's an opaque string. The server then looks up the parameters of that session in some sort of cache and reuses them. By contrast, session tickets encode all of that information into a ticket, encrypt it, and send it to the client. The big difference is where the data is stored, and session tickets are often preferred because session identifiers require some sort of shared cache: imagine a multi-server deployment where I send you a session ID; now every server needs to be able to look up the parameters for that ID. Session tickets are much easier to deploy in that sense. And all modern browsers support both. A quick way to test this is to open up OpenSSL: you can use s_client and just try it against your own site. Here's an example with example.com, and basically you're looking for the Session-ID. That's the session identifier, meaning the parameters we used are stored on the server. And if your server supports session tickets (in this case you can see the session ticket is valid for 600 seconds), that data will show up here as well. So this is a very quick and easy way to test this sort of thing. One thing to think about: deploying session resumption comes with a lot of caveats when it comes to security. You can optimize for performance, but security is obviously very important here as well, and you need to pay attention to how and when you rotate your session keys and when you expire your caches. For things like perfect forward secrecy (and this is a quote from Adam), you want to make sure the session keys you're using are never persisted anywhere, because the idea behind perfect forward secrecy is that once the session is gone, you can't recover the keys or any of that past information.
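Two concrete bits here, both sketches with placeholder names. First, s_client can show resumption actually working; second, since rotation matters so much, here's what ticket-key rotation might look like with nginx's ssl_session_ticket_key directive. The paths and the rotation scheme are my assumptions, not prescriptions from the talk:

```sh
# Watch resumption happen: -reconnect redoes the connection five times,
# reusing the cached session; "Reused" lines mean resumption worked.
openssl s_client -connect example.com:443 -reconnect </dev/null 2>/dev/null \
  | grep -E '^(New|Reused)'

# Hypothetical ticket-key rotation. ssl_session_ticket_key expects a
# 48-byte key file; listing two keys keeps old tickets resumable:
#   ssl_session_ticket_key /etc/nginx/tickets/current.key;
#   ssl_session_ticket_key /etc/nginx/tickets/previous.key;
# Rotate on a schedule (e.g. from cron), keeping key files off backups:
mv /etc/nginx/tickets/current.key /etc/nginx/tickets/previous.key
openssl rand 48 > /etc/nginx/tickets/current.key
nginx -s reload
```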
So if you're accidentally writing those session keys to disk, or you're forgetting to rotate them, then you're not actually providing perfect forward secrecy, which unfortunately is the case for most servers out of the box today. If you just bring up, say, nginx and run it, it doesn't automatically rotate your keys; you have to set that up on your own and add an extra layer of logic. This is a good example of something we need to improve in our servers, whether that's Apache, nginx, or all the rest, frankly. So this is a big gotcha: if you're deploying this, please do pay attention to it. Adam Langley, on his blog Imperial Violet, has great information about the things to watch out for; you can see the link at the bottom here, and I'll share the link to the slides later as well. So that's a little bit about CPU and memory. Let's talk about latency, because after all, there are all these extra round trips. A quick one-on-one on the basic TLS handshake; this is the textbook version. We start with "send me a certificate", because I need to authenticate the server. The server returns the certificate, you do your public-key crypto, you then negotiate the symmetric key and the symmetric cipher you want to use, and once you've agreed on all that, you start sending application data. So really, this takes two RTTs on top of your TCP handshake, and only after those two RTTs can you send encrypted application data. That's the textbook version; it turns out we can do much, much better. The first thing you can do to improve TLS handshake performance is simply to reduce the RTT, and that's where a CDN comes in. CDNs are obviously great for delivering static content, but they're very useful for improving TLS performance as well: the closer to the user you can terminate the connection, the better the performance will be, for both the TCP and the TLS handshake. Anything you can do to reduce the round trip time of the handshake will pay dividends. That said, before you hand over the keys to the castle, make sure your CDN's TLS performance is actually good. I won't name any names, at least at this point, but not all CDNs do a good job here; we'll come back to that in a second. So, recall the textbook two-RTT handshake: that's the theory. In practice, you can screw it up and make it much, much worse. One example is the Online Certificate Status Protocol: great, you've given me the certificate, but how do I know it's still valid? Has it been revoked? We have this protocol called OCSP for that, and in this example, loading wellsfargo.com in Firefox, the browser receives the certificate and then pauses: hold on a second, I need to check with VeriSign whether this certificate is still valid. So it creates a new connection, incurs the DNS lookup and the TCP handshake, sends the request, and waits for the response before it can proceed. Only once the success criteria are met for that specific certificate does it resume the original connection. So that's a problem. And in this particular case, Wells Fargo actually uses two CDNs, so they get dinged twice, on VeriSign and GeoTrust. They're hurting themselves in kind of magnificent ways here.
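You can measure that live-check cost yourself. Here's a sketch using openssl's ocsp command; the file names and responder URL are placeholders, and in practice you'd pull the URL from the certificate's AIA field:

```sh
# Time a live OCSP query for a certificate, given its issuer cert
# and the responder URL from the certificate's AIA extension.
time openssl ocsp -issuer issuer.pem -cert cert.pem \
  -url http://ocsp.ca.example -noverify -resp_text | head -5
```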
Now, here's an interesting thing: the OCSP check itself is not done over a TLS connection, which should raise a few warning flags. What happens if that gets hijacked? There are a number of problems with the OCSP protocol in general, and I encourage you to check out the post, once again by Adam Langley, on why revocation still doesn't work and why OCSP is broken. We don't necessarily have a better solution at this point, but we know this process has a number of problems. Because of that, Chrome doesn't block on OCSP checks. Some other browsers, like Firefox, do: they will always do the live OCSP check, and they will cache it, which is the good news, but it can still slow down your site for a while. So the behavior varies by browser. If you open up the certificate panel in Chrome or another browser, you can see what the OCSP endpoint is, and you can query it and find out the latency of these checks. In practice they're pretty bad: on the order of a couple of hundred milliseconds, so the response times are not great. The solution is OCSP stapling. The idea is that the browser still needs the revocation status, but instead of the browser doing a live check where it pauses and goes off to the CA, the server is responsible for fetching the status, stapling it to the certificate, and providing both when you ask for the certificate. That stapled response is signed by the CA, so the server can't fake it. This is an important optimization you can apply today to eliminate that extra latency, which hurts quite a few users. Once again, you can test for this with s_client: pick your favorite site, and if it uses OCSP stapling you will see the OCSP response data field in the output. So stapled OCSP removes that extra blocking. But there's another problem: we had our certificate, we've now added the OCSP response, and we've increased the size of what the server sends. So the next thing to think about is the size of your certificate chain. For a lot of popular sites I've looked at, the two-RTT handshake is optimistic; in reality they have three- or four-RTT handshakes, for a couple of different reasons. One reason is that the certificate chain is too large. An average certificate chain is about two to three certificates: your site certificate, an intermediary, and your CA, with perhaps some extra layers in between, and each of those is anywhere from one to 1.5 kilobytes on average. Add the OCSP response and you often end up over four kilobytes. Now, four kilobytes doesn't seem like much, but on a brand-new TCP connection that can be a problem on old servers, because the old initial congestion window was about four kilobytes. When you hear that you need to reduce the size of your certificate chain, this is why: if you overflow that window, you add an extra RTT. So if you haven't upgraded your servers to the latest TCP initial congestion window, which is ten packets, that's definitely a problem.
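Both of those are easy to check from the command line. A sketch, with example.com as a stand-in; note the PEM byte count only approximates the wire size of the chain:

```sh
# 1) Stapling: -status asks for a stapled OCSP response. You want
#    "OCSP Response Status: successful" rather than "no response sent".
openssl s_client -connect example.com:443 -status </dev/null 2>/dev/null | grep -i 'ocsp'

# 2) Chain size: dump the certificates the server sends and measure them.
openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > chain.pem
grep -c 'BEGIN CERTIFICATE' chain.pem   # certs in the chain (2-3 is typical)
wc -c < chain.pem                       # past ~4 KB you risk an extra RTT on old cwnds
```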
But even once you've upgraded to the larger congestion window, I've discovered there's another problem, which is that a lot of servers have buggy implementations. Well, I consider it a bug; maybe they consider it a feature. Even though the server is using the latest congestion window, it will send the first four kilobytes of the certificate chain, then pause and wait for an ACK to arrive, and only then send the rest. There are lots of examples of popular servers doing this: nginx up until version 1.5.6 did exactly this, and HAProxy and others had the problem as well. In this particular case, if you eyeball the trace, it's doing a three-and-a-half-RTT handshake, and that was exactly the cause. So please do check your servers. The best way is to run tcpdump, or use WebPagetest, tick the tcpdump option, and then look at the connect time and the traces. Once you've done all of that, we can make things even faster with a feature called TLS False Start. It turns out that in the two-RTT handshake I showed you, we don't actually need to wait for the entire handshake to finish before sending application data. The moment the client has chosen its preferred cipher suite, it can append application data to its handshake messages and just use the cipher suite it has chosen, before it gets the acknowledgement from the server. This is slightly optimistic, but it's allowed by the protocol; this is not a new protocol extension, it's just working within the confines of the existing protocol. That said, there have been a couple of problems deploying this in the real world. Back in, I think, 2009 or 2010, we enabled this in Chrome by default, then had to yank the support, then added it again, then pulled it out again, because we kept discovering servers that broke. So because of that, TLS False Start is not enabled unconditionally. Instead, it's used as a carrot for you to deploy new and correctly implemented servers. To explain what I mean: different browsers have different conditions under which they enable False Start. For example, Chrome and Firefox require that your server advertises the NPN or ALPN extension, which is basically just saying: I support these protocols. Now, if you've been following SPDY and HTTP/2, you're probably familiar with both of these, and you're thinking: does that mean I need HTTP/2? No, you don't. You can just advertise that you support HTTP/1.1. All we look for is whether you support NPN or ALPN, because if you do, that's a hint that you're running a modern server that probably implements other things correctly as well. So that's condition number one. The second condition, and this is the carrot (if you want good performance, we want you to implement good security), is forward secrecy. Meet those two conditions, and Chrome and Firefox will use False Start. Safari only requires forward secrecy, and Internet Explorer has an optimistic behavior where it tries False Start by default and, if that fails, retries without it. So it's a combination of these things, but the takeaway is: to get good support across all browsers, enable NPN or ALPN and forward secrecy. Please do that, because it eliminates an extra RTT.
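You can verify the NPN/ALPN condition yourself with s_client. A sketch: -nextprotoneg needs OpenSSL 1.0.1+, -alpn needs 1.0.2+, and example.com is a placeholder:

```sh
# NPN: an empty client list still gets the server to reveal what it
# advertises; look for "Protocols advertised by server: ...".
openssl s_client -connect example.com:443 -nextprotoneg '' </dev/null 2>&1 | grep -i 'protocols'

# ALPN: offer http/1.1 and check for an "ALPN protocol: ..." line.
openssl s_client -connect example.com:443 -alpn http/1.1 </dev/null 2>/dev/null | grep -i 'alpn'
```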
So the quick summary of all of this is that you can achieve one-RTT TLS handshakes. If you're seeing more than two RTTs, that is a bug, and you can in fact do better than two. False Start lets us deliver a one-RTT handshake to new visitors: you haven't come to our site before, we've negotiated nothing, and we can still reliably deliver a one-RTT handshake. Session resumption lets us do a one-RTT handshake for returning visitors while also skipping the expensive asymmetric crypto. So the summary is: you can reliably deliver a one-RTT handshake to both returning and new visitors, and you should also implement OCSP stapling to avoid the extra blocking overhead I showed you earlier. Combine those three things and you get much, much better performance. One-RTT handshakes are where it's at. Having said all that, what's wrong with this picture? First of all, hopefully by this point you're looking at this graph (if you're not familiar with the WebPagetest colors, purple is the TLS handshake) and saying: look, that looks like the two-RTT handshake. So right off the bat I know I can do better, because we can do a one-RTT handshake if we enable the things we just discussed. But second, look at the green and the blue. At the top we have the HTTP site, and at the bottom the same site loaded over HTTPS. With HTTP, we have one RTT of green, the time to first byte, followed by the blue content download time. With HTTPS, the green part spans two RTTs. Why is that? This is a problem with TLS record size, so let's talk about that. When TLS packages up data, it takes a buffer of data and emits a record, which carries an integrity checksum at the end, and that record is allowed to be up to 16 kilobytes in size. By default, a well-optimized nginx server serving static data will happily emit 16-kilobyte records, because that reduces the per-record overhead. The problem is that a 16-kilobyte record gets split across many TCP packets, and on a new TCP connection it will overflow the initial congestion window, which adds yet another RTT. In this case you're looking at a Wireshark trace showing that this particular record was 11 kilobytes and was split across eight TCP packets. And what happens if one of those packets is lost, retransmitted, or delayed? We can't decode the record, because we need the complete record to verify the checksum before we can decrypt it. So that's a problem. As far as I'm aware, Google's servers are the only ones implementing a smart strategy to mitigate this, and I hope it's something we can fix in all of our open-source servers as well. What we do at Google is start every connection with a small record size, basically fitting each record into one packet, which ensures that while the connection is fresh you can always decode data without buffering. Over time, of course, pushing lots of small records adds extra overhead, so we don't want to keep doing that forever.
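If you want to see your own record sizes, here's a sketch using tshark on a capture; the field name varies by Wireshark version (ssl.record.length in older builds, tls.record.length in newer ones):

```sh
# Print the size of every TLS record in the capture; 16 KB records
# at the front of a fresh connection are the buffering problem above.
tshark -r trace.pcap -Y ssl.record.length -T fields -e ssl.record.length
```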
For streaming a large amount of data, think of something like YouTube video, we want to make use of larger records. We switch to them after the connection has been established and we're streaming in earnest, and then we reset back to small records once the connection goes idle. A very simple strategy, but it works really, really well. I've been working with Willy, who works on HAProxy, and we recently added support for this in HAProxy, which I think is really awesome. I've been bugging the nginx team to do the same; they haven't implemented dynamic record sizing, but they've added the ability to specify a static record size. So you can say: I want all of my records to not exceed four kilobytes, for example, which is a nice in-between solution, but I think we can definitely do better. The takeaway is that there's no single perfect record size. If you're streaming large chunks of data, like videos or downloads, you want large records, because that decreases your CPU overhead and even bandwidth; on a fresh connection you want small ones; so you need a mix in between. So, those are a lot of optimizations we can apply; let's take a look at how our servers perform in practice. I ran this test, and I'm going to pick on nginx here, since nginx is used by a lot of high-profile sites. I took what was at the time the stable build, nginx 1.4.4, and this is the same site across different versions of nginx. At the top you see the site delivered over a regular HTTP connection. Then I enabled TLS, and as you can see, all of a sudden we've added about three and a half RTTs for no reason. It turns out 1.4.4 had that buggy implementation I told you about earlier, where for a large record it would send four kilobytes and then wait for the ACK. Just upgrading the server to 1.5.7 eliminated an entire extra round trip; so if you're running an old version of nginx, please fix this, because that alone will win you back a lot of time. If you then add the ssl_buffer_size flag, introduced in 1.7.1, you're saying: I want to emit records no larger than four kilobytes, which avoids buffering on a new connection. That wins back yet another RTT in our performance budget, because the browser can start parsing the HTML much earlier. And finally, once you enable False Start, we get down to one RTT. So, unfortunately, the bad part of the story is that if you just enable TLS without much thought or optimization, the first picture is probably what you'll get, and that's bad: you're going from 1.5 seconds to almost three seconds. It's like: great, I made my site twice as slow. But after you do all that extra work and optimization, you can get it down to one extra RTT. I think we need to make this much easier and provide a much better out-of-the-box experience in all of these servers. So I did the same sort of research across a bunch of other servers, trying to figure out where we stand and what the support looks like across the different implementations, and unfortunately there's not a single server that is completely green. nginx is in fairly good shape; the only thing missing is the dynamic record sizing I described earlier, but at least it gives you the static knob.
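For reference, a minimal sketch of that static knob; the 4k value mirrors the talk, and the surrounding config is illustrative rather than a recommended baseline:

```sh
# nginx.conf (illustrative):
#   server {
#     listen 443 ssl;
#     ssl_buffer_size 4k;  # cap record size so fresh connections fit the initial cwnd
#     ...
#   }
# Validate and reload:
nginx -t && nginx -s reload
```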
Servers like Apache definitely need some help, because right now Apache doesn't even support NPN, which means that even if you enable perfect forward secrecy, you won't get False Start. So depending on what you're using, this is worth investigating, and certainly worth bugging the implementers about: hey, we need to fix this. Now, here's the really scary part. Once I'd tested the different servers, I figured: why not test all the different CDNs as well? And what scared me was the sea of red. There are some particularly bad examples that stand out, like CloudFront, which apparently didn't support anything here apart from session tickets. The good news is that after I published this, a number of CDNs started reprioritizing, and they've made this grid greener since, which is great. I think we just need more attention on this. This is what I meant earlier by don't just hand over the keys to your kingdom: frankly, some CDNs don't support the features we're discussing here. So once again, please bug your CDNs about implementing this sort of thing. I have a summary of the previous couple of tables, and a lot more technical information on how to configure the different options we've discussed, at istlsfastyet.com. It's online, it's open source, and you're welcome to contribute if you have other performance tips or want to add another CDN; please send me a pull request and I'll be happy to update it. Check out that resource for hands-on information on how to make TLS faster. And then the other part, of course, is HTTP/2 and SPDY. In my head those two things are the same: HTTP/2 is the evolution of SPDY. And I think the part that's not appreciated enough about HTTP/2 at this point is that it may actually reduce your operational costs in some cases. Why is that? In practice, it turns out you need TLS to deploy HTTP/2 and SPDY. There are a variety of reasons for that (intermediaries, proxies, and other things), but practically speaking, Chrome and Firefox will only support SPDY and HTTP/2 over TLS. So by enabling TLS, you're already one step closer to enabling SPDY and HTTP/2; it's going to be a requirement anyway, which is just something to think about. But the great part about both of these protocols is that one of their core premises is to use fewer connections; in fact, they try to use a single connection to deliver all the resources. That helps us quite a bit, because fewer connections means fewer handshakes, fewer memory buffers, fewer everything. Here are some numbers from a couple of Google services comparing plain HTTPS to HTTPS with SPDY, and you can see significant improvements in page load times. Further, if you run load tests simulating the same workload over HTTPS versus HTTPS plus SPDY, you'll find you use fewer connections and consume fewer resources. So operations-wise, you may actually see a decrease in resource usage on your servers when you deploy SPDY with TLS, which I think is really, really awesome.
And HTTP/2, same thing. So, as a summary, my conclusion is that you can deliver a one-RTT handshake, and that should be your goal with TLS: there's basically no excuse at this point for two RTTs or worse. To achieve that you need several things. You need False Start, which means you need NPN or ALPN and forward secrecy. You need TLS session resumption, because you want to mitigate the cost of the asymmetric crypto. You need to make sure your certificate chain is optimized and that your server isn't doing something silly to pause delivery of that certificate. And you want to make sure you're not pausing on OCSP checks or blocking the browser. Put all those things together and you have your reliable one-RTT handshake, which is awesome. After that, you also need to optimize your data delivery: tuning things like your record size, and making sure you're not blocking on new connections or overflowing the congestion window. That's very important. And finally, once you have TLS deployed, you're almost there for deploying SPDY and HTTP/2; TLS is probably the biggest operational hurdle for a lot of people. Then you can reduce both latency and ops costs, making the clients faster and the servers faster as well. Those are all great wins. With that, you can find the slides at Fast TLS, and once again, check out istlsfastyet.com for more hands-on information on how to make this stuff fast. I'm not sure how we're doing for time, but we'll take some questions. Three minutes, all right. And you know what, I can't see because we have bright lights, so I encourage people to come up to the mic; that'll be faster, and I won't have to repeat the questions.

Q: I'll go ahead and start. At the very beginning you mentioned TLS compression, which seemed pretty simple and straightforward, but in the summary tables you didn't mention it. Is it already widely handled, or is it hard to do?

A: TLS supports compression at the protocol level, and you should disable it. First, for security reasons: it turns out it opens up certain types of attacks against your servers. And second, we have compression at the HTTP level with things like gzip, so you'd just be doing a double layer of compression. Disable it outright on your servers; some servers ship with it enabled because it seems like the right thing to do (compression? yes, of course I want compression), and then you end up incurring all the extra memory cost without any benefit.

Q: Your benchmark mentioned performance of about 100 megabits per second per CPU. What if I have four 10-gigabit NICs? Which is very realistic; that's what I run today.

A: Sure. How many cores do you have on that server?

Q: Say 12, 24.

A: And are you actually saturating those NICs?

Q: With HTTP, yes, for sure, using a fraction of the CPUs.

A: Then you may want to look at the block size, because depending on the block size you can optimize performance there.

Q: So what should I expect out of this? Can I get 40 gigabits out of these CPUs?

A: I don't know the number off the top of my head. In the benchmark that I shared, I think I used 512-byte blocks.
But you can certainly use larger blocks, and that will decrease your overhead. I'm not sure whether you can fully saturate a 10-gig NIC, but I would expect the answer is yes, with enough work.

Q: Hi. I work a lot with mobile app performance, and I know that overuse of keep-alives has a detrimental effect on handset battery life due to radio over-usage. How does that fit with what you were saying about using keep-alives?

A: I think there's a distinction to be drawn here. There are keep-alives in the sense that the server or the client periodically pings the other side to say: yes, I'm still here. An HTTP keep-alive, by contrast, just means you don't close the connection. For example, if you take something like Apache, I believe its default today is 15 seconds: if the connection has been idle for 15 seconds, it gets closed. nginx, I think, has it set to 60 seconds. Chances are you want the longer lifetime, so that if I'm on your page and it takes me more than 15 seconds to click the next link, you don't have to re-establish the connection. That doesn't keep the radio active, because nobody is actively pinging the device and telling it to stay up. Those are two different things.

Q: Thank you.

A: Yep.

Q: It's pretty clear that terminating SSL at a CDN has a good performance benefit, but as an advocate for TLS everywhere, what should we do on the other side of that connection? What's the ethical thing to do from the CDN to the origin? Do we go clear text, or is that a misrepresentation of the service?

A: Well, first of all it depends on your use case, but I would recommend that you use an HTTPS connection to the origin. Most CDNs today will use things like persistent connections to your origin, so you're not going to incur that much overhead. And you should be encrypting the data between your CDN and your origin, because it could be routed over the public internet as well.

Q: It likely is, yeah.

A: Right. So my recommendation is: have an HTTPS connection between the origin and your CDN, and a secure connection from there on. I think most CDNs will support that; if they don't, I would consider that a bug, and we should get it onto that chart. All right, we're good. Thank you.
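As a footnote to the block-size exchange above: OpenSSL's built-in benchmark reports symmetric throughput at block sizes from 16 bytes up to 8 kilobytes, which shows directly how larger blocks amortize the per-record overhead:

```sh
# Compare the 16-byte column against the 8192-byte column in the output.
openssl speed -evp aes-128-gcm
```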