Welcome everyone. Thanks for coming along. Do a talk at DrupalCon, they said, that's a really good idea. You can learn stuff, and then you've got to stand up here and look at all these faces. So we're going to talk about SPDY sandwiches: superfly sites with Nginx and Varnish. To introduce us: this is Olli and I'm Joe. We work for Wunderkraut; this weekend was the first time I've ever met Olli in person. We work in different places — Olli's in Finland, I work in the UK. We're both developers, and we both have an interest in performance and security with the sites that we work on, which is why we're presenting this talk.

So here's a plan of what we're going to be talking about this afternoon. Five main things. We'll talk about Nginx itself: why Nginx is our go-to tool for website performance — when you've got it, you feel good. We'll then talk about Varnish: why Varnish is such a good caching tool, and how we'll use it so effectively that we can cope with really big traffic spikes and just say "bring it on". Then we're going to turn to SSL and TLS: getting on the good foot. In the post-Edward-Snowden world, all communications should be secure, always and by default, so we need to get on the good foot. The fourth element is SPDY itself, which I guess is what most of you are most interested in: super-bad, super-slick SPDY. The standard response when you talk about SSL is, of course, that there are computational consequences and latency — "surely SSL only at the last minute, if we absolutely have to" — but can we make things fast under TLS itself? That's where SPDY comes into play. And then finally we'll get to the SPDY sandwich, which is our suggestion for a way of putting this all together to get the best of everything we've talked about, and hopefully we'll be able to show you something of the payback for that. So I'm going to hand over to Olli now, who's going to talk about Nginx.

So, hey. Nginx — what can it do? It's a web server, a reverse proxy, a static file cacher, a load balancer, a mail proxy, and an SSL/TLS terminator. What we're interested in here is the web server part, the static file caching and the SSL termination. As a web server it serves static content very fast with light resource consumption — it doesn't use much memory or other hardware resources — and it uses an event-based process model, which generally requires a lot less memory than process-based servers like Apache. It was originally designed to be deployed alongside Apache, so that static content like HTML, CSS, JavaScript and images could be handled by Nginx. Over the course of its development, Nginx added integration with other applications through the use of FastCGI. It uses OpenSSL in a standard module to support SSL termination — and despite Heartbleed, that's a good thing. Nginx's SSL module supports the important features needed to make TLS as fast as it can get: session resumption, OCSP stapling and strict transport security. That's basically all we'll be using Nginx for here.
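To make that concrete, here's a minimal sketch of the kind of Nginx server block we mean — static files served straight off the disk, everything else handed to the application over FastCGI. The hostname, docroot and socket path are illustrative assumptions, not a recommended production setup:

    server {
        listen 80;
        server_name example.com;                  # hypothetical hostname
        root /var/www/site;                       # hypothetical docroot

        # static assets: served directly from the filesystem, cached by the client
        location ~* \.(?:css|js|jpe?g|png|gif|ico|svg)$ {
            expires 30d;
            access_log off;
        }

        # dynamic requests: handed to the application over FastCGI
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;   # hypothetical PHP-FPM socket
        }
    }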
And then there's Varnish. It's a reverse proxy cache, sometimes also called an HTTP accelerator. It's focused exclusively on HTTP, so we'll have to use Nginx for the SSL termination. It can also be used as a load balancer. It's very good at what it does, and only at what it does. It's a key-value store: it puts everything into RAM and lets the OS decide what to keep in RAM and what to write to disk, because the OS has a better overview of the whole machine's resources and requirements. That takes away the double-buffering issue. It's designed for modern equipment — for example, 64-bit multicore machines with plenty of memory — so there's an assumption that your hardware is up to the job. That being said, it still makes efficient use of the hardware and will run happily on a mediocre platform; there's no real need for bleeding edge. Do I need to say anything about that one? It's fast. The advantage of using Varnish can be seen in the CPU usage as well. Here's a graph of a site without Varnish: the CPU usage spikes up to 100% at a few hundred concurrent users. Whereas with Varnish, well, it's around 50%. A lot has been said about Varnish at previous DrupalCons — for example, in Copenhagen in 2010, Poul-Henning Kamp, the lead designer and developer of Varnish, gave an awesome session on it.

Then SSL, by Joe. So, in the good old days, we knew who was watching us — we thought. And he was being watched. We thought that when the Wall fell down, the watching was over — that our side, as it were, was watching the bad guys. But Edward Snowden changed all that. Now we know that what we thought were the friendly security agencies have in fact been harvesting vast amounts of data, ostensibly to pursue criminals and terrorists, but in the process, as we know, consuming data from pretty much everyone, indiscriminately — even, as it turns out, jacking directly into the hard lines to get the data. And over the last decade or so, we've become ever more aware of the rise of cybercrime. Far too few sites — yes, that's what I said — use SSL. You can go around pretty much the whole of the Amazon store, and it's not until you get to payment that Amazon implements SSL. So all of your browsing history is accessible if someone really wants to get at it. Which might be fine if you're looking at compilation DVDs of funny cats, but who should have the right to access that information? To be part of the process of change — I'm sure you know this — Google have decided that they are changing their page-rank algorithm to privilege HTTPS, so sites with SSL will now get higher rankings than they used to. Google are hoping to be part of that social change, and good for them; it's very important.

But the big question for anyone interested in DevOps, of course, is: is TLS fast yet? There was a great talk just a few months ago, back in June, at the Velocity conference, by Ilya Grigorik from Google. I highly recommend viewing the YouTube video of his talk, and if you go to istlsfastyet.com there are loads of resources, including his slides and links to all kinds of really useful stuff. What we're going to talk about here is really a summary of the far more detailed work they've been doing there. The too-long-didn't-read of his talk, in essence, is that TLS has exactly one performance problem: not enough sites are using it. Everything else can be optimised. And that's what we're going to talk about now. We're going to try to give a fairly detailed view of how we can get a one-round-trip-time handshake, how we can go about eliminating latency in the validation process, and how to make TLS as fast as it can be. The process of establishing and communicating over an encrypted channel introduces additional computational costs, as you'll know.
First there's the asymmetric public-key encryption, which is used during the TLS handshake itself; then, once the shared secret key is established, symmetric encryption takes over. The good news is that modern hardware and up-to-date software have made great improvements to help minimise those costs. What you would previously have assigned to extra dedicated hardware — the encryption work in particular — can now be done pretty efficiently by the CPU. Two examples here. Adam Langley from Google: "On our production front-end machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection, and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time, and we hope the preceding numbers will help to dispel that." Obviously Google have the financial and human resources to put a lot of effort into that, and they're probably more advanced than most of us can hope to be, but certainly this is achievable. And similarly from Doug Beaver at Facebook: "We've deployed TLS at a large scale using both hardware and software load balancers. We found that modern software-based TLS implementations running on commodity CPUs are fast enough to handle heavy HTTPS traffic load without needing to resort to dedicated cryptographic hardware."

So this is what we're going to cover in the next 10 or 15 minutes. We'll talk briefly about what's going on at the TLS handshake stage itself, where the additional computational costs and round trips come from. We'll then look at one way of eliminating one of the round trips — TLS session resumption — which is effective for repeat visitors. Then we'll look at TLS false start, which is a way to eliminate another round trip, where possible, for first-time visitors. Then we'll look at OCSP stapling: sometimes there's a third round trip involved, and we can eliminate that one as well, or at least make it very efficient. And then some final goodness on top: HTTP Strict Transport Security, where we can use server configuration to tell the browser — tell the client directly — to use SSL/TLS without needing to negotiate what's available; and a brief mention of cipher suites towards the end.

So, the TLS handshake itself. Before the client and the server can begin exchanging application data over TLS, the encrypted tunnel has to be negotiated: the client and the server have to agree on the version of TLS they're going to use, they've got to choose the cipher suites and, if necessary, verify the certificates. Each of those steps requires new packet round trips between the client and the server, which adds start-up latency to all TLS connections. In essence, the first step is the client asking the server: send me your certificate. The server responds: here it is. Thirdly, the client says: mmm, that looks good, I'd like to use this cipher. The server responds: okay, let's go. And then, finally, we can exchange the encrypted application data. So you can see that TLS connections require two full round trips for a full handshake — plus, of course, the CPU resources to verify and compute all of that for the ensuing session. The good news is we don't have to repeat the full handshake in every case, and that's where TLS session resumption comes in.
With session resumption, if the client has previously communicated with the server, an abbreviated handshake can be used. That requires just one round trip, and it allows the client and the server to reduce the CPU overhead by reusing the previously negotiated parameters for the secure session — hence, TLS session resumption. By using session identifiers you remove that round trip, as well as the overhead of the public-key cryptography that's used to negotiate the shared secret key. So you can establish a secure connection very quickly, with no loss of security, because you've already negotiated the security in the previous session. In practice, most web applications attempt to establish multiple connections to the same host to fetch resources in parallel, as you'll know, which means session resumption is a must-have optimisation to reduce latency and computational costs on both sides. Most modern browsers intentionally wait for the first TLS connection to complete before opening new connections to the same server, so subsequent TLS connections can reuse the SSL session parameters and avoid the costly handshake.

There are two ways of doing it. You can use session identifiers, where the shared state is held on the server: the server assigns a session ID and caches the parameters, the client responds with the session ID, and the session can then be resumed. But that means the server itself has to store a cache of all those sessions, and if you've got a site handling a lot of users, that means a large session cache — which may be absolutely reasonable in your use case, but not necessarily. And you do, of course, have to be careful about how you expire sessions and rotate things securely. The second way of doing it, which doesn't require that massive session cache on the server end, is to use session tickets, where the shared state is on the client itself. There, the server encrypts the parameters and sends an opaque ticket; the client sends that opaque ticket back, and the server can decrypt the ticket and resume the session. So the shared state is on the client, making things far more efficient. The smart and cryptographically conscious among you will be aware that this potentially opens a security hole, so session ticket keys need to be rotated regularly to make sure security isn't compromised. In fact, Adam Langley on the ImperialViolet blog says session ticket keys have to be distributed to all front-end machines without being written to any kind of persistent storage, and frequently rotated. Here's what this looks like in practice if you send an OpenSSL request to an appropriately configured server: you'll see the session ID in part of the response, and the session ticket there at the bottom.
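As a sketch, enabling both flavours of resumption in an Nginx ssl server block looks roughly like this — the cache size and timeout are illustrative values, not recommendations, and the explicit ssl_session_tickets directive assumes Nginx 1.5.9 or later:

    # server-side shared state: session IDs cached across worker processes
    ssl_session_cache   shared:SSL:10m;    # roughly tens of thousands of sessions per 10 MB
    ssl_session_timeout 10m;               # how long a cached session may be resumed

    # client-side shared state: session tickets (remember to rotate the keys)
    ssl_session_tickets on;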
Okay, so that's great for returning visitors, but it doesn't help where the visitor is coming to the server for the first time, or where the previous session has expired. That's where we need TLS false start. TLS false start doesn't change the TLS handshake protocol itself; what it alters is the timing — the moment at which the application data can start to be sent. It makes intuitive sense: once the client key exchange record has been sent, both sides already know the encryption keys, so the client can begin transmitting the application data. The rest of the handshake is just confirming that nobody has tampered with the handshake records, and that can be done in parallel. As a result, false start, as it's called, keeps the TLS handshake down to one round trip — and it can be used regardless of whether you're performing a full or an abbreviated handshake.

In practice, though, even though TLS false start is generally backwards compatible with all clients and servers, enabling it by default has been problematic, mainly due to some poorly implemented servers. As a result, modern browsers work around it, or check that the prerequisites are in place — actually, I think I've got a little graph; I'll show you that in a second. To deploy false start: Chrome and Firefox require NPN to advertise that the protocol is available, and also require that an appropriately secure cipher suite is chosen — one that enables forward secrecy. Safari just wants that last element: a good cipher suite that supports forward secrecy. Internet Explorer uses a combination of a blacklist of known sites that break when TLS false start is enabled, plus a built-in timeout to repeat the handshake if the false start fails. In practice, what you need to do to implement it is have NPN and a good cipher suite in place.

Here's what this looks like in practice. You can see in this little graph that the top row is standard HTTP — no SSL involved at all — and you can see the response time there. The second row is a poorly built server doing SSL badly: we've essentially got three extra round trips where we don't need them. The third row is SSL properly implemented in Nginx 1.5.7. The MTU record you don't need to worry about too much — that's about tuning the size of the TLS record. The important bit is the final row, where you can see we're back down to just one extra round trip. Comparing the top row and the bottom row, the only difference is that one round trip: with false start, the overhead of implementing TLS is reduced as low as it can possibly go. In short, enable NPN and choose a good cipher suite, and you should be good to go — which means good to go.
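In Nginx terms, that boils down to something like the following sketch. The cipher list here is a deliberately short illustration of ECDHE-based, forward-secret suites — not a vetted production list; see the Mozilla wiki mentioned shortly for a proper one. Getting NPN advertised via the SPDY listener is an assumption based on this talk's setup:

    listen 443 ssl spdy;                    # with SPDY compiled in, NPN is advertised in the handshake
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;    # no SSLv2 or SSLv3
    ssl_prefer_server_ciphers on;
    # ECDHE suites first, for forward secrecy -- illustrative list only
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:!aNULL:!MD5;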
OCSP stapling. The last element at this stage is OCSP, the Online Certificate Status Protocol. That's a protocol for checking whether an SSL certificate is still valid or has been revoked. What happens is that the browser sends a request to an OCSP URL to find out the status of the certificate, and receives a response containing the validity parameters — which introduces some significant problems. One is that it compromises your privacy: you've just asked a third party whether this site is valid or not. It can potentially put a heavy load on a CA's servers, and it also adds, of course, an extra round trip. None of which you want. OCSP requires the browser to contact the CA to confirm the validity of the certificate, so the CA knows what website is being accessed and who's accessing it.

OCSP stapling, then, is a way of cutting down on all three of those problems. Essentially, the server itself queries the OCSP server directly and caches the response. That response can then be stapled — hence the term — to the TLS handshake: it becomes part of the certificate status request response. As a result, the CA servers are not burdened with the requests; the privacy issue is dealt with, because the browser no longer needs to disclose the user's browsing habits to a third party; and, of course, there's one less DNS lookup, TCP connect and response in the middle of the process. Put all those together and you've got a great thing. What do you need to bear in mind? Here's what it looks like if you implement it in Nginx — it's very simple, just a couple of lines in your Nginx configuration — and in the same OpenSSL request we showed a minute ago, this is what you'd see in the middle of the response data. There we go. Do bear in mind, if you're going to implement it, that OCSP stapling does increase the size of your certificate message, so you need to know whether that will be a problem for you.

The final element here is HTTP Strict Transport Security, HSTS. What this does is convert the origin server — your server — into an HTTPS-only destination. That eliminates the unnecessary HTTP-to-HTTPS conversion, all those redirects, and shifts the responsibility for it to the client: it takes it away from the server and puts it on the client. The client — the browser — will automatically rewrite all links to HTTPS. It does that by instructing the user agent to enforce several rules: all requests to the origin should be sent over HTTPS, and all insecure links and client requests should be automatically converted to HTTPS on the client before the request is actually sent. If there's a certificate error, the error message is thrown up in the browser and the client isn't allowed to view the site — the user can't circumvent the warning. You can also set a max-age for the policy, which you can set to something large, like a whole year, 365 days. Before we finish talking — yes, this is what you do in your Nginx configuration: you simply add the Strict-Transport-Security header.

A quick last mention of cipher suites before we try to put this all together. When choosing your cipher suites, make sure that what you're doing looks towards ensuring forward secrecy. Don't use SSL version 2 or 3; use TLS 1.0, 1.1 or 1.2. The concept of forward secrecy is quite a simple one, really: the client and the server negotiate a key right then, which never hits the wire and is destroyed at the end of the session. So with forward secrecy, if an attacker gets hold of the private key, they will not be able to decrypt past communications — hence you're secure going forward. The private key is only used to sign the Diffie-Hellman handshake, so it isn't needed to derive the pre-master secret. Do think about backwards compatibility, though, when you're doing this: there are lots of different ways of composing your cipher suites, ensuring more or less backwards compatibility. For help, there are plenty of websites which will tell you good cipher suites to use. Mozilla keeps a wiki tracking the latest that you need to know for implementing TLS; they offer a really good backwards-compatible cipher suite, but it is huge. The bottom link there will give you an alternative.

So, to summarise the SSL/TLS stuff, here's a checklist of what you might want to do to achieve one round trip time: implement false start, and that gives you one round trip for new visitors; implement session resumption, and that gives you one round trip for returning visitors; and implement OCSP stapling, so there's no OCSP lookup blocking the request.
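The "couple of lines" for OCSP stapling, plus the HSTS header, would look something like this sketch — the certificate-chain path and the resolver address are assumptions for illustration:

    # OCSP stapling: Nginx fetches and caches the OCSP response itself
    ssl_stapling        on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/ca-chain.pem;   # hypothetical CA chain, used to verify the stapled response
    resolver 8.8.8.8;                                      # so Nginx can resolve the CA's OCSP URL

    # HSTS: tell the client to use HTTPS only, for a whole year
    add_header Strict-Transport-Security "max-age=31536000";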
So, we're going to look now at SPDY itself. Okay, some brief history. The first documented definition of HTTP was version 0.9, in 1991. Version 1.1 was worked on through the mid-90s; many browsers were HTTP 1.1 compliant before it was agreed as the standard in June of 1999, and 1.1 is the version that dominates internet traffic today. By the middle of the last decade, rich media had become such a significant feature of websites that it became clear HTTP 1.1 was inadequate for the modern web, and people started thinking about its successor. In November 2009, Google published a project they'd been working on, aimed at making the web twice as fast, which they called SPDY. Since then, SPDY has been developed substantially as a protocol, and the current version, 3.1, is substantially different from what was originally published. As of July 2012, the group working on SPDY has said it's working towards standardisation, and the first draft of HTTP/2 takes SPDY as its basis and works forward from there. In a similar manner to HTTP 1.1, while HTTP/2 is being developed, the early version — that is, SPDY — is being deployed, and most modern browsers now support the SPDY protocol, with some notable exceptions we'll mention in due course. Some of the best-known users of SPDY at the moment are Google, Twitter, Facebook, MaxCDN and CloudFlare. There are plenty of others, of course — too many to mention, really.

The goal of SPDY is to reduce web page load time, and it achieves this in three primary ways. It allows the client and server to compress the request and response headers, to cut down on bandwidth usage. It adds a session layer between HTTP and SSL that supports concurrent, interleaved streams over a single TCP connection. And it allows the server to actively push resources to the client that it knows the client will need, without waiting for the client to request them. SPDY requires the use of SSL and doesn't support plain TCP.

There are some advantages on the server side too. Compared to HTTPS, SPDY requests consume fewer resources — CPU and memory — on the server; compared to plain HTTP, SPDY consumes less memory but a bit more CPU. That may be a good thing, a bad thing, or completely irrelevant to you, depending on which resource your server is limited by. All of these benefits depend on the network and website deployment conditions, though.

Then there's browser support. Most good modern browsers support SPDY, but not all of them. For example, Safari doesn't support it at all in version 7, IE 11 has only partial support, and earlier versions don't support SPDY at all. The next version of Safari, the one packaged with Yosemite, will apparently — by all reports — support SPDY. There's bandwidth and round trip time to take into account as well. SPDY's benefits have been found to be larger when there's less bandwidth and longer round trip times, because round trip time and bandwidth determine how much of the page load time is spent in the network relative to computation. SPDY provides minimal improvements under good networking conditions. The biggest point here is mobile browsing, because mobile networks usually aren't that great, and getting the most out of the bandwidth there is a good thing. Multiple origins: SPDY can multiplex resources from the same origin, but most websites' requests and responses are spread across multiple origins, so we lose a bit of the impact that SPDY has on our site. Browser processing: once the browser receives the page resources from a SPDY-enabled server, it must still process them, and a slow browser will limit the gains from SPDY traffic. Then there's packet loss. If packet loss is high, SPDY may actually hurt the situation: a single connection, as in SPDY, suffers significantly under high packet loss, because it aggressively reduces the congestion window, whereas HTTP reduces the congestion window on only one of its parallel connections. However, packet loss occurs more often when concurrent TCP connections are competing with each other, so SPDY's approach of multiplexing on fewer connections may actually help mitigate packet loss. And getting SPDY to work with Nginx is actually as simple as compiling it with the HTTP SPDY module — you're done.
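As a sketch of what that buys you — assuming Nginx built with --with-http_spdy_module and OpenSSL 1.0.1+ for NPN; the header compression level shown is an illustrative value:

    server {
        listen 443 ssl spdy;     # SPDY offered alongside HTTPS; non-SPDY browsers fall back to plain TLS
        spdy_headers_comp 5;     # compress request/response headers (0-9)
        # ... certificates and the TLS settings shown earlier ...
    }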
Then, getting it all together: the SPDY sandwich. Having told you all that, we finally get to the thing we said we'd propose as a way of doing all this. It's called the SPDY sandwich, and it's not our name, I have to admit: I first heard about this idea from a guy called Barney Hanlon at DrupalCamp London, back in March I think it was. He did a hand-waving kind of thing, and we took it away as an idea, to see what we could actually do with it and whether it was as good as he suggested.

The idea of the SPDY sandwich is this. The original request comes in and passes to a front-end Nginx, which has four tasks. The first task is what we've been talking about: it does the SPDY bit and the SSL termination. I don't think you mentioned it — you did mention it, great — one aspect of SPDY is that it has to be over SSL. The front-end Nginx also does the static file caching, because, as we said right at the beginning, it's brilliant at that: Nginx can push out static assets unbelievably fast. If we give this Nginx access to the docroot, it can handle those static files probably even faster than Varnish could get them out of RAM, because it's got access to the file handles themselves — maybe not on the first request, but because it's caching them, by the third, fourth, fifth request to the same image, certainly very, very fast. It should also do the gzipping on the front end, so there's no need for Varnish itself to handle gzipped content — just let the top layer deal with that — and then help Nginx do the caching with some good PageSpeed settings. PageSpeed is user-agent aware, so it can make sure the right version of what's being requested goes out to the right user agents.

That then passes on to Varnish, the middle of the sandwich, which does what Varnish does best: it caches the dynamic pages, and it does it blindingly fast. It can also do some cookie normalisation — Varnish is very good at normalising cookies, so that's a good job for it.

The final bread layer in the sandwich — I'm English, we like our sandwiches — is another Nginx, and that one does the dynamic pages. It talks to PHP-FPM at the back, and you might want to do some general PageSpeed optimisations there too. You may not have to, but you might want things like collapsing whitespace — some very simple, elementary things like that. It then pushes on to PHP-FPM at the back, which is running your Drupal application — which of course doesn't have to be PHP; it can be whatever back end that Nginx is talking to. That's the idea of the SPDY sandwich.
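Here's a sketch of the two bread layers. The ports, paths and the Varnish listen address are assumptions for illustration; Varnish sits between the two and is configured separately, and the PageSpeed settings are omitted to keep the sketch short:

    # front-end Nginx: SPDY + SSL termination, static files, gzip
    server {
        listen 443 ssl spdy;
        server_name example.com;                       # hypothetical
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        # ... session resumption, OCSP stapling, HSTS and ciphers as above ...

        gzip on;
        gzip_types text/css application/javascript application/json;

        root /var/www/drupal;                          # same docroot as the back end
        location ~* \.(?:css|js|jpe?g|png|gif|ico|svg)$ {
            expires 30d;                               # static assets served here, never reach Varnish
            access_log off;
        }

        location / {
            proxy_pass http://127.0.0.1:6081;          # Varnish, assumed to listen on 6081
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;  # so Drupal knows the original request was secure
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    # back-end Nginx: dynamic pages only, talking to PHP-FPM
    server {
        listen 127.0.0.1:8080;                         # Varnish's backend address, assumed
        root /var/www/drupal;
        index index.php;

        location / {
            try_files $uri /index.php?$query_string;   # clean URLs for Drupal
        }
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;   # hypothetical socket
        }
    }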
Here, just to round off, we get to what we think the payback looks like. A few screen captures for you. These are all requests to exactly the same server, but served in different ways. This one is a no-SPDY sandwich — an SSL sandwich doing exactly the same job, but with no SPDY on the front. You can see here we've done all the stuff we've talked about so far, with the TLS optimisation, so the SSL connection on the very first row is as small as it can be, but then all the other page elements flow in one after another in a classic waterfall. Implement SPDY, and that changes: they come in almost parallel with each other — completely parallel. You see the almost vertical line there: as soon as the SSL connection has been dealt with, all of the rest of the page assets come in dramatically. The blue line on the right-hand side is the page-complete point, which on the previous capture — sorry, at the right-hand edge of the chart — was roughly 1.9 seconds. With SPDY, that comes down dramatically to 1.5 seconds, so we've shaved off 20% or so. Implement the full SPDY sandwich, with all the goodness we've talked about, and what you end up with is this: you can see far fewer assets being passed across, and all the SPDY goodness. Everything is collapsed down as much as it possibly can be, PageSpeed is doing a great job of pushing everything together, there's full-on caching from Nginx, and the page load time comes down even further — we're just marginally over a second here, on a completely uncached browser making this request.

We've done some load tests as well. It's quite hard to do load tests, because SPDY is such a new protocol that load testing with it isn't easy, but we've done our best. So, a ramp test here, adding five concurrent users every 30 seconds, all the way up to 50 concurrent users. You can see there what the test looks like as it runs: the number of simultaneous hits per second maxes out at, what's that, just over 350, isn't it? With the no-SPDY server — I'm afraid these next couple of graphs aren't on the same scale — you can see that this highly optimised server (Olli's bought himself a massive server; he just likes showing off, boys with toys), the SSL, or TLS, sandwich, is very, very fast, but there's quite a bit of fluctuation in the page response time. Implement the SPDY sandwich and it's much, much more consistent, and the average page response time is less than a third of a second — in fact, it's only just over a quarter of a second on that ramp test. To show this as distributions: without SPDY, 95% of requests take just over half a second; with the SPDY sandwich, 95% of requests take 0.4 of a second. So we're shaving off 15-20%, which is astounding considering we're doing everything over SSL.

So, just to summarise: we've talked about how we think using the SPDY sandwich can make your sites superfly. We talked about why we think Nginx is such a great tool — our go-to tool for very, very fast websites. Why Varnish should be implemented, because it's a stupidly fast reverse proxy cache. Why everything should be on SSL and TLS these days — it's vitally important, and it should be on by default everywhere. Why SPDY is the vanguard of the next generation of HTTP, what Google are doing, and how it's been taken up by the HTTP working group. And then, finally, a proposition for putting it all together in the SPDY sandwich.
So there we go. Thank you. Questions, yeah. There's a microphone there — does someone mind passing that microphone across? Maybe not on the stand — sorry, I didn't realise it was wired in. Just talk really loudly so everyone can hear.

Okay, I haven't actually looked that much into microcaching. Do you want to repeat the question? The first question was: why not use microcaching instead of Varnish? To be honest, I haven't tested microcaching enough to give any meaningful answer to that, so I don't know — it might be better. As for the SSL keys — can you actually repeat the question? [inaudible] True, that is a big problem. I don't actually see that getting fixed any time soon. Is there a complication in doing multiple [certificates]? No, actually — you can use Nginx as a wildcard terminator and just proxy everything to Varnish. No, we're talking TLS, so it remains on one IP. And whether any of them are supposed to see a conflict with that or not — I'm not aware; I've not heard anyone talk about it. I don't know.

Question: I'm wondering, why would you have Nginx do the gzipping? Whereas if you did it with Varnish, you'd cache it gzipped and serve it straight out of the box. That's a good suggestion. Actually, I think in our latest test box we were doing the gzipping in Nginx before Varnish, so Varnish would get the gzipped content to serve out. Oh, so you gzip it on the back end? Yeah — I think that was the latest version of our test.

Other questions? At the back. Yes. Okay: how do we detect whether a browser supports SPDY? Yeah, of course — the initial handshake has the NPN alternative protocols in it, so we're saying that we support SPDY and that we'd prefer you to use SPDY. If the browser doesn't understand SPDY, it'll just fall back and use regular SSL. So basically, with this setup, anyone who can use SPDY will use SPDY. No — if you want to do anything like this in the mainstream, you use SPDY for now; HTTP/2 isn't far enough developed yet, but it's going forward. In that case, thank you very much for coming.

[inaudible question about separating out the layers] What I'd do is — why would you have the initial NPN and SSL layer, then Varnish, and so on? Yep, so it would depend on — and then have Varnish and load balancing, yep, and then just go back to the Nginx back ends — yep, spread across all the other back ends.

Is there any configuration where you would reuse the same front-end Nginx instance? It just seems a bit — I understand why you're doing this: there's a nice separation of operations, and you can configure each Nginx to do its job well. But if you're running, say, a standard installation, is there any way you would reuse the same Nginx instance to do both the front-end caching as well as the back end? On a single server, is it essentially the same instance? Yeah — it's just two different server definitions.