All right, looks like we can start. So welcome to the HTTP/2 at scale session. My name is Nick Shadrin, and in this session we will talk about this new version of the HTTP protocol, about its use cases, and about how and when to use it, and when not to. First of all, a little bit about myself. I work for NGINX in San Francisco as a technical solutions architect, and I often deal with lots of our users and commercial customers, figuring out the technical questions and challenges that current internet users have, and figuring out the solutions for their environments. I have quite a long experience in web technology: basically, since I connected my computer to the network, I have been dealing with different forms of websites, first of all creating the websites, securing them, sometimes figuring out how to make them work faster, and all the reasons around it. My contacts are right there: it's nick@nginx.com, or you can tweet me @shadrin. I used a lot of different links to different resources in this slide deck, and I collected all those links on one page, so I will leave this slide with the QR code up after the presentation as well. Don't bother taking pictures of every slide, because the slide deck is also available on the conference website. All right. In the first part of the talk, we will see the difference between the HTTP/1 and HTTP/2 protocols. We'll talk about what features the new version of HTTP gives us, and what kind of new enhancements and performance benefits you can get immediately from using it. We will review the HTTP/1 and HTTP/2 optimizations. Most of us who wanted to make our websites work faster have already implemented some features for HTTP/1, so we'll see how an immediate move to HTTP/2 can either benefit or degrade performance, depending on which optimizations you use in your old-school HTTP/1 deployment.
We'll also see some very interesting ways of troubleshooting the HTTP/2 protocol. It does have some challenges there; it's not as straightforward as HTTP/1 troubleshooting. In the next part we'll get to a very interesting section, the benchmarks. Everybody can make their own benchmarks, and there are a number of different benchmarks available comparing HTTP/1 and HTTP/2. Obviously, I made my own benchmark, and I will tell you why mine is better than everybody else's. Then we'll talk about how to configure all of this with the NGINX web server: which features to enable, and what kind of configuration and log items to expect. All right. First, a little bit of HTTP history. It's been a while since HTTP received a major update. The first draft appeared in the late 80s, beginning of the 90s. With version 0.9, there was a very simple way of accessing an HTML page using a very simple GET request to the URL. There was no concept of keeping stateful connections, no concept of different resource types, caches, or all the other optimizations that we currently have. In 1996, HTTP version 1.0 was finalized with some of those enhancements. But what we are going to compare with HTTP/2 today is HTTP/1.1. The major features of HTTP/1.1, added compared to 1.0, are keepalive connections and extended abilities to control and manipulate caching. There are also a number of other features, but performance-wise, keepalive and caching are the important ones for us. In 2015, less than a year ago, HTTP/2 was finalized. HTTP/2 was based on the open protocol called SPDY. So currently we have a proper standard: not some protocol developed by a set of companies, but a properly defined standard version 2 of the HTTP protocol. So let's take a look at an example request and an example response in HTTP/1.1.
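Since the slide itself is not reproduced in this transcript, a minimal HTTP/1.1 exchange looks roughly like this (an illustrative sketch, not the exact slide content):

```
GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1270

<html> ... page body follows ...
```

Note that every request and response carries these headers as uncompressed plain text, which is exactly the overhead that HTTP/2's header compression, discussed next, addresses.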
First of all, who is familiar with HTTP headers and HTTP requests? All right, so I guess we can just skip this slide; looks like everybody knows HTTP/1.1 well enough. So let's take a look at the predecessor of the HTTP/2 protocol, which is called SPDY. It was announced in 2009 by Google, and it became very popular with the implementers of web servers and web browsers. So SPDY is well supported, and the major idea of the SPDY protocol was to reduce page load time and to make performance enhancements over the HTTP/1.1 protocol. The major features of SPDY included compressed headers, flow control with multiple streams of data, and server push. With SPDY, header compression was done with the gzip algorithm. We all know that HTTP/1.1 can also compress data, just as SPDY can, but HTTP/1 cannot compress the headers. And when you have smaller requests, a large number of requests, or a lot of headers flowing through the network, having that data uncompressed sometimes degrades your performance. HTTP/2 was introduced in 2015, and it's largely based on the SPDY protocol. The difference between SPDY and HTTP/2 is in the method of compressing the headers: HTTP/2 does not use gzip compression, it uses HPACK compression. Basically, it builds a map of the headers and their data on both the server side and the client side, so the compression becomes stateful. An implementation of HTTP/2 with the proper HPACK algorithm does use significantly more memory, because the compression state needs to be stored on both sides, both the server and the client. The multiple streams of data allow you to put different requests inside the same TCP connection, and the data for the different requests and responses flows through that connection at the same time.
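The stateful map the speaker describes can be illustrated with a toy sketch. This is not real HPACK, which is defined in RFC 7541 with a static table, Huffman coding, and strictly bounded table sizes; it only shows why repeated headers become cheap once both sides share state:

```python
# Toy illustration of the stateful header-indexing idea behind HPACK.
# Both peers build identical tables, so after the first request a
# repeated header costs one small index instead of a full name+value.

class ToyHeaderTable:
    def __init__(self):
        self.table = []  # shared state, mirrored on the other peer

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(("index", self.table.index(pair)))  # tiny
            else:
                self.table.append(pair)
                out.append(("literal", pair))  # full bytes on the wire
        return out

    def decode(self, encoded):
        headers = []
        for kind, val in encoded:
            if kind == "index":
                headers.append(self.table[val])
            else:
                self.table.append(val)
                headers.append(val)
        return headers

sender, receiver = ToyHeaderTable(), ToyHeaderTable()
req = [(":method", "GET"), (":authority", "example.com"), ("cookie", "id=42")]

first = sender.encode(req)   # table empty: everything is a literal
second = sender.encode(req)  # repeat request: all small index references
assert receiver.decode(first) == req
assert receiver.decode(second) == req
assert all(kind == "index" for kind, _ in second)
```

This also makes the memory cost the speaker mentions visible: the table has to be kept alive per connection on both ends for the indexes to stay meaningful.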
So basically, we all know that in HTTP/1, the browser tends to open multiple connections to the same server in order to download images, CSS files, and other resources in parallel. With HTTP/2, that is performed within one connection, and the browser implementers do not open several connections to the same host. Everything is done within one, and it is done with prioritization. HTTP/2 allows the browser to send priority information with every stream of data, so the browser can indicate which data it needs the most, which data should come with higher priority, and which data is less critical for the user to start interacting with the page. HTTP/2 also includes the server push feature. Server push allows you, as a website builder, to send data to the client without the client's prior request for that data. Basically, you will be sending the request and response together in the response: you tell the browser what it should have asked for. It makes total sense when you have high-latency networks. For example, when someone asks for the index.html page, you already know that they will also need javascript.js and style.css, so you can push that information to the client. You can also use it to make your pages more interactive, for example when you need a notification pushed to your page, or for interaction data like a chat-style workflow where data flows in both directions. So it can be used as a way to work around WebSocket, as another method of sending data in both directions. All right. There is an interesting question about encryption and HTTP/2. If we look into the specification, the standard doesn't require the connection to be encrypted. However, no web browser implementers have implemented the use of non-encrypted HTTP/2 traffic, so every browser will only support HTTP/2 when the connection is done with TLS.
So due to this particular need for everything to be encrypted, we at NGINX also included only the encrypted version of HTTP/2. There is an interesting question of how the browser knows how to initiate the connection; there are several ways of doing that. The first way of switching the protocol from HTTP/1.1 to HTTP/2 is to send an Upgrade header: basically, the client will ask to upgrade the connection to HTTP/2 and to send the data in the binary format. Another way of doing that is called NPN, Next Protocol Negotiation. It is a TLS extension, implemented in OpenSSL, which allows the client and the server to negotiate which protocol they are going to use for that particular connection. That negotiation is done early in the TLS handshake, so we can avoid the use of HTTP/1 completely in that connection. There is also another way of performing the negotiation, called ALPN, Application-Layer Protocol Negotiation. The difference between those two extensions is that ALPN saves you an additional round trip in negotiating the protocol: in the first handshake, the client sends the list of protocols that it supports, and the server picks HTTP/2 from the list and starts sending data to the client. In NPN negotiation, the server announces the list of protocols, the client picks one, and then the connection starts. There is a difference in the OpenSSL versions that support NPN and ALPN. NPN is supported in OpenSSL 1.0.1, which is currently used in most long-term-support and, I would say, enterprise-friendly, well-supported Linux distributions. However, ALPN negotiation is supported from OpenSSL version 1.0.2 onwards, and that version is not included in some of the very popular Linux distributions currently. Obviously, we expect that to change soon, so ALPN will be supported everywhere.
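Since the talk leans on ALPN support in OpenSSL, it may help to see the client side of the negotiation. Python's standard ssl module exposes it directly; this is a minimal sketch, with example.com as a placeholder host:

```python
import ssl

# Offer HTTP/2 ("h2") and HTTP/1.1 in the TLS ClientHello. ALPN carries
# this list in the first flight and the server's choice rides back in
# the ServerHello, so no extra round trip is spent on negotiation.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # needs OpenSSL 1.0.2+

# After a real handshake, the socket reports what the server selected:
#
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.selected_alpn_protocol())  # "h2" if negotiated
#
print("ALPN available in this build:", ssl.HAS_ALPN)
```

If the server never selects "h2", the client simply proceeds over HTTP/1.1, which is the fallback behavior described later in the talk.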
If we talk about Ubuntu Linux, version 15.10 supports OpenSSL 1.0.2, and version 14.04, the long-term support (LTS) version, does not, so that somewhat limits us in some implementations of the protocol negotiation. Another thing to mention is that NPN negotiation is supposed to be removed from browser support pretty soon, along with support for the SPDY protocol. So we need to align our servers and our infrastructure to work with the current versions of the web browsers. If the web browsers are dropping SPDY support, we will have to switch to HTTP/2 at some point; when the browsers remove NPN support, we will have to switch to versions of our server software that support OpenSSL 1.0.2 and ALPN. All right. Next, let's go through a number of different optimizations that we can perform with HTTP/1, and let's see how those optimizations affect the website if we are switching from HTTP/1 to HTTP/2. The first optimization is called domain sharding. We know that the browser opens a number of connections to the same host in order to download files simultaneously, but it opens only up to six connections per host. Sometimes, when we want the browser to download more resources at the same time, we put our resources on different subdomains, or on a set of completely different domains like www1, www2, and so on. That way, the more domains we implement, the more connections the browser will open, and potentially, sometimes, I would say, it becomes a little bit faster. [An attendee points out that the slide reads "domain Shadrin" instead of "domain sharding."] Thank you, that was on purpose. Yes, I am told that all the time, especially when I deal with database sharding. So basically, what we're doing with domain sharding is distributing the resources across multiple domains and making the browser download those resources at the same time. And you can already see that it doesn't help with HTTP/2.
It does not, because opening multiple connections messes up HTTP/2 priorities: sharding at the network level is never going to be as intelligent as prioritization done by the browser. Making the optimization visible to the browser, so that the browser can choose which resources to download first and with which priorities, makes significantly more sense. So if you are using domain sharding and you are switching to HTTP/2, it makes sense to simplify it and go with the same CDN domain, or even just one domain for everything. It also somewhat depends on the geographical distribution of the website and the use of a CDN. Another optimization that many people implement to make HTTP/1 websites faster is image sprites: combining multiple images into one large image file, which on the client side you can easily carve into different smaller pictures and pictograms; that's pretty easy to do with client-side technologies. So does giving it up hurt with HTTP/2? Well, a little bit. If you start sending multiple requests instead of one larger request, it does not affect performance in HTTP/2 as much as it did with HTTP/1, because of the compressed headers, because we're using the same connection, and because we're sending multiple streams within that connection. Another optimization is concatenating JavaScript and CSS files, and it is very similar to image sprites. What you're trying to avoid there is sending the same set of headers all the time, and you want to reduce the number of round trips on your website. Exactly as with image sprites, if you are using HTTP/2, there is not much effect in doing that. And there is a downside to using those optimizations.
All of those optimizations add to your DevOps time: you need to create those subdomains, manage those files, and create deployment scripts and scenarios that are more complicated than if you were using the HTTP/2 protocol without those complicated optimizations. So let's look at the current statistics for HTTP/2, because we need to know whether the web and your clients, the browsers, are ready to support this protocol. Since we do have an internet connection here, we'll just go to the live website instead of the screenshots. We're looking at the Can I Use website, and this is the current statistics for HTTP/2 protocol support in different browsers. What we can see is that the major browsers, Microsoft's latest browser Edge, Firefox, Chrome, and also Opera, all support HTTP/2. Some browsers don't support it. The old versions of Internet Explorer, which actually have very low usage, do not support HTTP/2; version 11 supports it, but only on the Windows 10 operating system. The major market share, according to the Can I Use website, is with Chrome, and it supports HTTP/2 really well, except for some older versions. Obviously, the older versions of browsers are going away, so that percentage is going to increase. Safari supports HTTP/2 on the latest Mac OS, and the old versions of Safari don't. If we go to iOS, it's the same thing here, and since we all know that iPhones are updated frequently, and users keep up with the latest versions of iOS better than with other mobile systems, the share of non-supporting versions is expected to drop. If we look at the Opera Mini browser, that one does not support HTTP/2. But basically, Opera Mini includes its own set of optimizations: it acts as a proxy, changing the content and making a bunch of its own changes itself.
So basically, this is not significantly relevant to our set of optimizations, because they include their own. By the way, Opera Mini operates in a similar way to the UC Browser, which is used on many devices in China and some other Asian countries. The infrastructure behind that browser also acts as a proxy with its own set of optimizations, so the direct use of HTTP/2 is not relevant to that browser either. And if we look at the old Android browsers, those do not support HTTP/2 either, but that share of browsers is supposed to shrink significantly, since users are updating to newer phones all the time, and all the newer phones with the new Chrome for Android support HTTP/2 properly. All right, if we look at the overall stats, we see that about 70% of your clients can be expected to support HTTP/2. And if we take out this 8% and this 5%, we're basically in very good shape for including HTTP/2 in your general website infrastructure. And please remember: in all the implementations I'm aware of, if you enable HTTP/2 there is backwards compatibility, since if the web browser is not able to connect through HTTP/2, it will most probably connect with HTTP/1. There is about one use case where it won't, but it's a very fringe use case. So let's get back to our slides. Instead of showing screenshots, we looked at everything online. The next page is the HTTP/2 usage statistics from the W3Techs website. We can already see that HTTP/2 is used on more than 6% of websites according to W3Techs, and that percentage has grown significantly just in the last couple of months. Basically, when I was submitting this talk to this event, we were at about 2% or 3%, somewhere in that range, and now it has basically tripled.
If we look at the historic trends on W3Techs, here on this page, we'll see that HTTP/2 is the fastest-growing site element at this point; basically, everything else is staying on a flat line. And if we compare the HTTP/2 growth with SPDY, we'll see that SPDY currently sits at about 6.6% of websites, while HTTP/2 already has 6.2%, having started only in July 2015. So this is very significant growth, and if we implement it today in our environments, we are staying on that growing curve, which should be really good for us. There might be something good in this protocol. Let's go back to the slides. Or maybe not; maybe not everything is good about this protocol. First of all, there are a number of downsides to HTTP/2. I want to mention that not everybody needs to secure every particular request on every page. If your website mostly consists of cat pictures and funny videos, maybe encrypting every bit of that data is not really required. Also, if your website mostly does uploads and downloads of larger files, the HTTP/2 optimizations don't affect it that much. And maybe you just don't care about your website working a little bit faster. So sometimes it doesn't make sense to implement this protocol. And there is one huge downside of the HTTP/2 protocol: it's a little bit harder to troubleshoot. Remember, we looked at an HTTP/1 request and response, and everybody was pretty much familiar with it. Everybody knew what it meant; it's very readable; you can browse with telnet; everything is fun and easy. However, if we look at HTTP/2 traffic, encrypted or even decrypted, it is way harder to understand. You won't be able to perform HTTP/2 browsing from your telnet command line. But even if we look at the encrypted traffic, and this is just the beginning of the TLS handshake, we can already see a few things. We can see the client, my Chrome browser, announcing that it supports HTTP/1, SPDY/3, and HTTP/2; h2 is the protocol identifier here.
And we can also see that the server responds with the information that it works with HTTP/2 and with HTTP/1.1. So even though this handshake looks completely unreadable, there is a bit of information we can extract from it without decrypting anything. Now, what if we want to decrypt our browser traffic, look inside the HTTP/2 protocol, and see all of those streams, frames, headers, and all the other information? There is a way of doing that with Wireshark. I really like Wireshark as a troubleshooting tool: it gives me a lot of information on all the traffic going through my system, and everything is really easy. There is a way to decrypt your own browser traffic without knowing the private keys of the servers you're interacting with. The original way of decrypting SSL traffic in Wireshark involved loading the private key into the special Wireshark settings. Well, basically, I never liked it: it always meant that I would have to take the private key onto the client system, and that doesn't sound like fun. This way is significantly more fun. You just need to set the SSLKEYLOGFILE environment variable and open the browser with that environment variable set. Then we need to point the SSL settings of Wireshark at that session key file. Once you do that, your traffic becomes very readable. The new versions of Wireshark, starting, I think, with version 2 (currently they're at 2.0.1 or somewhere along those lines), support the HTTP/2 protocol, so you can see the HEADERS frames, SETTINGS frames, DATA, and all the other information there. This is definitely something that makes sense to research a little further: you will be able to see all the headers coming through, all your cookies, cache information, and so on.
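The workflow just described can be sketched as a few steps; the file location is an arbitrary choice, and both Chrome and Firefox honor the SSLKEYLOGFILE variable:

```shell
# 1. Launch the browser with the key-log variable set; it appends the
#    TLS session secrets to this file as you browse.
SSLKEYLOGFILE="$HOME/tls-session-keys.log" google-chrome

# 2. Capture the traffic as usual, e.g. with tcpdump or live in Wireshark.

# 3. In Wireshark: Preferences -> Protocols -> SSL ->
#    "(Pre)-Master-Secret log filename", and point it at the same file.
#    The HTTP/2 HEADERS, SETTINGS, and DATA frames then show up decrypted.
```

Because only session secrets leave the browser, the server's private key never has to touch the client machine, which is the whole point of this approach.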
So for the troubleshooting part of HTTP/2, I definitely recommend this approach. All right, let's get to the interesting part: the benchmarks. Everybody knows how to make their own benchmarks, and everybody makes the best ones. At nginx.conf 2015 in September, we had a talk on HTTP/2 by Valentin Bartenev, one of our core developers. That talk had a bit of a pessimistic view of the protocol. What we looked at was a set of different tests with a pre-generated page, and those tests showed something quite interesting: where is the benefit of the protocol, really, if we look at these graphs? On the horizontal axis, you see the latency to the web server, basically your network delay; on the vertical axis, you see the time until the page starts appearing in your web browser. So at a 300 millisecond delay, you expect the page to appear in between two and two and a half seconds. The difference is not that significant between unencrypted HTTP/1, HTTP/2, and encrypted HTTP/1: the blue bars are HTTP/1, the green ones are HTTP/2, the yellow ones are HTTPS with HTTP/1. If we put that on a different scale, the graph looks like this: basically, we can see that HTTP/2, the black line, has only a tiny bit of benefit over the unencrypted protocol, and the benefit compared to HTTPS is also not that huge. So we started figuring out what the deal is here: why people are so excited about it, and when it works and when it doesn't. So I did my own benchmark. My benchmark used NGINX 1.9.9 on Ubuntu 15.10 with OpenSSL 1.0.2 and ALPN negotiation, and I used the Chrome browser. What I did was put my page in a constant reload with no caches enabled, to see how fast the page would reload. Then I started to figure out what kind of page I should use for benchmarks. I'm pretty sure no two of us have exactly the same page on our websites.
They are completely different: we're using different client- and server-side technologies, and some of our pages are extremely simple, like my home page, or extremely complicated, like the NGINX corporate website, nginx.com. So your mileage may vary. What I did was take the benchmark page from a current free CSS template, and I chose one that looks reasonably modern: it uses some jQuery JavaScript, lots of CSS, proper markup. I added a few more images, so the whole page has a total of 54 different objects. I figured that looks like a pretty reasonable setup; maybe your websites have more objects than that, maybe fewer than 54. So let's have an interesting show of hands here: do your web projects usually have more than 54 objects? And fewer than 54 objects? OK, about the same number of people, so we'll consider this to be something like a median page. All right. So I started measuring HTTP/1, encrypted HTTP/1, and HTTP/2 for the same page under constant reload. What we are looking at here is a very interesting set of results. For this test, I disabled keepalive, so every HTTPS connection required the full TLS handshake. And when latency grows, we found that it affects the performance of the web page very significantly: at a 200 millisecond delay, our page loads in more than 12 seconds on average, and when our delay is around 20 milliseconds, we are somewhere at a one or two second page load time. The next thing I did was enable keepalive and run the same test again; once again, it was a constant reload of the page. With keepalive connections, we take away the initial SSL/TLS negotiation from both the HTTP/2 and the original setups. You can see that the latency here goes up to 800 milliseconds.
Right there at 800 milliseconds, we originally saw a more significant difference between the protocols, so I went ahead and increased my latency up to one second. What was quite interesting at the one second delay is that the benefits of HTTP/2 seem to shrink. So if your network delay is extremely high, let's say you have satellite connections, or most of your clients are coming from a completely different part of the world, or you're using 2G connections or something like that, you might see that even though the benefits exist and are still quite substantial, they are not as noticeable as the benefits at lower delays. What I did next was divide one by the other, and I got this unusual graph, which is the percentage benefit of HTTP/2. It looks very interesting: once again, for the page that I was using in this benchmark (because your page will be different), the best results for HTTP/2 were at about a 250 to 300 millisecond delay, which is a quite reasonable and very much real-world network delay. So that's the very interesting set of benchmarks, and this is the graph which I think shows that my benchmark is the best. All right. By the way, if you want to argue about this setup, I will be hanging out here for the rest of the day, and I will be happy to engage in any technical conversation or show you how the benchmark works. So let's go into a set of practical slides about implementing HTTP/2 for your websites with NGINX. What we need to do is take the latest version of NGINX, which is currently 1.9.9, and build it with the --with-http_v2_module and --with-http_ssl_module configure parameters. It will technically compile without SSL, but how would you use that, since no browser supports unencrypted HTTP/2 anyway?
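To make this concrete, a minimal server block might look like the following sketch; the server name and certificate paths are placeholders, and the listen parameters are exactly the ones discussed next:

```nginx
server {
    # Adding "http2" next to "ssl" on the listen directive is all it
    # takes; clients that cannot negotiate HTTP/2 via ALPN simply fall
    # back to HTTP/1.1 on the same port.
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    # Harden the SSL protocols/ciphers per the security-track talks;
    # the defaults here are intentionally left out of scope.

    root /var/www/html;
}
```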
When you are using the pre-built packages of NGINX, or someone has built it for you, you can check with nginx -V (capital V): you will see the configure arguments, and if you see the http_v2_module there, it means you can use it. The setup, your NGINX configuration for HTTP/2, is extremely simple: in your listen directive, where you have your port number, you need to add the http2 parameter together with the ssl parameter. Basically, that's pretty much it. For the SSL certificate and keys, you should definitely use more secure settings than are shown in this example; the security aspects of having proper keys, protocols, a perfect forward secrecy setup, and so on are a little bit outside the scope of this talk. I definitely recommend looking through the slides and presentations from the security track: many of the presenters used NGINX as an example, and they showed very nice configuration snippets, SSL parameters, and other methods of making your site secure. So, definitely a great track to revisit if you haven't already. When clients come to your website, you will need to figure out how many of them are in fact using an HTTP/2 connection, or whether they are using it at all. In your logs, the $request variable will show GET and POST requests with HTTP/2.0, so you will be able to find all that information in the logs, and you can parse those logs easily to figure out the percentages of your traffic and see how your own usage of HTTP/2 is growing in the wild. And there is also another tool for that: we are in the process of building a monitoring system for NGINX, which is called NGINX Amplify. Basically, you can connect your running NGINX instances, with a small agent, to the Amplify SaaS-based performance monitoring tool. It will give you configuration recommendations, and it will give you a number of different graphs.
It will also give you the HTTP version graph, which shows how many users are coming to your website with HTTP/1 or HTTP/2. A very useful tool; you will find the link to it in the link set that I will show at the end of the presentation. As for the useful tools you can use for implementing this protocol: you can monitor the usage of different web technologies with the Can I Use website; I found it very useful for figuring out what I should use and what I shouldn't. There is a great tool for encryption. Well, not all of us have corporate certificate authorities and access to expensive SSL certificates, and Let's Encrypt gives you the ability to generate free, browser-trusted SSL certificates for your website. They have three months' validity, but they are accepted by the major browsers, which is a very, very big deal, and very useful for smaller developer projects and so on. And the WebPageTest tool gives you diagrams of how your page loads in different browsers from different parts of the world. Obviously, you can test your website from your own location, but having the flexibility of that tool checking your web page performance from other places is very important, so I definitely recommend using it. A little bit about NGINX: you can contribute to the NGINX project at hg.nginx.org, where you can find and download all the current source code. You can write to the developer mailing list or the users' mailing list. You can also just grab the current NGINX sources from GitHub; it's a read-only mirror. And if you don't like or don't want to code but still want to contribute, there is a wiki: like all modules and every project in the world, we need better documentation. The last link shows an interesting promotion that we have for the SCALE conference: we are giving out non-production developer licenses for the commercially available NGINX Plus with the commercial features.
So if you want to test out that software without the restrictions of a free trial, you can do that on a yearly license for non-production use. I will leave this slide on the screen again, and if you have any questions, please ask them. I also have some NGINX stickers, so please go ahead. [Question from the audience.] It does enable HTTP/1 as well. Let me repeat the question: when we enable HTTP/2 in the listen directive, does it limit connections to HTTP/2 only, or does it also enable HTTP/1? Yes, the HTTP/2 connectivity has to be negotiated by the client and the server; if they don't negotiate HTTP/2, they revert to HTTP/1. [Next question.] I will have to think about that; send me a note, please, and I will respond soon. That would probably involve a change in the code. Absolutely, Let's Encrypt is a great project, that's a fact. You were next. I'm not sure about tcpdump being able to do that. The question was about command-line tools being able to support HTTP/2 and its decryption. What I would do in this case is take the tcpdump pcap file and open it locally with Wireshark. Yes, I understand; the command-line tools and developer tools for the HTTP/2 protocol are something in active development, so this is something that's changing on the fly, and we should expect more of those tools to become available. [Question about the shape of the graph.] So, the dip in the graph: if you are using a lot of elements on the page, I found that when we have more elements on the page, you usually see more benefit from HTTP/2. I can give you an example right here. When we go to our list of links, there is a side-by-side comparison. Here you go: this page shows 165 small elements, small images, all of them small parts of one page. When the load starts, you will see again how fast or how slow those connections are. That page is publicly available, and you can see the whole NGINX configuration below it. Yes, please. Server push is a technology that is defined in the HTTP/2 standard.
Unfortunately, it's not currently implemented in the NGINX configuration or the NGINX code, but it is defined in the standard, and we are working on it. We can have a separate discussion about how server push should work through a proxy; passing the pushed request and response back through a proxy is not as simple as it sounds. Yeah, please. For RESTful APIs, and not just browser connections: your clients can use HTTP/2 for various other APIs, not just for web pages. The benefits will probably be significant because of the header compression: with many API calls, the payload is quite minimal, but the headers can be significantly larger than the payload. It depends on the traffic flow; I'm not sure that multiple streams and prioritization would be a huge benefit there. Well, we'd have to see how your API traffic flow would work in that case. Also, if you are building your own clients, building them with HTTP/2 is something quite new, so you would have to do more research and more troubleshooting in that area as well. Yes. If we look at this frequently asked question, it tells you that gzip compression with SPDY was prone to some significant vulnerabilities, and the standard defined a new way of doing it with HPACK, which is not prone to those vulnerabilities. That's the major reason. Yes, it is supposed to be used both ways and, as you said, as a replacement for WebSocket at some point as well. WebSockets over HTTP/2: well, there is a draft on how to implement WebSockets over HTTP/2 or over the SPDY protocol, but that is not currently in the standard. So basically, when you want to implement WebSockets while your sites are running HTTP/2, the browser will be smart enough not to go with HTTP/2 when you want to upgrade to a WebSocket connection; the browser will go with HTTP/1 for that particular connection. You can still use WebSockets on your website; it just will not be HTTP/2-enabled currently. All right, well, it looks like we're out of questions.
But anyway, I'll be hanging around this room and outside until the end of the hour, and you're welcome to grab some stickers. So thank you.