Welcome to the whirlwind tour through HTTP/2. My name is Ole; as mentioned, I'm a bit German. Yeah, I know, we have so many Germans here, and I feel a bit sorry. I do drink beer from time to time, if that helps a bit. Okay, we have 40 minutes and this deck is really long, so let's get started.

First question: who of you uses HTTP/2 in production already? Hands up. One, two... okay, five. Yeah, that's a bunch — a handful out of 500 attendees. Well, we could do better. Before we start, I'd really like to ask you some questions, because whenever I give this talk there are a bunch of things popping up in people's minds, and I want to get those out of the way so we can enjoy the talk. So let's do this little questionnaire.

What do you think: is HTTP/2 ready yet — is it done? Hands up if you think yes. This should be all of your hands. Yes, it's done. Is SSL mandatory? Hands up if you think yes. That's 80% of the audience, and it's funny, because it's not. But there's a little trick to it: in the wild you will only see implementations supporting the encrypted version, but according to the protocol both variants are valid — you just won't ever see the clear-text one in the wild. Is it made by Google? Hands up if you think yes. No hands. Awesome. Great. No, it's not. You might ask, "what's SPDY then?" — we'll come to that. Is it plain text? Hands up if you think yes. Almost no hands. Great — it's not. Are headers compressed? Hands up if you think yes. That's not that many hands. They are compressed; we will talk about this later. Will HTTP/1 still work? Oh, that's my favorite question. Hands up if you think yes. Come on, people. Yeah. Still. I mean, browsers still support HTTP/1, right? If you dropped support for that, you would break the whole internet, so we can't do that.

All right. To understand why HTTP is cool, and especially why the second version of it is cool, we really have to understand the current problems and the history of HTTP/1. So I want to walk you through the history a bit. In 1989, Sir Tim Berners-Lee imagined his idea of the World Wide Web. He had this idea of researchers and research centers sharing documents with each other and even using the browser as a kind of editor. This was his idea. And in 1991, the very first version of HTTP, version 0.9, got released. If we continue, in 1992 there was the HTTP/1.0 draft. And then it took four more years to actually release version 1.0. But I think it's pretty interesting that it took just one more year to release the patch: they found some flaws in there, which they fixed in HTTP/1.1.

Then, in 1998, there was RFC 2324. Who knows what that one is about? Hands up — almost no one. That's great, because I can tell you a little thing here. This RFC was released on April 1st. Get the idea? It is actually the RFC specifying how Internet-connected devices that want to make coffee should behave. And if your Internet-connected device doesn't know how to make coffee, it should respond with 418 I'm a teapot. That's where that status code is defined, that's where it comes from. Okay. In 1999, the first HTTP/1.1 spec was actually replaced, by RFC 2616. So it's a replacement — technically nothing changed in the protocol, it was just some rewording and some clarifications, basically. And then there was a lot of, I don't know, Ajax and web and IE6 stuff happening at that time. And then in 2007, the IETF formed the httpbis working group.
"Bis" is Latin, and as far as Google Translate did not fail me, it means "twice" — so, version two. The task of that group was actually to sort out all the different RFCs which were around at that time, but we'll come to that later. Then, in early 2012, the IETF put out a call for proposals for HTTP/2. They wanted to ask the community of vendors — Mozilla, Facebook, Google, all the people participating in the web — what the new protocol should look like. And as most of you know, Google suggested SPDY as kind of a base to talk about. If you want to come up with something new, it's always easier to have something to start from and continue from there. That's what they did with SPDY, and that's how SPDY belongs in this picture.

Well, in 2014 there was the HTTP/1.1 split. You might have read about this on Hacker News if you're into that. It means that all the different RFCs regarding HTTP were rewritten and re-sorted. Technically nothing changed — if you're writing a browser or a server, there was no need to touch your code — but the goal was to have all information regarding HTTP in defined places. A lot of HTTP stuff was spread across different RFCs, and they collected all of that, grouped it, and released new RFCs replacing the old ones. And then in May 2015, HTTP/2 was finally released, as RFC 7540. By the end of last year, already 22% of internet traffic was HTTP/2, which I think is really cool. And since you will only see it on TLS-encrypted connections, you can relate that number to SSL connections only — and 35% of all SSL connections were already HTTP/2. I think that's pretty nice.

So, to get us all started on the same page, I want to go over the HTTP basics really quickly. HTTP/1 is plain text. I can just open my terminal and — to keep it realistic — talk to Google directly. I open a plain TCP connection on port 80 to google.com, write the HTTP request line and the Host header, which is mandatory, then two newlines, and I get a valid response back. So what I just did was a real HTTP request — an HTTP/1.1 request (there's a small sketch of exactly this below). It was perfectly valid, and it was really easy to tinker with and to write tooling for, because it was just that simple.

It is also stateless. That means all the information the server needs to answer a request is contained in that single request; there is no state building up between requests. This is what you know from REST as well. And it's super flexible. If you think back to what Tim Berners-Lee imagined the web could look like and what we are doing now — it was really meant to support a lot of use cases, and it has changed a lot since its invention. But because it has changed so much since its invention, it also has a few flaws, a few problems, and I want to talk about those.

The web has grown a lot. We're far beyond just transferring HTML documents; we're doing way more — think of all the JavaScript applications we have. The total transfer size and the number of requests per website have really gone crazy and almost doubled in the last years. The average website has 38 connections per page. So, 38 different TCP connections on a single web page.
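Coming back to that manual request for a second — here is a minimal sketch of it using a raw socket, assuming any HTTP/1.1 server listening on port 80 (google.com will most likely just answer with a redirect, which is fine for the demonstration):

```python
import socket

# Open a plain TCP connection on port 80 and speak HTTP/1.1 by hand.
with socket.create_connection(("google.com", 80)) as sock:
    request = (
        "GET / HTTP/1.1\r\n"    # request line
        "Host: google.com\r\n"  # the only mandatory header in HTTP/1.1
        "Connection: close\r\n"
        "\r\n"                  # empty line ends the headers
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Print just the status line, e.g. "HTTP/1.1 301 Moved Permanently"
print(response.decode("iso-8859-1").split("\r\n")[0])
```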
All those connections come with a real drawback in performance and speed. Because when you open a TCP connection — and again, the average website does it 38 times per page — you have to do the TCP handshake first, and that's a three-way handshake, so it is really bound to latency. And as you might know, especially on mobile networks, latency is still an issue. The throughput is really good, the bandwidth is really good, but the latency is really bad. And when we're talking about encrypted connections it's even worse, because the TLS handshake adds even more round trips.

Then there's this thing I like to call the request bonanza, which is something HTTP itself can't really change. It means the browser requests the index.html, in our example, and gets it back. The browser goes ahead parsing it, and then it might find some more assets, and those assets might trigger some more downloads. And this builds up the kind of waterfall graph you all know from the network tab in your browser's developer tools. Again, all of this really suffers from latency; latency is the huge problem. Nowadays bandwidth is not that big of a deal anymore — we have 4G, as Joe mentioned yesterday, 5G is almost there, so the bandwidth is awesome. But latency is the issue.

This is really my favorite example. What you see here is The Verge. It's kind of a news site — I don't really read it, but it's my favorite example of the average website, because it's maybe not so average. This is a recording of the developer tools loading the page with a cold cache. Let's see. You see the initial request takes some time, and then this is really what I call the request bonanza: there's JavaScript triggering more JavaScript triggering more JavaScript. And this is real time. It's crazy. It's still going on? Yeah. Okay. You might not be able to read this, but it's 170 requests, transferring 1.5 megabytes and taking me, on my office connection, which is really good, already 12 seconds to load. I mean, that is probably one reason why I'm not using it. I made this recording when I was preparing the talk for the very first time, which is now almost a year ago, and I was curious how it had changed — the web is always evolving. So I checked back yesterday, did the same thing, loaded it up, and I went: oh, shit. Can you believe that? 254 requests, 6 megabytes, and almost 20 seconds to load. That's crazy. I can't stand that. All right, it's kind of mean to do that, and I don't want to be that mean. There are probably a bunch of other websites that are just as bad — I don't know if you've ever tried loading your own website; the one I'm maintaining is maybe not that perfect either — but I think it's a great example to talk about.

Anyway. There's also an issue called head-of-line blocking. When you read about HTTP/2 and what they really wanted to change, this is a big issue, and you will read about it fairly often, so I want to explain it real quick. Think of this: there are multiple requests on a single connection. In our example there's the index.html, there's feature.css, and there's annoying.js. So we have this head-of-line blocking situation: there's the client and there's the server, and with HTTP/1.1 pipelining you can, as you know, make requests simultaneously on a single connection.
So you can request multiple assets on a single connection. But here's the issue: an HTTP/1 response has no way of identifying which resource it belongs to. As a client, there's no way to tell which resource you're receiving at the moment. So the only way to know, especially if you make requests simultaneously, is to keep the order. The server has to send back the HTML first, then the CSS, and then the JavaScript. And this is called head-of-line blocking, because the first request blocks whatever comes next. I think this example works pretty well, because the index.html might be something we have to generate — it might be a Ruby, PHP, Python, whatsoever script — while feature.css and annoying.js might just sit on the file system. They could be served really fast, but due to head-of-line blocking the server has to wait until that first request is served before it can serve the others.

So let's do a little head-of-line blocking recap. First of all, the order matters — the order of the requests matters. The slowest request blocks. There's no workaround, and that's why pipelining is often just unused: most browsers turn it off by default.

But as web developers we really tried hard; we tried a lot of things to work around these issues. We tried spriting: putting all the images together into one file, and then giving ourselves a hard time whenever we have to replace one of them. We tried concatenating all our assets — all the JavaScript, all the CSS. We did domain sharding, and this is really the worst: we spread the assets out over different domains in order to speed up downloading, because browsers have this limit of only a handful of parallel downloads per domain. We'll come back to this later, but that's crazy, right? We did all kinds of inlining — images, when they're small enough, got inlined into the CSS. We also did the weirdest preloading hacks. Imagine you're going to some kind of landing page — and I've seen this in the wild for real, I'm not making this up — people were actually putting the actual application JavaScript on the index page, so it got downloaded and cached (that's the trick) but not executed. The browser would then already have it in the cache if people actually did a sign-up. And this is violating all the rules of software engineering I can come up with, so that's really bad, I think. We also use cookie-free domains — I think that one is actually good, that's why it has this little asterisk here. Don't change that, it's awesome. And we really tried to save requests everywhere; we wanted to boil everything down to as few requests as possible in order to save the TCP and SSL overhead.

But we're here to talk about HTTP/2 and how it can help us, so let's talk about HTTP/2 now. First of all, HTTP/2 is compatible, so it's not going to break the web. The user will never see an "http2" somewhere in the URL scheme. All the methods stay — GET, POST, HEAD, OPTIONS — the whole request/response cycle won't be touched, you still have cookies, headers are still a thing, so all of this stays the same. Your application probably doesn't have to change at all, unless you're writing a web server or a browser, because the underlying transport is basically rethought completely. Well, how does an upgrade from HTTP/1 look?
Due to time constraints, we're not going to talk about the non-secure HTTP upgrade, which uses the Upgrade header. What I want to talk about is how this works for HTTPS. For HTTPS — the encrypted connection — it actually uses a specification called ALPN. That's not completely correct: it is actually TLS ALPN, so Transport Layer Security Application-Layer Protocol Negotiation. It kind of spun out of a thing invented by Google called NPN, Next Protocol Negotiation. That's a lot of abbreviations you probably don't have to remember. To make it short: this is an SSL handshake, and this is the important line here. What happens is that the client offers the server a list of protocols it supports, and then the server can just pick. So the client sends over "hey, I support HTTP/2", and the server says "oh yeah, that's awesome, let's do that." That's how it works.

A little heads-up here. With HTTP and HTTPS, the longer name — the one with the S — was the encrypted one. This flips with HTTP/2: h2 is HTTP/2 over an encrypted connection, and h2c is clear text. One slightly weird reason is that h2c is what's registered in the ALPN namespace for the clear-text version, but it's also kind of a hint from the IETF that the encrypted version should be the default and clear text the exception.

And here's the thing that is going to fix all the connection problems and all the latency issues we have: HTTP/2 is multiplexed, so you can transfer multiple resources at the same time. How does this work? Well, you have one single physical TCP connection. This connection consists of multiple logical streams, and each stream consists of multiple binary frames. So in real life you would see binary frames on the TCP connection, each frame belonging to a certain stream. That's how you can think about it. And frames can be mixed, so all kinds of different frames follow each other on the TCP connection.

Let's talk a little bit about the frames. The frames are binary. This is the frame layout: it consists of the length (24 bits), the actual frame type (8 bits), the flags (8 bits), then the so-called reserved bit, which should always be 0, then a stream identifier of 31 bits, and the actual payload. The payload differs per frame type, but each frame always has this fixed header before the actual payload (there's a little parsing sketch of it below). When I was reading the spec, I was wondering about that little R bit. As I mentioned, the spec says it should be 0 — don't touch it. Well, why is it there then? So I went out on Twitter and asked one of the people who wrote it, and he came back to me: it's there to support platforms which don't have 31-bit integers. Okay, that's weird. But fine — we have this reserved bit, it's always 0, and I don't care.

As I mentioned, there are different types of frames. This one, the DATA frame, is the one you'll see most of the time: it's where you transport the actual payload, so your request and response body, so to say. There can obviously be multiple data frames when you're transmitting a bigger entity. There's also the PRIORITY frame, which carries the different priorities — we'll come back to this later. And there's RST_STREAM. I think this one's pretty interesting: it's a mechanism to tell either client or server that something went wrong on a single stream, for example a parsing error.
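Going back to the generic frame header for a moment, here is a rough sketch of how those nine bytes could be unpacked — just an illustration of the layout described above, not code from any particular library:

```python
import struct

def parse_frame_header(data: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header: 24-bit length,
    8-bit type, 8-bit flags, 1 reserved bit, 31-bit stream identifier."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")       # payload length, header not included
    frame_type, flags = data[3], data[4]
    stream_field = struct.unpack(">I", data[5:9])[0]
    reserved = stream_field >> 31                   # must be 0, ignored on receipt
    stream_id = stream_field & 0x7FFF_FFFF          # 31-bit stream identifier
    return length, frame_type, flags, reserved, stream_id

# Example: a PING frame (type 0x6) on stream 0 with 8 bytes of opaque data.
header = bytes([0, 0, 8, 0x6, 0x0]) + (0).to_bytes(4, "big")
print(parse_frame_header(header + b"\x00" * 8))     # -> (8, 6, 0, 0, 0)
```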
The nice thing about RST_STREAM is that you don't have to screw up the whole TCP connection: it just resets the one stream, and the TCP connection itself stays intact. There's also a SETTINGS frame — there are a bunch of parameters you can change on a connection, and you exchange them with that. And there are a bunch more; there are ten different frame types in total. We'll see a few more in this presentation, but if you're interested in all of them, I can really recommend reading the spec.

This is the PING frame, and this is something I really love to do: I just filled in some binary data, and that's what you see here. Let's go over it bit by bit. First of all, that's the length. It is always the length of the body — the length of the header is never included, because the header is always the same size. This is the type: type 6 in binary, which is PING. Then there are the flags, and the PING frame defines which flags can be set. Here the acknowledgment flag is set; the rule is that whoever receives a ping without the ACK flag should send back a ping with the ACK flag set. Then there's the reserved bit again — the troll bit, as I like to call it; that's not its real name, I just call it that. Then there's the stream identifier, and some opaque data — eight bytes which the protocol says have to be in each ping frame.

Well, let's talk about HTTP/2 features now, because everything we've seen up to here is just what you need to replace your current connection and redo what you did before. First of all, there's server push, and I think server push is really going to change our industry and the speed of the web. What it means is that the server can tell the browser which resources it will need before the browser even receives the initial document. So imagine the client asks for some index page, and before the server sends out the answer, it can say: oh, here is the JavaScript — I know you will need it in a second anyway. It's a direct connection between the server and the browser cache, and I think that's pretty neat. This is how it looks on the wire: that's a PUSH_PROMISE frame. The PUSH_PROMISE frame basically looks like a HEADERS frame, but the difference is obviously that the stream is initiated by the server.

There's also prioritization and flow control. Client and server can change, during a transmission, the priority of the different data on the wire. It's done with a priority tree, so let's take a look at one. That's the initial stream — the initial stream is always zero. That's the stream dependency, that's the stream ID in our case, here's the weight, and there are some more dependencies and some more weights. To read this graph properly, it's important to know that the priority of a stream is determined by the relative proportion of the weights. So stream 3 should receive two thirds of the available resources, while stream 7 should receive half of the resources of stream 3 — if that makes any sense (there's a tiny worked example of the weights below). That was just a made-up example; here's a real-world one. I found this pretty interesting: this is what the Firefox browser is doing, and it's also the most efficient dependency tree out there at the moment, because Google's is still optimized for SPDY and Microsoft Edge just does a flat tree — it basically doesn't support this.
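As a small aside on reading those weights, here is a tiny worked sketch with made-up stream IDs and weights (real browsers send different numbers). Siblings under the same parent share resources in proportion to their weights; a dependent stream only gets resources once its parent no longer needs them:

```python
def shares(weights: dict[int, int]) -> dict[int, float]:
    """Split available resources among sibling streams in proportion to their weights."""
    total = sum(weights.values())
    return {stream_id: w / total for stream_id, w in weights.items()}

# Hypothetical siblings directly under stream 0: stream 3 with weight 128, stream 5 with weight 64.
print(shares({3: 128, 5: 64}))  # -> {3: 0.666..., 5: 0.333...}  stream 3 gets two thirds

# A stream that depends on stream 3 (say stream 7) is only allocated resources
# when stream 3 is closed or cannot make progress; it then inherits stream 3's share.
```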
That Edge does it flat is not necessarily bad, by the way — you probably won't notice unless you're hitting some really strange edge cases. So what is Firefox doing? Well, the first thing they do is initialize the dependency tree, so nothing is transferred on the wire yet, just some initial stream openings. Then there's the document: stream 13 is the HTML. Then, in the HTML, the browser might find some CSS in the head — that goes there. There might be some JavaScript in the head — that goes there. There might be some images — they go there. And there might be some JavaScript at the bottom of the page, right before the end of the body — and that goes there. The first question that might pop up now is: why are they doing it that way? Well, honestly, I don't know. I would love to tell you; I think they have valid reasons. It's all open source, so you could maybe figure it out. I just found it interesting to show you, even though I can't explain why it is exactly that way. But as you can see, it's a fairly complex thing now, compared to what used to be just a plain-text exchange on a TCP connection.

There's another thing you will use whether you want to or not — you just get it for free, and I think that's pretty amazing: headers are compressed. Your first guess might be: well, headers aren't that big. I would agree, they aren't that big. But they repeat a lot. Imagine all the routers in the world — how often will they see "method: GET"? I mean, I think that's a lot. And if we can replace "method: GET" with just one byte, just a single number, that would be amazing (there's a small sketch of that below). This is what compressed headers give us, and not just for one header — this works for basically all headers.

So this is how the HEADERS frame looks. We have some padding here, some metadata in there, there's the header block fragment and the padding. The important part is the header block fragment. That's just some binary data, encoded the way described in a specification called HPACK, which is defined in RFC 7541. If you really want to read more about this, I can recommend it. I actually wrote an implementation of this in Elixir, so I'm really into this protocol, but I want to describe it only briefly now. HPACK uses a so-called header compression table, and here you see some request headers we're going to compress. The header compression table consists of a static part and a dynamic part. The static part is defined in the spec, so it won't change at all. The dynamic part is built up per connection. And here's something really remarkable, because usually we say that in HTTP everything is stateless — and this might be one reason why they put it into a separate spec — HPACK is not stateless. It is stateful per connection: two different TCP connections have two different dynamic tables, but all request/response cycles sharing one TCP connection share this dynamic table. I think that's really remarkable, and it's really a thing that is going to change on the wire.
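To make the "one byte instead of method: GET" point concrete, here is a rough sketch of HPACK's indexed header field representation. The table excerpt and the high-bit-plus-index rule follow RFC 7541, but this is only an illustration, not a complete HPACK encoder:

```python
# A tiny excerpt of the HPACK static table from RFC 7541, Appendix A.
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":method", "POST"): 3,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
}

def encode_indexed(name: str, value: str) -> bytes:
    """Encode a header that fully matches a table entry as a single
    'indexed header field' byte: high bit set, then the index."""
    index = STATIC_TABLE[(name, value)]
    assert index < 127, "larger indexes need HPACK's multi-byte integer encoding"
    return bytes([0x80 | index])

print(encode_indexed(":method", "GET").hex())  # -> '82': the whole header in one byte
```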
All right, and this is how the headers look once encoded. If my key-value pair fully matches a table entry, I just put in the number. If I can only match the key — the static table and the dynamic table both have entries where only the key is set — I refer to it by its index and then just encode the string value. And if the header is in neither table, I encode both the name and the value. In that case I can also add the new entry to the dynamic table, if I want to.

All right, who of you knows Huffman encoding? Okay, that's a few hands here. I want to explain it really quickly, because I just stumbled over this and I think it's pretty interesting how Huffman encoding works. The basic idea is that in the encoded value, the characters which occur the most use the least space. So if we have the string "Mississippi River", that's 17 characters, each character is eight bits, so one byte — so we have 136 bits for the string. First thing we do is count every character: M once, i five times, s four times, and so on. Next we order by occurrence, so we have the i at the very left and the space at the right, and then we always take the two lowest counts, merge them into one node, and sum up their values. This way we build up a tree, and it's basically a binary tree — the only thing missing is that we assign zeros to the left-hand side and ones to the right-hand side. Then we walk down every possible path and we come up with the code table (there's a little code sketch of this tree building below). That's Huffman in general, but for HTTP/2 you have to know that this table is defined in the protocol itself, so it's not something that gets built up per connection — it's the same fixed table for everybody. The table is just two pages of code you copy into your implementation. The result, which I forgot to mention, is almost 70% savings on the string we just had here.

So how does HTTP/2 look in the real world? First of all, it's already here. This is meant to be a conference about ideas from the future — well, that's not an idea from the future; it started in 2012. So it's not an idea from the future, but it's still barely used. Unfortunately I don't have enough time for a demo, but I will upload the slides and you will find some code. You can try it in your shell right now and introspect it with tools — it's all binary, so you can't do this telnet stuff anymore, but there are awesome tools around where you can tinker with all the binary stuff and really play around with it.

Let's take a look at different implementations. First of all, let's talk about the browsers. I'm a web developer, so the first thing I do if I want to know whether I can use a certain feature is check Can I Use. So this is how it looks on caniuse.com. The one red entry you see here is Opera Mini, and everything else is green. They also have a usage-relative view, and if you switch over to that, this is how it looks. And there's one thing which really stands out to me — that one. Oh, and there's UC Browser for whatever, I don't know if that really matters. Let's talk about these two, because these are browsers which are really used in the wild — probably not that much, but they're used.
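For the curious, here is a minimal sketch of that tree-building step for a generic Huffman code — the textbook algorithm, not HPACK's table, whose codes are simply listed in RFC 7541:

```python
import heapq
from collections import Counter

def huffman_code_lengths(text: str) -> dict[str, int]:
    """Build a Huffman tree by repeatedly merging the two rarest nodes
    and return the resulting code length (tree depth) per character."""
    heap = [(count, i, {ch: 0}) for i, (ch, count) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        c1, _, depths1 = heapq.heappop(heap)
        c2, _, depths2 = heapq.heappop(heap)
        merged = {ch: d + 1 for ch, d in {**depths1, **depths2}.items()}
        heapq.heappush(heap, (c1 + c2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "Mississippi River"
lengths = huffman_code_lengths(text)
encoded_bits = sum(lengths[ch] for ch in text)
print(encoded_bits, "bits instead of", len(text) * 8)  # about 48 vs 136, roughly 65% saved
```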
The two that stand out there, by the way, are Opera Mini and the Android stock browser. And when I see this, and then I see how all the web applications out there are combined, bundled, webpacked or whatever, I really want to ask you: why the hell are you optimizing for these? That doesn't make any sense to me. So I think we, as a community, as developers, really should change. But I'll come back to this later.

The first counter-argument could be: well, my server's not ready yet. So let's talk about servers. To understand the importance of the server implementations, I think it's also important to know the market share, so let's look at some statistics. Apache has roughly 40%, IIS has 28%, nginx has 15%. The outstanding one, I think, is the Google Web Server, which you cannot use on your own — 2% of all web traffic is served by Google's web servers, which I find really interesting. And then there are others — Jetty, Puma, Cowboy, you name it — mostly language-specific web servers.

So let's talk about implementations. We don't know much about the Google Web Server, but what we do know is that Google dropped SPDY in February 2015. Apache has support since July 2015, nginx has support since September 2015, and even IIS has support since Windows Server 2016. So all major servers already support it. And it is super easy, trust me. Let's take a look at how you can configure your Apache at home to use HTTP/2. That's the line which is important — oh no, that's not true: that is the part of the configuration which is important, and that is the line which is important. You just have to add one line, because the module already ships as an option. If you have a recent Apache version and you want to use it, the only thing you have to do is add this one line. So it's not that complex, right? Let's take a look at nginx. Nginx is even simpler. Most of this configuration you already have, because the listen directive is already in your configuration anyway. (One second — you should be seeing the nginx config now... okay, there it is.) And the only thing — really the only thing — you have to do is add this one keyword. That's it, you're done (you'll see roughly what those lines look like in a moment). And I want to ask my question again: why aren't you doing it?

Then there are a lot of third-party services — the modern web being what it is, we're all using Heroku, AWS and whatnot, and have our stuff fronted by different CDN networks. So let's talk about that really briefly: most of them support it. AWS just recently released a thing they call the Application Load Balancer, which is kind of the successor to the ELB, and that supports it as well — so on AWS you can have HTTP/2. Fastly, which is a CDN provider — we also saw a talk from them yesterday — has a limited-availability program where my company is participating, so our stuff is delivered over HTTP/2 as well. And CloudFlare, for example, also does a lot of that. So there's really no good reason not to do it.
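For reference, the one-liners being talked about look roughly like this — a sketch assuming Apache 2.4.17 or newer with mod_http2 and nginx 1.9.5 or newer built with the HTTP/2 module; check your distribution's docs for the exact module setup:

```nginx
# nginx: HTTP/2 is just one extra keyword on the listen directive
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```

```apache
# Apache: load mod_http2 and prefer h2 over HTTP/1.1
LoadModule http2_module modules/mod_http2.so
Protocols h2 http/1.1
```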
As the claim of this conference is "problems of today, ideas of tomorrow", I really wanted to share something which is not here yet — an idea of the future, and what is going to happen to the protocol next. One thing which will come up — maybe not first, I don't know in what order they will appear, but soon, I hope — is cache digests. That's a way for browsers to tell the server what they currently have in their cache, because server push has the flaw that the server doesn't know what the client already has, so it tends to just push all of it. That uses up a lot of bandwidth. There was a paper published by Google — you'll find the link on the slides later on — 22 pages of analysis of HTTP/2 push, and they basically figured out that pushing all the assets is not very efficient. You have to find some kind of strategy for which assets to push and which not to push.

And then there's this thing called QUIC — and whenever you're talking to IETF people, never call it TCP/2, because they will basically kill you for that. It is basically the protocol HTTP will run on in the future. And here's why they never want you to call it TCP/2: QUIC actually runs on UDP. So think of HTTP running on UDP. At first this sounds like a really stupid idea — all the information in my HTTP request is pretty important and I don't want a bit to go missing. But here's the thing: if you're into hardware RAID, you know RAID 5 — it uses parity data, so the remaining disks contain enough information to restore the data from the one disk that fails. And QUIC does something similar on the protocol level: every UDP packet carries enough information to restore a certain amount of other packets. I think that's really amazing, and it gives us a big speed boost — if you don't need a handshake at all, that saves a lot of time. You can enable it in Chrome, I guess, and obviously only Google is implementing it at the moment. But I think this is something really new, and you will see it in a few years, I guess.

And now this brings me to my conclusion, which is also what I've been hinting at so far: we are all holding it wrong. I really want you — I really want us as a community — to stop concatenating, because this is really the worst. I want us to stop spriting. We should definitely stop domain sharding, because that's the worst. And this is bold, but I really want you to trash your asset pipeline. There's no need for it anymore. You don't need all this concatenation, you won't need all of this minification, and all of this complexity we have in modern web development could be gone with HTTP/2. I think that's pretty amazing. And there were recent write-ups from companies who did the switch, and what they basically did was just enable it on the protocol level, without adjusting the assets or the application at all. And then they had bad results: they switched to HTTP/2 and they had bad results. And this is what annoys me a bit, because you shouldn't expect that just flipping the switch improves your performance. You might just keep the same performance you have at the moment; if you want all the performance gains, the first thing you really should do is move all of this stuff to one connection. I think that's the most important one.
Move all of this to one single connection. Don't do the connection spreading anymore, because all of the HTTP/2 features rely on being on the same TCP connection. That's really the most important bit here. So if you do the switch, start really using it, not just upgrading — that's my point here. Again, my name is Ole Michaelis. Twitter is the best way to approach me, and if you're into following people on GitHub, you can do that too — that's my GitHub name on the slide. You can check out my homepage and all my awesome side projects. That's all I've got. Thank you very much, people.

"Except for browser support, how far along are server-side HTTP clients?" What's that? I'll just read it: except for browser support, how far along are server-side HTTP clients? That is a good question. Support in server-side clients is kind of mixed and really depends on the language. There's curl, for example, and curl has had support for a while now, so whenever you have an HTTP library which uses curl underneath, you will have support for it. I do a lot of Ruby in my day job, and in Ruby there are only a few clients that have support. I also do a lot of Elixir — I'm implementing the HPACK stuff in Elixir — and I know there are some Elixir implementations around which support it. But I also want to add: HTTP/2 is really aimed at browsers, and you won't gain that much benefit from using it for server-to-server calls. You won't get any penalty either, so I don't want to discourage you from doing it, but don't expect a lot of benefit from it either.

Okay, one more, and then maybe you can answer the remaining questions offline with the people who asked them. "How does server push compare to WebSockets?" Oh, I get this question a lot, so I was kind of expecting it. WebSockets are still a big thing, and server push and WebSockets serve two very different purposes, because in your client-side code there's no way to access the data from a server push. Server push is a connection between the server and the browser cache, so it completely bypasses everything in your application stack — you usually receive the pushed stuff before you get any other document or any JavaScript. It's really just pushing assets into the cache, not interpreting them. So it's two different purposes, and you can have both and make good use of them. All right, thank you so much, Ole. I hope he's convinced you all to use it. Thank you.