Welcome to Node.js Interactive. This is a talk on HTTP/3, a quick update. Remember that this is experimental, so do not try it in production; if you try it in production, do not blame me. My name is Trivikram Kamat, and I am a software development engineer. My Twitter handle is trivikram and my GitHub handle is trivikr. If you're wondering why I did not get the trivikram handle on GitHub, it's because I joined GitHub late, in 2015, and somebody else had taken it already. So if you became a parent recently, get a GitHub handle for your kids, because that's more important than other social media.

The hashtag for Node.js Interactive is #NodeJSInteractive. If you are active on social media, you can also use #HTTP3 or #QUIC if you like this talk. If you didn't like this talk, then use my name. And what's my name again? Most of you will have forgotten, so I'll give you a clue. We are in Canada and it is very cold, and you will know this drink called rum; it gives a lot of warmth. So imagine a weak rum, not a strong rum, and add "three" in front of it: that's my name.

So what's my history with Node.js? I've been using Node.js for four years now. I started contributing to Node.js core in October 2017: I helped make HTTP/2 stable, wrote unit tests, and fixed some bugs. You can do the same with HTTP/3, and we'll cover that. I became a Node.js core collaborator in March 2018, and I have organized and mentored four Code + Learn events.

What are we going to cover in this talk? First, what is HTTP/1.1 and why was HTTP/2 required? Then, what is HTTP/2 and why was HTTP/3 required? Then, what is HTTP/3, and when is it coming? It's still a draft, still an experiment, so when is it coming? And we'll see some sample code along the way.

I would like to thank James Snell, who is here. He is a Node.js Technical Steering Committee member and is heading the work on implementing QUIC.
Thanks, James. I'd also like to thank the sponsors of his work, Protocol Labs and NearForm. I'd also like to thank Anna Henningsen; as James says, she fixes the bugs which he introduces. And I'll also thank Daniel, Juan, Ouyang, and many others who have contributed to QUIC already. I'd also like to thank Tatsuhiro Tsujikawa for his awesome work on ngtcp2; we are using ngtcp2 for implementing QUIC, and he has done some amazing work.

So let's start with HTTP/0.9, which was published in 1991. It was very simple, just a one-line protocol: you send `GET /mypage.html`, and you get back mypage.html. Awesome, right? Life was so simple back in 1991, but things had to get complicated.

HTTP/1.0 was introduced in May 1996, and it built in extensibility. Let us see what it did. This is how you send a request, and this is the response you get. Now you send version information, because there is more than one HTTP protocol. You also send browser information, because there are multiple browsers. The server sends a status code, because not all requests are successful: sometimes there are redirects, sometimes client errors, sometimes server errors. And then there are headers with metadata: what kind of server is this? What is the content type? So instead of only sending text content, you can send JSON and other types. And we could include multiple resources in an HTML file; in this HTML file, for example, there is an image being used.

Then HTTP/1.1, the standardized protocol, came in June 1999. I will not go through the headers, because there are too many headers; we will just see the improvements in HTTP/1.1. The first improvement was persistent connections, in which you can use a single TCP connection to send and receive multiple requests and responses, although serially, not concurrently.
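To make this concrete, here is roughly what those requests and responses look like on the wire. The headers shown are illustrative, modeled on the classic HTTP/1.0-era examples, not a captured trace. On a persistent HTTP/1.1 connection, a second request simply follows on the same TCP connection once the first response has arrived:

```text
GET /index.html HTTP/1.1
Host: example.com
Connection: keep-alive

HTTP/1.1 200 OK
Server: Apache
Content-Type: text/html
Content-Length: 137

<html> ... the page, which references style.css ... </html>

GET /style.css HTTP/1.1
Host: example.com
Connection: keep-alive
```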
Then there was pipelining, in which a second request can be sent before the response to the first one is fully received. There was chunked transfer encoding, there were additional cache control mechanisms, and there was content negotiation.

Now let's see the issues with HTTP/1.1. The first issue is three round trips per request. Here is a client and a server; this diagram is from a blog post by Cloudflare, and they have written a very good description. The client sends a TCP SYN packet, the server responds with TCP SYN+ACK, and the client sends a TCP ACK. Then the client sends a TLS ClientHello, the server sends a TLS ServerHello, and the client sends the TLS Finished message. The request still has not been sent, okay? Only then is the HTTP request sent and the HTTP response received. Who has time for this? Three round trips per TCP connection. Let's revisit this later.

Issue number two: multiple TCP plus TLS connections are created for concurrent requests. How does this happen? When you write applications, you'll be importing images, CSS, and JavaScript, and these things have to be fetched concurrently. When that happens, multiple TCP connections are created, and for each TCP connection there are three round trips, as we saw. So it takes a lot of time.

Let us understand this by going through the code, because just talking is boring. If you attended Matteo's talk on streams, he mentioned today that you should not use `.pipe()` because it can leak memory. Anyway, just ignore that here, or use some framework. Here we are just calling `http.createServer`. If the request URL is `/`, return index.html; if it is style.css, return that, and so on. I'll make it simpler: if the request URL is the home page, return index.html; otherwise a regular expression reads the file name and returns that file from the files folder. Let us go through the code, because it is important.
We'll see how it works in HTTP/1.1, HTTP/2, and HTTP/3. In index.html, you import a stylesheet at the top, then you include an image in the body. Then you say "Hello World", because in your first program you have to say hello world, and then we include a script. Let us go one by one through what each file does. style.css is just CSS: body, display: flex, and so on. What the script does is find the name and change it to "Node.js Interactive", so it becomes "Hello Node.js Interactive", and it changes the globe image to the Node.js Interactive image. That happens after one second, with a setTimeout. And the image is just a globe.

So let's see how it looks; I'll just enable loop and play. Here you can see that we request localhost first, and then three concurrent requests are sent for the CSS, the JS, and the image. Then the JavaScript is executed after a second (the setTimeout), and the Node.js Interactive image is shown.

Now let us see how many TCP connections are created, because this is where it gets interesting. Below, I have a netstat command which watches how many TCP connections are created. When the request is sent for localhost, only one TCP connection is created; it shows three records, but only one is created. But when the browser has to make three concurrent requests, you can see multiple TCP connections are created. That is the issue with HTTP/1.1.

So, the issues with HTTP/1.1: one was three round trips per request, which we saw, and the second was multiple TCP plus TLS connections for concurrent requests. Now, what did HTTP/2 do? It was published in May 2015. You saw that in HTTP/1.1, three requests are sent for three resources. In HTTP/2, only one TCP connection is created, although all three resources are fetched. So let us see how it works: we write an HTTP/2 server.
We create the server, and again: if the URL is `/`, you get index.html; otherwise the regular expression picks the file. If you examine it using the network tab, it will look the same as HTTP/1.1; only in the protocol column will you see that h2 is used. But the network tab doesn't show how many TCP connections are used, so on the next slide I watch for TCP connections using the netstat command. And you will see that only one TCP connection is created, although three requests are sent simultaneously.

So let us see the benefits of HTTP/2. We saw one benefit, multiplexing and concurrency: different HTTP requests are sent over the same TCP connection. But there are some other benefits too. There are stream dependencies: clients can indicate to the server which resources are important. There is HPACK header compression, which reduces the length of the header field encodings by exploiting redundancy, because headers have a lot of redundant content that can be encoded away. And there is server push, where the server can send resources which clients have not requested yet, but we will not go through that in detail.

Now, the issues with HTTP/2, because if there were no issues with HTTP/2, HTTP/3 would not exist. The first issue: HPACK is stateful. The encoding and decoding tables have to be maintained at the endpoints. What happens is that when the first packet is sent, the headers are encoded, and when the second packet is sent, it reuses that encoded state and sends new content only if there are changes; delta encoding is used in the header compression. This causes high resource consumption in real time, and it's not easily routable along the network. And the main issue, which we'll discuss in today's talk, is TCP head-of-line blocking. So what is TCP head-of-line blocking?
When a single TCP packet is lost or corrupted, all subsequent packets are blocked until the lost one can be successfully retransmitted. In HTTP/1.1, if this happens, you are blocking only one request, because only one HTTP request is in flight on a TCP connection. But in HTTP/2, since multiple requests and responses are carried over a single TCP connection, all of those requests and responses are blocked.

Before we go further, let's see where this effect is most significant. It is most significant on high-latency networks over long distances, where the probability of a packet getting lost is higher, and on mobile connections, where packet loss is also more likely.

We'll understand this using a chain metaphor. This is a chain; imagine it is a TCP connection between two computers. I have colored the links green and red: consider green a CSS packet and red a JS packet, where the CSS and JS files are requested over the same TCP connection. If a JS packet is lost or corrupted, the entire chain is broken, so everything after that JS packet, which includes CSS packets, has to be sent again. This is explained very well in "HTTP/3 explained", if you want more details.

So what does HTTP/3 over QUIC do? This is draft 24, as of December 2019. When multiple streams are set up over a QUIC connection, they are treated independently, so that if any packet goes missing in one of the streams, only that stream has to pause and wait for the missing packet to be retransmitted. To see this, look at the chain analogy again: there is a CSS stream and a JS stream, and if a JS stream packet is lost, the CSS stream is not affected. Now you might say, isn't this what HTTP/1.1 was doing? Are we going back? No, because HTTP/3 is built on top of UDP.
On the left side, you can see HTTP/2, which is built on top of TCP, and HTTP/3, which is built on top of UDP, with QUIC in between. Speaking of UDP, I can tell you a joke, but most of you won't get it. Got it? Got it? You didn't get it. Let me still give it a try: "bar a into walks packet UDP A." Did you get it now? The words arrived out of order.

So QUIC adds the following on top of UDP: error handling, acknowledgements, flow control, packet sequencing, built-in encryption with TLS 1.3, and bidirectional and unidirectional streams. So whatever benefits you have on TCP, those goodies come to QUIC. Amazing, right? And QUIC also reduces round trips. Remember, we saw three round trips per request; who has time for that? If each round trip takes 100 milliseconds, in QUIC it will take just 100 milliseconds to set up the first request, or zero milliseconds if the client already knows the server.

Before we write the HTTP/3 server, remember this: QUIC is not equal to HTTP/3. These terms get used interchangeably, but they are not the same. HTTP/3 is the application protocol which uses QUIC as a transport protocol: you have HTTP/3 on top of QUIC, then UDP, then IP. The Node.js implementation will let you create your own alternative protocols on top of QUIC. One place where we are considering using QUIC is the inspector protocol: we currently use WebSockets for diagnostics, but we may switch to QUIC. Similarly, you can create your own application protocol over QUIC; DNS might also use QUIC.

So let us see the HTTP/3 server. Of course, when you request `/` you get index.html; otherwise you get the requested file name. And when you load the web page, you get "This site can't be reached". Why? Because it is experimental, okay? You cannot use it in production, it's experimental, but we have made a lot of progress. It is at an early stage of development, but you can help build it.
You can help build it. How? Go to the nodejs/quic repository. This is the progress we currently have, around 70% done, and this is the project board (it's slightly outdated). Now you may be wondering, "I want to contribute" (I can see the excitement in each one of your eyes that you want to contribute to QUIC), "but I don't know anything; how do I contribute?" It's very easy: write tests. Start with writing unit tests. It's amazing: you write unit tests, you'll find bugs, you'll fix those bugs, and then you can also become a collaborator, like me.

So how can you write unit tests? I'm glad you asked. You go to nodejs/quic, fork it, and clone it. Then you configure the build with the experimental QUIC flag, because it is experimental, and get the coverage. These results are slightly old, but this is what we have for JavaScript. If you're into JavaScript, you can help by writing unit tests in JavaScript, and we need even more help writing C++ tests, because there is a lot of code in C++.

That is how to contribute to QUIC and HTTP/3 in Node.js. But if you want to go deeper and contribute to the protocol itself, HTTP/3 is an IETF draft, draft 24 right now, so you can join the meetings and make suggestions.

We'll still see an example web page served over QUIC, at quic.rocks. This is how it looks, and when you examine it, at the bottom you'll see that the protocol is http/2+quic.

So let's go through the demo. Node.js with QUIC did not build on my machine, so I'll show the demo which James showed at NodeConf EU. This is one of our tests that we're using, and I have the debug tracing enabled. Basically you're just going to see a bunch of text fly across the screen, but it's a client talking to the server, sending a copy of the test file itself back and forth. All right, once it's through, I want to point out a few things that we see in the results.
So this is sending all the HTTP packets; this is all the trace information. It'll take just a second here. But it actually works, which I'm super happy about, because last week this was segfaulting like crazy. We see here at the end, in these objects, that all this information is available to the JavaScript layer. You can see the bytes sent, right? You can see the number of streams that were created. If you scroll up a little bit further, we'll see how many bytes we received on a particular session, when the handshake started, and when it completed. This is all information that's going to be available at the JavaScript API layer, just built in. Right now, this is information that you would have to install additional modules to get with Node; here it's all going to be built into the API and available.

Cool, so let's summarize what we learned. In HTTP/1.1, multiple TCP plus TLS connections were required for concurrent requests. HTTP/2 added multiplexing, in which multiple HTTP requests are sent on the same TCP connection. But it suffers from TCP head-of-line blocking, in which the entire TCP connection is brought to a halt if a single TCP packet is lost. HTTP/3 over QUIC treats each stream independently, so that if any packet goes missing in a stream, only that stream is affected.

That's it, thank you for listening. My name is Trivikram Kamat. All the code which I showed is on GitHub, and the slides are available there too. Thanks.