It's also in Node 9, and it continues to be improved. I wouldn't call it stable, it's still experimental, but it works pretty well in my experience, and I'll talk more about that in a bit. So, a little bit more about this library, this amazing library. It actually has nothing to do with Node per se, right? It's a completely independent C library that implements HTTP2 and HPACK, which are the two parts of the HTTP2 standard. It's used by lots of other software as well, not just Node, and it has its own tools wrapped around it. When you install it, with apt-get install or brew install or whatever, you get a client that comes with it that you can use for testing, and you get a server that you could use to serve your site, which is actually really high performance, plus various other tools like load-testing tools and things like that. So what it implements, like I said, is the two parts of HTTP2. One is called HPACK; that's your header compression. HTTP requests and responses have a lot of headers, and they're extremely repetitive, so they're hilariously easy to compress. But you wouldn't use just gzip; there are some specific optimizations in HPACK. For instance, if you take the common header fields you find across the top one million websites and look at the top hundred or so strings that keep appearing, you can bake a static dictionary into the standard itself, and that's what the algorithm starts with. Then, as you transfer things around, that dictionary is dynamically updated, so you achieve close to an optimal encoding of whatever is sent without duplicating data and wasting bandwidth. The other part is what you would think of as HTTP2 itself: the protocol consists of streams and frames. I'll talk a little more about what exactly that means, but it's also interesting to note what it doesn't include.
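To make the static-plus-dynamic dictionary idea concrete, here's a toy sketch. This is not the real HPACK algorithm (RFC 7541 also specifies Huffman coding, table eviction, and the exact static table contents); the table entries and the promotion rule here are invented purely for illustration.

```javascript
// Toy illustration of HPACK-style indexing (not the real algorithm).
// A small static table is baked in; headers seen once get promoted
// into a dynamic table, so later occurrences cost only an index.
const staticTable = [':method: GET', ':path: /', 'accept-encoding: gzip'];

function makeEncoder() {
  const dynamicTable = [];
  return function encode(header) {
    let i = staticTable.indexOf(header);
    if (i !== -1) return { index: i };               // hit in static table
    i = dynamicTable.indexOf(header);
    if (i !== -1) return { index: staticTable.length + i }; // hit in dynamic table
    dynamicTable.push(header);                       // first sighting: send literal
    return { literal: header };
  };
}

const encode = makeEncoder();
console.log(encode(':method: GET'));      // { index: 0 }
console.log(encode('x-custom: hello'));   // { literal: 'x-custom: hello' }
console.log(encode('x-custom: hello'));   // { index: 3 }, now in the dynamic table
```

The point of the static table is exactly what's described above: common strings cost an index from the very first request, before any dynamic state has been built up.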
So nghttp2 does not include encryption or I/O, meaning networking, files, and so on. If you're doing a TLS session, the encryption for HTTPS, the library doesn't actually handle that. It's up to you, the application, the person integrating the library, to do it. Networking, name lookups, all of that is not included. That's kind of nice, because it makes the library more open to integrating with any kind of software, but it also requires a bit more work on your side. Luckily, Node is basically V8, the JavaScript engine, plus libuv, which handles all the asynchronous I/O; that's essentially Node, those two together. But Node also comes with OpenSSL, which is an amazing library for doing TLS. So you add all these together and you get the best of everything. Like I said, it's written in C and integrates really well with Node's C++. It's widely used by things like curl, Safari, and the Apache web server, so it's battle tested. It's also pretty quick. In the very beginning, I even tested it out myself; you can see this is almost a year ago now. Compare just the top three bars here, the two JavaScript implementations and then nghttp2. And this was super-early experimental support without any optimizations, probably not even comparable to today; I'm sure today it's way faster, but it was already twice the speed, which is pretty amazing. Not to mention that it fixes a lot of implementation bugs and improves stability. It's also actively maintained: this is just the last couple of weeks, and there's a ton of work going on because it's being used by so many people. That's really good, as opposed to the other projects that weren't really active anymore. Now I'll give you a little idea of how it works internally. We've got, I guess, the highest and lowest abstractions switched around here.
If you look at the APIs exposed by Node.js right now, we've got a compatibility API and a core API; both of the pink boxes here are what you can use as a JavaScript developer with Node. The other parts up here come from Node's C++ code or the underlying C code of nghttp2; those are internals that you don't have access to from user-land JavaScript. So you can use either the compatibility API or the core API. With the compatibility API you get nice things like a server request and a server response, which is what everyone is familiar with if you're using an Express server or anything like that: you get a request and a response, and you can pipe data to it, set headers, all that kind of stuff. But if you want to use the underlying new constructs of a session and a stream, you've got to use the core API. That also potentially gives you a small performance advantage, if that's your thing, depending on what you're doing, because it doesn't need to instantiate all these server request and server response objects. It's really a minor thing; if your application is bottlenecked by that, you probably have other issues to worry about, but it's there. There are slight differences between the client and server versions of the sessions and streams, just because the protocol has small asymmetries. A client can set priorities on the assets it wants to allocate more or less bandwidth to, whereas a server is not allowed to do that. A server can push streams to the client, whereas a client cannot push to the server. So, subtle differences, but the concepts of streams and sessions exist on both the server and the client. Okay.
It's also kind of interesting to note, like I said, that JavaScript handles all of the networking I/O; that's not done from C++. It's the JavaScript core HTTP2 implementation that takes care of binding to all the other parts required to establish a connection. The C++ side really only talks to nghttp2: it just parses packets and turns them into streams and sessions. I'm not sure we should get into too much of this, but it's here. Basically, you've got a session construct. That's actually not something in the spec; it doesn't appear in the HTTP2 standard, in the RFC, if you read it. The RFC only talks about streams and frames. A session is something you can conceptualize as a TLS connection or a TCP socket, depending on whether you're using HTTPS or plain-text HTTP. It's just logically very convenient to think: I have a connection open, that's a TCP socket, and across that session I establish all these different streams. Every request or response is going to be a new stream. So your client connects, creates a session, and then makes a request over that connection by opening a new stream. Your server responds on that same stream. Your server can also create its own streams by pushing assets to the client. Then there's a bunch of stuff going on with frames. A frame is the smallest unit of data that you can send in HTTP2, an individual chunk. The reason that's important is that HTTP2 interleaves data. You have one TCP socket, and you're sending all these little TCP packets; within those packets, you have chunks called frames, each describing something that happens on a single stream.
That's really good, because if you're transferring a large file, say a huge video asset, from a server, and at the same time you're loading HTML and JavaScript, lots of small files, you don't want them held back by the large transfer. Your server can trickle down little parts of the video file while prioritizing the CSS and the JavaScript, so that the page still appears to load fast. The problem where one large transfer holds everything else back is called head-of-line blocking, and avoiding it is a nice thing that HTTP2 gives you out of the box. Now, there are caveats. It turns out it's not really that amazing, and the current practices of HTTP1 also do a pretty good job, and it goes pretty deep, but that's basically what frames and streams are about. Okay, so there's nice documentation; I suggest taking a look at it. I'm not going to give you a talk that just reads out the documentation. Like I said, it's still very experimental, so things are going to be changing. Even just in recent weeks, events have been changing their names, so whatever I'm doing with it, I've had to update my code and tweak it here and there. It makes for interesting times; you've got a minute and probably 50 slides left, but we'll get through it, don't worry. So, like I said: core API, compatibility API. I've pretty much covered most of this anyway. You basically start off with the exact same APIs as the standard way of doing require('http') or require('https'): you've got your createServer and your createSecureServer, and that's basically all you need. Ideally, you're not actually changing anything else to use HTTP2 versus HTTP1; your application remains more or less exactly the same, and I'll show you how to do that with some popular frameworks. If you're using the core API, you can access things like streams, priorities, HTTP2 settings, and push promises, the new features.
If you don't need those, if you just want to enable HTTP2 in your application, ignore them and use the compatibility API. But here's what the core API looks like. The top part should be pretty familiar: you're just importing the file system module and http2. Then you create a new server with a key and certificate. Everybody loves that, I'm sure; generating key pairs with OpenSSL is fun. All you need to do is attach a listener to the 'stream' event. The stream handler doesn't get a request and a response; it gets a stream, plus the headers of the request that opened the stream. On that stream, you have methods like pushStream. In this case, what I'm doing is serving index.html, and along with index.html, the server pushes app.css. It does that to eliminate a round trip, the wasted time where the client would sit there, wait for index.html to come in, then see "oh, I need this CSS" and make another request. Here, I just eliminate that wasted time and push app.css to you. It's kind of like bundling, like what we saw in some other talks with Webpack, and I think Angular also had something like that; the same idea, but done at the protocol level. So what does it do? pushStream creates a new stream; the push variable here is a stream, just like the stream you receive from the event. You respond with a status and a content type; those are the headers for the response. A push stream is, you can imagine, like a fake request, which is why I'm giving it headers when I create the stream. The server is sending a fake request to the client and then following it with a fake response to that request. So there are two sets of headers, the request headers and the response headers, both generated by the server.
And lastly, this is standard code that just reads a file and pipes it to the stream, and now you're serving the file. It's somewhat abbreviated. There's also an interesting new API: respondWithFile is only available in the http2 module. It's a feature of the nghttp2 library that lets you bypass all the file-system handling that Node does and gives you a higher-performance way of serving files. This is the kind of optimization that servers like NGINX do to achieve ridiculously high benchmark results, so it's the kind of API you want to use; it bypasses all the file handling you'd otherwise do yourself. Okay, so the compatibility API is very similar to the existing APIs for HTTP and HTTPS. As I mentioned, instead of the 'stream' event you opt into the 'request' event, and that automatically enables all the extra work of generating the request and response wrappers, the objects that wrap around the stream and give you the familiar API. On the client side, a little caveat: the client side is not really well supported in terms of backwards compatibility. You can use the core API through client requests, but you can't use the request method that's familiar to people who have worked with the HTTP and HTTPS modules; that just doesn't exist yet. The main reason is that there's no real need for a connection pool. Right now, if you fire off a thousand requests with Node.js, it uses its own internal pool through an agent. An agent, think of it like a browser, throttles how many connections you can make at the same time. If you do a thousand requests in a for loop or something, it fires off maybe ten at a time, whatever the setting in Node is.
With HTTP2, you don't really need that, because it opens only one session to the server, one socket, and over that single socket you send your thousand requests; the protocol just interleaves them all for you. And you're not going to destroy your server, because the protocol limits you to a certain number of acceptable parallel open streams. So there's not a whole lot of need for a pool, but it could be implemented; actually, if you're interested, open a pull request. Okay, I'll finish with that. So we've got the compatibility API here again. This should be very familiar. The only really new thing is that you've got createPushResponse on your response object. This will look familiar to anyone who's done Node with HTTP: you've got request and response, setHeader, writeHead, the usual stuff, and you can pipe files to responses again. The only new one is the push response. Instead of the pushStream method from the core API, this one gives you a push variable in the callback that's a response object, not a stream object, so you maintain that compatibility abstraction throughout. Okay, so if you're using it with Connect, it's really straightforward; pretty much nothing changes. This is standard Connect code, as standard as can be, and the only real difference is that you put the 2 here. That's basically it. You can use standard middleware, so it's compatible with everything else out there. I also want to do a plug for Fastify, which is, I guess, a new high-performance framework. It's supposed to be super fast, faster than native, which is kind of cool: faster than the Node HTTP API itself. It also does JSON serialization very, very well; it's actually faster than JSON.stringify, which is kind of insane if you think about it. You give it an object and it gives you a string back faster than JSON.stringify does.
Basically, all you need to do is give it the http2: true flag, and you can use an extra configuration option here to allow HTTP1, so that it falls back. If you connect to this HTTP2 server with a user agent like an old browser, or an old command-line tool that only supports HTTP 1.1, it will still work just as well. Obviously you can't do server push and things like that, but for most cases that's a nice fallback to have. Okay, that's it. Thank you very much. Any questions? What's the current browser support for HTTP2, and when we're connecting to the Node server, is it possible to put the usual NGINX proxy in between? It's a two-part question: what's the browser support right now, and, if I understand correctly, how do you run it with NGINX or anything like that in front? First of all, browser support right now is actually really good. I think the last holdout was Safari, but even Safari, as of a couple of versions ago, supports pretty much every feature of HTTP2. The protocol itself is pretty stable; in fact, most browsers are now working on QUIC, which is sort of the next version of HTTP2. So HTTP2 is very well supported in all browsers, and pretty much all of the features work. On the server side, I think there's a bit of a lag. If you look at any server out there today, I don't think you'll find a single one that does upstream requests with HTTP2; if you have an upstream directive in an NGINX configuration, for instance, I don't think any server out there does that over HTTP2. There are actually good reasons why it may or may not be a good idea, but I think that mostly comes from having a legacy HTTP1 server implementation and thinking, why should I add this code? If you were starting from HTTP2, obviously you would just keep using HTTP2 all the way through.
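For reference, the two flags mentioned map onto Fastify's server options roughly like this (a configuration sketch based on Fastify's documented http2 support; the key and cert values are assumed to be loaded elsewhere):

```javascript
// Sketch of Fastify with HTTP2 plus an HTTP 1.1 fallback.
// `key` and `cert` are assumed to be loaded elsewhere.
const fastify = require('fastify')({
  http2: true,        // speak HTTP2
  https: {
    allowHTTP1: true, // fall back for clients that only speak HTTP 1.1
    key,
    cert,
  },
});
```

With allowHTTP1 set, the TLS handshake's protocol negotiation decides per connection whether the client gets HTTP2 or HTTP 1.1.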
So, using something like an NGINX server or Varnish in front specifically to offload HTTP2, I wouldn't recommend it. If I were building an application, I'd want full control over all of these features, like push. If I'm using server push, there's no server out there, no CDN out there, that will give me full support for pushing assets easily. There are workarounds with a Link header, but they're not very scalable; you're limited to a few dozen pushed assets at most. So I think the better way to go is just to run Node. I've been doing this; I'm not just saying that. I've been running Node handling the TLS termination as well as the HTTP2 layer, and it's turned out pretty well so far. I don't think you necessarily need some super-high-performance separate thing to serve static assets; depending on your use case, you may not need to set up NGINX or Varnish to handle that separately from what your Node server can do. You're rarely going to be limited by CPU usage. I've done benchmarks that hit tens of gigabits per second with Node, serving small requests on very low-end commodity hardware, and that's with encryption and everything; you're almost always going to saturate your link first. So it's not really necessary, I feel. If it makes you feel safer, you can, but know that you'll be limited to HTTP1 protocol support only. Any other questions? We're talking about priorities. Does it share the stream? Would it stream a few things at once and prioritize some over others, or does it queue them up and do certain things first? So the question is about priorities: how do they work? The spec and the implementations are not always the same. I say that because implementations differ a lot among themselves and also change over time. Priorities really mean you assign a weight to something, and the server takes it as a hint, as a suggestion.
It's not mandatory, but the server can take it as a suggestion: okay, the client wants me to send more data for this stream, so I'll do that. It uses those weights to calculate how much bandwidth to allocate to each stream. So if it has, say, 50 chunks of data to send on a connection, it'll say: this stream has 10% of the weight, so I'll give it five chunks per tick, or whatever. Priorities also go hand in hand with dependencies. A stream can have a dependency on another stream, in which case the stream it depends on should be served first. Some servers, some clients, some browsers have weird ways of doing that, where every file is a dependency of the next one, and the next one, and so on. If you have a page with 100 files and all of them are chained dependencies, you end up serializing everything, and you kill off all the benefit you could have from parallel interleaved streams; it becomes serial connections. That was some of the earlier implementations, I think of Chrome. Others do everything flat: they just ignore priorities, so you assign priorities but it doesn't really matter. So priorities are in the spec, but nobody is really using them too much. Actually, most servers do use prioritization internally, even though the spec says they're not allowed to set priorities; there's no way for them to signal priorities from the server side, but they might internally favor certain files. Some servers will push CSS, JavaScript, and HTML before they push any image files, just based on the MIME type. So there are all these internal, undocumented optimizations that also come into play. Play around with it, and your mileage may vary. Okay, one more? Or not. If you're totally confused, come talk to me afterwards. Cool. All right, thank you.
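As a toy illustration of that weight calculation (invented for illustration, not any real server's scheduler): the bandwidth available in one tick is split in proportion to stream weights, so a low-weight bulk transfer still trickles along while the small assets get most of the link.

```javascript
// Toy weight-proportional allocation: each stream gets a share of the
// chunks sendable this tick, proportional to its weight.
function allocate(streams, chunksPerTick) {
  const total = streams.reduce((sum, s) => sum + s.weight, 0);
  return streams.map((s) => ({
    id: s.id,
    share: Math.round((s.weight / total) * chunksPerTick),
  }));
}

const plan = allocate(
  [
    { id: 'video.mp4', weight: 10 }, // deprioritized bulk transfer
    { id: 'app.css', weight: 45 },
    { id: 'app.js', weight: 45 },
  ],
  50 // chunks we can send this tick
);
console.log(plan);
// video.mp4 gets 5 of the 50 chunks; app.css and app.js get 23 each
```

This is the "10% of the weight, so five per tick" arithmetic from above; real schedulers layer the dependency tree on top of this.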