Okay, great. So thanks for letting me talk. I'm going to talk to you about HTTP/2 and Node.js. I'll run through a little bit of detail on what HTTP/2 is and what it means to do it with Node.js, and what Node.js actually is, for those of you who haven't used Node before; this is an open source conference about all kinds of technologies. I'm going to talk about the pure JavaScript implementations of HTTP/2 for Node that are available on npm, the Node package manager. They've been around for probably three, four years now; they started as Google Summer of Code projects, and you can use them to build your server app. I've used both of the most popular ones in some projects that I'm doing, and I'll share my experience. And then I'll talk about the upcoming native support, and what it means to be native in Node anyway, for HTTP/2. So that's what we're going to cover in the next 20 minutes or so. First of all, why would we need to fix HTTP/1? What's wrong with HTTP/1? Well, there's a couple of main things. If you're new to HTTP/2: basically, with HTTP/1 you're not really using TCP/IP effectively. You're ramping up multiple sockets, taking six or eight or however many sockets to request a whole bunch of assets. Each of those needs to ramp up to speed, because of the slow-start behavior built into TCP, the fundamental algorithms in TCP that determine how fast you can send and receive a lot of data and how you respond to packet drops and latency. That's not really the right way to do it; rather than opening multiple sockets, one socket would be ideal. Then there are things like: when you have one request, you have to wait for its response before you can send the next request, right? So this kind of blocking means that you're going to have compounding latency.
So if you have a request that goes around the world, you're going to be waiting like 300, 400 milliseconds, and then the second request will again take that amount of time. It becomes basically prohibitively expensive to do lots and lots of small requests. So people came up with all these hacks, like bundling things with tools like Webpack and whatever. Those are great solutions for the HTTP/1 world, but in HTTP/2 you shouldn't have to use them anymore; these are things that have been addressed at the protocol layer and are now sort of taken for granted. Other things: HTTP/1 is plain text, newline-delimited, which is really nice if you're just reading the socket with a packet sniffer or something, but these days we've got much more advanced debugging tools. Few people really look at the raw data on the network. Even if you're using the network tab in your browser's inspector tool, the network tab is already parsing that for you and showing a more polished version of it. So the value of looking at plain text is kind of diminished these days, and there are potential gains from binary packetization, turning things into frames that go into streams, and that is the approach HTTP/2 takes. Another thing is the headers. If you're doing lots and lots of requests with HTTP/1, every one of those requests and responses has a set of headers: extremely repetitive data, if you look at it across requests. And if that's uncompressed, then you're wasting potentially lots of bandwidth. So HTTP/2 solves a lot of these things. Now you've got interleaved streams rather than multiple sockets: one socket at the TCP level, and you can send lots and lots of requests and responses using the streams construct. It chops your requests and responses into little frames, of various types, and sends them over the same socket.
So you can have like 100 requests going on and 100 responses coming back at any given time, and they can all happen interleaved: a little bit of this request, a little bit of that response, all at the same time, in both directions. Actually, I say 100 because that's the default minimum built into the spec for how many concurrent streams should be supported by any implementation. It's kind of unheard of to run 100 sockets in HTTP/1, but in HTTP/2 it's the minimum. Not that you have to do that, but every implementation should support it. And that's an important thing when you look at the performance of these implementations; they're not always really tested for that. So again, like I said: binary frames, where you're chopping all these things up, and then header compression is now built into the protocol. Just like we'd use gzip or some other compression for the payload, you can now use HPACK, which is the default algorithm, to compress your HTTP/2 headers. So that covers HTTP/1 and why we need to fix it. Now, Node.js, for those of you who are not familiar with it, is just running JavaScript on a server. Just like you would run PHP code or Python code or Java or .NET, you can now run JavaScript on the server. It's nothing really new, but I just want to point that out. Now with Node, it's kind of important to realize that when Node came out all those years ago, HTTP was the killer app for Node. Like, there's nothing inherent in JavaScript that says
it should speak HTTP, but Node did, and that made it extremely successful at building web apps and APIs. It has all the asynchronous, callback-based semantics natively, because of the language, but then giving it a couple of standard libraries, you could call them, including HTTP, has made it sort of the default. The Hello World example on Node's homepage is literally just this: ten or so lines of code that build a web server. Where you previously would have had to run Apache or Nginx and then run your application behind that, proxied upstream, now it's just a couple of lines of code and you have your own server. This is the Hello World of Node, and the very first line of it is HTTP. That's why I think it's important to look at the new version of the protocol: what does it mean for Node, when HTTP is so crucial, so critical to Node? So there are, like I said, two packages on npm. Again, that's the package manager for Node; NPM stands for a lot of things, but most people treat it as a package manager, and you can download millions and millions of packages there. These two are node-http2 and node-spdy. SPDY was the precursor protocol that HTTP/2 grew out of; that's the short version of the story. Both of these have been around for a couple of years and are very popular: hundreds of thousands, millions of downloads every month. And I've played with both of them now. I found that they're great for doing my little personal experimentation with the protocol and what it can do, what it could potentially do, sort of building proofs of concept.
When I tried to run them in production, though, I found serious performance issues, and it's kind of hard to work around some of the API limitations and how they're set up. Because, again, these are implementations of a spec, not necessarily APIs designed for developers. Designing an API and implementing a spec are two different things, and good APIs come through iterating. Multiple people building APIs around this protocol is how we'll come up with better patterns for using HTTP/2 effectively, because beyond the benefits I talked about earlier, there are new primitives in the protocol that you can now use, like server push: you can push assets from the server to the client that the client hasn't requested yet, because you're anticipating them. So, the first one, node-http2. It's got a really clean API: if you're familiar with the Hello World example, the API for this library is almost exactly the same, and it works fairly well. It doesn't really crash; once you've figured out all the errors you need to trap and all the try/catches you need to put all over the place, it's kind of stable. But it really doesn't perform very well. Once I started benchmarking it, I ran into weird issues where performance really takes a dive under sustained load: 100 requests would be really fast, 200 requests a bit slower, 400 massively slower, and so on. And once you do 2,000 requests per second or so, it just completely hangs; it doesn't respond for two minutes. You also get some protocol issues, because these libraries, keep in mind, were built around the time SPDY came out, before it was really called HTTP/2, against some of the earlier drafts of the spec. So some of the latest behavior in browsers doesn't necessarily work with this library, I found.
The node-spdy library has a bit more users, and, this is a little subjective, but I think the code base is a bit more nicely structured. On the flip side, asserts in production are a debated issue, but I'm not a fan of something that can just crash my server without any recourse. I cannot trap that error: if the server receives some kind of input it doesn't like, it just asserts and throws and blows up and crashes my whole server. I'm not going to be happy, so I don't want to put this into production. Also, the push API didn't quite satisfy my particular use case. I wanted to play with when I push things, when I send the push promise versus when I send the actual payload, and the API isn't designed to accommodate that. So I couldn't really get much benefit out of it. So this brought me to: why don't we just have a native implementation in Node, in core? What is core? It's basically a set of libraries, low-level constructs like buffers and crypto, and HTTP/2 could become one of those things. The question that hasn't been answered yet is: is HTTP/2 really just a version bump, or is it a whole new protocol? Because you can make the case for other protocols to go into core, but they don't get added. Like WebSocket: every browser supports it, a lot of applications use it, and it's not part of core, right? Why not? Well, you don't really want to keep adding things to core, otherwise core just becomes bloated. You don't want too much stuff in there to maintain. Once you put something into Node, people will use it, and five years later you have to support like a million applications that rely on that one little API. You can never change and fix things. That's a problem. So putting things on npm is a viable strategy.
If you consider HTTP/2 a new protocol, then you could say: we'll leave HTTP/1 as part of core, never implement HTTP/2 in Node itself, and always rely on people to pull the dependency from npm. So again: is this actually going to go into core? Not sure yet. Under what name? Not sure yet. With what API design? Not sure yet. These are all things being discussed right now, and if you want to go and take part in these discussions, it's actually very interesting. Like I said, you don't want to break the Node ecosystem, right? So are we just going to replace the HTTP/1 APIs? There are probably a million Express apps out there that we've all been writing that would suddenly break. That's not a good thing, generally speaking. So the current experiment that I've been stalking religiously lives in the Node.js organization, in a fork called http2. It's not a pull request, it's not a branch; it's a complete fork, so you can see it as a completely separate project. It's mainly done by James Snell, a gentleman who works for IBM, I believe, and Matteo Collina, out of Italy. Those are the two people contributing most of it right now. I'm not really contributing to this personally, although I'd love to be; I'm looking at writing some tests and making sure things work. I'm mostly just lurking on it and playing around with it from a user's point of view, in my own use cases that are a bit higher level: building a server on top of this, building some tooling on top of this. So keep in mind this is really still a work in progress. It's been going on since roughly late last summer. The server sort of works; the API isn't frozen or anything like that. The client is currently undergoing massive development. The tests are little to non-existent, and there are a lot of opportunities for people to contribute to something really impactful.
If you want to start contributing to an open source project that could have massive implications for the huge ecosystem of Node, this is something to really look at. Write some tests, even just the test cases. This repo tracks what's happening in Node version 8, which is kind of confusing, because there's also V8, the JavaScript engine. Node 8 is probably going to ship in a couple of weeks, and it's probably going to be called Carbon; you know, the LTS names: Argon, Boron, now probably Carbon. It's still being discussed. This is probably not going to be ready for it. I mean, almost definitely not going to be ready. The idea that this will land in Node version 8, I question that at this point, but I'm crossing my fingers; that would be awesome, people would go on an insane sprint and get it all done. But I'm fine working with the fork, and if you want to play with this, just use the fork. I'll show you how to do that. The implementation itself is different from the ones on npm in that it's not written in JavaScript. It's built on nghttp2, written in C: a really low-level, super high-performance implementation that's been around for a couple of years. It's used by Apache, right? The reference web server, essentially the de facto standard: Apache's mod_http2 uses this nghttp2. It's not Angular, okay? Just want to put that out there. It's not Angular, not some next-generation Star Trek thing. It's just nghttp2, and it's fantastic. curl, also: if you want to make HTTP/2 requests on the terminal like a cool hacker, you use curl, and curl gets its HTTP/2 support from this same library when it's built with it. So two of these extremely reference-grade projects are both using this library. It's got the pedigree, right? It's really, really good stuff.
It's really low-level. I'm not a C programmer, so I find it hard to contribute to anything like this. But what it gives you is standards compliance: if it's good enough for Apache, it's probably good enough for you. For me, anyway. Secondary goals, of much less importance, at least for this initial implementation, are performance and API design. On API design: compatibility with the HTTP/1 module in Node core is not a key concern. It might have been interesting, and people thought about trying to overlap, but you quickly find there are different things to support, so they're not prioritizing that. They're just trying to build a cool, clean API that's easy to use, to get people playing with this more effectively. On the performance side, while they've done a lot of performance work, it's more on the JavaScript bindings to the native library than on tuning the library itself: how do we take buffers that come off the network in Node land and throw them over to nghttp2, pass things around, avoid overhead. That's the kind of performance work happening. But really, when you're talking about HTTP/2, performance is going to be about other things. It's going to be about how you terminate your TLS endpoints, because HTTP/2 is effectively all TLS: all the user agents out there, all the browsers, only support it over TLS. You have to have TLS. So you can't put it behind something like Nginx or Varnish, because they don't really deal with HTTP/2 upstreams. You can't have an HTTP/2 server sitting behind Nginx. You can't have an HTTP/2 server sitting behind, frankly, any CDN today. You can't have it sitting behind Varnish, because none of them make upstream calls to an HTTP/2 server. They only make HTTP/1 calls, or HTTP/1 over TLS if you're lucky. Load balancing becomes a whole new challenge, right?
You have to do your own load balancing in Node land, probably, or wait for new tools, or build new tools. So this is why performance itself is not really the key design criterion. And that brings me to our first demo. These are very lightweight demos; I'm not going to do a lot of live coding, I just want to put this information out there so you can play with it. The first thing is just getting it working on your system. Like I said, there's this repo, the fork. You pull it down and run a couple of commands: ./configure sets it all up for your system, and make takes a little while, so you can go off and have a nice cup of tea or something. When you come back, you should have Node 8 built. That means you can run ./node, give it your file.js, and you'll be running on this Node 8 pre-release kind of thing. If you want to make it global, if you're totally reckless, you can globally install this as your default Node. Don't do that. It's a really stupid idea. I've done that, but don't. I find it easier to work with PM2, which is a process manager. If you've ever spun up Node application servers, you can use PM2 to automatically scale out to as many cores as you have. It'll run in multi-processing mode and be massively faster on any modern computer: run it on a server with 16 cores and it'll run 16 processes, without you changing a line of your own code. It's fantastic. To do that, you do need the new Node globally available, because PM2 just calls node on the system; you can't pass it a specific Node binary, as far as I know. I'm probably wrong about that. But anyway, let me bring up the hello world for this. Like we saw the hello world for HTTP/1 and Node, for HTTP/2 it's really simple, just a dozen lines of code, and we're really not doing much new here.
Like I said, we're using TLS, so you have to have a key and a certificate. I could do a separate talk about how to do that, but it's really straightforward: you generate a key and a certificate, you read them from disk, and you pass the options to createSecureServer. That's the same API as the HTTPS module in Node: you have createServer for plain HTTP/1, and createSecureServer to spin up a TLS socket and listen on that. That's all it's doing. And you pass the callback; the callback is your request/response handler. Anyone who's done Connect or Express programming is familiar with this underlying mechanic: Express is really just a callback that you plug into this function. You create a server, you give it a callback. Pretty straightforward, and that's what Express is built on. Then on your response you have the standard stuff: you writeHead, you send the body. It's very standard-looking code. So all I'm doing here is saying hello world with a party-hat emoji, and making sure the emoji renders with a UTF-8 charset. Straightforward. And if I now run that, I want to see what its performance is. Even though I said performance is not really the key design criterion, I was kind of curious, right? I want to see: is it better than the JavaScript versions? So I ran some tests, with PM2, like I said, to automatically spin it up. So this is that dozen lines of code, now running as a cluster of four servers with PM2. They're listening on the same port, and they'll each be handling their share of the connections. To drive it, nghttp2 comes with this cool, awesome tool called h2load. It's an alternative to Apache Bench (ab) or wrk, those HTTP/1 benchmarking tools that hammer millions of requests at your server and measure response times and error codes, but for HTTP/2.
It's very efficient. So I'm making 100,000 requests, from 100 clients on 10 threads or so, to localhost on port 8443. I use 8443 as my default; I don't use 8080 anymore because it gets confusing. And what did I find? On this little machine here, which probably has less performance than most people's phones, I got 18,000 requests per second. That's pretty ridiculous. I wasn't getting anywhere near that with HTTP/1. You might be able to tune things, but 18,000 requests per second on this machine, with clients and server running on the same box? I was blown away. I was getting like 1,000 or 2,000 with the JavaScript implementations. So this is already a massive step forward for performance, and it hasn't even been polished or made production-ready yet. I was curious how that compares, so: what do you get from Nginx? I got 12,000 with the same setup, as pared down as I could make it. I don't want to say Nginx sucks. I love Nginx; I've been using it for the best part of ten years, probably. I'm just saying that if you pare Nginx down to a hello-world, contrived, completely artificial, non-realistic example where it's just serving a hello-world text file, I get less than with this hard-coded one. Apples and oranges, sure, but it's kind of promising. To me it shows that the protocol handling of the native Node.js implementation of HTTP/2 is really, really fast and very good. Obviously nobody's serving hello world 18,000 times per second; that would be the world's worst web app. But you can build things on top of this and at least be sure that this layer won't be holding you back. And that's about as far as I've gone with my experiments into the native implementation of HTTP/2. I'm happy to take some questions now, if we have some time, about the state of these technologies.
So there are these few packages, and then there's also the standard HTTP/2 implementation that they're trying to get into Node 8. We should go for the standard one, right? Right. It's just a lot of work, and it's a work in progress; I'm sharing my findings on this ongoing work. It isn't ready yet: you download Node today, you won't get this. It might be six months, it could be a year, I don't know. Is it going to be replacing... maybe you said this, but I didn't get it... is it going to replace the existing HTTP APIs, or is it going to be a separate native http2 module? That's actually one of the discussion points right now. I'm personally interested in this because I feel like this is a part where we can all contribute. I'm sharing all this because it's been so easy for me to do open source contributions here, in a sense, or at least to evangelize a little and try to help out: do some testing, give some feedback, question the design. One of the issues being discussed, for example, was whether it should be a new namespace. Should we replace the npm packages with this native one? But then we're breaking their API, and when these things have hundreds of thousands of downloads, that might upset a lot of people. Even though you're not breaking Node, you're breaking a heavily relied-upon package, and that's not a nice thing to do. So there are all these kinds of things people are discussing right now. And I'm just saying: you can be part of this. You can sit back and say the world will go on and magically things will get better, or you can help make things better and be part of that. Do browsers have to support HTTP/2 in order to use it? Of course. And they actually do.
This is sort of unrelated to the Node side, but it's very interesting. Chrome, Firefox, and Safari, since the last version that came out a few months back, all support it. Edge too; I don't know about IE, but Edge at least definitely supports it. So in my tests, and I've been interested in this because I'm building a bundling tool to replace things like Webpack and Browserify, using server push on top of this technology, it works with every browser. It's beautiful. The support is there; it's complete. If you're worried about backwards compatibility, if you care about IE 3.2 or something, you've got problems, but it just works with the current evergreen browsers. Currently, when we build web applications with all these web services, we usually hide a bunch of Node servers behind Varnish or Nginx. What's going to happen there? As you said, Nginx and Varnish don't do HTTP/2 upstream. Do we have to put Node.js up as the new frontend? What would you do? I think it's a little bit of a chicken-and-egg situation, maybe. They're not sure if they should implement it, so the browsers have actually taken the lead on this. People in the Varnish community, the Nginx community, have very strong opinions; they don't necessarily agree with everything in the protocol. I don't think anyone fully agrees with such a highly debated spec. But for me, I'm just running Node. No Varnish, no Nginx. Just Node talking HTTP/2. So you're using Node directly, and then you distribute across the various servers? Typically the app server listens on some port like 8443, but the browser, of course, only talks to 443. So you'd use a Node application to do the distribution that you'd typically do with, say, Varnish, right next to it. That's right. And there are so many different strategies, so people are going to experiment with that. At some point maybe Varnish or somebody will change their mind.
But I mean, Varnish hasn't even implemented TLS, so I'd be surprised if they implemented HTTP/2. Someone else might, though, and then that person's project might become the thing, the next Varnish, for the next ten years, right? What about Nginx, does it support it? Nginx is interesting in that it supports HTTP/2 connections inbound, but when it upstreams, when you have Nginx as a load balancer in front of your application server, that connection is still going to be HTTP/1.1, or HTTP/1.1 over TLS, not HTTP/2. And that's actually what a lot of the major CDNs you're using are built on: things like Nginx, maybe a little modified, but the core technology tends to be Nginx. That's why they never make HTTP/2 connections to your origin server. So they might accept client connections over HTTP/2 and serve responses over HTTP/2, but they won't talk HTTP/2 to you, so you can't really do those optimizations. Okay. Have you heard any news that Nginx will also add the HTTP/2 upstream feature? It does support HTTP/2, like I said; it has supported HTTP/2 client connections for a while now, though I'm not really involved with Nginx at all. In terms of upstream? Oh, in terms of upstream, the comments I've read might be a little dated by now, but I'm not sure their opinion has changed. They're not really implementing that; they're not really interested, I think. I just suspect that a lot of the CDN companies would be happy to sponsor that kind of development. Some CDNs have already switched to other servers, like H2O. So then what's the point of building an application that uses HTTP/2 if that hop between my load balancer and my backend isn't HTTP/2? Well, you could question: do you really need that load balancer? Are there alternative ways to do it? Do you just do DNS round robin?
Are there other ways to do load balancing at a lower level? Yeah, if possible. Yeah, for example. All right, any other questions? I think we have the next speaker. Any other questions? I think everyone's waiting for the next speaker. So, I think that's it. Okay, thank you. Thanks, Sebastian.