Wait, is this on? There's no PA, right? Can you hear me at the back? OK, let me know if I start mumbling too much. So, hey, this is the HTTP/2 workshop, just to make sure everyone's in the right place.

Quick intros. So you are? I run a startup here called DexSecure, focused on front-end performance, anything to do with JavaScript. Cool. And I'm Sebastiaan Deckers. I've been sort of a full-time open source guy for the last year. HTTP/2 started as a little side-project hack and it's been completely dominating my life for the last year and a half. I've been working on this for a long time now, and I'd like to share some of what I've learned and built along the way.

I want to start with a little introduction to HTTP/2. But before we do that: who here is actually using HTTP/2, or knows a little bit about it already? OK, fantastic. Who is shy of raising their hands? OK, everyone, excellent. I'm only going to touch lightly on HTTP/2 itself, because the protocol is actually really similar to HTTP/1. We're all web developers here. I don't want to make too many assumptions, so if you're coming in from a Haskell functional-programming background and you've never seen the web, let me know. But I'll assume certain things about your knowledge of HTTP: that you understand what a request and a response are, and that you've built a web app that serves a response to a request.

What's different in HTTP/2 are some basic concepts that I'll go through now. The first really interesting concept is that you have streams instead of just sockets. In HTTP/1, everything goes over one socket, a TCP-level socket: TCP provides the socket, and your HTTP goes on top of that. For every request you need a socket, and then you send one request, wait for a response, send another request, wait for another response. That introduces things like head-of-line blocking, which is essentially when a request has gone out and you're waiting for a really huge response: any other requests have to wait for the original response to finish before they can go out over that same socket. HTTP/1 addresses that with workarounds like opening six or eight sockets per host.

(Are you joining for HTTP/2? All right, just seat yourselves somewhere. We've already started, but it's just the beginning. For the newcomers: I'm explaining some very basic theory first. It's a little dry, a little conceptual, but we'll get into code very quickly, and you'll get a more intuitive feeling once you start coding and actually using these concepts.)

So I was just talking about streams. Basically, HTTP/1 has a socket where a response and request have to finish before you do the next one. With streams, the idea is that you have one socket, and over that socket, instead of sending your request out in one piece, you chunk it up into little packets called frames. Each frame carries a stream identifier, an ID that lets you interleave headers and data fragments belonging to different streams on the same TCP socket.
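To make that multiplexing concrete, here is a minimal sketch (not code from the talk) of Node's experimental http2 client firing two requests over a single session; the host and paths are placeholders:

```js
const http2 = require('http2');

// One TCP connection (plus TLS) means one HTTP/2 session.
const session = http2.connect('https://example.com');

let pending = 2;

// Each request below gets its own stream ID; both are in flight
// at once, so a large response no longer blocks a small one.
for (const path of ['/big.json', '/small.json']) {
  const req = session.request({ ':path': path });
  req.setEncoding('utf8');
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    console.log(path, body.length, 'chars');
    if (--pending === 0) session.close();
  });
}
```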
So you can have multiple requests going out at the same time, and some of the responses start trickling back as data frames. By doing this all over one socket, you gain some benefit against head-of-line blocking: you no longer have to wait for a whole response to finish. That was the design of HTTP/2. In practice, it turns out you've basically just pushed head-of-line blocking down one level. Most tutorials out there will tell you: oh my god, HTTP/2 solves head-of-line blocking. It doesn't, in practice, because you just push it down to the TCP layer, which is why in the last few years people have been working on a new spec called QUIC. QUIC drops TCP altogether, puts everything directly over UDP, and reimplements the things TCP offers, the whole retransmission and flow-control story, inside its own spec on top of UDP. We'll talk a bit more about that later and what it brings. But the focus will be on HTTP/2, and you can think of QUIC as supplemental to HTTP/2; it doesn't fundamentally change the concepts we're talking about here.

So we've just covered streams and frames: like I said, you chop your data up into frames. I'm not going to go into too much detail right now; we'll see more later when we actually start using these things.

HTTP/2 also offers header compression, called HPACK. There are actually a couple of proposals right now to replace HPACK with newer compression schemes; there may or may not be some security vulnerabilities. There are also things it doesn't offer. For instance, streams that all go to the same domain name, the same authority or host or origin, whatever you call it (the terminology blurs a bit across the different versions of HTTP), share a compression dictionary. But if you send headers to example.com and then to example.net, those headers are compressed with separate dictionaries, so you don't get the full efficiency. That's something people want to fix now.

(By the way, you can access the slides yourself if you want to skip ahead, or if you get bored. I'm not going to hold you back. Can you see this URL at the back? Get it from your neighbour if not.)

So, like I was saying, people are trying to fix HPACK as well. Just because I mention these things now doesn't mean they'll be the same in a couple of years; little parts of the protocol are already evolving, and we'll even try some of the newer extensions that have been adopted for HTTP/2, and even some experimental ones, which is kind of cool.

We also have priorities and dependencies. This is a new concept; in HTTP/1 there's no such thing. But now you've got streams that can be declared dependent on another stream, because they're all on the same socket, and the server can determine which one should be prioritized. So if you have, say, an image that's not very high up on the page, you can give it a lower priority. The server can interpret that as a suggestion from the client, a hint saying: OK, I'm going to send everything else first and allocate more bandwidth on this socket to the higher-priority resources.
I will say that, in practice, this part of the spec doesn't really work. Different servers implement it differently, and most really just ignore it. Browsers also have very different opinions about how to build this dependency tree, and some of their trees are really just a queue: one branch, then another branch, then another branch, which doesn't really help. People are playing with it, but unless you're doing research specifically in that area and trying to optimize something, it's not going to have a very huge impact, and it's a lot of complexity to implement. So yeah, great choice, spec designers.

Next: clients and servers can negotiate settings. When a client connects, it sends its settings in one of the very first frames to the server, and the server responds with its own settings. Settings are things like: how many streams can I have open at a maximum at any given time? The spec recommends never going below 100; Chrome actually advertises 1,000 streams. That's 1,000 requests and responses potentially open at the same time on a single socket, which is pretty liberal, pretty generous. Other settings are things like receive windows.

(Wait, hang on, don't we have enough chairs? OK, sorry about that. There's space to sit at the front if you want to get closer. Can you hear me OK there? OK.)

So, a little bit of settings negotiation; we'll try to use some of those. One of the key settings is that the client can enable or disable server push. Server push is the thing that personally got me into HTTP/2 a couple of years ago, back when it was SPDY. I really liked the idea that the server can eliminate wasted round trips by saying to the client: here's a file that I'll also send you along with the response to your request. Say a browser visits a website and requests example.com/, just the index HTML. The server would normally serve that; it takes some time to get to the client, the client processes and parses it, and then goes: OK, now I need my app.js, and I need my styles/mydesign.css, and whatever. A lot of round-trip time is typically wasted there; it's called think time in some contexts. With push, you can eliminate that: if the server knows what the client will need to go along with that page, it can push those assets right away. That's a really cool idea, I thought, and it's what I've been working towards; we'll use it later. I'll also talk about some of the problems with it, what maybe wasn't foreseen, and how we can address those. We'll go pretty deep on that.
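To tie settings and push together: the client advertises things like enablePush and maxConcurrentStreams in its initial SETTINGS frame, and a Node server can read the negotiated values. A hedged sketch using Node's experimental http2 module (the values and port are arbitrary, not from the talk):

```js
const http2 = require('http2');

// Advertise our own settings in the initial SETTINGS frame.
const server = http2.createServer({
  settings: { maxConcurrentStreams: 100 },
});

server.on('session', (session) => {
  // Fires when the peer's SETTINGS frame arrives.
  session.on('remoteSettings', (settings) => {
    console.log('peer allows', settings.maxConcurrentStreams, 'streams');
    console.log('peer accepts push?', settings.enablePush);
  });
});

server.listen(8080);
```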
And lastly, there's the connection-coalescing idea, which fits into the whole streams story. With streams, all the JavaScript files, all the image files, all the CSS files from your website can go onto the same TCP socket at the same time. Now say you're on a CDN: a service like Cloudflare, which you might have heard of, or Fastly, or Akamai; many other large companies offer CDN services too. A lot of people put their websites onto those services to host them from servers around the world, because those edge servers get closer to the large population centres of the world, and that eliminates latency. Having low latency is basically the best solution to any performance issue on the web. If you're in Singapore and you're requesting from a data centre in one-north, you're never going to notice that you're doing 100 round trips, because you can do them in a tenth of a second.

So, once you're on a CDN for all these performance reasons, there's this concept of coalescing connections, where you say: I'm going to establish a connection to my site, my example.com domain, but then I need to load something from api.example.com, because I have a single-page app talking to an API. What you can do now is put both of those domains onto a CDN that serves a single TLS certificate, securing the connection, which lists both of these domain names, and then the client is able to request from both of those origins on the same connection. You eliminate the need to set up another connection, which is actually a rather expensive process in terms of latency: setting up a socket takes a round trip, doing the TLS handshake is at least another two round trips, and then there's your HTTP request itself, so that's four round trips at least, possibly more, because the size of the certificate might exceed the bandwidth of a single round trip. The idea is that you can serve multiple domains on the same TCP socket, the same connection.

If you combine all these things together, we can start moving towards a very different way of delivering web applications, where you have the minimal theoretical amount of latency between your client and your server, and zero wasted think time, because your server is constantly pushing data down that pipe to fill the available capacity. This is very important in places where there isn't an abundance of CDN infrastructure available right now. It allows you to serve from Singapore, say, to a customer in America or in South Africa, without necessarily setting up servers in all these places. So there are a lot of concepts there, and I'll talk about how I've been experimenting with them, actually using them in practice, and what tools we can use. We'll start by taking a closer look at these protocols, at the data that's actually being sent, so we get familiar with the primitives of the protocol.

I will point out that my little rant here is probably not the best explanation. The best explanation is a really rather excellent FAQ written by Mark Nottingham, who is one of the chairs of the IETF HTTP working group. He's been doing this for decades, he's really, really good at it, and he writes rather nicely, so you should actually go through it. And I think that out of all the tutorials I've ever seen about HTTP/2, none are as good as the actual spec. It's a very easy spec to read; it takes maybe an hour. If you're intent on working with this, I would highly recommend you just sit down and go through it. Take your time.
And as you're working with it, use the spec as your source of truth, rather than blog posts and tutorials and talks like this one. I'm just trying to facilitate and explain what I've learned, but I'm not better than the spec. Look at these things; I found it really rather enlightening.

So that was the FAQ, which lives under the HTTP working group. The IETF, this thing in the top right here, is the standards body behind it. It's a really open organisation; anyone can participate in the discussions, which mostly take place on mailing lists. These are people from around the world who work for all kinds of companies that are competitors of each other most of the time: browser vendors, CDNs like I mentioned, open source hackers. There are working groups for different protocols. Most of the protocols I've mentioned so far, TCP and UDP and QUIC and TLS and DNS and HTTP, are standardised by working groups (that's the "WG") that fall under the IETF, and there's a lot of process around it to make sure everything is facilitated fairly and openly and anyone can participate.

In fact, just a month or two ago, the IETF held its 100th in-person meeting, and it happened in Singapore. I was fortunate enough to attend for my first time, and it was the best tech conference I've ever been to. It wasn't really a tech conference like this one, where you go to attend workshops; it was just people discussing specs. People would actually get up and propose something, and others would debate the concepts. Very, very nice, very enjoyable. And I was able to contribute as well, even though I don't work for a Google or a Mozilla or whoever else was there; you can just go as an independent person and say, OK, I've worked on this in Node. Everyone speaks for themselves, nobody speaks for their employer, so everyone's on equal footing and can fairly contribute their feedback and ideas.

If you're interested in that at all, take a look at the mailing lists under the HTTP working group. You'll find all of it at httpwg.org, and the mailing list is where all of the discussion happens. Personally, I don't discuss a lot there, but I do lurk, and it's a good way to stay abreast of what's coming up. You'll see things that might not exist in browsers for another two or three years, but you'll be aware of them; you'll see what's coming. I think that's very valuable for any web developer. You'll see some really interesting discussions, and it's kind of cool to see the recurring names and who's working on what; you'll start seeing which browser may or may not be implementing which features.

So, the first thing we can do is look at a couple of tools for inspecting HTTP/2 traffic. Open up your inspector; most of you have Chrome installed, I hope. Let's go to a website. Let's see, JSConf, hang on. Who here knows that on Sunday there's a meetup called KopiJS? As an aside, for the visitors it might be a little difficult.
But if you're around at all on Sunday morning, there's a meetup called KopiJS where people just gather. It's not really 100% JavaScript, but it's mostly developers, and you don't actually need to talk about coding or anything like that; it's just a nice way to socialise and meet some people over a cup of local Singapore coffee. So check it out.

Anyway, the point is we have this website open. Go to the Network tab in the inspector and reload. All right, here we go: this is happening over protocol "h2". That's basically our first clue that this is not a standard old HTTP/1 website.

The next thing you do is go to chrome://net-internals, which is a little lower-level debugging in the browser, and go to HTTP/2 on the left-hand side there. (Is this too small for the people at the back? Let me magnify that; I always forget the shortcut for zooming the screen on a Mac.) Now we've got a little live overview of what's going on in the internals of the browser, and we can see our live sessions. So: kopijs.org on port 443, which is the standard port for TLS, for an HTTPS URL, with a session ID. We probably have to reload the page so it captures something. (These tools are a little bit finicky, so bear with me, please. Live demos are the best.)

On your own computer you probably won't have to struggle with the zooming and all that, but basically you can see really low-level information on what actually happened and how the browser requested it. We see some general events: because we reloaded, it first closed the session, probably received some settings there, and so on.

So here's our session, and this is the request as it goes out. Because it's all binary, this is a decoded view of that binary. If you were looking at the actual bytes and rendering them as ASCII, you would just get a bunch of garbage, because everything's compressed, like I said. Compression makes it very efficient, but also totally not human-readable anymore. So this is decoded, decompressed, a representation in a debugger panel; this is not literally the bytes that are going out. With HTTP/1, the bytes that go out really are pretty much literally like this, just wrapped inside TLS. Here, a lot of this looks very familiar, but you can already see some differences: you have these pseudo-headers. In HTTP/1 it would look different; the first line of the request is basically just GET, then the path, and then the protocol name and version.
In HTTP/2, because there's no text-based, line-by-line protocol, those parts get encoded as headers so that they can also be compressed. They're prefixed with a colon so they never clash: any header that does not start with a colon is a regular header, and the ones that do are pseudo-headers reserved by the RFC. These get compressed really efficiently, so they don't take as much bandwidth as what you see on the screen, and repetitive values basically get encoded down to their entropy.

You can see this single session has a lot of streams going over it. We can see things like the parent stream ID being zero for this one (zero is the root), and the stream IDs incrementally going up from there. There's a distinction between even and odd stream IDs depending on whether a stream was initiated by the client or the server, so you can kind of tell which direction it's going. They can go up to 31 bits, so about a billion streams in either direction; you should never actually exhaust that.

So that's one way of looking at the traffic. What I wanted to do next is install some tools. Who's got curl on their system? Probably everyone who has used curl before. OK, cool. For those who haven't seen HTTP/2 being inspected with curl, just run curl --verbose https://kopijs.org. You'll see all this content, but the interesting part is all the way at the top. Curl actually gives you insight into how it sets up the connection with TLS and what's being sent in the headers. At the top you see this ALPN thing: that's how it negotiates whether this server actually speaks HTTP/1.1 or HTTP/2, and it's part of TLS. You see the exact handshake and the certificate exchange; those are a lot of round trips, and in future versions of the protocol this is one of the things we're trying to eliminate. You'll see what kind of certificate it is, which algorithms it uses for signing and digests and everything, and some details of the actual certificate. In this case, the CN, which stands for Common Name, is kopijs.org, so this certificate only has kopijs.org on it. Remember, when we talk about coalescing connections you need multiple domains on the same certificate, so this certificate would not be able to do that. It's also signed by Let's Encrypt, which is very common now; most new certificates are signed by Let's Encrypt, a free service that's available through most hosting companies free of charge, and you can integrate it yourself too.

Then we see the regular headers. This is fairly standard HTTP stuff, where you see the method and so on. But actually, this is not what was literally sent out; it's a little bit of a lie by curl. What actually gets sent is those compressed headers, with the :method and :path pseudo-headers. This is just curl's representation; curl is a tool that's been around for ages, and I'm guessing this rendering was kept to stay familiar to everyone. It's not 100% accurate, but it gives you a good idea of what's going on.
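If you want to see that pseudo-header shape directly, Node's http2 client takes them as plain keys in the headers object. A small sketch (the host is a placeholder, not part of the talk's exercises):

```js
const http2 = require('http2');

const session = http2.connect('https://kopijs.org');

// :method and :path replace HTTP/1's "GET /path HTTP/1.1" request
// line; headers without a colon prefix are ordinary headers.
const req = session.request({
  ':method': 'GET',
  ':path': '/',
  'accept': 'text/html',
});

req.on('response', (headers) => {
  // The status code comes back as a pseudo-header too.
  console.log('status:', headers[':status']);
  req.resume(); // discard the body
  req.on('end', () => session.close());
});
```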
OK, then another tool I wanted to share, built specifically for HTTP/2, is nghttp. Go to nghttp2.org: this is an open source library that implements HTTP/2. It's written in C, and it's the underlying library for tools like curl; it's also what's used in the Safari browser, the Apache web server uses it, and Node.js uses it too. It comes with various command-line tools included, one of which is a client, the nghttp command. You should try to install this on your system now; we're going to be using it. My co-presenter will walk you through getting it working on your computer, including a Docker image for those who need it. Who's got Docker on their system? OK, cool, nice.

(Some laptop and adapter juggling later.) Great suggestion from the audience: we can use Homebrew to install this on our own machines; that's actually what I do as well. So if you have a Mac and are able to install it via brew, just do that. The Docker image was mainly for those who don't have Unix or a Mac; I haven't tried it on Windows yet, so the Docker content is mainly for that. If you're able to install nghttp2, just follow the instructions on the website. The planned demo is a bit of load testing, if we can get that working.

(Two quick announcements: I've told the venue to make it a bit warmer, and I would like a photo of all of you, for my collection. Thank you so much. Next level: Instagram.)

So yes, set up nghttp2 if you haven't already. nghttp2 is the C library which implements a lot of the lower-level connection handling. Anyone able to install it so far? Who's installed it already? It's actually quite heavy, so in case you're not able to install it yourself, I have a Docker container with everything set up; you can pull it from this address (that's the local-host IP, sorry, one second) if you're having problems installing, and it should help you get started. It is quite big, though, nearly 80 MB. If you have problems installing, just let me know. Question? Can you just use this Alpine image?
Or is there something extra in it? I do add something extra, but the Alpine image should get you started. Oh, good point: if you want to access this URL, make sure you're on the JSConf Wi-Fi network; there are two networks, and the password is "beware of the cats", I think. (Some Wi-Fi, VPN and URL troubleshooting later: note the repo is on GitLab, not GitHub.) The download here is also running at about 100 kbps, so if the binary is taking too long to pull, just follow the installation instructions instead. This is the main thing to get installed, so let me know if anyone else has problems installing the nghttp2 library.

So: nghttp2 is basically a C library which provides a lot of the low-level abstractions for managing the actual frames that get sent over an HTTP/2 connection. Different servers build on top of it; for example, Node.js core uses this library underneath the APIs they're building into core. It also comes with a tool called h2load, which we'll be using to load test HTTP/1 and HTTP/2 connections. That's what I had set up in the Docker image, but I think it might take too long to demo in full.

One of the things I'll show is a fundamental way in which HTTP/1 and HTTP/2 are different. With HTTP/2, everything goes over the same connection, and different requests happen on different streams, as Seb just mentioned. With HTTP/1, for each request you need to open a different socket, and there are security implications to that. For example, if you want to do rate limiting on HTTP/1, you might say something like: I don't want more than, say, 1,000 connections open at the same time on the server. With HTTP/2 that paradigm changes, because with one connection you can have, say, 1,000 streams at the same time, which can utilise a lot of resources on your server. So when you're shifting to HTTP/2, especially if you're doing that on your own server rather than just using a CDN to manage your assets and such, you need to fine-tune all these settings on your server; Nginx, for example, has a lot of different directives to manage the number of parallel open sessions and things like that. The thing I wanted to show here is how, with a minimal amount of resources, an attacker can generate a lot of traffic with HTTP/2 versus HTTP/1. With HTTP/2, it's pretty cheap to make a request.
You just start a new stream and make a request on that stream; with HTTP/1, it's different. So what I'm doing here is starting a Docker container, limiting the amount of memory and CPU it's able to use. I've installed nghttp2 inside it, which comes with h2load. You can look at the different parameters; you can use different flags to load test your application (-n for the total number of requests, -c for the number of connections, -t for threads, -m for concurrent streams per connection, and --h1 to force HTTP/1.1). The cool thing is that it can send requests both via HTTP/1 and HTTP/2, so you can test which one can push more requests through at the same time. Things to play with: the total number of requests (it sends them in batches, so you can say: keep 100 requests in flight and send 10,000 in total), the number of threads (I'll use one, since I've limited Docker to one core), and the number of concurrent streams, which is only meaningful for HTTP/2.

So I'm just going to send a lot of traffic to a server. I've set up a sample server at experiment.dexsecure.com; please don't bring down the KopiJS website, but this is a test server I don't care about if it goes down. It's also running in a Docker image with very few resources, so there's a high chance it can go down. (And no, it doesn't redirect to HTTPS.)

So this is what I'm saying: send 10,000 requests, 100,000 requests... I'll probably reduce that. Let's see what this does. OK, it's able to send that quite quickly. It uses HTTP/2 because that's the default, and you can see the throughput here. You can also play around with the number of connections: within one connection it has multiple streams, and I'm starting six connections to see if that improves throughput; that depends on your system configuration and such. With HTTP/1, which you force with the --h1 flag, the throughput is much lower. If you try to start a lot of HTTP/1 connections at the same time, the Docker image basically chokes; it's not able to send as many parallel requests as HTTP/2. If you set -c, the number of open connections, to 15 or 20, it won't be able to handle it, and you can measure how much slower it is. Just play around with this tool; it's good for seeing how HTTP/1 and HTTP/2 are different at a fundamental level, why using streams is more memory-efficient than setting up a whole new connection, and so on. Does anyone have problems setting up nghttp2 or trying out h2load? Just let me know. You can see the requests start failing as I open 100 parallel connections, which is very bad with HTTP/1: most requests just fail, because it's not able to open that many parallel connections. With HTTP/2, this is much easier.

So, again, the URL is on GitLab, not GitHub; if you want to access the readme, this is the URL. You can access it now? Yeah, this is the one. OK. And how do you use it?
So, have you installed it? Then there will now be a tool called h2load. You just give it a URL, set the total number of requests to send, how many to send in parallel, and so on, and monitor your CPU usage and the throughput.

You know who was giving the workshop about performance this morning? Matteo. He was really involved in the HTTP/2 implementation in Node last year: there's a person at his company called James Snell, a really amazing Node hacker, who worked on it primarily, with Matteo supporting him, and I contributed a little on the compatibility layer and support and testing and so on. Matteo also made this other tool called h2url. I don't know if you pronounce it "Hurl" or what; you should ask him, I guess. If you npm install h2url, it gives you another tool you can use to do curl-like things over HTTP/2. So it's another option: just npm install it locally in your package, or do a global install. Everyone here knows npm, right?

Something you might not have seen yet: you can also use npx, a tool that comes with npm. (By the way, you need Node 9.4 for all this HTTP/2 stuff; stay on the latest, it's changing every week.) If you run npx h2url with the full command instead of npm install, what happens is that npx installs the package in a temporary location, runs the command, and removes it immediately again when it finishes. It's kind of like getting the benefit of a global install without actually polluting your command-line namespace. So try npx h2url with some URL. It's a nice tool, just a little experimental thing.

Each of these tools can give you a slightly different perspective. Like I was saying earlier, curl actually omits a few things about the protocol, but then it tells you a lot more about the TLS handshake. Depending on what you're debugging, each of these might give you a different insight and help you solve different problems. And I know for a fact that the browser tools are sometimes fictitious, to put it mildly. When you're doing things like server push and you're trying to figure out why more requests are hitting your server than the browser says there are, looking at the internals can lead you to interesting things: oh, for some reason it opens two connections, because some bug in the browser makes the favicon load separately, since it's rendered by a different part of the user interface, and that opens up a new connection. All this kind of weird stuff happens, and that's why it's useful to have the full suite of debugging tools at your disposal.

OK, all right. So everyone's got a little something working now? Everyone's made a request and looked at the HTTP/2 traffic? Who still needs a hand? We'll go around and see. (Let's have a look... "unexpected token". OK.)
We'll try this one. Which version of Node are you on? Let me check... I think it's not understanding the async function. Oh no, this is ages old; just change to 9.4, this definitely won't work with that. By the way, make sure you are at least on Node 8 or 9.

Someone here had a Docker issue? (Some back and forth about building the image on Windows: the installation runs a bash script, so try it from git bash or a Linux terminal; I haven't tried it on Windows, sorry.) Did anyone run h2load against the KopiJS site or something? Yeah, thank you, I'll check it out; it's basically a server that's just running at home, so go easy.

Everything OK? Is it working? Let me see your hand raised if you don't have anything requesting yet. OK, I'm going to go on to the next part, where we get to the Node.js side of it. Use 9.4.0 for this, because I think the APIs have changed again for the push stream.

The main thing is that HTTP/2 is experimental in Node.js. When you start using it, it will throw a warning. That's fine; that's supposed to be the case. The reason it's considered experimental is that the API does still change a little bit here and there. Normally, if you change any API in Node core, even the smallest thing, like fixing a typo in an error message, it has to go through a really lengthy deprecation cycle. Because of LTS, the long-term support versions of Node, that could take years and years to fully roll out. Just waiting that long with HTTP/2 was not acceptable: we didn't want to wait years to ship it in the current version of Node, and we also didn't want to leave bugs trailing for years, potentially leaving people vulnerable to security issues. That's why HTTP/2 is considered experimental. So if you're using it, expect that some of these APIs might change, and the authors of the frameworks you use have to really pay attention to this and constantly update.
So if you have a dependency on a web server or some middleware that uses HTTP/2, you might want to be updating that all the time, and whenever there's a new version of Node, test it out before you ship to production. The current latest release is 9.4.0 as of yesterday; that's what I'll be using.

The basic API will feel really familiar. For HTTP/1.1 you require('http'), and if you want to make that secure with TLS, you require('https'), which has almost exactly the same API, just with the TLS options added. Now, HTTP/2 itself doesn't mandate TLS: it's the same protocol with or without. So require('http2') gives you both a createServer function and a createSecureServer function, and you can choose yourself whether you want TLS. You probably want TLS, and I'll get into that in a second, but for now, let's try to build a server using just the documentation.

A little exercise: create an HTTP/2 server. There's a little cheat right there, and I'm sure every single person here has already seen the code, but try to create this without looking at the actual exercise. Look at the documentation instead, because the idea is to familiarise yourself with the documentation as your reference, so you don't have to memorise APIs and other pointless trivia that changes all the time. Just always refer to the Node documentation at nodejs.org/api/http2.html. It's nicely written, more or less; if you see typos, you might want to contribute a fix. It's more or less up to date; there are a couple of errors in there that I'll point out later, where you might need to check the very latest docs on the repository itself, because some of the APIs changed a few weeks ago and for some reason the documentation was not rebuilt yet.

If you go there, you'll see there's a core API and a compatibility API. The reason is that the Node.js HTTP/2 implementation is layered. One layer is the nghttp2 library we were talking about: that's the lowest level of the HTTP/2 implementation in Node. It's written in C, and it's included in the Node.js source tree as an untouched dependency; the exact code from the nghttp2 repository, completely standard. Then the C++ part of Node.js itself is what binds to the C code of this library; a lot of C++ code was written to take the API nghttp2 offers and expose it in Node's C++ land. Most of the Node.js internals at that level are native code. There's a library called libuv, the event-loop library that does all of the input/output, all the networking, all the file access, all the operating-system hooks. That's all native code, and the reason is, A, it's high performance and, B, it's cross-platform: libuv exposes file access and networking on Mac, on Windows, on Linux with a common API, exposes the event loop, exposes the V8 JavaScript engine bindings, all in a nice, high-performance manner.
And so that's why we have to have C++ to connect to any other library like nghttp2, for instance. Now, because we're using JavaScript, this C++ layer exposes its own API to an internal JavaScript library within Node, and this is the layer where it splits into a core API and a compatibility API, which is how the documentation describes it.

The compatibility API tries to mimic everything that the normal HTTP/1 API offers. The reason is to provide an upgrade path for people who are currently using the existing web frameworks like hapi or Express or Connect, or any kind of custom code, or node-fetch, all these kinds of things. You want to make it as easy as possible to migrate those projects over, so all the methods and all the properties, as long as they exist as concepts in HTTP/2, are mapped to the same API. The core API, on the other hand, offers you access to the new concepts: push streams, the streams and the session themselves, the HTTP/2 settings. It gives you lower-level access, and it's slightly higher performance because it doesn't do as much wrapping and extra lookups, but performance-wise they're more or less the same; you can comfortably use the compatibility API and never face any issues because of that.

So look at the core API and just build a little Node.js server; it's really quite straightforward. OK, I'm just going to go ahead and put the solution up here. Essentially, like I said, you get a createServer method from require('http2'); you import it. The most fundamental event for dealing with any request is 'stream': every request is essentially a new stream, and a connection is a session. You can think of a TCP connection as a session, and a stream as a request, more or less. When a stream opens, you receive this stream object, which is in the documentation, and you also get the headers, already decoded. Actually, you only receive this event once all the headers have been fully parsed and processed: the headers could be split across multiple fragments, so the API waits for all of those to come in, and once it's completely done it gives you the fully decoded, decompressed, normalised, safe-to-work-with headers object. And that's just a map, no prototype, just a map of raw key/value pairs, fully decoded.

I've got a little logic here to show what this does. So look at the API documentation and write this yourself, or just copy-paste it and run it on your system, then play around using the debugging clients we set up earlier against your own local server. And yes, now you can try to DDoS this one, on your own machine.

What's happening here is: normally you would do something like writeHead or sendHead or whatever, I always forget the API. Here there's a new API called respond, where you just give it all of the headers. Remember, in HTTP/2 the status code is not a magical field on the first line; it's just another pseudo-header. So when you're sending all your headers, you send your status code, your 200, your 404, your 500, whatever it is, as the :status header name with the number as its value.
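For anyone who can't see the screen, a minimal version of that exercise looks something like this (a sketch following the Node 9.x http2 docs; the port and response body are arbitrary):

```js
const http2 = require('http2');

const server = http2.createServer();

// 'stream' fires once per request, after all header frames
// have been received and fully decoded.
server.on('stream', (stream, headers) => {
  console.log(headers[':method'], headers[':path']);

  // In HTTP/2 the status code is just another pseudo-header.
  stream.respond({
    ':status': 200,
    'content-type': 'text/plain',
  });
  stream.end('hello http2');
});

server.listen(8080);
```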
So you send this response, and that just sends the header frames out to the client, and then you can send any kind of data as a stream. You can pipe files to your response: create a read stream from a file, or take any kind of stream that you have in Node, and pipe it to this stream, and it'll automatically flush all the data and do the buffering really nicely. But this is the simple way, just sending a string out there.

About the colon thing, since someone asked: that's a pseudo-header, pseudo as in P-S-E-U-D-O. In HTTP/1 you have "GET", space, the path, space, "HTTP/1.1" as the first line of the request. In HTTP/2 you don't have a first line of your request anymore; it's binary, so there's no concept of a first line. To keep those concepts, each part is mapped to a pseudo-header: the method becomes the :method header, the status (the 200) is :status, the path is :path, and so on. Mapping it all to pseudo-headers means it just fits into HPACK header compression: the header frame contains header after header after header, and HPACK compresses the whole thing no matter what the names are; a header can have any name, and :status and :path are just the same. So with the core API you have to deal with :status and :path yourself, whereas in the compatibility layer you can treat it like the normal HTTP/1 API.

Now, will this work if you connect to port 8080 with your browser? No, and that's the key thing: the browser won't actually work right now. This is just createServer; there's no TLS, no HTTPS or anything like that. That's also why I'm doing port 8080: port 80 is for HTTP and port 443 is the default port for TLS, but any port below 1024 on Unix needs sudo access, which complicates things needlessly, and you might already have something running there. My personal preference is 8080 and 8443, so just read 80 as 8080 and 443 as 8443 in the demos. Keep that in mind when you're requesting this: if you hit it with your browser, the browser will just go "I can't connect to this at all". We'll do a demo of how to do the TLS thing after this.

OK, I want to see that we actually achieve this step; we should have some HTTP/2 servers running. (Walking around: is this your local server you're hitting, localhost? Looks good. And there's the response headers, and the data, "hello", printed inline. Nice, good job. How are we doing over here? You got it? Awesome, that was quick. Oh, you're linting my code, huh? All good here?) Next, you can try doing things like piping a file to the response: read a local file and pipe it to that response.
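Picking up that file-piping suggestion, here are two hedged variants (the file path and routing are placeholders; respondWithFile is part of the http2 core API):

```js
const http2 = require('http2');
const fs = require('fs');

const server = http2.createServer();

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/manual') {
    // Variant 1: respond with headers, then pipe any readable stream.
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    fs.createReadStream('./index.html').pipe(stream);
  } else {
    // Variant 2: let core handle opening and closing the file.
    stream.respondWithFile('./index.html', {
      'content-type': 'text/html',
    });
  }
});

server.listen(8080);
```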
(More walking around.) Oh, it's a typo: it's "respond". You got it running? Now hit it with nghttp; not nghttpx, that one's a proxy server, just nghttp, maybe with --verbose, against http://localhost:8080. There you can see all the frames: the headers frame coming out with the pseudo-headers and everything in it.

How's it going here? It doesn't work in the browser? Right, the browser doesn't allow HTTP/2 without encryption. We'll show how to set it up with encryption locally and all that; it can be kind of tedious, painful sometimes. You'd locally generate your own certificate; look at my tutorial, and feel free to skip ahead. I have some tools for it. I actually just published this last night; I've been using it forever, but I've never had it public. Try it out.

Hang on, what's this, reading a PFX file? I think this is already a different exercise; you copied from the second one, the secure server. The first one is the basic server, so try that one again, make sure it's saved, and run the file.

And over here, can I just see? Ah, I think I see what's happening: this version of Node is fairly old, and you did an npm install of "http2", which is a really old module with the same name. It'll get fixed by updating to Node 9.4.0, the latest current release, and using the built-in module. So just update Node. No worries.

"I can curl it, and I can use h2url, but not in the browser. Why can't I do it in the browser?" Because the browser doesn't allow HTTP/2 in plain text; only over HTTPS. "But I thought browsers normally allow plain text?" Only for HTTP/1. For HTTP/2 they deliberately did not do that, and I can talk about why, because everyone's hitting that point now. Did everyone use nghttp to connect to their server?
Most people seem to have used nghttp, because that's what we were using before. Anyone who used curl might find that it's not possible to just connect by default. That's why I sneakily changed the command here to curl --http2-prior-knowledge, which basically tells curl: don't worry, this is HTTP/2. Because otherwise, you might be wondering how a browser or any user agent knows which protocol to use. HTTP/2 is all binary, with a completely different handshake; when the client sends its request, it's completely different on the wire from what an HTTP/1 client would send, so an HTTP/1 server wouldn't even understand it. You might also have noticed that your browser struggles to open this in a page; some people have reported that now. It just doesn't load and shows some TLS or protocol error.

So for curl, you can tell it to use HTTP/2, 100%, double-confirmed: --http2-prior-knowledge. If you just pass --http2, it'll try what's called the upgrade mechanism, which is a way the server declares to an HTTP/1 client, via the Upgrade header, that it also supports HTTP/2, which tells the client to switch to HTTP/2 mode and start sending HTTP/2 traffic on that same connection. That's slow, there's a wasted round trip again, and it's just not very efficient, so browsers have not implemented it.

Another reason browsers don't support HTTP/2 over plain-text, unencrypted connections at all is that it's really problematic to deploy that on the internet. If you introduce a huge breaking change like an entirely new protocol, there is so much infrastructure out there: proxy servers, or "middleboxes" in standards speak, firewalls, logging servers on corporate networks, all kinds of optimisers and caches for mobile carriers that apparently just recompress your images automatically. All this infrastructure exists, and it assumes that if traffic is plain text, it can mess around with it. It ends up breaking HTTP/2 if you send it in plain text, because it was built for an HTTP/1 world and doesn't recognise the protocol. Updating all of that stuff is basically a situation where you can never update it: nobody would do it unless there was a need to, and there would only be a need once everyone had already upgraded. That's why you can really only introduce huge new protocol changes by fully encrypting everything and exposing only the smallest number of plain-text bits in the handshake. So that's why we have to use HTTPS for browsers to support it.

Now, curl and a lot of other command-line tools don't really care about that; they're there for debugging, and the spec actually describes both how to run the protocol over TLS and how to run it without, including the upgrade mechanism. So a lot of these command-line tools support plain text. Node does too: we saw plain text on the server side, and there's also a client built into the Node HTTP/2 implementation that can connect to a plain-text HTTP/2 server.
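A minimal sketch of that built-in plain-text client, handy for hitting the exercise server from earlier (assumes the core-API server above is listening on 8080):

```js
const http2 = require('http2');

// http2.connect() with an http:// URL speaks plain-text HTTP/2
// in "prior knowledge" mode, like curl --http2-prior-knowledge.
const session = http2.connect('http://localhost:8080');

const req = session.request({ ':path': '/' });
req.setEncoding('utf8');

let body = '';
req.on('response', (headers) => console.log('status:', headers[':status']));
req.on('data', (chunk) => { body += chunk; });
req.on('end', () => {
  console.log(body);
  session.close();
});
```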
But that's really only useful for machine-to-machine traffic, like an API call to an API call, if you want to skip over the whole TLS thing, or for writing unit tests. If you're trying to hit a server and you don't want to set up self-signed certificates and deal with all that self-signed hassle on your CI server, for instance, you can just make a plain-text connection. But in the real world, when you're dealing with users on browsers, you're going to need TLS. Okay. So maybe before that, let's compare the core API that we've been using now, with this stream, to the compatibility API. You notice here we've still got the same createServer, but we're providing it a callback right in the createServer call. In the first example, we just did a createServer and did not pass it any callback, right? We get a return value, which immediately, synchronously, gives us the server instance back, and then we attach an event handler for the stream event. Now, this is how you enable the compatibility API: by passing a request handler to createServer. It's a very subtle way to do it, and I'm not a big fan of it, but that's the tricky part. You might do this by default, because this is generally how you handle requests in HTTP/1: you call createServer and you give it a callback for the request event. In the core API we don't use that. We use the stream event (we have a session event also, but we use the stream event) to deal with a request: we get the headers and we send data back on that stream. The request and response objects are only part of the compatibility API, and these are the ones that expose all of the HTTP/1 legacy API stuff. They expose all of the same methods and properties, like I said. So if you pass this callback in, you get a familiar request object and a familiar response object, and you can use the familiar APIs: writeHead with a status number and a map of headers, and then again you use the stream to close it off with a string passed back out. So if you run this, you'll get more or less the same thing through the compatibility API. If you log the request object, you'll see that it looks very different from the stream object; it has different properties. The low-level ones are going to be on the stream, and the high-level ones are going to be on the request. One thing you might be wondering: okay, what if I have a request and I want to access the lower-level stuff? Luckily it's just exposed: request.stream, and you get access to the low-level object again. But you can see how this wrapping could be a slight hit on performance. So if you're building something like a benchmarking tool, some really high-performance code, you might want to stick to the core API, without the overhead of wrapping all of these objects and doing the extra parsing to expose the compatibility layer. But it's there. In most cases you can just use the compatibility API, because it gives you all the functionality of the middleware that you're already using with whatever framework you're on right now, out of the box. Pretty much works with... well, I'm not going to say it works with everything, but it works pretty well. And it's very familiar.
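Side by side, the two styles look roughly like this; a sketch, assuming Node 9.4:

    const http2 = require('http2');

    // Core API: no callback to createServer, handle the 'stream' event yourself
    const core = http2.createServer();
    core.on('stream', (stream, headers) => {
      stream.respond({ ':status': 200, 'content-type': 'text/plain' });
      stream.end('hello from the core API');
    });
    core.listen(8080);

    // Compatibility API: pass a request callback, get familiar req/res objects
    const compat = http2.createServer((request, response) => {
      response.writeHead(200, { 'content-type': 'text/plain' });
      response.end('hello from the compatibility API');
      // request.stream still exposes the low-level stream object if needed
    });
    compat.listen(8081);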
If you've been using HTTP in Node for a couple of years, then this is going to be very, very familiar. You can also just change the require to http to get the HTTP/1 version of it, and then do a load test against both to see which of HTTP/1 and HTTP/2 is faster. Cool. Okay, so that makes sense? Feel free to change the code that you had to this one, run it, and see what happens. So, oh, by the way, maybe I should clarify this little guy here. It's not an emoji; that's an IPv6 local address. The IPv4 one would be 127.0.0.1. With IPv6, because the addresses are really, really long, there's this abbreviated notation, and the localhost address for IPv6 is essentially just colon colon: it's completely abbreviated, just ::. But because the HTTP URL uses a colon for your port, you have to wrap your IPv6 colons in square brackets. So essentially this becomes your host name, where the double colon is the IP address, the square brackets just say "this is the host name part of the URL", and the colon before 8080 is the separator between the host name and the port. So you'll see this in a lot of places, and it's tripped me up many, many times. I fix bugs in people's frameworks a lot now where I see people forgetting to wrap these, and you end up seeing URLs like http:// followed by a triple colon, and you're like, what is going on here? People forget about that all the time. It's an annoying little gotcha, so I just wanted to put that in there to make sure everyone's aware of it. Okay, now let's get on to fixing this for the browser. So, TLS. Like I was saying, the reason we have to use TLS is really compatibility. Without TLS it would just break; we would never be able to roll out HTTP/2, because it would be trying to patch a huge change onto an existing protocol with massive adoption, and nobody would want to implement that. There's this chicken-and-egg thing. So the spec allows plain text, but you really just want to use it over TLS. The spec actually refers to TLS 1.3 in one place, but TLS 1.3 doesn't officially exist yet. The HTTP/2 spec came out a while ago; they were very optimistic that TLS 1.3 would be settled, and they've continued to be optimistic, so a month ago they were saying, oh, we should probably push back. They were thinking it would be out in March or April this year, but now it's probably going to get pushed back to Q3 or Q4. We'll see, right? So TLS 1.3 is a work in progress. Right now we're on TLS 1.2, and despite the small change in version number, there's about ten years of difference between them. TLS 1.3 is almost an entirely new protocol compared to TLS 1.2. One of the key changes: the design of TLS in the past, up to and including 1.2, was this layered approach of protocols. You would first establish a TCP socket, which means you send a SYN packet to the server, you get back an acknowledgement, and you send your own acknowledgement to the server again. So there's a round trip going on before you can even start your TLS handshake.
Now, the TLS handshake is again the client saying, hey, I want to do TLS; then the server going, okay, these are the certificates that I have and the mechanisms that I support; then the client going back and saying, okay, here's the one I want to use; and then the server going, okay, now we're connected, now you can send your request. So we have one round trip for TCP and then two more round trips for TLS before we can even get to the HTTP request we actually wanted to send. We're dealing with a minimum of four round trips before the first response arrives, which is unacceptable in most environments. That's very wasteful; you should be able to just transfer the whole thing. And that has been a design goal of QUIC, which is essentially the new transport layer for HTTP. It replaces this whole TCP-plus-TLS stack with a modular approach, where you send a single UDP datagram that includes the QUIC session negotiation, your TLS 1.3 certificate and algorithm selections, and your HTTP request, all in one datagram. That goes out from the client to the server, and the server says: okay, for QUIC we'll do this; for TLS 1.3 we'll do that; and here's the response to your HTTP request. So you can get the entire thing back in a single round trip, and from then on your server can even start pushing, so you never have to send a request and sit in think time waiting for the server to respond. You get to completely eliminate the wasted round trips that you would have with TLS 1.2. So that's where this is going. I will say that neither QUIC nor TLS 1.3 is standardized right now, but if you open up your browser inspector you'll see that when you connect to Google.com or YouTube, or maybe even some Facebook sites, these guys are already playing around with it. Even some CDNs, like Cloudflare: once in a while you'll see them announce that they support TLS 1.3, don't worry, it totally works. What they support is maybe more like gQUIC, or draft versions of TLS 1.3; the earlier versions of QUIC were published by Google, because a lot of that research was originally done there before it moved into the IETF working group. Do you want to make an announcement? Just really quick: because it started raining a little bit, the hotel is telling me that... alright, so don't get struck by lightning at the beach party, you know? Okay, that's a little bit of background on what's going to happen soon with QUIC and HTTP/2 and TLS. Here are some of the things we're using from TLS 1.2 right now to make this HTTP/2 stuff work, and some things you can use to make it even better. So, ALPN. If you saw the curl debug output, the ALPN lines: this is what it's using to connect to the server and figure out whether this HTTPS URL is serving HTTP/2 or HTTP/1. Because of this layered stack, there's first the TCP connection handshake, that round trip. Then, when the client makes the first round trip of the TLS handshake, it includes a little bit of information that says: I support HTTP/2, but I also support HTTP/1.1. The server receives this in the initial handshake and says, okay, I'm going to serve you HTTP/1.1, or, I'm going to serve you HTTP/2. That negotiation is called ALPN, application-layer protocol negotiation, and in Node it's now exposed with a simple option called allowHTTP1, which defaults to false.
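In code, that looks roughly like this; a sketch, where key.pem and cert.pem are whatever certificate you have for local development:

    const http2 = require('http2');
    const fs = require('fs');

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem'),
      allowHTTP1: true, // opt in: ALPN offers both h2 and http/1.1
    }, (request, response) => {
      // With allowHTTP1 set, both HTTP/1.1 and HTTP/2 clients land here
      response.end(`negotiated HTTP/${request.httpVersion}\n`);
    });

    server.listen(8443);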
You have to actually opt into the backwards compatibility. I forget why we made it false by default. Basically, you pretty much always set this, because by default you'd want HTTP/1.1 backwards compatibility. It seems like a good idea, though you could leave it turned off if you're from the future, where everything speaks HTTP/2 and it works everywhere, it's fine. So that's one mechanism we're using; you might come across the name, so now you can sound smart and talk about it. SNI is another one: Server Name Indication. SNI is basically how you put a lot of different host names on a single host, on a single IP address. HTTP/1 would use the Host header for that; I think it's actually HTTP/1.1 that introduced it. With HTTP/1.0 you still had to have one IP address per server, because when you connect to a server you just say: I have this path name that I want to request with this method, but the server wouldn't know which domain you're requesting it from. So it would have to say: this IP address is configured to be this host name. Then with HTTP/1.1 they went, okay, this is getting expensive, maintaining all these different IP addresses, and they introduced this Host header: Host: example.net, that's the domain I want to connect to. Say you're hosting many, many websites on a shared server, cheap hosting where you pay a dollar a month: there will basically be one server, one IP address, and almost ten thousand websites, right? They rely on that Host header to map your request to the correct set of files to serve, or the correct PHP code to run, or whatever you want to do. In HTTP/2 that has to happen a step earlier, at the TLS level. TLS 1.2 has this extension called SNI, Server Name Indication, which you use in TLS to serve the correct certificate, because you can't even get the request yet: you don't get that host information before you have to serve your certificate. So when you have a lot of hosts on one server, you need to consider that they might all have a different TLS certificate. You don't want the same certificate for a hundred sites: if that gets compromised, you're in a whole bunch of trouble, and if you need to revoke it, you end up revoking everyone at the same time. So there are complexities there, and reasons why you'd want to limit the number of domain names on a single certificate. You need to be able to figure out which certificate to serve before you get to the HTTP level; that's why we use this thing called SNI, to say: okay, I'm going to give you this certificate at the handshake level, at the TLS level. Then later, at the HTTP/2 level, there's no Host header anymore; that was deprecated and turned into the :authority pseudo-header. I get those confused a lot: host, origin, authority; I mix and match them a few too many times. But basically: ALPN, SNI, and then the last one I want to show is OCSP. What is the O again? Online, that's it: the Online Certificate Status Protocol, and the technique we care about with it is called OCSP stapling.
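Before we get into OCSP properly, here's roughly what SNI looks like on the Node side: a sketch, where the example.net key and cert files are hypothetical per-host certificates you'd have lying around:

    const http2 = require('http2');
    const tls = require('tls');
    const fs = require('fs');

    // Hypothetical second host on the same IP, with its own certificate
    const contexts = {
      'example.net': tls.createSecureContext({
        key: fs.readFileSync('example.net-key.pem'),
        cert: fs.readFileSync('example.net-cert.pem'),
      }),
    };

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),   // default certificate
      cert: fs.readFileSync('cert.pem'),
      SNICallback: (servername, callback) => {
        // Runs during the TLS handshake, before any :authority header exists
        callback(null, contexts[servername]); // undefined falls back to the default
      },
    });

    server.listen(8443);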
Essentially, the problem is this: you get a certificate for your domain from a certificate authority, and then you go to your web host and give them the certificate to serve your content on your domain. Maybe at some point you saved it in an insecure location on a computer that got hacked, or you leaked it somehow, and now you need to revoke that certificate. Sure, you can take it down, get a new certificate, and upload it to your web server, but everyone else might still have that old certificate, right? You need to be able to tell them: hey, this is no longer valid, stop trusting it please. So the certificate itself includes a URL where the client can verify with the certificate authority whether it's still valid or on a revocation list. Which means that whenever you make a connection to a web server, your browser actually does an extra check, and again, that's a round trip that's delaying your load time. So there's a technique called OCSP stapling. Stapling means the web server itself periodically goes and fetches the latest validity result and staples that response onto the certificate as it's served to the client. Rather than every single browser making those calls (if you have a huge website, every browser hitting that certificate authority means the CA needs to stand up a huge amount of infrastructure just to handle revocation checks), the web server checks periodically, sticks the result onto the certificate, and hands it to the client, and the client goes: okay, this is still recent enough for me to consider the certificate non-revoked. It also eliminates that round trip for the client: it doesn't need to make another connection to another host and do another DNS lookup, which can be very expensive. So it's faster, and it's less load on the CA's servers, so it's cheaper. It's just generally better to do this. But by default, this is not handled in Node TLS servers at all, so you need a little bit of code to make it work. I'll show you how I do that in one of my projects; if you ever need a reference, just look at this as an example. By the way, I'm using this module called ocsp, just require('ocsp'), and then I build a cache. This is basically a snippet from that; let me make it a little bigger. The OCSP cache is what I'm using to store the responses I've already looked up. Whenever I get a connection, my server fires this event. This is a standard Node TLS event; it's not new in HTTP/2. If you're using HTTPS right now and you're not handling it, you're wasting all these round trips and your users are making all these extra calls. So if you just want to improve the performance of a standard HTTPS server, you should still do this today. Basically this caches the responses as they come back: your server makes the lookup request if it doesn't have a currently valid stapled response, and otherwise it pulls it out of the cache and serves it directly. So it's really fast.
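For reference, that snippet boils down to something like this; a sketch following the pattern in the ocsp module's own docs, with hypothetical key and cert paths:

    const http2 = require('http2');
    const fs = require('fs');
    const ocsp = require('ocsp');

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem'),
    });

    // Cache of stapled OCSP responses, keyed by request id
    const cache = new ocsp.Cache();

    server.on('OCSPRequest', (cert, issuer, callback) => {
      ocsp.getOCSPURI(cert, (err, uri) => {
        if (err) return callback(err);
        if (uri === null) return callback(); // certificate has no OCSP endpoint

        const req = ocsp.request.generate(cert, issuer);
        cache.probe(req.id, (probeErr, cached) => {
          if (probeErr) return callback(probeErr);
          if (cached) return callback(null, cached.response); // serve from cache

          // Not cached (or expired): fetch from the CA, cache, and staple it
          cache.request(req.id, { url: uri, ocsp: req.data }, callback);
        });
      });
    });

    server.listen(8443);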
That's the big concept; stapling makes sense now, at least a little bit? Okay. All right. So the question now is: how do we do this localhost certificate? I've had this problem for a long time. It started with going on Stack Overflow every time and trying to find the right copy-paste OpenSSL command-line stuff. It got very tedious, so I started automating it, and just last night, to make this easier here, I published this tls-keygen tool on npm. You should be able to just run it on your system, and it'll generate a key.pem and a cert.pem. And if you're on Mac or Linux (not Windows, sorry, I didn't have a machine to test on yet) it will try to set things up so that your computer trusts the certificate. That's very convenient for local development: your browser won't show you those security content warnings and get really annoying when you try to do anything. So you can use this tool to generate a certificate, have it trusted, and then use those certificate files with any Node.js TLS software you're working on. A couple of things it does, just to give you some insight into TLS itself. The underlying tool is OpenSSL; most of you will have that installed already, and if not, brew install or yum or apt-get it. There's this thing called SAN: Server Alternate Names, I think? Subject, thank you. So, Subject Alternative Names. Basically, a certificate can have a single common name, the CN that we saw, which describes the domain name for the certificate. But over time people wanted a bunch more names: wildcard names, IP addresses, all kinds of things. So this extension was added to TLS certificates, where you can describe a bunch more domain names and have them all signed on one certificate. It's very convenient when you want your example.com and your www.example.com on one certificate, right? Instead of having two, so when you reconnect you can reuse it. So this tool sets it up so that you have localhost, wildcard *.localhost, your IPv6 address, 0.0.0.0, all your local stuff, and the same certificate works for any of them. When you go into the browser, you don't have to remember: okay, was it 127.0.0.1, or was it localhost? And if you want to experiment with different domain names, you can just use something.localhost, set it up in your /etc/hosts file to map to your local machine, and it'll all resolve with a valid certificate. It also uses ECC, elliptic-curve cryptography. Now, I'm not a cryptographer, but usually when you see smaller numbers it means less secure. Typical TLS certificates use RSA 2048 or 4096 or whatever: big numbers. The reason those numbers are big is that the keys are actually that size, that many bits, and in this case you can achieve similar levels of security with only 256 bits using this elliptic-curve cryptography stuff. So that's really nice. And it's actually recommended by a US government body, NIST, the standards and technology institute. They set the baselines for what level of security is needed for government operations, and that's generally accepted in the industry as what people use on the internet.
And so right now, if you're using RSA 2048, you can just use ECC P-256 or P-384. P-384 is slightly larger in terms of key size, but I think they're rated pretty much the same in terms of security strength. And P-256 is better supported: there have been issues in the past where P-384 was temporarily not supported, I think in Node versions 9.0 to 9.2 or 9.3; it came back since. So it's a little bit of a pain. If you feel like, okay, this is a bigger number, I want to use that: go for it. But P-256 is totally sufficient. The real risk is not even that we have to go bigger. There's a whole different curve coming out now, because there's some weird speculation going on that these curves might be compromised and the NSA will break them or whatever. I don't think they're going to come after you, but if your website is super important, then you might have to worry about that. So anyway, these short codes are called curves, and there's a new curve coming out, X25519 or something like that, but that's all still going through the IETF process and review, and it takes a long time to get adopted. You could use it, but it's probably not going to work in most browsers. So I would say: stick with P-256, or RSA 2048, if you want to be compatible and well supported today. This tool basically does that for you. So try it out right now; this is going to help you with general development. Run npx tls-keygen and see what happens. It's quite untested, so I want to hear what goes wrong. I have a question. Sure. Syntax error. Syntax error? Wow. And subdomains on your localhost: in your /etc/hosts you could just say foobar.localhost, and the certificate is a wildcard, so it'll validate that, in theory. Script error? What is it? Syntax error, I can't read the title. I also have a syntax error. Oh, no. Okay, wow. Where's the command? npx tls-keygen. Hey, can you do node -v please? Hmm. Seems like there's a syntax error; can anyone confirm that? Yes. Okay, okay. Sorry, let me take a look. What's the line? Does it say which file and line? Line 1? Does it say the file name? That's the one with the require. What am I not seeing? You think there's a typo here? It's just destructuring. 9.4.0. Okay. So you could take the snippet and run it directly, actually; why is it wrong? Anyone see the issue? Oh, yeah, I was just doing that to escape the brackets, the other quotes. Runs on my machine; let me see if it runs on this guy. tls-keygen? Oh, that's a weird one. More chairs at the back, apparently. Unexpected token? I think they're right. Okay, can anyone try doing an npm install as a dev dependency? Let's see, let's see. Okay, let me try: I'm installing it in my local directory now, the workspace that we had, and then tls-keygen. Yeah, that's funny. Hmm, okay. So, alright, sorry about this weirdness. I'm not entirely sure what's going on, but I have a hint. Anyway, run it this way: npm install -D tls-keygen. That'll set it up as a local dev dependency. And then you do node node_modules/tls-keygen/cli.js.
Hmm, it's funny: from the bin it fails, but it's exactly the same file. It's like... you put a shebang at the top of the file, but the file has CR line endings, like CRLF? Can you show me? Ah, so the system didn't detect it as a script. Thank you, thank you, thank you. Yay, okay. So you can just run npx tls-keygen now; it should be fixed. Okay, so: has anyone managed to generate a key and a certificate, and gone through the sudo thing and all that? Yeah, it worked? Okay. So you can verify whether or not that actually did anything, just so you know I'm not stealing everyone's passwords here. Open up this Keychain Access tool, on Mac at least; on Linux I'm not too sure. If you search for localhost, you should see some certificates here. Passwords, what am I talking about: certificates. I use this technique a lot, so I've got a bunch of them; you should have at least one here. Has anyone seen that? Okay. So if you ever need to do this manually, this is what you do: you just drag the certificate file onto this Keychain Access thing, you double-click the PEM file, and under the SSL trust settings you set Always Trust. From that moment on, most software that integrates well with macOS, like the Chrome browser or Safari, will just trust that certificate for connections to domains that are on the certificate. So in this case that would be localhost, plus the wildcards, plus my local IP address, etc. So this is now a certificate I can use with software without facing all those browser issues. The tls-keygen tool just automates as much of that as possible. Sorry, just one bit? Yeah, I know, that's why I ended up doing this in a module. Common name? The common name would just be localhost, and the alternative names are just concatenated with semicolons. You can't generate them? Is the openssl command on your system? Yeah? Using which shell? Can you try it in the regular standard Terminal, without iTerm2 or whatever that is? Because I think when Node runs it, it's just running it in plain bash, right? I'm not too sure, but try it. He's also got a funky terminal like that; mine's very vanilla. Wait, what terminal is that? You can try yours. What Node version is this one? Ah, you don't have nvm anymore. Okay, so it runs here. So this part is fine, right? But now you've got to pass the key and cert: key.pem and cert.pem. Do we have anything there? Oh. No, no, that's normal, actually; that's a help message. Wait, was that the whole thing the whole time? Just type key.pem... That was really my bad, bad UX right there. You're welcome. One second, I've got to save him. Okay, can you show me the command that you're running? On the CLI, right? When you run cli.js? Because they switched over to the regular terminal, but then they also, I think, forgot... no, that seems to work; they just forgot to pass key.pem and cert.pem. I haven't tried running cli.js directly; could be something there as well.
Yeah, try an npm install on that one; that's the normal thing. So the file is not there? I don't know if it just wasn't generated or if it's in the wrong location. It could be generated somewhere else; let's see if there's anyone else with this one. Okay, sorry, this is my really bad UX again: you basically have to supply this part yourself. Ah, okay. With the spaces, I guess. Hey, what's this? Okay, that's his problem also; he's got the same issue. Can you try running it in a regular terminal? And make sure you have Node 9.4. Alright, he's working on debugging it too. So now we've got two cases of this. It says it can't find the key file. The cert file? Yeah, the cert file. Can you try making it an absolute path? And then the other one. Is it showing the full path there now? Mm-hmm. Okay, so what is that? Can you open up the code, and go to setupTls? What I want to see is what's actually happening in this whole thing, because he's trying to deconstruct it now. It's got nested shells, nested within nested, kind of weird stuff. Okay, I'll try. I'm wondering if it's because I'm setting the shell to bash. I don't know what you're using: is it fish or something? Do you want to change it? Because you guys all have interesting shells. Okay, maybe not. Yeah, I don't know. This is tricky, man, this is very tricky. Updated? Updated, what do you mean? Okay. Maybe, hang on, maybe: are you on High Sierra? No? He's got Yosemite, which is around the same era, one version off. I've only tested this with High Sierra, so there could be an issue there. Maybe the OpenSSL syntax changed. Can you update OpenSSL somehow? Because I don't want to ask you to update the OS, you know? Just as an experiment, try with a later OpenSSL version. Open setupTls right now. You have setupTls in your code, right? Sorry, the JavaScript file for setupTls, the tls-keygen thing. Can you switch this one off, the new ECC key, and switch RSA on? Because maybe it's just not supporting ECC. Yes, let's see. First thing, I get an SSL version or cipher mismatch. Is there something I have to do in this step? Because I'm not sure what to actually do here. Okay, show me the website again, in the browser? Try changing it to localhost, port 8443. "Site can't provide a secure connection", unsupported protocol. Okay, can I see how you're running that one? Is it listening? Not exactly, so I didn't call... Is there anything else running right now, at least not on 8443? You sure? Do you want to netstat that? netstat -a, and... is there a grep or something we should pipe it to?
Yeah, probably better. Something's listening... no, this one is free. Okay, okay, good. So this guy is also getting "unsupported protocol". Is there something to do in this step? Because I didn't change anything. This is Windows, right? No, this is Mac. Hang on, a Hackintosh? Sweet. I mean, my hosts file is a little bit nasty. That's fine, that's fine; you shouldn't even have needed it. You work for Sephora? Cool. Hey, do you know Yvonne? I think she's a designer there. Oh, she used to be there. If you're just using localhost, you shouldn't need the hosts file at all, actually; it's only if you want to set up extra domains on localhost, kind of like what you were doing there. But this one... what is that last line, a comment? Oh, right. I can see you had a very rough night: 5:41 and you're still committing. Yeah, that's how it goes. But this is the thing with experimental stuff: there are so many edge cases on platforms that haven't been ironed out yet, which is probably the fun part. I'm not sure, though; it'll only be the keys that I've generated. Can you open up the key? Just double-click the cert file in Finder; it'll open up here. I don't think it's that one; it just doesn't scroll to the correct one. Just search for localhost. You don't have it set up at all? Try to drag it in here. Unless this key wasn't generated properly. Hmm, the OpenSSL config is also changing location? Have they changed its location? Oh, man. What is it called now? Not found. So you've got to find it somehow; just locate it, and if you locate it, let me know. You got OpenSSL from brew? Yeah, I installed it from there. Then brew generates that config. I thought it was a system thing; it seems like every system I tried had it, Linux and Mac. It might be a recent change. You get it from /usr/local: that's Homebrew, right? Yeah, I have a Homebrew one. Okay, I'll have to put a fallback in for that. Can you maybe make an issue on the repo? Sure. The thing is, I actually have two OpenSSLs. This one is the Mac one, I think; it's like an old one. And this one is the brew one. Is it not just a symlink? I don't know. Can you cat them, see if there's any obvious difference? It's a big file. Okay, it's a big file. A TSA config section, what the heck? No, it's a completely separate file. What's the difference in file size? 11 KB. Okay, kind of rough there, but okay, this is cool. How did you find it? locate openssl.cnf. Okay, openssl.cnf. And does it work now, when you change the path? Yeah, this should be good then. Who had the issue with Mavericks or whatever, what was it? Yosemite? Yeah, and he's got El Capitan, but it seems like he's got the solution there. So the command itself pulls in a default configuration; it pulls in this file...
And this location seems to have changed from the previous one. So you open a terminal and just locate openssl.cnf. Okay, I don't know how you did that. Could you help him? He's trying to find the file and he doesn't have locate. You got it working? No? What happened? Okay, okay. How about here, did you get it working? Yeah, it finally ran. Awesome, okay. You're skipping ahead to frameworks and all that? Go for it, just read on. Good, good, good. Sorry, it's just weird compatibility issues. Fastify is amazing, by the way. It's really amazing; we went through the performance workshop earlier this morning, and they were also talking about Fastify. Well, it's his project, you know? It was the first time we'd seen it; now we're using it. Awesome. I started using it mid-last year, when it first came out; I was fixing bugs all the time, and he's been reviewing my code. It's quite nice. Express... I think there are still some issues with Express. The project is kind of stale, dead-ish. They do some hacky stuff with the way they wrap the request and response objects they get from Node in HTTP/1, and it doesn't work anymore with HTTP/2. They're doing things with the internals; they're not using the official API, and those things all broke. So we've been trying to work with the maintainer, his name is Doug Wilson, to fix these things. He's aware of them, but he just doesn't have the time to commit to fixing Express, which is kind of sad, because there are millions of people using it. So personally, I was using Connect for a long time, and for my new stuff I always use Fastify. Well, it depends: Connect is more low-level, so for high-performance stuff, just stick with Connect. You can use the middlewares, but nothing too fanciful. With Fastify, though, you get all this cool stuff for building APIs; it's just the best thing ever. So I put a bunch of little examples here. Just go through them and see how I'm using schemas, how I'm doing the fallback and configuring routes and all that. It's a really cool tool. It's working on both your systems now, right? Yes. Good, good, good. How about here? Can I ask at what point you got stuck, or did you already... Good question. Sorry, it's a big classroom; it's not anyone's fault. I'm curious: were you able to run it at all? This is Windows, so I'm also not sure how things go there. I didn't understand the certificates part. Hmm. So what happened for some of the others was the same issue: they were on an older version of macOS and it didn't work, right? And if you look at the source code of the tool, it's basically just calling openssl on the command line. So if you're able to install OpenSSL, you can use that directly as well; all the tool does is abstract over it. And it's kind of useful, because if you ever need this OpenSSL command, you can come up with your own fix for it. And I would love a pull request for Windows. And how's it going here? Pretty well. Let's see.
A bit out of context: I tried the push feature. Hmm, it seems quite cool. And I tried Fastify; I got the performance boost with Fastify, usually, but my colleague didn't. A performance boost compared to what? We did a load test with 10,000 requests, 10 concurrent, and I had about three times the requests per second, whereas this guy had a decrease of about 10% or something. Compared to what, HTTP/2 versus HTTP/1? No, versus the plain TLS core server, both on HTTP/2. So Fastify versus the native Node HTTP/2 server? Yeah. But Fastify is a framework on top of the native HTTP/2, so... Wasn't it supposed to be faster? From the other workshop, Matteo's thing (I wasn't there, but I've used it a lot), you said they bent the rules in places to make it even faster. I guess they excluded some features. I'm not sure what he was referring to there, but the thing that is genuinely faster is that they have a JSON generator: one of the dependencies of Fastify is this thing that generates JSON strings faster than JSON.stringify. That's one place where it speeds up. And one of the reasons they can do that is they use the schema that you supply to pre-generate a function that, given your input, statically creates strings of the right shape. You avoid a lot of internal allocating and reallocating of strings when you concatenate stuff. Node internally has this concept of two kinds of strings, and when it switches from one to the other, it can reallocate all that memory; or when a string grows beyond a certain boundary, it has to grow and reallocate more memory. When you're generating a large JSON response, that can really slow things down. The way he set it up sort of coaxes Node into following the fast path straight away, rather than stumbling around until it finds it. So that's one place where he made it faster. He uses tricks like that all over. Oh yeah, he's all about that. But this example here is streaming, right? Yeah, this is just straight, standard streaming; it's not high-performance or anything like that, it's more to show that you can do standard streaming stuff. Actually, there's a cool API called respondWithFD, respond with a file descriptor. If you go to the API documentation: this is what I've been using for serving static content, and even the fastify-static middleware uses it as well. This is an API that's only in HTTP/2 in Node. It lets the nghttp2 library itself, at the C level, deal with the file handling: Node opens a file, gets a file descriptor, and passes it to nghttp2, and nghttp2 does the rest. That avoids a lot of buffer copying in memory. Okay, so you just pipe a file? It's not even piping. By piping, we'd mean it goes through Node and Node has to copy buffers around. This is just: C library, do the thing, here's a file descriptor. It's essentially the OS doing it at that point. So if you're going for really high throughput, that's what you can do to make it a little bit faster. But we had a discussion about this yesterday.
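A sketch of that API, with a hypothetical static file:

    const http2 = require('http2');
    const fs = require('fs');

    const server = http2.createServer();
    server.on('stream', (stream) => {
      // Hand nghttp2 a file descriptor and let it do the reads at the C level,
      // instead of piping buffers through JavaScript
      const fd = fs.openSync('./index.html', 'r'); // hypothetical file
      const stat = fs.fstatSync(fd);
      stream.respondWithFD(fd, {
        'content-length': stat.size,
        'content-type': 'text/html',
      });
      stream.on('close', () => fs.closeSync(fd)); // respondWithFD doesn't close it for you
    });
    server.listen(8080);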
In my company we have a bunch of microservices, Dockerized, and we have a Varnish in front. Uh-huh, okay. And then we have an Express server at the back there. Would it make any sense if Varnish communicated over HTTP/2? As far as I understand, Varnish doesn't actually support HTTP/2, right? Yeah, but if it would. With the client, you mean, to the browser? That's actually what most CDNs do. But then you lose the ability to do push streams; you lose the HTTP/2 features, because the proxy only talks HTTP/1 upstream. Then again, you could just upgrade your entire stack to HTTP/2. Yeah, but if you upgrade only the front, and you have Varnish or Nginx or Apache or whatever proxying, you lose the HTTP/2 side from the backend. Yeah, but couldn't you cache all the requests in the Varnish there? Yes, and what they use for that is the Link preload header. Most of the CDNs do the Link preload thing: you can do that for a couple of requests, but you can't push a whole bunch of push promises that way. I've actually been playing around with that a lot. Coming from a front-end developer background, I wanted to just do bundling with server push, and none of these Link preload solutions would accommodate that, because they don't compress that header properly, and there are limits on how many you can do. I've pushed like a thousand little JavaScript and CSS files, which, you know, is a standard front-end application these days. People just don't see it, because it's all in one big Webpack bundle; but if you deconstruct it into its original parts, that would be about a thousand push promises. And for HTTP/2 that's not a problem; the protocol is totally capable of it. It's just that most of the servers kind of choke. And then you don't really need bundling anymore, and caching actually becomes different, because you can cache each of these individual components rather than one huge asset. Actually, maybe I should talk about that, because I have a section on it. Okay, so: most people have managed to get it working, I hope. Sorry about all the compatibility issues with various shells and operating systems and versions of macOS and all that. But it was a great exercise, I think. A lot of people have probably also skipped ahead to checking out some of these frameworks. Fastify: many of you were at Matteo's workshop, so you probably know more about it than I do now. Great framework for working with HTTP/2, because it supports it out of the box now, and you can use it with your Express middleware. It's fast and all that; there's a whole bunch of marketing hype. Okay: don't use a framework because it says it's fast. Use it because it works and does its job for you, whatever that is. If your job relies on serving JSON output 10% faster, then just get a bigger server or something. My personal preference for Fastify is that it's a very nice API. It does the async/await stuff; it makes my code more enjoyable to write and to read. And that's my primary reason.
If I wanted to actually serve way more requests, I would look at very different strategies than just changing frameworks and hoping for a silver bullet. But that's just me. I love Fastify. It's by far my favorite choice, and I would use it over Express any day, because it's just more up to date. Comparing benchmarks and all that kind of stuff is just not that high a priority for me personally. Not to say it isn't actually the fastest thing, too. So yeah, I put a little example in there that shows some of the ways you use it, though you've probably already gone through that this morning. The only thing that's specific to HTTP/2 is that you basically just set http2: true and you're good to go, and in the https options, where you provide your crypto stuff, you also set allowHTTP1 and it does the ALPN negotiation automatically. You've seen all this schema stuff. Basically, for those who were not at that workshop: Fastify has a couple of cool features beyond what Express is typically capable of. For instance, middlewares and request handlers can be asynchronous functions out of the box; if you've worked with Koa, or I think hapi, they support these things as well. Normally you'd use a response object to send output; with Fastify it's called a reply object, and it has a slightly different API, but it's pretty simple: you just go reply.send() with some stuff. By default, it's really good at doing JSON; it's all set up for JSON. If you do a POST to a Fastify server without configuration, you should just give it a content type of application/json, because it automatically expects JSON. It's just really convenient for modern applications; you don't have to configure all these things per route anymore. On the output side, you can set things like a response schema, say for a 200 response: I'm expecting it to be of type object. It could be type array, type string, whatever; as long as it's valid JSON, you describe it with JSON Schema, the format at json-schema.org that a lot of different projects use. I use it for other things as well, but Fastify uses it to describe what is allowed or not allowed in, for instance, this 200 response. It can also use it for filtering out valid versus invalid query-string parameters or request payloads, all these kinds of things. It's a nice-to-have; it's optional; you don't have to set it all up. In this case, I'm using it for the convenience of making sure that only the node and v8 properties are output, because my request handler just returns process.versions, and process.versions actually contains a whole bunch more than just node and v8. With the schema, I'm guaranteed the output will only contain node and v8. It has a nice side effect: if you provide a schema, it actually makes Fastify faster, because it can compile a function that serializes your object into a string in one shot. And it's great in cases where, let's say, you're using MongoDB: you make a query to your database, you get back a row, and you just return it.
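Putting the HTTP/2 flag and the schema stuff together, the exercise looks roughly like this; a sketch, where key.pem and cert.pem are whatever you generated with tls-keygen:

    const fs = require('fs');
    const fastify = require('fastify')({
      http2: true,
      https: {
        allowHTTP1: true, // ALPN fallback for HTTP/1.1 clients
        key: fs.readFileSync('key.pem'),
        cert: fs.readFileSync('cert.pem'),
      },
    });

    fastify.get('/versions', {
      schema: {
        response: {
          200: {
            type: 'object',
            properties: {
              node: { type: 'string' },
              v8: { type: 'string' },
            },
          },
        },
      },
    }, async () => process.versions); // schema filters output down to node and v8

    fastify.listen(8443, (err) => {
      if (err) throw err;
    });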
In the MongoDB case, you actually just return the promise for the output, so you can do the whole thing with a lot less code: you have an async function that returns your Mongo query, and if you set the schema, it filters out things like that underscore-id property, those pesky little things you'd otherwise have to deal with. All that pointless code goes away: you just specify what you need done and Fastify takes care of it. So there are a lot of these nice things I really like about Fastify; that's why I use it. Take a look at Fastify, and if it's your thing, go for it. Things like error handling are also taken care of. You can set up schemas for your errors and so on, and you don't even have to explicitly send an error: you can just throw one. That's the nice way to deal with async functions: if you have a promise that may throw, who cares? It'll show a proper error message to the user, and you don't have to write try/catches everywhere. Really convenient, modern code in Node with Fastify. Okay, load testing: we kind of covered enough of that. Thanks for killing my server again. That's okay, you know, it makes it stronger by the time I fix it. Coalescing connections: I talked a lot about that, and I really did want to do an example. This is something that came out just last week in Node 9.4, and it's been on my to-do list for a long time to get it working; it would be amazing. I tried playing with it for a few days and I just couldn't get the API working correctly. It may be a browser issue, in that no browser actually supports this right now, so, yeah, sorry, I couldn't get the example working. But the concept is there, and this will get implemented going forward by all the browsers and everyone else, so a lot of tools can start using it and CDNs will start offering these features. There's more going on here than what's currently supported in Node, which is the alternative services side; and there's this ORIGIN frame concept. The server, when a client connects to request example.com, can say: I also serve foo.com and bar.com, and send those as ORIGIN frames to the client. The client then knows: okay, if a DNS lookup for those domains matches, and the certificate used for this connection covers them, then I can reuse this HTTP/2 session to open streams to those other origins on the same session. That's the concept. But this is very new and experimental stuff; things are still evolving, and people are trying to make it better. For instance, there's been a lot of discussion in the last few months about secondary certificates. This is a proposal, I think by people from Akamai or Microsoft or somewhere. Secondary certificates would mean: you connect with TLS to a host, you get a certificate back, and then the server can send additional certificates. So if you combine that with origins, where another certificate comes in for an extra domain the server has declared it serves, then the server can serve that domain with a certificate presented later on: not just at the initial handshake, but further down, when the server knows you're about to make a request to a different origin. Additionally, in terms of privacy, this could be quite important.
With TLS, certain things are protected and certain things are not, right? For instance, if you connect to myspyserver.whatever, people who are snooping on that connection, your ISP or a government agency or whoever, can see which host you're connecting to. They can't see what's in your request: the path name of the request is already encrypted. If you're able to use secondary certificates, you could connect to some generic service and then follow that up with a request to your secret spy server. That would improve your privacy, because by the time you send that second request, everything is already fully encrypted, those additional certificates included, and the snoopers would be none the wiser. Anyone spying on and logging that connection could never see what's actually happening beyond that first handshake. So if that first connection is to some VPN-style privacy service, that's all they would see: that you're using such a service, not what you're actually requesting. Right now they can see it, which is why we'd have to use actual VPNs for that today. This is essentially the benefit of a VPN, but in standard TLS, on every web server and every CDN. So those are some of the concepts this is moving towards. And since I haven't been able to get coalescing fully working yet, let me rather focus the time we have on server push. We've seen some of the frames being sent in the low-level stuff, when we looked at the protocol packets: data frames, headers frames, settings frames. There's another frame called a push promise. It's very similar to a headers frame; in fact, it contains almost exactly the same information, with a slight tweak: it also declares the ID of a stream that the sender intends to use in the future, hence the "promise". It basically reserves a stream by ID for future use. The server sends it saying: okay, stream number 27, I'm going to use it to send the response to these headers. And that's interesting to think about: "the response to these headers", where the headers represent a request. So the server is sending the client a request. That's weird, right? Normally the client sends a request to the server, and the server responds with a response. And each of these can have a body: a request can have a body if you do a POST or a PUT or a PATCH or something. But in this case, it's purely the server sending a request to the client, followed by a response to the client. The reason this request goes out from the server is that it has to tell the client enough about the request so that the client, when the time comes to actually make such a request, can determine whether it matches the promised stream. Then it can skip sending the actual request and just wait for the response to come back. Okay, an example might clarify that a little. If a browser supports gzip but not brotli, and the server sends a promise for a brotli-encoded asset, the client can say: okay, this response is going to come to me, but I can't use it, because I don't understand this content encoding. So I'm just going to go and request it anyway.
So even though it might be the same URL for the same asset, with a different encoding the client could consider it unsupported: it'll just reset that stream, cancel it, and make the request anyway. That's why you might want to set things in that initial request inside the push promise that carry particular meanings for the client. Right now, though, none of the browsers really care about the headers you send there. There's the Vary header, for instance, where you can tell caches and browsers whether a response from the same path is actually different or not, and in which ways it differs depending on which request headers. Right now that's completely ignored. When you do a push, the browser just looks at the path name: if it's the path of a request it's going to make, it'll wait for it; if it's different, it'll make its own request anyway. So it's a very simplistic implementation for now, but that will change. There have already been declarations of intent, by Chrome I think, to respect the Vary header in push promises. I'm not sure if it's currently in Canary or anything like that, but just be aware that it could become a thing. So most of the time your push promise is very simple: it's just the path name, and maybe a MIME type or something if you want, but it's a very bare-minimum kind of request that you're sending in a push promise. I'm using this personally for building web apps: bundling JavaScript and CSS and fonts and so on, so I don't have to use client-side bundling tools, all these build tools like Webpack. I'd rather do that at the protocol level. One reason is that it just makes my life easier: I don't have to come up with a Webpack configuration for every little project I do. It also has benefits because you can start even earlier than a bundle can. If you have a bundle, there's still a wasted round trip going on, because you first have to load the HTML before the browser can even request the bundle. With push you eliminate that: as soon as you send your HTML, you can already start sending all your JavaScript and everything. That means your TCP connection ramps up faster, and you don't have to deal with the specifics of "first the JavaScript, then the CSS". You can have JavaScript import other JavaScript, or CSS files import other CSS, and not wait a round trip at each stage, because it's all already being pushed. So I'll show an example of how I'm doing that. This is a quick proof-of-concept exercise; I'll go through it real quick so we can get to something more interesting. We've got createSecureServer, which we understand, and then we look at the HTTP version, for the reasons we covered. What we have to look at is whether or not the client allows push. Clients like curl or nghttp send a setting to the server, a client setting, that says whether push is allowed. If the server were to send a push promise when it isn't allowed, that would be considered a connection-level protocol error: it kills the connection. You don't want that to happen; that's bad behavior. And there could be legitimate reasons why a browser might switch off push.
I don't know of any browser that does that, other than the command-line testing tools, but you still have to respect it to be compliant with the protocol. So you always check. Because we're using the compatibility layer, it's a whole roundabout way of getting at the remote settings and figuring out whether enablePush is set to true. That enablePush: true is ultimately the flag you're looking for. And when it is set, you can use the API exposed on the stream instance to send the request headers; this is the server sending that fake request to the client. In this case all I'm sending is a simple path saying, this is my dependency. It hands back a push stream, which is effectively where the response to the fake request goes. In terms of what you see on the wire, initiating that push stream with those headers is the push promise frame going out. And until you do something with the push stream object, nothing further happens, so you can use this to cleverly sort and rank how you actually emit your data.

Typically, and I think this is good practice: you receive a request, you send all of the push promises, then you respond to the original request with its headers and data frames, and then you start fulfilling all of those promises with their response headers and data. The reasoning is that the earlier you send the response to the original request, which is typically your HTML, the sooner the browser can start working with it, building the DOM and scaffolding everything out. But you need to send the push promises first, because otherwise the browser may still make those requests; it isn't aware yet that these push promises exist, that you intend to send those resources, so it might still send requests out. So: push promises first, then the response to your main request, then all the responses to the push promises. That's the typical order I've been playing with, and it works very well.

Now in this case, what we're going to see... okay, push web app dependencies. So if we run that... Oh, wrong directory. Where are we at? Six? Yeah. Of course. Really? There's nothing here? Okay. So I've spun up the exercise as displayed. What we see here is two requests: one for / and one for the favicon. However... oh, I might not have had this open. Okay, hang on, reset. We see two requests in the browser, but we see four requests in here. So what happened? We see four requests from the browser and... okay, cool. They're served from push. We're showing protocol h2 here, and in this initiator column you can tell whether or not a request is being served from the push cache. So effectively only this request, the root HTML, actually went out, plus our favicon. The round trips were avoided completely when serving this demo page. And we can prove that by showing that the console indeed logged Hello World, which came from a dependency of a script file that was being included. Normally there would be two round trips, and in this case zero round trips were wasted. So that's the proof of concept of how you can use push to bundle an entire JavaScript application with zero configuration or build tools. Of course, I don't want to have to hand-code each of my JavaScript assets into this, so there are tools that can be developed around it. I've been working on some.
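Before moving on to tooling, here's a minimal sketch of the push flow we just walked through, using the Node.js core http2 API directly (with the core API's pushAllowed shortcut rather than the compatibility-layer route mentioned above). It's an illustration, not the exercise code; the file names and paths are made up:

```js
// Minimal HTTP/2 server push sketch. Assumes localhost-key.pem, localhost-cert.pem,
// index.html and dependency.js exist next to this file.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-key.pem'),
  cert: fs.readFileSync('localhost-cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] !== '/') {
    stream.respond({ ':status': 404 });
    stream.end();
    return;
  }

  // 1. Send the push promise first, so the client knows not to request it itself.
  if (stream.pushAllowed) {
    stream.pushStream({ ':path': '/dependency.js' }, (err, pushStream) => {
      if (err) return;
      // 3. Fulfil the promise after the main response has been queued.
      pushStream.respond({ ':status': 200, 'content-type': 'application/javascript' });
      pushStream.end(fs.readFileSync('dependency.js'));
    });
  }

  // 2. Respond to the original request (the HTML) as early as possible.
  stream.respond({ ':status': 200, 'content-type': 'text/html' });
  stream.end(fs.readFileSync('index.html'));
});

server.listen(8443);
```

That roughly matches the ordering described above: the push promise is initiated first, the HTML response goes out next, and the promised response is fulfilled in the callback.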
There are a lot of people experimenting with that. Right now, most of what you see in terms of public support for push is the link rel=preload style, where you put something like a Link: </style.css>; rel=preload header on your response and the CDN edge turns that into a push. There are a couple of reasons for that. One is that most CDNs today are built on infrastructure that just doesn't support HTTP/2 on the origin side. There are two sides to a CDN: one faces the clients, the browsers, so you connect with HTTP/2 and get served the response over HTTP/2. But the CDN itself, that edge server, has to make a proxy request to the actual origin server, and almost none of them, if any, do that over HTTP/2; they all seem to do it over HTTP/1. So it's hard for those servers to implement true HTTP/2 all the way through on the technology stack they're on. Secondly, from the perspective of a CDN, they're not trying to solve a front-end developer problem; they're trying to solve a network performance problem, and for that their solution is sufficient. As a front-end developer, I see server push as this amazing way to make my website faster automatically, without configuration and all that. What they're trying to fill is that think time, that server-side wait, that little round trip. As long as they can push a single asset or a couple of assets, they're fine: your initial request comes in, the edge cache has a couple of files listed in that link preload that it can immediately push, and that think time is already being filled, so the network performance issue is, for them, considered done. But you'd never be able to push a hundred JavaScript files and a hundred CSS files that way, because your header would be crazy long, and most of these servers have a limit on header size; Cloudflare, I think, caps it at something like 24 or 25 link preloads. They have pretty low limits on these things. So for bundling a whole bunch of assets it's not ideal. That's one of the problems I've faced, and why I prefer to go with something like Node, where you have full HTTP/2 in any direction you want; you can easily proxy full HTTP/2 from the client to your proxy to the upstream.

The second issue, and this is our hello world example here: the second time I load this page, I make the same two requests and... where did it go? The same thing gets pushed. But I already have this in my cache from the first time I loaded it, so the push is wasted. HTTP/2 has a mechanism where the client can reset a pushed stream to reject it. But if you're pushing a whole bunch of files, just the volume of push promises itself could be kilobytes, and each of the pushed responses could be tens of kilobytes, so you're potentially wasting a lot of bandwidth, and cancelling them itself takes round trips. So it's less than ideal to just rely on the client to cancel pushes. Also, if you're two seconds away, the server might already have sent out a whole bunch of data in the meantime that your client will eventually just reject or find useless, and that's, again, waste. The solution is for the client to tell the server what it already has in its cache. The proposal for that is called cache digest; if you go to the HTTP working group, you'll find the cache digest proposal. It's based on the concept of a Bloom filter.
I'm not from a computer science kind of background, but I implemented it, and the way I understand it is this: you look at everything that's in your cache for the domain you're connecting to and take a hash of each entry, so you get this nice, normalized, noise-like distribution. Then you take the first couple of bits of each hash, stick them all together, and send that to the server. So the server has a list of abbreviated hashes of everything in the client's cache, which takes a lot less bandwidth to transmit than sending all of the URLs. The server then says, okay, I'm about to push this thing, but let me look it up in the cache digest: it hashes the URL of the thing it's trying to serve, abbreviates it the same way, and checks whether it matches any segment in that Bloom filter. If it does, the client probably already has it, so the server can skip the push. That's roughly how it works, to my understanding. It's a minimal processing cost on the server side and the client side, but it saves the server a lot of wasted round trips. And network performance is way more expensive than CPU performance: most CPUs on web servers are practically idle while serving 10-gigabit links, because copying buffers around is easy. So it's kind of nice to finally have something to throw CPUs at.

But this hasn't been implemented yet in any browser on the market; it's still experimental. Right now the proposals are shifting from Bloom filters to Golomb-coded sets to cuckoo filters, and each of them has some pros and cons. From my conversations with browser developers, there are some difficulties there, and some people are still saying, why don't we just stick with link preload, because it maybe solves enough of the problem. But to really bundle large front-end applications with server push, you need something like a cache digest.

So one of the things I did was an experiment with a service worker that sits in the browser. Your web app has to include the service worker; it hooks into any fetch calls the browser makes and uses the Cache API that the service worker has access to, to compute a cache digest. Rather than putting it into an HTTP/2 frame, it just sticks it into a header or a cookie and passes it to the server, and your server processes that. So this requires a bit more setup and a bit more infrastructure, I guess, but it doesn't require browser-native support, and it works great. Another way of doing this is on the server: your server calculates the digest based on what it has previously sent. This is being used now by this thing that I'm not supposed to really mention publicly, but it's open and public, so whatever; there's a project under a certain internet search company that is supposed to give you automatic push support for Fastify, and maybe in general. It uses the server to generate the cache digest. The downside is that the server doesn't know when something drops out of the browser's cache, and that's a big downside. You get false negatives, or false positives depending on how you look at it: basically it could skip pushing things that are no longer actually in the browser's cache, and then the client has to make those requests anyway. So there are downsides to all these approaches.
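To make the hashing idea a bit more concrete, here's a rough sketch of the service-worker flavour of this, along the lines described above. It is not the actual cache-digest wire format; the cache name, header name, and digest encoding are all made up for illustration:

```js
// Illustration only: hash every cached URL, keep the first few bits of each hash,
// and send the concatenated result to the server so it can skip redundant pushes.
const DIGEST_BITS = 16; // how many bits of each hash to keep (illustrative)

async function computeCacheDigest() {
  const cache = await caches.open('my-app-cache'); // hypothetical cache name
  const requests = await cache.keys();
  const fragments = [];
  for (const request of requests) {
    const data = new TextEncoder().encode(request.url);
    const hash = await crypto.subtle.digest('SHA-256', data);
    // Keep only the first DIGEST_BITS bits (here: the first two bytes) of each hash.
    const view = new DataView(hash);
    fragments.push(view.getUint16(0).toString(16).padStart(4, '0'));
  }
  return fragments.join('');
}

self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    // Simplified: recomputes the digest on every request and only handles GETs;
    // a real implementation would cache the digest and update it lazily.
    const digest = await computeCacheDigest();
    const headers = new Headers(event.request.headers);
    headers.set('x-cache-digest', digest); // hypothetical header name
    return fetch(event.request.url, { headers });
  })());
});
```

On the server side you would parse that header, hash the URL you are about to push in the same way, and skip the push if the abbreviated hash is present.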
Until it really gets into the browser, it's always going to be a bit scrappy and, you know, hacky. But it already solves a huge amount of the problems you'd have with bundling hundreds or thousands of assets. So cache digest is a really cool solution, even if it's still experimental. If you want to set it up, look at the service worker; it's on my npm, and I've written blog posts and articles about it. It's really good.

So we've got the two strategies: you can either create a manifest for what you want to send, or track it automatically. Personally, I've mostly been going with the manifest approach. The tracking model was introduced by Jetty, the Java application server, many years ago back in the SPDY days, before HTTP/2 was standardized. The idea with automatic tracking is that it builds a statistical model: when you get a request, you send a couple of responses, and you note, okay, these are the responses typically associated with this request. After a couple of users you get a pattern, and from then on you use that probability to send out those push streams every time. You can combine that with the cache digest and get a really good approximation of close-to-perfect automatic push support. The manifest approach is that a build tool, or a developer manually, configures it: if I'm serving this HTML file, I need this dependency; if I'm serving this CSS file, it includes this font or this image. A build tool could trace the dependency tree and generate a nice manifest that your server or your CDN edge then uses to push the correct assets. I've built a tool that uses manifests as a concept; I haven't built the dependency tracer, but I've found the manifest approach to be quite effective.

In the good old days, when we could still load Copey.js, it would actually push the various assets. I'll show you how that was set up. I don't know where the tab is here. A very simple configuration that we did for Copey.js was to say: the manifest includes a single rule for the glob, which matches what was being requested. If the request matches any HTML file, then you push the favicon and any asset that matches these image, CSS, and JavaScript patterns. It's a very simple rule that pretty much pushes all the assets that matter to the HTML page, and it's a very simple page too. You might have slightly more complex pages. There's a concept of code splitting in some modern build tools; this is effectively the same thing, but with server push. You could say, I have an entry point at index.html where I need to push these things, and an entry point at app.html where I push those things. In this case, the website has a landing page, a very simple fast-loading page, and a dashboard, which is a single-page app, and with two simple rules I can push all the correct assets for each automatically. The way this manifest works is not very well documented right now, sorry to say, but I'm working on that. This is one way of making server push very usable, rather than having to hard-code all the pushStream calls, and it's really convenient. I'm going to be exposing this as middleware that you can use with Fastify or anything else as well.
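To give a feel for what that kind of manifest can look like, here's a hypothetical shape; the key names and glob syntax are illustrative, not the documented format of my tool:

```js
// Hypothetical push manifest along the lines described above.
module.exports = [
  {
    // Landing page: push just what the simple static page needs.
    get: '/index.html',
    push: ['/favicon.ico', '/css/landing.css', '/img/logo.svg'],
  },
  {
    // Single-page app entry point: push the app's assets, but not source maps.
    get: '/app.html',
    push: ['/js/**/*.js', '/css/**/*.css', '/fonts/**/*.woff2', '!**/*.map'],
  },
];
```

The server or CDN edge would match the incoming request path against the `get` patterns and send push promises for everything listed under `push`.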
Currently, this is available in the server project I've built. But the pain point there is that you need to actually use this server, and it only does static files. For static hosting it's great, it's very convenient, but I want to make it available to anyone who can just integrate it into their own web application; I think that would be more useful. So far this has been more of an experiment.

Here, this is basically the exercise; we've kind of talked through it now. If you want to do it, feel free to set it all up, the instructions are right there. But let's see. Oh, one thing I should point out, an annoying gotcha that might save you some time if you encounter it. This is a maybe-not-so-valid HTML page, but regardless: a script tag with type module, which is how you can do ES modules in the browser, it works, and crossorigin set to use-credentials. If you do not put this use-credentials thing on, your browser will still make an extra request for the asset. You'll be pushing the asset, and the browser still makes an extra request for it. It's this annoying little thing you have to do on the initial script tag; all the dependencies and everything else don't matter anymore once the original entry point has that crossorigin credentials attribute. Otherwise you'll find this quite weird to debug.

Let me show a little project that actually uses this. This one is too manufactured, so let me skip over it for now and show the actual website that uses it. So my goal here was to use server push. Can I get a water, please? My goal was to use standards as much as possible. So I'm using server push for bundling, with no build tools that are really doing anything substantial. Thank you very much. I've got a fallback 200 response that just goes to my client-side router. And my manifest, like I showed before, has an entry for index, the static home page, and then the app, which is the fallback; if that gets served, it pushes all of the web app's assets: styles, JavaScript, images, fonts. And I can exclude things like source maps, because when I'm serving the app to a normal browser, I generate the source maps on the server, so when I open up my inspector, the inspector will load them, but other users never need to download them; it would be wasted bandwidth. So I just exclude them from the server push there; otherwise, because I'm using a simple wildcard, it would include them.

Now, this is just a simple single-page app. Where's my entry point... app.html. And like I said, I have to do this type module, crossorigin, use-credentials. And I've got an entry point, app.js. What? I said app.js. OK. And from here, I'm just using import, and this is all not transpiled; this works in the browser today. I've got a little client-side router, a simple thing, a couple of dozen lines of code that just does some nice routing, but you could use your React routers or what have you. For each of these routes, I'm instantiating a custom element by its tag name. These elements have been defined by their respective component files, which are just JavaScript files that define, for instance, the login thing that we saw. That's coming from authentication-login. So the JavaScript file authentication-login is, or rather extends, this template element, which is an element that just loads some HTML from its own attribute.
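Roughly, that pattern looks something like the sketch below; the class names, attribute name, and file paths are illustrative, not the actual code from the project:

```js
// A base element that loads its markup from its own "template" attribute.
// Used in markup as:
//   <authentication-login template="/components/authentication-login.html"></authentication-login>
class TemplateElement extends HTMLElement {
  async connectedCallback() {
    const url = this.getAttribute('template');
    const response = await fetch(url, { credentials: 'include' });
    this.innerHTML = await response.text();
    this.onTemplateLoaded();
  }
  onTemplateLoaded() {} // subclasses override this to wire up their behaviour
}

class AuthenticationLogin extends TemplateElement {
  onTemplateLoaded() {
    // Wire up the loaded markup, e.g. the login form's submit handler.
    this.querySelector('form')?.addEventListener('submit', (event) => {
      event.preventDefault();
      // ...handle the login here
    });
  }
}

customElements.define('authentication-login', AuthenticationLogin);
```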
And then you start setting up your callbacks. This is the web components, custom elements style of development that I've been working with. There's no JSX; it's a little more verbose, it's all DOM, but it works, it's high-performance, and it works right now. It works very well in Chrome and Safari. Firefox is still lagging a little bit on web component support, but I feel like they'll just catch up. And I'm not building this for legacy users. My target audience is people who are interested in push and HTTP/2, who are probably using state-of-the-art stuff anyway. And if this gets adoption a couple of months from now, chances are that even Firefox will have come around and shipped these things. I should probably talk to some of the Mozilla people here about that. But yeah, my concern was not so much full compatibility support, and I was actually amazed it worked as well as it did. When I started developing this, I was expecting it to be completely broken, but besides a few minor glitches, like that crossorigin credentials thing on the script tag, I was amazed that today you can just work with full web components, the v1 spec, with server push, and it works really well. Occasionally you see weird behavior when you Command-R reload in Safari and it suddenly does a lot of GET requests and you don't understand why, and there are some issues here and there, but it mostly works. And these are just bugs that are going to get fixed by other people, so my project will automatically get better for free over time. So that's a little project I wanted to show. If you're interested in how it works, it's all open source, go through it; I'm happy to talk more about it.

But the main thing I wanted to show is the static CDN. I'm guessing it's down as well, since we nuked my little server. I've essentially turned my server project into a SaaS, if you will: a free platform where you can just deploy static sites. I would use GitHub Pages, or Surge.sh, or Netlify, all these amazing products, but none of them really supported HTTP/2 server push, and I wanted to combine those things. So I've built, with Node.js, a little SaaS called HTTP2 Live, where you can just deploy a static site with a manifest and it will serve it with push. My prototype has just been one node at my house. I've got a nice gigabit connection, and my house is less than a millisecond away from the internet exchange at one-north, so it's sufficiently fast compared to an EC2 instance. But right now I'm in the process of upgrading all that infrastructure from this proof of concept into a more production-capable, actually globally distributed CDN with multiple nodes and PoPs around the world. And it's still a free, open source service; it's, I think, one of the only open source CDNs out there, and it's all built on Node.js. I'm having a lot of fun playing with all this technology and making it possible for other people to host their sites and get all that amazing performance.

So I hope you all took away some things from here and there. Thanks for bearing with all the compatibility issues and everything we've faced; that's just the nature of doing state-of-the-art stuff. You spend a lot of time fixing little things.
And then you go, why did it take four hours to find one little missing dependency or whatever? That's just how it is. But it feels really good once you get something working that no one else in the world has done yet. And the same goes for contributing to Node.js core. A year ago, I wasn't doing any of that Node.js stuff. I only got into it around that time because I saw that people like Matteo and James were working on this, and I tried to contribute to it. I was just writing some tests for it, I saw some of the APIs weren't fully fleshed out yet, and I started doing little patches. It was amazing to see how receptive they were, and the whole community around Node.js core has been really welcoming to these kinds of patches. So I highly recommend that if you see any bugs, just do a little pull request; you become a contributor to Node, and it's a really good feeling that your little fix goes out to millions of people around the world, that millions of servers are running little patches that you submitted. It's a really nice experience to go through. Anyway, I would say go play with these things and make awesome things. So I think that's pretty much the end of my talk now. Thank you very much.

Hey, Thomas. We have just a little something for putting all the work into your workshop, so thank you very much. And another round of applause for all of the staff we've had at this conference. Really? Yeah, because the room didn't, it wasn't.