Can everyone hear me at the back? Yeah? OK. So I'd like to give the same talk that I gave two weeks ago, but not here, so hopefully not too many people were in San Francisco then. I've been playing with this idea of HTTP/2 push, or at least thinking about it, for a number of years now, and I finally found myself unemployed for an extended period of time, so I had the time to actually play with the code. I wanted to show what I did and share a little bit of experience with HTTP/2 server push. So what it is: it's not like what you might know as server-sent events or WebSockets, which are sort of asynchronous back-and-forth connections. This is really for serving requests. Take the standard HTTP request-response and think of it as one request, potentially a lot of responses. I'm not going to get too much into the protocol, but the idea is that you no longer need to do the bundling. All this webpack, Browserify, PostCSS, what have you — you can kind of skip that, and it should still work totally fine. So I wanted to try that out, and I'll go straight into the demo, because we don't have a lot of time. Oh, by the way, I should probably mention: I'm Sebastian. Just one plug: I work for this thing here. If you're interested in legal stuff and technology, it's kind of cool. OK, so I spun up a couple of demos. This is DigitalOcean. It's like AWS, but easier for noobs like me. And I've put up a little push demo, which is just a simple website. I'll show you what it looks like. It's just basic static files: a little icon, fonts, HTML, JavaScript, CSS. The interesting thing is that there are lots of JavaScript files, because it's not bundled, and lots of CSS files, because it's not bundled. And then the .map files are just source maps for debugging; they're not supposed to be served to the end user.
And so what I do: if you're familiar with this thing here, http-server, it's the most awesome thing in the world. It just takes any directory and serves it as static files. So if you have a website, you run http-server and it just makes it available. It's the easiest thing in the world. If you're on Heroku or whatever and want to do something crazy, run http-server for your static sites — it burns a lot of CPU and memory, but whatever. OK, so I just run http-server, and magically the site is now available on port 8080 on localhost. I was in Italy when I made this, so I was influenced by gelato and espresso. So this is the static site; we can check it out. Apologies for the Safari use. Here we go: these are just our standard files. It's not serving the source maps, just the plain, simple source files. Now, that's not very cool, so let's look at it in Chrome. There we go. We see the standard waterfall in the timeline. This is really embarrassing, because we've got a JavaScript file that requests another JavaScript file, a CSS file that requests another CSS file, and the HTML file requests fonts and CSS. So this is very, very slow — a lot of latency. So what I wanted to do is use my little library called http2-server. No, that's not a turtle or a rocket; sorry, I can't draw, pull requests welcome. It's basically the same command line. All you do is kill http-server and change the command to http2-server, and now it should be serving. And because it's HTTP/2, you have to use HTTPS. So if I go here, it complains because it's a self-signed certificate — you can use Let's Encrypt or something and get a proper certificate if you like, so it's a little tricky. It still looks like a waterfall, but it's not. Trust me. Well, you don't have to trust me: you can verify it. You should.
So the little green bit here is sort of the setup of the request: it's doing a DNS lookup, connecting at the TCP level, doing the TLS handshake, making the request, and then getting the first byte — the time to first byte. That's what's happening here. And then you get this blue bit. Now notice that there are no green bits on the other resources, because no requests are going out for them; they're all being sent here. There's a little ghost light blue — I don't know if it picks up on the camera or the projector — and that's because it's all being sent already in this one response. So we can look at it here: we have one request for the root, and then it's pushing all the static files. The subsequent requests are only because the developer tools are open; Chrome will request source maps and things like that. But basically the entire site was delivered in one request, which is pretty cool. If you want to see how cool that is, go back to my DigitalOcean. I've spun up two servers — the exact same droplet, one in Singapore, one in SFO, San Francisco. So one is really fast and one is relatively slow, about 200 milliseconds of latency. I've mapped some DNS, so we should be able to just grab the URL and go to the push demo. It's running both the http-server and the http2-server: the http2-server runs on port 443 because it's HTTPS, and the http-server runs on port 80. So we just run this, and here we go. If we look at the network tab, we'll see that waterfall graph. So you see what happens when latency is introduced — i.e. for basically all modern users: everyone on mobile networks across the world, everyone who's not on fiber, everyone at a cafe on really crappy Wi-Fi, or if you're on the MRT, you have ridiculous latency.
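As a rough back-of-the-envelope model of what those green bars cost — all the numbers and helper names here are invented for illustration, not measurements from the demo:

```javascript
// Toy page-load model. A waterfall pays one extra round trip per level of
// the dependency chain (HTML -> CSS -> CSS, HTML -> JS -> JS), while push
// pays only for the initial connection plus a single request.
const SETUP_RTTS = 3; // roughly DNS + TCP handshake + TLS handshake

function waterfallMs(rttMs, chainDepth, transferMs) {
  return (SETUP_RTTS + chainDepth) * rttMs + transferMs;
}

function pushMs(rttMs, transferMs) {
  return (SETUP_RTTS + 1) * rttMs + transferMs; // one request, one big response
}

// With ~200 ms RTT to Singapore and a 4-deep request chain:
console.log(waterfallMs(200, 4, 300)); // (3 + 4) * 200 + 300 = 1700
console.log(pushMs(200, 300));         // (3 + 1) * 200 + 300 = 1100
```

The exact constants don't matter; the point is that the waterfall term scales with the depth of the request chain, and push removes that term entirely.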
This is a very, very minimalist demo of what happens with latency: you have lots of round trips, and each one is painfully slow. And no, reusing the same connection doesn't fix it on HTTP/1. I found a lot of browser bugs doing this, by the way. OK, so the difference is that here we have one green bar; we've eliminated all the other green bars. If you look at the speed difference, it went from 2.4 seconds down to the minimum round trip time plus the transfer time of the data. So this is basically doing the bundling without doing the bundling. Because all I'm doing — if you look at the demo repo, it's also available online — is transpiling, without bundling anything, and yet the whole thing is served at once. Even fonts and things you typically wouldn't bundle can now be pushed in one shot. The requests that you'd still have even with a bundle — from the HTML to the JavaScript, to the CSS, to the images — are all being sent in one response. So this goes faster than any kind of bundling, which is pretty cool, I think. So I wanted to just share that. OK, well, it also kind of sucks, because there are a couple of problems with it. I think it's pretty cool, but I haven't used it in production. One reason: it doesn't work with Safari. Safari doesn't support HTTP/2 server push. That's sort of the main problem — a lot of things don't support this. CDNs don't support this. Some CDNs will post blogs about how they support HTTP/2 and HTTP/2 server push, but they actually don't, because they only support it on the CDN-to-user leg. They don't support it from the CDN to your origin server. So even if you have an HTTP/2 origin, they're still making an HTTP or HTTPS request using protocol 1.1 — or 1.0 if you're very unlucky. Nobody supports HTTP/2 on the origin request.
So if you're using CDNs to accelerate, it's going to be a trade-off between the single response and the omnipresence of CDNs around the world. That's the main trade-off so far — that and the Safari thing. I think there was more, but anyway, I'll leave my talk at that. Any questions? Yeah, go for it. [Audience: How does it choose what to push? How does it make the bundle? How does it know which files I need?] No, it's naive. I think the README describes it here. Yeah: there's an include option, but it also excludes the source maps. So it's a sensible default, actually. It's really the same as http-server; it's just that by default you're pushing everything, which may or may not work. It's actually a really good question, because a lot of people are sort of paralyzed by the notion that you might be pushing too much. It turns out that because of this massive latency advantage, you can actually do more in the same amount of your user's time, and pushing more tends to be kind of OK. Because a lot of the time we're dealing with what's called an LFN, a long fat network — I think it's pronounced "elephant," which makes sense. Long fat networks have been a very old problem that TCP tries to solve. But basically, even if you have terrible latency, chances are you're on a 4G device with a 70-megabit connection; it's just very, very slow to initiate the connection and things like that. So you want to crush that latency, and the bandwidth doesn't really matter. Even on a really crappy Wi-Fi connection, you can probably still stream YouTube at multiple megabits per second; you don't care if there's a little bit of buffering going on. So anyway, that's why I just push everything. But it could be optimized, of course. I think you were first. A question.
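The naive "push everything except source maps" default could look something like this sketch — the exclude pattern and the cap of 100 come from the talk, but the function itself is made up for illustration, not the library's actual API:

```javascript
// Naive selection of which files to push: everything in the served
// directory, minus excluded patterns, capped so browsers don't choke.
function filesToPush(files, { exclude = [/\.map$/], max = 100 } = {}) {
  return files
    .filter(file => !exclude.some(pattern => pattern.test(file)))
    .slice(0, max);
}

const files = ['index.html', 'app.js', 'app.js.map', 'style.css', 'style.css.map'];
console.log(filesToPush(files));
// ['index.html', 'app.js', 'style.css']
```

Anything smarter — responsive image variants, above-the-fold priority, per-page resource sets — would replace the filter with real logic, but as the talk says, the dumb default already wins most of the latency back.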
[Audience: If you have 10 or 20 images and you want to be responsive, and many of them are below the fold — which would be the case on a lot of pages — how does that work with this? Maybe you don't want to send all those images, or you want to send the right-sized image.] I guess you could use exclude. It's a similar question, so I think it's the same answer: this is a proof of concept that's just meant to demonstrate that HTTP/2 push can work, and how well it would work, because I was sort of sick of theorizing. But obviously there are improvements to be made, maybe some kind of logic. There's definitely room for improvement. Yes, Claudio? [Claudio: A similar question again — couldn't the protocol push all the necessary stuff first, so the browser can start rendering everything above the fold, and then keep pushing the rest, still on that one request?] Yeah, it's actually really interesting. HTTP/2 has a lot of machinery to prioritize streams. HTTP/2 itself already does multiplexing: you have one TCP connection, and on that connection it sends interleaved bits and pieces of the multiple responses being transmitted. And you can prioritize on top of that. That's up to the originating host — the server, or it could be the browser if the client is the one transmitting a lot of stuff. So the server could prioritize certain things, and I think the client can signal priorities too. There's a whole bunch of stuff going on there that I've played with. For example, one thing I did, if you noticed: the last thing my server sends is actually the index resource that was requested. The reason is that I was experimenting with sending it first.
And it turned out that once the browser received the index first, it sort of cut the connection. I don't remember if that was Firefox or Chrome, or an old browser version, or why it did that. So there are a lot of gotchas. Basically, a lot of this stuff is sort of experimental. But it shouldn't be, because it's actually supported by something like 75% of all browsers globally, as of today or last month. And it's growing a lot: a year ago it was probably more like 20%. So this is really up and coming, and I think a lot of things will get worked out as we just play around with it. So actually, I want to use this for simple static sites; it's ideal for that. And I want to start adding things — not just choosing which resources to push, but things like streamlining API calls that the page would normally have to make. Have my web server push you the API call that your single-page app was going to make anyway — to verify that its profile session is still valid, or to request the data to render the landing page, stuff like that. You could just push that along, and your web server could go make that request itself, because it's close to the API server. So there are all kinds of new things we could do with this. This demo is just seeing where we get if we do the status quo thing. Oh, I want to show you one more thing — I hope that answers the question. I wanted to show you breaking the browser, because it's kind of exciting. Where is it? No, it's not here. OK, so this was basically pushing a couple of files, right? Eight files or something. So what if I push a lot of files? If I look at how many files there are in this directory, it's about 19,000. That's basically node_modules, right? Makes sense. So I copy node_modules into my public folder, and it's included, of course — oh, I should actually add that to the default excludes, I think. People would mess that up. I mean, I messed up on that a lot.
It was really annoying. OK, it's an SSD, but it's a slow laptop, so give it a few seconds. There we go. Thank you. So now I run the server with a max of 10,000 — I'm still capping it, let's take it easy. By default it caps at 100, because if you push too many files, browsers don't like you. OK, so we refresh this page. Also, I should tell you, the first time is a little slower: on the first request that comes in, the server builds a cache of all the files in the directory — this is only for development, of course — so it's scanning everything, and it'll take a little while now. There we go. And then you see: Chrome dies after 1,000 pushed files. Firefox makes it to about 5,300 for some reason, but nothing gets past that. It's kind of weird; I have no idea why. That's totally a bug: there's nothing in the protocol that talks about such a limit, and it just rejects the rest. And once you do this, you probably have to restart the browsers — the caching gets weird. But yeah, this is kind of cool, I thought. Anything else I should mention? I think that's about it. Any more questions? Min, is there time for more? OK, one last one. [Audience: Is it possible to fall back to HTTP/1.1 if the browser doesn't support it?] Yeah, yeah. So the negotiation actually happens in TLS. During the initial TLS handshake, the client announces that it would prefer HTTP/2. That's in TLS itself, before HTTP is involved at all, and it's up to the server to select whether it will use HTTP/2. That's the way every browser does it — you can only do it over HTTPS. The spec actually describes a mechanism to do it over plain HTTP without encryption, using an Upgrade header just like you would with a WebSocket, but nothing implements it. Oh, and one more interesting thing about HTTP/2 itself.
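That fallback mechanism is ALPN (Application-Layer Protocol Negotiation): the client lists the protocols it speaks inside the TLS ClientHello, and the server picks one before any HTTP bytes flow. A toy version of the selection rule — the function and names are illustrative, not a real TLS implementation:

```javascript
// The client offers its protocols in preference order; the server answers
// with the first offer it also supports, else falls back to HTTP/1.1.
function selectAlpnProtocol(clientOffers, serverSupports = ['h2', 'http/1.1']) {
  return clientOffers.find(p => serverSupports.includes(p)) || 'http/1.1';
}

console.log(selectAlpnProtocol(['h2', 'http/1.1'])); // 'h2'
console.log(selectAlpnProtocol(['http/1.1']));       // 'http/1.1'
```

An old browser simply never offers `h2`, so the same server transparently serves it over HTTP/1.1 — which is why the demo can run both protocols side by side.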
So it's still on TCP, right? But there's actually another protocol called QUIC. Just as SPDY was the research that went into the standardized HTTP/2, that research at Google has continued, and it's now called QUIC. And QUIC gets rid of TCP altogether. The goal is to achieve zero round trips: basically, you can send your request without a couple of round trips to initiate TCP, and then another one for TLS and all that. It just skips that and goes boom, here's the data — and the server has enough to establish the encrypted connection, the datagram stream, as well as the HTTP stuff. It's kind of insane, because we're getting rid of four decades of TCP. Seriously — HTTP is the predominant protocol today, and I'm pretty sure that in five to ten years, if this QUIC stuff takes off, TCP is going to be like ICMP: a minimal amount of traffic. All right, people are skeptical, whatever. But QUIC doesn't support HTTP/2 server push right now. The spec is also outdated — the draft has actually expired without becoming an RFC. It is in the reference implementation, though. So anyway, that's for another talk in a few years. OK, thanks again.