3, 2, 1. 3, 2, 1. Hello, Surma. Hello, Jake. This is weird, isn't it? This is very different. It is. I can't even see you. It's exactly the same as we did for Web Dev Live. Yeah, I feel like in Web Dev Live, I could actually see your face. But right now, I can't, because I can see your screen. Well, do you know what? The screen is going to be more interesting because, yeah. Do you know what? It's been a while since I spoke about streams, and you don't know how much you can have in streams. Do you remember which year was declared the year of streams, by you? So everyone watching, in 2016 I wrote an article called 2016 is the Year of Streams. And it was, for me. And that's all that counts. I'm not sure the rest of the web was that bothered. But you know, streams are continuing to happen and get more exciting. And that's what I'm going to talk about today. This is the Fetch API. You've seen it before. This is making a request with a request body, a POST request. The body can be a number of different formats. So it can be a string, which means it can be sent as text, right? Well, it will encode it to UTF-8, won't it, implicitly, and then send those bytes. OK, Mr. Pedantic, yes. We had an episode on encoding. I'm not going to, you know... that's what I do now. And the nice thing is it will add the content type for you as well. And it will add the UTF-8 part as well. And same with a blob, it'll just send that as binary, but it'll take the content type from the blob itself. A Uint8Array, it will send that as just application/octet-stream. Application octets. Yes, that one. Well remembered. FormData is a more interesting one, because it's multipart encoding, because it can contain everything a form can contain, including files. And then there's the good old super basic URLSearchParams. That one I didn't know, actually. Cool.
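A quick sketch of those body formats. Constructing Request objects (rather than fetching a real endpoint; the example.com URL is just a placeholder) shows the Content-Type header that gets added for you:

```javascript
// Sketch of the body formats fetch accepts, using a placeholder URL.
// Constructing a Request shows the Content-Type that gets added for you.
const asText = new Request('https://example.com/', {
  method: 'POST',
  body: 'hello', // a string: implicitly encoded to UTF-8
});

const asBlob = new Request('https://example.com/', {
  method: 'POST',
  // the content type comes from the blob itself
  body: new Blob(['{"a":1}'], { type: 'application/json' }),
});

const asBytes = new Request('https://example.com/', {
  method: 'POST',
  body: new Uint8Array([1, 2, 3]), // a Uint8Array: sent as raw binary
});

const asForm = new Request('https://example.com/', {
  method: 'POST',
  body: new URLSearchParams({ q: 'streams' }), // classic form URL encoding
});

console.log(asText.headers.get('Content-Type')); // text/plain;charset=UTF-8
console.log(asBlob.headers.get('Content-Type')); // application/json
console.log(asForm.headers.get('Content-Type')); // application/x-www-form-urlencoded;charset=UTF-8
```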
OK, so one of the nice things about FormData is that FormData can take a form object in, and it will just represent that form as multipart form data. If you want to convert it down to URLSearchParams, you have to do it yourself. It's like two lines of code. It's not much. And of course, if your form contains a file, you would have to throw. But this will give you the standard URL encoding stuff. But with all of these formats, you need all of the data upfront, ready. Yeah, you have the whole form and all the data, and then it turns it basically into a binary representation under the hood, and once that binary representation is done, that's what gets sent along with the fetch, with the request. Exactly. But look at this. This is in Chrome 85 with the experimental web platform features flag. There's an origin trial as well, so you can use it on live sites. You can do this, which changes stuff. It's a new feature for the web. OK, to describe why, let's just talk about streams. I mean, it's interesting, because fetch responses have always been streams, right? Like the body of a response has always been a stream that you can process as it arrives. Ish. Ish? Oh, OK. There was a time. We released fetch, and then some time later we put streams in. Oh, OK. Yeah, they were behind by, I'm going to say a year, because that's a number that I've heard, but that might not be 100% correct. It was some time. How did that work, interop-wise? Like, what was body before? Oh, it just didn't have body, basically. You just had to... It just didn't have body? It just had the methods where you say, like, text, JSON, arrayBuffer. Yes, exactly that, which is what requests are like right now. It's the same thing going on. But yeah, like you said, the web is a streaming thing by default, right? If you go to a page and it serves HTML, you'll start seeing it as it's downloaded. Same with images and video. And this is great, right?
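The "two lines of code" conversion might look like this. In a page you'd build the FormData from a form element with new FormData(formEl); here it's built by hand so the snippet is self-contained:

```javascript
// Flatten FormData down to URLSearchParams. In a page you'd start from a
// form element: const formData = new FormData(formEl);
const formData = new FormData();
formData.append('name', 'Jake');
formData.append('topic', 'streams');

// URL encoding can't represent a file, so throw if the form contains one:
if ([...formData.values()].some((value) => typeof value !== 'string')) {
  throw new TypeError('Form contains a file; use multipart encoding instead');
}
const params = new URLSearchParams(formData);

console.log(params.toString()); // name=Jake&topic=streams
```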
Because you can do something with just a little bit of the response. I've actually spoken to teams and they're like, oh, we're pulling like 100 search results down the wire, but we think we're going to detect if the user's on 2G and instead serve just 10 results so they'll get it quicker. And it's like, no, no, no, just serve a streaming format. Because then, as soon as you get that first result, you can do something with it, and you don't need to worry about the user's connection speed, because you'll just get those things one by one. That's why I love streams. The fact that the web is streaming is the biggest fundamental difference to all the other platforms, most notably Android and iOS. Because on Android, you install your entire app and everything is there, while on the web, you have nothing, and you have to press the most critical resources through this tiny straw that is the network. And that's what we optimize for: whatever is needed first arrives first, in the best order possible, so that it can be shown, and you only download what you need. I think that's the biggest difference, and why so many assumptions and patterns from native land don't really carry over to the web, because we have to stream it. Yeah, so on native, you have to download everything before you can do anything, which is not true on the web. This is not going to turn into a framework rant, but it's one of the things that I get annoyed at, when I see sites just throw away this huge benefit the web has. And it is a benefit, right? It's not just a constraint or something that makes it harder. It's a good thing. Absolutely, absolutely. And yes, we've had it on responses for a few years now in fetch. So here's what it looks like. So you do a fetch, you get the body, you can get a reader for the body, and then I'm just going to do a while (true) here and get some data.
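The reader loop being described can be sketched like this; a constructed Response stands in for a real fetch() so the snippet is self-contained:

```javascript
// The response-body reader loop; a constructed Response stands in for
// the result of a real fetch() here.
const response = new Response('This body arrives as one or more chunks');

const reader = response.body.getReader();
const chunks = [];

while (true) {
  const { value, done } = await reader.read();
  if (done) break; // the whole response has been received
  chunks.push(value); // value is a Uint8Array of bytes
}

console.log(chunks.every((chunk) => chunk instanceof Uint8Array)); // true
```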
When you read data from a stream, you get these two values: value and done. If done is true, that means you've received the whole response. Otherwise, you get this value. These values are Uint8Arrays of bytes. And the number you get, and the size of those arrays, depends on the network conditions, right? Like if you've got a fast connection, you'll get a few big arrays. If it's a slower connection, it'll sort of drip-feed lots of smaller arrays to represent the same data. Quite often you don't want to be dealing with binary data. Quite often you're receiving a text response; it's one of the simpler cases. So yes, you can use a transform stream, which is only in Chrome right now, still, unfortunately. Otherwise you can use the lower-level TextDecoder thing. But I really like the transform stream, because it's nice, you just do this, and now all the chunks are text. Yay, that's much easier to deal with. Anyway, like you said, we've had this for a long time, cross-browser, the whole response streams thing. The new bit that I want to talk about is request streams. And here is how they work. So I've got a content type header in there. That's optional, but it's good practice just to tell your server what you're sending. And then, yeah, there's a stream, and that's it. And that just works. So inside the ReadableStream, this start callback is called straight away, and from then on, I can just start pushing stuff into the stream. But this demo is actually going to fail, because requests expect the data to be Uint8Arrays. I was about to ask, because the body could be a string, but does it also actually take streams of strings? And you're saying no, it doesn't. No. But, problem solved. There you go. We've got these transform streams; the text encoder one sorts everything out. And so, yeah, there you go.
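Roughly what's on screen here: a request body you push strings into, piped through TextEncoderStream so the wire gets Uint8Arrays. The upload endpoint is hypothetical and commented out, since in Chrome 85 this needed the experimental flag or the origin trial; the delay is shortened from the five seconds in the demo:

```javascript
// A streaming request body: push strings in, TextEncoderStream turns
// them into UTF-8 bytes. (Five seconds in the demo; shortened here.)
const body = new ReadableStream({
  async start(controller) {
    controller.enqueue('hello');
    await new Promise((resolve) => setTimeout(resolve, 500));
    controller.enqueue(' world');
    controller.close();
  },
}).pipeThrough(new TextEncoderStream());

// Hypothetical endpoint; in Chrome 85 this needs the experimental web
// platform features flag or the origin trial (plus, for HTTP/1.1 servers,
// the non-standard allowHTTP1ForStreamingUpload opt-in mentioned later):
// fetch('https://example.com/upload', {
//   method: 'POST',
//   headers: { 'Content-Type': 'text/plain' },
//   body,
// });
```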
That's going to send the UTF-8 bytes for hello down the wire, and then five seconds later, I can send world and close the stream. And that's it. That's how streaming requests work. The use cases I can think of: you could warm up a connection. Say you've got a chat app; as soon as the user focuses the text input to start typing, you could start that request up, get all the headers and stuff out of the way, and then just send the text once they hit send. Obviously, then on top of that, there are more genuine streaming formats. Like you could send audio data, video data. You can just start building up the request body like that. So in this example, I've created a readable and I'm giving that to fetch. But sometimes it's easier to use a writable, and you can do that as well. You can make your own transform streams, like we've seen with the text encoder and text decoder ones. But this one here doesn't have an actual transform defined. It's just an empty transform stream. And what that does is it becomes an identity stream. And all that means is anything that goes in the writable comes out of the readable. And that's it. But that's useful, because now I can send the readable part as the body to fetch, and now anything I send to the writable is what's gonna actually be sent over the wire. And you can do some fun stream composition stuff with this, right? Like, I'm gonna fetch something completely different here, and I'm gonna pipe its body to the writable. So now I'm fetching from one URL and sending it to another. I'm using the user's machine as a kind of proxy, all in a streaming way. You know, the user doesn't have to buffer all the data. It doesn't all have to end up in their memory. I can even compress that data on the fly. So this is using the compression transform stream. This is a Chrome-only thing as well right now. It's a web standard, but it's only implemented by Chrome.
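The identity-stream composition being described, sketched so it runs standalone: the upload fetch is a hypothetical commented out, and a constructed Response stands in for fetching the other URL:

```javascript
// An empty TransformStream is an identity stream: whatever goes into the
// writable comes out of the readable.
const { readable, writable } = new TransformStream();

// Hypothetical upload: hand the readable end to fetch, and anything
// written to the writable end is what actually goes over the wire:
// fetch('https://example.com/upload', { method: 'POST', body: readable });

// Stream composition: pipe one response's body into the upload, using the
// user's machine as a streaming proxy. A constructed Response stands in
// for fetch('https://example.com/other-thing') here.
const source = new Response('data passing straight through');
source.body.pipeTo(writable);

// To compress on the fly (Chrome-only at the time), you'd insert:
// source.body.pipeThrough(new CompressionStream('gzip')).pipeTo(writable);
```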
But you can see how composable streams are, to sort of plug them together like this. Yeah, I mean, I wrote an entire library where I was trying to mimic observables with streams, because they basically give you the same pattern. There are some differences, and there's a broader blog post, blah, blah, blah. Basically, you can declare really complex data flow structures with streams, which is exactly what they're for. And transform streams are the heart of that. And it always breaks my heart that only a few, not all, browsers have transform streams yet, but once they do, it's gonna be very interesting. Also, it's good that you mentioned your library, because now we have to put a link to that in the description. I see what you did there, very smart. Yeah, oh well. But then you also mentioned my blog post from 2016, albeit to embarrass me. Still, clicks on my blog. So there we go. Well done, us. Fair is fair. Okay, so that was the good news. There are some catches. I find this stuff really fascinating. There are some cases where these new request streams are going to be limited or, you know, things might go wrong. HTTP redirects are an interesting one. Oh God. If you make a request and you hit a redirect, other than a 303, 303 is fine. Okay. But anything else, it will fail. Right. And the reason for this is the other redirect codes, like 307 and 308, require the body to be retransmitted to the new URL. And with streams, it's gone, right? Of course, yes. So it doesn't work. 301 and 302 sometimes don't send the body again, but sometimes do. So I think just for consistency, we've decided they fail as well. I mean, in general, avoid 301 and 302, because they're weird and legacy. There is always a better one. You've always got 303 and 307, which are sort of better equivalents. But yeah, the stream can't be restarted. Maybe someday we'll find a mechanism for that, but right now it doesn't work. One more thing. I've got loads more to talk about.
By default, this is only going to work over HTTP/2. If you want it to work on HTTP/1.1, we've made a little opt-in for it. Oh, that's interesting. And this is non-standard as well. This is just during the experiment. And the reason we're doing this is because of a key difference between HTTP/1.1 and HTTP/2. Oh, 1.1 has chunking, doesn't it? Yes, it does. So here's how you post something over HTTP. In this case, the content length is important, because that's how the other side knows how much data it's going to receive. It also then knows where the end of the message is, because it's when you've sent that amount of data. But in a streaming world, we don't know the length in advance. That's the whole point. So we need a different way to do it. And like you said, one way of doing this arrived in HTTP/1.1, and that is chunked transfer encoding. It's difficult to say. But this is the same message, now in two chunks. Yeah. But if I specify the content length, would it then work over HTTP 1.0 as well? So content-length is one of those headers that you're not allowed to send in fetch. Okay, fine. Yeah, it's one of those things that you've not been able to do before. So letting you do it now, there's that security question. I think if we exposed a way to do it, it wouldn't be just by setting the header. It'd be by giving us a number, because then that would at least prevent you saying content-length: foo or something like that, which might cause servers to do something weird. Why is that a question of security? A server must always expect to get malformed requests, right? Well, do you know what, Surma? Not all servers behave. So this is part of the problem. If you send unexpected data, some servers will accidentally do things that they weren't meant to do. And the web always plays it safe in terms of that, and tries not to cause problems.
But yeah, so in chunked encoding, you get the length of a chunk, which is a hex number, and then the content. And off it goes until you get a zero for the size, and that's the signal that that's the end of the message. So that's been around forever, since HTTP/1.1, and it's really common in responses, but this is the first time browsers have been able to do it with requests. I see. Okay. Going back to that whole thing about servers not expecting it, we're worried about compatibility, so we've made it opt-in for now. It's not a problem in HTTP/2. It doesn't have chunked encoding at all. It has frames, which are kind of like chunks, and it uses them everywhere, so there are no compatibility concerns at all. So no worries. I see, okay. So yeah, it's just a sort of worry of, what is an HTTP server gonna make of this? Like, is it going to work? Is it gonna break things? But we put that in there as a test, so developers can try it out and report back. Another gotcha slash difference slash restriction: HTTP is bidirectional. And that's something I always forget about HTTP. Like, you can start receiving the response before you finish sending the request. Actually, it depends on who you... I thought that was something many servers do, but it's actually not necessarily specced that way, or something like that. Yeah, it depends who you ask, whether that's what was intended by the spec or not. But yeah, lots of servers support it. Like, I know the Node server supports it really well. Yeah, probably quite a few others. But a lot of implementations don't, a lot of front-end servers don't support it. And neither does Chrome. I don't think the other browsers do under the hood either. So that kind of leads onto this. So in this model, you need to complete the request before you get any of the response. Interesting. It'll buffer any response data that arrives before then. So you can't just, like, build your own WebSockets on top of fetch and HTTP/2? Well... Well...
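The wire format being described can be sketched as a tiny encoder: each chunk is its byte length in hex, a CRLF, the data, another CRLF, and a zero-length chunk terminates the message. This is an illustration of the format, not something fetch lets you do by hand:

```javascript
// Sketch of HTTP/1.1 chunked transfer encoding. Each chunk is its byte
// length in hex, CRLF, the data, CRLF; a zero-length chunk ends the message.
function chunkEncode(chunks) {
  const encoder = new TextEncoder();
  let out = '';
  for (const chunk of chunks) {
    // the length is the UTF-8 byte length, not the character count
    out += encoder.encode(chunk).length.toString(16) + '\r\n' + chunk + '\r\n';
  }
  return out + '0\r\n\r\n';
}

console.log(JSON.stringify(chunkEncode(['Hello, ', 'world!'])));
// "7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n"
```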
You can sort of hack around it. So you can have two fetches, one for sending and one for receiving. Yeah, okay. So you can create a writable and then send that as a stream request, and then make another request and use that as the stream response. And now you've got a readable and a writable for bidirectional communication, but it's actually two HTTP requests. But yeah, this is kind of like... Which, over HTTP/2, is still just one connection. So it's kind of fine. So yeah, it is pretty much equivalent. Yeah, exactly. The only thing you need is something on the server side to tie the two things together and know that they're part of the same thing. Right, but you probably needed that anyway, even if it had been possible to do it via one request. Like, you need to do some bookkeeping there anyway. Well, if it's a single request, the server just does that by default. Oh, okay. Yeah, okay, I see what you mean. But I've actually built a demo of this. I'm glad you asked about the WebSocket thing, because that's the one demo I've got. Yeah, and it's just a super stupid demo. Like, you just start typing in this text field and it starts appearing in the page, but it's actually making a round trip to the server to do that. Well done. It's really pointless. I know, I just wanted to show, in as little code as possible, how you could recreate a WebSocket kind of thing. Yeah. But it shows how you could do something different, you know. I like it. But yes, we have worries around compatibility. So what could go wrong? You need a server that can handle streaming requests. Node.js does it, loads of servers do, right? It's pretty common. I don't know if PHP does, but Node definitely does, and that's what I've used in this demo. But you often end up with a chain of servers before you get to Node. You'll have a front-end server, something like Apache or Nginx, sitting ahead of it. And then you might have a CDN.
And if any of those decide to buffer the whole request, then game over. For example, I think most of the serverless architectures buffer the entire body before the function is invoked. Yes. Which is sad. Absolutely. Also, if you are running HTTP/1.1 somewhere in that chain, it might get confused. It might not be expecting that chunked request. The only HTTP/1.1 server I was using in that demo was Node; the front-end server was HTTP/2, and it all worked fine. But yeah, it could go wrong somewhere else. But this is something that's under your control. Well, you might not control the front-end server if you're a client-side developer, but it's at least owned by the company you're working for or something like that. It could get worse, though. It could be problems on the client side. Now, if you're running HTTPS, you don't have to worry about intermediate proxies. So that's one way to avoid that. Which you should be doing anyway. Yeah, absolutely. But the user might still be running a local proxy. Yeah. A lot of antivirus software or internet security software will install a certificate so it can sit as an intermediary between the user and the web, so it can monitor all the traffic and, I don't know, filter it or whatever. And again, if that buffers, it's not gonna work. And if it gets confused by the chunked encoding thing, it might just break. Bad luck. The only way... This is gonna give me anxiety, with all the things that could go wrong on the road from your machine to the server and back. Yeah, and that's why we made HTTP/1.1 opt-in, because that's where it's more likely to go wrong. The buffering thing, there's not a lot we can do about that. So you really just, you can sort of feature-test it: do that WebSocket-y thing and send a message on one, and if the server pings back, then yay, it's working. But yeah, so that's what... It's a new feature. It's an origin trial, so you can try it on a live site.
It's in Chrome 85 with the experimental web platform features flag. It could go wrong. It could be exciting. So we want people to play with this. We want people to feed back. Like, there was a bug in Chrome once, where we changed something about the way we made HTTP connections. It made it all the way to, I think, Dev or Beta, until an actual VP in Chrome suddenly went, my internet's not working anymore, or my Chrome's not working. And it turns out it was actually his whole internet that wasn't working. It turns out that for one particular brand of router in the US, a really popular one, this particular way Chrome was making these connections just caused it to crash and flip out and not make any more connections. Great. This is why we're so cautious when we change how requests are made in the browser; we're all like, we need to do this very, very carefully. So yeah, we want people to try it out. We want to find out if there's a router somewhere that catches fire. If that happens, please report it. And also, we're sorry in advance if that happens. We don't expect it to happen, and I don't think we're liable. I don't know, I'm getting into very difficult legal territory now. Yeah, let's move on, move along. But yeah, that's all I've got. Try it out, let us know what you build with it and, hopefully, well, let us know if it goes wrong, because we're definitely interested to hear why and how. But yeah, let us know what you build with that. Definitely, all to your DMs and Twitter mentions, please. Even if you don't build anything with it, just, you know, just ping Jake. Yeah, thanks, mate. You're welcome. So, we're gonna make a stream and we're gonna fetch it. So, I've got a content type header in... try that again. I've got a content... but third time's the charm. That's going to be, that's gonna send the UTF-8 bytes.