Hello, folks. You may remember that in the last entry I talked about the idea of chunking the downloads as they came in. So imagine you've got the large video and the large audio file, and you want to chunk them up into, say, half-megabyte pieces, because we don't want to be pulling that entire video, however big it is, out of the cache end to end just to get a slice out of the middle. Does that make sense? Maybe not. Well, join me on the journey through the range request. Now, in case you haven't seen the series to this point: when the player makes a request for a video, or part of a video, it makes a thing called a range request, which says, get me bytes 0 to 1,000, or 2,000 to 5,000, whatever you like. And in order to satisfy that, our service worker has to handle the range response: it needs to take in a range request and satisfy it from the cache with a range response. As such, we don't want to be pulling out a large amount of data just to get a small chunk of it. The whole idea of chunking the files in the first place is to make sure that we can pull out as little information as reasonably possible — just the little bit we actually need — and pop that out. Now, I'd like it if the Cache Storage API actually supported range requests, but it doesn't, so we do it ourselves. So, as we left things last time — in fact, let me show you where things had got to last time — we have a copy of a video in the cache that looks like this: 0, 1, and so on. These are all the chunks for the 720p video. Let me show you what happens when the Shaka Player starts to request video. Right, let's have a look at one of these requests. Here we are — that first one is the OPTIONS preflight, we don't need that. We want... there we are. There's the range part of the request. It says it wants bytes 2614503 to 4152439. OK, so let's have a look at the service worker. This is the fetch handler.
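As a rough sketch of what's involved in reading a range like that, a Range header value can be pulled apart with a little string matching. This is a hedged illustration — the function name is mine, not necessarily what's in the project's source:

```javascript
// Parse a Range header value like "bytes=2614503-4152439" into
// numeric start/end values. The end part is optional in real
// requests ("bytes=0-"), so a missing end comes back as null.
// (Hypothetical helper — the project's own parsing may differ.)
function parseRangeHeader(rangeValue) {
  const match = /^bytes=(\d+)-(\d*)$/.exec(rangeValue);
  if (!match) {
    return null; // Not a simple single-range request.
  }
  const start = Number(match[1]);
  const end = match[2] === '' ? null : Number(match[2]);
  return { start, end };
}
```

So the request above would come back as `{ start: 2614503, end: 4152439 }`.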
So whenever a request is made by the page, it comes to the service worker's fetch handler, which is registered here as onFetch. Now, the first thing I do is hand the request off to my ranged response class, which is exactly where we'll be looking in but a moment, and I ask it: with this request that you've got, can you handle it? If it's not a range request, it's going to say no. And if it is a range request, it's going to ask: do I have enough bytes in the cache to actually satisfy that particular range? Because if I don't, you should definitely fall back to the network. So we basically start with: can you handle it? If you can, sure, go ahead and handle it. If you can't, check to see whether there's a match for the file in the cache — because it may not be a range request, it may be a request for something like an icon, or one of the album arts, or the poster frames, in which case that might exist in the cache and we can respond with it directly, without a range response. And then, last but not least, for everything else that we don't have, we fall through to a standard-issue fetch: go to the network, get it, and bring it back. All the interesting stuff, I think, is happening inside this ranged response class. This is it here, and you can see the canHandle method. The first question is: is it a range request? Because if it isn't, we don't care. So we go to the request, look at its headers, and ask: is there a Range header? If there is, great; if not, we bail. And assuming it is a range request, the next thing we do is look at the URL, and in theory get the whole file from the cache — if we could just respond with that, we would.
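The shape of that dispatch, sketched as a plain function with the collaborators passed in so it can be followed (and run) outside a real service worker — the names `rangedResponse`, `cacheMatch` and `networkFetch` are placeholders of mine, not the project's actual identifiers:

```javascript
// Sketch of the onFetch decision tree: try the ranged-response
// handler first, then a plain cache match, then the network.
// All three collaborators are injected to keep the flow testable.
async function respondTo(request, rangedResponse, cacheMatch, networkFetch) {
  // 1. A range request we can fully satisfy from our stored chunks?
  if (await rangedResponse.canHandle(request)) {
    return rangedResponse.create(request);
  }
  // 2. A non-range asset (icon, album art, poster frame) in the cache?
  const cached = await cacheMatch(request);
  if (cached) {
    return cached;
  }
  // 3. Everything else: a standard-issue trip to the network.
  return networkFetch(request);
}
```

In the real service worker this logic would sit inside `event.respondWith(...)` in the fetch handler, with `cacheMatch` backed by `caches.match` and `networkFetch` by `fetch`.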
The chances are we probably don't even need that code any more. I might just remove it, because if it is a range request, responding with the entire response would be a bit bizarre. So let's assume that I will, in fact, drop that code. What happens next is we ask: what is the range request actually asking for? I pull out the start and the end. Because the Range header is a string — it's "bytes=this-to-this" — there's a little bit of string parsing in getStartAndEnd, and we pull out two values: the start byte and the end byte. Now, if you remember, I was chunking the files up into — I think it was — half-megabyte chunks. So what I do is say: given the start and end values, which chunks do we need? Do you have, say, file_9 and file_12, based on those values? And I assume that if you've got 9 and 12, you've got 10 and 11 in the middle — which is an assumption on my part, but there you go, it works out fine in my particular situation. So we basically ask: do you have a response for each of those chunks — 9 through 12, or whatever? If the answer is no, we just bail. If the answer is yes, then I have to start diving in a little bit more, and looking at each of those chunks — particularly the last chunk. Remember, back in the bit where we were storing each of these chunks, we might actually have stored a partial chunk: the final chunk might not be exactly half a megabyte, it might be, say, 200 bytes. So I store the actual chunk size as a header against that chunk, so that later on I can ask: what is the actual size of this chunk?
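The chunk-index arithmetic is simple enough to sketch. This assumes half-megabyte chunks and cache keys of the form `<url>_<index>` — my guesses at the scheme, based on the description above, not the project's literal code:

```javascript
const CHUNK_SIZE = 512 * 1024; // Assumed half-megabyte chunks.

// Work out which chunk files a byte range spans, and the cache keys
// to look up. For bytes 2614503 to 4152439 with 512 KiB chunks,
// that works out to chunks 4 through 7.
function chunkKeysForRange(url, start, end) {
  const firstIndex = Math.floor(start / CHUNK_SIZE);
  const lastIndex = Math.floor(end / CHUNK_SIZE);
  const keys = [];
  for (let i = firstIndex; i <= lastIndex; i++) {
    keys.push(`${url}_${i}`);
  }
  return keys;
}
```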
Because even if you think it's 512K, it might actually only be 200 bytes, so I need to know. And what I do is ask: what was the requested byte range, and what's the actual final byte that I have stored in the cache? Now, if I've stored the entire thing offline, we're good. If I've only stored part of the file — like in the case of prefetching the first 30 seconds — eventually I'm going to run out of data that I've prefetched, and that's where this check really helps us out. It's the bit where we say definitively whether we can or cannot handle the range, even if we've got part of the data. If we can't satisfy the entire byte range, we should say no. So if the final byte in the end chunk is less than the requested end, we just say no; otherwise, we say sure. And that's the first part: checking whether you can create a range response from this range request. Then there are a few helpers in here, and here's the other interesting one: actually creating the ranged response. In fact, it's in create, isn't it? Yes — createFromChunks, that's probably the more interesting one. You can have a look through all the source code — it's on GitHub — and have a proper look through this file to see all the different things that I'm doing. So, createFromChunks. Again, we've got this video in loads of chunks, and we want to get just the chunks we need, pop those together, and use that to create our response. We go through and, given the request, find out which chunk indexes we need. If the start and end indexes are exactly the same, we can shortcut the next process, because it's just one chunk — you might as well pull that one chunk out, slice the bit you need, and send that back. But if you need multiple chunks, that's the next bit.
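That final-byte check can be sketched like so. I'm assuming the stored size lives in a custom header on the cached chunk; the header name and function here are my own framing, not the project's:

```javascript
const CHUNK_SIZE = 512 * 1024; // Assumed half-megabyte chunks.

// Decide whether the cached chunks can satisfy a requested byte
// range. lastChunkIndex is the index of the final chunk we have
// cached, and lastChunkActualSize is the real byte count stored
// against it — the final chunk may be partial (e.g. only 200 bytes),
// so we can't just assume a full CHUNK_SIZE.
function canSatisfyRange(requestedEnd, lastChunkIndex, lastChunkActualSize) {
  const finalByteAvailable =
    lastChunkIndex * CHUNK_SIZE + lastChunkActualSize - 1;
  return finalByteAvailable >= requestedEnd;
}
```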
And there's just a bunch of byte wrangling here, which isn't all that much fun. But the bit that's probably of interest, at least to me, was copying the buffers. We make one big buffer sized for the chunks that we've got, and we copy each of the chunk buffers in. So now we've got one big buffer with all the bytes from the chunks that we pulled out. Say it's 9, 10 and 11: get all those bytes, make a buffer that's got enough space for 9, 10 and 11, and shove them all in there. That's the thing we send across. Then we can create the ranged response from that: given the start and the end, we slice what we need out of that big buffer. And you can see that there's an offset value here, because that buffer starts part-way into the file. So if I said I needed bytes 10,000 to 11,000, but I've only given you an array of 2,000 bytes, the indexes you're actually looking for need to be offset back by however far into the file the first chunk starts. It's all wrangling, I'll tell you — it's all wrangling. Eventually, though, thankfully, we can see that the response comes back, we do have the bytes, and it all does the thing that we wanted it to do. So it does mean that we are responding from the cache with the chunks that we need, and everything is working. Now, what else can I explain to you about all this? Well, there's one little thing that I thought was quite interesting, which I didn't realize at all before I got to this. These video files, as you'll see, are coming from storage.googleapis.com — they're actually being stored in a Google Cloud Storage bucket, which is great. But it does mean that I'm making CORS requests, because they're not on the same origin as me; they're not coming from localhost:8080.
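The buffer-copying step looks roughly like this: one big `Uint8Array`, each chunk copied in with `set`, then a slice whose indexes are offset back by where the first chunk starts in the file. This is a sketch of the technique under the same half-megabyte assumption, not the project's exact code:

```javascript
const CHUNK_SIZE = 512 * 1024; // Assumed half-megabyte chunks.

// Join the chunk buffers into one contiguous buffer, then slice out
// just the requested byte range. firstChunkIndex tells us how far
// into the file the joined buffer starts, so the absolute byte
// positions must be offset back into buffer-relative indexes.
function sliceRangeFromChunks(chunkBuffers, firstChunkIndex, start, end) {
  const total = chunkBuffers.reduce((sum, b) => sum + b.length, 0);
  const joined = new Uint8Array(total);
  let offset = 0;
  for (const buffer of chunkBuffers) {
    joined.set(buffer, offset); // Copy each chunk in, back to back.
    offset += buffer.length;
  }
  const base = firstChunkIndex * CHUNK_SIZE;
  // Byte ranges are inclusive of the end byte, hence the + 1.
  return joined.slice(start - base, end - base + 1);
}
```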
And I have had a fairly troubled relationship with CORS. It never seems to sink in for me. It's just one of those things that I'm not very au fait with, I'll be honest with you. I was used to the idea that you have Access-Control-Allow-Origin, and most people put * if they're comfy with that, or they'll put in the specific origin. What I didn't realize is that you also need to say whether the request can be made with credentials — that's things like cookies and whatnot — and which headers are allowed to be exposed. You can see here in Access-Control-Expose-Headers that one of them is the X-From-Cache header. Now, the reason I've got the X-From-Cache header in here is that there are times when I'll need to know that a particular chunk came from the cache. Why? Well, the Shaka Player is one very good reason: it does its bandwidth estimations based on each of these responses coming back in. Cool? Okay. But the problem is that if a response comes from a cache, it's going to come back really quickly, and so Shaka is going to estimate that the bandwidth is super good. That doesn't sound too bad, and it isn't bad if you've got the entire video cached for offline, because then it doesn't really matter what it thinks the bandwidth is — and because I've also locked it into a particular representation of the video. But what if you're in the situation where you've preloaded, say, the first 30 seconds of the video, and then later on you're dropping back to the network and adapting? In that situation, you can't get away with it. So what you have to do is say: this first 30 seconds — ignore these for the bandwidth calculations. And that's what the X-From-Cache header is really doing. It's telling Shaka: don't include these chunks; they're misrepresentative. They're not the kind of chunks you want to be accounting for when you do your bandwidth estimations.
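For reference, on a Google Cloud Storage bucket that kind of CORS policy is set with `gsutil cors set` and a JSON file along these lines. This is an illustrative sketch — the origin and the exact header list here are my own, and you'd swap in your own values; `responseHeader` is the field GCS uses to decide which headers it will allow and expose cross-origin:

```json
[
  {
    "origin": ["http://localhost:8080"],
    "method": ["GET", "HEAD", "OPTIONS"],
    "responseHeader": ["Content-Type", "Range", "X-From-Cache"],
    "maxAgeSeconds": 3600
  }
]
```

The important part for this entry is that `X-From-Cache` appears in that list; without it, the header gets stripped, as described next.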
But in order for that to work — in order for that header to be passed through — you actually have to say, on the server side, that it's an exposed header. Why? Well, what happens is this. My page makes a request. The request is handled by the service worker, and the service worker is using the responses for the videos that it originally got from the cloud bucket. So when it builds the response, it asks: which headers am I allowed to send? And I wanted to add the X-From-Cache header, which you'll see in the code here. But if the server side didn't say that it was an exposed header, the service worker will strip it out — it filters out any headers that the server didn't say could go with that particular response. So even though I'm adding it in the service worker, if the server side doesn't say you're allowed to put that header on, the service worker will filter it out. Which makes sense: I can't just take a cross-origin resource, add on my own custom headers, and then expect it to ship out exactly as it was. I have to make sure that the server said those headers were allowed to be there in the first place. So there you go — that's something I found quite interesting. I kind of followed the chain of logic to the end, but it did take me a little while to go, oh, I wonder if it's something I need to configure on the server side to say that those headers are allowed. I was looking at the responses going: the server's headers are coming in, but none of my headers are coming in — why would my headers get filtered out? And it turns out that, yes, if the server doesn't say you're allowed to have them there, the service worker will let you add them, but it will then filter them out before it sends the response back to the page. So there you go. Now we've all learned something thrilling.
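The service worker side of that — tagging a cached response before handing it back to the page — might look something like this sketch. The header name is the one from the video; the function itself is my own framing:

```javascript
// Build a copy of a response, tagged so the page (and Shaka's
// networking layer) can tell it was served from the cache. Note:
// on a cross-origin resource the page can only read this header if
// the server listed it in Access-Control-Expose-Headers.
function tagAsFromCache(response) {
  const headers = new Headers(response.headers);
  headers.set('X-From-Cache', 'true');
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers
  });
}
```

`Headers` and `Response` here are the standard Fetch API classes, so this runs in a service worker and, for what it's worth, in modern Node as well.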
Nobody ever says that about CORS, do they? "Do you know what I did today? I did CORS, hooray!" It's not a fun area, but it's a very necessary one for making sure that resources are loaded correctly and all the rest of it. So I see its value — I just don't enjoy working with it. It's just one of those things. Anyway, that's the service worker side of things for handling the chunks and for putting things back together into ranged responses. Definitely have a read through that code. I've tried to make it as verbose as possible, so that in six months, when I look back at it, I'm not completely confused. And hopefully that will mean that if you look at it, you won't be completely confused either. But if you've got questions, well, you know exactly where to put those: down there, as usual. And don't forget that you can subscribe and say hello to me on Twitter or wherever you like. And I will catch you, oh yeah, in the next entry. Bye. Hello, thanks for watching. If you enjoyed this video, well, you may enjoy the other videos that we make too. So don't forget you can subscribe and you'll get notified when we push out a new video. And there are more videos over there, or down there, depending on how you're watching the YouTubes right now. Definitely click on those.