That was a lot of work. Thanks for the help. I don't think I would have done it without you. I'm glad it's out there. Oh, that? Oh, you didn't get a Ted too? Yeah, check it out. Yeah, it goes right to it. It's awesome. Well, it was a lot of fun. I just hope everyone else likes it too. Chicken wing, chicken wing. Hot dog and bologna. Chicken, chicken, chicken wing. Hot dog and bologna. Chicken, chicken wing. Chicken, chicken wing.

Oh, hey, I had this great idea. Maybe you can help me. We're gonna make an animated musical for Thanksgiving. All right? All these holidays have these songs and movies and all that, but where's the Thanksgiving musical? We need one, right? So I got this idea last night, and I think it's going to be epic. Here's my problem: I don't know how to make a movie, and I want to show it in a theater. So we got to figure that out, too. Right? I mean, big screen, everyone at Thanksgiving come rolling up into theaters and watch this movie. It's going to be sweet. So yeah, I'm going to research a little bit. I'll let you know what I find, okay? It's gonna be great. Just trust me, all right?

Okay, so I've been researching a lot. I watched six YouTube videos. I know, six. I think I'm an expert at making animated films now. Yeah, so there's a lot involved. It has to do with different departments and how the film moves through them. Here, it's easier if I just show you, all right? Take a look. They say a picture is worth a thousand words, and in the case of a movie, we have a lot of pictures, which we call frames. Typically there are 24 frames per second, which are really just pictures flying past your eyes for your brain to process and turn into movement. Every minute, that gives us 1,440 pictures that our eyes process to piece together parts of the movie, and a typical movie being 90 minutes long gives us 129,600 total frames that we're looking at the entire time.
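The frame math above is easy to check with a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope frame math for a 90-minute animated film.
FPS = 24                      # frames (pictures) per second
frames_per_minute = FPS * 60  # pictures flying past your eyes each minute
runtime_minutes = 90
total_frames = frames_per_minute * runtime_minutes

# "A picture is worth a thousand words."
words = total_frames * 1_000

print(frames_per_minute)  # 1440
print(total_frames)       # 129600
print(words)              # 129600000
```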
And if we want to tie that back to how many words we have, that's 129,600,000 words, not including spoken dialogue. But let's actually look at how movies are made. What are the parts and departments that go into animation? The typical movie is made up of three acts, and these acts each have different parts that tell the story, the conflict, the world building. Each act then breaks down into smaller pieces, which is what gets processed and moved through in the story. The smallest piece we call a shot, and a shot is basically one camera angle for one piece of a scene. Multiple shots make up a scene, which drives the story forward somehow, through character development or whatever. Those scenes make up a larger sequence, and the sequences make up the acts as a whole.

But what goes into these shots specifically? As we move through and animate the film, we have to do different things to the pictures to make them look realistic on screen. We start with what we call the layout department. Layout is typically just getting the story pieces together with camera angles. It's really blocky characters and pieces in the movie that aren't necessarily realistic looking, but it gives a sense of what the entire layout of that picture will be. That moves into animation, where we actually get more emotion from the characters. We animate the characters more, we give them more personality in how they react to things and how they move in the scene. Then we move into what we call FX, or special effects, which is basically animating anything that's not a character: things like water and smoke and fire. These are all elements of the story that need to move as well, but aren't directly tied to a character themselves. Then we move into the lighting department, which actually brings life to a lot of this stuff.
It gives us shadows and lighting, makes everything look very realistic, and is usually one of the final stages in making an animated film. The very last step is what we call stereo, or 3D, which literally takes the entire movie and renders it twice: we have essentially the right or left eye, then we shift everything a little bit and render the other side.

But let's look at how these different departments affect what our infrastructure looks like for rendering. As we go through these departments, the difficulty of what we need to render, and the compute resources we need, go up pretty drastically. From layout to animation we get some complexity just in building the scenes, but around FX, lighting, and stereo is where we get a big spike in the amount of compute resources, the number of machines and computers we need to make this film. And that happens at the shot level, but also at the whole-movie level. Not all the shots are completed at the beginning; you complete them as you go. So at the beginning you might have 10 or 100 shots ready to animate and go through this process, but you don't get all of the shots until much later. That means you're going to get this huge spike of a lot of shots being worked on simultaneously, which means we're going to need a lot of infrastructure later on in the film, and not as much early on. And if you remember the number of frames we had: there's data out there that a frame typically takes about 24 hours to render, which means if we did this all serially, it would take about 355 years just to render this film, never mind making it. So we have to do a lot of stuff in parallel. But that also means we need all this infrastructure around that later spike, and not so much at the beginning, because it's just not needed. At the end of all that rendering, we essentially get a movie.
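That 355-year figure falls straight out of the numbers above, and it shows why parallel rendering is non-negotiable:

```python
# Serial rendering is hopeless at ~24 hours per frame.
total_frames = 129_600
hours_per_frame = 24

serial_hours = total_frames * hours_per_frame  # 3,110,400 hours
serial_years = serial_hours / 24 / 365         # ~355 years

# Flip it around: to finish rendering in roughly one calendar year,
# we'd need on the order of 355 machines each grinding on frames
# in parallel, and more than that near the end-of-production spike.
print(round(serial_years))  # 355
```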
And that's the master that we send to a movie theater, and that's what they play on their screens. In a lot of cases this is a physical piece of media that we carry over, such as a hard drive, and sometimes it's sent digitally through different secure mechanisms and encryption. But that's essentially the whole picture of how the movie gets made and where some of this infrastructure fits in. So it seems like a lot of work, but it's doable. I think between the two of us, we can make this happen, we'll get it done in time, and it's going to be amazing. I just think everyone's going to love this movie so much. Okay, there are still a few things with infrastructure I need to figure out, and just the process in general. So I'm going to go get some stuff.

Okay, so let's look at those compute resources we need, because this is going to get complex. If I'm a single artist with a single machine, it's pretty simple. I render a frame, using all the CPU and all the memory I need, and I just wait for it. Then I render the next frame and wait for it, and so on. It's a linear process. That makes sense. Where this gets complex is when we have lots of artists doing lots of different shots in parallel; we're probably going to need more machines. If we add more machines to the equation, it helps, because we can distribute this work all over the place: one, two, three. But it's not that simple. Sometimes we need to coordinate this a little more than just throwing jobs out there. It's not going to work great to collect everything into a single movie at the end if we don't organize this. So we need some intelligent scheduler that's going to coordinate that for us as a shot comes in. Not only do I need to track where that shot is going to go and how many resources it needs, I need to track the assets inside of the shot itself. Here, let's say I want to render this bear. Right?
It's a vacation bear or whatever in the story, it's fine. But I need to know which version of the shirt I have, because the shirt goes through different cycles of being complete, with different textures and patterns. The bear, the character itself, will have different rigging mechanisms and animation. So the bear might change, and when I finally render, I want to make sure I get the latest version of the bear, the latest hat, and the latest shirt. But what if we just change it in the middle of the movie? Somewhere later on they say, well, it's not a vacation bear anymore, now it's a skateboard bear. Or we change the genre completely, and it's not a skateboard bear anymore, it's a superhero bear. I don't know. These things change as the story progresses. So we need something that intelligently tracks all these assets, some sort of asset management system. And we also need a scheduler that can schedule these jobs.

Looking back at the compute side of it: if I say I want to schedule this frame over here and this one over there, usually that's fine for smaller systems that have a handful of cores and some memory; I can just use the whole system. But on larger systems we can actually cut those up into smaller pieces and render two or three frames at once on a single system. To do that, we need to isolate the compute and resources from each other on the same system, and containers are a perfect fit for that, right? We can tell a container how much CPU and memory it can use, and which file systems and volumes it can mount to get to the assets. All of that will work great. So we can use containers for these high-CPU, high-memory jobs, and then use our orchestrator to put those jobs out there in order, based on when an artist finishes a shot and it's ready to be rendered.
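Carving a big machine into isolated render slots is really just bin packing on two dimensions. A minimal sketch, with all the node and job sizes invented for illustration:

```python
# How many containerized render jobs fit on one big machine if each job
# gets a fixed CPU/memory reservation? (All numbers here are made up.)
def render_slots(node_cores: int, node_mem_gb: int,
                 job_cores: int, job_mem_gb: int) -> int:
    """A slot needs BOTH enough cores and enough memory, so take the min."""
    return min(node_cores // job_cores, node_mem_gb // job_mem_gb)

# A 64-core / 256 GB node running 16-core / 48 GB render jobs:
# 4 slots by CPU, 5 by memory -> 4 frames can render at once.
print(render_slots(64, 256, 16, 48))  # 4
```

This is essentially what an orchestrator's scheduler does when you set CPU and memory requests on each job's container.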
There are a couple of things that are still questions here, like how do I get metrics from these jobs? We probably want something that scrapes them, whether that's an agent on the box or something like Prometheus pulling that information. But I think we're on our way to having something that can render a lot of this stuff at once. Yeah, so I think this is a good call. We'll look at Kubernetes, we'll look at Prometheus, and we've got to find a custom scheduler. There are other schedulers out there, but these jobs aren't web services. They're not horizontally scalable; each one takes up an entire system. So if I want to add more systems to the render farm, I can't look at CPU and memory usage, because in the ideal state, 100% of my CPU and memory is being used. A normal autoscaler is not going to work for that. I'm probably going to have to look at something like queue depth: how many jobs are waiting to be rendered? If the queue gets too long, maybe I add more EC2 instances, or I add more pods to the cluster, and I'm able to scale up that way. That actually seems like a great idea. Yeah, let's try that.

So remember all those assets we're tracking and rendering? It's going to take a lot of storage. The assets before we render them, all the iterations after, everything's just going to add up. We need a lot of disk for this. Oh yeah, and remember that scheduler we might need to build for the rendering, to put everything in order? I think for our movie we can find one off the shelf. There are a couple that exist that basically make graphs of assets and dependencies. At a larger scale that probably wouldn't work, because you'd have multiple movies going on at once and you'd need something custom. But for us, I think we're okay. Cloud storage? I don't know. I mean, I guess there are options, plenty of different things we could try.
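The queue-depth scaling idea above can be sketched in a few lines. The thresholds and worker capacity here are invented, and a real setup would feed this decision into a cluster autoscaler or an EC2 API call:

```python
import math

# Queue-depth autoscaling sketch: CPU on a render farm is always ~100%,
# so scale on how many jobs are waiting instead of on utilization.
def desired_workers(queued_jobs: int, jobs_per_worker: int,
                    min_workers: int = 1, max_workers: int = 1000) -> int:
    """Return how many render workers we want, clamped to a floor and cap."""
    needed = math.ceil(queued_jobs / jobs_per_worker)
    return max(min_workers, min(needed, max_workers))

print(desired_workers(0, 10))       # 1    (never scale below the floor)
print(desired_workers(250, 10))     # 25   (scale with the backlog)
print(desired_workers(50_000, 10))  # 1000 (capped at the farm's budget)
```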
As long as it's scalable enough. So that makes sense. Yeah, it's just a bunch of files, and as long as the artist tools and the rendering engine can access the files fast enough, cloud storage actually makes a lot of sense. It's a lot easier than this. All right, well, I think we have everything then. Let's just try the cloud storage. It's got to be easier, so let's get to work.

All right, let me tell you a little secret about how movies are actually made. First, you open your app. Second, put on your headphones. Third, cut to a montage. It's done. Ah, well, that worked. We gotta get to a theater. I'll be back. Okay, we're gonna do it. We're gonna make it. We'll get the movie out in time. Yeah, this is it. This is... oh no, they're closed. Oh, how could they be closed? No, no! Yes, of course I'm eating my emotions. I thought people would like the movie. It's a bummer that all the... I mean, if they're closed, what are you gonna do? You think so? I don't know. That streaming, it seems like so much work. Besides, how would you do that? We just built all this stuff for making the movie. How are we gonna stream it? I don't know. It doesn't seem like it would work out. Can't be too hard, right? I'm still bummed, but I don't know. Maybe it's an option.

Oh, look. Oh, good, you're here. Yeah, so I was thinking about what you said, and I think streaming might make sense. I mean, it's a lot of other stuff to do, but I've been researching it a lot, and it kind of just picks up where we left off. So I think we might be able to do that. Here, let me show you. This is what I think so far. This is what we got. Okay, so we have this movie file now that we made, and we were gonna send it to a theater, and since that's not gonna work out, we gotta figure out: how do we get this to a TV? And it turns out that TVs aren't movie theaters. They don't play the same formats, and who knows how to work those remotes.
So the first thing we need to figure out is a common format that we can play for, let's say, this TV here. TVs are a little tricky because there are different models supporting different formats, but let's just go with something that's fairly standard, or at least common for this use case. That's how we get media in there. So let's transcode this movie file into something we expect the TV can play. We could actually send this to TVs right through the mail: copy it a bunch of times, put the copies in envelopes, send them to people, and let them play them. But we don't want to do that. We want to stream this online.

So the first piece we're going to need is some sort of storage for what movies we have: some sort of metadata server, a database somewhere where we can register all of these movies and store their metadata, so people can look it up later and ask, what's your catalog, where's this information? So let's have a database and put it in there. Then what we actually want to do with the file itself is chop it up into little bits. Those are easier to push over to TVs, sort of like packets on the internet, right? We can store all these little files with some reference data: which movie it belongs to, which part of the movie it is, and where it's stored. Let's put all of those in a bucket somewhere we can access online, and tie them together into what we call a playlist. This playlist we need to store in our metadata server as well, and it gives us the order of all these files, so that when someone requests the movie, they can get those files directly and we can stream it to them. And there you go: we have a stream on a TV. Which is awesome, but not the whole story.
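The chop-and-playlist step above can be sketched like this. Real systems produce HLS or DASH manifests; the bucket name, key scheme, and playlist shape here are all made up to show the idea:

```python
# Chop a transcoded movie into fixed-size chunks and build a simple
# playlist: an ordered list of segment keys plus where they're stored.
def make_playlist(movie_id: str, data: bytes, chunk_size: int) -> dict:
    segments = []
    for i in range(0, len(data), chunk_size):
        key = f"{movie_id}/segment_{i // chunk_size:05d}.ts"
        chunk = data[i:i + chunk_size]
        # In reality: upload `chunk` to the bucket under `key`.
        segments.append({"key": key, "bytes": len(chunk)})
    return {"movie": movie_id,
            "bucket": "s3://media-segments",   # hypothetical bucket
            "segments": segments}              # playback order

pl = make_playlist("turkey_tale", b"x" * 2500, chunk_size=1000)
print(len(pl["segments"]))          # 3 segments
print(pl["segments"][-1]["bytes"])  # 500 (the short last chunk)
```

The playlist itself goes into the metadata server, while the segments live in the bucket.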
If we have a TV way over here on the other side of the country, how are we going to get the files to it? We could stream them all the way across from our centralized buckets, but that's probably not the most efficient way to do it. So what we probably want is a bucket that's close to them as well. We can say, okay, if we put a bucket here and synchronize the data, just put all the files over there, then we can serve them from a lot closer. Again, this sort of makes sense, but not when we have TVs all over the place; we can't just put buckets everywhere. For that, we're going to have to figure out something else. And here we can rely on this thing we call the 20/80 rule, where 20% of our content is viewed 80% of the time, which is actually really helpful for us. Because instead of replicating these larger buckets with all the information, we only need to synchronize that 20% out to smaller edge locations. For those edge locations we can use a CDN and put caches all over the place, which is great. So 20% of our data goes out to the edge, and the other 80% can still be fetched from the centralized locations. As media gets more popular, we can sync it out further: sync the larger content between the big buckets, and from there synchronize to the smaller CDN cache locations, and all the TVs can access it. And when we look at a global view, that still works: we can store 80% of the content centrally and spread 20% all over the place, which is actually really great. So I think that would scale for us over the globe.

We have the TV here. And if you have a TV, you probably have a phone too. There's also a good likelihood that you have a tablet, and maybe even a game console. So how do we get files for all of these things? We can't just send the same files to all of them. Sure, we can make multiple copies of them.
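Deciding which titles earn a spot at the edge under the 20/80 rule is just a popularity cut. A small sketch, with made-up view counts:

```python
# 20/80 rule sketch: push only the top 20% most-watched titles to the
# small edge caches; everything else is served from the central buckets.
def edge_catalog(view_counts: dict, fraction: float = 0.20) -> list:
    """Return the title IDs that should be synchronized to edge locations."""
    ranked = sorted(view_counts, key=view_counts.get, reverse=True)
    keep = max(1, int(len(ranked) * fraction))  # always sync at least one
    return ranked[:keep]

views = {"a": 900, "b": 850, "c": 40, "d": 30, "e": 20,
         "f": 10, "g": 9, "h": 8, "i": 7, "j": 6}
print(edge_catalog(views))  # ['a', 'b'] -> the hot 20% of 10 titles
```

As titles get more or less popular, rerunning this against fresh view counts tells the sync job what to push out and what to evict.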
But your TV probably has something like surround sound, so we need another version for that. Oh, and maybe you have HDR, so we might need another version for that. And what if you have a slow internet connection and can't use the really big files? We need another version for that too. This just keeps going on and on, and it gets really messy. But we have to track all of these files in that metadata service, so that when the right client tries to access them, it gets the file that it wants. Let's simplify it, though; that's too much.

So now let's look at what it actually takes to open the app, from a user perspective. What sort of infrastructure do we need? We're going to need a few things up front, like a signup service, because I need to be able to create an account, I need to be able to pay for this account, and I probably need to set up some sort of profile. We're not going to focus on these; this is pretty standard stuff: user databases, PCI compliance for payments, and some metadata for which profiles are part of a larger account. Let's pretend someone has an account already, and they're just going to open the app and try to play a movie. What does that look like for the infrastructure?

First step: we have the app, so go ahead and open it. The very first thing when you start an app, of course, is that you need to authenticate. So we need an authentication endpoint, where we're going to send either a token or a username and password, just to prove that you are who you say you are. This service is going to talk to what we call a session service, which will make a temporary session for us, allowing access to the other backend pieces that we might need. It's going to send basically some sort of hash down to the phone that we'll use later. And the first thing after we've authenticated is that we want to load our home screen, the main page.
For that we'll look at what I'll call a list service here. This is just going to provide all sorts of information that goes directly on the main screen. List is made up of multiple different pieces, and it combines a lot of information from that metadata service and other places. One of the things it's going to use is a CDN mapping: which CDNs are we using for which locations? If we want this to be available all over the world, we'll probably need more than one CDN. Different CDNs will use different networks more efficiently based on peering and partnerships, so we want to make sure we're picking the best CDN for the network and client that someone's using.

The second piece here is the client. That depends on what type of phone you have: what file formats does it support, what DRM does it support? All of that comes into this client piece, where I can see from a request, okay, you're going to want either FairPlay or some other DRM, with this sort of file, with this sort of audio, with HDR or not. Then list will also look at personalization: things like what's on my playlist, what's on my watch-later list, which things do I recommend to you? And personalization is such a long word, so let's not write it out. Usually people write it as a numeronym, P13N, for the 13 letters in between. So that's our personalization piece. The final one is the license service, which depends on where you're located: whether we have a partnership or the media rights to play that content in your country. Different files for different countries also matter. If you didn't know, a lot of animated movies have a different version for the United States versus, say, China or Japan or Canada. So we can fetch this stuff properly depending on your geolocation as well. All of these things work together.
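The "client" piece above boils down to matching device capabilities against the file variants we tracked in the metadata service. A sketch with invented field names and variants:

```python
# Pick the right file variant for a client, in the spirit of the
# "client" piece of the list service. Variant fields are made up.
def pick_variant(variants, client):
    """Return the first (best) variant this client can actually play."""
    for v in variants:  # variants assumed ordered best-first
        if (v["codec"] in client["codecs"]
                and v["drm"] in client["drm"]
                and (not v["hdr"] or client["hdr"])):
            return v
    return None  # no playable version tracked for this device

variants = [
    {"name": "4k-hdr", "codec": "hevc", "drm": "fairplay", "hdr": True},
    {"name": "1080p",  "codec": "h264", "drm": "widevine", "hdr": False},
]
phone = {"codecs": {"h264"}, "drm": {"widevine"}, "hdr": False}
print(pick_variant(variants, phone)["name"])  # 1080p
```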
So first, the personalization and license services will give us some content based on where you're located and what we think you'll like. Then the CDN and client pieces will tell us which playlists match those files. So we can say, okay, I'm going to send you these files, with all of the playlist information embedded, so that when you click play, it's already there. And this repeats for all the different titles we want to show on the home screen: for the hero bar here, but also for all of the other rows we might have, so you can just start playing right away.

What happens if you want to search for something? That's a slightly different problem for infrastructure. Let's say we want to do a text search for a movie. The first thing search is going to do, obviously, is work with that list service. The list service also has collections that are either automatic or curated by a person: things that go together, like all the Star Wars movies, or all the animated films. This is great, because if I search for one, we can return the others as well; hey, you probably want to see those too. So list is great at that. But full-text search is a little trickier, because we need to rely on what you're typing to match not only the words you typed, but maybe something similar to those words. So if you type in a search, let's look at how that flows. We'll use that metadata service we had before, and we can probably just put Solr or Elasticsearch in front of it for a full-text view of that metadata. We can dump the information in there on ingest: when we pull in a file, we throw its metadata into Elasticsearch. But hitting a full-text search for every query is going to be a little cumbersome sometimes.
So we might need to put something custom in front of it. A GraphQL API probably makes a lot of sense here, because we can fine-tune the query that we want to make. And the big benefit here is that 80/20 rule again: we can cache it. 80% of our searches will probably be cached, and we only have to rely on Elasticsearch for the last 20%. So I type something into the text field, it goes to GraphQL, and that looks it up in the cache. If it's there, great. If it's a cache miss, okay, now we need to actually do the full-text search and get the result you're looking for. We can combine both of these backend APIs into one main search API, and whenever I search for something, it sends back the one or two results that we actually want. So here come our results, and the movie we actually wanted to watch shows up.

And what does it look like to play a movie? Finally, we're there, right? Again, the result has all the playlist information for where the files are located, so I can just go access those files directly. But we're missing a step here: there's DRM on these files. How do we read these encrypted files? These files are stored in the bucket DRM-wrapped, depending on the client. So we need to talk to some DRM service, and remember our session token? That's actually what we're going to use. The DRM backend knows that our session is valid, based on its own communication with the session service and on which token we send, and it's going to give us a key that allows us to locally decrypt those files. We don't want them decryptable before that point. So once I finally get that key from the DRM service, I can access those files, and there we go. Now we're loading. Now we can press play, except for a couple of small things. While I'm playing this movie, I'm going to have a heartbeat, something that calls out to some other backends just to let them know where we're at.
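The cache-in-front-of-search flow described above fits in a few lines. The backend call here is a stub standing in for the Elasticsearch/Solr query behind the GraphQL API:

```python
# The 80/20 search flow: check a cache in front of the full-text index,
# and fall back to the expensive search only on a miss.
cache = {}

def full_text_search(query: str) -> list:
    # Stand-in for the real Elasticsearch/Solr call.
    return [f"result-for-{query}"]

def search(query: str) -> list:
    if query in cache:                     # cache hit: no index round trip
        return cache[query]
    results = full_text_search(query)      # cache miss: do the real work
    cache[query] = results                 # remember it for next time
    return results

print(search("bear movie"))  # miss -> hits the index
print(search("bear movie"))  # hit  -> served from cache
```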
The first heartbeat we actually want to send is back to that session service. I want to refresh my session occasionally, just to make sure it's still valid, that I still have access rights to get this content, that I can still do things. That session also works with the DRM service, so that if the DRM key changes for certain files or certain formats, I'm still able to decrypt them. All of that works in concert with the session to make sure I'm getting the right information. So what happens if I go offline and my heartbeat stops? Now what do we do? This is essentially the same problem as download-and-watch-offline. In that case, I probably get a slightly different DRM key that gives me a longer lease on watching those files. I can prefetch all of the files from wherever they're located in the best formats, and my DRM key will expire at some point, but it'll be longer-lived than if I'm watching with an internet connection. This is great, because once I come back online, I can refresh my session and DRM token, and everything else just works.

The final two pieces we want to heartbeat to are what I'll just call the watch endpoint and the metrics endpoint. Watch is super critical for things like continue watching. As I'm watching this movie, I'm constantly sending back a little bit of information: where I am in the file, which movie I'm watching, which timestamp I'm at. That really matters when I want to pick it up later, either on another device or just later in the day, or tomorrow, or something like that. And the metrics endpoint tells us a lot about the performance of streaming this movie. Is my network still fast enough? Is the format good for this device? Have I had any latency or other issues?
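The watch heartbeat above is just a small payload sent on a timer. The endpoint name and payload shape here are assumptions; a real client would POST this JSON every few seconds:

```python
import json
import time

# Heartbeat sketch: while playing, periodically report position to the
# "watch" endpoint so continue-watching works across devices.
def watch_heartbeat(session_token: str, movie_id: str, position_s: int) -> str:
    payload = {
        "session": session_token,        # proves this is a valid session
        "movie": movie_id,               # which title is playing
        "position_seconds": position_s,  # where to resume from later
        "sent_at": int(time.time()),
    }
    # In reality: POST this JSON to the watch endpoint.
    return json.dumps(payload)

beat = json.loads(watch_heartbeat("abc123", "turkey_tale", 2640))
print(beat["position_seconds"])  # 2640
```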
Being able to get that from the client side and match it up with what the backend is seeing is hugely important for understanding the performance of the entire stack. I can get metrics from both sides and then get a bigger, better picture of what's happening in the service, and that's just a huge win. So there we go. Now we're watching a movie. All of those pieces come together to put this movie together and stream it to my phone.

Okay. So I mean, that's a lot, right? There's a lot of stuff going on there. We have to track all the assets, we have to distribute them, but we can rely on cloud services for that. There are CDNs that just exist and can distribute it. And we already have the file; we can just upload it, and the service does all that transcoding and chopping up for us. All that stuff's going to make sense. It's going to be a little tricky with all the different platforms and endpoints, but I think we can do it. And the infrastructure: do it all in the cloud, use the services that exist, RDS, Kinesis, and of course we'll use some Lambda and Step Functions and Kubernetes and all that stuff. That's all going to be there. But really, at the end of the day, we just want to get the movie out there, and this way, instead of the one theater, we can go everywhere, right? Directly to the consumer. Oh, and think, they'll pay me five bucks a month. I mean, that's not bad. I think we should do it. All right, let's go. Let's get started. Yeah, no, I think it's ready. The website's up. I mean, it's going to go live here. Look, that's it. I mean, you can get to it right there. Here we go.