Thank you. I'm Lukas Stockner. I've been one of the Cycles developers for the past few years, and for almost a year now I've been working with Theory Studios on their CGI productions. So in this talk I'm going to talk a bit about what I did for them, the development work, and the challenges they faced during production. Now, I was convinced to do this talk yesterday, so it may not be perfectly fluid; please excuse me if something's not working right, but I guess it will be fine.

The main project we've been working on during the year I've been there was CGI for Man in the High Castle. Who's seen it? Maybe raise your hands? Oh, that's quite a lot of people. For those who haven't seen it, it's an Amazon Prime Video production. It's an alternative history series based on the premise: what if Nazi Germany had won World War II? It's set in the 60s of that fictional universe. Of course, that's not the only project we've done. There's some other stuff, like Ray and Clovis, maybe some of you have seen it on YouTube, and many other projects going on, like some work for Silicon Valley, some VR gaming, and Man in the High Castle here. I've got a short clip, a cut-down version of the demo reel. You can find the full thing on YouTube, but for time reasons I had to cut it down a bit. No, it's fine. That's from a music video that was produced recently.

All right. So, the Theory Studios pipeline, what's going on there? One thing that's somewhat special about Theory Studios is that it's a fully virtual studio. It's based on online collaboration, everybody's working remotely, which allows us to be very flexible. It also allowed me to work for them, because it's an American studio and I'm from Germany, so I was quite lucky. And it's Blender-based: Blender is used for modeling, unwrapping, lighting and rendering. There's some other software involved, for example Substance Designer for shading, and some simulations are done in Houdini, but mostly the pipeline is Blender-based. Rendering is done in Cycles, of course. And that's also a bit special: we're rendering in the cloud, on Amazon AWS, using Deadline for render farm management.

Rendering in the cloud definitely has some advantages. It allows you to be very flexible: you can just rent servers whenever you need them, as many as you need. It reduces the investment you need for your studio, because you don't have to buy 100 servers, you can rent them on an hourly basis. And it provides insane performance, because you can just say, okay, we're going to rent 500 servers for one hour. It's still reasonably cheap, but you couldn't just buy 500 servers for your studio. So instead of having to manage your rendering time, where you say, okay, the shot's finished, it will take a week to render, meanwhile we'll do the next one, you can just start up as many servers as you want, get the shot rendered in an hour or so, get an instant preview and continue working on it. Just as an example for the price: you can get 64 cores and 200 gigabytes of RAM for under $1 per hour. You can do a lot of rendering on that, and it's still cheaper than buying the hardware. So obviously it's quite interesting. I think even for studios that have people working on-premises, cloud rendering is definitely interesting, and it's only going to get better in the future.
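To make the economics concrete, here is a rough back-of-the-envelope sketch. The only number taken from the talk is the roughly $1 per instance-hour figure; the frame count, per-frame render time and instance count are made-up assumptions purely for illustration.

```python
# Rough cloud-rendering cost and turnaround estimate (illustrative numbers only).
# The ~$1/hour per 64-core instance is the figure from the talk; everything else
# (frame count, minutes per frame, instance count) is a hypothetical example.

hourly_rate = 1.0        # USD per instance-hour (approximate)
frames = 200             # hypothetical shot length
minutes_per_frame = 30   # hypothetical render time on a single instance
instances = 100          # how many instances are rented in parallel

total_machine_hours = frames * minutes_per_frame / 60.0
wall_clock_hours = total_machine_hours / instances
cost = total_machine_hours * hourly_rate

print("machine-hours:", total_machine_hours)                      # 100.0
print("wall-clock hours on %d instances: %.1f" % (instances, wall_clock_hours))  # 1.0
print("total cost: $%.0f" % cost)                                 # $100
```

The point is that, under this kind of hourly pricing, the total cost only depends on the machine-hours, so renting many instances mostly buys you turnaround time rather than extra expense.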
So what challenges did we face, especially on Man in the High Castle? As you've seen, the scene shown in the demo reel was quite complex, and even a single scene can have tens of gigabytes of textures. One scene in particular, I'll show it in a few slides, uses 140 gigabytes of RAM during rendering. So that's kind of interesting; I can't open it on my computer. In total, for the entire production, we're talking about over 10 terabytes of files. Billions of polygons in a single scene: for example, in the exterior shot you saw in the demo reel, every single blade of grass is modeled, as models on a particle system. And, yeah, stuff can get quite complex, as you can imagine. Also in texturing: insane resolutions, 16K textures everywhere, rendering beyond 4K because you want some headroom for compositing, and the final product is 4K. In some cases, thousands of lamps, hundreds of thousands of people in a single scene. Of course you want them to be detailed, because with a modern rendering engine you can. And since we have the software, of course you want complex lighting, you want full global illumination, you don't want to cheat by faking stuff, you want everything to look as good as possible.

So what are some example scenes? Well, this is concept art from the High Castle. As I said, that's real grass, those are reliefs, everything is a 3D model here. So that's quite challenging. That's also concept art for one of the scenes: every single person in there is a 3D model, and not just some low-poly game-style asset, but actually this detailed. So, yeah, as you can imagine, it gets quite interesting to get that rendered. Of course, with regular Blender it might work, but it's extremely tricky. It's not technically impossible, but in practice it pretty much is. Some of the solutions you can find for this: have good artists, have people who know what they're doing, have more rendering hardware. But probably, in the end, one of the most efficient solutions, and that's where I come in as a coder: why not improve the software? The software is not good enough.

So, what are the larger projects I worked on during my time at Theory Studios? Well, there's one big component that most of you have probably seen already: denoising. Also the render pass API, UDIMs, network rendering, light sampling, persistent data, and a Massive plugin. I'll go over each of these in detail.

First, denoising. What's it about? Well, let's say you render a material shot. That's not from Theory, that scene is from BlendSwap, I guess; this particular file is from the Blender manual. As you can see, with regular Cycles it looks good, but there's still quite some noise in there. And if you enable the denoiser, it looks like this. It's quite a decent improvement. It's not perfect, of course; if you look closely, you can see some splotchy artifacts here and there, but it's an improvement. So, yeah, it's a post-processing filter in Cycles that removes noise after you're done rendering. It's included in Blender 2.79, so you can already use it; probably most of you have tried it at some point. It started as a Google Summer of Code project in 2016, which I worked on, but in total it was almost two years of work. I started looking into denoising in August 2015, and now, in 2.79, it's released. So quite a lot of work, but I guess it paid off in the end.
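Enabling the denoiser is just a checkbox per render layer. In bpy terms it's roughly the following; this is a minimal sketch, and the property paths are how I recall the 2.79 Cycles add-on exposing them, so treat the exact names as an assumption.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# In 2.79, denoising is a per-render-layer setting (Render Layers > Denoising).
for layer in scene.render.layers:
    layer.cycles.use_denoising = True
    layer.cycles.denoising_radius = 8       # filter neighborhood radius, in pixels
    layer.cycles.denoising_strength = 0.5   # how aggressively to smooth
```

Larger radius and strength remove more noise at the cost of some detail, so it pays to keep the values moderate.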
Next point is the render pass API. It's an internal change in Blender, so it's not something users really have to care about, but what it does is allow more than 32 render passes at the same time. Right now in Cycles we have a lot of passes, like diffuse indirect, ID passes, but that's it; we couldn't really add more because of internal limitations. So I decided to work on that and lift the limit. Now the number is unlimited, but sadly there's no user-visible change yet; there is one change, but it's only some internal debug passes. In the future, though, we can and will get more output passes out of this.

Then the next one is an interesting one: UDIMs. Just quickly, hands up, who knows what UDIMs are? Who has used UDIMs in Blender before? Who would like to use UDIMs in Blender? Yep, so that's for you. Basically, for those who don't know, UDIM is a naming convention for storing a texture as multiple images. Each image is its own region in the UV editor; I have a screenshot that shows it in more detail. What it gives you is more flexibility when unwrapping. For example, you can say, okay, this part of the model needs more resolution, I want to use a high-res texture there, and you just use a high-res image in that specific tile. You don't have to cram everything into the zero-to-one space; you can spread your model across multiple UV regions. It's supported by other tools, for example Substance Designer, which is why it was relevant for Theory. In Blender you could get it to work with some extremely complex node setups, but you didn't really want to, because it was just so complex and it didn't really work well.

So here we see an example from the Theory build that I did. Here you see the different tiles: UDIM tiles start at 1001, then 1002, 1003, 1004, and so on, so you can see the numbering. Each one of those is one file on disk. And as you can see, different parts of the model are laid out into different tiles, which just gives you more flexibility. For example, you can say, okay, the wood needs more resolution, so you store that file at a higher resolution and leave the other ones as they are. And the node setup is as easy as it gets: one Principled BSDF, four input textures, one normal map node, and that's it. Works just fine.

What is supported right now? Automatic loading: you just select one file, and it finds all the others and loads them all at once. Linking is also fully supported, which is of course important in production. You can see the UDIM grid in the image editor, and you can see all the images at once there, which is also not possible in current Blender. Rendering in Cycles works, and so does the material preview in the viewport. What does not work yet, sadly, is texture painting, and that's supposed to say baking as well. It's work in progress and will be supported eventually. For now, since the textures were generated externally in the Theory pipeline, it was good enough.
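To illustrate the naming convention itself (this is just the standard UDIM math, not Theory's or Blender's actual implementation): tile 1001 covers the usual 0 to 1 UV square, the numbers count up along U in rows of ten, and each tile is its own file, typically named with a token like <UDIM> in the filename.

```python
import math

def udim_tile(u, v):
    """Standard UDIM tile number for a UV coordinate.

    Tile 1001 is the 0-1 square; the number increases by 1 per step in U
    (ten tiles per row) and by 10 per step in V.
    """
    return 1001 + math.floor(u) + 10 * math.floor(v)

def udim_filename(template, u, v):
    # e.g. "wood_diffuse.<UDIM>.png" -> "wood_diffuse.1012.png"
    return template.replace("<UDIM>", str(udim_tile(u, v)))

print(udim_tile(0.3, 0.7))    # 1001, the usual 0-1 square
print(udim_tile(1.5, 0.2))    # 1002, one tile to the right
print(udim_filename("wood_diffuse.<UDIM>.png", 1.5, 1.5))  # wood_diffuse.1012.png
```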
The next one I started working on fairly recently: network rendering. The idea there is, let's say you have an animation with 100 frames and you want to render it as fast as possible. Of course, you can get 100 machines and have each machine do one frame. But what do you do if that's not enough? The obvious idea is to use more than one machine per frame. But how do you do it? There are two workarounds you can use currently. The first one, the one we ended up using, is splitting the frame via the render border: each machine renders one fourth of the frame, and then an external tool stitches the EXRs back together into one frame. That works, but it's not really great, because one region of the image might take much more work than the others, so one machine ends up finishing late, and you have to use an external program. It's just annoying. The other one is actually supported by Blender now; it's not exposed in the user interface yet, but Sergey added it to the core: splitting the samples. So you say, I want 400 samples, each machine does 100, and then they get added up. Works in theory, but again, it's not nice, you don't want to do that.

So the solution: add network rendering directly to Cycles. Interestingly enough, Cycles already has code for network rendering; it has had it for four or five years, I guess, but it never really worked and was never really enabled. So the obvious solution is: fix it. It sounds easy. It's not that easy, but I kind of got it working. The idea is that you have multiple computers running a Cycles server, just a small tool, and one computer running the full Blender. You just select the network device and start rendering. And just like when you have multiple GPUs and each GPU gets a tile, with network rendering each machine gets tiles. So you have like 200 active tiles at once, which looks kind of funky when you see it for the first time. And, yeah, it just works. The current state is that basic rendering works, but advanced stuff like denoising, mixing CPU and GPU, mixing operating systems, or using OpenCL doesn't work yet. That will come in the future, of course, I'll continue working on it. The main problem right now is network bandwidth. If you think gigabit Ethernet is fast: we actually were bandwidth-bottlenecked on 10 gigabit Ethernet. It can be improved, but networking is just slow compared to the data transfer inside a computer, so it's probably going to be the main bottleneck in practice.

Then the second one that's currently in development and not really finished yet is light sampling. This is probably one of the largest weaknesses of Cycles right now: having many lights at once. Five lights, ten lights, that's just fine, but once you try to add 100 lights, it gets more complex. You have two options: you can use path tracing, which ends up noisy, or you can use branched path tracing with sample-all-lights enabled, which gets slow. You don't want either. So we need improvements. There's not just one magic thing you can implement so that everything works perfectly and everybody's happy, but there are some things you can do. For example, the first two, the light sampling threshold and constant emission folding, were added to 2.79, so they're already released. The other three are in development and not really working right now, but they will work in the future, and hopefully they will let you use thousands or even hundreds of thousands of lights in a single scene without really slowing down the render.
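Of those, the light sampling threshold is the one you can already try in 2.79. As a minimal sketch in bpy (the property name is the 2.79 Cycles setting; the value here is just an example, not a recommendation from the talk):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Probabilistically terminate light samples whose estimated contribution falls
# below this threshold. 0.0 disables the optimization; larger values render
# faster in scenes with many lights but can slightly bias very dim areas.
scene.cycles.light_sampling_threshold = 0.05
```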
The next one is actually a pretty interesting point: persistent data. I don't know, who has ever used the persistent images option in Cycles? That's a few people, okay. So basically, right now, for every render, Cycles re-syncs all the data. When you press F12 and you see it synchronizing and it takes five minutes, that's what it's doing. And often that's unnecessary: for example, if you only move the camera in an animation, you don't actually need to re-sync all the meshes, but Cycles does it anyway.

Interestingly, it does not do that in the viewport, which is why some people ended up doing animations by taking screenshots of the viewport, changing the frame and taking the next screenshot. It kind of works, but you shouldn't have to do that. So the goal is to extend persistent images to all kinds of data in Cycles. Again, the status: it kind of works, but there were some pretty serious bugs with it and I can't guarantee there are no bugs anymore. To do it properly we need some changes in the dependency graph, so it will probably have to wait for 2.8. But that's fine, we'll get it eventually.

And the last large item I did was support for Massive. I don't know, who has used Massive before? That's not a lot of people. Basically, Massive is crowd simulation software. It was originally developed for the Lord of the Rings movies, and you can simulate large crowds of actors with it. It's really great, but the problem is that it used to be tied to specific software: for example, there's Massive for Maya, but there is no Massive for Blender. So what do you do if you want a crowd in Blender? Well, what we ended up doing for Man in the High Castle was: you use it with Maya, then you bake the result to Alembic, then you import the Alembic into Blender. It works, but again, it's cumbersome, it takes time, it's slow, you don't want to do it. So we got in contact with the Massive people and started working on it, and they added a universal plugin, which means you can export from Massive and there's a plugin that can read the data. We integrated that plugin into Blender, so we now have a modifier in our version of Blender that can read Massive files and play them back basically in real time in the Blender viewport, which is pretty neat. Here's a screenshot. As you can see, the quality is not great, I did that on my laptop yesterday, but I guess you can see what's going on: we have people walking around, and this runs at about 15 frames per second. It will get better in the future, but, yeah, it works.

Of course, that's not everything; there were also some smaller changes. For example, saving the viewport preview. That's an interesting one. Before, I mentioned taking screenshots of the viewport render; again, you don't want to do that. So I added a button: you click it, and it saves the current state of the viewport render as a render result. It's a small change, but it's pretty neat. Also, for example, highlighting invalid noodles, a noodle as in the node editor. I've seen this quite often: people build this huge shader graph, then they connect a shader output to a color input, and that doesn't work, it just does nothing in Cycles. So what this patch does is show that connection in red, because you don't want to do that. Then some internal changes to how sampling works; they can help in some cases, but they're somewhat complex to use, and you can pretty much screw up your render if you use them wrong, so it's not really great right now. And some really small changes, like a hash option in the morph node, or getting an error when an image is missing, so you don't just get a magenta render after four hours; it just tells you, hey, you're missing a file, I'm stopping now. And rendering your viewport at a lower resolution, because if you have something like a 4K screen on a small laptop, that's pretty bad right now. That one's actually in master, so not in 2.79, sadly.
This one also kept annoying me, and I think it annoys other people too, because I've read about it several times: when you have a keyframe animation, you move something, you forget to set a keyframe, and you change the frame, it's gone. So what this does is highlight the property and say, hey, you might want to save this. And this one's also interesting: who actually knew that you can change the direction of the sun in the Cycles sky node? Yeah, that's not enough people. So what this does is let you specify a sun lamp, and it will copy that lamp's orientation into the sky node, because usually you want to use the sky plus a sun; there's a small sketch of that idea at the very end. Tiny stuff, but in the end it adds up, and it's just more convenient for the artist.

Of course, that's not all, but at some point you run out of slides. So if you're thinking that some of that sounds interesting and you'd like to check it out, well, you can. It's open source. You can just go to GitHub and look at the theory-public branch. Don't let the name public fool you, there's nothing we're hiding: the only changes not in there are splash images and compatibility tweaks for previous internal versions that weren't released. Everything that's functionally different is included.

Now, of course, you may be asking: why is this a branch? Why isn't everything in Blender right now? Well, that's the reason: it's not 100% safe for production work. It kind of works when it works, but when it doesn't, you're in trouble, and you don't want to be in trouble in the middle of a production, especially when you don't have anybody you can just send a mail to and expect a fix the next day. So everything is work in progress, things might change, things might break. You can try it out, of course, but maybe don't use it for production yet. And of course, not all of this will end up in master. In many cases there's a better solution that will take more time and will get to master eventually, and for now I just had to hack it. In some cases it's simply not good enough yet. In some cases we might just decide, no, we don't want to do that at all. So that's why it's just a branch for now. Just to be clear, this is not supposed to be a fork or anything, just a collection of useful stuff that you might want to check out.

So, yeah, that's pretty much the presentation from my side. I hope it was interesting, and I hope I covered some stuff that's interesting to you.
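As promised, here is a rough idea of what copying a sun lamp's orientation into the sky node could look like in bpy. This is my own sketch of the concept, not the actual patch from the theory-public branch; it assumes a 2.79-style scene with a sun lamp object named "Sun" (a hypothetical name) and an existing Sky Texture node in the world node tree.

```python
import bpy
from mathutils import Vector

scene = bpy.context.scene

# A sun lamp shines along its local -Z axis, so the direction *towards* the
# sun is the lamp's +Z axis transformed into world space.
sun_obj = scene.objects["Sun"]                      # hypothetical lamp name
sun_dir = sun_obj.matrix_world.to_quaternion() * Vector((0.0, 0.0, 1.0))

# Find the Sky Texture node in the world node tree and point it at the sun.
for node in scene.world.node_tree.nodes:
    if node.type == 'TEX_SKY':
        node.sun_direction = sun_dir
```

The feature described in the talk essentially does this copy for you once you pick a sun lamp in the node, so sky and sun stay consistent without any scripting.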