Welcome to my talk, Introduction to Cycles Internals, and without further ado, let's get started.

First of all, a brief overview of what the talk is going to be about. First, we have a brief introduction, then we look at what we're actually trying to do in our renderer, what problem we're trying to solve. The next step is how we can solve this problem, what do we have to do? Then, of course, how does Cycles do this internally? And then some selected interesting details which I often see coming up in discussions and which I think will be interesting to at least some people. Some of these topics are: how does ray intersection work, what is the BVH? What is multiple importance sampling, and what does it have to do with lights? How does shadowing work? What is bidirectional path tracing, why does Cycles not do it, and might we want it to? How does light picking work, and why does it not work very well right now? And finally, shaders: what is a shader, what does it do? And then in the end, of course, 30 minutes is not enough to cover everything, so if you want to know more, I'll point you to some resources there.

So let's get into it: introduction. First of all, just one slide: who am I, who's talking to you? I started working on Blender in 2014, mostly on Cycles, but now I'm also doing some other stuff like UDIM. I'm very interested in rendering algorithms and software, high-performance computing, GPU computing, et cetera, so Cycles was an obvious choice for me and I've been very happy working on it. Right now I'm mostly doing it in my free time, but just to mention it: at the moment I'm working for Genesis Cloud. We're a cloud provider focused on high-performance GPU computing. We offer rendering and machine learning infrastructure, et cetera, as a service, and are much cheaper than the usual cloud providers. So if that sounds interesting, do come talk to me afterwards.

Enough of that, let's get to the talk. What will this talk be about, if you just happened to end up in here by accident? The talk will mostly give an introduction to path tracing, which is what Cycles does internally, what most modern rendering engines do, and what games are starting to do nowadays. There are also a few things the talk will not do. It will not include any math or code because, well, if you want math or code, wait for the last slide. It will not require previous knowledge of the internals of Cycles. Of course, it helps if you've ever used it, but I guess most people here have. It will also unfortunately not cover every aspect, because you could talk about this for days. And it will probably not make you a better artist, simply because, well, I know all of this stuff and I'm absolutely useless as an artist, so it probably won't help, but it's still interesting to know.

So, as I said: what are we trying to do? Before we develop solutions, we need to know the problem. So what do we want to do? The answer is: we want to compute the image that a camera would see if it were placed in a given scene in the real world. The renderer receives the full scene, including the geometry, where the lights are, what the materials are, and so on. The renderer does not care about things like armatures or modifiers. As far as the renderer is concerned, the geometry is done, and Blender handles the rest. And the output of the renderer, of course, is an image.
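As a rough illustration of that last point, you can think of the renderer's job as one function from a finished scene to an image. This is only a hypothetical sketch with made-up types, not Cycles' actual API:

```cpp
#include <vector>

struct Mesh { /* triangles, already evaluated: no modifiers, no armatures */ };
struct Light { /* position, size, strength */ };
struct Material { /* shader describing how a surface appears */ };
struct Camera { /* position, orientation, lens */ };
struct Image { int width, height; std::vector<float> pixels; };

struct Scene {
    std::vector<Mesh> meshes;
    std::vector<Light> lights;
    std::vector<Material> materials;
    Camera camera;
};

// Everything before this call (modifiers, animation, ...) is Blender's job;
// everything after is "just" computing what the camera would see.
Image render(const Scene& scene);
```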
And to get that image, as I said, we want what we would get in the real world, so we need to simulate reality, and to do that, we need to simulate physics. However, as it turns out, real life is pretty complex. Here's just a selection of some of the stuff you might need to consider if you really want to simulate real life, and of course that's not everything, there's way more. So we look at this, we look at it a bit more, and we say: no. We do not want to deal with all of that, we want to do something that is actually feasible.

And as it turns out, we don't need to do all of this. While real-life light is complex, most of the effects don't really matter for the usual scenes that you would like to render. We don't care about perfect results, we want things that look good enough. And if Cycles can't really simulate a double-slit laser experiment, so be it. Why would you want to render that in the first place?

So for our practical scenes, geometric optics is usually good enough. Geometric optics is what you probably think of if you imagine how light works: the light starts somewhere at a light source, it travels along a straight line, eventually it hits something, it bounces off that, and moves around until it finally hits your eye or a camera or whatever, and that's it. And as it turns out, we can include many effects that geometric optics by itself would not predict. For example, the Fresnel effect: if you look at glass or water or anything like that, it reflects and refracts light, and the fraction that is reflected depends on the angle. To actually derive that in physics, you need wave optics. But once we know the formula, we can just use it in geometric optics. Another example would be black body radiation, where you say 5,000 Kelvin and then you get a yellowish light. Historically, that's actually the origin of quantum mechanics, but we don't need to simulate quantum mechanics. As soon as we know the formula, we can just use it in geometric optics. So that's a pretty good decision and we can stick with that for our renderer.

Okay, we want to simulate geometric optics. How do we do that? Well, as I said, geometric optics has this concept of light paths: light moves around along straight lines. So we need to simulate those light paths somehow. The first problem there is that there's an infinite number of possible light paths, so we obviously can't simulate all of them. Instead, what we do is we select a lot of random light paths and we just average over them. Each light path will hopefully hit the camera at some point, and if we average over a few million or a billion of them, eventually we end up with a rendered image. As it turns out, this is of course quite noisy, but as we add more and more light paths, the result gets better, and in the end we can actually mathematically show that we will average out to the correct result. This is what's known as unbiased rendering. Though it should be noted that the whole unbiased-versus-biased thing is extremely complex and there's a lot of math and argument in it, so whenever a renderer says it's unbiased, don't really trust that label. It's hard to explain without math, so I'll just leave it at that.

So we want to simulate random light paths. How do we create random light paths? Well, you can see an example down there. What we do is start at a light source: we pick a random position on some light source and a random direction in which the light leaves.
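To make that last step concrete, here is a minimal sketch of picking a random starting point and direction on a rectangular area light. The types and helpers are hypothetical, not Cycles' actual code:

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Hypothetical rectangular area light: one corner plus two edge vectors.
struct AreaLight { Vec3 corner, edge_u, edge_v; };

static float rand01() { return (float)std::rand() / (float)RAND_MAX; }

// Pick a random position on the light...
Vec3 sample_light_position(const AreaLight& light) {
    float u = rand01(), v = rand01();
    return Vec3{ light.corner.x + u * light.edge_u.x + v * light.edge_v.x,
                 light.corner.y + u * light.edge_u.y + v * light.edge_v.y,
                 light.corner.z + u * light.edge_u.z + v * light.edge_v.z };
}

// ...and a random direction in which the light leaves (uniform over the
// hemisphere around +Z; a real renderer would rotate this into the light's
// local frame and usually sample proportional to the cosine instead).
Vec3 sample_light_direction() {
    float z = rand01();
    float phi = 2.0f * 3.14159265f * rand01();
    float r = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    return Vec3{ r * std::cos(phi), r * std::sin(phi), z };
}
```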
And then we go ahead and say: okay, we said geometric optics, light follows a straight line, so let's do that and see where we end up. In this example, we end up at the sphere. If we had hit the camera, we would record it and stop, but in the example you see down there, we first hit the sphere. So in this case we bounce off, compute the new direction based on the material, continue in that direction, then we hit the ceiling. Again we bounce, and then finally, in this example, we hit the camera. A path of length three, we're done, wonderful.

Of course, in practice this is not really going to work well, because the camera is pretty small. If you imagine a room like this and we just put a virtual camera in here, the sensor is maybe about this size. So the path is going to bounce a lot before it actually hits it. We can just continue bouncing around, but at every bounce the path loses energy. For example, in here we have black walls, so as soon as the ray hits a wall, its energy is going to be almost zero. So that's no good.

How do we solve this? Well, it's pretty simple: we can just reverse everything. In geometric optics, one of the results is that light paths are reversible. So if we imagine the camera emitting something that eventually hits the light, that also works, which is pretty neat, because that means we can just start from the camera, and that way it's guaranteed that every path is connected to the camera somehow. However, now we have another problem. Before, we started at the light and needed to hit the camera. Now we start at the camera, which is nice, but we still need to connect to a light somehow. And unfortunately, cameras are small, but lights are also pretty small usually. Often it's just a few square millimeters, an LED or something, that lights an entire room. So we would still need to bounce quite a lot to hit that, and in practice that's not going to work.

So what can we do instead? Well, if the ray won't hit the light by itself, we just force it to hit the light. How do we do that? We pick a random position on the light that we want to hit. So now we go from the camera, reach the sphere, and instead of choosing a random outgoing direction, we choose exactly the direction that will hit the point we chose on the light. That way we can always make sure that the path ends on the light.

There's one problem with this though. If we just bounce and follow the path, we always hit the first object along the ray, so there is nothing in between. But if we just choose a point on a random light source, that point could be in an entirely different room. So we need to check the connection. For example, here's one case where it would not work: our camera ray hit the floor near the sphere, now we try to connect, and we find, oops, there's something in the way. This did not work, this path is invalid, we need to throw it away and simulate a new one.

So now I've shown how to create one light path, but that's not enough. As I said, we need more. So any way to generate more light paths without a lot more computation is obviously a good thing. How do we do this? Well, it turns out that if we have a full path, then all of the intermediate results are also valid paths. Here is an example: as before, we bounce a few times and we connect to the light source. Wonderful.
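Here is a rough sketch of that connection step, including the shadow-ray check for whether something is in the way. The function names and types are hypothetical stand-ins, not Cycles' actual API, and the vector helpers are assumed:

```cpp
struct Vec3 { float x, y, z; };
// Assumed helpers with the usual meaning.
Vec3 subtract(const Vec3& a, const Vec3& b);
float length(const Vec3& v);
Vec3 normalize(const Vec3& v);

struct Hit { bool valid; };
// Hypothetical stand-ins for the renderer's ray tracing and light sampling.
Hit scene_intersect(const Vec3& origin, const Vec3& direction, float max_dist);
Vec3 sample_point_on_light();

// Try to connect a hit point on a surface to a random point on a light.
bool connect_to_light(const Vec3& shading_point, Vec3* dir_to_light) {
    Vec3 light_point = sample_point_on_light();
    Vec3 to_light = subtract(light_point, shading_point);
    float dist = length(to_light);
    Vec3 dir = normalize(to_light);
    // Shadow ray: is anything between the hit point and the chosen light point?
    // (The small offset avoids re-hitting the light surface itself.)
    Hit blocker = scene_intersect(shading_point, dir, dist - 1e-4f);
    if (blocker.valid)
        return false;   // Something is in the way: the connection is invalid.
    *dir_to_light = dir;
    return true;        // Free line of sight: this contribution counts.
}
```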
So what we can now do is ignore everything after the first bounce and connect that first bounce to the light. That's also a valid path. And then we can continue and connect again, and this way we can generate a lot of light paths from just one trace. In this case we're still not done: we could continue bouncing, connect again, and keep doing this to collect a lot of full light paths in one go.

So far so good. How do we use this to get an image? Well, we need to trace light paths for each pixel. Earlier I said we start at the camera to guarantee that the path hits the camera. Now we go even further: we start at every single pixel to ensure that every single pixel gets a certain amount of rays. So for each pixel we have a loop over all of our samples, and for each sample we trace one path. To do this, we start the path at the camera and follow it until it hits an object. If we don't hit anything, we hit the world background, we stop, look up the background color and use that. If we do hit something, we evaluate the shader (more on that later). And once we've done that, we choose a random point on a light source and connect to that, as I mentioned, which uses the path as we have it so far. So now we have a full path. But as I said, we can keep going: we ignore that we connected, bounce in a new direction, and just continue.

You can see this down here. We start our path through a given pixel. We hit something, we choose a random point on the light source, we connect the two, and now we have a full path. Now we bounce. This time we don't choose a random point on the light because, as you can see, the point is on the ceiling, so we couldn't reach it. We bounce again, pick another random point, connect, continue, and so on, until we eventually hit the maximum number of light bounces that we set in Cycles, and then we stop. Then we do the next path, and so on, until our whole image is done.

So that's pretty much the basics of path tracing. Now, how does Cycles do this? How do we turn this into a rendering engine? Well, we first start with getting all of the data from Blender into the rendering engine. On the Blender side, we apply all modifiers internally to compute the final geometry, and we send that to Cycles. Then, in Cycles, we load the GPU kernels. If you do CPU rendering, we don't need to do that. If you use CUDA, the kernels are pre-compiled, so we can just load them. If you use OpenCL, it has to compile them at runtime, which can sometimes take a bit the first time you do it. Once we've done that, we do some pre-processing to get the data from the form that Blender provides into the form that Cycles can use. For example, if we have any textures, we need to load and decompress them, and we need to do the BVH building, which I will also come back to. And once we have that, we can just divide our image into tiles and divide the tiles among the devices we have. For example, if you use two GPUs and a CPU, then each of them would handle some tiles. And then each device looks at a tile, looks at each pixel and traces the rays that I mentioned. And that's pretty much it.

So far this may sound pretty simple. In practice, it's of course not that simple. The first thing I'm going to start with in the details section is ray intersection. How does it work?
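Before getting into ray intersection, here is a minimal sketch of the per-pixel loop described above. It is not Cycles' actual code; all types and helper functions (generate_camera_ray, scene_intersect, evaluate_shader, connect_to_light, sample_bounce, write_pixel) are hypothetical stand-ins:

```cpp
struct Color { float r, g, b; };
Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
Color operator*(Color a, Color b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }
Color operator*(Color a, float s) { return {a.r * s, a.g * s, a.b * s}; }

struct Ray { /* origin and direction */ };
struct Hit { bool valid; /* position, normal, material, ... */ };
struct ShaderResult { /* list of BSDFs, see later */ };

// Hypothetical stand-ins for the pieces discussed in this talk.
Ray generate_camera_ray(int x, int y);
Hit scene_intersect(const Ray& ray);
ShaderResult evaluate_shader(const Hit& hit);
Color background_color(const Ray& ray);
Color connect_to_light(const Hit& hit, const ShaderResult& shader);
Ray sample_bounce(const Hit& hit, const ShaderResult& shader, Color* throughput);
void write_pixel(int x, int y, const Color& value);

void render_pixel(int x, int y, int num_samples, int max_bounces) {
    Color accum{0.0f, 0.0f, 0.0f};
    for (int sample = 0; sample < num_samples; sample++) {
        Ray ray = generate_camera_ray(x, y);       // start at the camera
        Color throughput{1.0f, 1.0f, 1.0f};        // energy left on this path
        for (int bounce = 0; bounce < max_bounces; bounce++) {
            Hit hit = scene_intersect(ray);
            if (!hit.valid) {                      // nothing hit: use background
                accum = accum + throughput * background_color(ray);
                break;
            }
            ShaderResult shader = evaluate_shader(hit);
            // Connect to a random point on a light (direct contribution).
            accum = accum + throughput * connect_to_light(hit, shader);
            // Then bounce in a new direction and keep going.
            ray = sample_bounce(hit, shader, &throughput);
        }
    }
    write_pixel(x, y, accum * (1.0f / float(num_samples)));  // average samples
}
```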
I've said a few times that the path follows a straight line until it hits something, and then we do something else. But how do we know what it hits? Well, our geometry is usually composed of triangles, so we need a way to check whether a ray hits a triangle. Then we can test against all triangles and look at the closest intersection. Okay, wonderful, we found the object that we hit.

However, the problem is that we have a lot of triangles and a lot of rays, and if we test each ray against each triangle, we very quickly run into problems. For example, let's say we have a 4K image, 1024 samples, a path length of 10 and 10 million triangles. That means we have 8 million pixels, so 8 billion paths, so 80 billion bounce rays plus 80 billion shadow rays. And we have 10 million triangles against which we would have to test each one of those, which means we would end up with 1,600 quadrillion intersection tests. That's not going to work, so we need to do something smarter.

So how can we avoid testing everything just to find out what a ray will hit? Because as we just heard, brute force is not going to work. Well, the idea is: if we think of a volume in space, a certain region, and a ray completely misses that entire region, then it also can't hit anything that's inside it. So we can first test against the volume, and if we missed it, we can just skip everything inside. For example, we can use the bounding box of an object, and if the ray completely misses the bounding box, we just skip the object. That's the basic idea: first test against the bounding box, and if we don't hit it, skip. For example, here we have an object, here is its bounding box, and here is one ray which misses the bounding box, so we skip the object. Here we have a ray which hits the bounding box, so we test against the object itself and find that, yes, we hit it. And here is a third case: we test against the bounding box, find that we might hit the object, we try, and then we find that, no, we didn't hit it after all. So that can still happen, but in practice most of the rays in a scene will not hit a given small object anyway.

So now we say: okay, bounding boxes are nice, we did this for one step. Can we do it more times to save even more work? As it turns out, yes, we can. The idea is: why stop there? If we have a sphere, the sphere is composed of many triangles. So we had a box around the entire sphere and we found that we hit it, but now we could divide the sphere into eight parts, check which of these eight parts we hit, and keep doing this in a hierarchy, basically. The result is a hierarchy of volumes which bound the object, a so-called bounding volume hierarchy, or BVH.

What does this look like? Quick example: we have some wonderful default shapes and we want to test rays against them. So we put a box around all of them, and if we miss that box, we can just skip everything, we hit nothing. If we do hit it, we continue subdividing. So we subdivide again and again until each object has its own box. Now a ray comes in, we test it against the purple box, and find that it hits. Now we test it against the two red boxes. We find the one on the right is hit, the one on the left is not, so we can ignore everything on the left. We continue, test against the two green boxes, and find the upper green box is hit, the lower one isn't. So this means we only have to test against the cloud, and in practice the cloud would itself consist of triangles.
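As an aside, the box test that is done at every level of such a hierarchy is very cheap. Here is a minimal sketch of the standard "slab test" for a ray against an axis-aligned bounding box; this is a generic illustration, not Cycles' actual implementation:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Returns true if the ray (origin + t * direction, t in [0, t_max]) can hit
// the box. inv_dir holds 1/direction per component, precomputed by the caller.
bool ray_hits_box(const Vec3& origin, const Vec3& inv_dir, float t_max, const AABB& box) {
    // Intersect the ray with the three pairs of axis-aligned "slabs".
    float tx1 = (box.min.x - origin.x) * inv_dir.x;
    float tx2 = (box.max.x - origin.x) * inv_dir.x;
    float ty1 = (box.min.y - origin.y) * inv_dir.y;
    float ty2 = (box.max.y - origin.y) * inv_dir.y;
    float tz1 = (box.min.z - origin.z) * inv_dir.z;
    float tz2 = (box.max.z - origin.z) * inv_dir.z;

    float t_near = std::max({std::min(tx1, tx2), std::min(ty1, ty2), std::min(tz1, tz2)});
    float t_far  = std::min({std::max(tx1, tx2), std::max(ty1, ty2), std::max(tz1, tz2)});

    // The ray overlaps the box if the entry point comes before the exit point.
    return t_near <= t_far && t_far >= 0.0f && t_near <= t_max;
}
```

Because this is just a handful of subtractions, multiplications and comparisons, skipping a whole subtree of the hierarchy costs almost nothing compared to testing all of its triangles.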
So we would keep doing this and subdivide the cloud, et cetera, et cetera, but I think it's pretty obvious how that continues, and it's pretty hard to illustrate. For those with a bit of computer science background, this is basically a binary tree. In practice it doesn't have to be binary, you can use wider trees, but that's starting to really go into detail, so I'll leave it at that for bounding volume hierarchies.

Okay, now we can hit things. So now a different topic: let's quickly return to the light connection part. When we started with this, I mentioned that lights are pretty small, so we need some help to hit them. But of course, not all lights are small, so what if your light is not that small? For example, here's a classic illustration which is designed to show exactly this effect. You can see four different light sources of increasing size and four different planes with different roughnesses, so the one in the back is almost diffuse and the one in the front is almost a mirror. The small light sources work perfectly fine, but as the plane gets more mirror-like and the light gets larger, you can see it starts to get really noisy.

So what's the problem there? The problem is, as I mentioned, when we bounce, we choose the direction according to the material. If we have a mirror, then we know the incoming direction, so we pick the mirrored direction, obviously. If we have a material that's quite glossy but not exactly a mirror, we choose a direction that's roughly the reflection direction, but not exactly. If we have a diffuse material, we choose pretty much anything. Basically, we choose according to the reflection profile. But if we connect to a light source, then we have the position on the light source, which means that our direction is fixed; we can't choose an outgoing direction according to the material anymore. This means that for the mirror, we might end up picking some position on the light source and then finding out, oops, that's the wrong direction, the mirror doesn't reflect in that direction. So we end up with an invalid path and a lot of noise. As we can see, this connection strategy works for diffuse materials and for small light sources, but not for mirrors.

So we have a problem now. For the case of mirrors and large light sources, it would be better to use the first approach, which is to just keep bouncing and hope that we randomly hit the light source. So we have two strategies: we can either connect or we can bounce. Both of these will eventually hit the light, but which one is better depends on the circumstances. So which do we use, and how do we combine them? We could try to use both and alternate between them. That's going to be noisy, because one of them is going to be noisy and that noise will end up in the final result, so that's not going to work. We could try to average the result of both. Also noisy, also doesn't work. We could try to guess, for example: if the roughness is below 0.1, use the bounce strategy, otherwise connect. That's really not reliable, because you will always find some case where the other one would have been better, and you get into problems: if the roughness is textured, you might see the border between high and low roughness as different noise levels, and so on. So this is not really nice, we don't want to do that.
We could just leave it as an option to the user, but then the user would have to go through all materials and select this or that, and the user would have to know the technical background, so that's also not a good solution. As it turns out, there is a good solution though, and that solution is multiple importance sampling.

So, multiple importance sampling, MIS: what does it mean, what does it do? Well, the idea is that we do both. But as I mentioned, just doing both is going to result in a lot of noise, that's not going to work. So we need to weight them accordingly: we want to give a high weight to the strategy that works well and a low weight to the one that doesn't. And as it turns out, there is a mathematical way to automatically find these weights that is pretty much guaranteed to result in a mix that is almost as good as the better of the two. For this reason, multiple importance sampling is pretty common in rendering and computer graphics: whenever you don't know which strategy to use, just do both, weight them with multiple importance sampling, and you get the best of both worlds. It sounds like magic, and it almost is; it's really quite a powerful technique.

So, I mentioned we want to connect and bounce. As it turns out, we can save some work because we bounce anyway. As I showed previously, we connect to the light to form a path, and then we bounce and connect again. So we already compute the bounce ray, and we can reuse it both to check whether we hit the light source and to find the next object that we hit. This is also what Cycles does, of course. This means that we don't actually get a significant slowdown from doing both, but we get significantly less noise.

How does this look in Cycles? As you might know, lamps and materials have a multiple importance sampling checkbox. For lamps, the default is to only use the connection strategy, because lamps tend to be quite small, but if you enable the checkbox, it also does bouncing with MIS. With emissive materials, it's the other way around: by default they only rely on bouncing, meaning hitting them randomly, but if you enable the checkbox, Cycles will also try to connect to them. And here we see an example: this is the image from before, and now we turn multiple importance sampling on for all the lights, and it looks like this. As you can see, quite an improvement for the mirror-like materials.

So, with that out of the way, let's get to the next point: shadowing. Who has ever tried to place a window in a building and place a light source behind it? Who has noticed that it's not going to work? Right, so why does it not work? I mean, glass is transparent, shouldn't the light just go through? Unfortunately, no, because as I mentioned, for connecting to a light source we need to trace a shadow ray to find out if there's anything in between. That shadow ray gets blocked by the glass, so the connection isn't going to work. What about transparent objects, where we just use a transparent BSDF? In that case, Cycles is actually smart enough to figure out: yes, that's transparent, that's fine, the ray can go through. This used to be an option named transparent shadows, but nowadays the option has been removed because there's no point in ever disabling it. We do have a problem here though: transparency does not mean refraction. So if you have glass that uses a glass BSDF and not a transparent BSDF, the shadow ray will actually hit it and be blocked. Why is that the case?
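Before answering that, here is a rough sketch of the shadow-ray behaviour just described: the ray may pass through surfaces that use a transparent BSDF, but anything refractive (or opaque) blocks the connection. The types and helpers are hypothetical stand-ins, not Cycles' actual code:

```cpp
struct Vec3 { float x, y, z; };

enum class SurfaceKind { Opaque, Transparent, Refractive };

struct ShadowHit { bool valid; SurfaceKind kind; float t; };

// Hypothetical stand-in: returns the next surface along the shadow ray, if any.
ShadowHit shadow_intersect(const Vec3& origin, const Vec3& dir, float max_dist);
// Assumed helper: the point advanced by distance t along the direction.
Vec3 advance(const Vec3& origin, const Vec3& dir, float t);

bool light_is_visible(Vec3 point, const Vec3& dir, float dist_to_light) {
    float remaining = dist_to_light;
    while (true) {
        ShadowHit hit = shadow_intersect(point, dir, remaining);
        if (!hit.valid)
            return true;                          // nothing left in the way
        if (hit.kind != SurfaceKind::Transparent)
            return false;                         // opaque or glass: blocked
        // Transparent surface: step just past it and keep testing.
        point = advance(point, dir, hit.t + 1e-4f);
        remaining -= hit.t + 1e-4f;
    }
}
```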
Well, for windows it would be fine to just ignore the glass, but if you, for example, place a drinking glass on your table and point a light source at it, you will see that it actually does cast a shadow, and there are some caustics in the shadow. So we can't just completely ignore it, because the light is bent by the glass. For a window, it turns out that because it's flat, the bending is undone on the other side, so that's kind of a special case where ignoring it would be fine, but in general we can't rely on that. And, of course, writing some automatic window detection or something like that is also not going to work very well. So we're just stuck with not supporting shadow rays through glass. This means that if the light source is behind glass, the connection strategy is completely useless, because it will always be blocked. However, Cycles gives you a tool: if you build windows, you can just say, okay, you know what, I know this is a window, it's fine, I don't care that it's not entirely accurate, just disable the shadow ray visibility checkbox, and it will work. So you can work around this, but of course it requires artist intervention, which is not great, but that's the best we can do.

Unless, of course, we were to use bidirectional path tracing. So what's the idea there? Well, what I've described so far is so-called unidirectional path tracing: we start from the camera and we only go in one direction. But as we saw, that has its problems. Actually, we have been using two paths all along, I just didn't tell you: I mentioned that we pick a point on the light source, and the trick is that we can think of this point as a light path of length zero. So what if we don't keep it at length zero, but instead also trace a path starting from the light, away from the camera?

Here we have an example. We have a camera, we have a light, and we have glass in between. We trace our camera ray as usual. Then we connect to the light as usual; as you can see, the connection on the left is blocked because there's glass in between, the one on the right works. But now we don't stop, instead we also trace a path from the light. Now we can connect to the second point of the light path, and to the third point of the light path, and we can even connect the light path directly to the camera. This is what makes caustics work in bidirectional path tracing.

Now, this is all nice, but as you may know, this is not a thing in Cycles and realistically probably won't be, at least in the near future. Why is that? Well, there are quite a few problems. First of all, it's seriously complex, way more complex than normal path tracing. You have to get the weights and everything perfectly right. This is another application of multiple importance sampling, for example, where we have all these different strategies with different path lengths and we combine them. You need to get this right, or your image will look a tiny bit wrong, it's very hard to notice, and it takes a lot of debugging to figure out. Also, at the start I mentioned that in geometric optics we can reverse the path. That's nice and all, but in Cycles we can't actually rely on that, because as you may know, we have these ray visibility options. For example, we could have a wall that's transparent to glossy rays but not to diffuse rays. Now we have the problem that if we trace from one direction, we hit the diffuse floor and then the wall, and it's blocked.
From the light direction, we hit the glossy window first and then the wall, and now it's not blocked. So even though the path is the same, depending on the direction we get a different result. Cycles allows you to break physical reality in this way, but bidirectional path tracing does not like that. The more complex the algorithm gets, the more it relies on these assumptions and the more it starts to freak out when you violate them. So that's a problem. Also, people like to talk about caustics, and yes, it would be nice if we could have a glass with caustics on our desk, but in reality the use cases are not that common; mostly it doesn't really provide a benefit. So we would have a lot of additional complexity and most of the time it wouldn't really help.

Also, bidirectional path tracing is not perfect. I wrote SDS paths here, that's kind of a technical term: specular-diffuse-specular. What it means is a light path that is, for example, refracted, then hits a diffuse surface, then is refracted again. An example of this would be caustics in a pool: the light enters through the water, hits the bottom of the pool, and exits through the water again. Bidirectional path tracing cannot handle that, and that's a problem. There are ways to solve this, vertex connection and merging, photon mapping and so on, but that's the point where it starts to get really complex and you have to ask yourself: do you really need that in Cycles? So for these reasons, Cycles so far sticks with unidirectional path tracing. That might change in the future, who knows, but it's definitely not something you would just add in an afternoon.

So, back to things that actually do affect Cycles in its current state: light picking. I mentioned that we pick a random point on a light source, but I didn't really mention how we do this. As it turns out, it's not that hard for a single light source: okay, we have this light, let's pick a point on it. There are some ways to do it better and some ways to do it worse, but it's not that hard. However, I just said "for a single light source". What if we have 20 light sources? We have to choose one of them, and that's where it starts to get really tricky. It sounds pretty simple: just choose a random one and you're fine. But as it turns out, you're not fine. Why? Well, light sources vary a lot. You might have one big light that lights the entire scene and then 15 tiny lights that each illuminate one tiny spot. The problem is that Cycles can't really know that, because lights can be textured, light intensity can depend on the type of ray, light intensity can depend on a lot of things, so it's not that easy to just say, okay, this light is twice as bright as that other light.

So what Cycles actually does in practice is: first of all, if we have both lamps and materials with emission, we pick one of those two groups, 50/50. That's very important, because it means that if you have a scene lit entirely with lamps, and you have one tiny LED somewhere with an emissive shader and MIS enabled, your noise is going to double: 50% of the time the picking is going to choose emissive materials and always end up at that one single LED, and only 50% of the time it's going to pick one of your lamps. That's not ideal, but that's how it is right now. So if you have this situation, disable multiple importance sampling on that material, because that way the material won't take part in the connection strategy and you will get lamps 100% of the time.
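As a toy illustration of that 50/50 split (not Cycles' actual code, with the area weighting for emissive meshes left out, and assuming at least one light exists), the picking just described behaves roughly like this:

```cpp
#include <cstdlib>
#include <vector>

struct Light { /* a lamp, or one emissive mesh light */ };

static float rand01() { return (float)std::rand() / (float)RAND_MAX; }

const Light* pick_light(const std::vector<Light>& lamps,
                        const std::vector<Light>& emissive_meshes) {
    // If both groups exist, each group gets picked half of the time,
    // regardless of how much light it actually contributes.
    if (!emissive_meshes.empty() && (lamps.empty() || rand01() < 0.5f)) {
        // One tiny MIS-enabled LED still receives 50% of all connections.
        // (The real strategy picks emissive meshes weighted by area.)
        return &emissive_meshes[std::rand() % emissive_meshes.size()];
    }
    // Otherwise we pick among the lamps.
    return &lamps[std::rand() % lamps.size()];
}
```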
Then, if we decided to pick a lamp, we just use equal probability for all of them, and if we decided to pick an emissive material, we pick according to its area. So why is this not great? Well, there are quite a few cases where this goes wrong. For example, let's say we have one sun lamp that lights the entire scene, but we also have 19 spotlights. Each lamp has the same chance, so only 5% of all of our rays are actually going to connect to the sun; 95% of them are going to connect to the spotlights, which only have a very tight cone, so most of the time the connection misses the cone and gets thrown away. A lot of noise, not great. Another example: we have one massive plane that's fairly dim and a tiny plane that's very intense, so in the end they both have the same energy. As I said, Cycles samples according to area, so the dim plane is going to get pretty much all of the samples. Also not great. And then the case I already mentioned: we have only lamps, but one object has a tiny emissive shader, and oops, we just wasted 50% of our rays.

Obviously, as you can guess, this is quite a problem, and there's quite a lot that could be improved, so there's actually work on improving it. The idea is: first we estimate the power of each light source, and then we use a strategy similar to our BVH. We build a hierarchy of all lights by grouping them together, and when we want to pick a light, we essentially start at the top level of the hierarchy, look at the two boxes, and say, okay, which one of those is probably more important, because it has more energy or whatever. We pick one of the two, ignore the other half, look at its two boxes again, decide which of those is more important, and so on, until we reach a single light source, and then we choose that one. As it turns out, this is already work in progress. It started as a project in Google Summer of Code 2018; unfortunately it's not finished yet, but it's definitely on the to-do list and it will come to Cycles at some point. That will be a major improvement, because with this you can have thousands or even millions of light sources in your scene and it just works. I should point out that I didn't really work on that myself.

So with this we come to the last point. I see I'm already over time, but I don't think there's anything scheduled in this room afterwards, so I'll just take a few more minutes, I hope that's fine; if anyone wants to leave, feel free. Shaders: how does a shader work in Cycles? Well, as you probably know, a shader in Cycles is just a network of nodes, but internally it's basically a program, and this program defines how your material, how your geometry, appears. With old renderers like Blender Internal, a shader does the entire shading part and just returns a color which appears on the screen. In modern renderers we don't do this anymore; instead we use so-called BSDFs. A BSDF is essentially a mathematical model of how a material looks. For example, you have the diffuse BSDF, you have the glossy BSDF, each of these has parameters, and the renderer has a built-in list of BSDFs that it supports. This BSDF concept, by the way, is also what modern game engines use. If you hear "PBR engine" anywhere, that's what people are talking about; they're also using these materials. So yeah, a BSDF is a mathematical model for material appearance, and those are a few examples.
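To make this a bit more concrete, here is a rough sketch of what such a built-in BSDF interface might look like in a renderer. This is a hypothetical illustration, not Cycles' actual classes:

```cpp
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Color { float r, g, b; };

class BSDF {
public:
    virtual ~BSDF() = default;
    // How much light coming from direction `in` is reflected towards `out`?
    virtual Color eval(const Vec3& in, const Vec3& out) const = 0;
    // Pick a new outgoing direction according to the reflection profile
    // (a mirror returns exactly the mirrored direction, a diffuse BSDF
    // returns more or less anything in the hemisphere).
    virtual Vec3 sample(const Vec3& in, float rand_u, float rand_v) const = 0;
    // Probability of having sampled `out`, which is what MIS weights need.
    virtual float pdf(const Vec3& in, const Vec3& out) const = 0;
};

// Running a shader on a hit point then produces a weighted list of BSDFs,
// which the renderer uses to bounce and to connect to lights.
struct ShaderResult {
    std::vector<std::pair<Color, const BSDF*>> closures;
};
```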
And in Cycles you usually build your material from nodes, but you can also program it using Open Shading Language, OSL, which makes it a bit clearer that the shader is in fact a program which returns something. Here's a quick, simple illustration. We have some inputs, for example the position, the UV coordinates, the normal of whatever we hit; that is the input to our shader, and the output of the shader is a list of BSDFs. So for example, the shader would look up textures according to the UV coordinates and find that, okay, our texture is orange at this spot; it would compute the Fresnel term and find that, okay, we have 20% reflection; and in the end we again get a list of BSDFs. That is then handed to Cycles, which uses those to pick the outgoing light direction, to maybe do the light connection, to compute how much is reflected, and so on.

How does Cycles actually run these shaders? Well, if you're doing CPU rendering and you're using OSL, then this gets a bit technical, so I'll just give a very short summary: it basically takes your code and compiles it to something that your CPU understands, so your CPU can run it as if it were a normal program, which is pretty efficient. If we're not using OSL, for example because we're on the GPU, what Cycles does is: it basically has its own virtual language, its own virtual instruction set, and it compiles your shader to that. It first takes the shader and optimizes it; for example, if you have a math node in there that computes two plus three, it's going to replace it with a value node that says five. If you have the same node network three times and connect it to three different things, it's going to merge them together and connect the one output to three inputs. So it's actually quite smart about optimizing your shader. Afterwards, it translates the result into this custom set of instructions, and then it has code which can interpret these instructions and run the shader on the GPU or on the CPU. But this is where it starts to get into computer science, so I'll leave the rest to people who are actually interested in reading the source code.

And with this, we've reached the end. I mentioned a few things, but there are of course many additional topics. For example, I said nothing about volumes and volumetric rendering; Stefan has a talk on that tomorrow, I think? Tomorrow, yeah. Subsurface scattering, which kind of belongs to volumes, but not really. We have motion blur and depth of field, subdivision and instancing, how the BSDFs and lights actually work, baking, sampling, RNG, GPU computing, denoising, and many more. If any of these are of interest to you, where can you go? Well, one book I would highly recommend to everybody is Physically Based Rendering. That's the book where I learned everything, and you can read it online for free, legally, which is pretty good. So if any of this sounds interesting to you, check it out; it's highly recommended. Apart from that, if you really, really like math and you want all the details, Robust Monte Carlo Methods for Light Transport Simulation is the PhD thesis of Eric Veach, the guy who invented multiple importance sampling and more or less invented bidirectional path tracing and so on. So if you like math, I recommend it. If you like code, of course, just look at the Cycles source code; it's open source. And you can always just talk to me if any of this is interesting and you want to learn more.
And with that, the main talk is basically over, but I still have two slides on some future stuff which might be interesting to people, so I think I'll just keep going. Okay, sounds good.

So, path guiding: what is this about? As I mentioned, we can connect to light sources, so if we have a direct light source, we will hit it. That's nice and everything, but what about indirect lighting? For example, let's say we have a room with a window without glass; the sun enters through the window, shines on the floor, and then the light bounces from the illuminated spot on the floor into the entire room, and we get beautiful indirect lighting. In this case, the effective light source is the area on the floor, because that's where the light actually comes from as far as the room is concerned. However, we can't connect to it, because we don't know it's there; we didn't manually place it. So we again have to rely on hitting it by accident, which is not great. Another example: we have a brightly lit room, there's a closed door with a tiny gap, and there's a hallway outside which is illuminated by the light from the room. In this case, we have to randomly bounce through the gap in the door to reach the other room, which is quite unlikely, so we get a lot of noise. Bidirectional path tracing won't help you there, because it can't connect through the door either; the connections also have to trace shadow rays, so again you're stuck. There are many more such cases, so we need to find a solution.

The solution is that we need to learn where the light comes from. That might sound a bit silly, how could we learn that? But it turns out machine learning is a pretty big field nowadays, and there are quite a lot of tricks you can adapt from it to build an approximation of how the light is distributed in the scene and use that to guide your paths towards the spots where strong indirect lighting comes from. That's pretty much an area of current research right now, and there are quite a few things that could be introduced to Cycles. It's one thing I'm sometimes looking into, but nothing final yet. So it will maybe come at some point, but no guarantees.

And the second part is network rendering. The idea behind this is that Cycles can already use multiple GPUs: if you have five GPUs in your system, it will just run five tiles at once and distribute them. But why can't we do this between computers? Say, instead of one computer with five GPUs, we have five computers with one GPU each. Well, as it turns out, that would work. We could just have one computer that runs Blender, and the others run a small worker program which is just smart enough to receive work and forward it to the GPU or CPU or whatever is in that system. That way, the main computer would distribute the data to the others, they would render, and everything would just work. You would not need to mess around with render borders and manually merging everything, like people do nowadays when they want to render on multiple computers; it would just be pretty seamless. The current state in Cycles is that there is actually source code for this, but it doesn't work. It hasn't worked for years now, and if you try to enable it, it just doesn't work, so it needs some work. That's one of the things I did at Genesis Cloud: I developed new network rendering code from scratch, which actually works now. With this, we are able to render one image across hundreds of GPUs from one Blender instance: you just press F12.
It renders, the image is done, which is pretty crazy. And we actually used this to render the short movie from last year, in case anyone saw that. So this code has been around for quite a while, it works really well now, and we expect to publish it in the beginning of 2020. So that's something to look forward to; it will hopefully end up in Blender. And with this, we are finally at the end. Thanks for your attention, and if anybody has any further questions, wants to learn more or anything, just talk to me.