Perfect, everything gets highlighted, perfectly at one o'clock. That's nice, I don't even have to say anything. Hi and welcome to my talk. My name is Sebastian Herholz, I work at Intel Corporation, and over the last year I worked on integrating path guiding into Cycles. This talk is about path guiding: how it works and what it is. I will try to give you a high-level overview of what path guiding is, how we integrated it, and how you can use it to make nicer scenes with much more realism, and to do things you probably couldn't do before. So that's pretty much it, thanks for your attention... and now I will actually start.

So, what is path guiding? Path guiding is a method that, if you try to render something physically correctly, can turn images like this into something like this, in the same render time. Here we have a direct comparison, and in this talk I am going to take you by the hand, step by step, to explain how this is actually possible.

So, where do we start with "what is path guiding"? I could just start with all the math: the rendering equation, Monte Carlo integration, variance, optimal importance sampling, the theory behind zero-variance sampling. In two hours we would still be here, I would look at all your faces, and they would look exactly like my students': "oh my gosh". So let's keep it simple, stay at a higher level, and look at what it means from an artist's perspective. What is actually going on here? If you render an image with a path tracer like Cycles, you are used to getting noisy images that get better over time. So the big question is: why do we have noise to begin with? The basic idea is that when a path tracer wants to figure out the color of a pixel,
it shoots random paths into the scene, and each of these paths is an estimator for the color of that pixel. An individual estimate is not very accurate, but if you take the average over a lot of samples, everything converges to the actual pixel color. The noise level corresponds to the variance, that is, the spread of all these estimates: with a high variance the spread between the individual estimates is large, and with a low variance they lie closer together, meaning a lower noise level. And this noise level depends on how well your path tracer can generate these random walks, which estimate the final pixel color, proportionally to the contribution that the light distribution in the scene makes to this one pixel.

So why do we have noise in a standard path tracer? It's pretty simple: to distribute your samples proportionally to their contribution, you would need to know everything about the light distribution to begin with, which is exactly the problem you are trying to solve. We have a chicken-and-egg problem. So what the path tracer does is take the information it has: I am at a point in the scene, and the only information I have is about the material at this specific point. For a diffuse surface, for example, I know how the surface reflects light. So that is what we do: we take this information.
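To make this concrete, here is a tiny toy sketch of my own (Python, not Cycles code; the 1D "scene" is invented for illustration): we estimate a "pixel integral" once with uniform sampling, the analogue of sampling blindly, and once with a distribution that roughly matches where the light actually comes from. Both estimators converge to the same value, but the matched one has far lower variance, which is exactly the noise difference path guiding exploits.

```python
import random

# Toy "incident light" along a 1D domain: a tiny, very bright spot
# (think of the bright patch on the floor) plus dim light everywhere else.
SPOT_END = 0.01                 # the bright spot occupies [0, 0.01)
L_BRIGHT, L_DIM = 10.0, 0.01

def L(x):
    return L_BRIGHT if x < SPOT_END else L_DIM

# The "pixel value" is the integral of L over [0, 1).
TRUE_VALUE = L_BRIGHT * SPOT_END + L_DIM * (1.0 - SPOT_END)

def estimate_uniform(n, rng):
    """Blind analogue: sample directions uniformly (pdf = 1)."""
    return sum(L(rng.random()) for _ in range(n)) / n

def estimate_guided(n, rng):
    """Guided analogue: spend half the samples on the bright spot and
    half on the rest, dividing by the pdf to stay unbiased."""
    total = 0.0
    for _ in range(n):
        if rng.random() < 0.5:
            x = rng.random() * SPOT_END                      # aim at the spot
        else:
            x = SPOT_END + rng.random() * (1.0 - SPOT_END)   # everywhere else
        pdf = 0.5 / SPOT_END if x < SPOT_END else 0.5 / (1.0 - SPOT_END)
        total += L(x) / pdf
    return total / n

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

rng = random.Random(7)
uniform_runs = [estimate_uniform(16, rng) for _ in range(2000)]
guided_runs = [estimate_guided(16, rng) for _ in range(2000)]
var_uniform = variance(uniform_runs)   # large: most samples miss the spot
var_guided = variance(guided_runs)     # far smaller, same expected value
```

Both averages land on the same pixel value; only the spread differs, which is why the guided image looks clean at the same sample count.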
We are at a diffuse point, so we shoot a random ray, distributed almost equally over all directions. We hit another surface; this surface, for example, is glossy, and a glossy surface reflects into a cone of directions, so now we select a new direction based on this cone, the reflection property of the local material. We hit another surface again; here, for example, we have a perfectly specular mirror, which means the incoming direction gets reflected about the normal into exactly one outgoing direction, so we make a direct connection, which brings us to the couch.

This concept of generating random walks works quite well if the indirect illumination that interacts with each point is more or less smooth or uniform; then you get a really low noise level. But in reality you don't have that. Now you might ask: hey, if I look at this scene, I have a window, I have small light sources, why does it work here, when this doesn't really look diffuse? This is where artists come into play. Over the years, as people learned to work with path tracers, they learned a lot of tricks to make their images work: adding virtual light sources to mimic indirect illumination, adding hero lights to change the look of a specific area. Artists really built up a tool set to make this one algorithm work, even though it only behaves nicely under a uniform incoming light distribution.
They fake and add things so that they get their image within their specific time budget. So let's take this scene and take out all these hacks: really just use the three light sources in the room and the sunlight coming in through the window. Suddenly we have a super high noise level. And why is that? For example, on the ceiling we assume that our material is diffuse and reflects equally in all directions, but we have this bright spot on the floor illuminating the whole room. The probability of generating a path from the ceiling that actually hits this bright spot is very low, which means we have a high variance: each estimate oscillates strongly around the actual value.

This is where path guiding comes into play. The idea is to use not only the material properties but also some sort of approximation of the incoming light distribution at each individual point in the scene, and to combine the two to make better sampling decisions. So instead of just sampling diffusely based on the material, we also ask:
Hey, what does the incoming light distribution look like here, and which part of it is important for the actual pixel value? We see here, for example, that looking from the ceiling we have to go down to this bright spot, and at the wall we also have to go down to the bright spot. Compare the two: if you just look at the material, the sampling distributions would all be similar, just oriented by the surface orientation; but if you look at the incoming light distribution, they differ at each individual position in the scene. Here is a side-by-side comparison of applying path guiding in this scene, with the same number of samples, and you can see that adding this extra information about the indirect illumination can reduce the variance of your rendering quite a lot.

Easy then, let's just implement path guiding, that sounds awesome. So let's start. Well, first: path guiding is something researchers already started playing around with in the 90s. That was a time when people were happy to be able to render 60 by 60 pixel images and had 32 megabytes of RAM, which made things quite interesting.
They already achieved something back then, but they were mostly happy to be able to render the Cornell box. Since path guiding at that point would have cost a lot of memory, the research community went in other directions to find better ways: Metropolis light transport, bidirectional path tracing, photon mapping, all the stuff you have probably heard of. Then around 2015 there was more or less a revival of the idea: we now have more compute and more memory, so people picked it up again and started working on it. Since then we have a boatload of papers, each with its own path guiding method. They all have their pros and cons and their nitty-gritty details; some work well in one specific area, some in another. So if you just say "hey, let's implement path guiding", the big question is: sure, but which version? And the big question for you as artists is: which version will work for me? Because when these things come from the scientific realm, we usually test them on smaller scenes, our standard test scenes, where yes, they work. But in production you do not want an algorithm that sometimes works and sometimes doesn't, where you keep wondering whether to turn it on or off. It should be easy to pick up, without many parameters. This is where we at Intel came up with the idea: let's take all these papers and boil them down into one open-source library, so that people who build renderers have easy access to integrate path guiding into their renderers.
This library is called the Intel Open Path Guiding Library, or Intel Open PGL for short, and it is the newest component of our oneAPI Rendering Toolkit, where you probably already know some of the other components, like Embree, Open Image Denoise, or Open VKL. The basic idea is that we have a production-ready implementation that makes it easy for people to integrate path guiding. The nice thing is that it is 100% open source, so it was easy for Blender to pick it up and use it. The foundation of the library is three popular path guiding papers, which we took and combined together.

The first of the library's two main features is an incremental learning approach, which means that during rendering you directly learn the approximation of the incoming light distributions. There is no preprocessing step that forces you to wait: you just click render, you immediately see something, and it gets better and better over time. The second feature is guided sampling decisions, which use the approximation we learned to guide a renderer like Cycles in the right direction, so that it finds indirect illumination and light sources much more reliably.

How does this incremental learning approach work? We take Cycles and render, say, your first sample per pixel. During that rendering process we generate a set of training samples for each position along the paths generated in this first pass. We pass these to Open PGL, which fits the representation, and then already offers Cycles some data for its second rendering iteration, to make better sampling decisions for the second sample per pixel. This continues, and can in principle go on forever, so that over time your representation gets better and your sampling gets better.
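This render-train-render loop can be sketched with a hand-rolled toy (again in the 1D spirit of the earlier illustration; `GuidingField`, `render_iteration`, and everything else here are made-up stand-ins, not the actual Open PGL API): each iteration renders with the current guess, records where the contribution actually came from, and refits the distribution, so later iterations aim at the bright region far more often.

```python
import random

SPOT_END = 0.05                 # the bright "spot" occupies [0, 0.05)
L_BRIGHT, L_DIM = 2.0, 0.01
NUM_BINS = 20                   # bin 0 covers exactly the bright spot

def L(x):
    return L_BRIGHT if x < SPOT_END else L_DIM

class GuidingField:
    """Hypothetical stand-in for a guiding structure: a histogram over
    'directions' in [0, 1) that is refit after every rendering iteration."""

    def __init__(self):
        self.weights = [1.0] * NUM_BINS        # start uniform: no knowledge yet

    def pdf(self, x):
        return self.weights[int(x * NUM_BINS)] / sum(self.weights) * NUM_BINS

    def sample(self, rng):
        b = rng.choices(range(NUM_BINS), weights=self.weights)[0]
        return (b + rng.random()) / NUM_BINS

    def update(self, training_samples):
        # Refit from (position, contribution) pairs gathered while rendering.
        for x, contribution in training_samples:
            self.weights[int(x * NUM_BINS)] += contribution

def render_iteration(field, rng, n=500):
    """One 'sample per pixel': a 50/50 mixture of blind (uniform) and guided
    sampling, while collecting training data for the next iteration."""
    training, total = [], 0.0
    for _ in range(n):
        x = rng.random() if rng.random() < 0.5 else field.sample(rng)
        pdf = 0.5 * 1.0 + 0.5 * field.pdf(x)   # mixture pdf keeps it unbiased
        w = L(x) / pdf
        total += w
        training.append((x, w))
    field.update(training)                      # learn for the next iteration
    return total / n

rng = random.Random(3)
field = GuidingField()
estimates = [render_iteration(field, rng) for _ in range(8)]
# After a few iterations most probability mass sits on the bright bin.
spot_mass = field.weights[0] / sum(field.weights)
```

Every estimate stays unbiased because samples are always divided by the exact mixture pdf; training only changes where the samples go, not what they converge to.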
Now let's look at an example: this is a kitchen scene with some complex indirect illumination, and this is the sampling quality of rendering four samples per pixel using just the information we have about the scene upfront, without any training. If we instead render one sample per pixel to train our distribution, then delete the image and render four samples, we already get this sampling quality. And if we do the same thing after 16 training samples per pixel, this is the sampling quality of rendering four samples per pixel, which already looks quite nice. You can see that over time the sampling gets better and better.

So how do we make the sampling better and better using this approximation of the incoming light at each point? On surfaces, for example, we can directly query an approximation of the incoming light at each point. It tells you where the important indirect and direct light directions are, and you can use this to generate a sampling distribution. On a diffuse surface you can even multiply it with the cosine lobe, which gives you an almost perfect sampling distribution for evaluating the diffuse component of a material. This is really useful, especially on indirectly illuminated surfaces and, as mentioned, especially for materials with diffuse components. Here is a side-by-side comparison after rendering for 64 samples per pixel. This is also a somewhat hard scene, because there is a caustic cast through the glass table onto the floor, which means the paths need to find this caustic, and it causes really strong indirect illumination.

But we can also look at volumes. In volumes we likewise have an approximation of the incoming radiance at each point, but here we additionally support directly taking the scattering behavior of the
phase function into account when making the guided sampling decision. This helps a lot for indirectly illuminated volumes, for example a god ray: usually the sampler does not know that it should go in the direction of the god ray, even though that is where the indirect light comes from on the next bounce. This is where guided volume sampling decisions can help you. It also helps if you want to simulate subsurface scattering physically correctly, by really building an object with a bounding surface and putting a volume inside it. Here we have an example where we model the skin of an object physically correctly: a really dense volumetric material with a high albedo and an isotropic phase function. An isotropic phase function would normally mean you pick a direction uniformly at random on the sphere. But if you look at subsurface scattering, in a dense medium you usually want to go back towards the skin, and at a thin part like the ear you probably want to go through the middle of the object, because some light comes from behind.

So that was the overview of what path guiding is and a bit of how it works. The big question now is: how can I use it? With Blender
3.4, that's actually quite easy. On the CPU device, which is the version we implemented so far (we plan to also implement a GPU version), you will have this small checkbox where you can simply enable path guiding. We tried to make it really simple: there is one toggle to enable or disable it. You can play around with separately enabling path guiding for surfaces or for volumes, and there is a parameter called training samples, which tells you how many training iterations to run. These guiding distributions pretty much converge after 128 or 256 iterations; beyond that point, further training does not really bring better sampling quality, but you would still have to store all the training data and update your distribution, which adds overhead. By limiting the number of training iterations, you accept slightly lower rendering performance at the beginning, and afterwards training simply stops and you just render, which is much faster. So: not a lot of parameters, easy to use, at least that is what we hope.

Let's see how it looks in the wild. All the images I have shown you so far are also the images you usually see in path guiding papers, so the big question is how it behaves in scenes made by artists. Here we have an example scene from Jesus Sandoval, who does a lot of interior designs. This is a five-minute render without path guiding, and this is the result after five minutes with path guiding enabled. You can see, especially in the indirectly illuminated parts, that it is much cleaner in the same time. And the nice thing: it does not introduce any bias.
That means it converges to the same result, just much faster, with far fewer samples. You can even go a bit more crazy and play with more indirect illumination, such as light sources reflected by a mirror. This is an example from a Blender artist on blenderartists.org who played around with this a lot and built a scene with lots of mirrors and light sources. This is what you usually get after 1,500 samples, and this is what you get when you enable path guiding: suddenly you see that there are light sources reflected by specular materials, and they actually contribute a lot of indirect illumination, and this works out quite well. Because these are artists, they also use a lot of radiance clamping, since you would otherwise get a lot of fireflies in these scenes. So what you do is throw in radiance clamping, which unfortunately makes your image darker: yes, it clamps fireflies, but it also clamps away energy you actually have in your scene. With path guiding you might still get some fireflies, but you can set the threshold for your radiance clamping much higher, because the fireflies that do remain will be really, really extreme outliers.

Coming back to the scene I presented at the beginning, let me tell you what we actually render here. We try to simulate a physically correct underwater scenario: we have the water surface, which is a perfect dielectric material like glass, and inside we do not just apply some absorption.
We really do physically correct scattering of measured water, of a clear Bahama sea, which means we have a volume with a super strong forward-scattering phase function and a high albedo. And suddenly, in this scenario, everything is essentially a simple caustic. If you enable path guiding, you can actually get the caustic on the floor, and also all the multiple scattering inside the volume, which is a really nice thing. What I have to say here: yes, you can see caustics, but these are simple caustics. They are not super crazy ones produced by multiple interactions, caustics generated by caustics; I will say a bit more about that later. They are simple in the sense that one dielectric surface generates these effects, and this is where path guiding works quite well.

Towards the end, some advice for you as artists: what can I do with path guiding, and what should I think about when I want to use it? One thing that came up with a lot of people I talked to, who also have path guiding implemented in their production renderers, is: yes, it is a nice feature, but sometimes it is a hard sell, because artists have learned so well how to build scenes that are renderable within our current limitations. They will build the same scene as before, and then the effect of path guiding is just "oh, that's nice", because they have already changed their scene: shorter paths, fewer bounces, the actual indirect illumination replaced by a virtual light source. So now we actually have to start teaching them: hey, be more experimental, try to use fewer fakes, and only use them when you really want to tune the nuances of your scene, when you want it a little brighter here, or want to change your hero spot.
You could then try to use virtual point lights or virtual light sources only in those places, instead of mimicking the whole indirect illumination of the scene. You can try more physical light setups, let path guiding help you reduce the noise there, and focus only on the artistic freedom you actually want.

Also, coming back to caustics: yes, path guiding helps with caustics, but as I mentioned, with the simpler ones. It is not a caustic solver; it is not the holy grail that suddenly lets you build the craziest scenes. As long as the renderer itself, rendering say 2,000 samples without path guiding, sees at least a little bit of the caustic, you have a chance, because path guiding, the way we implemented it, learns from previous rendering iterations. If you build a scene where, even after 2,000 samples, the probability of generating a single path connected to the caustic is extremely low, then we cannot learn it, because we would need 1,000 or 2,000 samples just to get a hint that there is maybe something there. As long as you build scenes with a reasonably modest probability of finding something (it does not have to be high, just modest), path guiding can learn this stuff and help you.

Then there is indirect radiance clamping. It is often used pretty aggressively, because you want to reduce the noise and the fireflies. With path guiding you can now try to raise the threshold. And if you want to compare two images to check whether they look the same, you actually have to turn clamping off to know whether both converge to the same result; without path guiding that probably takes way, way longer, but with path guiding it is much quicker.

Another thing is shadow visibility.
This is also a widely used hack, but the problem is the way it is implemented in most renderers, including Cycles: the look will change when you combine it with path guiding. Shadow visibility is a biased approach; it generates an incorrect image to begin with, and if we change the sampling probabilities using path guiding, then some of the weighting inside the renderer changes the contribution you get from the light sources affected by shadow visibility. So if you want to be consistent, try to avoid using it; if not, well, you can still use it.

Now a small outlook. I presented more or less the current state. In the future we want to improve our library, Open PGL, by adding more features, for example also guiding the termination decisions of a path. At the moment, a typical renderer looks at its current position and thinks: if this is a dark surface, I have a high probability of terminating the path; if it is a bright surface, a low probability. But that says nothing about the expected contribution of the full path. The renderer does not know: OK, I bounce multiple times on dark surfaces, but do I still hit a bright light source at some point? With guided Russian roulette,
we can take this into account. On the Cycles side, the next features we want to implement are guiding on translucent surfaces as well as on glossy surface components; currently guiding only activates if your material has a diffuse component. There is also some joint work with the people who implemented MNEE, to combine the strengths of MNEE with the way path guiding can direct you towards a caustic, which could probably give us a better caustic solver, and that would help you a lot.

This brings me almost to the end of my talk. I want to say thank you, especially to the Blender team, which helped a lot with the integration and gave us the opportunity to do it, in particular Brecht and Sergey, and also to the Blender artist community: probably two hours after I put the branch online, people took it, built it, and went crazy with it. I cannot name all the people involved, but please go to blenderartists.org, look for this thread, and just look at the images they are posting; they are going nuts. This was a really nice interaction with them; they tried out the craziest stuff I would never have thought of trying, which also helped us a lot in making Open PGL, and the integration into Cycles, much more robust.

And at the end I also want to advertise another talk, from a colleague of mine.
He will give a talk tomorrow about the oneAPI backend in Cycles, and how it makes it possible to run Cycles on our new Intel GPUs. You can also visit our new Intel web page for Blender creators, where we collect interesting information about these technology sessions, some demo videos and tech blogs, and where you also have a chance to win a small hardware upgrade, some Arc GPUs. Thank you for your attention, and I hope this was informative for you.