OK, good morning. My name is Stefan Werner, I'm here on behalf of Tangent Animation, and I'm talking about volume rendering in Cycles. Just a quick disclaimer: this is a last-minute presentation. The slides were literally made on the train here to Amsterdam, and then completely redone last night. There will be mistakes and errors, and the slides are pretty ugly. And since I didn't have time to build high-quality assets and render them at high resolution, I had to do all of this on my laptop in a hotel room, so it's going to be low-resolution programmer art.

So we want to learn about volumes in Cycles. What does everybody tell you about learning? You learn from your mistakes. So obviously, if we want to learn a lot about this, we have to make a lot of mistakes. Let's do things wrong, right? This will be a very sad talk, full of failure and misery and things just not working. This is not inspirational; there is nothing fun and exciting happening here.

OK, volumes. What do you do with volumes in production? Somebody throws an OpenVDB file at you and you want to open it. Anyone tried opening a VDB in Blender? Well, you don't. It just doesn't do it. So as a developer, my to-do list gets its first item: write an OpenVDB importer. I am aware of at least three people who have written one. None of them is finished. So it's up to us developers to actually finish that and finally get a full VDB import going.

Taking one of these half-working importers, I import an OpenVDB file of that bunny. And what happens when I render? It's an 80-megabyte file, and it takes 10 gigabytes to render. How come? What is happening to all that memory? I open a debugger, I look at memory, and this is what I see: there's nothing in there. It's all empty, all zeros. Where do those zeros come from?

Well, the bunny file in this case is 577 by 572 by 483 voxels. That makes about 160 megavoxels. If we expand that to raw floating-point data, we end up at two gigabytes just to cover the entire bounding box. But because VDB is smart and does not store the empty parts, only the parts that matter, it compresses that down to 80 megabytes. We load it in a stupid way, and we get two gigabytes of raw data, most of which is just empty.

And the bunny is still a tiny file. Disney released one of their cloud files from production. It's about 2k by 1.5k by 2.5k, which comes to 6.5 billion voxels — or, if we were to load it in the stupid way, about a terabyte. The OpenVDB source file is about three gigabytes. So clearly we have work to do. Another item for my to-do list: write sparse voxels. Wait, there's a commit — somebody did that already. And we didn't merge it. This was a Google Summer of Code project from last year, and we Blender developers just did not merge it yet, or polish it, or finish it. So OK, one more thing for the to-do list: sparse voxels. Really? Do we have to do this? Yes, we do.
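As an aside: you can see just how sparse these files are without touching Blender at all. Here is a minimal sketch using the pyopenvdb bindings (the file and grid names are stand-ins; use whatever your VDB actually contains) that compares the dense bounding-box size to the voxels the file actually stores:

```python
# A rough sketch, assuming the pyopenvdb bindings are installed.
import pyopenvdb as vdb

grid = vdb.read('bunny_cloud.vdb', 'density')  # stand-in file/grid names
(x0, y0, z0), (x1, y1, z1) = grid.evalActiveVoxelBoundingBox()
dense = (x1 - x0 + 1) * (y1 - y0 + 1) * (z1 - z0 + 1)
active = grid.activeVoxelCount()

print(f'dense box : {dense:,} voxels '
      f'(~{dense * 4 / 2**30:.2f} GiB as raw 32-bit floats)')
print(f'active    : {active:,} voxels ({100.0 * active / dense:.1f}% of the box)')
```

A dense loader pays for every voxel in that box; a sparse one only pays for the active ones.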
OK, I'm here, my bunny is loading, and I can render my bunny. Finally, I get some volumes. What more could I possibly ask for, right? Well — two bunnies? Is that too much to ask for? Apparently, it is.

What's happening here? If you've seen Lukas's talk yesterday about how Cycles works inside: you trace a ray from the camera, it hits a bounding box, and then you make a decision. What is the frontmost thing your ray sees? What's the first thing it hits? Is it bunny A or bunny B? Well — either. It's random. Because both bounding boxes sit at exactly the same position along the viewing direction, it's random which one the ray actually hits first. So Cycles hits one, takes the hit point, and registers: I'm entering this bunny volume. From there it wants to spawn a new ray, and in order not to hit the same thing again, it applies a tiny offset, moves the ray a little further, and spawns the new ray from there. What did we just do? We missed the other bunny. It's not even hit; we just skipped over its bounding box. The same thing happens again at the far side: again there's a random decision about which box we actually hit. Could be this one, could be the other one — it may not even be the one we actually entered. So we end up leaving a volume we never entered, or entering a volume and never leaving it, and all hell breaks loose. The workaround for now: just move one of those boxes a teeny tiny bit, and it works. Suddenly your problem is gone. Still, this should not be happening. We need to fix this by writing a more robust traversal.

Anyone working with volumes may also have seen this problem: you have a volume, but it's just not dense enough. Somehow it just doesn't appear; it's too thin. This one actually has a density of 1,000 — I cranked the density up to 1,000 and the bunny is still half transparent. Or you may have seen this: you have a smooth volume, but suddenly strong banding appears out of nothing, like this. This is a procedural shader that is perfect spheres on the inside, and you see this banding. Where does that come from? Who creates those bands? They're not in the file, they're not in the procedural. And you play with your render samples, you increase them, it takes much longer — and it still looks the same.

This is our mystery parameter: step size. What does step size do? If you've played with it, you may have noticed that if you decrease it, some things get better, and it takes much, much longer. With ray tracing, you typically just trace a ray until it hits a surface, it bounces off, then it bounces off another surface. But for volumes, ray marching has to work differently. You have to take steps through the volume, because there's something everywhere — you're not dealing with surfaces, you're dealing with a medium that has some density at every point. The way Cycles treats this right now, it just slices the volume into bits and pieces: it steps through it in a very sequential, lockstep way, and every time it takes a step, it looks up the volume — what is my density here? — takes that as the density of the little segment it just traced, and treats the volume as if it were composed of these little slices. If your slices are close enough and dense enough, it actually works; you get the correct result you would expect. If your step size is a bit too large — well, you might still get away with it, you might still hit all the important features, and it gets faster, which is great. Then the steps get wider still, and suddenly you miss something entirely. You miss half the features, you skip over parts of your volume, and things start disappearing — or you get those bands you saw before. So it gets thinner, and you get banding.
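To make the lockstep slicing concrete, here is a minimal sketch of fixed-step ray marching for transmittance — not Cycles' actual code, just the idea, with `density_at` standing in for the volume/shader lookup:

```python
import math

def transmittance_raymarch(density_at, t_near, t_far, step_size):
    """Slice [t_near, t_far] into fixed steps and treat each slice as
    homogeneous. One volume/shader lookup per step -- this is what gets
    expensive, and what skips thin features when step_size is too large."""
    optical_depth = 0.0
    t = t_near
    while t < t_far:
        dt = min(step_size, t_far - t)
        optical_depth += density_at(t + 0.5 * dt) * dt  # sample at slice center
        t += dt
    return math.exp(-optical_depth)  # Beer-Lambert transmittance
```

Halve the step size and you double the number of lookups; that is exactly the speed/quality trade-off the step size parameter exposes.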
All right, so obviously the conclusion is that we have to take our steps as small as necessary, in a way whose render time we can still tolerate. The problem is that we're not only marching through the volume once. Shadow rays do the same thing, and your ray may actually bounce inside the volume — you may have indirect light. So you have more rays to trace, and every one of them takes all those steps, executing your shader over and over. You're doing tons of lookups, and things get really slow. So maybe there's a way we can replace that ray marching with something else.

There's a different approach out there called delta tracking. Instead of taking lockstep, fixed-size steps, it randomizes them a bit. That breaks up the pattern, so you just don't get the bands anymore. And instead of a step size that can skip over features, the step size is now based on your density: you take large steps in thin volumes and small steps in thick volumes, and — without deriving the exact details here — you're guaranteed to get the right result. You take a step, and once you've taken it, you make a random decision based on what you find: you look up the actual density, you throw a die, and depending on the result you decide between absorbing the ray (it just turns black), transmitting the ray (nothing happens, you just go on), or scattering it in a new direction. That sort of feels like the random decisions the rest of Cycles already makes, right? In path tracing you constantly make random decisions about terminating or continuing a ray, or taking this path or that path when a ray splits. So it fits into the whole framework of path tracing. And you continue like that: take steps of different sizes, look up the density, decide — scatter, absorb, or transmit? One ray stops, so it returns a black pixel. Another one might go all the way through, so it returns the background — fully transmitted. And this one is a ray that's been scattered a couple of times and actually receives some light.

All right, that sounds simple. So all we need to do is replace ray marching with a completely new volume integrator written from scratch, add a new density-bound parameter or write something that derives a density bound from shaders and voxels, and then hope it's better than before. Sure, easy. And this is actually what half the industry is doing. You can see a lot of the major studios and render engines slowly moving from ray marching to either a mixed approach or fully to this new unbiased method — which is actually not a new method at all. It's something physicists have been using for neutron transport since they built atomic bombs.
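For the curious, here is a minimal sketch of that core loop — Woodcock-style delta tracking against a density bound `sigma_max` (the majorant). This illustrates the technique, it is not Cycles code:

```python
import math, random

def delta_track(density_at, t_near, t_far, sigma_max):
    """Random-length steps sampled against the majorant sigma_max.
    Returns ('transmit', None) if the ray leaves the volume, or
    ('collide', t) at a real collision, where a second random decision
    (based on albedo) would pick between absorbing and scattering."""
    t = t_near
    while True:
        # Sample a free-flight distance: short steps where sigma_max is
        # large, long steps where it is small.
        t -= math.log(1.0 - random.random()) / sigma_max
        if t >= t_far:
            return ('transmit', None)
        # The dice roll: a real collision with probability density/sigma_max,
        # otherwise it was a "null" collision and we just keep stepping.
        if random.random() < density_at(t) / sigma_max:
            return ('collide', t)
```

Because the step lengths are randomized, there is no fixed pattern to alias against — the banding is gone by construction, traded for noise that extra samples can average away.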
All right, I'm done with bunnies. I've had it with the stupid thing; this time I'm going to use procedurals. Procedurals are nice. So I build a little volume shader and render: 10 seconds. Not too bad on my laptop, I can live with that, but it's still a bit boring — I need something more in there. So how about this shader, just another noise break-up pattern: 35 seconds. Well, some more nodes make it more interesting, right? I'm scared to press render — a minute. So: 10 seconds, 35 seconds, one minute. Same sample count, same step size; I did not change any render parameters, I was just changing that little shader.

All right, I pop the hood and look at what is actually happening behind the scenes. I am tracing, what is it, 10 million rays, and for that I need 160 million shader calls. Every single ray costs me 16 shader calls. And if I look at the actual statistics, I spend less than 10% of the time doing ray tracing; the rest is almost entirely spent shading that thing.

The approach we typically use in 2D when our procedural is too complex is to just bake it to a 2D texture. We should be doing the same thing for voxels. If we can bake our shaders into 3D, we can suddenly make things a lot faster, simply by not calling the shader so many times. Remember, in ray marching we don't just shade the thing once when we hit it — we make shader calls every single time we take a step through the volume.
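The baking idea itself is simple; here is a hedged sketch (plain NumPy, with `density_fn` standing in for the expensive node setup) of evaluating a procedural once per voxel, so rendering only pays for cheap grid lookups afterwards:

```python
import numpy as np

def bake_to_grid(density_fn, res, bbox_min, bbox_max):
    """Evaluate density_fn once at every voxel center of a
    res[0] x res[1] x res[2] grid spanning the given bounding box."""
    grid = np.empty(res, dtype=np.float32)
    mins = np.asarray(bbox_min, dtype=np.float64)
    size = np.asarray(bbox_max, dtype=np.float64) - mins
    for idx in np.ndindex(*res):
        frac = (np.asarray(idx) + 0.5) / np.asarray(res)  # voxel center, in [0,1]
        grid[idx] = density_fn(mins + frac * size)        # world-space position
    return grid
```

One expensive pass up front, then every ray-marching step becomes a cheap (interpolated) grid lookup instead of a full shader evaluation.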
So, to-do list — yeah, I'm busy for a while. And since I was already busy last night and the night before, I couldn't prepare slides for every single item on my even longer to-do list, so let me just throw out a few of them.

We currently have no motion blur for volumes in Cycles at all. It's a big requirement for most productions: you want motion blur on the entire scene, and of course on your volumes too.

A bit of advice: do not combine clamping, branched path tracing, and overlapping emissive volumes. It can give you headaches. Sadly, it works as designed, as intended — just not in a pleasant way. You will get visible lines where those volumes overlap, and you cannot get rid of them until you change one of those things: move to path tracing, or ditch clamping. I have spent weeks, literally weeks, building workarounds for that.

We also need importance-sampled volume emission. Right now you could have a giant volume data set with billions of voxels, a couple of them very, very bright. Cycles would not know there are bright ones in there; it would just search for light anywhere, get nothing most of the time, and only very rarely hit a voxel that actually emits light. That is something we need to address, and there is research — there are papers — on how to do it.

Nested volumes and dielectrics are a problem. If you apply world fog to your scene, that fog is inside of everything: if I put a glass of water into a foggy room, the inside of the glass will not only have water, it will have fog as well. Some of you may have tried to render water in a transparent container, and then you have the problem of the overlapping boundaries. There are methods for addressing that too; we only need to go out and actually implement them in Cycles.

What I have not touched on at all: besides OpenVDB, there are other file formats, libraries, and data structures. There's GVDB, which is an OpenVDB-style structure for the GPU. There's Open VKL, just released last month as an alpha by Intel. Field3D is an alternative to the OpenVDB file format, and it has features especially for motion blur and production. So if anyone here is a developer who's feeling slightly bored and doesn't know what to do with a month or half a year of time, feel free to reach out — I have plenty of work for you.

All right. Let me end this not-so-inspirational talk on a little bit of a high note. Despite all of that, you can still use volumes in Cycles and make crazy good things. There's a talk right after this one, in the upstairs room, by Gleb Alexandrov about the volume work he's doing — pretty good stuff. And just to show off a little: this is what we did at Tangent for Next Gen. This is all volumes — emissive volumes, fog, smoke, explosions everywhere. And we made this work despite all those limitations. And of course, let's not forget the Spring team, who did some really good volume work for their film as well.

So please don't be discouraged by this. There are lots of open problems — things we as developers can do in Cycles to make this a lot better, render faster, render at higher quality. But as an artist, you can already make great things with it, and please continue doing so.

References: I did not pull all of this out of thin air. If you're interested in more technical details, these SIGGRAPH courses are great — front-to-back introductions to all of these topics, starting from nothing and going all the way to the latest research. There's the OpenVDB publication that explains the whole data structure. There's the robust next-hit traversal paper, which actually solves the problem of coplanar surfaces — something we see not only with volumes; if you put two transparent planes in the exact same location, Cycles will always miss one of them. And the spectral and decomposition tracking paper is probably one of the most promising approaches to doing this delta tracking in a smart way, without having to trace too many rays and without having to do too many volume lookups.

Thanks in the end certainly go to Brecht. It may sound like I've been trash-talking Cycles here, but no — Cycles is actually really good, and the work Brecht has done on volume rendering is amazing. He was very good at keeping up with the latest tech for as long as he could, until he got pulled into 2.8 release work. Of course, thanks also go to all the researchers who published their papers — I've read tons and tons of them over the last couple of years, about how people deal with all of these volume problems in production and in research. And thanks go to all the artists at Tangent Animation who really pushed things to their limits and always gave me new scenes to bang my head against, just to find out why things were not working the way they should.

So with that: any questions? Or should I just cycle through my bonus slides and give a more in-depth explanation of how things work? Is there any interest in popping the hood and going more into what Cycles is actually doing? All right, bonus slides it is. Let's use those five minutes.

If I take something simple like this, and I turn Suzanne into a volume, how does Cycles actually determine what's happening? The easy case is homogeneous transmittance. I have a volume with the same density all the way through, and I want to find out how much light the stretch in between swallows — how much is absorbed in there. So I take the point where the ray enters, the point where it exits, and for something homogeneous there is a compact little math formula I can just plug those into, and it gives me the exact result. After that, I just continue my ray and ray-trace as normal. If I do the same thing for multiple intersections, I just do it multiple times — that's all there is.
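That compact formula is the Beer–Lambert law: for a constant extinction coefficient, transmittance over a segment is a single exponential. A tiny sketch:

```python
import math

def homogeneous_transmittance(sigma_t, t_enter, t_exit):
    # Beer-Lambert: T = exp(-sigma_t * d) for a segment of length d
    # through a medium with constant extinction coefficient sigma_t.
    return math.exp(-sigma_t * (t_exit - t_enter))

# Multiple intersections just multiply:
# T_total = T(seg1) * T(seg2) * ...  (the optical depths add up).
```

No random numbers are involved here — just one exponential per segment.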
And what you get there is a noise-free result. That's the ideal for us — low samples and no noise. Beautiful.

Now, once we add light to it, it gets a bit more complicated. I want to know how much light I receive over that entire segment, and suddenly there are more factors than just the ray length. I need to know what is happening inside that triangle there — how it overlaps my Suzanne, how it overlaps the mesh, all of that. For this there is no closed formula, no single expression I can plug in to get the exact result. So we do the same thing we do in regular path tracing: a whole bunch of random things, and we hope it works out in the end. I pick a random point on the segment, trace a ray to my light source, again use the front and back intersections, plug in my formula, and now I know how much light reaches that very specific point on my line. I do this again for a different point, and another point — however many samples I'm taking — and I actually get light in my volume. That's single scattering for a homogeneous volume. And because I have to make those random decisions instead of having one formula that hands me the answer, I get noise. That's one source of your noise.

For homogeneous multiple scattering, where the light actually bounces around, it gets more complicated still. I pick my point, then pick a random direction and start a new ray. Same thing again from the new position: pick a point, start a ray, pick a point, start a ray, and I end up somewhere. I may end up at the background, I may end up at a light source, and I can still trace my usual light connections along the way, as before. Now it's multiple scattering, and that is much, much closer to what things like clouds and snow actually look like. Snow is entirely multiple scattering — if you trace it with only single scattering, you get a black sphere with a white half. And now a lot of light actually gets through to locations that were in shadow before.

Subsurface scattering is actually the limit case of this. If I make the volume more and more dense, I end up with what is approximated as subsurface scattering; if I make it extremely, extremely dense, I end up with something like a diffuse surface. Because this is complicated and noisy and slow, we avoided it for subsurface scattering for a long time and used approximations instead: you had an entry point and an exit point, and based on those two points a formula gave you an approximate result for how much light comes through. The brute-force subsurface scattering that was introduced in Cycles recently uses a version of this that is very much optimized for subsurface scattering — but yes, it's the same thing happening there.
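A hedged sketch of that random walk, with a hypothetical `scene` object standing in for what Cycles would actually provide (exit distances, light connections, phase function sampling); positions and directions are assumed to be vector types such as NumPy arrays:

```python
import math, random

def random_walk(pos, direction, scene, max_bounces=64):
    """Homogeneous multiple scattering: sample a scatter distance, connect
    to the light at each scatter point, pick a new direction, repeat."""
    radiance, throughput = 0.0, 1.0
    for _ in range(max_bounces):
        t_exit = scene.exit_distance(pos, direction)      # hypothetical API
        # Distance to the next scatter event, sampled from exp(-sigma_t * t).
        t = -math.log(1.0 - random.random()) / scene.sigma_t
        if t >= t_exit:
            # Walked out of the volume: pick up background/surface lighting.
            return radiance + throughput * scene.background(direction)
        pos = pos + t * direction
        radiance += throughput * scene.direct_light(pos)  # shadow ray + transmittance
        throughput *= scene.albedo                        # energy lost to absorption
        direction = scene.sample_phase(direction)         # e.g. isotropic phase function
    return radiance
```

Drop the loop after the first light connection and you have the single-scattering estimator from a moment ago; let the density grow and this walk is essentially what brute-force subsurface scattering performs.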
And then things break apart as soon as we don't have a uniform density. Once we lose that closed formula, we don't know how to calculate the result directly anymore. That's where ray marching came in: as I showed before, we just slice the volume and treat it as a sequence of homogeneous segments, and that's how we can still render these in Cycles for transmittance.

The same goes for single scattering — then I end up with many, many more slices — and repeating it for heterogeneous multiple bounces means lots and lots of slices and the long render times I mentioned before. But still, it's stuff we can render. And again, there's the step size parameter from before: a small step size gives a long render time and a decent result. I increase the step size and it gets much faster — and here I've already lost a little bit of density. As I go further up in step size it gets faster still, and suddenly I lose my volume; it disappears, gets thinner and thinner. Or I can have a really beautiful render, increase the step size, and banding comes in. More step size, more banding.

So the real challenge right now, if you're doing lots of volume rendering in Cycles, is finding the one step size that matches your entire scene — that makes everything in your view render properly. You may have something very dense, very detailed, and very far away, and the unfortunate part is that to get there, you have to trace tiny steps all the way. We don't have any per-object or per-location step size parameter at this point.

All right — I think at this point we're lining up for the next talks, so I'm going to call it quits here. Thanks for listening, and feel free to ask questions.