Morning everyone. That's way more people than I expected. Okay, welcome to this talk about the tech of Gold. You might wonder what Gold is: it's the working title of an upcoming Blender open movie project, in case you missed the announcement. We didn't make too much buzz about it yet, but with the past project out of the way we'll start posting much more. Story-wise, it's an exploration of how fragile we are and how resilient we can be through transformation and adaptation to change. On a creative level the movie uses a painterly style; it's a visual-poetry type of thing. Here are a few early concepts and early renders of how we think the movie could, or should, look. The world consists of several parts: an underwater part, an above-the-water part, and an on-the-water part, and you can already see that it's quite different from what we've done so far with the more photorealistic movies at Blender Studio. And if that's not enough, the movie goes even deeper, to something we call the Beast. I won't go into more detail yet; it will come back a bit later. But you can see how challenging this could be for the rendering pipeline. So for us, it was clear.
Okay, so we needed closer collaboration between developers and the artist team, which on previous projects wasn't as close as we would have hoped. So we sat down together with all the creative people, the directors, the render team and the other Blender developers, and we came up with a plan; the file coming out of that meeting was literally called plan.txt. We highlighted a few topics to work on in a couple of different areas. For Cycles, lighting and normal controls were the number one priority, and something which came up later is brush stroke rendering. There are also a few topics for the compositor, even though it's not yet clear how exactly the work will break down between what's done at render time and what's done in the compositor; we want to be ready for either choice. And at this moment I bring in my colleague Brecht, with whom I work together a lot.

Okay, so I'll talk about the Cycles rendering parts first. One of the features we thought we would need was light linking. This is not just a request from the movie project: we've known for a long time that this is one of the highest-voted feature requests from users as well, so we thought we could solve both at the same time. The motivation for this particular movie is that it's stylized, so you want to be able to do more tricks with lighting, selectively apply it to certain things, and generally be more artistic instead of doing a pure physics simulation. This feature is basically two things: light linking and shadow linking. Light linking means that for a given light you can say: affect just these objects in the scene, and nothing else. You might want to add a rim light to one character, for example, but you don't want a rim light on the entire environment; you just want to make the character stand out. Shadow linking is something you need once you have light linking: if you're adding a rim light to one character, you generally don't want the environment to block that rim light while you're lighting that one character. So you want to be able to exclude certain objects from casting shadows for certain lights. The user interface for this, as the screenshot shows, is that on a light you can set a collection, and that collection contains an arbitrary number of objects or collections. That defines the objects the light affects; alternatively you can use it as an exclusion list and say the light affects everything except these objects. There's been a lot of discussion before about how light linking should work.
We went with this design for two main reasons. One is that it's the industry-standard way of doing things, which we know works, and it means you can potentially export it to USD, and it's compatible with other renderers that might want to integrate with Blender. The other reason to put it on the lights, rather than on the material as with the old shadow linking in Blender Internal, is that when you're setting up a shot it's generally more convenient to customize the lights and add things to them, rather than adding things to all the objects in the scene, because then you need a bunch of overrides and it gets really complicated. So that's the motivation. And here is a very simple programmer-art example of light linking: there's a white light that affects the entire scene; a yellow light behind the ground that lights the two monkeys from below, with the ground not casting shadows for that light; and then a red light on one monkey and a green light on the other. This is just a taste of what you can do.

Now we'll go into the implementation details. There are two sides to this: one part on the Blender side and one part on the Cycles side. On the Blender side, we involve the dependency graph. Linking is specified on the light object, but for rendering you want the data on the receiver objects or the shadow blocker objects, so we have to invert that relationship, which is exactly what the dependency graph is good at. We also want a format that's really efficient to evaluate at render time and that works on a GPU, and the basic idea is to use bit masks for really quick comparisons. For this, we find all the unique sets of lights that surfaces or shadow blockers in the scene might use. Certain receivers might use one set of five lights, other receivers another set of seven lights; we figure out all the unique sets that might affect a receiver or, alternatively, a shadow blocker. Then we assign a bit to each of these sets, and that becomes a bit mask. A bit mask comparison is a really simple, very quick operation that we can do during rendering: I'm at this shading point, I have this light, do they affect each other? One comparison, and you get yes or no. That's easy, but it's not ideal yet, because it can be inefficient: you might pick a light for a certain shading point that turns out to have no influence, and if you traced the path all the way up to that point, that's a wasted sample. So when you use the light tree, we have an optimization: for every set of lights we build a specialized tree. Maybe I should explain what the light tree is, because it's a relatively new feature. The light tree in Cycles looks at all the lights in your scene and puts them in a big tree structure, so that for a given position in the scene it can quickly find the nearby lights that are likely to affect the shading point. To make sampling efficient together with light linking, we first build a light tree over all the lights in the entire scene, and then for every set of lights we build a specialized tree: we take just the subset of the tree.
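The bit-mask scheme just described can be sketched in a few lines of Python. These are hypothetical names, not the actual Cycles data structures: each unique light set gets one bit, every receiver stores the bit of the set it uses, every light stores the bits of all sets containing it, and the render-time test is a single bitwise AND.

```python
def build_light_masks(receiver_light_sets):
    """receiver_light_sets maps a receiver object name to the frozenset
    of light names that may illuminate it."""
    unique_sets = {}  # frozenset of lights -> bit index
    for lights in receiver_light_sets.values():
        if lights not in unique_sets:
            unique_sets[lights] = len(unique_sets)

    # Each receiver gets the single bit of its set.
    receiver_mask = {
        obj: 1 << unique_sets[lights]
        for obj, lights in receiver_light_sets.items()
    }
    # Each light gets the union of bits of every set that contains it.
    light_mask = {}
    for lights, bit in unique_sets.items():
        for light in lights:
            light_mask[light] = light_mask.get(light, 0) | (1 << bit)
    return receiver_mask, light_mask


def light_affects(light, receiver, light_mask, receiver_mask):
    # The render-time test: one bitwise AND per (light, shading point).
    return bool(light_mask.get(light, 0) & receiver_mask[receiver])
```

Note how receivers with identical light sets share a single bit, which is what keeps the masks small even with many objects.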
That subset is what's relevant for that particular light set, for that set of receivers; we build a smaller tree from the big one. And we have some tricks here, because the point of the light tree is that you can potentially have millions of lights in a scene and still render efficiently. If there are only small differences between certain surfaces, we don't want one tree with a million lights for this surface and another tree with another million lights for that one. So we have machinery to share subtrees of the full tree between different light sets where possible. That allows you to have potentially millions of lights in your scene, and dozens of different light linking configurations, and still have reasonable memory usage. That's the idea behind it.

Shadow linking was actually the more challenging part. Conceptually it's simple: you trace a ray from the shading point to the light and filter out some objects with the bit mask. You hit an object, check whether it's relevant, and maybe skip it. But there's a complication. In Cycles, from the beginning, we had an assumption of physically based rendering which allows a certain optimization, one that a lot of other renderers adopted as well, and it doesn't work when you do shadow linking. Let me explain. For direct lighting there's this thing called multiple importance sampling. It's very technical and I'm not going to go into the math, but basically you have two strategies for sampling a light. One is: you're at a shading point, you pick a point on the light, you make a connection to it, and you add that light's influence to the shading point. That works with very little noise if you have a small light and a diffuse BSDF, but with a very specular BSDF or a large light it stops working well. So there's a second strategy: when you do an indirect light bounce, you might incidentally hit a light, just like you might hit some geometry. That second strategy is a problem for us, because we used the trick of combining indirect lighting and the second light sampling strategy into a single traced ray, since physically they are the same thing. But when you start doing light linking tricks, you have to distinguish plain indirect light from light affected by shadow linking. Basically it means that instead of tracing two rays, we sometimes need to trace three rays when you use shadow linking, which is a bit more expensive and adds a bunch of complexity to the kernel. How that works exactly is too long for this talk, but we had to do a whole lot of refactoring in the kernel, and it was very complicated, but we got there in the end. So now, with shadow linking, you sometimes trace three rays instead of two. The basic feature is there in Blender 4.0, but there's still more work to be done.
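As background on the multiple importance sampling just mentioned: the two strategies are blended with weighting functions, and the textbook forms (Veach's heuristics; this is the standard formulation, not Cycles' exact code) are tiny:

```python
def balance_heuristic(pdf_this, pdf_other):
    """Veach's balance heuristic: the weight given to a sample drawn
    from the strategy with density pdf_this, when a second strategy
    with density pdf_other could have produced the same sample."""
    return pdf_this / (pdf_this + pdf_other)


def power_heuristic(pdf_this, pdf_other, beta=2.0):
    """The power heuristic (beta = 2), which usually reduces variance
    a bit further than the balance heuristic."""
    a, b = pdf_this ** beta, pdf_other ** beta
    return a / (a + b)
```

The weights for the two strategies always sum to one, so the combined estimator stays unbiased while favoring whichever strategy sampled the point more densely.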
The main piece of remaining work, I would say, is the user interface. In other applications you usually have an interface where you can view things either from the point of view of the receivers or the point of view of the emitters, with a specialized editor to set up the links. We currently just have this in the properties panel, which is very flexible, but not necessarily the most intuitive. So we still want to add some sort of editor that gives you a global overview of all light linking in the scene and lets you make the connections. Another limitation is that there's no light linking for the world at the moment. We would like to solve that by implementing another feature people want anyway: a dome light type, so you can have multiple world lights in your scene and rotate them, which is usually available in other renderers. If we add that, we automatically get light linking for worlds. Another request: all of this is for direct lighting, but for indirect light there's the concept of trace sets in other renderers, which is similar, but you define collections of objects that are visible to indirect light. That would be a good extension. Then there's EEVEE support, of course. The nice thing is that some of the work is on the Blender side, so it can be shared between Cycles and EEVEE, which hopefully will make that faster. The last point is better integration with USD and external renderers. I know some external renderers already have their own light linking mechanism; they might continue to use it or switch over to the Blender-native one. We'll see what happens there. So yeah, that's light linking.
And then there's a second Cycles topic, which is a lot more vague: normal controls and, more generally, stylized shading. Here's an example where the normal isn't just the true geometric normal. Or, I don't know if this is a concept or a render; sometimes it's hard to tell. Basically, for a painterly style, one thing you might want to do is make your normals flatter, more planar in certain areas, so the shading reads like brush strokes, which tend to be flat rather than forming a really smooth gradient. You might do all kinds of tricks: add surface detail along the shadow edges to make them jagged, or apply some texture pattern to make things more interesting. Normals are a big lever for controlling that kind of thing. It's easy to set a different normal, but then for the renderer it gets tricky, because you have your real geometry and your fake normals, and they're not the same thing. If you just use the fake normals, you get all kinds of rendering artifacts. Some of them were much worse before, and we've solved a bunch of them, but that was in the context of realistic rendering. In stylized rendering it's a little different, because there isn't necessarily one correct result: you're doing something that doesn't really have a physical equivalent.
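A minimal sketch of the "flatter normals" idea, assuming a simple blend between the smooth shading normal and the flat face normal; this is an illustration only, not the team's actual setup (in a shader you would do the same with a Mix node feeding the Normal input):

```python
import math


def flatten_normal(shading_normal, face_normal, amount):
    """Blend the smooth shading normal toward the flat face normal.
    amount = 0 keeps the smooth normal, amount = 1 gives fully faceted,
    brush-stroke-like shading. Plain lerp followed by renormalization."""
    n = [s + (f - s) * amount for s, f in zip(shading_normal, face_normal)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```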
So we have to figure out what we want to do there, or even what it should look like exactly. I'll briefly go over the existing tricks we use to compensate for this discrepancy between the fake normals and the real geometry. All of this is aimed at guessing what things would look like if the surface actually were that smooth, or actually had that displacement detail, while the geometry we have is in fact just flat. There are three tricks. One: if you have smooth normals on low-poly geometry, we pretend the shading point actually lies on some virtual smooth geometry. We look at the normals and compute, as in the top image, a virtual smooth surface (the green line), and we move the shading point onto that fake smooth geometry, which avoids a bunch of self-intersections and other things that don't look good. The other two tricks are for bump mapping, where we distinguish two cases. One is diffuse bump mapping. As you can see in the second image, the sampling hemisphere should be fully above the surface, but because of the bump map it's tilted slightly into the surface, which means some of the rays go below the surface. Especially near the cutoff point you get really sharp artifacts, the classic shadow terminator problem. The trick we use there is based on a paper from Disney, I think, but it's a trick that goes back a long way in different variations.
Basically, you pretend: okay, we're dipping below the surface, so we're probably on an incline, and that means we would probably get some extra shadowing anyway from all that detail on the surface. So in this region we add some artificial extra shadowing to cover up the artifacts you would otherwise get. In one way it's a trick; in another way you can justify it with microfacet shadowing theory. And then it more or less looks like a real surface with detail. The second trick is for specular bump mapping. There, if you look at the surface straight on, things work pretty well, because the rays just come back at you, but at a grazing angle the reflected ray might actually dip into the surface. So we do a little trick where we pull the normal back a bit, so that the reflected ray stays above the surface, and that works pretty well. Those are the tricks we have now. Then the question is: what are the tricks for the stylized stuff? We don't really know yet. We're still trying to figure out with the team what exactly the look is that they want, and which changes we need on the Cycles side to accomplish it, so it's still very much up in the air. But we made some initial changes. We already did some work on improving the bump mapping corrections, the last two tricks I mentioned, to make them more correct and more consistent, and that also fixed some cases for realistic rendering, which is nice. The other thing is that there's now a control to disable all these bump mapping corrections, in case you decide you don't care about them. As for the vague ideas we have to make things work better for stylized rendering, if we even need them, it's still unclear at the moment.
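To illustrate the specular pull-back trick, here is a toy version that blends the shading normal toward the geometric normal just enough to keep the mirror reflection above the surface. It uses bisection for clarity; this is an assumption-laden sketch, not Cycles' actual closed-form correction.

```python
import math


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]


def reflect(incoming, n):
    # `incoming` points toward the surface; standard mirror reflection.
    d = dot(incoming, n)
    return [i - 2.0 * d * nc for i, nc in zip(incoming, n)]


def pull_back_normal(shading_n, geom_n, incoming, steps=32):
    """If the mirror reflection of `incoming` about the bump-mapped
    shading normal dips below the real surface (negative component
    along the geometric normal), blend the shading normal toward the
    geometric normal just enough to keep the reflected ray above it.
    At blend = 1 the reflection is always above the surface, so the
    bisection invariant holds."""
    if dot(reflect(incoming, shading_n), geom_n) >= 0.0:
        return shading_n  # already fine, nothing to do
    lo, hi = 0.0, 1.0  # blend factor toward geom_n
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        n = normalize([s + (g - s) * mid for s, g in zip(shading_n, geom_n)])
        if dot(reflect(incoming, n), geom_n) >= 0.0:
            hi = mid
        else:
            lo = mid
    return normalize([s + (g - s) * hi for s, g in zip(shading_n, geom_n)])
```

The point of the sketch is only the geometry of the fix: head-on views pass through untouched, and only grazing configurations get their normal nudged.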
One of the ideas is to say: my normals locally have no relation to the local geometry, so I'm just going to ignore local self-shadowing and only let far-away objects cast shadows. That would be one trick. Another: other renderers have these things called light filters. You can do very similar things with light shaders in Cycles, but there are a few limitations, so we're thinking that, if necessary, we could extend light shaders with a few more features for artistic control. And then there are tricks you'd want to do with ray visibility, where some current limitations make them difficult; with the shadow linking changes we can now make those work better. So that's another thing we're considering: making ray visibility work better for this case. But this is all very vague, so we'll see what we end up with. That's what we're thinking about currently, and that's the end of my part.

Now we're going into even fuzzier territory, because so far there were at least some solutions. The nature of the movie is that it should all consist of paint strokes; that's the look we're trying to achieve, so we need to render those strokes. Here's a comparison of the concept art and an early render based on it; in the right-hand corner you can see a zoomed-in version of the coral. The basic idea is that all the surfaces get broken down into individual strokes, and the strokes come from an atlas which was created by actually making brush strokes with real brushes, scanning them, and bringing them into Blender, magic from Simon, I believe. Works great.
So that's roughly the idea of how it could work. Now the challenges. All the surfaces get broken down into those strokes, but we also want to give control to the animation department: if they need to add something, to emphasize something, it should be super easy for them to add extra strokes to the final frame during animation. When we started having those discussions, the only available feature in Cycles and Blender was Grease Pencil, Grease Pencil 2 I believe it was back then, and it had very limited control over shading. You could do some stuff, but not really. The idea we had back then was to be able to convert Grease Pencil strokes to meshes, so you benefit from all the features available for shading and distorting your geometry, in and outside of Cycles. Luckily, the Grease Pencil 3 project is going on; it already has some support for geometry nodes, and for that I'd refer you to Falk's talk from yesterday. With Grease Pencil 3 it's already possible to achieve what was needed: conversion to meshes and more seamless integration into Cycles. There are still some open topics, I believe. For example, if you add strokes on top of existing geometry, we want to somehow transfer the shading to the stroke more easily, so you don't need to restart the shading from scratch for every stroke you add.
That brings us to the next steps. We really want to finish Grease Pencil 3 and make it available to everybody, outside of the experimental features. Also, if all the surfaces get broken down into strokes, memory consumption becomes a real challenge for rendering. So one of the ongoing discussions is how to make this more efficient for rendering: can we bake something? But baking in Blender is, let's say, not much fun to use, so there's an ongoing development discussion about what needs to be improved for this specific case, because you can go very, very deep just improving baking in Blender, and we want to leave enough room for the more important topics, both for the movie and for regular development. The other aspect that keeps coming up is pigment-based color mixing, rather than just light-based mixing. Simon did early prototypes with geometry nodes, and it works pretty well for a technical-demo type of thing, but it doesn't feel like something I'd be comfortable handing to all the animators. It feels like we could do a much better integration of these tools, maybe in the renderer or shading somehow; I don't know, it's ongoing, we'll see where it goes. Here you can see the results of the early pigment-based mixing. Okay, so now we go into more fuzzy territory.
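On the pigment-based mixing just mentioned: the standard single-constant Kubelka-Munk model is a common way to get paint-like rather than light-like mixing. Whether Gold's geometry-nodes prototype uses this exact model is my assumption; the sketch below is just the textbook form.

```python
import math


def km_mix(reflectances, weights):
    """Single-constant Kubelka-Munk pigment mixing, per channel.
    Convert each reflectance R in (0, 1] to an absorption/scattering
    ratio K/S = (1 - R)^2 / (2R), average the ratios by pigment weight,
    then convert back with R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    total = sum(weights)
    channels = len(reflectances[0])
    out = []
    for c in range(channels):
        ks = sum(w * (1.0 - r[c]) ** 2 / (2.0 * r[c])
                 for r, w in zip(reflectances, weights)) / total
        out.append(1.0 + ks - math.sqrt(ks * ks + 2.0 * ks))
    return out
```

Unlike additive RGB averaging, this gives the painterly behavior artists expect: mixing a blue and a yellow pigment yields a green, not a gray.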
Well, or maybe clearer territory, depending on how you look at it. Since it's still not really clear what the exact breakdown will be between what gets rendered painterly and what we cheat with post-process painterly filters, the compositor was one of the topics we looked into. The idea is to either remove detail from the background or give a more faceted look to the shading as a post-processing step, in case the rest turns out not to be feasible because of render times, memory consumption, or other aspects. Here are a few examples of where we think it could work. What we did is add a Kuwahara filter to our compositor system. What it does is fairly self-explanatory: it converts something realistic into something more painterly, and it does so in a way that preserves edges and the overall shapes of the objects in the scene and in the frame. We also added a variable control for the radius, how painterly things become in a local area, which gives you the ability to say: I want a more painterly look in the background, or in the foreground, or vice versa. So it's all under artistic control. Surprisingly, it's mostly stable in animation: if you give it clean, realistic video to convert, it's stable. There's still an ongoing investigation into feeding it noisier images; we want it to cope with color render noise, and there seems to be a bit of a challenge there, but we're looking into it. The basic idea is that for each pixel of the image you take a neighborhood of that pixel, divide it into a number of zones, pick the color of the smoothest zone, and replace the pixel's color with it, with the zones aligned based on the edge orientation in that area. That's the basic idea, and if you follow the original paper's implementation it gives a very beautiful result. Very slow, but beautiful. The other aspect is that it's not really compatible with a GPU implementation.
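The classic isotropic Kuwahara filter just described, in a direct (slow) form; the actual compositor node also handles the anisotropic, edge-aligned variant, which this sketch leaves out.

```python
def kuwahara(image, radius):
    """Classic Kuwahara filter on a 2D grayscale image (list of rows).
    For every pixel, look at the four (radius+1) x (radius+1) quadrants
    that share the pixel as a corner, and output the mean of the quadrant
    with the smallest variance. This flattens detail into painterly
    patches while keeping hard edges intact."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    # Each quadrant as inclusive (y0, y1, x0, x1) offsets from the pixel.
    quads = [(-radius, 0, -radius, 0), (-radius, 0, 0, radius),
             (0, radius, -radius, 0), (0, radius, 0, radius)]
    for y in range(h):
        for x in range(w):
            best_mean, best_var = image[y][x], float("inf")
            for y0, y1, x0, x1 in quads:
                vals = [image[yy][xx]
                        for yy in range(max(0, y + y0), min(h, y + y1 + 1))
                        for xx in range(max(0, x + x0), min(w, x + x1 + 1))]
                mean = sum(vals) / len(vals)
                var = sum((v - mean) ** 2 for v in vals) / len(vals)
                if var < best_var:
                    best_var, best_mean = var, mean
            out[y][x] = best_mean
    return out
```

The edge-preserving property is easy to see: for a pixel next to a hard edge, at least one quadrant lies entirely on the pixel's own side, has zero variance, and therefore wins.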
Luckily, there is the summed-area table approach, an acceleration structure which lets you compute the sum of the pixels in a specific region very efficiently. Using it, you can compute the per-zone statistics very quickly. It still has the downside that you need to compute a cumulative sum over the entire image, so you might run into floating-point precision issues on a GPU, but there are tricks, like offsetting all the values by the mean, or by 0.5; all sorts of tricks you can do. So it ends up working very well, and it's almost real-time-ish on a GPU.

The next compositor topic is procedural texturing. We want to give artistic control in post-processing, for example to break up the smooth edges of shadows on a surface and then recombine those broken shadows back into the image. That's not necessarily easy to do at render time, but it feels like it should be easy in the compositor, with all the shadow and similar passes available. So how could you add some noise to, essentially, the shadow pass? Well, you can, if you go and create a brush, create a texture for it, open another editor, edit the texture there, and keep it side by side with the compositor. It's possible, but it's clearly cumbersome. So it feels like we can make something much more straightforward for artists, so they don't need all those cumbersome workflows. It turns out you can build a rough prototype quite easily, but it became apparent that there are some design challenges. The biggest one, I would say, is the fact that so far we have been trying to move the compositor toward strict left-to-right evaluation, which is the easiest for artists to understand.
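The summed-area table mentioned above, in miniature: build one cumulative table in a single pass, and any rectangular sum afterwards costs just four lookups, which is what makes large Kuwahara neighborhoods cheap.

```python
def build_sat(image):
    """Summed-area table: sat[y][x] = sum of image[0..y][0..x]."""
    h, w = len(image), len(image[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row = 0.0
        for x in range(w):
            row += image[y][x]
            sat[y][x] = row + (sat[y - 1][x] if y > 0 else 0.0)
    return sat


def region_sum(sat, y0, x0, y1, x1):
    """Sum over the inclusive rectangle [y0..y1] x [x0..x1] in O(1)."""
    total = sat[y1][x1]
    if y0 > 0:
        total -= sat[y0 - 1][x1]
    if x0 > 0:
        total -= sat[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1][x0 - 1]
    return total
```

Because the table accumulates over the whole image, its values can grow large; that is exactly the floating-point precision concern mentioned in the talk, and why offsetting pixel values before accumulation helps on the GPU.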
Okay, so: I have this image, this is what happens to it next, then next, and at any stage you know your resolution and so on. With procedural textures you can mimic the same behavior, but it kind of goes against what happens in the other node areas of Blender. The other aspect is that currently only images flow through the compositor's noodle, so you don't have easy access to other data, like the bounding box of your canvas at a specific stage of the evaluation, which also gets in the way of strict left-to-right evaluation. For a modular implementation we would want to pass other data than just floats or colors. And on a product level there's a question: do we value more the easy path for people who already work in other VFX software to drop into our compositor and do their thing, or do we prioritize integration with the other node systems in Blender and borrow their ideology? That brings us to the next steps we want for the compositor: finish the design of procedural texturing, including the things I already mentioned. One thing I didn't talk about before: we also probably want to be able to share node groups between different node systems in Blender, and that adds an extra constraint on how we approach the design of procedural texturing in the compositor. One of the ideas is to use an approach similar to geometry nodes, which is called fields. Fields conceptually evaluate right to left, and we already have visualization tools for them, so maybe that will be the answer to the confusion, making it so artists are always aware of what happens where. But we could also add some inputs and keep things reading more left to right. It's all work in progress; we're working on finalizing that design.

Which brings us to the next topic: the GPU compositor. We're potentially a compositing-heavy project, so more performance is better; you can't go wrong making something ten times faster, right? Currently Blender has three compositors. One is the tile-based compositor, which is what's available by default; it's implemented for the CPU, and it's what a lot of people use. Then a lot of work was done on the full-frame compositor, which is still CPU-only and has almost feature parity with the tile-based one, though there are some differences and missing features; the idea behind it is that it's much more efficient from a performance perspective. And we also have the real-time GPU-accelerated compositor, which is currently limited to the viewport. So why have three compositors? Let's make the GPU compositor the only one the work goes through, because it's the fastest. How do we get there? As mentioned, we want the GPU compositor to also be used for final (F12) renders, and there is some initial work done for that under the experimental options.
So you can already benefit from it Now for the missing nodes a lot of work was done since the beginning of the project of the Like the meeting for the goal project In blender 4.0 this Like it's not that many missing nodes left to to tackle and probably they're not that important for this project But we still want to finish them Full float precision is required to have for the proper behavior of cryptomat passes and it's not sounding that was possible in a gpu before because of the member consumption of throughputs to the gpu perspectives and performance and whatnot, but Work being made to support full float on a gpu compositor Other aspect is that currently in the three and gpu compositor don't have access to pass us Okay well, you you you can current it's a bit hard because like um Some work was already done and it's like putting in perspective like what the timeline Anyway, so it wasn't possible before and it's now possible in the for the f12 Experimental feature of the gpu compositor, but it used imagesty, which is not necessarily most Efficient way because of all this round trips from the gpu to cpu back to the gpu So the idea we have is that allow render result to have Ability to have Pass data as gpu texture on a gpu While currently it's only possible to have it on a cpu so Then then we kind of have like more natural well, it's okay. 
So that's your render result: you feed it to the compositor, the compositor handles it, and it doesn't really care where the data is coming from, and you get good performance without too much branching in the code. The design is done; it just needs some time to finish and implement.

Another remaining aspect is how the different compositors handle the canvas. For example, when you alpha-over two images with different resolutions, what happens then? Or what happens if you rotate something, then blur, then rotate back? Before we started tackling those points, all the compositors would give different results. Now we are trying to align their behavior, so you can swap implementations much more transparently. Because while artists can use the GPU compositor on their workstations, that won't necessarily work on the render farm, which might not have a GPU. So there should be a way to swap the implementation without any regressions in the compositing of your frame.

The future work is basically to implement all the missing stuff mentioned above. For now, just removing the tile-based one and limiting us to two compositors will already be good, and probably eventually we'll also go to a single, GPU-based compositor. And we keep working on the procedural aspects of the compositor as well.

That's all we have in terms of tangible things we've been working on, things that have design docs or commits in Blender. There are still a lot of topics which are, so to speak, under development, which also come with the requirement that we need to work with the artists, because a lot of stuff is still unknown. Is it even a topic? How are we going to solve it? Do we have everything for it? Do we put developer time on it? To list at least a few of them: you've seen the ocean. Well, we need to simulate it, right? Okay.
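As a quick aside on the canvas question above, here is a toy Python sketch of one possible rule; this is an assumption for illustration, not necessarily the rule Blender settles on. Alpha-over composites the foreground onto the background's canvas at a given offset, dropping foreground pixels that fall outside it, so the output always has the background's resolution. Colors are assumed premultiplied.

```python
def alpha_over(background, foreground, offset=(0, 0)):
    """background/foreground: 2D grids of premultiplied (r, g, b, a) tuples.
    Returns a new grid with the background's dimensions."""
    off_y, off_x = offset
    out = [row[:] for row in background]
    for y, row in enumerate(foreground):
        for x, (fr, fg, fb, fa) in enumerate(row):
            by, bx = y + off_y, x + off_x
            # Foreground pixels outside the background canvas are discarded.
            if 0 <= by < len(out) and 0 <= bx < len(out[0]):
                br, bgr, bb, ba = out[by][bx]
                inv = 1.0 - fa
                out[by][bx] = (fr + br * inv, fg + bgr * inv,
                               fb + bb * inv, fa + ba * inv)
    return out

# 2x2 blue background, 1x1 fully opaque red foreground placed at (0, 0):
bg = [[(0.0, 0.0, 1.0, 1.0)] * 2 for _ in range(2)]
fg = [[(1.0, 0.0, 0.0, 1.0)]]
out = alpha_over(bg, fg)
```

The point of aligning the compositors is that whatever the chosen rule is, all implementations must produce the same grid here.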
So what do we do for it? Is it just a geometry-nodes simulation thing? Is it fast enough? We don't know; we'll figure it out. Other challenges are the underwater shots, where you would need cloth and hair simulation. Okay, so how is that going to work?

Another interesting aspect is this abyss, which has this alcohol-ink effect, where stuff spreads out and whatnot. That's the idea, anyway. So how do we do that? Is it a simulation? Is it something that needs to be supported in geometry nodes? Is it something the renderer would need to support to do it properly? We still don't really know, but we keep working on it.

And some other random topic I've overheard in the context of NPR rendering: focus-point control. If you just draw something, the artist might decide to remove detail from somewhere. Can this be done automatically or not? Is it required or not? Is it something which would be required for this movie? We'll figure it out.

For updates you can stay tuned at studio.blender.org; that's where all the creative decisions and progress will be shared. And of course projects.blender.org is where the Blender development is happening. I would also like to give a lot of thanks to the people listed on the slide. I'm not sure it's worth going through the names, but thanks for the contributions. And yeah, I don't have anything else.

Okay, other questions? Yes? Okay, so the question is whether there are compositing nodes that work on the CPU but that we cannot get working on the GPU, and I think the current answer is no. I think all of them can work on the GPU. There are a few where the implementation was done a bit differently, like the glare node.
It looks different, but arguably it looks better as well. So at the moment, I think none of the existing nodes have this issue.

Yes? I mean, I don't really know; I only get to work with what Simon gives to me. It's already a kind of ready geometry setup, like "hey, can we make this work?" So probably, but yeah, Simon says that's kind of what they're doing.

More questions? There's one in the back. Yeah, okay. So the question is whether it's possible to keep the implementations of the CPU and GPU compositors the same. I believe so. There is some design investigation already being done into how to align them, and the initial thought is that the actual implementation of the nodes can be shared quite easily-ish, asterisk, at least in the first iteration of the unifying work. There will probably still be two different scheduling mechanisms, because you might want to do stuff differently; we do the same thing in Cycles, where the scheduling is done differently for CPU and GPU. Eventually it should be possible to unify even more, but it's not exactly the highest priority right now to spend a lot of time on that, because we want to solve the artist-facing side first.

There's a question over there. No, I was mentioning the texture node editor, because there is a different node system in Blender for texture nodes, which is not integrated anywhere else. If you want procedural texturing in compositing, the workflow is that you add a texture node into the compositor, then you need to go to the texture node editor and fill in the nodes there to create the texture. You probably also want to go to the properties of a brush to create your new texture. So it's a bit of a mess. It's a somewhat different story from layer textures, but to my knowledge that project is still somewhere on the table.
It's just waiting for developer time to be assigned to it.

Yeah. So basically the question is whether there will be better compatibility between Cycles and EEVEE for NPR rendering, like the Shader to RGB node. And the answer is basically no. There are different approaches to NPR rendering: you can take a more physically based renderer, do certain things in the shaders within that framework, and then do a bunch of stuff in compositing, and that's kind of the approach here. We're not planning to do Shader to RGB in Cycles. It's not really compatible with the architecture, and implementing it would be another whole big project. So that's not really part of this.

Yes, we have thoughts about this; we have many ideas about it. It's unclear whether, in the context of this project, it will become a priority. It might turn out that they're doing compositing and we notice it's not good enough and we have to improve it. But at the moment it's on our list, just not yet on our immediate list for this, I guess.

Yeah. So the question is whether it's the same effect as in the Ninja Turtles movie, or similar. Well, I don't really know how it's done in the Ninja Turtles movie, so I cannot really answer that. To be honest, I don't know; the normals are coming from the geometry, I guess, and maybe they could do something like this, but that would be more a question for the artists. Okay, according to Simon it doesn't sound similar.

Okay, I don't know if there are any more questions. Okay.
Yes? Well, what do you mean exactly by cache? Because there are two kinds. You can cache the images coming in: if you do an animation playback, you want really quick loading of your frames from disk. We haven't really looked at that. And then the other type is an intermediate cache, where you have a big node graph and you want to make a change near the end, and you don't want to recompute the whole thing from the start.

Okay, we haven't looked at it in the context of this project, but it would definitely be a good thing to do. We'll see if the artists really ask for it; maybe we'll work on it then in the context of this project, or maybe it will be worked on in the context of something else.

Well, I have slightly different priorities in this aspect. Caching is kind of for performance, right? And quite often people just add caching to work around fundamental performance issues. So I would rather say: if you modify something at the very beginning of the tree, make that part real-time, and then for a lot of stuff you probably wouldn't need caching. Caching then comes as a second step: first make the actual system more real-time, and then implement caching on top of that. So it will come; it's more a matter of priorities and in which order you do it. Because if there is caching where people need to wait for the cache to be built before they see the result, it's not that useful either.

Okay. Well, thanks everybody.
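As a footnote to that last answer, here is a toy Python sketch of the intermediate-cache idea: each node's output is cached under a key made of the node and its already-evaluated inputs, so editing something near the end of the graph reuses the cached upstream results instead of recomputing them. All names here are illustrative and are not Blender code.

```python
class Node:
    """A compositor-style node: a function plus its upstream inputs."""

    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, inputs


eval_count = {}  # how many times each node's function actually ran


def evaluate(node, cache):
    # Key on the node plus the concrete values of its inputs.
    key = (node.name, tuple(evaluate(i, cache) for i in node.inputs))
    if key not in cache:
        eval_count[node.name] = eval_count.get(node.name, 0) + 1
        cache[key] = node.func(*key[1])
    return cache[key]


# A tiny chain: load -> blur -> tweak (stand-ins for real image operations).
load = Node("load", lambda: 10)
blur = Node("blur", lambda v: v * 2, (load,))
tweak = Node("tweak", lambda v: v + 1, (blur,))

cache = {}
first = evaluate(tweak, cache)    # computes load, blur, and tweak

# "Change something at the end": a new node downstream of blur
# reuses the cached load and blur results.
tweak2 = Node("tweak2", lambda v: v + 5, (blur,))
second = evaluate(tweak2, cache)
```

This only illustrates the bookkeeping; the real trade-off discussed above is that such a cache still makes users wait for it to fill, which is why making the upstream evaluation itself fast comes first.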