Welcome to the Animation 2025 presentation. Woo! My name is Sybren, and with me are Nathan and Christoph and... Woo! So, a year ago we started a project, Animation 2025, with basically two goals: speed up animators and make animation joyful. Somehow the first was added by a producer and the second was added by an animator, I don't know why. Last year we had a big workshop at Blender HQ and we had a presentation here. We worked on the big picture: where do we ideally want animation in Blender to go? What kind of big things can we think of? And of course that went in all kinds of different directions. So we also worked on core principles that would help guide us, so we could chase our dream in a consistent way. This is a screenshot of last year, right here; you can recognize the beam. These are the core principles we set out, and I'll very quickly go over them. It has to be fast, because otherwise your animators are slow and annoyed. It has to be intuitive, otherwise you don't know what you're doing. The tools have to be focused: you shouldn't have to run all over the user interface to find what you need; you should just have the tools that you need in the place where you want them. It has to be iterative, so you can revisit your work and make changes later, because directors always want something else. It has to be direct, so you can manipulate what you see. And the last principle is the Suzanne principle: whatever we think of, it has to fit within the bigger picture of Blender. We have to be a good Blender citizen and not reinvent a different kind of wheel, unless we give that different kind of wheel to the rest of Blender as well, and then the whole thing is consistent again. This gave us a great sense of direction, and I really noticed in the past year, and other people as well I'm sure, that we could just talk, understand each other, and get in sync very quickly. The team has also grown. Last year it was me on payroll working on animation, plus people in the community helping out, because that's what happens. Since November last year, first two and then three days a week, working from Madrid on a development grant. And since April of this year, Nathan is also working four days a week for the animation module. He even moved from the US West Coast all the way to Amsterdam because he loves us so much. I also want to mention Brad Clark and Jason Schleifer, because they have been tremendous in a consulting role and have been very involved in the past year and before that as well. And Nate Rupsis, a community developer who has done fantastic work on the NLA and a whole bunch of other animation areas. So the team is quite nice now. Let's take a look at the changes, the stuff that is already in Blender that we've worked on for the past year. Graph editor performance, in 3.6 and 4.0: that was Christoph. It's about 12 times faster. Then you probably recognize these guys, and everybody loves them so much. Protected layers don't actually do anything anymore, since we have library overrides instead of the proxy system, but they were still there. And of course everybody knows what's in the second row. So you get these add-ons, for example the Bone Manager. Many people use this add-on just to give these layers a name. But you're still limited to the 32 layers that Blender gives you, and it doesn't support library overrides. It wasn't that good.
The add-on is brilliant, but Blender wasn't that good. So what we have now is bone collections. Following the Suzanne principle of consistency (scene collections are called scene collections because they're owned by the scene and contain scene stuff), we wanted to call them armature collections. We discussed it, and everybody said yes, armature collections is the right name because it's consistent. And then everybody called them bone collections. So that's what they are. As you can see, the layers are just ported over: if you load a 3.6 file in 4.0, it's converted to the new system. All the bone assignments are still there, all the visibility settings are still there. If you used that add-on before, it will also port those names over, and then it's not just "Layer 1" but the name that you gave it. That is on the armature properties panel. On the bone properties you can also see which collections a bone is part of, because that list can get quite long; if you just select the bone, you can see what it is a member of. One thing that may be peculiar when you first see it is that all the bone groups were converted too, because a bone group is also just a named grouping of bones, so that maps onto a bone collection. They're all hidden, because previously bone groups did not influence bone visibility, and now it's all one thing, so now they do. Another thing we had to do: bone groups lived on the pose, and collections live on the armature. I won't go into the technical details much, but they live in different places in Blender, and that meant that you could not have bone colors in armature edit mode, for example, because that operates on different data. So we moved bone colors to the bone itself: every bone can have its own color, and you have that in edit mode as well. You can even set a different color in pose mode if you really want. We also changed how the bone colors are drawn in the dope sheet, graph editor, etc. It used to look like the left side, completely unreadable, so we just moved the color to the side. I think over the past years there were three or four different design tasks that all said: just do this. So that's there now. The operators that were about bone groups have been moved to collections too, so you can select bones in the same collection, select bones with the same color, and assign or unassign to collections. Now this might at some point seem a bit weird, because you don't always see all of those options, and that is because the collections live on the armature: if you link in the armature from the rig file, it's not editable, so you cannot add or remove bone collections there. But we also added library override support, so in your shot file you can add new bone collections and assign bones to them. So you can set that up on a shot-by-shot basis for that particular need as well.
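For reference, a minimal sketch of how this looks from the Python side, assuming Blender 4.0's bone collection and bone color API as I understand it; the collection and bone names below are placeholders, not from any real rig:

```python
import bpy

# Assumes an armature object is the active object.
arm_obj = bpy.context.object
arm = arm_obj.data

# Bone collections live on the armature data.
fingers = arm.collections.new("Fingers.L")
for bone_name in ("f_index.01.L", "f_middle.01.L"):
    if bone_name in arm.bones:
        fingers.assign(arm.bones[bone_name])

# Collections now drive bone visibility directly.
fingers.is_visible = False

# Bone colors live on the bone itself, with an optional per-pose-bone override.
bone = arm.bones.get("f_index.01.L")
if bone is not None:
    bone.color.palette = 'THEME01'
    arm_obj.pose.bones["f_index.01.L"].color.palette = 'THEME04'
```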
And with that I will hand it over to Christoph. Yes, so I'm clearly not Dutch. We also have a lot of new edit operators in the graph editor. Those are thanks to Ares Deveaux; he's the developer of AnimAide and graciously decided that his functionality should be in the Blender core. He contributed a lot of patches which he didn't have time to finish, so we just finished them for him. So, yay community. There have been improvements to the pose library. Most notably, from 4.0 onwards it lives in the new region, the asset shelf. It's at the bottom, it was implemented by Julian Eisel, and it has a lot of new features to sort your things. You can have tabs at the top for each, what are they called? Catalogs, yeah, that's the word. And you can search easily. Some additional features have been implemented: you can flip the pose while blending it, and you can also subtract poses now, which I think is useful. One feature that has been requested for a very, very long time is multi-editing of F-curve modifiers, which is now finally in there. For this to work the F-curve modifiers have to have the same name, just like object modifiers, which also means you can name them now. There are more smoothing operators in the graph editor as well. What you can see here is the Butterworth filter. The other operator is the Gaussian smoothing operator, and the difference between them is that Gaussian smoothing is a bit more predictable, so artist-friendly-ish. But it has the issue of volume loss: if you smooth curves, the peaks tend to shrink. The Butterworth filter tends to avoid this; however, if you have big sudden jumps you might get these fluctuations. If you're doing mo-cap editing you usually don't care, because you smooth over it. The Butterworth filter also has the nice feature of blending the edges, which is what you see here at the end: the edges of your selection try to keep the slope. Some other minor improvements: you can finally choose where the relationship line is drawn from, so when you have multiple bones that originate at the same point you can now see where the relationships go. And your NLA strips finally no longer get stuck in the maze; big shout-out to Nate Rupsis, perfect work. The snapping options have been moved to the scene, which basically means they now look like they do in the 3D viewport. And they are synchronized between the animation editors, so if you change the setting in the dope sheet it will be changed in the graph editor as well. That also means we can hopefully implement new snapping functionality. There is a new transform space, the parent space transform. But... our biggest fan. It basically means your translation axes are aligned to the parent, which is very useful. And of course a lot of tiny improvements: better animation baking, the paint mode selection tools have been worked on, the Copy Global Transform add-on got a new feature, there is a new operator to quickly select different parts of keyframes like the handles, bendy bones got a new deformation setting, and of course bug fixes galore. And that was the section about what we have done so far.
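To make the volume-loss point concrete, here is a small standalone comparison. This is not Blender's implementation, just the same two filter families using NumPy and SciPy, showing how a Gaussian blur shrinks a peak while a Butterworth low-pass keeps it closer to the original:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
frames = np.arange(100)
# A noisy curve with one sharp peak, standing in for baked mo-cap keys.
values = np.exp(-((frames - 50) ** 2) / 30.0) + rng.normal(0, 0.05, frames.size)

gauss = gaussian_filter1d(values, sigma=3.0)   # predictable, but the peak shrinks
b, a = butter(N=2, Wn=0.1)                     # 2nd-order low-pass, normalized cutoff
smooth_bw = filtfilt(b, a, values)             # zero-phase filtering keeps timing intact

print("original peak:    %.3f" % values.max())
print("gaussian peak:    %.3f" % gauss.max())
print("butterworth peak: %.3f" % smooth_bw.max())
```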
Now comes the section about what we are going to do in the near future, and that's mine again. In the near future, and this means Blender 4.1-ish at least (we are now working in a branch), is layered animation. We had a workshop last June, again at Blender HQ, and the goal was to speed up animators by reducing manual data management: the whole laborious task of switching out actions, or linking actions to another scene if you want to try out different things. It's all fuss. So, a bunch of desires that we set out. We want to gradually build up animations: start somewhere, like blocking, then finesse it a bit more, and then maybe a bit more, and build that up in a way that lets you maybe also go back. You want to try different alternatives; maybe you want to offer your director two or three different takes on the same bit of animation. You may have to adjust somebody else's animation, which might be yours from a week ago, when you have no clue what you were doing. You kind of want to put corrections on top of that. A bit more out there is procedural animation, or streaming in animation from other sources: it could be an Alembic or USD file, it could be a real-time motion capture system, and you want to animate on top of that again. So all of these things we wanted to do, and we figured: well, that is all layered animation. And on top of that we have more desires. We want to keep the animation of related things together, instead of having these actions all over the place. When you are animating, say, a parent holding a baby, you want to animate the parent and the baby basically moving as one thing. But they are two objects, so they get different actions. We want to fix that. And we want to make all animation linkable. This is also a stab at the NLA, because every object has its own NLA. So if you want to do nonlinear editing, it's not just every object having its own action, but also every object having its own NLA, which has its own strips with their own timing, and it is super complex to keep all of that in sync. So basically this is what we have now: you have a character, say Einar, you have some tracks on the NLA, and every track has at least one action. You have to name them and manage them, or you forget to name them, and then something goes wrong and you are lost. What we actually want to have is: one character, one animation, with different layers. And this animation should be a data-block, just like the action is now; animation is going to be its own kind of data-block. But the issue is that Einar is never really just Einar. It is never one thing that you animate: you have the armature object, you have the armature data, you have the mesh object, maybe you want to animate a material or two, and then there are other things going on, and they all need their own action. This is often resolved by driving everything from the rig with drivers and bones, complicating your setup even more and slowing things down even more. But what we really want is that all these data-blocks can point to the same animation data-block. Currently, if you were to do this with actions, everything gets exactly the same animation. So this is something we also want to introduce: a decoupling, so they use the same block of animation, but each can take its own part of it. With two characters out of the NLA it gets even wilder, with the number of actions that you need to juggle. And as I said, we just want to point those two at the same data-block. So let's take a look at what that would look like. We have layers, layers have keys, nothing really fancy. For blocking, for finessing, keys don't have to match in time like you would expect. Then we want to be able to layer things on top of each other and determine how they mix together. Maybe you want to add a rotation instead of replacing a rotation; you've got to have that choice. With an influence slider in there, so you can fade it in, fade it out, animate that influence slider, which is always fun because you're animating the animation system, which has its own technical hurdles. But this should be possible with the new system.
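None of this is the real (still experimental) data model, but as a plain-Python sketch of the structure just described, a data-block holding layers that each have a mix mode and an influence, it could look roughly like this; all names are invented for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum

class MixMode(Enum):
    REPLACE = "replace"  # this layer overrides what is below it
    ADD = "add"          # this layer is added on top, e.g. a rotation offset

@dataclass
class Layer:
    name: str
    mix_mode: MixMode = MixMode.REPLACE
    influence: float = 1.0  # can be faded in and out, and could itself be animated
    # channel name -> {frame: value}; real layers would hold strips with F-curves
    channels: dict = field(default_factory=dict)

@dataclass
class Animation:
    """Mock of the new animation data-block: one data-block, many layers."""
    name: str
    layers: list = field(default_factory=list)  # bottom layer first

    def evaluate(self, channel: str, frame: int, base: float = 0.0) -> float:
        value = base
        for layer in self.layers:
            keys = layer.channels.get(channel)
            if not keys or frame not in keys:
                continue
            layer_value = keys[frame]
            mixed = layer_value if layer.mix_mode is MixMode.REPLACE else value + layer_value
            # The influence slider blends between "ignore this layer" and "full effect".
            value += (mixed - value) * layer.influence
        return value

# Example: a blocking layer plus an additive polish layer at half influence.
blocking = Layer("blocking", channels={"rot_x": {1: 0.0, 12: 0.8}})
polish = Layer("polish", mix_mode=MixMode.ADD, influence=0.5,
               channels={"rot_x": {12: 0.2}})
anim = Animation("Einar_anim", layers=[blocking, polish])
print(anim.evaluate("rot_x", 12))  # 0.8 + 0.2 * 0.5 = 0.9
```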
We want to have hierarchies of layers, where you can build up the animation of, say, one character in different layers, and then you have one parent layer that is just "the animation of that character". You collapse it, you don't think about it anymore. That is one mode, where the parent mixes in all of its children, and that acts as the output of the parent layer. But we also feel that for things like takes, where you want that one, or that one, or that one, but never at the same time, you should be able to tell the parent: just choose your favorite child and ignore the rest. And then you can try things out. Like, first it drools and then it pukes, or maybe it drinks first and then starts drooling: these different takes. Takes like this are also very important in the layout stage, not just for character animation but also for animating layouts. You want to try out different things as early as possible in production, so during layout, being able to switch between these choices is super powerful. And then inside such a layer it gets a little bit more complex. We have different outputs, in this case the output for Einar and the output for Teo, and that is how Einar and Teo know which part of the animation to look at. This is why they can all point to that same animation data-block: the output tells them what to look for and what to ignore. They don't have to match; you don't have to have every output in every layer. If the animation is there, you can just use it. Within one of those blocks it is pretty much like an action is now. At this moment we're not redefining what an F-curve is; that is probably going to happen at some point later, but not yet. So what you're looking at is effectively three actions on a sort of layering system. And when I said layers have keys, that's a bit of a lie: layers have strips, and those strips have keys. By default a layer has one strip, and it's infinitely long, and you don't see it, you don't touch it, you're not bothered by it; as an animator you can just ignore its existence. But if you want, you can click a "stripify" button, and then you can have, say, a walk cycle and repeat it five times. Exactly how those strips interact with each other we still have to flesh out, but by making this decision of having this implicit, infinite strip, all the tooling in the Python code and the C++ code is forced from the get-go to work with strips. That means that when you're working with your animation, your tools will just keep working, no matter whether you're working in a strip or not; from Blender's point of view you're always working in a strip, so things stay consistent. It is very much our desire not to have an NLA editor, yet another editor that isn't even shown in the animation workspace by default, but just something that works well and consistently. Then we dive even deeper, because as I said, we're going to keep F-curves, and F-curves stay like they are in the current action. But we want to be able to add different channel types as well. Currently the animation data of Grease Pencil is still contained within the Grease Pencil data-block: that is just a flat list of all the drawings you ever made, numbered, and the animation data is nothing else than "on frame 15 show drawing number 5, on frame 20 show drawing number 37". We want to move that into the actual animation system again, so that all the tooling that we build for this can
work consistently with different kinds of animation data. And then finally, the "ID chooser" is basically a generalization of the camera markers. Camera markers are useful: they set the scene camera to some other camera. Basically, for that particular property they choose first this thing, then that thing, then that thing, and I believe we can make that more generic, so you can use the same approach for other things instead of having these very specific markers for this one goal. So that's a lot, and we had to plan a little bit. You can see that the further it goes into the future, the fuzzier it gets. What we're working on now is a minimal data model: no UI, no workflow, just being able to have an animation data-block, have layers, strips, and keys in there, be able to save them to a blend file, read them from the blend file again, and have them evaluated. That is where we are now. Then, early next year: a minimal working user interface, so that you can actually animate with the data model, because I'm sure that by having a user interface, people actually using it will find all kinds of missing things. And we know we're missing things, because it's a minimal data model, not the full data model. And again, work on the UI. Then early-to-late 2024-ish: stabilization, porting the last bits and pieces, and deprecating the old system. And then we hope to have at least half a year where we can do other stuff before Blender 5.0 comes out. While this is going on we also have plans, though this is not in the slides, for how it's going to integrate with Blender. Right now it's in a separate branch; you can download a build from a special place if you know what to look for. The next step will be merging it into the main version of Blender behind an experimental flag: you download an alpha build, enable it in the experimental settings, and you can play with it. Gradually we will make it more and more accessible for mainstream Blender. That way we put it in people's hands and can get feedback without breaking people's existing animation workflows. So now let's take a look. This is the UI that we have now, and this is just to show that it's there, so I don't have to write little bits of printing code all the time to see if I'm doing the right thing. Suzanne is now the active object, it has an animation data-block, it has an output selected, we have one layer named "the one layer", and it has the infinite strip that you otherwise wouldn't see. So let's look at the current user interface for this. I'll go through it very quickly. We have two objects, Cube and Suzanne. We create the animation data-block and we call it "anim". Then we create an output for the Cube, we create an output for Suzanne, and we assign the animation data-block to the Cube and to Suzanne. Then we create a layer, that's "the one layer". You can see we don't create a strip, because it's already there: this layer always has that infinite strip. And then you can tell the strip: just insert a key for that output, for the location. So that's the Z axis, time... sorry, value and time. And then you go all Python on Suzanne and do more animation, and this is what it looks like. What you can see me do is toggle the output of Suzanne (let me play it again): I toggle the output of Suzanne between one and two. One is actually the Cube's output, so Suzanne moves with the Cube, she gets the same animation; and two is Suzanne's own output. Of course, in the user interface you're eventually going to see a proper name, you're going to see that it's Suzanne's or the Cube's, etc., but for now it works.
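Again, this is not the branch API, just a stand-in mock of the idea in the demo: one animation data-block, two outputs, and each user of the data-block picking which output it reads. Toggling Suzanne to output 1 makes her follow the Cube, as in the video. All names here are invented:

```python
class SharedAnimation:
    """Mock animation data-block shared by several data-block users."""

    def __init__(self, name):
        self.name = name
        # output handle -> {data_path: {frame: value}}
        self.channels_per_output = {}

    def add_output(self, handle):
        self.channels_per_output[handle] = {}

    def key(self, handle, data_path, frame, value):
        self.channels_per_output[handle].setdefault(data_path, {})[frame] = value

    def evaluate(self, handle, data_path, frame):
        return self.channels_per_output.get(handle, {}).get(data_path, {}).get(frame)

anim = SharedAnimation("anim")
anim.add_output(1)                    # the Cube's output
anim.add_output(2)                    # Suzanne's output
anim.key(1, "location_z", 1, 0.0)
anim.key(1, "location_z", 24, 2.0)

cube_output, suzanne_output = 1, 2
print(anim.evaluate(cube_output, "location_z", 24))      # 2.0
suzanne_output = 1                                        # toggle Suzanne to the Cube's output
print(anim.evaluate(suzanne_output, "location_z", 24))    # now she reads the Cube's keys
```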
You can also add another layer with another strip, and then add another rotation over another axis. The evaluation is really stupid right now, so I chose different axes of animation, because otherwise it just overrides. But this is in the branch now and it works. I must say, this was really a magical moment: when we put everything together, pressed space, and saw stuff moving in the 3D viewport. That's fantastic. But to animate you need to be able to set keys, which is a segue to Christoph. OK, all right. Apart from this new awesome animation data-block, we are also working on overhauling the keying system. I'm sure there are a lot of animators here, and they know that if you just go to the viewport and press I, you get this nice UI which has a lot of confusing options, and half of them people don't know what they do. We just want to get rid of this; it should be as easy as pressing I and getting your keyframes. The way this works is that this is now moved to the user preferences. There you now have options to define: hey, I do want to key location, rotation, scale, rotation mode, custom properties. You set it once and it's there. Keying sets won't go away, though; we are working on improving them as well. I'm not sure how many of you know, but there are custom keying sets in Blender. If you use them, talk to me afterwards, because I'd like to know why you use them. Currently they're very limited, because they're bound to an ID: you need to specify an object, and then from that object you specify, for example, location and a single custom property. If you then want to animate a different cube, you need to make a new keying set, because the first keying set points to that other cube. We want to make custom keying sets relative: you just specify "I want to key location X and this other channel", and it works based on your selection. The use case for this is camera animation, for example: you specify location, rotation, and focal length, and then you have this one keying set and can animate all your different cameras; you just press I and you always key the focal length. Apart from that, we'll also improve auto-keying. Currently auto-keying has a lot of options that partially contradict each other, and we will simplify it, streamline it, and just make it a bit more predictable. So that was the section about what we are currently working on, and now comes the section about the aspirational near future. Well, I shouldn't call it near future; slightly-further-away future, distant future? Well, it's not too distant. We want to have ghosting in Blender. What you can see here is a prototype by Falk, who is a Grease Pencil developer, and the reason he did it is that we want a common ghosting system between Grease Pencil and other objects. However, as you can imagine, this has a lot of technical hurdles to overcome, most notably performance. The goal is that your current frame never, ever slows down, so the other ghosts need to be computed asynchronously or whatever; it's fancy future stuff that needs a solid technical foundation first, before we can actually ship it. And there is another thing, for Nathan. Are you sure that booing wouldn't be more appropriate? No? Oh, thank you, finally someone understands. So, something else that we're looking into is selection sets and bone pickers, and I'll get a little bit more into this as we go, but it's not limited to these; it's more about how we can handle this: rigs, and also scenes in general, can be very complex and difficult to
manage, and we want to allow animators to wrangle that complexity and get better workflows out of it. There are some things, particularly for rigs, that we already have. Rig layers can help manage this complexity: you can turn off rig layers that you aren't using. But the thing about rig layers is that they come with the rig; they're static, they're determined by the rigger, and they're per rig, not per shot. In general, anything that comes with the rig is not tailored to the specific animator or the specific shot they're working on. The other thing is that wrangling this kind of complexity isn't just about organization, it's also a workflow problem. We want animators to be able to quickly scope down what they're working with based on what they're focused on at the moment; we want them to be able to quickly and accurately select things, and be confident that what they're keyframing are the things they intend to be keyframing. And the list goes on. Selection sets are something we already have an add-on for. They're just a simple, animator-centric organization of rig controls. You can quickly select one of those sets from a pop-up in the menu and pose using it, or you can hide everything else, and then select the set again to do your keying. And you can add your own. One of the things that's great about selection sets is that it is the animator's set of controls, the animator's organization for how they want to be working. Another really cool feature of this add-on is that you can take them with you: you can copy and paste these selection sets into other scenes and onto other characters. So the animator's personal sets can be part of the general workflow they take with them, but you can also do it on a per-shot basis. The next thing is bone pickers. How many of you are familiar with this sort of thing? Yeah, it's in so many other pieces of software, and it's really handy, particularly for cases where the 3D viewport could get really confusing or the controls could obscure what you're working on. Faces are a really good example of this: you're trying to get the right facial expression, but all of those lines of the rig controls actually make it hard to see what that expression is. So it really declutters the viewport, it makes it easier to see what you're doing, and it's also really good for controls that maybe don't make sense floating around in 3D space. A lot of rigs have a kind of "settings" bone that you select to manipulate the settings of the rig, and it's kind of weird for that to be floating around in 3D space; it'd be nice if there were a specific place you could go to for that. The other thing is that bone pickers, the selection widgets, can be organized for fast selection rather than proportionally to the character. In this example, and this is something that Rick (where is Rick? yeah) pointed out, when you have hands, if you're trying to work in the 3D viewport and select, say, all the fingertips, it's painful: you basically have to click on every one of them. But if you have a bone picker where they're all aligned in a row, you can just box-select them really quickly, and you can also box-select a whole finger really quickly, or whatever. So this is really, really useful for quick and accurate selection in certain circumstances. And it doesn't have to be limited to selection.
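Going back to the selection sets mentioned above for a moment: the add-on essentially boils down to named lists of controls, and the core idea can be sketched with ordinary bpy calls. The set names and bone names below are placeholders, not from any real rig:

```python
import bpy

# Hypothetical animator-defined selection sets: just named lists of bone names.
SELECTION_SETS = {
    "fingers.L": ["f_index.01.L", "f_middle.01.L", "f_ring.01.L", "f_pinky.01.L"],
    "face_main": ["jaw_master", "brow.T.L", "brow.T.R", "lip.T", "lip.B"],
}

def select_set(arm_obj, set_name, extend=False):
    """Select the pose bones listed in one named set on the given armature object."""
    bones = arm_obj.data.bones
    if not extend:
        for bone in bones:
            bone.select = False
    for name in SELECTION_SETS[set_name]:
        if name in bones:
            bones[name].select = True

# Example: select the left-hand finger controls of the active armature.
select_set(bpy.context.object, "fingers.L")
```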
Something we've been talking about is being able to have buttons in there that you can click to run operators or scripts, and we can maybe expand that into even more stuff in the future. But this is all aspirational; we haven't actually been working on this yet. The goal is for it to be something that animators themselves can easily build. They could come with the rig, and probably a lot of them would, but these would also be something the animator could create and customize for themselves, on a per-character or per-shot basis. So, I've talked about these two different ways of tackling complexity, and they each tackle different aspects of it, but they're also kind of disjoint from each other, and we want solutions that are complementary and work well together. How are we going to bring all of this together? Well, this is for the future, so we're still working on it, but our goal is to go further than just these two things and really figure out some powerful ways for animators to tame the complexity of their shots and focus their workflow. There are also even more things, getting away from all that stuff, in the further, further future. These are things we've had a lot of discussions about and really want to do, but they're not on the current roadmap. One of them is rig nodes. So, Sybren here... yeah. One of the things we've been doing over the last year: we have been implementing a lot of concrete, actual features, but we've also been doing explorations, just trying out different ideas. This is a prototype that Sybren put together for a possible future rigging node system. It's in Python right now; Sybren himself will tell you about the horrible hacks that are underneath it to make it work. It is really fun to play with, but it's definitely not production ready. The idea is to be able to do your rigging with nodes, and one of the things this will enable is getting around dependency cycles: in the dependency graph, if you get those cycles, your things don't update. In some sense you can build your own dependency graph with these, limited to within the rig, so you can do some really powerful stuff. Some other things we have been exploring: custom bone axes. Right now the long axis of all bones is Y, which is a little weird since in Blender Z is up, Z is kind of the special axis in Blender, and this causes a bunch of issues I won't go into. We'd like to allow people to customize that, so they can select which axis is the long axis. Animation snippets in the pose library: not just individual poses, but you could, for example, drag an entire animation snippet onto the animation editor and tweak it. And multiple rest poses: currently armatures have a single rest pose, but you might want more than one. A good example is that you might have a character modeled like this, so you want to set up the rig that way for deformations, for the bind pose, but for animation you might want it to be like this; you might want this to be the rest pose for animation. I think there are even more applications than just that case, but that is one concrete use case where multiple rest poses would be really useful, so we want to explore that as well. And there's a whole bunch more things that we've been exploring; that's just a smattering of examples. But yeah, although the current targets for 2025 are the
new data model and the animation layers and all the UI to go along with them, that's the foundation we're building, because after we finish that, we want to go even further and really build on top of it a set of features that work really well together, to make animation a much faster process and a much more joyful one. So thank you so much for coming, bye. Does anyone have questions? Yeah. We are able to preserve the parent scale matrix so the animation comes out perfectly with squash and stretch on non-uniform scales, and I'm wondering if that's something that's been thought about in the new baking pipelines that are coming, because our characters need to be able to scale non-uniformly, and everything else has to not be messed up by that; that's going to be crucial for us using this. Yeah, that super, super makes sense. I don't know if we've thought about it specifically in the context of baking, but I think that would actually be even easier to address than what we have been considering, which is bringing the concept of a full transform matrix to the bones and objects directly. You would actually be able to work with those in the context of what they are in the viewport: it wouldn't just be limited to location, rotation, and scale, you could actually have a shear matrix, for example. But yeah, just the baking part of it, or the export part of it, is probably something we could even do right now. I'd have to think about it, but it's certainly something that we can do, and now that you mention it, I think it's important to address. Any other questions? Yeah. Right now I have to work with a lot of external files. I made a custom thing where I save my animations to a library file, and I pull them in as I need them and put them back, because I cannot edit the rig if I link the rig from outside. So there's a lot of complexity that I need to manage across many libraries, so that I don't have a file that is two gigabytes big. But when I have to work with 200 animations, it's very difficult to swap between them, so I have to use third-party tools, and I think making that coherent with the system in Blender is probably also very important, because that's a side that has not been talked about so much. So, I'm not 100% sure I followed everything that you said, but one of the aspects that I think I caught, and correct me if I'm wrong, is that you're linking a bunch of different things into a scene to do game animations. Okay. Oh, interesting. Okay, maybe you can speak more to this. One of the things in the pose library is pretty much this: one of the issues is that it's hard to update a pose, because Blender is not allowed to write to other blend files. You're only editing the current blend file, and if you go and write to other blend files you open up a whole heap of complex issues; you can't undo, just to name one. But what we did add is: right-click on any asset, "Open in Blender", and it will just open another Blender for you, because it knows which file the asset is stored in. You edit it, you close that Blender, and the first Blender you were in is still open and was monitoring that second Blender; as soon as it sees it quitting, it refreshes the asset browser and you see your updated asset there. So I think this kind of workflow could work well in combination with the asset browser, in
combination with the linking that is supported in there. And I have hope that the new animation data-block will also make these kinds of things easier, because you just need fewer different actions, fewer things to juggle. This is currently more or less possible, at least the right-click "Open in Blender", but maybe we just have to talk; maybe you join an animation meeting sometime, because this sounds very much like a specific thing for which we might be able to think up something generic that is useful for everybody. Practical input is always welcome. I was going to say one more thing: one of the motivating use cases for allowing more than one object's animation to be specified in a single animation data-block is actually the game animation case, because you often have a character that will have a prop, and then a separate prop animation, and it's obnoxious to have three different actions specifying what's conceptually the same animation that you want to export as one thing. So that, I think, is maybe one aspect of what you were talking about, but I think Sybren probably hit the nail more on the head. Yeah, and that also goes back to the pose library and bigger tooling, because when you rename a bone it doesn't update the pose library, just to name another pain point. I saw a finger there. "If we have ghosting, we need to have animation caching, just saying." And also, as a more general thing, we want to avoid caching, because caching is very, very simple; knowing when your cache is no longer valid is very hard. So I'd rather have somebody working on a faster system altogether, and then hopefully we don't need as much caching. So, currently the rig node system is, as Nathan said, really hacky, and we want to keep going that way for now, just to create a workflow that, even though the implementation is rubbish, gives us a good idea of the things we want to do with these rig nodes and what should be possible with them. Only once we have a better view of that do we start looking at: okay, how do we implement this for real? Is it going to be a separate node system? Is it going to be part of geometry nodes? That feels very tempting, but jumping too quickly onto geometry nodes will, I think, give us certain blind spots that we still want to keep an eye on. But I hope this will be possible. We have time for one more question. "This is just for the pose library?" Yes, and it's just as stupidly simple as you think it is: it will just open another Blender, which will use its own memory, and if you have very big objects in both, it may not be enough. Last one? Yes, okay, last one. You mean this hack? Okay, so very quickly: people who are familiar with Unreal's Control Rig will know how this works. We have two starting points, and the bottom one is called the forward solve, which runs when you move a control. It has nodes to get the transforms of controls and nodes to set the transforms of bones, and this basically gives you control over bones. Then there is the beauty in there that has the two blueish inputs, and that is the two-bone IK. This is my pride and joy. To evaluate, it receives two transforms, one for the target and one for the pole vector. It creates two new empties in the scene with those transforms, then on the bone it adds an IK constraint and plugs the empties in there. It evaluates, copies the resulting transforms, deletes the empties and the constraint again, and pastes the transforms onto the bones. And that is one evaluation cycle.
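A rough reconstruction of that evaluation trick in ordinary bpy might look like the sketch below. The function and object names are made up, and the real prototype pastes the resulting transforms back onto the bones from the node graph rather than just returning them:

```python
import bpy

def solve_two_bone_ik(arm_obj, bone_name, target_matrix, pole_matrix):
    """Evaluate a two-bone IK chain by temporarily adding an IK constraint.

    Sketch of the hack described in the talk: create two empties at the
    target/pole transforms, add an IK constraint on the bone, evaluate the
    depsgraph, copy the solved pose matrices, then clean everything up again.
    """
    scene = bpy.context.scene

    # Temporary empties acting as IK target and pole target.
    target = bpy.data.objects.new("tmp_ik_target", None)
    pole = bpy.data.objects.new("tmp_ik_pole", None)
    scene.collection.objects.link(target)
    scene.collection.objects.link(pole)
    target.matrix_world = target_matrix
    pole.matrix_world = pole_matrix

    pose_bone = arm_obj.pose.bones[bone_name]
    con = pose_bone.constraints.new('IK')
    con.target = target
    con.pole_target = pole
    con.chain_count = 2

    # Evaluate the depsgraph and read back the solved pose-space matrices.
    depsgraph = bpy.context.evaluated_depsgraph_get()
    depsgraph.update()
    arm_eval = arm_obj.evaluated_get(depsgraph)
    result = {pb.name: arm_eval.pose.bones[pb.name].matrix.copy()
              for pb in arm_obj.pose.bones}

    # Clean up: remove the constraint and the temporary empties.
    pose_bone.constraints.remove(con)
    bpy.data.objects.remove(target)
    bpy.data.objects.remove(pole)
    return result
```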
It's rubbish, but it works. And the cool thing is that the rig there (I can show it working on my laptop later, live-demo style) is just plain FK: stupidly simple bones, it has nothing. Oh, sorry, I forgot the upper bit. The upper bit is the backward solve; it has "get bone transform" and "set control transform" nodes, so it can put the controls back onto the bones. Currently one runs in object mode and the other runs in pose mode, so in pose mode you can grab a bone and rotate it FK-style, and your IK control will follow suit. Then you click on the control, Blender automatically switches to object mode, you move the IK control, and that just works. And then it gets really glitchy, because the drawing code is not made for this, but (where's my mouse?) that is also an empty that acts as a control, and scaling it will scale the bone, which still runs the IK, so the tip of the bone stays at the same point, but the joint that the control sits on will move around, because the upper arm is scaling all the time, and then the node graph just puts the control back on that joint. So you've got the transform of the control influencing the bone, influencing the transform of the control, which would traditionally be a dependency cycle that Blender can't handle. But with this it's just nodes executed in order, so it's fine. And that's the end of the talk. Thank you very much.