Hello everyone. Thanks for coming. We are Cube Creative, an animation studio based in Paris. We are very excited to be here for our fourth Blender Conference. Today, we will share with you some of our rigging methods and how they allow us to bring our characters to life in our TV shows. I know there's been a printing mistake on the schedule, so just to make sure things are extra clear: this talk is a bit technical. We are mostly going to talk about rigging, but don't worry, we included a lot of animated gifs, so we're keeping things fun for everyone.

So first of all, a bit about ourselves. This is Manon. She's our lead rigging artist, the coolest of them all. As such, she's involved in all our projects. I will give her the microphone as soon as things get too technically specific about rigging. And my name is Tanguy. I'm one of the CG supervisors at Cube Creative, and I am currently in charge of the technical supervision of a show called The Seven Bears, produced by Folly Vary and Netflix.

During this talk, we will try to give you a brief overview of our rigging process at Cube Creative. First of all, I will take a few minutes to present the studio, our work, and our history with Blender over the past few years. Then I will hand the microphone to Manon, who will introduce you to Keith Rig, our in-house autorig, and take a look at its main features. She will then present a few additional rigging tools, which make our rigging artists' and animators' lives much easier. After that, I will take over to show you how we transfer everything from animation to rendering, and the implications of such a process for rigging. And lastly, I will talk about a few corner cases and challenges we met in the last few years while trying to bake hair animation.

So Cube Creative is an animation studio specialized in TV series, and the studio is now part of the Xilam Group. As you can see here, some of the shows we create have a very cartoonish style, while others are more on the realistic side of things. Of course, this nice artistic diversity implies that our methods of creation need to be very flexible. Manon will talk a bit about that in a few minutes.

We've been using Blender since 2017. Before that, we were using 3ds Max on most of our projects, and Maya on a few of them. Since then, we've been using Blender for the vast majority of our 3D shows. The different shows you can see here were all made with Blender. However, there are still a few exceptions. Some of them, like The Gola La Plage, are 2D shows, and we have not yet switched to Grease Pencil for those 2D productions; I personally hope that we do so in the following years. Some others, like Kaeloo, have a long history in another software, which is 3ds Max in this case, and we have thousands of assets that would need to be adapted and translated to Blender in order to make that work. That being said, if we are lucky enough to make a sixth season of Kaeloo, we sure hope to do it in Blender. I will now show you a short reel of our CG TV series, and I'll let Manon take it from here. Enjoy.

Hi everyone, my name is Manon Garbet, and I'm the lead rigger at Cube Creative. I'm going to show you the rigging tools that we developed to help us do our job more efficiently, and talk about some special cases we had to deal with. Cube Creative is a studio that produces different kinds of projects. Those go from a realistic style like Athleticus to a very cartoonish one like Kaeloo.
That means we can have very different sorts of needs for assets, and our rigging work needs to adapt to all of those shows. One of our specialties is TV shows, which means that the rigs must be as light as possible, but we need to create them quickly and properly, while they still need to respond to all of the animators' needs and directors' requests. It is a delicate compromise between efficiency and quality, which can be very challenging. That is why we developed tools to make our life easier.

Let's talk about the tools. I will present them to you with examples, using a character from one of our future projects, Go Flash, starting with our autorig, Keith Rig. Our autorig has some common points with Rigify. It imports an armature guide saved as a reference in another scene, then we place the bones correctly to match the geometry, adjust some options, and generate the final rig afterwards. We chose to create an in-house autorig that would better fit our specific needs. It was also a way of ensuring that our animators would have a user experience similar to our previous rig in 3ds Max. The reference guide can be updated if we need to correct it during our work. It is common to all of our projects, but for some productions with very different kinds of characters, we can have a specific guide already adjusted to those characters. The main guide is a classical biped, but we also have the possibility of using a quadruped guide if needed. For this presentation, I'll mainly talk about the biped rig.

Once the guide appears in the scene, we first adjust it globally with a scale box. This one is useful to pre-define an adequate size for the controllers. We place every bone and limb on the character in Pose Mode, as it allows us to use the relationships between the bones. To help us place all of these, we use the copy/paste transform, a useful tool if you need to adjust a bone but not its children. For example, you may need to adjust the wrist while the fingers are already placed correctly. It keeps the transformation information in order to re-inject it after the change made on the parent bone, so we don't lose the correct placement of the fingers.

It is a pretty classical guide: you have the two legs and arms, the spine, the fingers, the neck and head. As they can vary a lot between different characters, all of the additional parts, like a tail or ears for example, will be added in a second step. Same thing if we need extra arms or legs: we have to generate the base of the rig first, then use a new guide for those extra limbs. There is almost no facial rigging except for the eyes, eyebrows, jaw and tongue, because we mainly work with shape keys, and I will explain why later on. The guide works in a mirrored way, which means that the right side will be placed exactly like the left one, so we only need to place the left side and the right one automatically follows. If we need something asymmetric, we can at any time disconnect the mirror constraint and manually place the controllers where we want them. As sometimes you need more than three, we can adjust the number of bones in the spine as well as the number of fingers and the bones inside of them. We can then update the guide display after changing those bone counts, in order to see and place them correctly.
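As a small illustration of the copy/paste transform idea described above, here is a minimal Python sketch of what such a helper can look like. This is not Cube Creative's actual tool, and the names used ("GUIDE", "wrist.L") are placeholders.

```python
import bpy

# Hypothetical sketch: remember where the children of a guide bone sit in
# world space, let the rigger adjust the parent, then put the children back.
arm = bpy.data.objects["GUIDE"]              # guide armature, in Pose Mode
parent = arm.pose.bones["wrist.L"]           # the bone about to be adjusted

# "Copy": store the world-space matrix of every descendant (the fingers).
# children_recursive lists parents before their own children.
saved = [(child.name, arm.matrix_world @ child.matrix)
         for child in parent.children_recursive]

# ... the rigger now freely moves or rotates "wrist.L" ...

# "Paste": re-inject the stored matrices so the fingers keep their placement.
for name, world_matrix in saved:
    arm.pose.bones[name].matrix = arm.matrix_world.inverted() @ world_matrix
    bpy.context.view_layer.update()          # propagate before the next child
```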
We do not have toes included in our rig, as most of the time our characters' toes are not shown, but we might add this option for practical reasons. For now it is still pretty easy to add them: we just save another scene where we place the fingers guide on the toes, generate the rig, rename the bones and add them to the main rig afterwards.

The last step is to generate the base of the rig. What do we call the base? It is common to all of our assets, characters and props: five bones that will always be there, the root controllers if you will. First of all, the master controller, which stays at the scene center except when the animators need to offset the entire animation. Then we have the world controller, which is used for placing the asset globally in the scene, but is rarely animated; it serves as an offset if you want your main pivot somewhere other than the ground, and it's also useful for walk and run cycles. Then the global scale controller: we separate this transformation from the others, as it can sometimes be a problem with the stretching bones or the switch between IK and FK modes. It is a way to expose its value and re-inject it later through drivers and through the rig itself. And finally we have a hidden controller, the offset one, hidden from the animators in case we need another offset later, during the rendering part. Once the base is created, the rest of the rig will be generated based on it, and then we just need to get rid of the guide. And voila, you have your main rig done. We usually keep one scene saved with the guide before generation as a backup, as it's extremely common to have to regenerate a character rig during a production. It is always safer to keep it in case there is a late change in the design, so we don't have to replace everything again.

As a company with ongoing productions, it is not always easy to stay up to date with new versions of software, so it will take some time to switch from Blender 3.3 to the new 4.0. That means we won't have the bone collection system yet, so we'll still be using bone layers and bone groups for a little while, sadly. So for now, at the end of the autorig process, we give our bones preset bone group color sets in order to be more coherent. All of our characters have the same ones. They are just colors used to differentiate the controllers. This tool only works correctly for the autorig bones; every additional bone has to get its bone group assigned manually. Just like Rigify, we have a rig UI panel that we call CustomPap. In there, we can find a system that connects the rig UI to the bone layers: a simple panel that allows animators to access their controllers, stored in different body-part layers. The mechanical and rigging bones are hidden and not linked to this panel, so no animator can access them and destroy everything, because they love to do that.

So what do our scenes look like? Before we talk about the other tools, let us have a quick look at our rig and how we work. I have a feeling that every studio works in a different way, so here is how we build our scenes. We have four main collections in which we separate the renderable meshes from the non-renderable elements and isolate the animation armature. For rigging, we use at least two armatures: one includes all the controllers and the rig bones, and the other is exclusively used for skinning, pointing to bones from the first armature. So we have one armature with the bones for skinning only, and one for the animation rig in the animation collection.
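To make that "pointing" a bit more concrete, here is a minimal sketch of how a skinning armature can follow an animation armature, with each deform bone copying the transforms of a controller bone of the same name. The object names and the one-constraint-per-bone approach are assumptions for the example, not necessarily how the studio's tool works.

```python
import bpy

# Hypothetical setup: "RIG_character" holds the controllers, "SKN_character"
# holds only the deform (skinning) bones.
anim_rig = bpy.data.objects["RIG_character"]
skin_rig = bpy.data.objects["SKN_character"]

for pbone in skin_rig.pose.bones:
    # Assume each skinning bone shares its name with its controller.
    if pbone.name in anim_rig.pose.bones:
        con = pbone.constraints.new('COPY_TRANSFORMS')
        con.target = anim_rig
        con.subtarget = pbone.name           # controller bone in the other armature
```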
We had to work this way, with a separated skinning armature, in older versions of Blender, as it used to create a dependency loop in the rig with our spine system. The problem doesn't exist anymore, but we kept this way of doing things, as we often use more than one skinning armature on a character anyway; it makes things more uniform. It also allows us to export the mesh and the skinning bones once the animation is done, without keeping the rest of the rig. Sometimes, and for most of our characters, we use an additional skinning armature if we want a different kind of influence on a part of a character. We usually work with multiple skinning armatures to use them for different levels of skinning. For example, the main armature globally deforms the face of a character, and the secondary armature, the one we call wrap, is there to add more details. The main skinning allows us to open the jaw and move the top of the head, while the secondary skinning allows us to move the mouth globally, and the nose, with different bone influences.

On the rig controllers, we can find other rig UI panels that are mostly used for IK/FK limb functions or parenting changes. The arms and legs of our characters are made with skinning bendy bones linked to a switching bone from the rig. The UI panel includes controls like the IK/FK switch and hide, the arm and leg curving, scale, or automatic rotations, for the shoulder for example. The parent space panel allows us to change the constraints on a controller and so change its parenting relationship. It is mostly used on hands, feet or hats, for example. And that would be all for the presentation of the autorig.

I will now show you some tools we often use during our work. First of all, the bones manager. It brings together a bunch of little functions that do simple things, but things we do many times. Like changing the kind of constraint space, swapping it automatically from local space to world space and the other way around. Or removing a constraint on a selection; this one is pretty obvious. It can apply a pose as rest pose and reset the mesh modifiers. It is used if you want to apply a new bone position and apply its deformation on the mesh: it will apply the bone transformation and the armature modifier on the mesh before adding a new armature modifier. Finally, it can create a skinning bone from a bone selection: by selecting the controller bone that we want the skinning influence to follow, it creates a skinning bone in the skinning armature, constrained to the controller from the animation armature. It works on multiple selected bones as well.

Then we have the spline bones. We use a lot of them in our projects; it is what we use for ropes, chains and straps mostly. What we call spline bones is just a curve that has its points and bezier handles hooked to controllers, followed by a chain of skinning bones linked to this curve. To use this tool, we need to create the bones that will be the spline hook controllers. The curve creates itself based on the order of the bones' creation. The spline gets a hook modifier pointing at each of the bones we just created, linking the vertices of the spline to our controllers. Once the spline is done, the tool creates a chain of bones that are constrained to the curve with a Spline IK constraint. They are used as skinning bones for the geometry to follow the spline and its deformation. It is a simple and not-so-heavy way to deal with rope-shaped assets, which is very useful when you have a TV show budget.
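Here is a rough sketch of the spline-bones idea, under simplifying assumptions: a poly curve with one control point per hook controller, and an already-created chain of skinning bones. All object and bone names are made up for the example.

```python
import bpy

# Hypothetical names: a poly curve, a few hook controller bones in the
# animation rig, and a ten-bone skinning chain in the skinning armature.
rig = bpy.data.objects["RIG_character"]
skin_rig = bpy.data.objects["SKN_character"]
curve = bpy.data.objects["CRV_rope"]
hook_bones = ["rope_hook.001", "rope_hook.002", "rope_hook.003"]

# One Hook modifier per curve point, each pointing at one controller bone.
for index, bone_name in enumerate(hook_bones):
    mod = curve.modifiers.new(name=f"HOOK_{bone_name}", type='HOOK')
    mod.object = rig
    mod.subtarget = bone_name
    mod.vertex_indices_set([index])          # bind this curve point to the bone

# Spline IK on the tip of the skinning chain so the whole chain follows the curve.
tip = skin_rig.pose.bones["rope_skin.010"]   # last bone of the chain
con = tip.constraints.new('SPLINE_IK')
con.target = curve
con.chain_count = 10                         # length of the skinning chain
```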
The next one is the bendy stretch. It is one of my favorites. It creates a rigged lattice that deforms itself in a bendy, stretchy, squashy way. It is very satisfying to play with, and it is a must-have for cartoon TV shows. Animators use it all the time to give the assets a more cartoonish way to move or react. For the cartoon shows, it is used on most of our props and on the heads of our characters. We even use it on our more realistic shows like Athleticus, as it can sometimes be useful to cheat on the motion of an asset. Our tool imports the lattice rig and the armature that goes with it from a reference scene; we just need to merge the latter with the main armature and arrange the rest in the scene. This lattice has two simple deform modifiers with drivers on their strength and angle values. Those drivers are all connected directly to the stretch controller or to the bend one, which takes into account the distance between two rigging bones. This allows us to have, on the same lattice, a stretch effect and a bend that goes in four directions. It is not very heavy, and once again we use it all the time on our assets.

The next one is the puppet creator. Puppets are a low version of a character, divided into a lot of pieces, so an animator can start an animation with the volume of a character; it is less precise, but way lighter than the renderable mesh. It is mostly used during the blocking phase of the animation. Our tool duplicates the meshes of the character that we have selected, gets rid of all the modifiers and textures on them, and cuts them into a bunch of pieces based on the skinning vertex groups. It then does a simple parenting of those pieces to the bone that has the major influence on them. It is not a complicated process, but like a lot of our tools, it saves us a lot of time. It gives us a very deconstructed character, but one that is way lighter to work with. It is very useful when you have many characters in the same shot, which actually happens a lot in our productions.

And finally, the specific cases. This part is more about specific cases we had to deal with during a production; it is less about tools and more about rigging process examples. First of all, facial rigging: shape keys and bones. At Cube Creative, we are used to mostly working facial expressions with shape keys. It is a choice we made, as we often realized that directors have a very precise idea of what kind of facial expressions they want, and rigging usually isn't enough for most of them. They will always want the corner of the mouth more pointy, the lip more curled in a certain way. I guess the best way is probably a mix of both techniques, but we don't have an automatic facial rig for TV shows yet, we are on short timings, and we don't have the time to manually create one for each character. The problem would be the weight as well: facial rigging can be pretty heavy with a lot of bones or bendy bones, and, as you know, most of our projects are TV shows, so we still have the constraint of making things as light as possible. Shape keys are a good way to have a large panel of expressions while keeping the character rig as light as we can. So for now, we mostly work with shape keys, and sometimes, when it's really needed, we add some facial expression controllers.
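When such a facial controller is needed, it is typically connected to the shape key with a driver, as Manon describes a bit later. Here is a minimal sketch, with assumed names for the mesh, the shape key and the controller bone; the mapping itself is just an example.

```python
import bpy

# Hypothetical names: a head mesh with a "smile.L" shape key, driven by the
# local X location of a facial controller bone.
head = bpy.data.objects["GEO_head"]
rig = bpy.data.objects["RIG_character"]

key = head.data.shape_keys.key_blocks["smile.L"]
fcurve = key.driver_add("value")

driver = fcurve.driver
driver.type = 'SCRIPTED'
driver.expression = "loc * 2.0"              # 0.5 units of travel = full shape

var = driver.variables.new()
var.name = "loc"
var.type = 'TRANSFORMS'
target = var.targets[0]
target.id = rig
target.bone_target = "ctrl_smile.L"          # the facial controller bone
target.transform_type = 'LOC_X'
target.transform_space = 'LOCAL_SPACE'
```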
But adding shape keys can take time, so we have at least a tool to separate the left and the right side with a smooth blend in the middle. Our character modeling artists create their shape keys symmetrically, then we separate them afterwards. The tool bases itself on the selection and duplicates the meshes for the right and left sides. The neutral mesh stays in the middle and gets a mix of both sides of the shape. For the mirror process, it basically works with a vertex weight proximity modifier on the left and right meshes. This modifier picks a previously made grid as a reference, and the shape activates itself with a smooth gradient that we need to adjust for every shape. Once we are satisfied with our shapes, we select again the neutral mesh in the middle, then we can export the sided shape keys. It stores in the scene every shape key, left and right, now ready to be merged with the rest of the rig. The rest of the process we do manually for now: we create controls that will drive the shape keys and connect them with drivers. Those tools are pretty new in Blender for us, so they will probably evolve and be updated with our next production.

Next is the Child Of. The Child Of constraint allows us to use the existing skinning of a mesh to wrap a bone or a part of the rig onto it, so it perfectly follows the mesh deformations without being deformed itself. For example, you may have a button that needs to follow your spine and its deformation, but you want it to keep its shape and follow along without being deformed by the spine. The rig follows the skinning and deformations of the mesh without being impacted by them. We use it mostly for pieces of clothing or accessories on characters, but we can also use it to have a reference controller on something that gets deformed a lot, for example a rope that is being stretched and flaps around. If a character needs to grab it, the controller needs to stay at the same place on the rope without being deformed, and this is a very good way to do it without being too heavy.
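As a quick illustration of the Child Of trick just described, here is a minimal sketch of a controller bone following the deformed surface of a rope through a vertex group. The names are placeholders.

```python
import bpy

# Hypothetical names: a rope mesh with a "grab_area" vertex group, and a
# controller bone that should ride along that part of the rope.
rig = bpy.data.objects["RIG_character"]
rope = bpy.data.objects["GEO_rope"]

pbone = rig.pose.bones["ctrl_rope_grab"]
con = pbone.constraints.new('CHILD_OF')
con.target = rope
con.subtarget = "grab_area"                  # vertex group on the rope mesh;
                                             # the bone follows those deformed vertices
```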
And finally, the get pose. What we call the get pose is another armature, a bit like our skinning one, but only for rigging this time. It is used when we need to rig a mesh while it's straight, but have it in its default posing afterwards for the rest of the production, like a pigtail for example. A pigtail is better to rig straight, so the deformation stays neat, but it needs to be already twisted for the animators when they start a shot. So we have a different armature that is animated between frame 0 and frame 10, as our shots only begin at frame 101. The animation armature picks up on this get pose armature, which is animated to already have a good posing ready to go. We have to go through this hidden armature to keep it separated from the animated one: the animation action is stored separately and untouched by the animators, and their armature stays clear of animation actions, so there is no risk of them deleting it by accident. It has already been very useful in many cases and works well with our pipeline. That will be all, so now I'll leave you in the perfectly capable hands of Tanguy, who will talk to you about how we bring our animation back into the render scenes.

Thank you Manon. That last animation was lovely. So let's talk about how we bring everything from our animation files to our render files. The first question we should ask ourselves is: why bother with scene and data conversion? Why not just use the animation file for rendering? Since 2018, we haven't missed a Blender Conference, and it gave us the opportunity to share ideas about pipeline and tools with other technical artists. That is how I learned that some other studios use their animation files directly for rendering, so that all the rigs and animation data are included in the files that are sent to rendering. This drove me to do my best to give you a clear idea of why we chose to do things the way we do at Cube Creative.

So why bake everything before rendering? First of all, the pros. We want to reduce computing costs as much as possible. This applies to render times, but also to the time a lighting artist spends on a shot. Our scenes are already far more complex than we'd like them to be, so we'll take every chance we have to make them more responsive. Replacing the animated rigs with animated mesh caches makes the life of lighting artists much easier. We also want our Blender files to be as stable as possible, and we believe that removing every unnecessary layer of complexity is a good step toward that goal. And obviously nobody likes high render times, especially not our head of studio Cecil; this is more or less of an issue depending on the project, but we generally tend to reduce render times in every way we can think of. And last but not least, baking everything and exporting animation caches allows us to easily update the animation data without losing the work already done on lighting. In the context of a production, we often have to adjust things in animation after the first rendering pass is complete, so this is a very important point for us.

Of course, there is a trade-off, and with those gains come a few downsides. First of all, we don't want anybody to handle this step manually. Most of the episodes we work on have around 200 shots, so things need to get automated. This means we need a strong development team, on which we rely heavily. Then, for every step we want to automate and for everything we want to bake, a naming convention is needed, and we need to follow it rigorously in every single shot. The first step towards automation is ensuring consistency across our files: we need our scripts and tools to know what they are dealing with and what they need to bake. As a result, we lose a bit of technical flexibility. Ensuring this consistency means we can't allow some technical solutions that would have been great in other contexts. Surprisingly enough, giving up this bit of technical flexibility ends up giving us much more artistic freedom: we spend less time thinking about how we should do things and more time actually doing them. With all those points considered, we believe that the benefits are worth the few sacrifices we have to make, and we chose to bake as much as we can when transferring from animation to rendering.

In this next part, I'll try to answer the next question: how do we keep the animation while getting rid of the rigs in our files, and what does it imply for our rigging department? The first scenario is quite obvious: mesh deformations. Blender offers multiple ways of baking mesh deformation, and at the moment we use point cache files. For those who are unfamiliar with the format, it simply contains the position of every vertex at every frame. It doesn't contain any mesh data, just its deformations. The downside is that it requires one file per object, so you end up with a lot of files when you have a complex scene.
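To give an idea of what a point cache boils down to, here is a small sketch that simply collects the evaluated position of every vertex at every frame for one object. The real exporter writes this data to one cache file per object; the object name here is a placeholder.

```python
import bpy

scene = bpy.context.scene
obj = bpy.data.objects["GEO_character_body"]     # placeholder name

cache = []                                       # one entry per frame: world-space vertex positions
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    depsgraph = bpy.context.evaluated_depsgraph_get()
    eval_obj = obj.evaluated_get(depsgraph)      # mesh with modifiers and armature applied
    eval_mesh = eval_obj.to_mesh()
    cache.append([eval_obj.matrix_world @ v.co for v in eval_mesh.vertices])
    eval_obj.to_mesh_clear()
```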
We used to store the mesh deformations in Alembic format, but we figured out that meshes with hair particles handle deformation much better with point cache files than with Alembic files. This method doesn't allow us to transfer every aspect of the mesh data animation, such as animated UV channels for example, but I'll get to that a bit later.

Now that we have successfully transferred mesh deformation, let's have a look at animated properties. At Cube Creative, we ensure that the animators only have to manipulate animation controllers; this is us trying to avoid things being broken at the animation stage. As a result, if a property is animated, it means it's driven by the rig, so we have to bake the value on every single frame if we want the animation to stick around while the rig is gone. The tricky part is finding the naming conventions that give our tools the information of which values should be baked. Just a short side note to help with my following points: as some of you probably know, a few Blender tools rely on a three-letter-code naming convention. For instance, Blender allows a user to define which bones should be considered for automatic skinning by including DEF in their name. Other such conventions exist, like MCH for mechanical bones or WGT for widgets. We decided to build upon this principle, and as a result, every object in our scenes has a three-letter code included in its name, which informs our tools, scripts and artists of its purpose.

So back to animation now. Animators work with lighter versions of shaders, and in some cases those shaders get animated. It can be anything from a change in color to a complex transition between two different shaders. This animation of course needs to be baked and applied to the render version of the shader in the render files. In order to make our life easier, we restrict those animations to Value nodes, and we make sure to give them a specific name that includes DRV, a three-letter code for drivers. It's also a convenient way for the shading artists to make sure their shader values can be rigged: they just have to include DRV in the node names. This method doesn't apply in every case, as we can't rename every property in Blender, and that's for the best. So when we don't find a more elegant way of doing things, we include in the script a hard-coded list of the values that systematically need to be baked. As I said, this includes data properties that can't be given custom names, such as the camera's focal length and clipping distances, light values such as power or color, the visibility values of every renderable mesh, and so on.

Now let's talk a bit about geometry nodes. Two years ago, we jumped from Blender 2.7 to Blender 3, and this was quite a leap. Among the features we gained were geometry nodes, and objects that have geometry node modifiers are a tricky topic because their nature can change a lot: a curve often becomes a mesh, a mesh might turn into a volume, and on a regular basis multiple types coexist in a single object. Earlier I talked about the benefits of giving up a bit of technical freedom, but when using geometry nodes, giving up technical freedom is certainly the last thing we want to do. For those reasons, we chose to keep the geometry node modifiers in our render files and not try to bake their results, because trying to streamline that process seemed like a very bad idea.
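Going back to the DRV convention mentioned for shaders, here is a minimal sketch of what the baking step can look like: every Value node whose name contains DRV gets its driven value keyed on each frame, so the animation survives once the rig and its drivers are gone. In the real pipeline the keys end up in the render file's shader; here everything happens in one file for brevity.

```python
import bpy

scene = bpy.context.scene
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    depsgraph = bpy.context.evaluated_depsgraph_get()
    for material in bpy.data.materials:
        if not material.use_nodes:
            continue
        for node in material.node_tree.nodes:
            if node.type == 'VALUE' and "DRV" in node.name:
                # Read the driver-evaluated value, then bake it as a keyframe.
                eval_node = material.evaluated_get(depsgraph).node_tree.nodes[node.name]
                node.outputs[0].default_value = eval_node.outputs[0].default_value
                node.outputs[0].keyframe_insert("default_value", frame=frame)
```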
That being said, we still have geometry nodes that are affected by the rig in some way, and in those cases we need a method to bake those animations so they are kept while the rig is gone. If we need a simple value to be baked, we expose it in the inputs of the node tree and include DRV in its name; this is the exact same method we use for our shaders. On the other hand, if we need transformation information to be baked, we use what we call buffer objects. They are usually empty objects with BUF in their name. The three-letter code informs our tools that their transformation channels need to be baked from animation to rendering. Together, those two methods allow us to keep the animation of geometry nodes as set from animation to rendering. Here is an example of a use case for those buffer objects: this geometry node object, a flame, is driven by two objects. We could have included them as geometry node inputs, but it was much simpler to handle them as separate objects. Another use case for those buffer objects is when we use them to project UVs onto meshes. In most cases, baking their transformations enables us to keep the UV animation as it is, from animation to rendering. However, in some cases, due to the bone hierarchy and some non-homothetic transformations, the projectors end up with a sheared, skewed transformation that cannot be reproduced with transformation channels only. In those cases, the buffer object isn't sufficient to bake the UV animation, and we end up adding keyframes on the UV channels for every frame of the animation. This ends up being quite heavy, so we tend to avoid this method as much as we can.

The last subject I want to tackle with you is the one that led to the most headaches: baking hair deformation. I won't be talking about baking hair simulation, but rather about baking hair that is being deformed by a rig. The new hair and fur system, which uses geometry nodes, has been out for quite some time now, but on our current projects we work on assets that were designed with the current particle hair system and the previous one. In the last few years, we met some big challenges while trying to bake those animations. Some of them look like bugs, others like features, and a fair share of them might just be curiosities that nobody knew about. This part gets a bit specific and might soon get outdated, but I thought it might help others who encounter the same limits. This is a condensed version of all the information we would have loved to have two years ago.

To illustrate my point, I will use this example of a horse tail. It is a very good example of the get pose method Manon showed us just before with the pigtail: the tail needs to be straight and horizontal to be properly rigged, but it's much more convenient for the animators to get it already in pose as soon as they open their shot, so we include the animation to the posed position in the rigging file. First of all, a few basic limits about rigging hair in Blender. Hair cannot be deformed by bones and vertex groups. However, lattices can be deformed with bones and vertex groups, and hair can be deformed with lattices. This means we can use lattices as an extra step for deforming hair with bones and vertex groups. Another important thing is that lattices cannot be baked with Alembic files nor with point cache files. In order to keep the animation in our render files, we use our buffer objects, which have their transformations baked.
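A rough sketch of that buffer trick for a lattice, with placeholder names: one empty per lattice point, whose world location is keyed on every frame from the evaluated (deformed) lattice.

```python
import bpy
from mathutils import Vector

scene = bpy.context.scene
lattice = bpy.data.objects["LAT_horse_tail"]     # placeholder name

# Create one BUF empty per lattice point.
buffers = []
for i in range(len(lattice.data.points)):
    empty = bpy.data.objects.new(f"BUF_{lattice.name}_{i:03d}", None)
    scene.collection.objects.link(empty)
    buffers.append(empty)

# Bake: copy the deformed point positions into the empties, frame by frame.
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    depsgraph = bpy.context.evaluated_depsgraph_get()
    eval_lattice = lattice.evaluated_get(depsgraph)
    for point, empty in zip(eval_lattice.data.points, buffers):
        empty.location = eval_lattice.matrix_world @ Vector(point.co_deform)
        empty.keyframe_insert("location", frame=frame)
```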
This implies that we have one empty object for each vertex of the lattice, and this can get a bit busy with higher-resolution lattices. So here's the sensitive part: the limits that we discovered bit by bit while working on our projects. First of all, a mesh can be deformed by multiple lattices, but the hair particles that are emitted from it are only deformed by the first lattice. This has a big impact on the way we approach rigging, because it means that if a mesh has hair on its surface, we cannot use more than one lattice to deform it. However, as Manon explained earlier, we use multiple skinning layers on our characters, which implies that the mesh emitting the hair must be deformed by multiple means. A lattice can in fact accumulate multiple deformations, such as deformation from another lattice, an armature, constraints, or shape keys. In the end, we manage to get the deformation we want on the hair with only one lattice, because our rigging team is very talented.

Another limit we stumbled upon was the fact that when child particles are in Interpolated mode, they are not properly affected by a lattice. That meant that in every case where we wanted to rig hair, the child particles had to be set to Simple mode. This greatly impacts the look of the hair, and in some cases it led to a bit of a change in the final look of the asset, which is always a pity. The last impactful thing we discovered with the Blender hair system is that child particles don't deform properly when affected by a lattice, as their roots don't stick to the surface. This as well led to some adaptations in the grooming of a few assets to make this behavior less obvious: we tend to increase the number of parent particles to be able to decrease the radius of the child hair around them, and it's also less visible when the root of the hair is perpendicular to the surface.

So as a conclusion on this hair topic, we ended up with a list of constraints that we have to keep in mind and follow rigorously when we want to be able to rig and animate hair. First of all, we have to use a single lattice and nothing else to deform the hair and its emitting geometry. We also keep children in Simple mode, otherwise they're not properly deformed by the lattice, and we keep the parent particle count high to be able to reduce the radius of the child hair. And because I want to end this talk on a positive note, I will say that although young and new, the hair system built on geometry nodes looks very promising, and we can't wait to use it in the context of an actual production; the fact that it takes geometry nodes as its foundation is a promise of a bright future for us. Thank you very much. I guess we have time for a few questions, if you have any.