This is the making of Cinderella the Cat. I'm Ivan Cappiello, and along with my three colleagues Alessandro Rak, Dario Sansone, and Marino Guarnieri, we are the four directors of this movie. We made it with the same team that released our previous animated feature, The Art of Happiness, which won the European Film Award in 2014. Cinderella the Cat was the first animated feature ever presented in competition at the Venice Film Festival. It is the oldest film festival in the world and had never admitted an animated movie in competition, so this was the first time, and we were very excited to be there. I think it's also the first animated feature ever made in Blender. So: made in Blender. This was our international press after the screening: this one is from Variety, and this one is from The Hollywood Reporter. It was great. Before going on with the presentation, I would like to share the official trailer with you. It is in Italian but subtitled in English, so I think you can follow it. Can you lower the lights, please? I didn't expect to see you today. Look, look, that's how it should be. I had to stay in your place. But you see, we are only the first part of the story. The best is yet to come. Marriage is the only true tool of social evolution. Now the film is under my whole body. I've come back to take what's mine. If it weren't for you, you would make a terrible movie. The ship is on its way. It's recording us. It's in the water. It's coming to me. Sara! Where are you going? And where are you going? Mom, how nice it is! Very soon you'll be left with an old, great, funny story. Where are you going? And where are you going? You're the same. And if I'm wrong, you tell me now. Tell me now. Where are you going? On the ship that takes you so far. Without love, where are you going? Where are you going? Where are you going?
So, last year I was here and I showed you some basics of how we did the backgrounds and parts of our working process. This year I want to show you something more. But before going on, I'd like to show you the actual sequence I was talking about last year. Can you please lower the lights again? It's about three minutes, and then we'll talk about how we did it. Mr. Batile? Hello, Dad. Oh, it's you two. Did you find the shoe of my principal? It's nothing to do with her, sir, it's mine. I don't remember where I lost it. I don't know. Don't despair, my little one. I'm sure that Mr. Gemi will find it. Since we can't move, I'll take a step without him standing between my legs. Look, I never decided to do it on purpose, sir. But I'm not stupid. And this is done by the technological field. The field of science and memory. Yes. Is this the project that put you in an uncomfortable position? It's pure. I know, I'm going. I've been working on it for a lifetime. I've obtained the concession of the port for 80 years. And there are already many people at work. And that's exactly the point, sir. I don't like these people who turn around. Very well. And then? They turn around. Let's go, I'm serious. I should take a look at some things. You're always serious, Gemi. You're always troubled. But today is a day of celebration. Maybe you'll give me at least the day of my marriage. It's just a bad day, you know? It's true. You're right. But this ship... I don't understand. I don't think you've ever given birth to a child. Something that I can't explain to you. These? But no, all this is... It's magnificent, wonderful, sir. I don't have any idea how it works, but I'm still doing it for a reason. And then? I know it doesn't make sense. But sometimes I have the feeling of seeing things. The ship is on its way. It records us. It processes us. It puts us back on stage. And in short? In short: you think you're here now. And maybe it's just... an ungrateful hologram. An old memory.
What a joy in its future. You're playing games with me, sir. From the day he set foot on this ship, I admit it, at least I took a smile off of him. It's all right. But now you have to take me seriously. I'm in earnest. Your wife will be almost ready. The ceremony is about to start. And you're still here. And you're still in shock. Hello, my love. Will you stay with Gemito until dad is finished? Yes. Ah, Gemito. Deactivate the communication with the outside. Do you remember how to do that? Yes, but... Why did you reactivate me? Your father is incorrect, my dear. Yes. So this time I want to focus on how we developed the look of the movie. First, we had to search for the key elements; maybe you recognized some of them in the previous clip. First: clean fields and gradients. We use this kind of feel: not lots of textures, just flat fields. Then we look for overlaid paint strokes, so the stroke does not sit on the same pixel in each frame. And we look for fragmented brush strokes. As a final step, we have this underpaint layer, a living underpaint layer; you may have recognized it in the scene. Okay, so which parts of the pipeline do we use? In our work, as I said, we started by having the mesh without textures, painted with vertex colors. These are the benefits I discussed last year. But it also affects the topology: the modeling is affected by vertex coloring, because you cannot draw a line if you don't have vertices there. So we have what I call a high-quality low poly. It is low poly, but not strict, as in "never use triangles". We do use triangles in some cases. Obviously we don't use them in the critical deforming areas, where the loops have to be continuous, but we use triangles. Let me show you, behind me, what the model looks like with this kind of look. This is the actual model we use. This is the topology. It's low, but every detail is already in the topology.
And when it comes to these cloth folds, you can see clearly what I'm talking about with triangles and clean topology. We also tend to put more detail into the parts that are more often visible. When you make a shot, it can be a distant shot, where every polygon counts a little, or a medium or close shot, which will probably frame the face, the hands, or the feet, so those are the parts that are more detailed in the model. About the rigging: since we have really low-poly models with high detail, the rigging is, in some cases, almost a one-to-one bind between vertices and bones. So you always have what-you-see-is-what-you-get, because each vertex is moved by the exact position of a bone. This is how a rig looks to the animator; we try to be as what-you-see-is-what-you-get as possible. So this is the rig moving, and then you will see the animators start animating and looking at the playback. We basically split the character from the background, and the background, as you may remember from my last presentation, is linked from another scene. So it's a scene linked into this file, and the objects are linked through the two scenes, as you will see in a second. This way the animator can focus only on the animation. This is the background, and we can link the background elements to the other scene. We usually use the second half of the layers for the background, so the animator can easily disable the view and have a fast and clean update. As I told you last year, we worked on Rigify. We worked a lot on Rigify to make this movie, and as I promised, these new features are already merged into Blender. What are the new features? We have new metarigs: you may see the animal ones, and lots of people are starting to use them. There are new advanced generation options. It means you can have multiple rigs generated by Rigify in a single scene, and you can easily select which rig has to be overwritten with the new one.
We revamped the layer system, so in the metarig you can define bone groups and selection sets and have them automatically merged into the final rig. We also added lots of features for the limbs. We have FK/IK snapping. The rotational pole is now merged with the new system, so you can easily select which kind of pole rotation you want. We have new animation tools that can transfer animation within a time range from FK to IK and vice versa. We also have a rotation converter from quaternion to Euler and vice versa. More important for me: the wiki. There had never been a wiki for Rigify; we have completed that task. So I think it would be useful if, before posting bug reports that are not bugs, or feature requests, people started looking in the wiki. Please do it. For more updates on Rigify, we have another panel today at 2 p.m. Rigging also affects modeling, because we often rig with modifiers. If anyone has been in this mess before: generally, when you bend something, it tends to compress. But what happens if you bend a plane? A plane does not compress, and if you extrude it later with a Solidify modifier, you now have a bent mesh, exactly bent, without compression. Moreover, you get this bonus feature: you have three material slots you can use for your model, and this is very handy for us for clothing. You may have seen it: we have a surface, a rim, and an inside, three different tones of the color, and we use this modifier for that. Moreover, if you use this technique, and we use it a lot, you can also reuse the resulting cage to deform the actual mesh, so there are no limits there. And this way, you see, our pipeline is a bit of a mess, because everything contributes to the final image: you have to model in a certain way to color, you have to model in a certain way to rig, you have to rig in a certain way to use that model, and so on. Moving on: how do we get to the final frame?
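As an aside on one of the Rigify animation tools mentioned above: converting between quaternions and Euler angles comes down to standard math. This is not the Rigify implementation (which would use Blender's own mathutils); it is a minimal pure-Python sketch, in the roll/pitch/yaw convention, with function names of my own.

```python
import math

def quat_to_euler(w, x, y, z):
    """Unit quaternion (w, x, y, z) to Euler angles (roll, pitch, yaw), radians."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))  # clamp for safety
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def euler_to_quat(roll, pitch, yaw):
    """Euler angles (radians) back to a unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)
```

A round trip through both functions returns the original angles, as long as the pitch stays inside (-90°, 90°).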
First, we have to split the processing into passes. A bit of this was covered last year, so I will skip the first part. We created this image, the poster for the movie; it's also made in Blender. Basically, we have the background, the foreground, the shoe, and the other key elements, down to the reflections. What we produce here is a standard color pass, a shadow pass for the background, the Freestyle line pass, and a basic proxy pass, because we need some kind of rotoscoping reference when the camera is moving or the ceiling is falling. Then we have specific masks; we used one here for the blood on the floor. And the more important Z-depth pass; I will talk about this later. As you may remember from the last conference, the backgrounds that are still are almost painted over: the artist just merges these layers in a painting program and paints over them. But sometimes the backgrounds are animated too, and then we treat them like we treat the characters. Let's see how we do that. We have the basic flat output again, the Freestyle line pass again, and the Z-depth, and this is the most important pass for the rest of the process. Merging it all up takes a lot of compositing. For the movie, we didn't succeed in using the Blender compositor, because it tends to be really slow on this kind of process. But over the course of production, I did succeed in merging our pipeline into the compositor, and we hope to talk with the Foundation, and the others, to see what can be done to bring it to real time, or at least to a faster compositing process. So, as I mentioned, I talked about overlaid brush strokes. How do we do that? We take the flat color pass and bring it into this node group I created. These are the group's options: you can see a canvas resolution, which defines the resolution of the noise used to deform the image. And then you get this kind of effect. It's very slight here, but you can see it: the image is deforming a bit.
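The "canvas resolution" idea in that node group can be illustrated outside Blender. This is not the production node, just a pure-Python sketch under my own assumptions: a low-resolution grid of random offsets displaces where each pixel is sampled from, and the lower the grid resolution, the broader the resulting "strokes".

```python
import random

def make_noise(size, seed=0):
    """A size x size grid of pseudo-random offsets in [-1, 1]."""
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(size)] for _ in range(size)]

def displace(image, canvas_res, strength, seed=0):
    """Sample each pixel from a position offset by low-resolution noise.
    canvas_res sets the noise grid size: lower values give broader strokes."""
    h, w = len(image), len(image[0])
    nx = make_noise(canvas_res, seed)
    ny = make_noise(canvas_res, seed + 1)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            j = y * canvas_res // h  # noise cell covering this pixel
            i = x * canvas_res // w
            sx = min(w - 1, max(0, int(x + strength * nx[j][i])))
            sy = min(h - 1, max(0, int(y + strength * ny[j][i])))
            out[y][x] = image[sy][sx]
    return out
```

With strength 0 the image passes through untouched, which mirrors how the effect can be dialed in very slightly, as in the shot described above.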
I'll push the deformation further. Okay, now you can see it more. Then we have to fragment this into jaggy brush strokes. To do that, I use the same node, but with other options. Here is what happens with the fragmentation. This node accepts the image to process as input. The canvas resolution defines the size of the texture we are using for the distortion. This is the first stroke pass, which outlines the jaggy edge, and this is the fragmented edge. We also have two debug outputs here, to see which texture is doing the deforming: this is the first deforming wave, and this is the second deforming wave, so you can always check what's affecting your image. Then we merge them together; this is the result of the two. Now that we have these jagged lines, we have to composite them with the original image, and this was again another task, because the standard Mix node in Blender doesn't have a "lighter color" mix mode. The lighter-color mix does a cool thing: it compares the values of a pixel between the two images and chooses the brighter one, so you get one or the other, not a mix of the two. And this is the result. I can go back and forth; you can see the image starts to recover some of the detail we had lost. We also have a factor value that affects the borders: you can see the edge here; when it's 0.5, you have both of the distortions. Then, on to how the node works. I think it's clear; I just didn't tell you that "keep original" basically makes this node work as a standard Mix node. So if you want to fully recover the first image, because in some cases, like when the character is very small, you want to keep the detail there, you can recover it with this. Then we have the lighting. The lighting in this movie is made entirely in post-processing: there is no light or shadow affecting the characters; lights only hit the backgrounds. The characters are all lit in 2D. The basic light we use in 2D is this kind of offset lighting.
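The "lighter color" blend described above (a mode Blender's Mix node lacked) is simple to state precisely: per pixel, keep whichever whole input pixel is brighter, with no channel mixing. A pure-Python sketch follows; the Rec. 709 luminance weights are my assumption for "brighter", and the function names are mine.

```python
def luminance(rgb):
    """Approximate perceived brightness of an RGB pixel (Rec. 709 weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def lighter_color_mix(a, b, fac=1.0):
    """Per pixel, keep whichever input pixel is brighter (no blending).
    fac blends the result back toward the first image, like a mix factor."""
    out = []
    for pa, pb in zip(a, b):
        lighter = pb if luminance(pb) > luminance(pa) else pa
        out.append(tuple(ca + fac * (cl - ca) for ca, cl in zip(pa, lighter)))
    return out
```

With fac at 0 the node passes the first image through, which matches the "keep original" behavior of the group.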
For the offset lighting, I created this node. It's pretty simple: you can offset the alpha channel in X and Y, so you get this kind of rim effect. You can customize the colors of the shadow and the light, and you can affect only the light or only the shadow, leaving the original color in the other. This is how the node is built. In some cases, and especially in this one, that's not enough for us, so we had to create a node that does more. We use the Z-depth to do so. It started with a discussion with Alessandro Rak, one of the other directors, who asked me what happens if we offset a 3D image. After a bit of tricky calculation, I obtained this pass: these are just two Z-depth passes offset against each other. It's not like normal lighting, because you get occlusion in this kind of effect. And we can use it: this is the standard pass, this is the contrast pass, and we use these two values, which work like a color ramp output. And this is the median filter; we use it for blending the edges together. Let me show you in more detail. Our models are in flat shading, and this is unconventional, I know, but we think of the facets like brush strokes we can have in the mesh. So if we want, we can blur them to get this kind of gradient effect and use it in some scenes. I'll show you another three minutes of the movie where this effect is particularly visible. You're done with this dress! Listen, listen to me. You have to do the things I tell you to do, right? Tomorrow you're 18. I think you're getting a little dizzy. Is that so? No. But you think you're making fun of me? The fact that you're becoming older doesn't mean anything, because you're still a child. You're ignorant. You don't even know how to sign your name. If it weren't for me, you would have had a bad ending. Because I already told you: you're a little girl with some problems. Right? And then just listen to what I tell you to do. Tonight, after six, you have to go fix everything as you are.
Are we clear, Vicere? Okay, now you're going to set up my room, and remember to wash your hands before touching my things. My plan is to build a fortune in this city. In this town. If I had your intelligence, I would have tried to find another. You could ask me as well how you spent your time. You can find me in the next one. But I'm going to help you, so you can focus on it. Now, about this process: I wish I could control the sliders of a color ramp from outside a node group. This is not possible in Blender because, as you can see, it only accepts fixed values. So I had to build my own custom ramp node group to use inside the other group. It's pretty easy math; I'm not a scientist, and I did this in a couple of hours, so I think it could easily be implemented right in Blender. Again, we use the Z-depth a lot, but the Z information has to be clamped to be usable. I wished for a Normalize node, or something like it, to clamp this value between the near and far clip planes, but there wasn't one, so I had to make my own normalization node. It's pretty easy again; it was not a complex task, just nodes. And I wished I could control the lift color of my node from a node group, again. I would like to add a driver here, here, and here, and it's not possible, because when you go outside the node group it's not updated; I think it's a dependency graph limitation. I hope it will be resolved in 2.8, but, again, I had to write my own color-lift node, and this was a bit more messy, but it works. Moreover, to make it work correctly: if you want to lift an image, you want to go above white, so you have to input values like 2. The RGB node can hold input values above 1, but you cannot slide to them; you have to enter them manually, so I created this little node just to pass those values along. Then, I wish I could output a Freestyle pass.
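Two of the custom nodes just described, the ramp evaluation and the Z normalization, really are simple math, which is the point being made. Here is a pure-Python sketch of both; the function names and the linear interpolation between stops are my assumptions, not the production node groups.

```python
def eval_ramp(stops, t):
    """Evaluate a color ramp: stops = sorted [(position, (r, g, b)), ...].
    Linear interpolation between neighboring stops; clamped at the ends."""
    if t <= stops[0][0]:
        return stops[0][1]
    if t >= stops[-1][0]:
        return stops[-1][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0)
            return tuple(a + f * (b - a) for a, b in zip(c0, c1))

def normalize_z(z, near, far):
    """Clamp a raw Z value to the clip planes, then map it to [0, 1]."""
    z = min(max(z, near), far)
    return (z - near) / (far - near)
```

Because `eval_ramp` takes plain numbers as stop positions, those positions can be exposed as group inputs and driven, which is exactly what the built-in ColorRamp slider did not allow.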
Outputting a Freestyle pass directly: it's very surprising to me that it's not there, but it can be solved by rendering a color pass with Freestyle disabled, and a Freestyle pass with "Freestyle only" active. Then, my own bugs. Okay, the Z-light node has some bugs. Let me show you; look at this part of the clip. This is an actual turnaround made with the Z-light node. Can you lower the lights a bit, please, so we can see it better? Okay, you can see there are two kinds of lighting affecting the mesh, and it's all done in compositing. There's nothing else there, apart from the Freestyle pass we created and, I don't know if you can see it, a painted overlay. Okay, so if you noticed, up there, there was an error. This is the error. How do we solve it? We render at a greater resolution, but by adjusting the camera sensor we keep the image in the same position, and then we cut away the jagged part, because it only happens on the image borders. Now, I have an extra clip; I don't know if I have the time to show it. Do I have time? No? Okay, I'll skip it and go to what's next. What's next? We tried hard to do this in real time. These are some examples of what we did in real time in the viewport. We got quite close, but we had no access to the pixel shaders, so we aborted that test. Okay, let me go back and show you the clip. This is happening in real time, so the animator can work with it easily, and I think it was a good compromise, but it was too tricky to go further without pixel shaders, so we aborted this test. Let's see. I want to show you that it works, because it's also affecting the depth value here. There was also a line style made through ambient occlusion here, a screen-space ambient occlusion. This can work too. Okay, I'll go back to the three-minute clip. A warning: it contains some mild violence and some bad words. So if there are a lot of bad words in the video, that's just how it is.
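Before the clip, one plausible reading of the Z-light idea above, two Z-depth passes offset against each other, can be sketched in pure Python. This is my reconstruction, not the actual node math: a pixel counts as shadowed when the depth sampled at an offset position is nearer than the local depth, which is why the effect comes with occlusion built in.

```python
def zdepth_offset_light(z, dx, dy, bias=0.05, background=1e9):
    """Shadow mask from a Z-depth pass offset against itself.
    A pixel is 'shadowed' when the depth sampled at (x+dx, y+dy) is
    nearer than the local depth minus a bias -- a crude occlusion test."""
    h, w = len(z), len(z[0])
    mask = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x + dx, y + dy
            neighbor = z[sy][sx] if 0 <= sx < w and 0 <= sy < h else background
            if neighbor < z[y][x] - bias:
                mask[y][x] = 1.0  # occluded: this side falls into shadow
    return mask
```

Feeding the mask through a ramp and a median-style blur, as the talk describes, would turn this hard mask into the soft two-tone lighting seen in the turnaround.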
But don't worry, because these are things that we have decided together. What I do, I do it for you, for me, for you. I think when we get rich, important, happy, I will be the king. You will be my queen. Salvatore, but how long do I have to wait? I don't have time for that. Remember that last night you were with a lady. I don't want to honor her, at least a little bit, with this fight. And the creature? What are you doing with the creature? We are waiting for Durba to film the poor thing. And who gave it to you? You sent it? Oh, my daughter Angelica. I don't know who she is, but she is a little bit of a... I wanted to show you this scene because it contains a really cheap trick. How many of you have succeeded in rendering Freestyle lines in reflections? No one? Okay, neither did we. So we used a really cheap trick: we just recreated the other side of the mirror and rendered it normally, then clipped it into the scene. It's a classic live-action movie trick, the kind you use when you don't want to see the camera in the reflection. Okay, the last thing: we're trying to make some more experiments. This is an experiment made in Cycles. We're experimenting with the painting on the backgrounds. This is the normal rendering of this pot, but the pot is made with... it doesn't play. Okay, there's a texture here that is generated procedurally: it makes these paint strokes on any mesh you have, without UVs, and we use it a lot to create this kind of shader. It also works with our Z-pass node, so you can use it like that as well. I don't know if it's playing. No? Okay, that's all.