Ok, hello. I am Ivan Cappiello. [unintelligible introduction] These are two short films; we mostly use short films to test new features. For us it's like an R&D cycle, because in these ten years we had struggles keeping in touch with all the new features there are, so we use short films to develop new features and test our new feature sets. By the way, we also did Yaya and Lennie: The Walking Liberty. That is our animated feature, and there will be a presentation about it by my colleague at 5 p.m. in this same room.

Ok, so. We are here to talk about our first animated series. This is the first one for us, because we have always done short movies and feature animation. I will cover pipeline stuff, grease pencil stuff, non-photorealistic render stuff and video sequencer stuff, because everything, from storyboard to final editing, is done in Blender.

So, let's start with pipeline stuff. I'm sorry if I bother you with this, but maybe for some people it's interesting. Working on feature animation or on a series is not the same thing. It's a totally different matter. When you're planning an animated feature, you have an overall duration of about 85 minutes. There are roughly three acts, like the prologue, the middle act and the epilogue, and these acts are split into sequences. Then in the sequences you have the main characters, more or less always the same, excluding the complementary ones and so on. And then you have the main sets that run through all the 85 minutes of animation. The production time for an animated feature in our studio is about two years, excluding pre-production.

An animated series is a totally different matter, because in this case we have 11 minutes for 26 episodes. That equals 286 minutes of animation, but it's done overall in less time than we spend on a movie. So, how do we approach this? Because we had never done it. First of all, each episode of 11 minutes has its own three-act setup, like first act, second act, third act, and even if it's split into sequences and then into shots, the characters may stay the same within one episode, and the sets could be more or less the same for one episode. But things change when you look at the whole thing. Let's say an episode has a standard structure, where in the first act adults or kids have a nutritional problem; at the end of the first act the kids go inside the body of the guy, and they are helped by the grandpa, who stays outside and gives them nutritional elements that can make the guy feel better. Then we have some sort of ending of this second act that has an 8-bit video game fight inside, and a couple of these things we can also recycle from episode to episode. And then we have the ending, which is about the lesson learned and how to eat better. But this structure is not kept the same, character-wise or set-wise, between episodes: some sets are recurring, but not in every episode. Moreover, we have a situation where the main set of the first act is outside the body, then there are the sets inside the body, inside the organs, and then we have the outside sets again. So it's like having that problem three times for each episode.
So, planning this, we decided on a basic approach. There are things that are generic: the main characters will appear in more or less every episode, and so will their houses and probably the school, so those are treated as generic. Then there are episodic things, which could be main character variants (maybe one of the kids wears pajamas in one episode, or it's Halloween and so on), episodic props, sets, and 2D effects, because all the effect work you have seen in the main title sequence, and that you will see later on, the VFX-like stuff, is done by hand drawing with grease pencil. We also try to consider what's recurrent: maybe some character is dressed in a swimsuit in a couple of episodes; for us that is still treated as episodic. So the third column is reserved for episodic material.

This is a simplification of a basic animation pipeline, where some phases overlap. You may forgive me the simplification; I will try to compress it in a more linear way so you can understand our approach. Basically, in the first phase we have the concept and model part, where 2D drawings and basic 3D models are made. Then we have the storyboard phase, which in past years was usually done in 2D animation software. That means the storyboard artists draw on screen and produce a video that is sent to the layout department. The layout department is, generally, the department that tries to rematch in three dimensions what the storyboard artist has done in 2D. This is generally a messy phase, because when you draw you don't care about camera lenses and positioning on the set. So the layout department usually rebuilds everything from scratch, based on the video sequence made by the storyboard artist and approved by the director, and creates this 3D world where everything is horrible and gray, characters are in T-pose and float around. Then there's the animatic phase, where things start to move and you can see some more character life in 3D, but it's still not the final animation. The rigging phase then makes animation possible, and the animation is done. At this point everything is animated, with all the cameras approved back in layout or animatic. Render is also a phase where, through multiple passes, the frame is decomposed and then reassembled in the compositor, and then there is the final edit, which happens on yet another layer.

We don't have that much time and we don't have that overall budget. So, being experts of not-so-high-budget productions, we tried to compress these phases. What if we could have a storyboard that includes the layout and animatic phases inside it? We would have much more time to develop the storyboard itself, and we would have much more time for animation, because we would not have to match everything: it's already matched. So what we ended up with is: we edit and draw the storyboard, but inside the basic 3D environment, so that cameras are already set by the storyboard artist and the grease pencil objects are already moving inside the 3D virtual space. In the next phase the cameras already match between animation and storyboard, and we just replace the grease-pencil-drawn characters with the actual rigged 3D characters. So in animation the setup is already the same, and having the same 3D space as the storyboard helped really a lot. This was provisional at the time, but luckily for us it worked out this way.
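To give an idea of what "cameras already set by the storyboard artist" can mean in practice, here is a minimal bpy sketch (not from the production files; all names and frame numbers are illustrative) that creates one camera per storyboard shot in a single scene and binds each camera to a timeline marker, so the active camera switches per shot:

```python
import bpy

scene = bpy.context.scene

# Illustrative shot list: (shot name, start frame).
shots = [("sh010", 1), ("sh020", 49), ("sh030", 121)]

for name, start in shots:
    cam_data = bpy.data.cameras.new(name)
    cam_obj = bpy.data.objects.new(name, cam_data)
    scene.collection.objects.link(cam_obj)

    # Bind the camera to a timeline marker: from this frame on,
    # the scene renders through this camera, like a shot switch.
    marker = scene.timeline_markers.new(name, frame=start)
    marker.camera = cam_obj
```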
And then, what if we don't split out compositing? Since a couple of years, Eevee has made it possible to have toon shading happening in real time, or almost real time. So what if we don't composite at all? We are compressing these phases. But how do we do it?

Our goals were: 2D and 3D space consistency between the layout and the later phases. Camera consistency is very important, because a storyboard artist could draw a character bigger just to simulate that it is near the camera; but then, when you switch to a 3D character and place that character near the camera, you get horrible aberrations on the face, on the hands and so on. So camera consistency was our main goal, but we pushed it a little further, trying to keep key poses and timing in a sort of blocking animation phase. Let's say we are not doing only a basic storyboard, we are doing a half storyboard, half key-pose animation in 2D. And then there is the third thing: we are using the video sequencer from the beginning, even in the storyboard phase, and we are using it in every phase, for animation editing, render editing and final editing. Having the sequence and shot splitting already in this phase is good, because we can plan better how to split the work for the animation phase.

So that's how we did it. We used 3D objects, cameras and grease pencil for the storyboard, the whole package. We approve the poses and everything in the storyboard, and then, using matching pose libraries (which already existed in rudimentary form at the time, and now, with the asset manager, it's easier and easier), we match the animation as closely as possible to the 2D storyboard. And for the render phase we just try to put the final frame in the viewport, like what you see is what you get.

Ok, so to make this possible we have a sort of split parallel pipeline. The top one is the storyboard pipeline and the lower one is the animation pipeline, and they run concurrently. For the storyboard we use proxy models, which are a rough, rough, rough modeling based on blocking volumes, some made specifically for the sets and a bit more detailed for the characters; we will get to that in a minute. They go to the storyboard team, which outputs what is effectively a pre-animation layout. In parallel we do the final model, alongside the proxy model, but not at exactly the same time, because the production phases overlap a bit across the year. Then we do the final rig, once we know from the storyboard what needs to be rigged and what not. Maybe, looking at the shot list, you see the character interacting with, let's say, a bottle, a table and so on, and you are planning to do lots of rigs; then you go to the storyboard and see that the scene was cut, so you don't have to make those rigs. We try to compress this phase and not spend time on stuff that isn't useful.

So how is the concept and proxy model phase done? The concept, for us, is a freewheeling phase where we ask the concept artists to do whatever they want, to just previsualize something that may feel right. Then we ask the 3D department to do extremely fast mock-ups and blocking from these concepts, so the concept artist can be aware of what's happening in the 3D environment; but then we give the 3D models back to the concept artist and let him draw on top, to conceptualize again and rematch whatever may be going wrong in 3D, going back and forth between 2D and 3D. Actually, it goes like this.
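On the pose-library point: a minimal sketch, assuming Blender 3.0+ where pose assets are Actions marked as assets, of how an approved key pose could be exposed in the Asset Browser for animators to click-apply (the action name is illustrative, not from the production):

```python
import bpy

# A one-frame Action holding an approved storyboard key pose.
pose_action = bpy.data.actions["KEY_pose_surprised"]

# Mark it as an asset: it now shows up in the Asset Browser,
# where an animator can click-apply it to the rig in pose mode.
pose_action.asset_mark()
pose_action.asset_data.tags.new("episode_01")   # optional tagging
pose_action.asset_generate_preview()            # thumbnail for the browser
```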
This is a freewheeling concept, and this is its corresponding representation as a 3D blockout. The blackout lines you see are the lines the concept artist drew on the 3D volume, using either grease pencil or a tablet with a drawing application, re-annotating it: here it's a bit thinner, here it's a bit rounder, and so on. In the end they also made a color model from the 3D blockout; this is how it looks when we match the shapes, and this is how it looks when we also match the colors. So pretty, pretty, pretty close.

In this phase we also made a visemes chart. The visemes chart is completed by a phonemes chart for the main lip sync and the facial expressions. This is pretty important, because from here on we split the pipeline again and replicate this chart in 3D, creating an asset pose library that the animators will later be able to just click and apply, to match the exact storyboard pose.

So this is how it's done in a basic layout. On the top line there is the asset phase, which consists of proxies of sets and characters, plus the camera, which are appended inside the storyboard file, where the proxies are animated. We usually use bones here, just one bone, because in Blender storing animation data on bones is really, really, really more convenient than storing it on the object transform channels: you get access to the interpolation tools, and you can store other properties on the bone, which is not as easy to do on the transform channels. The only detail missing from the diagram is that for the animated proxies we usually introduce an intermediate bone to carry the movement. On top of these animated proxies the storyboard artist can draw with a grease pencil object attached to the same bone (there is a small sketch of this below), so he doesn't have to take care of the movement, just of the pose. And then everything is assembled in the storyboard file.

And this is how it looks when it comes out of the storyboard. [A storyboard clip plays; the dialogue, translated:] "I'm called Ginny, that's me, and my partner is Carlo Winter, called PC." "What? He's coming for the computer the teacher gave us five minutes ago." "And Amerigo Cecucci, called A." "I'm telling you: the club, guys. The one where we will have our adventures." "Yes, but only after eating." "And then the probability of having an adventure here is 0.00000000001%." "See you, guys." "There's a new guy." "What are you selling here? Fruit, vegetables?" "None of your business." "How much do they cost?" "None of your business." "Come on, I don't have time to waste with kids." "Why do you have a shop if you don't want customers? He's mean, antipatico." "Antipatico doesn't exist." "Who are you talking about, Ginny?" "The owner of the new shop, dad. It's called... Nutriency." "He's back." "Who's back, mom?" "No one. But don't bother that man, okay?"

So you can see this is already pretty close to a key-pose base for animation. The overall blocking of a set like that is done in more or less one day, so we have a really fast pipeline. In this phase all these things are technically appended into the storyboard file, so the storyboard artist can make edits, can lower the desk, remove some walls and so on, and we can, again, as freewheeling as possible, explore the situation we are going to be in in the next phase. So here it's not foolproof: you can do whatever you want, you can break the file. Later on, things are going to be more strict and will have to respect more standard procedures.
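Going back to the proxy setup mentioned above, here is a minimal sketch, with illustrative object and bone names, of attaching a grease pencil object to the same animated proxy bone:

```python
import bpy

rig = bpy.data.objects["PRX_char_rig"]      # one-bone proxy armature (hypothetical name)
gp = bpy.data.objects["GP_char_drawing"]    # grease pencil object (hypothetical name)

# Parent the grease pencil object directly to the animated bone:
# the drawing inherits the proxy movement, so the storyboard
# artist only draws the pose, not the motion.
gp.parent = rig
gp.parent_type = 'BONE'
gp.parent_bone = "proxy_root"               # the intermediate movement bone
```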
So, this is an old trick; we've been using Blender like this for ten years. Blender, in the video sequence editor, has the ability to edit Blender scenes themselves as strips. So what we do here is: we have a Blender file that contains various scenes, split with different timings into shots. These single shots are linked as strips in the sequence file, and you can edit it like a normal video output, but you are actually editing the 3D timeline. With this trick, which, as far as I know, Blender is the only software to support, we can have a storyboard file that at first is just one sequence.

What is a sequence, for people who don't know? Let's say there is a movie set in this place: this room is a set, and if I go out into the corridor, that's another set. So usually we split the sequences by set, and then we have different shots inside. The shots are inside the sequence, and this is one Blender file; then we have the next Blender file with all the shots belonging to the next sequence; and later on all these Blender files with sequences are edited together in one editing file that assembles the sequences all together. I explained this at a far, far away conference, and it hasn't changed a bit since then, but you can ask questions later, because it's a very intricate mess.

So what we have at this point: 2D and 3D space consistency, camera consistency, animation key-pose consistency, and sequence/shot-based planning, because in this phase the storyboard artist can edit the storyboard timeline with the director and it never changes anymore, and we also get the shot splitting for the animation phase. The storyboard file then becomes the 3D animation file (I will cover it later), and also the render file and the final edit file; this will be clearer later.

Now, how the pipeline architecture works with the folders. We have an asset folder that contains the generic assets, and we duplicate these assets into the episode storyboard by appending them, so this file is totally independent: you may need to change or destroy something, maybe make a major modification to a character, and we don't want to worry about that. When we make a modification to an asset, the asset is copied, duplicated and stored in the episode asset folder, so that each further modification will not affect any other episode; we duplicate it again and again for any eventual modification, or make new ones. Then, and this is very important, we use relative paths in the Blender files, so the folder structure between branches must be the same. You will see storyboard, animation and render at the same indent, so inside each storyboard folder there is an episode folder and then the Blender file, and so on.
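A minimal sketch of the scene-strip trick, with illustrative scene names and the 2.8x/3.x bpy API:

```python
import bpy

edit_scene = bpy.data.scenes["EDIT"]        # the editing scene (hypothetical name)
edit_scene.sequence_editor_create()         # make sure it has a sequencer

frame = 1
# Each shot is a full 3D scene; we lay them out as scene strips,
# so cutting the edit is really cutting the 3D timelines.
for shot_name in ("sh010", "sh020", "sh030"):
    shot = bpy.data.scenes[shot_name]
    strip = edit_scene.sequence_editor.sequences.new_scene(
        name=shot_name, scene=shot, channel=1, frame_start=frame)
    frame = strip.frame_final_end
```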
This makes it possible to duplicate the storyboard file, rebuild it, let's say, with dynamic links to the episode library, and then work on that episode. Let's see how the animation scene looks after this process, and try to decide whether it matches the previous phase or not. Ok. So what's the difference between the first and the second? In this case we have episodic folders, so the asset is derived from the generic asset of the street, but it's specific for this episode: the background artist can add detail in this specific location without touching the overall project. And even if the camera is not looking at this place anymore, or looks at another point, the next episode where this asset is loaded inherits the modification and makes new ones, because this is done by iterations. This is very important, because I can assure you that in this kind of project not one single shot remains untouched, even if you have the full set: each geometry is hand-drawn, like hand-drawn by moving vertices instead of pixels, so the stuff is less rigid.

Again, for the next phase, since the folder structure is exactly the same, we can just copy the file into the render folder and go on making further modifications to shaders, lighting and so on. And this is how it looks after the process; I will mute the voice so I can talk over this phase. Technically there is no difference from the animation phase, except that it uses the viewport lighting and rendering, apart from one single pass that I will show you later when talking about shaders, which we need to achieve one effect. It is not unachievable; it just requires some extra time setting up the Blender scenes, so we stayed within standard procedures.

So this is how it looks. You can see that at this point it totally matches the 3D animation, which was already almost matching the 2D storyboard; it's just more detailed. In a mirror you can see, probably, a shadowing effect: usually the animator doesn't bother making the characters touch the ground, so you get a shadow offset you have to correct, and so on. This is the usual workflow; it happens every time. And here is a look at the end of the process. This is a comparison between all three, so you can see how close to the final animation we are working from the beginning. Let's say we lose freedom at each step, but we try to keep as much freedom as we can in the first phase, because we know this is important for storytelling. And here it's totally matching: you can see each animation is much more detailed from the storyboard to the animation phase, but it matches all the standard key poses that are there.

So what are my pipeline goals at this point? As I said, what happens in the episode stays in the episode: you should not, when nothing goes wrong, have to take care of generic modifications, because you can do whatever you want, mutating the previous modification or creating a new one, and episodic cases can be easily ported from one episode to another. Each branch is frozen in its final state. It means: let's say the animation team has done a wonderful animation, and when I go into the render phase the render artist makes something bad, or something bad happens behind his back, and we lose constraints, we lose a character, maybe a character changed and you have to rethink it, and so on. Well, you can always open the animation file in parallel and see how it was meant to be, or restore it just by copying it over and remaking your modifications.
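A minimal sketch of why the identical folder structure matters, assuming relative paths everywhere (the paths are illustrative):

```python
import bpy
import shutil

# Ensure every library/texture path in the open file is relative,
# so the file keeps working when copied to a sibling branch.
bpy.ops.file.make_paths_relative()
bpy.ops.wm.save_mainfile()

# Promote the animation file to the render branch: because
# "animation/ep01/..." and "render/ep01/..." sit at the same depth,
# all "//../.." style links keep resolving to the same assets.
src = "/project/animation/ep01/ep01_seq010.blend"
dst = "/project/render/ep01/ep01_seq010.blend"
shutil.copy2(src, dst)
```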
Obviously we made this convenient with our specific pipeline add-ons, which provide one button to do this recreation, but the process is the same in essence and can easily be done per episode; I already told you why.

As always, we had to develop some special Rigify features to make this happen, because, as you may know, until the latest versions of Blender the Rigify face was a mess, meaning you could only add the whole face: you could neither touch a single bone nor remove one, or it would crash, and not with an error message, it would totally fail to generate. So we have been working for four or five years now, since our last presentation, on a modular face system that right now is in Blender; this system was developed two and a half years ago for this production. I will show what's behind the new face system and how you can reach these new features if you look at the current Blender version.

What are the problems we had to deal with? Characters like these. These toxins are strange: they have no nose, no ears, they have spikes, some strange sort of mouth that changes form and dimensions along the way; moreover, some are made of two or more faces. How can you do that with the old Rigify face? And the expressions too. So we knew what we needed: modular rig features, shared between different characters, because there are lots of these toxins inside the body, so we have to replicate animation and rig features between characters, and so on.

So here we go: a modular face system; rigs with grease pencil strokes on 3D meshes, which is an actual problem, partially solved; and reliable constraining between armatures, which I will not cover today; some glimpses of it will be shown in the next panel, about the animated feature film we did. The first two tasks are already fully available today in Rigify: with Rigify you can do modular rigs, and you can share features between characters through meta-rig sharing. But what about the modular face system?

So, what are the modular face key elements we worked on in the last four years? First of all, we split them into two main areas: the automated features and the generic features. Automated features, classic examples, are the eye system, where you can have lid following and eye targeting, and the jaw system, where you can have the jaw and the lips following the jaw movement, with overlapping, and so on. Then we have these generic features, which I will name glue and pins, that serve as helpers for the other purposes. Beneath the automated features there is a not-so-well-known Rigify sample that has been there, I believe, since 2.77 or 2.78, called experimental.super_chain, because it was an experimental feature. We used it a lot in these last three movies, and it's the key element to obtain the others.

So let's look in a little detail at how super_chain works. super_chain can take as input another super_chain, a chain of bones, or a single bone, and it acts differently in each case. Let's see a demo, it will be clearer. Ok, this is the standard chain. You can see it basically creates a stretchy chain between the two extremes, with one middle control to generate an intermediate curve, and it has tweak controls for further modifications. This is what super_chain usually does by itself, and you can tweak it as you want through two layers of controls: one main layer with top, bottom and middle, and a second layer. But what is also excellent is the feature you obtain when you have just one bone: it's what Daniel Martinez Lara was asking for ages ago, one bending bone that you can flex between two controls.
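As a rough illustration, a sketch of wiring that sample up on a metarig, assuming the Rigify add-on is enabled; the object and bone names are illustrative, and with a single bone the generated rig gives the bendy one-bone setup just described:

```python
import bpy

meta = bpy.data.objects["metarig"]          # a Rigify metarig (hypothetical name)
bpy.context.view_layer.objects.active = meta
bpy.ops.object.mode_set(mode='POSE')

# Tag a single bone with the experimental chain rig type; with one
# bone, generation produces a bendy bone flexing between a start
# control and an end control.
pbone = meta.pose.bones["whisker"]          # illustrative bone name
pbone.rigify_type = "experimental.super_chain"

# Generate the final rig from the metarig.
bpy.ops.pose.rigify_generate()
```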
So this was secretly already in Blender for the last ten versions, I believe, and it's the core element we used to design the others. super_chain also has this specific feature, called convergence: you can define a bone where one or more chains converge, so that the ends of the chains will follow that particular bone.

Then there are the more complex things that never went to master, that were only in our branch, and now have a new name that explains the design behind them. The design is to create some sort of mesh rig, a rig for the outer surface, that can be used even vertex by vertex. Basically, the pins generate a collection of bones, controlled by single controls, and where lines and chains overlap, they create just one control by clustering everything into one. This is the base; used in conjunction with the glue, which are helper bones that create constraints between controls, glue makes it possible to propagate transformations between the pins. This is what lies underneath the new face system.

Then we have the automated stuff. We have the eye system, which is made consistent by this design, but we made it modular, so you can have one or n bones, eyes or rigs. If you look at the old one, the requirement was that it was not flexible at all: you could not change the number of bones. Now you can have any number of splits you want; you usually need more bones in the corners, where you want to control how the curve changes, and you need many bones to get a good curve interpolation. The eye system is made of two slightly modified super_chains that are controlled by the eye itself. This is how it works when you generate it: basic automation. Obviously the automation is adjustable: you can decide how much lid following is going to happen, and whether you want to rotate the FK controls or the eye control; there is automatic lid closing, and so on. Moreover, you can have any number of eyes you want: you can create a beholder, or a character with three faces, like we did. And there is also this feature, which is not as easy as it looks now: you can aggregate and clusterize all the targets together, so you can move them all together and do the detailed movement later.

The jaw is based on the same design. Here we have four super_chains, one for each side, top and bottom, and they are controlled by the jaw system itself. Moreover, we added the ability to have secondary child loops that mimic what the lip is doing, by parenting. If you look at the jaw demo: first of all, you can have any number of splits here, so you can detail a giant mouth, which probably requires more bones than a tiny mouth. We needed that, so we modified it, and now you can have any number of splits, as long as they match between top and bottom. And then it acts like that, and obviously you can move it around. I will fast forward, to not bore you with tech stuff. And this is how it works with the secondary loops: the secondary loops follow the jaw movement, and later on they can be moved independently; but, as I showed you, you can create automation between these two loops to make a soft propagation, let's say using a 0.5 propagation factor from the outer loop to the inner loop, and so on.

How we used it: let's see a real-world example. These are all things we did in the animations; this is a cafe in an episode. You can see they are pretty crazy characters, changing dimensions, mouth openings and expressions; they totally needed this system, and now it's available.
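In today's Rigify, where this system landed, assembling an eye and a jaw on a metarig might look roughly like this; the bone names are illustrative, and the rig-type names are the current ones mentioned just below:

```python
import bpy

meta = bpy.data.objects["metarig"]          # Rigify metarig (hypothetical name)
bpy.context.view_layer.objects.active = meta
bpy.ops.object.mode_set(mode='POSE')

bones = meta.pose.bones
# The eye and jaw masters automate the skin chains they own:
# lid chains follow the eye, lip chains follow the jaw.
bones["eye.L"].rigify_type = "face.skin_eye"       # illustrative bone names
bones["jaw"].rigify_type = "face.skin_jaw"
for name in ("lid.T.L", "lid.B.L", "lip.T", "lip.B"):
    bones[name].rigify_type = "skin.stretchy_chain"

bpy.ops.pose.rigify_generate()
```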
So how does it work? Let's say you add two eye systems paired together; you add the mouth system with the outer loop; you can use what I'm still calling super_chain (I will correct my naming in a second) with the convergence bones for this part, so you can move it all together, or move it by bending the three bones; and then these are just super_chains connected to convergence bones, and they may have glue bones to add automation. The character is stretching and deforming, so we attach it all to another super_chain, and you can also deform the head or the body as you want. This is the final actual model.

So, about the naming I used: this is the new face system developed by Alexander Gavrilov, the mysterious Alexander Gavrilov, who worked with me on the system. The history is that at some point, chatting in the rigging channel with other Blender riggers, we discussed what we were doing, and Francesco Siddi encouraged us to get in touch with the other developers working on rigging stuff. Alexander developed the new system; and the system I showed you was also the base on which Demeter Dzadik built the wonderful CloudRig, doing other crazy stuff around it. So this was a general idea, put out there two and a half years ago to start this production. Now the naming is a bit different: what I called pins is now called skin, and super_chain is now skin.stretchy_chain. If you look in the Rigify modules you can see them: you can already apply an eye with this system by adding the sample, you can add a jaw by adding the sample, and you should use the skin system declared in skin.anchor, skin.stretchy_chain and so on, if you tech guys are interested in it.

But I mentioned that we had to rig grease pencil objects, because we had lots of strokes drawn on props. How did we do it? We ended up doing it with a multi-user super_chain. Why? Because super_chain has this wonderful ability to create one single bone deforming the object. Why am I saying this? I don't know how many of you have tried to use grease pencil with weight painting, or grease pencil with a lattice: it's a total pain. I must say probably no one has taken care of that; it's not impossible, but it probably needs some more development. In the meanwhile, there is this trick. Since we are using one single bone, as long as the object doesn't require fancy deformation, just squash, stretch, parenting and so on, you might try to parent with automatic weights to the bone; it will not work, because, as I said, the standard weight paint tools are lacking for grease pencil. But since it's just one bone, and the deform group is added for you by the operator, you can just assign weight one to all your grease pencil points, bind the object, and in a few seconds you will see the object moving along with the rig, with correct deformations, because you have this intermediate bone and you don't have to care about matching the interpolation between two bones, joints, controls and so on. This is the easy trick; it has been available in Blender all this time, and now you can use skin.stretchy_chain to achieve the same result.
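A minimal sketch of that trick, with illustrative names (the weight_set API is from the 3.x grease pencil; treat this as a sketch, not the production tool):

```python
import bpy

gp = bpy.data.objects["GP_prop_strokes"]    # grease pencil object (hypothetical)
rig = bpy.data.objects["prop_rig"]          # rig generated from the one-bone chain

# One vertex group, named after the single deform bone.
vg = gp.vertex_groups.new(name="DEF-prop")  # illustrative deform bone name

# Weight 1.0 on every point of every stroke: with a single bone
# there is nothing to blend, so no weight painting is needed.
for layer in gp.data.layers:
    for frame in layer.frames:
        for stroke in frame.strokes:
            for i in range(len(stroke.points)):
                stroke.points.weight_set(vertex_group_index=vg.index,
                                         point_index=i, weight=1.0)

# Bind through an armature modifier; the bone now carries squash,
# stretch and parenting for the whole drawing.
mod = gp.grease_pencil_modifiers.new(name="Armature", type='GP_ARMATURE')
mod.object = rig
```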
Now let's talk about the look. We had these goals when we started. As always, if you've ever seen our panels, we have clean fills and gradients, but we needed clean and stable rim lights, because we are not doing them in the compositor, we are doing them in 3D, in the scene, where it's usually either too fake or it mimics something too deep; they have to stay stable on one side of the character even when the character turns around or the camera turns around. Then, line art strokes on overlapping fills, meaning we don't want line art everywhere, but where parts of the same color overlap, we want to detail that with line art. Light linking: I will not cover it, because it's basically done by having the 3D characters react to red lighting, filtering the red channel, and having the background react to the green channel, with standard techniques; you can find a thousand tutorials, and even paid shaders you can buy, if you want. And interactive shading and grading: this is very important, because we were not planning to use the compositor at all. We ended up using it a tiny bit, but we needed a way to color grade the frame inside the scene, and to reduce compositing as close as possible to zero.

So, on the left is the character as it came out of the final concept phase, and this is how the character looks in the final episode. How is it done? Basically we made a rig for that, too, so that we could access the old pose library system, because we are still talking about two and a half years ago. By clicking in the pose library we can load a preset lighting condition; it is done both by moving lights and by modifying the shader properties. This is also done through bones (sketched below), because, as I said, it was easier to do that way, at least in that version. If you look at what happens in the side panel, you can see the properties this bone controls change as we click on the presets: the parameters change on the bone, and the modification is applied to the shader. This makes interactive lighting possible in the scene, and it also makes consistent lighting possible across scenes, because if you look at 2D animation, you don't want color shifts on the characters and on the background between shots: you want stable lighting and color, meaning that if you choose a particular tone for the skin, you don't want it to ever change unless a new lighting condition is applied. In our productions we use Z depth a lot, so we included a Z depth system inside the shader itself, so you can easily use it to fade by distance; and we also have a flat system to flatten out the character, because in some cases you want a basic silhouette, with no shading at all.

Moving on: I was talking about compositing. This is the only pass we use, sort of, because Workbench is perfect for making these overlapping-fill lines and Line Art is not: Line Art cannot, cannot yet, generate this kind of line. Moreover, as far as I know, Line Art needs to be stored and converted if you want to edit it later, which can have a giant footprint on the file. So this is how we use it: just by layering, we use this pass as a mask to multiply the frame by itself, so that you always have the same color, darkened by multiplication. Backgrounds work exactly the same way: as you can see in the viewport, you can go through the cameras and always have the correct lighting applied, and the parameters stay the same. If you create a new lighting condition, it is available to all the lighting artists by clicking, and you can be sure it's the same for all the shots.
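A minimal sketch of the bone-driven shader parameters idea, assuming a driver setup along these lines (all names are illustrative, not from the production):

```python
import bpy

rig = bpy.data.objects["lighting_rig"]       # hypothetical control rig
mat = bpy.data.materials["CH_skin_toon"]     # hypothetical toon material
node = mat.node_tree.nodes["RimStrength"]    # e.g. a Value node in the shader

# Drive the node value with the control bone's local X location:
# sliding the bone retunes the shader live in the viewport, and a
# pose stored on that bone recalls the whole lighting condition.
fcu = node.outputs[0].driver_add("default_value")
drv = fcu.driver
drv.type = 'AVERAGE'

var = drv.variables.new()
var.type = 'TRANSFORMS'
tgt = var.targets[0]
tgt.id = rig
tgt.bone_target = "rim_ctrl"                 # illustrative control bone
tgt.transform_type = 'LOC_X'
tgt.transform_space = 'LOCAL_SPACE'
```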
Closing, some more VSE stuff. I said we use a VSE edit file for editing; in this case, from the storyboard phase onwards, we have a parallel editing branch that automatically switches the linked scenes with the exported sequence strips. And what are the issues we dealt with? This is something I would like to discuss with any developer who is available: Blender, at least since 2.6, is by design not able anymore to export Apple ProRes. This is a real problem, because you have to export PNGs and then use FFmpeg, let's say a standard tool that is also included in Blender, to convert them into an intermediate or uncompressed movie format you can use in video editing software. Even though we are using Blender as the video editing software, I don't understand why there is no option anymore to support intermediate codecs. Then, working in standard view transforms: when you are working on this kind of 2D look you are not using Filmic, because you want exactly matching colors between shots and between characters; the Standard view transform seems a bit abandoned, or not so well supported anymore, because it can cause various issues in color management depending on the area you are working in. It's not a big deal, you can work around it, but it would be nice to have a look at it. And in grease pencil a good deformation system is missing. We are working on it: one of our young developers is working on porting a similar, let's say, surface-deform system to grease pencil; we have a not completely working prototype, and we are looking forward to discussing with others from the grease pencil team to go further.

So, thank you. We have a couple... yes, I believe Bastian says it's over. If you want to ask something, you can meet me later on outside. Thank you.
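As a footnote to the ProRes limitation mentioned above, a sketch of the PNG-to-ProRes step via FFmpeg, wrapped in Python; the paths, frame rate and profile are illustrative:

```python
import subprocess

# Render PNGs out of Blender first, then wrap them into ProRes with
# FFmpeg (prores_ks encoder; profile 3 = ProRes 422 HQ).
subprocess.run([
    "ffmpeg",
    "-framerate", "24",
    "-i", "/renders/ep01_sq010/frame_%04d.png",
    "-c:v", "prores_ks",
    "-profile:v", "3",
    "-pix_fmt", "yuv422p10le",
    "/renders/ep01_sq010.mov",
], check=True)
```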