Hi everyone, we're very happy to be here today to show you how we manage the assets of the studio and bring them into our shots. So, this is us. This is Tanguy, top of the class, who is also a supervisor at the studio, in charge of various projects. And myself, Axel, also a supervisor at the studio and in charge of Tanguy. We've split this talk into six parts, starting by telling you what the studio has been working on over the last few years, followed by a small demo reel, both to show the diversity of our projects and what we've been through on these productions. That will let us explain why we chose to use a level of detail system, which we'll present in the next part. In the level of detail part we'll have no secrets for you: we'll show you the LODs we use in our shots and how we manage them. And the last part, the most technical one, is about geometry, and we'll show you how we used it to create the different levels of detail. So, Tanguy will now present our latest projects. Hello everyone. One important thing to know is that we stayed on Blender 2.79 for the last few years: because of pipeline inertia, and because our projects often take more than two years, we had to use this old version of Blender for a long time. We switched to Blender 3 at the beginning of 2022, and by doing so we got a lot of new features all at once: Eevee, Grease Pencil, Collections, Geometry Nodes and Cycles X.
So it felt like Christmas, but it also meant we had a lot of work on our hands: rethinking our way of doing things, reinventing our pipeline and making sure we adapt to all the new Blender features. Some of our shows are not made with Blender yet. Some of them, like Kaeloo, which you can see on the left, have a history with other software, in this case 3ds Max. And because we have thousands and thousands of assets, we can't port them all to Blender; it would be too expensive. So we still have to open 3ds Max, which is a pain when every other project is done in Blender. Some other projects, like Gora La Plage, in the middle, are 2D projects. We haven't moved to Grease Pencil yet, but it's definitely in our plans. And we also have projects, like Athleticus, that were made outside of Blender for the first three seasons. But the next season, the fourth, is currently in production at Cube, and it's finally made with Blender. So we're getting to a point where every one of our projects will be made with Blender. So, here is a small demo reel of our projects. Thank you. Now I'll talk a bit about the challenges we faced on these recent shows, and what led us to rethink our pipeline and to use a level of detail system on our assets. In 2018, we worked on the first project we presented, at BCon 18. This project featured a lot of forest sets and dense vegetation backgrounds. Because we were on 2.79 at the time, we had to do all the scattering using the particle system, and I'm sure anyone who has tried knows it's a pain to achieve things in a stable and predictable way. So it was quite a challenge.
On the next show, big animals this time, you can see the sets are much simpler and lighter, but the challenge was more on the character side. We very often had a lot of animated characters on screen at once: heavy rigs rather than heavy vegetation. So the challenge was to keep our scenes responsive while loaded with these big characters. And the last one, the pirate show, was one of the last shows also made on Blender 2.79. This show features a lot of big sets with vegetation and objects, and also a large number of complex characters. So it was again a challenge to keep heavy scenes responsive. That's why we created a new approach for our different assets. So now we'll give you a more detailed presentation of this level of detail system. Yes. As Tanguy said, we were on Blender 2.79 at the time, and since productions were already running, we couldn't benefit from the improvements of the newer versions. So the switch to Blender 3 was the opportunity to rethink our pipeline and to bring in a notion of level of detail, a kind of system you can also find in video game production. To do that, we started by determining the needs we have at the different stages of production, in terms of definition and control per asset type. By asset type we mean things like backgrounds, props, characters, or the camera. For layout, what we need is a high level of control on the background, to be able to dress the scene properly. We also need control on the characters and the props, but not a high level of control, because we don't want to spend too much time on details; that's not the point during layout.
We also need a high level of control on the camera, to check the framing, and maybe the depth of field or the motion blur. On asset quality at this stage, we don't aim for the highest level: we don't want to spend too much time, just keep a good overview of what the final image will be. At animation, what we need is maximum fps and fluidity. We do want to raise the quality level of the assets, and we also increase the level of control we have on them, except for the background, because the background was set during layout: we don't need a high level of control on it anymore, so we can simply decrease its level. At rendering, the problem is quite different. All we need here is definition on the assets, but we don't need control over the elements, because everything was set in the earlier steps and approved; we don't need to touch the elements anymore. So that's what drove the creation of this level of detail system. We have levels of detail in modeling, shading and rigging. Here we group modeling and shading together because in the studio it's often the same artists working on both steps at the same time. For all asset types we start from a base of five levels of detail, but we can of course increase or decrease that number depending on the specifics of the project, and we don't have to use all of them during production. For the creation process, we rely on our push/pull system: artists have to push their work for the next steps. For example, an artist in modeling or shading can push their work, the work files with the different states of that work, to make it available to the next steps.
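The per-stage, per-asset-type needs described above map naturally onto a small lookup table. Here is a hedged sketch in Python, outside of any real pipeline code; the stage names, asset types and numeric LOD scale are all illustrative assumptions, not the studio's actual configuration format.

```python
# Hypothetical LOD lookup per (stage, asset type).
# Scale: 0 = lowest definition/control, 4 = full render quality.
DEFAULT_LODS = {
    ("layout",    "background"): 3,  # high control, to dress the scene properly
    ("layout",    "prop"):       1,  # rough placement only
    ("layout",    "character"):  1,
    ("layout",    "camera"):     4,  # framing, depth of field, motion blur
    ("animation", "background"): 0,  # already set during layout, untouched
    ("animation", "prop"):       2,
    ("animation", "character"):  3,  # fps and fluidity matter most here
    ("animation", "camera"):     4,
    ("render",    "background"): 4,  # full definition, no interactive control
    ("render",    "prop"):       4,
    ("render",    "character"):  4,
    ("render",    "camera"):     4,
}

def lod_for(stage, asset_type, overrides=None):
    """Return the LOD to load for an asset, allowing per-shot overrides."""
    key = (stage, asset_type)
    if overrides and key in overrides:
        return overrides[key]
    return DEFAULT_LODS[key]
```

A per-shot config file would then only need to store the overrides, since the defaults cover the common case.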
So a push is basically just a simple copy of the work file into the pipeline, but automatic processes will then derive the higher or lower versions of that work. For all the simple levels we can do it with a single click of a button in modeling and shading; but for the more specific versions in rigging, we do it manually, with our rigging team and our auto-rig. Here is a spreadsheet showing what we have at the different levels. Let's start with rigging. The lowest level is just the asset itself, and we go all the way up to the complete rig made with the auto-rig, and sometimes a specific version where we can add features like cloth or simulations. For modeling, we start with a plane for the lowest version of the asset, and go up to a version with the subdivisions collapsed, where we can add a displacement map. And for shading, it's the same idea: a flat version of the asset, then two different versions of the shaders, more or less resource-hungry, with maps adapted accordingly. Now let's go through a concrete example. Let me introduce Giorgio, a character from the series Pfffirates. We'll walk through the levels of detail. At the lowest stage, the asset is just linked as a reference into the scene, so we have no access to it. At the next level of detail we make the asset local in the scene, but we keep the data linked, so we can select the element and place it in the shot, but it's just the modeling. Then we get to what we call the base rig: just a small rig with global controllers, which lets us change the origin if we need to, and lets us set up constraints between this asset and other elements in the shot. Then we find the main rig, the rig we make with our auto-rig. And the last one is the special rig.
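The rigging ladder just described (linked reference, local placement, base rig, main rig, special rig) is essentially an ordered list that assets move up and down. A minimal sketch, with invented level names rather than the studio's actual naming convention:

```python
# Rigging levels from lightest to heaviest; names are illustrative.
RIG_LEVELS = ["reference", "local_placement", "base_rig", "main_rig", "special_rig"]

def change_rig_level(current, steps):
    """Move an asset up (positive steps) or down (negative steps) the
    rigging ladder, clamping at both ends."""
    i = RIG_LEVELS.index(current) + steps
    return RIG_LEVELS[max(0, min(i, len(RIG_LEVELS) - 1))]
```

For example, `change_rig_level("base_rig", 1)` yields `"main_rig"`, the upgrade an animator would request before keyframing.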
In this show, the character Giorgio has the ability to extend his limbs, so we had to make a separate rig, much slower, but we can choose to use it only in the couple of shots where we need it. For modeling, we find the main version representing the asset, and we go up to the version with the subdivisions collapsed, where we can add a displacement map. Again, on this show all the characters are made of inflatable plastic, so when we're close to the camera we need extra definition for the folds of the plastic. And for shading, the lowest level is just a picture of Giorgio used on a plane. Then we find a version of the asset without any image maps; then the version with very light maps, just for the viewport, for the animators; then the heavier version for rendering. And the last one is the version with the displacement maps. Let's have a quick look at what we find if we go to the props: the same configuration, with just a small change on the highest rig level. This time it's the same rig as the main rig, but with simulations added on top of it, like wind or an automatic delay. For the rest, it's exactly the same progression. A quick look at the backgrounds: this one is a set from the series Pfffirates. For backgrounds we capped the highest rigging level at high 1. If we need higher definition on one part of the set, we extract that element and make it a separate prop, so we don't burden the whole set's rig for just one element. But for the rest, it's exactly the same progression you can see on the other slides. Just one last slide, about Athleticus, our most realistic show. We find the same progression as on the cartoon series.
But with just a small difference at the second rig level, where we start rigging with muscles, thanks to the X-Muscle System. It's still a work in progress on our side, but we're very confident, and I think we'll use it on the next project. Speaking of Athleticus, all of this leads us to the shots, and to how we manage them. Here is the shot we are talking about: in the foreground we find some assets, in the mid-ground we find some animals, and there is a crowd in the background. We start with our shot builder, which works from a breakdown list that tells us which assets we will need, and from a config file that tells us which level of detail to merge into the shot for each of them. Basically, all the modeling and shading is merged at a very neutral, mid level, so it's the layout artist's role to adjust it: to add definition where needed for the foreground, and to decrease it for the background. The background is merged as a reference at the lowest rigging level of detail, and all the props and characters are merged with a very simple rig, just a couple of global controllers.
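Reduced to its logic, such a shot-builder pass reads the breakdown list for asset names and a config rule per asset type to pick the LOD to merge for each discipline. The names and config shape below are assumptions for illustration, not the studio's actual tool:

```python
def build_shot(breakdown, config):
    """Return a merge plan: one entry per asset with the LODs to load."""
    plan = []
    for asset in breakdown:
        # Fall back to a neutral default rule when no type-specific rule exists.
        rule = config.get(asset["type"], config["default"])
        plan.append({
            "name": asset["name"],
            "modeling": rule["modeling"],
            "shading": rule["shading"],
            "rigging": rule["rigging"],
        })
    return plan

# A background merged at the lowest rigging level as a reference; everything
# else merged at a neutral mid level with just a base rig.
breakdown = [
    {"name": "harbor_set", "type": "background"},
    {"name": "giorgio",    "type": "character"},
]
config = {
    "background": {"modeling": "mid", "shading": "mid", "rigging": "reference"},
    "default":    {"modeling": "mid", "shading": "mid", "rigging": "base_rig"},
}
plan = build_shot(breakdown, config)
```

The layout artist then only adjusts this neutral starting point up or down per asset.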
Let's stay a little longer on our asset manager. This is the main tool here: it allows us to manage all the assets we have in the shot. It also lets us enable or disable them if we need to, and we find some display toggles, to show or hide the meshes or the armatures. We also find, in the middle, dropdown lists that tell us which level of detail each asset is at in the shot; thanks to these dropdowns we can increase or decrease the level of detail. I'll show you an example right after. So here is a screenshot of the layout: you can see one asset has been increased in modeling and shading level of detail; the element in the middle is placed with just the base-rig version; and the crowd in the background has been decreased to the lowest rigging level, in the sense that we just put them in the shot and add some Alembic caches with idle animations, and that's enough. As shown on screen, this is our asset manager again. At animation, every animator has the ability to select a rig and increase it to the main rig to be able to do the animation, and all the previous keys set on the base rig are kept, thanks to Blender's action system. Here is the final shot with the animations. I won't go into detail on the animation side here, it would take too much time, maybe a topic for a future talk; but all the animation on the characters in the background is done with the low versions, and the assets in the foreground get higher modeling and rigging levels of detail. Then comes the rendering step. At this step we find our shot builder again, which, still thanks to a config file, merges the different LODs loaded from the layout. Usually we just need the render versions of the assets, so we merge what we call the high 1 modeling and high 1 shading, and for the rigging level of detail, what we call low 1: we just put the
meshes in the shots and we add the caches and animations. Sometimes we also add actions carrying metadata, like visibility or other animated properties, if we have them. So that's the global view of our pipeline and our LOD management. I'll now hand over to the person in charge of the technical supervision of a project called the Seven Bears. It's produced by Netflix and Folivari, and as you can see in this image, it has a lot of heavy vegetation backgrounds with dense forests. Unfortunately, I won't be able to show you any behind-the-scenes images of the assets today, but I will use placeholder geometry to illustrate my points. On this show we chose to do many things the procedural way: that includes set dressing, asset creation, and most of the shading. This has many advantages. First of all, RAM usage: our render farm is GPU-based, so of course keeping RAM usage as low as possible is a must for us, and procedural shading helps a lot with that. It also makes assets very easily reusable. You can see on the right side of the screen our asset library, the shader library part, which allows us to share shaders, node groups and other assets between projects, and between artists within the same project. We presented this tool back in 2018, at our first Blender Conference appearance. Thanks to the procedural workflow, we can also do corrections and fine-tuning on assets very easily. This is very important for us because we have a lot of back and forth with the production designer and the directors of the show, so it's important to be able to correct the color or the scale of a pattern in a shader very easily, without having to re-export many maps. We can also avoid UV unwrapping most of the time when doing procedural shading; I will tell you a bit more about that in a few slides. And we are happy to avoid texture handling in most cases: it's always a joy to have everything within the Blender file, within the shader; it makes things easier to manage. And also, with
procedural shaders, resolution issues are much easier to handle: it's always easier to add detail to a procedural shader than to have to increase the resolution of textures, which you don't necessarily still have, when you figure out that your shader lacks detail. Of course, every choice comes with its downside, and choosing to work procedurally meant that we had to train artists to do so, and hope that they were willing to take this path with us. In terms of challenges: basic math is unavoidable, and it's not to everybody's taste; we sometimes tend to forget that not everybody finds it as fun as I do. Doing things the procedural way also doesn't mean you don't have to optimize. We have to be careful not to push the level of detail further than needed: some nodes, when used extensively, like a noise texture with too much detail, can be a bit excessive in performance cost. And we also try to keep the node count to the bare minimum, to make the shader lighter on one side, and on the other much easier to understand for other artists, other projects, or even yourself when you come back to it after a few weeks. The marvelous node tree you can see on the right is brought to you by Simon Thommes; it was posted on Twitter last November, I think. Just a side note here: if you're willing to deepen your knowledge about procedural workflows, I highly recommend Simon's tutorials, available on the Blender Studio website. It's a great resource and we use it to train our artists; it's really comprehensive and well explained. One last challenge with the procedural workflow concerns shading mostly: Eevee sometimes struggles to compile heavy node trees and complex procedural shaders, and we want to make sure the animators don't have to wait fifteen minutes when opening their shots for the shaders to compile and the colors to appear in the viewport. This is why we built a lowest level of detail, as Axel
explained earlier, for every shader: a version that is compatible with viewport usage, where we just try to stay consistent in terms of color, roughness and metalness, and basically that's it. So now I will share a few tips, a few methods that we use; hopefully you will find one or two that fit your needs. Please do not hesitate to stop me at any point if something isn't clear or if you want further detail on one of those cases. First of all, about UV usage. When I said that we were avoiding unwrapping UVs, I didn't mean that we were avoiding the use of UV channels as a whole. Of course we heavily rely on them, because it's the safest way to make sure that the textures will follow the mesh deformations. One thing to keep in mind is that the procedural shader workflow in Blender, and procedural textures like Noise and Voronoi, behave very well when mapped on a vector field. So what we do on all of our assets is bake the position of every vertex into the UV channels; of course, we need two of them to store the three dimensions of the mesh. The good thing about this is that it doesn't require any technical knowledge: you can just use a very handy shortcut to remember, U for the unwrap menu, then Project From View. You just have to make sure you're in an orthographic view, top view, to project the UVs, and it will automatically store the position data in the UV channels. Then, when recombining them in the shader, this is what you get. The left column is a simple Voronoi shader with the vector field mapped on the Position input, and the right side is the same with the custom XYZ UVs. As you can see at the bottom, the texture maps in basically the exact same way, but with the benefit of following the mesh deformation, which is really crucial when using an animation cache on a mesh that hasn't got any rig. And of course this method works just as well no matter the complexity of the shader you map with it. So then, a bit more about sharing our
procedural shaders. Of course, node groups are used in this case, and we think a lot about what has to be kept inside of the node group to make things simpler, and what has to be kept outside of it. The goal here is to find the right balance between reducing the complexity of usage and not losing too much flexibility. What we ended up doing for most of our shaders: in terms of inputs, we keep the vector information outside of the node group, to make sure artists can decide how they want to map the shader on the surface; we also expose, as inputs, the basic color information, and sometimes a few maps to mask a few things. In terms of outputs, we always keep the Principled shader outside of the node group. This makes it much easier to blend between two premade shaders: for example, if you have a metal shader and a rust shader and you want to combine them with a mask, it's always much easier when the two shaders are visible outside of their node groups, and you've got much more freedom. So we expose, most of the time, three pieces of information: the albedo, the base color of the shader; the roughness, which can still be tweaked outside of the node group with a Map Range or a Color Ramp; and the height. We keep the Bump node outside of node groups because we want to be able to adjust the strength of the bump according to the scale of the asset, or to how the texture is mapped on the mesh. So now, a bit about set dressing and procedural modeling. In this part I will talk mostly about Geometry Nodes. As you already know, I'm working on a project that involves a lot of vegetation and forest backgrounds, so Geometry Nodes was a huge help when doing set dressing and creating those environments. Speaking about levels of detail, Geometry Nodes helped us reduce RAM usage and increase performance, both at render time and in the viewport, in a few different ways. The first one: when scattering, we can display only the needed instances according to what we
are working on. So for instance, the top image is the viewport at animation stage, and you can see that the grass and flowers are only displayed near the path, close to where the action takes place; but at render time we display instances in the complete field of view of the camera. Geometry Nodes also allows us to work with the lowest required level of detail created earlier. You've already seen that with the Athleticus example with Axel, and here you can see that grass, flowers, and especially trees are displayed in a really low-poly version. We also tend to keep everything as instances as far as possible. This has two benefits: the first one is keeping the object count in the scene quite low, because you don't want 200 flower objects, of course; and the other benefit is that it always helps with RAM usage. So first of all, a bit more about displaying instances only where they are needed. This often relies on vertex groups: when building these sets, we heavily rely on vertex groups to control where things should be. Vertex groups have a lot of benefits: they're very light, really easy to manipulate, and a great thing about them is that you can also create them with Geometry Nodes, in a procedural way. That helps us a lot when we have to create all the different parts of the forest, with different shapes of path. Here, on this GIF, you can see me just adjusting the distance from the path within which grass and flowers are displayed. Of course, we want this to be applied only in the viewport and not in the render, so two nodes that were used extensively on this project are the Switch node, in combination with the Is Viewport node. That basically allows us to have different inputs, different complete systems in one node tree, to manage how things are done in the viewport and, on the other hand, how they are done at render time. In this specific example, the Switch node controls the density of grass and flowers in the
example you've just seen. We also use the Switch node with the Is Viewport input to switch between different levels of detail of assets. This can be to get a difference between viewport and render, but it's also very useful when creating the different level of detail versions of the sets themselves: when creating the set dressing of a forest, I want to create a version that would be suitable for layout and animation, and another version that would be suitable for rendering; but of course I want the trees to be in the exact same place, and the flowers and bushes to be really consistent in terms of placement. Geometry Nodes makes all of this really easy and pleasant to work with. In this specific example, in the node tree on the left, we've just got a switch between different levels of detail for the grass, but the same system is used for the flowers and the trees. One side note here about the lowest level of the trees: Geometry Nodes also helps us create those low versions, because decimating the leaves of a tree in an efficient way is always a struggle. With Geometry Nodes we can easily convert all the leaves into points, with a Merge by Distance node for example, and then decide to keep only 2% of those points and scatter planes with the leaf texture on them. So in just a few nodes you have a setup that allows you to create, really quickly, low-polygon versions of all your trees while maintaining the global shape and the proportions of the foliage. So, when doing scattering with Geometry Nodes, the main concept is always duplicating instances on points. The question that remains is: how do you generate those points? Do you model them, or do it procedurally? Of course, it depends on the level of control you want and the speed at which you want things to be done. At Cube we use different methods according to our needs, according to the importance of what we scatter. So I will show you how we do this in a few different ways.
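The leaf-decimation trick described above (collapse nearby leaf points, keep a small fraction, then instance textured planes on the survivors) can be sketched in plain Python. This is only a stand-in for the Geometry Nodes setup, with a greedy merge and a fixed seed; nothing here is the actual production network:

```python
import math
import random

def merge_by_distance(points, min_dist):
    """Greedy stand-in for a Merge by Distance node: drop any point closer
    than min_dist to a point already kept."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

def keep_fraction(points, ratio, seed=0):
    """Keep only a fraction of the points (e.g. 0.02 for 2%), the step that
    precedes instancing leaf planes on what remains."""
    rng = random.Random(seed)
    count = max(1, round(len(points) * ratio))
    return rng.sample(points, count)

# Leaves as a dense line of points: collapse near-duplicates, then keep 2%.
leaves = [(x * 0.03, 0.0, 0.0) for x in range(400)]
low_poly = keep_fraction(merge_by_distance(leaves, 0.1), 0.02)
```

In the real node tree, the surviving points would then feed an Instance on Points node with a leaf-textured plane.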
So the most obvious method is just using procedurally generated points, with a Distribute Points on Faces node for example, and in combination with vertex groups you can still get a lot of control over how things look. Vertex groups, combined with noise textures in Geometry Nodes, help bring a bit more of an organic feel to this, and it's a very simple way to get a first pass of your grass and flowers; for all those small elements that come in huge numbers, doing things fully procedurally is often the best scenario. Then there are elements whose placement we want to control specifically, while still keeping them as instances, and still with procedural variation among them, in scale and in switching from one model to another. For those, we create a mesh with only vertices, no faces, no polygons, and we just use an Instance on Points node on it, so we get one instance per vertex. What you see here is just me, in Edit Mode, duplicating vertices and placing them to get new trees, with a random variation on any parameter I want. Here, Geometry Nodes also helps us make sure that the trees properly stick to the ground, and in some cases that the duplicated assets also orient themselves according to the normal of the ground surface. And in a few other cases we can use a combination of both of those methods: to create those unique bunches of flowers, we use a modeled point cloud, just a mesh with only vertices, so we can precisely place the bunches of flowers; but on them we scatter procedurally generated small point clouds, to get variation in how many flowers we get per bunch. If we want even more control, because vertices in Blender don't store rotation and scale information, we can use a mesh with floating single polygons: just little squares floating in a mesh, and we duplicate one instance per polygon. The node tree for this is a bit more complicated, but on the other hand it
allows us to control the rotation and the scale, on top of the placement, of each object. So that's it for a few tips about the procedural workflow; please feel free to ask anything if you need more information on that. Thank you very much. So I guess we have a few minutes for Q&A if you want any information on this. Yes... yes, it's a French comic book by Émile Bravo, which is adapted by Folivari and Netflix, and we're in charge of creating the images. Yes, yes, of course... yes, that's right. So we have two possibilities. Either we rely on the automatic process: with a configuration file we say that for the low 2 version we want a Decimate modifier applied on the mesh, with such and such parameters. Or we can decide, per asset, with a custom collection in the working file, that for this specific asset we don't want to use decimation but another way of creating a low-polygon version. This is the case for the trees, for example. In our working file we duplicate the whole asset and rename the main collection with "low 2" in it, so that when pushing the asset and making it available for the other steps, the script will detect this collection, avoid the automatic process, and just take what we've done manually. So it's a way to override the process: for most of the props we can rely on the automatic process, but for those specific use cases we can do things the way we want. And this works for every level of detail: I can create a custom high 2 level of detail, or a custom low 1, or anything. Yes. So first of all, in the configuration files we can choose the default combination; let's say for all my high 1 modeling I want my high 1 shading. But when doing those custom LODs in the working file, we can assign the shaders we want from another level of detail, and that will be kept. Another thing to be said is that in a layout or animation shot, the artist can switch levels of detail independently for modeling, rigging and
shading. So of course it's not compatible in every combination, because the rigging depends on the specific topology, and sometimes the shaders do too; but as long as the shaders are compatible with the other mesh, or the rig is compatible with the mesh, the artist can choose to increase the level of detail of the shaders, for example if they want to see the actual render shader in the animation scene for a specific object. Yes... yes. The thing is, between the different levels of detail of the rig, the existing controllers have the same names from one level to another, so that when upgrading the level of the rig, you keep the animation you've done with the lower one. So if, at the layout stage, somebody worked with the base rig, with only rotation, scale and placement controllers, when you switch to the higher version of the rig, you keep this animation. If you lower the resolution of the rig, you lose what's no longer available in terms of controls. This works because the objects share the same action data: they have the same names, and the bones are named in the same way. Yes. Basically, when doing set dressing for a forest, for instance, the procedural part happens at the asset creation level, not in the shot. The asset has a lot of things procedurally generated, but when the shot builder script imports all the assets, it doesn't matter whether the grass is procedurally generated or not, so the layout artist or the animator who works in the shot doesn't necessarily have the information of how things were built. The only difference is that, of course, if flowers are scattered in a procedural way, the animator or the layout artist won't be able to move individual flowers, whereas if tables and chairs are put manually in a set, they will be able to. No, no, we don't have this kind of information displayed in the asset manager. Yes, yes, we have a lot of different applications to manage this database. In terms of production tracking we use Kitsu, by CGWire, and we also have a tool
called Emma Watson, which allows us to choose which file we want to open, with sorting and so on. So yes, we've got a few different tools to manage this database. The one we're showing here is the one used by layout artists and animators within a shot, but it only handles what's inside that shot, once the scene has been created. I don't know if that answers your question. Any other questions? Great, thank you guys.