All right, hello everyone, and welcome to this talk about the Blender Studio pipeline. Some quick words about me before we start, so you know who's talking here: I'm Paul. I'm currently finishing my diploma as a technical director at the Animationsinstitut in Germany, and in between I worked here in Amsterdam at the Blender Studio for a year as a pipeline TD. In the next 20 minutes I'm going to give you some insights into how the pipeline at the Blender Studio works. What might set this pipeline apart from others is that Blender is used for almost everything, and that is actually quite an unusual scenario: in a typical pipeline you often work with many different DCCs, and a big part of the pipeline's job is to manage the data flow between them. At the Blender Studio we don't have that requirement, and what I will show you in this talk is how we can take advantage of that: how Blender works at its core, how it manages data, and how we can design our pipeline around it. I'm going to start with the editorial workflow at the Blender Studio. Blender has a built-in editorial environment called the Video Sequence Editor, which is very unique among DCCs and opens up very interesting opportunities. We will then hop over to our review workflow, where I'm going to show you a fun little add-on that has helped us judge sequences as a whole. After that, I'm going to introduce you to Blender application templates by showing you an example that turns Blender into a cross-platform media viewer with some basic review tools. Last but not least, we will have a look at the new iteration of the Blender Studio's asset pipeline, which is really designed around Blender, and I will explain how the system enables artists to work on the same asset at the same time.
So, as I already mentioned, Blender has a built-in editorial environment called the Video Sequence Editor. I'm sure everyone in this room knows and loves it. It has a very wide range of editing tools, it was used to edit all past Blender open movies, and, like all other parts of Blender, the Video Sequence Editor has an extensive Python API, so this highly scriptable editorial environment can be integrated very well into a pipeline. For Sprite Fright, the Blender Studio also switched from its in-house production management suite, Attract, to Kitsu. For the Blender Studio, Kitsu is really the hub in which all information about the production can be found, and the shot list on Kitsu is basically the edit in spreadsheet format. It's very important that Kitsu is always up to date and synchronized with the edit we do in Blender; otherwise you end up with mismatching data, which opens up space for errors further down the pipeline. But imagine our editor having to adjust frame ranges on Kitsu by hand every time a shot gets trimmed or shifted.
What we are essentially looking for is a system that serves as a bridge between the editorial department and Kitsu to exchange data, and that's what we developed the Blender Kitsu add-on for. With the add-on you can log in to Kitsu from within Blender, and you can then link a movie strip to a shot on Kitsu. Once a movie strip is linked to a Kitsu shot, we can exchange data between the two, for example by updating frame ranges, thumbnails, shot names and other data, without ever having to leave Blender. The great thing is that the whole system is selection sensitive, so you can update the frame ranges and thumbnails of all shots in your edit with a single click. One important aspect to really get right is frame ranges: you don't want animators to animate less than they should, or worse, more than they should, and everyone who has ever transferred frame ranges by hand knows just how quickly you can mix up numbers. This add-on helped a great deal to automate that process. When artists then worked in shot files, we had tools in place that would validate the current frame range against Kitsu; you get a warning and can update your frame range. So in this case we took advantage of Blender's sequence editor and made managing shot metadata a whole lot easier by working directly with the raw data of the edit. Another fun little add-on I want to share is the contact sheet add-on.
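As an aside, the core of that frame-range validation can be sketched in plain Python. This is not the actual Blender Kitsu add-on code; the dict fields (`linked_shot`, `frame_in`, and so on) are invented for illustration, standing in for Blender movie strips and Kitsu shot records:

```python
# Hypothetical sketch: compare the edit's strips against Kitsu shot records
# and collect the shots whose frame ranges have gone stale.

def find_frame_range_updates(strips, kitsu_shots):
    """Return the shots whose Kitsu frame range no longer matches the edit."""
    updates = []
    shots_by_name = {shot["name"]: shot for shot in kitsu_shots}
    for strip in strips:
        shot = shots_by_name.get(strip["linked_shot"])
        if shot is None:
            continue  # strip is not linked to a Kitsu shot
        if (shot["frame_in"], shot["frame_out"]) != (strip["frame_start"], strip["frame_end"]):
            updates.append({
                "name": shot["name"],
                "frame_in": strip["frame_start"],
                "frame_out": strip["frame_end"],
            })
    return updates

strips = [
    {"linked_shot": "010_0010", "frame_start": 101, "frame_end": 148},
    {"linked_shot": "010_0020", "frame_start": 149, "frame_end": 200},
]
kitsu_shots = [
    {"name": "010_0010", "frame_in": 101, "frame_out": 140},  # stale: shot was trimmed
    {"name": "010_0020", "frame_in": 149, "frame_out": 200},  # already in sync
]
print(find_frame_range_updates(strips, kitsu_shots))
# → [{'name': '010_0010', 'frame_in': 101, 'frame_out': 148}]
```

In the real add-on this kind of check runs against the strips selected in the Video Sequence Editor and pushes the corrected ranges back to Kitsu.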
This project came to life because we wanted a quick way to get an overview of a whole sequence. It's especially useful for the lighting and comp departments to validate whether the shots in a sequence look consistent and harmonious. With another add-on you can quickly load all shots of a whole sequence, and the contact sheet add-on then takes a continuous sequence of the topmost movie strips and assembles them in a grid view; you can also manually select the movie strips you want in your contact sheet. All it's really doing is using the Video Sequence Editor's Python API and a little math to scale and transform the movie strips, so this is just another small example of what you can do with the Video Sequence Editor and its Python API.

All right, the next project is a little bit more than just an add-on: the media viewer is actually an application template that turns Blender into a cross-platform video, image and text viewer. With an application template you can ship your own keymaps, startup files and user preferences, and you can even define template-specific add-ons and modify parts of the Blender user interface, all in one bundle. The great thing is that you can easily switch between these templates without overriding your own personal configuration or requiring a separate Blender installation. That means people can actually build their own applications on top of Blender that can be easily distributed. By the way, as you maybe heard in Ton's keynote, application templates are among the strategic development targets of the Blender project this year, and the Blender media viewer was actually an attempt to see how far we can push this system. It showed some shortcomings, and hopefully at the end of the year we will get some exciting updates. So let's have a look at what the media viewer can do. As I said, the media viewer offers a player that can seamlessly browse media files with the arrow keys on your keyboard, no matter whether they are videos, images or image sequences. The application template reduces the Blender UI to a bare minimum by removing all elements that are not needed, and it ships with its own keymap that adds useful shortcuts to make it usable without a mouse. In the background it dynamically switches between Blender's Image Editor, Sequence Editor and Text Editor depending on what type of file was selected. It even has some basic review tools, so you can annotate your media and export the annotations with a single click. The really exciting point here is that you can basically use Blender to build your own applications: you can make use of all the open infrastructure of Blender that was built over the years, like its ability to read and display all kinds of codecs or its multi-layer EXR support, and just wrap it in an application template that serves a specific purpose for you.

All right, the last section will be about the asset pipeline of the Blender Studio. The foundation of the asset pipeline was established during the Settlers project, and it was further iterated on during the production of Sprite Fright. The asset pipeline is really designed around two core concepts: first, we want to take advantage of the way Blender stores data, and second, we want to enable artists to work in parallel without having to wait for the output of another task. The second concept is kind of the result of the first, so let's jump a bit more into detail. In order to understand how the asset pipeline is built, it's worth taking a quick look at how a blend file is actually structured. You can think of a blend file a little like a database.
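That database analogy can be sketched in a few lines of plain Python. To be clear, these are not Blender's actual internal structures; the class and field names here are invented purely to illustrate data blocks referencing each other, with some of them linked in from another blend file:

```python
# Toy model of the "blend file as database" idea: data blocks that reference
# each other, where a block can also be linked in from a different file.

class DataBlock:
    def __init__(self, name, kind, refs=(), library=None):
        self.name = name          # e.g. "Cube"
        self.kind = kind          # e.g. "Object", "Mesh", "Material"
        self.refs = list(refs)    # data blocks this one points to
        self.library = library    # None = local, else the source .blend path

mesh = DataBlock("Cube", "Mesh")
mat = DataBlock("Metal", "Material", library="//assets/materials.blend")  # linked in
obj = DataBlock("Cube", "Object", refs=[mesh, mat])

def dependencies(block):
    """Walk the reference graph, which gives the hierarchy described above."""
    for ref in block.refs:
        yield ref
        yield from dependencies(ref)

print([(b.kind, b.library) for b in dependencies(obj)])
# → [('Mesh', None), ('Material', '//assets/materials.blend')]
```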
A blend file is composed of data blocks, each storing a different kind of data, and these data blocks can reference each other, creating a sort of hierarchy. Those data blocks can also be appended or linked in from other blend files, as I'm sure most of you know, and in the asset pipeline we take advantage of this principle. So let's talk about task layers. In order to get our asset pipeline running, we need to create a configuration file in which we define how an asset is assembled, and we do that through these task layers. Each task layer owns a domain of data, and all of them together describe the entire data of an asset. In this example here, the rigging task layer might own collections, objects, their relations and object data; the shading task layer might own material assignments, UVs and vertex colors; and the grooming task layer owns particle systems, or hair. What you can see here is that these task layers can depend on each other and overlap. In this case the shading task layer actually needs the data of the rigging task layer, because without objects there can be no material assignments, so there is a dependency between them. A second important concept is that these task layers can be pulled in from different sources by following a set of merge instructions that describe exactly how all these layers are puzzled together. Last but not least, task layers can be locked or live. A locked task layer cannot be changed anymore, meaning the data it owns stays the same, while a live task layer can still be updated. With the last three slides in mind, let's have a look at what an asset directory might actually look like, and I will try to break it down for you. We have a couple of asset task files here at the top and some asset publish files, or versions, at the bottom. I might use the terms version and publish interchangeably, but they refer to the same thing.
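The task layer configuration described above can be sketched roughly like this. The real asset pipeline's configuration file differs in its details; the class names and `owns` tuples here are assumptions chosen to match the rigging/shading/grooming example from the slides:

```python
# Hypothetical per-project task layer configuration: ordered layers that
# each own a domain of the asset's data.

class TaskLayer:
    order = 0          # merge order; lower layers are applied first
    owns = ()          # the data domain this layer is responsible for

class RiggingTaskLayer(TaskLayer):
    order = 0
    owns = ("collections", "objects", "object_data")

class ShadingTaskLayer(TaskLayer):
    # Depends on rigging: without objects there are no material assignments.
    order = 1
    owns = ("material_assignments", "uvs", "vertex_colors")

class GroomingTaskLayer(TaskLayer):
    order = 2
    owns = ("particle_systems",)

# The merge always processes layers in their defined order.
TASK_LAYERS = sorted(
    [ShadingTaskLayer, RiggingTaskLayer, GroomingTaskLayer],
    key=lambda layer: layer.order,
)
print([layer.__name__ for layer in TASK_LAYERS])
# → ['RiggingTaskLayer', 'ShadingTaskLayer', 'GroomingTaskLayer']
```

The ordering is what encodes the dependency: shading is applied after rigging because it needs rigging's objects to exist first.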
So far, so good, and actually pretty common. In each file you have a colored representation of the task layer definitions we made for this project: green is rigging, red is grooming and violet is shading. Task files can publish the selected task layers to asset versions, but notice that a change will actually be published to all versions in which the selected task layers are in the live state. In this example, if the shading task layer gets published, it will be published to version two and version three, because in both of these versions the shading task layer is still live. That's a super important concept to understand, and it also differentiates the system from classical versioning: a published version can still change, and when we publish a task layer we might alter multiple versions at the same time. So if certain parts of asset versions are live and therefore still get updated, when do we actually create a new version? We create new versions only on breaking changes, and this enables the following scenario. Let's say the rigging department wants to push a change that would break animation in a number of shots. They would create a new asset version, and during the creation of that new version, the rigging task layers of all previous asset publishes get locked automatically. This creates backward compatibility: the animation of all the shots that still reference the previous asset versions is not affected, because the rigging task layer of those versions got locked and therefore didn't receive the change. They still have the legacy rig, so to say, but the shading task layer is still live in those previous asset versions, which means they will still receive regular shading updates.
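The live/locked publish rule just described can be condensed into a short sketch. The data structures below are invented for illustration (the real pipeline tracks this state in metadata files, not in-memory dicts):

```python
# Sketch of the two rules from the talk: a published change lands in every
# version where that task layer is still live, and a breaking change locks
# the layer in all previous versions before adding a new one.

def publish(versions, layer, change):
    """Apply a change to every version where `layer` is live."""
    touched = []
    for version in versions:
        if version["live"].get(layer, False):
            version["data"][layer] = change
            touched.append(version["name"])
    return touched

def new_version(versions, breaking_layer):
    """Breaking change: lock the layer in all previous versions, add a new one."""
    for version in versions:
        version["live"][breaking_layer] = False
    versions.append({
        "name": f"v{len(versions) + 1:03d}",
        "live": {"rigging": True, "shading": True},
        "data": {},
    })

versions = [
    {"name": "v001", "live": {"rigging": False, "shading": True}, "data": {}},
    {"name": "v002", "live": {"rigging": True, "shading": True}, "data": {}},
]
print(publish(versions, "shading", "new shader"))   # → ['v001', 'v002']
new_version(versions, "rigging")                    # rig break: v003 appears
print(publish(versions, "rigging", "new rig"))      # → ['v003']
```

Note how the shading update reached two publishes at once, while the rigging change after the break only reached the new version; the older versions keep the legacy rig.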
So we can actually render a shot that references version 3 with the latest shading of the asset, and we don't have to worry about our animation being broken when we open that shot three months later. A task file can also push multiple task layers at the same time; however, it makes sense to create one task file per task layer, so artists can work in parallel as much as possible. One important aspect in this scenario is that artists need to keep the other task layers, the ones they are not working on, up to date. The shading task, for instance, is responsible for material assignments, shaders, UVs and other things, but whenever the rigging task publishes new rigging changes, the shading task should make sure to get those changes as well; otherwise it might miss an object, or shade an object that does not even exist anymore. For this reason it's important to perform a pull before a publish, and when artists pull the other task layers, they always pull from the latest asset publish. This is all automated by the pipeline, which makes sure that the task files at the top are always as close as possible to the latest asset publish. All right, that was a lot to chew on, so let's just watch a little video of how this actually looks in practice. Here on the left side we have our artist in a task file. He makes a shading change and initiates the publish, and in this case only version four will be affected, because version four is the only publish that has a live shading task layer. He then applies the change. On the right side we have our Einar character; we just reload the file, and you can see the shading change got applied. It looks very simple and works as you would expect, but there's actually a whole lot going on in the background, this whole merging process, and in a few slides we will have a more detailed look at that.
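The pull-before-publish rule mentioned above boils down to something like this. Again, a minimal sketch with invented names, not the real pipeline code: before publishing, a task file refreshes every task layer it does not own from the latest asset publish, so its local copies can't go stale:

```python
# Sketch: a task file pulls all task layers it does not own from the
# latest publish before it publishes its own layer.

def pull(task_file, latest_publish):
    """Update all task layers the task file does not own."""
    for layer, data in latest_publish["layers"].items():
        if layer not in task_file["owns"]:
            task_file["layers"][layer] = data

shading_task = {
    "owns": {"shading"},
    "layers": {"rigging": "rig v1", "shading": "wip shaders"},
}
latest_publish = {"layers": {"rigging": "rig v2", "shading": "shaders v5"}}

pull(shading_task, latest_publish)
print(shading_task["layers"])
# → {'rigging': 'rig v2', 'shading': 'wip shaders'}
```

The shading artist's own work-in-progress layer is untouched, but the rig they shade against is now the current one, so they won't shade objects that no longer exist.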
All right, we touched on a lot of core concepts and theory about the asset pipeline, so in the next five minutes we will see how all of this is actually implemented. It's going to get a little bit technical at the end, but I hope you can handle it. The asset pipeline relies on metadata to establish all kinds of concepts: states, versioning and other things. This metadata is saved in XML format next to the file the data actually belongs to. That way we get a clear, self-contained system which does not rely on any external online database to track or sync changes, and an XML file is also human readable and easily editable. With that, we end up with an asset directory that looks something like this: we have a standard working and publish directory, and each file has its corresponding metadata file.

The last part of the asset pipeline is the asset builder, which is the component that contains all the logic of how to merge these task layers together. The asset builder gets as input a list of task layers that should be pulled or published. In this case we want to publish the rigging task layer, so the goal is to create an asset that's composed of the rigging task layer from A but every other task layer from B. If you remember from earlier, task layers depend on each other, so just switching out the rigging task layer of B with the one from A doesn't work; we have to reapply all the other task layers in a following step, and this is exactly what happens in the publish file during the merge process. We import our asset from the task file, so we now have two asset containers: the one from the task and the one that's already in the publish file. Now it's time to decide which one we are going to take as the base. In this case we take the task as the base, because we are publishing the bottom-most task layer, the green one with order 0. If the bottom-most task layer were not among the task layers being pushed, we would use the publish as the base. Deciding on a base literally means duplicating that asset container, so we duplicate it, which gives us our target asset container here in the middle, and we then just go ahead and merge all the task layers, in order, from their respective sources.

As a final slide to wrap it all up, here you can see a picture of what these merge instructions for task layers look like. This is also the configuration file you can define per project, which means you can set up your own system of task layers completely freely; of course, all together these task layers have to make sense, because at any publish or pull, every task layer's transfer data function is executed in order. All a pipeline TD has to do here is implement this transfer data function for each task layer. You don't have to worry about all the logic from the previous slide, like when to transfer from which asset container to the target; it's all abstracted away and happens automatically in the background. When you implement the transfer data function, you work with a direct mapping from source to target, and for that you have some very handy arguments prepared for you, like the transfer mapping, which is basically a big dictionary that gives you a mapping between the source and target objects, materials and collections. With that already set up, you can basically just loop through those and perform your own logic.

All right, you made it, we've reached the end of this presentation. I know this was probably really a lot to take in, but I tried to break it down as digestibly as possible. The great thing is that all of this is open source, so you can download and try out all the tools I showed in this presentation, and way more, at this link here. And yeah, thanks for your attention, everyone!