Hello, everybody. We're going to start the talk now. So welcome, we're going to present our talk, Developing Grease Pencil at The SPA Studios. Some of you may know us, some of you may not. So, about us: my name is Falk, I'm a software developer from Germany. My Blender journey began as a developer in June 2020, when I did a Google Summer of Code on Grease Pencil and implemented curve editing, where you could manipulate strokes using Bézier curves. Soon after, a couple of months later, I joined the Blender Foundation through a developer grant, joined the triaging and bug-fixing team, and later also worked on the Grease Pencil module itself. And then in June 2021, I joined The SPA Studios.

And I'm Yann, a software developer from France. I have been working in the industry for around ten years now, in both animation and VFX studios, and around three years ago I started implementing Blender in production pipelines. So my path and Falk's collided when we both joined The SPA Studios in 2021. The SPA Studios is a 2D-centric animation studio, also known for embracing 3D techniques when it helps with supporting the story and achieving creative goals. They demonstrated that on Klaus, a movie that was fully 2D animated, but also shaded and lit to achieve a stylized 3D rendering look. And it looks like this. Right. So Blender was not part of the Klaus pipeline, but some complex shots already required the use of 3D to help with layout and action, as shown in this short clip. Obviously, not all shots on Klaus required such a setup, but building them was a bit complicated, because it meant moving between software packages not necessarily designed to play well together. And with the studio's ambition to push cinematography further for the upcoming project, Amber, the need to combine 2D and 3D more seamlessly became a key problem for the studio.
And whether for storyboarding, animation, previs, or layout, this is the point where the studio decided to look into Blender and Grease Pencil. A few supervisors started playing with the software, looking at its strengths, but also at what was missing. And this is where we tried to help them.

All right, so here's an overview of our talk. We're going to go through basically three examples of where we improved Blender, and especially Grease Pencil, and then we'll also have a section on how we develop Blender, where we'll talk about a big topic, which is contributions, obviously. In drawing and animation tools, I'm going to be talking mostly about Grease Pencil. In performance: in early tests we already had performance issues with Grease Pencil, so we had to fix those, and in that section we're going to talk about how we approached and fixed them. And then Yann is going to expand a bit and talk about storyboarding and layout, so not only Grease Pencil, but going a bit outwards. Yeah, let's get started.

So, drawing and animation tools. Very early on, when Blender was being considered, we had an animation supervisor try out Blender and look at the limitations, you know, can we do animation in Blender? We got a long list of feedback on what needed to be improved, sort of a minimum requirement for them to consider Blender for animation. One of the first things he noticed was that he was constantly switching between draw mode and edit mode. Drawing a stroke and then trying to quickly adjust it, just moving it slightly, rotating it slightly, or scaling it, required going to edit mode and finding a tool or a shortcut, and that was breaking his workflow. So we tried to come up with a solution, which we call the quick edit. And for all of the changes that I'm going to talk about: these are core Blender changes, they're not add-ons. Just keep that in mind. So I first looked at modal operators.
Then I looked at gizmos, which seemed like a better fit. For those of you who don't know, Blender has a pretty good gizmo API where you can build your own custom interactive tools. So I built on that, and it quickly became clear that that was the best option. The quick edit is a tool in draw mode, and, well, I'm just going to show you. So here you can see me use it. No? It works? Okay, yeah, it's working.

On the left side you can see I now have a lasso selection tool, and when you select a stroke, it pops up the gizmo. It's exactly what you'd expect from other software; it has all the basic things. Here I'm showing Ctrl-lasso selecting to deselect. When you grab the box, it moves the selection. And here I'm showing that when you have an overlapping stroke, you can Alt-click to select through the box, because normally a plain left click would grab it. It has all the basic features: scaling from a corner, scaling from a side, rotating. You can also move the pivot somewhere else and rotate around that. You can reset the box just by clicking a tool button. Then we also have proportional scaling, which I'm showing here, and scaling from the center, all the common things. It also has skew, so you can grab a side and skew your selection; that was also requested by the animators. And here I'm showing that you also have options in the toolbar to mirror the selection, so you can quickly mirror from the bounding box, plus shortcuts for deleting, copying, things like that, also on your selection.

Now, one thing that's interesting about this is that the tool is inherently 2D, right? But the strokes are fully 3D. So how do you deal with that?
And this is what the next video shows: I have a selection here, and what we do is always use the drawing plane that you're currently drawing on, and stick the gizmo to that plane, because it's really designed for that quick-edit workflow where you're just trying to edit your selection. So you can see that the smiley always sticks to that canvas.

All right, another piece of feedback we got was about tools for animation. One example was that people wanted to draw in-betweens, and one really common workflow from the old days, when you would animate on paper, is that you would take your two keyframes, shift them in place, and then draw your in-between on top using a light table. That workflow is known as shift and trace, and there wasn't really anything in Blender that could do it. But because we already had this custom-built gizmo that could do these sorts of transformations, what we decided to do is add a sort of non-destructive transformation matrix to keyframes, so that you can adjust your keys as you go. And since it's non-destructive, you can just toggle it on and off to see those transformations. So this is another tool that we built, called shift and trace. Obviously we can't show anything from the project the guys are working on, so here's my simple example of drawing an in-between between a rectangle and a triangle. That's the tool: you select it, then Alt-left-click to select your keyframe, and it pops up the gizmo. Again, it's the same gizmo that we already built for the quick edit, and it allows you to transform your stroke. Here I'm just showing translation, but you can also scale, rotate, anything you want. And then you just draw your in-between, which is, I don't know, some shape in between. And again, since this is non-destructive, it's just applied after the modifiers on the evaluated object, and you can toggle it on and off.
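To make the idea concrete, here is a minimal sketch of such a non-destructive keyframe transform. All names (`Keyframe`, `use_transform`, and so on) are illustrative, not the actual Blender implementation; the point is only that the original points are never modified, and the matrix is applied last on the evaluated data.

```python
# Sketch of a non-destructive keyframe transform, as used by shift and
# trace: the original stroke data stays untouched, and a toggleable
# transform is applied on the evaluated points.
import math

class Keyframe:
    def __init__(self, points):
        self.points = points          # original stroke points, never modified
        self.transform = None         # optional (angle, tx, ty); None = identity
        self.use_transform = True     # toggle the shift on/off without losing it

    def set_transform(self, angle, tx, ty):
        self.transform = (angle, tx, ty)

    def reset_transform(self):
        self.transform = None

    def evaluated_points(self):
        """Points as shown in the viewport: transform applied last,
        after (hypothetical) modifiers, original data untouched."""
        if self.transform is None or not self.use_transform:
            return list(self.points)
        angle, tx, ty = self.transform
        c, s = math.cos(angle), math.sin(angle)
        return [(c * x - s * y + tx, s * x + c * y + ty)
                for x, y in self.points]

key = Keyframe([(0.0, 0.0), (1.0, 0.0)])
key.set_transform(0.0, 2.0, 3.0)   # shift the key so you can trace over it
print(key.evaluated_points())      # shifted copy for display
key.use_transform = False          # toggle the shift off ...
print(key.evaluated_points())      # ... and the original points are intact
```

Storing the transform separately is what makes the toggle and the reset trivial: both just change how the evaluated points are computed.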
So here I'm toggling it off and adjusting it, since it's a bit too high, maybe. And here you can see the in-between. The tool also allows you to reset the transformation, so you can start again from the original position, and you can also use the current frame, so if you want to use the timeline to select the key or the frame you want to shift, you can do that as well.

All right. Now, another thing animators asked for was improving the drawing feel, which is never a great thing to hear as a developer. They said things like, hey, we want to draw like we draw on paper, and have that level of responsiveness. So we looked at the current Grease Pencil drawing operator and found a couple of things we could improve. When we dug deeper, we found that Blender already has a pretty good basic painting operator built in, used by vertex paint for meshes, painting on images, and things like that. So we thought it would be better to write a new drawing operator starting from that same base, since it's already there, and build it in C++ in sort of a new way of doing things.

It has a few things I'd like to talk about. We have spacing modes, which are about how the points are spaced along your stroke. Default basically just uses the raw tablet or mouse inputs you get. Fixed gives you spacing relative to the radius of your stroke: so, say, 10 percent of the radius is the distance between each point. Fixed is really useful if you use dot materials.
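A toy version of that fixed spacing mode might look like this: resample the incoming samples so consecutive points sit at a fixed arc-length distance, expressed as a fraction of the stroke radius. The function name and the 10 percent default are illustrative, not Blender's actual parameters.

```python
# Toy "fixed" spacing: walk the sampled polyline and emit a point every
# (radius * spacing_factor) units of arc length.
import math

def resample_fixed(points, radius, spacing_factor=0.1):
    step = radius * spacing_factor
    out = [points[0]]
    dist_left = step                      # distance until the next sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.dist((x0, y0), (x1, y1))
        t = 0.0                           # distance consumed on this segment
        while seg - t >= dist_left:
            t += dist_left
            f = t / seg
            out.append((x0 + (x1 - x0) * f, y0 + (y1 - y0) * f))
            dist_left = step
        dist_left -= seg - t              # carry remainder to the next segment
    return out

# A 1-unit horizontal segment with radius 1.0 gives points every 0.1 units.
dense = resample_fixed([(0.0, 0.0), (1.0, 0.0)], radius=1.0)
print(len(dense))
```

The dense, evenly spaced result is exactly what a dot material wants, since each point carries one stamp of the texture.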
So if you want to build a realistic-looking charcoal pencil, you would use a randomly rotated texture of a charcoal image placed on each point, and for that you need this dense stroke. And then there's adaptive, which is really useful for line materials where you don't need that many points: you have a threshold for removing unnecessary points when they lie on a straight line, for example. So those are the different modes.

Next we have active smoothing. This is something the old operator already had, but the way it worked was that if you had three points placed within a certain distance, it would generate a circular arc through those three points. Our animators said it felt floaty and not precise. So we tried to revamp that, and our active smoothing is based on a Gaussian blur algorithm that also already exists in Blender, but it's applied to the whole stroke, and while you draw. You can see here on the very left, with no active smoothing, that when you draw and zoom in, the points get snapped to the finite resolution of your tablet, basically the pixel resolution, so you get this stepping effect, which is not nice. With active smoothing, what you're trying to achieve is subpixel accuracy, where each point is placed where it really should be, at subpixel resolution.

And finally it also has curve fitting, which is again implemented natively and applies the fitting after you've drawn the stroke, so it's sort of a post-process step. With all of these, by the way, we tried to make it so that they don't affect each other. So if I use fixed sampling together with curve fitting, the fitting doesn't use the points of the curve it generates; it still uses your original samples and sort of morphs the stroke into place. So here I have a couple of videos.
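The Gaussian-blur idea behind the active smoothing can be sketched like this: each point is replaced by a weighted average of its neighbors, which pulls pixel-snapped input off the integer grid. The kernel size and the 0-to-1 smoothing factor are assumptions for illustration, not Blender's actual parameters.

```python
# Rough sketch of active smoothing as a Gaussian blur over stroke points.
import math

def gaussian_smooth(points, smoothing=0.5, radius=2):
    if smoothing <= 0.0:
        return list(points)           # raw tablet input, untouched
    sigma = max(smoothing * radius, 1e-6)
    offsets = list(range(-radius, radius + 1))
    weights = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in offsets]
    out = []
    for i in range(len(points)):
        wx = wy = wsum = 0.0
        for k, w in zip(offsets, weights):
            j = min(max(i + k, 0), len(points) - 1)   # clamp at stroke ends
            wx += w * points[j][0]
            wy += w * points[j][1]
            wsum += w
        out.append((wx / wsum, wy / wsum))
    return out

# Input snapped to a pixel grid ("stepping"); smoothing recovers sub-pixel
# coordinates along the intended line.
stepped = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0), (4.0, 2.0)]
print(gaussian_smooth(stepped))
```

With smoothing at zero the input passes through untouched, which matches the design goal mentioned above: all options off means raw tablet input.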
This is with a threshold of 0.1; you can't really tell, with 0.1 it's basically impossible to tell. With 0.5 you can see a slight adjustment just after the stroke is drawn, with a slight delay. Maybe you've seen it, maybe not. It's really obvious if you put it all the way to one, though, as you can see here. This is what the revamped panel looks like, and this is all you ever need to change to get the right drawing feel. One of the core design goals was to have very few options, but make sure they really have the right effect. If you turn all of these off, you get raw tablet input; we do nothing to it. But when you use them, you can really tweak them to your liking and get the right feel. That was one of the goals, because obviously all animators are different: one will say, oh, it doesn't feel right, and another will say something else. So it was really important to have these building blocks whose settings you can then adjust to get the right feeling.

All right, that was the drawing operator, so now we're going to talk about performance. As I said, in early tests with our animation supervisor we already ran into issues, mainly when your file gets to a certain size. If you have a million or two million Grease Pencil points, you can feel the slowdown: after each stroke you draw, you get a slight delay where Blender is trying to catch up, and at some point you can really feel the lag. So we started to profile. This is a profile of drawing a Grease Pencil stroke, and we can see three major columns where time is spent. The first one here is the dependency graph update: this is after drawing a stroke, when the copy-on-write kicks in.
I'm going to talk about copy-on-write and everything around it in just a minute; basically, you have to make a copy of the data block, and that's what you see at the top there, which takes a long time. The block in the middle is actually really good: that's just drawing the viewport and all the rendering, which is where we want to spend most of the time, so that's fine. But then on the right we have the undo system. When you draw a stroke, just after the operator, the undo system has to encode an undo step so that you can undo, and just encoding that step also takes a long time. So we're going to talk about the two major issues, which are copy-on-write and undo. I'll cover copy-on-write, and Yann is going to take over after that.

To explain what we did, let's first look at the Grease Pencil data block, so we have a better understanding of what's happening. On the left you have an example of what you might see in Blender: a Grease Pencil data block with a single layer, called Lines, and two keyframes, one on frame one and one on frame four, and on frame four we have a stroke. On the right you can see how this data block might be represented in memory. At the very top we have the data block, then we have all the layers underneath; in this case just one. For each layer we have all its frames, in this case two frames on layer one. And finally, for each frame we have all its strokes, so here just one stroke on frame four. You can clearly see the tree-like structure of the Grease Pencil data block; remember it, because we're going to take advantage of it now.

So let's talk about copy-on-write and why it's important. In Blender we have this core concept that data blocks can be changed non-destructively, and in order to do that we need to operate on a copy of the original data block.
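As a toy model, the tree described above and the naive copy-on-write might look like this in code. The class names are illustrative only; the real data block is C/C++ data, but the shape of the problem is the same: evaluation starts from a full deep copy.

```python
# Toy model: data block -> layers -> frames -> strokes, plus the naive
# copy-on-write where evaluation rebuilds a full copy every time.
import copy

class Stroke:
    def __init__(self, points):
        self.points = points

class Layer:
    def __init__(self, name):
        self.name = name
        self.frames = {}              # frame number -> list of strokes

class GreasePencilData:
    def __init__(self):
        self.layers = []

def evaluate(original):
    """Naive copy-on-write: the evaluated state is a full deep copy, so
    modifiers can run on it without touching the original. With millions
    of points, redoing this after every edit is the bottleneck."""
    return copy.deepcopy(original)

gp = GreasePencilData()
lines = Layer("Lines")
lines.frames[1] = []
lines.frames[4] = [Stroke([(0.0, 0.0), (1.0, 1.0)])]
gp.layers.append(lines)

evaluated = evaluate(gp)
evaluated.layers[0].frames[4][0].points.append((2.0, 2.0))  # e.g. a modifier
assert len(gp.layers[0].frames[4][0].points) == 2           # original untouched
```

The deep copy is what protects the original; the cost of that copy, repeated on every change, is exactly what the update cache below avoids.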
What you'll have is the original data block on the left here, and an evaluated state of the data block on the right. We copy the original data, and then, for example, modifiers are applied only to the evaluated state, so we don't change the original. Now, every time you do something to the original, you have to destroy the evaluated state and copy everything over again, because you basically have to start from scratch each time. This becomes a problem when your Grease Pencil object contains millions of points, for obvious reasons. So we had to come up with a solution for how not to do that. The simple idea is: why not just replace the things that changed, instead of destroying everything? This is what we call update-on-write: when something changes, we destroy just the changed part of the evaluated state and copy that part over, and the rest is kept.

Our solution for this is what we call the update cache, and I'm going to try to explain what it does now. It's a structure that lives in the runtime data of the Grease Pencil object, so it's not saved to the file or anything; it's just there to make the copy-on-write faster. It's also structured like a tree, made of nodes, and every node has an index, a flag, and some children. The index is the index of the element that changed in the original data block. So if you have a set of layers and your second layer changed, the node stores an index of one, so we can find that layer later on. And then we have a flag, which can indicate that we want to update nothing, do a light update, or do a full update. Why we would want to say that nothing should be updated is important, so let me show you an example. Let's say we're drawing a stroke on a frame.
In that case we want to tag the frame, because we want to copy the full frame on update. On the right here you see roughly what the data block looks like, and outlined in blue is what the data structure of the update cache would look like. You have two nodes, flagged with "nothing", but we still need them, because they hold an index that lets us walk down the tree and find the element that changed. When something is tagged for a full update, we copy not only the element itself but all of its content, so all of the strokes inside the frame, for example. When it's tagged for a light update, we only copy the data on the element itself, things like the selection status; when you deselect a frame, you don't want to copy all its strokes, so we do a light update. And that's what the update cache looks like. Now, when a copy-on-write happens, we can check whether an update cache is present, and if so, skip the full copy-on-write and do an update-on-write instead, destroying and replacing only what's needed.

All right. We saw earlier that there were two main bottlenecks, and we just resolved one with update-on-write. The next big topic is the undo system, because by default Blender uses a global approach, the memfile undo system. The general idea is that whenever you make a change, the undo system encodes the state of the whole file in memory as an undo step, and this global approach really depends on the file size: the bigger the file, the longer it takes. The problem is that in heavier production files this can become a real issue, since drawing a stroke will be followed by a delay, and the same goes for undoing. And on top of that, we can see on the right that it triggers a copy-on-write after undoing, as we were saying.
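Before moving on, the update-cache tagging and the update-on-write described above can be sketched roughly like this. The node layout (index, flag, children) follows the talk; everything else, including the nested-list stand-in for the data block, is an illustrative simplification.

```python
# Sketch of the update cache: a tree of (index, flag, children) nodes
# mirroring only the changed part of the data block. On evaluation we
# walk the cache and copy just the tagged elements.
import copy

NO_COPY, LIGHT, FULL = 0, 1, 2

class CacheNode:
    def __init__(self, index, flag=NO_COPY):
        self.index = index      # position of the element in the original
        self.flag = flag
        self.children = {}      # child index -> CacheNode

def tag_frame(cache_root, layer_i, frame_i):
    """Drawing a stroke: tag one frame for a full update. The layer node
    carries no flag; it only exists so we can walk down to the frame."""
    layer_node = cache_root.children.setdefault(layer_i, CacheNode(layer_i))
    layer_node.children[frame_i] = CacheNode(frame_i, FULL)

def update_on_write(original, evaluated, cache_root):
    """Replace only the tagged frames in the evaluated state."""
    for layer_node in cache_root.children.values():
        for frame_node in layer_node.children.values():
            if frame_node.flag == FULL:
                src = original[layer_node.index][frame_node.index]
                evaluated[layer_node.index][frame_node.index] = copy.deepcopy(src)

# The data block as nested lists: layers -> frames -> strokes.
original = [[["strokeA"], ["strokeB"]]]
evaluated = copy.deepcopy(original)
original[0][1].append("strokeC")   # draw a stroke on layer 0, frame 1
cache = CacheNode(index=-1)        # root node stands for the data block
tag_frame(cache, 0, 1)             # only this frame is tagged
update_on_write(original, evaluated, cache)
print(evaluated[0][1])             # frame 1 refreshed; frame 0 never copied
```

The untagged frame is never touched, which is the whole point: the cost is proportional to what changed, not to the size of the data block.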
To address that, we wanted to move away from this global approach, because drawing, undoing, and drawing again is probably the action artists do the most during the day, and we wanted to make it as fast as possible, and clearly not dependent on the file size. So, the same way meshes have an undo system for edit mode, we decided to implement a Grease Pencil undo system, responsible for Grease Pencil modifications in any edit mode, meaning draw, sculpt, and edit mode. This system is based on a one-way differential approach: the idea is that an undo step encodes in memory only the change from the last state. But to be able to do that, we need to know what changed since the last state, and fortunately, with the update-on-write work, we have exactly such a thing, the update cache, so we can rely on it to know precisely what changed.

Let's look at an example of what happens when we use the Grease Pencil undo system. At the top you have the Grease Pencil data block as you know it now, and at the bottom you'll see a representation of the undo steps we encode. The first action: you just enter draw mode on your Grease Pencil object. When that happens, the Grease Pencil data block gets tagged for updates, and from there the undo system kicks in, and in this case encodes a full copy of the data block, because we can't compute a differential from an unknown state, so we need this full copy. Internally, the undo system uses the same structure as the update cache, but it holds its own copy of the original data, so everything you see in green at the bottom is a copy of the original data. Okay, next action: let's just draw a stroke on the first keyframe.
In this case the keyframe gets tagged for updates, and the Grease Pencil undo system uses this information and encodes a step that only contains a copy of the frame, with its one stroke. Let's repeat that and draw another stroke on the same keyframe. Same process, and that leads to a third step that contains a copy of the frame, now with two strokes. So what happens when we want to undo the last action? Well, we can go back in time and look at step two, and as you can see, step three and step two hold the exact same level of information, because they both encode a change of the same keyframe. So what we can do is basically apply step two, as if we were redoing it, and when we do that we reach the target state we were in before step three. Okay, that was easy enough. Oh yeah, sorry, one more thing: the undo system also tags the keyframe for updates in the update cache, meaning the next depsgraph update can do an update-on-write instead of a copy-on-write, which makes for a win-win relationship between the two systems.

Right, so now let's try something a bit more difficult and delete the second keyframe. In this case, since the layer structure changed, we tag the layer for updates, and we now have an undo step that contains a copy of the layer with only one keyframe. What happens if we want to undo that and get back to the state where we had two keyframes? Again, we can go back in time, but the problem is that step two encodes a keyframe-level change, and it clearly doesn't contain enough information compared to step three: it doesn't contain the missing keyframe that we need. So are we just stuck? Well, no. What we need to do is go back further in time and find a step that contains enough information to restore that missing keyframe. Okay, so let's take a look at step one now.
Well, this one is a full copy of the data block, so it contains as much information as possible, the highest level of information, and it contains the missing keyframe. We can just restore that state, and then go back to the future, somehow, and reapply step two on top of it, because we want that stroke on the first keyframe too. This example shows how crucial it is to have a full copy of the data block at the beginning of the undo chain: we want to be able to fall back to that point in history to restore any type of change made later on. The question might be: what happens if we reach the maximum number of undo steps, which is user-defined? In that case we need to free the first undo step, so if that happens, we look at step two, and if it's not itself a full copy of the entire data block, we apply it onto step one, and this merged version becomes the new step one. That way we always have a full copy at the start.

This undo system is really dedicated to optimizing performance when modifying the active Grease Pencil data in draw, sculpt, or edit mode; anything outside of that is out of its scope, and we just fall back to memfile. So if we change, for instance, the color of a material, we let memfile take care of that, and if the next action is drawing a stroke, we can re-enter the Grease Pencil undo system, but we have to encode a full copy again to maintain the principle. It's kind of the best trade-off, because it means we can undo anything at any time, at the cost of some potential hiccups when switching between steps of different undo system types. This has been in production for several months and has proven stable, and as you can see, it can lead to some dramatic improvements in performance for things like drawing a stroke. And we have submitted this as a public patch
as well, and we got a really interesting review with important things to fix and improve, which we mostly did internally, but we still have work to do to update the patch and contribute it back.

Right, so, talking about going back in time again, let's jump to the next and last case study, which is the first thing we started working on when we joined the studio. At first, the studio thought of Blender for storyboarding, because being able to mock up 3D backgrounds and place cameras, but still be able to draw characters, is a feature set that many story artists are looking for. But they were left with one big question, which was: okay, it's great that I can draw, but how do I organize my work, actually build shots, and build them into a sequence? To answer that question, we knew that Blender had a lot of features we could use to cover that use case, and building that sequence workflow was really about connecting those pieces together.

This workflow relies on the fact that we can have multiple scenes in the Blender file: at least one action scene, which contains the animated action of the sequence; an edit scene, which contains, well, the edit of the sequence; and shots within that edit scene, each of which defines a camera, an action scene, and both an edit time range and an action-scene time range. As we wanted to build something really well integrated into Blender and easy to maintain, we wanted to rely on Blender's native capabilities to address this. The good thing is that if you look at this and just use scene strips for shots, it translates directly to a sequence editor timeline, which also means we can then use the VSE as the main hub of the sequence, to manipulate and edit it. So another question is: how do we bring all those scenes together? Ideally, you would like to always see the edit timeline while you work in the sequence, but the problem is that a Blender window, by design, is only showing one
scene at a time. You could open several windows, but then it becomes a bit hard to manage. To address that problem, we made a core change: the idea of having a scene override on the sequence editor space. The end result looks like this: you can now show, in a sequencer region, a scene that is not the one of the main window, and in this case we can start building a sequence. And how do we make this work together, since we have to create links between those scenes? To do that, we created this timeline synchronization system. It's implemented in Python, and from here on, everything you're going to see is basically an add-on. The idea is that when the user scrubs the edit timeline, everything else updates to be in the correct shot context. So if the time changes, we have a callback on the frame_change_post handler, which is really the core of the system; we look for the active strip under the time cursor, and if we find one, we set the correct action scene and the correct camera, and we remap the global edit time to the scene's local time.

Now let's see it in action. As we cannot show any production content yet, we'll be using shots from the Hero short film by Daniel Martínez Lara and the Blender Foundation; the files are accessible on the Blender Cloud. All right, let's start with these two shots, which could belong to the same sequence. We just appended the scenes from the original files and made a few minor tweaks. First, let's override the scene in the sequencer to show an empty edit scene; you can notice that moving the time does not update the action scene anymore. Now let's add some shots. The workflow comes with tools to create new shots with the correct settings and naming, and here I'm creating a few shots using the two action scenes we just imported. By
activating the synchronization system, the edit scene now drives the system: when I move the time down there, it updates everything to show the correct context. Also, in this action scene we added a camera that is closer to the action, and we want to make a shot out of that. To create a new shot, we can just rely on the VSE and cut where we need the shot's in and out points. Now we have three shots using the same scene, and we want the middle one to use the new camera. To do that, we have this sequence panel at the top, which already lists the shots; you can click one and just assign the correct camera, and you'll see that it changes. Now, when I start playback... okay, the transition is a bit fast, so let's fix that. To add a bit more sequence context to the action scene, we have this overlay here, which is interactive and allows you to switch from shot to shot, but also to retime the shot; here I'm creating some more overlap. All those interactions use dope sheet gizmos. Once we've made this kind of overlap, the transition is a bit smoother, and seen from the outside, if you pay close attention, you'll see the action repeating.

Obviously, all of this is possible in native Blender; it's just that with these additional tools and context, it becomes a bit easier to understand which portion of the scene you're using, and it can help with non-linear editing, like here. The workflow relies on having an edit scene, but we can actually use as many as we want, to store versions and takes of the sequence. So here we'll switch to a more advanced version of the sequence, which contains one additional shot to help with the transition between the two scenes, and as you can see, we rely on non-linear editing again, because we just reuse a portion of the first scene from a different point of view. Now let's switch to an even more
advanced version. Here we added two additional shots, and sound, because since we're using the VSE, we can also do the sound design at the sequence level, which is really important for some steps. Okay, now let's create a title. We have this idea of template scenes: a user can create a template scene; here we just have a Grease Pencil object and a white background, perfect for a title. We create a shot from that template, which duplicates the template into a new unique scene, and the shot uses that scene, so now we have something independent to work on, and we can write a very inspired title. Okay, but since it's a title, it would be better at the beginning of the sequence, so let's just use the Alt+arrow keys, native in Blender, to move it there, and use the dope sheet gizmo again to adjust the timing, helped by the fact that the time cursor sticks to the handle while doing that. At this point we're just editing; we don't even have to realize what's happening underneath, or which action scene we're working with. That's really what we were looking for: being as close as possible to a classic editing experience.

One other important thing when integrating Blender into a production pipeline is being able to communicate with other departments, and in this case that also means the editorial department, which generally works with media files, so we need to render things out. Okay, let's clean up the sequence a bit: we have this tool that renames the shots so they're in chronological order. Once that is set, we can go to the batch render settings, which describe some parameters for how to render the sequence. We have an option to render additional frame handles, so you have more content to the left and right of each shot; we define an output scene, which we'll see just after; and then we can start the render. Basically, the batch render is a modal operator that has a list
of tasks to accomplish and consumes them sequentially. Each shot is a task, and the resulting media is put into that output scene to recreate the exact same edit, but this time with media, as we can see here. The sound was copied too, and since we defined frame handles, we can see that each clip contains additional content on the left and the right, which will give some creative freedom to the editorial team. And then, to send that to editorial, we can use this OpenTimelineIO integration that we developed for that purpose. In our case, we can generate an AAF file, because the target software at SPA is Avid. And yeah, this is basically how we build our sequence and communicate with the editorial department at the SPA Studios.

Okay, so code-wise, you've seen this sequencer scene override. It works, we're using it in production, but we still have the feeling that we could probably come up with a better design for that specific feature. This is why it's not a public patch yet; we might talk about it afterwards if anyone is interested in that discussion. But yeah, talking about how we contribute and the philosophy at the SPA Studios — Falk, continue.

Yeah, so obviously, contributing to the Blender project is a big topic, right? Because we've shown stuff that we built but that is not public yet, like the quick edit, for example. So one thing that we have to say is that the studio wants to do things the open source way. We want to contribute, we want to be part of the community, and I think we've shown that with some of the contributions we already did and the fact that we're here. But on the other hand, we're making a movie, right? And this is always the trade-off: we have to find a balance between answering production needs and finding the time to do the contributions. That is sort of the thing we're trying to figure out now. We've been doing this for a little over a year, so we're still in the
learning process of how to deal with this, because there are different levels of contribution complexity, you could say. A simple example is bug fixes. If we find a bug because some of our artists have a problem, we'll look at it and see if it's in master; if it is in master, we'll create a report and see if we can fix it; and if we can fix it, we'll submit the patch. So there's no question about that, and we've been doing that for a long time now. Then there's performance, which sits in between bug fixes and new features, because sometimes it can be really complicated to fix performance issues, like we've seen with the undo system and the update cache. So those can take more time, and sometimes you need a design for them, so it takes more time on our side. And then new features, obviously: we need to agree on a design with the respective module, for example, and we also want to build quality patches. We don't want to build a patch where the burden is put on the reviewer and it's hard to review or hard to understand; we want to put in the time to make sure we have a quality patch.

And if you look at our contributions, you can clearly see what's happening. For bug fixes, we have many bug fixes already in master. For performance, we've shown some things; there are some smaller performance improvements, like with multiple instances for Grease Pencil and things like that. And then just two new smaller features that we've contributed so far. But I'm looking forward to the future and seeing what else we can do. So that's the contributions.

Now, just a few quick slides on managing our code base. We have a GitLab server. We're doing both add-on development and Blender development, so we have a Blender repository and we have add-on repositories. And what might be interesting to some is how we deal with Blender and
Blender branches. This is what it currently looks like: our main version is based on Blender 3.3 LTS, so you can see that branching off, and our main development branch is called develop. This is where we add new features; we have feature branches and things like that, as you do. And then we have a stable version of that, which is just a branch that, every now and then, we branch off of develop, and that becomes our stable version. Then we just have fixes on that branch, and every now and then it gets merged back into develop. One thing that might be interesting is that we also have a master branch. The idea is that it is based on master but includes all of our changes, and every few weeks we try to merge updates from master into it. That way we can identify conflicts early on and not have to deal with a massive amount of merge conflicts once we try to, for example, upgrade a version, which is what happened in the past — we have a bad experience with that. So now we're doing it this way, where we have smaller merge conflicts along the way, but it's easy to deal with them in isolation, and that way we can keep track of the changes.

All right, and that's it. We have time, I think, for Q&A, so if anybody has a question, I'll repeat it and we can answer it. Yes? The shift and trace — do you actually move the keyframe data, or can you pass on the transformation information? Yes, so the question was, how does the shift and trace basically work, does it actually move the keyframe? It's a transformation matrix on the frame itself, which means that we don't actually move any of the points. You can imagine it like a modifier, but it's not a modifier; it's just applied after the modifiers, in a non-destructive way. Because it's in the evaluated state of the object, it's
used for the drawing engine, and that's how you see it on screen. Okay. Yes? Yes, obviously. So, like I said in that section, we're trying to find the balance between answering production needs and contributing. So it's a matter of when we find the time to say, okay, now we want to make this into a patch or a design. What we'll probably do with some of the features you've seen is propose them as a design first, get feedback on that, see if the community likes it, and then work on making the patch clean and contributing it back. I can't give you a timeline, but again, we want to be part of the community; we just have to figure out how to deal with it time-wise. Yes? Yeah, so the question is whether the sequencer workflow could be used for basically anything, not just Grease Pencil scenes, and you're absolutely right. It's just relying on Blender in the end, so it's whatever you have in your scenes that makes the difference. So yeah, it would completely work, and I think on that we should have discussions, because many people in studios are trying to do the same thing, right? It feels like this is something that would really benefit from a community effort, and again, we are really willing to be part of that effort. So if this can start discussions, that would be just awesome. And I think we're up, right? Yep. Okay, thank you.
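
[Editor's note] The batch render described in the talk — a modal operator that consumes a list of shot tasks sequentially, rendering each shot with extra frame handles on both sides — can be sketched in plain Python. This is an illustrative sketch only, not the studio's actual code; all names (`ShotTask`, `render_range`, `run_batch`) are hypothetical, and the sequential loop stands in for what a real Blender modal operator would do one task per timer tick.

```python
from dataclasses import dataclass


@dataclass
class ShotTask:
    """One shot in the sequence, with its frame range in the edit."""
    name: str
    frame_start: int
    frame_end: int


def render_range(task: ShotTask, handles: int) -> tuple[int, int]:
    """Extend the shot's range by `handles` frames on each side, so the
    rendered clip carries extra content for editorial to trim."""
    return (task.frame_start - handles, task.frame_end + handles)


def run_batch(tasks: list[ShotTask], handles: int) -> list[tuple[str, int, int]]:
    """Consume the task list sequentially and report each 'rendered' range.
    A real modal operator would pop one task per timer event instead."""
    rendered = []
    for task in tasks:
        start, end = render_range(task, handles)
        rendered.append((task.name, start, end))
    return rendered


shots = [ShotTask("sh010", 1, 48), ShotTask("sh020", 49, 120)]
print(run_batch(shots, handles=12))
# → [('sh010', -11, 60), ('sh020', 37, 132)]
```

The handle arithmetic is what lets the output scene recreate the exact same edit while each clip still holds extra head and tail content for the editorial team.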