So, good morning. Hello, everybody. It's great to be here among our fellow Blender users at Blender Conference 2022. We are all here, I suppose, because we're fans of creativity. And sure, we may express it in different ways, from sculpting characters to modeling environments to even building custom pieces of code to help other users out. All of those things, however, stem from a creative mindset. And for us in Team OtherWorks, we decided to express our creativity by creating a whole new fantastical world called OtherWorld. My name is Athanasius Velisaris, and this is my co-director, who just happens to be my brother. To give you a little bit of our background, I used to work as a freelance character artist for games, and Michael was into all kinds of 3D, from modeling to his current preference, which is real-time tech art. For the past two years, however, we decided to put our skills and knowledge of 3D to the test and make an animated short, a snippet of the work that we wanted to show to the world. So, take a peek.

Thank you so much. So, just to be clear, what you saw was the work of just two people. Other than voice acting, we had no outside help. Obviously, I mean, I can't do a girl's voice. The animation was lit and rendered in Unreal Engine 4.27, but the bulk of the work, including modeling, animation, rigging, sculpting, most simulations, and some of the effects, was done in Blender. So, let's dive right into it and see how it was made.

Right off the bat, we knew that we didn't have all the time in the world, so we would have to resort to using some third-party assets to help us along the process. Those were the more generic assets, like the walls and some desks. We did, however, want to create our own unique assets for things that are closer to the camera, or that are so unique that no third-party library (we were using Quixel here) has them. The challenge was to create assets that were nice and stylized while at the same time matching the color and texture of the third-party assets, so they wouldn't look out of place.

Now, the process for creating props is very well documented for real-time applications. For us, we use a base asset, and from that we derive two assets: a game-ready asset and a high-poly asset. We use custom normals and we also bake normal maps, so that's where the high-poly is useful. But what about the environment? Well, for the structures we used a modular workflow. Specifically for the walls, we just added some extra loops to facilitate vertex painting, and basically we blend between two materials: you can see here the brick and, on top, the plaster. We use a noise texture plus vertex painting to blend between the two.

But what about the exterior of the library? Well, in order to duplicate shapes effectively, Blender has had the answer for years: the array modifier. There was, though, a small problem. We wanted to take advantage of Unreal's dynamic instancing, and for that reason we couldn't just build something in Blender and then import it into Unreal without losing some performance. Instead, what we did was study Blender's array modifier and recreate it one-to-one in Unreal using Blueprints. It includes relative offset, constant offset, and a pivot offset instead of object offset. It can produce linear arrays, it can produce radial arrays, and you can also stack one array after the next to create a more complex effect.
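As a rough illustration, here is a minimal Python sketch of the offset math such a recreation has to reproduce; the names and structure are ours, not the actual OtherWorks Blueprint:

```python
# Minimal sketch of the Array-modifier math described above
# (a toy reconstruction, not the actual Blueprint implementation).
from dataclasses import dataclass

@dataclass
class ArraySettings:
    count: int = 5
    relative_offset: tuple = (1.0, 0.0, 0.0)   # in multiples of the bounding box
    constant_offset: tuple = (0.0, 0.0, 0.0)   # in world units

def array_positions(bounds_size, s: ArraySettings):
    """Return one position per instance, mimicking Blender's Array modifier:
    each copy is shifted by relative_offset * bounding-box size plus constant_offset."""
    step = tuple(r * b + c for r, b, c in
                 zip(s.relative_offset, bounds_size, s.constant_offset))
    return [tuple(i * d for d in step) for i in range(s.count)]

# Example: a 2 m wide wall segment repeated 4 times along X with a 0.1 m gap.
print(array_positions((2.0, 1.0, 3.0),
                      ArraySettings(count=4, constant_offset=(0.1, 0.0, 0.0))))
```

On the Unreal side, each returned position would feed one entry of an instanced static mesh component, which is what preserves the dynamic-instancing performance the talk mentions.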
Okay, so it wouldn't be a library without bookshelves. The bookshelves themselves were a challenge, since it could be pretty tedious to hand-place all the books one by one. Had geometry nodes been introduced when we made this, it would have made things a lot simpler. Instead, we had to resort to using Unreal's Blueprints to populate the bookshelves with books. That way, we could also take advantage of Unreal's dynamic instancing system. Now that geometry nodes exist, here's the idea of how we made all this, in case you would like to recreate something similar.

First of all, a lot of parameters are exposed to the artist, as you can see, and these are not even all of them. At the bottom there are randomization parameters, from book collections (here called "bookdoms") to percentages of angled books, sliding books, and so on, along with more generic ones like the gap between the books and several angle thresholds. The script takes all these parameters as input, generates random seeds for the various categories, and starts placing the books one by one. The first book is always spawned randomly, and for every subsequent book, it first determines whether it should look similar in appearance to the previous book, as part of a book collection, or whether it should be a new random book with completely new settings.

After that is done, the tilt of the books is calculated. The script randomly decides whether it wants to straighten the books, start sliding the books downwards, or use a new random tilt. In the first case, it looks at the tilt of the previous book and adds a recovery angle until the last book is perpendicular to the shelf. In the second case, the opposite is done: it subtracts a tilt from the previous book until the last book lies flat on the surface of the shelf.

Then we have to calculate the position of the book. This was a little trickier and required some basic trigonometry. Three possible cases are accounted for here. The first is that the second book, as you can see here, is shorter than the first book, and its height is determined by the bounding box after the tilt has been applied. The second case is that the second book is taller than the first book. And the third case is that the third book is taller than the second book but shorter than the first book: essentially, a smaller book being tucked between two taller books. As I said before, we have the bounding boxes, so the formulas came out pretty straightforward. This is the formula for the first case, and as you can see, D is the distance required for the second book's upper-left corner to rest against the side of the first book. Before proceeding to the next book or shelf, the script stores both the dimensions and the tilt, and then checks whether the width of the shelf has been exhausted. The book stacks outside the bookshelves were created following a very similar approach; however, since the pivot point of the books was at their bottom-left corner, it had to be mathematically shifted as if it were at their center.
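To make the placement logic concrete, here is a loose Python reconstruction of the tilt and offset bookkeeping described above; the authors' exact formulas and parameters are not shown in the talk, so everything here (the recovery angle, the thresholds, the lean-distance formula for the first case) should be read as illustrative assumptions:

```python
# Hedged reconstruction of the bookshelf-filling logic; only the tilt/offset
# bookkeeping is sketched, not the full Blueprint or geometry-nodes setup.
import math, random

def next_tilt(prev_tilt_deg, mode, recovery_deg=5.0, max_tilt_deg=30.0):
    """Pick the tilt of the next book: straighten, slide further down, or re-roll."""
    if mode == "straighten":                       # move back toward vertical
        return max(0.0, prev_tilt_deg - recovery_deg)
    if mode == "slide":                            # lean further toward the shelf
        return min(max_tilt_deg, prev_tilt_deg + recovery_deg)
    return random.uniform(0.0, max_tilt_deg)       # fresh random tilt

def lean_distance(height, tilt_deg):
    """First case from the talk: horizontal distance D so the tilted book's
    upper corner rests against the side of the taller previous book."""
    return height * math.sin(math.radians(tilt_deg))

shelf_width, x, tilt = 1.2, 0.0, 0.0
while x < shelf_width:
    mode = random.choice(["straighten", "slide", "random"])
    tilt = next_tilt(tilt, mode)
    x += 0.03 + lean_distance(0.25, tilt)          # book thickness + lean offset
    # ...spawn a book instance at x with rotation `tilt` here...
```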
All right, so the opening of the animation has a lot of clouds. Unreal calculates volumetrics through translucent camera-aligned planes, a common technique in many real-time engines. Newer versions of Unreal Engine have a ray-marched volumetric cloud system, but we won't be talking about that today. We will instead be talking about clouds that you can just drag and drop anywhere in your scene, at any height, and have them work. These clouds take the shape of bounding volumes that contain a pseudo-volume texture. A pseudo-volume texture is essentially a sprite sheet: its sprite X resolution gives us the resolution of the width of the volume, its sprite Y resolution gives us the resolution of the Y axis of the volume, and the number of sprites gives us the vertical (height) resolution of our cloud.

But how do we go about making a pseudo-volume texture? Unreal has built-in tools; however, I personally prefer Blender. See, these are metaballs. Metaballs are an awesome technique for modeling fluffy, stylized-looking clouds pretty easily. Once you do that, the next step is to convert them into a mesh, then take a cube, squeeze it really thin, and start taking intersections. Essentially, in order to make the sprite sheet, we have an orthographic camera looking down, we take each and every one of the Boolean intersections, and we write them into a sprite sheet. Add some procedural Voronoi noise and some gradient noise for the opacity and the little wisps, and you have clouds in Unreal Engine at any height.
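A minimal Blender Python sketch of that slicing loop might look like the following; it assumes the metaballs have already been converted to a mesh and that a top-down orthographic camera is set up, and all object names are hypothetical:

```python
# Sketch of the pseudo-volume slicing loop described above (Blender Python).
# Assumes a mesh object "Cloud", a thin cube "Slicer", and an orthographic
# camera looking down -Z as the active scene camera.
import bpy

cloud = bpy.data.objects["Cloud"]
slicer = bpy.data.objects["Slicer"]          # a cube squeezed very thin on Z
num_slices = 64
z_min, z_max = -1.0, 1.0

# Boolean-intersect the thin cube with the cloud mesh to get one slice at a time.
mod = slicer.modifiers.new("Slice", 'BOOLEAN')
mod.operation = 'INTERSECT'
mod.object = cloud

for i in range(num_slices):
    # move the thin cube to the next slice height and render it
    slicer.location.z = z_min + (z_max - z_min) * i / (num_slices - 1)
    bpy.context.scene.render.filepath = f"//slices/slice_{i:03d}.png"
    bpy.ops.render.render(write_still=True)

# The numbered slice renders are then packed into a sprite sheet (e.g. 8x8)
# to serve as Unreal's pseudo-volume texture.
```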
So, characters. There were a lot of them for just the two of us, but nothing special was done in their creation; the process of creating a character, from sculpting to UV unwrapping, is pretty streamlined. Instead, I would like to take this time to spotlight two add-ons that really helped us out: Hair Tool and Garment Tool from Bartosz Styperek. Does anyone know of these tools? No? I'm surprised. Hair Tool is an amazing tool for creating real-time hair, both procedurally and manually. Garment Tool allows you to pretty much design the clothing that you have in your mind, and it then converts it into a mesh and applies the proper settings for stitching and simulation. From there, you can convert it into a closed, watertight mesh and start baking on top.

All right, rigging. Rigging was a roller coaster of a ride. When we first began the animation, we knew that we didn't have all the time in the world, so we would have to split the difference between weight painting, a tedious process, and building controls, also a tedious process. For the faint of heart, please be careful when looking at the next slide; this is a trigger warning. This is what the initial rigs looked like: just an FK spine with some IK limbs. Now, this rig is technically usable, just not very animation-friendly; you had to counter-animate everything. After that, I took a good look at myself in the mirror and said, maybe I should implement controls. So that's what we did. We implemented proper torso controls, twist bones for the arms and legs, and hand controls, and we even updated the IK to work properly. At the same time, however, I took a tutorial course from CGDive on Rigify, and I fell in love with how easy it was to create awesome rigs. From that point on, all the rigs we created were made with Rigify. You can see our two characters here. We also integrated our own custom facial rig into Rigify. What's great about this facial rig is that it can receive any kind of controls you want, and it is also a universal rig that we used for all our characters in the animation and that will, hopefully, be used in the projects following this one. Rigify does, however, come with some caveats, and let me show you one of them.

This is obviously a character that's not from the animation; it's a character we created for one of our game side projects. But it does squash and stretch a lot, so it displays one of the issues well. The first issue is that this, if you are very observant, doesn't look like this one. In Blender, you can select which bones inherit scale and which do not. However, once things are baked down and exported, they will necessarily have to inherit scale, and the result looks like this. Piling on top of that, we also have a huge, very complicated hierarchy (and this is just the deform hierarchy, no extra bones here), and it's very difficult to work with something like that.

So how do you go about solving those issues? Well, for the hierarchy, there is an awesome add-on that automates the process of creating a game-ready deformation skeleton, called Uefy. It's called Uefy, but in our experience it will work for any engine: Godot, Unity, it doesn't matter. It samples your metarig and then creates a game-ready rig based on it, which inherits location, rotation, and scale from Rigify's built-in deformation bones. The problem is, the deformation looks even worse, on the right. So how do you solve that? Well, what we tried in the animation was an Alembic geometry cache. This is Hop; he's a little cartoon guy, and he likes to stretch and squash a lot. In the animation, we basically had a sort of stunt double for Hop that would stand in for him in the scenes where he would squash and stretch a lot, and that was done through an Alembic geometry cache sequence. However, this is not a solution, it's a workaround, so we had to use something more substantial.

So what is the common technique most people use? The answer is flattening the hierarchy. If you flatten the hierarchy, you get in your game-ready viewport what you see in the Blender viewport. However, this comes at the cost of the hierarchy, and you may be asking, why is the hierarchy important? Well, if you try to implement post-process animations in a real-time application such as a game (things like IK, floor correction, arm IK when climbing, and of course Unreal's own Control Rig), it becomes way more difficult to work with a flattened hierarchy than it has to be.

So what's the solution? Well, we developed an in-house add-on called Scalify. Scalify creates leaf bones for each Uefy bone and then skins the mesh to the newly created bones. See, since they are leaf bones, nothing inherits scale from them, and in that way you can have both post-processes and squash and stretch. What's also great about Scalify is that you can select which bones get Scalified and which don't. So you can have the limbs and the spine be Scalified, whereas bone-dense areas such as the hands and the face do not necessarily have to be Scalified.
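Here is a rough sketch of the leaf-bone idea in Blender Python; this is our reconstruction of the concept, not the actual Scalify add-on, and the bone names are hypothetical:

```python
# Rough sketch of the "Scalify" idea: add a leaf bone per deform bone and move
# the skinning over to it, so bone scale never propagates down the chain.
# Our reconstruction of the concept, not the actual in-house add-on.
import bpy

arm = bpy.context.object             # run with the armature active
bones_to_scalify = ["spine_01", "upperarm_l", "lowerarm_l"]   # hypothetical names

bpy.ops.object.mode_set(mode='EDIT')
for name in bones_to_scalify:
    src = arm.data.edit_bones[name]
    leaf = arm.data.edit_bones.new(name + "_scale")
    leaf.head, leaf.tail, leaf.roll = src.head, src.tail, src.roll
    leaf.parent = src                # follows the original's loc/rot/scale
    leaf.use_deform = True
    src.use_deform = False           # the original keeps the export hierarchy
bpy.ops.object.mode_set(mode='OBJECT')

# Retarget the skinning: rename vertex groups so weights point at the leaf bones.
for obj in [o for o in bpy.data.objects if o.find_armature() == arm]:
    for name in bones_to_scalify:
        if name in obj.vertex_groups:
            obj.vertex_groups[name].name = name + "_scale"
```

Because a leaf bone has no children, its scale animation deforms the mesh but never cascades down the hierarchy, which is exactly why the squash-and-stretch survives the engine export.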
All right, animation. Now, animation was pretty straightforward; it's mostly hand-keyed. We did, however, get our hands on a Rokoko motion capture suit, and we decided to implement some motion capture. Here I am on the top left doing the movement, and it looks kind of like the movement I did. Motion capture, however, did come with its own issues, and retargeting was at the forefront. Rokoko does provide an in-Blender add-on for retargeting; however, for our workflow and our needs, it didn't work exactly the way we wanted. So we decided to implement a custom IK retargeting technique. Have a look at it, just as a reference.

The first step is to actually get our character into T-pose. An A-pose is great for sculpting and modeling and works excellently, but when you're importing from Rokoko, Rokoko's motion library, Mixamo, or any motion capture library, you will almost 100% find that the imported skeleton is in T-pose. So getting your character into T-pose is the first step. Then you import, and once you've imported the source skeleton that carries the motion capture data, we add an NLA layer on top and scale it there. This is an additive layer; the reason we do this is to be able to swap the base layer with another animation and still have the proper scale applied to our source skeleton. This is key to creating proper IKs for our character.

All right, now for the arms. It's a very simple process: you just parent the IK target to the hand bone and the pole target to the upper arm bone. You do exactly the same thing for the legs: you parent the IK target to the foot bone and the pole target to the thigh bone. These will get you pretty good results. For the head and neck, a simple Copy Rotation in local space will suffice; as you can see here, the mix is set to Before Original. We'll talk about that a little later. For the torso, we just parent it to the pelvis. Just to be clear, all of these Child Of parentings happen at frame zero, which is usually a T-pose when importing a motion capture skeleton.

But what about the spine? Do we use Copy Rotation? Do we use Child Of? The answer is that it's not that simple. See, different skeletons, regardless of whether they are source skeletons or your own character skeletons, have different numbers of bones in their spine. Some may have four, some may have three; newer versions of Unreal support six. So it's not easy to just map from our source skeleton to our target skeleton. What do we do? Well, something pretty common among animation control rigs is lower-torso and upper-torso controls. So what we do is insert two intermediate bones: one parented to the very top of the spine and one parented to the very bottom of the spine. We then have the upper torso control track the upper one and the lower torso control track the lower one. Tracking only includes rotation, and that's exactly what you need. If you have set up the two controls properly, this will work pretty well.

This technique can even be used for clavicles. Now, clavicles can of course work with Copy Rotation; it's just way more time-efficient to add an intermediate bone at the end of your control rig's clavicle and have it parented to the shoulder bone of the source rig. And what's great about using Child Of constraints is that you can fix the silhouette and the pose of your character while still reading directly from your source skeleton. Why that is important is because you keep proper timing and spacing, instead of baking everything down into an NLA action and using additive layers to fix the pose on top of your base animation, which may spoil the timing and spacing of your motion capture data.
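As a sketch, the Child Of setup at frame zero could be scripted like this in Blender Python; the bone-name mapping is hypothetical, and the inverse-matrix line mimics what the constraint's Set Inverse button does:

```python
# Minimal sketch of the Child Of retargeting step described above.
# Run with both rigs in the scene, at frame 0 (the T-pose frame).
import bpy

scene = bpy.context.scene
scene.frame_set(0)                                   # constraints are set up in T-pose

target_rig = bpy.data.objects["rig"]                 # your Rigify control rig
source_rig = bpy.data.objects["mocap_rig"]           # imported mocap skeleton

pairs = {                                            # control bone -> mocap bone
    "hand_ik.L": "LeftHand",
    "upper_arm_ik_target.L": "LeftArm",              # pole target -> upper arm
    "foot_ik.L": "LeftFoot",
    "thigh_ik_target.L": "LeftUpLeg",                # pole target -> thigh
    "torso": "Hips",
}

for ctrl_name, src_name in pairs.items():
    pbone = target_rig.pose.bones[ctrl_name]
    con = pbone.constraints.new('CHILD_OF')
    con.target = source_rig
    con.subtarget = src_name
    # keep the control where it is now instead of snapping onto the mocap bone
    # (equivalent to pressing "Set Inverse" on the constraint)
    con.inverse_matrix = (source_rig.matrix_world @
                          source_rig.pose.bones[src_name].matrix).inverted()
```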
All right. After the body animation is done, we also need to animate the hair and the clothes. To do that, we decided to use simulation, and to be more specific, bone physics. While Unreal does provide a built-in solution for this, we decided to bake it in Blender so we could have more artistic control. Even if Blender doesn't provide a built-in solution the way Unreal does, it does provide us with all the right tools to create our own.

Essentially, our workflow goes something like this. First, we create hitboxes for all the bones we want to simulate. These hitboxes are meshes roughly the shape of the region affected by each bone. Then we enable rigid body physics for them and create rigid body constraints to connect them together, creating a chain. Then we have the bones copy the movement of the hitboxes, either with Child Of constraints or Copy Rotation constraints. A kinematic hitbox has to be created for the rest of the body as well, so we can have accurate collisions. This is a very tedious task, and having to do it for every bone, for each action, would require a lot of time and wouldn't allow for fast iteration. To help with our workflow and expedite the process, we developed a tool, an add-on, that automates a lot of these steps. What it does is go to the first frame of the action and connect the hitboxes we previously created to their appropriate bones. It enables rigid body physics for them, creates the correct constraints, and applies all the appropriate settings, and all we have to do is hit bake and wait a couple of seconds for the bake to finish. The tool also allows for easy selection of the hitboxes and the bones so we can bake them, and it lets us disconnect them so we can move on to the next action. It also allows us to control the stiffness and the damping of the spring rigid body constraints.

So, we have body animation, we have secondary animation, and now let's talk about the facial expressions of the characters. While most of the facial work was animated by hand, we wanted a faster solution, both for additional animation in this project and for our future projects. Using Apple's ARKit, or any other application that captures a facial performance into 52 or so shape keys, is one of the easier ways of adding facial animation to a character. As we mentioned before, we use a universal facial rig for all our characters. So what we did was use this rig to recreate all the shapes of those 52 shape keys as skeletal poses instead. But why sacrifice fidelity that way, you may ask? Well, as long as we use this rig for all our characters, the poses can, at least to some extent, be transferred between them.

To help with our workflow, we developed another tool. What this tool does is sample the shape key data and transfer it to the skeletal actions and poses of our rigged character. How does it work? First, we give it a list, as you can see over here, of which actions correspond to which shape keys. Then the add-on populates the NLA, as you can see on the left, with tracks, each track holding one action. Then it sets the blend type to Combine. Then, for every frame and for every shape key, it samples the value of the shape key and translates it into the influence of the corresponding NLA track. This can be done in real time, so the results can be viewed right away, and it can later be baked into an independent action. Another cool feature of this add-on is that it can create a duplicate mesh of the rigged model, but with all the skeletal poses baked as individual shape keys instead. So we can use the shape-key version of our character to capture the data and later, using the same add-on, transfer it back to the rigged character.
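A minimal sketch of that shape-key-to-NLA transfer, assuming one pose action per ARKit shape key, might look like this; the object, action, and key names are hypothetical, and this is our reading of the tool described, not its actual code:

```python
# Sketch of the shape-key-to-NLA transfer described above (Blender Python).
import bpy

src = bpy.data.objects["FaceCapture"]       # mesh carrying the ~52 ARKit shape keys
rig = bpy.data.objects["rig"]               # character with one pose action per key

mapping = {"jawOpen": "pose_jawOpen",       # shape key -> skeletal pose action
           "eyeBlinkLeft": "pose_eyeBlinkLeft"}   # ...and so on for the rest

ad = rig.animation_data or rig.animation_data_create()
strips = {}
for key_name, action_name in mapping.items():
    track = ad.nla_tracks.new()
    track.name = key_name
    strip = track.strips.new(key_name, 0, bpy.data.actions[action_name])
    strip.blend_type = 'COMBINE'            # the "blend type set to Combine" step
    strip.use_animated_influence = True
    strips[key_name] = strip

scene = bpy.context.scene
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    for key_name, strip in strips.items():
        kb = src.data.shape_keys.key_blocks[key_name]
        strip.influence = kb.value          # shape key value -> strip influence
        strip.keyframe_insert("influence", frame=frame)
```

Because each strip uses the Combine blend type, the per-key poses stack on the rig the same way the 52 shape keys stack on the capture mesh, which is what makes the real-time preview possible before the final bake.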
Okay, last but not least, let's talk a bit about the effects in the animation. After Oliver the dragon dives down through the broken dome window, he spews a jet of fire at Colonel, the bad guy. To do this, after setting up all the appropriate animations and collisions, we set up a particle system, which we then used to drive a fluid simulation. Okay, so we have the flamethrower-like effect, but how do we use it in Unreal? Well, Blender allows fluid sims to be baked into a format called VDB, which we can then import back into Blender and, using the Volume to Mesh modifier as you see here, transform into a geometry cache, which we can then export as an Alembic geometry cache into Unreal. However, as you can guess, by doing that we lose all the volumetric properties of the original fluid simulation, so we have to recreate a material that approximates the volumetric look. So first we applied a volumetric Voronoi texture, created using the same slicing method we used for our clouds, to introduce some noise. Then we applied a flow map, projected tri-planarly, so we could simulate the swirling of the fire. Then we applied some Fresnel effects, we added color, and we have our final result.

The last part is lightning, and there is a lot of lightning in the animation, and a lot of different techniques we used to create it. However, the one that works with basically any sparks, in a drag-and-drop fashion, is the following. First we model the arcs, using curves with a custom bevel. The reason we do that is so we can procedurally control the resolution of each curve; in real-time applications, performance is always an issue. We create the main arcs, the ones you see in the center here, and the flyaway arcs, which are less likely to be visible. We convert them into a mesh and then unwrap them onto a 15-column sheet; we'll see why in a little bit. We unwrap the main arcs on the left, as you see, and the secondary ones on the rightmost part of the sheet.

So this is the material. The material's first stage is the probability that an arc is visible, and this is what gives you the blinking effect of electricity. We have three strips that scroll from left to right; on the leftmost side of the sheet they act in a subtractive fashion, and on the rightmost side they act in an additive fashion. That way, the main arcs will most likely be visible, while the secondary arcs will most likely be invisible. We then use some textures to add displacement, similar to how you would do it in Blender (Eevee Next will have it, if it doesn't already), and we also add a large offset to create the wild electrical look. We then use a multiplier matrix texture: basically, to the 15 columns we add 10 rows, stretched to fit the entire UV sheet. What this texture does is scroll vertically in a stepped manner and assign each of the arcs a different intensity multiplier, giving them some uniqueness. But we're not done yet. And this is how the final effect looks. It can actually work with any kind of shape; we used it in multiple parts of the animation, and it's just a shader that you drag and drop.
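Since the material itself is an Unreal node graph, here is the same blinking logic as a toy Python function, just to show the shape of the math; all the constants (strip speeds, widths, the base falloff) are illustrative assumptions, not values from the talk:

```python
# Toy reconstruction of the arc-blinking logic described above; the real
# version is an Unreal material, and these constants are purely illustrative.
def strip(u, t, speed, width=0.15):
    """A soft strip scrolling left-to-right across the 0..1 UV sheet."""
    center = (t * speed) % 1.0
    return max(0.0, 1.0 - abs(u - center) / width)

def arc_visibility(column, t, columns=15, base=0.8):
    """Main arcs (left columns) stay mostly on; secondary arcs (right columns)
    stay mostly off; the scrolling strips briefly flip each side."""
    u = (column + 0.5) / columns
    s = strip(u, t, 0.7) + strip(u, t, 1.3) + strip(u, t, 2.1)
    visible = base * (1.0 - u)       # left side starts bright, right side dark
    visible += s * (u - 0.5) * 2.0   # strips subtract on the left, add on the right
    return max(0.0, min(1.0, visible))

for col in (0, 7, 14):               # a main, a middle, and a secondary arc
    print(col, [round(arc_visibility(col, t * 0.1), 2) for t in range(5)])
```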
And now I can say thank you. Thank you, guys. First of all, I'd like to thank Blender for having us; it's honestly a huge honor for me, for us. I guess we've got a little time, so does anybody have any questions they'd like to ask? Yeah, sure. You know what's funny? He has memorized the answer to this question. Well, I hadn't really, I just thought it could be a question that would come up. We mostly just used the normal FBX export that Blender provides. Just make sure that if your model faces front in the Blender viewport, it's going to face front in Unreal; if I'm not mistaken, front is minus Y in Blender, and in Unreal it's plus X. But we also used another add-on. I think its name is Blender to Unreal; I don't really remember who made it. It's still in an experimental state. It allows for batch export, and we mostly used it for third-party assets that we imported into Blender, so we could fix pivot points and normals, bake some textures, and add some sockets in Blender, so we wouldn't have to do that in Unreal. Yeah, we used that add-on to export the props, but the characters, the effects, and the Alembic sequences were exported using the defaults that Blender provides.
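For reference, a minimal Blender-side export call of the kind described might look like this; the speakers say they mostly used the defaults, so the extra flags here are just the ones we would typically reach for when targeting Unreal:

```python
# A minimal FBX export call for a character headed to Unreal (Blender Python).
# The speakers used mostly default settings; these flags are our suggestions.
import bpy

bpy.ops.export_scene.fbx(
    filepath="//export/character.fbx",
    use_selection=True,          # export only the selected character
    add_leaf_bones=False,        # Unreal doesn't want the extra "_end" bones
    bake_anim=True,              # bake the actions down for the engine
    apply_scale_options='FBX_SCALE_ALL',  # avoid the 0.01 armature scale in Unreal
)
```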
Any other questions? Well, I took the CGDive course on Rigify, yes. But we actually built our own, and there are actually many more techniques for exporting. You can have an add-on counter-animate the scale, basically, so as to have the same scale in the hierarchy and skip the extra bones that we added. But when we built our add-on, that didn't exist; the course from CGDive had just been released, and it didn't yet have the extra modules that he added later on. So that's why we chose to stick with our own add-on. Any more questions? Does that cover your question? Yep. Thank you. Right, same thing. Yeah, okay. All right, thank you.

So, yes, that's a valid question. Well, in our animation we decided that we wanted to work at 30 frames per second, so everything was baked at 30 frames per second. However, I have found that you can tell Unreal, if I'm not mistaken, the frame rate of the Alembic cache, and if you're working at 60 FPS it will not play it at double speed; it will just interpolate. I mean, if I'm not mistaken, it will hold each frame for two frames if the project runs at 60 FPS and you import a 30 FPS Alembic cache. Did I cover your question? Yeah, we decided on 30 FPS, so we did the simulations at 30 FPS. I'm not really sure how it would work with other frame rates, because we only did the renders at 30 FPS. The sequences in Unreal were actually unlocked, because we had some slow-mo effects, and it's still buggy to use those with locked frame rates. What I have found is that Alembic caches don't really care about the final frame rate of the project: if one is imported as 30 FPS, it plays as if the project were running at 30 FPS. It's frame-rate agnostic, as far as I know.

Any more questions? Right. Essentially, it's Unreal's native solution for volumetrics; they call it the pseudo-volume texture. Essentially, it tries to recreate something like a signed-distance-field kind of shape, but it uses a set of sprites to do it. What the camera sees in that demo is a top-down opacity of each slice. What we're doing here is basically taking a mesh, slicing it into different parts, and then making that the volume texture. Unreal has built-in tools that can work even in VR to create a volume texture, but this is the workflow I use in Blender to make things more stylized, a little more solid.

Any more questions? Yeah? Yes, in Blender. Well, Eevee is great. It's awesome. It sounds like I'm about to insult it now; no, I'm not. I spoke with Clement yesterday, and I really admire him. The problem was that it doesn't support all the features that Unreal does. Unreal supports lighting channels, it supports vertex offset; you can't do that in Eevee right now. So it lacked many of the tools. And the worst part for me is that while Eevee works in real time, it renders very slowly, whereas Unreal renders extremely fast, even at the highest preset. So Eevee is great, and we actually use it when animating; we use an add-on that converts normal maps to height maps, so as to have faster iterations and be able to see a shading approximation of the characters in the viewport. We considered it, but at the end of the day we just use it for visualization purposes. Also, if you're using Unreal, you get free Quixel assets. Yeah. Excuse me, I'm sorry, I didn't hear you. Can you speak up a little bit? Yeah. You get Quixel assets, but you also get Niagara, which is an amazing FX module in Unreal. You get lights that have many different controls, and we could theoretically even do the physics in Unreal; it's so fast to iterate there, and iteration is faster in Unreal than it is right now in Blender, for our use case. So Niagara is an awesome tool. The new ray-marched volumetric clouds are another awesome tool, and that doesn't exist in Blender in real time right now. And of course, soon, Nanite and Lumen, two amazing tools in Unreal that don't work great right now but will be working great a year from now.

If I may add to this answer: Unreal also provides built-in LOD and HLOD systems, and since we wanted the animation to run in real time, as if it were a cutscene in a video game, we wanted to save some performance. Outside the main hall of the library, where the majority of the animation takes place, there are huge buildings, and since we mostly used Quixel assets for those, there is a lot of geometry there. We used LODs to simplify all of that, and we also used the dynamic instancing system. I think we're out of time, so thank you.