Hello, and thank you very much. Hi guys, I'm Conrad Justin, and I'm Igor Puskaric, and we're here to tell you about the limitations of real-time 3D art: how to overcome them, how to solve them, how to optimize for them, and maybe, who knows, how to improve your art with them.

First, a little bit about us. I'm a musician and 3D artist with a traditional art background, so if you see me walking around with a sketchbook like a lunatic, that's normal. Here's my succulent, Fifi; you gave it to me at last year's conference, and as you can see it's doing quite well, despite the fact that I dropped it once or twice. On the right you see me in hand-to-hand combat with a rabbit, Peco, which of course happens in a traditional Swedish homestead. I've been doing 3D art for 11 years now, and using Blender for four, since the legendary 2.8 release. Here you can see examples of my work: some colorful dioramas, asset packs, game environments, and recently a movie. What about you, Igor?

Basically, I've been an artist my whole life, doing 3D for only 15 years. The best kind of fact about me is that I like ideas. I like to merge different things that are perhaps not mergeable and create something new. For this I use the tools of drawing, the tools of animation, and the tools of 3D modeling, and they combine with each other and inspire each other. In terms of software, first I used 3ds Max, then I also taught ZBrush, and lately I tried Blender and have so far never looked back. Over the years I have been growing in terms of public presence: I won some awards, got featured by some big- and smaller-name companies, worked with some really awesome people, and appeared on some cool blogs and podcasts. Lately I had the honor of participating in creating a curriculum for Croatia's school system, and I also work as a Blender teacher at Novska's PISMO, which is basically understood as a Croatian gaming center.
I'm also a Sketchfab seller and a Sketchfab Master, just like Conrad. My utmost excitement happened when my model Worker 12 became the head model on Sketchfab's main page.

All right. When you create a 3D scene for real-time usage, like video games, AR, or the web, you face hardware and software limitations. Those could be the GPU not being able to render high-res textures or too many polygons, or your game engine not supporting soft-body physics or dynamic lighting. We all use workarounds, optimizations, illusions, and tricks to get around these limitations. Now imagine that you have a very beefy, very strong PC and a very powerful game engine, so you have no limitations whatsoever. You might still want to take these detours, because they are interesting to explore and they determine new art styles.

We're going to need an example, so together with Igor we created a movie. But not just any movie: it is an interactive short 3D movie on the web, in a browser. We published it on Sketchfab, and you can find it on my Sketchfab profile if you're interested. In there you can zoom in, pan around, orbit, and inspect the scene as the movie plays out. You can check out the PBR maps, the topology, and everything like that. There's a whole article about how the movie was made, so if you're interested you can check it out on sketchfab.com/blog, and you can see the movie itself on my Sketchfab profile.

Okay, so what are those limitations that we want to talk about today? Since Sketchfab was not originally meant to do what we intended to do, we had to face, basically: a low poly budget, lighting limitations, a low texture budget, and animation restrictions. Those are all limitations we had to face, starting with the low poly budget. When you create art for real-time 3D graphics, the best way to optimize, hands down, is using LODs.
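As a rough illustration of the idea, LOD switching usually comes down to picking a mesh tier by camera distance. Here is a minimal sketch in Python; the distance thresholds and triangle counts are invented for illustration and are not values from our project:

```python
# Sketch of distance-based LOD selection. The thresholds and the
# triangle counts below are made-up illustrative values.

def pick_lod(distance, thresholds=(10.0, 25.0, 60.0)):
    """Return LOD index 0..len(thresholds) for a camera distance."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # farthest tier (LOD3 here)

# Hypothetical triangle counts per LOD of one asset:
tris = [20000, 8000, 2500, 600]  # LOD0 .. LOD3

for d in (5, 15, 40, 100):
    lod = pick_lod(d)
    print(f"distance {d:>3}: LOD{lod}, {tris[lod]} triangles")
```

The payoff is that only objects near the camera pay the full triangle cost; everything else renders a cheaper stand-in.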
LODs (levels of detail) are lower-resolution models that replace the original mesh as you get further and further away from it. This is LOD0, the highest resolution, then LOD1, LOD2, LOD3 and so on. Some engines, like Unreal, will generate those LODs for you, but most won't, so it's good to know how to create them. You can do that by dissolving loop cuts in Blender, or by decimating the topology with the Decimate modifier; just don't decimate it too much.

For our project we didn't have LODs at all, and on top of that the budget is very low: it's on the web, so it's only 500,000 triangles for mobile and 2 million triangles for an average gaming PC. Somehow we managed. This is the final scene of the project, and it has only 180,000 triangles. So how did we do it?

The first and most important step is creating a blockout. A blockout is simplified geometry without any details and no textures, and you can see iterations of it here, this one being the final. The blue line, let's call it the focal line, represents the path that the robot protagonists, the heroes of the movie, take. It affects the blockout, just as the blockout affects it, but more importantly, this blue line determines the objects, their distribution, and the distribution of detail in the scene.

Within the blockout there are so-called placeholders, and one of them, some kind of pipe system, we separated and finished: proper topology, a fully fledged model. This becomes an example, a template of a sort, for every other model in the project. But not every model is going to have the same density of topology. Something that is close to the focal line is going to be higher resolution, while objects further away only need enough to keep their form and hold a good silhouette against the background. That creates a distribution of topology.

This image showcases exactly what Conrad was just saying: high poly, mid poly, and low poly are the divisions. We gave our robots the highest polycount, since they're the main attraction in terms of action and movement. Then we have the mid poly, which is the nearby environment, and the green represents low-poly things out of the main focus, there just to fill out the scene.

To summarize this chapter: the best way to optimize is using LODs. If you can't do that, create a blockout; within the blockout, have placeholders, and maybe turn one of them into a template model that serves as an example, a system of topology, for your project. Last but not least, have proper, smart topology distribution: dense where it matters, sparse where it's far from the focal line. Some artists who embraced the limitation of low-poly art kept lowering and lowering the topology, and they figured out that we don't need millions of polygons to create something appealing and beautiful. Thank you, Polyperfect.

Next chapter: light. In real-time 3D graphics, light calculations are heavy, which is why we bake them. Baked light is not physically in the scene, but the data from it is, and that data can be both volumetric and flat. The volumetric kind is captured by so-called light probes; this is a screenshot from Unreal, and those light probes are responsible for illuminating objects that move, like the player, based on the environment. Then we have flat data, as in this screenshot from Unity, which shows lightmaps. Lightmaps are used to illuminate objects that are static, and they are saved to a second UV channel. In our project we have both moving and static objects, but we have no lightmaps and no light probes. So what did we do? Take a look.
This is the final version of the Blender scene. In here we had 44 light sources, plus extra shadow-casting planes to darken some areas, like that cave over there and the bottom of the pit. Now on Sketchfab, this is how the final scene looks. As you see, all the lights have an indirect lighting contribution; light scatters all over the place. But in the final scene on Sketchfab we are limited to three light sources and an HDRI, we have no invisible shadow casters, and none of those light sources contribute an indirect lighting pass.

So how do we transfer all that lighting data from Blender to Sketchfab? We do it by baking light into textures. This is how one of those textures with light baked in looks, and it's not just any light: it's indirect light. You can do this in Blender using a Cycles Combined bake; just make sure to uncheck the Direct lighting option. This bake looks pretty, but it's quite dark. The secret ingredient of this method, which uses no lightmaps or light probes, is to plug this texture into the emissive slot on Sketchfab, on the web. One, two, three: those are the directional light sources responsible for dynamic shadows in Sketchfab. Everything else, all of those 44 light sources, all of those invisible shadow casters, all of that indirect lighting contribution, is done by emitting light through textures.

But we have a problem: if our hero is in the light, it's all good, he can read a book; but when he gets into shadow, he becomes completely dark. So for moving objects we also need to use the emissive input, and it's important to find a balance in this workflow. If the emission is too strong, objects that move are going to look too bright in areas that are lit.
However, if the emission is too weak, they're going to be too dark in shadows.

There's another lighting limitation that we stumbled upon while working on the movie: we have dynamic shadows, but we are not allowed to move the lights. How do we get around that? Our friend from Ireland, Tyco, experimented with this stuff and came up with a bunch of nice solutions that we want to present to you.

The first one: if Mohammed cannot go to the mountain, maybe the mountain can come to Mohammed. The light can't move, so let's move the environment instead. Here we aim three spotlights at this light orb; the orb stays static and the environment moves. Don't ask me why spotlights; it could be a point light, but for some reason a point light doesn't cast shadows, and we want shadows. As long as the camera stays close, the illusion just works.

What if you have multiple light sources that move independently from one another? You can no longer move the environment, because the environment moves along a single vector. What you can do instead is add a strong emission to the sphere, then add bloom in post-processing. Now it appears as if the sphere is actually emitting light.
We are using illusions and tricks, but unfortunately this light is not actually illuminating the surfaces around it. Unless: we put a fake sphere behind the rock, add very strong bloom to it, make it bigger, and have it mirror the movement of the visible sphere. Now, if we add refractive translucency to the surface of the rock, the bloom of the hidden sphere scatters through, and this creates the illusion of light illuminating the surface of the object.

Speaking of illusions, there is also a third way. Here we have a spotlight that casts a shadow. Even though we cannot control the light itself, we can control the conditions the light passes through. So we take this plane, and we can animate it through blend shapes, through object-based animation, or through a texture-based shape that just alpha-keys, basically; we can also shape the light with any underlying geometry, such as this fan, and then we get the effect it produces on the surface below. We mention these limitations because of the discoveries made while exploring solutions to them: they can determine the identity of your style. Thank you, Tyco, for your lessons.

To summarize this chapter: have your key lights, the main, most important lights, be real-time, and bake everything else into lightmaps and light probes. If you can't have those, use emission; but remember, with emission you can only add lighting, you cannot make things darker. Keep that in mind.
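That balance for moving objects, between the light an object receives and the light its emissive map produces, can be sketched as a toy model. The numbers and thresholds below are invented for illustration; this is not Sketchfab's actual shading math:

```python
# Toy model of the emissive-balance problem for moving objects.
# All values here are illustrative, not engine-accurate.

def perceived(direct_light, emission):
    # Simplified: brightness is the light received plus own emission.
    return direct_light + emission

def balance_ok(emission, lit=0.8, shadow=0.0, floor=0.15, ceiling=1.1):
    """Emission is balanced if the object stays visible in shadow
    without blowing out in fully lit areas."""
    return (perceived(shadow, emission) >= floor
            and perceived(lit, emission) <= ceiling)

print(balance_ok(0.05))  # too weak: invisible in shadow -> False
print(balance_ok(0.2))   # balanced -> True
print(balance_ok(0.5))   # too strong: overbright when lit -> False
```

The point of the sketch: a single emissive strength has to satisfy both ends of the range at once, which is why tuning it is a balancing act rather than a single correct value.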
If your scene is already bright, emission is not going to have any visible effect. You can bake emissive indirect lighting in Blender with a Cycles Combined bake, and when you use this technique, find the balance between receiving light (the PBR setup) and producing light (the emission setup). If light can't move, move the environment instead; use emission and bloom in post-processing, and refractive translucency, to create the illusion of illumination on a surface; and animate the shadow casters.

Next chapter: textures. Textures are among the most demanding things in CGI, both in their size and in their amount. So what do we do? We start by keeping them square, with dimensions based on powers of two: 1K, 2K, 4K, it doesn't matter, as long as the engine can generate mipmaps from them. Mipmaps are a similar optimization to what LODs do for 3D models: lower-resolution versions get swapped in based on camera distance.

You can also tile textures. This means they transition into repetitions of themselves; they can tile horizontally, vertically, or both ways, which lets you avoid creating an extra texture every time you run out of UV space. You can also create one texture that's generic enough to cover multiple models or parts of models, which you can then reuse practically infinitely. And last but not least, you can pack textures into atlases. This is what an atlas looks like, and it's rendered in one go, just one draw call. On the right you see Unity, our engine, forcing us as artists to shove individual maps into separate channels of an RGBA texture. This is also a form of optimization, but it looks ugly; you can't tell what the hell is going on unless you open it in Photoshop or GIMP and inspect each channel individually. What a mess.

On Sketchfab, for the movie, we were restricted to a square of dimensions 12K by 12K, which corresponds to nine 4K textures, and somehow we managed to fit within that. How did we do it?
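As a small aside, the power-of-two rule and the 12K budget above can be checked with a few lines of Python (assuming "12K" means 12288 px, i.e. 3 × 4096; the helper names are ours):

```python
import math

def is_power_of_two(n):
    """True for 1, 2, 4, 8, ... - the sizes mipmapping expects."""
    return n > 0 and (n & (n - 1)) == 0

def mip_count(size):
    """Number of mip levels for a square power-of-two texture,
    halving down to 1 px (e.g. 4096 -> 13 levels)."""
    return int(math.log2(size)) + 1

print(is_power_of_two(4096), mip_count(4096))  # True 13

# Budget sanity check: a 12K x 12K square holds exactly nine 4K maps.
print(12288 * 12288 == 9 * 4096 * 4096)  # True
```

This is also why an odd size like 3000 px is a poor choice: it fails the power-of-two check, so the mip chain doesn't divide cleanly.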
The first important step is to group geometry into materials. This could be by material type (rock, metal, sand), by the resolution of the object, by whether the object is moving or static, by the position of the object in the scene, by how it's being lit, or by purpose. Here's an example: Igor and I made an asset pack, and here you see how the materials are divided: rock, rope, metal, and then within the metal part, by purpose: structural metal, sheet metal, pipes, decorative. In the movie we divided based on position in the scene and lighting, because of how we bake light into textures; it just makes it easier for us to work that way.

Then we have renders, very basic renders, and paintovers. I do those paintovers in Krita and in Blender; here's one, here's another. Based on those paintovers, I know exactly what texel density, what texture resolution, I need. You might think that this giant rock, the face of the giant, seems important, but it's actually in shadow, so it doesn't require as much texture resolution, and thus its UV islands can be scaled down, leaving more room for the UV islands of the engine or the hand. These paintovers are then used to create the diffuse textures, and based on the diffuse we make all the other maps, such as metallic, roughness, specularity and so on. The last step is to convert them to JPEGs and shrink some of them, so they can all be shoved into the square.

To summarize: keep your textures at powers of two so that mipmaps can be generated. Tile your textures horizontally, vertically, or both ways. Reuse textures, just like Igor did with his drone pack. Use atlases, and use separate channels of an RGBA texture if you want. It's good to know the budget of your engine, and when you finish modeling, group everything, so that you don't end up with 400 textures because you have 400 models; that would be just crazy. Figure out the texture distribution; we did it with paintovers.
You can also do it by just thinking about what is important in your scene: whatever is close to the focal path gets the higher texel density. The last step is to generate all the needed maps and pack them into the overall texture budget.

By embracing this limitation, as well as the previous one, the low-poly budget, beautiful art can be created. Windmill, by Wacky on Sketchfab, is a prime example of voxel art: every single voxel cube is just one color. And here's one more; this one projects very simple gradients of value and color onto very basic topology, and creates appealing, beautiful results.

Next chapter: animation restrictions. Blender, as such, doesn't have restrictions: there are morphs (shape keys), there are armatures, there's soft-body physics, there are hard-surface rigid bodies, particles, and object transforms. The point, however, is that compared to what Blender offers, what we had in this project was limiting: we only had object transforms, armatures, and morphs. So what do we do? It turns out that every animation feature we don't have can somehow be done with the ones we do have. Rigid-body simulation can be done with simple object transforms, by keyframing them. Particle simulation can likewise be done by keyframing individual instances of particle objects. Soft body, like a rope swinging in the wind, can be done with a simple armature rig. And light turning on and off: we don't have dynamic light, but we can fake it with shape keys.

For rigid bodies, you can simulate them in Blender; then you have to bake the simulation to a cache, then bake it again into keyframes. This is how those keyframes look: you have to optimize them further by decimating them. It's a bit messy.
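The decimation step can be sketched as follows. This is a simplified stand-in for Blender's F-curve decimation, with an invented tolerance: a baked key is dropped whenever linear interpolation between its neighbors reproduces it closely enough.

```python
# Sketch of keyframe decimation: drop baked keys that linear
# interpolation between their neighbours can reproduce within a
# tolerance (similar in spirit to Blender's F-curve Decimate).

def decimate(keys, tol=0.01):
    """keys: list of (frame, value) pairs, sorted by frame."""
    if len(keys) <= 2:
        return keys[:]
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        f0, v0 = kept[-1]
        f1, v1 = keys[i]
        f2, v2 = keys[i + 1]
        # Value linear interpolation would give at f1 without this key:
        t = (f1 - f0) / (f2 - f0)
        predicted = v0 + t * (v2 - v0)
        if abs(predicted - v1) > tol:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept

baked = [(f, f * 0.5) for f in range(11)]  # a perfectly linear motion
print(len(decimate(baked)))                # collapses to 2 keys
```

A linear run of baked keys collapses to just its endpoints, while curved motion keeps whatever keys are needed to stay within the tolerance.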
You can also skip that limitation completely by manually animating the objects falling down, and for this, motion paths turn out to be very useful: they show you the trajectory, and as long as it looks like a parabola, it's relatively believable. And we can do whatever we want; we have full control.

Basically, it's all about simple tricks. For example, we did lights turning on and off, which we obviously cannot have. We approached that trick with two plates, one non-emissive and one emissive, and we simply animated the emissive one in front of the non-emissive one, which gave the effect of his eyes turning on.

What about the ropes? I'll show you the ropes. In the project we have a rope that is anchored on both ends, and its middle swings gently in a light Martian breeze. This is normally done... well, it's not normally done, because we don't have ropes on Mars, but if we had, we would use cloth simulation or soft-body physics to achieve that effect. Instead, we can have two chains of bones that meet in the middle at a control point, and then we simply animate the control point in an elliptic pattern, adding extra swaying motion, and that gives us a believable motion of a rope swinging.

In summary: when an engine doesn't have all the bells and whistles, there are always some that it does have, and you can transform them in a way that makes it seem as though it had all of them.
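The rope's control-point motion, for instance, might be keyframed from a small ellipse like this. The axis sizes and loop length are invented for illustration; only the idea (an elliptic path that loops seamlessly) comes from the talk:

```python
import math

# Sketch of the rope trick: two bone chains meet at a mid control
# point, and that point is keyframed along a small ellipse to fake
# wind sway. Amplitudes and loop length are illustrative.

def control_point(frame, frames_per_loop=48, a=0.3, b=0.1):
    """Offset of the rope's mid control point at a given frame:
    an ellipse, wide on X (the swing) and shallow on Z (the lift)."""
    t = 2 * math.pi * frame / frames_per_loop
    return (a * math.cos(t), b * math.sin(t))

x0, z0 = control_point(0)    # start of the loop
x1, z1 = control_point(48)   # one full loop later: same place
print(abs(x0 - x1) < 1e-9 and abs(z0 - z1) < 1e-9)  # True (seamless loop)
```

Because the path closes on itself after one loop, the sway can repeat forever without a visible seam, which is exactly what you want for ambient motion like a breeze.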
You can simulate things like soft-body physics or rigid bodies, but it's a long, lengthy process, and you can skip it entirely by doing it manually; when you do, motion paths are a great help. Or you can use, for example, bones for a rope simulation.

All right, last notes. All these things in the bag, the topology restrictions, the texture restrictions, the lighting and animation limitations, determine the unique feel of real-time 3D graphics, a feel that we have learned to recognize and love. So even in the future, with technological limitations gone, we might still choose to limit ourselves. There are not so many things in this bag, just a couple, but the number of solutions, workarounds, and optimizations is infinite, and they're yours to take; they determine your art style, just like they determined the art style in our project. Hopefully, after this short presentation, you see how these everyday limitations affect your art, and who knows, maybe this realization can help you build upon your artistic expression. Thank you.