Welcome, everyone. Thank you for attending. Please make yourselves comfortable as we start this session. Today we'll delve into two integral aspects of Blender's arsenal: driver functions and shader nodes. This talk goes over all aspects of my visual workflow, and its main purpose is to be an overview of the key features I've been using for a long time now. With that said, some of these topics are intermediate to advanced level and assume you already know your way around the user interface, alongside some vector math operations. That goes for both folks watching online and the live audience here. I think most of you will be eager to nerd out about some of the processes shown, so we'll save room for Q&A by the end of the session and can continue later. A little bit of context about myself: my name is Luis Carabini. I'm an artist who has been focusing on real-time rendering and concepting for a great part of my career. Having spent four enriching years as part of the Sketchfab team at Epic Games, one of my most fulfilling achievements there was coordinating the Blender Council, a short-term initiative to spread Blender awareness and adoption within the company. With over 12 years of experience in the realm of 3D content creation, I've had the privilege of witnessing Blender's remarkable evolution into a Swiss Army knife of creative potential. For sure, there are areas that are still very unpolished and require further development, especially for real-time, such as baking and authoring textures. But with those limitations in mind, I've made my peace by leaning into the areas where it really shines, namely offline rendering workflows. Here's an example of the same game-ready mesh being converted for product proposals and 3D-printed collectibles. As much as those renders may trick the eye, they're actually simulating scale and light conditions to really sell the imperfections of an actual desk toy. 
Everything you see here is mostly Eevee and Cycles. Nowadays I use Blender all around, from the initial blocking stage to the final piece, and I love that, you know. The thing about porting and establishing workflows is that you're always, kind of unintentionally, dealing with the biases and quirks of the design decisions behind them. And sometimes these visions are not aligned in tandem, even if the original intention is genuine, right? That alone should be enough of an impact for the more statistics-oriented people, especially if you're just entering the market. The tools should always have ease of use as the main goal, right? So I try to stay as close to my craft, in and out of the digital realm, as possible. Building Gunpla and other model kits is my main hobby nowadays; I would even dare to say it's almost a side profession by now. And I recommend every artist to embrace side activities that enrich their understanding of the world and the subject they specialize in. If you're an environment artist, go travel, take reference photos, visit your local car shop, ask the mechanics, understand how the engines and models work behind the scenes. Experiences like that really elevate your art to the next level. Talking more about hobbies, airbrushing is a technique I use widely to customize Gunpla model kits. You learn about the different chemical properties of enamel and acrylic paints, about air pressure, atomization, and thinning, all in order to get a very smooth coverage of the plastic surfaces. There's also the work of polishing, sanding, masking. The possibilities are endless, and this contributed a lot to the way I try to represent my concepts in digital form. All of this led me to create this ever-expanding personal collection of surfaces. And I know most of you have also been dabbling with the asset browser for a long time. 
And well, that improved my output exponentially regardless; I can waste less time on user interface hassles and really focus on ideation. On top of that, every new seed that is generated can be saved for later reusability, right? Here you can see that all those materials are interchangeable, which makes it possible to generate a very large number of variations on the initial structure, all with a high degree of flexibility. And the good thing is that it doesn't lose resolution, so it remains sharp and preserves the details, which really helps if you want to bake them down into a texture and export later. So let's dive into my way of working. Beforehand, it's important to disclose that all content in this presentation was made in version 3.6, before the AgX view transform was introduced in 4.0. In 4.1 the Principled BSDF node also got more compact; as many of you may know, all inputs and options are still there, just organized differently. Well, let's start with lighting. Lighting is almost everything in your work; it defines the mood, right? To create a full material library, it's very important that you work in a hue- and saturation-agnostic environment for look-dev purposes. This essentially reduces any discrepancies across bounce reflections and ensures consistency among our colors, so my reds, blues, and yellows are indeed pure and not being skewed by ambient lighting. Simple lightbox, black-and-white HDRIs can be a good starting point for a solid foundation on that end. Another thing needs to be addressed, especially for the procedural parametric designs you saw beforehand: when I was looking at some references and wanted to replicate some of those ideations, it became evident that I couldn't achieve the same effects with basic polygonal primitives. 
Although those are cheaper and less expensive to load, I needed those roundness properties, and well, you can only do that with signed distance fields, right? So I was looking around, and I'm a big fan of Celestial Maze as well. Blender has good nodes by default, but they're not modular enough when committing to more complex setups like that. In comparison, you waste a lot of time creating things from scratch, and I recommend everyone going over the basics of vector math and functions before dipping your toes into signed distance fields. Both Celestial Maze and Erindale have great lessons on that. The Celestial Maze toolkit is free; you can find it at the GitHub link below, and it contains more than 160 utility nodes with basic primitives. Honestly, I wish they were included by default. You can save all of those in your library for quick access. It also includes some 3D ones, but I mostly focus on the 2D fields, as they're mostly cheaper. And that's essentially how the geometric pattern is done. Examples are the SDF box, the SDF n-gon with variable sides, and the equilateral triangle. Here I showcase how you can use the built-in SDF viewer node to preview the fields, or create a compare math node to pick whichever band you want. The farther out you go from the center, the rounder the outline will be, and you can also use the epsilon value to control its thickness. Pay attention to how there's an inversely proportional relation between the scale factor and the band size, which you may need to compensate for later on. If you tile that out with a modulo vector node, you're set up for success, and you can achieve a basic procedural pattern like that. As for other material setups from my collection, they are actually simpler than most people would expect. In this case, the wave and noise texture nodes are doing 50% of the heavy lifting. You can tweak the mapping and grunge scales to simulate tribal paint for all types of surfaces you want. 
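The SDF banding and tiling tricks described above boil down to a little math. Here's a minimal plain-Python sketch of that math (function names are my own, not the toolkit's nodes): a signed box field, a compare-style band pick with an epsilon thickness, and modulo tiling of the coordinates.

```python
import math

def sd_box(x, y, half_w, half_h):
    """Signed distance from (x, y) to an axis-aligned box centered at
    the origin. Negative inside, positive outside."""
    dx, dy = abs(x) - half_w, abs(y) - half_h
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))
    inside = min(max(dx, dy), 0.0)
    return outside + inside

def band(d, offset, epsilon):
    """Mimic the compare-node trick: pick one iso-band of the field.
    'offset' chooses how far out from the center the outline sits
    (farther out means rounder corners); 'epsilon' sets its thickness.
    Note that scaling the coordinates also scales distances, which is
    the inverse relation between scale factor and band size."""
    return 1.0 if abs(d - offset) < epsilon else 0.0

def tiled(x, y, tile):
    """Modulo-tile the coordinates so one SDF repeats across the
    plane, re-centering each cell around its own origin."""
    return (x % tile) - tile / 2.0, (y % tile) - tile / 2.0

# Sample the pattern: a rounded square outline repeated on a grid.
tx, ty = tiled(3.6, 3.6, 2.0)       # falls inside one repeated cell
d = sd_box(tx, ty, 0.5, 0.5)
print(band(d, 0.15, 0.05))          # → 1.0 (on the rounded outline)
```

Sampling this over a grid of pixels reproduces the basic repeating geometric pattern from the talk.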
For the more advanced users here, this tip may not come as a surprise, but it's such an overlooked aspect that I think it's worth reiterating. Very commonly you can spot artworks in the wild where the colors are not as good as they could be. This of course depends highly on the type of mood you're trying to convey, but linear RGB is the default option for color ramps, and while it may work for more analogous combinations, you're not getting the full potential of your gradients for complementary and triadic uses. HSL, hue saturation lightness, can help you achieve more vivid blends on your gradients. The reason is quite intuitive: linear RGB takes the shortest path to your outcome across the color wheel, so if your endpoints are too far apart, you end up with grays and desaturated values in between. Whereas with HSL, you're getting the full strength of your values by going the long way around. This technique is particularly useful for car paint and materials with Fresnel variation, especially on candy looks and complementary color combinations, as you can see here. One of the questions I get most from co-workers, and a lot of folks ask me, is how to achieve the frosted glass transition looks you can see in some of these works, where you really want a subtle variation from ultra clear to a light flat coat of paint. Well, that's a fairly simple setup as well, although it has some caveats here and there to make it physically accurate. You've got to remember to connect your values not only to the roughness property, which mostly affects non-glass surfaces, but also to the respective transmission values, which most people forget. This is why sometimes your glass shader appears broken or doesn't mimic the desired effect. The purer your gradient, the harsher the falloff will appear. Another factor is keeping your bright values painted into the middle of the spectrum. 
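To see why the gradient color space matters, here's a small comparison using Python's standard-library `colorsys` module, a stand-in for the color ramp's interpolation modes: blending complementary colors in straight-line RGB collapses to gray at the midpoint, while blending in HLS keeps the in-between values fully saturated.

```python
import colorsys

def lerp(a, b, t):
    """Component-wise linear interpolation between two tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def mix_rgb(c1, c2, t):
    """Straight-line blend in RGB: complementary pairs pass through
    gray at the midpoint."""
    return lerp(c1, c2, t)

def mix_hsl(c1, c2, t):
    """Blend in HLS space instead: the hue travels around the wheel,
    so saturation survives the in-between values."""
    h1 = colorsys.rgb_to_hls(*c1)
    h2 = colorsys.rgb_to_hls(*c2)
    return colorsys.hls_to_rgb(*lerp(h1, h2, t))

red, cyan = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)   # complementary pair
print(mix_rgb(red, cyan, 0.5))   # (0.5, 0.5, 0.5) → pure gray
print(mix_hsl(red, cyan, 0.5))   # a fully saturated hue instead
```

Blender's color ramp additionally lets you choose which way around the wheel the hue travels, which is the "going the long way" part of the technique.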
Yeah, you can also pay attention to how we are using the Ease RGB interpolation type on the color ramp to drive the clearcoat roughness input here. This is important to get a very sparse black-and-white gradient that can exacerbate this type of transition. Another situation requiring proper value awareness is when you're using the bevel shader to fake merged topology on areas of dense intersection, such as the case above. In this material, we have a combination of stacked shader techniques that rely on multiple normal inputs, and you've got to make sure to plug the bevel shader data not only into the clearcoat normal pass, which acts as an overlay on top of the most common PBR stack, but also into the layer weight node, which is driving a different effect. This tree may appear simple right now, but sometimes your layer weights can be inside node groups, and you always want to remember to expose those values properly. So, with a good base library out of the way, how do we proceed into exposing the values and making only the key parameters available in the main properties panel? Most people understand drivers as mainly being useful within animation workflows, but that couldn't be further from the truth. If you think about it, drivers give you access to almost 90% of the values anywhere. For an easier approach, I'll be using the urban camouflage material as an example, which can be replicated fairly quickly with only two multiplied Voronoi textures. Here, we'll be identifying the main artistic values that we aim to expose, up to five in total: the scale for each respective noise, and their colors, which account for three float arrays. First, we need to create five blank values in the custom properties panel at the bottom of the material tab. You can do like me and drag and drop that menu to the top of the list, then create a new property. You can tweak the maximum of the float; in the example here, I set it to 100. 
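Before wiring things up in the UI, it may help to see what a driver conceptually is: a live expression that reads an exposed property each time it's evaluated and feeds the result into a target value. Here's a toy model in plain Python (not the bpy API; all names are illustrative):

```python
class Driver:
    """Toy model of a Blender driver: an expression that reads a
    custom property on every evaluation and feeds a node input."""

    def __init__(self, props, key, subindex=None):
        self.props = props          # the custom-properties store
        self.key = key              # which property we read
        self.subindex = subindex    # channel index for float arrays

    def evaluate(self):
        value = self.props[self.key]
        # Float-array properties (e.g. a Linear Color subtype) are
        # driven per channel: one driver per component.
        if self.subindex is not None:
            return value[self.subindex]
        return value

# Five exposed values, as in the urban camouflage example.
props = {"Noise Scale": 42.0, "Camo Color": [0.2, 0.5, 0.1]}
scale_driver = Driver(props, "Noise Scale")
red_driver = Driver(props, "Camo Color", subindex=0)
print(scale_driver.evaluate(), red_driver.evaluate())

# Tweak the exposed property; the driver picks it up on evaluation.
props["Noise Scale"] = 100.0
print(scale_driver.evaluate())   # → 100.0
```

The point of the model is that nothing is copied: the shader input stays bound to the property, which is why a single panel slider can retune the whole material.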
Once you're done, right-click on it and choose Copy as New Driver, then right-click on the input within the material editor you want it mapped into and select Paste Driver, and you should be ready to go. But then you may ask, Luis, how do we do it for colors? Well, colors actually fall under a float array type property, and the process isn't that much different. You just need to switch the type dropdown and set the subtype to Linear Color. Simple as that, and then you can tweak the RGB. Here's the result, with all five properties successfully driving the values in the source shader material: I can control the scales and even switch the colors of the camouflage. Moving on. It's always great to have your source .blend files mimicking their intended purpose. In this case, this is the root for my entire library, so I like keeping the Outliner view set to list all data types. With that, it's easier to mark them as assets, as well as have a little camera set up to quickly render thumbnails on the fly when I finish altering each material and each driver for each source scene. In contrast, the auto-generated thumbnails work well, but most of the time you really want to make your content easily readable and shown in its best intended use case. The light setup that I created adds a bottom ring light to accentuate the asset's presence against the plain gray background of the browser. It's a subtle change, but it helps a lot. To try a custom thumbnail, select the asset (remember, you need to be in the source .blend in which it's located), click on the gear icon on the right, and then on the folder icon where it says Preview. There's a catch to using custom thumbnails, however: depending on the final image size, they can add up in memory and in the loading times of the places they appear at startup. So for that, I opted to compress their file size further using an open plugin called SuperPNG. It still beats Photoshop's Save for Web compression method by miles. 
This one was reduced from 112 to 15 kilobytes with no apparent visible artifacts, so they load quickly, and it also helps your Blender startup times in a way. Now, decals. Decals differ from shaders: they are basically a material applied to a transparent plane alongside a Shrinkwrap modifier, so they are saved as an object-type asset in the library. Due to that, we're going to be exposing the drivers in the object panel instead of the material properties, for easy accessibility. The advantage of SDF-based decals is that, differently from image-based decals, you can have more with less, since each design can be modified according to your needs. See how simply the crosshair decal can be replicated with the op polar node and the SDF box nodes. When using the op polar node, you can avoid weird interpolations on stepped decals by making sure to use an integer as the property type for the count number; that way, in-between values will be skipped. I did this for art direction purposes, but maybe you want those in-between values. I also limited the crosshair stripes to two. Many of you may also be familiar with the DECALmachine add-on, right? You can replicate some of its techniques with SDFs too. To do that, you can disable the decal's ray visibility for glossy surfaces and also their shadows. This helps to consolidate their appearance when high-resolution renders are taken, and especially under bright lighting conditions, this is a thing. Above all, you always want to use UV mapping for this technique, as the generated object-space coordinates don't translate well when using the Shrinkwrap modifier; the auto-generated coordinate mapping doesn't account for the offset of the shrinkwrap. By using UVs, you're always enforcing that it will map perfectly onto the surface. UVs are also beneficial for situations where you want to duplicate the surface underneath and map the decal onto it with accurate precision. Just pay attention: the center will be offset, with the origin always being the 0,0 coordinate. 
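The polar repetition behind the crosshair decal can be sketched in plain Python too (my own stand-in for the toolkit's op polar node, not its actual implementation). It also shows why an integer count matters: with `count=4`, a point at 90° folds exactly onto the 0° sector, while a fractional count would leave a mismatched seam.

```python
import math

def op_polar(x, y, count):
    """Repeat a 2D field 'count' times around the origin by folding
    every point into one angular sector (the polar-repeat trick).
    Any SDF evaluated on the returned point appears 'count' times."""
    sector = 2.0 * math.pi / count
    angle = math.atan2(y, x)
    # Fold the angle into the sector centered on 0°.
    folded = ((angle + sector / 2.0) % sector) - sector / 2.0
    r = math.hypot(x, y)
    return r * math.cos(folded), r * math.sin(folded)

a = op_polar(0.0, 1.0, 4)   # a point at 90°...
b = op_polar(1.0, 0.0, 4)   # ...and one at 0°
print(a, b)                 # both fold to (1.0, 0.0), up to float error
```

Evaluating one stripe SDF on the folded coordinates gives all four crosshair stripes at once, which is why a single count parameter can drive the whole design.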
Here you can see the end result with all parameters exposed. The target decal is made the same way as the crosshair, but masking a circle within the box SDF and inverting the alpha to obtain that result. Worth mentioning: you can leverage the SDF nodes to create procedural bump map decals. This technique is not constricted to generic alphas, and it can be very useful for composing tertiary levels of detail where the insets won't have a significant impact on the silhouette of the model. Here's an example of its use on the Evangelion artwork revamp based on an artist named Miratio. I found it to be a great subject of study to test this method, as the nature of the design has lots of curved surfaces. Adapting a concept into 3D is also a form of visual translation. The fundamental goal is to effectively convey an idea from one medium to another while retaining its essence and meaning. It involves considerations of depth, proportions, and spatial relationships, just as language translation involves syntax, grammar, and cultural context. That's why blocking out your layout is essential; you cannot move onwards without doing so beforehand. A successful translation of all those factors put together is what designers refer to as appeal. Appeal unifies the final piece, and just like the principles of Gestalt, you can only attain it when all other aspects are working in harmony, just like the saying: the whole is greater than the sum of its parts. Now we finally arrive at geometry nodes, the part I believe most people were waiting for. One of the in-house tools I adapted to my personal workflow is known as the inset maker. I created this group to automate something I did fairly often, which is creating variable cuts and indents. I used this to create the details in the car's rubber tires as well. Pay attention to how the inset scale varies when I shift the object's origin point. 
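The "circle masked within the box, alpha inverted" construction for the target decal maps onto the standard SDF boolean operations. A small plain-Python sketch, with my own helper names, of how such a mask is composed:

```python
import math

def sd_circle(x, y, r):
    """Signed distance to a circle of radius r at the origin."""
    return math.hypot(x, y) - r

def sd_box(x, y, half):
    """Signed distance to a square box with half-extent 'half'."""
    dx, dy = abs(x) - half, abs(y) - half
    return math.hypot(max(dx, 0.0), max(dy, 0.0)) + min(max(dx, dy), 0.0)

def subtract(d_a, d_b):
    """Carve field B out of field A: the standard SDF boolean."""
    return max(d_a, -d_b)

def alpha(d):
    """Hard-edged alpha from a field: inside → 1, outside → 0."""
    return 1.0 if d < 0.0 else 0.0

def invert(a):
    """Flip the decal mask, as in the inverted-alpha step."""
    return 1.0 - a

# A target-style rim: the box with a circle carved out of its middle.
d = subtract(sd_box(0.45, 0.0, 0.5), sd_circle(0.45, 0.0, 0.3))
print(alpha(d))   # → 1.0: the rim of the box survives the carve
```

The same `subtract`/`invert` pair, fed into a bump input instead of alpha, gives the procedural inset decals mentioned above.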
Breaking it down: first, we are using the Instance on Points node to map the source object onto every vertex. This is controlled by a vertex group which is being exposed as a selection input. The Align Euler to Vector node is very important at this stage as well; without it, our source orientations won't be aligned correctly to every vertex normal. Then we proceed to boolean them out with a Mesh Boolean node. This last step can be very costly on performance, depending on the polycount of the boolean source, right? In addition, since I wanted to replicate a parametric effect, I decided to add an RGB curve node to drive the scale of each boolean mesh. That allows me to have more control over the falloff of their scale relative to the source origin point, and the result is very cool to play around with, as you can see. This slide showcases how the selection is being exposed as a vertex group: I can handpick which vertices I want the boolean to happen on, allowing for a higher degree of customization. Also, make sure your source boolean's orientation is relative to the world; sometimes you may want to apply its scale before proceeding to use it as a cutter. Nevertheless, when you have curved source objects or more complex surfaces, normal artifacts are sure to happen, and you need to be prepared to solve those with a Data Transfer modifier. Here, using another vertex-group selection, we can specify which areas of the mesh we want the normals migrated over, and successfully override the artifacts. Some of the kitbash parts that I use are also composed of intricate combinations of modifiers and shape keys. Just like drivers, shape keys can also be leveraged outside of animation use cases, and they're a strong foundation for concepting and detail stacking, right? Next, we have a hands-on example of all those techniques being combined to assist in modeling and concepting. 
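The curve-driven scale falloff can be sketched as a simple function of distance from the source origin. Here's an illustrative Python stand-in (a smoothstep in place of the hand-drawn RGB curve; all names are my own):

```python
import math

def ease_falloff(t):
    """A smoothstep stand-in for the curve node: 1 near the origin,
    easing down to 0 at the edge of the falloff radius."""
    t = min(max(t, 0.0), 1.0)
    return 1.0 - (3.0 * t * t - 2.0 * t * t * t)

def instance_scale(point, origin, radius, base_scale):
    """Scale each boolean cutter by its distance from the source
    origin, like the curve driving the per-instance scale."""
    d = math.dist(point, origin)
    return base_scale * ease_falloff(d / radius)

# Cutters near the origin stay full size; distant ones shrink away.
for p in [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]:
    print(p, instance_scale(p, (0.0, 0.0), 2.0, 1.0))
```

Moving the object's origin point shifts `origin` here, which is exactly why the inset scale pattern slides across the mesh when the origin is moved in the demo.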
I can control the bevel variation of the tire, its thickness, tread depth, and many other visual qualities that would otherwise be counterintuitive to model and change every time the client wants. You can see how powerful this is, and it's all within Blender already. Please ignore the fact that they're displayed as purple; those are circular dependencies from this being an old file. Here's how everything is set up. We are selecting key values of the Bevel modifier, combined with a simple Transform Geometry node to tweak its radius. It also has an Array modifier to instance and bend the source mesh around. Hopefully we can have the Bevel modifier as a node in the future; that way I can convert this entire structure into geometry nodes. That's the goal for most of the procedural primitives shown here. Another similar example is a procedural hubcap kitbash part. This one can achieve all types of form variations: you can control the size, the distance, and the circular visual elements for bolts and other insets. You can see how I'm using a simple Mesh Boolean node again to control how a cylinder primitive is offset into the main shape. Then I proceed with the same combination of Array and Simple Deform modifiers to expose and repeat the pattern along the shape. Although we can already port those into geometry nodes, it's essential that the Bevel modifier becomes a node, so we can make better use of its tolerance and angle settings. To sum it up, here are some takeaways. I think it's really important to acknowledge that an understanding of appeal is one of the most sought-after qualities in 3D artists nowadays, and the current job market, mixed with other factors, has unfortunately been overlooking this aspect. In a real production environment, the concept artist is also responsible for providing the original references, which helps in guiding the translation process. 
This definition provides the necessary context with which 3D artists can then improve and refine the design, taking the experience to the next level, right? And, you know, learning curves can be intimidating as a beginner, and everyone who's been through it can relate; in fact, some of my 2019 talk was mostly addressing that topic. Here are some other concept art adaptations from Rock Age, a great concept artist: side views, top views. I would like to thank everyone working at both the Blender Institute and Blender Studio, the staff, and the opportunity to present this year. It's always really great to be here and connect face to face with everyone. Also a big shout-out to all those veterans and masters of equal tenure to myself, if not more; some are retired, others continue developing add-ons and sharing their discoveries every day. So, a big standing ovation for them. Last but not least, a friendly reminder to donate to the Development Fund, if you haven't done so. And, Q&A. Yeah. Yes, yes, yes. You can find the Celestial Maze toolkit on his GitHub, and it's free; everyone can download it and start using it. As for the Data Transfer modifier, yeah, that's mostly the case. Nowadays I've been trying to replicate that setup with geometry nodes, and you can already do that, but there are some quirks; you have to pass the attributes through to the material to do that. The reason I still use the Data Transfer modifier is that you can preview it in solid object mode, instead of Eevee and Cycles, so it's easier that way. But it's always a case-by-case situation, man. Sometimes you want to use the vertex groups to isolate and see which parts you want to transfer the normals to. Sometimes you also want to offset, do a little shrink of the mesh, to map correctly. Yeah. Yeah. Yeah, so for that there is a dropdown to select the vertex group, so you need to leverage the vertex group to do that. Yeah, yeah. 
Yeah, it's more like, you would have to show me later, yeah, and we can figure it out together. Yeah, so sometimes on the car paint especially, it's accounting for the bounce reflections. It's more of an artistic choice; you can choose to keep it, but I think the shadows are the most important one. Yes, this is a great question, and I know it. So, yes, you can see the bevel is changing slightly there, and it won't be perfect until we can have it as a node per se. This one? Yes, it's a real bevel. It's a real bevel. You can see it even has some normal artifacts still there, if you pay attention. But at a tertiary level of detail, if you're seeing the prop from afar, for concepting it works well. Again, if you're doing production work, then you're going to retopo after that and make sure to fix those. But it's a Bevel modifier; it's driving a Bevel modifier on the stack here. This is the stack. The weighted normal, well, it's doing the weighted normals. You can also do that in the node group itself; you can migrate that. In fact, sometimes you want to constrain the artifact and add loops procedurally to shrink that gradient split, as he was mentioning. We can't do that procedurally yet, so the Weighted Normal modifier helps to stay there, doing the normal averaging. So, yeah, thanks everyone. I'll be open for questions, and catch me on the roof later as well. Thank you.