Hello everyone, my name is Leif Pedersen. I'm a technical artist here at RenderMan at Pixar, and today I have the luck to present one of our amazing RenderMan developers, Philippe Leprince, who will be talking about his close collaboration with Industrial Light & Magic and about implementing their layered material system, Lama, which has been used on Star Wars and The Irishman, among other films. To tell you a little bit about the format for today: we'll go first with our show reel, which will give you a sneak peek at the amazing work done by studios using RenderMan, and you'll also see some Lama work, of course, with Star Wars and The Irishman, so watch out for that. Then Philippe will present, and then we'll walk through some live demos so you can see how it all unfolds in a more practical way. Then we'll be taking some questions, so make sure you ask your questions in the questions tab and not in the chat, because the chat gets very, very busy and it's hard for us to sort through it all. So without further ado, here's the RenderMan show reel.

This is a sneak peek at some of the new shading features in RenderMan 24. We'll start with Lama, which is the next generation of materials. Lama stands for layered materials. It was developed at Industrial Light & Magic to replace their previous solution, a monolithic BXDF with a pre-arranged combination of lobes, similar to PxrSurface. The main idea was to separate the lobes to allow arbitrary combinations and make it easier to add new lobes whenever necessary. Lama is a node-based system with a small number of user-friendly nodes. It is currently implemented in RenderMan's RIS and in Mari, and it has become a production-proven tool at ILM.

In Lama you will find three types of nodes. The main BXDF node, which is attached to your object: it renders nothing by itself but controls the other connected Lama nodes. The layering nodes, which let you combine two nodes to build your material's final appearance. And the BXDF nodes, the lobes that respond to your scene's illumination.

And this is what a Lama material looks like: a node graph. The highlighted nodes in the image are Lama nodes; the other nodes are regular patterns. In this case, we first combine a diffuse and a dielectric lobe to create a shiny plastic, and then combine the result to add a layer of diffuse dust. In PxrSurface, the diffuse lobe is at the bottom of the stack; in this example, it is at the top. Lama brings a new level of flexibility.

But now let's take a look at the main BXDF, LamaSurface. LamaSurface has only a few settings to control downstream nodes. It sets up the material per side, controls the execution of some features, and triggers custom AOV evaluation. Let's take a look at the material setup. By default, we have a single-sided material. In this case, the number of sides is set to one and a simple diffuse node is connected to the front material plug. Now, if I set the number of sides to two, the same diffuse material will be used by the back side as well. But I can also connect a different lobe to the back material plug. In this case, it is just a conductor node, but you can of course have a full graph on each side.

Let's take a look at what Lama has to offer. First off, you get nine different lobes that let you build anything you need, from hard surfaces to hair and fur. Then we have two ways of combining these lobes, horizontally and vertically.
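Schematically, and just as a sketch rather than the exact node math, the two combiners behave like this. Horizontal layering blends the responses of two lobes the same way you would blend two images,

$$f_{\text{mix}} = (1 - \alpha)\, f_A + \alpha\, f_B,$$

while vertical layering puts one lobe on top of another and attenuates the base by whatever energy the top layer lets through,

$$f_{\text{stack}} \approx f_{\text{top}} + T_{\text{top}}\, f_{\text{base}},$$

where $T_{\text{top}}$ is the fraction of energy transmitted by the top layer. Both operators are covered in detail below.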
To give you a better sense of how much is available, let's take a look at each node.

LamaConductor is designed to create metallic surfaces. It has the usual controls like Fresnel and anisotropy, as well as a couple of new features: full energy conservation and GGX tail control. Let's take a look. In our previous BXDFs, PxrSurface included, specular lobes suffered from energy loss at high roughness. Lama nodes implement microfacet multiple scattering using Emmanuel Turquin's technique. The result is perfect and fast. It is used by all specular lobes, as well as by the diffuse lobe, since the Oren-Nayar algorithm suffers from a similar energy loss at high roughness. In the Oren-Nayar case, the algorithm has been tweaked to avoid oversaturating the colors, which is problematic when your albedo texture has been painted from reference.

Now let's move to the other features. Specular haziness, or tail control, is a way to create a halo around a sharp specular response. This is often visible on metals or shiny dielectrics with micro scratches. In the past, artists often had to combine two specular lobes to achieve that effect, which increased render times. This system combines two roughness values into a single specular lobe, at minimal cost. It has the same look as the GTR lobe used by the Disney BRDF, although there the GTR tail cannot be tweaked. Let's go back to our nodes now.

LamaDielectric can be used for glass, varnishes, coatings and so on. Energy conservation is a great improvement for frosted glass. It includes the usual controls as well as a long-awaited new feature: dispersion. This will make quite a few people happy right now. Note that LamaDielectric offers more granular shadowing control than our previous offerings.

LamaDiffuse is your classic Oren-Nayar node with better energy conservation. Note that in many Lama nodes, the energy compensation slider allows you to tweak the look to match older assets that were not energy conserving.

LamaEmission, as usual, is for low-intensity emission like small LED screens.

LamaSSS handles all subsurface scattering duties. It implements both our path-traced subsurface algorithm and the Burley algorithm, for a different look.

LamaSheen is used for cloth, peach fuzz and other microfiber scattering effects. This lobe is energy conserving as well.

LamaTranslucent evaluates back-facing diffuse illumination. It is typically used for tree leaves, paper or thin cloth. It is a separate node to make double-sided materials more controllable and to potentially output to a different AOV.

Lama Tricolor SSS is the other subsurface scattering node. This model was introduced in RenderMan 20, and ILM likes its versatility. They found it useful to apply tricolor SSS in some areas of a model and LamaSSS in other areas, and that is one of the big advantages of this framework: the ability to blend different subsurface scattering models in the same material.

LamaHairChiang implements the Chiang hair model, which has been used very successfully by Walt Disney Animation, Pixar and ILM. It is energy conserving, efficient and easy to control, reproducing a wide range of hair and fur with just a few intuitive parameters.

LamaMix is our first layering node. It is a horizontal layering node, which means that it blends two lobes the same way you would blend two images. It is similar to the way you can blend multiple PxrLayer nodes.
There are two modes: normalized, which makes sure the resulting blend is still energy conserving, and additive, which lets you go crazy, but at your own risk of course.

LamaStack is a vertical layering node, meaning that we have a layer on top of a base material. If the top layer is transmissive, LamaStack can compute volumetric effects like absorption based on the top layer's thickness. This is a more physical way to combine two materials.

To conclude this presentation, let's talk about ILM's experience with Lama. Look-dev artists quickly adopted the new system, as it offers unprecedented flexibility. It also opens interesting possibilities to optimize some materials, but more on that another time. Performance has been great across the board, although one could imagine less experienced artists piling on lobes for no reason and significantly slowing the renderer. But in practice, with the same number of active lobes, performance is the same as a monolithic BXDF. Finally, ILM reconstructed their previous monolithic BXDF with Lama so they could reuse all their assets. This technique can of course be used to support other models like Disney's BXDF or Autodesk's Standard Surface.

This is something that came up during the first session yesterday, and I made a new slide on it. This is an example of a simple metallic workflow, where Lama uses the same albedo for the diffuse color and the specular color, as PxrSurface does. So here you have the Lama nodes that reconstruct a very simplified Disney BXDF, and you have two PxrMix nodes that allow us to get exactly the same result with the metallic input. And at the bottom we have the formula that you can use to get those results. These textures were created with Substance Painter using a PBR metallic workflow, and it's quite easy to get exactly the same results; you just have a couple of extra nodes. Here is a quick render I did this morning to show you Disney versus Lama, and it matches quite well. There are a couple of differences: the diffuse models are different, and the Lama specular lobe will look brighter at high roughness because it won't lose energy the way PxrDisney does.

So Lama will be available in RenderMan 24 in Houdini, Katana, Maya and Blender. We are actively working with ILM to improve and extend Lama for later releases. After 24, we will implement Lama-based material exchange via MaterialX and USD. That means we will be able to ingest a MaterialX document and reconstruct your material using Lama nodes. And of course Lama will come to XPU, as it represents for us the future of material creation.

Now we're going to move on to another important aspect of shading in 24: the next step of our OSL adoption. C++ patterns are deprecated; all patterns are now written in OSL. We improved our OSL support to make sure we could translate our pattern library to OSL without any functionality loss, and that allowed us to implement exciting new nodes, but on that a little later. 24 uses the latest OSL 1.11 library, delivering all the new features people have been waiting for, including SIMD execution for maximum performance and compatibility with the MaterialX standard library. Full OSL adoption is important to help our users transition to XPU, where OSL is the only supported shading language. Note that closures are still not supported, as we feel our C++ BXDF API still offers a lot more flexibility to advanced users.

So this is one of the new OSL patterns that we are going to release with 24: the hex tiling pattern. So what is it? It's a new way to seamlessly tile textures. You take a seamlessly repeating texture, you tile it, and you wish you didn't see the repetition, but you do. Enter hex tiling. The repetition is pretty much gone with the default settings, and with a couple of tweaks it can look really good.

So how does it work? The white grid shows the texture grid. The idea is that you randomly pick hexagonal image areas and blend them together, like so. Another hex tile, and another one. The center of each tile contains unmodified pixels, and the hex tiles blend progressively into each other. This is a visualization of the blending weights. It works really well with repeating textures that look fairly uniform. One of the key aspects of the technique is the blending between the tiles: it is designed to preserve the contrast in the blending areas.

Let's take a look at a simple example. Here we have a PxrTexture plus PxrManifold2D on the floor, and a PxrMultiTexture plus PxrRoundCube on the sphere and statue. The repetition is obvious. Here we replace PxrManifold2D with PxrHexTileManifold and just enable hex tiling in PxrRoundCube. That's a pretty significant improvement with close to zero effort. Let's enable the grid, and you can see I tiled the objects at different frequencies to get something I like. This render shows the contrast-preserving blend in action: the colors are not faded, and it is hard to tell where the blending areas are. Now I turn it off, and the colors are more washed out in the blending areas. Let's look at it again: the contrast is on; now it's off. It's not quite as contrasty, and you lose a lot of nice detail.

Another thing we can fine-tune is the width of the blend, which is useful for some types of pattern. Here we have a wide blend between the hexagonal tiles, and I will just move to the next width, 0.5, and then 0. With this texture you start to feel the hexagonal shape of the tiles, so a larger blend works better. So I will step back: 0.5, 1, 0.5, 0. And if you look at the top, you start to see the hexagonal tiles. Another example: this is just a flat texture on the plane; I added some displacement and normal maps, and it looks pretty okay. Note that PxrRoundCube can now handle normal maps correctly as well. And one last example. It is a really effective technique that will save a lot of time.
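The contrast-preserving blend is the part worth understanding: a plain weighted average of overlapping tiles lowers the variance of the signal, which is exactly the washed-out look shown above. Here is a minimal OSL sketch of the idea, in the spirit of the published histogram-preserving tiling work. It is an illustration, not PxrRoundCube's actual code, and it assumes the three overlapping tile samples, their weights and the texture's mean color are computed upstream; the `preserve` slider is made up for the example.

```osl
// Contrast-preserving blend between three overlapping hex-tile samples.
// A sketch only -- not the shipped implementation.
shader HexBlendSketch(
    color tile1 = 0, color tile2 = 0, color tile3 = 0, // overlapping tile samples
    float w1 = 1, float w2 = 0, float w3 = 0,          // blend weights, w1+w2+w3 = 1
    color meanColor = 0.5,  // average color of the source texture
    float preserve = 1,     // 0 = plain blend (washed out), 1 = full preservation
    output color result = 0)
{
    // a plain weighted average reduces variance, which reads as faded color
    color plain = w1 * tile1 + w2 * tile2 + w3 * tile3;
    // renormalizing the deviation from the mean by the L2 norm of the
    // weights restores the contrast a single tile would have had
    float norm = max(sqrt(w1 * w1 + w2 * w2 + w3 * w3), 0.0001);
    color preserved = (plain - meanColor) / norm + meanColor;
    result = mix(plain, preserved, preserve);
}
```

At a tile center one weight is 1 and the others are 0, so the formula returns the untouched texture; only the overlap regions are rescaled.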
So, next feature: phasor noise. It's a new type of noise that was presented at SIGGRAPH last year and was used by ILM on The Rise of Skywalker, for the sand dunes and a number of other effects. There is a really vast texture space to explore, and we ship our studio's implementation, which has a lot of nice features. So phasor noise, what is it really? It creates wave-like patterns that can be oriented and distorted in many different ways. The waves can have different shapes and morph into each other. It works a little bit like Gabor noise, but it also has contrast-preserving features that prevent the grayish areas you can sometimes see in Gabor noise. Let's take a look at a few examples. As I mentioned, you can create a very wide range of patterns, and phasor noise has a lot of controls. You can choose different types of wave shapes, you can orient them with a texture or with a fixed direction, you can modulate the wave density in a lot of different ways, and you can use sophisticated fractal loops to add visual complexity.
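For intuition, here is a minimal OSL sketch of the core phasor-noise construction. This is not Pixar's implementation: the parameters are made up for the example, and a real version would add wave profiles, orientation textures, density modulation and fractal loops on top. The key trick is summing Gaussian-windowed plane waves as a complex number and keeping only its phase with atan2(); because the phase has constant amplitude, the result stays contrasty instead of averaging toward gray the way Gabor noise can.

```osl
// Minimal phasor-noise sketch: random Gaussian-windowed plane waves are
// accumulated in the complex domain, then only the phase is kept.
shader PhasorSketch(
    float frequency = 25,                  // wave density
    float bandwidth = 4,                   // kernel falloff
    vector direction = vector(1, 0.3, 0),  // fixed wave orientation
    output color resultRGB = 0)
{
    point p = transform("object", P);
    point cell = floor(p);
    float re = 0;
    float im = 0;
    // accumulate randomly placed kernels from the 27 neighboring grid cells
    for (int dx = -1; dx <= 1; dx += 1) {
        for (int dy = -1; dy <= 1; dy += 1) {
            for (int dz = -1; dz <= 1; dz += 1) {
                point c = cell + vector(dx, dy, dz);
                point center = c + (vector) cellnoise(c);  // jittered kernel position
                float phi = M_2PI * (float) cellnoise(c + vector(11, 7, 3)); // random phase
                vector d = p - center;
                float w = exp(-bandwidth * dot(d, d));     // Gaussian window
                float a = frequency * dot(normalize(direction), d) + phi;
                re += w * cos(a);
                im += w * sin(a);
            }
        }
    }
    float phase = atan2(im, re);          // the phasor field
    float wave = 0.5 + 0.5 * sin(phase);  // apply a sine wave profile
    resultRGB = color(wave);
}
```

Swapping the final sine profile for a sawtooth or square function is what produces the different wave shapes mentioned above.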
Phasor noise is pretty amazing to play with, and it's so flexible it's probably worth a contest one day. To finish, let's have the mandatory UI shot, where you can see that there are a lot of parameters. So we will ship with a few presets to get people started.

Another cool feature of 24 is that OSL allowed us to revisit our bump mapping workflow, and we decided to go with something quite different. We took advantage of a new paper from Morten Mikkelsen which redefines bump mapping in terms of surface gradients. Older bump techniques, that is, height and normal maps, can be reformulated that way, and mixing surface gradients gives predictable results with very little work. It works much better than in previous versions of RenderMan. Our main bump mapping node is now PxrBumpMixer. It takes surface gradients as inputs and outputs the final normal, ready to use by your BXDF. PxrBump is still available, of course, for backward compatibility, and PxrNormalMap outputs surface gradients as well. So where are the surface gradients coming from? Well, pretty much every node can output a surface gradient through a new NG output plug.

Here is a quick demo. We directly use PxrWorley's NG plug to feed a PxrBumpMixer connected to a LamaDiffuse and a LamaDielectric, so in practice we just created a simple plastic in Lama. Here I'm adding a fractal on top, and I'm just going back to my mixer and changing the amount of blending between the two until I like it. Then I can choose to maybe invert the noise to see if that looks a bit better, or isolate one layer versus the other. All those things become quite easy to do and very predictable. Of course you have three blending modes: over, add and subtract. So you can completely mask some areas, to bump only a small portion with a texture. To recap, we introduced a new workflow around PxrBumpMixer: many nodes can output surface gradients to create a final bump signal, and it is more flexible and predictable than our previous implementation.
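To make the surface-gradient idea concrete, here is a minimal OSL sketch following Mikkelsen's published listing. It is an illustration rather than PxrBumpMixer's actual code, and the two height inputs and the blend parameter are made up for the example. The point is that each bump signal becomes a surface gradient, gradients combine by simple addition, and only one normal is built at the end, which is why the mixing stays predictable.

```osl
// Surface-gradient bump sketch after Mikkelsen: convert height signals to
// surface gradients, add the gradients, build a single perturbed normal.
shader SurfGradSketch(
    float height1 = 0,  // first bump signal, e.g. a Worley pattern
    float height2 = 0,  // second bump signal, e.g. a fractal
    float blend = 0.5,  // how much of the second signal to mix in
    output normal outNormal = 0)
{
    // screen-space derivatives of position define the local surface frame
    vector sx = Dx(P);
    vector sy = Dy(P);
    vector r1 = cross(sy, N);
    vector r2 = cross(N, sx);
    float det = dot(sx, r1);
    // surface gradient of each height field (Mikkelsen's listing)
    vector g1 = sign(det) * (Dx(height1) * r1 + Dy(height1) * r2);
    vector g2 = sign(det) * (Dx(height2) * r1 + Dy(height2) * r2);
    // mixing bump signals is just adding their surface gradients
    vector g = g1 + blend * g2;
    outNormal = normalize(abs(det) * N - g);
}
```

The over, add and subtract modes mentioned above amount to different weightings of that gradient sum, which is why masking or inverting one layer never destabilizes the others.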
So this is it for me. Thank you very much for your attention, and I will pass the mic back to Leif.

Great talk. I'm going to close your presentation over there. Great. So we're going to mix this section, with the video for phasor noise and a bunch of other features, with some of the questions from the audience, so that we can make the most of the time we have left. One of the interesting questions for phasor noise and hex tiling: can you actually drive any of the new patterns with surface attributes, primvars or anything else? Sorry, can you hear me? Yes. Sorry, I think there was a connection problem. Yeah. The question was, can hex tiling pick up frequency attributes off the mesh it's applied to, for example, or any of the other attributes? Yeah, potentially you can do that, yes. Okay, cool. And is there any layering performance advantage or disadvantage to using Lama against, for example, the Disney BXDF or PxrSurface? It's the same cost; it's the number of active lobes that matters. So if you have a configuration of your PxrDisney that uses mostly diffuse, specular and sheen, for example, it will have exactly the same cost as those three lobes in a Lama network. Cool.

And I mean, I was just doing some simple sand there in the demo, but it showed the potential of bump, and you can combine flakes, for example, or other kinds of interesting nodes in the system. And you've wired this new system up, right, with our PxrTexture and a bunch of other OSL nodes, to get the normal, it's called NG now, into the bump mapping directly? So no intermediary step? Yes. What are the benefits there? We did talk about some benefits in speed; I think you were mentioning a 2x speedup over competing models. Yeah. So it's a bit faster, and it's very easy to take the output of any node, compute, for example, the luminance of that signal, use it to compute the surface gradients, and then let PxrBumpMixer use that. It means that suddenly you are not limited the way you used to be with PxrBump and PxrNormalMap. Another great advantage is that you can mix normal maps together, for example, and get the correct result, or mix different normal maps and bump maps, and it works as expected.

That's awesome. We're seeing a little bit of OpenColorIO here in your new ACES workflow, which is super interesting. I mean, you can currently use it in 23, but you have to do the pipe manually. And we have some cool new bloom features here, which are interesting; you can actually output these artifacts to comp, right? Yeah, absolutely. There is a new bloom filter that allows you to create blooming effects, and you can output the different layers separately if you want to recomposite everything yourself.

I have a couple more questions here. Is Lama available in the non-commercial version? Yes, it's going to be completely available for non-commercial, and it will be available in all bridge tools, including Blender. In fact, we might even have a little demo of it here, so stick around for that. Another question: does Lama allow heterogeneous volumetric interiors controlled by, for example, an OpenVDB asset? At the moment, this is not possible. It's a general limitation of our internal APIs; it's definitely on our list of things to do, but we can't do it at the moment.

Another good one: what are the main differences between Lama and MaterialX? So Lama is a set of nodes that allow you to create a material, well, a BXDF really, so the response of your surface to light. MaterialX is a high-level description. It's something that will say: I have a diffuse, and I have a specular on top, and I have a little bit of sheen, and here are all the nodes that create the patterns feeding those things. And it means that, at some point in the future, we will be able to ingest a MaterialX document and translate it into a Lama network of nodes and patterns, and then you will get the same result as in another renderer. So this is what we're going to do in the future. But MaterialX doesn't do anything by itself; it's just a description, and Lama is the set of nodes that the description uses to talk to RenderMan.

Another good question: is it possible to layer other BXDFs over the new hair shader, LamaHairChiang? So you can layer any Lama nodes on top of each other, but you can't mix them with PxrSurface or something like that. It's not possible, because they don't respond to the same internal API. But you can mix all the Lama nodes together.
Some might not give you the expected result because hair shading is something that's quite particular, but you can completely have different hair shading settings to have like dirt on top of clean hair and stuff like that. That's interesting. That's a completely new feature, so that's awesome. Does Lama have an equivalent parameter for Pixar surface to fuse exponent parameter? And I think who asked that question? So what we have is a roughness parameter and that's because it's an ORNIR model and that's a tree. Okay, that's another great question here. Does Lama layering compute each layer independently or is it, or is there interaction between them? For example, if the top layer was a path trace of surface, would that affect another bottom path trace of surface layer correctly, tracing through the top and scattering into the bottom layer? So when it comes to mixing different subsurface things, we are limited at the moment by the internal architecture to only two, as far as I remember. And it means that they can be mixed, but if you stack them, you will get a correct looking result, but you shouldn't expect energy propagation between the layers. But you could mix, for example, a path trace of surface and a tricolor, right? If you wanted that kind of artist freedom for an alien or something that's completely, you know, not plausible, you know. Yeah, or, you know, the thing is, in IMS case, what they do is that often they have to do body doubles, you know, where you have a famous actor with a lot of pictures and they need to match exactly under a certain set of lighting conditions, the references. And they found that sometimes the subsurface scattering, when it's path traced, looks better in some areas with some kind of topology. And sometimes the tricolor will work better. So that was the reason why they decided to be able to have those two nodes and be able to mix them however they want. In practice, they can mix together four different algorithms, which is very, very flexible. I'm going to go back a little bit here for XPU because I want to touch quickly on the advantages of speed, of course, here and how this whole new paradigm of doing everything through OSL for patterns is really benefiting kind of this forward-looking, you know, transition to our upcoming render. So can you talk a little bit about what the advantages for Lama and OSL are for XPU particularly? So the best thing about OSL is that we write the patterns once and it works in XPU automatically. There's nothing to do. So you have to know that our support of OSL contains a number of special specific things that are specific really to RenderMan. So it's not guaranteed that some functionalities, if you write them for RenderMan, will work in other renders, but 95% or 98% of the OSL shaders you write in RenderMan should work in another renderer. But anyway, this freedom to write once and use multiple places is really great. Lama is not yet implemented in XPU. At the moment, we support PXR Surface and PXR Disney BSDF, which is the second Disney BSDF which has subsurface catering and can do glass and all those things. You were talking about also at some point in the past, I remember how Lama was just kind of more naturally, you know, better working essentially for GPUs because it provided kind of a smaller footprint per node and the code base, right? So then like storing all of these things instead of a monolithic shader is actually a lot better, right? Is that true? 
Yes, but it also means that we need an execution model for our BXDFs that is slightly different. This is why we haven't implemented Lama yet; it's the second phase of our delivery. At the moment, we need to focus on delivering for look dev using mostly PxrSurface and PxrDisneyBSDF, and then the second round will be redesigning the BXDF part, which has already been partially done by Fran, to support Lama.

Another great question is about the chromatic aberration effect on the glass. Does that mean we can separate the index of refraction per RGB value and blend them, or is that done by just mixing glass materials, that kind of thing? No, you have a dispersion parameter, basically. The idea is that instead of computing the Fresnel for a single IOR value, you compute it over a range of values, and the wider the range, the more dispersion you get. And there's an additional control to desaturate or oversaturate the dispersion effect if you want to. I find it super simple; it's a single slider, and there was a really great showcase of it. That was awesome to see.

How would displacement for double-sided models work? So at the moment, displacement happens in a separate execution pipeline in the renderer, which means it has to happen before Lama runs. It's part of the geometry pipeline, not the shading pipeline. Displacement basically takes your geometry and rebakes it displaced, and that is what the renderer, and Lama, will then use. So at the moment there is no integrated approach to that. We could make it look as if there was one by implementing a number of layering patterns for displacement, but you already kind of have that. It's just that you can't say: this material is displaced, and this other material is displaced, and I'm combining them through Lama, and then magically the displacements will combine. Logically, if the displacement comes in on one side, it comes out on the other, of course, since there's a single surface; if you're only displacing the geometry, there's really no way around that currently. In the past there were tricks to do double-sided displacement; I don't think we support that anymore in the renderer, but that's definitely an interesting question.

Absolutely. Can we make rough boundaries between two mixed layers? I know you have masking and all kinds of other interesting things in there. Yeah, if you mean a gradual blend over a ramp across a surface, yes, you can do that; it works fine.

We'll take one last question. Actually, we might have time for two. Let's see here. Does hex tiling work with displacement? Yes, absolutely. Some of the examples I demoed were displaced, and they were using hex tiling in the displacement. Basically, displacement uses the same OSL patterns: anything you write in OSL to create signals is usable in displacement. That's interesting, because the next question I was going to ask was essentially about that. Does that mean we can drive texture coordinates with our own gradient data instead of being forced to use a manifold? You need a base manifold that you can then perturb. If you're using hex tiling, it's possible to drive it with a gradient map as well, which allows you to create non-rectilinear mappings of your manifold. This is something that is used, for example, at Pixar to create tree bark: you have a texture that is just straight tree bark, like this; they paint a vector field over the surface, like this; and then the bark flows along that field, like that.
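As a hypothetical illustration of that last point, here is a tiny OSL sketch of a manifold perturbed by a painted direction map before tiling. The shader name, parameters and map encoding (an angle stored in [0,1]) are all made up for the example; this is not an actual RenderMan node.

```osl
// Bend a rectilinear UV manifold with a painted direction map, so a
// downstream tiling pattern (e.g. hex tiling) flows along the surface.
shader FlowManifoldSketch(
    string directionMap = "",  // painted scalar texture encoding a rotation angle
    float scale = 4,           // tiling frequency
    output float outU = 0,
    output float outV = 0)
{
    // angle in [0, 2*pi) driven by the map at this shading point
    float angle = M_2PI * (float) texture(directionMap, u, v);
    float su = scale * u;
    float sv = scale * v;
    // rotate the tiled coordinates by the locally painted direction
    outU = cos(angle) * su - sin(angle) * sv;
    outV = sin(angle) * su + cos(angle) * sv;
}
```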
Interesting. I think we're going to finish off with one last question. Is vertical layering actually path traced between layers, or is there some kind of approximation for the inter-layer scattering? It is not doing the inter-layer scattering, so there is an energy loss. It's using the kind of classic layering that has been described by, what's her name? Oh, I drew a blank, sorry. Basically, we are not modeling the light transport that is inter-reflected between the layers. This is something we're looking at and would like to have in the future, but there isn't a good, fast algorithm that doesn't have a lot of drawbacks. A big penalty, right? Yeah, there's speed, but there are also approaches that require a baking phase, things like that, and we haven't found something that works well with anisotropy. There are a lot of restrictions at the moment, so we don't think it's quite ready yet, but as soon as we find something, we will definitely look into it.

Awesome. Well, I want to thank everyone for the comments, the great positive attitude, and the amazing questions. Yes, follow the link; there's a great new session starting right now, the Women in RenderMan session. You have to get out of this page and go into that new one, so make sure you don't miss it. And I just want to say thank you to Philippe for being with us. I'll see you next time. Thank you very much, everyone.