Hi. My name is Eric Walkman. I oversee all of the digital look development at Laika. I work with all of the other practical departments throughout Laika to make sure that anything we build digitally is going to fit inside all of the practical worlds that we build. I'm going to show you some of the character work we did on Missing Link, and then a few new things we've been working on.

Before we start our process, we always start with a real puppet on a real set that's shot one frame at a time. We use this as just a test. We don't typically build digital doubles at Laika, so this is just a test. We'll grab the RP (rapid prototype) heads and match move them, just to make sure that we're in the same ballpark when we're doing our side-by-sides.

One thing that came up on Missing Link was this whole notion of optical color mixing, which is a big theme throughout the film. Optical color mixing is a phenomenon where a viewer perceives a color in an image that is the result of two different colors placed really close to each other. You can see examples of this throughout the whole film.

In the puppet department, and costumes specifically, we don't buy fabrics off the shelf or anything. Our scale is very small and very weird, so we manufacture our own fabrics in-house. These are two examples: a hand-woven fabric, and embroidery and printing. We run a bunch of different tests on things like this. What we ended up with for Sir Lionel's suit is a stretch cotton that's been printed on and then has some embroidery on top of it.

Now, the problem with color mixing is that at a certain scale, just a small amount of change changes the perception of the color. So one way we had to deal with this in CG was to build a shader that could handle this color mixing.
So here you can see the weft and the warp can change, and depending on how much weft or warp there is, and the direction of the weft and warp, you can get a variety of different effects, and you can get that optical color mixing.

We put all this together, plus a couple of other things, and we end up with a digital puppet. This is a digital version. Again, this is just a test. We use this early in the process just so we can make sure we're understanding where all of the other departments are coming from and how all the puppets are getting put together. We do a side-by-side just to make sure that the practical somewhat matches the digital. There are some differences, but it's a broad-strokes type of thing, and it's close enough to get there.

Like I said, we typically don't do digital doubles, so we can't always start with scans or things that other people have already made. So we use more of a traditional CG workflow for our characters and start with concept art. Art will give us a lineup, and from the lineup we'll basically figure out what's going to be practical and what's going to be digital. So in this particular case, we're going to build a few of these things, and then they change their mind, and we're building a couple more. They'll give us some color artwork, some turnaround drawings, and then tons of real-world reference. A lot of times we're working at the same time the puppet department is working, so we're trying to figure out what's going to make these characters work in a puppet world, and also how they're going to react to light and everything like that.

Before we get started, we get a maquette, and the maquette is awesome because it helps us translate how the 2D drawings are going to be translated into 3D. We also get some painted maquettes, as well as some fabric samples just draped in different lighting conditions to help figure out lighting. We'll get some hair samples. Our hair is not real hair.
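As a rough illustration of the weave idea (hypothetical function and parameter names, not the studio's actual shader), the core of optical color mixing can be sketched as a coverage-weighted blend of the weft and warp thread colors. At viewing distances where the individual threads fall below a pixel, the eye averages them:

```python
# Minimal sketch of optical color mixing for a woven fabric.
# Names and parameterization are invented for illustration; a production
# shader would also account for thread direction, sheen, and shadowing.

def mix_weave(weft_rgb, warp_rgb, weft_coverage):
    """Blend thread colors by visible coverage (0.0 = all warp, 1.0 = all weft)."""
    return tuple(
        weft_coverage * wf + (1.0 - weft_coverage) * wp
        for wf, wp in zip(weft_rgb, warp_rgb)
    )

# A red weft woven over a blue warp reads as purple from a distance:
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(mix_weave(red, blue, 0.5))  # -> (0.5, 0.0, 0.5)
```

Changing the coverage ratio or swapping the thread colors shifts the perceived mix, which is the small-change / big-perceptual-shift problem described above.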
It is typically a silicone-coated polyester fiber that's got wire in it, made into strips and plugged into the head.

We'll get rig reference. The rig reference is awesome because it shows joint placement, joint count, rigid areas, cloth controls, and all the things that go into making a practical puppet. We don't want our digital puppets doing anything that the practical puppets can't actually do. During the whole build process, we'll also grab puppet reference. We'll just go over there and shoot a bunch of reference photos to understand exactly how these costumes and everything are built and put together onto the puppet.

As soon as the puppet is ready, we grab it and shoot it on our own test unit; we in visual effects have our own test unit. We'll shoot under multiple lighting conditions, and we'll use these turnarounds for side-by-sides so that we can match our digital puppets against the practical puppet. We start with the digital maquette, just like the practical maquette. This is a graphic reference of some of the accessories. The reason for this is that some of this stuff is really, really small. I was going to try to shoot some new reference of these, but I lost them in my office, so I could not find them to shoot.

We'll get cloth samples, and these are more for the final characters. This is a color mixing example of the fabric. You can see there's a slight black wood grain pattern to the purple; this is that pattern.

Then there's RP shader development. The heads are printed on our rapid prototype machine, so we have to mimic this. I'm going to go into this a little bit later, but what we'll do is print a solid white head and a solid black head, and use those to figure out what the scattering properties and the reflectance properties are. These heads are sanded and coated with paint or a clear coat multiple times. It's got a very white substrate, it's printed in CMYK, and the response is almost all scattering.
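To illustrate the CMYK-printed-on-a-white-substrate idea, here is a deliberately naive sketch (hypothetical function name, and real printer inks would need measured spectra rather than this textbook conversion) of turning printed CMYK values into an RGB albedo for a scattering shader:

```python
# Naive CMYK -> RGB conversion as a stand-in for deriving a scattering
# albedo from the print data. This is the standard subtractive-mixing
# approximation over a white substrate, not a calibrated ink model.

def cmyk_to_albedo(c, m, y, k):
    """Approximate RGB albedo of CMYK ink laid down on a white substrate."""
    r = (1.0 - c) * (1.0 - k)
    g = (1.0 - m) * (1.0 - k)
    b = (1.0 - y) * (1.0 - k)
    return (r, g, b)

print(cmyk_to_albedo(0.0, 0.0, 0.0, 0.0))  # unprinted substrate -> (1.0, 1.0, 1.0)
```

The unprinted case coming out pure white is why the solid white print is useful: it isolates the substrate's scattering from the ink contribution.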
There's very little diffuse response in these. We will take all of that, render our puppet in the same lighting conditions that we shot our reference puppet in, and then do a side-by-side. Once it's at this phase, it's probably good to go into shots.

For shot integration, we'll take our back plate. We will then run a marking pass, and this is used for match move and survey of the real puppets. These are some reference photos just to help the solves. We will shoot an HDR. We will then do photogrammetry of the set, and if need be, we will take that and build out a cleaner version of it, if there's going to be a lot of interaction with it. And then we match move all the practical puppets. This is mostly for shadows, bounce light, and integration, like if a digital puppet's going to put their hand on another puppet. You can see here a breakdown of how that shot comes together with all of those parts and pieces, and you'll notice the one guy in the back puts his hand on the actual practical puppet.

So, future work. I want to talk about how we're dealing with some of the new stuff. RenderMan 22 is awesome. It's got tons of features. We're in the process of moving a lot of our shading code over to OSL. USD: we're in the process of implementing USD. We've been working on our cloth shading. We've been upgrading things. And RP matching.

I want to talk a little bit about this. That head that you see there is a white head that was printed. This is the geometry that went into that head. It's relatively small, about 47,000 polys. It's pretty riggable and animatable, and it's definitely renderable. If you were just to throw path-traced subsurface scattering on it, this is what you would get. It doesn't really match right now. If you were to resolve the self-intersections and add some trace sets, you'd get something a little bit closer to this.
This is a little bit better, but it still doesn't really match the practical, which has striation lines and things like that. So we started to think about it: what does the 3D printer actually do to this geometry? You have a bunch of pieces and you throw them at a 3D printer. What is it doing? Well, it's actually voxelizing the geometry. It's dropping down little voxels at the micron level, building pretty much one layer of voxels at a time. What if we were to basically do that to our geometry? Then we could get the real thing. We get the striation lines, which is really nice.

It's 3.5 million polygons. Is it riggable? Sure. Is it animatable? Sure. It's definitely renderable; RenderMan can handle all of this. It looks great, it matches pretty much one to one, and it gives us the look that we want. Come to find out, it's not really riggable and it's not really animatable. However, with a little bit of work with OpenSubdiv, OpenVDB, OpenImageIO, and a very enthusiastic Mr. Peter Stewart, we can do it at render time. So we built a render-time voxelization app that takes that geometry, lets us rig it, animate it, push it all the way through the pipeline, and texture it just like you would normal low-res geometry, and at render time it voxelizes it and gives us all of the nice stuff that we want. The other cool thing is that it lets us blur the textures along the voxels, which is very important, because now we can actually blur multiple textures across multiple pieces of geometry. And it scales.

So this is about a hundred of those heads being generated one frame at a time. They are not instanced, and they are displaced using the new polygon displacement in RenderMan 22. And just another slide here: this is getting generated one frame at a time as well, and this is about 7,000 of them, non-instanced. So that's 7,000 of those 3.5-million-polygon heads, non-instanced and displaced. And that's about all I have.
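To make the layer-by-layer voxelization idea concrete, here is a toy pure-Python sketch (not the actual render-time app; the function name and the sphere stand-in for the head geometry are invented for illustration). It samples an implicit sphere on a regular grid one z-slice at a time, the way a printer lays down material; the stepped profile between adjacent layers is what reads as striation lines:

```python
import math

def voxelize_sphere(radius=1.0, voxel=0.25):
    """Sample a sphere on a regular voxel grid, one z-layer at a time.

    Returns {layer_index: [(i, j), ...]}, the occupied cells per print layer.
    A real implementation would voxelize arbitrary meshes at micron scale.
    """
    n = int(math.ceil(radius / voxel))
    layers = {}
    for k in range(-n, n + 1):              # one print layer per z slice
        for j in range(-n, n + 1):
            for i in range(-n, n + 1):
                x, y, z = i * voxel, j * voxel, k * voxel
                if x * x + y * y + z * z <= radius * radius:
                    layers.setdefault(k, []).append((i, j))
    return layers

layers = voxelize_sphere()
# Layer footprints shrink toward the poles; the step between adjacent
# layers is exactly the striation the printed heads show.
print({k: len(v) for k, v in sorted(layers.items())})
```

Doing this at render time, as described above, means the low-res mesh stays riggable and animatable, and only the renderer ever sees the heavy voxelized result.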