Hi, welcome to my talk. I'm really happy to be here. My name is Thomas Radicke. This talk will be a lot less technical than the previous ones; it's almost not about coding. Almost. I have a very tiny script that I wrote for some of these things, and all of the material that I'm going to show is available as an example on my website. You'll get the link shortly.

So what is this talk actually going to be about? It's called "Simulating Materials with Unusual Microstructures", and it's actually not about physics simulation. It's aimed more towards graphic designers and artists who want that extra little bit of realism in their own scenes. But I'm not claiming perfect photorealism for any of the examples I'm doing here; I'll also explain why. Here you can have a little sneak preview of the types of materials that I'm going to work with, and I'll explain right away what these are. Here is also the link to my website, where you can download all of the talk materials. This presentation is also going to be online as a PDF at that short link, so in case you want to follow the graphics on your own computer, you can do that right now.

So who am I? I'm a teacher for 3D graphics and animation at the University of Applied Sciences in Graz, Austria, and probably a lot of my students are watching this right now on the live stream. So hi there. Teaching these kinds of things, like creating basic geometry, lighting scenes or creating materials, is always going to stay a bit basic when you're teaching bachelor students, because we can't move forward that fast. So all of the things that you see right now were actually done in my free time, because I just like doing stuff like that. Other things I like are programming, which plays a tiny part in this presentation, retro video gaming, which you might also see in some slides, photography and technology in general.
And, well, I've once been called a surround geek. All right, let's move on to the first topic: the introduction.

In reality, everything is made up of the tiniest particles. It's made up of atoms, and those make up molecules. And the surface structure of everything actually defines how light is reflected off the surface and eventually reaches our eyes. Unfortunately, we do not have unlimited processing power, and that is why we have to simplify a lot. So we actually make everything in Blender out of mathematical functions, and we try to compress every little detail we see in the world into polygons and edges and points, and sometimes into shaders, as we just saw in the previous talk. It's still pretty amazing what you can do just with simulating reality. There's an awful lot of little detail in that little thing here. It's also just part of a larger project; I've also done the Super Nintendo console and a couple of other consoles. And this was all done in my spare time, just for fun.

Just a little throwback on how I came to do stuff like that. This is actually part of my little progression of building the places where I've lived. This is something from 1998, when I started 3D modeling. The stuff at the top was done in a piece of software called Corel Dream 3D, which doesn't exist anymore. It was later that I found out that software was actually an OEM version, a rebranded version, of Ray Dream Studio, which was itself later renamed Carrara. Some of you might remember that name. That software has unfortunately been discontinued, but I still somehow managed to convert my old stuff from 1998 into Blender. On the bottom you can see very simple renderings after just importing the geometry and the basic materials, so just the color. All of the procedural textures, like the wooden texture on the top left, for example, which you can barely see because the image is very dark, were lost in the conversion process, but the rest is there.

And here's also one of the things that I've been building for a larger scene, because this is just part of my apartment. There in the background you can actually see a color TV, and I've made the screen in a sub-pixel fashion. That means all of the sub-pixels there are actually simulated. That's why you can already see a very slight moiré pattern there. But I'm going to go into detail on that as well. So that is actually the scene right now. Yeah, it's a bit dark on the projector; it's actually brighter on my screen, but that's a Mac issue. There are three empty slots on the board there right now, and those are going to be filled with the video game consoles in the near future. There's also the TV again.

So let's go back to the presentation. Simulating materials is always a matter of looking at reality and trying to recreate it somehow. Sometimes you're more successful and sometimes you're less successful. In this case here, I actually wanted to build a wall shader. The image on the bottom left is a real photo of the wall, and the one here on the right is my simulated shader version. Since I didn't want to make wall maps of my whole apartment, I tried to put that into a shader, and I think the resulting surface is quite okay. But sometimes you do encounter materials that are not as easy to simulate. This here, for example, is a retro-reflective surface; that means it returns light to where it came from. And while it does look pretty simple on the surface, its special property only becomes apparent when you try to simulate the real effect that it's giving off. When you have a light directly behind yourself, for example, then this is supposed to light up. But how do you do that in a render? We'll see.

Reality is really very detailed. This is just a photo of my desk at home. And most of the time we try to simplify reality to make a quick scene.
So if we want to build a shader for, say, a gray table surface that has a bit of rough reflectiveness, we actually just take the Glossy shader and the Diffuse shader, put them together with a Mix node, and try around until the roughness is just fine. That's a very simple setup. But if you look closer, things just aren't that easy. If you want to build really extreme close-ups of something, you need to build the real surface structure. And actually a lot of the shaders that are built into Blender right now are simplifications of real surface structures. The Anisotropic shader, for example, is really just the recorded behavior of something viewed from afar. Like, for example, the underside of a CD. You never look at that through a microscope, but if you did, you would see the tiny ridges that the laser has burned into the surface. This causes an effect called diffraction, which is what makes that colored reflection there. And the Anisotropic shader simulates at least a part of that effect of little surface ridges in a specific normal direction.

So here we are, the topics that I'm going to talk about. First I want to start off with something extremely simple: CRTs and other sub-pixel displays. Then I want to move on to retro-reflective surfaces. I've also got simple security holograms; you'll see that these aren't going to be 100 percent physically accurate. Then there is something about wooden surfaces with reflective anomalies. That means sometimes wood doesn't reflect the way you would expect it to. If it's a rough surface, that's fine, but sometimes you have polished wood and it does things. We'll see that. And lastly, I want to show you a simulation of lenticular prints. You might not know them directly by the name, but you've probably all seen lenticular prints: those little postcards that show different images when you tilt them.
Or, if the direction of the little lenses on the surface is vertical, you actually have an autostereoscopic display. And I've built a working autostereoscopic simulation with Cycles. We'll see that.

Okay, let's move on to CRT screens. Well, the usual way of building a screen is something like this. Normally you don't need the additional detail: you just make a screen, put an image on it, put that on an emission shader, it gives off light and everybody is happy. Until you need that extra bit of realism. This monitor on the left is actually lacking a lot of stuff. It doesn't have any dust or scratches on the surface. It doesn't have the lighting effects around the image border that come from the LED backlight. And it doesn't have fingerprints and such, stuff that just comes from normal everyday use. So while this might be fine for a normal everyday render in the small print of some magazine, it's not going to be fine if you need it, say, 10 by 10 meters on the surface of a building.

So what I've done is look at the tiny structure inside the screen. I think most of you will know that all the screens around us nowadays are made up of differently colored sub-pixels. When you take your camera and get really close to a monitor surface, except maybe a Retina screen, you will probably see what it's about. This is actually really tiny, and on Retina screens you can't even see it anymore; you'd need a microscope. So I've built a very simple shader that does this in a very effective fashion. Here's the node setup, which I'll explain. On the very left, you see that little one here, that one. That's just a Texture Coordinate node, of course, which brings the UV information in. And these two Image Texture nodes here are actually two different pictures: one is the sub-pixel structure, and the other one on the bottom is the actual wallpaper that I want to simulate in that fashion.
That single sub-pixel structure image actually needs to have the exact same dimensions and resolution as the wallpaper. In this case I chose 1280 by 1024, because I was simulating an old PC screen, which had that very common resolution. Then, while building it, I figured out I needed to set the image interpolation, the pixel interpolation, to Closest, for a reason you'll see in a couple of slides: the color of the pixels spills into their neighbors if you don't set it like that. And then the final magic happens in the Multiply node there. It just multiplies the image brightnesses on top of each other, and that gets you the real sub-pixel behavior. Up there it's a bit dark, but I think you can see it.

So let's have a look at a real-world example. Well, "real world", okay. These are different kinds of sub-pixel devices. The one on the very left is a bit dated, but you probably all know it: those huge, clunky CRT screens. They actually have a different kind of sub-pixel pattern than the TV screen in the middle. Now let's get closer. This here is a rendered view of a PC CRT sub-pixel structure, and you can see that very characteristic moiré pattern appearing even in a rendering. Then you might also know what the TV screen looks like, but we'll get there in a second.

Okay, let's have a look at the next example, also a rendered screen. This is an LCD screen. If you've ever photographed your LCD screen, you've probably had some version of this effect as well. You'll notice it's a different pattern than the other one: this one is very geometric, the other one was very curvy. That's just because of the sub-pixels. Okay, now let's move on to the TV screen. This one is a bit farther out, because the TV screen has such a low resolution that you don't actually need to get that close to see the moiré pattern.
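Stripped of nodes, the whole trick is just two images on the same pixel grid multiplied per channel, with nearest-neighbour ("Closest") sampling so no colour bleeds between cells. A minimal Python sketch of the idea (no Blender involved; the generated RGB stripe mask here is only a stand-in for the real photographed sub-pixel tiling images):

```python
# Sketch of the sub-pixel shader: the mask image and the wallpaper share one
# pixel grid, and the output colour is simply their per-channel product.
# Blender's "Closest" interpolation corresponds to the integer indexing here.

def stripe_mask(width, height):
    """RGB stripe mask: column 0 is red, 1 green, 2 blue, repeating.
    A stand-in for a photographed sub-pixel tile (an assumption, not the
    actual tiling images from the talk's download pack)."""
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            rgb = [0.0, 0.0, 0.0]
            rgb[x % 3] = 1.0          # light up one sub-pixel channel
            row.append(tuple(rgb))
        mask.append(row)
    return mask

def apply_subpixels(wallpaper, mask):
    """Multiply wallpaper and mask per channel (the Multiply node, factor 1)."""
    return [
        [tuple(w * m for w, m in zip(wp, mk))
         for wp, mk in zip(wrow, mrow)]
        for wrow, mrow in zip(wallpaper, mask)
    ]

# A 3x1 white wallpaper turns into one red, one green and one blue pixel:
white = [[(1.0, 1.0, 1.0)] * 3]
print(apply_subpixels(white, stripe_mask(3, 1)))
```

If the two images' resolutions don't match one-to-one, some wallpaper pixels no longer line up with complete R/G/B triples, which is exactly the wrong-colour artifact mentioned later.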
In Tails' face especially, for example, you can see it has the exact same sub-pixel thing, the same moiré pattern that you see in real life. Now let's get even closer. This is what I was talking about when I said you have to set the texture interpolation not to Linear but to Closest. This is what it looks like with Linear, and now I'll switch to Closest in a second: three, two, one, switch. Now you see it's actually pixel-perfect. I can zoom in here a bit to make it more obvious. Here it also becomes apparent that the image resolution needs to match up perfectly, because if it doesn't, some of the full pixels will not match up with the sub-pixels, and you will get wrong colors at that spot, because the sub-pixels together make up the real color.

So the TV screen looks like that. And there is a small flaw in this; you can probably see above the D that there is something going on. That's not just because of the sub-pixel interpolation; it's also because in a real TV screen every second column is shifted by half a pixel. In order to simulate that, I would have had to modify my original source image to show the same effect, in effect doubling the resolution and then shifting every second column by one pixel. I was too lazy for that. So here it goes; still good enough, I think.

It's also a very simple process. All you need to do is get an image, a tiling image, of a sub-pixel screen and reproduce it. By the way, at the address I gave you, that Google short URL, there are a couple of download packages, and for all of these projects there are example files. In this case my sub-pixel tiling images are also included, and they're all released under a Creative Commons license. All right, that was quick for the first one. Let's move on to the second topic: retro-reflective surfaces.
This is a bit different, because retro-reflective surfaces come in many different shapes and sizes, and the actual retro-reflective effect can be caused by a number of phenomena. This here is caused by a simple corner reflector. That means you have three sides of a corner, and light entering that corner is reflected back where it came from; it makes up a 180-degree reflection because of the perpendicular structure of everything. This is actually the same effect, just with the crystallized structure of the stuff that they spray on the roads. So this is also a corner reflector, so to say. And this is how it works; this image here is from Wikipedia, so I hope the owner of that image will not sue me. This is just one of the ways a retro-reflector can work. There are also other ways, for example with spherical refraction, and the principle behind cat's eyes. Cat's eyes actually have a spherical lens and a reflective layer behind it, which basically does the same job as this one. It's a bit different, though.

All right, how do you simulate that stuff now? You'd think, hey, I'll just use a normal map. But then you suddenly get something that looks very different from what you expect. So in this case, that just doesn't work. The answer is to not use a normal map, but the actual geometry. In this case I just used a cube, cut away the front side and multiplied the rest to match up, and gave it a reflective surface that wasn't perfectly reflecting: it had a tiny bit of roughness to it, so I could control the amount of diffusion on the surface. And this is what it looks like when rendered. In this case the light always stayed behind the camera, so it's always the same incident light, but I was shifting the surface around. You can see there is that bright flash when a certain angle is reached. That's actually the same phenomenon that appears in reality.
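That 180-degree behaviour is easy to check with plain vector math: each of the three mutually perpendicular faces flips one component of the ray direction, so after three bounces the ray leaves exactly antiparallel to how it came in. A small sketch (plain Python, no Blender):

```python
def reflect(d, n):
    """Mirror direction d off a plane with unit normal n: d - 2*(d.n)*n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def corner_reflect(d):
    """Bounce off the three mutually perpendicular faces of a corner cube."""
    for n in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        d = reflect(d, n)       # each bounce flips exactly one component
    return d

# Any incoming direction comes back reversed, i.e. straight toward the light:
print(corner_reflect((1, -2, 3)))   # (-1, 2, -3)
```

A real corner cube only behaves this way for rays that actually hit all three faces, which is why the rendered geometry version flashes only over a certain range of angles.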
It's just that you rarely get retro-reflective surfaces in that size; most of them are just small ones for bikes. All right, it works, and it's a pretty nice solution if you want to stay small. But now let's imagine you want to make road markings, and you have something like a driving-simulator scene with kilometers and kilometers of road markings which need to act in a somewhat realistic fashion. Fortunately, I've also figured out how to do that in a shader. And that shader has just one really important ingredient, the one on the very left: the Geometry node. Because that one has an output named Incoming, which is actually your viewing direction. What this setup does is basically turn the normals of whatever object you apply it to the other way around: your viewing direction is reversed and put on the object, which is exactly the effect that happens in the real world. In this case I was just fiddling a little bit with other Glossy nodes to get a good balance of diffuse reflection and glossy, or very rough glossy, reflections. And, well, this is the result.

And if we take the normal map we saw earlier, which doesn't really produce correct retro-reflection on its own but can be used to add more detail, and combine the two, we get something that looks a bit more complicated but actually works the same way. In this case, I think the scaling is a bit off on the ground, but here you can see the light standing still while the camera moves between the light and the object. That's why we get that kind of bright flash of retro-reflectiveness. This also works at large scale. It's easy to use; it's a very simple shader. You just have to think about what makes it work.

All right, we're moving quite fast through everything. Okay, now for something a lot more complicated. Security holograms are hard to make.
You've probably all seen some, sometimes as those nice stickers that you get on your baseball caps, or as a proof of authenticity on some products that you buy, and also on your money. These holograms have a wide range of applications, and they're considered forgery-proof because they're really hard to reproduce. That is because they are made up of tiny, tiny structures. They are so tiny that in normal security holograms, like the one on the right side, the features take up just one micrometer or so. So these are really extremely small features. And true 3D holograms, like the one on the left with that Nokia sign, are even harder to reproduce, because they use even smaller features to diffract light in a very specific pattern so that it actually produces a 3D image. The structures on the surface of that 3D hologram on the left are so small that they have an effective DPI of about 254,000; these features are 0.1 micrometers small.

Well, light has an amazing property. If you shine it on a very small surface structure, the different wavelengths sometimes have to travel different distances until they reflect back off the surface, and at a very tiny scale that actually disperses the light back into its color components. That means sometimes you get a white reflection, and sometimes, from a very specific angle, you get the effect that everything begins to split into different colors. You've all seen that effect on a CD, and that's actually the same thing. If you want to know more about how that stuff actually works, it's a number of phenomena that cause this, and the search keywords are iridescence, thin-film interference, structural coloration and diffraction. I was talking to the developer of Cycles just briefly earlier today and mentioned that, unfortunately, there is no wavelength simulation in Cycles yet.
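The splitting itself follows the grating equation, d·sin(θ) = m·λ: for a fixed feature spacing d, each wavelength leaves at its own angle. A quick sanity check in Python, using the roughly 1.6 µm track pitch of a CD as the spacing (a standard textbook figure, not a number from the talk):

```python
import math

def diffraction_angle_deg(pitch_m, wavelength_m, order=1):
    """First-order (or m-th order) diffraction angle from d*sin(theta) = m*lambda.
    Returns None when that order doesn't exist for this pitch/wavelength."""
    s = order * wavelength_m / pitch_m
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

pitch = 1.6e-6                           # approximate CD track spacing in meters
for name, wl in [("blue", 450e-9), ("green", 530e-9), ("red", 650e-9)]:
    print(name, round(diffraction_angle_deg(pitch, wl), 1))
```

Red leaves at a noticeably larger angle than blue, which is why the reflection fans out into a rainbow instead of staying white.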
And he said he had just recently implemented that in another renderer, it's actually not a lot of code, and maybe it will find its way into Cycles.

All right. What you see here now is something that has to be simulated, very expensively, with an OSL shader, which I found on the Internet. Blender user Miguel Porquez has actually written an OSL shader that can be used in the very same fashion you saw in the previous presentation: you take a Script node, set it to External, load up the shader, and it produces stuff like this, with a lot of configurability. It just has one huge drawback: it's slow. It's so incredibly slow. This image here, which is just full HD, took 34 minutes to render with 1600 samples. Actually, I haven't seen any other shader render slower. And that's with a simple lighting situation; this is direct light from an HDR image, and it still took that long. It's still a bit noisy, too. So if I zoom in here, down there in the shadows... it's not actually a shadow here. Oh yes, it is a shadow, but it's actually a colorful shadow. Unfortunately you can't see it here, but you can see it in my slides on my website.

Okay, so this is a very complicated thing. If you want to try and download that shader, the address is right there; I think the actual shader is on page four of that forum thread or so. It's just embedded in the text: copy it out, save it as an OSL file and try it. You'll see. Of course, I'd be very excited if that stuff ever finds its way directly into Cycles, so we won't have to fake it anymore or load an external shader. And since I'm not such a math-savvy programmer, I just sometimes program something for fun, but I still wanted to do this. So I found a way to kind of fake it.
And this is, compared to Sacrop's shader, a rather simple node setup that does the following. Of course, I needed to look at a real hologram to find out what it is actually doing, in order to be able to simulate it. The first thing I noticed is that the effect is heavily dependent on your viewing angle, of course. When you look straight at it, it's actually not colorful at all; directly from the front it's gray, and you can't see a thing. So then I shifted the view a little and noticed, okay, it's actually the reflections of light being diffracted back into colors. But most of the time you just see a colorful patch somewhere. In this case, I generated a rainbow, on the left side here. Oh, we have to speed up. Okay.

So, the basis is that I was just generating a colorful table, scaling that back onto the surface and multiplying it with a brightness and saturation value. And now when you move the whole thing around... this, by the way, is the object I've been using it on. If you scale and move it around a bit, it looks like this. I hope Microsoft won't sue me for infringement here. Okay. But wait, there's more. I'll just quickly step through it.

Okay. Sometimes you see wooden surfaces that do funny things. Here's a video of my actual kitchen table. Just shifting the view around, you can see that the light is kind of moving on its own. I really thought long and hard about why the wood is doing that; it's not a normal shader. And the reason is in the wood grain structure. If you take a piece of wood and just cut through it and remove the upper part, then you get something on the surface: the grain, pointing in different directions. We know that from somewhere. All right, of course, I'm referring to normal maps.
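As a quick reminder of what a normal map actually stores: the unit normal is squeezed from the [-1, 1] range into [0, 1] color values, channel by channel. A two-function sketch of that standard convention (a sketch of the general encoding, not Blender's internal code):

```python
def encode_normal(n):
    """Unit normal (x, y, z) -> RGB in [0, 1]: rgb = n * 0.5 + 0.5."""
    return tuple(c * 0.5 + 0.5 for c in n)

def decode_normal(rgb):
    """RGB back to a normal vector: n = rgb * 2 - 1."""
    return tuple(c * 2.0 - 1.0 for c in rgb)

# An undisturbed surface (normal pointing straight up) is the familiar
# light blue of tangent-space normal maps:
print(encode_normal((0.0, 0.0, 1.0)))   # (0.5, 0.5, 1.0)
```

Tilting the normal away from straight up shifts the red and green channels away from 0.5, which is exactly the "one normal in all of the color space" idea used for the wood fake.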
The one on the left is an object-space normal map, which is very colorful. But if I want to simulate that stuff... actually, the wood is a naturally occurring normal map. All right, let's try to simulate it. Usually a normal map just encodes the deviation from the up direction into the left/right and front/back directions in the red and green channels of an image, and the blue one always stays at 100 percent. So in this case I only had to find one normal, in all of the color space, that would be easy to use to create a fake bending of the normals in some places. And I came up with this. It doesn't look very creative, but the thought process behind it is what made this possible. If you combine those two, you get a wooden surface that has that kind of brightness-shifting effect. Just wait until the light comes in. Okay.

I'll have to hurry up a little bit, because there's one more thing. Lastly, the lenticular principle, which I've been promising. You've probably all seen that kind of effect: it's just lenses doing something funny with an image behind them. I have a poster at home which I wanted to simulate, so of course I had to look at how it works. It's actually a very tiny lens surface that refracts an image from underneath. If you look at the image on the left, you can see that your two eyes have different positions in space, of course, and the different refraction gives each eye a different image from the underlying print. So every lenticular surface is always composed of at least two parts: one part is the lens, and the other part is the underlying image. For this example here, I generated that picture sequence on the top and interlaced it together using a script. The lens geometry has to match up with the number of pixels in one of the images from the sequence: if I have a 100 by 100 pixel image, and a whole sequence of those, I need a lens sheet that has 100 vertical lenses.
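The interlacing step can be sketched like this (the actual tool is the Processing 3 script from the talk's download page; this Python version is only a re-creation of the idea, assuming one printed column per frame under each lens):

```python
def interlace(frames):
    """Interleave the columns of equally sized frames into one print image.
    With N frames, output column x holds column x // N of frame x % N,
    so each lens sits on top of one thin strip from every frame."""
    height, width = len(frames[0]), len(frames[0][0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            for frame in frames:        # one strip per frame, side by side
                row.append(frame[y][x])
        out.append(row)
    return out

# Two 2x1-pixel frames become one 4x1 strip: A0 B0 A1 B1
left  = [["A0", "A1"]]
right = [["B0", "B1"]]
print(interlace([left, right]))   # [['A0', 'B0', 'A1', 'B1']]
```

Each lens then only has to steer strip 0 toward one viewing angle, strip 1 toward the next, and so on, which is why the lens count has to match the per-frame pixel count.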
And the easiest solution was to just build the lenses in Blender, give them the correct refractive index of 1.49, as Wikipedia says for transparent plastic, and load that image. Just a short word about the image: it was created using a little script that I wrote for Processing 3, which is also available for download on my website; if you used the short link earlier, that's the same site as this one. That script is free to download, and you can just feed it an image sequence and it will spit out something like that on the left. And this is what Blender does when I render it from the right perspective.

Of course, I couldn't stop playing with it, so I built more. This is a setup that you can use to create your own lenticular image sequences; it's also online in the lenticular pack download. This is the lenticular version of my portal gun model, and it's really just two objects: the lens object and the flat surface behind it with that image applied. As you can see, it's working right. So if any of you can do cross-eyed stereo, just try crossing your eyes and it will become stereographic. The funny thing about this is that it actually works with current technology and Cycles. This is actually a stereogram that's stable within Cycles right now. You can just render it from different perspectives... perspectives, I can't talk anymore... you can render it from different points and it will still stay stereo if you use some kind of stereo vision.

This effect here is very subtle, but this is the actual poster which I wanted to simulate, so I think I got that going. Here's just a tiny detail of what's going on under the surface. Oh, and one last word for the Cycles developers: surprisingly, using normal maps baked from the surfaces didn't work as expected. I had to use a normal strength of 2.0 to get the effect going, and it was so dark that you could barely see anything. It was also noisy.
It did not have that shift-over effect when the viewing angle was exceeded. So something's going on there. But I think the simulation was a success.

So I've given my methods a little star rating here. I'd say the sub-pixel displays are almost perfect; just the shifted sub-pixel layout of the TV CRT is making a little bit of trouble. The retro-reflectors are working pretty well, except for the real geometric version, which just doesn't scale well. The security holograms are still a problem: my colorful fake looked okay from certain directions, but it wasn't physically accurate, and Sacrop's shader was so slow that I just had to give it three out of five. The wood glossiness works pretty well, but in order to capture the real reflective behavior of a grainy wooden surface I'd probably have to use measuring equipment that lights the surface from different angles and captures the result. But hey, the lenticular prints are working perfectly fine, except for the normal-map version.

All right, thank you.