Nice, nice. All right, let's get underway, everybody. I'll fade out my inspirational music for now. Oh, that went smoothly. I think I'm getting the hang of this. All right, everybody, welcome to Logik Live. My name is Andy. Today we have yet another Andy joining us: Andy Davis. But before we begin, Logik Live is brought to you by AJA. We are very happy to have their support. AJA develops an extensive range of solutions for the professional video and audio market, from conversion devices to I/O solutions, digital recorders, cameras and more. If there's anything you need in the world of I/O, you definitely need to get it from our friends at AJA. And as they like to say: AJA, we can't fix the Text module, but if you need video I/O, we've got you covered. You can see what they have to offer at AJA.com.

And Logik Live is also sponsored by CineSys Oceana. These guys are my personal reseller. I've been working with them for 15 years. I could not do what I do without them. And they've always supported the Flame community, sponsoring user groups all over North America, and we thank them very much for their continued support. If there's anything you need, definitely reach out to CineSys at cinesys.io. CineSys Oceana: solutions, integration and support for digital content creators, supporting Flame artists since 1997.

All right, I'm going to stop the share here, and we're going to welcome to Logik Live: Andy Davis. Hey, howdy. How's it going, man? Good to see y'all. It goes well. It goes well. It's not quite so hot here in Los Angeles. Well, right now. Enjoy it while it lasts. Gotta enjoy everything while it lasts. Amen, brother. If that isn't, like, maybe the banner for 2020, right? Let's enjoy it while it lasts.

I want to say, for those of you who don't know Andy, this entire conversation is going to be like an Andy fest, so I swear I'm not talking about myself in the third person just yet. That starts week 20 of the pandemic, and here we're safely in quarantine week 18, so I'm still able to tell myself apart from Andy Davis. It's going to be a thing, though, especially for my family sitting on the other side of this wall, hearing me talk about myself in the third person. Andy is a Flame artist and supervisor based out in LA, and I'm sure you've seen his posts on Logik about things like getting motion capture data into Flame and recreating accurate depth of field. Andy is a fascinating and brilliant guy with a real eye for where the industry is going, so I want to welcome him again to Logik Live. Andy prerecorded some videos for us to watch, because I know you bounce around a lot between different apps, and that's kind of the theme for today's presentation. But how you doing, man?

I'm doing good. Yeah. I felt the need to prerecord this stuff because I'm on multiple machines, multiple OSes, and just to make it a bit clearer and not waste anybody's time, to try to get across the gist of how games tech and VFX tech are overlapping these days, and the new exciting possibilities for creativity. Well, without any further ado, I'm going to screen share here and we'll play your first video introduction, and we'll go from there.

So I prerecorded a bunch of this because I'm bouncing between multiple apps and OSes for this workflow. Game tech has become an exciting resource for VFX in the last few years. There's a lot of stuff out there. The haystack is huge.
But if you have interest, get in and start playing with the tools. It's perfect for those years when you can't go outside. When starting out, for the purposes of understanding workflow, I suggest that you focus on specific tasks to avoid being overwhelmed. There are huge amounts of free tutorial resources out there. The area is evolving so quickly that it's honestly hard to keep up. There are lots of interesting developments, including the video walls from The Mandalorian, et cetera. They're all worth checking out. Specifically, I would say to check out Matt Workman's YouTube channel. He's a cinematographer who has been on the forefront of virtual production. Personally, I've been focusing on R&D that I can do without incurring huge costs, given the current environment.

So first off, no one app or OS can do everything. Flame is my creative tool of choice. Its quality and interactivity for final imagery is unrivaled in my opinion, especially for real-time creative discussions with clients. There are lots of helpful 3D tools within Flame, but true CG tests are honestly better done in CG-centric tools. The trick is passing data accurately between the apps, which allows a whole bunch of flexibility and lets you use each app for what it's good at. CG apps are to taste. Houdini is my preference. Blender is a great tool as well. Use whatever gets you assets to play with quickly and consistently.

Game engines: I picked Unreal as it was the most accessible. Also, Quixel Megascans is free for Unreal, which is an amazing library of game-ready assets. As long as you render through the game engine, apparently the whole library is completely free, though you can use the FBXes in other apps. If you want to do your final rendering in other apps, that might be a different case than the free license; I honestly don't know. The streamlining of CG rendering on game engines is huge. Even if it's not real-time, it's crazy fast and the quality is stunning. Now, a true CG app will likely give you better rendered quality, but within the game engine you have a render in minutes. I did a test render of a Quixel project in Unreal, which I posted: 1,300 full-CG 4K EXR frames in 10 minutes on old hardware. It was a Z620, 64GB of RAM and a GTX 1080 Ti. My settings need improvement, but still, it's crazy impressive.

Project tracking apps: these workflows can quickly grow complex. Shotgun, using Python, can track and even create assets; artists automatically load and publish to their correct directories, which allows them to concentrate on making great imagery. Confusion is the enemy of VFX. This topic is a huge haystack on its own, so I won't go into it today, but it's certainly worth looking into, even as a small studio. Besides Shotgun's documentation, Alan Latteri and Jesse Morrow did a few videos about their Shotgun integration that I found extremely helpful. On the OS front, although I prefer Mac or Linux, honestly you need to run a Windows box for some tools; that's unavoidable at this point.

Houdini Digital Assets: HDAs are similar in concept to a Nuke Gizmo. You can make tools that work from within Houdini, but they also can work from within other apps: Unreal, Unity, Maya, etc. One of the benefits is that an HDA automatically handles world scale and rotation differences without user input, which is really helpful, because each app has its own predefined scale, and in some cases in the game engine the rotation is different than what we're used to in VFX. So, why is this complexity worth the hassle?
Setting this up correctly enables incredible creativity and flexibility. I hope you find some of the following info helpful.

Yeah, Andy. Yeah, dude, when we did our run-through the other day, I had this epiphany moment as you were explaining your philosophy of getting things back and forth between apps, and it's really true. Flame used to be an island, or at least maybe that's how many of us Flame artists used to work with Flame. But now that more and more it's part of a pipeline, and as artists we're going to go back and forth from app to app based on needs, being aware of how different apps deal with data, how they deal with scale, orientation, and things like that is just as important as being aware of color management when you pass images back and forth between apps. So I think this stuff is more relevant than ever, especially with, like you said, all the game engine assets available to us.

Well, I think part of the thing is this: all our tools are good at different things. For interactivity and dealing with clients, Flame is the place. That's my tool. I'm most fluent in it. I don't think about the interface. It's just my tool of choice. Now, when you want to get something that's hero-CG looking, a CG app is where you want to be. It's got ray tracing and different kinds of ways of addressing high-quality CG images. But games go back towards the interactivity side. Now, the quality isn't quite as good as a CG app if you compare them next to each other, although that's getting closer. But then again, you can get decisions made quickly. You could have a world where we see our location. In some ways, filming is a game; the whole project is a game. So when you bring it in, you're using each tool for what it's good for. For instance, if I want to get real-time focus pulls, I can do that in the game engine and bring that back into either CG or into Flame. Or camera movement. Those things can be captured in real time from the game engine and then brought back to the tool we prefer to use them in. Whereas if we tried to get it all working from all sides, you're always going to end up being a jack of all trades and master of none.

I think the other thing is that the game engine stuff is so fast that it's actually a lot of fun. The bigger picture is that I feel like we need to get our clients to have fun making this stuff. The CG ship is a very big, cumbersome thing, and it's sometimes our job to shield them so we can get our creative decisions made. But we need tools that allow us to have these conversations in real time. And this is one of those tools.

Do you want to dive into the second video? Yeah, let's do that, because the second video is basically more Flame-centric, and the third video is more some of the stuff I was playing with between Unreal and Houdini. So let's do some of the Flame-centric tools next.

When bouncing between different apps, it's critically important to understand scale. Apps treat this differently. Flame works in millimeters. Houdini works in meters. Unreal works in centimeters. And not only that, in Unreal the Z-axis is up instead of the Y-axis, which is what we're used to in visual effects. I used to import an FBX into Flame, view it through the camera from the same FBX, see that everything looked okay, and move on. That might look right, but it's wrong.
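To make that unit bookkeeping concrete, here's a minimal Python sketch of the conversions just described: Flame in millimeters, Houdini in meters, Unreal in centimeters, with Unreal also Z-up. This isn't any app's actual API, and the axis remap shown is just one common convention; real FBX exporters expose this as an option.

```python
# Minimal sketch of cross-app unit/axis bookkeeping.
# The per-app scales come from the talk; the axis remap is one common
# convention, so treat these functions as illustration only.

UNITS_PER_METER = {
    "flame": 1000.0,   # Flame works in millimeters
    "houdini": 1.0,    # Houdini works in meters
    "unreal": 100.0,   # Unreal works in centimeters
}

def convert_units(value, src, dst):
    """Rescale a length from one app's working unit to another's."""
    return value * UNITS_PER_METER[dst] / UNITS_PER_METER[src]

def unreal_to_y_up(point):
    """Remap a Z-up Unreal point (x, y, z) to a Y-up VFX convention.
    Swapping Y and Z is the usual remap; handedness fixes vary by
    exporter, so check your FBX export settings."""
    x, y, z = point
    return (x, z, y)

# A 1 m object from Houdini should read as 1,000 units in Flame,
# which is exactly the offset check described in the video.
assert convert_units(1.0, "houdini", "flame") == 1000.0
print(convert_units(150.0, "unreal", "flame"))  # 150 cm -> 1500.0 mm
```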
The reason this matters: world scale is critically important, especially when you're bouncing between apps. As an example, I've got here an FBX that's a one-meter cube that I've brought into Flame from Houdini. If everything's correct, Z should be offset by exactly 1,000, and it is; it exactly lines up. So 1,000 units, which is millimeters in Flame, is correct for the one-meter object from Houdini. If you have trouble with this, I've noticed in the past that this ruler here, with Define enabled, defaults to 400. I changed it to 1,000, turned Define off, re-imported my model, and that seemed to fix it.

Why is it wrong, and what's the impact? Why is this important? What I have here is the same FBX brought into two different scenes, one where the scale was at 10%, the other where the scale is at 1,000, which is correct. And you can see even the detail in the model, the way it's calculated, is completely wrong here compared to what's going on here where it's correct. The numbers that we use in Flame, when I scale this back, you can see what these units mean is wrong, whereas here it's correct. Everything can get very touchy if your scale is off. Your controls may not act as you would expect.

So, Mixamo is an app that Adobe has, with characters and animations that are downloadable, which you can then bring into your game or CG app. It's a huge library; it's a lot of fun. And so I wanted to bring it into Flame. Now, sometimes when you bring an FBX into Flame, if I were to bring this in with all these defaults on, nothing happens. What's interesting, though, is that if you turn all these off, including the geo, then what you get is something that says we want to align the animation at frame one. And you get these locators, and you can connect to the different parts. There's a lot of information we could attach things to after the CG is rendered. I even went as far as creating a character based off simple geometry. So here's a quick example of me swapping out an animation with another animation from Mixamo. You know, it's a little tedious, and I'm sure there's a way of Python-scripting this that would make it much more automated, but it actually works pretty well. So basically, here is just a character that's using the Mixamo data. Other CG apps will do this better, and you still want to use CG apps for a lot of things, but this is a real nice, quick way of getting human-sized things and animations into your scene quickly.

So here's a sped-up walkthrough of the vast Quixel library. There are so many assets to take a look at; it's really worth taking your time to pick through. You'll spend many days looking. Many of these assets are available to import into Flame. It's worth checking out. You can have 3D previews of what each asset is. There are surfaces, there are decals, there's all sorts of stuff. You can bring these Quixel assets into Flame. These are wireframes; you can see what the levels of detail are. Level of detail is a way for games and CG apps to lighten the load, so that background objects automatically become less detailed. You can see the difference in the amount of geometry between a higher level of detail and a lower level of detail of the same asset. Now, if we turn on solid shading, we can see what these assets would look like in renderable form.
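Since level of detail just came up: conceptually it's nothing more than a distance-based mesh swap, along the lines of this toy sketch. The distance thresholds are invented for illustration; engines compute the switch per asset automatically.

```python
import math

# Toy level-of-detail picker: the idea behind what Quixel/Unreal do
# automatically. Threshold distances (in meters) are made up here.
LODS = [
    (0.0,   "LOD0"),  # full-detail mesh for hero/foreground work
    (10.0,  "LOD1"),
    (50.0,  "LOD2"),
    (200.0, "LOD3"),  # lightest mesh for deep background
]

def pick_lod(camera_pos, object_pos):
    """Return the LOD name for an object at a given camera distance."""
    dist = math.dist(camera_pos, object_pos)
    chosen = LODS[0][1]
    for min_dist, name in LODS:
        if dist >= min_dist:
            chosen = name
    return chosen

print(pick_lod((0, 0, 0), (3, 0, 4)))    # 5 m away  -> LOD0
print(pick_lod((0, 0, 0), (60, 0, 80)))  # 100 m away -> LOD2
```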
So another thing: games and CG apps can use instancing to also lighten the load. We don't have instancing exactly in Flame, but our antiquated particle system can generate instances of the same objects. In this case, I have this Quixel asset, which is just a rock, in this terrain, and I have these coconuts that are being generated by the particles in random orientations on the terrain. So it's kind of a fun hack. It's an insanely fun hack. Yeah, you can end up layering that stuff. I've had some luck doing that. This is kind of pushing things to a degree that, especially on my old trashcan Mac, you're going to break things here and there. But because they're all referencing the same object, it still stays reasonably light. It can take a minute to load up, but once you're actually working, it seems very interactive.

Yeah. Well, let's break down some of the things that were in that video. You know, all the time, as Flame artists, we're asked to do look dev, to kind of figure out: what's this going to look like? How could the shot look? And there are so many times where the thing that we create then has to be passed on to someone else. Like, we have a CG department at Lively, where we work. If I was going to sit in session with a client and kind of mock up a scene for them, the only way that really benefits us as a company is if I've built that to scale. You know, I didn't just fake it. I didn't just throw a model of something over on the right-hand side there and scale it up and down so it looked right. If I build my world to scale, when it gets passed off to CG, who always build things to scale, then it's going to be relevant information. I think that's what you're illustrating in the first part of the video there.

Yeah. When you get your scale nailed down correctly, suddenly things such as depth of field calculations just come along for the ride. And you can check these things. Sometimes, you know, when you're getting a track back, you don't realize that it's out of scale. It's good to check these things. But if you're making sure that your information is clean and accurate, then when your FBX goes out to the other guys, they can just start running. There's no troubleshooting. It should just work.

And I remember when you posted the little motion capture tests there on Logik. That's just wild. I mean, I never thought about importing an FBX with basically everything turned off just for the purpose of getting the locators in. I didn't know it would work that way until I did it. It's what we'd call an undocumented feature. I suppose so. I mean, you're just turning off the geometry, but I guess what's interesting is that the FBX import, at some points, just sort of shrugs and doesn't know what to do, so it gives you nothing. But when you start turning off certain things, you realize that, oh, okay, there is information here that's helpful. So if you wanted to attach something to a CG character, or around an area of a CG character, that locator is very helpful, right?
Even on top of the render. Especially if you've got a camera and then you've got a locator, you can create an element that just sits in the scene at the same place as that locator, without having to go back and eyeball it. Totally.

And talk to me a little bit about Quixel Bridge. I thought that was again an example of a great resource that's out there. Okay. So Quixel was a subscription-type product before; it was acquired by Epic. And apparently the way it works is that if you use it within Unreal, you get access to the whole library. You sign up through your Epic account, you put the assets that you'd like in your shopping cart and download them, and then you export them to whatever you're using them in, whether it be Unreal or, well, not Flame at this point, but Maya's in there. Whichever you choose, it exports, and then it shows up there. Now, FBX is a great all-around bridge for all of our tools, because it works between all of them. Alembic does too, but I think overall the one that has the broadest acceptance would be FBX. So it's just interesting to start playing with those things and understanding how the normal maps and the roughness and all that stuff can be used in Flame. We might not be ray tracing inside of Flame, but that doesn't mean your stuff can't still look really nice with those assets. You can layer them; they have a separate app called Mixer that allows you to layer these on top of each other to create grunge and all sorts of stuff, so tiling doesn't become apparent. And I'm really only scratching the surface of what's possible with this stuff.

The fact is that, because of the way FBX works, you can get the stuff in and out of any of these apps. And that's helpful for going back and forth. Maybe my final render is in Unreal, but maybe I want to do my focus pull in Flame, where I can add the lens distortion and the layers, et cetera, that I prefer. So I could bring the data back into Flame for my focus pulls and defocus. Again, you're just choosing which app for which thing. And if you can make it so that they all sort of talk, if you understand the workflow enough to get them talking to each other, or understand what the idiosyncrasies are between them, then when you bounce back into whichever app, you're just working. You're just continuing to work. Now, if you were to use Shotgun, you can actually have it automatically load apps or load assets and such, which could also be helpful, though I think you might need to get a little Python-heavy to get that working correctly. But the idea is that once you have it in, just get the artist working. And then suddenly this thing's there. They don't have to worry about it. And that's what I like. I like to keep artists making pictures, not trying to figure out where the hell something is on disk. Right. Where the hell something is on disk. Is this the right level of detail? Is this the right depth of field? Is this the right color space? Is this the right anything? Yeah. Yeah.
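That "keep artists making pictures, not hunting the file system" idea boils down to resolving publish paths from a template instead of by hand. Here's a minimal sketch in the spirit of what a Shotgun/Toolkit setup automates; the template string, fields, and helper names are invented for illustration and are not Shotgun's actual API.

```python
from pathlib import Path

# Minimal sketch of template-based publish paths, in the spirit of a
# Shotgun/Toolkit pipeline. Template and helpers are hypothetical.
TEMPLATE = "{project}/shots/{shot}/{step}/publish/{asset}_v{version:03d}.fbx"

def publish_path(project, shot, step, asset, version):
    """Resolve exactly where a publish lives, so nobody hunts on disk."""
    return Path(TEMPLATE.format(project=project, shot=shot, step=step,
                                asset=asset, version=version))

def next_version(project, shot, step, asset):
    """Scan existing publishes and hand back the next version number."""
    folder = publish_path(project, shot, step, asset, 1).parent
    if not folder.exists():
        return 1
    taken = [int(p.stem.rsplit("_v", 1)[-1])
             for p in folder.glob(f"{asset}_v*.fbx")]
    return max(taken, default=0) + 1

v = next_version("spotX", "sh010", "comp", "hero_cam")
print(publish_path("spotX", "sh010", "comp", "hero_cam", v))
# -> spotX/shots/sh010/comp/publish/hero_cam_v001.fbx on a fresh disk
```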
And I think that, as you understand how the color space works, or how the scale works, there are answers to these questions; there is a ground truth. You can choose to be creatively different, but this gets you in the ballpark real fast. So, even though not every 50-millimeter lens is going to be exactly 50 millimeters, it's going to get you really close, provided you have the correct sensor for the lens info you're putting in, and the f-stop. Basically, you use sensor, f-stop, focal length, and your focus distance, and the rest of it is all calculated automatically. The one thing I didn't have a chance to get to today was a more in-depth explanation of my depth of field tool, which maybe I'll do another time. Sounds good.

Yeah. I've always wanted to say at the end of one of these things: oh, it was great having you, and the next time you're back in town, please stop by. So thank you. You've made my amateur broadcaster dreams, you know, another one of them, come true. I hope to be able to stop by sometime. Well, that seems a bit far off. Amen. Yeah.

When we did the run-through, you were talking about data, and maybe one of the wild things about all the game engine technology is that it kind of opens the mind up to seeing other possibilities. Right. You were telling me that you had rigged up a Tangent panel to control the camera, right? Yeah. That's something I posted on Logik a while back. You know, everything can be a game controller. In my case, I would use my Tangent panels, using expressions to drive my camera: its position, its focus, all those things. Now, what's tricky about that is the Tangent panels are designed for color, and especially the big wheels on the panel each reference a different thing: offset, gamma, and gain. So the math for the expressions is different for each one, which is a little annoying. But the nice thing about a color panel is that it's designed to take multiple inputs in real time. So I could be using all the wheels and they should still be capturing in real time. That's why it works so well for color. To be able to use that kind of technology in Flame is really helpful. The one thing you can't do is capture in real time with the panels in Flame. And the game engines at this point don't support color panels, because color panels are exotic for what they're doing. But you can use other kinds of game controls. You can use Vive controllers. You can use your iPad. You can capture your camera using an iPad. There are lots of great demos on the Unreal site; look at the things on virtual production, and look for things on Sequencer. Sequencer is basically the game engine's timeline. And in the next set of videos, I'll show a little bit of that; it's where you get your FBXes out. So basically, inside the game engine you have this big haystack of assets, and then you have levels where these assets are populated and do things. Those levels go into Sequencer, and then you can do takes of these things, capturing the images and capturing the data. And then you can export that back out, either as images or FBX, and bring it back into another program. Gotcha.

Does anybody have any questions for Andy before we move on to the next topic? So far, so good. This is great, man.
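For reference, the "rest is calculated automatically" that Andy describes is the classic thin-lens depth-of-field arithmetic. Here's a small sketch of it; this is the standard textbook formula, not his Flame tool, and the circle-of-confusion value is an assumption suited to a full-frame sensor.

```python
import math

def dof_limits(focal_mm, f_stop, focus_mm, coc_mm=0.03):
    """Classic depth-of-field limits from focal length, aperture,
    focus distance, and circle of confusion (all in millimeters).
    coc_mm of ~0.03 suits a full-frame sensor; scale for smaller chips."""
    # Hyperfocal distance: focus here and everything to infinity is sharp.
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = (focus_mm * (hyperfocal - focal_mm)
            / (hyperfocal + focus_mm - 2 * focal_mm))
    if focus_mm >= hyperfocal:
        far = math.inf
    else:
        far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

# A 50 mm lens at f/2.8 focused at 3 m:
near, far = dof_limits(50, 2.8, 3000)
print(f"sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
# -> roughly 2.73 m to 3.33 m
```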
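And the Tangent-panel trick amounts to remapping each wheel's raw output range onto a camera channel with an expression. A toy sketch of that remap follows; the wheel ranges here are invented, since, as Andy says, offset, gamma, and gain each report differently.

```python
# Sketch of the expression idea: each wheel reports values in its own
# range (offset / gamma / gain behave differently), so each needs its
# own remap before it can drive a camera channel. Ranges are assumed.
WHEEL_RANGES = {
    "offset": (-1.0, 1.0),
    "gamma":  (0.0, 4.0),
    "gain":   (0.0, 2.0),
}

def remap(value, wheel, out_min, out_max):
    """Normalize a wheel's raw value, then map it onto a camera channel."""
    lo, hi = WHEEL_RANGES[wheel]
    t = (value - lo) / (hi - lo)
    return out_min + t * (out_max - out_min)

# Drive focus distance (0.5 m .. 10 m in Flame millimeters) from 'gamma':
print(remap(2.0, "gamma", 500.0, 10000.0))  # mid-wheel -> 5250.0 mm
```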
I don't know. Maybe nobody. Wait a minute. Oh, there is one. Hold on. Here you go. Oh, there's two. The first one: how much does Unreal Engine cost to use? Free. Free. It's the best kind of answer. No, I think the thing is that if you make a game that makes over, I don't know, a million dollars, then you have to pay for it at that point. But for all intense purpose... intents and purposes. Yeah. For all intents and purposes, it's free. And Quixel is free too. So just install it. It's on all OSes. That's excellent.

What about: what's the best way to get started with Unreal? Just start playing with it. Once you load Unreal, there are tutorials within the app. Then there's also the Epic marketplace, which has unbelievable amounts of resources. They have stuff for architectural visualization; they have stuff for creating different kinds of games. If you're working with CAD assets, you have different needs than if you're making a 2D shooter game or something like that. So you start figuring out what these things are. I'd say the biggest thing is it's a little overwhelming to just jump in and try it. There's a lot to it, but it's really not that hard for some of the stuff. Again, a PC is going to be necessary; if you wanted to use VR, a PC is necessary. You can't run Steam on a Mac. You might be able to do it on Linux; I'm not exactly sure.

I see a question about MIDI. Yes, you can use MIDI controllers. Even if you wanted to use your PlayStation controller or your Xbox controller, those can be mapped in there. So there's a real wealth of opportunity out there. It's just figuring out where it is in the haystack; that's the tricky part. Luckily we're all in lockdown, so we have time to look into things like this. But I think the reality is that there's a lot of exciting stuff going on, and the possibilities are really quite broad.

From Brandon, here's a question: would you recommend first learning how to build environments in Unreal, or learning how to get data out of existing scenes? I mean, it's to taste. The fact is that creating environments in Unreal is straightforward. If you use the Quixel Megascans assets, you literally just drop them into your scene and start moving them around. It's amazing. But if you're looking to get the data out, or you're trying to create stuff in takes, that's where Sequencer, or the Take Recorder, comes into play. Out of the many menus, that's the one that's probably the VFX hub, especially for bringing data in and out. I think cameras can only come in through Sequencer instead of the asset browser. Cameras and geo can go out through Sequencer.

You know what we should do? Let me show, in case anybody hasn't seen it, the Unreal test render that you shared on Logik. Let me bring that up here. Yeah. And I mean, this is really rough. These settings weren't tweaked at all. But this is all procedurally generated CG at 4K HDR. In 10 minutes. 1,300 4K frames rendered in 10 minutes. And it says here: on a Z620 with 64 gigs of RAM and a GTX 1080 Ti. Yeah. I mean, it's really an old machine. But this is just what the assets do by themselves.
I mean, when you're inside the Unreal engine, you'll notice the foliage kind of drifting. That's just part of what foliage does in the game engine. That's not actually keyframed animation; that's just the physics going through what they call the foliage, affecting each piece differently, so you're able to see things float around and blow around automatically. That's wild, man. Let me stop the share. We'll have to get into it more next time.

Hold on one sec. We have another question here, from Brooks: can you get an NVIDIA GTX card and put Windows on the Mac Pro to do the PC aspect, you know, to use it as a PC? Yeah, I think theoretically you can. Now, for me, the tricky part is that, personally, I want to be able to have Flame and the game engine, and sometimes Houdini, open at the same time. If you're using Boot Camp, you can't do that; your Flame isn't going to run on Windows. Also, from experience, and granted my hardware is a bit old at this point (I'll try to upgrade it at some point), when you have all these memory-heavy apps open at the same time, you're going to get crashes. It's going to happen. So having Houdini and Unreal on the PC while I'm working on the Mac with Flame is sort of helpful; it takes the load off of any one machine. So the answer is yes, and I know people do Boot Camp and use Windows on that hardware for game stuff. For me personally, it was not the way to go, because I want my Flame up at the same time. And I would suspect that on Linux the graphics drivers are more up to date for the game engine stuff than they may be for Flame, so whether or not that can dual boot, I honestly don't know. Again, you still might need two boxes for this kind of thing, with Flame having its own versus the game engine.

Well, Jack chimed in and said you should get separate boxes. And I'm not going to argue with Jack. I agree. It's helpful. Get two strong boxes. Get a whole raft of them. But of course, none of us are working. Right. And then Jack is chiming in again saying Houdini will kill a machine. Houdini is fantastic. What's very interesting about Houdini, and some of the stuff we'll get into in the next video, is that it's able to create assets that are populated within the game engine. So if you wanted to generate buildings from OSM data, then depending on what the data says each building type is, you could have different buildings just automatically form. You can have streets that generate, and at the corners of each street there's a higher probability of having traffic lights, and all sorts of stuff. Suddenly you're building this Rube Goldberg machine that, when it goes into the other app, automatically works, because what it's doing inside of Unreal is using the Houdini code to do the work. As far as I understand it, the plug-in is using a headless version of Houdini to work within the game engine. It's really fascinating. And then Houdini also has something called PDG, a Procedural Dependency Graph, which would allow you to do things like... so, I've seen it work in Unity.
I haven't been able to get it working yet in Unreal, but say you have a bridge asset inside your environment in your game. As you move it around, the terrain can actually know that things need to move around that bridge asset, and PDG will separate the terrain into tiles, so you'll see the tiles start to update inside the game engine while you're in it. It's really kind of mind-blowing to try to wrap your head around. But what it basically does is take processes and make them available to everything on your farm. You know, I think we talked about this earlier, between you and I: particle generation is usually a linear thing. It can only be done on one processor; you need the previous frame to understand what the next frame is going to look like. Now, if you wanted to do a wedge of particle sims, PDG would basically stream it out to lots of different machines, then assemble it back at the end and show you the assembled wedge using every machine that you have. It's really a decentralizing kind of workflow. It goes even further: there's a format called USD, which is in Unreal and Houdini, that has even more promise. But that's also a big, big haystack. Well, let's move on to the third video.

In a post a while back, I showed how you could use Google Maps 3D data to generate simple geo, plus a little motion capture guy in Houdini, so that you can get an idea of what real-world scale is. And so here are a couple of cameras I've set down, a ground-level version. Just going to pause this, or interrupt, for one second. So this is taking place in Houdini, and you've downloaded Google Maps data? Correct. Yeah. So in the post that I had, basically, there was a hack which takes the data from Google Maps; in debug mode, you can capture that data before it gets to your monitor. And then you can bring it into Blender. You have to go through Blender for this, apparently, at least at this point. I'll re-send the link. But basically, I just went through and did what it showed how to do, and it does in fact work. Now, the data looks nice as a model. But here you can see that when we look at it at real-world scale, it's not really holding up. It looked nice from afar, but this isn't going to sell anybody on previs. The resolution is something like one pixel equals one meter. Okay, let's go back. You can really see how lumpy it is. Great for getting a rough idea of something, but at the end of the day, it's not necessarily the best of solutions.

There's another option called Mapbox. It's a plug-in for Houdini using OSM data; I think it's available in other applications as well. So if we take a look at it up here: yeah, there's our little motion capture guy. Now, what's interesting is that we have building data generated by the OSM data. OSM is OpenStreetMap, so it has all sorts of information that comes along with it: different kinds of buildings, building zones, all sorts of crazy stuff. What's handy about this kind of solution is that it's a way of generating clean geometry that's based on something real. The heights are part of the OSM data, as is the ground level.
You can see that the buildings, although very basic, are much more useful as geometry. Again, there's a little motion capture guy right down there. So, just to give you a quick idea of what Mapbox does: it generates a heightfield. It's like a volume that basically has the height mapped to it; it's not geometry, it's more of a volume. And you can also do things like generate buildings from it. You can see there's actually kind of an interesting amount of detail. And if we look at what's coming in through the data, you can see things like how many levels, street addresses. Not all of this stuff is particularly helpful, but there are ways of filtering it, so it sets certain kinds of buildings to automatically generate, for instance, in Unreal. Roads can be generated too, and the red areas are things like intersections, so that, for instance, you can end up having traffic lights more populated at intersections, or different kinds of things happening. It's a procedural workflow that uses this data to generate things in an intelligent way. That's an intro to Mapbox.

Continuing on with Mapbox, I have a few movie locations pre-set up, so that we can look at our actor in different environments. This is Monument Valley; big fan of Westerns. I've got a medium, I've got him lined in. He's very, very small here compared to this, and by the time we get out to the super wide, he's really tiny. You have to zoom in to really find him. Really, this is a three-by-three Mapbox element; it looks to be about 13,000 meters across, roughly. Another example: we might have our actor hang out at the mountain pass from Django. These are set up just by typing in the lat-long of these areas, then looking from different vantage points. He is just tiny compared to this environment. Now, an environment like this is too rough for detailed work, but you can start adding Quixel-type assets, like trees and grasses, to make this stuff really work. Being able to see the greater topography and scale of an area is really quite helpful.

Moving on, we've got our actor hanging out on the Braveheart mountain range. So we've got him right on the mountainside. Even so, it's a really big mountain; he's just a tiny, tiny little guy. And by the time we're out to the super wide, I think he's in this area right over here. This is Ben Nevis; I think this is where Mel Gibson was running up. It's right there. Another option: perhaps we could go visit the fictional area where Up was set. I think it was this waterfall; it's based on Angel Falls in South America. And we have our actor to scale, where he's really standing on the edge of the falls. The color map didn't really come through on this one. Every once in a while, you'll run into areas where the color map doesn't come through. Again, a lot of this stuff will be better if you texture it using Quixel assets or something like that, so you can then scatter foliage and rocks and trees and make it look realistic at a close level. But you still get a great idea of how vast this is. We have Niagara Falls; he's standing here on the edge of the falls. If we look off to the wide, there's that tiny little guy over here. We've got some buildings roughed in from the OSM data. And then a bit wider, you can kind of see all of Niagara Falls. I think this could have a lot of use for scouting locations.
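A side note on those size estimates: with Web Mercator tiles (the scheme Mapbox uses), ground resolution falls out of latitude and zoom level, which gives a quick way to sanity-check the "about 13,000 meters across" figure. A sketch, assuming the standard tiling math; the tile size and zoom level here are assumptions for the example.

```python
import math

# Ground resolution of standard Web Mercator tiles: meters per pixel
# is Earth's circumference over 256 at zoom 0, halving with each zoom
# level and shrinking with latitude.
def meters_per_pixel(latitude_deg, zoom, tile_px=256):
    circumference = 2 * math.pi * 6378137  # WGS84 equatorial radius, m
    return (circumference * math.cos(math.radians(latitude_deg))
            / (tile_px * 2 ** zoom))

lat, zoom = 37.0, 13  # roughly Monument Valley's latitude (assumed zoom)
span = meters_per_pixel(lat, zoom) * 256 * 3  # a 3x3 block of tiles
print(f"{span:.0f} m across")  # ~11,700 m, same ballpark as the video
```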
Some of this stuff isn't going to be used as final work, but you can start putting people in at a real scale and then kind of understand what's going on. You know, I picked an area of midtown Manhattan for our pal to be standing on top of a building, like such. So again, he's standing on top of the building, more or less to scale. And then we go to the super wide. I've only generated the buildings for the center tile of the Mapbox. This is an easy way to quickly generate huge worlds. Between Houdini and Unreal, there are lots of ways of instancing geometry to take advantage of all this OSM data; use it just to visualize real-world scale.

Quixel Bridge is a huge library of assets that they've collected from photogrammetry. The detail of all these things is just phenomenal. There are so many different things to pick through on here. There are textures, there are 3D objects. It's definitely worth your time. Like, for example, with the 3D assets, there's so much stuff. So the idea is that you're signed in, and basically you can download these, and once you've downloaded them, export to your application: Unreal, or other things that take FBX, like Maya, et cetera. I believe the library is completely free as long as your final renders are through Unreal.

Quixel has also given us some demo projects to take a look at. These are pre-made animations of camera and assets. This is the closest thing that we have to getting things in and out for visual effects. This is where you would output your image sequences or movies; it's also where you might export FBXes, if you have animated characters that you were using in there. The amount of detail in these assets is jaw-dropping. I mean, it's a full 3D environment, so we can get real close to things. We're going to then look at it through a camera view that's been put into our Sequencer, and we can see these pre-built shots. There's so much there; it just takes a lot to sort through. The quality is still quite stunning. You know, I'm on old hardware, so I think the lighting needs to be rebuilt. A lot of it comes down to how efficiently you can pre-build the stuff so that it runs well and cleanly, so you can record the imagery that you're pumping through it, along with things such as pulling focus. You can be moving lights around. And all those changes can be captured, either through FBX or through the image sequence, at really high quality. It's amazing. Absolutely amazing. Yeah, there's a lot out there. It's really not that hard to start playing around. It's worth checking out.

Totally. I just want to make sure everyone sees: Brooks pointed out here in the chat that he's seen people tie the OSM information to the time of day and sun angle during the year, so they can know where the sun is going to be on a given date for a shoot. Absolutely. And then, of course, to be able to map that stuff out to scale is where it gets invaluable. Yeah, I mean, I think that one of the reasons why I prefer Houdini is that almost any data can come in through it, and if it's not acting the way you'd expect, you can force it into another form so that the app you're exporting to reacts as you'd expect. Like, for instance, there are going to be times, with the Quixel Megascans assets like the 3D plants, where things need massaging.
You might have to go through a CG app first and then export as an FBX, because some things just aren't available in Flame. For instance, they have the idea of atlases, which is kind of like multiple plants on the same texture, so the asset accesses a section of that texture. We don't have an easy way to do that, or level-of-detail calls and stuff like that. So you might want to go through the CG app as your hub, to then say: okay, Flame, here's your clean FBX. And most of them can do something like that. You just gave me a great idea for a feature request: adaptive degradation based on distance from camera. Yeah, it's level of detail. If Flame could import FBX with level of detail, it would automatically be there. Wild. Or it could be. It could be.

Does anybody have any other questions for Andy? Keeping an eye out. Oh, here you go, also from Brooks: have you done any cloud rendering? I'm assuming through Unreal. I haven't. I mean, the thing is that cloud rendering, I think, would be more critical for the CG-type apps. Right now the rendering in Unreal is so fast, I don't think it's really all that necessary. And I haven't yet figured out how to push all that data across, because it's all based inside the project, so I haven't tried it for Unreal. For CG-type apps, you might want to figure out a way of doing your renders so that you can use distributed processes. But I'd say that in the game engine, the rendering is so fast anyway. I figure until I need to do such a big world that it's taking me a couple of hours or days to get a render out, which would be like traditional CG, I'm happy as it is. Sweet.

Any other questions for Andy? All right. Well, thanks, man. Thank you for taking the time to share. Oh, it was great. It's amazing, and eye-opening, to see what else is out there and what options are available for visual effects. So thanks, as always. And thank you for always contributing to the Logik community, man. I really appreciate it. All good.

All right. Let's close this out. Coming up on Logik Live next week, we're going to have Naveen Srivastava from Toronto doing some spot and shot breakdowns for us. Got a bit of a schedule change for you here: on August 2nd, we're going to have Renee Tim come on, and she's going to take us through building and running your Flame business at home. She's really an amazing, amazing person. She reached out to me, and I'm so thrilled that she wants to share this information with everybody, especially with everything going on, so definitely tune in for that. August 9th, we're going to have Chicago's own Randy McInty. And, there we go, August 16th: using Flame with Shotgun, with the aforementioned Alan Latteri and Jesse Morrow. Be sure to check out the Logik podcast; I have a new episode coming out this Wednesday, so if you haven't subscribed in your podcasting app of choice, please do. And of course, this episode and all the past episodes of Logik Live are available on logik.tv. There are links there to the podcast and all other kinds of great Logik content. Please take the time, if you haven't already, to subscribe to the YouTube channel. And of course, we want to thank our sponsors, AJA and CineSys Oceana. That's going to do it for this week's episode, everybody. Thanks so much for tuning in, and we will see you next week.