Hello, good evening, my friends. Really happy to see you all here, and thanks for coming. My name is Gleb Alexandrov, aka Gleb Alexandrov if I may use the Belarusian transliteration. I'm one of the guys behind the Creative Shrimp lab, making computer graphics, coffee, and Blender tutorials, and today we'll talk about photorealistic environments in the context of photogrammetry, aka reconstructing 3D from photos.

As it turns out, we've been using photogrammetry quite a lot in our Creative Shrimp projects. We've used it for digitizing some random historical artifacts, we've used it to create a centerpiece for a render, and we've built some miniature movie sets and then photoscanned them just for fun. But most importantly, we used it for creating some pretty cool realistic environments, environments that feel authentic and real. There are a few quite amazing things that we can do as 3D artists, and as Blender artists for that matter, with photogrammetry, and we will explore some of these things today.

But first of all, a question to you guys: have you used photogrammetry before, by show of hands? All right. And now raise your hand if you haven't used photogrammetry. Awesome.

As a quick refresher, the process goes like this: you take a bunch of photos of an object or space, making sure that each photo is as clear and sharp as possible, using your best capturing device, basically treating your photos as archival material. Then you load them into a photogrammetry software, and it reconstructs a point cloud based on your photos, and then a high-density mesh, quite likely with a texture or vertex colors. That's the basic photogrammetry workflow; anyway, the most generic photogrammetry workflow. Now I would like to expand upon this idea and talk about some workflows that are more specific in their goals, in the type of data that you gather, and in the type of data that photogrammetry outputs for you.
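As a small aside on planning that capture pass (my own back-of-the-envelope sketch, not something from the talk), you can estimate how many photos one full orbit around an object needs for a given frame-to-frame overlap; the 80% overlap and the 60-degree field of view below are just assumed example numbers:

```python
import math

def photos_for_orbit(overlap=0.8, horizontal_fov_deg=60.0):
    """Rough number of photos for one full orbit around an object,
    given a desired overlap between consecutive frames.

    Each new frame advances the camera by (1 - overlap) of the
    horizontal field of view, so a 360-degree orbit needs roughly
    360 / (fov * (1 - overlap)) shots.
    """
    step_deg = horizontal_fov_deg * (1.0 - overlap)
    # Round before ceiling to dodge floating-point noise like 30.0000000004.
    return math.ceil(round(360.0 / step_deg, 6))

print(photos_for_orbit())             # 80% overlap, 60-degree FOV -> 30
print(photos_for_orbit(overlap=0.6))  # sparser coverage -> 15
```

More overlap means more photos but a much more robust reconstruction, which is why photogrammetry guides tend to err on the generous side.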
I'll start with scanning whole, seamless environments. That may seem like a silly idea: why would you do that? But actually, it is pretty cool. When we arrived at this abandoned warehouse location, we just wanted to scan some modular assets, like this Volga car that has been sitting in there for half a century, gathering dust (and not only dust). And that's what we did. But then Nick, our camera person, proposed: why don't we fly a drone there to capture essentially an isometric view of this location? And when I saw it reconstructed (I believe it was in RealityCapture), I thought it could definitely work as a game environment. I know that nobody does it like that in game development, but I can totally imagine it working, especially when you pepper it up with some dramatic lighting and some special effects in Blender, for example. And imagine throwing in a playable character.

If you're lucky enough to have some amazing historical places around, like this 18th-century Bernardine monastery in Brest, you can definitely try getting yourself some really nice base meshes for your environments, scanned literally in one go, in one piece. It works really well with some extra lighting in Blender, stuff like that. Extremely realistic, in fact.

Of course, this method is not super versatile (what would you do with such a mesh?), and there is the performance consideration as well: the meshes produced in this way end up being pretty heavy, due to every texel and every polygon being unique. But then again, even the lower-resolution stuff found on Sketchfab, like this amazing funerary monument by musea, can work really well as a base for your cinematic experiments in Blender. That's some diabolic realism.

And when it comes to creating snapshots of entire environments, even with dynamic material properties like reflections and refractions, the novel view synthesis methods, like neural radiance fields and Gaussian splatting for that matter, produce even better results. These newer methods of
photogrammetry can capture the wispy details like hair, or vegetation for that matter, but also, well, reflections and refractions, something that was historically impossible to capture and visualize with the more traditional photoscanning methods. And now you can do that; in fact, that demo was created out of just 30 photos, in the open-source software by Inria that you can find and download from GitHub.

Speaking about reflections and refractions: we prepared this demo just before going to the Blender Conference, and, well, stuff like that was impossible to make a few months ago. The tech is developing really quickly indeed, and it has already been adopted by mainstream tools like Luma AI or Polycam for that matter, and you can literally download it and try it with your own smartphone, basically. Have you guys heard about Gaussian splatting before? It is an amazing technology indeed. Of course, the disadvantage of using neural radiance fields and such is that it isn't clear yet how to edit these holographic point clouds, because the technology of editing NeRFs is in its infancy; the field is pretty new, forgive me the pun. But it's pretty cool nevertheless.

Of course, a far more versatile approach would be a modular one, meaning finding a cool location and then dissecting it for props, with the goal of creating an asset library, basically, out of which you create new environments. Getting back to this abandoned warehouse.
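Before moving on to the modular workflow, one tiny aside on the splatting-style renderers mentioned above: conceptually, they blend many semi-transparent blobs along each camera ray, front to back. This is a heavily simplified toy sketch of that compositing idea (my own illustration, not Inria's actual implementation):

```python
def composite_splats(splats):
    """Front-to-back alpha compositing, as used by splatting-style
    renderers: each splat contributes its color weighted by its alpha
    and by the transmittance left over from the splats in front of it.

    `splats` is a list of (color, alpha) pairs sorted near-to-far;
    a single grayscale channel stands in for RGB here.
    """
    color = 0.0
    transmittance = 1.0
    for c, a in splats:
        color += transmittance * a * c
        transmittance *= (1.0 - a)
    return color, transmittance

# Two splats along one ray: a half-opaque bright one in front of a darker one.
c, t = composite_splats([(1.0, 0.5), (0.2, 0.8)])
```

Because every splat is weighted by the transmittance remaining after the ones in front of it, depth sorting matters, which is part of why these scenes behave so differently from a regular textured mesh.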
We could have scanned the hero assets, like this Volga car, but then also the medium- and small-scale props (you know how it goes), textures and PBR materials, everything that could be salvaged in there, with the goal of creating a cohesive asset library. Of course, each asset would then have to go through the optimization pipeline: a reduction of the polycount, transferring the geometrical detail from the raw scans onto lower-poly meshes, stuff like that, with the help of normal maps or displacement maps for that matter.

Sometimes this process can be a breeze, when you can take an asset back home, or back to the studio, and scan it on a turntable, maybe flipping it around in the process to trick the software into reconstructing a watertight, 360-degree representation of the object. Then the rest of the optimization pipeline is just crunching down the polycount, which can be done automatically, and then probably baking textures, which can also be done kind of automatically.

But it's never like that, because, you know, the raw photoscans are often extremely messy, and extracting a modular library out of a photoscanned environment is just a very laborious task which takes a lot of time. Imagine modularizing something like that. But when it works, it is probably the most versatile type of workflow associated with photogrammetry. Nothing, and I mean nothing, is more pleasurable than scattering such photoscanned assets around to build new environments. And the resulting environments end up looking as cohesive, or even more cohesive, than the environments scanned literally in one piece. It's the industry standard for a reason.

Yet another type of modular asset that can be produced by the means of photogrammetry is, of course, characters, meaning background NPCs for your environments, stuff like that. I'm sure you have seen stuff by Ian Hubert.
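Before we get to characters, that "crunching down the polycount" step can be sketched in a toy form. A real pipeline would use a proper decimation tool with an error metric, but vertex clustering, which is merging all vertices that fall into the same grid cell, shows the basic idea (an illustrative sketch of mine, not the actual pipeline from the talk):

```python
from collections import defaultdict

def cluster_decimate(vertices, cell=1.0):
    """Crude polycount reduction by vertex clustering: snap every vertex
    into a grid cell of size `cell` and replace each cell's vertices
    with their average position. Real decimation tools use smarter
    error metrics, but the core idea of merging nearby vertices is the same.
    """
    cells = defaultdict(list)
    for v in vertices:
        key = tuple(int(c // cell) for c in v)
        cells[key].append(v)
    merged = []
    for verts in cells.values():
        n = len(verts)
        merged.append(tuple(sum(axis) / n for axis in zip(*verts)))
    return merged

# Two vertices near the origin collapse into one; the far one survives.
dense = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (3.0, 3.1, 0.0)]
sparse = cluster_decimate(dense, cell=1.0)
```

The geometric detail lost in this step is exactly what the normal and displacement maps mentioned above are meant to bring back at render time.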
I think Ian has popularized this workflow. It goes like this: you just scan a character in the final pose they are supposed to be in, then insert a few bones and animate the head and torso movements in real time in Blender's viewport while recording that motion, and you basically get yourself a nice background character doing its background-character stuff. You don't even have to care about scanning characters in a proper T-pose or A-pose; you can just scan them in the final pose they are supposed to be in, and it will look sufficiently realistic for background NPCs.

The next logical step would be to apply real motion capture data to your characters, using markerless motion capture tools like Mixamo or Rokoko Vision for that matter. Here you can see Aidy performing some stunts on camera, and it kind of works. But better yet, if your character doesn't need to move at all, that's probably the best-case scenario; people tend to fidget all the time, making the scanning needlessly complicated. At least that's me, by the way.

Yet another type of asset for your environment is, of course, PBR materials and surface textures captured by the means of photogrammetry. The process is no different from scanning any modular prop, or character for that matter. You just find a patch of ground that you like and capture a bunch of photos of it, making sure that each photo overlaps with the next. Then you load it into a photogrammetry software like Meshroom or RealityCapture and let it reconstruct a high-density mesh, but then you bake it onto a flat plane, essentially, to derive such textures as albedo (or color) maps, normal maps, ambient occlusion textures, and displacement for that matter; most importantly, the height map that encodes the surface modulation. Then you load it into an image editing software, offset all these textures, and paint across the seams all at once. By doing that, we kind of make all the necessary ingredients
for amazing seamless PBR materials that look extremely realistic with micropolygon displacement in Blender. And no surprise, because, yeah, we basically captured the real surface height map. And it works in real time now, thanks to Blender's Eevee; that is important, I think. Oh, actually, I forgot to mention: we can also use this height map for some cool blending between such PBR materials. Of course, unlike its procedural counterpart, this approach is not as versatile as building materials with nodes in, let's say, Substance Designer, because you are always kind of limited by the resolution of the source footage, and simply by what was available to you to begin with. But I think it's extremely cool nevertheless. You might recognize this demo.

The last type of photoscan data that we're going to discuss today is lighting. You can either extract the lighting information and throw it away, or extract it and cherish it, depending on what you do. In a typical photoscanning workflow, you usually aim to remove any cast shadows, indirect lighting, and ambient occlusion in delighting tools like the Agisoft Texture De-Lighter, so you can set up your own lighting in Blender, obviously. Alternatively, we can flatten out the lighting during the capturing phase using an on-camera flash; it's kind of yet another method of reducing all the shadows and all that stuff, so we are left with the diffuse textures only. Or you can go as far as removing both shadows and reflections by means of cross-polarization, as seen in this amazing demo by James Candy, aka Classy Dog, that you can watch on YouTube. We don't have time to go into that, but basically, cross-polarization means that you apply a polarizing filter to both the lights and the lens to kill the specular highlights. A little bit of James here for you. Yeah.

But if you are scanning for a virtual movie production or something like that, you may want to retain the lighting information and to use it, and then you would want
to retain it in full, to capture the full dynamic range. We can take multiple exposures for each shot and then merge them into a single high-dynamic-range image, basically, and build our photoscans out of this high-dynamic-range data. The resulting environments would emit a proper amount of light, and the 3D objects put into such an environment would receive a proper amount of light, hopefully integrating seamlessly into it. And I imagine that is really great for virtual cinematography. Of course, capturing HDR images is kind of a time-consuming process; you have to merge all these multiple exposures, all that stuff. But when you do it right, it kind of works.

These are just some of the things we can do as 3D artists, as Blender artists, and of course as environment artists with photogrammetry. Even though the output produced by this technology is not always a super versatile one, the tech itself is pretty versatile nevertheless. So no matter if you are a level designer, a 3D artist like me, or maybe you work in archaeology or in science, photogrammetry gives us some amazing tools and workflows to basically digitize our physical world and bring it into 3D. There is something for everyone.

And before we wrap it up, I would like to say, of course, thank you to you, and I would like to thank our team, who have been helping to gather assets on the ground, sometimes taking a risk. Lena, Nick, Aidy, Pavel, Eugenia, thank you so much for your contribution. And yeah, now I will give you a brief demo of the photoscan stuff that we have been doing. The volume up, please. Thank you so much.