I think we've got a small session here, so I don't know why we need all of this. I fixed it. Anyway, I just want to share with you some thoughts about 3D as a content format. My name is Barclay, and I'm the product and growth lead at Luma AI. Luma AI is an AI 3D startup building 3D as a content format, as a solution for everyone. Wait, does this pen work? Sorry, does this work? No? I'll just come back here to my small podium.

Our philosophy is that 3D is the medium of reality. We see in 3D; everything we interact with on a daily basis is 3D. It's also how we think. Imagine you were dreaming last night. I don't know if you dream a lot, but I do. When you're dreaming, everything is vivid, concrete, 3D. It's not like playing a video or flashing an image. It's also how we want to express our ideas to others: as something concrete and vivid. And it's an art form that isn't widely practiced today because of the barrier to creation. But that's what we ultimately want to enable.

The magic of 3D is that it can become interactive. When we're playing keys on a piano, we're not sitting back consuming something; we're creating something and enjoying it at the same time. This content format brings the creation and consumption experiences together. It's also direct, as natural as writing on a piece of paper, taking a photo with a phone, or making something concrete with our hands. And it gets better with effort. We used to think the creation process was linear: I have a goal, I want to create something, I go from point A to point B. But creating in 3D, like in any art form, isn't like that. It's a bunch of trial and error that branches in different directions. It's more like a tree: you start somewhere, go out, try a little, come back, try something else, and eventually arrive at something amazing.

Luma AI is a platform that tries to bring 3D as a content format to everyone. The way people first interact with 3D is by taking a set of images surrounding a space. For example, I could go around this meeting room, capture all of you, and get a 3D reconstruction that captures the moment. Since our product first launched in September 2022, we've seen creative videographers and film directors using it to make amazing 3D reconstructions. The first one is Zayn, formerly of One Direction, who used our product to make a cool music video: it captures the moment of him riding a motorcycle and shows it in 3D. Similarly, we have a lot of captures of famous buildings and places shot with a drone, so people can play around with them and see every aspect of a particular building. And another artist, Karen X. Cheng, who is well known on Instagram, used our product to create drone-like shots without a drone.
So everything you're seeing here looks like drone footage, but it was made just by using Luma to walk around a bar and capture it, which produces this fly-through imagery. Because it's a 3D reconstructed scene, we can direct the camera movement, make it interactive, and view it from all kinds of different angles. So she decided to turn it into these drone-style shots. Similarly, people have been using us to scan themselves and bring themselves into virtual environments. And the last one here is a really cool underwater scene. I actually don't know the creator, so I don't know what he does or how he captured it, but it's probably from a dive: he likely used a GoPro to go around this underwater relic and capture it.

In the past, all of these were just shown as videos, but they're actually 3D reconstructions that people can play around with on their phones. Because of the limitations of the creation tools, and of the platforms these can be displayed on, a lot of them haven't been that appealing to people. Watching them now, you might think: these are just a bunch of videos; I don't see the big difference between this and a full 3D format as an art form. That's because we haven't had a platform, or a technology, for interacting with them. However, with the development of AI, things have changed a lot in the past half year, and we're able to bring more interactive experiences for people to play around with.

There's a technology called Gaussian splatting that came out earlier this year. A lot of you have probably heard of NeRF, which is behind much of what we talked about before and came out of the Berkeley labs. With Gaussian splatting, people can now generate these 3D formats in seconds, without waiting for rendering. What's amazing about this technology is that it doesn't require deep learning: there's no neural network to train in order to generate these 3D models. Instead, the scene is represented as Gaussian point clouds.

If you want to take a second, pull out your phone and scan the QR code up there. You can see a 3D version of the Colorado State Capitol in Denver at night and play around with it on your phone: drag it around, twist it, turn it, and view it from all kinds of different angles. I just want to confirm whether anyone was able to see it on their phones. Awesome.

So that's what we mean by the future of 3D. It's interactive, but it's also instantaneous: these views are available right after capture, without taking time to render on your phone. You may notice that it loads almost immediately. That was never possible before, but with Gaussian splatting we can bring up these interactive environments in a matter of seconds, so you can play around with them and even change them however you want, based on the camera trajectory and different view angles.
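To make the splatting idea concrete, here is a deliberately toy sketch of the core rendering step: the scene is just a list of colored, semi-transparent Gaussians that get projected and alpha-blended per pixel, with no neural network evaluated at render time. Everything below (isotropic splats, a simple pinhole camera, a pure-NumPy loop) is a simplification for illustration only; real Gaussian splatting renderers use anisotropic covariances, spherical-harmonics color, and a tile-based GPU rasterizer, and none of this reflects Luma's actual implementation.

```python
import numpy as np

def render_splats(points, colors, opacities, scales, cam_z, focal, size):
    """Project isotropic 3D Gaussians and alpha-blend them front to back."""
    h = w = size
    image = np.zeros((h, w, 3))
    transmittance = np.ones((h, w))   # how much light still passes each pixel
    depths = points[:, 2] - cam_z     # camera at z=cam_z, looking down +z
    order = np.argsort(depths)        # nearest first: front-to-back compositing
    ys, xs = np.mgrid[0:h, 0:w]
    for i in order:
        z = depths[i]
        if z <= 0.1:
            continue                  # skip splats behind or too close to camera
        # Pinhole projection of the Gaussian center onto the image plane.
        u = focal * points[i, 0] / z + w / 2
        v = focal * points[i, 1] / z + h / 2
        sigma = scales[i] * focal / z # screen-space footprint shrinks with depth
        # Gaussian falloff gives each splat soft edges.
        g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))
        alpha = np.clip(opacities[i] * g, 0.0, 0.99)
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha
    return image

# Render a random cloud of 200 splats; it's fast because it is pure arithmetic.
rng = np.random.default_rng(0)
img = render_splats(points=rng.normal(0, 1, (200, 3)) + [0, 0, 5],
                    colors=rng.uniform(0, 1, (200, 3)),
                    opacities=rng.uniform(0.3, 0.9, 200),
                    scales=rng.uniform(0.02, 0.1, 200),
                    cam_z=0.0, focal=300.0, size=128)
```

The property the sketch illustrates is the one mentioned in the talk: rendering is plain arithmetic over stored primitives rather than a neural network evaluation, which is why a captured scene can load and re-render almost immediately.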
With this development in AI and these new ways of rendering 3D, a lot of different use cases suddenly become possible. First, I imagine that in the future, video and 3D are going to merge, so that when you're viewing a video you can choose to passively consume it, just lying back on your couch and watching, but you can also interact with it: play around with it, change the angle, and, with mechanisms built into the video, interact with its elements. For example, you could freeze on a person and say, hey, I want to talk to this guy, or I want to move him around and see what he does. All of that becomes possible when AI can reconstruct these 3D views. When video and 3D combine, it becomes a new format, something like 4D: fully interactive, but still placed on a time sequence. That's something we're working towards.

There are also a couple of business use cases. With Luma, everyone can now start making Apple-quality product demos. You often see shoe companies like Nike with 3D displays of their shoes. Previously, you had to go to a studio to capture those with several different cameras and render them in specialized software. With Luma, anyone can use their phone, do a round of scanning, and generate these 3D reconstructions. Here are examples of people using Luma to scan an object and reconstruct it so they can display it on an e-commerce website or their own brand's site, or even use it as marketing material. All of these 3D models were made the same way: going around the object in a circle with your own camera, capturing it, and regenerating it.

Another big use case for 3D today is game assets and environments. Currently, making a game takes a significant amount of effort because of the cost and the time it takes to create all the 3D assets. That's why we have 3D artists and all of these game pipelines and game engines, if you ever want to make a concrete 3D game. However, we imagine a future where we can not only capture these assets from real life but also generate them from text or images, plug them instantly into a game, move them around, and integrate them into the game developer experience. And we hope that eventually games won't only be these triple-A, large-scale productions; mid-core games could also start being generated with AI, with game experiences that allow everyone to create something. One company we really admire and follow in this space is Roblox, which I'm sure many of you know, especially if you have kids. I have a little cousin who's now going to school in Berkeley.
He and his friends were able to develop a simple Roblox experience over a weekend. It's really basic, a pixelated format, and the game mechanics are simple, but he and his friends had fun, and they were able to show it to their classmates and say, hey, this is the game we built, everyone try playing it. So we imagine a future where, with generative 3D and reconstructive 3D, game development can be as easy as entering a few prompts that then generate a full experience. Roblox today is only a pixelated format, but with AI we can enable different styles, different formats, different narratives. We already have people using Luma to scan themselves and create these game-like experiences. These are really simple attempts right now, but with AI I think it will become easier to create all of these game assets, or even to put yourself into a game by scanning yourself, so you have 3D data of yourself that can interact with the environment, or just place yourself in that game world.

So that's what we hope for, especially with AR or VR potentially coming into place, and even if that takes longer than we hope. We believe 3D is the interactive format we all see on a daily basis, but right now, because the barrier to creation is so high, almost no one is creating this kind of content. We really believe it's going to be the new content format, possibly the next wave, like TikTok but in 3D, or even on VR glasses. That's what we're working towards: a platform to make 3D simpler and to turn the world, the whole internet, into a more 3D, more interactive environment. That's the end of my presentation, and thank you all so much.

Yeah, that's a really great question. Currently, when you capture something in Luma, we actually generate both a Gaussian splatting view and a NeRF, which can then be turned into a mesh or any of the standard 3D formats, like OBJ or glTF, that can be exported. Gaussian splatting isn't that effective if you ultimately want a 3D model, because the way it works is that it renders a point cloud, and that's it. There is research going on into how that point cloud can be turned into a mesh, but because it doesn't go through deep learning, it doesn't reconstruct the inside versus outside of surfaces, so right now it's harder to turn Gaussian splats into a mesh. In our product, we take the approach of generating both: when people are just viewing a capture, we show them the Gaussian splatting point clouds, but when they want to export it into a 3D format, we turn it into a mesh that they can then export. That's the current approach. But yes, I know there is academic research exploring how those point clouds can be turned into a mesh with Gaussian splatting. Does that answer the question? Yeah, sure. I mean, you can just try it out. Let me see if I still have the QR code.
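As a sketch of the NeRF-to-mesh export path described in that answer: a trained NeRF gives a density value at every 3D point, so you can sample densities on a regular grid, run marching cubes to extract a triangle surface, and write out a standard format like OBJ. The density function below is a made-up analytic sphere standing in for a real trained model, and the grid size and threshold are arbitrary choices; Luma's actual pipeline isn't public, so treat this as the general technique, not their implementation.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def density(x, y, z):
    """Placeholder for a NeRF's learned density field; here, a soft sphere."""
    return 1.0 / (1.0 + np.exp(20 * (np.sqrt(x**2 + y**2 + z**2) - 0.5)))

# Sample the field on a 64^3 grid over the cube [-1, 1]^3.
n = 64
axis = np.linspace(-1, 1, n)
xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
volume = density(xs, ys, zs)

# Extract the iso-surface where the density crosses 0.5.
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                            spacing=(2 / n, 2 / n, 2 / n))

# Write a minimal Wavefront OBJ (face indices are 1-based in the format).
with open("export.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for a, b, c in faces + 1:
        f.write(f"f {a} {b} {c}\n")
```

This also shows why the answer says Gaussian splats are harder to mesh: marching cubes needs a field with a well-defined inside and outside to find a level set in, whereas a splat scene is a bag of view-oriented blobs with no such field.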
Yeah, for this one, if you download Luma and try capturing any of these tables or objects, you can view it directly on the phone. I think that would be the best demo, because a 3D reconstruction currently still takes 20 or 30 minutes to create, so if you wait for me to produce a demo here, it will take some time.

Yes, exactly, that's already possible. Well, right now, if you go into editing the camera trajectory, it's still the NeRF format; we currently let people edit NeRFs. With Gaussian splatting that would be a bit harder, because the point clouds are constructed in a way where you're not reconstructing a mesh at the center, so you don't know where objects are or their relative positions; you're just rendering views based on the images you have. But it's possible, and it's currently in development, expected to come out in January. So right now you can edit a NeRF, and in the future Gaussian splatting as well. Is that time? Okay, okay. Okay, thank you all so much.
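To make the camera-trajectory editing from that last answer concrete, here is a small sketch of how a "drone shot without a drone" path can be described: once a scene is reconstructed, a novel view is just a camera pose, so an orbit is nothing more than a list of look-at matrices sampled around the subject, each of which gets handed to the renderer. The function names and conventions below are illustrative assumptions, not part of any Luma API.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a camera-to-world matrix looking from `eye` toward `target`
    (OpenGL convention: the camera looks down its local -z axis)."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right      # camera x axis
    pose[:3, 1] = true_up    # camera y axis
    pose[:3, 2] = -forward   # camera z axis points away from the target
    pose[:3, 3] = eye        # camera position
    return pose

def orbit_trajectory(center, radius, height, n_frames=120):
    """Camera poses circling `center`, like a slow drone orbit."""
    poses = []
    for theta in np.linspace(0, 2 * np.pi, n_frames, endpoint=False):
        eye = center + np.array([radius * np.cos(theta), height,
                                 radius * np.sin(theta)])
        poses.append(look_at(eye, center))
    return poses

# 120 poses orbiting the origin; each pose would be rendered as one frame.
trajectory = orbit_trajectory(center=np.zeros(3), radius=3.0, height=1.0)
```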