My name's Matt White. I'm the CEO and founder of a small outfit called Berkeley Synthetic, working at the intersection of simulations and AI. I'm also the co-founder and chair of the Open Metaverse Foundation, a part of the Linux Foundation, which was originally founded back in 2006. And I'm a board director and chair at the Metaverse Standards Forum. So what is the Metaverse? You may recall Justice Rehnquist's famous statement here. Does anyone want to take a stab at defining what the Metaverse is or will be? No brave souls. To be honest, I don't know what the Metaverse is either. But there are a few things I do know. The Metaverse does not yet exist, and there are some core tenets we have to be cognizant of and follow in building out a shared simulation. Some of the obvious ones: we need interoperability, to facilitate communication and movement through virtual worlds. We need persistence, the ability to maintain state and create a global shared simulation that is fairly frictionless. We need to support the creator economy, an economy that moves in and out of the Metaverse and empowers creators and users. Then immersive experiences; that is fundamentally the foundation of the Metaverse, whether it's a game, a scientific experiment, a virtual laboratory, or a training gym for autonomous agents. It also needs to be social. I've seen several definitions of the Metaverse that completely exclude that, which is bizarre to me, but it certainly needs to be a social space. And of course it should be accessible and inclusive, a space that is safe for everyone to participate in. So I don't think it makes sense to define up front what the Metaverse is or will be.
I think what makes sense as builders of the Metaverse is to take the bottom-up approach: build the underlying infrastructure, allow developers and creators to build their experiences on top of an interoperable framework, and the Metaverse will then define itself. So at the Open Metaverse Foundation, we have eight foundational interest groups. We've divided these into areas that we believe will be catch-all basins for most of the activity, and we may change that with time. These are all fairly intuitive, but I'll go through them quickly. Networks: this is how we move between virtual worlds, how virtual worlds are interconnected, and how we handle things like state management, naming, and addressing, the fundamental elements that have significant parallels in the internet today. Users: decentralized ID, wallets, authenticity of users, that sort of thing. Then legal and policy: we're trying to be very cognizant of the fact that this can't be a lawless space. The Metaverse needs guardrails and bounds, so we are looking at that as well. And just as policy is starting to affect artificial intelligence, it will eventually find its place in the Metaverse. Then virtual worlds and simulations: these are the engines that run the virtual worlds. We have macro simulations, where we simulate an entire environment, but also smaller-scale micro simulations, where we simulate physical and visual elements. And digital assets: if you're part of the Metaverse Standards Forum, digital assets may mean something slightly different, but still close to the same thing. Here we're talking more about formats and metadata: is it USD or glTF, that sort of thing.
And then the portability of digital assets; these are all fairly important elements. The last one I didn't touch on is artificial intelligence, which is the topic of today's talk. The unique element of AI here is that it can, does, and will affect all of the other seven foundational interest groups. There are applications across all of these spaces, and we will touch on them throughout today's presentation. So I'm going to take a quick stop on this synthetic realities slide. If you are familiar with the simulation hypothesis, or you believe the simulation hypothesis, or you're a follower of George Berkeley and his immaterialism, or you particularly enjoy The Matrix, this is the concept of a faithful simulation of the real world in all its elements, down to the atomic and quantum level. Now, will the Metaverse be a synthetic reality? I can't say, and today that would be computationally impossible. But it's kind of cool to think about the idea that maybe we create this Metaverse as a synthetic reality and it then becomes the reality for some downstream society. The reason I wanted to touch on this is the concept of atomicity and composability. When you look at things from the bottom up and you're able to create these foundational elements, that helps facilitate the building of the Metaverse, as opposed to making it very prescriptive and saying, okay, top down, this is what the experience is going to be. Is it going to be Roblox? Is it going to be Minecraft? Is it going to be this blockchain game? Those are very prescriptive methods, and they don't embrace openness or interoperability.
And so the point is that we need to look at things from a bottom-up perspective so that we can build the Metaverse effectively. The other thing I want to touch on is that today's experiences are very deterministic; you have programmatic rules. If we build the Metaverse with a stochastic element to it, we can allow developers to build in that space in a way that is not constrained. So the challenge we face right now is how to build towards something that is truly open and full stack, top to bottom. This is the wagon wheel of AI in the open Metaverse. There's certainly a lot here to cover, but I do believe that AI will play the most significant role in the development and operation of the Metaverse. For something at such a huge scale, human moderators ultimately cannot keep up; human moderators cannot moderate the internet today. So we have this very powerful tool, and I believe we have only scratched the surface of what we can do. Some of the major innovations of the last ten years have really started to culminate and gain mainstream traction, particularly generative AI. Being able to generate new content from learned distributions is very compelling. It's been around since the 80s, with the restricted Boltzmann machine and the work of Dr. Hinton, but by 2014 Ian Goodfellow's GAN paper had started to generate more and more interest in this space. We're at the point now where, if you've seen Midjourney version 5, you get some pretty photorealistic outputs. Over the course of the last decade we saw generative images, then generative text, speech, audio, video, and now 3D digital assets are becoming more viable. But I do want to highlight that generative AI is not the only show in town.
The creation part of the Metaverse is really compelling, but we also have applications in fluid simulations, and in fabric and hair simulations that can be done with neural networks: simulating natural phenomena, facilitating scientific research, and being more physically accurate than many of today's programmatic methods. Conversational AI may be the de facto mechanism by which we interact with the Metaverse. We may have autonomous agents, our personal assistants, that facilitate a lot of the activities we do in the space for us. Voice may become the primary method of interaction, and these are all elements that can be addressed with AI, deep learning, reinforcement learning, and so forth. I do think that 3D asset creation and world building will be, at least initially, the primary activity in the Metaverse. Folks will want to build their spaces; this harks back to everybody jumping onto GeoCities and trying to create their own web page. The Metaverse will be created incrementally over time, and the workflows in place today, which are very procedural, can be improved through the use of AI. One of the important innovations in this space in the last few years has been Mildenhall et al.'s paper on NeRF. Being able to effectively recreate 3D scenes from 2D images inside a 5D coordinate space has proven pretty compelling. And although NeRFs do suffer from long training times, NVIDIA's Instant NeRF really helped improve that, and I only see future improvements making this much more accessible. Actually, I want to stop for a second: if anyone wants to ask a question while I'm on a slide, I'm happy to answer.
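To make the NeRF idea above concrete, here is a minimal sketch of the two pieces the paper combines: an MLP queried at 5D coordinates (3D position plus 2D viewing direction) that returns color and volume density, and the volume-rendering quadrature that accumulates those samples along a camera ray. A toy analytic function stands in for the trained network, and the scene (a soft sphere at the origin) is invented purely for illustration.

```python
import math

def toy_field(x, y, z, theta, phi):
    """Stand-in for the NeRF MLP: maps a 5D coordinate
    (3D position + 2D viewing direction) to (rgb, density)."""
    density = max(0.0, 1.0 - (x * x + y * y + z * z))  # soft sphere at origin
    rgb = (0.5 + 0.5 * math.cos(theta), 0.5, 0.5 + 0.5 * math.sin(phi))
    return rgb, density

def render_ray(origin, direction, theta, phi, n_samples=64, t_far=2.0):
    """NeRF's volume-rendering quadrature: step along the ray,
    weight each sample's color by alpha times remaining transmittance."""
    dt = t_far / n_samples
    transmittance = 1.0
    color = [0.0, 0.0, 0.0]
    for i in range(n_samples):
        t = (i + 0.5) * dt
        p = [origin[k] + t * direction[k] for k in range(3)]
        rgb, sigma = toy_field(*p, theta, phi)
        alpha = 1.0 - math.exp(-sigma * dt)   # probability of absorption in this step
        w = transmittance * alpha
        for k in range(3):
            color[k] += w * rgb[k]
        transmittance *= (1.0 - alpha)
    return color
```

Training a real NeRF simply adjusts the network so that rays rendered this way reproduce the input 2D photographs; everything above the field function stays the same.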
So just throw your hand up; we're still a small enough group. There are other aspects here that can be accomplished with neural networks, like style transfer. You may have a photorealistic 3D avatar, and you go into, say, Minecraft, apply a Minecraft style transfer, and now you've matched that world's style. 3D scene generation is a bit out of reach right now; it's not something we're able to do. 3D asset generation is still progressing: DreamFusion and Stable DreamFusion are looking better these days, but they're certainly not fully there. Still, there are really compelling reasons to apply AI in this space, and certainly boundless applications. One of those is the ability to move avatars between worlds, and to control these characters. Aside from autonomous agents, which I'll get to on the next slide, interacting with environments is a little coarse when you throw on a headset and you're just pressing a controller and moving around. There are neural methods that have shown we can mimic human actions very well. Of particular interest is OpenAI's work on Video PreTraining (VPT). They took Minecraft videos from YouTube, taught a network to replicate those movements, used some reinforcement learning to fine-tune, and then took those agents from the offline world and dropped them into Minecraft to actually go and execute tasks. So the idea that we would only be moving ourselves around in the Metaverse is probably naive, because we have the ability to automate a lot of the tasks we may want to do.
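The first stage of VPT is, at its core, plain behavioral cloning: supervised learning on (observation, action) pairs harvested from videos. A toy sketch of that recipe, with a linear softmax policy standing in for VPT's large model; the observations, actions, and hyperparameters here are all invented for illustration.

```python
import math

def train_bc_policy(demos, n_actions, lr=0.5, epochs=200):
    """Behavioral cloning in miniature: fit a linear softmax policy
    to (observation, action) pairs by minimizing cross-entropy,
    the same supervised recipe VPT applies at vastly larger scale."""
    dim = len(demos[0][0])
    w = [[0.0] * dim for _ in range(n_actions)]
    for _ in range(epochs):
        for obs, act_label in demos:
            logits = [sum(wi * oi for wi, oi in zip(row, obs)) for row in w]
            m = max(logits)
            exps = [math.exp(l - m) for l in logits]
            z = sum(exps)
            probs = [e / z for e in exps]
            # gradient of cross-entropy w.r.t. logits is (probs - onehot)
            for a in range(n_actions):
                grad = probs[a] - (1.0 if a == act_label else 0.0)
                for j in range(dim):
                    w[a][j] -= lr * grad * obs[j]
    return w

def act(w, obs):
    """Greedy action from the cloned policy."""
    logits = [sum(wi * oi for wi, oi in zip(row, obs)) for row in w]
    return max(range(len(logits)), key=lambda a: logits[a])
```

After cloning, VPT fine-tunes with reinforcement learning; the cloned policy just provides a far better starting point than random exploration.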
Facial animation is another interesting application. NVIDIA's Audio2Face does real-time analysis of audio and then mimics facial movements, as opposed to having to use sensors on your headset to analyze your facial movements. Yep; yeah, you're referring to being bound to the GPU. I haven't seen a paper recently that illustrates that, and I don't think the solution is bound to the GPU, but I do think we're going to get there in terms of full facial synthesis through audio analysis. I just don't know a paper off the top of my head that covers that. A timeline? I don't know; I'm not involved in that research, so it would be hard for me to surmise. But, and this may not be a popular statement, we need to move off of being bound to a particular vendor's hardware, in terms of openness. One vendor can't power the Metaverse. The other topic is autonomous agents, and there's been a lot of hype around them lately. Having worked with autonomous agents, I don't really believe that AutoGPT and BabyAGI and this whole chaining mechanism are really autonomous agents. I suspect that the Metaverse will be filled with more autonomous agents than it will be with avatars. Even when you exit the Metaverse, you may disengage and allow your agent to take over your avatar and go do its thing. Now, that work exploits the nature of language; it uses large language models to do this sort of communication. And I can't recall the paper, but there was one a few years ago where, I want to say it was Meta, they had set up some agents and the agents started communicating with each other.
And then suddenly they're not talking English anymore; they've developed their own language. Very interesting stuff. But I do think there will be a convergence of personal assistants and autonomous agents, and the breeding ground right now is going to be personal assistants: these hyper-personalized assistants, I don't know if it's Clippy come back from the dead, but very tailored to you. Your agent can then go and interact with other agents to get something accomplished. Why am I on the phone with Comcast for an hour and a half trying to figure something out? Let my agent go do it. So I'm going to jump way ahead. There are a huge number of concerns; every technology can be used for good and for bad, and a lot of the challenges we're going to face, we're going to face before the realization of the Metaverse. I think Sam Altman is going to be speaking to Congress next Tuesday, and I'm very interested to see how that goes. Fundamentally, there will be regulation. When will that happen? I'm not sure; I suspect sooner in the EU and UK than here. But there have to be guardrails and there have to be consequences. You can't say "my autonomous agent did it" and absolve yourself of any responsibility. I'll kind of jump to this slide here, which is basically looking at everything through an ethics lens for everybody involved in the creation, building, research, and implementation of AI. Every AI system has to be applied responsibly and used responsibly. And around the topic of autonomous agents, human-centered and aligned is fairly important.
You don't want agents contravening fundamental human rights and that sort of thing. My prediction, and we're getting a little off the Metaverse here, is that the wide-scale usage of synthetic media and autonomous agents is going to create some serious problems, perhaps in the next election. I think the reaction is going to be pretty stiff, but I don't know how you mitigate that. Yeah, go ahead. So basically, whenever that topic comes up, I get worried, because I find, and I'm not a politics follower, that any laws that get made are going to be made by corporations, because of the regulatory capture they have. If Sam Altman is going to go talk to Congress, it's not because Sam Altman, savior of humanity, is trying to make better laws for the world; it's because Sam Altman, the Y Combinator guy, is trying to make money and get regulatory capture for himself. So I get worried when talk of laws comes into this: you predict a problem and you make laws for it. I wonder what your thoughts are on making laws, and what kind of laws. You talked about moderation; is it truly a Metaverse if people don't get a say? Is it a walled garden or is it an interoperable open space? I'll leave it at that first. Okay, so there are two elements there: one is AI, the other is the Metaverse. I think the onus falls on virtual world operators to enforce good behavior in their virtual worlds. We have these applications of agents as gatekeepers, as moderators, as detectors of threats, that can enforce rules and behaviors: identify behavior patterns and be able to mitigate them.
Because human moderation is notoriously biased, uneven, and doesn't scale. Any legislation would then have to be enforced through some similar mechanism in a virtual world. Now, are there going to be virtual worlds that are totally lawless, like a Silk Road virtual world? Probably, and obviously we have the legal system to handle that. Now, because AI's reach goes far beyond the Metaverse, you asked about big tech influence, and I think that's obvious from who was invited to speak to Kamala Harris last week. There isn't representation. But I do think that open source AI, at least open source generative AI right now, is a growing movement. LLMs are dropped on a daily basis; there's a lot of activity in that space. Stability AI has made a commitment to open source while also trying to commercialize its own products. At some point, open source needs a voice, needs to sit at the table, to say: look, you've talked to these three big tech companies, but there's another community here. This might be getting too far down the rabbit hole and a bit philosophical, and I know the open source community doesn't generally lobby, but with AI, because it will be affected by regulation, we have to figure out how to convey our needs to legislators. Stella from EleutherAI has already provided some details to the EU, and it sounds like they didn't follow her recommendations. Traditionally this kind of activity is accomplished through lobbying, but honestly, it's beyond my... I really love that; I definitely want to get into the open source side of it, perhaps at a later point. I don't want to interrupt this, but just quickly, to emphasize the point about laws.
I've been going around being very vocal with the people I talk to, saying: let's not get crazy about lawmaking with this stuff. Because if people get in a frenzy about making laws to prevent something, the laws are not going to get made to prevent the thing you think you want to prevent. What it gives people is a mandate to make laws, and then the laws that get made are the laws they want to make. So I'm generally trying to make people skeptical of whether lawmaking is needed here. To that point, I'm trying to understand what laws don't exist right now that would need to be created for this. If you murder someone through electronic means, that is still you doing the murder. What laws do we need to make for autonomous agents? Why would there need to be another law? If you harass someone through electronic means, whether it's the Metaverse or email or a scam or whatever, those laws are already on the table. I get worried about the lawmaking, because people get into this thing of "let's make new laws to prevent this." What new laws are needed? What can you think of as a law that needs to be made? And what laws are they going to make: outlaw any open source model? GPU clusters being tracked like terrorist cells? I'm making a meme, basically, but you know what I'm trying to say. They're going to try to do anything they can to keep that moat, if you're following the discourse. So I'm trying to understand what laws need to be made that aren't already there. Yeah.
So the idiosyncrasy here is scale. Today, let's say I'm a scammer using an auto dialer, calling tons of numbers, waiting for somebody to try to claim their $100 Amazon gift card, versus using autonomous agents to go conduct that. There's obvious scale in using automation to effect that. And you could be talking to an agent on the other side of that phone and not even know it. Ultimately, it's about accountability, and I do think the laws on the books cover that today: okay, well, you launched all these agents. Now here's the caveat: there are emergent properties. I told the agent to go order groceries, and it went and told somebody something that affected their life. Where does that liability sit? I think ultimately accountability has to be with the operator. The gray space I see is when agents are let loose and, let's say, become untethered; then we've lost this whole property of controllability. You have to be able to control the technology, and if it isn't controllable, how do you hold anyone accountable for it? Is it the developer, the researcher who came up with the model architecture, the last person to touch that agent? I don't know, but I do think that's a topic that will need to be addressed in the future; I don't think we're there yet. Even if you look at LangChain and doing some of these things, you can set GPT-4 up to do a few things. But what I'm trying to parse out is: if you start an agent, first of all, it's on the person using the tool to be accountable for the tool, obviously. You can use a hacksaw to cut a tree or to do something else.
If it's a broken tool and it misfires, that's also on you. I think with LLMs, they're black boxes and we need to learn more, but I'm trying to understand: if you launch an agent, say BabyAGI, and it goes ahead and commits fraud, I'm sure the law is going to say, well, you committed fraud, go to jail. So are we trying to create laws to prevent people from going to jail, or to put people in jail? Because I'm pretty sure the laws that get made are going to be laws to put people in jail, not some law that says, hey, it wasn't this person's fault, it was Meta's LLaMA that had a mistake in its weights, so someone at Meta is accountable instead. I very much doubt that. My feeling is that if a law is going to get made, it's going to get made to put more people in jail, or to prevent people from being able to do research, or whatever. So I'm trying to understand your point, because you don't seem like someone who's wary of laws; you seem to think it's probably going to happen, that it needs to happen at some point. I'm trying to get some sense of some real law that needs to be made, some real policy that needs to be enforceable. What do you mean, rape in the Metaverse? How do you do that in the Metaverse? Yeah, no, I know the article you're talking about, but I'm bringing up exactly those things: people are talking about this ephemeral space, this digital space, as in "I got harassed in the Metaverse," "I got raped in the Metaverse." It's digital, it's a computer. Can we talk about these things separately, so that everything doesn't get diluted?
Yeah, I think all of that will obviously play out in the courts; ultimately they make those decisions. I don't think we're going to see laws on the books anytime soon with respect to that. The only thing I'd say to wrap that up is that we, as stewards of the Metaverse and developers, have to act responsibly and anticipate the possible abuses of what we create. Actually, does anyone have the time? I just want to see where we're at. Fifteen minutes, I think. So I'm not going to spend too much time on image generation; it's pretty ubiquitous now. For PBR materials there are some applications here: integrating into your workflow, generating color, roughness, and metalness maps, all text-guided. I think where we move in this space is that, instead of one-shot generation, we have things like ControlNet to do some guidance, and in the Metaverse it becomes a little more interactive: being able to iteratively create your outcome. I don't want to dismiss anything with images, because I do think there's going to be a very flourishing art community, whether that's AI art or human art. And there's this question of authenticity: how are we going to differentiate between what's generated by AI and what's generated by people? I do see a possible use for distributed ledger technology in that space, to assert authenticity. The one I do want to stop on, because I think it's going to be fairly pervasive, and pervasive even before we get to the Metaverse, is this concept of hyper-personalization. Today we can understand consumers and users fairly well; with hyper-personalization, we're able to model human behavior more accurately.
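The mechanism behind this kind of tailoring can be sketched very simply: build a preference profile from a user's prior activity, then score candidate content against it and surface the best match. The tag-frequency profile below is a deliberately simplistic stand-in for the learned behavior models described above; all names are invented.

```python
def build_profile(interaction_history):
    """Turn prior activity (lists of tags on items the user engaged with)
    into a normalized tag-frequency preference vector."""
    profile = {}
    for tags in interaction_history:
        for tag in tags:
            profile[tag] = profile.get(tag, 0) + 1
    total = sum(profile.values())
    return {tag: n / total for tag, n in profile.items()}

def rank_content(profile, candidates):
    """Rank candidate items, given as (name, tag list) pairs,
    by their affinity to the user's profile."""
    def score(tags):
        return sum(profile.get(tag, 0.0) for tag in tags)
    return sorted(candidates, key=lambda item: score(item[1]), reverse=True)
```

A real hyper-personalization system replaces both pieces with learned models and updates the profile in real time, but the rank-by-affinity loop is the same.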
If you've seen some of the interactions with these large language models, you can actually do pseudo-reasoning through language to discern, say, what product a person is going to buy based on prior activity. Hyper-personalization will tailor content to a particular user in real time, creating an experience inside the Metaverse that is highly tailored to you, dynamically, based on your preferences. The applications outside the Metaverse are certainly in things like video generation: imagine watching a film on Netflix and getting a different ending based on my preferences, so now I like that film even more. So hyper-personalization has a lot of applications in content generation and in marketing and advertising, and I do think it's going to become more pervasive. It's kind of a Minority Report type of situation. One of the neat spaces for neural networks is simulations. We've got these physics-based and visual simulations, and they can be synthesized with neural networks. Take rigid body physics: it's fairly low computational overhead, but there have been some really promising papers on emulating rigid body physics with neural networks, and using deep learning you're able to outperform mathematical models. There was one particularly interesting paper from Stanford and the University of Oxford on a technique called DENSE that was able to accelerate scientific simulations by up to billions of times, which is pretty incredible.
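The emulator pattern behind work like DENSE is: run the expensive solver offline to produce training data, fit a cheap model to its input-output map, then answer new queries by inference alone. A toy sketch of that pattern, where an invented damped-oscillator integrator plays the costly solver and a piecewise-linear fit stands in for the neural network; none of this is any paper's actual benchmark.

```python
def expensive_simulator(k):
    """Stand-in for a costly solver: integrate a damped oscillator
    with stiffness k for 2 simulated seconds in tiny explicit steps,
    returning the final displacement."""
    x, v, dt = 1.0, 0.0, 0.001
    for _ in range(2000):
        a = -k * x - 0.2 * v   # spring force plus damping
        v += dt * a
        x += dt * v
    return x

def build_emulator(simulator, lo, hi, n=32):
    """'Train' a cheap emulator on solver outputs. The solver calls
    here are the expensive offline phase; each later query is just
    a piecewise-linear interpolation, i.e. cheap inference."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [simulator(x) for x in xs]

    def emulate(x):
        t = (x - lo) / (hi - lo) * (n - 1)
        i = min(max(int(t), 0), n - 2)
        f = t - i
        return ys[i] * (1 - f) + ys[i + 1] * f

    return emulate
```

The same split, expensive training against solver outputs followed by cheap inference, is exactly the economics the cloth-simulation result below relies on.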
The same applies to cloth simulations and soft body physics: neural networks can train on the output of an existing solver. For instance, there's a paper called Neural Cloth Simulation that learned from the outputs of Maya's nCloth physics solver how to accurately predict how cloth would behave, and it can do that with 5,000 times less computational resource requirements. It's doing inference to emulate the solver; the training is expensive, but the inference is relatively cheap. On speech: this is becoming more pervasive with a lot of these new platforms where you can synthesize speech, and obviously the media hypes it up, people being held ransom from a few samples of their voice. But DeepMind produced a paper, WaveNet, released in 2016, which demonstrated early on that generative AI can be used effectively to generate different forms of audio: speech, music. It can work with waveforms, it can work with MIDI, and these methods can be broadly applied. We have OpenAI's MuseNet, which came out a few years ago and can produce four-minute musical compositions with 10 distinct instruments, effectively guided by text. MusicLM was just released, in January I want to say, by Google Research, and it can generate several minutes of 24 kHz audio strictly from text input. And there's also Whisper, which OpenAI released, trained on 680,000 hours of multilingual, multi-task supervised data, with some pretty impressive outputs. So I think the use cases here are pretty broad.
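One concrete way to see what makes WaveNet-style models workable on raw waveforms: stacking causal convolutions whose dilation doubles each layer grows the receptive field exponentially while depth grows only linearly. A small helper computing that receptive field with the standard formula (1 plus the sum of (kernel − 1) × dilation over all layers); the block and layer counts below are examples, not the paper's exact configuration.

```python
def wavenet_receptive_field(n_blocks, layers_per_block, kernel_size=2):
    """Receptive field of stacked causal dilated convolutions:
    within each block the dilation doubles per layer
    (1, 2, 4, ..., 2^(L-1)), and the block is repeated n_blocks times."""
    rf = 1
    for _ in range(n_blocks):
        for layer in range(layers_per_block):
            rf += (kernel_size - 1) * (2 ** layer)
    return rf
```

With kernel size 2, one stack of 10 doubling layers sees 1,024 past samples and three stacks see 3,070, which at 24 kHz is still only about 0.13 seconds of audio. That is why long-range musical structure remains hard for purely convolutional waveform models.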
In the Metaverse, as an audio experience, you want to be able to move through virtual worlds without having to go in, when you build a virtual world, and prescribe all of the audio effects that will happen in that world. There have been other papers that can take video and generate sound effects from it without any guidance, which is pretty impressive. And I think there are a lot of applications here, because I do think the Metaverse will be an experience where we use voice more than mechanical means like keyboards to interact with the space around us. A quick stop on other multimedia; I don't want to skip over this. Large language models are quite capable of producing text, and they will only get better. Model sizes are going to get smaller, and model architectures are going to change. The days of the large language model, as Sam Altman has put it, are done, and I believe that already. Once we move off this transformer architecture, or enhance it, we can get away with smaller models that are more portable, and the open source community is already starting to produce these. Although LLaMA is not fully open source, there are accessible open models like OpenLLaMA and GPT-J with smaller parameter counts that we can use. On commerce: this is where we can leverage the concept of AI smart contracts. As opposed to writing procedural smart contracts, in a decentralized environment you can use AI smart contracts that are effectively models, and they perform some function.
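A minimal sketch of such a model-backed contract gate: a release function that combines a hard procedural condition with the score of a learned fraud model. The `fraud_model` callable, threshold, and unit count are all hypothetical illustrations of the pattern, not any real smart-contract platform's API.

```python
def make_release_gate(fraud_model, threshold=0.5, required_units=1000):
    """Sketch of an 'AI smart contract': funds release only when enough
    verified deliveries exist AND a learned fraud model scores the
    transaction history as low-risk. `fraud_model` is any callable
    mapping a feature dict to a risk score in [0, 1]."""
    def release_funds(verified_deliveries, features):
        if verified_deliveries < required_units:
            return False            # hard procedural condition, as in plain code
        return fraud_model(features) < threshold  # learned, generalizing condition
    return release_funds
```

The procedural check could be written without any model; the point is the second condition, where a model can weigh signals (transaction timing, counterparties, patterns) that would be impractical to enumerate as rules.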
And so if we look at a smart contract that releases funds after 1,000 widgets have been sold and their delivery has been verified, we can use a model to achieve that outcome. Procedural code can do that too, but because models generalize better than code, a model can also look at outside elements, like transaction behavior and other things that might be indicators of fraud, and flag that and hold up transactions. I did spend some time earlier on hyper-personalization; I think that's going to be a big deal. Search and discovery: in the metaverse, finding anything is probably going to be fairly difficult, and current search and discovery methods are probably not going to scale, especially in decentralized environments. So we have to look at alternatives, using artificial intelligence that can help us locate what we need, whether that's people, products, whatever. We did skip forward to this earlier, so I'll cover the right side, because we covered the left: authenticity in the metaverse. Who are you? How can I verify your credentials, with things like voice pattern recognition? And, Sam Altman again, I keep saying his name, which is bugging me, but there was Worldcoin, and the debacle where you had to authenticate yourself with some biometric mechanism. That might be a little too much for folks, but certainly deep learning can do a lot of these authenticity checks. We also want to be concerned about the health of users in the metaverse. If somebody is spending too much time in the metaverse, or their behavior is irrational, we may want to be concerned about that and flag that user early on.
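One common AI-based approach to the search-and-discovery problem above is embedding similarity: encode items and queries into a shared vector space and retrieve nearest neighbors. The hand-made 4-dimensional vectors below are purely illustrative; a real system would use a learned encoder and an approximate nearest-neighbor index.

```python
import numpy as np

# Toy "index" of metaverse items with made-up embedding vectors.
items = {
    "red dragon avatar":   np.array([0.9, 0.1, 0.0, 0.2]),
    "medieval castle map": np.array([0.1, 0.8, 0.3, 0.0]),
    "jazz club venue":     np.array([0.0, 0.1, 0.9, 0.4]),
}

def search(query_vec, k=1):
    # Rank items by cosine similarity to the query embedding.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    ranked = sorted(items, key=lambda name: -cos(items[name], query_vec))
    return ranked[:k]

# A query embedding near the "dragon" vector retrieves that item first.
print(search(np.array([0.8, 0.2, 0.1, 0.1])))
```

Because similarity search only needs the vectors, the index can in principle be sharded or replicated across peers, which is part of why this style of retrieval is attractive for decentralized environments.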
Then accessibility: being able to do text to Braille, or real-time language translation, these sorts of things can all be effected with deep learning. So I know we kind of jumped around in this slide deck; I'm going to go back to the benefits here. AI does have some significant benefits. Aside from things we talked about, like computational requirements at runtime, we have lower barriers to entry. If I want to create a 3D model, I'm not so good at that, so I'm not going to be able to create avatars, and I'm not going to be able to sell those things. But through the use of generative AI, you can create those avatars, you can create assets, and you can do it in a very frictionless manner. We also have this idea of reduction of human error, and a more impartial application, especially when we talk about moderating environments. And continuity: AI doesn't sleep, so we can maintain continuity on a 24/7 basis. Let's say we need customer service inside the metaverse. I shouldn't have to wait forever; I deploy my agent, have it resolve the issue with another agent, and it comes back with some outcome. And I do see a possibility for the metaverse not just to reduce barriers to entry for creation, but also to create opportunities for economic growth. Let people that perhaps don't have $4,000 a year to spend on Maya, or don't have a laptop that's particularly well suited for creation, interact with these virtual worlds, create things, and monetize what they want to monetize. And we're short on time, so I'm kind of skimming through. There are a lot of risks. The misuse of AI, which we've talked about, as well as autonomous agents being able to amplify harms at scale.
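On the accessibility point above: uncontracted (Grade 1) Braille is actually a straightforward character mapping, sketched below; deep learning earns its keep on contracted Braille and on real-time translation, which are context-dependent. Unicode Braille patterns start at U+2800, with each of dots 1-6 setting one bit.

```python
# Dot patterns for the Grade 1 Braille alphabet (dots numbered 1-6).
DOTS = {
    'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
    'f': (1, 2, 4), 'g': (1, 2, 4, 5), 'h': (1, 2, 5), 'i': (2, 4),
    'j': (2, 4, 5), 'k': (1, 3), 'l': (1, 2, 3), 'm': (1, 3, 4),
    'n': (1, 3, 4, 5), 'o': (1, 3, 5), 'p': (1, 2, 3, 4),
    'q': (1, 2, 3, 4, 5), 'r': (1, 2, 3, 5), 's': (2, 3, 4),
    't': (2, 3, 4, 5), 'u': (1, 3, 6), 'v': (1, 2, 3, 6),
    'w': (2, 4, 5, 6), 'x': (1, 3, 4, 6), 'y': (1, 3, 4, 5, 6),
    'z': (1, 3, 5, 6),
}

def to_braille(text):
    out = []
    for ch in text.lower():
        if ch in DOTS:
            # Dot n maps to bit (n - 1) of the Unicode Braille cell.
            out.append(chr(0x2800 + sum(1 << (d - 1) for d in DOTS[ch])))
        else:
            out.append(ch)  # pass spaces and punctuation through
    return ''.join(out)

print(to_braille("metaverse"))
```

Numbers, punctuation, and contractions need additional cells and context rules, which is where learned models come into play for the harder accessibility tasks.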
Privacy, security, and safety: models train on lots of data, and unfortunately PII can sometimes leak into that. And bias and discrimination: bias in data, bias in models, and bias from unintended outcomes is a challenge. The big one I see is misinformation and manipulation. I mean, humans are doing it pretty well today; with AI, it's amplified. And so we need to be very cognizant of the fact that that will have an effect on people. Governance and accountability: where does responsibility lie when somebody is harmed? Addiction and overuse are certainly going to be an issue. And power imbalances: AI is already creating a power imbalance. Certain companies have more power because they have more capability, and an economic position that lets them train bigger models and execute on those and manage them. So do we want to be held to using someone's particular black box, or do we want to have the freedom to use our own models and have access to open source? Yeah, and job displacement. I think that's sometimes a nicer way of saying lost jobs. We have to understand that if we use AI as a tool, we can maintain continuity. If we treat it as an adversary and don't want to embrace it, then generally, like with every technology that's come before it, you end up losing out. And so we have to understand that there will be impacts on the economy. And then unintended consequences is really a catch-all for "I didn't expect it to do that, because I don't know how the model behaves; I don't know why it's doing what it's doing." So if I look at the landscape right now, this trend towards foundational models I think is not going to succeed long term. I think we have these open source models that are starting to become more accessible, and I see a community growing larger around that sort of paradigm.
And we won't be using Anthropic's and OpenAI's models to do everything that we need to do. We'll have access to community-based models, and eventually to models that are capable of running on a smartphone. And so I do see that we'll move away from that black box, that opaque "I'm relying on some third party to provide that service" arrangement. Now, will that totally go away? No. It's a good model for a lot of companies that want something reliable in the cloud and don't want to host it in house. But the generative AI community is more and more embracing open source, and I think as that momentum builds and the community comes together, it'll become stronger. And that's where consumers have choice and can make that decision for themselves. So, we did go through this. Can anyone do a time check? Two minutes? Okay, I'll wrap it up and then we'll do questions. The only other thing I wanted to leave here is something I put together a few years ago: the Metaverse Code of Ethics. It's loosely based on the ACM's Code of Ethics. It's in the presentation if you want to have a look at it, but it's basically guidance for those that are responsible for building the technology that will create the Metaverse we'll live in: build things responsibly. And a shameless plug for my upcoming book.