Welcome to the second half of the OpenSimulator Community Conference 2014. We're really excited and delighted to have our next guest, Philip Rosedale. I'm sure many of you are familiar with his work. He is, of course, the creator and founder of Linden Lab and Second Life, and now the CEO of High Fidelity, a company that I'm sure many of us are watching for all of the terrific work they're doing with inputs and controls. We're very happy today to have him talk to us about where he thinks the metaverse is going. So Philip, welcome. Thanks for coming today. Thank you. Thank you for having me. This is great. It's so much fun. You know, we haven't done very many live events like this, so it's an honor to have everybody here. In terms of format today, what I thought I would do is talk a little bit about where I think the metaverse, if you will, is going and some of the stuff we're doing, and then hopefully we'll have some good time for discussion and questions, which I always love. So first of all, thanks everybody very much for having me. I was thinking about this word simulation. Someone asked me in the session just before this talk started what I had thought, running Second Life, when the OpenSimulator software and then community first showed up, and I said that it was inspiring, because I knew we must have been doing something right if a number of people were willing to devote a tremendous amount of time and energy to build a whole back end of their own to complement the Second Life back end itself. So I was delighted. I never got a chance to share that, but that was a great moment. And it's an honor to be here today. It's an honor and a delight that there are so many people watching right now, hopefully not overwhelming the servers, but it's an honor to get a chance to do this.
And I was thinking a bit about that word simulation, you know, OpenSimulator, and about the work that we're doing here at High Fidelity. In particular, in the real world, I should say, I'm also a pilot. I learned to fly airplanes when I was in my early 20s, and I haven't done it a ton, but one of the things that was so striking about flying an airplane is how difficult it was and is. And one of the things it made me think about, in terms of where we're all standing here today and what we're all doing together, is the simulation around flying an airplane. There is a moment for every 757 pilot when they, for the very first time, actually fly that plane. And when they do, there are a couple of hundred passengers right behind them. And they do that very successfully. And of course, as you know, the reason that they're able to get in a plane for the first time and just fly away is because they have already spent hundreds of hours and hundreds of takeoffs and landings in that aircraft in a simulator. They've been doing it in a $10 million simulator that basically provides an identical experience to flying the actual airplane. And that's a pretty amazing thing. And it made me think to myself, well, why don't we simulate a whole lot of other things? For example, I have teenage kids. Why don't we simulate learning to drive a car the same way? Why don't we do that? And I think that is a great thinking point for what we're talking about today. It would, of course, make tremendous sense to have teenagers learn to drive a car by doing it in an Oculus or doing it in a virtual world. But historically, it's just been too expensive. It just isn't cheap enough in a lot of ways. And I don't mean cheap in terms of dollars. I mean cheap in terms of training and time and work.
It's not cheap enough to teach kids how to drive cars. But the question, if you think about it, is will we eventually have kids learn how to drive cars in a simulator? And of course, the answer is yes. And in fact, I think some of the technologies we're all working on right now are going to make that particular idea possible just in the next couple of years. And so I think that's a pretty inspiring thought. In fact, I think a lot of what we're all doing together here is more around democratizing the experience of virtual reality and what we can do with it. It's more around opening it to many people than it is around making the experience especially different than what it is right now. So I think that's an interesting trend to think of as well. When the web became the juggernaut that it did in the mid-90s, it did so not because it offered an experience which was tremendously different from the experience we were already having with things like CD-ROMs, or even using BBSs. It didn't really change that a lot. It just democratized it. It suddenly opened it up to millions of people who had never been there before. And I think the same thing needs to happen here. So I think that we're on the verge of being able to open and democratize and provide access to the kind of experience that we've all been working on, in the OpenSim community and in Second Life, for in many cases our whole careers, and certainly in mine. And so it's pretty inspiring to think that we're on the verge right now of that actually happening. So, my own history. I often say it, but let me say it again, because I find it so personally inspiring.
My own history and my own passion about virtual reality, about all this stuff, hasn't come from wanting to be a great business person pursuing what I thought was a big opportunity or a big new kind of market. Instead it has come from my own personal experiences growing up and thinking about the world. I was a computer programmer from a young age, and I later went to college and studied physics. And along the way, as I programmed computers and investigated math and science, I became fascinated with simulation, with the idea of simulating virtual worlds, for some reasons that have really led me to work as hard as I have my whole life on this and are perhaps a little different than what most people might routinely encounter. A great example I give sometimes, and I think one of the things we're gonna make work really well in this next generation of virtual worlds is that I could just show you a video right now and point at it with my hands, which, by the way, I can't do. I don't have hand controllers on today, but that is part of what we're working on at High Fidelity. But if I could show you a video right now, I would show you a video, imagine it in your minds, of a bunch of birds flying together. And when you see birds fly together, they do this beautiful thing that we typically call flocking. And when you get into the math and science of why birds do that, you learn a really interesting thing, which is that there isn't a leader bird. And moreover, there isn't some deep genetic way that birds learn how to fly around that way. In fact, what you discover, and as a young programmer I actually tried this, is that there are just three very simple rules that birds use to flock. One is that they don't run into each other. So if you get too close to somebody else, you kind of steer away a little bit.
The second one is that they mostly steer the way the other birds are mostly flying. You can imagine this: when you look out at those other birds, you sort of turn so that you're heading in the same direction that they are. And the third thing is you don't get too far away from the birds, because presumably you might get eaten if you're at the edge of that flock. And it turns out that if you program these three rules into a computer, these three rules of flocking, and you look at what you see on the screen, and I found this just jaw-dropping as a kid, they form into the same beautiful patterns that you see birds flying in. A bunch of pixels on the screen look exactly like a flock of birds. And as you increase the number of pixels, you get all these wonderful, complicated patterns that real birds exhibit when they're flying around. And so that was something that really blew me away. And as a kid and as a young engineer and entrepreneur, the thing that I was struggling with more than anything else was the question of, as computers become more and more powerful, if these little simple rules are sort of all we need to create a world like ours, then won't it be possible, as we have more and more computers, to just hook those computers together and come up with some simple rules that regulate how things work, and then watch those rules build a world around us, and create a space that's so vivid and lifelike and compelling that we can pretty much use it for anything? Meaning we can use it for simulating how to fly an airplane, or we can use it for learning a new language by being immersed in a faraway city, or we can use it to recreate places that are no longer here, from times past.
In short, I was inspired and continue to be inspired by this idea that we definitely, almost inevitably, are going to be able to use computers to create vast, unimaginably interesting virtual worlds, and we're all gonna continue to spend a lot of our time there. That is what has moved me relentlessly. I started thinking about virtual worlds in about 1993 and I'm still working on them today. I'm two companies in at this point, and I'm just never gonna stop working on this until I can get up every morning and put on maybe the Oculus, or something or other, and watch the sun come up in this strange new world that we're all building together. So that's what really moves me. But let me talk now about where I think virtual worlds are overall, and then what we're doing at High Fidelity; I guess I'll give that to you last. Well, why don't we have a billion people? We have a billion people using the net. We're at a billion-scale human population online today. Facebook has a billion people connecting; it's just a crazy huge number. And yet the use of virtual worlds thus far, the use of spaces like the one we're in right now and certainly the one at High Fidelity, is relatively limited. There are about a million people using these things. So about one person in 1,000 is using virtual worlds successfully. Why is that? Well, I think there are two reasons for that. I've had a long time to think about it. As the founder of Second Life and now the co-founder of High Fidelity, I've had a long time to think about what we've been able to do so far and also what we haven't been able to do. And I'm struck by two thoughts.
I think there are two reasons why we haven't been able to get more than that one in 1,000 people into virtual worlds. The first one is that the experience of people being in there together, particularly their ability to communicate with each other in real time, hasn't been good enough. We, for the most part, use text, and then sometimes we use a little bit of voice, like we were trying to do earlier in the Q&A session before this talk. But that still falls well short of the way we communicate when we're face-to-face. And not for everything; there are certainly types of communication for which text is enabling. We're able to be more honest with each other sometimes, more direct or more probing, by just using text. But nevertheless, the human experience of being face-to-face in the real world is by no means yet captured in the virtual world. So that's the first thing, the face-to-face presence experience. The second thing, which I bet many people in the audience here will not relate to immediately, is that it is extraordinarily difficult to go into a virtual world with a mouse and a keyboard. I have this great video that, again, I could show you, but everybody here in the OpenSim community knows exactly what I mean. Look down at your hands sometime when you're editing and talking and using the virtual world, when you're using the Singularity viewer, using OpenSim, or using Second Life. Look down at your hands and look at what you're doing. You've got one hand on a mouse of some kind, and you've got another hand that's basically making a chord on the keyboard. You might use three keys at one moment or two keys at another moment. Those two keys might mean move up, or orbit, or right-click.
You're basically doing this incredible thing where you're using one hand like a bow on a violin string and you're using the other hand on the strings themselves, in a way that's, like I said, kind of like a chord. This process is extraordinarily difficult. It takes dozens of hours of training to really learn to do it well. I think of it as being like playing the guitar or something. It's a skill that once you have it is absolutely wonderful and enabling, but it's a very difficult skill to learn. And so virtual worlds, in my opinion, are not gonna get to that billion scale until it is much easier, especially for somebody who hasn't tried it before, but for all of us, for everything, much easier and more lifelike to move around and manipulate things and communicate and do things in the virtual world. So those are the two challenges I see. Now, let's look at where we are today and ask ourselves where we are in solving those problems. So two things have happened. One of the things you can experience in my keynote here, and I haven't even explained to you how I'm doing it yet, is that you can see that I can talk to you, raise my eyebrows, blink, wink, and speak pretty darn fluidly. And we're doing that, and this is part of one of the solutions to that first problem, by using a 3D camera. It's a PrimeSense camera in particular, although there are several coming out right now, and Intel is about to release a new one that's amazing, that are able to do this. And hopefully these types of cameras are gonna be in all our laptops and PCs pretty soon. That camera is basically watching my face, tracking my eyes, actually where my eyes are looking, and communicating a lot in real time, in parallel with my voice. So listening to me talk right now is giving you one demo, but there's another thing that's happened with respect to our ability to communicate and feel good in these spaces.
And that is that everybody online is now basically 100 milliseconds or so away from everybody else. The time it takes a packet to travel across the internet to carry my voice to you, even if it's going from San Francisco to Singapore, is less than 100 milliseconds. And it turns out that 100 milliseconds, about a tenth of a second, about the time it takes you to blink your eyes, is this special amount of time that's very important, because if you can send the sound of somebody's voice or the movement of their eyes to the other person in that amount of time, the other person basically can't tell. Our cell phones today are about five times slower than that. It takes about half a second for our voice to get to somebody else, and that's why we hate them. Well, we love our smartphones, but we hate making telephone calls with them anymore. And so this is a big opportunity, because the internet has gotten fast enough, and this was not true when I started Second Life. The internet today is fast enough to allow that 100 milliseconds of data transmission to happen between any two PCs, and even between PCs and mobile devices. So that's an enormous change, and it's one that's literally sitting there in front of us right now. Everything's ready. We just need to write the software correctly to take advantage of it. The second thing, related to that mouse and keyboard problem, is that, and of course we've seen a lot of this in the news lately, we are now able to make hardware devices that allow us to interface with our computers in ways that I couldn't even have imagined when I started Second Life, when I started Linden Lab. For example, with the Oculus Rift we have the ability to make a device that completely replaces our vision, sits on our head, wraps all the way around our face, and we can probably make a device like that for $100 to $200.
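The 100 millisecond figure is easy to sanity-check with back-of-the-envelope arithmetic. The Python sketch below estimates one-way propagation time over optical fiber from San Francisco to Singapore; the distance and fiber refractive index are rough assumptions of mine, not measured values, and a real route adds switching and queueing delay on top.

```python
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_INDEX = 1.47           # typical refractive index of optical fiber (assumed)
DISTANCE_KM = 13_600         # rough great-circle distance, SF -> Singapore (assumed)

fiber_speed = C_VACUUM_KM_S / FIBER_INDEX           # ~204,000 km/s in glass
one_way_ms = DISTANCE_KM / fiber_speed * 1000.0

print(f"one-way propagation: {one_way_ms:.0f} ms")  # ~67 ms, under the 100 ms budget
```

Even on one of the longest routes on Earth, raw propagation fits comfortably inside the perceptual budget described in the talk.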
And we're seeing Oculus pushing toward a consumer release of such a product, and we've got ones for everybody in the office here at High Fidelity, and they are amazing, amazing things. So in the area of visual display, we now have new hardware that is inexpensive and very accessible, and that's gonna be staggering in its ability to immerse us in the world. Now there's a second class of hardware, which is things that track our hands or our bodies. And that's a harder problem. You don't see my hands moving today because I just didn't bring with me, for this talk, one of the several devices that we have that can track my hands. Otherwise I'd be able to put them up in front of you and waggle my fingers and stuff. That is a big problem too, but it's another problem that is really just a matter of smart people getting products into the market. We have everything from several different companies competing to build these kinds of data gloves, to devices like the Leap sensor, to a newly announced device called the Nimble that I just saw a couple of nights ago, which is fantastic; it uses a 3D camera that's actually attached to the head-mounted display to capture the motion of your hands perfectly. And these controller devices will not only allow us to communicate, say by moving our hands, by being richer in our gesturing, but they'll also allow us to reach out and grab things in the virtual world. And we at High Fidelity already have amazing internal demos where we can use some of these early prototype devices and reach out and grab something in front of us and move it, or reach out and sort of draw in mid-air. So this is the sort of second phase, I think, where we're gonna use new technology to better capture the motion of our hands and our intentions: I'm trying to click on that, or move it from here to there. That is a hard problem. It has some big challenges.
Maybe we can talk about that more in Q&A, but it is one that, again, is solvable right now. So it's just a matter of waiting for the entrepreneurial cycle to bring us a bunch of different ways to fully immerse ourselves in the virtual world. So this is amazing stuff. In short, what I think is happening right now is that there is an opportunity standing in front of us to build a new set of technologies around virtual worlds that are gonna allow us to have an experience that the whole internet-connected audience can get access to. And going back to what I said about simulation earlier, I bet I don't need to enumerate too many of these different possibilities for people in this room. But the set of things that we're going to be able to do when we can broadly simulate a world in the way that we're doing here is just beyond reckoning. I think we don't understand how remarkable that change will be once we get to that scale. I always tell the story that even as one of the internet's basic pioneers, a guy who was lucky enough to do my first entrepreneurial work in the mid-90s in San Francisco, where the internet was literally landing like a bomb, even having been right there from the start, looking back I think I still didn't understand the scale of how the internet would affect us, of how e-commerce online would affect us. We all were like, well, this is going to be really big, but I don't think we knew how big. And I think with respect to virtual reality, we have no idea how big the experience is actually going to be as we really see it come online. So finally, before taking questions, let me tell you what we're doing at High Fidelity. We are building software, open source software, designed to allow everyone to put their own virtual worlds online and then connect those virtual worlds together in a number of different ways.
This software also takes advantage of all the hardware devices that we're talking about, but doesn't require them. You can use High Fidelity on a laptop, like I am right now. You can use it with an Oculus Rift. You can use it with a Hydra hand controller. You can use it with all these devices that are coming out. But our vision here is to create a single open source code base and set of systems that enable all of us to begin putting virtual worlds online and then connecting those worlds to each other. So, some of the things we're doing in that regard. As I said, the first thing we're doing is creating an open source code base. It's actually online today, at GitHub. You can go to our site at highfidelity.io, go to the code, and start playing with it yourself. You can run your own simulator, or your own what we call domain server, which would be like running an OpenSim simulator, and literally stand in the world and talk to each other the same way I'm doing right now. You can actually do that today, because it's open source. We don't get to decide whether and when you'd like to do it. We, though, as a company, are also providing a bunch of services that allow those virtual worlds, once deployed, to be connected together. And that's the thing where we're at an earlier stage. We're in an alpha program right now with about, oh, probably 50 people a day logged into High Fidelity building things. Let me just say, this amazing space that I'm standing in right now is a model of a warehouse that was built by a guy named Judas, who's one of our great alpha contributors. Thank you, Judas. It's just a beautiful space, and rather awesome as a backdrop to talking. So we've got a small group of people actively testing our system and developing content. It's definitely not ready for prime time yet, but it is getting there.
And the next phase we expect to go into at High Fidelity, and I can't say exactly when we'll do that, but we're pretty close, is one where we're going to allow anyone who wants to start deploying their own virtual world using our software more openly. And as I said, what we're gonna do at High Fidelity is provide services around identity, the marketplace and content, moving content around, and lookups and search: where everybody's servers are, what they're called, and how you get to them. We're going to provide a set of services that are global, we call them global services, that let you do all that stuff. So you're gonna be able to name your server uniquely. You're gonna be able to jump to somebody else's server using these services. You're gonna be able to name your avatar. You're gonna be able to wear clothes that will show up with you when you move from one machine to another. And you'll be able to buy and sell things from one another in ways that are hopefully similar to, and hopefully a little better as we go along, than the things that we've already seen so successfully happen in Second Life in terms of the economy. So that is what we're doing, and we're on track to do it. We've raised a bunch of money from some great investors. We are 15 people right now. We probably have about 40 or 50 people already in the open source community contributing code and help to the project. And our hope and expectation is that we're gonna have many, many more. I think the approach that we've taken to do this all in an open fashion is a very important one, for a couple of reasons. One is that on the development side, there are more people interested in and using virtual reality who are willing to contribute engineering time and development, many more people than could possibly work at one company together.
And so if you don't take advantage of the fact that you've got this big audience that is able to contribute time and engineering resources to the project, then you move the virtual world forward more slowly. And I think that would be both a bad business decision and also, in some sense, kind of morally incorrect, I guess, to quote John Carmack. I don't think we can move forward as quickly if we build an isolated, closed company or system. We need to get everybody in. And then the other thing, as I touched on, is when you really think about the virtual world and all the servers that are inside it, think about the fact that there are probably about a thousand times more servers collectively owned by all of us, running at home, running in our own dens and on our own laptops, than there are in all the server farms in all the world. So if we want to imagine a virtual world that has millions of servers connected together, each of them sort of a planet in each other's sky, to use the Ready Player One metaphor, which I think is delightful, if we're to have a virtual world with millions of computers, we need to have a world that is powered by all our machines. Because literally the sum of Rackspace and Amazon and all the providers that are putting up machines that you can post things on is a tiny fraction of the number of connected and very powerful devices that we have online today. So our strategy is an open one, both because we want all the help and contribution from people who want and need to be in there contributing to the design of the thing, but also because we want everybody's computers. And I think very few people would contest this: you're not gonna get to that kind of scale unless you have something that is very, very open, very simple, easy to install. And that's exactly what we're shooting for with High Fidelity.
And I hope that in the next few months, you'll see more and more about us. You've already seen a lot. There've been some great articles and we've had some great contributions, like I said, like the space we're standing in. But in the months to come, I think it's gonna get more and more exciting, and I wanna see virtual worlds continue to grow and be the phenomenon that I've always dreamed they could be. So let me stop there. Fleep, thank you, and thanks for having me, and let's maybe take some questions or whatever you'd like to do. Yes, absolutely. We have so many questions, Philip. I hope you have time to answer them all. Sure. The first one is really something of a practical question, which now I'm curious about too. How do you detect facial movements if you are also using a Rift? Yeah. Boy, I've heard more people than just me talk about that. I was at a meetup in Silicon Valley a couple of nights ago and we were all kind of puzzling over the same thing. That's a hard problem right now. You don't. Here's the thing, though. You folks that are listening right now may notice that Fleep is not moving her head, but she is raising her eyebrows and opening her mouth when she talks. That's actually being driven by audio analysis, so we're driving her avatar by basically just analyzing the audio stream. I, on the other hand, am being driven by this 3D camera. I think there's going to be a set of solutions associated with that that are going to work fairly well. In the near term, you're going to use the Oculus, but your head's still going to move. You're going to be able to nod like I am right now, and your lips are going to move the same way that Fleep's lips are moving here. So I think that's going to work reasonably well for many types of interactions.
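The audio-driven animation idea can be sketched very simply: chop the voice stream into short frames, take each frame's loudness, and map it to how open the mouth is. The Python below is a toy illustration under my own assumptions (a synthetic signal, a made-up frame size and gain), not High Fidelity's actual pipeline.

```python
import math

SAMPLE_RATE = 16_000
FRAME = SAMPLE_RATE // 100          # 10 ms frames

# Synthetic "voice": 0.2 s of silence followed by 0.2 s of a 220 Hz tone.
signal = [0.0] * (SAMPLE_RATE // 5)
signal += [0.5 * math.sin(2 * math.pi * 220 * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE // 5)]

def mouth_open(frame, gain=4.0):
    """RMS loudness of one frame, scaled and clamped to a [0, 1] mouth parameter."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return min(1.0, gain * rms)

frames = [signal[i:i + FRAME] for i in range(0, len(signal), FRAME)]
openness = [mouth_open(f) for f in frames]
print(openness[0], openness[-1])    # closed during silence, open during speech
```

A real implementation would smooth the parameter between frames and add phoneme-shaped mouth poses, but the core signal path is this simple.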
In the future, there are a bunch of companies, including Oculus themselves, who are thinking about different ways to capture facial information inside the headset, blinks for example, which are super important. Basically, I think we're going to see a gradual, clear set of new features that enable more of what I'm doing right now, but you are correct. Right now, you have to make a kind of actor-versus-consumer decision, almost. If you want to be really animated like I am, you have to use a normal PC and point a camera at your face, and if you want to be totally immersed and blown away by how amazing it is to look around and say, oh my God, we're really in this space together, you can use the Rift, but then you don't get all this facial expression. That's a good issue. Yep, so it sounds like there are some trade-offs still. Another question, probably considering the audience that we have in world: we have a lot of content creators and developers who make in-world experiences for users. Can you talk a little bit more about how you create content in High Fidelity? Sure, absolutely. Our goal at High Fidelity, kind of in concert with that open-systems goal, is to basically be able to import anything. So the model that we're standing in right now, this room, is a model that I don't even know what package it was done in; I'd guess it was probably done in Blender. The way High Fidelity works is that you import content from pretty much any format. We'll probably use one of the great open source libraries out there that can basically convert from anything, whether it's Collada or 3D Studio or Blender or even things like SketchUp, whatever the source format is. We're gonna import that into the world, and what we do is put that data into our server in such a way that it is scalable.
So as you move away from things, as you show up, the streaming and the processing of the data coming from that stuff, no matter what the initial format was, just works. But our vision, especially in the near term, is that what you're gonna do in a virtual world is download something, probably a fully meshed, beautifully textured object that you got from some repository like the Google 3D Warehouse, or build it yourself in Blender. You're gonna drag it and drop it into the world here, and then you're gonna use the in-world tools of High Fidelity. And by the way, those tools are almost all written in JavaScript. So one of the things that's incredible is you can literally just reload your whole interface anytime you want. For all the suffering that we've all done, complaining about and trying out different interfaces, it's gonna be a much more fundamentally open environment with High Fidelity, where you can just load up your favorite kind of editor and use it. But my belief is that in the near term we're going to put objects in world and then move them around with our hands, with our mice, and probably do a lot of the really rich, vertex-level, if you will, content editing in one of the commercial packages that we're already using. Things like Unity, for example; we're able to load content directly from the Unity store as well. It's just incredible, the amount of stuff that's out there. We've come to the conclusion that you don't wanna try to build a fundamentally new primitive format like we did the first time with Second Life. The tools now for creating content are just too good. We wanna use those directly. So that's the strategy we're gonna take. That sounds very interesting, and I'm sure a lot of content creators are dying to try out the alpha. Sort of a follow-up to that.
Do you feel like the content in the virtual environments and virtual worlds will drive more adoption, or do you feel like the interface will? What's the interplay, the balance, between interface and experience in these worlds? Well, I think the content that's already been built is just staggering. I mean, when you look at a really great Second Life build, when you go to somebody's island and you look at the stuff that the very best creators of Second Life have built, and obviously I've had the pleasure of looking at it all these years, it's insane, especially visually, how good the content is. So I think it's all this interface issue. I really do. I think that it's just too hard to get into the space, and I don't think it's a download problem. People talk about how we just need a smaller download. I think that's true; it's nice to have small downloads, and High Fidelity is a small client today. We're actually looking at, one of our advisors is a great guy named Tony Parisi, who's a deep, long-term 3D web guy, a WebGL proponent, but I don't even think that being able to jump into a virtual world without downloading anything, I don't think that's the issue. The issue is the interface. The issue is that I can move my head right now and look from side to side and, with the right hardware, reach out and grab things. That's the thing that's gonna make it easy. I don't think it's content, because the content that's been built is already good enough. I mean, it's more than good enough. It's mind-bogglingly great. So I think that certainly High Fidelity, like anything that's started 15 years later than Second Life was, is gonna be even all the more dramatic. I know one of the things people are probably thinking watching is, do you have to look like Philip right now? Do I have to look like this sort of animated character? Again, there the answer is no. You can actually load any FBX file today as an avatar that you like. 
So we already have the same kind of incredibly rich-looking characters and avatars that you see in Second Life, and they're loadable and usable as your avatar here. So I guess I could jump into that question and try to answer it before everybody gets into the "I don't like how the avatars look." It's like, it doesn't matter; they can look like whatever you want. We've spent a bunch of time playing with how lips move and how cameras track us. And this avatar that I'm using is an example, I think, of the kind of look you're gonna wanna have if you're doing something like teaching a class in the virtual world, where you're face to face and trying to communicate and be very articulate. But at a high level, it's an open system. People can use anything they want. Well, that's a good question, actually. It's an open source system, and there were lots of cheers about hearing that your technology is open source. And since we have a lot of developers, who controls the submissions to that? Is that High Fidelity, or is it more of an open ecosystem than that? Well, yeah, I mean, today it's a GitHub repository, and I think everybody that has the ability to accept a pull request is a Hi-Fi employee. But we've already demonstrably accepted, I mean, if it's not a daily occurrence today, it's every other day, so it's probably just about daily at this point, we're pulling pull requests, that is, submissions and changes, from people in the community. I think that if you look out in the longer term, we'll probably end up doing something that's more like a foundation approach, where we've got maybe a couple of different big, widely used branches or something that have more of an open governing body around controlling the code. We're at such an early stage now that, you know, I'd be flattered to have that high a level of contribution. 
I hope we get there soon, and I'll be delighted when we do. But I think what we're doing today is very reasonable, which is, you know, we're 15 people, we're smart, we know what everybody wants. We're profoundly committed to getting this thing up and running. And so I can't think of a single case where somebody said, hey Philip, you know, this new idea totally works and it should be in the code base and you guys aren't pulling it. I've had exactly zero of those conversations so far. Well, that's good to hear. In OpenSim, we really have a sort of decentralized federation of virtual worlds without any single point of control, just like the internet. Why do you think that a centralized service is a good model for High Fidelity? Well, first of all, the software is decentralized, just like with OpenSim. You can run a High Fidelity server, and you can run it on a network that we'll never see, and it'll work just fine. So that's the first thing to say. The second thing, though, is I do think that particularly with respect to things like identity, like avatar identity, I don't think that we're gonna be able to have a billion people using virtual worlds until, for example, you can just have an avatar identity that is consistent across, not necessarily all, but many of these servers. So I think, again, there's just this practical problem that if I'm gonna walk around as Philip Rosedale, or as some entirely new identity, in these virtual worlds, I need to be able to jump from one to the other seamlessly and have my avatar come with me. And as everybody knows here, there's a whole bunch of problems associated with doing that. I know some fantastic work has been done with different grid systems that seek to connect these different open simulators, and I think that work is extremely important; in fact, I think that's core to the value that I hope we're able to add as an operating company. 
In other words, I think that if we're able to make money as High Fidelity, it'll be because we're solving these problems for people and they actually need us to. I mean, if people don't need shared identity systems across multiple simulators, then we just won't make any money. Yeah, I mean, we'll perhaps just be a fully decentralized system, but I really believe that both people and content need to be able to move between these worlds. How can you build a virtual world unless you're standing on each other's shoulders, which is totally what happened in Second Life? So people need to be able to build content and then share it with each other and rapidly move it around between machines, and you can't do that with a completely decentralized system, at least not in any way that I've been able to think of. Well, that's true. And we do experience that in OpenSimulator. You know, I have many, many Fleep avatars and they all look the same. Right. They have different inventories. Right. Which is very hard to manage. You know, speaking of OpenSimulator and the Hypergrid system, do you or can you imagine any kind of plug-in services or modules that might allow the Hypergrid in OpenSim to be compatible in any way? Well, again, I would be flattered, and I would love to have more OpenSim users and developers participating in our system. Like I say, there's a sign-up form on the site to get into the alpha, but there's also a bunch of ways to talk to us: the Gitter chat room, a doc system on GitHub, a whole bunch of different ways to jump in and get involved and talk to us. I think that there are probably some very amazing things that can be done to connect OpenSim content, and maybe even OpenSim itself, or I should say OpenSim grids, directly to High Fidelity as we get online. 
I mean, I think that, well, I know that we can import the content. I think that if you had an island in OpenSim, that's gonna be very directly portable to what we call a domain in High Fidelity. So you're gonna be able to launch a domain and put content on it, probably almost drag-and-drop that content. I know the other Chris Collins, I'm smiling at you, Fleep, the Chris Collins who works for us here at High Fidelity, has already been able to move content from the OpenSim format, things he had put on OpenSimulator for other work that he's done. He's easily been able to move that into High Fidelity. So, yeah. But we were talking earlier in the VIP session about things like the audio system. It may be that the High Fidelity audio system can be of some use to OpenSim. The audio experience we're having right now could actually, with a bit of clever work, probably be brought up as kind of running in parallel or underneath an OpenSim grid very, very easily. It's designed to be a very simple, open source, completely unencumbered audio mixing and transmission system. So that's an example of something that, I don't know, might be useful. I think there'll be a lot of projects like that. I hope so. I hope so. Lots of interesting work to be done there. Are you considering a web viewer for High Fidelity, or any interaction on mobile? What kinds of delivery and viewers are you looking at? We are thinking about both the web and mobile. I think the experience part of immersing you and then letting you use your hands, that's kind of our first priority. But that said, I looked at a funny thing the other day. I think my iPhone 5 has the same compute power and graphics power as the minimum required machine at the time that Second Life launched, which is pretty funny. 
And so we certainly should be able to render a virtual world onto that machine. And so, getting a bit more technical for a minute, the approach that we've taken, I mentioned earlier that one of our advisors is a guy named Tony Parisi, who's one of the biggest thinkers about 3D on the web. We are, in fact, doing our software development to separate the rendering from the client systems in such a way that anticipates the ability to create a completely web-based view of the world itself. Now, web-based approaches to rendering graphics are still a little different than native approaches, basically. There's still a bit of a gap, but I think that we'll be able to have web views into High Fidelity that work really well. My own vision about this is we'll probably do that in more of a kind of look-but-don't-touch way, where I bet we'll use web views to give us kind of little windows into virtual worlds. And then if we want to jump in and be fully immersed and be interactive, you know, be editing things, be talking to people, or actually have our avatars show up in our identities, I bet in that case we'll use a native client, at least for the next couple of years. But I think that idea of looking at all this content directly from the web is a critical one, and it's one that we'll make work. We already have a really cool thing that folks here haven't seen yet, because not all this stuff is open. In fact, off the top of my head, I'm sure it's all open source; I just mean not yet open for people to play with. There's a Metaverse browser that we have that's just amazing, that shows all the machines that are running, all the servers that people have running together in a shared space. Because again, part of what we're trying to do is coordinate, like, where you want to put your server and what you want to call it. 
We're providing a loose kind of coordinating service for that, so that you can literally, I hope, look up in the sky from here, or wherever, and see somebody else's server sort of floating off in the distance as another planet. As I said earlier, I really like that metaphor. I think it's great. That's really interesting. We're getting lots of questions about some broader topics beyond OpenSimulator and Second Life, thinking about the future of the Metaverse and the Net generally. What do you think about the impact of Net neutrality on the Metaverse and virtual worlds moving forward? Boy, that's a great one. I'm not up to speed on all the raging debates and discussions around Net neutrality. I do believe that in a global marketplace, it doesn't really matter. What I mean by that is market-based systems for connecting people have been extraordinarily successful. So what I think about neutrality is, in the end, it probably doesn't really matter very much, in a good way, because we are democratizing and cheapening the cost of bits and the cost of access points and the ways we get online so rapidly, happily, that I don't think it'll even matter much what we need to do from a legislation perspective. In other words, I think that's an optimistic outlook, but I don't worry too much about the nature of these systems, because there was a great, wonderful statement made. What was it? Somebody said one time, I think, the internet recognizes censorship as damage and routes around it. And I think Net neutrality is one of those types of issues where if people are advantaging one service type or one protocol over another because they're, I don't know, trying to keep people from sharing video instead of buying it from a provider, people are just gonna route around that. They're just gonna put new services online. The cost of deploying a whole infrastructure to get data to people is like a hundredth of what it used to be. 
And so we can just build other ways of getting on the internet. So I personally think it's a pretty rosy future with respect to data access. I'm not really worried about it. I mentioned that 100 milliseconds earlier; I mean, I think it's pretty much game over in terms of getting everybody online. Beyond that, I find things really moving. Talking about broad net phenomena, Google's idea with its Project Loon, that you could actually just float cheap balloons in the air and put Wi-Fi access points on them, or not necessarily Wi-Fi access points, but something similar, that's just spectacularly inspiring. I mean, that means we're gonna get the sort of one billion or so people that are still not living in a well-connected world, we're gonna get them online extraordinarily fast, using techniques, as I said, that cost a hundredth of what it would have cost to contemplate getting them online in the 1990s. Yeah, that's definitely true. And hopefully we'll see more people, of course, coming in and using these kinds of virtual platforms too. What are your thoughts about standards? Many people feel like in order for the metaverse to come to fruition, we're gonna need standards. Others feel like that will come organically over time as good technologies emerge. Where do you stand on the standards question? Well, I think the organic process will be the right one. I think that there are some standards that'll make sense earlier on, like a reasonable standard for avatar names and identity, and a basic standard for, if I connect to your server, how do I announce myself, and what kind of protocols do I use there? 
And that's where there's a blog post on our site where we talk about the strategy we have, which is a kind of an OAuth-style strategy, where when you log into somebody's server, we can use an open system like OAuth to basically authenticate you in such a way that you only need to give away the appropriate amount of information about yourself to get onto somebody else's server, and you kind of know what that is. So if they say, I need to know this or that about you as an avatar, I need to know this much information about you to let you on or to let you edit things here, you can use an open standard like OAuth to make that work correctly, and in a way that's safe and secure for everybody. So I think that's an example of a case where we need to get those standards rolled out pretty quickly. I think that the different types of standards that we need to make virtual worlds really work kind of differ over time. Like, I think right now we can still do a lot of experimentation with content. I don't think we need to make every vertex or every atom in the virtual world ascribe to some well-described standard that came out of some big committee meeting. I don't think we know enough yet about how we're doing this stuff to do that, but I think we can take a crack at that with things like avatars. And I bet there are members here in the community that would be big contributors to that. I bet we can all figure out a good way to say hello and log in, in a fair and open way. And I think that that kind of standards creation is really important and something we should do. But if you're talking about, I know Chris was mentioning earlier, one of the amazing things we're doing with High Fidelity is we're building a terrain system that lets you do things like dig caves. So we're doing that. I was touching on it earlier. 
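The OAuth-style, minimal-disclosure login Rosedale describes a moment earlier, where a server states up front what it needs to know about you, and an identity service hands over only that much, can be sketched in miniature. This is a toy illustration of the idea, not High Fidelity's actual implementation or a real OAuth library; every name here (`identityService`, `issueToken`, `admit`, the profile fields) is hypothetical.

```javascript
// Toy sketch of minimal-disclosure login: a token carries only the
// "scopes" (pieces of identity) a server explicitly asked for.
const identityService = {
  profile: { name: 'Philip', reputation: 42, email: 'secret@example.com' },
  // Issue a token containing only the requested scopes, nothing more.
  issueToken(requestedScopes) {
    const claims = {};
    for (const scope of requestedScopes) {
      if (scope in this.profile) claims[scope] = this.profile[scope];
    }
    return { claims, scopes: requestedScopes };
  },
};

// A world server declares what it requires to let a visitor in (or edit).
function admit(token, requiredScopes) {
  return requiredScopes.every((s) => s in token.claims);
}

// This server asks only for name and reputation; email is never disclosed.
const token = identityService.issueToken(['name', 'reputation']);
const admitted = admit(token, ['name', 'reputation']);
```

In a real OAuth flow the claims would be signed by the identity provider and verified by the server rather than trusted directly, but the shape of the exchange (declared scopes in, scoped token out) is the same.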
We're doing something that's akin to what you may have seen in upcoming games like EverQuest Next. There's a wonderful voxel-based approach, this thing called Voxel Farm, that I bet folks here have seen, that is just beautiful. We're doing very similar technology development internally on a diggable terrain system. So should that be a worldwide standard? Well, no, it's just too complicated. I mean, at this point, I don't know. We have to try building with different types of digital bricks and see what works and what doesn't. And I think in that case, you want to be very open and organic. And yeah, maybe you have to download some plugin or something when you get to somebody else's server, because you don't know how to render this stuff. If you look at Minecraft, you see some of the same stuff. You see an immense amount of experimentation, and really it's a question of whether you can build a plugin model that's easy enough for normal people to use that they can tinker around with different ideas. So if you're talking about something like identity, I think we need good standards and we need them soon, and I think we'll be a big part of that process, hopefully. And if you're talking about something like editing and building, I think we should do things in a very open, organic way and just sort of see what happens. And I bet we won't have worked out all those standards in just the next couple of years. Okay, I think we're almost to the end here. I've got two last questions for you. The first one is: with Oculus Rift and Facebook's acquisition, it's reignited the conversation about virtual worlds and virtual reality. For both High Fidelity and for all of the folks who are really passionate about seeing the metaverse happen, what lessons do you think we can draw from the last big hype cycle that we should use moving forward, with the next big hype cycle coming? Boy, that's a good one. 
You know, I will maintain that with Second Life, it was harder; I think there's a limit to how much you can control the hype cycle. I almost think that's a good thing. I mean, I don't think it's a good thing in terms of people jumping in too early and then not being able to build a business or whatever, but I do think that the hype around virtual worlds is always merited, because of course, as we all know, as we start to have these new worlds that have a big impact on the real world, say by replacing things that we formerly did in the real world with things we largely do in the virtual world, these are big, big societal changes. And so I think it's appropriate that people get very excited when these things happen. In terms of what we can learn from looking at Second Life as, I guess, the last big virtual world hype cycle, I think one thing we can do is focus on enabling widespread use around experiences that we think do work. I mean, if I look back on the first few years of Second Life, when everybody suddenly got excited about virtual worlds, I think that there were so many experiments tried, and when I look back on it, a lot of those experiments, or some of them, were things that didn't really logically make sense. They weren't yet possible. Like, Second Life in its first couple of years was not gonna be a place where you could sell retail consumer packaged goods, because there just weren't enough people, and the audience was wonderfully much more diverse and global. So if you were selling shoes or something, it just wasn't gonna work. You can't sell shoes to the audience of people that were using Second Life circa 2006. But there were things that you could do, like training and simulation, that I think we didn't focus on enough. 
So I think what we can do this time, and I try to counsel people when I'm out talking and playing with this stuff, is, like, on the one hand this is a big starry-eyed vision that we're all very excited about, but on the other hand, what I would say is I hope this time around we find and really solve and attack the things that are viable, compelling uses of the technology in the early stage. So for example, having a small virtual classroom in a space like this, where everybody's in an Oculus, so it's super, super fun to basically get together and take a class from somebody. Let's make that work. That can work today. I mean, there's 50 to 100,000 people with these Oculus units out there right now. They could all be taking classes from each other, so that somebody like me could be standing at the front of the room and talking and lecturing and then walking around and having one-on-ones with students, walking up and saying, hey, Fleep, how are you doing? Are you getting this? Can I help you with anything? Building and solving a problem like that, and having that reach massive adoption levels, is something that can happen, I think, in the very near term. So I think this time we should focus on the stuff that works, and when we're dreaming about the future, let's try to keep that kind of in the dream space, in the watch-this-amazing-movie space, but don't try to make that work, like, right now. Like, there are not gonna be virtual car dealerships anytime soon, still. I think that's fair to say. And our very last question is: next year, when we come back to the next OpenSim conference, what do you hope that High Fidelity will have accomplished? Well, the first thing that comes to mind is I hope that there are tens of thousands of user-created virtual worlds. 
Tens of thousands of virtual worlds that are running on other people's machines, that you can jump to from a kind of a lobby. I hope that next year, you and I can go on a guided tour together, where we're literally jumping in seconds between servers and seeing new spaces and new groups of people rez in, so to speak, around us, in the way that this room is, and we're able to do that and explore the content of tens of thousands of people's servers. And I think that actually is possible. I mean, we're feeling pretty good about the software we've got working, we're feeling pretty good about the networking and the automatic stuff that we do in discovering these servers, and lookups and everything. I think that it's possible that we could have a really good number of servers up by next year that have a lot of really interesting content on them. So this conversation will hopefully be as compelling; there'll be fewer bugs, and I know there are some audio glitches. I actually want to just shoot myself in front of everybody here, because one of the things, you may have heard my voice getting clipped off a little bit, there was actually a bug that I introduced myself in programming, and it's doing that in an attempt to do something else. But, you know, there'll be a few less bugs, and I hope the one-on-one experience that we're having here can be a bit more animated, and then on top of that, we can be amidst all this content that people are starting to put online. So that, I mean, that's what would blow me away. That's what I hope we can do. And I hope people get to grab our stuff and at least give it a try once we've got it up. And like I said, that'll be pretty soon; I think we'll have some sort of alpha-stage versions that aren't perfect but do allow you to put up your own virtual worlds and start playing. 
Well, I have to tell you, being on the other side of your facially animated avatar, this has been wonderful. It's amazing how much it changes the experience on my side of the screen to see your eyebrows moving and everything. So it's been very cool. Thank you very much for coming and for talking with us and for answering a whole host of questions. And we'll hope to see you next year, perhaps. This is great. Thank you very much for having me. And I hope the conference continues to be the success that it seems to be from visiting a little bit this morning. Great. Well, thank you very much for your time, Philip. And thank you to the audience for a wonderful round of questions. Thanks, everybody. We have a 30-minute break before the next sessions begin. Feel free to explore the grid if you haven't already. The next sessions will be starting at 1:30 Pacific time. Thank you again to Philip and to the other Chris Collins, my doppelganger, for their wonderful assistance in making this technologically challenging but exciting keynote happen. Thanks, everyone. Thanks. Thanks again.