Well, greetings everybody. I hope you've had a good lunch, unless you were caught up in that lively private Q&A with the devs. I'm Mal Burns, and this is our regular panel — we used to call it a viewer panel, though I tend to think of it as an interfacing panel rather than one about specific viewers. A whole load of relevant stuff has come up today already, so we're almost segueing straight into it as far as viewers are concerned. I follow what people want from devs and viewers, and while OpenSim is server code at heart, you obviously can't live without the viewer. There's so much interoperability between the server and the viewer that you can't really treat them as separate things; they're all parts of the experience. And there have been a lot of moves recently around other ways viewers could be implemented. Could there be an interpreter, for example, that renders OpenSim regions into a WebGL viewer on the web, while staying genuinely connected to the whole hypergrid? There are other projects that are separate platforms but offer a web interface as well as a fully fledged client. And we were just talking about Crista Lopes's OnLook viewer, for example, which is based on the Singularity viewer but removes, server-side, anything a grid operator doesn't want you to have. So you can sit and walk and listen to music, but they may prevent you doing anything else. For people who want it fast and easy, that's an ideal way to get in — and if something that fast and easy were on the web too, it would be even better. So there's a lot going on.
I'll also mention up front, because I hope our guests will address it, that I've been following the news from Second Life — or rather, Linden Lab. They were at an Amazon conference in the last couple of weeks, I think, talking about moving Second Life into the Amazon cloud. That obviously isn't going to happen overnight; too many servers involved, I imagine. But apparently it will mean some incompatibility with what Linden calls the third-party viewers, and various third-party viewer teams are a bit concerned, because Second Life is their main market — for a couple of them, OpenSim is really just an extra audience on top of their main customer base. Anyway, some thoughts there. But the first thing I'm actually going to do is introduce everybody on the panel, in turn, once I open up my proper notes. I'll start with Robert Adams, also known on occasion as Misterblue. Robert has been an OpenSim developer for nearly a decade and is responsible for the BulletSim physics engine, the addition of var regions — many thanks for that, Robert — and many a bug fix and performance improvement. You can find his full bio on the website, but we won't have a panel if I read all of these out in full. So welcome, Robert. Hello — am I supposed to talk now? No, I'm just checking you're there; I'll go through the rest first. Right. We're also joined by Adam Frisby. Adam is CEO of Sine Wave Entertainment, which makes him CEO of Sinespace, a new mesh-based virtual world built in Unity. Like some other platforms it's a walled garden, in a sense, though it is open to making connections — we'll come to that. Adam was also one of the founders of OpenSim.
So he really is a former core developer, right back to the core, so to speak, and he can bring that perspective to the conversation too. So welcome, Adam. Thank you. Right. Next we have Dita Hain. Dita hails from Munich in Germany, where he was born and raised. He made first contact with computers programming in 1981, which set him on course for a master's degree in computer science in Munich. He describes himself as a frequent metaverse traveller. I'll add something that oddly isn't in my synopsis: Dita is also responsible for another independent platform, known as CyberLounge. If you can imagine OpenSim on the web, that is CyberLounge. You can export OAR files, for example, or other content, and put it up on CyberLounge. The difference is that CyberLounge is not an OpenSim viewer; it is simply a web platform that you can put OpenSim content into. Say it's an art gallery: you can have it on the web via CyberLounge, and you can have it in-world on the hypergrid as normal. I'll mention here that Selby Evans, who I believe is on a panel later, is also promoting a thing called Web Worlds, which I think you'd agree, Dita, is basically CyberLounge under a slightly different name. So welcome, Dita. Okay, thank you for having me here. Great. Now, a familiar face to all of you. She didn't make the panel this morning, but she's been chatting away in a private Q&A over lunch: it's Crista Lopes, sometimes known as Diva Canto. She's a professor with the Department of Informatics at the Donald Bren School of Information and Computer Sciences at the University of California, Irvine, and a core OpenSim developer. And of course she developed the Hypergrid, the federation architecture and protocol that lets OpenSim virtual worlds interoperate. So welcome, Crista. Thank you.
And finally — she's not here yet, it seems, but she may just be late, so if another face appears on the panel it will be Cinder Roxley, a software developer in 3D and high-performance computing. She maintains the Alchemy viewer, which many of you may be familiar with, and more importantly she recently took over the Radegast viewer after the sad passing of its creator. Radegast is more of a text-based client, which I actually find very useful: it's a way of being logged into everything but the visual world, so you can sort your inventory and answer your friends' questions from there. Anyway, whether Cinder makes it or not, we have plenty to talk about in the meantime. Now, I have another panel coming up in a couple of hours, incidentally, all about the Hypergrid, which I believe is pretty much the killer app of OpenSim — and from conversations a few minutes ago, I think probably everybody would agree with me; certainly Crista does. Since we don't have Cinder, nobody here today fully represents a fully fledged OpenSim viewer per se. But I'd like to start with you, Crista, because as well as creating the Hypergrid, you developed the OnLook viewer, a variation on the Singularity viewer. It's not just a customizable experience but in some ways a very lightweight one: if people are required to use OnLook to attend an event, a grid that might otherwise be closed will let them in, and server-side it implements controls that limit what the viewer can do. So I might go to a talk, for example: I can walk into the venue, take a seat, listen to media and see things presented — but using OnLook I won't be able to do anything else. I don't get hijacked by the build tools or the fancy things I don't understand.
So this seems to be the ideal lightweight viewer for somebody who needs one. Maybe tell us briefly about OnLook — and can I also ask, are you developing anything more on that side of things at the moment? So let me tell you about OnLook. OnLook was, and is, an experiment, and the experiment was very clear. I wanted to poke at the Second Life viewer code base — so I took Singularity — and find out how difficult it would be to redo the part of the viewer that does the user interface. Not the 3D part, but everything 2D: the traditional graphical user interface, the menus, the pop-ups, the buttons, all of that stuff we see here on this viewer. I wanted to find out how difficult or how easy it was to make it customizable. The first phase would be to hide certain options that might not be needed all the time. The second phase would be to actually program what options were available. So if you're in a situation like this, a conference, you might see a GUI that has to do with the conference event: buttons for the schedule, buttons for things related to what is going on. That's a programmable UI, and it was an interesting experiment. We were able to knock out a few things, like the buttons on the bottom bar and the menus on the top bar, so it was not totally hard to hide things. It would be a lot harder to add new things — it turned out you would have to rewrite a lot of the viewer to be able to show new things in a programmable manner.
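The programmable-UI idea Crista describes can be sketched roughly like this: the grid publishes a profile of which UI elements a restricted viewer should expose, and the viewer filters its toolbar to match. Every name below is invented for illustration — OnLook itself is C++ built on the Singularity code base, not this.

```python
# Hypothetical sketch of a server-driven viewer UI profile.
# The grid sends a profile; the viewer keeps only the allowed buttons.

FULL_TOOLBAR = ["move", "sit", "chat", "voice", "build", "inventory", "script", "map"]

def apply_ui_profile(toolbar, profile):
    """Keep only the buttons the server-side profile allows."""
    allowed = set(profile.get("allowed_buttons", toolbar))
    return [button for button in toolbar if button in allowed]

# A conference grid might push a locked-down profile like this one:
conference = {"allowed_buttons": ["move", "sit", "chat", "map"]}
print(apply_ui_profile(FULL_TOOLBAR, conference))  # the cut-down toolbar
print(apply_ui_profile(FULL_TOOLBAR, {}))          # no profile: full toolbar
```

Hiding elements is the easy half, as she says; a fully programmable UI (adding new elements at runtime) is the part that would require the heavy re-engineering discussed next.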
So we had a plan at some point for how to do it, which would consist of some heavy re-engineering of the viewer: wrapping it in a JavaScript engine with which you would then program all the 2D parts, menus and so on. I think the plan is technically feasible, viable — it's just a lot of work. And that's the part where, me being mostly an explorer and a poker, I really don't have the time to do that kind of heavy lifting. So it's on hold until somebody wants to take it on and finds some incentive to actually develop it. But yeah, that's the story of OnLook. Well, a good segue here. I'll come to Dita and then come back, because this flows from it. Dita, with CyberLounge — and the related thing being done with WebWorlds — you have effectively created an OpenSim viewer, haven't you? The only thing is, it's not connecting to the OpenSim grid network or the hypergrid; it is simply a web-based viewer that can display things built in OpenSim. Have I got that right? Almost, yes. Almost, of course. Maybe it's only an 80 or 90 percent OpenSim viewer, because it also supports assets that weren't created in OpenSim. You can import any content — or almost any content, as long as it's static for now — into the WebWorld. Most imports are now done by drag and drop, which means your inventory lives not inside the world but on your desktop, and you simply drag content from your inventory into the 3D view. But as you said, it's not connected to an OpenSim grid. I created the back end on my own, and it's not so far compatible with OpenSim — it's a real web stack which manages everything. Now, for example — we'll be hearing from him shortly too — Ferd Frederix has this OpenSim installer. It's like the old Sim-on-a-Stick, except it's not on a stick.
On my local machine it's called Outworldz, and I do all my hypergrid travels from there, simply because, as you say, everything is on my own hard disk. If all the grids went down and went away tomorrow, it wouldn't bother me, because I launch myself onto the hypergrid directly from my own computer. Now, my installation there is a var region, so it's not directly comparable — but say I had a small region built that way. What's nice is that I could literally export that whole region and put it up in CyberLounge. It wouldn't be connected; when people went to it, they wouldn't be going to my region on OpenSim but to a duplicate. And for something like an art gallery — or what Linden's Sansar now calls experiences rather than destinations — it's just an experience I have, and I could have it in two places. I could post a link on Facebook and tell people, come and see my art gallery, here's the link on the web — but if you're on the hypergrid, here's the hypergrid address. I could give them those two options, and basically what they would see is the same thing. And possibly one day the build itself could have back-end scripts in it that connect the two installs. In other words, they're on separate platforms, but there might be things running in the build, like an interactive whiteboard, that can connect the two worlds together. That, in theory, would be possible with the way you're working with CyberLounge, wouldn't it? Exactly — this is already working. We did some tests with virtual whiteboards, and the funny thing was there were three of us: one was in Second Life or in OpenSim, I was in CyberLounge, and one was directly in the web browser. And we could all work on the same whiteboard at the same time from different places. Wonderful. Right, now I'm going to move to Adam. Adam is very familiar with OpenSim; he's one of the founders and on the original core team.
So that's a given. But I'd also like you to answer from the point of view of Sinespace, because Sinespace, again, is a very different world: built in Unity, principally mesh. One of your builders, Joe, whom we know very well, is on a session tomorrow, probably talking about what he's done with bridging worlds. Much the same idea applies. Think of my gallery on a region on my desktop, connected to the hypergrid using OpenSim; I've got a copy of the same gallery in something like CyberLounge; and I could also export that same gallery as mesh and, via Unity — and I say via Unity only in my case — upload it into Sinespace. That would not only put it on a different platform as a kind of duplicate, but because Sinespace has both a web interface and a viewing client, it's another way of putting it on the web. So Adam, tell us a little about Sinespace, but also any thoughts you have on developing this kind of interoperability. Yeah — to give everyone context, if you haven't heard of Sinespace, I think we came on and talked about it for the first time here last year. Long story short, I got very tired of being bound to the Second Life era when I was working on OpenSim. The reason we built Sinespace was that so many legacy architectural decisions had been made back when we were working on OpenSim that, frankly, it made more sense to build a new world from scratch than to try to re-engineer the entire stack from underneath itself. Now, in terms of interoperability — I've talked about this before — I think the key thing is just giving you as many options as possible to view the same content. So content actually built in OpenSim can be brought into Sinespace.
I see Austin Tate in the crowd here. He has actually done quite a bit of work on getting OAR imports working straight into Sinespace, which is really cool; I've seen quite a bit of that. But in terms of interoperability, the ultimate thing driving our viewer decisions is accessibility. That's pretty much the principal driving force, beyond all the other driving forces we have — and we do have plenty, like taking limits away from creators so you can really build what you want. One of the things we've been working really hard on has been getting a good, high-quality WebGL client, and it is much harder than it sounds. Frankly, the browsers — all of them — have horrible, horrible defects in their JavaScript engines and their WebGL renderers. You name it, it's probably broken on Safari; you name it, it's probably broken in Internet Explorer. It's a long slog, but once you actually get it right, once it actually works, it's fantastic, because you can literally just go to a URL and hop into the same region that people are accessing with the full-featured, graphically rich clients as well. Okay, a quick apology. I've seen a note from Bill Blight — and I don't know whether Singer Girl is here in the audience too. WebWorlds, with that name and spelling, is developed by Singer Girl independently. That's the one I mentioned earlier. So I'm sorry, Dita — I thought it was a version of CyberLounge, but go figure; I got that wrong, so my apologies. Yeah — one intriguing thing I have thought about in terms of interoperability is the idea that — well, I mentioned this art gallery, imaginary of course; it could be anything, a TV studio, whatever.
There might come a point where I can collaborate with people in OpenSim and build something that is then exported to a platform that runs on the web — maybe Sinespace, maybe CyberLounge — and it's not only a duplicate of the OpenSim build: the build itself could contain certain scripting that works from a web back end somewhere, so it behaves in a similar way in all worlds. Not all platforms interpret scripts the same way, so it's not that easy. But the notion is that we could start using OpenSim as a prototyping platform, where we can build freely and very intuitively and prepare stuff for deployment on other platforms. That's one of the reasons I love the idea of my OpenSim regions running offline on my desktop: it's a private space — do it, build it, then see if I can get it to work anywhere else. As you know, Adam, not very successfully, but I'm not a builder. So do you think there is a future — and I know this isn't a viewer-specific question — for using OpenSim's uniquely intuitive tools as a prototype development platform for what may ultimately be deployed on other platforms? Actually, I'm going to open that out to Robert, because I haven't brought him in yet. Any thoughts on that idea, Robert? Well, I do think there are so many different use cases now for the viewer, quote-unquote. We talk about it being on mobile and on desktop, and there's now all the VR stuff; I also think it's related to the AR stuff — you just take the world around you and place stuff in front of the camera. And I've really been thinking that an architecture for a viewer should separate the actual rendering, the viewing part, from the virtual-world back end.
Because most of these piles of stuff are built together as one big glob: your viewer is an app, or a web thing that you run, and it not only has the code to figure out what to put on the screen at any moment, it also has all the logic to interpret what the virtual world is sending it. And I would like to have a viewer where a Sinespace avatar could stand next to a Second Life avatar. So, following up on some of the things Adam said: in one sense, 3D rendering in a browser is all broken; but in another sense it's completely solved, quote-unquote, with WebAssembly coming out this year. Sinespace is using the Unity player; Amazon has one; Unreal Engine has a browser version of it. And that's only going to get better over the next year or two — you don't have to be stuck with WebGL in whatever form it comes in. So I think the base rendering technology is, for some definition of solved, solved. But then you get back to: what are you connecting to, to get the things that you're seeing? And how do you interact — how do you create the content that you see? For content creation, a lot of people go to outside tools — you have to use Blender or whatever and import it. I'm fond of in-world building, as in Second Life or Minecraft, and I think those are kind of requirements for a viewer if you want a lively community where people build and people play. This is what I said at the start: it's impossible to divorce the server from the viewer. They're two halves of the same coin; one can't exist without the other.
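The split Robert is arguing for — a renderer that never touches world-protocol logic — can be sketched as a set of per-world adapters feeding one neutral scene description. All names and the stubbed data below are hypothetical, purely to illustrate the architecture:

```python
# Sketch of a viewer split into a neutral renderer plus per-world
# protocol adapters. Real adapters would speak each platform's wire
# protocol; these are stubs that just emit a scene description.

from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class SceneObject:
    name: str
    position: tuple  # world-space (x, y, z)

class WorldAdapter(Protocol):
    def fetch_scene(self) -> List[SceneObject]: ...

class OpenSimAdapter:
    """Would speak OpenSim's protocol; stubbed for illustration."""
    def fetch_scene(self):
        return [SceneObject("opensim_avatar", (10.0, 10.0, 21.0))]

class SinespaceAdapter:
    """Would speak Sinespace's own protocol; stubbed for illustration."""
    def fetch_scene(self):
        return [SceneObject("sinespace_avatar", (11.0, 10.0, 21.0))]

def render(adapters):
    """The rendering side never knows which world an object came from."""
    return [obj.name for adapter in adapters for obj in adapter.fetch_scene()]

# Two avatars from two different worlds, side by side in one scene:
print(render([OpenSimAdapter(), SinespaceAdapter()]))
```

The point of the design is exactly Robert's example: avatars from different back ends can share one rendered scene, because the renderer consumes only the neutral description.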
Let me come back to Adam quickly, because of one thing you didn't mention, although it has been done in Sinespace. We were saying earlier that all software, quite frankly including OpenSim viewers, is learning software: you want to be able to offer easy options as well as more complex ones. Adam will tell us that at the moment you have to go into Unity, and that is a learning curve — understanding the principles of making your objects, and working with them in real time, isn't quite as intuitive as walking around them in live space. But there are certain building tools that Adam has put into the viewer — I assume they don't work in the web viewer, but they do work in the Sinespace client — which, Adam, you described, if I recall, as a micro-set of Unity's own tools. The idea is that somebody can enter your world in Sinespace, just as they would OpenSim or Second Life, and you can offer them building tools; and the nature of those tools is such that they're learning a micro-set of the industry tools at the same time, which might then give them a logical jump into the full authoring software. Yep, that's exactly what we've been working on. I see it as a big long chain — and this is something I've talked about before, the democratization of creation. Right now there are two approaches. In our viewer you've got the in-world build tools; they're somewhat simplistic, but they do exist — and they do actually work in the web viewer. I'd still recommend using the full viewer, but you can in theory use them in the web viewer just fine. And at the other end of the spectrum, we've got the thermonuclear warhead of content creation, which is Unity.
By that I mean it's just so many tools, so much power — but you've got to spend a bit of time learning those tools. What I see is a chain between the two, and what we want to do is build every link in that chain. At some point you do have to cross over and fire up Unity, but ideally that's the midway point. Our in-world build tools use all the same keyboard shortcuts, the same control gizmos, all the same stuff as Unity itself; at the other end of the spectrum you've got the full thing. And every time we add a link between those two, we begin making a guided path from one to the other. You can start out with the simple tools, and as you get used to the keyboard shortcuts and so on — because again, we copy the same shortcuts Unity uses, identical shortcuts for camera control and everything else — you pick up Unity knowledge. Then when you actually get into Unity, you go: oh hey, I already know this. This is fantastic. The reason we've got to do that is that there's a fundamental gap between users who create and users who don't, and it doesn't need to be as hard as it is right now. Yes, there will be users who never want to create a thing — probably quite a lot of them; looking at the statistics, probably something like two-thirds of the population. But there's this one-third who do create, or could be prodded into creating content if they had the right tools in front of them. The reality right now in virtual worlds — and I'm going to include Second Life, OpenSim and all the other ones — is that content creation is really locked to no more than 5% of the population. OpenSim is probably a bit higher, simply because people who create content are drawn to it.
But if you look at the global populations, it's a very small percentage, and it could be easier — and therefore higher. The easier you make content creation, the better it goes. We've proved this with things like letting people remix other people's content: the number of people who remix existing content versus creating original content is three or four times higher. And every step that makes it easier brings a new batch of creators on board. Exactly — I will freely admit I'm a remixer. I'm a graphic designer; it's the component parts that count, not being able to model from scratch. Interesting, because a focus I've mentioned already is things like the OAR converter — and Austin's in the audience here. The OAR converter makes it easy: save an OAR file offline, run it through the converter, and suddenly everything in that OAR file is in a folder full of DAE files. You can drag them all together into, say, Unity, and everything maintains its position — it's a way of bringing the whole thing across — or you can extract just the DAE components you want. Now, what interests me: your bottom line with Sinespace in the long run is virtual goods, I think — getting lots of users, obviously, but the bottom line for the company is going to be the virtual goods. That's a topic for the commerce panel later. But on the community and interoperability side — and others can comment on this too if they feel like it — what is the possibility of the reverse direction? Could I, for example, use the in-world building tools in Sinespace, which are a subset of Unity, build something really exotic in mesh — if I were up to it — and then export that exotic creation back here as a duplicate into OpenSim? I mean, OpenSim supports mesh.
There are variations, of course, but do you think that will be a possibility too — where the platforms are different, but they allow this back and forth? Yeah, so the naive answer is yes. And the reason I say it's the naive answer is that we use standard formats; we don't go off and invent our own mesh formats. We mostly use FBX, which is the one the entire industry has settled on these days, but you can use DAE and the other formats as well. It would be possible to write an OAR exporter. There would be some elements we'd need to kick out, because we've got content-protection worries for our merchants — we try to be quite strict on that. But the real problem is that while you could bring across, say, the mesh, we give you a whole bunch of other tools on top. One of the things we really focused on was taking all the limits away, so you can write custom shaders, for example, for your content. For those who don't know, shaders are small programs that run on the GPU and describe how something should be rendered — a shader is what defines an object's look. I'll give that as one example: it's the kind of content you couldn't bring back, simply because the capability isn't there. The moment you bring content back, all those limits lock straight back into place. So yes, you can bring it back, but the question is: would you want to? Would you want to take your content and rip all the cool stuff out of it, or would you rather bring it somewhere with all this power? Because all that power would need to be added to the Second Life viewer to bring that content back and have it look good.
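To make Adam's definition concrete: a fragment shader is a small function run once per pixel that decides that pixel's colour. Real shaders are written in GLSL or HLSL and run in parallel on the GPU — this CPU-side Python toy only illustrates the idea, and is in no way how Sinespace implements custom shaders:

```python
# Toy "fragment shader": one small function evaluated per pixel.

def fragment_shader(u, v):
    """Map normalised screen coordinates (0..1) to an RGB gradient."""
    return (int(255 * u), int(255 * v), 128)

def rasterize(width, height):
    """Call the shader for every pixel, as the GPU would do in parallel."""
    return [[fragment_shader(x / (width - 1), y / (height - 1))
             for x in range(width)]
            for y in range(height)]

image = rasterize(4, 2)
print(image[0][0])  # top-left pixel: (0, 0, 128)
print(image[1][3])  # bottom-right pixel: (255, 255, 128)
```

A platform that exposes this per-pixel program to creators gives them looks that a fixed-function viewer simply cannot reproduce — which is exactly the portability problem being described.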
And that was pretty much the inspiration for why we ended up building our own platform: that stuff was so insanely difficult to do that it was easier to build something new from scratch. A lot of people are asking where you can get the OAR converter. Ai Austin is giving a presentation all about it later in the conference, so we'll make sure you've got the links to download it — just search for "OAR converter". Right. I was actually cautioned — although I've been told differently since — that Cinder, who I'm afraid just doesn't seem to have made it, said: I'll talk about Alchemy or Singularity, but don't ask me about MOSES, I'm under an NDA. I don't know how much I could have asked, and of course the MOSES project is no longer running. Robert, you were involved with MOSES to some extent, weren't you? To some extent. And I gather prototypes were made there of an actual web viewer for OpenSim as it is now. Is there anything you can tell us about that that isn't under NDA — or, even in the abstract, do you think there is any future in what they were working on? Well, they were working on an architecture — I think the one you mentioned earlier — of having a converter in between the simulator and the viewer that did asset conversions: converting assets to different formats, mesh simplification, mesh joining, that sort of thing. They had an initial proof of concept that they used to get some initial numbers, but it was nowhere near a releasable technology. Right — so like everything else, it was at the permanent beta stage.
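The MOSES-style architecture Robert describes — a converter sitting between the simulator and the viewer, rewriting assets in flight — can be sketched as a simple pipeline. The stage names and the asset shape below are invented for illustration; this is not MOSES code:

```python
# Hypothetical asset-conversion proxy: each stage transforms an asset
# before it reaches the viewer (format conversion, mesh simplification).

def convert_format(asset):
    """e.g. Collada in, glTF out; here we only relabel the asset."""
    return {**asset, "format": "gltf"}

def simplify_mesh(asset, max_tris=1000):
    """Decimate heavy meshes so low-end clients can still render them."""
    return {**asset, "tris": min(asset["tris"], max_tris)}

def asset_pipeline(asset):
    """Apply every stage in order, as the proxy would per asset."""
    for stage in (convert_format, simplify_mesh):
        asset = stage(asset)
    return asset

heavy = {"name": "statue", "format": "collada", "tris": 250_000}
print(asset_pipeline(heavy))  # {'name': 'statue', 'format': 'gltf', 'tris': 1000}
```

The appeal of the design is that neither the simulator nor the viewer changes: the proxy adapts content to whatever client is on the other end, which is what would make a lightweight web viewer feasible against an unmodified OpenSim region.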
I actually laugh when people say things like: compared to all these flashy worlds, OpenSim looks dated. Because, as we're saying, there is so much involved when you take on these tasks that it will always look a bit dated — these are individual strands of a greater picture. Now, this is a real minefield, but I was fascinated that the core devs, when they were speaking earlier, almost provided a segue to this panel, because the issue of viewers came up, and it was quite clear the core devs would say: we know something needs to be done about this darn viewer. And I think that's truer than ever. Firstly, something fast and easy to use — with maybe a longer learning curve for people who really want to get into it, but not for people who just want to use it quickly — be it on the web, a standalone viewer, or whatever. But there are two things feeding into this. I gather Linden Lab is moving Second Life to Amazon's cloud over a period of time, and apparently some of the functionality of what Linden calls third-party viewers — from Firestorm down to the others, I guess — may be impacted by the way it will be served from the cloud. For example, a bit more like Kitely, I gather, where Second Life regions will actually be offline until the first person logs into them. That continuous world you can walk and fly about won't necessarily be there any more — I think it will depend on how quickly they can bring servers up as people request them. I don't know enough about the mechanics, or quite how that affects viewers, but I gather viewers will have to change. Harking back to the core devs talking earlier, both at the Q&A and their panel, they clearly realize a dedicated viewer is needed. If we suddenly find the Linden third-party viewers becoming dysfunctional, things could get odd.
They might work here and no longer work on Linden's world, or they might work in Linden's world but no longer work here. Hence the imperative for a viewer. I personally would just love to see something I call a metaverse viewer, which would somehow interpret all the platforms and adjust itself, but I don't think that's going to happen very soon. I want all of you to answer this. What is the likelihood of, and the drawbacks to, a universal viewer? I almost know Crista's answer, because, and I'll let you start, Crista, I think you're going to say there's nothing wrong with the picture, it's all the code that makes the interface on top of the picture. But it's the cost of developing that. I mean, you told me, well, I'm not going to quote numbers here, but it would take somebody a year with a rather big budget to engineer a rebuild like that, wouldn't it, Crista? Yeah, so viewers are, there are many kinds of programs and programming. Doing server-side programming has its hardships, but doing heavy graphics programming is a whole other ballgame, for many, many reasons. If you're doing front-end programs that run on people's computers, you're going to have to deal with people having all sorts of computers, from high-end to low-end, high memory, low memory, many processors, few processors. There's so much variability on the front end of things, and then lots of different kinds of graphics cards. So it's very hard to do a graphics program like the ones we are seeing here that performs well in all these circumstances. It's quite a bit of engineering, and a lot of it is not rocket science, it's just work. And that kind of effort, I think, is going to be very hard to do the way we have been doing OpenSim. I have a feeling, as I said before, that we need to have some incentives, probably financial incentives, for people to actually want to do that kind of work.
And I don't know where to find that, but it would be a nice thing if people would get together and try to do it with a combination of crowdfunding and maybe companies who are interested in developing it. Yeah, when it comes to the core code and stuff like that, this is the problem, well, it's not a problem, it's a great thing, that it's all volunteer and open source and everything else, and doesn't really have that hypothetical roadmap we were talking about earlier. But it does mean that if there's a major jump to be taken that requires funding, one gets the impression that the devs will want to keep working on the code as it is, and it probably needs somebody to come in with a real reason to build that ultimate viewer, if you see what I mean. And I notice Kay McClendon and quite a bit of conversation going on in world about whether the viewer creators, we haven't got Cinder here, of course, are open to addressing the needs of OpenSim, and it's a difficult one to answer. I will cite Firestorm, because I talk to Jessica quite a bit. She loves the idea of OpenSim and loves that Firestorm is used in OpenSim a lot, but she has a huge dev team of about 80 people or something just on the viewer side, and they are all Second Lifers, and she can't get them interested in writing the stuff for OpenSim. There are Firestorm user groups in quite a few grids on OpenSim, and she wants feedback from those people to say what they want and what they need. It's another avenue that doesn't seem to be explored: join the Firestorm users group and tell them what you want to make it work better in OpenSim, not just generally, but in OpenSim specifically. So I think they are open to OpenSim especially. Although she's not here, the Radegast client works very well for OpenSim, as does, of course, Alchemy.
Right, back on to that question though, the prospects for a greater viewer, both the economics of it and the practicalities of it. Back to you, Robert, on that one, I think, before I move to Dieter again. This is probably where we have to start wrapping up. Yeah, what was the question again? Just your thoughts, really. I think we all agree there will be a need for an OpenSim-dedicated viewer, if not an open metaverse viewer. Well, I keep buying my lottery tickets and I haven't won yet. But given that, I think it's going to take someone to lead the charge on the open source version of it, to make a roadmap and start making it happen, and see if people can be collected around it to do the development, and make some decisions on which technologies to use. And I think there's a lot of possibility there. Okay, same question to you, Dieter, really. Obviously there's the aspect of a web viewer that would be compatible with everything, but also standalone viewers. Do you think there are any signs of this being a possibility, or do you think the cost is just something that will take a windfall for somebody to come up with for OpenSim? Yeah, I think technically it should be possible, but it would take a lot of effort and a commitment from several groups. I think with a web viewer the prospect is even better, because we started from scratch very recently. What I have done so far, and also what Singer Girl is doing with her web worlds, is just a brand new kind of viewer. And the thing is, here we should have contact with the core developers, because what we will need is a kind of interfacing black box as part of the server, to be able to use web protocols easily to get data from an OpenSim server.
Do you think, and I think only Robert or Crista would know the answer to this, do you think there is anything in the code base of OpenSim itself that could be addressed to make it an easier process for, say, a web-based viewer to access the data? Actually, there are several possibilities. Starting from the core module basis, there are modules like the dispatcher, which was built a few years ago, that have both RPC-type access as well as security integrated with the OpenSim security system, for accessing stuff inside the world and getting at the objects and that sort of thing. It wouldn't be too hard to have a module for OpenSim that did that, that created a new protocol. In fact, it's been done for other systems a long time ago. But a module, how does that work with all the other things? I mean, with hypergridding, one of the problems is that all the regions aren't running the latest version, and if it required a module, there would be this problem, which is also why other people started working on outside boxes that essentially proxied the region. Sure. Yeah, I know. It seems that every second time I hyperjump I get, this region is running a different version of a different server, and blah, blah, blah. So yeah. Okay. Well, I'm getting my prompt from my backstage crew, as it were. So you've got two minutes left to stop blabbering. Time just flies; there's never enough time for all this. I'm going to give a plug for a show that I broadcast at noon on Sundays. It's called the InWorld Review. It's a mixed, open-ended talk show; sometimes it goes on for two or three hours. These are the kind of things we do address there now and again, so the conversation continues. Also at the conference here, of course, we have a hypergrid panel coming up in a couple of hours, which I'm hosting too.
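The dispatcher pattern Robert describes, an RPC-style server module with integrated security that hands scene data to outside clients, can be sketched very roughly as below. This is emphatically not the actual OpenSim dispatcher module; the scene layout, token scheme, and function name are all assumptions made for illustration.

```python
import json

# Illustrative sketch of an RPC-style endpoint a web viewer could call:
# validate a capability token, then return scene-object data as JSON.

SCENE = {"obj-1": {"name": "Welcome sign", "pos": [128, 128, 25]}}
VALID_TOKENS = {"cap-token-123"}   # hypothetical capability tokens

def handle_rpc(request_json):
    """Check the caller's token, then return the requested object."""
    req = json.loads(request_json)
    if req.get("token") not in VALID_TOKENS:
        return json.dumps({"error": "unauthorized"})   # security gate first
    obj = SCENE.get(req.get("object_id"))
    if obj is None:
        return json.dumps({"error": "not found"})
    return json.dumps({"result": obj})
```

The point of the sketch is the shape of the interface: plain web protocols and JSON on the outside, the grid's own security model enforced on the inside, which is what would let a browser-based viewer talk to a simulator without speaking the full Second Life protocol.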
Tomorrow, Cesar Ember will be talking more about the advantages of the hypergrid, the killer app I mentioned. And indeed, Ai Austin will be talking about the conversion stuff, the OAR converter. And tomorrow there's a panel called Bridging Worlds, which Lilovo will be hosting, which will go further into the idea of creating cross-platform bridges, for example, the OAR converter and taking things from OpenSim and putting them on Sinespace and the like. So it's all connected. It's all connected, and there's a lot more to come, which is why you're still here. But for me and this panel, I'm afraid it's time to wrap. We've got more stuff coming up for you. So I'd like to thank everybody here. I'd like to thank Dieter Heyn. Thank you, Dieter. Thank you. It was great, and I think we should really have a continuing conversation about these topics, because I think especially a web viewer, where you simply click a link on a web page and enter the virtual world, is really fascinating. Yeah, it's just going to beam you there quick and easy. And it doesn't lag. Bye. And thank you to Crista Lopes, of course. She'll be with me on the hypergrid panel too, because it's her baby. So thank you for now at least, thank you, Crista. Thank you. And thank you to Adam from Sinespace, and one of our wonderful founders of OpenSim, not to forget that. So thank you, Adam. Thank you for having me. Okay. And thank you to Robert Adams, sometimes Mr. Blue. I finally clicked on that one. Thank you, Robert. Thank you for having us. Okay, wonderful. So as I said, it's a wrap. I'm hoping the moderators will take over from me and tell you what is coming next. But in case they don't, I'm simply going to open our own schedule to tell you what's coming next. Typical me. I've got the right page open. Yeah, where are we now? Yes, indeed. We're going to hear from Fred. Fred Beckhausen about DreamWorld.
Actually, we'll be hearing Mind Palace for English in Immersive Worlds. Oh, yes, sorry. You're ready for your other panel. You're already hearing yourself. Yeah, I'm getting ahead of myself. What's new there? Embarrassment? Embarrassment or doubt? Yes, indeed. So we have Lou Lobo, La Tissue, Sherwin Colgan, and I know Hikey is on the next panel. So Mind Palace for English in Immersive Worlds at 1pm, which is eight minutes from now. So I guess you'll be getting some music on the stream to keep you going in the meantime. Thank you for having us, and good wishes for the rest of the conference.