Welcome back, everyone, to the 8:30 to 9:00 a.m. session of the 2021 OpenSimulator Community Conference. This session, we're pleased to introduce the presentation Future SceneGate 2.0. Our speakers are Lisa Laxton and Frank Ruloff, along with several of their interns. Before we get started, I want to remind you that you can check out our website, conference.opensimulator.org, for speaker bios and details from their sessions, like links to things they may mention. Those of you listening to our recorded session can ask your questions through Twitter at OpenSimCC with hashtag OSCC21. Anybody else in-world, if you could direct-message me your questions, that would be fantastic. All right, I'm going to start this session, Frank. I'm going to toss it over to you so you can introduce all your great interns.

Thank you, Meg. Again, nice to be here. I'd like to introduce the students who have been working hard on the Future SceneGate 2.0 viewer. The last group that worked on it was Axel, Marion and Tiffen; they did the initial work. The current group is Alex, Kojé, Pierre and Oscar. They all come from CPE Lyon, which is a university. They are on different tracks in what they are learning, but all in the area of software engineering. I would like to give the word first to Tiffen, who will tell more about what his group did in the last year. Over to you.

Hello, everyone. I'm Tiffen, and during this presentation I will present the work my team did last year when we were interns at Thales Netherlands. We were a team of three, composed of Axel Salmon, Marion Clément and me. We were all French students from the engineering school CPE Lyon. Our mission was to upgrade the rendering engine of the SceneGate viewer. Next slide, please.
Our first task was to analyze the code of the SceneGate viewer in order to identify the rendering parts and group them together. We decided to classify the files involved in the graphics engine according to their use. For that, we looked for the central files which manage all these functionalities, which then allowed us to identify all the files necessary to their functioning. After the classification, we had to group them together in a single folder. Creating this graphics-engine module would later make it easier to remove it and put in a new one with better performance. I don't know if you can see the graph, but as you can see in it, we wanted an independent rendering part that could be connected to or disconnected from the other parts, like the server or the physics. Next slide, please, Frank.

When this step was done, we had to find a replacement for our graphics engine. This new graphics engine had to have several characteristics which are really important. It had to be visually better than the current one, because it's an upgrade. It had to have better performance in terms of frames per second, for example for 3D headsets in the future. It had to be open source, because that's key to this project. And finally, its programming language had to be C++ in order to match the rest of the initial code of the SceneGate viewer. After studying several well-known engines such as Unity or Unreal Engine, as Frank told you before, our choice fell on the Godot engine. This engine met all our criteria, in addition to having an active community and a prospect for evolution. This part was really important for us, in order to keep updating the rendering engine in the future or just to add new features. Next slide, please.

Then we had to do the same classification work that we had done on the SceneGate viewer. This step would facilitate the association between files having the same function in the two graphics engines.
So we sorted them in the same way, which gave us the same kind of table for the files of the new engine; you can see it in the presentation. Finally, the last step consisted in replacing the SceneGate files one by one with the Godot ones. We had to identify which file had the same role as the old one. Indeed, we had grouped them into categories, but we did not know the precise role of each one. To do this, we had to read the whole code and find some documentation on the internet. Moreover, since the two engines do not work in exactly the same way, certain files took on the role of several old ones, and vice versa. Unfortunately, we did not manage to finish this part, but we passed all the documentation and the work we did on to the next team, whom you will meet right after me. I'll let my friends speak for the rest.

Okay, thank you very much, Tiffen. Oscar, will you take the second part?

Yes. Next slide, please. I'll be presenting this year's progress, and this year there are four of us from CPE working on the project: me, Oscar, plus Alex, Kojé and Pierre, who are also on stage. Next slide, please. This year's main goal is to use Godot's rendering engine to improve the frame rate and stability of the SceneGate viewer so that we can potentially use VR in the future. To do this, we have set three goals that we are working towards this year. Next slide, please.

As you can see, this is a simplified structure of how we would like the new SceneGate viewer to function. The three goals we have set are, first, to create a transition interface, an API, that would take everything that exists inside the 3D world and is sent from the server, and convert it into entities that can be recognized and used by the Godot engine. Our second goal is the integration of Godot's rendering functionality inside the SceneGate viewer, with the creation of an adapted Godot engine.
And our third goal is the rendering of the world and creating an output onto the screen for the user. Next slide, please.

For our first goal of creating the transition API, our aim is to keep the changes to the Godot engine to a minimum. That way we can use most of its functions and keep future upgrades possible. So we decided to create a world inside Godot into which we transfer all the visual elements from SceneGate, such as the lighting, the objects and the terrain. That way we can let Godot's rendering engine take care of the work through its rendering process. Next slide, please.

The way we want to do that is this: the elements from SceneGate are sent by the server, then unpacked inside the viewer and sent to the rendering. What we want to do is intercept those entities before they go into the rendering and pass them through our API, which converts them into objects that are then created inside our Godot world. All the updates of the camera and objects also go through the API into the world, so that every change that happens in real time also happens at the same time in the Godot world. As of now, we've had some successful tests of creating objects sent by the server inside the Godot world, but there is still a lot more to be done on that part of the project. Next slide, please.

For our second goal, the integration of Godot's rendering functionality, our aim is in the end to create one single application with a single executable. So we have to integrate all of Godot's useful functionality inside the SceneGate source code. The thing is that Godot was not made to be used as a library, which is why we want to create an adapted Godot engine. To do this, we also need to understand all of Godot's architecture, to get the most out of it and to transfer all of the necessary rendering functionality.
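The interception-and-conversion flow described for the first goal can be sketched in plain C++. This is a minimal, hypothetical illustration only: the type and method names (ServerEntity, EngineNode, TransitionApi and so on) are invented for the sketch and do not come from the actual SceneGate or Godot code bases.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// What the viewer unpacks from the server (simplified stand-in).
struct ServerEntity {
    uint32_t localId;
    float posX, posY, posZ;
    std::string meshAsset;
};

// What the rendering side consumes (a stand-in for a Godot scene node).
struct EngineNode {
    std::string name;
    float position[3];
};

// The transition API: converts server entities into engine nodes and keeps
// a map so later position updates can be routed to the right node.
class TransitionApi {
public:
    // Called when a new entity arrives from the server.
    EngineNode& onEntityCreated(const ServerEntity& e) {
        EngineNode node;
        node.name = e.meshAsset + "#" + std::to_string(e.localId);
        node.position[0] = e.posX;
        node.position[1] = e.posY;
        node.position[2] = e.posZ;
        nodes_[e.localId] = node;
        return nodes_[e.localId];
    }

    // Real-time updates from the server are forwarded into the engine world.
    void onEntityMoved(uint32_t localId, float x, float y, float z) {
        auto it = nodes_.find(localId);
        if (it == nodes_.end()) return;
        it->second.position[0] = x;
        it->second.position[1] = y;
        it->second.position[2] = z;
    }

    // Look up the engine-side node for a server-side ID, if it exists.
    const EngineNode* find(uint32_t localId) const {
        auto it = nodes_.find(localId);
        return it == nodes_.end() ? nullptr : &it->second;
    }

private:
    std::unordered_map<uint32_t, EngineNode> nodes_;
};
```

The key design point is the identity map from the server's local IDs to engine-side nodes, so that later camera and object updates can be routed to the correct node in real time without the server ever knowing which engine renders them.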
There is also the issue that Godot uses SCons as its build system, while the viewer uses CMake, so a conversion is needed there as well. As of now, we've made some progress on implementing classes from Godot inside SceneGate, but, like with our first goal, there is still a lot more to be done on that part. Next slide, please.

Our third and final current goal is the rendering. Our aim is to be able to create an output onto the screen which is the result of the Godot rendering. Godot usually renders inside its own application, so we need to make it render inside the SceneGate viewer, which is why we need to create tools to make this rendering process possible. We also need to take into account that the rendering is done in multiple parts: the user interface, the heads-up display, and the 3D. We of course want Godot to take care of the 3D part, because that would boost the FPS by a lot, but we will also have to see whether we keep some of the SceneGate functionality for the 2D components. For this goal we are still at very early stages and haven't been able to do anything conclusive, so there is still a lot more to come. Next slide, please.

This is an overview of our goals for the new SceneGate viewer's rendering process. There might be some changes that come with new challenges during our development later on, but for now we will continue with this structure. Next slide, please.

In case everything I've talked about until now turns out to be an impossible task, we still have an alternative goal that we might work on at the end of the year: to improve and optimize the current viewer's rendering and communication with the server, using Godot and other means we have at our disposal. And this concludes our presentation on the last two years.

Well, thank you very much. I think you all did a great job. I had a question for the first group.
What did you find the most difficult in the work you have been doing?

I think the lack of documentation at the beginning was pretty hard. We had nowhere to start; we were kind of lost at the beginning, yes.

Okay. And you, Oscar, Pierre and the others?

I think for us as well. I had to spend a lot of time looking into the SceneGate code at the start, but we've had some help thanks to the previous interns' documentation, and also from diving into the Godot engine.

And do you intend to make the documentation on the viewer complete when you finish, Oscar?

Sorry, could you repeat the question?

Are you making the design documentation, that is, the documentation that is needed to maintain the source code and to explain it? You're making that in parallel with doing the work on the viewer, aren't you?

Yes, we've already made some documentation to help us with our work, and when we make something that works, we intend to create documentation for it as well.

Okay. Are there any questions from the audience?

Yeah. Lisa kind of answered one of them, but I wanted to put it out here. Nick asked earlier: can you use this in a mobile format? Oscar, do you have a goal of making this so that people can use it on mobile?

For now, no. We haven't really thought about it. It's mostly on PC.

Well, to answer this question a little bit more: the problem with mobile, of course, is the processing power of mobile phones. If you can do the rendering somewhere else and then stream it to a mobile phone, that works fine. But if you really have to run the application on a mobile phone, mostly the CPU power is not enough. What we're doing now is not especially focused on mobile phones, but on running on a normal PC, laptop or whatever. There is one option, but a costly one: you run these viewers in the cloud, and then you can, of course, stream them to mobile phones, tablets and other devices.

Right.
But that in itself has a fairly high cost to it.

Yeah, around $60 a month per CPU, or per concurrent user. So if you look at companies like Bright Canopy: they recently shut down because they were not making enough money to fund that service, and they were providing browser access to a host of Second Life users, which is a much larger market than OpenSim.

Along the money line, Alan Scott asked: who's paying for this project?

This part of the project is paid for by Thales. The payment is in the form of, well, the interns get an allowance because they do work for us, plus the hours I put into the project, as well as having a laboratory available and some support and services from other parts of the company. The coordination of this project with SceneGate itself and some of our other work is covered on my side of the house.

Okay. Gavin Hurd asks: what is your strategy to decouple the rendering code from the rest of the viewer code, when not even the lab that wrote the renderer is able to do so?

Well, that's one of the main objectives of this part of the development. We decided to first put a different rendering engine into the same code. Then the next research step would be to see whether you can take that code and isolate it in such a way, with an API or something else, or even as a separate application, that you can run it for 3D headsets. Because 3D headsets require a constant frame rate; otherwise you get motion sickness. So you must guarantee that, and current viewers are not made to do that. In current viewers, there is a loop that covers the rendering part but also goes over the internet to the server itself. We did some testing in the past: we used very high-performance computers, and we also experimented with the Oculus Rift and so on to see what we could achieve, but we never could get the frame rate stable.
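The decoupling Frank describes, a render loop that ticks at a fixed rate regardless of how fast the server round-trips complete, can be sketched with two threads. This is a hypothetical illustration, not SceneGate code: the network side publishes the latest state into an atomic slot, and the render side reads whatever is newest at each fixed-rate tick instead of waiting for it.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Shared state: the network thread writes, the render thread reads.
std::atomic<float> latestCameraX{0.0f};
std::atomic<bool> running{true};

// Stand-in for the server/network loop: slow, irregular updates.
void networkLoop() {
    for (int i = 1; i <= 5; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(7));
        latestCameraX.store(static_cast<float>(i));
    }
    running.store(false);
}

// Fixed-timestep render loop: aims at ~60 Hz regardless of network pace.
// Returns how many frames it rendered before shutdown.
int renderLoop() {
    using clock = std::chrono::steady_clock;
    const auto frame = std::chrono::microseconds(16667);  // ~1/60 s
    int framesRendered = 0;
    auto next = clock::now();
    do {
        float camX = latestCameraX.load();  // never blocks on the network
        (void)camX;  // a real renderer would draw the scene here
        ++framesRendered;
        next += frame;
        std::this_thread::sleep_until(next);  // hold the fixed cadence
    } while (running.load());
    return framesRendered;
}
```

Because the render thread never blocks on the network, a slow or jittery server connection only affects how fresh the rendered state is, not the frame rate itself, which is exactly the property a 3D headset needs.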
That has to do with this loop. So one of the questions is: can we isolate the rendering part somehow, in such a way that we can guarantee it will run at a 60-hertz frequency? That's one of the goals we have for the future as a next step in the evolution. But first we wanted to upgrade the viewer with a modern rendering part, which at the same time teaches us how everything is connected in the SceneGate viewer. Then the next step would be the research to see whether you could decouple it, run it, and be sure that it runs at 60 hertz.

Okay. Frank, Tommy asks: did you create your own documentation for the OpenSimulator code that could be provided to update the OpenSim wiki?

Well, we provide documentation on the SceneGate code, because that's what we're working on now.

Okay. So everyone, you can find it in the SceneGate documentation.

A follow-up question by Gavin Hurd: with the very tight integration between current content and the renderer, what is your strategy to transform the content to work with a new renderer?

I think Kojé is one of the interns who is looking at that subject. So Kojé, could you tell us something about that, if you're not on mute? Kojé? Is he up here?

Yeah, he's up here. Yes, are you ready?

Yeah, I hear you. Yeah, okay, sorry. We do it part by part, slowly. We start with some smaller objects, but we hope we're going to be able to do it for every kind of visual element in the end.

So how are you doing that, then? You look at how the object is being rendered in the current viewer and then put that against the requirements of the Godot engine?

Yes, exactly.

Yeah, okay. Art Blue is asking: I know a bit about game engines like Unity; two years ago we moved an OAR to Unity and Google Cardboard and Oculus. I assume Godot also has a mesh-based database. Will there be a bridge from OpenSim to Godot, or am I fully wrong in my understanding? I see you speak of the rendering machine in Godot.
So do you take the data from the OpenSim OAR and bring it via Godot to the user's viewer? I'm not sure my question is clear, okay.

It will go exactly like it runs with the current viewer. Your objects will be supplied over the internet in a certain form which can be translated by the viewer into objects you see on the screen. Now, if you recall one of the earlier slides: you will get the same objects, only they are internally converted into the format that Godot knows and can handle. So it stays internal. For the outside world, nothing needs to change in the way the objects are defined in OpenSimulator itself. This conversion happens internally, in the viewer itself.

Ah, okay. Let's see here. Is there a way to map between the different platforms, a standard of some sort and reference libraries?

Well, that is difficult, because every viewer has its own rendering engine. There are, of course, standards for defining graphical components, but to what extent they are followed by OpenSimulator, I don't know, to be honest, because we focused on what there is and on converting it, changing it, on the OpenSimulator side.

Okay. And Gavin Hurd asks: is your target group for this development users with 3D headsets?

No, that's one of the target groups.

And what are the others?

People with normal 2D PCs and laptops. Because I think, and we had this in a lot of earlier discussions, that in a lot of cases having a laptop or screen is more effective for people than wearing a 3D headset. First of all, it's costly, and a laptop or PC you always have at home; the only extra thing you need with it is a headset for audio. So it's not only the cost; I also cannot imagine, at least I would not like, wearing a 3D headset the whole day long. So the duration that you can use a 3D headset is also limited.
You cannot ask people to wear a headset for three hours or so. So it is a way of getting better immersed in a virtual world, but the time you can do that is limited. I think it's not only 3D: the normal PC, laptop and 2D screens are also important. So it's not targeted at only that group; it's targeted at all groups.

Nick is asking: will there be a web viewer?

There might be, but that's not on the roadmap now, because we first want to transform the viewer and provide it with the flexibility that we need. There is a way to do it with a web viewer, and that's what we said before: using rendering in the cloud. But that's a costly solution. You can do that now; you don't need to wait for any changes. But it takes a certain amount of work to run the rendering itself on servers, and that's quite expensive. So not for now. And there is also still the limitation in graphical capabilities that you have when you simply want to run a web viewer.

Okay. And then I just wanted to ask any intern who wants to comment on this: I'm curious, as you're working on this project, how is it inspiring you to think of the next project you want to work on? Anyone? Oscar?

Yes, I can give an answer. I mean, I more or less always wanted to work on projects like these, making a 3D application, games or a simulator like this. So in the future I could continue doing things like this.

Nice. Nice. Go ahead, Frank. I'm sorry.

Okay. Gavin Hurd has a question: why is it that you think the M1 processor, which runs the current viewers perfectly fine in Apple's laptops and desktops, would not be able to run a viewer in a phone, when the exact same processor is used?

Well, it's not only the processor, of course; it's also the GPU in the processor that has to support it. And the GPU is maybe not enough, I don't know.
We haven't really done any experimentation on mobile phones, because we are still far from having something we can run on a laptop or PC; then we can maybe see whether we can transfer it to a mobile phone. But until now, all the articles I have read point to the limitation in graphical capabilities there.

All right. Let me see if I have a last question here. Are you guys going to be at a booth after this?

Yes, we'll be at the booth during the break.

And where is your booth?

We are booth number four in Expo Zone 3.

All right, booth number four, Expo Zone 3. So if anybody has further questions for them, please stop by there. And I want to thank, gosh, all of the interns here. What a great project you're working on. Very exciting stuff to move us all into the future when you figure this out. We really look forward to your success.

Thank you very much. And thank you to all the interns who took the time to be here today to explain it to the audience.

All right. As a reminder to the audience, check out the website, conference.opensimulator.org, and see what's coming up on the conference schedule. Following this session we have a little break, and the next session will begin at 9:30 a.m. It's entitled Digital Citizenship for Cyborgs and Avatars. Yes, please. I added the "yes, please" because I can't wait to hear it. All right, thank you so much for listening, and we'll see you at 9:30 a.m. Thanks, everyone. And thank you very much to the interns.