Hello everyone, and welcome to the 11 a.m. to 11:50 a.m. session of the 2019 OpenSimulator Community Conference. In this session we are excited to introduce a presentation called SceneGate Viewer. Our speakers are Lisa Laxton, Frank Roulop, Natasha Blue and Troy Schultz. I will briefly introduce our panel speakers today, and please check out the website found at conference.opensimulator.org for speaker bios, details of sessions and the full schedule of events. Lisa Laxton, sometimes known in-world as Che Lene Erez, is the R&D visionary and CEO of the OpenSimulator-community-focused Infinite Metaverse Alliance, or IMA. She is also president of Laxton Consulting LLC, a company that provides various technical solutions for virtual world organizational needs. Frank Roulop is a senior systems engineer at Thales Netherlands with expertise in training and simulation. He leads innovation and research for OpenSimulator within the global Thales company. Natasha Blue is an engineering student at CPE Lyon, France, specializing in network architecture and cybersecurity. She works as an intern at Thales Netherlands and has been charged with reviewing SceneGate Viewer security issues. Troy Schultz, known in-world as Seth Nygaard, is a multi-discipline developer in real-time systems for industrial, automotive and other critical environments. His roles include senior hardware designer, senior systems administrator, engineering manager and chief technology officer, as well as owner-operator of Refuge Grid. The session is being live streamed and recorded, so if you have questions or comments during the session you may send tweets to @OpenSimCC with the hashtag #OSCC19. Welcome everyone, and let's begin this panel session. Good morning. Thank you for that great introduction, and congratulations to the organizers: your first day yesterday and today are going very well. Great, great sessions are happening. 
We'd like to talk to you about one of our projects, which is the SceneGate Viewer for OpenSimulator. Since the open beta launch in November of 2019 there have been quite a few questions, so we hope to have time for Q&A with our panel after the slide presentation. Infinite Metaverse Alliance and its strategic partner, Thales, share a common mission: to focus on inclusive design thinking to advance virtual worlds, virtual reality and synthetic environments. Project SceneGate is one of several integrated projects supporting this mission. The SceneGate Viewer provides the user with a gate to a 3D scene in a virtual world. So we asked the research question, from a use case perspective: do we need a new viewer for OpenSimulator? This is one of the things that we gathered information on in our own surveys, and it was pretty much validated by Maria Korolov's surveys, which she talked about earlier today or yesterday. The answer is yes. As a result of the research, development gaps were revealed and use cases were defined. Accessibility by design is needed to combat digital marginalization, which occurs when the needs of users with disabilities are not met. Hearing-impaired users may be unable to communicate effectively using the listen-from-avatar-position option. Mobility-impaired users may be unable to communicate effectively when frequently changing the avatar's camera position. Visually impaired users may have immediate need for variations in color and contrast. Cognition-impaired users may face immediate challenges related to hearing, mobility or vision stresses. Onboarding also remains a challenge due to steep learning curves and orientation time. Specific virtual world use cases include classes with users who are students and educators, meetings with users who are attendees and speakers, and immersive interactive environments with users who are new users or trainees. 
Usability improvements are needed to put the power into the hands of the users and to accommodate these needs. During our analysis, non-trivial challenges in development were found. The original Linden Lab source code project developers have priorities that are related to their business focus, and the OpenSimulator stakeholders are not a consideration for those original source developers. Existing third-party viewer project developer priorities are divergent; everybody has a little bit different idea of where they want to go with things. The lack of standards compliance impacts interoperability with other applications like 3D modelers or screen readers, and a host of other devices or software applications that assist users who use virtual worlds in very different ways. The documentation is non-existent or not up to date. I think it's time to stop doing it this way. Admiral Grace Hopper said the most dangerous phrase in the language is "we've always done it this way." So it became clear that open source software forks were needed, after extensive discussion and investigation into existing project road maps and priorities. So we began forming a new development team to meet the immediate needs identified and to consider users as developers. Surveys were conducted and software was analyzed to identify and involve the stakeholders. Through strategic collaboration, IMA and Thales are implementing systems engineering approaches to support future viewer development. The viewer design scope defines requirements and stakeholders. The requirements we have defined so far include being open source and distributable, and reliably running on Windows, Mac and Linux systems; that's what we call platform agnostic. We want to be able to support all active versions or derivatives of OpenSimulator, so we call that OpenSimulator agnostic. We want it to be based on an adaptable modern code base. We'd like it to be customizable by users and by organizations. 
And finally, to be standards compliant where possible. The primary stakeholders that were identified are new users, or those attending classes and meetings; disabled users of all skill levels; grid owners, because they're focused on user needs; creators of interactive environments and objects; educators using virtual worlds; and lastly, collaborators using virtual worlds. So we came up with a solution. After looking at all of the different existing code bases out there, the Alchemy viewer code base was selected for the SceneGate Viewer. Our current development focus is on immediate improvements related to accessibility, onboarding, performance and usability, and this is directly driven by our analysis and answers to our research question. Our future development focus then includes the Echo Voice integration, which we talked about earlier this morning; renderer decoupling, which we'll talk more about in a little bit; standards compliance; improved security, which Natasha will be talking about; better documentation, which is obviously needed; and development of an advanced mode. The advanced mode would be geared towards providing creators with the tools that they feel they need beyond what is already available. Our project roadmap is available online and I'll post a link to that at the end of the presentation. But in a general sense, the viewer efforts for IMA began with research in 2018. This involved previous community and viewer user surveys, survey analyses and community feedback. We have weekly meetings. We've had a lot of involvement from several grids, and extensive testing of multiple viewers by multiple stakeholders. I want to say I appreciate all of the work that everyone has done in volunteering to do all of those tests. The proof of concept contributed by Thales is a simplified Singularity viewer. That proof of concept we deemed Project Educate. We learned a lot from the project and it became a driver for the SceneGate Viewer. 
And then of course, we looked at the available code bases, analyzed them, and decided to go with the Alchemy code base. Systems engineering milestones were then established for the development timeline and the project roadmap. Specific milestones included our alpha development, closed alpha testing, beta development, closed beta testing, and where we are right now, which is open beta testing. We hope to have a release ready at the end of January; it really depends on the results of the open beta testing. Now, Elise Roy is a wonderful technologist. She really made the point that when we design for disability first, we often stumble upon solutions that are better than those we design for the norm. That is really sort of a mantra that we have when we think about design thinking, and we are combining it with the systems engineering approach for all of our projects. So when you think about who is involved, Project SceneGate has various areas of participation. We have hypergrid users from different grids involved in the testing and also in some of the R&D that we are doing to support this project. We have experienced development team members who are actually active in OpenSimulator communities, and I think that is important because if your development team is not active, they don't have a pulse on the user community and an understanding of what the user community's concerns are. And then of course we have industry interns, Natasha being one of them, from Thales, who have been contributing a lot to the project, and we really appreciate that as well. Now, the IMA meeting attendees who attend our meetings every week, some of them are here at the conference. They voted on what the default preference changes would be for the SceneGate Viewer, and of course we had some advice from things that we learned in Project Educate as to what those preference defaults should be. 
We also had external human-computer interaction reviews and usability reviews for the user interface. And finally, we looked at industry standards, and that really drove changes in the default avatar camera view. That was one of the big things that we saw get reviewed by Ramesh in the video that he posted. So the most frequently asked question has been: what's new? So let's see some screenshots. The ability to hear voice equally, or listen from all positions, was added to the SceneGate Viewer. This used to be there and was removed, I believe, back in Firestorm 5.0.1 or right around that time frame, because Linden Lab had removed that capability from their viewer. This is a big accessibility issue, so we made sure that we added it back. It is currently working and we've had a lot of good comments about that. The feature helps users who may have hearing, cognitive or mobility issues communicate by voice immediately; that is what listen-from-all-positions is about. But then we also had to consider the visually impaired users, and visual impairments are not always total blindness. We do have visually impaired users who are using virtual worlds. So when you consider that aspect, you say, okay, you might need more color and contrast options to be available. Both of these aspects of accessibility improvements are really important, not just to disabled users but also to new users in simplified mode, because with new users you don't know what their disabilities are. So it is really important to address both of these, and of course they're not limited to just the simplified mode; there is also an extended mode, so those features are available there too. Now, in simplified mode the default user interface provides users with that custom color and contrast. But in simplified mode only a very limited amount of information is available to the user, to that new user. They can't build, and they don't see very many toolbar buttons in their menu either. 
So when you think about that, you say, okay, we are really addressing the cognitive overload that happens for new users, and in implementing the simplified mode and minimal toolbar buttons we are effectively taking a bulldozer to a steep learning curve, and I will give Selby Evans credit for that phrase. The toolbar button defaults presented to new users are limited to only the ones that they need to learn to use virtual worlds, so that further reduces the onboarding time for new users. When new users are ready to learn more, they do not need to download and learn how to use another viewer. There is an extended mode for experienced users. In this mode they simply go to their preferences, select mode, then select extended and restart the viewer. What you are seeing there on the screen is the default avatar camera settings, something that we mentioned earlier. Any of you who have ever played video games will notice that your avatar camera is not looking down at your avatar at an angle the way the default Second Life viewer does, and the way most of the other third-party viewers do, which creates difficulty in moving around inside virtual structures. So, taking gaming standards as an example, we made changes to the default camera for SceneGate, and the feedback from testers has been pretty positive. When we did that we had to make sure it obviously applies to all modes. Advanced users can change their camera settings temporarily or permanently. We do have some who prefer that other camera view, especially people who cam around rather than walk around. Now, grid manager error checking is another improvement that came from Thales, and we appreciate that because we found that a lot of users complained it was difficult to add new grids out on the hypergrid to their grid manager. It's a sort of simple approach, and I like what they did. 
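The grid-manager feedback just described could be sketched roughly as below. This is a hypothetical illustration, not SceneGate's actual code (the viewer is C++, and the function name here is invented); it only shows the idea of validating a typed grid URI and probing whether the grid is reachable before adding it, using the `/get_grid_info` endpoint that OpenSimulator grids publish on their login URI.

```python
import urllib.parse
import urllib.request

def check_grid_uri(uri: str, timeout: float = 5.0) -> str:
    """Return a user-facing error message, or "" if the grid looks usable.

    Hypothetical sketch of the immediate-feedback idea described above.
    """
    # Tolerate a missing scheme, as users often type just "grid.example.org:8002".
    parsed = urllib.parse.urlparse(uri if "://" in uri else "http://" + uri)
    if not parsed.hostname:
        return "That does not look like a valid grid URI."
    try:
        # OpenSimulator grids expose /get_grid_info on the login URI.
        with urllib.request.urlopen(
            f"{parsed.scheme}://{parsed.netloc}/get_grid_info", timeout=timeout
        ) as resp:
            return "" if resp.status == 200 else "The grid is not responding."
    except OSError:
        # Covers DNS failures, refused connections, timeouts, and HTTP errors.
        return "The grid could not be reached right now."
```

A grid manager would call this when the user clicks "add grid" and show the returned message inline, which is exactly the "don't make me think" style of feedback described next.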
We are providing immediate feedback if somebody makes a typo or the grid they're attempting to add is not available at the time that they add it. So users get that immediate feedback, and that allows them to manage their own grid list. What we've done in our default grid list is keep a very short list of grids that most users in OpenSimulator have accounts on, and if you ever get to where you have this huge list and you just want to start over, you can load the default grid list and it will put that back in there. This really goes back to a usability mantra, "don't make me think," and that's what we have to think about when we consider design thinking: what is the user's perspective, and how can we make it easier for those users to use the software? Of course, this feature is available in both simplified and extended modes. Now, the last thing that we did on usability improvements was really to look at where buttons belong on the screen and how we can make the most usable experience available to new users, and the suggestions that we got were to group the buttons according to their general function. For simplified mode, the left side button group represents two tools associated with the user's account. The right side button group represents two tools associated with places the avatar can go and how to get there. And the center bottom button group represents commonly used buttons to communicate, navigate and find people. Users with specific disability needs or preferences can customize the toolbar button layout if needed; we did not remove that option. Now, with any software project we are always going to have bugs that we need to fix, and we also need to improve performance. Right after we started this project, Windows released an update, which I believe was build 1903, that caused the viewer to crash on exit. A lot of the third-party viewers saw this problem, so obviously we had to fix that immediately. 
That was a showstopper, and fortunately the Alchemy dev team was already on top of it, so we were able to get the fix from them and put it into the SceneGate Viewer code. That was our first bug fix. Thales also had a requirement for dynamic texture loading to be much faster than it was, for their presentation efforts, so they provided a contribution that allowed us to really improve the performance from the user's perspective. Textures do load much faster. That, combined with the graphics quality from the Alchemy code base, was very positively received. Some of the comments included: "it's pretty fast and slick," "loading speed is quick, render quality seems better," and "seems to run okay on my media PC, at least the least powerful one that I use at home." We really appreciate all the testing, comments and feedback, because that helps us make this project even better. Now, regarding future viewer considerations, this research is underway. We talked a little bit about our thoughts on this at the last conference, and I just wanted you to know we are moving forward with that. We look at it from the use case applications, and the initial activity involves splitting the renderer and the data handling parts of the viewer. Development goals include a modern look and feel, as used by gaming and modern UI packages, as the code is developed. If the R&D is successful, gaming, VR and web render engines can be used with the viewer. This may lead to another project of a viewer that may be called WebGate, may be called something else, or may fall under the EOS project within IMA. Because we know from our own surveys and Maria's surveys, web-based access to OpenSimulator is desirable. It also can drive VR by resolving some of the technical issues associated with internet latency and low or variable FPS. So this is something I'm very excited to see happening. 
And Thales has two students who are really working hard on that right now. We also want to provide new users with user interface options to support any of the new features designed as plugins. This is really why we had to take control of a code base: our project and design goals were not aligned with the roadmaps of any of the other viewer devs. However, this is all open source, and after we do the release, the code becomes open in the repositories so that other viewer dev teams can pick up some of the improvements and feature additions that we've made, to help them along their roadmaps. Now, one of the primary considerations involves maintainability, and this includes establishing a clear, structured and documented design. It also will likely involve some refactoring of the existing SceneGate code base. A new package, seen in the diagram as the OpenSim interface, is needed to begin decoupling the renderer. Design goals also include the use of modern libraries and packages to optimize security. Two student interns are actively working on the decoupling, and Natasha has been working on looking at security vulnerabilities. IMA has seven primary project areas, with most of them having some relation or integration with at least one other. Three of these are directly related to Project SceneGate. Project Echo, which we heard about this morning, includes the Echo Voice source code, which will integrate with the SceneGate Viewer to provide seamless configuration of different voice solutions. And, and this is one that's really important to me, the future roadmap item common to both the viewer and the voice project includes text-to-speech and speech-to-text capabilities, to address accessibility and to provide transcription by design. 
So if you're having a class, you're having a meeting, you have some information, and you would like to have live transcription, it really is not just an accessibility benefit, it's also a logistics benefit, because people who cannot attend would still have that transcript available to them. Now, development efforts include compatibility with all third-party viewers. Project DreamGate is a Firestorm viewer fork for OpenSimulator, and the improvements are needed so that those with disabilities who prefer the Firestorm user interface do not experience that same digital marginalization. Project Helios encompasses our OpenSimulator R&D on the server side and in the grid architecture, but it also provides support for all IMA projects on the Metaverse Depot grid. Now, grid work directly related to Project SceneGate, and I'm sure you'll be happy to hear about this, includes bringing the Diva Wifi user pages into accessibility standards compliance, otherwise known as Section 508 compliance. What we found when we did testing was that the pages were only about 60 to 80 percent compliant with the standards, which meant that they did not work with screen readers. So if you have a visually impaired user, they can't create their own account without somebody assisting them, because the screen reader was unable to read the web pages for sign-up. We brought those into compliance. The user pages are done; they are all 100 percent compliant. The admin pages I will get to as soon as I can. For anybody who wants to look at what we did to bring them into compliance, you're able to go to our Diva Wifi pages. You can also go to our grid, right-click for the source and grab it; it's freely available. So we encourage others to update their pages, especially since we know that DreamGrid and all of the 300-some-odd grids out there are using the Diva Wifi interface. 
Now, the Metaverse Depot grid: I had a few questions about that earlier, and I wanted to make sure I included it in this presentation, because it does support the projects we're working on. We basically have a grid architecture that extends the research of the former MOSES team. The MOSES-in-a-Box used a single virtual machine image of a grid to deliver a platform-agnostic, plug-and-play solution. What we did was ask the research question: can we use multiple virtual machines instead of one to decentralize the virtual world on an OpenSimulator grid, and if we did that, what would the benefits be? The research answer was yes, with trade-offs, and OpenSimulator-related development has evolved from this research. Some benefits include segregation, ease of installation, ease of administration, and ease of remote machine connections. Trade-offs included increased virtual CPU and RAM resources, because the virtual machines need some of those resources themselves. But the approach will deliver a platform-agnostic, plug-and-play solution called IMABOX. Work continues with a focus on further easing grid administration, the security of remote machine connections, and the security of the SceneGate Viewer itself. Coming full circle, we have a primary goal for all of our R&D: to develop a stable, secure virtual world platform to expand the community into new markets. We need more users and developers to advance virtual world technologies. We then need to pass the torch to the developers of tomorrow. We also need to meet typical IT department requirements to generate funding, so the platform will be sustainable for all users. We can apply design thinking, considering users as developers, with a systems engineering approach. This means we not only need stable, structured, documented source code, but also to improve security in all use cases. Now, on that note, welcome Natasha, who will discuss her research around security issues. Hello everyone. 
I'm here today to talk about security in the SceneGate Viewer. I will try not to go too far into technical details, and I will quickly explain the meaning of the vocabulary I use so everyone can understand. However, if you have questions, feel free to ask during the Q&A session after the presentation. Security is an important matter. It's a question of trust, and every user, military or civilian, deserves safety for their data. We all care about who can see our information on social media and everywhere on the internet, and we don't want to share everything with everyone. It is the same thing with the SceneGate Viewer: if someone isn't authorized to see our information, they should never be able to see it. Currently the viewer is not secure enough; when trying to connect to a grid, a warning message appears. So the main question is: how will we improve security in the SceneGate Viewer? I will focus on three aspects of the issue, beginning with the login process. Next slide please, Lisa. Credentials are sensitive information. In order to connect to a grid, the user needs to enter a username and password on the login panel, and the viewer will then send them to the server. The username is sent in plain text and the password is sent as an MD5 hash. MD5 is a hashing algorithm, a mathematical operation that can't be reversed, used to hide the value of your password. But MD5 has vulnerabilities and should not be used to protect passwords. It is vulnerable to cryptographic collisions, where two different passwords can have the same hash, and also to brute force attacks, where all possible combinations are tried until the attacker finds the password. Here I used a rainbow table attack to crack MD5. A rainbow table gives the password associated with a certain hash, if it's present in the database. So I opened a network analyzer to capture the data transiting between the server and the viewer, found the authentication packet, found the MD5 hash, and pasted it into a web browser. 
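The rainbow-table lookup just described, and the salted-hash fix proposed on the next slide, can be sketched in a few lines. This is a minimal illustration only: real password storage should use a purpose-built key-derivation function such as PBKDF2, bcrypt or Argon2 rather than a single SHA-256 round, and the tiny "table" here stands in for databases holding billions of precomputed entries.

```python
import hashlib
import os

def md5_digest(password: str) -> str:
    """Unsalted MD5, as the viewer currently sends the password."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# Tiny stand-in for a rainbow table. Because unsalted MD5 always maps
# the same password to the same digest, a single lookup recovers the
# password from a sniffed hash.
rainbow_table = {md5_digest(p): p for p in ("password", "123456", "letmein")}
captured = md5_digest("password")       # what a network sniffer would see
cracked = rainbow_table.get(captured)   # recovers "password"

def salted_sha256(password: str, salt: bytes = b"") -> tuple[bytes, str]:
    """Salted SHA-256: a fresh random salt defeats precomputed tables."""
    salt = salt or os.urandom(16)       # 16 random bytes per password
    return salt, hashlib.sha256(salt + password.encode("utf-8")).hexdigest()

_, digest_a = salted_sha256("password")
_, digest_b = salted_sha256("password")
# Same password, different salts, different digests: no single
# precomputed table can cover every user at once.
```

The server must store each user's salt next to the digest so it can recompute the same hash at login; the salt is not secret, it only has to be unique per password.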
The result appears immediately. So how can we secure credentials? First, to prevent cryptographic collisions and make brute force more difficult for attackers, we can use SHA-256 or SHA-512 hashes instead of MD5; these are recommended hashing algorithms. Then, to prevent rainbow table attacks, we can use a salt. A salt is a random string that is added to the password before hashing it, making the result unpredictable. Finally, and this is a very important point, implementing Transport Layer Security (TLS) is necessary. This will allow the communication between the server and the viewer to go through an encrypted channel, so an attacker can't see the information exchanged; but implementing TLS must be done on the server side. Next slide please. This brings us to the second aspect: the communication between the server and the viewer. Sensitive data can be exchanged between the two, and currently everything is sent in clear text, without encryption. Anyone can open a network analyzer such as Wireshark and capture the traffic to see what data are exchanged. An attacker can access user information, avatar information, video and media content, even some code, and the chat messages. This is very bad. So how can we prevent this information from being exposed? Once again, TLS must be used, but this is not enough: sensitive data must be encrypted before being sent to the server, and once again this requires server-side modification. Next slide please. The last issue is related to third-party libraries. Indeed, using libraries is necessary to save time during development, and it gives developers better tools for their code and therefore better functionality in the software. But libraries also increase the attack surface, due to more lines of code, and some libraries call other libraries, which increases the attack surface again. Moreover, it happens that libraries have known vulnerabilities, with exploits ready for any amateur attacker to use. 
The number of known vulnerabilities rises when we use outdated or unsupported libraries. So what is the current situation and how can we improve it? Next slide please. Currently, 79% of the SceneGate Viewer libraries are outdated and only 15% are up to date. The vulnerabilities found in the libraries currently used are serious enough to represent a major risk. Next slide please. In order to improve security, we first need to update the packages to their latest versions; then there will be fewer known vulnerabilities, but this issue will require constant attention. It is essential to watch for new vulnerabilities and stay up to date. Next slide please. To summarize: the first change we need to make is the implementation of TLS, but this change must be made on the server side. Then we need to be careful about how data are sent. Keep in mind that security is an important part of this change: every day new vulnerabilities are discovered, and we need to stay up to date. Finally, and hopefully, we will have a much more secure viewer than it currently is. Thank you for your attention. Thank you, Natasha. We clearly need to address these security issues. On our project status: the repository wiki structure is built, but we need help to finish it, including tutorials and frequently asked questions. Build instructions for Mac are pending. Anyone can participate in testing and bug reporting, or even contribute bug fixes. We plan to expand the development team as work progresses; if you're interested, contact me. And thank you for joining our session today. We have some time for discussion. Are there any questions for the panel? There's one question that came up a couple of times, and that is: how do we download it? All right, this page I just put up here has the links. Of course, the screen is not clickable here. Wouldn't that be a lovely function for us to get? We would have to do media-on-a-prim to do that. But the address is downloads.infinitemetaverse.org slash index.php slash downloads. 
Or if you just go to the domain itself, that subdomain, look in the main menu in the upper right; it says Project Downloads. Click that and it will get you to that page. Be aware this is not a release; it is an open beta. But most of our testing so far has been positively received, and we still have no bug reports of anything that we didn't already know about. Okay, I have a question from YouTube. That is something that I can't answer; unfortunately, our Mac team people were not able to be here today. There is a Mac version in the works, so I will pass that on. Okay. And Alan Scott asks: is there funding for all of this, how much, and who is funding it? We are partially funded for this project, with grateful thanks to Mr. Selby Evans, also well known in the virtual world. Okay, I can add to that that the Thales side is funded from an R&D budget. Yes, so this is a strategic partnership between IMA and Thales. Gentle Heron asked if the slides are on SlideShare. I have not put them up, but we will make them available. There is a question also from Robert Adams about an OAuth login. Yes, I think currently OAuth is used for logging in with social media, but currently in the SceneGate Viewer it's not really working, because this needs to be a feature of the grid and not of the viewer. So we will look into this further, but right now I don't think it's possible to do that, and I don't think it's the first thing we need to do in terms of security. And I see a question in chat from Alan: why did you not take Firestorm and create a simplified Firestorm? One of the things that we mentioned earlier was that we went through extensive testing. We found that Alchemy performed well and aligned with our roadmap. However, because Firestorm does have a large user base and it has some advanced tools that Alchemy does not have, we decided we would have a parallel project called DreamGate, and that is our Firestorm fork. 
We will be working on that in 2020. I also want to add a remark about a simplified viewer. You can, if you want, build your own simplified version. I can imagine that for some specific uses people may want a different version of the simplified viewer. You can do that by recompiling it and adding some parts in the XML file, so you can even create your own simplified viewer. The modes allow you to make any distinction that you want; you can also have three modes if you want to, or five. This goes back to our requirements. Everything we do is requirements driven, which is pretty much an industry standard. We wanted it to be customizable by organizations. Say, for example, you represent a university and you're running a closed education environment. You have a batch of usernames and you don't want them to have Metaverse Depot or the Thales private grid or OSgrid or whatever; you don't want those on the grid list available to them. This is really easy to simplify for you. There's a question from Starfarer: do you look at security against ransomware? I think that question is for you. Yes. We didn't look at security against ransomware, but I think the problem would be when you download the viewer, and maybe providing a hash alongside the archive when you download the SceneGate Viewer would be a good idea, to protect against tampered downloads. Okay. Any other questions left? Maybe enough time for one more question if there are any others, or perhaps some final thoughts if there aren't any more questions. Maybe I can elaborate a little bit on the future viewer that Lisa mentioned. The idea there is to make a viewer that is very flexible in what it does. 
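On the download-integrity suggestion above: publishing a SHA-256 digest next to each archive lets users verify that what they downloaded matches what the project released. A minimal sketch under that assumption (the function names here are illustrative, not part of the project):

```python
import hashlib
import hmac

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Hash the file in chunks so large installers don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare the local file's digest against the published one."""
    # compare_digest avoids timing side channels; for a public checksum
    # a plain == comparison would also be acceptable.
    return hmac.compare_digest(sha256_of_file(path), published_digest.lower())
```

Note that a checksum only detects accidental or malicious modification of the archive; if the download page itself is compromised, the attacker can replace the digest too, which is why the page should also be served over TLS.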
We started with the renderer, where the goal is to get a constant frame rate so that you can use 3D headsets without getting sick. Another part of it is to have an adaptable UI, so that if you use the viewer for a specific course you can bring the specific commands that you want into the UI of the viewer. And another part is voice that can be connected; well, we talked about that with SceneGate. Another thing is that we want to make a general interface, so that you're not dependent on every change in OpenSim but can more or less limit changes to a certain part of the viewer and make it easier to maintain. Thank you. Seth, I just wanted to give you an opportunity to chime in for the folks that didn't see the presentation this morning. Seth, didn't you say that you have Echo Voice working not only with SceneGate but also with some of the other viewers as well? Yes. I have Echo Voice working currently with Firestorm, Alchemy, SceneGate and Singularity, in 32- and 64-bit versions, currently only on Windows, although I'm actively working on a version that will run under Linux. Not using Wine; it will be native. Right. And why this is important is, yes, we have a goal to have this working with SceneGate and eventually our DreamGate, which is a Firestorm fork, but also to work with the other viewers, and why that's really important is because we want to bring in text-to-speech and speech-to-text to help improve the accessibility of the virtual environment for people that are using all of the viewers in all of the versions of OpenSim. Are there any other questions? I think we're about at the wrap-up point, so I would like to thank you, Lisa, Frank, Natasha and Troy, for a wonderful presentation, very, very informative and necessary work. Thank you very much everybody. Thank you. Thank you. Thank you for your questions. We are in booth 4 at the break, Expo zone 3. 
As a reminder to our audience, you can see what's coming up on the conference schedule at conference.opensimulator.org. Following this session there's a 30-minute break, and the next session will begin at 12:30 p.m. in this keynote region. Also, we encourage you to visit the OSCC19 poster expo in the OSCC Expo 3 region to find accompanying information on presentations, and explore the hypergrid tour resources in the OSCC Expo 2 region, along with sponsor and crowdfunder booths located throughout all of the OSCC Expo regions.