Hello everyone, and welcome to the 7:30 to 8:30 AM session of the 2019 Community Conference. In this session we are happy to introduce a presentation called Echo Voice for OpenSimulator. Our speakers today are Lisa Laxton and Frank Ruloff, and they will also be joined by Natasha Vru and Troy Schultz, known in-world as Seth Nygaard. I will briefly introduce our panel speakers; please check out the website at conference.opensimulator.org for the full speaker bios, details of the sessions, and the full schedule of events. Lisa Laxton, or Shelen Erez in-world, is the R&D visionary and CEO of the OpenSimulator-community-focused Infinite Metaverse Alliance, or IMA. She is also president of Laxton Consulting LLC and has experience providing virtual world technology solutions for education, research, business, and defense clients; for more on her work, please see infinitemetaverse.com. Frank Ruloff is a senior systems engineer at Thales Netherlands with expertise in training and simulation. He leads the research and innovation activities related to OpenSimulator technology within the Thales global company, using multiple OpenSimulator grids focused on user needs. Natasha Vru is an engineering student at CPE Lyon in France, specialized in network architecture and cybersecurity. She works as an intern at Thales Netherlands and has been charged with reviewing SceneGate viewer security issues. Troy Schultz, or Seth Nygaard in-world, is the CTO and developer: a multi-discipline developer with 30-plus years of experience in real-time systems for industrial, automotive, and other critical environments. He has worked in the roles of senior hardware designer, senior systems administrator, engineering manager, and chief technology officer at various companies, and was the owner-operator of The Refuge grid.
Combining a keen interest in virtual worlds with his professional experience, he has been an active builder, tester, and developer on the OpenSimulator platform. Today's presentation is on the development of the new open source SceneGate viewer, focused on improvements in usability, accessibility, and interoperability. The session is being live-streamed and recorded, so if you have any questions or comments during the session you may send your tweets to @OpenSimCC with the hashtag #OSCC19. Welcome everyone, and let's begin the session.

Thank you very much, Sun, and it's very good to be here. This presentation will focus on one aspect, not the SceneGate viewer as a whole, but the Voice over IP (VoIP) related to OpenSim. As we all know, we have a number of ways to have audio in our OpenSim environment, which is great: we have had lots of support from Vivox in the past, and we use FreeSWITCH, but we also have some other needs for a voice application. Those needs come from the fact that, for privacy reasons, sometimes for business reasons, and also for security reasons, we cannot rely on public VoIP services that are not encrypted or not standards-based. In general we also want a good alternative in case some of these VoIP providers fall away. So to start, we looked at what kind of application we would like to build and build upon. First of all, we want spatial audio, just as we have it now with the Vivox VoIP. We want it to be open source, so not proprietary, and it must integrate with OpenSim. So what kind of use cases do we see for this VoIP application? If you look at Vivox, we see a third-party, unencrypted voice stream that could be intercepted, from a security perspective.
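As a rough illustration of the spatial audio requirement, a distance-based gain roll-off can be sketched like this. The linear curve and the 10-meter full-volume radius are assumptions for illustration, not the actual Vivox or Mumble attenuation algorithm; the 80-meter figure matches the default voice range mentioned later in the session.

```python
def spatial_gain(distance_m: float,
                 full_volume_m: float = 10.0,
                 max_range_m: float = 80.0) -> float:
    """Illustrative linear roll-off of voice volume with distance.

    Inside full_volume_m the speaker is heard at full volume; beyond
    max_range_m they are silent; in between the gain fades linearly.
    """
    if distance_m <= full_volume_m:
        return 1.0
    if distance_m >= max_range_m:
        return 0.0
    # Linear fade between full volume and silence
    return (max_range_m - distance_m) / (max_range_m - full_volume_m)
```

A spatial voice client would evaluate something like this per speaker, from the listener's (avatar or camera) position, before mixing the audio streams.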
There are also other reasons why voice traffic should not go through a public VoIP provider, depending on the information shared over it. For instance, in training, any session that contains private information from the participants should not be shared publicly. For military training that is a general requirement, because the military does not like VoIP servers it does not have under its control. But we also have business examples: meetings between companies that should be kept internal to those companies, and meetings that contain private information. Some use cases: safety meetings where the health of persons is discussed, where that information is private and restricted; any meetings that use sensitive information about people, say at banks or courts; meetings that contain classified information, in government or the military, for example MOSES and Thales. Our company is a defense contractor, and we are very restrictive about what kind of information we are able to share. There is also another need: for in-world counseling and education there are laws that prescribe this kind of privacy, in the US FERPA and HIPAA, and there are comparable laws in Europe as well. Another perspective is cost: sometimes it is more cost effective to run your own VoIP server than to use VoIP services from other providers. With Echo Voice, the application we want to build, we deliver an integrated, encrypted audio stream solution for the OpenSim SceneGate viewer, under the control of the grid and region owner, so you can build a grid that is totally controlled by yourself. When we started this, we worked together with Vcomm in the past, and we managed to build up our own Thales local area network with a VoIP server that we provided ourselves.
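To make the "encrypted and under your own control" idea concrete: Mumble's control channel runs over TLS, so a self-hosted grid can trust only its own certificate authority instead of a third-party provider. A minimal sketch of such a client-side context in Python follows; the CA file name is an assumption, and this is an outline of the idea, not the actual Echo Voice code.

```python
import ssl

def make_control_context() -> ssl.SSLContext:
    """TLS context for a voice control channel (sketch).

    A self-hosted deployment would load the grid owner's own CA here
    (e.g. a hypothetical "grid-ca.pem") rather than a public provider's.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS
    ctx.verify_mode = ssl.CERT_REQUIRED            # server must present a valid cert
    ctx.check_hostname = True
    # ctx.load_verify_locations("grid-ca.pem")     # grid owner's own CA (assumed path)
    return ctx
```

The point is that the region owner, not an external service, decides which certificates are trusted for the voice traffic.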
That was built on Whisper and Murmur, and here you see a schematic overview of how this application worked with OpenSim and the viewer. It was mostly based on the interfaces that already exist for the Vivox solution: the Vivox SLVoice.exe was simply replaced by an equivalent for Whisper and Murmur. So what we have now, what we created in our local area network for the Thales global company, is an open source VoIP server based on Mumble and Murmur. We have spatial audio, and we have better noise cancellation than with the Vivox VoIP. We have parcel audio, which helps to separate audio between parts of the region, or even parts of the grid. The streams are encrypted, and we host those VoIP servers ourselves. What we don't have yet in the solution is IM audio, that is, a voice discussion between two avatars, or group audio. And we have so far only made this application for Windows-supported viewers; we still have to look at the Apple and Linux viewers, but that comes later. The name we chose for the application is based on the Greek word for sound: echo. One of the things we would like to do is create a roadmap for Echo Voice, so that the community can participate in its functionality and its features. This is very important to us: we want input from the user community to build upon, and we will create a roadmap in which we put all the things we want to do with the Echo Voice application.
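One way the replace-the-executable step could be scripted, following the rename-and-drop-in approach Seth describes later in the session, is sketched below. The backup file name and the install layout are assumptions for illustration; only SLVoice.exe itself comes from the presentation.

```python
import shutil
from pathlib import Path

def install_bridge(viewer_dir: str, bridge_exe: str) -> None:
    """Drop a voice bridge in as SLVoice.exe (sketch of the approach).

    The viewer always launches SLVoice.exe, so the original Vivox client
    is kept under a new (assumed) name and the bridge takes its place,
    which is what lets the bridge dispatch to either voice back end.
    """
    d = Path(viewer_dir)
    original = d / "SLVoice.exe"
    backup = d / "SLVoice_vivox.exe"   # assumed backup name
    if original.exists() and not backup.exists():
        original.rename(backup)        # keep the real Vivox client around
    shutil.copy2(bridge_exe, original) # bridge now answers as SLVoice.exe
```

Because the viewer itself is untouched, the same trick works per viewer install, which matches the later description of one central bridge serving Firestorm, Alchemy, Singularity, and SceneGate.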
We have already put some items on that roadmap. One of them is package and build modernization: the current application uses very old libraries and components, some of which cannot even be obtained anymore. Of course we want to support the Apple and Linux viewers. We want to improve security so that it complies with the different laws, especially in Europe but also in the US; that work has already been started by Seth, and we will talk about it after my talk. We want to provide IM voice and group voice. We want to investigate real-time text-to-speech and speech-to-text together with the viewer. Another item on the roadmap is to look at the way we integrate into viewers, in our case the SceneGate viewer, and to be able to switch between VoIP providers. The background is that not every grid will support our solution, so with the same viewer, whatever grid or region people go to, they can switch between different VoIP providers. We also have on the roadmap to allow VoIP communication between non-avatars and avatars, so that people outside the virtual world can still communicate by voice inside the virtual world. This could be beneficial, for instance, if you are holding meetings and some of the people who have to attend are not able to come online for whatever reason. That does bring some additional problems, of course, because how do you represent those non-avatars in the environment where you are holding the meeting, and so on. Now a little overview of the development we have done until now. The first step is that we worked together with Vcomm in Switzerland, which originally put some of the source code for Mumble and
Whisper online a long time ago, and we hired them to work with us to try to get a solution for our grids on the Thales network. That was the first step: they supplied us with an initial version, which we worked on together with Vcomm. In the second step we created a working solution on our own intranet, with the possibility to build different applications. When that was finished, IMA, in this case Seth, took the work over from us, testing it and deploying it for the internet, and in that way providing it to a larger public as well. We are going to talk in a little bit about the work Seth is doing now and how far we are with releasing the first versions of the Echo Voice application. We are very interested in meeting people or developers who would like to join our effort and our development team, so if anybody in the audience would like to participate, please talk to Lisa, to me, or to Seth, and we can see what we can do. As I said, for the current status of development we want to have an internet-based solution as soon as possible; we are already looking at step one, package and build modernization, and the topics in development. I will now turn the floor over to Seth, who is going to tell you more about the work he has been doing to get these points running. Seth, to you.

Thank you very much. Hello everyone, let's go to my first slide. This shows what we currently have with the Mumble and Murmur voice solution as our Echo Voice. What we have done is create an intermediary application between the viewer and the Mumble client, which we call the Echo Voice Bridge. It is basically an abstraction layer that helps us avoid some of the issues the Vcomm solution has, where the executable has to be swapped out each time. We take SLVoice.exe, rename it, and then our voice bridge drops in as SLVoice.exe. That minimizes what the user has to do each
time they want to switch voice solutions. The voice bridge can then work with both the Mumble client and the renamed Vivox SLVoice client, so you can easily switch back and forth between the two voice solutions. On the server side, I have not yet made any changes to the Vcomm add-in for the voice and the Murmur server, so the server side has not been updated; I have been concentrating on the client side, to get that working and make it a little more convenient for anyone who wants to test it, and that is actually working quite well now. Let's go to our next slide.

Some of the advantages we have with Mumble: it is a well-proven solution, it is low latency, and it has good noise reduction and proven codecs, resulting in good, high-quality voice. I actually find it to be better quality than Vivox; some of that is user-to-user preference as well, but so far I have been quite impressed with the Mumble voice solution. It is fully encrypted end to end, both the control channels and the voice, which improves privacy. It has built-in spatial volume control already. We will be supporting the Windows, Linux, and Mac operating systems; I currently have it functional on Windows, and the Echo Voice Bridge is functional under Linux. I am in the process of setting up a build so I can build the old modified Mumble client for Linux, and I will be updating those Mumble clients to something newer, which will make that a lot easier. Mumble is also an open source project with a large user base, so we are not reinventing the wheel or starting from scratch: we are able to build on an already proven solution. It is well documented, and there is still an active development team, despite the fact that Mumble went a few years without a major update. Let's do our next slide.

So what have we done with the Echo Voice Bridge? I emulate the SLVoice.exe command line, and I make use of more of the command-line parameters than what the
modified Mumble client does. I have made some improvements to the modified Mumble client, and what I have done allows one single install of the Echo Voice Bridge that can easily be added to multiple viewer installs on the same system. I currently have it working with Firestorm, Alchemy, and Singularity, as well as SceneGate, and I am able to set Firestorm up to use Vivox and Alchemy to use Mumble, and they run independently of each other even though there is one central install. I have tested it with both 32- and 64-bit viewers. This was one of the reasons I did the Echo Voice Bridge: it is written in modern C++, kept as bare-bones as possible to minimize DLL conflicts. The various viewers use different DLLs, and if you drop the entire Mumble solution in, as has been done with the Vcomm solution, you run into issues where you may break the viewer, or it does not behave correctly because of conflicts. Let's do our next slide, please.

Here we have our overview of what happens on the server side. The add-in plugs into the region instance, and then you have a Murmur server. The Murmur server is a separate executable; it can reside on the same server or a different one, and there is a ZeroC Ice RPC mechanism used between the region module and the Murmur server to add users, handle the access control lists, and actually move users between channels automatically in the background. The Murmur server has its own database for users, so once a user has been added they become persistent, and the same ZeroC Ice mechanism can be used for monitoring tools or future tools, all at the same time. Let's go to our next slide.

Initially, on the client side, we have the viewer and we have our Echo Voice Bridge, which essentially runs as a transparent voice proxy at this time, taking the SLVoice XML commands and merely proxying them back and forth to either the Echo Voice agent or the Vivox agent. This allows an easy path to get us
working with the current Vcomm solution, with only some minor new modifications to handle features I wanted from the proxy for Mumble. Let's do our next slide, please.

In the future, though, I am going to remove most of the XML handling that was added into the Mumble agent. I am trying to keep the Mumble agent as generic and as close to the stock Mumble client as possible. We will make use of the Mumble link plugin, which already exists for multiple games; that is where the spatial voice is handled. I will replace the XML with a control API solution, likely not using XML; I have not fully decided what I am going to do there, but I have some ideas. Once that is done, the voice bridge can look at the URLs being used for voice and automatically decide whether it needs to use the Vivox agent or the Mumble agent. From a user perspective on the hypergrid, if you were in a region running Vivox and you teleported to another grid running Mumble, you would not have to restart your viewer or anything: it would simply connect to the Mumble agent and your voice would work on the new grid, then return to Vivox when you go to another region. This involves quite a bit of refactoring, but I think it gives us a more solid solution, and one that is quite a bit easier to troubleshoot and develop, despite the increased complexity. The other advantage the bridge gives us is that once we have the functionality of handling all the Second Life viewer commands in the bridge, we can look at extending this to other voice solutions if needed. It actually is working right now, so I am quite happy with what started as a proof of concept; I have it working on two test grids, behaving quite well. Let's do our next slide.

So, where are we right now? Through the work that Frank's group has done in Thales with their existing solution, and the work I have done with the proof of concept, we're well
on our way to each of these steps, but now we want to involve the community. We really need to determine what the community needs from a voice solution. We know what we have, and as a community I think we have a good idea of what we want, so we would like to hear from you which features you want to see, and don't want to see, in a voice solution; then we can roll those in where practical. We want to review everything that exists and create some additional baselines for comparison: is one voice solution actually better than another, and what features do we need for our design and development? We really need to determine what improvements are required, and get more developers involved, and especially testers. The only way to test these solutions is to use them, and that really only works when you have multiple people in a region. The proof of concept is currently working; it will continue to be developed and expanded, and we will do some unit testing, so that as we make future changes we continue to test for regression issues. We will maintain some type of roadmap as we add features, for what can be done short term and long term. Most importantly, we will document the installation, the usage, and the build process. Currently, trying to build the old Mumble client is not easy: it is ten years old now, so several of the dependencies are severely deprecated; a couple of them even require Microsoft Visual Studio 2008 to build, which is no longer downloadable from Microsoft. We will be providing downloads to the community for testing, both for the viewer and the server side. And I think the next slide is the end of my part of the presentation. Thank you, everybody.

Yes, thank you. I think we are now going to the panel discussion, so if there are any questions or things you would like to discuss about the VoIP solution we are providing, we would like to hear them.

Hi, this is
Lisa. Thank you, Frank and Seth, both: a wonderful presentation, a lot of information. I'm sure the developers are just drooling right now at the possibility of having a replacement self-hosted Echo solution. I wanted to note that we are getting some feedback in chat, if you take a look at that. Your Teacher raised the ability for people who cannot handle a heavy graphics load to still have access to the voice channel. Gentle Heron mentioned something that was also mentioned in the presentation: text-to-speech, or in this case speech-to-text; voice-to-text and text-to-voice are the same thing. We definitely have that on the roadmap, pretty much as an integration point between the SceneGate viewer development and Echo Voice. Seth, do you want to address Marcus's question in chat? He is curious whether WebRTC signaling and server have been considered.

It's on our radar; it is something we have looked at. With WebRTC it will be a lot more work to try to do anything spatial. We could possibly add it, although the current work is certainly concentrating on Mumble, building on what already exists. Once we have the voice bridge fully fleshed out, I am looking at WebRTC, and I am also looking at Matrix as a possibility, and a VoIP SIP bridge; some of these already exist for a couple of the solutions, so plugging that in should not be that difficult, so that somebody could literally phone into a grid and participate in a discussion. Hopefully that answers your question.

One of the things I would like to add is that, of course, we don't want to lose things like spatial audio, because it's important in a virtual environment like this.

Correct, I have spatial audio working right now.

There is one point there, a difference in terms of accessibility. We have already implemented a feature in the SceneGate viewer that allows you to listen from all positions, and what this does is effectively focus the spatial sound within the 40-meter
range of the roll-off for the voice. As some of the devs out there can confirm, the default voice range is 80 meters, but with the roll-off that occurs naturally, the voice is really not heard loudly enough once you get 40 meters away. The issue with spatial audio in a meeting environment, for people who are hearing impaired, cognitively impaired, mobility impaired, or who are new users and not really familiar with it, is addressed by letting people listen from all positions equally, or what Firestorm used to call hearing voice equally from everyone. This is already available in the SceneGate viewer as an option; however, when you are engaging in an immersive environment where you do want that spatial sound, you have the option to change it to avatar position or camera position. So we are considering the broad range of use cases in the integrated development we are doing. I am already seeing questions about where people can get instructions for testing and downloads; we will make those available soon.

Yes, and on the question of whether the Mumble server will be a Docker image: that is actually quite easy. I have a preliminary test copy of a Docker image already working. I am doing a lot of my testing in a VMware guest, and I also have a test grid set up on the internet that is hypergrid accessible. But yes, a Docker image is definitely on the roadmap.

Frank, did you have anything else you wanted to mention?

Well, I would like to mention, and maybe some of the people in the audience have already noticed, that I come from a professional software development environment, and one of the things I hope we can do here is create a group of people, a group of developers, including the associated procedures and ways of working, to produce software that not only works very well and complies with what the community would like to have, but is also very well documented. One of the things I find most important is that for people who are working
in their free time to build software: if the threshold to hand your source over to somebody else gets too high, then all that work eventually goes to zero, because nobody is able to take over the torch, as we say, and continue with it. So one of the things I am very much in favor of, also with this application, is to not only get a good product with good, tested software, but also to get it very well documented, so that we can invite people to work together with us and they can quickly understand how to change the software and how to add something to it. That will benefit us all in the long run.

I think that's a really good point, because if we have documentation that is clear and structured, it is easier to troubleshoot things and easier to go back. When a developer leaves the team, that knowledge is not lost; we need to make sure we capture the knowledge while the work is being done. But I also believe it helps improve participation, because you have people who come from outside the virtual world arena but maybe have some great developer skills, and they can get involved. We want to bring in new users and expand the community user base in general, so that documentation also ties in with that effort. It also allows us to look at things like standards compliance: is the application compliant with industry standards, to make it easier to integrate, and to write APIs to integrate with other applications outside of the virtual world platform? This will help us really expand the use of OpenSimulator into the industry sector, and that industry sector then supports education, government, public school systems, and different medical advocacy programs that need to see how they can use virtual worlds to help them in their own environments. We should not
just limit ourselves to creativity and social events; there is a lot more we can do with OpenSimulator, but we have to design the software so that we can expand out into those other communities. Moodle is mentioned here by Sun, and Kay earlier mentioned Canvas; those are good examples of two other applications that we need some way to hook into, and you do that through APIs and through standards compliance.

I can say something there. Years ago, when we started to look at OpenSim, we already integrated OpenSim with Sloodle, the Moodle integration for Second Life, and that worked. I was also able to integrate the presentation tool they have with OpenSim. So we did some experimentation with that in the past; you can of course integrate Moodle with OpenSim, as was done then. The other thing I maybe want to mention, Lisa, is the fact that, since we say we would like to get features from the community, we also have in our documentation and our procedures a way for outside people to put their thoughts about what they want on paper, so that we can start a process internally to evaluate it, discuss it with other people, and eventually get to the points that will come onto the roadmap.

Right, and what Frank is referring to is basically this: when someone has a new feature request or enhancement, we will provide them with a white paper template, and they will write a little white paper to tell us what they would like, what need it is addressing, why they think it is necessary, where they think the architecture may be impacted, and where they believe it might fit in the roadmap. With all this information, we basically want you to sell us your idea, so that we can seriously consider it and have it integrated in our internal change management process. Now, this is a little bit different from what most open source programs out there
do. What we're doing is bringing some industry software approaches into an open source project, and our intent there is to avoid spaghetti code and to keep the project maintainable and sustainable, with adequate documentation, through generations of developers, while also supplying the features the community wants.

Yes, and speaking of users as developers: we know there are two schools of thought. One school of thought is for developers to make something and hope that the users like it; the other is to listen to the users, find out what they want, and develop accordingly to help meet those needs. That is the perspective of users as developers, and that is the perspective we are taking. DeliFo made a comment about Canvas, which was just bought for two billion dollars by an equity company; that is something to look into, to see what the impact has been, and whether the source will still have adequate API standards so we can hook into it from the viewer if needed. That's right, Selby, you are right on target: design thinking with a systems engineering approach is exactly what IMA and Thales agreed to when we formed a strategic partnership; we have a lot of the same common goals, even if the markets are different. Thank you for that link, Barbara. Was there anything else you wanted to mention, Frank?

No, that's it, I think. I loved this presentation, and I love the way you can do virtual conferences like this, because it allows a lot of people to be here, to listen, and to get the information, which otherwise would not be possible.

Yes, I agree. I'm glad everybody had a chance to make it in this morning; I know it was a long night last night. Congratulations to the organizers on the first day of the conference, and I'm looking forward to the rest of the conference.
We'll be talking more about the SceneGate viewer today at 11 o'clock Pacific time, and I hope you all will join us for that session.

Cool. Also, I didn't know if you had noticed in the discussion a little earlier: during the presentation you talked about bringing in people who wouldn't be physically present in-world. It might help to look at some other platforms and things they have done well; it might give you ideas for approaches. There was one from Sun Microsystems called Open Wonderland, and they had some interesting, innovative ideas. One was a spider phone, so that somebody could basically call in to be in the meeting without having to be there in avatar form. The other thing that is interesting with that is that if a person does not want to be in the virtual world, because they are not really accepting of the technology, they can still take part in the discussion without having to be embodied there. So that might also be a baby step: they do it through a spider phone, and then later they might come in avatar form as they see the usefulness of doing it within that environment.
Yes, I think that is a great idea. In the presentation I did last year, or the year before, I also mentioned that in the environment I am in, which is an industrial environment, there is a sort of psychological threshold to using these kinds of applications in a virtual world. You see that when I give people a presentation on what they could do with it, they always nod and say, oh yes, this is great and we can use it, but they never come back to it. I have also noticed that the only people who return are the people who discover the benefits when they actually use it. They don't discover the benefits when they look at a presentation; a demonstration is already better, but they really see the benefit when they start to use it. That makes it sometimes difficult to sell, because in industry you have to sell something to get money to develop it. A lot of people look at it and say it's great, but especially with older people, the first thing they say is, ah, that's a game, it's not serious. Younger people are more used to it because they play games, but still it is very difficult to get people over this threshold and let them actually see the benefits of using these kinds of applications.

Well, one thing that I have found from my background to be helpful is that we should not think of people in two ways, as either someone who uses virtual worlds or someone who doesn't. There is a tendency to develop that thought process, but you have to take baby steps with some people: some have to crawl before they walk before they run, and some run right out of the gate. It is just a matter of understanding that we might need different ramps that bring people in, and they might not all use
things the same way, and that's okay. It is just like in education: somebody is on a computer, someone is on an iPad or an Android tablet, somebody is on a phone, all different levels of usage, and that's okay; we don't have to have the same solution for every person for it to work. But that also means, for things like the development you are talking about, working out how you make things that let all those users communicate together as if they are in the same place.

Right, and that is where we have the approach of the non-avatar being a user who may be engaged in a voice conversation but not necessarily represented by an avatar, and one of the things we have to do is think about how we are going to represent that voice presence to the users who are in the virtual world. So that is another aspect as well.

Yeah, and I think in that case, with the spider phone, what they did is they thought about how they would do this in a regular meeting in real life, and then they found ways to work that into the virtual world.

Yes. Another one they had in Open Wonderland that was kind of innovative was, I think, the Cone of Silence: if you remember the TV series Get Smart, this thing would come down and you would be in a silent zone, so if somebody was just outside your classroom they couldn't hear what was going on, or see what was going on on the screens, or even see the text chat inside that room. It is just an interesting perspective, to be able to have that kind of a security system; sometimes that is useful for educators as well.

And before we run out of time, there are some things coming into the chat I wanted to make sure we capture. Star mentioned that it would be good to put voice into text so it can be translated to another language in real time; yes, that is part of our roadmap. And Your Teacher talked about lag when broadcasting voice from Discord, so
these are all things we want to look at. And, as expected, we are getting questions about when we plan to release the code; I'll have to have Seth address that one. And Kayaker asks about large presentations like this: I believe, if I'm correct, Seth, that about 100 users is easy to handle with this technology, is that right?

Yes, 100 users would be quite easy, and it would not be that difficult to do a bridge from the Murmur server directly into Skype or YouTube for streaming, like we are doing here.

Yeah, and one of the things we hope is to have a solution readily available for the AvaCon folks to use at the next conference; you may be able to eliminate the Skype bridge.

Okay, that sounds interesting. Well, we want to wrap things up now. We want to thank Lisa, Frank, Natasha, and Seth for this great panel discussion; it was a terrific presentation. As a reminder to our audience, you can see what's coming up on the conference schedule at conference.opensimulator.org. Following this session, the next session will begin at 8:30 in this Keynote region, and it's entitled Dockerizing OpenSimulator. We also encourage you to visit the OSCC19 poster expo in the OSCC Expo 3 region to find accompanying information on presentations, and to explore the hypergrid tour resources, which are available in the OSCC Expo 2 region, along with the sponsor booths located throughout all of the OSCC Expo regions. Thank you again to our speakers and to the audience. Thank you all, thank you all very much.