I want to welcome you to the 9:30 AM to 10 AM session of the 2016 OpenSimulator Community Conference. This session is entitled "Google Maps and Panoramas in Immersive VR Environments," and it looks at how mixing 3D content with real-world, geo-located visual cues enhances the knowledge a person acquires during virtual training. Before I introduce our speaker, a reminder for our in-world and web audiences: you can view the full conference schedule at conference.opensimulator.org, and tweet your questions or comments to @opensimcc with the hashtag #OSCC16. Up next, our speaker is Ramesh Ramul, lead designer of Resmila, a rapid virtual-world-building company for diverse training applications. He has more than a decade of experience in human-computer interfaces and in VR application research and development. Ramesh is also the CEO and CTO of Deep Semaphore LLC, a simulations and e-learning company. Ramesh, I want to welcome you to OSCC16, and I pass our virtual mic to you.

Thank you. Very happy to be here. I hope everyone had a good lunch; at least I had one just now. I want to share our latest experiments with Google Maps and Google Street View and explain the motivation behind them. Let's have a look; let me go to the next slide. There's something I have been thinking about in terms of applications and content generation in virtual environments. What people typically do is create skyboxes. I'm sure you're very familiar with skyboxes: cubes with textures on the inside faces that give you a sense of a panorama. Everybody knows about that. What we are doing, just to coin a term, is maybe "augmented virtual reality": trying to augment virtual-world content with direct real-world imagery data. Basically, instead of finding the textures and then manually adding them, the system fetches the imagery for you. At this point in my talk, I have to introduce the Resmila system.
I will go through it a little quickly so that you get a sense of how Google Street View actually works with the Resmila system. It's an application that provides three main functionalities: it allows you to create large-scale virtual scenes; it allows you to control what happens in that environment, all from a control-room kind of setting; and you can monitor what's happening in the virtual environment on the board. That's the third leg of the whole application. Basically, you have the board and people sit around it. If you go to my booth later on, there is a landmark that will send you out to a region that has the system set up, if anyone wants to experiment and try things out. John and I spent a lot of time building a system that allows a subject-matter expert to create a training environment just by selecting objects from a library and placing them on the board; the large-scale environment is generated just outside the control room, basically. These objects can have different types. You have static objects, such as trees and buildings, and you have scripted objects that can also be part of the library. This whole range of objects is accessible through a visual database: you browse the database inside the control room, pick the objects you need, and place them on the board. The one defining feature of our application is that it allows you to generate content collaboratively with all the students inside the classroom. As I mentioned, there are different types of objects in the library; you can pick a static object from that set in the library and place it anywhere on the board. I should also mention that we have another object type, called a Google Street Map object. It's not an ordinary cube object; it's different.
It allows you to place a Google map on the board, and that map is magnified and placed over the region; that's where the whole exercise is going to take place. On that region you can place your buildings and so on, so we have a hybrid setting with objects on real-world imagery. You can use any location on the planet: you type it into a dialog box, and the system automatically finds the right images to display, ranging from Google Maps imagery to Street Views, and textures the cube, so that when you place the cube, and even move it anywhere on the map, the pictures get updated. One limitation of our approach is that, because we are using MOAP (media on a prim) to display those Google Street View images, we are limited by the number of MOAP surfaces that can be displayed at any given time. In our experiments, at least with machines that are able to render them, you can get away with maybe eight or nine concurrent MOAP surfaces displaying these Street Views or the map. So that's one of the limitations. Let me see if I missed anything here. All right, let's move on to the next slide. As I said, when you have the Google Street View cube at a given location where you have placed it, people can move together into that space, interact with each other, collaborate, and talk about real-world situations. The way we designed the whole system, there are many different aspects to it. The cube also provides a lot of functionality in itself, because it is what we call a malleable link set. If you look at the cones placed in this picture: we can also rez different types of objects, and people can play around with objects within the cube itself, so we can have collaborative activities beyond just talking about situations within that shared space.
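As a rough illustration of the texture-lookup step described above, here is how a cube's position on the board could be translated into latitude/longitude and then into a Google Street View Static API request, one request per cube face. This is a hypothetical sketch, not the Resmila code; the corner coordinates, the 256 m region size, and the API key are placeholder assumptions.

```python
# Sketch (assumed, not Resmila source): linearly interpolate a cube's
# position on the region to lat/lng, then build one Street View Static
# API request per 90-degree heading to texture the four cube walls.

def region_to_latlng(x, y, sw=(38.8950, -77.0387), ne=(38.8990, -77.0340),
                     region_size=256.0):
    """Map a region position in metres to lat/lng between two assumed corners."""
    lat = sw[0] + (y / region_size) * (ne[0] - sw[0])
    lng = sw[1] + (x / region_size) * (ne[1] - sw[1])
    return lat, lng

def streetview_url(lat, lng, heading, key="YOUR_API_KEY", fov=90, size="640x640"):
    """One 90-degree slice of the panorama; four headings cover the walls."""
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size={size}&location={lat:.6f},{lng:.6f}"
            f"&heading={heading}&fov={fov}&key={key}")

# Cube dropped at the centre of the board/region:
lat, lng = region_to_latlng(128.0, 128.0)
urls = [streetview_url(lat, lng, h) for h in (0, 90, 180, 270)]
```

Re-running this lookup whenever the cube moves is what keeps the displayed pictures in sync with its position on the map.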
All right, it's 12:46 here, so I'm going to move to the next slide. The other way you can use the Google Street View cubes is to not have them be geolocated. In the previous example, we showed how we could move the cube on the map, which is displayed over a whole region, and the cube gets updated; you can use any number of cubes, by the way, and the textures that get displayed are determined by their location on the map. But you can also have just a room, a cube in a fixed position, with a search function: you type in the coordinates of the place you want to render, and that place just appears around you. There is a demo in the sandbox area. You can go there; don't forget to sign in. There is a little computer on a platform, and you have to sign in first before you can start clicking on the bigger icon on screen and launch the blue dialog box. There are, I think, about 200 interesting places that you can look at and check out. That covers the situation where you don't have the Resmila system but just want to use the Street View map object on its own, independently, which can be useful. The other thing we have added relates to the malleable link sets I mentioned; in a few hours I have another talk about this, but in this cube, just as an example, we have the concept of interactive hotspots. When you click on a hotspot, you can do a number of things. One is that clicking a hotspot on an object causes a URL to display on a specific screen, giving additional or more detailed information about whatever you clicked. The other is that you can also use hotspots on a map, as in this example: we are in the White House, and I have placed a map on the floor.
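The eight-or-nine concurrent MOAP surface limit mentioned earlier implies that something has to decide which cubes get live media when many are placed. One plausible approach, sketched here as an assumption rather than the system's actual code, is a small least-recently-used pool that deactivates the oldest surface when a new one is requested:

```python
# Hypothetical MOAP budget manager: keep at most `limit` media surfaces
# active at once, evicting the least-recently-used surface when the
# budget is exceeded. Prim IDs and URLs below are invented examples.
from collections import OrderedDict

class MoapBudget:
    def __init__(self, limit=8):
        self.limit = limit
        self.active = OrderedDict()  # prim_id -> media URL, oldest first

    def show(self, prim_id, url):
        """Activate a surface; return the prim_id evicted to stay in budget, if any."""
        if prim_id in self.active:
            self.active.move_to_end(prim_id)  # refresh recency
            self.active[prim_id] = url
            return None
        evicted = None
        if len(self.active) >= self.limit:
            evicted, _ = self.active.popitem(last=False)  # drop oldest
        self.active[prim_id] = url
        return evicted

pool = MoapBudget(limit=2)
pool.show("cube-1", "https://example.com/a")
pool.show("cube-2", "https://example.com/b")
dropped = pool.show("cube-3", "https://example.com/c")  # cube-1 is evicted
```

The caller would clear the media texture on the evicted prim, keeping the renderer within whatever limit the viewers can handle.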
What you do is click on the floor, on those red dots, which are hotspots, and that loads the appropriate Street View maps, or rather panoramas in this case, and displays them all around. I've run through these slides very quickly because I'm always trying to make sure that I'm on time. What I can talk about a little now is that there are challenges when you use Street View maps or panoramas from Google: the format changes very frequently, and it becomes difficult to keep up with what they are doing on their end. One problem I found recently concerns the panoramas people created before 2015. You can go anywhere on the planet, create your own 360-degree panorama, and upload it to Google Maps, but the format has since changed, and that's why the surround views I am including in the example you will find in the sandbox are from 2015. Those are the typical problems you run into; we can't do otherwise. There is an ecosystem, and you have to stay in tune with what's happening. I think if more people use what we are offering, we can, as a group, participate a little on the Google forums and maybe get them to streamline and stabilize their services and APIs, so that people like us have better opportunities for developing applications in a more stable way. If you have any questions, please do ask. I ran through very quickly because, as I said in the beginning, I want to thank everybody on my team. The kind of programming we are doing is really pushing LSL way over its limits, especially the malleable link sets, and John Hopkins has done a really great job in bypassing a lot of the LSL limitations.
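Because panorama formats and availability keep changing on Google's side, one defensive step when a hotspot is clicked is to check the Street View metadata endpoint (which reports whether a panorama exists at a location) before trying to display it. The sketch below is illustrative, not the system's actual code; the hotspot IDs, coordinates, and API key are invented.

```python
# Hypothetical hotspot table: red dots on the floor map -> panorama locations.
HOTSPOTS = {
    "dot-east-room": (38.8977, -77.0365),
    "dot-north-lawn": (38.8995, -77.0365),
}

def metadata_url(lat, lng, key="YOUR_API_KEY"):
    """Street View metadata endpoint; its JSON reply reports availability."""
    return ("https://maps.googleapis.com/maps/api/streetview/metadata"
            f"?location={lat:.6f},{lng:.6f}&key={key}")

def panorama_available(metadata):
    """`metadata` is the decoded JSON reply; status 'OK' means a panorama exists."""
    return metadata.get("status") == "OK"

def hotspot_click(hotspot_id):
    """Return the metadata URL to query for the clicked hotspot."""
    lat, lng = HOTSPOTS[hotspot_id]
    return metadata_url(lat, lng)
```

Only when the metadata check succeeds would the script go on to load the panorama onto the surrounding MOAP surfaces, which avoids blank walls when a panorama has been removed or its format has changed.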
I want to really thank everybody on my team for what we are able to do. To answer some of the questions: my interest is to make everything we are building available as soon as possible. The only reason we don't do that immediately comes down to a number of factors. One is that we want to make sure things work properly, in a very stable way, before we push them out. The second is that I want to make sure it's affordable and that we can do it in a sustainable way. Yeah, thanks.

This was a fantastic presentation. I think everybody here wants to run over and try this out. It looks amazing, and I think the interaction you described, from the panoramas to the Street Views, is going to be a lot of fun to try out. I can't wait to go to the White House and look in the cinema room. All right. I want to remind the audience that you can see what's coming up on the conference schedule at conference.opensimulator.org. Following this session, the next session will begin at 10 AM, entitled "The Future of the Metaverse: Blueprints for the Evolution of Virtual Worlds." We're going to take a quick break to switch out our guests and speakers, and we will start in about six minutes. Thank you very much. All right. Thank you.