For this session, we're going to be talking about virtual machine viewers as a way to enable OpenSimulator access on mobile devices as well as on low-powered computing devices. Our speakers today are two colleagues of mine, Jason Larino and Chris Muir, and I'm Kay McLennan. Jason is a systems administrator and professor at Tulane University, Chris is also a systems administrator at Tulane University, and many of you know me already: I'm a professor of practice and director of online learning at the Tulane School of Professional Advancement. So let's get started. I'm going to set up the end-user view of the use of virtual machine software, and then I'm going to turn it over to my more able colleagues to give you more of an overview of virtual machine software and a virtualized viewer. As my colleagues know, I'm having a display issue here, so let me start again. I'm not seeing anything up on the board, but I'll imagine that it's changing even though I can't see it.

There are viewer-related limits on student access to OpenSimulator. One limitation is that mobile devices, whether an iPhone, an iPad, or any similar touchscreen device, will not display the OpenSimulator viewer, so the viewer can't be used on them. Another limitation is that Chromebooks cannot be used to download and install an OpenSim viewer. There's also no guarantee that our students will have, or be using, a computer capable of actually running an OpenSim viewer, in terms of available RAM or graphics card. And finally, students may be reluctant to download a third-party viewer because it's unknown to them and isn't available on the institution's website.

So, virtual machine software to the rescue. The VMware Horizon client that our systems administrators used simply has to be downloaded onto your device or laptop, and of course it's available from the iTunes App Store or the Google Play store. Once the Horizon client is on your device or computer, you click on its icon (item number one on the screen now). Next, you add the server (number two). Number three, you enter the name of the server. And number four is just clicking on the server when you're ready to go online. (A short illustrative sketch of these steps appears just below.)

These are two screenshots from my iPhone 6. I only have one of the small iPhones, but you can see the green VMware logo in the upper right-hand corner of the screen, and when I launch it, you can see on the right-hand side that the Singularity and Firestorm viewers are available for me to use on my iPhone. This is a screenshot of me using the Firestorm viewer on my iPhone 6. I can move around, and the resolution is quite good, as you can see from the depth of field. I will confess, and this is the first time I've tried it on my iPhone, that I was not able to log in with Singularity, because its text boxes sit at the bottom of the screen and, for some reason, I just couldn't get the text to go into them. I'm not claiming that it's easy to use on an iPhone with the little text boxes, but it does work, and that's just an absolutely amazing development to me. The next screen I have is of an approximately eight-year-old iPad, maybe a second-generation iPad, with a smashed-up screen in the upper left-hand corner, and it worked with this device too. That was absolutely amazing to me.
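As a concrete companion to those four connection steps, here is a minimal, hypothetical sketch in Python that builds a launch link of the kind the Horizon client understands, assuming VMware's documented vmware-view:// URI scheme; the server name and desktop pool are placeholders, not Tulane's actual values.

```python
# Hypothetical illustration of the four connection steps: pick the server,
# name it, and start a session in a chosen desktop pool. The hostname and
# pool name are placeholders; check VMware's Horizon Client URI docs for
# the options your client version actually supports.
import urllib.parse
import webbrowser

server = "view.example.edu"          # steps 2-3: the server you add by name
desktop_pool = "OpenSim Viewers"     # the published desktop with the viewers

uri = "vmware-view://{}/{}?action=start-session".format(
    server, urllib.parse.quote(desktop_pool)
)
print(uri)            # vmware-view://view.example.edu/OpenSim%20Viewers?action=start-session
webbrowser.open(uri)  # step 4: asks the OS to open the URI with the registered handler
```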
So, in summary, before I turn it over to Jason: a major advance in accessibility is possible through the use of virtualized viewers. However, I will note to this group of capable developers and users that we still need single sign-on from learning management systems like Canvas. I know I sound like a broken record, but there we have it. I'm detaching my slides now, and I am going to start Jason's slides. Jason, I'm going to turn it over to you; please tell me when to advance the slides.

Thank you, Kay. I appreciate it. Okay. First of all, of course, let me say thanks for allowing us to come and speak at the event. I'm going to start with the scenario and how we came to deploy these viewers on the VDI infrastructure. I'll do a little bit of the intro for that, then I'll turn it over to Chris for some of the deeper technical details, since he was more involved in the setup of the blade center itself, and then I'll finish up with some of the concerns we found, et cetera, in case others are trying to set up a similar environment.

Let me start with a little of the backstory. I know Kay went through it, but my colleague Chris and I work in different capacities: I work in the technology department with Chris, and I'm also a professor in our applied computing program. I had never really been involved in any of this OpenSimulator work, but we have a new CTO who came in, and he's wonderful, and he's committed to doing everything possible for us to use technology to improve the student experience. When we were looking to do that, he put us in touch with Kay, and she told us about the things she was doing with the OpenSimulator software. We also had a Horizon blade center, which is what we're using to virtualize the machines that have the viewers on them, and he's allowing us to use those resources to enhance the student experience, as long as it's in a teaching capacity for the students, et cetera.

So the first scenario is that students were using the OpenSimulator viewers on campus, but like Kay said, many times it was on their own devices. If they were off campus, their devices may not have been good enough; their machines may not have met the hardware requirements to run it. Like Kay also mentioned, maybe they were nervous about downloading the right thing; you don't want viruses or anything, so potentially they weren't sure which one to get. From an administrator or college standpoint, we also didn't have the ability to centrally manage things when people were bringing their own devices, especially if they were off campus in other states, other countries, et cetera, so it was harder for us to manage. We had a bunch of different scenarios we were trying to solve to better improve the student experience, and really, this bring-your-own-device approach wasn't the way to go. You can go ahead and advance, Kay.

So we talked with Kay, and what we came to propose was to host Windows virtual machines with the viewers on servers in our data center. That way we could control the hardware and make sure it was sufficient to meet the needs of these students. What we decided to do was use virtual desktop infrastructure, or VDI, to host these machines. With VDI, we also found we get the extra benefit of adding a security layer through the native VDI session.
Those screenshots Kay was showing you of her using the viewers on mobile devices: the connection the Horizon client makes is similar to a VPN, in that it's actually an encrypted session. So there is an extra layer of protection we're offering to students using this. The best part is that, using this technology, we can control the student experience but also provide more options for our users and students who are outside of our campus network. Go ahead and advance. All right, Chris, I'll turn it over to you. You can go into some of the details about the infrastructure itself, and then I'll jump back in at the end.

Okay, yes. Thank you, Jason. Good morning, everyone. I'm just going to go over the infrastructure architecture and how everything is built. As was mentioned earlier, we're using VMware Horizon, the VDI solution offered by VMware. It's pretty widely used, it's fairly robust, and it has some pretty neat capabilities, which I'm going to get into in a bit. All of the VDI machines are running on vSAN ready nodes, which is essentially a hyperconverged architecture. Each node has four 10-gigabit links, all connected to a central switch, which allows for pretty fast communication within the virtual environment. The VMs are also running on solid-state disk storage, which provides good performance for OS operations and so on. But really, the secret sauce for this architecture is the NVIDIA Tesla M10 GPUs in each Horizon server node, which provide the enhanced graphics processing that was a necessity for running the OpenSim viewers. So yes, you can advance that slide.

This is an overview of the infrastructure. There are two screenshots here, and yes, that is the NVIDIA GRID protocol, Frank. These are screenshots of our vCenter as well as Horizon. It's just a handful of virtual machines right now, and they're all using NVIDIA GRID, which is a native capability in Horizon and VMware. Do you want to take it to the next slide, please? Okay. And here's the Horizon Administrator console. This facilitates the connection between the end users and the virtual machines running on the infrastructure. Do you want to advance to the next slide?

So that's a high-level overview of the environment itself. Here is essentially how we performed our testing. We used the Firestorm and Singularity viewers; those seemed to be highly recommended and popular. We installed them on Windows 10 virtual machines, using the Long-Term Servicing Branch (LTSB) build of Windows 10, which seemed to be a better fit for a virtual environment, not just for this but for other VDI use cases. We actually first tried it without the GPUs, and what we saw was that it was pretty much unusable; even on the local network the video was choppy, and it just wasn't feasible to use. So we were able to get some of the GPUs, and we tested with various profiles. Each profile allocates a different amount of video memory to the virtual machine. Each GPU card essentially has 32 gigs of video memory, and the profiles allow you to carve out that memory however you want. We tried it with one-, two-, and even four-gig profiles, and even with the one-gig profile it ran very smoothly and was a very good experience. I'm actually in the virtual environment right now; it's how I'm accessing the grid.
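To make that profile "carving" concrete, here is a small back-of-the-envelope sketch. It assumes only the figures quoted in the talk, 32 GB of video memory per card and the 1, 2, and 4 GB profiles; it is illustrative arithmetic, not a statement of the exact vGPU profiles Tulane deployed.

```python
# Rough capacity arithmetic for vGPU profiles on one card: divide the
# card's total frame buffer by the per-VM profile size to see how many
# desktop sessions it could back. Figures are from the talk; real vGPU
# profile catalogs and scheduler limits will constrain this further.
BOARD_FRAMEBUFFER_GB = 32        # video memory per card, as quoted
PROFILE_SIZES_GB = [1, 2, 4]     # profile sizes tried during testing

for size in PROFILE_SIZES_GB:
    sessions = BOARD_FRAMEBUFFER_GB // size
    print(f"{size} GB profile -> up to {sessions} VDI sessions per card")
# 1 GB profile -> up to 32 VDI sessions per card
# 2 GB profile -> up to 16 VDI sessions per card
# 4 GB profile -> up to 8 VDI sessions per card
```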
If you could advance, please? Yeah. And like I said, it's Windows 10 LTSB. Oh, yes, this environment is only available to Tulane students. I'm sorry, I see you downloaded Horizon; yes. For the hardware specs on the VMs, we essentially settled on four virtual CPUs, eight gigs of RAM, and 80 gigs of hard drive space, so it's a fairly standard virtual desktop, maybe a little higher performing than we're used to. But the Horizon vSAN ready nodes have a ton of resources available; each node has, I think, 768 gigs of RAM, so multiply that by four and you have a very large number: about three terabytes of RAM. No, I don't have a technical how-to, but we could put something together quickly if you're interested. A lot of what I used to build the architecture is really just a standard Horizon build. The complexity came with setting up the GPUs themselves; it's very vendor-specific, and we actually had to have some assistance from Dell to install the GPUs in the servers, as well as to install the drivers in VMware ESXi. But I can dig out some of those documents for you if you're interested in building an environment like this. With that said, I'm going to take it back over to Jason, and you can wrap it up. Thank you.

Thanks, Chris. So yeah, Chris went through some of the technical details of how we set up the blades in the blade center and the VDI machines. If anyone is going to try this, we also wanted to include some of the concerns we found along the way.

The first problem we ran into is that normally in a VMware environment, machines are highly available; they float around and can be vMotioned on and off different hosts, so you can do things like cluster updates or move resources around if a host gets overloaded. Typically, all of that is handled automatically. But what we found is that using these GPUs works like a PCI bus passthrough, and when you do that, the machines get pinned so they can no longer be live-migrated. So what we've had to do is turn off all our high availability, our DRS, et cetera, and if we need to move these machines around, we have to power them off and move them manually.

Another thing we had to address, because of this, is that we don't want all these user machines sitting on one static node. We have GPU cards installed in each node, and we've sprinkled the virtual machines across different nodes, two or three per host. That gives us some additional protection: in case of a loss of service on one host, we may only lose a fourth of the machines.

The next concern is that typically, in a virtual environment, you're able to overallocate RAM and storage. But with these GPUs, you have to fully reserve all the RAM you're going to use. So if you're going to purchase one of these setups, make sure you have enough physical RAM without counting on overallocation, because for these machines you have to reserve it all upfront.

I guess the last one is that, typically in a VMware environment, as a server administrator you can right-click on a machine and get into the console if you need to check on it or assist a user who may be on there. But these GPU-backed desktops use the same Blast protocol for their display that the console would use, so once we turned this on, our only real option is RDP, just a remote desktop session, into the machines. So if, say, a new version of Firestorm comes out, we have to RDP in to access these machines and update them, or there is some built-in assistance in Horizon that we can use.
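As a rough illustration of the sizing and placement concerns above, here is a small sketch using only the numbers quoted in the talk (768 GB of RAM per node, 8 GB fully reserved per VM, and a deliberate two-to-three-VMs-per-host spread). The host and VM names are placeholders, and the round-robin logic is illustrative, not Tulane's actual tooling.

```python
# Capacity and blast-radius sketch: with vGPU/passthrough there is no RAM
# overcommit, so every VM's 8 GB is reserved up front, and the desktops are
# spread across hosts so losing one node only takes out a fraction of them.
HOSTS = ["node1", "node2", "node3", "node4"]   # four vSAN ready nodes
HOST_RAM_GB = 768                              # per node, as quoted in the talk
VM_RAM_GB = 8                                  # fully reserved, no overallocation

per_node_ram_limit = HOST_RAM_GB // VM_RAM_GB
print(f"RAM alone would allow {per_node_ram_limit} fully reserved VMs per node")

# Round-robin today's handful of desktops across hosts, two or three each.
vms = [f"opensim-vdi-{i:02d}" for i in range(1, 11)]
placement = {host: [] for host in HOSTS}
for index, vm in enumerate(vms):
    placement[HOSTS[index % len(HOSTS)]].append(vm)

for host, assigned in placement.items():
    print(f"{host}: {assigned}")  # losing one host costs at most len(assigned) VMs
```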
So you can go ahead and advance, Kay. For one of our last topics: I started off with how the project came to be, and then went through the concerns we ran into, so let me also talk a little bit about the testing we did before we turned it over to Kay.

First, we had the infrastructure functioning; Chris and I got that set up, mostly Chris, since he's the more technical one of us. Then we started our on-campus testing, because of course your network is probably going to be better on campus than off campus. We started with basic testing by our technology staff, Chris and I, and we tested on some Windows desktops, some laptops, some older laptops, and everything we tried it on was a great success. Another thing I wanted to mention here: we're starting to move our lab environments, and we're doing some testing around campus with these Wyse terminals. They're a cheaper way to run desktops, but we were worried because the resources on these things are a lot less; it's almost like a mobile device, or even a Raspberry Pi type of device. It's a small terminal running something called ThinOS. They have the Horizon client on them, so we used those to connect to these machines, and we had great success.

We then started off-campus testing, because of course one of Kay's requirements is that she has students who may be out of state, off campus, or around the world, so we wanted to test that. We tested from home and from different locations, all successes. We then turned it over to Kay and some of our other staff, and she was really the one who tried out the worlds and the viewers on her iPhone and iPads. We also tried it on some Android devices and found these were great successes too. That's kind of what led us here, because she really thought it was amazing that it would run on an old iPad, and that's a really great thing. Across our university we do have older equipment that we recycle, so hey, maybe this is a way we can repurpose and use some of these devices. All right, Kay, you can go on one more slide here. One more slide, yep.

So the last item we have here, just to finish up, is access control. I did see something in the chat where somebody asked Chris, well, how do I access it? What's the address? Our last concern was, of course, how do we control access? We don't have the resources to allow every student in our whole university to access this at the same time. Now, if that came to be, great, maybe we would be able to get more resources, more Horizon nodes, more servers. But we did need a way to control access. Using the Horizon software, we were able to use Active Directory security groups to control access to these machines. Once we had Active Directory security groups assigned to our desktop pool of machines, we could then use our in-house identity management software to delegate control of those security groups to our instructors.
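For readers who want to picture the plumbing behind that delegated control, here is a minimal sketch, assuming the Python ldap3 library, of the kind of group-membership update that the instructor-facing web interface described next would perform. The hostnames, DNs, and group name are placeholders; Tulane's actual identity-management tooling is not shown in the talk.

```python
# Minimal sketch: add a student's AD account to the security group that is
# entitled to the OpenSim desktop pool in Horizon. Assumes the ldap3 library
# and placeholder names; real deployments would wrap this behind a web UI
# with proper credential handling and error checking.
from ldap3 import Server, Connection, NTLM

server = Server("dc.example.edu", use_ssl=True)
conn = Connection(
    server,
    user="EXAMPLE\\svc-horizon",   # service account with delegated group rights
    password="********",
    authentication=NTLM,
    auto_bind=True,
)

student_dn = "CN=Jane Student,OU=Students,DC=example,DC=edu"
group_dn = "CN=OpenSim-VDI-Users,OU=Groups,DC=example,DC=edu"

# ldap3's Microsoft extension updates the group's member attribute for us.
ok = conn.extend.microsoft.add_members_to_groups([student_dn], [group_dn])
print("added" if ok else conn.result)
conn.unbind()
```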
So if Kay has a class she wants to teach, she has a web interface where she can add the students who need access and then remove them when the semester's over or when they're no longer using the machines. This allows us to control access. It also means that for outside users, like the ones we have in here, even if we did post the address, they would not be able to get into the pool and use the machines unless an instructor or professor gives them the access they need. That's great, because we still keep centralized control and management of the machines on the technical, server side, but we can delegate everything to the professors who know how the software is being used and who needs access to it. And I guess that is it.

There are a few questions, Jason and Chris. Sure, yeah, we'd be happy to answer questions. Selby asks, do you have an estimate of the cost per user? I don't; I wasn't involved in any of the budgeting when this was purchased. Maybe we can work something up and try to figure it out, because we are using this in other environments; we support Horizon and other lab environments around our campus. I'll try to find the original budget for what the GPUs especially cost, because those were something we purchased more specifically for this use and then installed in the blade center itself. But unless you had a very large user base using this, the cost would probably be prohibitive for just a handful of students. For our GPUs, we've found other uses, like AutoCAD and other things we're doing, so we were able to justify the cost. And like I said, our new CTO is wonderful about doing everything he can to support the student experience, so he was happy to do this. But I'll see if we can put something together and at least get an estimate, so you know how bad it is.

And just to follow up on that, Jason: the GPUs do add a cost per user. Like you said, we don't have the exact numbers, but we can't carve out GPUs for every virtual machine, so it's an extra feature that increases cost. Just to add that in there.

Lisa asks, it seems like this is the same thing Frame or Bright Canopy is doing. Is that correct? If not, how is it different? I have to admit, I'm not familiar with Frame or Bright Canopy. I'm not either, so I'm not sure. Shelby put something in the chat: the new price is 10 hours for $17 a month. You know, I'm just taking a guess here, but I'll bet that's what Bright Canopy is doing: virtualizing the viewers. When Chris and I started looking at this, we looked around and didn't see too many people doing anything like it. I did see some people trying to use some type of Citrix product for this, as well as one or two using Horizon, so I would guess it's either Citrix or Horizon they're doing this with, and I'm sure it's a similar setup. Let's see, I'm looking... Joyce adds that Bright Canopy is a service marketed to others, as opposed to this being an in-house scenario. Yes, you're right. And Krista, whom Jason and Chris haven't met, is a computer science professor at the University of California, Irvine; she's also one of the core developers of OpenSim as well as the creator of the Hypergrid. Krista adds, the technology is wonderful, it just seems a bit expensive at the moment. Yeah, thank you. Very nice to meet you.
Yeah, it is expensive. If we didn't have other uses for this as well, it would probably have been tough to justify, but I think it's well worth it. From what I've seen of what Kay is doing with it, I would actually love, since I teach in applied computing, to figure out some way to incorporate this into my own classes, because when I was a student I would have loved to do something like this in class.

And as you said, Jason and Chris, it may be a way to repurpose old equipment. I'm constantly over at our surplus depot looking for furniture and equipment, and it may save money in the long run. I wonder about centralizing the different computer labs too, as opposed to having so many desktop units; there might be some cost savings there, but I'm just guessing here.

Yeah, there is definitely a cost saving on the end-user devices. And like Jason said, we use Horizon in other parts of the university; the business school actually has a virtual lab that students access from anywhere. Also, I took a quick peek at the Bright Canopy site, and just looking at the FAQs, it sounds like they're using a very similar architecture to ours. Reading one of them: how does this work? "We run a viewer on a cloud machine with a hefty graphics card." So it does sound like it runs on a virtual machine with some sort of GPU. I can't attest to what the cost difference would be, but we do have the benefit of controlling the entire environment within our own infrastructure, as opposed to a paid subscription service.

You know, the idea of 10 hours for $17 is daunting to me, because it would be so stressful to have to get what I need done in the virtual world in short order. It's kind of like the old days when long-distance calls cost money. We've got some more comments in the chat about Frame virtualizing applications like CAD and CAM, and anything that needs rendering in the cloud. And Ramesh, who is the originator of some build technology that helps build up both regions and the content in the regions, adds that Bright Canopy is a bit expensive, but it's great to be able to test our touch UI against an OpenSim environment on a mobile platform. And let's see, Joyce adds that they're utilizing their already-in-place infrastructure. Yes, we are. Any more last questions? I've got my last scripted remarks here: thank you, Jason and Chris, for a terrific presentation.