This is the virtual show hosted by the OpenInfra Foundation, where we talk all things open infrastructure: open source demos (we have one today), industry conversations, and the latest updates from the global open infrastructure community. Welcome, we're live here on Thursdays at 1400 UTC, streaming to YouTube, LinkedIn and Facebook, so you can watch on whichever platform you choose. We are live today with some guests, so if you have questions or comments, please drop them in the chat anytime during the show and we'll answer as many as we can at the end, after the initial presentation. Like I mentioned, we are excited to be here. We have several guests, and today we're gonna be talking about the importance of having a virtual desktop infrastructure with OpenStack, with some great guests from all around the world. So to get things kicked off, I'd like to introduce Radek, who will be starting the presentation. Thank you, Alison. Okay, am I being heard now? Okay. So thank you for the invitation to this meeting. I have the honor to present the background for the topic that we are going to discuss and present to you today. So we billed this episode with the VDI acronym, and we all know that in IT we really love acronyms. VDI stands for Virtual Desktop Infrastructure, and this is the thing we are going to be delving into today. Since this is infrastructure, Virtual Desktop Infrastructure is definitely an infrastructure to provide virtual desktops, as the name suggests. You might have heard another acronym also used in this context with clouds, which is DaaS, Desktop as a Service. It's more or less the same, depending on the context of course, but in general it boils down to having a desktop provided to the end user, right? And the usual way to go with VDI, so the usual selling point for VDIs, is to use the data center as a hosting platform for the desktops.
That's how it's usually done, but in general, by the definition of VDI, it could be based on any other kind of infrastructure that presents the virtual desktops. And as the slide says, the access usually takes place remotely, and this is often done via thin clients. It could be done with fat clients, but as we will soon see on another slide, it's often the case that we use thin clients precisely because we want the virtual desktop infrastructure. So going forward, what is a virtual desktop? To really know what we are talking about: this is still a desktop. We are not targeting a machine per se, we are targeting a desktop, and the means of obtaining the desktop is irrelevant to the end user, right? The point to make here is that, as with physical desktops, when you have a monitor connected to some PC, or you have your notebook, or a tablet or a smartphone nowadays, you have some graphical user interface and you have your apps right there. With a virtual desktop we have precisely the same, but the actual rendering of the desktop is not happening on the very same machine that you are using to access it. It's somewhere remote, it doesn't exist physically in front of you, and the machine that is doing it is very likely headless anyway, but still the desktop is offered to the end user, okay? Next slide, please. And to reiterate this point, to not be confused by the terminology, since we use "virtual" in many cases: a virtual desktop does not entail a virtual machine. This equation doesn't hold, and it doesn't hold in either direction.
A virtual desktop can be powered by a virtual machine, but in practice it can be powered by anything else: a physical machine that is connected to a monitor, or one that is not and sits in the data center or maybe in another room, whatever is capable of serving a desktop, maybe even a container. That is something we might want to discuss at the end as well. And a virtual machine may offer a virtual desktop, but of course, since this is OpenInfra and this is also about OpenStack, we know that VMs are much more often accessed via non-desktop protocols: the actual VMs are servers offering some services, or they are used as terminals via protocols like SSH, okay? So remember that when we are talking about a virtual desktop, we mean precisely the service of giving the end user a graphical user interface, like on an actual physical workstation, that is nonetheless somewhere remote. So another good question, after introducing what the virtual desktop and VDI are, is why do we actually want to have VDI in the first place, right? The reason most often cited, especially in the white papers by providers of VDI solutions, is the reduction of the total cost of ownership, and this includes the maintenance costs. It usually boils down to the maintenance costs, because you still have to buy the equipment; if you have graphics-intensive workstations in mind, you're still going to have to buy those powerful cards. But then again, you can better plan your environment, you can save on the maintenance, and you can save by reusing the infrastructure that you have. Another point is the improvement of manageability, since we have this all centralized again. Nowadays it's like going back to the future, right?
Before I was born, actually, this was the case: the mainframes were accessed in one single place, so it was all centralized, with desktop infrastructures as well, if you could call them that back in the day. But anyway, this is what's happening again. To have control of the resources, you would like to centralize, to obtain a single pane of glass over all the resources that you are offering to your end users, and desktop infrastructure is one of such resources, such meta-resources, let's say, that operators would like to offer. And as I said previously, also related to the cost, but also to the sanity of the inventory of resources, is this flexible way of utilizing the existing infrastructure, like the GPUs that are hidden somewhere, the unused RAM, the unused CPU power, to offer the desktop infrastructure. What I'd like to emphasize here is that nowadays this is even more important, probably because of the cost of graphics cards. For some people to be able to access graphics-intensive applications, the graphics cards that are powerful enough are priced so high that it makes much more sense to centralize such usage and be able to share it and split it among many people. And finally, if we are talking about centralizing, we also usually have those centralized data silos where the data for computation and visualization lives. So another reason to use VDI would be to avoid transferring those large data sets, because we know transfers are costly, and well, we would like to avoid them at all costs. The next slide, yeah. Now, after saying why VDI at all, then: why another VDI solution, right? This question arises because if you just go to a search engine and type in "VDI solution", you are going to get a rather long list of providers of VDI solutions, most of which are going to be commercial ones.
But the problem that we are facing, especially in academia, both public and private, is that the commercial VDI solutions are highly priced, really overpriced for our needs, for our budgets. And this is also the case for any hobbyists and similar people that are interested in having VDI but nonetheless cannot afford the commercial VDI solutions that are out there. Still, not stopping at the cost, this is not the only reason why we are taking up the subject. It's also because the existing solutions aren't perfect. Even if you pay a lot for them, you might still miss some features that you would think are obvious to have, but are still not offered in general, like GPU-accelerated virtual desktops. You are going to get some virtualization in some of those solutions, but the software exposed to you will only do software emulation, or maybe some kind of GPU acceleration that is only designed to power some special tools, not being general-purpose enough: not being able to run your Windows machine with all the different software you would expect to be runnable there, right? And then again, if we move from VDI solutions to VDI solutions that are cloud-capable, one would love to have the same software, the same single pane of glass, for the scheduling and the management of the actual resources behind the scenes, and have it as dynamic and as heterogeneity-friendly as one could imagine, right? To really utilize all the hardware that is present at the current point in time. And finally, if you have already invested in a cloud computing infrastructure, for example with OpenStack, you might think about workflows that involve both the compute-intensive part as well as the final visualization.
This is also related to the data transfer and the localization of the data: keeping the data local avoids the costs and keeps the data from leaving the boundaries of the organization, basically. So next slide, please. And I would like to invite Manuel from the University of Freiburg now. Manuel, are you there with us? Yeah, I'm here, hello. So I will continue this presentation, and now we can ask what makes VDI different from a typical OpenStack offering. First of all, OpenStack is a platform for cloud computing, so its main task is to provide mostly headless compute resources in the form of virtual machines; you can also use containers, but in this presentation we will focus on virtual machines to provide virtual desktops. OpenStack also provides optional console access for management or configuration purposes. This is especially important when, for example, your cloud resource isn't responding: if your SSH connection into a machine doesn't work anymore, you can access the console and repair something. VDI, however, is a platform for remote visualization, so it's a different aspect that you focus on, and that's the reason why you also require other capabilities. For example, you require advanced visualization features, such as good GPU acceleration for smooth rendering, and you need more optimized remote access to such virtual desktops, especially if you are not in the same network, for example if you're working from home and you want to access your virtual desktop in a data center. Now we can summarize that we have to deal with two different infrastructure models if we combine both worlds, so to say: cloud computing, which provides Infrastructure as a Service, and remote visualization, which we also call Desktop as a Service. And if we come to the next slide, we can summarize a lot of VDI use cases in the academic world into these three major classes, I would say.
Before we take a much closer look at all the OpenStack technical aspects. First of all, on the left side, we have the traditional single-user workplace. I think that's the most famous VDI use case, because here you want to replace a physical desktop machine with a virtual desktop, for example to provide employees a virtual desktop on which they can do their office work. Such virtual desktops should be persistent and stateful, because if every employee in the company has their own desktop, they can personalize the configuration of the guest operating system, for example, and these configurations are preserved. Desktop transport to the remote thin client can take place via a desktop transport protocol such as SPICE or VNC; they are the most commonly used in the OpenStack world, but we will come to that later. We also have another, let's say, group of VDI use cases: the use cases where you need workplaces, but only for temporary sessions. You want a kind of disposable desktop that you can create and then throw away, hence the icon on the slide. You can use such temporary desktop workplaces, for example, for courses, training and e-assessments. They have a disposable nature and can also be stateless virtual machines where you do not save any configurations or anything like that, except the work data, but this data can be saved on, for example, network storage systems that are separate from the virtual desktop. And we also have a third prominent use case, especially in the academic world, as researchers often ask for powerful virtual desktop workplaces because they want to model something in 3D, or they want to do media editing or other visualizations of data that they have computed in other OpenStack instances in the traditional cloud computing environment, and here you need more dedicated graphics rendering.
That's where GPU acceleration is mandatory, and the desktop transport also becomes a bit more complicated, because you cannot use simple protocols: if, for example, you rotate a 3D model in your application and in each frame almost every pixel changes its color, that would be a large overhead. Here it would be beneficial to use a video streaming approach for these VDI use cases. And if we summarize all those use cases, next slide, we can see that on the one hand we have on-demand desktop workplaces. Their characteristic is that you should allocate and provision resources on demand; for example, if an employee wants a new personalized virtual desktop for their work at a company, you should allocate and provision resources on demand. Therefore you can reuse OpenStack's on-demand resource scheduling and placement, which also has the benefit that the cloud operator can adapt all those existing services to the underlying infrastructure. On the other hand, you have what we call timed, and maybe also long-term, desktop workplaces, where you have specific time slots for which you should reserve and allocate resources, for example GPUs and whatever else you need for your virtual desktop workplace, so that you can start those workplaces in time when the event starts. Here you could get the time schedule from, for example, other resource reservation systems; in the academic world this would be, for example, a campus management system where you manage all the lectures, events and e-assessments. Then, for example, if you add an e-assessment in two weeks, the resources are automatically reserved for the time slot in two weeks when the e-assessment takes place. Now we have discussed our use cases and their characteristics.
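The schedule-driven reservation idea just described can be sketched in a few lines of Python. This is purely illustrative: the function name and the payload shape are made up for this sketch and are not tied to any real reservation service's API.

```python
from datetime import datetime, timedelta

def lease_for_event(event_start: datetime, duration: timedelta,
                    setup_lead: timedelta = timedelta(minutes=30),
                    vcpus: int = 4, vgpus: int = 1) -> dict:
    """Translate a scheduled event (e.g. an e-assessment from a campus
    management system) into a time-boxed resource reservation request."""
    return {
        # provision the desktops slightly before the event starts
        "start": event_start - setup_lead,
        "end": event_start + duration,
        "resources": {"VCPU": vcpus, "VGPU": vgpus},
    }

# a two-hour e-assessment starting at 09:00; desktops come up at 08:30
lease = lease_for_event(datetime(2022, 6, 1, 9, 0), timedelta(hours=2))
```

The point of the sketch is only the mapping: one calendar entry in an external system becomes one time-bounded resource lease in the cloud.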
And before we clarify the question of what's already there and what's missing in OpenStack right now to develop such a VDI solution, we want to take a closer look at our goals. Our major goal is to create a VDI service for OpenStack that turns an existing cloud infrastructure into a VDI, so that you can use both infrastructures in parallel. The service itself should be separate, which means that we want to modify OpenStack components as little as possible: we only want to insert the important low-level features that are missing right now into OpenStack, and all VDI-related features should be implemented in the separate service. We also want to focus on the mediated GPU pass-through approach for the desktop, so that we can provide smooth rendering and everyone working with graphics-intensive applications in such an environment is happy. Therefore it's also necessary to develop hardware-accelerated and especially low-latency video encoding for efficient desktop transport, especially in the case of a powerful virtual desktop environment where you want the same power as a workstation under your desk. We also want to realize the desktop transport with the SPICE protocol, a protocol for desktop transport similar to Microsoft's RDP, but it's open source, and you can easily adapt it and also extend it with other channels, so you can transmit additional data and the user can smoothly interact with their virtual desktop from a remote place. For example, this could be multi-monitor support or USB redirection: if you have a local USB device, you can redirect it from your thin client to the virtual desktop in the data center and access it there. The same applies to folder sharing, where you can share a folder with your thin client to exchange temporary files from your thin client to the desktop or the other way around.
And last but not least, these goals should be collected in a project which should be free and open source, so that everyone can collaborate on the ideas that we present today; everyone should feel free to collaborate with us here. So now we know the frame around our proposed VDI solution, and we can take a closer look at OpenStack and which features are already there. First of all, OpenStack provides two really great services that we can reuse for our needs. The timed, long-term resource reservation can be done with OpenStack's Blazar service. This service is not part of the standard OpenStack installation but can be added, and there you can reserve a resource for a specific time lease. Currently this is a little bit limited, but we will come to that later. The Cyborg service of OpenStack is also really useful for our purposes: especially if you do not have a data center with homogeneous hardware, you have different servers and compute nodes equipped with different special or dedicated hardware and hardware accelerators, and it would be nice to discover and manage those resources. Cyborg is a great service to discover such special devices within a data center and to keep track of them, so that you obtain, let's say, an inventory list of every resource that you have and can provide. Currently, there is basic SPICE and VNC support available to access cloud computing instances in the form of virtual machines remotely. Only a basic feature set is available there, because it's intended as a fallback solution: if, for example, other communication services like SSH do not work anymore, you can still recover your machine using the basic console support. That's not enough, and that's one limitation OpenStack has; it's not enough for a great VDI solution to provide a smooth interaction for each user.
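The device-inventory role described for Cyborg can be pictured with a toy registry like the following. This is a hand-rolled illustration of the idea, not Cyborg's actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    node: str      # compute node hosting the device
    address: str   # e.g. a PCI address
    model: str     # e.g. a GPU model name
    in_use: bool = False

class Inventory:
    """Toy registry in the spirit of Cyborg: track special devices
    across heterogeneous compute nodes and report which are free."""

    def __init__(self):
        self._devices = []

    def register(self, acc: Accelerator) -> None:
        self._devices.append(acc)

    def free(self, model=None):
        """List unclaimed devices, optionally filtered by model."""
        return [d for d in self._devices
                if not d.in_use and (model is None or d.model == model)]

inv = Inventory()
inv.register(Accelerator("node-1", "0000:3b:00.0", "gpu-a"))
inv.register(Accelerator("node-2", "0000:af:00.0", "gpu-b", in_use=True))
```

A scheduler can then ask the inventory for a free device of a given model instead of assuming every compute node looks the same.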
And last but not least, support for basic and dedicated graphics rendering is also integrated into OpenStack right now. For example, OpenStack supports hypervisor-specific graphics adapters, it allows partitioned GPU instances, and, as a third option, the pass-through of physical GPUs into a virtual machine. I have written that down in very short points here, and I want to clarify it in more detail so that you can understand the major requirements that are mandatory for a VDI solution with smooth rendering. So on the next slide I have prepared four images where I want to show the four major GPU virtualization technologies that are used today to provide rendering for guest machines. First of all, we're starting with graphics emulation, which is visualized in the picture on the left side. Here we can see a host, our compute node, on which an emulator emulates a graphics card for a virtual machine, and the virtual machine can use the features exposed by that graphics card to render graphical applications. This approach is very easy, and it was the first approach historically; for example, the QEMU emulator, which is well known in the OpenStack world and often used, implemented this approach first. But then people realized that this is really not the best approach because the performance is lacking, so they improved on it and arrived at paravirtualization. Here you also have a compute node and a virtual machine, but the difference is that in the virtual machine you have a so-called front-end graphics stub, a graphics card which redirects all rendering commands to a back end on the host; the host is capable of using a dedicated graphics card, which is indicated with the orange box, and renders all the graphics commands.
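On a QEMU/KVM compute node, these first two approaches correspond, at the libvirt level, to the guest's `<video>` model. The fragments below are illustrative guest-XML snippets, not something OpenStack generates verbatim:

```xml
<!-- graphics emulation: a fully emulated QXL adapter,
     rendering done in software -->
<video>
  <model type='qxl' vram='65536' heads='1'/>
</video>

<!-- paravirtualization: a virtio front-end stub in the guest
     forwards rendering commands to the back end on the host -->
<video>
  <model type='virtio'/>
</video>
```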
This increases the performance of rendering such graphics commands, but it isn't as fast as the third approach. The third approach increases the performance further, because here the dedicated physical graphics card is passed to the VM, so that the VM has direct access to the resource, in our case a GPU, but it can also be used for other PCI devices; that's the reason why it's often called PCI pass-through. This approach increases performance, as I said, but it also lowers flexibility, because one GPU is acquired by one virtual machine. That can sometimes be good, for example for high-performance computing use cases, but it's not so great if you want to provide multiple virtual desktops to a large group of users; there it would be better if every user could acquire a little bit of a GPU. And that's the reason why, historically, the mediated GPU pass-through approach was born, which combines the ideas from paravirtualization with the direct GPU pass-through approach. In the visualized image we can see that the virtual machine has direct access to the GPU, but now it's a little bit different, because the host also has access to the physical GPU and can command it: please, GPU, I need one more partition, give me a GPU slice, so to say. This slice is represented in the host as a virtual GPU instance, visualized with a second orange box labeled vGPU. This approach has the benefit that it's very flexible: you can partition your physical GPU if the GPU supports that feature, and most data center GPUs nowadays support it. You have the advantage that the performance is really high, because each VM that acquires such a partition can access it directly, and it has the further advantage that the host retains control of the GPU and can also acquire a partition itself.
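The host-side slicing that mediated pass-through enables can be sketched as a toy allocator. The class and the numbers below are illustrative only, not any vendor's vGPU API; real GPUs expose fixed profiles (mdev types) rather than arbitrary splits.

```python
class PhysicalGPU:
    """Toy model of mediated GPU pass-through: the host carves one
    physical GPU into fixed-size vGPU instances handed out to VMs."""

    def __init__(self, vram_mb: int, profile_vram_mb: int):
        # number of vGPU slices this profile yields on this card
        self.slots = vram_mb // profile_vram_mb
        self.assigned = {}  # vm name -> slice index

    def acquire(self, vm: str) -> bool:
        """Hand one vGPU slice to a VM; fail when fully partitioned."""
        if len(self.assigned) >= self.slots:
            return False
        self.assigned[vm] = len(self.assigned)
        return True

# a 24 GiB card cut into 4 GiB vGPU slices yields six desktops per GPU
gpu = PhysicalGPU(vram_mb=24576, profile_vram_mb=4096)
```

The host keeping one slice for itself (for example, for video encoding, as mentioned below) is just one more `acquire()` call in this model.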
This is especially useful if you, for example, want to offload video encoding: in the case of the powerful workstations that we want to replace with powerful virtual desktops, it would be a great option to say, okay, the virtual machine does the rendering and the host does the encoding, so that video encoding is offloaded to the host. Those are all four major approaches that exist today, and the great thing is that OpenStack has support for GPU virtualization approaches, which gives us a solid base to develop a separate VDI service that can acquire such features for virtual machines that should provide virtual desktops. On the next slide, I have summarized all the aspects that are limited or missing right now. First of all, although there is a great base, as I have shown you on the last slide, there are some problems, especially with graphics rendering regarding resource management. Currently, if you use hypervisor-specific graphics adapters, you have the problem that OpenStack cannot account for those resources: you do not know how much video memory, for example, a graphics adapter built into QEMU takes in this hypervisor, or, if you use Hyper-V and Hyper-V emulates a graphics adapter for the virtual machine, you do not know how much memory is reserved on the compute node, because OpenStack does not have any resource accounting for graphics adapters right now. It was implemented once but was removed, with the notice that cloud operators should keep those things in mind, which can be a little bit dangerous if you want to realize a VDI solution. Also, there is no unified video memory handling for virtual GPU instances available.
For example, at the moment you can only say, I want to acquire one virtual GPU instance for a virtual machine, but you cannot specify other requirements, for example, I need a virtual GPU instance with, say, four gigabytes of video memory. Fine-grained control for mediated devices is also missing: one limitation here is that you cannot create different vGPU types per GPU. Live migration is another thing that's missing completely for all those accelerator resources, there's no support for long-term reservations of special resources like accelerators, and the console API should be optimized so that we have more options to distinguish, for example, whether we want video streaming for our virtual desktop transport or a simple protocol such as VNC. On the next slide I want to introduce Andy, who now presents, let's say, a first VDI service implementation for OpenStack. So now, Andy, it's your turn. Thanks, Manuel. I just want to start with a little bit about the organization I'm working for. ARDC stands for the Australian Research Data Commons, and it's an Australian nationally funded project to accelerate research in Australia. This slide is just a little bit about some of the components of the organization; the part that I work on is mostly in that purple section on the right, the storage and compute. Traditionally we've operated a cloud called the Nectar Research Cloud, and it's a partnership between a number of mostly university organizations that come together to operate this sort of nationwide research cloud. We've traditionally focused on the infrastructure, but now we're looking at how we can support research up the stack a bit more, so we're starting to dabble in that pink section with platforms and software. And so we've developed a virtual desktop product, and we've called it Bumblebee.
It's really there to help those researchers who aren't necessarily looking to run servers and use infrastructure in that way. We're really looking at this service as a way for those users to be able to use a virtual desktop for their research without the overhead of needing to be very technical to operate a server. So I think from here we're gonna go to a demo that I've prerecorded. This is a demo of ARDC's Nectar Research Cloud virtual desktop service that we launched recently. Users are gonna come to this service, hit the sign-in button, and get sent straight through our standard authentication process, which uses OpenID Connect and Keycloak. I just choose my institution here, do the standard institution login, and then users end up at this page. Before they can get started, they need to create themselves a workspace, so I'm gonna click this button and get started. I'm gonna create a workspace here, just call it demo, and put some basic information in here. I choose a field of research code; field of research codes are an Australian standard for classifying research, and we use this for our reporting purposes. So I hit submit here, and you can see my workspace has been created and auto-approved, so I'm able to get started straight away. If I scroll down a little bit, we have a catalog of images for users to choose from. We have a couple of generic ones: CentOS and Ubuntu are generic, just basic installs; Fedora is similar but includes a suite of research and scientific tools. And then we have a couple of domain-specific ones: NeuroDesktop and GeoDesktop contain domain-specific research tools that users can use to get started. I'm gonna choose Ubuntu here for the demo. I click for details and you can see a bit of information about the image, a little bit about what's installed and some of the details here.
The default size is four vCPUs and eight gigabytes of RAM; we also include a boost size here. So I'm gonna click create desktop and get started. In our production environment, users could choose from a range of availability zones, but for this demo we're just gonna use the one. I hit create here, and what's happening is the web service has kicked off an asynchronous job; a worker picks that job up and begins talking to the OpenStack API, creating the resources for the desktop. We clone a sort of golden volume which has the base OS on it, and once that's done, we create a Nova server instance and set the boot source to be that newly created volume. And then as the instance comes up, there's a systemd service that runs at the end of the boot process which calls a webhook, essentially phoning home to the web service to let it know that the process has completed and the instance is ready. So here it is, it's up and running now. I've got a few buttons here of actions I can take, but first of all I'm just gonna choose open desktop; there's clipboard integration there. What's happening is I've been directed to Guacamole, which is the service we use for translating RDP from the underlying instance through, proxied out to this web-browser virtual desktop interface. Our desktop's now ready to be used, so I can launch Firefox for example; it just takes a few seconds to start, and I might launch a terminal. You can see we've got four vCPUs here and eight gigabytes of RAM, and our root disk there is a 50 gig disk. At this point users can use sudo to install things or do whatever they need to do, and when they're finished, they can just close the window and their desktop will persist in the background; they can come back at any time, click the open desktop button, and get straight back into their session. Now, some users require more power; they might require more vCPU or RAM.
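The provisioning flow described above (clone a golden volume, boot an instance from it, wait for the instance to phone home) can be sketched with a duck-typed client. None of the method names below belong to a real SDK; they are stand-ins for the corresponding volume and compute API calls.

```python
def provision_desktop(cloud, golden_volume_id: str, flavor: str, name: str) -> dict:
    """Clone the golden volume, boot an instance from it, and mark the
    desktop as waiting for the phone-home webhook that a systemd unit
    in the guest fires at the end of boot. `cloud` is any object
    exposing clone_volume()/boot_from_volume() (hypothetical names)."""
    volume_id = cloud.clone_volume(golden_volume_id, name=f"{name}-root")
    server_id = cloud.boot_from_volume(volume_id, flavor=flavor, name=name)
    return {"server": server_id, "volume": volume_id,
            "state": "awaiting-phone-home"}

def on_phone_home(desktop: dict) -> dict:
    """Webhook handler: the instance reported that boot has finished."""
    return {**desktop, "state": "ready"}
```

The key design point from the talk is that readiness is signalled by the guest itself, not polled by the web service.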
We offer some boost functionality. If I choose that and then hit the red button there, what's going to happen is the virtual desktop will be shut down and resized to a larger flavor. It doesn't take too long, and we allow users to use this boost functionality to gain access to a higher-spec flavor for a period of time: we allow users to do that for seven days, and at the end of those seven days they can choose to extend, or, if they don't, it'll just be resized back to the default flavor. Okay, so I now have eight vCPUs, 16 gigabytes of RAM. Just wait a moment and the desktop is back again: eight vCPUs and 16 gigabytes of RAM. One last thing I wanted to touch on is that we do put a time limit on these resources to ensure they aren't being wasted. We have a time limit of 14 days, and at any time during that period users are able to extend their desktop; if they wish to keep it around longer, they can just click the extend button and that'll extend it by another 14 days. If they don't extend it and we hit that time limit, an expiry process will come in and do what we call shelving. The process of shelving I'll demo here: I can hit the shelve button, which speeds up the process rather than waiting for the expiry to do it for us. The shelving process will delete the virtual machine but keep the volume, and it'll keep it in that state for three months. At any time, the user can log back into the service and unshelve the desktop: it simply creates a new instance, attaches it to their existing volume, and it's ready to be used as it was before. You can see that we're just in the shelving process now; it just takes a few minutes. Okay, now it's completed, and you can see the desktop is currently shelved.
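The 14-day expiry and three-month shelved retention just described can be captured in a small decision function. Again an illustrative sketch, not Bumblebee's actual code; the names and the 90-day reading of "three months" are assumptions.

```python
from datetime import date, timedelta

DESKTOP_LIFETIME = timedelta(days=14)    # extendable by the user
SHELVED_RETENTION = timedelta(days=90)   # roughly the three months mentioned

def next_action(last_extended: date, shelved_on, today: date) -> str:
    """What should the expiry process do with this desktop today?"""
    if shelved_on is not None:
        # only the volume remains; delete it once retention runs out
        return "delete" if today - shelved_on > SHELVED_RETENTION else "keep-shelved"
    return "shelve" if today - last_extended > DESKTOP_LIFETIME else "keep"
```

Clicking the extend button in the UI corresponds to resetting `last_extended`, which pushes the shelve decision out by another 14 days.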
And so if I wanted to, I'd come back and click unshelve, and it'll simply create a new virtual machine, attach it to the existing volume, and then it'll be ready. Okay, the instance has launched, and now we're just waiting for that systemd service I mentioned earlier to phone home and let us know it's ready. Okay, now we're ready. And that's all we really wanted to demo, so I'll just click delete on my instance and we're done. All right, thanks. I'm just gonna make mention, it was mentioned in the chat here about the little bee. I've got to give a shout-out to Darcel on our team, our UI/UX designer, who designed that little bee with the fluttering wings. Yeah, it's very cool; everyone comments on that. So I'm definitely gonna pass that feedback on to her; she'll really appreciate it. Anyway, I know we've run close to the time limit, but I just want to talk quickly about the architecture. There are kind of two main components. One is the core component that runs the main web service, and then we have a site component, which is a stack of virtual machines with the software running at each of the sites, availability zones, I guess you could think of them. For the core component, we're running through an external load balancer into a Kubernetes cluster that runs the web service. We've got Redis, and there's a worker/scheduler process there that talks to the OpenStack APIs to do the provisioning of the resources and so on. The web service talks to a shared MariaDB cluster that we have. If we go to the next slide, the site component: this architecture is duplicated at each of the availability zones that we run in. You can see on the right-hand side there that we've got three zones we're running in currently. The sites are sort of broken apart, so users actually come to the site component directly.
So when they click the green button to access their desktop, it sends them directly through to a whole new load balancer there that you can see; in our case, we use Octavia for that. All of this component is hosted in OpenStack and we deploy it with Heat. We've got a stack of virtual machines running Guacamole, the connections go straight through to that and then get proxied out to the virtual desktops themselves. And I'll just mention that shared MariaDB there: Guacamole and the central Bumblebee web service actually share the same database. We found that sharing the database was the most convenient way to integrate the two tools, so the Bumblebee web service creates the Guacamole connection rows in its database directly, and we get this really nice integration that way. Next slide, please. A bit of an overview of the core part. The project is actually based on the Orion project from the University of Melbourne. The University of Melbourne is one of our partners, and when we were looking for a service to run nationally, we did an evaluation of the other kinds of virtual desktop tools that were around and we didn't really find very much. But the University of Melbourne was starting out with this product, which was Python-based and was using our OpenStack cloud already. So we encouraged them to open source their code, and then we took that and developed it further to add the features and support we needed to run it at a more national level. As I mentioned before, we're running it on Kubernetes with Helm, which we find quite nice and flexible. We've got good coverage of unit tests. It's all working with native OpenStack resources, we're using an asynchronous job scheduling system backed by Redis, with a really nice Django plugin integration there, and we're using OpenID Connect for authentication, via Mozilla's Django OpenID Connect project.
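To illustrate the shared-database integration just mentioned, the web service can register a desktop by writing connection rows straight into Guacamole's own schema. The table and RDP parameter names below follow Guacamole's standard JDBC authentication schema; the helper function and the exact columns Bumblebee populates are assumptions for this sketch.

```python
# Sketch: building the parameterized INSERTs that register one RDP
# connection in Guacamole's database (guacamole_connection plus its
# per-connection parameters). How Bumblebee does this exactly is assumed.
def guacamole_connection_sql(name, hostname, username, password):
    """Return the SQL and values to register one RDP connection."""
    conn_sql = (
        "INSERT INTO guacamole_connection (connection_name, protocol) "
        "VALUES (%s, %s)"
    )
    # Standard Guacamole RDP connection parameters.
    params = {
        "hostname": hostname,   # the virtual desktop's private IP
        "port": "3389",         # xrdp listening on the instance
        "username": username,
        "password": password,
        "ignore-cert": "true",
    }
    param_sql = (
        "INSERT INTO guacamole_connection_parameter "
        "(connection_id, parameter_name, parameter_value) VALUES (%s, %s, %s)"
    )
    return conn_sql, (name, "rdp"), param_sql, params


conn_sql, conn_vals, param_sql, params = guacamole_connection_sql(
    "alice-desktop", "192.0.2.10", "alice", "s3cret")
print(conn_vals[1])  # rdp
```

Writing these rows directly is what lets the "open desktop" button drop the user into a ready-made Guacamole session with no separate provisioning API between the two tools.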
And so in the next slide, we talk a little about the site component. The site component is basically an Octavia load balancer with a stack of virtual machines running the Guacamole front end and daemon. Security was a really important factor here, because we expected that researchers wouldn't necessarily be on top of security, so we wanted to make this as secure as we could while keeping it accessible. We've built it on top of a private network; we don't allow any incoming connections to the virtual desktops except from Guacamole itself, with strict access between Guacamole and the desktops. So we tried to make it isolated, in a way, and we enable automatic security updates in the OS as well, to really make sure we're as protected as we can be. Next slide, please. I've probably talked a lot about this already, but one interesting thing we found while trying to optimize the deployment time was that cloning a volume was actually the fastest way to get up and running. With a Ceph back end, that was definitely quicker than booting from an image. And when we deploy the virtual machines themselves, we use cloud-init, which does a lot of the heavy lifting for us: we have a cloud-init template that the web service fills in and sends through, and cloud-init does a lot of the on-VM provisioning and setup. As you saw in the demo, it usually takes less than two minutes for the desktop to come up. When we first took this project on, we were concerned about the time it might take and that users might get frustrated if it took too long, but once we optimized it, having it under two minutes was a real win for us. We build the OS images using a combination of tools we were already using; we use Jenkins and integrate a lot of jobs through that.
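The cloud-init template idea above can be sketched as follows. The talk describes a systemd unit calling the readiness webhook; this sketch uses cloud-init's built-in phone_home module instead, which provides the same end-of-boot callback. The URL, hostname and package list are made up for illustration, and the real template is presumably richer.

```python
# Sketch: the web service rendering per-desktop cloud-init user data.
# The callback URL and package list are invented for this example.
from string import Template

USER_DATA = Template("""\
#cloud-config
hostname: $hostname
packages:
  - xrdp
phone_home:
  url: $callback_url
  post:
    - instance_id
  tries: 10
""")


def render_user_data(hostname, callback_url):
    # The web service fills in per-desktop values before booting the VM.
    return USER_DATA.substitute(hostname=hostname, callback_url=callback_url)


print("phone_home" in render_user_data(
    "alice-desktop", "https://bumblebee.example.org/api/phone-home/"))  # True
```

Either way (phone_home module or a custom systemd unit), the point is the same: the instance, not the control plane, decides when it is ready.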
We do a preseed or kickstart build and then use Packer to automate that, with Ansible. Ansible deploys a lot of the tools and features that we have, like xrdp and any of the other applications that we need. We've got five images at the moment, some generic ones and some domain-specific ones, and we've got the possibility of bringing more on as user demand drives that, I think. Next slide, please. And so, yeah, we've got some time limits on this. I know Manuel talked in his slides about having persistent desktops; we've got a bit of a hybrid approach here. We wanted to make sure that resources are used in an efficient way. We're operating a cloud where users don't get billed for their usage, so there's sometimes a potential for users to just forget about running resources. So we do put a time limit on these. Users can hit the extend button to extend their usage at any time, and we don't have any limits on that. But if they don't, then we start going through that shelving process, and eventually, after three months of being shelved, we archive it. That runs a volume backup process on the back end and stores the image in Swift, from which we can recover it if we need to. And we email users to let them know in advance when their desktop is going to expire, and what workflow their desktop will go through. So that was it; that's it from me. Thanks very much. Awesome, well, thank you all. This was definitely one of the more engaging episodes that we've had; we've had a lot of questions in the chat. I know we're a little bit over on time, but I wanted to go ahead and invite back Roddick and Manuel, so maybe we can discuss one or two of the questions before we transition to the end of the episode. So I know there was one... there are so many, I have to scroll through. Let's see, here we go. This one is around automating the launch of the VDI at user login or with the click of a button.
So this is more around the logistics of actually making it work. I want to see if one of y'all wants to tackle this question. I guess for us, doing it at user login was probably going to be too slow. It depends on how you would do it; if you were using a container back end, maybe on Kubernetes or something like that, you might be able to really optimize the speed and get something up and running quickly enough. We opted for the virtual machine approach because it was more appropriate for us, so in that case there's a little bit of extra time that it takes. And we really wanted the opportunity to show that little buzzing bee, I guess. It was my favorite part, I'm not gonna lie. Awesome. And then one of the other questions, and I know this actually led to a lot of discussion: Roddick joined in actively on LinkedIn to continue talking about the goals of implementing this VDI, really emphasizing that it's a collaboration. I'm not going to read the entire question, but this is from Lord Jonathan Race, and it's around the goals of implementing this OpenStack service. So Roddick, I know you went back and forth on this a lot on LinkedIn; do you want to summarize your thoughts? Yes, so maybe a little background on how we ended up here in the first place. I started this little thread on openstack-discuss to gather some traction on the VDI topic, to see whether other people are also interested in having VDI on OpenStack, using what we already have with the existing tooling and then building something on top of that. You can basically find it with a simple Google search. Through that discussion, I actually got in touch with the two of you, and we have already started collaborating behind the scenes, something that wasn't really presented today.
Maybe I started teasing about that via LinkedIn comments, but in general we have an active, ongoing collaboration in this group today, with the few of us. And this call is also, as Alison mentioned, a way to invite you, the other participants of this OpenInfra Live session, the ones who are listening to us now, to collaborate further. We are still in the design phase, to be honest, thinking about how best to approach this topic. There are many open questions, many loose ends that we would like to tackle before going further with the coding part. So, as I said on LinkedIn already... ah, thanks Alison, I don't have to repeat myself, basically. I just thought it was such a good quote to show all the different areas where collaboration would be helpful, because I think that's ideally the whole point: gathering folks from all around the world to collaborate around a common challenge or mission and bring things to life, versus all of us doing it alone. So I think that was very well said. Thanks, Alison. Yeah, that's it for me on that. Okay. And I know that there were a lot of questions, and one of the things we talked about backstage during the episode was maybe doing a panel discussion on another episode, to live-answer questions about this exact topic or related things. So you probably haven't seen the last of these three. But I know that we've gone a little bit over today, and I do want to be respectful of everyone's time. Thank you all for joining today. Before we sign off, I wanted to show the next episode that we have in a few weeks: a large-scale ops deep dive with Schwarz Group, which is one of the largest retailers in Europe. They'll be talking about their operational challenges as well as some of the things they've collaborated with the community to solve. So please join us on September 29th at 1400 UTC to learn more about what they're doing.
Otherwise, thank you for joining us today, and we'll see you on the next episode of OpenInfra Live. Thanks, everyone. Thanks.