I want to thank you all for attending this talk about how we brought virtual desktops into our school. We are teachers at a school in Barcelona with over 1,000 computers that we have to keep updated. We used to do it with cloning systems, and we tried several of them, but cloning did not really solve our problem. What we will explain in this talk is how we solved it: what we went through with the system at different levels, and how we solved the problems we hit during these past years while using these methods in our school.

We will talk about the storage; the hypervisors, and how IsardVDI organizes the desktops and decides where it is best to run them; the networking; how the clients, the users of the system, connect to the VDI; and how the system scales from the beginning, so it can grow with your organization.

So, IsardVDI is open source virtual desktops. We are focused on desktops and servers, because of course you can also serve a server with it. We have a fast and easy interface that allows all the users, especially the teachers, to be very autonomous with desktops: creating them on the server and then delivering the created templates to the students. We needed an interface that is quick and that could be used by all the users in our organization. There is an interface for the users where they create their desktops, start them, and access them via a client viewer. We also have a quota system that limits how many desktops they can create and start.
We also have an administration part where we can manage all the desktops, the users, the hypervisors, everything on the system.

Why did we create IsardVDI? Well, the philosophy was a little bit what I explained: we needed a system that brings autonomy to teachers and also to the students. As the IT staff in our school, we didn't want to spend more time delivering software to the classrooms; we tried all the existing systems, and none of them had an interface for what we needed.

So let's start with an example, the typical use case. A teacher comes with some software they have bought or downloaded, and they need to install it, test it, and then deploy it to the classrooms. Now the philosophy has changed. We say: okay, just go to the IsardVDI web and install it there. We provide a form where they can choose a template and create a desktop from it. We create and optimize some templates ourselves, we call them the base templates, from which they can start a new desktop; they choose either a base template or a template that another teacher has created. They can also choose the hardware and the LAN: they can attach the desktop to the LAN where they are going to use it, for example.

After the teacher has created the desktop and installed the software, they want to deploy it to the students, so the process is: just convert the desktop to a template, and then it is visible to all the users. When the teacher is in class, he says to the students: okay, create a desktop, look for the new template, and the students have it running. It is a really, really fast sequence.

Here is how it works at the level of the disks. We create the base systems and optimize them. The teacher creates a desktop, installs the software to be used in class, and converts it to a template; from then on the template disk is read-only.
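The disk layering described above can be sketched with qemu-img; the filenames here are invented for illustration, but this is roughly how a qcow2 backing chain is built:

```shell
# Base template image, installed and optimized once (read-only from now on).
qemu-img create -f qcow2 base-windows.qcow2 40G

# Teacher's desktop: a thin overlay on top of the base template.
qemu-img create -f qcow2 -b base-windows.qcow2 -F qcow2 teacher-desktop.qcow2

# Converting to a template freezes this overlay; each student then gets
# another thin overlay that only stores that student's own writes.
qemu-img create -f qcow2 -b teacher-desktop.qcow2 -F qcow2 student01.qcow2

# Inspect the whole chain of backing files.
qemu-img info --backing-chain student01.qcow2
```

Each overlay starts at a few hundred kilobytes and grows only with the writes made on that particular desktop.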
After that, the users just create their desktops, and all the data they write goes to the disk attached to their own desktop. So the system grows by itself as users create templates and new desktops. This is a view from the administration panel, where you can see the tree of disks and how they grow.

What we use is a chain of disks. With qemu-img you can create a disk backed by another disk, and continue so on. With this system we have, for example, more than 2,000 virtual desktops in only 8 terabytes of disk. So one of the things we gain is that very little space is wasted using these chains.

Also, IsardVDI can be configured to place the users' disks on different storage servers, so the IO performance, which is critical in these systems, gets improved: it will balance the disks based on weights you can set. With this we have lowered the budget and balanced the disks across different servers, so we get better IO performance. And the disks are just files, qcow2 files, so you can store them wherever you want and share them on the network with whatever system you want. We use NFSv4 as the sharing system to export the storage, but you can use whatever you want.

The other thing we use to lower the budget is a cache. We tried EnhanceIO and dm-writeboost; the first one is no longer maintained, so we changed to dm-writeboost. We have rotational disks that are slow, and we have NVMe disks that are really fast but quite expensive, so we buy the cheapest ones in size, the smallest ones, and we configure dm-writeboost so that all the desktop writes go first to these quick disks. You can see it here: there is the impact of the writes of multiple desktops, and after that, in blue, you can see how the cache flushes everything down to the rotational disks. So we can lower the budget also using this system. Okay.
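As a rough sketch of the cache setup (device names are examples, and the exact dm-writeboost table arguments depend on the kernel module version you run), the mapping looks something like this:

```shell
# Rotational device that holds the qcow2 files, and a small, cheap NVMe.
BACKING=/dev/sdb1        # slow, big
CACHE=/dev/nvme0n1p1     # fast, small

# Create a writeboost device: all writes land on the NVMe first and are
# flushed to the rotational disk in the background.
SECTORS=$(blockdev --getsz "$BACKING")
echo "0 $SECTORS writeboost $BACKING $CACHE" | dmsetup create wbcache

# The mapped device is then formatted and exported over NFSv4 to the
# hypervisors like any other volume.
mkfs.ext4 /dev/mapper/wbcache
```

This needs root and real block devices, so take it as an illustration of the idea rather than a copy-paste recipe.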
Now, how IsardVDI controls the hypervisors. We use generic KVM hypervisors; you can find them in the main distributions, so there is no specific software to be installed apart from the usual KVM packages, and we do all the management from the IsardVDI engine through SSH to all the hypervisors. So it is quite simple to set up.

One thing is that we have tried Intel server boards, these expensive boards, but in the end we have seen that if you balance the cost against the performance, using more consumer boards as hypervisors gives you a better system than using only high-end mainboards with a lot of features that are really costly. Sometimes it is better to have a farm of hypervisors than to buy one big server with a lot of RAM and a lot of CPUs: the cost will be too high for the performance you get. We assembled all the hardware ourselves. It's gaming hardware, so you have lights everywhere. Lights.

On the engine side of IsardVDI we have an orchestrator that monitors the hypervisors and the desktops running on them all the time. Its algorithm has weights and will decide, based on the free resources on the hypervisors and on the resources needed by the next desktop to be started, which is the best hypervisor to place that desktop on.

On the network side, we started with bonding: network cards with four ports, for example, bonded to get four gigabits. After that we started looking at 10 gigabit switches. One thing we have seen in the past five years is that you can get switches at quite a low price on eBay, second hand, and they are rock solid. So that is a hardware part you can buy second hand and lower the costs a lot, maybe three or four times cheaper than a new switch. We use second-hand corporate Ethernet switches, network cards, and 10 gigabit switches.
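The scheduling idea can be sketched in a few lines of shell; the hostnames and free-RAM figures below are made up, and the real engine also weighs CPU load and the desktops already running:

```shell
#!/bin/sh
# Hypothetical inventory of "hypervisor free_mb" pairs; the real engine
# collects these figures over SSH from each KVM host.
inventory="hyper1 4096
hyper2 12288
hyper3 8192"

needed_mb=2048   # RAM the next desktop asks for

# Pick the host with the most free RAM that still fits the request.
best=$(printf '%s\n' "$inventory" \
  | awk -v need="$needed_mb" '$2 >= need && $2 > max { max = $2; best = $1 } END { print best }')

echo "placing desktop on: $best"
```

With these numbers the sketch would pick `hyper2`, the host with the most headroom; the production algorithm applies configurable weights instead of a single criterion.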
Now we have a 40 gigabit switch, also second hand, and we set up all the networking with this second-hand hardware.

Another thing we have learned from experience is that you have to split the three network streams and try not to mix them on the same interfaces, because mixing them gives a bad user experience. You have to send the client video through one interface and one network; the internet access of the desktops you have to split onto another interface; and of course the storage part needs a network apart from the others. What happened in the beginning is that we had the viewer video and the desktops' internet access mixed, and users experienced a lot of lag when the internet traffic of the desktops was on the same interface. So this is the new 40 gigabit switch we have bought, plus another 10 gigabit switch we have, and we split storage, internet access, and video over those three networks.

What we are providing is a VDI with multiple kinds of access. We try to support BYOD access: the students are starting to bring their laptops to school, and they can use the templates already created by the teachers from their laptops over the school network. Among the viewers, we use an HTML5 SPICE web client that works quite well for normal use and gives access to the many clients that have only a browser. With native SPICE you need a client installed on the client computer, but you get quite good performance and also some features like a transparent USB plug into your virtual desktop. VNC works quite well too, and most operating systems already have VNC ready, and RDP to support all kinds of Windows clients.
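The separation we ended up with looks roughly like this; the interface names and subnets are just examples, not our real addressing:

```shell
# Viewer/video traffic (SPICE, VNC, RDP) on its own NIC and network.
ip addr add 10.1.0.10/24 dev eno1      # viewers

# Desktop internet access on a second NIC.
ip addr add 10.2.0.10/24 dev eno2      # desktop internet

# Storage (NFSv4 to the NAS) on the fast 10G/40G NIC, never mixed
# with the other two streams.
ip addr add 10.3.0.10/24 dev enp3s0    # storage
```

The point is simply that each of the three streams gets a dedicated interface, so a burst on one (a class all browsing at once, say) cannot add latency to the others.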
For the client viewers, we have set up some computers with a Raspberry Pi as the client: they access the system and get the desktop, and it works quite well for normal use. If you try to do intense graphics, something like that, of course it doesn't work so well.

How does the system scale so it can get bigger? We started with the system on one computer, tried it with a few students, and checked whether the concept could work. So you can have an IsardVDI for personal use, all in one computer: you just set up IsardVDI on it. You can even have a portable computer; we have a cube, a mini-ITX mainboard with a Xeon CPU and, I think, 64 gigabytes of RAM, with an access point and a direct 10G network connection. We can use this as a portable IsardVDI that we can set up quickly in a new classroom. We also installed it on some laptops just to show demos of the software. Of course you can have a one-computer setup; the problem is that you only have one computer, and if it breaks you have nothing else, but we started like that. We had one computer per class, with gigabit bonding of the network connected to the classroom switch, and the class used it on that platform. We have some pictures; we assemble all the computers we have in the school.

But of course this has grown a lot, and when you have a big infrastructure where you want to deliver virtual desktops, you need to set up a cluster for reliability, with a pool of hypervisors, and you need 10G copper or fiber Ethernet to bring all this traffic to your classrooms. Our solution, since we don't waste a lot of disk space with all the desktops, was to set up DRBD between two NAS servers that hold the hard disks.
These two servers keep the data synchronized with DRBD, and with IsardVDI, as we said before, we share the disks from both servers, so the hypervisors read from both NAS servers and we get better IO performance when all the desktops are writing to disk. We also set up a Pacemaker cluster, which gives us the possibility of losing one of the NASs while the whole system keeps working with the other: all the services fail over from one NAS to the other if something goes wrong. And we use, of course, second-hand network hardware to connect all these devices.

More or less what we have now is something like this: the two NAS servers, the hypervisors that run the virtual desktops, and on the client side different kinds of access: laptops, Raspberry Pis, different kinds of clients and connections. Also, to make it easy to install, we did a Docker setup. There is a Git repository with the software, and you can bring up an IsardVDI on your computer in minutes and try the software and its possibilities.

I have time, I think. What I will show you now is more or less how it works on a day-by-day basis. We have quotas: each user has a limit on what he can create and run. So, for example, a teacher creates a desktop, sets a name (this is a real-time video, I think), and sets the hardware he wants on it. One thing that happens a lot with robotics is that the companies provide Windows software, and we have Linux on the physical computers, but the students need to program that robot arm, so this is the solution: with SPICE they can do a transparent USB plug on the desktop, so the physical USB device is seen inside the virtual desktop. So the teacher just downloads the software and installs it on this newly created desktop. This is real time: it started that quickly. The NVMe cache, the dm-writeboost cache we talked about before, works really well.
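A DRBD 8 resource for the two NAS servers looks roughly like this; hostnames, devices, and addresses are illustrative, not our actual configuration:

```
resource storage {
  protocol  C;                    # synchronous replication between the NASs
  device    /dev/drbd0;
  disk      /dev/sdb1;            # the local (cache-backed) volume
  meta-disk internal;
  on nas1 { address 192.168.100.1:7789; }
  on nas2 { address 192.168.100.2:7789; }
}
```

Pacemaker then manages which node holds the DRBD primary and the NFS exports, so if one NAS fails the other takes over all the services.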
So he installs the software, and after that he converts it. We open the software just to see that the desktop is quite quick; it's really fast, and the disk IO is really fast. This is the typical workflow of the teacher: he installs the software he wants to use with the students, checks that it works, and after that he converts the desktop to a template. This desktop, based on a Windows base template we have set up, is converted to a template that can be used by other teachers or other students. We shut down the desktop and just convert it to a template that will be visible to all the other users in the school.

So it is a really quick system, which is what we needed for a school. We tried all the other software and couldn't find a system that was this quick through the whole process, and that brings the autonomy to the teacher and the students to create their own desktops with the software they want. A student in the class logs into IsardVDI and finds the template, so he creates a desktop. This is a real-time video; we haven't cut it. The students have smaller quotas on running desktops and hardware and all that, and they can access the desktop through the different client viewers.

Thanks for your attention, and if you have any questions...

He asks how long it took us to develop the software and also to build the hardware setup behind it. Our wives could tell that better than us. A lot of hours, of course. We like it; it was like a personal project that we wanted to try. Let's give it a try: it works with one computer. Let's start more desktops: what happens? You have a lot of IO, you have a problem. Okay, we need hard disks with faster write speeds, but we have a low budget. Then we looked at cache disks, so we set up cache disks, and we kept growing like that for maybe four or five years. Then we started setting up the whole system.
What we have seen is that teachers discovered this software by themselves, because we didn't promote it in the school. When they discovered it, word spread between them and it grew; we couldn't stop it. There was high demand for these desktops.

You have an LDAP... Sorry. If I understand, you are asking how we map the users into the system. We have an LDAP server at the school, with roles, groups, and categories; that was already set up. We have a kind of plugin, a script you can configure for your LDAP, to automatically get the users from your existing system. You can also create local users without any external authentication system. Any other questions?

Yeah, that is a big problem. The question was: what about the licensing of the software you run inside these virtual desktops? It happens the same with all the virtual desktop systems you try. There are policies; Microsoft has its own, and I think you are talking about Microsoft licensing. You need a physical license, and then you can run that license on a virtual desktop, something like that. There is a licensing problem with virtual desktops, but it is the same in all the systems; there is no other way to do it: you need the physical license for that virtual desktop. Any other questions?

Yeah. Sorry, which version of DRBD are we using in our system? I think it's version 8, and now there is version 9. With version 8 you can only have two servers; when we have more time we want to try the new version 9, which lets you set up more servers tied together, but we haven't tried it yet.
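The kind of query such an LDAP sync script would run looks something like this; the server, base DN, and attributes are hypothetical and depend entirely on how your directory is organized:

```shell
# Hypothetical example: list the users and the group memberships that
# decide their role in the VDI (teacher vs student).
ldapsearch -x -H ldap://ldap.school.lan \
  -b "ou=users,dc=school,dc=lan" \
  "(objectClass=inetOrgPerson)" uid memberOf
```

The script then maps each directory group onto a role and quota inside the VDI, so accounts never have to be created by hand.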
Yeah, we tried GlusterFS in the beginning as a system that would let us grow, but the performance (I don't know about now, this was four or five years ago) was so low that we started looking at cache disks and at a system whose disk usage didn't grow a lot, so we started developing our own setup. Ah, Ceph: yeah, you need maybe five servers to set up a minimal storage. In the beginning we tried GlusterFS and saw that the performance was not so good, and with Ceph we saw that we would need a lot of servers to set up a minimal storage, so for us this setup was cheaper. In the whole system we tried to keep the budget low.

I don't know if you can see it well. We monitor the whole system, and as you can see here, these two peaks are the load on the NAS; on top you can see the desktops starting on the hypervisors, going from 5 or 6 desktops to 60 desktops. I don't remember the exact time, but in a few minutes you have a lot of desktops started, and what you get is a high load on the NAS, on the storage, but the cache system, even in this small setup, works really well. You can optimize it a lot.

Sorry, he asked if we have pushed the system to its limits. Of course, we like to do that during the night and all that. With six hypervisors (64 GB or 128 GB of RAM, 12 or 24 cores, mixed all together, gaming boards most of them) we can start 120 desktops, something like that, easily. The students all start at the same time: the teacher says, okay, start, and the cache takes that very heavy IO.

Also, when the teacher has created a template, what happens is that he is in a class of 25 students and says: okay, let's go to IsardVDI and create your desktops now. You create 25 hard disks, but with this system each hard disk is about 200 kilobytes, I think, and it grows as the student starts writing on the disk.
So this process is not so consuming. In the engine we have queues, queues of operations, and we control how this behaves: if too much comes at once, we can hold some starts and delay them a bit.

Well, no one knows. It has been a software that the teachers are demanding more and more, so now we are trying to keep the whole system stable as it grows, because you need more resources and everything will grow. But, well, I don't know where it will end up.

The commercial solutions, as he said, VMware and all these commercial solutions, are really expensive. We asked about just 20 desktops and it was already too much. The problem for us was also that they say: okay, you have to go to this company and buy, or you have to set up the storage like this and this, and the costs were a lot higher. So it is not only the software, it is the hardware they want you to run the software on. So we did it ourselves, from the ground up: we assembled the computers, we set up the switches and the network cards, we tuned the performance of the network cards, and we wrote the software and the interfaces you have seen. None of the others had this speed in creating desktops and converting them to templates.

Any other questions? Okay, thanks for your attention.