Hi, hello, welcome to the session. So you want to run Windows workloads on OpenStack. I'm Miklos Karim, I work at Halasoft as an IT director, and I have been a Microsoft MVP for the last 10 years. Let's start with the agenda for this session. First we'll talk a little about us and our environment. Then we'll look at the options you have to run Microsoft Windows on OpenStack, focusing on notes from the field. Then how to create a Windows image for OpenStack, and the drivers and configuration, with all the considerations you need to take there. Then we'll switch to best practices for configuring instances using Horizon, focused on the users; the users are the main characters here. Then we'll go to the Windows instances themselves, how to check that you are not having problems, what you need to monitor, and finally we'll see the features you get running Windows on OpenStack.

About us: we have had an OpenStack environment in production since 2016. We had been testing and learning OpenStack before that, but we went production-ready that year with Mitaka. Our company's main focus is software development, so all our users are software developers, and that's the big challenge we have here. We started out hosting Linux environments, because everything ran there and we had very few Windows workloads. However, due to COVID-19, Windows workloads are now the main workloads in our cloud; we are talking about 70% Windows versus 30% Linux, so you can see that most of the workloads running there are Windows now. Why? Because everything is now virtualized as VDI, and we use OpenStack for it.

Now, the architecture. The rack diagram here is symbolic, but it gives you an idea: we have one server running our infrastructure, and another dedicated server just for the OpenStack controller nodes. All the OpenStack infrastructure runs in containers: the databases, Keystone, Horizon, Heat, Glance, you name it, it's there. Then we have the compute and storage nodes with nova-compute, Neutron, and Ceph; yes, we are running Ceph in our environment. So that's a quick view of our architecture, and on top of it we run Windows on OpenStack.

When you say "I will run Windows", you have two options. One is to use an existing hypervisor: VMware, Hyper-V, or any other bare-metal hypervisor. You can do that, because OpenStack supports it. Or you can say: I will run it under KVM. Both are valid choices. Let's go to the first one. If you go with Hyper-V, Windows will run more natively there, and you will know how to deal with the drivers and everything. There is a link here to the configuration required: you set up nova-compute and the Neutron Hyper-V agent and you are done; you will be able to run your Windows workloads. But you can also run them under KVM. KVM hosting Windows is not a big deal either, but there are other considerations you need to focus on.
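As a rough sketch of what that choice looks like on a compute node, assuming the option names from the Nova documentation of that era (verify them against your release):

    # nova.conf -- option A: KVM through libvirt
    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    virt_type = kvm

    # nova.conf -- option B: Hyper-V, paired with the Neutron Hyper-V agent
    [DEFAULT]
    compute_driver = hyperv.HyperVDriver

Either way the rest of the control plane stays the same; only the compute nodes differ.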
Maybe you wonder why we are running on KVM and not on Hyper-V. All our servers were running Linux from the beginning, and since we were running Ceph, it was not possible to run Hyper-V and connect it to Ceph; there is no way. So we had to run under KVM. And if you run under KVM, you face the next problem: you may need to run one virtual machine inside another virtual machine. You may ask, why would you want to run one hypervisor on top of another, on the Windows side for example? Remember, our users are developers. They use Visual Studio, and Visual Studio ships an Android emulator. That Android emulator runs a virtual machine that needs all the virtualization capabilities of the hypervisor, so nested virtualization was a must for us.

However, since we started with Mitaka, that was not possible, because the parameter we needed, cpu_model_extra_flags = vmx, was not present; it only appeared in Queens. It passes the CPU virtualization feature through to the instance, so the instance can do nested virtualization. There is a link to the configuration: go to the "nested guest support" section in the Nova docs, set it in nova.conf, and use host-model to avoid that problem in the future.

Another problem you can have, unless your whole environment has just one type of processor (and I don't think so, because you keep adding new machines and they never have the same CPU model), is live migration. If you have servers with different CPUs, you will not be able to live-migrate between them. So you set the parameter I'm pointing at in that documentation, cpu_mode = custom, and then you specify the CPU model that all the hosts should present, and all the processors will behave like it. Remember that you need to select the lowest common feature set you have: if you have v3 and v4 processors, all the v4s need to behave as v3. Yes, you will lose some new CPU features, but you will gain live migration. Unless, of course, you have a good batch of machines that all run the newer processors; then you can separate them, with cells or host aggregates, decide how you want to do it, and not lose the functionality of the new processors. But if you need to mix them in the same environment, this is what you have to do.

And finally, you need to tell OpenStack that you are running a Windows instance. That is just two image properties, os_distro=windows and os_type=windows. A CERN blog post talks about why they are needed and what problems you avoid by setting them. We run with those parameters and we haven't hit any problem so far.
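Putting those pieces together, here is a minimal sketch, assuming a Queens-or-later release (the CPU model is only a placeholder; pick the lowest common model of your own fleet):

    # nova.conf, [libvirt] section, on every compute node
    [libvirt]
    cpu_mode = custom
    # lowest common denominator across all hosts (placeholder value)
    cpu_model = Haswell-noTSX-IBRS
    # pass VT-x through so guests can run their own hypervisor (Queens+)
    cpu_model_extra_flags = vmx

    # and tag the Glance image so Nova knows it is Windows
    $ openstack image set --property os_type=windows \
          --property os_distro=windows <image-id>

With host-model instead of custom you keep every CPU feature, but live migration is then restricted to hosts with compatible CPUs.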
How do you create a Windows image for OpenStack? There is the manual procedure; you have the link there to the OpenStack docs. You start from the beginning: you take the ISO file, you create the raw disk and so on, and you end up with one image. But doing that every month is very time consuming; it's better to have it automated, and for me the Cloudbase project is the way to go. Those guys are very good; they have their Windows imaging tools on GitHub (I took this diagram from them). What the tools do: they create the disk, apply the ISO to the disk, create the unattend file, inject the drivers, and start the VM; the VM installs all the updates, installs cloudbase-init, and runs sysprep. At that point you can keep the result as your golden image, so later you only need to install the updates on it month after month. After that the tools destroy the VM (note that this part runs on Hyper-V), shrink the partition and resize the disk, convert it to the target format, qcow2, compress it, and you have it ready to upload as an image into OpenStack.

So what is the key takeaway here? You can customize this project for your needs. For example, we install programs and configure Windows to behave a little differently: the desktop background and some Remote Desktop features come pre-configured, so the image is ready to go the moment a user launches an instance. Remember, you can customize it and do a lot of things there. Really a very good project to automate things and have Windows ready.

Drivers and configuration. You have the Fedora virtio drivers to put into Windows if you need them. However, those are not WHQL-signed, which means Windows Hardware Quality Labs signed, or Windows logo tested. If you don't have that, you won't have much of a problem, unless you need support from Microsoft: if you have a production machine and the drivers are not signed, you have no support for it. To get signed drivers, you need a Red Hat Enterprise Linux subscription for their virtio drivers, or the Canonical-built virtio drivers, so that they can support you and provide the configuration.

And how do I know whether my virtio driver is certified by Microsoft? Let's take a quick look. Here we have the Red Hat virtio driver; in Properties, Driver tab, we see it is digitally signed by Red Hat. Okay, that's fine? No, it's not fine: it is not Windows-certified. If we go to the balloon driver, the one for the memory, we can check that it is also only signed by Red Hat. Now let's jump to another machine. Here we have the Canonical virtio SCSI drivers, and on the Driver tab we can see they are signed by the Microsoft Windows Hardware Compatibility Publisher. The balloon driver likewise shows the Windows logo test in its signature. If you have that, you have all the support you need from Microsoft, should you ever need it.

Last but not least, the QEMU guest agent: very good to install, so that metrics flow in, to Ceilometer for example, for considerations later.
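For reference, a short sketch of the build and the checks just shown. The PowerShell entry point is recalled from the imaging tools' README, so treat it as an assumption and check github.com/cloudbase/windows-openstack-imaging-tools; the rest are standard commands and properties:

    # On the Hyper-V build host: run the automated pipeline
    # (module and cmdlet names as I remember them from the README)
    PS> Import-Module .\WinImageBuilder.psm1
    PS> New-WindowsOnlineImage -ConfigFilePath .\config.ini

    # Inside a running Windows instance: list drivers with signing status
    C:\> driverquery /SI | findstr /I "virtio balloon"

    # Tell Nova the image ships the QEMU guest agent
    $ openstack image set --property hw_qemu_guest_agent=yes <image-id>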
Now, best practices for users launching instances. What worked for us was to give the Windows disk the minimum volume size, just what Windows needs. If you hand out something larger, like 100 gigabytes, what happens is that users start putting everything on the C: drive, fill it up, and then you have to troubleshoot it. With the bare minimum, as soon as they say "hey, I don't have space", the answer is: "attach a D: drive and put your data there". So if they run out of space in the future, or need something else, you can recreate it very easily. Users understand it later, but really, start with a very minimal volume size to avoid that problem.

Talking about avoiding problems: two gigabytes of RAM? Please, no. One gigabyte? No. Four is the minimum you should have. And usually two vCPUs is the minimum for a Windows instance. You can say, "hey, can one vCPU work?" Yes, it can, but what worked for us is two vCPUs as the minimum for a Windows instance. The network needs to be configured and ready for the users, so they don't have to think about floating IPs or anything; please have it ready. Always create a volume, not ephemeral storage; in Ussuri that is the default, which is a good thing. Flavor naming conventions should be easy, so users can identify them and say, "ah, two or four gigabytes of RAM with two vCPUs", whatever. And have a security group with the default ports open.

Let's take a quick look at what we are talking about here. We launch an instance: we put in an instance name and a description. For the source you have the image; we pick the Windows image. Create a new volume: yes, always create a new volume; we don't want ephemeral Windows here. Volume size: as we said, the minimum. Delete Volume on Instance Delete: select No, please choose No, because users will delete their instances, and then you somehow need to recover their work quickly; and yes, that happens. The volume survives the instance, so you can recreate it if needed. A user with more OpenStack skills can do it on their own, but you have it there; a very good thing. In the flavor list, as I told you, the "2-4" for example is two virtual CPUs and four gigabytes of RAM. That makes it easier for them to choose, and easier for you when you troubleshoot: they say everything is running very slowly, and you can say, "hey, you have a really low-resource instance". Networks are already selected for them; they don't need to do anything. Security groups: by default, the ports they need are open, TCP 3389, 138, 139; that is something you need to evaluate for your own environment. And I think that's it: you have everything so users can launch the instance easily.

Finally, always configure the disks with a throughput limit. That means a quality-of-service policy on the IOPS, so that one user cannot consume all the IOPS. When we ran Linux instances we never hit this problem, but as soon as more Windows instances ran in our environment, we hit performance problems. What we needed was to limit the IOPS throughput each instance can consume: for example, 3000 read IOPS and 1000 write IOPS. That way one instance will not consume all the resources of the cloud. Be very careful with this; I always recommend configuring the disks this way, and there is documentation for it. It's not that users will deliberately push the limit, but sometimes some Windows processes can consume it all.
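As a concrete sketch of those last points, with our example numbers (the flavor and QoS names are just illustrative):

    # A flavor users can read at a glance: 2 vCPUs, 4 GB RAM,
    # no root disk because we always boot from volume
    $ openstack flavor create --vcpus 2 --ram 4096 --disk 0 win.2c4r

    # Cap per-volume IOPS so one Windows instance cannot starve the rest
    $ openstack volume qos create --consumer front-end \
          --property read_iops_sec=3000 \
          --property write_iops_sec=1000 windows-qos
    $ openstack volume qos associate windows-qos <volume-type>

Associate the QoS spec with the volume type your Windows volumes use, and every new volume picks up the limits.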
How do you check that there will not be any problems? In general, try to allocate enough RAM and avoid getting into memory paging, because that causes a lot of the performance issues on OpenStack. If you can monitor CPU and memory per instance, it is very helpful for troubleshooting users later. And for the environment as a whole, it is very important to look for trends. For example, in the IOPS graph here you can see our trend: it stayed below 2000 IOPS, and then it suddenly grew. You can say, "hey, 6000 IOPS is not too big", but it was not the same trend as before, and we were not doing anything special; we hadn't created anything new. So what happened? The Windows users somehow started to consume much more; they installed something they shouldn't have, and it caused those problems. If you don't monitor this closely, you can lose control very easily, at least on the Windows side. Disk space on each instance is each user's responsibility, yes, but if you can collect metrics on how each instance is doing, that is very helpful too. And training the users is the key to avoiding more problems on the OpenStack environment, with Windows as well.

Once you have all of that in place, yes, you can use all the new Windows features on OpenStack. Let's go through them. For example, here we are running WSL, the Windows Subsystem for Linux, in the new version, version two. Here we have the Windows Sandbox running as well. And Microsoft Edge can run the new Application Guard window too; you can check here, yes, it is Application Guard running in Microsoft Edge. Last but not least, the nested virtualization we talked about: we have Hyper-V running Windows 2016 inside a virtual machine, and this machine is running on OpenStack. A very good feature; everything runs seamlessly without any problem.

And with that, the summary. Remember: decide whether you will run under KVM or Hyper-V. If you run under KVM, check the processors and the parameters. Remember to have the virtio drivers, either the Fedora ones or the Windows Hardware Quality Labs signed ones; it depends on your use case and what you will need. Use an automated platform to build the Windows image; Cloudbase is very good, but you can choose whatever you want. Keep it simple for users to launch instances; do not complicate things for them, make it simple. Remember to configure the hard performance limits, I mean the IOPS. Train your users. And finally, monitor and keep tuning to fit your use case; that is the most important part. Every day you will keep tuning and end up with a very good environment for your users. With that, thank you very much for your attention, and I hope to see you in the future. Bye.