All right. Hi everybody. Thanks for coming and joining us today for a discussion about Hyper-V and the support we added for Grizzly. My name is Peter Puglia; I work for Microsoft. And my friend here is the CEO of Cloudbase Solutions. So what we want to show you today is all the great work that we did to bring Grizzly to Hyper-V for this cycle. We had a lot of additional contribution this cycle, which was really great. A lot of people stepped up. We had some new individuals add code. We had other teams step up and do additional testing. It was a tremendous help in the effort, and it helped us deliver some high-quality software, which is fantastic. We had great contribution from the IBM team, specifically around our Quantum work. We had additional Hyper-V evangelization from other people in the community driving awareness towards us, which helped individuals get in touch with the right people for them to start using Hyper-V. So we've done a good job driving more awareness in the community, getting more people involved, getting more code in, getting more functionality into Hyper-V. And I'd say we had a great success for Grizzly. A lot of thanks to these guys, a lot of thanks to you guys out there. And I truly appreciate it. So once again, thank you. We had some great work done by that individual right there to add Cinder support for the Grizzly release. Pedro, great job. Thanks for maintaining it, thanks for keeping up and putting the time in. Really appreciate it. So let's talk about Hyper-V in general. Specifically in OpenStack, our target development platform is Windows Server 2012, and that's specifically because 2012 is more feature-rich. We have a lot of the high-level functionality, like shared-nothing live migration and those types of things, that we can take advantage of. 
And actually it allows us to deliver a really great use case for Hyper-V in an OpenStack ecosystem, in a way that is consistent with the shared-nothing model of OpenStack deployment. And we can actually put Windows in that environment and have a great success. So from a Hyper-V perspective, most people don't realize this, but Hyper-V Server, which is a product SKU on its own, is a free product that Microsoft offers to the general public. You can take it, you can download it. And from a licensing perspective, the point at which you incur costs is when you license Windows workloads on top of Hyper-V Server. From an open-source perspective, you can easily build your OpenStack infrastructure on Hyper-V Server and run as many Linux and FreeBSD workloads as you want on that without accruing any additional licensing costs. So that's something that most people aren't even aware of, and it's, you know, one of the, I think, great things that Microsoft has done. Additionally, if you want to run Windows workloads in your environment and you need the sort of cost benefit that you get from a different licensing model within the Microsoft licensing ecosystem, you would use Windows Server 2012 with the Hyper-V role, and that, if you license it properly, allows you greater VM density for Windows workloads specifically. And we can also, for development purposes, use Windows 8. There actually is Hyper-V functionality within Windows 8, although you wouldn't necessarily want to use it in production for a Hyper-V cloud, right? So here are just some of the key features. When we start talking about 2012, you know, there's an enhanced network virtualization stack. Some of the things we take advantage of are the extensible switch and the PowerShell interface. We also take advantage of the ability that Hyper-V now has to provide shared-nothing live migration. 
So we can actually live migrate virtual machine workloads without having any shared storage. It makes it really convenient, especially when we start talking about scaling out, specifically with Windows. We don't have any components that would necessarily cause us issues when we want to go, I guess, far to the right in terms of scaling out. So in terms of OpenStack and Grizzly, what exactly was delivered this time around? Well, obviously, we had a lot of enhancements in the Nova Compute driver. We also did development to start the process of an official Quantum Hyper-V plug-in. And also the Cinder volume support. And one of the, I think, most important pieces is cloud-init for Windows. So the same cloud-init functionality that everybody enjoys on Linux, you can now have on a Windows workload. And that's quite awesome. And for those of you who haven't tried it, I suggest that you do. We actually had some great discussion this week, and I bet you're going to start seeing it in a lot of public clouds from people around here. So it's a lot of great work coming out of the community and just a really successful release for us. So from a purely Hyper-V perspective, when we start talking about Nova Compute, well, what is it? People think of OpenStack deployments as different deployment models, right? We've got the XCP model where they use a virtual machine. Well, in this case, with Hyper-V, we actually install the driver onto the hypervisor. So just like from a Windows IT perspective, we're installing an application on our Windows server. That application installs a service, and that service is actually the Nova Compute service, which talks back to the infrastructure. It doesn't require Windows clustering. We don't require shared storage for live migration, and the only additional requirement would be Active Directory, specifically for live migrating between the nodes. 
But outside of that, that's the only additional component required, and that's for that specific piece of functionality. So thanks to this guy here, we have all these great features now in Hyper-V, right? Because literally when I first started using Hyper-V with OpenStack, it was a very, very limited feature set. And today I'm actually really proud to say we're pretty much feature-complete with KVM from a hypervisor functionality perspective, right? So that means you can deploy Hyper-V in your OpenStack cloud tomorrow and start using it for your Windows workloads immediately, right? So great job. Exactly, yeah. Pretty much right now, what we're seeing from a community perspective and typical community use cases is exactly that model, right? Most people want to run their Linux on KVM, and to be perfectly honest, it makes the best sense to run your Windows on Hyper-V, right? You get the cost benefit of that, and also, you know, let's be honest, Windows is optimized to perform on Hyper-V. Yep, yep. And obviously, you know, once again, if you just happen to be a Hyper-V fan, or you're replacing ESX or something else and Hyper-V's getting involved, like I said, you can actually get the cost benefit of being able to use Hyper-V for free for your other workloads outside of Windows. So that's a pretty substantial thing. So, Alessandro, you want to... Yeah, here we go. Thanks, Peter. Okay, let's get now to the details. Let's see some of the features that we implemented right now. As you probably know, we introduced the first version of this driver in Folsom, okay? As you know, Hyper-V was not included in Essex. We got involved with the Folsom release. So, in San Diego, at the previous summit, we presented our work, okay? That was just, let's say, probably 5% of what's available today. We did a huge amount of work in the last six months, okay? 
And how many people did actually try Hyper-V on Folsom? Okay, thank you. Thanks again. How many people plan to try it on Grizzly? Nice. That's what I'd like to see. Thanks. Very good. So, let's take a look now at some specific features. For example, we have volume attach and detach. We have the Windows iSCSI Initiator service enabled, okay, which runs on every compute node. This is just a standard part of Windows, so it's available on every version of Hyper-V. As Peter already said, it's absolutely irrelevant if you're using the free Hyper-V or the full Windows Server with the Hyper-V role enabled. Actually, we typically prefer the free Hyper-V for the simple reason that you minimize the amount of software that you need to update, let's say, in the parent partition, okay? Because for the rest, everything that we need, for example, the WMI calls, the environment to run Python, and everything else is already available in the free edition, okay? I'll add to that, if you don't mind. Sure. So, one thing to just clarify. With Hyper-V Server, you get the reduced surface of Windows Core already built in. Now, today in Windows, we also have the ability, even with Datacenter, to have that reduced core surface. So, if you do need the additional licensing, you still get the benefit of that slimmed-down, reduced footprint. So, it's available both in Hyper-V Server and in Windows Server Core. Okay, so we have the full volume attach-detach feature. We have also boot from volume, so if you want, you can avoid having any storage locally on your server, so everything can run out of your iSCSI storage. I mean, Cinder in this case. Live migration, okay? Hyper-V has excellent live migration support. Actually, when we introduced it in Folsom, we were the second hypervisor doing it, okay? 
So, each compute node must have Active Directory domain membership, that's the only requirement, because Active Directory is used, of course, for the credential management across the nodes, okay? We're using shared-nothing live migration, so you don't need any cluster. This is very important because the idea here is that we want to scale up to thousands of servers, and we cannot do that with a traditional cluster implementation. Shared-nothing live migration can be enabled via simple PowerShell commands, or from the Hyper-V management GUI, and very important, we also do it automatically, directly inside of our installer. This is a new feature that we added for Grizzly, which is resize and cold migration. It was not available in Folsom. The root VHD is resized to the size specified by the flavor. Copy-on-write VHDs are automatically merged with the base disks, as VHD differencing disks cannot be resized, okay? So, we do all the wizardry behind the scenes to make sure that this can happen. We are also going to introduce VHDX support for Havana, okay? This will actually require full support for the version 2 WMI API used by Hyper-V, which means that it will not be available on 2008 R2. How many people are using 2008 R2 here? Okay. Oh, quite a lot, I would say. Now, 2008 R2 with Hyper-V? All right. Glance integration. We have native Glance client support on Windows. We can upload images directly from Windows, or, of course, from Linux if you prefer. No, there's no ISO format. It's a VHD. Yep. How do we use this VHD? Like you use QCOW2 for KVM, for example, you use VHD here. It's actually the same format used by Xen, for example. Quantum. Quantum was one of the big deals here. As you probably recall, for Folsom, we had only some basic Nova networking support, meaning that we had flat networking, okay? We didn't invest in Nova networking for the very simple reason that it's deprecated. So, all of our investments right now are on Quantum. 
Is anybody of you still using Nova networking? Don't be shy. Okay. That's all right, Jim. I won't hold it against you. Let's say our plan is, of course, to go on and work only on Quantum, but if you guys have some very specific environments in which you cannot migrate to Quantum, please come visit us at the booth and we can discuss to see if it's worth it, if we can find a solution to support you on Nova networking, okay? So, we have a new plugin for Hyper-V, which very importantly is part of Quantum core, okay? We also have a Quantum core maintainer here, from IBM, who helped a lot in the work. Thank you. Supported network types: VLAN, flat, local, with tunneling coming pretty soon. We have a plugin-agent model. So, plugins actually run in the Quantum server, okay? And the agents are running on each Hyper-V compute node. It's a model which is very, very similar to what OVS does. It's so similar that we decided to also maintain full interoperability. What does this mean? We decided to keep the same identical protocol used by OVS between the plugin and the agent. So, what you can do is have an environment where on your controller you have an OVS Quantum plugin, okay? You just take Hyper-V, install our components, including the Hyper-V agent for Quantum, and simply plug it inside of your network, okay? And it will just be recognized by the Quantum OVS plugin as a Quantum agent, okay? Of course, this is limited only to VLANs right now, for the very simple reason that Hyper-V doesn't support the tunneling protocols for the moment. So, you can use the OVS plugin with Hyper-V agents, or vice versa if it makes sense for your environments. It's limited to compatible L2 protocols, so flat networking and VLANs. And, this is also very important, you can use the L3 and DHCP agents with the Hyper-V plugin, okay? So, all the L3 area is currently delegated to Linux, so no need to run all the L3 part yourself. 
L3 meaning DHCP, meaning routing and so on, okay? No need to run it outside of Linux. You can use exactly the same components that you're using for OVS or for the Linux bridge. Let's go on. Okay. Most people in the OpenStack world are not too familiar with the Windows world, okay? And not so many people in the Windows world are particularly familiar with the OpenStack or Linux world, okay? So, we are sitting somewhere in the middle. We have a lot of experience in both areas. And we were thinking about what's the easiest way to provide a solution to deploy OpenStack, specifically the Nova Compute and the Quantum parts, on Hyper-V. So, we decided to create an installer. This installer will create an independent Python environment to avoid any conflict with existing applications, okay? It will install and register all the required dependencies, so you don't have to get crazy with various pip installs or packaging tools and so on. It dynamically generates a nova.conf file and, of course, also a Quantum configuration file, based on the configuration that you are putting into the installer or into the unattended setup. It creates a Hyper-V external switch if required by your environment, okay? So, also here, you don't have to have any specific knowledge about Hyper-V. It does everything for you. It registers, of course, Nova Compute as a service and starts it. This is very important because Nova Compute will be seen by Windows as an absolutely normal Windows service, okay? You will just start it and stop it like every other service in Windows. Inside of it there is actually a wrapper that we developed ad hoc for this, and that will spawn a Python environment that will contain, of course, all the Python code that you need to run, okay? So, very simple, very useful, and, of course, whenever you restart the host, it will automatically start Nova Compute as well, okay? 
And one thing to add here, the whole goal of our efforts here is to make OpenStack on Hyper-V easy for Windows IT pros, so they can come in and have an experience, right, that they're used to, while at the same time providing that same experience to Linux professionals who may want to dig down deep and get nitty-gritty in the code; you can still do that. However, from a strictly Windows perspective, you won't notice any difference between installing this and installing any other application, and from a service perspective, it's the same user experience that you would expect on a Windows platform, okay? Yeah, if you go on our website, we have a blog post showing all the screens, every single screenshot, every single instruction. I will also do a demo right now about how to install it. And please contact us, of course, if you have more questions. Okay, so other features: it enables and configures Hyper-V live migration if you need it, of course, okay? This will require, of course, domain membership, but it will avoid the need for you to go on the Hyper-V side, use the Hyper-V configuration tools, understand how it works, and so on. It will be very, very easy. It's just about enabling it, setting the username, and go. We take care of all the complexity behind it. Another important thing is that it can be executed fully unattended and automated, okay? So most people nowadays are using Chef, Puppet, or old-school group policies, and so on, okay? Especially if you want to deploy a considerable number of servers, okay, for sure you don't want to stay there and go through the graphical user interface and press next, next, next all the time, okay? You want for sure to have a solution in which you can just deploy it in a completely automatic and unattended way. Our solution consists of using the MSI tools, the installer tools provided by Windows, okay? Here's an example. Don't get scared by the syntax, it's very easy. 
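[Editor's note: a command line along the lines of the slide's example might look like the sketch below. The MSI property names shown here are illustrative placeholders, not the installer's exact option names; the real list is documented on the Cloudbase website.]

```shell
# Unattended install sketch, run on the Hyper-V node.
# /qn = no UI, /l*v = verbose log for error control.
# Property names (GLANCEHOST, RABBITHOST, ...) are illustrative placeholders.
msiexec /i HyperVNovaCompute.msi /qn /l*v install.log GLANCEHOST=controller GLANCEPORT=9292 RABBITHOST=controller INSTANCESPATH=C:\Instances ADDVSWITCH=1 VSWITCHNAME=external
```

The same command can then be dropped into Puppet, Chef, or a group policy as a single deployment step.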
I would say there are about 32 configuration options that you have available from the installer. What you're specifying is, for example, the Glance host, the Glance port, Rabbit, the instances path, the switch that you want to create, specific settings for the Nova configuration file, and so on: ports, passwords, Quantum configuration. All the stuff that you see inside of the graphical user interface is available here, okay? It's not difficult. It's well documented on our website, okay? It's just about creating one single script and running it. At this point, you can just plug it in as a step in Puppet, in Chef, group policies, whatever you want, okay? No need for user interaction at all. And you have, of course, also perfect error control because of the logging feature provided by the MSI installers. So here is how it looks. Of course, we're going to have a demo pretty soon. Okay, so typical MSI features. Here, this is a screen where you're going to create a new external virtual switch, okay? Behind the scenes, during the install phase, there is a lot of code that will automatically interact with the hypervisor, specifically with the WMI interfaces. This is one of the comments that we like. A user from a very, very well-known company in the industry wrote it on our blog: "Well done, thanks. I wish the install on Linux was as easy." Any Microsoft folks, additionally, please write that down. Okay. That's very important. Of course, we made it, so it's our opinion, of course. But with almost everybody we spoke to, the first thing that they say is that this is the easiest possible way to deploy OpenStack. And once again, it's a testament to the approach that we took, specifically to make it that easy for the IT professionals that we expect to be using it. Okay. So no need to go and look at complicated instructions in blog posts and everything. Just one installer. The installer will ask you the required information. 
And at the end, you will just have a nova.conf file with all the required information. Questions? Yeah, sure. It works exactly like any other Windows update. So you can just go and update the installed version with a new one. It will, of course, replace the components and so on. So we have two main options; you can see them on the website. For the current branch, meaning Havana right now, we have a nightly build. So every night, a new version comes out with all the latest software. On the server, we have an automated component that pulls all the pieces down from Git, okay, packages the installer, compiles everything, and uploads it automatically. So that's what we use. Normally, we use Havana and DevStack for development. We have a three-node environment that we typically use. I'm going to show it to you right now. Plus the latest build. So every night, we just download and install it and so on. We have the same thing, of course, also for Cinder. We have the same thing for Cloud-Init and so on. And, of course, we have also the frozen stable builds. So if you go on the website, you also get the stable Grizzly one, which is, of course, frozen with the latest components. We're going to update this one as soon as 2013.1.1 is available, okay? You will also see in the installer that it's very clearly stated what version you're installing. And you can update it like any other Windows solution. Or, since it's a Python environment anyway, at any moment you can also... Let's say that you want to update some code with the latest code from the repository, you can just run git clone, python setup.py install, and off you go. So the idea is to make it as easy as possible for Windows IT pros and as easy as possible for Linux and OpenStack professionals. Okay. This is anyway the goal that we wanted. I never saw anything as easy as what we did for that. But anyway, I did it, so... Let's pretend I didn't do it for a moment. 
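[Editor's note: the source-update path just mentioned — git clone plus setup.py against the bundled Python environment — might look roughly like this. The repository URL and the install path of the private Python environment are illustrative assumptions.]

```shell
# Refresh the deployed Nova code straight from source (sketch).
# The MSI installs its own private Python environment; the path below
# is an assumed location, not the installer's documented one.
git clone https://github.com/openstack/nova.git
cd nova
C:/OpenStack/Python27/python.exe setup.py install
```

After that, restarting the Nova Compute Windows service picks up the new code.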
Okay, here is our Quantum Hyper-V setup. This is what we are going to use for the demos. It's very similar to the typical environment that we expect people are using in production, and actually what our customers are using. You have dedicated servers for the controller. You have a dedicated server for the networking. And you have, of course, dedicated compute nodes. On the compute node side, we didn't put everything in the diagram, so as not to take up too much space. What our customers typically require, as Peter already said, is that they want to run their Linux workloads on Linux and the Windows workloads on Hyper-V. Okay. I'm not entering into the discussion of which one is better and so on. It's just a fact that customers feel more comfortable with it. Linux works very well even on Hyper-V. But we prefer that customers have a choice and can use whatever they prefer. And if there's anybody interested in running Linux on Hyper-V, I actually have tons of experience with that, so please reach out to me and I can help you. Even FreeBSD, for example, runs very well as an additional workload. Okay. So multi-node setup: controller and network, one compute node for Hyper-V, a second optional compute node for Hyper-V. This is used for live migration. And we typically use additional compute nodes for KVM. This is what we typically use for testing, and it is very similar to a production environment. Of course, a production environment doesn't have two Hyper-V nodes, it has hundreds. It doesn't have two KVM nodes, it has hundreds, okay? Especially with Grizzly, with the great no-db-compute feature, OpenStack scales up amazingly well, since you don't have any direct database interaction from the compute nodes anymore. I put here in the slides something for you, because it's very complicated to create a multi-host environment in DevStack, okay? So here is the full configuration. Controller: we enable VLANs, one physical network, multi-host enabled. 
We disable, of course, Nova networking, and we disable, of course, Nova Compute on that node. And we enable the Quantum service. On the network node, we enable, of course, all the other Quantum pieces, especially the Quantum agent with Open vSwitch, the Quantum DHCP server, the L3, Layer 3, okay? And the metadata. We specify which one is the controller host to which we connect in all the remaining configurations. Very important: the OVS bridge mapping, physical network to bridge. KVM, if you want to use it, okay? So in this case, you have, again, enabled services, simply the Nova CPU and the VNC stuff if you need it. Since this is not a session on KVM, I guess we'll just move on to the demo. Okay, questions so far? Okay, very well. So all this stuff is running now on my laptop. So I'm trying to defy Murphy's law, which is the famous rule of never doing a demo on a live network, and the second one, never run everything on your laptop. So we have a double single point of failure here. Okay, let me see if we can get full screen even here. I just started installing; not the best view. This is not Hyper-V, this is VMware Fusion, so it's not our fault if there are glitches there. Looks like it doesn't like it. Okay, not a problem. Let's do it like this. Pretty old school. Okay, this is our installer. So we accept the licensing terms, which is simply Apache 2. So not too much to accept here. We go on; here you have the configuration options. So you can decide if you want Quantum or not. If you say no, it will just configure Nova networking, okay? So it's up to you. If you say yes, then it will, of course, ask you all the configuration parameters for it. Live migration: if you say yes, then you have to go and set up all the Active Directory configuration that we were talking about. iSCSI Initiator Service: this is a service which is already part of Windows. We just enable it for you and make sure that it will run automatically. 
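[Editor's note: the multi-host DevStack configuration described a couple of steps back might translate into localrc fragments along these lines. Variable names follow DevStack's Grizzly-era conventions; hostnames, VLAN ranges, and bridge names are illustrative.]

```shell
# Controller node localrc sketch: disable Nova networking and local compute,
# enable the Quantum service with VLAN tenant networks on one physical network.
disable_service n-net n-cpu
enable_service quantum q-svc
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
PHYSICAL_NETWORK=physnet1
MULTI_HOST=True

# Network node: the Quantum agent with Open vSwitch, DHCP, L3, and metadata.
enable_service q-agt q-dhcp q-l3 q-meta
OVS_PHYSICAL_BRIDGE=br-eth1      # the physical-network-to-bridge mapping
SERVICE_HOST=controller          # point everything at the controller

# Optional KVM compute node: just the Nova CPU service plus the VNC bits.
ENABLED_SERVICES=n-cpu,n-novnc,quantum,q-agt
SERVICE_HOST=controller
```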
So the idea for the installer is that you don't need to do anything else on Windows. You install Hyper-V, which is free, as I said before. You install our software; no need to do anything else. Very important, we have also FreeRDP. We have also here Mark, the author of the FreeRDP project, which is an amazing project which takes, actually, the RDP protocol, which now is well documented by Microsoft, okay? And Mark created with his team an amazing open source solution, okay? Which works on the Mac, which works on Linux, and it works, of course, on Windows. The free Hyper-V version doesn't have any graphical tool. For example, you don't have any console access directly on the hypervisors. That's why we wrapped FreeRDP inside of our installer, and we provided also a PowerShell snap-in so you can access the console of your guests, of your instances, okay? In a very, very simple way. This is not something for your users, but it's very important for you when you troubleshoot. Without it, you're almost blind, of course. Or you install a third server where you install all the tools. Let's go on. Next, you already saw this screen. It's about choosing an external virtual switch. For example, now we can choose the external one, or we can say add an additional virtual switch, and you can choose on which network adapter. Very important: an external virtual switch is made of, of course, a Hyper-V component and also a network adapter, because otherwise it wouldn't be external by definition. So again, no need to know anything about how Hyper-V works. You just have to follow the wizard, which is asking you for the proper steps. Okay, external. If you create a new one, you can even set it as shared for management. So if you have only one network adapter, for example, you can just use it both for management and for your virtual machines. Okay, now we start putting in the data. So this is gonna be our controller. So the Glance API host. Port is the default. 
We support Rabbit or Qpid, okay? Anybody using Qpid here? We got a lot of requests, so we ended up adding it inside of the installer. So either Rabbit or Qpid; you just specify the host, you specify the port. You have a user, a password. Here we have the instances path, which is actually where you're gonna store all your instances, okay? Next, Quantum configuration. So we have the Quantum host. We have the service port. We have the admin username. Our super secure password was meant to go here. You didn't see anything. Okay, and that's it. And this is the Keystone URL, okay? If you notice, those are all the pieces of information that you are required to put inside of the nova.conf, okay? So no need to go to a blog post, getting crazy in troubleshooting because you forgot one configuration option. Simply, here, if you don't put it in, you don't go on, okay? Because the next button will be disabled. Okay, so more information. For example, if you want to enable logging, if you want to use copy-on-write images, if you want to always attach a config drive, if you want to inject the config drive password, if you want to have verbose logging. The limit CPU features option is important if you want to do live migration between hosts with different CPU architectures, and so on. That was it. Now, it will start. It will install everything. It will create your Python environment. It will deploy all the components. It will create the services, and it will start them for you. That's it, okay? Now, once again, we deliver the same experience for Cinder and for Cloud-Init, with the same installer mechanism, same ease of installation, same ability to wrap it in your favorite DevOps tool set. If you want to uninstall it, same stuff. Simple MSI installer. Once again, a typical Windows IT pro experience. All right, we can let it go in the meantime. We have, of course, another machine here already set up. Let me see. There are some issues with the projector, but we can solve them right away. Okay, here it's already installed. 
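[Editor's note: for reference, the fields collected by the wizard end up in the generated nova.conf roughly as sketched below. Option names follow Grizzly-era Nova conventions, but treat this as an illustration, not the installer's exact output; hostnames and values are placeholders.]

```ini
# Sketch of a generated nova.conf fragment (illustrative values).
[DEFAULT]
compute_driver=nova.virt.hyperv.driver.HyperVDriver
glance_api_servers=controller:9292
rabbit_host=controller
instances_path=C:\Instances
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://controller:9696
quantum_admin_username=quantum
quantum_admin_auth_url=http://controller:35357/v2.0
use_cow_images=true
force_config_drive=true
config_drive_inject_password=true
limit_cpu_features=true
```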
Everything is running on the same host. Here's, for example, the log file, okay, with all the information. So, it's all very, very friendly. Okay, let's go on. So, we have... I don't know if you guys can see it. Is it big enough? I don't think so, all right? Better? Okay. There you go. So, as I said before, we have a controller host. This is just DevStack, so it's an Ubuntu machine with DevStack. We have a networking host, okay? We have another KVM host here that we are not using, of course, and we have now these two Hyper-V hosts. So, let's take a look at the situation. Looks good. We have already a logged host. I guess that we deleted it. Okay. So, now, let's take a quick look at what we did here. The first thing is to create an image in Glance, and I will cut it very short. It's going to be a Windows image, okay? We have also, of course, images for Ubuntu and so on. Glance image-create, and to answer the question of the gentleman related to the hypervisor type, okay? Making sure that a given image will run only on Hyper-V, and another only on KVM, and so on, is amazingly easy in OpenStack. You just specify this property: hypervisor_type equals hyperv, qemu, xen, and so on, okay? So, you create your dedicated images, and off you go. The name of the image, container format bare, disk format VHD, okay? And this is just the VHD file that I get from another partition, okay? That's it. At the end, I will find, of course, the image here. Networking. So, this is the networking script. Notice how we use awk in this case to extract all the information that we need out of the output of the creation commands, okay? All the IDs, so we can pass them to the next step. So, basically, we create a network. We create a subnetwork. We create a second network, this time of type flat, with a related subnetwork. We create a router. We add the two interfaces to the router, okay? I don't know how familiar you guys are with Quantum. Quantum, it's a beautiful beast. 
I mean, it's incredible. It's very powerful, but, of course, it requires a lot of knowledge to be configured properly, okay? So, all this stuff will be in blog posts on our website. Okay, we create an external interface, an external network in this case, and we just connect it to the same router, setting a gateway, okay? So, basically, we're going to have two networks, one with VLANs and one flat. That's it. Okay, time for a boot. This is how we boot an image. So, nova boot, with config drive false in this case, okay? I give a flavor, the image that we want, the key, and the NIC. That's it. Well, we specified it when we created the image, right? When we added the disk image to Glance, we specified the hypervisor type. So, now it knows that it can't deploy that specific image on anything but Hyper-V. The VHD file has to be created on Hyper-V. So, when you're preparing your images, obviously, you'll have to be preparing them on Hyper-V for Hyper-V. And we have a free image for you to download for evaluation. We'll talk a little more about that. Yeah. That's also something that we can do, of course. Okay, let's go on. It's building, building. It takes a bit, of course, to come up. Here, we can just ignore this part. I'm sorry. In this case, being a simple demo, we are going directly to the local storage. During the install phase, it asks you where you want to store the local data, okay? So, in this case, it's C:\instances and so on. You can have boot from volume, so everything stays on the iSCSI storage, okay? Yeah. Basically, you define that in your initial setup or in the nova.conf. Okay. So, now, if I take a look here, Get-VM will show me that there is a running VM called instance and so on. And look at this. I can do simply like this. So, yeah, this is PowerShell. This is actually the PowerShell work that Alessandro did. So, Get-VM, the instance that I want, okay? Get-VMConsole, and voilà. We have a full console with all the stuff. 
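[Editor's note: pulling the demo's CLI steps together — the Glance upload with the hypervisor pinning, the networking script's awk-based ID extraction, and the boot. Image names, network names, the CIDR, and the UUID are illustrative; only the stubbed awk extraction at the end actually runs here, the rest is shown as comments since it needs a live cloud.]

```shell
# 1) Upload the Windows VHD and pin it to Hyper-V hosts (illustrative name):
#      glance image-create --name windows2012-eval --container-format bare \
#          --disk-format vhd --property hypervisor_type=hyperv < windows2012.vhd
# 2) Networking script: the Grizzly-era CLI prints tables whose rows look
#    like "| id | <uuid> |", so awk's fourth field is the value we need:
#      NET_ID=$(quantum net-create demo-net | awk '/ id /{print $4}')
#      quantum subnet-create $NET_ID 10.0.1.0/24
#      quantum router-create demo-router
# 3) Boot, passing the captured network ID along:
#      nova boot --flavor m1.small --image windows2012-eval \
#          --key-name demo-key --nic net-id=$NET_ID --config-drive false demo-vm
# Stubbed row standing in for real CLI output, to demonstrate the extraction:
sample='| id | 84b6a1f0-e8d3-4c1a-9d2e-7f3c2b1a0d9e |'
NET_ID=$(printf '%s\n' "$sample" | awk '/ id /{print $4}')
echo "$NET_ID"
```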
Now, this machine is coming up, okay? It just arrived, okay? It's up and running. Now, Cloudbase-Init is starting. It's configuring everything, and it will shut down and restart the machine again, setting the username, the password, enlarging the local storage, and what else? Yeah, of course, setting the username, running user data scripts, and all the stuff that you need to do, okay? The same thing that you would see on any Linux machine. Another important thing: I'm not using Horizon right now because I simply want to show you how it works with the nova APIs directly and so on, okay? This thing here, for example, is not meant to replace the dashboard in Horizon, okay? It's simply a troubleshooting tool that you may need, of course, to see what's going on. No, we're not using... Sorry, I was going to say we're not using VNC, right? That's another question, and actually some more work that we did for Grizzly. Okay, very quickly. We're using both config drive and the metadata service. So, because of time, I want to talk about a couple of things, actually, specifically about this instance. Because this is actually a very special moment, and there was a lot of work done that actually amazed me, okay? And I have to thank that gentleman over there for that work, specifically. So, what Alessandro's company is about the first to be able to do is to offer a Windows cloud image for you to take and test in your OpenStack environment, specifically, fully baked, fully sysprepped, fully ready to go into your Glance environment and deploy onto your OpenStack infrastructure, okay? And Microsoft, graciously, through their legal department, was able to create a license that allows you to do that in a 180-day eval environment, okay? So, at least now, you can take it and you can test this stuff. You can test cloud on Windows.
You can see how well it works on Hyper-V in an OpenStack environment, and you don't actually have to create the image yourself. Very important: you have the images ready-made for Hyper-V, for KVM, and for XenServer. You just download them, put them in Glance, and off you go. That's it. It's an evaluation copy, so take care of this; Microsoft entitled us to do this only for testing OpenStack servers, okay? From a strictly technical perspective, and only a technical perspective: if you put in the user data script the DS command, okay, the one that you see here, you can specify your product key, and by that fact you are accepting a new EULA, a new end-user license agreement, okay? You get the license agreement for the production Windows, okay? From that moment on, you are no more in evaluation mode. You are in full production mode, so to say. But be very careful, because this image is provided only for testing, okay? Important part: the image comes with all the drivers and tools, so VirtIO, XenServer tools, Cloudbase-Init, which is what a Windows cloud image needs. The image is sysprepped, okay, so it's already generalized. There is nothing that you need to do. Just take it, put it there, go. The image is as small as possible, small in Windows terms, of course, okay? Meaning that it's 6.8 gigabytes. Let's say that CirrOS is a little bit smaller. An order of magnitude or two, I think. So, for example, Microsoft recommends having 32 gigabytes, okay? We made it 16, so you can anyway create a flavor and deploy with 32, and Cloudbase-Init will automatically enlarge the partition to get to 32. Let's talk briefly about Havana. Sure, okay. So, some of the things that we are looking to address in Havana, and what I'm gonna say is probably the first one that we need to address, is the migration from the WMI v1 namespace that was originally coded by cloud.com.
In order for us to support future versions of Windows, we're going to have to move to the WMI v2 namespace. So, that's one of our key things to address in the upcoming Havana release. Additionally, we want to add VHDX support, and obviously there are going to be features that become available to us once we move to the new WMI version, right? And we want to be able to start taking advantage of those. So, the other key piece is obviously Ceilometer. Now that it's becoming a baked-in, important component of OpenStack, we want to make sure that we can pull metering data out of Hyper-V as well for those people who are going to be using Ceilometer. That's another area we'd like to address. And obviously, you know, the other additional areas: I know our friends at CERN have specific problems around Active Directory and Keystone scalability, and that's an area we'd like to help them address as well. So, if any of you have additional things that you would like to see come into Hyper-V, please join us in the design session later on today. And we'd love to hear your input. We have an additional demo for FreeRDP. Excellent. Do you have questions so far, features that you would like to see? Actually, we should just briefly touch on that before we let everybody go. The one other component that we weren't able to show you right now, because of time constraints, is the FreeRDP gateway. And that gateway actually will provide access through Horizon to the guest seamlessly. Yeah, Tavi has a demo for it right now. And please stop by the Cloudbase Solutions booth. Yeah, and very important, we have a design session today. Okay, we will show part of this stuff and discuss the rest. Just a second, you had a question? Yeah. Well, I'll add a small parenthesis here. We are running tests with Samba 4 now. So, Samba 4 is an open source Active Directory implementation, okay?
If you run that one, which is, of course, not a solution supported by Microsoft, okay? In that case, you have zero licensing costs: free Hyper-V, free domain controller. Anyway, frankly, the cost of the domain controller is... Yeah, at that point, you either have it in your environment already, or, you know, to be honest, you could actually run it on one of those... And frankly, consider this: Microsoft gives you licensing like SPLA or volume licensing. If you run that, it gives you unlimited virtualization rights, okay? So, you only have to provide the license for the physical sockets, okay? So, if you run, for example, say, 50 or 70 virtual machines on top of a host, the cost of the licensing goes down to, like, $1.50 or $2 per single machine, per month. Then you don't need it at all. And it's only required for the service account, because we need to have authentication between the nodes to be able to provide live migration, right? So, if you don't need that, then you obviously don't need Active Directory. So, we have a technical component, because we are leveraging the same components, with a difference. We have something more compared to System Center, which is the fact that we connect the iSCSI volumes directly to the host and pass them through to the guest, okay? So, whenever we live migrate, okay, we check on the host if the volumes are connected over iSCSI, and connect them if needed. We check, of course, the order, in order to be sure that they are connected to the host. So, if you boot from volume and you don't have local storage, live migration consists only in migrating the configuration file of the machine, okay? And reattaching the volumes on the target. Which means that the migration will take less than one second. It is, actually, because since we are using copy-on-write images, okay, you don't copy the base disk, because it's already on the target. What you copy is only the delta.
And anyway, your user will continue working on the source machine up to the moment in which the disks are migrated, and you will have only a bunch of milliseconds in which the user will be disconnected from one and reconnected to the other. So, it's very, very performant, even in our situation, in which we decided to support shared-nothing live migration, because it's the best option in a case in which you have hundreds and hundreds of servers. My belief is that this is one of the best, if not the best, live migration options on the market. But, again, I wanted to say it. Go ahead. I think we should discuss it a little bit more. Yeah, definitely. Please stop by. If I got it right, we cache locally all the base disk images from Glance, okay? And when we spawn, we don't need to fetch them again. So, for example, when you saw me booting a machine, the base image of a few gigabytes was already there, because a previously spawned machine had cached the image copy locally. So, I don't know if this is related to what we are talking about, but please stop by for a second and I will talk with you. Yeah? I think it's like four. Four, yeah. In Active Directory, the current version has some very big scalability issues that came out at CERN, okay, where they have thousands and thousands of users. So, anytime that you need to check a user identity and so on, it will repeat the same identical query against the entire Active Directory branch. That's not a limit of Active Directory. That's a limit of how LDAP is implemented currently in Keystone. There is a session, I believe, today about Keystone in which we are going to address exactly this specific issue. And then we are going to implement it afterwards. No, it's free, absolutely free. You can go on our website, download it and use it. Completely free, supported and so on. Okay, more questions? Okay, I guess that we are running out of time. Well, thanks everybody. Thanks a lot for coming.