All right. There we go. Sounds like we're ready to go. Hello, everybody. My name is Peter Pouliot. I work for Microsoft, and I work on OpenStack. I'm here today with Alessandro Pilotti from Cloudbase Solutions, and we're here to tell you all the wonderful things we did during this development cycle for the Windows Server platform. As you can see, it's the two great tastes that taste great together, as I like to say. So we'll leave it at that. What we're going to talk about today is Windows as a guest and what we do to enable that in OpenStack, Windows licensing in OpenStack, Hyper-V and what you can do with it in OpenStack, as well as what's new for Windows in the Mitaka release cycle. OK, here we go. How many of you use Cloudbase-Init? I can't necessarily see you with this light, but OK, we get a few. Alessandro? Yeah, definitely. So as you know, Cloudbase-Init is pure Python code, so it's very portable and everything. It was designed from day one to be portable across operating systems. We typically use it on top of Windows, and the new introduction in the Windows family is called Nano Server. Anybody heard about Nano Server yet? OK. Great. Good. And for those who are newly indoctrinated, Cloudbase-Init is the guest initialization agent for the Windows platform, giving you equivalency to cloud-init for your automation needs. We will talk a little more about Nano Server later, but it's the first time that Microsoft introduces a new, completely different type of operating system, still based, of course, on the same core provided by the Windows Server foundation. The important thing is that we have a different way of deploying applications, a different way of deploying workloads, on top of Nano Server, and that Cloudbase-Init will still be there. So you will have the same type of initialization, the same type of experience, from this perspective. Supported Windows versions, so that's the new part.
Of course, we will continue to support every version of Windows which is supported by Microsoft. So Windows 7, 8, 8.1, 10, and so on. I think I forgot to put Windows Vista, but nobody complained, I don't know, right? Windows Server 2008, 2008 R2, 2012, 2012 R2. And, most important now, Windows Server 2016, which is in technical preview. The nice thing about those technical previews is that they are a very nice way to introduce new features to the audience. So feel free to download them, test them, try them, and everything. And of course, Cloudbase-Init works perfectly fine on those, which means that they work perfectly fine in any OpenStack deployment. It doesn't matter if you use KVM, or Hyper-V, or ESXi, or whatever else. And Nano Server, of course, with 2016. It also works on XP and 2003, though it's totally unsupported. We still have people coming to our ask.cloudbase.it website saying, what about 2003? Well, time to update, I would say. Anyway, it works. Unsupported doesn't mean it doesn't work; it's just the fact that we don't run continuous integration tests on top of 2003 today. So, Windows images. Well, we have scripts that will automatically build Windows images for you, including, of course, all the drivers, tools, and Cloudbase-Init, and they sysprep them and everything. How do we build them? Well, that's one of the important things we like: we believe in open source, so all of our scripts are available on GitHub. You can see them at the URL over there. We get tons of questions about those images, so keep the questions coming, of course, on ask.cloudbase.it. We also have Windows Server 2016 and Nano Server support, which is actually the big deal at the moment. And just one more thing to add to what Alessandro was saying there: those are our de facto mechanisms for creating cloud images of Windows, just to be clear. So, briefly, a quick introduction on licensing with Windows and OpenStack.
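Since Cloudbase-Init fills the cloud-init role for Windows guests, the typical workflow is to pass a PowerShell script as Nova userdata, which the agent runs at first boot. Here is a minimal sketch of that, assuming the `#ps1_sysnative` header Cloudbase-Init recognizes; the paths and marker file are made-up illustrations, and the Nova API side (which wants the payload base64-encoded) is shown as plain encoding:

```python
import base64

# A PowerShell first-boot script. Cloudbase-Init's userdata plugin picks the
# interpreter from the first line; the directory and file below are just
# placeholders for whatever provisioning you actually need.
user_data = """#ps1_sysnative
New-Item -ItemType Directory -Path C:\\provisioned
Set-Content C:\\provisioned\\done.txt "bootstrapped by Cloudbase-Init"
"""

# The Nova REST API expects user_data base64-encoded.
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")

def decode_user_data(blob: str) -> str:
    """What the guest side effectively sees after decoding."""
    return base64.b64decode(blob).decode("utf-8")

print(decode_user_data(encoded).splitlines()[0])  # → #ps1_sysnative
```

You would hand `encoded` to `nova boot --user-data` (the CLI does the encoding for you when given a file) or to the compute API directly.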
Essentially, Windows licensing in OpenStack is actually surprisingly easy. We have, obviously, the Datacenter license, which essentially gives you unlimited Windows instances on top of that host. Today, with 2012 R2, it's one license per socket. However, as many of you may have seen in the recent Microsoft announcements, that's going to change from a per-socket to a per-core model, with, I believe, a minimum of two cores. This licensing model is applicable to Hyper-V, KVM, VMware, or any use case you may have for the Windows operating system. And essentially, at the end of the day, it results in a very cost-effective way to utilize Windows in your environment. For those of you who have massive infrastructures, or if you're a service provider, we also provide SPLA licensing, which is specific to service providers, to enable you to license Windows across your infrastructure on, I guess, a more applicable basis. All right, so one of the key areas that we run into, specifically with people who tend to use Windows instances on top of KVM, is an issue where they tend to navigate towards the VirtIO drivers provided by the Fedora and Red Hat based open source community. The problem with those drivers, which you should all be aware of, is that they are currently uncertified, meaning you will not be able to get support from Microsoft for those Windows instances. So we highly recommend you use one of the certified VirtIO drivers for Windows. Specifically, today you would go to your enterprise Linux vendor, namely SUSE, Canonical, or Red Hat, to obtain those certified drivers. And we can tell you that most of the problems we hear about today with running Windows on KVM are usually due to using the non-certified drivers, which do not go through the same rigor and testing process that we apply with our partners for the certified drivers.
So please be aware when you're deciding to deploy your Windows instances on KVM. Unless you use Hyper-V, in which case you don't need them, right? Well, preferably, yes. But for those of you who choose to continue to use KVM, please be aware. So the key question here that we get all the time is: does Microsoft support OpenStack? Well, we support the use of any guest on top of our platform. So what does that ultimately mean? We will support your Windows instances running on our hypervisor, regardless of the management platform that you choose to operate. So if that happens to be OpenStack, that's great. You can still get Microsoft support for those instances in that use case. What we will not provide support for is the actual OpenStack layer itself. However, that's why we have Cloudbase Solutions. All right, there we go. So if you have questions regarding any of these things, or any topics related to Microsoft technologies and OpenStack, you can email openstack@microsoft.com to obtain help. And we'll help you regardless of whether it's, like I said, a technical question, or if you just want general guidance about where to go next with your Windows and OpenStack environment; feel free to use that address. And I can guarantee you, you will get a response. Yeah, actually, for most people who send email to that address, we change their perception of Microsoft after we start interacting. So please, feel free to reach out to us. We're here to help you. So once again, let's start focusing more on Hyper-V, our flagship hypervisor, and how we consume that in an OpenStack environment. Just a question: how many of you are using Hyper-V at the moment in an OpenStack environment? Okay, I see a few hands. How many use Windows as a guest? Okay, cool, cool. All right, let's talk about this.
So, WinStackers. In the OpenStack big tent, of course, there are lots of different projects in different areas, and it's nice that different teams with different types of expertise are actually able to deal with their own projects, having their own PTL, core reviewers, and everything. The same applies to Windows. For Windows, we have a specific team called WinStackers. Okay, it's a deliciously cheesy name, I will say; we chose it. And that's where we put all the projects which are Windows specific. It's actually very new: we created the group just a few months ago, when the TC approved it. The first project under this umbrella is called os-win, which collects all the Windows OS specific features, okay? We are now also going to add networking-hyperv, which previously was under Neutron and is going to move as soon as possible under WinStackers as well, okay? We will add more projects here going forward. Just to be clear, this is not a Microsoft or Cloudbase Solutions only project, okay? We like to have contributions from everybody, so we already have different partners, different companies, which are contributing code and helping with bug fixes and everything. So we are more than happy to accept contributions; like with other OpenStack projects, we really believe in the open source spirit here. One more thing to add to this: one of the main reasons we went down this route to create WinStackers was to help accelerate our ability to bring new technology and features into OpenStack without having to rely on other teams to provide, I guess, the review process to do that. So we have our own PTL and our own review cores that can handle that for us and assist in this area. It gives us more autonomy within the project, specifically for the Windows pieces that we need to influence. Okay? Awesome. So, Hyper-V and OpenStack. How many people here knew you could use Hyper-V in OpenStack?
Perfect. What some of you may not have known is that you've been able to use it for quite some time, okay? And one of the things that we try to do is make sure that the technology we're bringing to OpenStack gives you a user experience that's very easy to use, right? Easy to set up. For the Windows IT pro, that means the same type of experience from an installation perspective. However, if you're used to consuming OpenStack as a pure Python play, you can also, on a Windows platform, tweak and tune and work with that Python code in the same way you would on Linux systems. Okay, here's a quick snapshot from the installer that you can use to deploy OpenStack compute on Hyper-V nodes. The idea was always to have an experience very familiar to OpenStack DevOps, okay, and at the same time very familiar to Windows sysadmins, meaning people who are used to installing things via an MSI and everything, okay? The nice thing is that it will deploy all the related Python code, whatever you need to make OpenStack run on top of Hyper-V. It can be fully automated: for example, we typically use Puppet, Chef, Juju, of course, one of our preferred ones, SaltStack, and many more, okay? So don't think that you have to click through these things; as I was saying, it can be fully orchestrated like any other modern type of cloud deployment. So, you know, one of the things that we pride ourselves on at Microsoft is the continuous integration effort that we currently have going on for OpenStack. Many of you might not know, but we actually operate one of the largest continuous integration infrastructures in all of OpenStack. We are consistently either in the top two or, unfortunately this time around, top three in terms of the number of upstream votes that we contribute to the core projects, okay?
So, from that perspective, we've currently got 10 active continuous integration infrastructures, with plans to add more for each project that we're working on. Additionally, some of those run continuous integration for the supporting projects, like Open vSwitch and Cloudbase-Init, that we consume as part of OpenStack, okay? Additionally, as I said, we are looking to grow, to continue adding more infrastructure, and we have commitment from our management going forward that this continuous integration infrastructure will operate to support our efforts within OpenStack. So we can assure you that we're going to keep testing OpenStack with Windows going forward, with the same vigor we apply today. Okay, time to talk a little bit about Mitaka, right? What we have that's new and everything. We worked a lot; we have a lot of new features coming with this release, which, by the way, for Hyper-V just got released at the beginning of this week. So if you go to our website at cloudbase.it, you can already download the MSIs, right? Very important: it doesn't matter what OpenStack distribution, to call it that, you're using at the moment, whether it's any of the big names that you know, or whether you just deployed it yourself from scratch, or you're using DevStack and so on. It doesn't really matter: you can just add Hyper-V compute nodes to any of those, okay? So don't think that you need an entirely Windows-based OpenStack to try it out. You can have your own OpenStack with, I don't know, one, two, three, however many KVM compute nodes, and you can absolutely add Hyper-V nodes as well. They can perfectly live side by side. Our goal, once again, with all our effort, is to maintain an interoperable footprint when we add Hyper-V into your existing OpenStack environment. Okay.
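Adding a Hyper-V compute node next to KVM nodes mostly comes down to pointing Nova at the Hyper-V driver in that node's `nova.conf`. The fragment below is an illustrative sketch using Mitaka-era option names (`compute_driver`, the `[hyperv]` section, `vswitch_name`); treat the exact values as assumptions to verify against the installer's output on your own deployment:

```python
import configparser
import io

# Build an illustrative nova.conf fragment for a Hyper-V compute node.
conf = configparser.ConfigParser()
conf["DEFAULT"] = {
    "compute_driver": "hyperv.driver.HyperVDriver",  # Hyper-V instead of libvirt/KVM
    "instances_path": "C:\\OpenStack\\Instances",
}
conf["hyperv"] = {
    "vswitch_name": "external",     # the Hyper-V virtual switch for guest vNICs
    "limit_cpu_features": "false",
}

buf = io.StringIO()
conf.write(buf)
text = buf.getvalue()
print("compute_driver =", conf["DEFAULT"]["compute_driver"])
```

The controller side needs no change at all, which is the interoperability point being made here: the scheduler just sees another compute node.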
From this perspective, and talking about interoperability, you can have your Hyper-V boxes side by side with KVM, but if they have different types of SDN solutions implemented, they will never be able to talk, okay? You cannot have, say, a Linux VM running on KVM talking to a Windows VM running on Hyper-V if they don't have an underlying networking implementation that allows you to do that, no? So that's actually the main reason why, some time ago, we decided to port Open vSwitch to Hyper-V, okay? How many of you use OVS? A good amount. A good amount, as I was expecting. It's the de facto standard for SDN, so it's not something you can just ignore. It's actually evolving very fast, very well, and everything. It's a great way to make sure that different types of hypervisors can communicate among each other, and also different types of network equipment and so on. Great interoperability: Hyper-V, KVM, ESXi, and so on, and NSX; tunneling with VXLAN, GRE, STT, and so on. We have a Neutron ML2 plugin with an OVS agent. The same OVS agent that you are already running on Linux works on Hyper-V in the same identical way. It's compatible with OpenDaylight and NSX, and, in theory, with any OVSDB-compliant SDN controller. Now, once again, just to be clear, we're talking about a use case where people are consuming KVM with Open vSwitch within an OpenStack environment: we can seamlessly plug in Hyper-V as a compute node without changing any underlying network architecture. So, the news here is that we have the new Open vSwitch 2.5 release, okay? It introduces a ton of new features, actually, way more than what I could fit in a few slides without going over time. Just to be clear, we have all the types of encapsulation that you would expect: VXLAN, GRE, STT, MPLS, and so on. This is a stable release, okay? We had a previous 2.4, which was the first one that we introduced, and we always marked it as beta.
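To make the "same agent, same config" claim concrete, here's an illustrative OVS agent configuration fragment of the kind you'd already have on a Linux compute node, which works unchanged on the Hyper-V port. Option names are Mitaka-era and the IP and bridge names are placeholders, so check them against your own `ml2_conf.ini` before relying on them:

```python
import configparser
import io

# Sketch of the per-node OVS agent configuration shared by KVM and Hyper-V nodes.
cfg = configparser.ConfigParser()
cfg["ovs"] = {
    "local_ip": "10.0.0.21",          # this node's VXLAN tunnel endpoint
    "integration_bridge": "br-int",   # same integration bridge name on both platforms
}
cfg["agent"] = {
    "tunnel_types": "vxlan",          # could also list gre, stt with OVS 2.5
}

out = io.StringIO()
cfg.write(out)
rendered = out.getvalue()
print(rendered.splitlines()[0])  # → [ovs]
```

The point is that the tunnel mesh doesn't care which hypervisor terminates each endpoint, so mixed KVM/Hyper-V tenant networks need no special-casing.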
The reason is that it was the first version, so it was marked as experimental. You were, of course, always encouraged to test it and try it out, but we were not necessarily encouraging people to use it in production. 2.5 is a stable release, meaning that we hope and encourage you to use it in production anytime. We consider it as stable as, for example, the current networking-hyperv agent, which is based on the native Microsoft networking stack, okay? And once again, should you want to use it in your production environment, feel free to work with Cloudbase Solutions to help you implement that. Among the many features we have, for example, is LACP support; this is something that was not available in the previous release. So that's great, actually, for all your bonding needs. Okay, all your bonding needs. So, as Alessandro alluded to earlier: Nano Server. Nano Server, from my perspective, being a relatively new employee at Microsoft but a longtime user, is a revolutionary step forward in how we distribute and consume Windows Server, okay? We're talking about a sub-500-megabyte footprint that contains the core functionality for hypervisor and storage, and allows you to do things in those smaller footprints very similar to what you would do with Linux, okay? So, highly efficient, highly tuned, a lot stripped down, to provide the type of compute environment that a lot of us expect today when we consume or build out cloud infrastructure, okay? It's basically Windows without Windows, no? Well, it actually has a shell. It has no graphical user interface; you can use PowerShell, for example, or remoting to connect to it, and so on. For those of you who are used to using Windows Server Core, we're talking about an even smaller footprint than that, right? Like, tiny. Okay, Storage Spaces Direct. Ever heard of this? Yay, some hands, very good. So, Storage Spaces Direct. It's a great technology.
The idea is that you have full shared-nothing storage, exploiting the individual disks that you have in traditional commodity servers, okay? SSDs, SATA, SAS, whatever you have, okay? And all together, those disks will basically create a single storage pool that is distributed across the nodes, okay? The main idea is that this way you can have regular servers, with no need for traditional, expensive, SAN-type storage, while still having all the benefits of distributed storage, okay? Data mirroring: all your data will basically be mirrored across multiple nodes, so if you just pull the plug on one of those servers, it will still work, okay? All the mirroring traffic will go, of course, over your network adapters, so we highly recommend using 10 or 40 gigabit NICs, okay? But that's, again, very commodity stuff in today's enterprises, right? It uses, or leverages, Scale-Out File Server, which is based on the SMB3 technology, and you can also use SMB Direct on RDMA-enabled NICs, so that you can exploit that extra bit of performance thanks to the offloading this provides, okay? Once again, as part of this we also provide Cinder integration with that technology, to help enable hyper-converged OpenStack: Hyper-C. Okay, so just a couple of notes here. Hyper-converged became, of course, a very hyped word, okay? But it's definitely the direction in which everybody is going to go for deployments, because the advantages are clearly overwhelming. I mean, it's not something you can compare to the old traditional way of splitting storage from compute and so on. The idea is simple. All the nodes are identical. They have storage, compute, networking, and so on. If you need to plan new capacity, there's no problem in saying, okay, I need 30% more storage nodes, 40% more compute, and so on. You just take the same identical boxes and deploy them in production.
Then you can use a bare-metal solution like, for example, MAAS, as we usually do, or Ironic and so on, and you deploy your nodes from the bare metal up, okay? So it's also very agile, very efficient. We already have, based on Windows Server 2016, an OpenStack distro that you can just deploy in your environment. It's based on 2016, which is not yet available for production; as I was saying before, it's in technical preview. But feel free to deploy it for a proof of concept and whatnot. What we typically tell our users and customers is to deploy a proof of concept today, and then, the moment 2016 goes RTM, so it becomes generally available, you can just flick a switch and automatically redeploy all the nodes with the new version and go into production with it, okay? But you will already be used to it; the environment will already be fully test-driven as well. Okay, well, we already talked about this. PyMI. Okay, so Python on Windows is an interesting topic, okay? It works great. Python is definitely a multi-platform language, so there is nothing that actually prevents you from using it on Windows, okay? But the underlying libraries which offer interaction with the operating system, aside from, for example, ctypes, depend on modules which are available from the community. One of those leverages the so-called WMI APIs, okay? The current module that everybody uses, which is available on PyPI and everything, is the WMI module based on pywin32, and it was written somewhere around 2003, so it's based on a very old implementation of WMI. That's what we used to use, until the Liberty release, when we got really fed up with how slow this approach was, okay?
So we decided, okay, let's stop for a second and write a new module that will actually leverage the new management API stack which came with 2012, called the Management Infrastructure (MI) API. It was also designed to be a full drop-in replacement for the old module, so if you had applications which were using the old WMI module, you could simply replace it with the new one and just go on, okay? That's the case for OpenStack, for example: you take a Kilo or Juno release, you deploy the new module, and everything just works, but it will be much faster, okay? And let's talk a little bit about how much faster. Here you go. This is an example of how much faster we got from the previous release to the current one. Okay, the performance reflects not only the fact that we introduced PyMI, but also a lot of additional improvements, driven by the fact that we are finally able to rely on a much faster underlying module, okay? On the left, you can see the two almost vertical lines that represent the performance of the old WMI implementation. This is actually for one specific OpenStack service, the Neutron agent, which was always one of the biggest bottlenecks, especially when deploying security group ACLs, okay? So when you have hundreds of machines, it used to be very, very slow, okay? Now you see a completely different curve on the right: especially as the number of threads increases, there is an enormous difference, okay? With PyMI, you know, Python is notoriously not a great language for multithreading, because only one thread at a time can own the GIL; this is common to other interpreted languages as well, like Ruby and so on. We overcame this issue by leveraging a lot of parallelism in the underlying C++-based module that we developed specifically for this. So it's C++ under the hood and Python on top, okay? The best of both worlds.
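The GIL point above can be demonstrated without any Windows APIs at all. This is a toy model, not PyMI itself: `time.sleep` stands in for a WMI/MI query serviced in native code, because like PyMI's C++ layer it releases the GIL while it waits, so eight "queries" on eight threads overlap instead of serializing:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def native_style_call(_):
    # Stand-in for a management API call whose work happens outside the GIL.
    time.sleep(0.2)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(native_style_call, range(8)))
elapsed = time.monotonic() - start

# Sequentially this would take ~1.6 s; concurrently it takes roughly 0.2 s.
print(f"8 concurrent calls took ~{elapsed:.1f}s")
```

Pure-Python work in those threads would show no such speedup, which is roughly why moving the heavy lifting into a GIL-releasing C++ layer changed the Neutron agent's scaling curve.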
As for the results, we will publish, of course, some benchmarks pretty soon. We use Rally for doing all the benchmarks, so hundreds of machines deploying in parallel and everything, and we are talking about a 10-times-plus performance increase over the old module. And not only that: at this point we are way faster with Windows workloads on Hyper-V; we are talking at least 20% faster than other hypervisors. And we even managed to get faster with Linux, okay? That's especially thanks to all the improvements that happened in the Linux guests with the LIS components. Okay, I will stop here with the comparisons, otherwise it feels like I'm comparing detergents or something like that, but that's it, yeah. So, one of the other key areas where we had been approached to help enhance support within the community was specifically around Fibre Channel support for Windows Server within Cinder and OpenStack. So we decided to work together with HP to actually build that support. We run the CI for that in the Cambridge continuous integration infrastructure. And essentially, as a result, we not only added the 3PAR driver for Fibre Channel, but also for iSCSI, okay? It gives the same user experience across this project as it does for our other Cinder projects, okay, from a storage perspective, and it allows you to utilize those assets, if you have them in your OpenStack environment, with your Windows guests and hosts as well, okay? We utilize a pass-through for boot from volume. Also, virtual SAN (vSAN) support will be coming soon, yeah. The reason why we went with pass-through for this one is that we also needed a boot-from-volume mechanism, which is not available with the virtual SAN support itself, yep. Additionally, we added multipath I/O support; as I said, for iSCSI we leverage the host-based MPIO, and it's compatible with Windows Server supported targets, okay. That's a pretty important topic.
Just a question: how many of you use containers? Okay. How many of you knew that Windows Server containers are coming? Awesome. Whoa, that's great. Perfect. So from our standpoint, with all this momentum that's now behind containers and this ecosystem, we wanted to ensure that we could take advantage of that as well. With the work that's coming out of the Windows Server team on containers, they've done excellent work to integrate the Docker APIs on top of that. So from a straight Docker perspective, we're talking about Dockerfiles building Windows containers, right? It's pretty cool. The support for that container technology was merged into nova-docker. So if you want to use it within your Nova deployment, you can do that and get Windows containers out of Nova. Additionally, we've added support for Magnum. So when Windows Server 2016 is released, you can implement Windows containers in your OpenStack environment using the same technologies you're using in your Linux environment today in your OpenStack deployment, okay? Yeah, we are investing a lot in containers. So Magnum and so on will be an important area of development during the upcoming cycle. If you have any questions, please come forward and ask us, okay? Not only Magnum: also, for example, Kubernetes and Mesos integration and so on will definitely be at the top of the list. One more thing to add here: when we talk about containers, one other key critical feature that's coming in Windows Server 2016 is the ability to nest virtualization, meaning we could actually run Hyper-V inside a VM, and with it Hyper-V containers, okay? From our standpoint, having not had nested virtualization for quite some time, we're very happy this is coming, because it's essentially, hopefully, going to make our CI life a hell of a lot easier, right? All right, next topic: Hyper-V failover clustering. So you know the story with pets and cattle, right? Or, as a friend of mine says, cats and petal.
The main idea is that OpenStack and modern cloud designs are made for lots of machines, where the individual machine, whether a virtual machine, a container, or a physical node, can fail at any time and your applications keep on going, because the applications themselves are designed to be fault tolerant, highly available, and so on. But what do you do with all the legacy types of deployments that you have, okay? You cannot just draw a line, throw away whatever you have, and start fresh, you know? So we get a lot of requests from companies and users saying, okay, well, I have my mail server, my database, and everything. I would just like to take them and make them work on OpenStack, if only for consistency with the rest of the new and shiny workloads that I have. That's where Hyper-V failover clustering comes in handy, okay? It's basically host-based fault tolerance. This means that whenever one of your Hyper-V nodes dies, for whatever reason, or when you need to put it into maintenance, your virtual machines will keep on running by simply moving automatically to a different node, okay? Think of it as live migration, but triggered by an automated system, okay? Of course, it's much more than that, okay, but that's the main point. So, if you just shut down, for example, one of your nodes, all your machines will automatically migrate somewhere else; this is called node draining, in the jargon. And if it crashes, they will automatically restart on another node, okay? This is called failover. It integrates seamlessly with Nova, because one of the traditional discussions around this topic was always: what happens when the Nova scheduler takes one decision and the cluster says something else, okay? The idea here is that the Nova scheduler is always in command, okay? The cluster will simply fail over and update the Nova configuration.
So if your virtual machine is running on node A and fails over to node B, our Hyper-V driver will always tell Nova: look, this specific Hyper-V virtual machine is now on host B and no longer on host A. So it will always be consistent and transparent. Most important, not only will it give peace of mind to your users, but also to you as the operator, because it comes out of the box with Windows. It's just one of those features that is very easy and simple to deploy. We have a blog post that we published this week on our website if you want to learn more. All you need is a domain controller, and for the rest it will just work on the free Hyper-V Server and whatever other Windows Server version you might use. It supports all the networking options that we have, for example Open vSwitch or the native Windows Server networking, Hyper-V networking, and it also supports Cinder SMB volumes, so all your volumes will actually remain attached to your machines. When do you use it? When your VMs don't necessarily run fault-tolerant applications: traditional enterprise apps that delegate fault tolerance to the host. And again, as I was saying before, it's a very common request from our users. Next topic, and I'm going quite quickly down the rest of the list. How many of you use VDI in OpenStack? Let's see. Oh, there's a couple. There's a couple. That's pretty good. So, Windows Server and Hyper-V are definitely the best platform out there for VDI, okay? Especially for Windows workloads running on top of Hyper-V, okay, without taking anything away from other competing platforms, of course. There are a lot of new additions to RemoteFX in Windows Server 2016. RemoteFX allows you, basically, to share the GPUs running on the host between the virtual machines running on the host itself, okay? So you can have up to one gigabyte of dedicated VRAM, with, of course, the other options you have here.
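The scheduler-stays-in-command idea above can be sketched in a few lines. This is a toy model, not the actual driver code: a dict stands in for Nova's instance-to-host records, and the callback stands in for the notification path the Hyper-V cluster driver uses when the cluster moves a VM:

```python
# Nova's view of which host owns each instance (toy stand-in for its database).
nova_view = {"vm-1": "node-a", "vm-2": "node-a"}

def on_failover(instance: str, new_host: str) -> None:
    """Called when the cluster reports that an instance has moved hosts.

    In the real driver this update goes through the Nova API, so the
    scheduler's view stays consistent with where the VM actually runs.
    """
    nova_view[instance] = new_host

# node-a crashes; the cluster restarts vm-1 on node-b and "Nova" is updated.
on_failover("vm-1", "node-b")
print(nova_view["vm-1"])  # → node-b
```

The key property is that after any failover or node drain, querying Nova still gives the true placement, so operations like live migration or volume attach keep working.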
And you can also have shared VRAM, so you can basically over-commit the VRAM that you have; it supports Generation 1 and Generation 2 VMs; and, very important, it supports the OpenGL and OpenCL APIs, okay? So if you have some scientific type of workload, let's say, that's perfect, okay? Very useful in both private and public clouds. We also have videos and a blog post available at the link you see there. And down there you also see a link to a video showing the big difference between the two, okay? Shielded VMs. Just a quick note about this, because we didn't publish it yet; we will publish it extremely soon, I mean publish as in available in the codebase for OpenStack. This is a very unique Hyper-V feature, and it's a great feature. The main idea, the main reason behind it, is that you cannot really trust the underlying hosts, your hypervisors, in a cloud, okay? What happens if a hacker takes control of your host? In a normal, traditional environment, they will simply own all the virtual machines running on top of it, okay? So if you have a significant amount of sensitive workloads, that's going to be a problem. This solution, which is based on Isolated User Mode, BitLocker, TPM support, Hyper-V, of course, and so on, allows you to make sure that, whatever happens to the underlying host, there will be no way to taint or basically eavesdrop on the content of the virtual machines themselves, okay? So, full encryption of the data, and basically an extremely tiny attack surface that won't allow any attacker who has compromised the underlying host partition kernel or user space to access the virtual machines themselves. A very, very interesting feature coming with Windows Server 2016. As a bonus, you also get vTPM support, okay, if you want to have a TPM in your guest. On networking, do you want to talk about this one, or?
So, for Windows Server 2016 the networking model changes slightly, and we want to account for that in our work going forward with OpenStack. It's a REST API-based interface: the network controller runs on Windows Server nodes, and there's a dedicated forwarding extension, VXLAN offloading, and some OVSDB compatibility from the new native interface. This will come as well in Newton. This is, of course, the new networking stack coming with Windows Server 2016, so it's our pleasure and duty to include it in the next version of OpenStack. Much like the existing networking model that you can consume today out of Windows Server 2012 R2, you'll also be able to consume the new networking model in 2016 within your OpenStack environment, natively. Okay, some other minor updates; all of this is already fully supported in OpenStack Mitaka, so just download our installer. VNIC hot add, meaning that you can add a virtual NIC to a running virtual machine. You can have secure boot for Linux VMs; I'm a big fan of secure boot. Previously it was available only for Generation 2 Windows VMs; now it's also available for Generation 2 Linux VMs. All you need is an EFI-style partition layout and it will just work. We finally have nested virtualization, so you can run a VM with Hyper-V running on top of Hyper-V, and you can play with this inception as much as you want. This may sound like a crazy mad-scientist lab scenario, but in reality it's extremely useful for testing. For example, if you want to test OpenStack, you can test it entirely inside virtual machines without having to deploy it on physical nodes. Also in the guest: ReFS, a new file system from Microsoft with additional features, is fully supported, also on Storage Spaces Direct. And that's actually the main list.
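For the secure-boot-for-Linux item above, the request is normally expressed through image properties on the Glance image. The property names below (`hw_machine_type`, `os_secure_boot`, `os_type`) reflect commonly documented Nova/Hyper-V usage, but treat them as assumptions to check against your OpenStack release; the helper function is just a sketch of the check the driver performs.

```python
# Illustrative Glance image properties for booting a Generation 2
# Linux guest with secure boot on Hyper-V. Property names are the
# commonly documented ones; confirm them for your release.
linux_gen2_image_props = {
    "hw_machine_type": "hyperv-gen2",  # Generation 2 VM (UEFI firmware)
    "os_secure_boot": "required",      # enforce secure boot at spawn
    "os_type": "linux",                # pick the right certificate template
}


def secure_boot_requested(props):
    """True if the image asks for secure boot on a Gen 2 VM."""
    return (props.get("hw_machine_type") == "hyperv-gen2"
            and props.get("os_secure_boot") == "required")


print(secure_boot_requested(linux_gen2_image_props))  # True
```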
There are some other minor things, but those are the ones we mostly focused on. So, some time back, we decided to bring Juju, and Juju charms as well, to the Windows platform. What that means is that for those of you using Juju as your orchestration mechanism, we have a significant amount of Windows workloads, traditional Windows IT workloads, available for consumption in that automation model. Here is a list of the catalog available on our website: basically all the types of Windows workloads you might think about. SQL Server, SharePoint, Windows clustering, Active Directory, IIS, WSUS, SMB, VDI, and so on. So, all the main things that come to mind. And for those of you who may not be using Juju, we also have this technology available as Heat templates. On the opposite side of the spectrum, we have a small project called v-magine. The scope of this project is to allow anybody running Hyper-V, including, for example, a Windows 8 or a Windows 10 laptop, to run OpenStack on that specific Hyper-V box. It will basically create a VM running a Linux-based controller, and it will use the host itself, which could be your laptop or just a server or whatever you want, as a compute node. This is most probably the fastest way to test OpenStack today. All you need is your laptop or a server that you have around, nothing more. It even runs on a Surface, so it doesn't really need too many resources, and it's fully guided, automated, and everything. Okay, just a couple of notes about Azure Stack. Anybody heard about Azure Stack? Okay, a few hands up. Azure Stack is a new product coming from Microsoft. We receive a lot of questions about it: what are you guys doing with Azure Stack? What about Azure Stack and OpenStack? Well, our standpoint at CloudBase Solutions is that they are two completely independent projects.
It offers the Azure REST API, the ARM API, on top of your on-premise cloud, and it gives users the Windows Server 2016 operating system features, the same ones that also underlie OpenStack. We also offer a service to migrate between OpenStack and Azure Stack, for example if you want to experience working in both environments. It's currently available in technical preview, and we definitely work on both environments; we see our customers and users asking questions and evaluating the idea of running both of them at the same time. There is nothing wrong with that. So, if you intend to deploy Azure Stack, please come and ask us your questions, especially if you plan to have both environments deployed in your scenario. Okay, so once again, as we said earlier today, we do support your workloads if they run in OpenStack. If you have questions specifically about the Microsoft technologies, you may email openstack@microsoft.com, and we will get back to you to help you and answer those questions. And if you have questions regarding any of the CloudBase technologies that we've talked about today, you can go to ask.cloudbase.it and get answers in the open source channels that are available there. Okay, we are slightly over time, plus we're all that stands between you and the parties, and a lot of other things. If you have any questions, now is the right moment. For those who would like to ask questions, could you please step up to the microphone so that we get it recorded in the session? We have a question. Go ahead. Hey guys, I'm Rich Dole from Box Cloud Engineering. Firstly, great work on the Windows stuff. Love it. Quick question: you mentioned OpenDaylight integration. We've got a fairly extensive deployment that's using Open vSwitch and OpenFlow 1.3 or 1.4 to provide overlay and connectivity without having to use STP and no controllers. Is that scenario supported in your Open vSwitch implementation?
Definitely yes, because we have full compatibility for both OVSDB and OpenFlow. The fact that we have 2.5 running on the node itself doesn't mean that we don't have compatibility with the previous OVSDB versions. Okay, thanks very much. You're very welcome. Thank you. Go ahead. We have 40 compute nodes running under KVM. In order to run nested Hyper-V, would I have to replace some of them, that is, replace KVM with Hyper-V? Well, KVM already supports a level of nesting. I'm not sure, I remember in the past... it works very well for KVM inside KVM, for example, but nested Hyper-V on KVM is not necessarily there yet, and it's unsupported. Yeah, I thought there was some path there. To be honest, I'm not that well versed on it. So I'd have to take some of the KVM nodes and replace them with Hyper-V. That's a great idea. Yep. And when you want help doing that, you can email openstack@microsoft.com. Sounds reasonable. Okay. Go ahead. Does the recent work to port some Ubuntu userland stuff over to Windows signal an adoption route for some of the traditional Linux-based server technologies in Windows Server 2016? So if you're referring to the recent announcements about Bash on Windows, let's clarify first: we're not porting anything. What we're doing is providing ELF compatibility in the Windows kernel, so you're natively running those executables on Windows. Yeah, so reverse Wine, right? Yeah, syscall-level interpretation. Yep, well... So would that enable you to take Linux binaries and make use of them on Hyper-V natively, so you could run your entire stack on that? In theory, absolutely. Now, we haven't done extensive testing in that area to determine exactly what works. But from the preview releases and the recent discussions, I think it was at Build, that is possible. I would assume, depending upon what application it was, there could be some issues. But in theory, yes.
So just to clarify, this Bash on Windows is mostly for developer scenarios. For example, we have it on Windows 10 today. So is there a limitation there? Yeah, so unless you want to run server applications on Windows 10, it doesn't necessarily help too much on the server side today. The future, who knows? But for the time being, that's where it stands. Also, in terms of terminology, the kernel is specifically the Windows kernel, so you have all the new user-space tools running directly on the Windows kernel. One way to call it is, for example, GNU/kWindows. And Ubuntu, that is Canonical, is providing all the support and all the user-space binaries. A very interesting thing, for example, is that this work allows you to run primitives like fork, which are traditionally not available on Windows, due to the difference between how Win32 processes work compared to ELF binaries and Linux processes. Okay. Great. Go ahead. Last question. From an operational perspective, if I'm running Hyper-V and Open vSwitch and two VMs can't talk to each other, who do I call? Do I call Microsoft or do I call CloudBase? You would call CloudBase Solutions. Okay. And then they would reach out to Microsoft, to be clear. I'm just curious from a support perspective. We work very closely together to ensure that these technologies work. CloudBase Solutions will be your support provider and our partner to deliver that support to you. But I assure you that the technology is getting tested in our continuous integration infrastructure, and we work extremely closely to ensure that we deliver a great user experience to you. Thank you, guys. See you later at the parties. Thank you, everybody.