Okay. All right. Hi everybody. Welcome to Boston. My name is Peter. I work for Microsoft, and I work on the OpenStack integration of Microsoft technologies and the larger OpenStack ecosystem. And I'm here today with my colleague and friend Alessandro Pilotti of Cloudbase Solutions. Yep. And we're here to discuss the latest and greatest for this release cycle for the Windows platform and OpenStack. All right. So we will start with killing these things here. Just a second. We try to look good at all times. Anyway, a little bit of history. The OpenStack on Windows project is pretty much run here out of Boston. Our continuous integration infrastructure for all the Microsoft technologies is roughly, what, five or six blocks over the river that way. So yeah, this is our home, and welcome for being here. My first OpenStack Summit was Boston, the original Boston summit, so we've been working to bring Windows technologies into the forefront of OpenStack for quite some time. And if my friend Alessandro here can ever... there we go. All right. Let's resume the slides. Here we go. So our goal from the get-go has always been to ensure that Windows is a first-class citizen, both as a guest and as a functional component in OpenStack. And what we're here to prove to you today is that Windows actually is a first-class citizen, and there's really no difference between managing your Windows instances in OpenStack and managing your Linux instances. So our goal, as I said, has always been to make sure that your OpenStack experience with Windows is an OpenStack experience. We tend to plug in seamlessly with all the layers and practices that one would typically use for managing an OpenStack infrastructure. Now... oh, go ahead. So how many of you are running Windows instances in OpenStack today? Yay.
Cool. All right. And how many of you are using cloudbase-init? Good. So cloudbase-init is the de facto standard today for deploying Windows instances in OpenStack. It's modeled conceptually, let's say, on cloud-init for Linux. So the main idea here is to make sure that when people deploy and use Windows instances in an OpenStack context, they have a very familiar framework. It can't be something completely independent that somehow gets bolted onto a completely different type of framework; it has to be something very natural, both coming from an OpenStack direction — so, for example, from a Linux experience with cloud-init and everything — and also from a Windows direction, coming from a typical Windows Server enterprise admin model, no? And that's how we actually wrote cloudbase-init. It's fully Python code, so it fully looks and feels like an OpenStack project from that perspective. It's wrapped in a Windows service, so if you are a Windows admin, you will just see it as a normal service. It works on any supported Windows version, including Nano Server. It has a very extensible model based on plugins and supports basically every possible cloud. OpenStack is, of course, the main one, then EC2, Azure, Azure Stack, CloudStack, OpenNebula, MAAS, GCE, Oracle Cloud, and so on, okay? So the main idea is that you can even create a single image and run it on every cloud, and cloudbase-init will take care of everything for you. Here is a very limited list of the things that cloudbase-init can do for you, okay? A lot of these things are new from the Ocata cycle and, of course, also extend into Pike. So what can you do? Setting the hostname, creating usernames and passwords, managing, of course, your admin password, setting static networking — even on Ironic, for example, to connect with the previous session.
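The extensible plugin model mentioned above can be pictured as a loop that runs a set of plugins against the instance metadata. The sketch below is a loose, illustrative approximation of that design — the class names, return codes, and loop are assumptions for this example, not the real cloudbase-init API:

```python
# Minimal sketch of a plugin-based init loop, loosely modelled on the
# design described in the talk. All names here are illustrative
# assumptions, not the actual cloudbase-init classes.
from abc import ABC, abstractmethod

EXEC_DONE = 0   # plugin ran successfully, never run again
EXEC_RETRY = 1  # plugin should run again on next boot

class InitPlugin(ABC):
    @abstractmethod
    def execute(self, metadata: dict) -> int:
        """Apply one piece of guest configuration from metadata."""

class SetHostNamePlugin(InitPlugin):
    def execute(self, metadata):
        hostname = metadata.get("hostname")
        if not hostname:
            return EXEC_RETRY   # metadata not available yet, retry later
        print(f"setting hostname to {hostname}")
        return EXEC_DONE

def run_plugins(plugins, metadata):
    """Run each plugin once and collect its result code."""
    results = {}
    for plugin in plugins:
        results[type(plugin).__name__] = plugin.execute(metadata)
    return results

results = run_plugins([SetHostNamePlugin()], {"hostname": "win-guest-01"})
print(results)
```

The real service persists which plugins already returned "done" so they are skipped on subsequent boots; this sketch only shows the single-pass shape of the idea.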
Managing public keys, automatic volume expansion — you know, when you boot different flavors — configuring the WinRM HTTPS listeners, configuring the time service, automatic updates, SAN policy, licensing. For example, you can move your VM from on-prem to the cloud, okay, and get your licensing already configured if that's something that you need, okay? Setting NTP services, setting the right MTU for Open vSwitch, and, of course, running every type of user-data script that you might need, you know? For more details, there is full documentation at the link that you can see there, okay? So this is just scratching the surface of what cloudbase-init can do for you. How do we ensure that testing works? Testing is fully based on a continuous integration framework based on Tempest, plus Argus and Arestor — names coming from Greek mythology — which we use to test cloudbase-init on every possible Windows version. So imagine having to test every single patch that comes in on Windows 7, 8, 8.1, and 10, twice, because there's x86 and x64, and then, again, Windows Server 2008, 2008 R2, 2012, 2012 R2, 2016, and even Nano Server, which has a completely different surface from this perspective. So these are the tools that we run, which we run also as part of the Cambridge facility, right, Peter? So, yeah, as Alessandro said, we take the approach of basically scientifically proving the value of the work that we do through our testing, right? So we assure you that this is all extremely high quality; these aren't toys. This is all enterprise-worthy and worthy of being consumed in your OpenStack deployment, to make sure that your Windows instances operate as best as they possibly can. And as Alessandro said, we try to do that for every supported release of Windows that Microsoft currently has available today. Before you ask: yes, it can work with Windows XP and 2003, and no, we don't support XP and 2003. So if you want to run it, it works.
We provide, let's say, consulting services, but we don't have any intention to support this as an upstream feature, for the simple reason that it's also not supported by Microsoft, right? Out of those projects, for example, Arestor, which is a project that simulates all the possible cloud metadata API interfaces, is actually new in the last cycle, okay? It's all, of course, open source and all available on GitHub. This is all new here. There is a Windows Server OpenStack evaluation image, right, Peter? Yep, that's currently available for OpenStack testing only. We were able to get a special license out of our legal team to allow for that. So please take it, try it, test it out in your OpenStack environment if you want, prior to baking and rolling your own images. Okay. You might be curious to know how many people are actually using cloudbase-init, you know? And of course, as with every open source project, it's quite difficult to determine the real usage, okay? So we have some basic statistics: downloads, update requests and so on. And I can tell you that, counting the instances and cloudbase-init runs we've been collecting since late 2015 till today, there are 8 million of them, okay? So for whoever's telling you that OpenStack is not real, or is not really getting into the enterprise, and for people who are saying that OpenStack is not really Windows-friendly — well, 8 million instances seems like quite a big number to me, right? And once again, that's just since late 2015. Yeah, so it's quite huge. How to build Windows images? This is also something that everybody's always asking. And again, these are Windows images that can run on every possible hypervisor, right? Not only on Hyper-V. Most people, of course, ask about KVM, which is the most popular platform for OpenStack, but it works also on VMware, on XenServer, and so on. So here are a bunch of tools that we keep on updating.
There are, of course, a lot of updates from the last cycle even there, like support for Nano Server and so on. Okay, and the tooling goes a bit deeper than one repository, but let's say this is the main one. And you can basically run tools that will generate the images for you. Basically, they generate an offline Windows image and then run it in a Windows machine to apply all the Windows updates, okay? Plus additional scripts that you might have. It includes VirtIO or VMware Tools or whatever drivers you need, Windows updates, custom drivers, custom applications, and so on. Another very common question is: how do you run custom user data? You can run PowerShell, CMD — meaning old-school command-prompt batch files — Bash or Python. Determining the content type of your user data is very simple. For PowerShell, the first line will start with pound-PS1: #ps1, okay? That's pretty similar to the model that you might be familiar with from Linux, of course, right? So if it starts with a typical shebang, then we know that it's a Bash script, okay? We also support the syntax that the EC2 folks use, which is simply a <powershell> tag, then the script, and then the closing tag, okay? So both of them are supported, meaning that you can have the same identical user-data script working on Amazon EC2 and also with cloudbase-init. Here are just two very simple examples of things you can do, you know: creating users, assigning local groups, domain joins, and stuff like that. If you get more complex, of course, you can run everything inside a user-data script. You only have the limitation of the size of the user data that you can have in the metadata, but we also accept gzipped user data, so the amount of data that you can run is fairly unlimited. But if you're serious about orchestration, let's say, I would strongly recommend using something like Heat templates or Juju, okay?
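The first-line detection described above — #ps1 for PowerShell, a shebang for Bash, EC2-style <powershell> tags, plus transparent handling of gzipped payloads — can be sketched as a small classifier. This is an illustrative subset of the markers, not the exhaustive set the real service accepts:

```python
import gzip

def detect_user_data_type(user_data: bytes) -> str:
    """Classify a user-data payload the way the talk describes.
    Illustrative subset only; the real matching rules are broader."""
    # gzipped payloads (magic bytes 1f 8b) are decompressed first,
    # which is how larger-than-metadata-limit scripts get through
    if user_data[:2] == b"\x1f\x8b":
        user_data = gzip.decompress(user_data)
    text = user_data.decode("utf-8", errors="replace")
    first_line = text.splitlines()[0].strip().lower() if text else ""
    if first_line.startswith("#ps1"):
        return "powershell"          # the pound-PS1 convention
    if "<powershell>" in text:
        return "powershell"          # EC2-style tagged script
    if first_line.startswith("#!"):
        return "shell"               # e.g. a Bash shebang
    if first_line.startswith("rem cmd"):
        return "cmd"                 # old-school batch file
    return "unknown"

print(detect_user_data_type(b"#ps1\nNew-LocalUser demo"))
print(detect_user_data_type(gzip.compress(b"#!/bin/bash\necho hi")))
```

The same payload therefore classifies identically whether it arrives plain or gzipped, which is what makes the "same script on EC2 and OpenStack" claim work in practice.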
So here there are some good examples of how to create an Active Directory controller. They are all upstream. I think I can even open it here. Let's see if I can go here. So you see there is the typical YAML file of a Heat template. So here are the resources: how to, you know, create a compute resource and so on. And at the bottom, you will also find the scripts that are being executed. Okay. And here you have a PSM1, so a PowerShell module, and you also have a script that will actually perform the configuration. Here we are importing the module, and we are simply saying, hey, install an Active Directory domain controller, and performing the rest of the operations that we need to do. And here there is a bunch of PowerShell that will show you how to actually perform the work and how to synchronize with the rest of the system. Okay, let's go back to the templates. There is another example for SQL Server, but, you know, at that point you can do really whatever you want there. So, we get a lot of questions around Windows licensing in OpenStack, and, you know, it's basically the same as licensing any other Windows instance in your environment, right? If you're going to be deploying Windows Server as a component of the underlying OpenStack infrastructure, we typically recommend that you use Datacenter licensing, and in fact it's probably your best solution in all cases. And if you're going to be hosting other people's content on Windows Server, then you will need SPLA licensing. You know, Windows licensing really works regardless of hypervisor, and it can be extremely cost effective when choosing, once again, the right licensing model for your deployment — such as, you know, whether you're a volume license customer or an SPLA customer.
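The shape of the Heat template walked through above — an OS::Nova::Server resource carrying a PowerShell user-data script — can be sketched minimally. The image and flavor names below are placeholders, and the real upstream Active Directory templates carry far more resources, parameters, and scripts than this:

```python
# A minimal HOT template for a Windows instance with a PowerShell
# user-data script, built as a Python dict. Image/flavor names are
# placeholder assumptions, not values from the talk's templates.
template = {
    "heat_template_version": "2016-10-14",
    "resources": {
        "ad_controller": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "windows-server-2016",   # placeholder image name
                "flavor": "m1.medium",            # placeholder flavor
                "user_data_format": "RAW",
                # #ps1 first line marks this as PowerShell for cloudbase-init
                "user_data": "#ps1\nInstall-WindowsFeature AD-Domain-Services\n",
            },
        }
    },
}
print(sorted(template["resources"]["ad_controller"]["properties"]))
```

Serialized to YAML, this is the skeleton that the domain-join and SQL Server examples extend with wait conditions, software deployments, and the PSM1 module shown in the demo.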
Now, you know, we also get the question: does Microsoft support Windows on OpenStack? Well, the reality of the situation is that Microsoft will support Windows guests when they run on Hyper-V specifically, or when they're using a certified VirtIO driver stack — and today, in order to get a certified VirtIO driver stack, you have to be using an enterprise Linux from one of the vendors you see on that list, and you will get that driver stack directly from that vendor, okay? It is not the Fedora VirtIO driver stack that you are probably familiar with, the one that's available for free. If you decide to use that Fedora driver stack in your Windows instance, you essentially render that Windows instance unsupportable from a Microsoft perspective. How many of you are using the upstream VirtIO drivers? So, they work perfectly fine. The only issue, remember, is that Microsoft won't support them, okay? So that's the only thing to remember. Of course, with Hyper-V you don't have that limitation, meaning that the stack is supported anyway; but the alternative is to go with, as Peter was saying, your Linux distribution — if you're already paying for a customer support subscription, you might already have access to those drivers. So now, what happens if you need help directly from Microsoft? Well, you can reach out to our OpenStack team at Microsoft, right? By emailing openstack@microsoft.com, and that will come directly to myself and other individuals on our team, and we can try to direct you in the best possible way to get your assistance. Okay. Now, let's move to the hypervisor side of things. So, OpenStack plus Hyper-V — we've come a pretty long way on that. We started in Folsom, right, Peter? And, well, we did quite a lot of work. It started a little before that, but essentially, since 2010, there's been a Hyper-V driver.
So, shortly after the original release of OpenStack; the current iteration, as Alessandro said, has been there since 2012. Yeah. All right. Some unique Hyper-V plus OpenStack features. So, just to make it clear, you can run Windows guests perfectly fine in OpenStack on KVM, okay? There are, of course, some advantages in using Hyper-V as a hypervisor. We live in a world in which hypervisors nowadays are very commoditized, so it's not the end of the world if you're not using Hyper-V — use KVM for running your Windows guests. Actually, as Peter was saying before, they are perfectly supported. But there are some specific advantages that you may find useful for Windows, no? One of them is Windows failover clustering. People are asking, hey, what can I do with my virtual machines which leverage, for example, any kind of host-level high availability, okay — like VMware vMotion or Windows Server failover clustering — and they want to move them to OpenStack, which is a platform more oriented at cattle compared to pets, no? And the good thing is that we also have a driver for Windows failover clustering, okay? So hopefully we'll be able to demo it to you pretty soon. Plus, we have RemoteFX, so if you're running VDI workloads, that's definitely the right thing to use. And there is a new feature, again in this timeframe, which is Shielded VMs, that basically allows you to do full VM encryption and isolation, and also to have a security model that is completely independent, so to say, from the underlying host. So even if somebody takes control of the underlying host, your VMs will be safe anyway, okay? There is quite a lot to discuss on this topic, so we wouldn't be able to do more than introduce it in a short session like this one. But please feel free to come to our booth or to ask at openstack@microsoft.com, as Peter was mentioning.
And trust me, that's a feature that really changes the way you can think about security for virtual machines in the computing world, no? Now, some people like to ask us: well, who supports using Windows and Hyper-V inside of OpenStack? Well, these are some of the ones that do. We've been working with all major OpenStack vendors from a platform perspective, pretty much for the last four-plus years. We've often been told that, if you look at the ecosystem, the Windows guest is pretty much the only guest platform that's essentially supported by all those vendors, because in some cases they don't support each other. So that leads me to believe that we've got one of the best supported guest platforms on top of any OpenStack distribution today. So we at Cloudbase partner with all the names that you see there. And if you want to add Hyper-V nodes to an existing OpenStack deployment, it's designed in a way in which you can just add the node and make it work. Hyper-C. Hyper-C is a fully converged Windows Server and OpenStack-powered cloud infrastructure. So we use the best things that come out of Windows Server and the best things that we develop upstream in OpenStack: Nano Server 2016, Storage Spaces Direct, Scale-Out File Server, Hyper-V, of course, Windows failover clustering, Open vSwitch, and so on. How many of you are running Hyper-V? I forgot to ask. All right, good. Nice. Thank you. So, one thing that we did was to port Open vSwitch to Windows. Open vSwitch is, of course — let's call it the lingua franca for software-defined networking in OpenStack. And we knew that it was very difficult for Hyper-V and Windows Server to have a future in OpenStack without porting it. So we made an effort: we worked together upstream in the OVS project with the folks at VMware and the OVS community in general. And we are extremely happy about the results that came out of that. Great interoperability. It works.
You can have KVM and Hyper-V nodes in the same identical cloud. All the tunneling is supported — VXLAN, GRE, STT, Geneve, and so on — with the same ML2 OVS agent that we have, and, also very important, we support OVN, OpenDaylight, NSX, and, let's say, every modern type of controller. So, just to be clear, that's KVM and Hyper-V side by side, without any real extra configuration on your part from a networking perspective, delivering a 100% seamless, interoperable network for your guests. Okay, demo time. So what you have here is a fully hyper-converged OpenStack running on top of these NUCs — those are Intel NUCs here. We usually keep them at the booth, so this time we decided, hey, let's try to see if we can throw them on stage and make it work right away, okay? We encourage anybody who wants a closer look to move forward and take a peek. Take a picture. So the idea is that those are four hyper-converged nodes. You have compute, networking, storage, and everything on them, running, of course, Hyper-V, Storage Spaces Direct, Open vSwitch, and so on. They don't have IPMI because, as I was saying, the sixth-generation Intel NUC doesn't have an AMT vPro model. So what we have here is what you would normally do to simulate IPMI on a regular machine, which is pressing a button and releasing a button. And since we didn't have any other simple way to do it, we used Lego with EV3 robotics — with a custom firmware, of course — and there are some motors under here that will simply press the button and automate the whole thing, okay? If you want to see more, you can come by our booth later to see it. There is a controller here; this is a NUC running Ubuntu with Ubuntu MAAS, which is performing the bare-metal deployment of all the nodes and the fully automated deployment of the entire OpenStack. So this simulates what normally happens in a real data center, in which you have, of course, your MAAS node.
The MAAS controller deploys, of course, all the nodes inside of the stack, so you don't have to install anything manually. If you think about the old-school way of installing Windows Server, in which you had to click Next, Next, Next, and so on — forget about it. Everything is entirely automated. Okay, I guess I can connect to it. So those are the Ubuntu MAAS nodes. You can see those are the four NUCs, and the rest are a bunch of other machines running on the controller, okay? What else? Here I have my OpenStack. Let's see, if I do a nova list, I should have everything. Yep. Now let's do some demos. Well, the first thing I want to show you — I'm just RDPing into one of those machines. Almost. I have no networking. Demo effects. Let's see if we have them. So if you are familiar with Windows Server failover clustering, this is the cluster manager, just showing you that we are connected there, no? And this is just one of the many compute nodes that we have. Many — four. And I'm just RDPing into it, okay? Now let's get back to the demo: just fetching a network, which is called private VXLAN, and just doing a nova boot. Nova list. Okay, so I'm just watching my nova list. Spawning. In a few seconds it will be up and running. So what's happening is that the scheduler chose one of those nodes, and all the storage, compute, networking and so on is handled there, no? A bit of suspense. In the meantime, what I can show you: here, on one of the compute nodes, I just did an ovs-vsctl show, exactly like you would do on any Linux machine, and you can see all the flows and all the configuration that we have there, okay? So all the flows, okay? All the configuration that we have, including all the tunnels and everything. Okay, the machine is up and running. So I should be able to go here, and you see that it's here.
So what's happening here is that the Nova compute driver coordinates with the cluster, so that the cluster knows that the machine exists. Now, if I... okay, machine running. Before I migrate it, I'm assigning a floating IP, and I'm just RDPing into it from another shell, so you can see that the machine is here, and I can ping it, no? So I leave the ping running. Now, this machine is running on NUC number four, so if I go here and do a Get-VM, I can see that the machine is running, no? And even here, the cluster tells me that it's on NUC number four. So I'm live migrating it to the first one. So now, on the cluster, I should be able to see that the live migration started, and as you can see, the Nova driver is synchronizing with the cluster to tell it to go to the other side. And voila, it's running on the first one. So now it tells me that it's on NUC 01. And as you can see, the ping didn't care — it's still running, you know? So all this happened automatically, without any interruption for the user. Now let's see what happens if something more brutal happens, like a full failover. So let's migrate it back to NUC 04, and the next thing we're going to do is brutally shut down the node where it's actually hosted. Did I spell the name right? I typed nuc4, and it's actually NUC4, with capitals. Now let's see where it's running. Okay, NUC 1. Okay, I'm migrating to NUC 4. I misspelled the name of the node, so obviously it didn't migrate. Okay, it's on 4, and now what I can do — so here it will tell me that it's active — I can go back to MAAS and tell MAAS to brutally power off that machine, okay? So it will talk to the Lego nodes. Hear it? It's pressing the button, it really is pressing the button now, and the machine is dead, you know?
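From a client's point of view, what this failover demo relies on is simply the instance status eventually returning to ACTIVE after the host dies. A minimal, hedged sketch of such a wait loop is below; the status source is injected so the example runs without a real cloud (with python-novaclient it could plausibly be something like lambda: nova.servers.get(vm_id).status — adjust to whatever SDK you use):

```python
import itertools
import time

def wait_for_active(get_status, timeout=300.0, interval=0.01,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll an injected status callable until the instance reports
    ACTIVE again (success), ERROR (failure), or the timeout expires."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status == "ACTIVE":
            return True
        if status == "ERROR":
            return False
        sleep(interval)
    return False

# Simulate the failover seen on stage: the guest is in flux for a
# couple of polls, then the cluster brings it back ACTIVE elsewhere.
statuses = itertools.chain(["MIGRATING", "MIGRATING", "ACTIVE"],
                           itertools.repeat("ACTIVE"))
print(wait_for_active(lambda: next(statuses), timeout=1.0))
```

Injecting clock and sleep also makes the loop trivially testable; in the demo the same effect is achieved by just leaving a ping running against the floating IP.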
What's happening is, you can see that the cluster recognized that the node is dead, and the machine has been failed over automatically to the other side. So I, as a user, am able to simply reconnect to the machine and access it, you know? Because if I look at a nova list here, the machine is up and running — here you go, NUC 02. Here you go. If you want to see more of these things running, you can come to our booth and we'll show them, okay? So this is a big advantage, meaning all your workloads will just work even if they don't have application-level failover; they work based on host-level failover as well. So essentially what we're getting at here is that with Microsoft technologies and OpenStack, we can basically suit the needs of your pets and your cattle. Yeah. Okay, time to resume our slides. If you just want to see how OpenStack works on Hyper-V, there is another open source project called v-magine that runs on anything running Hyper-V. So even if you have a Windows 8, 8.1, or 10 laptop — it even runs, for example, on a Surface — or just a Windows Server lying around, you can just run it and you will get a one-node, fully functional OpenStack on Hyper-V, okay? It's extremely simple to use; it's just a Next-Next experience. So it's the ideal thing for people who have to learn how OpenStack works. We also got a lot of questions about the performance of Windows and OpenStack. And over the last two years, we've spent a significant amount of time ensuring that we can perform as well as Linux in an OpenStack ecosystem. What these fancy colors and numbers show you are the results of the Rally tests that we did for an apples-to-apples comparison of KVM with Linux workloads against Hyper-V on both Windows Server 2012 R2 and Windows Server 2016, with the same Linux workloads.
And as you can see just by the nature of the colors, Windows Server 2016 basically slightly outperforms KVM for Linux workloads in an apples-to-apples comparison. There is a series of blog posts on cloudbase.it where we explain how all this works. All the tests have been performed with Rally, so with open-source, classic OpenStack tools. The last ones are particularly interesting because they are based on a Hadoop cluster, okay? So, real enterprise workloads in this scenario. What we're trying to show here is that, you know, Windows is a viable alternative as your hypervisor platform, right? So if you have those licenses, there might be a reason to use it. Okay. We already talked about the continuous integration — or do you want to add something more now? Yeah, well, basically, you know, we have about 13 different drivers that we currently test in our continuous integration today. From an ongoing perspective, we're constantly advancing that and adding new technologies to it as we have the time and the need to. So, you know, our goal is to assure that Windows is a viable alternative to Linux in your OpenStack ecosystem, and the best way we can do that is to assure that we're as well tested as everything else. You know, I would like it if you went and took a look for yourself on Stackalytics — you can easily look for Cloudbase Solutions under CI bots and see how we compare to the others — and I guarantee you're going to be extremely surprised, from that perspective. So, what's coming next? I mean, how do we plan to deploy Windows components going forward? Well, we like the Kolla project a lot, which I think is brilliant. And we are basically containerizing all the Windows bits as well: Windows storage, Windows compute — as in nova-compute, of course — and networking, both OVS and the Microsoft network controller stack, and so on. And we will have a unified deployment model based on Kolla.
There are already patches which are currently in the process of being reviewed and merged for an initial release, and then we will continue adding more. There is also a blog post available to give you an additional idea. If you look at the modern type of Windows workloads on OpenStack, Azure Service Fabric is probably what you're looking for. It works perfectly fine on OpenStack as well, fully automated, including auto-scaling and everything else. Docker is, of course, one of the big things that came with Windows Server 2016, and before that, of course, on Windows 10 as well. And lots of people are containerizing their traditional applications and new applications in the Windows world — think about ASP.NET and so on. So, of course, we are going to help a lot in that direction as well, on OpenStack and on Kubernetes. At Google Next there was a presentation together with Apprenda, in which Apprenda showed a full Kubernetes deployment using our Open vSwitch and OVN model. So we are now in the process of making a driver for that, and the preferred networking model for Windows containers in Kubernetes will be, of course, OVN and OVS. We have a session about VDI, if you are curious, including a fully open-source VDI project; that's going to be tomorrow, Wednesday. Thank you. There we actually have a co-speaker. I have something like five or six sessions this time, so I completely messed it up. Peter, do you want to say something about Azure Stack? We get a lot of questions about OpenStack versus Azure Stack, SCVMM and WAP. Traditionally, those are really suited for different purposes from the work that we do here in OpenStack, so we don't really see it as competitive. This is a quote from my employer: Azure Stack and WAP have a different purpose. Azure Stack is a great platform to run Azure services in your data center.
We look at OpenStack as being the solution if you want to have open-source, open-cloud technologies and be able to consume Microsoft technologies alongside. From that perspective, OpenStack brings the standard open-source cloud IaaS APIs. Azure Stack isn't only about IaaS; it brings the Azure services to your data center and into your private cloud space. I think we can wrap up with one last slide. This is not a comparison between apples and apples — these are really apples, oranges and bananas — so don't think that you can put Azure Stack and OpenStack in the same comparison, because they are completely different things, as Peter just said. You can get an idea of the pricing: Azure Stack will have a pay-as-you-go model, based, of course, on Microsoft announcements. And the hardware in order to deploy it will be quite hefty — that's something you have to consider in order to plan Azure Stack deployments, at least based on what's known publicly at the moment, and I put a bunch of links there to the sources that are publicly available, not necessarily from Microsoft. That is, of course, not a small investment in hardware, while with OpenStack you can start as small as you want. As you can see, you can have even a small proof-of-concept multi-node on just some simple NUCs. Lots of people are asking how that compares with System Center as well, and again, even here there is no comparison. System Center — at least how our customers typically see it — is more like an alternative to VMware, not a real cloud alternative, not even with the web portal on top. There are different spaces: one where OpenStack excels, one where Azure Stack excels, and one where System Center excels. We have customers — and quite a lot — that are expecting to deploy both OpenStack and Azure Stack in the future.
Of course, they can also communicate with each other and migrate VMs among them, but most important, don't think that they are mutually exclusive; they are both perfectly valid choices that you can make in the future. I think we can wrap up here. So, as we said earlier, if you have any questions for myself, or want to see an answer come from a Microsoft email address, feel free to email openstack@microsoft.com, and you will get in touch with myself and the rest of our team. And then for Cloudbase Solutions, feel free to reach out and check their website, or come see us at the booth for the rest of the week. We are at the booth, of course, for any question related to our products or our OpenStack involvement and everything, and of course there's the upstream ask.openstack.org. If anybody has any questions — in the interest of time, and to not step on anybody's toes, we'd be more than happy to answer your questions on the floor later. Or now. Go ahead. Yeah, I've got a question. I'm an OpenStack newbie — we've had it for about a month — so a good portion of that time has been spent on Cloudbase's tools. I've stumbled across the Cloudbase tools and I've been able to successfully use them to create 2016 Standard and Nano images. 2016 Core is a totally different story. Curious if you guys have support for Core image generation? Windows Server Core? Definitely. That's what we do most of the time, actually. Yeah, I keep getting a bootres.dll error. Did you ask a question on the Cloudbase forums? I suggest you come down to the booth and we'll have you talk to an engineer. Awesome. Anybody else? Questions? Then we'll see you all later — have a great week here in Boston. Okay, thank you guys.