All right, I think we're ready to go. Hi everybody. My name is Peter Pouliot and I work on OpenStack at Microsoft. And I'm Alessandro; I work at Cloudbase Solutions, the company doing all the integration between Microsoft technologies and OpenStack.

Great. I'm so pleased to have a packed house today with everybody here in Tokyo, so thank you for coming out to listen to us. Before I begin I'd like to ask a couple of questions. First off, how many people in here have already used Windows with OpenStack? That is awesome. Wow. That's a big house; I'm overwhelmed by seeing that many hands raised. Every summit we've been coming to for quite a while now, we've slowly watched those hands progress. So, how many of you are using Hyper-V in OpenStack? We're still progressing on that one. Awesome, I'll take it.

So today we're going to talk about a few things regarding the technologies we've enabled in OpenStack, basically the integrations enabled for Windows and OpenStack. Specifically, we're going to start the agenda with Windows as a guest. Do you want to say two words about the screen over there? Well, as you notice, you see those logos there: OpenStack plus Windows equals love. Our goal here is to make sure that everything related to Microsoft technology and OpenStack works together in the best possible way. Together with Peter we started up a community which is now already quite big. From my perspective I've been at this for almost over four years now; we've been working together for about three and a half. And things are growing a lot, so we have a lot of people developing and working in the Nova community.
We have people working in Cinder, Manila, Ceilometer; basically all the major projects have people involved. It's great to see the whole community working, and if you're also willing to contribute, on every possible level, from filing bugs to documentation to, of course, writing code, we're more than happy to accept any help.

A word on the agenda. So: Windows as a guest, then a little bit of information about Windows licensing in OpenStack. How many of you are absolutely clear on how Windows licensing works in OpenStack? Exactly. Hopefully we can shed some light on that. Then Heat templates, and of course Windows as a hypervisor, meaning Hyper-V.

So, Windows as a guest. Today we can consume Windows as a guest on top of OpenStack regardless of the hypervisor technology used: KVM, VMware, all those things, and we can ensure that the guest experience is identical to Linux. You can take that image, feed it into Glance, and you should have the identical user experience for your Windows guests that you have on Linux, given the integrations we've provided by working with Cloudbase.

Today, if you're going to use Windows on top of any hypervisor other than Hyper-V, you're required to use a paravirtualized device driver layer as the level of integration. On Hyper-V that layer is built into Windows; if you were to run, let's say, a Linux guest on top of Hyper-V, you would need to utilize our Linux Integration Services. For KVM you're required to use the VirtIO drivers, and there's an interesting circumstance, sort of a byproduct of the certification process at Microsoft, which adds some complexity to that VirtIO layer; we'll get into that a little bit later. With VMware you need to use the VMware Tools, and with XenServer and XCP you need to use the XenServer tools. Generally speaking, if you have Hyper-V, it just works out of the box.
Okay. Yeah, but I just want to be clear: we support Windows on any possible hypervisor supported in OpenStack, so it doesn't have to be a Hyper-V-only thing. Of course it works in an easier way there, but any other hypervisor works.

Cloudbase-Init. How many of you are using it? Okay, how many of you know what it is? I think the same people. So, Cloudbase-Init is 100% Python code. That's another thing we wanted to make absolutely clear from the beginning when we started developing Cloudbase-Init: we wanted to make sure that any DevOps person familiar with cloud-init would find a very similar environment on Windows. So we preferred Python code over .NET code in this case, not because there are not enough .NET developers out there, but because Python is somehow the lingua franca when it comes to coding in this space.

If I may add something: just to be clear, Cloudbase-Init is our guest initialization layer for Windows, the equivalent of cloud-init. And I don't know if you know this, but we reached an agreement with the cloud-init community in which we merged the two projects. There's going to be a cloud-init version 2, which is the merge between the two code bases, between Cloudbase-Init and cloud-init. The reason why we didn't do a port of cloud-init to Windows from the beginning is that cloud-init is very, very Linux specific, so it was simply impossible to do. Cloudbase-Init, on the other hand, was started from the beginning as a multi-platform tool; there are different companies already using it on different operating systems, for example FreeBSD, Solaris and so on.
The new cloud-init takes this into consideration, with layers of abstraction over the operating system, while taking all the goodies coming from cloud-init and Cloudbase-Init at the same time and making a new project which is dual licensed, meaning that you can choose between GPL and Apache 2. Cloudbase-Init itself is Apache 2 licensed, so you can do whatever you want with the code under that license.

It comes with an installer, so it's very easy: you take it, you install it, finished. For the new Nano Server we also provide zip packages that can be used to deploy it in environments where MSIs cannot be used, and of course it can also be fully automated. MSIs are, very roughly, the Microsoft equivalent of deb packages or RPMs; you can consider them the atomic way of deploying a package on Windows. Installation can be done via the graphical user interface, the classical next-next-finish, or it can be fully automated, for example as part of a Puppet manifest or whatever else.

It comes with a lot of so-called plugin modules, meaning actions that you can do on the operating system. The most common are user and group management, and storage. For example, when you boot your instance, maybe you have an image which is, I don't know, 10 gigabytes, and you have a flavor which requires 20, so it will automatically expand the disk to reach that size.

It handles WinRM. WinRM is basically the equivalent of OpenSSH in a Microsoft context: it takes care of creating a listener so you can directly use HTTPS to manage your node, as long as you have a username and password, and it also allows passwordless authentication using X.509 certificates. Since Kilo you can create keypairs which can be either SSH keypairs or X.509 keypairs.
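As a rough illustration of the disk expansion behavior just mentioned (this is a sketch, not the actual Cloudbase-Init code): if the flavor's root disk is bigger than the image's virtual disk, the guest disk is grown to match.

```python
# Sketch of the disk-expansion decision: a 10 GB image booted on a
# flavor with a 20 GB root disk should be grown by the difference.

def bytes_to_extend(image_disk_bytes, flavor_root_gb):
    """Return how many bytes the root disk should grow by (0 if none)."""
    flavor_bytes = flavor_root_gb * 1024 ** 3
    return max(0, flavor_bytes - image_disk_bytes)

grow = bytes_to_extend(10 * 1024 ** 3, 20)
print(grow // 1024 ** 3)  # 10
```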
For Windows machines you can use those for passwordless authentication, very useful for automation, for example. Licensing: it takes care of activating your instance automatically. User data: that's probably the most important part here. You can run any PowerShell script as part of the user data, so whatever action you need to do on the machine can be done via PowerShell, and we support Heat templates as well. We also have a pretty big collection of Heat templates, both open source, available directly in the upstream OpenStack heat-templates project, and as part of our commercial offers, so we cover basically all the Microsoft-related workloads from this perspective.

And then there are a ton of other plugin modules: for example setting the MTU, very important if you're using GRE or tunneling, which requires a specific MTU size, otherwise you won't be able to handle the fragmentation; NTP, to set the proper clock; local scripts; and a lot more.

Very important: we don't support only OpenStack. Within OpenStack, of course, we support the HTTP metadata service and config drive. Recently we also added support for config drive in Ironic, meaning that you can do a bare metal deployment and have the config drive deployed, let's say, as a partition inside your target disk. Then there's Amazon EC2, CloudStack, OpenNebula, Ubuntu MAAS, which is one of our favorite bare metal deployment solutions, and so on. You can also specify multiple of them, which means that Cloudbase-Init will simply try one after the other until it finds the right one, so you can have an image that will simply work on every possible type of cloud.

Now, which Windows versions can you use with the OpenStack technology that we have today? From a Windows client perspective, that's Windows 7, 8, 8.1 and 10, in both x86 and x64.
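That "try one metadata service after the other" behavior can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the real Cloudbase-Init code; the service names and data are made up.

```python
# Several metadata services are configured in order; the first one that
# loads successfully wins, so one image works on many cloud types.

class NotAvailable(Exception):
    pass

def http_service(data):
    # Stand-in for e.g. the OpenStack HTTP metadata service.
    if data is None:
        raise NotAvailable("metadata endpoint not reachable")
    return data

def config_drive_service(data):
    # Stand-in for the config drive (a local ISO or partition).
    if data is None:
        raise NotAvailable("no config drive found")
    return data

def load_first_available(services):
    for name, service, data in services:
        try:
            return name, service(data)
        except NotAvailable:
            continue  # try the next configured service
    raise RuntimeError("no metadata service available")

# Here the HTTP service is unreachable, so the config drive is used.
name, meta = load_first_available([
    ("http", http_service, None),
    ("configdrive", config_drive_service, {"hostname": "win-guest"}),
])
print(name, meta["hostname"])  # configdrive win-guest
```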
We forgot to put Windows Vista. On the Windows Server side, all the current platforms: 2008, 2008 R2, 2012, 2012 R2 x64, Windows Server 2016 Technical Preview and Nano Server 2016; XP and 2003 also function with Cloudbase-Init. I'm adding a small thing on this: Nano Server 2016 fully supports Python 3, so we did a lot of work to make sure that Python was working on Nano, which means that Cloudbase-Init and the future cloud-init will work seamlessly on top of either Python 2.7 or Python 3.

All right. Now, as an SSH equivalent in Windows today, we use what's called WinRM and WS-Man to remotely manage the box. It allows us to essentially control it over HTTP or HTTPS, it can be used to execute PowerShell, and we can actually run commands directly from Linux: we can install those tools on Linux and then use them as a starting point for automation, to automate our Windows hosts. If you want to see some configuration examples of that, we have some scripts available on the Cloudbase GitHub repository.

We have evaluation images. Anybody try to download the evaluation images? Okay, good. Nice. We hear some complaints, of course, that the download might be a bit slow. The main issue is that those images are like six gigabytes in size.
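When executing PowerShell remotely (over WinRM or otherwise), a common trick is to pass the whole script through PowerShell's `-EncodedCommand` flag, which expects the script base64-encoded as UTF-16LE. A small standard-library sketch of that encoding step:

```python
import base64

def encode_ps_command(script):
    """Encode a PowerShell script for `powershell -EncodedCommand`.

    PowerShell expects the base64 payload to be UTF-16LE encoded text.
    """
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")

encoded = encode_ps_command('Write-Host "hello from OpenStack"')
# On the Windows side you would then run something like:
#   powershell.exe -NonInteractive -EncodedCommand <encoded>
print(encoded)
```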
So make sure that when you start the download, either use a browser that lets you retry the download without restarting from zero, or make sure to keep your session open. In order to download those images you need to accept a license, so we cannot just provide you a direct link: you first need to accept the license, at that point you get a session cookie with a token, and with that token you can download the image. What we're planning to do is also add a command line tool which will basically do the same thing. The limitation with the license is simply due to the fact that Microsoft, in order to allow us to let you download those images, requires that you accept that license.

We are, I believe, the only company that ever did that. Pretty much. This happened, I believe, in Portland; that was essentially the first Windows cloud appliance that I know of, and it was fully baked for OpenStack. It also comes complete with our beautiful logo; here's the author, by the way.

How to build an image: obviously those are test images. We provide to our customers, of course as a service, pre-built OpenStack images, tested with continuous integration and updated every month with Windows updates and so on. But we also want to make sure that the community can rebuild the images with the same identical tools that we use. There is a repository where you can download the tools, and they will automatically create the image for you. There is an offline creation mode, and then it will boot the VM in order to download and apply all the Windows updates and so on. It's open source; you can find the link in the slide deck.

And now the hot topic: licensing. From a licensing perspective, with Windows you have to utilize one of our existing licensing models.
From that standpoint, if you already have existing Windows licenses that you're consuming, those licenses can be used with OpenStack. If you're doing a greenfield deployment, then you can use any of our current licensing models: volume licensing, or SPLA if you're a service provider. But essentially, what it comes down to is that your best opportunity to get the most out of it is with the Datacenter SKUs, which give you unlimited guest access on top of that hypervisor. And by guests we're strictly talking about Windows guests: the only time Microsoft charges you is when you're consuming Windows on Windows, because we give away Hyper-V Server for free. You can take that today, build an OpenStack deployment on it, and run any operating system other than Windows on it without getting charged by Microsoft.

Okay, so this touches a little bit on the VirtIO piece from earlier. If you want to run Windows on a platform other than Hyper-V with OpenStack, we require you to use it in a supported configuration. What that means is you need to have a paravirtualized device driver layer which is certified by Microsoft.
There are currently only three certified device driver layers that work for Windows on KVM. In order to obtain those, you need to purchase them from your enterprise Linux vendor. What that means is you need to run supported Ubuntu from Canonical, RHEL from Red Hat, or SUSE Linux Enterprise, each with its own respective VirtIO driver layer. If you use the upstream Fedora VirtIO layer for Windows, you will not get support for that Windows guest from Microsoft, because it is currently an uncertified solution. So we highly recommend that you work with a certified Linux vendor to obtain the appropriate license if you want proper support from Microsoft. Obviously VMware fits in there with the VMware Tools, but from a KVM perspective you need to be aware of the paravirtualized device driver layer you're using for Windows if you want to get appropriate support.

We get a lot of questions: does Microsoft support OpenStack? Well, the answer to that is actually yes.
And we support it in the following ways. If you decide to use Hyper-V with a supported version of Windows, and you want to put any virtual machine on it, we will support you regardless of the management platform, as long as you're in a certified configuration. So feel free to use any supported licensing model there: if you have valid licenses, all that is kosher and you should have no issue obtaining support from Microsoft. If you are running Windows in a supported configuration and are for some reason having any problems, you can email that email address, and I assure you that someone will respond and help you figure out what you need to do to get a supported version of Windows. If you have questions regarding any of the topics we just discussed, you can email that too, and we will make sure your questions get answered.

Okay, now let's move on to the next level. Everybody likes to deploy virtual machines on a cloud, so infrastructure services are obviously a mandatory layer if you want to deploy anything useful. But you don't usually present your customer with just a virtual machine; what the customer usually wants is something running on top of it: some service, some application, whatever it may be. So we support a variety of options from this perspective. One of them is Heat, which is of course the orchestration project in OpenStack. We have full support for that, and we have a big collection of templates: Active Directory, Exchange, SharePoint, SQL Server, Windows failover clustering, SQL Server AlwaysOn with failover clustering, and so on. All these things are fully automated. Activities that traditionally required a sysadmin days to deploy are now fully automated and deployed in a matter of minutes, or at most hours.
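To make the Heat side concrete, here is a minimal HOT-style template fragment for booting a Windows instance with a PowerShell user data script (the `#ps1_sysnative` header is what Cloudbase-Init recognizes). The image and flavor names are hypothetical, and real templates for the workloads above are considerably larger; this is only a sketch of the shape.

```
heat_template_version: 2013-05-23
description: Minimal Windows instance with a PowerShell user_data script

resources:
  windows_server:
    type: OS::Nova::Server
    properties:
      image: windows-server-2012-r2   # hypothetical image name
      flavor: m1.medium
      user_data_format: RAW
      user_data: |
        #ps1_sysnative
        New-Item -ItemType Directory -Path C:\bootstrap
        "provisioned by Heat" | Out-File C:\bootstrap\done.txt
```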
Including entire cluster configurations. Once again, that's a purely OpenStack-native user experience. It can be virtual, it can be physical, meaning bare metal, containers and so on; we'll talk about containers later.

Okay, Windows OpenStack components. As I was saying before, we are active on a very large number of projects. We have of course the Hyper-V compute driver. The Hyper-V driver is our darling, meaning it's the first project we started working on, and obviously it's a very mature project. Speak for yourself; well, I started on compute, so...

People sometimes ask: hey, what's the status of support for that project? Because, you know, you usually hear about all the KVM work backed by Red Hat or Canonical or all the big names in the Linux world, and you hear VMware backing their driver, but you don't hear Microsoft backing the corresponding Hyper-V driver. We, as Cloudbase Solutions, are maintaining this driver very actively, together of course with the community, and I can guarantee that it's very high in terms of quality. Whether or not Microsoft is pushing behind it has nothing to do with the quality, the level of maturity, or the feature completeness of the project. We are definitely willing to hear your opinion and to answer any of your questions, either through the openstack@microsoft.com email alias or, of course, at our booth or anywhere else you see us.

Next, the Neutron agent. Of course there is no compute without networking, so we have the Hyper-V SDN support and also Open vSwitch; more on this later. We have Cinder support; we actually have three drivers, two of them on Windows.
iSCSI support and SMB 3. Manila is actually fresh from Liberty: we have a new driver which merged in Liberty. All this stuff is upstream, in the core OpenStack projects. And of course we have Cloudbase-Init for Windows, which, as we said, is like cloud-init, and we have an agent for Ceilometer. Also very important: in Windows Server 2016 there is going to be support for containers; actually it's already available in the preview. We have nova-docker support today; this is actually merging currently, there's a patch up for review in nova-docker, and we are now working on Magnum in order to have full support for Windows containers in Magnum as well. This will be available by the time Windows Server 2016 is released. So once again, these are all native OpenStack experiences in which we're integrating the key Windows technologies, such that you can use them in your OpenStack deployment today.

Okay, we would like to introduce Hyper-V. What is it? It's Microsoft's flagship hypervisor. Its setup is pretty easy. Our Nova driver is in its seventh release; well, since working with Cloudbase it's in its seventh release, and in Essex it was out. So essentially we have support for the Hyper-V releases that are included with the 2012, 2012 R2 and 2016 releases. We have VHD and VHDX support, Ceilometer support, and lots more killer features.

Hyper-V Server is our free edition, similar to ESXi, I guess. It's the full hypervisor platform: we include all the features and functions needed to run a full hypervisor stack within that product. You can take it today and have all the same bits, hypervisor-wise, that you would have in a full Windows Server. It's a stripped-down Windows Core experience.
So you get the reduced footprint, and if you decide to use it, you only need to license your Windows guests. Making a comparison with ESXi: ESXi has some caps on the amount of resources you can use, and to use ESXi fully you need vSphere. Hyper-V is different, meaning that you have everything you need in Hyper-V itself. It just works, it's free out of the box, no limitations, with the exact same features as its Windows Server counterpart.

Now, if you have Windows Server media standing around and you want to try this, all you need to do is enable the Hyper-V role. The same goes for Windows 8.1 and Windows 10: you can use those for your OpenStack workstation development.

Then, of course, on top of Hyper-V you just deploy the Nova compute service, which again comes with an MSI, as we will see pretty soon. It's a seamless OpenStack experience, just like on Linux; again, same concept, Python code only, so the moment you look at the traces and the logs, it's the same identical type of logs that you see on Linux. It's the same identical nova-compute that you would see running on a KVM box. It uses, of course, key features baked into Hyper-V, and we have a driver which runs inside nova-compute; that's the Hyper-V driver.

Some key differentiators. Of course, hypervisors today are commodities, meaning you can take KVM, ESXi or Hyper-V and they do more or less the same thing, but there are still some differences; some are stronger than others on specific aspects. For example, Hyper-V has shared-nothing live migration out of the box: you take two Hyper-V boxes, you have live migration. It just works, period. RemoteFX.
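Pointing nova-compute at the Hyper-V driver is, roughly, a configuration change. The fragment below is an illustrative nova.conf sketch; the exact option names and sections vary by OpenStack release, and the paths shown are examples.

```
[DEFAULT]
# Select the in-tree Hyper-V compute driver
compute_driver = hyperv.driver.HyperVDriver
instances_path = C:\OpenStack\Instances

[hyperv]
# Driver-specific options live here; names vary by release.
# e.g. relaxing CPU feature checks for live migration across hosts:
limit_cpu_features = false
```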
So if you want to do VDI, it's baked into Hyper-V, and thanks to our driver it's also exposed to OpenStack: you have GPU acceleration, so you can have all these features as part of your Windows guests, for example if you give your users access to virtual machines for desktop usage.

Shielded VMs is a new thing coming in 2016. It's a security feature that allows you to guarantee to your users that even if someone attempts to compromise the hypervisor itself, your VMs will always be safe. How shielded VMs work is a pretty complex and long discussion that could take an entire hour, an entire session, but it's an amazing feature. It's based on the virtualization-based security mode that comes with Windows 10, which is something that will radically change the way we think about malware today. It's a 2016 feature, meaning you can take the Technical Preview today and try it out, but it will be available in the RTM, of course, next year.

And Storage Spaces Direct, which is used for hyper-convergence, meaning you can have shared-nothing storage, distributed across multiple nodes like you would do with GlusterFS, Ceph and so on, in which, as we will see pretty soon, every single node has compute, storage and networking, instead of having those distributed across separate nodes in your network.

Now, continuous integration. One of the key aspects of OpenStack Hyper-V is that it's fully CI tested with Tempest, and we report upstream to Gerrit. We are one of the largest CI contributors today and have been since we stood up the continuous integration infrastructure. We started with Nova, and we also have CI for Neutron with the Hyper-V native virtual switch; we're currently working to enable our OVS driver in the CI as well.
We also have downstream testing for those components, because in some cases we have code waiting in the pipeline to get integrated into the upstream release. Those two projects are networking-hyperv and compute-hyperv, and those downstream projects will in most cases contain all the integrations we have waiting to get into the upstream tree.

We also maintain and run CI for Cinder: that's Cinder with iSCSI targets on Windows Server as well as SMB 3 targets, and SMB 3 connectivity to Linux as well, because the Samba team has done a tremendous amount of work to enable the SMB 3 protocols in Samba. And also Manila, for both Linux and Windows guests. So I believe we are probably the only company with so many CIs running at the same time. I find it quite amazing sometimes that we've gotten this far.

Okay, let's move to Neutron. We have had the Neutron plugin for Hyper-V as part of the project since 2013, back in the days when it was still called Quantum. Since Havana we support VLAN, flat networking and local networks in this specific plugin. It's a plugin-plus-agent model; the cool idea is that it works as an ML2 plugin agent, which means that you can have the ML2 plugin and as many agents as you want, including, for example, Open vSwitch. This is the typical network diagram you will have, exactly like every other type of OpenStack deployment: you will have some controllers, you will have networking nodes, and you will have compute nodes.
In the new hyper-converged model it's also distributed, meaning that the networking node and the compute node merge.

OVS interop: the ML2 mechanism driver we have is compatible with Open vSwitch, meaning you can have one KVM node running Open vSwitch and one Hyper-V node running the native networking stack, and they will just work together seamlessly. For example, if your tenant has one Linux machine and one Windows machine, the two of them will talk to each other over, for example, a Neutron tunnel. One very important thing we did from the beginning, from a design perspective, was to make sure you could take a Hyper-V box, put it in an existing OpenStack network with, for example, KVM or whatever else, and make it just work seamlessly; this goes exactly in that direction. We support, of course, the types of networks we were talking about before, and we just use the L3, DHCP, firewall and so on agents that come directly with Neutron.

Next: that was not enough. Of course most of our customers were happy, but some of them were asking: hey, I want OVSDB, I need OpenFlow, I need OpenDaylight compatibility. In short, they needed Open vSwitch. So we said, okay, let's move down to the kernel level and port Open vSwitch to Windows. We did this very, very cool porting work, and now we have Open vSwitch working natively on Hyper-V. This is actually work that we did together with VMware, so that was a very nice community effort, and all the code we contributed, the entire project, is currently available as part of the upstream OVS project.
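The KVM-plus-Hyper-V coexistence described above boils down to listing both mechanism drivers in the ML2 configuration. This is an illustrative ml2_conf.ini sketch; driver names and available options depend on the release and on having the Hyper-V mechanism driver package (networking-hyperv in newer releases) installed.

```
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vlan
# Hyper-V and Open vSwitch nodes coexisting in the same deployment:
mechanism_drivers = openvswitch,hyperv

[ml2_type_vlan]
# example physical network / VLAN range mapping
network_vlan_ranges = physnet1:1000:2000
```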
There's great interoperability support for all the types of tunnels you would expect with Open vSwitch: VXLAN, GRE, STT and so on, and there is some Geneve porting in process as well. Again, it's a Neutron ML2 plus OVS agent model, so the same agent that you run on Linux works exactly as-is on Windows. And most important, it's compatible with OpenDaylight and NSX, because we also support the OVSDB protocol. So even if you don't run an agent locally on the Hyper-V box, it will just work, because OVSDB is available over the management network.

Cinder. For Cinder we have an iSCSI Windows driver that basically utilizes the existing Windows iSCSI target infrastructure, exporting VHDs today; I believe in 2016 we'll be able to do raw block devices as well. It also does SMB 3, and there's the SoFS, the Windows Scale-Out File Server, driver. It's a great companion for Hyper-V; it basically allows us to do TCP offloading of the transport layer. Don't think of SMB 3 as your old-school CIFS or SMB: it's used by Microsoft as a high-speed data transfer layer for remote disk access, and it can be used with any hypervisor.

With Manila, same thing: we enabled the SMB 3 driver there for both Linux and Windows guests, and that basically allows Windows file services to be exposed to guests through the APIs.

Dashboard integration: obviously Horizon can be used with Hyper-V as well.
There is one small difference: all the other hypervisors use VNC as their technology for accessing the console of your guests, while we use RDP. To do that, we have a project that we contribute to and currently maintain, FreeRDP-WebConnect, which consists of an HTML5 layer that connects via WebSockets to a service, and this service connects via RDP to the Hyper-V host. We are not talking about RDP into the guest: it's used to connect directly to the host, meaning Hyper-V, and that redirects the console access. That's also what you would use, for example, to access a Linux guest running on Hyper-V itself. In short, it just works. Our goal was, again, a seamless experience: what you get with VNC is what you also get with RDP; you just have this additional component to install.

Nova, of course, has an installer. If you don't want to automate it with Chef, Puppet and so on, as we'll see in a second, you can just run it and it will guide you, asking for all the relevant information: where is your Keystone, where is your Glance API, where is your AMQP service (RabbitMQ in this case), where do you want to put your instances, and so on. You fill in this information and at the end you have an installed, deployed node.

Or you can use DevOps tools. Today there's a rich ecosystem of DevOps tools available across operating system platforms, and you can use those: Puppet, Chef, Salt and Ansible all work well with Windows. All those communities have put in a substantial effort to embrace DevOps automation with Windows using their technologies, and we use some of them today in our CI as well. How many of you are using Puppet? Okay. Chef? Okay. Juju? Okay. Salt? Ansible? Okay, good, I like it; that's a pretty good mix.
Okay, now Nano Server. Welcome to the next generation of Windows Server; we're going to reinvent your Windows experience with Nano Server. What is it? It's a micro version of Windows, about 400 megabytes, right Alex? Yeah, it's lightweight, it can be PXE-booted. It's Windows without windows: not Server Core, it's Windows without windows, it's just a console. Once again: extremely small footprint, extremely fast to deploy, and it's included as part of Windows Server 2016. It has Hyper-V, storage and whatever else, including Storage Spaces Direct.

Now, Storage Spaces Direct is sort of our implementation of a distributed file system, or a distributed block storage system, that can be shared, like shared storage, between all the nodes for converged purposes. We can build scale-out file services with it and then use those file services, as we said, with Cinder and Manila.

So let's put all these things together and we get the OpenStack hyper-converged design based on Hyper-V. This is something we announced yesterday here at the summit; it's brand new stuff. OpenStack has the first hyper-converged design ever based on Windows Server. How does it work? You take your Hyper-V nodes, for example with Nano Server. We have a demo running at the booth, by the way, so if you come to our booth, booth 32, and take a look, you will see it. Each of those nodes has Hyper-V for the compute part, networking, and of course storage. All the local disks, your regular out-of-the-box commodity SAS or SATA disks, SSDs or regular mechanical hard disks, can be used for this type of pooling. Nothing expensive.
So finally we get rid of all the cost that a SAN would imply. The storage is distributed, meaning that thanks to Storage Spaces Direct all those disks form a single pool, or multiple pools, depending on how you want to distribute them. On top of those pools you can create volumes in which data is mirrored, depending on the fault-tolerance settings that you want, and striped across all those machines. The result is that at any time you can just take out one of the machines, let's say it dies for any reason, and everything will keep on working. What happens, of course, is that you have some dedicated networking for transferring and synchronizing the data across the nodes, to make sure that everything keeps on working.

The good part is that this also works together with SMB Direct and RDMA, so you have hardware offloading on the most commonly used types of NICs in modern infrastructure today, which means you don't have to dedicate CPU cycles, or at least not many CPU cycles, to this. This is the same technology that Microsoft uses elsewhere: it comes with the platform and it is used in other types of clouds as well. We use it here in OpenStack.

On top of it we can use Cinder, for example with Scale-Out File Server, which is a clustered application that uses SMB 3 features to make sure that you have fault tolerance and balancing across those nodes for the hosts and guests, with the Cinder services running on top of it. Plus, of course, Hyper-V: you take out one node, let's say one node dies, you can just pull the power. What happens is that the storage keeps on working, so you will not lose anything from your volumes, and at the same time the virtual machines will simply be respawned or live-migrated to one of the other nodes.
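The mirror-and-stripe behavior described here is easy to sketch in a few lines of code. This is not how Storage Spaces Direct is actually implemented, just a toy model of the idea: every stripe of a volume is written to two different nodes, so when any single node dies, every stripe is still readable from a surviving replica:

```python
# Toy model of a mirrored, striped volume across a pool of nodes.
# Illustrative only: shows why losing any one node leaves all data readable.

class Pool:
    def __init__(self, node_count):
        # node id -> {stripe id: stripe data}; None means the node is dead
        self.nodes = {n: {} for n in range(node_count)}

    def write_volume(self, data, stripe_size=4):
        stripes = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
        for sid, stripe in enumerate(stripes):
            primary = sid % len(self.nodes)            # stripe across the nodes
            mirror = (primary + 1) % len(self.nodes)   # 2-way mirror on a neighbor
            self.nodes[primary][sid] = stripe
            self.nodes[mirror][sid] = stripe
        return len(stripes)

    def fail_node(self, node):
        self.nodes[node] = None  # "pull the power" on one machine

    def read_volume(self, stripe_count):
        out = []
        for sid in range(stripe_count):
            replicas = [n for n in self.nodes.values()
                        if n is not None and sid in n]
            out.append(replicas[0][sid])  # any surviving replica will do
        return b"".join(out)

pool = Pool(node_count=4)
data = b"hello hyper-converged openstack!"
count = pool.write_volume(data)
pool.fail_node(2)                       # one node dies
assert pool.read_volume(count) == data  # the volume is still fully readable
```

A real deployment adds resynchronization when the node comes back and lets you choose higher resiliency levels, but the fault-tolerance property is the same as in this sketch.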
So: simple, easy, automated fault tolerance and everything, full support if you want, for pets and not only for cattle.

Next thing: Juju. We are big fans of Juju. Actually, I believe it's the easiest way today to deploy workloads on top of, for example, an OpenStack cloud, and, very important, it works on Linux and it works on Windows. We have a ton of Cloudbase Windows charms: Active Directory, Hyper-V of course, Exchange, SharePoint, Windows file servers, SQL Server, SQL Server AlwaysOn, VDI, failover clustering, Skype for Business (that's something we're going to release pretty soon), and so on. If you want to try the charms just let me know; Will is back there somewhere, and you can ask him any type of question. We are more than happy to give you trial versions, and we also have open source charms freely available.

If you want to try OpenStack on Hyper-V, all you have to do is use v-magine, which is a tool that we developed for proofs of concept. The usual complaint we hear from users, especially newcomers to the OpenStack community, is how difficult it is to deploy OpenStack. We said "why?", and we created a tool with which, in just a couple of clicks, you can deploy the entire OpenStack stack on anything, starting from your laptop. Let's say you have a laptop with Windows 8, 8.1 or 10 running Hyper-V: you get OpenStack running there entirely, so your laptop will be the compute node and you will have a virtual machine running the remaining Linux components. And it just works, so there's no hassle in understanding how OpenStack works: you can just deploy it, use it, and then once you understand how it works you can think about how to do a deployment at scale. We are also adding support now for the Nano Server hyper-converged setup that we were talking about. It's freely available on our site.
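To show how compact a Juju deployment description gets, here is a hypothetical bundle fragment in the spirit of the Windows charms mentioned above. The charm names, options and relation are placeholders for illustration and may not match the published charms exactly:

```yaml
# Hypothetical Juju bundle: an Active Directory controller plus a
# SQL Server unit joined to the domain. Names are illustrative only.
services:
  active-directory:
    charm: cs:~cloudbase/win2012r2/active-directory
    num_units: 1
    options:
      domain-name: cloud.example.local
  sql-server:
    charm: cs:~cloudbase/win2012r2/mssql
    num_units: 1
relations:
  - ["sql-server", "active-directory"]
```

Deploying the whole bundle is then a single `juju deploy` of this file, which is what makes it attractive for standing up Windows workloads on an OpenStack cloud.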
Just download it, use it, and so on. Now, if you have support questions or questions regarding OpenStack, please feel free to email openstack@microsoft.com, or look at ask.cloudbase.it for information. Okay, we got to the Q&A part. Any questions?

[Audience] Thanks, gentlemen. For the licensing of Microsoft guests on Hyper-V, can we use our existing SPLA agreement, so we still license per socket?

Absolutely. In fact, I believe that for a large-scale deployment, for public cloud, that's probably the way to go, because you actually need to license to third parties; and even if you have multi-tenant scenarios, that's the best option. Next question?

[Audience] Thank you. For RemoteFX, have you tried running benchmarks on instances running in the cloud?

Yeah, we have some benchmarks, and we're actually going to publish them pretty soon; we were evaluating them just today. If you want to come to the booth we can talk about it. Okay, thank you very much. You're welcome.

More questions? Wow, no questions. Don't be shy, come on. Did we cover... oh no, we didn't cover deploying on Ironic, did we? Sorry guys, we missed a lot. We support deployment on Ironic, and also, since we're talking about it, we have a manga that we released for this summit, and on the last page we have a tribute to Ironic. Since it's "metal as a service", you see the guy here with the horns up. So yeah, we actually have full support for Ironic as well. And actually MAAS and Ironic are the preferred ways of deploying on bare metal in a cloud. Basically, we added Ironic support as well, as we mentioned. And let's not be short on this: Cloudbase also added support for Microsoft's Open CloudServer.
How many people here knew that Microsoft was a contributor to OCP, the Open Compute Project? Wow. Thank you. So, yeah, basically they added support for Microsoft's Open CloudServer platform, both in Ironic and in MAAS. So you can use Ironic to deploy on Open CloudServer, which is actually a great platform. Other questions? Yep, here in front. Just a second for the audio device to arrive.

[Audience] Okay, so you support Horizon, right? And what about Virtual Machine Manager?

Okay, so hold on a second. Let's talk for a moment about this. In terms of OpenStack technology that works with Windows, we're talking about Windows the core platform; we're not talking about any of our management platforms. Those are different types of deployment, apples and oranges, so from that perspective they're mutually exclusive. If you were to deploy them together, you would essentially have two different management solutions running on the same infrastructure, completely independent of each other. Microsoft has a specific target audience for its SCVMM deployments: they're typically the highly available ones, the deployments that require HA at the hypervisor level and those sorts of things. OpenStack is a different model of deployment. I like to think of it like this: OpenStack loves toasters. We have a compute toaster, a storage toaster and such, and now a hyper-converged toaster. So the model is different, and from an early-on perspective we needed to focus on just enabling Windows hypervisors in OpenStack. So yeah, there's no overlap between what happens in SCVMM and OpenStack. SCVMM might be able to be deployed on top, but we haven't done any work in that space with the OpenStack group.
We are open, let's say, if there are customer requirements; that's something we never say no to. But you can see it like this: the platform, the Windows operating system with Hyper-V, is the basis, and then you have a variety of management stacks on top. One of them is VMM, another is OpenStack, and there are some others. That's the main idea: all the features are available in the platform so that these management stacks can consume them. There is nothing available to VMM that is not available to OpenStack from this perspective, because they're both consuming the same underlying core features. So, once again, our primary goal was to take those features that we think are great in Windows and allow OpenStack users that want to use Windows in their OpenStack deployment to get access to those features.

Okay guys, we are at the end of the session. If you have any other questions, please come by our booth, or we will be outside here, happy to talk with you about any question. Thank you. Thank you, everybody.