Okay, can everybody hear me? Cool. So if you were at the introductory session a little earlier this afternoon, Dan Wendlandt talked about some lightning talks that our distribution vendors will do. So this is that session. And I am Dan Floria, I'm part of the OpenStack team at VMware. So let's get started. Before we actually go into the presentations from all the vendors that we have up here, I just wanted to give another plug for the Hands-on Lab that we have for OpenStack running on vSphere. This was mentioned earlier also, and I think it may be showing up in your schedule as also happening in this room. The Hands-on Lab is something that you can do yourself: you can log in, the URL is up there, and you can try it out. If you need help with the Hands-on Lab, there are people who can assist. The Hands-on Lab is probably the easiest way for you to try out OpenStack with the vSphere platform. Actually, it's probably the easiest way to try out OpenStack, period. You just go to that tiny URL and it sets up an environment for you, ready to go, with vSphere running underneath. It's very easy to get started, and if you need any help, there are people in the back of the room, and maybe in the outside room as well, if you have any questions. There's also the IRC channel; you can get help that way too. So this session, as I mentioned earlier, is for our distribution partners to highlight the great work that they've done to support the VMware technologies. We're actually really excited about this. We want to provide customers with the choice of how they want to deploy OpenStack on top of the vSphere platform, so we want to provide you with a choice of what distribution you want to use. I'm really excited to have all the folks up here to talk about the great work that they've been doing. And it's actually a lot of work to support an additional platform.
There's a lot of work that goes into modifying the deployment tools that they have, into building the engineering expertise, and into building a support organization to support a new platform. So they've done a lot of great work, and they have a challenge right now to present all of it in about five minutes or so. These are lightning talks; they're going to be quick. And hopefully I can get the slides to work. So without further ado, I want to introduce Dave Russell from Canonical, who's going to come up here and talk to you about Canonical's work. Thanks a lot, Dan. Can everybody hear me? How about now? That sounds a bit better. Hear me at the back? All right, good stuff. So this is all about Canonical and the amazing work we've been doing with VMware on OpenStack. Thank you. Hopefully a lot of you know who Canonical is, but just in case: Canonical is the company behind Ubuntu. We're also known as the great big orange stand with the cool orange boxes on it. If you haven't seen them yet, go down and take a look: a 10-node cluster in a box, very cool. We've been around for a little while, since 2004, with over 600 people across over 30 countries; we're a very, very widely distributed organization. We've got people all over the globe, but we've got major offices in the locations above. We're also very much built around the Ubuntu platform. We see it as a platform for innovation: a lot of the cool new technologies that are coming up, a lot of big data, a lot of NoSQL, a lot of great stuff like, oh, I don't know, OpenStack lands first on Ubuntu. Nine out of ten OpenStack production clouds are running on Ubuntu. We've been supporting customers in production with OpenStack, including very demanding financial services, telcos and other folks, for in excess of two years now. So we've been supporting OpenStack, and supporting customers using OpenStack, for quite some time.
We're also a fairly significant part of the public cloud guest story as well. So this is really about our partnership with VMware and the work we've done together on OpenStack. We were the first company to announce our relationship with VMware, a year ago now at the Portland ODS, in fact during the keynote, where we said we would be working jointly together on VMware and OpenStack engagements. We were the first company to actually do an engagement jointly with VMware, and we'll have a little snippet on that a bit later. I'd like to share with you a couple of high-level architectures that we've arrived at as we've engaged with certain customers; these are things that we've found really work for us. The first one is basically everything virtualized on VMware vSphere. I'm sure VMware have been telling you all day about the great reasons why you'd want to do this, but we've found this has been pretty effective. For organizations that are already incredibly familiar with VMware but want to start to get a taste of that OpenStack goodness, this is definitely the recommended way to go about it. So you've got a VMware vSphere management cluster, you've got installations of the Ubuntu Server OS on top of that, and on top of that you've got all of the OpenStack services, and then the Ubuntu management and orchestration services on top of that. And on your right-hand side here you've got a separate vSphere cluster, or even several vSphere clusters, that are then talking through the OpenStack Nova Compute driver and being driven by the OpenStack environment. And of course you can then run anything you like on top of that, so that can be, as you see here, Ubuntu Server guests, other enterprise Linux guests, or even Windows Server guests. So that's option A. Option B is: run your OpenStack services on physical servers, so that's Ubuntu plus all the standard OpenStack services on bare metal, and then have all of that talking to a VMware vSphere environment.
So the only difference between A and B is pretty much the OpenStack services being on physical hardware versus being virtualized in vSphere. Different organizations have different levels of comfort with introducing something new into their environment, so this suits some people better. Some people have different ideas as to how they want to scale out their environment, and again, this suits some of them better. And then finally, there's what I call the rainbows-and-unicorns option. This is what a lot of people, I find, really want to drive towards. They want something where they've got the core OpenStack services, probably on physical bare metal; they want the VMware vSphere environment, because they've got key things that they want to run on it and it gets them HA for those things, which is important to them; and they're also wanting to either dip their toes into KVM, or they already have some existing KVM. And in that case you've got complete parity of platforms: a single OpenStack environment that can talk to all of these different pieces. So I want to very quickly whisk you through a single customer deployment story that we did jointly with our friends at VMware. This is a customer who had an existing infrastructure-as-a-service platform. They'd prototyped it within their organization, and they could see there was immense demand for it, but they really needed something a bit more robust, and so OpenStack was the obvious answer. They chose to go for that option A, everything virtualized on VMware, including the core OpenStack services. As for the implementation and the results: they did a really excellent job with this. We provided them consultancy and services, and VMware provided expertise on their side. A couple of things they did really well: they had really good internal stakeholders, and they got everybody together on their side, on our side, and on the VMware side.
And we made sure that together we made this project a success, which of course it was. We learned a couple of interesting lessons. This was a large financial services organization in the US, and initially when we deployed OpenStack, we did not have SSL encryption from start to finish all the way through the OpenStack environment. This was something that was important to them. Luckily, due to the way that we deploy OpenStack, whether it's on bare metal, virtualized guests, or indeed on VMware, we use our charms and Juju. We just needed to alter the charms a little bit. A week later, we rolled out upgraded versions of the charms and redeployed everything, so great lessons there. And for the future outlook: the project's expanding, the customer's expanding, and it's all ongoing. That's it. Thanks very much. Thanks, Dave. So next we have Andy from Red Hat, who's going to be talking about their efforts. Can we pass the remote? Okay, good afternoon. I'm Andy Cathrow. I look after the virtualization product management team at Red Hat. You all know who Red Hat are and what we do; I want to quickly start with how we do it, because I think it's important. So there are four steps that we take to go from an upstream project to a downstream enterprise product. The first, and in my mind the most important, is participation. We believe you have to participate and engage in all aspects of the community to be able to support your customers. It's not as simple as compiling and shipping code. There's a bar that we set at Red Hat before we can ship a product, and it involves engineers and QA actively engaged in the project. OpenStack is more than just Python services that are running and doing orchestration. It's running on top of Linux; there's a messaging layer, a database layer, user-space libraries. Each one of those has to be integrated, tested and supported.
If you get a bug and it's not an OpenStack bug, but it's something in Qpid or in RabbitMQ or in MariaDB or Galera, you can't say "upstream issue." You have to own the issue from soup to nuts. So we believe you have to have broad and deep participation. Integration is taking all those upstream components, be it from Linux, from OpenStack and other projects, putting them together and integrating them to make sure they work, but also filling the gaps. There are many gaps that aren't filled by upstream OpenStack: installers, high availability, monitoring and reporting. So it's filling those gaps to put together a complete solution. Then stabilize: that's testing, certification, bug fixing and backporting. And finally delivery, and that means not just giving you the product, but supporting you, giving you the patches and the bug fixes when you need them, not on an upstream schedule, plus the services and the training. So there are two distributions from Red Hat. RDO is our community distribution. It's published on the upstream six-month cadence, with the six-month life cycle. Anyone can download it and install it on Fedora, CentOS, RHEL or any RHEL-derivative distribution. It's on the upstream schedule and life cycle; there's no commercial support, but there's a vibrant community around it. And then RHEL OSP is our commercial distribution: enterprise-hardened, long life cycle, with a certification and support ecosystem. When we talk about life cycle, why is it important? Upstream has a six-month life cycle. We know about the cadence; we're here celebrating Icehouse this week. Roughly six months from now, there will be no more upstream patches for Icehouse, and if you have a bug, well, you should go to Juno. That's not good enough for enterprise deployments. You need a longer life cycle. You'll see here a two-month gap between upstream in April and downstream in June, and those two months are used for testing, bug fixing, backporting and certification.
Now, when bugs come in here, we fix them in trunk and then we backport them to our stable branch, and we support that for three years. Quickly, around deployments, there are three projects to mention. The first is Packstack, a very simple tool for deployments and POCs. You run a command-line tool, you answer some questions, and it will deploy a single-node or multi-node cloud. Great for POCs. For production deployments, we have the Red Hat OpenStack installer, based on Foreman; you boot it from a USB key or CD-ROM, go through a wizard, see the nodes it's discovered, compute nodes, services node, configure them, check a box for HA, and you're fully deployed. And finally TripleO, which is the upstream deployment and management project. It's still a work in progress; I hope it's going to be tech preview by the time we get to Juno, but we're heavily invested upstream in TripleO. So, releases. RHEL OSP 4 is our Havana release, released back in December. I'll talk in a minute about our A3 async update that added support for vCenter. The release that we're concentrating on now is RHEL OSP 5, our Icehouse release, which is in beta right now. It will be coming out GA in June and runs on RHEL 6 and RHEL 7. A couple of notable features: we added support for RabbitMQ in addition to Qpid, so you can pick a messaging platform now, and we added support for MariaDB instead of MySQL, with Galera for active-active support. Let me skip this for a second; I'm just going to run out of time. So VMware has been working with Red Hat for many, many years now. I think it was the first hypervisor we supported, before Xen, before KVM, before Amazon. There's a long-time engineering relationship between Red Hat and VMware. That means if you've got a RHEL guest running on top of VMware and there's a bug, we have the engineers to triage it and work together upstream to fix those issues. It's a similar model to what we're working on with OpenStack support.
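To make the Packstack flow described above more concrete (run a command-line tool, answer some questions): a Packstack run is driven by an answer file of CONFIG_* keys in a single [general] section. Here is a hedged sketch in Python of generating such a file; the key names shown are only a representative subset, and in practice you would generate the real file with `packstack --gen-answer-file`.

```python
# Hedged sketch: build a minimal Packstack-style answer file.
# The CONFIG_* keys below are a representative subset only; a real
# answer file comes from `packstack --gen-answer-file` and has many
# more options.
from configparser import ConfigParser
from io import StringIO

def make_answer_file(controller_ip, compute_ips):
    cfg = ConfigParser()
    cfg.optionxform = str  # preserve the upper-case CONFIG_* key names
    # Packstack answer files keep everything in one [general] section.
    cfg["general"] = {
        "CONFIG_CONTROLLER_HOST": controller_ip,
        "CONFIG_COMPUTE_HOSTS": ",".join(compute_ips),
        "CONFIG_PROVISION_DEMO": "n",  # skip the demo tenant for a POC
    }
    buf = StringIO()
    cfg.write(buf)
    return buf.getvalue()

answers = make_answer_file("192.0.2.10", ["192.0.2.11", "192.0.2.12"])
print(answers)
```

You would then feed the saved file back with `packstack --answer-file=...`; the point is simply that the interactive questions and the file format are two views of the same configuration.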
We're not a compile-and-ship company. Before we can add support for any platform, such as the vCenter driver, we have to have engineers working on the code base. So go back a few months, when we started down this path with VMware: we looked at the upstream backlog and we coordinated with VMware engineering to make sure the code reviews were being done. We're participating in those, and participating in bug fixes. So what we're delivering now we can support for three years, and we know it's a more polished product than if we'd compiled and shipped at the start of Havana. So what we support today, in RHEL OSP 4 with the A3 async update, is deploying the vCenter driver. Note this is not the ESX direct driver; we're only supporting the vCenter driver, the one that's got the upstream support and backing of VMware, along with NSX. We do have Nova Network support, but that's considered more for POC deployments with the vCenter driver. We've got maybe a minute and a half left for questions; any questions I can answer? One quick thing: I mentioned Packstack, and that's now been updated, so you can quickly deploy a Packstack-based deployment with vCenter. Our Foreman-based installer is still a work in progress; we've slated vCenter support there as our first async update for RHEL OSP 5. Okay, thank you. Thank you very much. So next we have Pete from SUSE. Good afternoon everybody. I'm Pete Chadwick from SUSE. We are also a long-time Linux distribution; I hope everyone has at least heard of the green chameleon before.
What I want to start off with is that VMware and SUSE have worked together for a very long time, at least 10-plus years. Similar to Red Hat, we've supported SUSE Linux Enterprise Server running on VMware really from the beginning, and SUSE Linux Enterprise Server is fully supported to run in a vSphere environment; we integrate all the tools that you need to optimize use of that vSphere environment. One of the things we've actually done is work very closely with VMware, and all of VMware's virtual appliance applications actually run on SUSE Linux Enterprise Server. So if you run the vCenter virtual appliance, you're actually running SUSE Linux Enterprise in your environment. We have a number of extensions to SUSE Linux Enterprise Server, including one for SAP, which can support running SAP virtualized on top of VMware. We also have a high availability extension, which complements the capabilities you have in VMware by providing application-level availability for mission-critical applications. Lastly, what we're here to talk about today is SUSE Cloud, which is our OpenStack implementation. This is a high-level view of the VMware support within OpenStack, and you can see the pieces that we support: all the stuff in light green is essentially basic OpenStack, and the things highlighted in yellow are the drivers you can take advantage of to access your vSphere environment. We support all of those, so you can easily deploy SUSE Cloud (the current release is SUSE Cloud 3, based upon Havana) and take advantage of an existing vSphere environment through the vCenter drivers. In terms of capabilities over and above what you get with the vCenter implementation, we also started shipping high availability for the control plane, which we really think complements the environment. Most of our customers, I would say about 80%, actually run SUSE Linux Enterprise Server on VMware.
It's clearly the most prevalent hypervisor that our customers run, and they're running mission-critical applications in that environment. So when they start looking at how they want to move to OpenStack, the first thing they told us was, "I need a highly available control plane." So that's one of the things we have focused on. And we've also simplified the deployment, not only of that, but also of the vCenter integration in your environment. This is actually a screenshot from the SUSE Cloud administration server, which is our installation framework, our deployment tool built upon Crowbar. The folks in the back of the room probably can't see it, but if you want to come by the booth, we can give you a demo so you can see it more closely. When you have a node available, so you can see the first node down over here, a physical server that you're ready to deploy a compute node on, you have a number of options for what kind of compute node you want that device to be. It can be a Hyper-V node, a KVM node, a QEMU node, a Xen node, or it can be VMware. So in this case, I've dragged compute1 into VMware and said I want to deploy the vCenter proxy onto that node. It then comes up with the next screen that asks: okay, what's the IP address of vCenter? What's your username? What's your password? And which clusters are you going to pick up from vCenter to pull into your OpenStack environment? So pretty straightforward, pretty easy; once you've done that, it's all available. We've also got a plug-in for Neutron. I didn't show the pull-down, but when you say you want to configure networking, you get a number of different selections, one of which is VMware, which is actually the NSX driver. Once you pick the NSX driver, again, it's the same kind of idea: What's your username? What's your password? Which controllers are you using? What's the transport zone? What are the gateways?
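Those wizard questions about controllers, transport zone and gateways end up in the NSX plugin's configuration. As a rough illustration of the kind of file a deployment tool writes from those answers, here is a hedged Python sketch; the option names approximate the Havana-era Neutron NSX/NVP plugin and should be treated as illustrative, not authoritative.

```python
# Hedged sketch: the sort of NSX plugin config a deployment wizard
# might generate from its answers. Option names approximate the
# Havana-era Neutron NSX/NVP plugin and are illustrative only.
from configparser import ConfigParser
from io import StringIO

def nsx_plugin_conf(answers):
    cfg = ConfigParser()
    cfg["DEFAULT"] = {
        "nsx_user": answers["user"],
        "nsx_password": answers["password"],
        # Comma-separated list of NSX controller endpoints
        "nsx_controllers": ",".join(answers["controllers"]),
        # Default transport zone and L3 gateway service UUIDs
        "default_tz_uuid": answers["transport_zone"],
        "default_l3_gw_service_uuid": answers["l3_gateway"],
    }
    buf = StringIO()
    cfg.write(buf)
    return buf.getvalue()

nsx_conf = nsx_plugin_conf({
    "user": "admin",
    "password": "secret",
    "controllers": ["192.0.2.30:443", "192.0.2.31:443"],
    "transport_zone": "11111111-2222-3333-4444-555555555555",
    "l3_gateway": "66666666-7777-8888-9999-000000000000",
})
print(nsx_conf)
```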
So it really leads you through the whole process to quickly stand up an OpenStack environment and integrate it with your existing VMware infrastructure. And this one, again, is another eye chart, but once you get all that set up, when you go into vCenter, you can now see the network that's attached to OpenStack, and when you go into the cluster environment, you can see that the cluster you've assigned to OpenStack now shows up in vCenter as well. So you can still take advantage of all the vCenter management capabilities, even within your OpenStack environment. So that's the quick overview. Any questions? Obviously you can stop by our booth or catch us after the session. Okay, thank you very much. So next we have Nick from Mirantis, who's going to, I believe, run through a demo, and we need to do a quick switch of laptops here. Okay, can everybody hear me? All right, marvelous. So I've got six minutes, so I'm going to try to make this short and sweet. My name is Nick Chase. I'm from Mirantis. We're the number one pure-play OpenStack company on the market, which basically means all we do is OpenStack. We don't sell hardware, we don't sell operating systems; OpenStack is all that we care about. Today I am going to show you a demo, a recorded demo, of VMware and OpenStack, so a little bit of relief from the PowerPoint for the most part. Take for granted the fact that we do have an excellent services and support organization, since I only have six minutes. If we take a look at the general roadmap, what you can see here is that for each of these major OpenStack projects, there's a sort of corresponding VMware product. The idea is that you can use OpenStack, but if you're a VMware shop, you can continue to use the VMware tools that you're familiar with to manage those resources.
For example, the most obvious would be that you can create OpenStack VMs with Nova and then manage them with the vCenter tools, or you can use NSX as the basis for your Neutron deployment. But as you can see here, you can also use vCenter datastores as the backend for Cinder and Glance, and you can think about integrating Keystone with VMware single sign-on through open source drivers, that sort of thing. Okay, so how does it work? Well, speaking for a moment about compute and storage, how it works is that we have the OpenStack API, which uses the vSphere driver to connect to vCenter, and I'm going to pause this for a minute because I'm getting ahead of myself. From there, it's basically just a normal vSphere deployment. It's the same thing for the NSX deployment, where basically you have the NSX drivers that connect to the NSX controller, and then you work from VMware. So let's take a look at a demo of how this actually works. What we're going to do is go ahead and create a data center. Again, this is time-compressed, so what I've done is basically recorded the demo and cut out the boring parts where you wait for stuff. So we're going to create a data center and create a cluster in the data center. (And I don't know what keeps banging, but I apologize for it.) Then within there, we're going to go ahead and add the host, which would be your normal ESX host. So at this point, this is all just normal VMware stuff, and all you VMware people are probably going, "Why are you even showing me this?" I'm doing it for a couple of reasons. One is, I want to show you that this is really just plain old VMware; we're not doing anything special at that point. But also, there are a couple of steps that are necessary for the integration. Specifically, in this demo we're going to be using Nova Network, so we need to make sure that we have the switch set up with br100, and you can see there the VLAN ID of 103. We're going to use that in a minute.
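As an aside on the storage row of that roadmap (vCenter datastores as the backend for Glance and Cinder): on the Glance side this again comes down to a few configuration options. A hedged sketch follows; the option names follow the Icehouse-era VMware datastore store for Glance, and the values are placeholders.

```python
# Hedged sketch: glance-api.conf options for storing images in a
# vCenter datastore. Option names follow the Icehouse-era VMware
# datastore store; values are placeholders, not a real deployment.
from configparser import ConfigParser
from io import StringIO

def glance_vmware_conf(vc_ip, user, password, datacenter, datastore):
    cfg = ConfigParser()
    cfg["DEFAULT"] = {
        "default_store": "vsphere",       # keep images in a datastore
        "vmware_server_host": vc_ip,      # vCenter server address
        "vmware_server_username": user,
        "vmware_server_password": password,
        "vmware_datacenter_path": datacenter,
        "vmware_datastore_name": datastore,
    }
    buf = StringIO()
    cfg.write(buf)
    return buf.getvalue()

glance_conf = glance_vmware_conf("192.0.2.20",
                                 "administrator@vsphere.local",
                                 "secret", "Datacenter1", "datastore1")
print(glance_conf)
```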
So this is Fuel. I'm going to stop this for a second. Boy, I don't know about you guys, but I'm getting tired trying to keep up with this thing. Basically, this is Fuel, the open source deployment tool that comes along with Mirantis OpenStack, which is the Mirantis distribution of OpenStack. Now, basically what you're going to do here is go ahead and specify what you want. Let me just kind of go here for a minute; this is crazy. So this is 4.1, so you can see what you get: you can choose Havana, but 5.0, which will be out very shortly, lets you choose Icehouse as well. You can choose whether you want HA or not. But the important thing here is that you can choose vCenter as your hypervisor. Once you do that, you can go on and choose your other things. We're going to choose Nova Network for now; future versions will let you also deploy with NSX, but that's coming later this year. You can include Ceph, you can include other products. (Obviously I recorded this before Savanna changed its name to Sahara, but we'll keep that simple for now.) All right, so now we need to go ahead and add our nodes to the cluster. We're going to say, all right, I need to add a node, and all we need is a controller, because everything else is handled by VMware. So I'm going to say I want a controller, and I'm going to look at the nodes that I have available. These are auto-detected by Fuel, so I don't need to specify what they are or anything like that. Fuel will also let me see what the specifications of the hosts are, so I can make sure they're going to be appropriate, and I can configure things. Now, if you remember, we set the VLAN ID as 103, and here you see it on the fixed network. That was why we had to make sure we knew what that was; it's part of the way that we're going to communicate between OpenStack and vCenter. So going forward... what is that noise? Does somebody else have their mic on?
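The reason the demo calls out that VLAN ID twice is that it has to agree on both sides: the port group on the vSphere switch (br100) and the fixed network in the deployment tool. A tiny hypothetical sanity check makes the dependency concrete; the data structures here are invented purely for illustration, not part of Fuel or vSphere.

```python
# Hypothetical sanity check: the VLAN ID configured on the vSphere
# side must match the one OpenStack's fixed network uses, or instance
# traffic silently goes nowhere. The dicts are invented for illustration.

def vlan_mismatches(openstack_networks, vsphere_portgroups):
    """Return names of networks whose VLAN has no matching port group."""
    available = {pg["vlan"] for pg in vsphere_portgroups}
    return [net["name"] for net in openstack_networks
            if net["vlan"] not in available]

fixed_nets = [{"name": "fixed", "vlan": 103}]
portgroups = [{"name": "br100", "vlan": 103}]

assert vlan_mismatches(fixed_nets, portgroups) == []  # VLANs agree
assert vlan_mismatches(fixed_nets,
                       [{"name": "br100", "vlan": 104}]) == ["fixed"]
```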
All right, so going forward, if we go to the settings, you can see that, as before, we specify that we want to use vCenter as a hypervisor. We say, okay, this is the IP for the vCenter server, and we include the admin username and password so that we can talk to it. (Of course, sure, now it slows down.) And then we also specify the cluster name that we added when we created it in vCenter. So that is how we're tying those two environments together. I wanted to show you that because it's great to see the high level, "Yes, these two products talk to each other," but okay, how exactly? That's why we're doing this. As you can see here, later you'll be able to specify your NSX information as well, plus other parameters you might want to set, so you don't need to edit configuration files and so on. So we're going to save those settings and go ahead and deploy the cluster, any second now. There we go. So we're going to deploy the cluster, and it's going to tell us, "Oh, you need a compute node," but we don't need a compute node, because VMware is going to handle that for us. So it will go ahead and do the installation, and at this point I'm going to compress our time compression even further by flipping over to an already completed cluster. Ugh, no, not so fast. So if we go over to Horizon, we can see that the vCenter server shows up as the available hypervisor. So any VMs that we create, that's where they're going to go. If we head on back to vCenter, and we look at our data center and the cluster that's associated with that OpenStack cluster, we can see we don't have any virtual machines yet; it says "virtual machines: 0". So if we go back over to Horizon and launch a VM, and it doesn't matter what's on it, we're just going to launch a plain old empty VM for the moment, we can see that as soon as it comes up, it will appear over on the VMware side, in vCenter, so that we can manage it from vCenter.
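The settings the demo just collected (vCenter server IP, admin username and password, cluster name) correspond, under the hood, to a handful of nova.conf options for the vCenter driver. A hedged sketch of that mapping follows; the option names follow the Havana/Icehouse-era [vmware] section, and the values are placeholders.

```python
# Hedged sketch: the nova.conf settings that wire nova-compute to
# vCenter. Option names follow the Havana/Icehouse-era VMwareVCDriver
# configuration; values are placeholders.
from configparser import ConfigParser
from io import StringIO

def vcenter_nova_conf(vc_ip, user, password, cluster):
    cfg = ConfigParser()
    cfg["DEFAULT"] = {
        # Proxy to vCenter instead of managing a local hypervisor.
        "compute_driver": "vmwareapi.VMwareVCDriver",
    }
    cfg["vmware"] = {
        "host_ip": vc_ip,         # vCenter server address
        "host_username": user,
        "host_password": password,
        "cluster_name": cluster,  # vSphere cluster exposed through Nova
    }
    buf = StringIO()
    cfg.write(buf)
    return buf.getvalue()

conf_text = vcenter_nova_conf("192.0.2.20",
                              "administrator@vsphere.local",
                              "secret", "OpenStackCluster")
print(conf_text)
```

This is also why only a controller node is needed in the demo: the single nova-compute service configured this way represents the whole vSphere cluster, and vCenter does the actual placement.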
So we can start it, stop it, do whatever we want to do from vCenter. And that brings us to six minutes. Any quick questions? Okay, great, that's my time. Thank you very much, everybody. Well, thank you very much, Nick, for that super-compressed demo; it's pretty impressive that you can get it all done in six minutes. So that kind of wraps it up, but I just want to say once again: as VMware, our goal is to provide customers with a choice of what distribution they want to use for OpenStack on top of vSphere, and we're really excited to have all of these partner vendors working with us. Thank you very much for these super-speedy presentations; we appreciate all the great work you've done. And just one other thing: if anybody's interested in Canonical and VMware, we have people with collateral at the back, white papers on our deployments, the architectures, and all the cool stuff I outlined. So enjoy yourselves. I think we have a little bit of a break, and then, following up on this, there's a talk on Congress, which is a new policy project that VMware is a part of in OpenStack. And then after that, there's a VCN talk. So thanks.