So, hello everyone. Welcome to the presentation. My name is Esteban Arias, I'm a software engineer for IBM. I will be presenting a little bit of the work that we have done together with the OpenStack cloud, particularly for a product called VMware Integrated OpenStack, and how we deployed it in the SoftLayer cloud as part of a project. This is Arvind. Yeah. Hi. This is Arvind Soni. I lead our OpenStack efforts here at VMware as a product manager. That's why I will let Esteban go first, so that he will lend credibility to whatever claims I make later on, right? So that's a good flow. All right, Esteban. Thank you. So, as I was saying, my role within IBM is particularly integrating products, so I ended up with my team on a series of projects that required integration of different platforms, and we ended up doing VMware Integrated OpenStack on top of SoftLayer. So, we will describe a little bit what the mindset was and the key points around it. Deploying it as part of an enterprise workload was a little bit challenging, but the idea is to give you the mindset and the lessons learned out of it. So, key points there: OpenStack on SoftLayer with VMware. As you probably remember, at InterConnect this year it was announced that we now have the luxury of a battle-tested architecture for running VMware workloads in SoftLayer. So, when you see that panorama, you get the idea that you can take that leverage and integrate other products, and that was the case with OpenStack. Nowadays, people are not seeing it anymore as one or the other, but asking how you can get the two working together in the best interest of your customer. So, we took a piece out of that architecture and laid VMware Integrated OpenStack on top of it, for a series of reasons.
But in general, and this gets into the second key point, capacity: you want disaster recovery out of it, you want migration capabilities. Now, particularly in the case of SoftLayer, we are seeing people posting architectures that can take a VM out of Amsterdam and put it into a data center in Washington, D.C., just using SoftLayer's private network. So, those capabilities behind the OpenStack cloud are really an added value, and we took advantage of them. Some of the reasons are also the multi-tenant environments that you can have, like dev/test and lab, so we took advantage of that. We also really gained ground with the customer when we started talking about certified environments. So, you can have things like HIPAA-compliant environments, environments that pass NIST scans out of the box. That kind of thing is what really adds value. Another key point was that we wanted to start single-region and be able to scale worldwide. You probably know already that SoftLayer has 23 data centers. You can start VMs in Mexico and end up in Melbourne, and all of those provide capabilities, in particular when connecting all the different hypervisors and expanding the realm of the OpenStack cloud. We wanted to be agile and hybrid private, and, as someone mentioned in a presentation this morning, you want to compete against a very robust platform, particularly AWS, and that has to be done with these kinds of capabilities. In this project, we needed to have auto-scaling capabilities and provide them on a per-tenant basis. Tied together with that, we had the use case of integrating this workload with an already extensive footprint of VMware VMs that the customer had in SoftLayer. So, if you put all that together, you get a panorama that is about integration. That's the key message here, right?
So, about the building blocks we had available to build this, there are three main points. You have compute, with the capabilities of SoftLayer bare-metal servers. Once you get your account with them, you start getting different provisioning capabilities. So, you can start with bare metal in different flavors of sizes and processors. You have comprehensive out-of-band network management, which is really a key factor when deploying the hypervisors that provide the foundation for the cloud. You have a networking model that consists of three layers: public-facing interfaces, private interfaces, and management interfaces. That gives you quite a bit of flexibility in terms of working the cloud and working with all the Nova compute nodes and the Neutron nodes. You have storage, which is critical on this piece, because you now have the flexibility of bare-metal nodes with all-SSD configurations. That gives you an underlying layer that you can leverage with vSAN and similar products, and that, connected with the OpenStack cloud on VMware running the Cinder and Glance workloads, gives you a very extensive framework. Also, you have other storage options like block storage, file storage, and object storage that you can leverage to serve the datastores that will eventually be translated into the pieces of the OpenStack cloud's underlying storage system. We had a series of software components that were intended to fit into the project. The first of them was UrbanCode Deploy, which provided us with a Blueprint Designer UI that connected to VIO, the VMware Integrated OpenStack, providing the full orchestration out of it. So, it was very interesting to have a UI where you can drag and drop different agents onto the objects that you already imported out of the OpenStack cloud.
So, the customer ended up with all these different agents that you can deploy several times and update on a single basis. On the left-hand side, you will see different objects imported directly out of the OpenStack cloud, without having to interact with any API other than Heat and Ceilometer. So, you have the Blueprint Designer, and you have operator tools, which for us were vCenter and vRealize Operations. These were intended to provide the customer with tools that were familiar to them for checking the overall status of the VMware cloud and the OpenStack cloud. These ended up being just supporting material, but it's important to mention them. On the networking side, we also had VMware NSX, which is the endpoint Neutron was talking to in order to provision, and we will see a slide that shows the overall workflow, but in general, on this portion, you will have things like your Edge Services Gateway and your distributed logical routers. It's interesting because you eventually consume resources that are extendable through the data center. So, you don't have to create two separate environments; you re-use what you already have. And that is part of the reference architecture that I was talking about at the beginning, which is already vetted, right? So, that's important. The other point here, and the final piece we were missing, was VMware Integrated OpenStack. That's a layer on top of VMware. It consumes and talks natively with the VMware cloud and, for this particular environment, manages different tenants' VMs, storage, and networking. So, you can see from it things like images, networks, and volumes, all interacting underneath with the VMware cloud.
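For context on that Cinder-to-VMware wiring: in upstream OpenStack terms, volumes land on vSphere datastores through the VMware VMDK Cinder driver. A minimal cinder.conf sketch of that setup is below; the hostname and credentials are placeholders, and a VIO deployment configures this for you rather than by hand:

```ini
# cinder.conf -- illustrative fragment; values are placeholders
[DEFAULT]
enabled_backends = vmdk

[vmdk]
# Upstream VMware vCenter VMDK driver for Cinder
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = administrator@vsphere.local
vmware_host_password = <secret>
```

Glance similarly stores images in a format (VMDK/OVA) that vSphere can consume directly, which is what lets the OpenStack layer drive the VMware cloud natively.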
It's important to mention here, based on my experience, that VMware's support was very helpful, because in the end you end up troubleshooting issues that are related to both environments. So, it's something that adds a layer of action items. So, this was the framework. Regarding Integrated OpenStack, a couple of lessons learned there. One is that it's an integrated appliance. That means it's an OVA, which was a familiar language for our customer. Basically, it's a pre-packaged image that expands into a complete installation. The moment you deploy it, you will have to connect things like your Nova cluster, your datastores, your Glance datastores, and your networking manager. That takes you to the deployment mode, which can be NSX-controlled, with overlay networks, or standard vDS. So, when you start talking about all these topics with a networking engineer who is familiar with both, you get into what they call POC or production mode. You can deploy VMware Integrated OpenStack in POC mode, which is basically the same as the production one, but without high availability. So, you can have a small subset of the capabilities, show them, prove value out of it, and then eventually migrate that workload into a production deployment. That's how the appliance works, and in my opinion, it's fairly simple to deploy once you have it fully architected, particularly the networking piece, because again, you have software, multiple servers, and different sets of network capabilities on them. So, you have to be careful with the partitioning scheme that you put into those networks, because that's the way it's going to connect. So, moving forward. This is the workflow I was talking about. You can see point number one. That's DevOps, how they request; in our case it was the UrbanCode Deploy Blueprint Designer.
So, you can do that in different ways, obviously. We did it in UrbanCode Deploy and we did it using Heat directly. So, you provision, you request, and the OpenStack cloud replies back with the traditional tools, right? Keystone, Nova, Cinder, Glance, Neutron. And then, at point number two, you migrate into what was known territory for this particular customer. So, we go and talk to tools that they already have. We talk to NSX and we talk to vCenter, which translate into the underlying infrastructure, the ESXi hosts, and all the rest of the components that we have there. So, you can have these switches, you can have datastores, and those might be serving other infrastructure as well. That's why I was talking about integration. Once you get all that running, you can have multiple tenants or projects, and, and there is going to be a very interesting talk on this tomorrow, you can have single-tenant routers, or you can have a shared provider network. So, I invite you to go to that one. But in general, you will have these routers serving on top of infrastructure that will eventually connect public and private networks. So, you start throwing in things like floating IPs out of the provisioning systems of SoftLayer. That's where it gets interesting. So, you end up at point number three, with the infrastructure managed by OpenStack and supported by VMware on SoftLayer. It's a win-win situation. Putting it all together, as I was saying, you can have UrbanCode at the bottom and one of the scaling policies that we were talking about; Heat and Ceilometer were key parts of this. So, on your left-hand side, you will see one of the blueprint instances, how it had a couple of agents that are really reusable and customizable. And that serves an auto-scaling policy that will eventually scale out the environment up to the entire infrastructure that you have.
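The Heat-plus-Ceilometer scaling policy described here can be sketched as a Heat template of roughly this shape. This is an illustrative fragment, not the project's actual template: the image, flavor, and network names are placeholders, and exact resource properties vary by OpenStack release:

```yaml
heat_template_version: 2015-04-30
description: Illustrative CPU-driven auto-scaling group (names are placeholders)

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 10
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-14.04          # placeholder image
          flavor: m1.small             # placeholder flavor
          networks: [{network: tenant-net}]  # placeholder tenant network

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: {get_resource: asg}
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      cooldown: 60

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      threshold: 80
      period: 60
      evaluation_periods: 1
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scale_out_policy, alarm_url]}
```

When average CPU utilization crosses the threshold, Ceilometer fires the alarm, which hits the scaling policy's webhook and grows the group by one server, exactly the provision-and-scale loop the blueprint drives.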
And that is where it gets really interesting, because if you take it to the next level, you start having constructs at the bottom. This is part of the architecture that I was referring to, tested in SoftLayer. So, at the bottom, you have your constructs of management clusters, edge clusters, and compute clusters. By making these divisions, you can have all the different hypervisors connected into the SoftLayer cloud using the networking model that I was talking about, the three-tier networking model, with public and private VLANs, which, in the end, take all these SSDs and the NSX portion of it and create something you can extend across all the data centers. So, that is what I think puts the value on it. On your management cluster, you end up having all the different appliances that are part of your entire cloud. So, you can have, as you can see, the NSX Manager, the NSX controllers, the VMware Integrated OpenStack appliance, all serving on the same infrastructure. So, you end up having this base on the left-hand side and extending it all the way to different data centers. That's a key added value, I think. This is the VIO production deployment that I was talking about. Out of the box, you will have a couple of load balancers; pretty much everything is HA. But in particular, what I think is important is that all these VMs, when set up in the right mode with the partitioning scheme and the networking, are a fairly simple way to introduce and provision the entire OpenStack cloud with pretty much no effort, right? It's just planning and architecture. So, you can see things like the database node, which is triply replicated. If you take that and combine it with another tool, like disaster recovery storage replication, you end up having a very robust infrastructure underlying the Integrated OpenStack.
A key piece of advice, as I was saying: leveraging tools like vSphere Data Protection and replicating your storage with what SoftLayer provides you, multi-region and multi-data-center, resulted in what was a really, really robust design. It provided us with several use cases to first gain trust from the customer and show very different scenarios for disaster recovery. So, that was a very, very key differentiator. All right, thanks, Esteban. The key thing to remember from Esteban's presentation is simplicity, right? You guys have seen all the user surveys, and they center around three major challenges of OpenStack: simplicity, stability, scalability. So, that's what I'm going to focus on. But for that, you have to agree with this model of OpenStack, which is that it's a framework underneath which you put different products. And we put vSphere, NSX, vSAN, not surprisingly, because those are the products that we can support, and we believe those are some of the best products in the world, right? So, it's a framework that allows you to put different products underneath and build an OpenStack cloud on top of it. And you can replace those products with different combinations, but the key thing is you get the same APIs and the same tools; it doesn't matter whether you build that OpenStack cloud with vSphere or some other hypervisor, right? So, it's a framework that allows you to do this. What is inside of VIO? As Esteban pointed out, it's essentially a standard distribution of OpenStack. It has all the core components that you would expect: Keystone, Horizon, Heat, Ceilometer, Nova, Neutron, Glance, and everything that you would need to run a really nice production-ready OpenStack, all delivered in an HA architecture. You don't have to sit there scratching your head: how am I going to architect my message queues and databases and Nova and Neutron? No, none of that.
It comes with a proven, tested architecture which will work for up to 5,000 to 6,000 virtual machines in a single vCenter. So, that's the scale to start with. Fully supported by VMware. Anything breaks in there? Nova breaks, you have a security issue on the operating system it is running on, the underlying vSphere is having an issue, NSX is having a problem, we'll take the call. We'll deliver the fix. You will get the hot patch, and the good thing is VIO has a built-in patching mechanism along with revert. So, it's not like you have to keep a notebook: oh, I did step one, two, three, holy shit, it didn't work, now I have to go back. No, none of that, right? It doesn't work? There's a revert command; it will revert the package and restart the services. This is what operators need to run a cloud. From a consumer perspective, yes, it's all great. You give them OpenStack APIs, you let them loose, and they will start breaking the cloud, right? That's what happens. So, operators are the ones that need most of the tooling. Consumers are being served more than plenty in the open-source community, right? These are all the things that came out of the 2016 survey, and as I was saying in the beginning, it's really about making things simple, making things stable, making things scalable, right? So, let's see. Of course I'm going to claim that we do all of those things, not surprisingly, right? That's why I let Esteban go earlier, and you saw some of the things, how they play out. So, in terms of simplicity, where does simplicity come from for VIO? What is the secret magic that we are doing? Well, not necessarily much magic, right? We have a decoupled architecture. I'm going to talk a little bit more about it, but that decoupled architecture is at the heart of why things are very simple when you run OpenStack on vSphere and NSX. That's where it comes from. And we provide a lot of tools, as I said: patching, upgrade, backup, recovery, monitoring, troubleshooting.
All of those are needed to run a cloud. So, without those tools, things become complex, right? Not much of a surprise there. Where does stability come from? Well, vSphere has been proven to run hundreds of thousands of workloads, and if you say, well, I'm only going to run cattle on top of it, fine: just turn off DRS and HA and vMotion. You still have the operational benefits. It's a battle-tested hypervisor, with a lot of compatibility testing done with several storage vendors and several networking vendors across the world. That stability, whether you are running cattle or pets, is going to give you a really robust environment. Your VMs will not run into a kernel panic. You will not get a blue screen on a Windows machine the way you might running in KVM, right? So, that's pretty much the gist of it. The products themselves are battle-tested, and OpenStack is running on top of them and provisioning workloads to them. And as I said, when things don't work, we patch them, right? When there are security issues, we patch them and give you the support. Scale. Scale is an interesting beast. If you want to build something anywhere near AWS kind of scale, you have to have a building block. And that building block had better scale along with operations as well, right? It can't just be that you put in hundreds of hypervisors and have no tooling for how to operate them. So, it has to have things like vCenter. I'm not saying you need exactly vCenter, but you have to have some tool where you can go and manage a bunch of hosts, provision storage on top of them, provision switches, remove VLANs, update VLANs, and things like that. Without that, operations become very difficult as you grow larger in scale. Okay. So, what was the decoupled architecture that I was talking about? When you run OpenStack on top of vSphere, OpenStack is not talking to any ESX host. There is no agent on the ESX host.
There is no Nova agent, no Neutron agent, nothing. The ESX hosts with the tenant workloads, the data plane, are completely separated from the OpenStack control plane. You can imagine what that does, right? Now I can keep changing my OpenStack control plane, applying patches, upgrading it, deleting it, backing it up. Doesn't matter. The workloads are running safely on top of ESX. So that's why the patching, troubleshooting, and upgrades all become simpler in our case, because it's not like every time I have to make a change in Nova, I have to go update the Nova agent and do a rolling update and that kind of coordination. So that's the fundamental reason why VIO, or OpenStack running in general on top of vSphere and NSX, becomes much more simple. This is at the heart of it, right? We have a decoupled architecture for running OpenStack. And we were not happy with the simplicity that we had, really. What we wanted to know is: can we run OpenStack in one virtual machine, and is it any good? So we tried it. We said, okay, let's go back to the DevStack days and see whether something like DevStack is actually good for production. CI/CD workloads, things which are not mission-critical, demos, lab testing, whatever, right? And actually we found out that the single-VM VIO is pretty good for several thousands of VMs at decent, high concurrency. So why not, right? I mean, every one of us has remote offices, in Boston, in Arizona, small offices; you can set up OpenStack clouds for them. So what we will do in the coming months, right now it's in tech preview, is that 2.5 is going to come out soon, in a couple of months. It will be in tech preview: you can deploy all of OpenStack, the same architecture that Esteban showed running in production, in a mini form inside one VM. This is simplicity, right?
Really, five to ten minutes for a complete, almost production-ready deployment. It does not have HA, but you can start there, and we'll provide you a migration path to the full-blown architecture, which is also not very bad: it's just about seven VMs, two load balancers, two controllers, and three databases, right? And I have to share the fact that this compact architecture is made feasible because of all the hard work in the community, right? Fernet tokens have almost removed the need for memcached; they are much more compact tokens. The reduced chatter between Nova and Neutron is going to help us not worry about the message queues that much. So there has been a lot of improvement in the community which makes that compact architecture feasible. We don't need separate message queue and memcached servers, right? In fact, we don't need separate message queues because our message queues don't get bombarded at all, thanks to that decoupled architecture. And as I was saying about simplicity, without operational tools, running a cloud is like hell, right? We didn't have syslog integration when we used to do POCs in the early days, and everyone knows that OpenStack is very generous when it comes to logs, and they're all scattered everywhere. So without syslog, just that simple thing, you can't find out why a Nova VM is not getting an IP. So what we have done is pay excruciatingly detailed attention to operations, to the extent that each one of these was a P0 in version one of the product: no, we have to have patching. We have to have upgrade. Without this, we cannot go to market, right? So there has been a very, very focused effort in making sure that you can actually operate OpenStack very cleanly, with complete support, and these are all the features that are already supported.
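The Fernet token change mentioned above is, in stock Keystone, a small configuration switch plus a key-setup step. A sketch of the upstream config (the key repository path shown is the common default):

```ini
# keystone.conf -- illustrative fragment
[token]
# Fernet tokens are compact, non-persisted symmetric-key tokens,
# removing the need for a token table or a large memcached tier
provider = fernet

[fernet_tokens]
key_repository = /etc/keystone/fernet-keys/
max_active_keys = 3
```

The signing keys are then created once with `keystone-manage fernet_setup` and rotated with `keystone-manage fernet_rotate`; because tokens are no longer persisted per-request, the database and cache footprint shrinks, which is what makes a compact single-VM control plane plausible.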
And notice the last one on that list: when people say, hey, we are going to run pets, we don't care about vMotion, what happens when you have to retire an entire rack of servers? Well, then you have to sit down and write scripts and live-migrate things, and that's really just a push of a button in vSphere: put the hosts in maintenance mode, move the VMs over, you're fine, right? So this is the power of what vSphere has built over ten years: the ability to operate a really large-scale data center using vSphere, and now OpenStack brings even larger scale to it, right? So all of this does not matter much to the consumers; they get the same OpenStack APIs, right? But they care when your environment does not give them the SLAs. When they ask for VMs and they don't get the VM, or the IP is not coming through, then they will complain, and that's where stability becomes very important. If you don't have a stable cloud, you might as well not have a cloud, right? It's not worth it; let them go to AWS. So stability has a lot of other factors, right? What products you choose is absolutely crucial, and not just the quality of those products: whether you can operate those products, whether you have in-house expertise. Choose the products that you know you can operate. I mean, it sounds pretty trivial, but we have seen people doing lots of innovations in one go. You choose three different products, you choose OpenStack, all new, nobody knows how to operate them, and it becomes very difficult, right? So use the products that you know how to operate, use the products that you can get support for, and our pitch, not surprisingly, is that these are some of the best products that you can choose, whether you want to choose all of them or some of them. For example, NSX does work with KVM. There are some large OpenStack deployments that use NSX with KVM.
So whether you choose all of it or some of it, we believe that this is one of the best ways for you to build not just OpenStack, but any kind of cloud in your data center, mainly because these products are very, very stable, there is a single point of support from VMware, and every company out there now has a good amount of vSphere expertise, or you can hire that expertise pretty cheaply compared to OpenStack expertise, which you can hire pretty much here at this conference and that's it; outside in the market, we didn't find any. Okay, scale. A building-block approach is very important here, right? As I mentioned, you've got to have a building-block template and then just rinse and repeat, so that you can go and build something similar to availability zone east and west, region east and region west. But before you go all the way there, you've got to have a building block. And in our case, the building block is essentially vCenter, NSX, and the clusters underneath it, with OpenStack on top. We are working on letting one OpenStack use multiple vCenters. It's not there yet, but that's the goal. But you can imagine that once you have this, the key thing to note is that the operations remain the same, right? You can have the same operator going into the vCenter inside availability zone US East and doing the manipulations and operations, and then the same person can log in to the vCenter in the west region and do the operations. This is called a uniform operational model, and it should not be that by the hundredth server everything you have to do is completely different. No, otherwise it will be a very fragmented scale. So uniformity of operations is absolutely essential. And that's where vCenter and the NSX managers help, because they give you visibility into the infrastructure and help you operate it, right? Those are the key things I had. Let me show you one quick demo; I think we have a couple of minutes.
So this demo is essentially an example of some of the operational things that we are doing that will help you. There is this tool called OSProfiler, which was essentially languishing somewhere out in the open-source world. OSProfiler is essentially an instrumentation of all the OpenStack services: Nova, Neutron, Cinder. We have done an instrumentation using OSProfiler, and this is actually very, very useful. What it does is let you go ahead and enable profiling, right? So don't worry about the commands if you can't see them. Essentially what it's doing is going ahead and enabling the profiling: say, okay, I want to trace what's going on with this API call. So it will set the profiler to true, and then you can run the command. In this case, we'll do a Cinder create-volume and attach-volume, and we'll see what happens behind the scenes, right? And it generates a nice stack-trace file, this long UUID thing, and you can convert it into HTML. And this is absolutely gold, by the way. I mean, it looks pretty simple, but we have spent several hours with a lot of our production customers figuring this out. Why is this API slow? Why is a Nova VM not getting an IP? Is the DB really slow, or is the message queue the problem, or where is the issue, right? So this gives you a complete trace along with the time window. And this is, by the way, an open-source tool, so you guys can also go use it. It gives you an idea of the breakdown of where the time is getting consumed, where the bottlenecks are, things like that. So anytime you have an issue, you can turn this on, run some operations, and get a complete picture.
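To sketch the kind of breakdown the profiler gives you: a trace is a tree of timed spans, and a few lines of Python can summarize where the time goes. The JSON shape below is a simplified stand-in for real osprofiler output, not its exact schema, and the service names and durations are invented for illustration:

```python
import json

def summarize(node, depth=0, rows=None):
    """Walk a simplified trace tree, collecting (depth, name, duration_ms)
    rows so you can see which span eats the time."""
    if rows is None:
        rows = []
    info = node.get("info", {})
    started = info.get("started", 0)
    finished = info.get("finished", started)
    rows.append((depth, info.get("name", "total"), finished - started))
    for child in node.get("children", []):
        summarize(child, depth + 1, rows)
    return rows

# Synthetic trace standing in for a profiler's JSON export (times in ms).
trace = json.loads("""
{"info": {"name": "total", "started": 0, "finished": 840},
 "children": [
   {"info": {"name": "cinder.api", "started": 5, "finished": 120}, "children": []},
   {"info": {"name": "db", "started": 130, "finished": 820}, "children": []}
 ]}
""")

for depth, name, ms in summarize(trace):
    print("  " * depth + f"{name}: {ms} ms")
```

Here the nested `db` span accounts for most of the total, which is exactly the "is the DB slow or the message queue?" question the HTML report answers at a glance.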
This is what I meant when I said we have a very focused effort on operations, because without operations these things fall apart: your private cloud is not going to scale and not going to function. So that's pretty much all I had. We'll open it up for questions. The key message, essentially, is: look, if you want to build an OpenStack cloud on top of vSphere and NSX, VIO is available free of cost and, starting with VIO 2.5, will support vSphere Standard, which is our lowest vSphere SKU; I think it comes at $995 per CPU, something like that. You can use that if you're using NSX, right? And VIO itself is free. So give it a shot and let us know whether you like it or not, okay? With that, any questions? You can come up, feel free to shout it out, or come over to the mic. Shout it out. Yeah, yeah, absolutely. That's what I was mentioning: multiple vCenters within one control plane is what we are working on. Once we get that, then you can scale more and more. Right now, from some of our earlier customers we know from their production workloads, not our lab tests, where yes, we can push even further, that in actual production environments one vCenter with about 5,000 VMs and 5,000 volumes, so 10,000 objects total inside vCenter, is good out of the box. Pretty good. Any other questions? No container questions? Come on. You guys have had your share of containers here? All right. Any operations questions? Any vSphere administrators in the house? People who know vSphere? Good. For those who don't know, you can download vSphere as well: trial license, 60 days free, and VIO is free with it. Question? Question. You cannot do that with VIO, because we can't support it. You would have to go build your own distribution or get some other distribution. Otherwise, it will just break, and when you call us for support we'll be like, Cisco ACI what? We don't know anything about it, right? It's mostly a support problem. We don't do testing with it. Not going to.
Otherwise, you can take the code, hack it up, and change it, but then you will have to support it yourself. All right? Thank you. Sounds good. Thanks, guys.