All right, good afternoon, everyone. Can everyone hear me OK? Is there anything that we can do about the size of the presentation on the screen? Or do I need to change my resolution? OK. All right, cool. One second. It's VGA, but I can switch to HDMI if that helps. If people can't see something or want more clarification, then I'll just go ahead and try to expand on whatever I'm showing. OK, everyone, thank you for coming to the session. My name is not Arvind Soni. It's Trevor Roberts, Jr. If you were hoping to see my product manager, I apologize in advance, but I'll try to do as good a job as he would with this presentation. Before I get started, how many folks here are VMware administrators? OK, so everybody here, all right. How many folks have tried deploying OpenStack? OK, cool. And how many people know OpenStack terminology, like you understand what Nova is, what Neutron is? And you can spell OpenStack correctly. You can't forget the capital S, right? There's no lowercase s in OpenStack; capital S. That's very important. OK, so enough jokes aside. I'm just going to skip over the high-level "what is OpenStack" slides, because I think we all understand what OpenStack is. So before I get started, I'm the technical marketing manager for OpenStack at VMware. And I get to do a lot of things, such as come to speak to you lovely people about what we're doing with our technologies, write blog posts, write papers, write reference architectures, and work with partners like Tesora here, the database-as-a-service company who does a lot of work with Trove. We're doing some joint work with them, and we're going to hear from their team towards the end of my talk. So again, skipping over the OpenStack 101 slides, lots of lovely animations that we don't need to see. OK, so let's get back to the meat of the discussion. Why OpenStack on VMware? How many of you have heard that OpenStack on VMware is some kind of hybrid deployment? Anybody heard that terminology?
OK, I'm glad only one of you. Well, I'm here to tell you today that there's no such thing as a hybrid OpenStack when it comes to VMware. Your OpenStack framework needs to run on some kind of infrastructure. So whether it's KVM or VMware, there is some driver that's the intermediary between the OpenStack cloud and your underlying infrastructure. So let's talk about what VMware is doing with OpenStack. Aren't we supposed to be the evil empire or something like that? That's not the case. We're here to help everyone succeed with their OpenStack deployments on our platform. So first, we started with a commitment to the OpenStack community. We acquired a company called Nicira back in 2012. Some of you may have heard of them, some of you may not, but they developed, or helped develop, the first implementation of Neutron. And from there, they worked very closely with the OpenStack community. And when VMware acquired Nicira, we took over that work and continued to contribute more and more. So for example, we added support for the vCenter Nova driver in 2013, the Cinder driver, and also improvements to the Neutron drivers. And in 2014, we launched a beta of our own distribution of OpenStack. And again, VMware Integrated OpenStack is not some customized OpenStack code. It's pure open-source OpenStack with some VMware drivers that are also open source; we just changed the way it's packaged. So we help you with deploying it, managing it, and also doing upgrades when you need to do so. We had two major releases last year, and we're looking to do something similar this year. I just can't share exact dates with you yet, but be on the lookout for our next major releases. So coming back to: why would you put OpenStack on VMware? Well, it comes down to a choice of what technology you want to pair with your cloud. A lot of you are VMware administrators, if not all of you.
And you have to make sure that whatever workloads you have on your cloud are going to be available. Now, we like our infrastructure, and we're comfortable with it. And when you have to learn new infrastructures, such as if you're running another hypervisor or network virtualization technology, there is some relearning that has to happen, especially when you want to run at scale. So for example, what advanced features are available on that platform? What's going to be the total cost of ownership, including having to hire developers to extend the platform if it's not vSphere? What about scalability of your infrastructure and your applications, and availability and reliability? The control plane of OpenStack needs to be fairly rock solid. There are high-availability configurations that you can do, but you also want to make sure that whatever your control plane is running on is going to perform well. And there are things like security, troubleshooting, and support, and the list goes on. This is not an exhaustive list by any means, but these are just some of the things you need to think about when you're talking about deploying an OpenStack cloud. And then there are also the kinds of workloads that you're going to run in your cloud. You may have heard of pets versus cattle, and the idea that only the cattle-type workloads belong in the cloud, but that's not necessarily the case. We have these traditional applications that exist in our data centers, but we still want to have some kind of API-driven deployment mechanisms. We want to be able to get our traditional applications up and running in the data center with the efficiency that we see in deployments of cloud-native applications. So we're talking about your SAP servers, Exchange servers, Oracle servers, and they require that the infrastructure itself be highly available, so they depend on things like DRS, HA, and so on.
Even if your team is thinking of moving off of these traditional application infrastructures to something more modern, you still have to keep the legacy code running in some form or fashion. So in that regard, if you run OpenStack on vSphere technology, you can take advantage of DRS, HA, all the good stuff that we have to keep your application up and running. And then you also have the option of the cattle, which is: we don't care what our workloads are called. We don't care what they're named. If they get sick, we just take them out to the farm and put them to sleep, or just kill them, in less flowery language. Your workloads aren't meant to be long-lived, and if they're long-lived, that's great, but they're put up or taken down as quickly as possible. And the infrastructure does not have to be reliable, but it must be available. So that goes back again to your control plane needing to be fairly rock solid and available to your application developers. So think about your OpenStack cloud and the infrastructure you have to run it on. It should be able to accommodate both of these types of workloads. So from a VMware perspective, what does it take to run an OpenStack cloud? If you're using our drivers, you have your various APIs, and the Compute API will hook into vCenter with the vSphere APIs. The Neutron, or Networking, API will hook into VMware NSX. And then your storage for block as well as for images will hook into vCenter's storage APIs, whether it's VMFS on a Fibre Channel array, NFS on some kind of network appliance, or vSAN if you're doing software-defined storage. And then when it comes to infrastructure monitoring, how many of you are using vRealize Operations? OK, not many of you. But we have some free plugins for vRealize Operations so that you can visualize all of your workloads in the cloud.
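For readers who want to see what that wiring looks like in practice, the vCenter hookup described above is configured on the Nova side roughly like this. This is a sketch using Kilo-era option names; the hostname, credentials, cluster name, and datastore pattern are all placeholders, not values from the talk.

```ini
# nova.conf on the controller (sketch; Kilo-era options, placeholder values)
[DEFAULT]
# Point Nova at vCenter instead of a local hypervisor
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.com             ; vCenter server (placeholder)
host_username = administrator@vsphere.local
host_password = CHANGE_ME
cluster_name = OpenStack-Compute          ; Nova provisions to the whole cluster
datastore_regex = openstack-.*            ; which datastores Nova may use
```

Because the driver targets a cluster rather than a single host, DRS and HA apply to the instances Nova boots, which is the point being made here.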
So as you're deploying your OpenStack cloud on vSphere, you can have your vRealize Operations OpenStack plugin (that's a mouthful) actually show you all of the workloads that belong to each tenant. It can show you the health of your storage, the health of your control plane, and give you alerts and things like that. And I have some demos that I can show you later, after I show you the database-as-a-service demo, that we can dive into depending on how much time we have left. And then last but not least, you have the capability to put those application primitives that Jonathan Bryce was talking about earlier in the week on top of your OpenStack cloud. And that includes things like a platform as a service with Cloud Foundry or OpenShift, having some kind of custom code on top, running Kubernetes or Mesosphere, or any of those solutions on top of an OpenStack cloud. And that's all fully supported, because when you're using OpenStack with vSphere, all of the APIs, all of the user experience is the same. The only thing that changes are some of the management tasks. We make it a bit easier for IT administrators. So let's talk about some of the deployment strategies that people approach OpenStack cloud deployments with. First, they may think, okay, I know enough about Linux, so I can just go ahead and do sudo apt-get install python-novaclient, nova, and all of those packages that make up an OpenStack cloud. And that's perfectly fine, but then you have challenges as to how you are gonna operationalize that. How are you gonna do upgrades? How are you gonna do software patches? And it's not impossible, because there are technical companies that are taking this approach, but it does take a bit more effort on your part.
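To make the do-it-yourself path concrete, the first step being joked about looks something like the following on Ubuntu. This is only a sketch; exact package names vary by distro and OpenStack release, and the client tools are just the beginning of what you end up owning.

```shell
# Sketch only: package names vary by distro and release
sudo apt-get update
sudo apt-get install python-novaclient python-neutronclient \
    python-cinderclient python-glanceclient python-keystoneclient
# ...and then the services themselves (nova-api, neutron-server,
# keystone, and so on), plus a database and a message queue, each
# of which you now own for patching and upgrades
```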
Then there's the approach of working with consulting agencies, and they will offload some of that cause for concern, making sure all of your packages actually line up with each other, and that if you're committing changes to the OpenStack code, it won't break other components that you're working with. Last but not least, there are distributions like VMware Integrated OpenStack, or distributions from Mirantis or Red Hat, that will automatically deploy OpenStack for you; but then you still have some management tasks, and you need to make sure that the way those management tasks are implemented works well with what you expect from an OpEx perspective. So again, we don't want to dictate to customers how they deploy OpenStack on vSphere. We support both approaches, whether it's a tightly integrated product where we dictate which hypervisor you're using, or if you wanna mix and match with the loosely integrated framework. Regardless of which approach you take, VMware has open-sourced drivers. We want you to be successful with OpenStack on vSphere, regardless of the distribution that you use. So on the left-hand side, with a tightly integrated product, that's where VMware Integrated OpenStack would come in, where we handle the deployment, the management, and the upgrades for you. It's all in an automated fashion. All you have to do as Mr. or Ms. IT administrator is just point and click through the prompts. And the best part about it is the user experience for OpenStack does not change. They still have access to the OpenStack Horizon dashboard. They still have access to the OpenStack APIs and CLIs. As a matter of fact, we are DefCore compliant, which means we adhere to the standards that the community has set up to be called OpenStack Powered. And then on the right-hand side, you have the loosely integrated framework again. So you may see some symbols on here from companies that are competitors to us and some who are partners.
Regardless of who it is, as long as they wanna manage OpenStack on vSphere, we're gonna enable them with the community drivers that we have open sourced for everyone to use. So let's talk a little bit more about the distribution that I work with, which is VMware Integrated OpenStack. We believe it's a fast and reliable route to running a production-grade OpenStack cloud. And that's my sales pitch for the day, I assure you. Okay, so I'm gonna take my existing vSphere environment and I'm going to partition it into clusters according to vSphere OpenStack best practices. And then I'm going to deploy a single OVA file. This OVA file contains a build server, which we call the management server, that will be used to deploy the entire OpenStack control plane. And then last but not least, this piece is optional, but we really recommend that you take a look at it: use vRealize Operations, vRealize Log Insight, and vRealize Business to do your monitoring, your log analysis, and your chargeback, respectively, for your environment. Because you want to be able to know what's actually going on in your OpenStack cloud. You don't want it to be a mystery, unless you like solving mysteries. Do any of us like getting those calls at some odd hour of the morning where you have to go and figure out what's wrong with the cloud? No, just me? There's somebody in the back; we probably need to get you some counseling. But seriously, if you want to know what's going on in your cloud, I highly suggest checking those out. Because you're VMware customers, I'm sure you can get some kind of trial download to try them out with your OpenStack cloud when you deploy it. So again, it's a fully validated architecture, and we also have the opportunity for a single support contract. You can actually purchase VIO support. Again, VIO is free, 100% free.
But if you would like support of your entire stack, from OpenStack all the way down to the vSphere components, you can purchase optional support for OpenStack just to make things a little bit easier. Okay, so let's talk about what OpenStack projects are included with VMware Integrated OpenStack. We are on version two, which is the Kilo release. And we know that's a little bit behind Mitaka, but we want to make sure that any release that we're on is fully stable, fully mature, and ready to go. This year, we'll be taking a look at which major OpenStack release we want to go to. But regardless of what it is, you can use our automated upgrade functionality. And I have one of my superstar engineers in here who wrote that module, and I can tell you for a fact that it works. When I got hired by VMware, I thought they were lying to me when they told me they were gonna do upgrades automatically, because I worked with OpenStack on another platform, and I told them, you're crazy, that's not gonna work. And they proved me wrong. So we've got a really good team of engineers working on the interface. So at a minimum, we have Nova, Neutron, Cinder, Glance, Keystone, and Swift. Those are all table stakes. You have to have that in an OpenStack deployment. And then on top of that, of course, the Horizon GUI, Heat for orchestration, and then Ceilometer, which you can integrate with Heat to do auto-scaling of your application infrastructure. And then on the bottom, we have the ties into vSphere and VMware NSX. And again, those optional management components that I really recommend you check out. Okay, so I'm gonna just round it off with some benefits of using VMware Integrated OpenStack, and then we'll get into the actual demonstrations so that you're not just hearing me yammer about how great our product is. Okay, so again, when you're using VMware Integrated OpenStack, or just OpenStack on vSphere in general, you get the benefits of an enterprise-grade cloud.
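As an illustration of the Heat-plus-Ceilometer auto-scaling mentioned above, a Kilo-era Heat template can wire a CPU alarm to a scaling policy along these lines. The resource names, image, flavor, and threshold here are all made up for the sketch, not taken from the talk.

```yaml
# Sketch of a Kilo-era HOT template fragment (placeholder names/values)
resources:
  web_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 10
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-14.04        # placeholder image
          flavor: m1.small           # placeholder flavor

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: web_group }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80                  # scale up above 80% average CPU
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
```

The alarm fires when Ceilometer's `cpu_util` meter crosses the threshold, and Heat responds by adding an instance to the group.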
So you're provisioning to clusters when you're using OpenStack on vSphere, and that allows you to take advantage of DRS and HA if your application requires it. If you have one of these modern cloud-native applications, it's nice to have, but it's not essential. But if you're trying to put those pet-type workloads into the cloud, and we actually have some customers who are doing that, you can take advantage of that technology. And then of course, we're hardening the OpenStack code that we include in the package so that it's running with optimal settings. So for example, there are no plain-text passwords in any of our configuration files. The team makes sure that they're all encrypted, and I also have another superstar engineer from the team here who does a lot of work with Ansible to make sure that happens. All right, so no OpenStack PhD required. We say that tongue-in-cheek, of course. We know OpenStack isn't impossible to run, but there are some things that you need to know as an OpenStack practitioner in order to configure your storage correctly, to configure your networking correctly, and to configure your drivers correctly. These things need to happen, and we can actually automate that process for you. And then simplified OpenStack operations. Think of the amount of work that it takes to add a cluster or add new compute components to an OpenStack cloud. It can be a little bit of an involved process. So again, we're making sure that your operations are a little bit easier to get through for your OpenStack cloud. And last but not least, the offering of single-vendor support. And yes, the slide is now complete, so it's time to take the picture. I've been trying to linger so you guys can get your snapshots in. But seriously, having the capability to have single-vendor support means you're able to call us for support for OpenStack, call us for support for VMware vSphere, for VMware NSX, and all that. Okay, and yes, you can hold us to it.
I have no problem leaving this on the screen. If you go to the mic, you sure can. Yes, we're being recorded. We wanna get all this evidence on tape. That support, I mean, does it allow the vMotion kind of facility for moving one of the instances from one compute node to the other? Right, so I may not have made this 100% clear, but something that's different between using OpenStack on vSphere versus using OpenStack on another hypervisor is that we provision to clusters instead of to individual hypervisors. So the benefit to that is that your cluster has DRS turned on by, well, we ask you to turn it on. Obviously, you should turn it on. So it's taking advantage of vMotion. It's taking advantage of DRS scheduling overall. So you're able to take advantage of that. Does that answer your question? Okay, cool. Any other questions before I move on? And for people who are trying to take this picture, going once, going twice, sold, okay. So, extending VMware Integrated OpenStack. I showed you that lovely slide of all the OpenStack projects that are part of VMware Integrated OpenStack. And we get this question all the time: oh well, what if I wanna use Ironic? What if I wanna use Trove? Well, the thing is that we don't have that expertise in-house. So we partner with organizations like Tesora. So I wanna introduce the CEO of Tesora, Ken Rugg, who's gonna talk about how we work with Tesora to support Trove. Before you get started, Ken, I just wanna kick off my live demo. You can stay right there. All I have to do is get my database instance running. So I'm gonna do a live demonstration. It's not actually up there yet, but I'll make sure that it gets up there. First, I'm gonna kick off a database instance. You'll see the output later. And now that's off and running. I was asked where that is. P-A-S. Yeah, I made sure it's hidden. So I'll go ahead and turn it over to Ken, who's gonna talk about Tesora for a bit. Great, thank you. Thank you.
And again, as we said, it's a big project. We're part of the Big Tent in providing database as a service. Tesora, as a company, is the number one contributor to the Trove project. And I think when you look at your data center, how many of you have databases? Yeah, generally a lot of people have databases running. It's part of most applications. And when you look at the workloads that you have to be able to run as you move into the cloud, databases are one of the challenges, because in many cases you wanna stand up your applications and you wanna treat those applications as cattle. Databases generally wanna be treated as pets. So I think, to go back to the analogy earlier, one of the benefits of using something like a database as a service is that it's kind of the pet minder for those databases. It makes sure they're backed up. It makes sure they're replicated across regions. Your developer just goes in and has a self-service experience where it's as if they're pulling up a VM, but in fact they're getting a cluster of Cassandra, or they're getting an Oracle instance, or they're getting a replicated pair of MySQL nodes. And that's kind of the value proposition and the notion behind this. You wanna have that kind of self-service provisioning, and making that provisioning automatic and managing it within an OpenStack cloud is what Trove does. And one of the things that was great in terms of working with VMware was that, because this project is based on all the open APIs, the standard APIs of Nova and Swift and Cinder, we were able to very quickly get it up and running on VMware, kind of no fuss, no mess, with just a few changes in variables. Another key thing, and this is part of the whole aspect of treating things like cattle but having them think that they're pets: it's not just about the provisioning. It's also about how you manage a database, like when you wanna resize a database.
Well, when you wanna resize a database, it's not just: put it on a new instance, kill the process, start it up. No, you have to move all that data. You have to attach different volumes. We manage all that. You wanna make sure that databases have certain tuning parameters. Maybe you've got a whole cluster of servers, or maybe you've got a whole bunch of instances that are supporting a content management system. And you know that there's a particular profile, a set of configuration options, that you use when you're provisioning Drupal or something. Well, there's a notion of managing those things at large, so you can manage those fleets and not manage single instances. And that way, you know that every instance of MySQL that's standing behind a Drupal instance is gonna be configured the same way. It's gonna adhere to your corporate policies, all that kind of good stuff. So that's a core feature of Trove, managing that, and managing security and that kind of thing in a consistent way. Again, one of the key things I think that makes Trove a little different from some other database-as-a-service capabilities is that it supports a lot of databases. And when you look at the list of databases, they go from commercial to open source, and from NoSQL to traditional relational databases, all the way from Oracle down to things like Redis, where maybe there isn't even any persistent storage; maybe you're just using it as a cache. But managing these things as clusters, managing replication, all those things are different per database, and they're kind of tied into the architecture of Trove.
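The resize and fleet-configuration behavior described above maps onto Trove CLI calls roughly like the following. The IDs, names, and option values are placeholders, and the flags follow the Kilo-era python-troveclient, so check your client's help output before relying on them.

```shell
# Move the instance to a larger flavor; Trove migrates the data,
# not just the process
trove resize-instance <instance-id> <new-flavor-id>
trove resize-volume <instance-id> 20          # grow the volume to 20 GB

# Define one configuration group and attach it everywhere, so every
# MySQL behind the CMS gets the same tuning
trove configuration-create cms-mysql \
    '{"max_connections": 500}' --datastore mysql
trove configuration-attach <instance-id> <configuration-group-id>
```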
And the way Trove does that is you've got a controller, and then you've got effectively adapters for all these different databases that basically translate: if I wanna tell the database to do a backup, it will use the native tools for doing a backup, whether that's the native backup capabilities for incremental backups from MySQL, or the backup capabilities that Oracle provides at a commercial level, or the things that are coming out of the community or commercial offerings here. So I think that's again one of the key things: it gives you that kind of broad-based support. Again, database as a service is an interesting thing, because when you look at it from the application down, it looks like infrastructure, and so therefore I think we fit very nicely into the OpenStack world. From the infrastructure up, it looks like an application, and that's why it's a really nice fit to work together with VIO, because we're just treating the rest of the infrastructure down below as OpenStack, because that's what it is. So I'll let you go to the demo. Yeah. Okay. You don't need that. I might be tempting fate a little bit by doing a live demonstration at OpenStack Summit. People tend to crash and burn more often than not. Let's see if my screen shows up now. All right, I might have crashed the screen, but let me go back to my preferences and change whether I'm mirroring displays. Okay, let's see. Okay, we're back to mirroring displays, so you can see my screen again. So I ran an automated script that ran some Trove commands, and a Neutron command to get my port for the database up and running. So that starts at the top. Let's see. That's where I created my Neutron port for a specific IP address that my application is looking for.
That's bad practice if you're looking at 12-factor app architectures, I know, but for the purposes of my simple demo, I specified a hard-coded IP address. And so then I took that Neutron port ID for that IP address and included it when I created my Trove database instance. So you can see here that the output from the Trove command is here. I'm running a MySQL 5.6 database, and if I look at the trove list, I should see whether it's running or not. It's active. And I also have my nova list. I can see my web application, and it's kind of hard to see that with all the jumbled-up text. So as Ken mentioned before, database as a service, Trove, is just consuming OpenStack like we normally would. So when it deploys a database instance, it's actually creating a new instance that's named after the Trove database, and then I also have my web application, called webapp, that is gonna consume that database. It's a simple Django application that's going to be making a list of items. So for the IP address of my web application, I have a floating IP. So I go to my web browser, and if I try to access that web page, the site cannot be reached. So let me go ahead and start my Django application. So I'll just log into it real quick. Okay, let me see if I have this queued. Okay, so I'm just gonna go ahead and log into my Trove, or my MySQL, instance. Okay, so the Trove instance is up and running. You can see that I went with the IP address that we specified before, and I'll just do a show databases, just showing you that nothing is there except for the Django database that I asked for. So I'll go into that, and nothing is there if I try to do a show tables in that database. So let's go ahead and initialize our application. Oh, it helps if I'm in the right folder. Say that again? Oh, the database name, okay. Oh, yes, thank you very much. It helps if my environment variables are set correctly. Let's try that again.
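For reference, the scripted steps being shown (a Neutron port pinned to a fixed IP, then a Trove instance attached to that port) would look roughly like this with the Kilo-era clients. The network name, IP address, flavor, and sizes here are placeholders standing in for the demo's values.

```shell
# Pin a port to the IP the web app is hard-coded to expect
neutron port-create demo-net --fixed-ip ip_address=10.0.0.50

# Boot a MySQL 5.6 Trove instance on that port, with a "django" database
trove create demo-db 2 --size 5 \
    --datastore mysql --datastore_version 5.6 \
    --databases django \
    --nic port-id=<port-id-from-above>

trove list    # wait for the status to go ACTIVE
nova list     # the backing VM and the webapp instance both appear here
```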
All right, so it's going ahead and initializing my database on that Trove instance. So if I go into my instance again, I can see that my tables are now created. It's not just an empty set. So let's go ahead and try to run the application. All right, so my Django application is running. That means it detects the database correctly, and if I go back and refresh the screen, I'll see my application is now running. So I can enter "welcome to OpenStack Summit", and everything that I enter into my to-do list will show up on the page, because we're correctly accessing the database. Any questions on how to get database as a service running, or any questions on VMware Integrated OpenStack in general? We have the Tesora experts, we have VIO experts, VMware experts in the room. We'd love to hear whatever questions you may have. And if you have questions, please queue up at the microphones so we can get them on the recording. In OpenStack, the compute node doesn't run on KVM; it runs on ESX? Yeah, it runs on ESXi, yes. Okay, as you have questions, please keep on coming to the microphones. We'd love for everyone to hear. Does it work? Yes. What kind of licensing would I need to get VIO working? So VIO is 100% free. As long as you're licensed for vSphere Enterprise Plus, you can start using it, because we make use of the distributed switch. It wouldn't work on Essentials or Essentials Plus. If you have what? Essentials? We're trying to come up with different licensing schemes to support VIO. So if you leave your business card with me afterwards, maybe my product manager can get in touch with you about what we're trying to do. And the support would also be for the kind of OpenStack that I'm running. So if you purchase support from us, it's $200 per CPU per year, which is fairly decent compared to consulting dollars, for example, from another company or any other company. All right, thank you. All right, thank you. Question?
Yeah, if I'm going to also be talked into or get interested in all the additional management tools like vRA and those, why even then run OpenStack underneath the covers if I can do all those same things with those tools? Okay, so vRealize Automation: it depends on the use case that you're looking for. So when we talk to customers about which tool they want to use, whether vRealize Automation, VMware Integrated OpenStack, or both, it depends on the use cases. So if their end users are used to this blueprint model, where I just click and I get everything done for me automatically, then maybe vRealize Automation might be more along their lines. But if you have developers who are used to those cloud-style APIs, they want an AWS experience in-house, that's when you start looking at OpenStack. And you may have developers already asking, well, I do it this way in AWS, so I don't want to deal with these VMware tools because they're legacy. That couldn't be further from the case; we're fully able to give those cloud-style APIs in-house with VMware Integrated OpenStack. And then you have some organizations that want to use both, like they want to do their dev and test on OpenStack, for example, or they have some organizations within the company that use OpenStack for production, but then they may have other organizations that want to use the vRealize Automation capabilities. So it really is down to your use cases. Thank you. Any other questions? Okay, oh, you're okay. Can you run this on nested ESXi? In production, we don't recommend that, just because, hey, it's nested ESXi. That being said, if you want to do some testing in-house, that's fully possible. But again, I wouldn't recommend it in production. Yes. Do you need NSX in order to do Neutron security groups? You need NSX to do Neutron security groups. And let me explain why.
So I'm never going to knock the efforts of the community, because Neutron is a great reference architecture as it is, and the community is going to keep on improving it. But for some of our customers who have gotten to a certain scale, VMware NSX tends to be a better fit, just because of the availability of layer 3 services, the scaling out of the firewall, and all of those things. So if you get to a certain point where just using Open vSwitch and all that stuff is not giving you the scalability that you require, that's when we recommend checking out a network virtualization solution like VMware NSX, or the other solutions that are on the market. Specifically with VMware Integrated OpenStack, you can start very small with the virtual distributed switch, without VMware NSX, but you're not going to get the security groups and the other things that you're looking for. So we recommend using VMware NSX for those things. Okay, any other questions? Do we have any questions about database as a service? I have the two foremost experts here in the room, so please show them some love too. Any questions for them? Definitely silence. Okay, I guarantee you, as your administrators, or sorry, as your developers, are continuing to work with OpenStack, they're going to want simplified means of deploying databases to support their applications. Sure, you can do things like Vagrant and Terraform to stand up a database, but there's going to be a lot of care and feeding in those automation scripts to get MySQL deployed, and then what happens when you have to go to another version and you have to redo some of your tooling? So a tool like Tesora's, that is going to automate all of those items for you, not just the deployment, but the care and feeding and the lifecycle management, is really going to be a benefit to your developers. So if you haven't checked out Trove in your deployments, I highly recommend checking it out.
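Circling back to the security-groups question for a moment: whichever backend enforces them, NSX or otherwise, the Neutron API calls look the same. A sketch with placeholder group names and CIDRs, using the Kilo-era neutron CLI:

```shell
# Create a group for the database tier and allow MySQL from the app tier
neutron security-group-create db-tier
neutron security-group-rule-create db-tier \
    --direction ingress --protocol tcp \
    --port-range-min 3306 --port-range-max 3306 \
    --remote-ip-prefix 10.0.0.0/24
```

The point in the talk is about which backend can enforce and scale these rules, not about the API itself, which stays consistent across drivers.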
When I saw how much magic it was to deploy a database, I couldn't believe that I used to do it manually. So thank you guys for that solution, and I really recommend you check it out. Any other questions? Okay, so in that case, I'll wrap it up for now. And if you want to ask me questions that weren't on camera afterwards, feel free to do so. I'm going to stick around for a few minutes and otherwise enjoy the show. Thank you guys.