Okay, am I on? Yeah, you're on. Okay. Welcome, everybody. Today we're going to talk about this long title: A Complete Guide to Running Your Own DBaaS Using OpenStack Trove and VMware Integrated OpenStack. I'd like to find out whether any other session at the summit has a longer title than ours. Maybe we get a prize for that. I'm Doug Shelley; I work at Tesora. My co-presenter is Santosh Sundararaman from VMware, where he's a product manager. So let's get going. Here's what we're going to cover: I'll do a quick overview of Database as a Service and Trove, and then show you a quick demo of one of the options for deploying Trove to OpenStack. Then Santosh is going to talk about VIO and show a demo of Trove with VIO. So, as a general overview of DBaaS, let's start with some of the challenges people have and how they can be addressed. I frame these challenges from two perspectives. One is the developer's view: what does a developer see as challenges with databases? "I have a release I need to get out, and I need my environment now." It's all about getting databases provisioned quickly, on the developer's timeline. Meanwhile, from the point of view of the IT person supporting him, he just doesn't understand why they need to move so rapidly. If you reverse it and take the IT view, what does he see as challenges? Budget issues, resource constraints. "Why can't everybody use Oracle?" That's really about standardization; IT people really like standardization. Why does this guy need Cassandra? Can't he do whatever he wants with Oracle?
Then you get into challenges around shadow IT: the people he's dealing with suddenly put their credit card into Amazon, and they're off running a database on a public cloud, which probably isn't a good thing for that company. There are risk issues, security issues, and the developer's response is, "these ops guys just don't get what I'm trying to do." So you've got the developer's view of the challenges and the IT service's view of the challenges. What's the solution? We see the solution as Database as a Service. Effectively, what this means is the delivery of database software, and everything related to it, as a service. We're at the OpenStack Summit, so we all have some concept of what "as a service" means, but what it translates to is: it's available on demand, without any hardware or software installation or configuration. It's easy, a single push button. And it should be fully managed and maintained by whoever the service provider is; in the case of an enterprise private cloud, that would be your IT team. Our marketing guys came up with this image: the pop machine. That's what a database service should look like. You walk up to the pop machine, you push MySQL, and a MySQL database pops out the bottom. That's what we're looking for. So how can this be addressed in OpenStack? One solution is the OpenStack Trove project. This is Trove's official mission statement; I'll read it so you can follow along: to provide scalable and reliable cloud database as a service provisioning functionality for both relational and non-relational database engines, and to continue to improve its fully-featured and extensible open source framework.
That's a lot of words, but one of the key things to me is the comment about relational and non-relational: this isn't focused just on MySQL. That's central to the mission of Trove. Scalable and reliable are also key to any OpenStack service, but particularly for a database service. So what is OpenStack Trove? One key misconception about Trove and database as a service is that all it does is provisioning. Provisioning is one thing it does; one of the first things you can accomplish is launching a database instance. But there's much more, and I think that's important to understand. Besides provisioning a single instance, you can provision and manage more complex topologies, such as clusters of databases, and replication, for example launching a master and several slaves. Then, from an automation point of view, outside of provisioning, what else can you do? You've got backup and restore; you can do failover and resizing; you can scale clusters, so there's both horizontal and vertical scale. There's fetching log information from the instances, which is important for some use cases, and configuration management, so you can do database-specific tuning. And again, as the mission statement says, multiple databases are supported with common APIs; that's critical to the vision and the implementation: relational and non-relational. The management interface is like everything else in OpenStack: REST API, CLI, and a web-based UI. So, as I mentioned, this is more than provisioning: it's complete database life cycle management. I'll quickly take you through a representation of how we see this. On the top left, you've got provisioning.
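The REST interface mentioned here is plain HTTP plus a Keystone token, which is what the CLI and the web UI call under the covers. Here's a minimal sketch of building a request to list Trove instances; the endpoint, tenant ID, and token are placeholders you'd normally obtain from Keystone, and the URL layout follows the Trove v1.0 API as I understand it:

```python
import urllib.request

def trove_request(endpoint, tenant_id, token, path):
    """Build an authenticated request against the Trove REST API."""
    return urllib.request.Request(
        f"{endpoint}/v1.0/{tenant_id}{path}",
        headers={
            "X-Auth-Token": token,        # Keystone-issued token
            "Accept": "application/json",
        },
    )

req = trove_request("http://trove.example:8779", "TENANT", "TOKEN", "/instances")
print(req.full_url)  # the same call "trove list" makes under the hood
```

The point of the common API is that this exact call shape works regardless of which database engine backs the instances.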
We've talked about what that is: on-demand, push-button provisioning, with a wide selection of databases, both single instances and clusters. Security is important, and Trove has features that support it. There are mechanisms for upgrading running database instances with the latest patches for the operating system and the database software. User permissions can be managed. Another key thing, and this is optional but available out of the box, is restricted root access. Part of the point here is that you're providing an environment where the end user doesn't need to be root to do everything they need to do. If they have to drop into root access, then we've kind of blown the vision. On tuning, I've talked about some of this: you can provide log file access, so for example the slow query log on MySQL is accessible through the API. There's configuration management, which is for tuning, so you can provide database-specific tuning settings. And you don't have to point at one instance at a time and say "do it to this one"; you can push a configuration down to multiple instances simultaneously. On the management side, there's some schema management, the replication I talked about for scale and availability, and backup and restore. I'll admit the layout of this slide is a little confusing. And there are lots of databases supported; as I mentioned, that's critical to the vision. So there's your logo slide. I don't even know if 13 is still the right number; it might be higher than that now. But the usual suspects are all in there. Okay, next: diving into the architecture. This is how Trove sits within OpenStack. The first thing I want to show you is how Trove lays on top of standard OpenStack, and then we'll look at what it looks like on VIO.
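The "push tuning to many instances at once" piece works through configuration groups: you create one named group of settings, then attach it to any number of instances. As a rough sketch of the create-request body (field names follow the Trove v1.0 configurations API as I understand them, and the MySQL settings are just examples):

```python
import json

def build_configuration_group(name, values, datastore, version):
    """Body for creating a Trove configuration group
    (POST /v1.0/{tenant_id}/configurations). The same group can then be
    attached to many instances, which is how one tuning change gets
    pushed to a whole fleet at once."""
    return {
        "configuration": {
            "name": name,
            "values": values,  # engine-specific knobs, validated server-side
            "datastore": {"type": datastore, "version": version},
        }
    }

cfg = build_configuration_group(
    "slow-query-tuning",
    {"long_query_time": 1, "slow_query_log": 1},  # MySQL-specific settings
    "mysql", "5.6",
)
print(json.dumps(cfg, indent=2))
```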
You've got the services in the middle there: Swift, Nova, Cinder, Keystone, Neutron and Glance. Those are the other OpenStack services that Trove leverages to accomplish its goals. On one side you have your Trove control plane: there are basically three services, API, task manager and conductor, that do all the work. The API service looks basically like every other OpenStack API service; it presents a REST API. The circle with the cross in the middle, that's the messaging bus: the services communicate over, say, RabbitMQ, like everything else, and they use a database to store metadata. On the other side is where Trove ends up being different from a lot of OpenStack services: the concept of a guest. When you launch an instance, you end up with a Nova VM running an image that includes a piece of Python code we refer to as the Trove guest agent, the database software of whatever database you chose, and the operating system. In standard OpenStack, this would be a QCOW2 image. That image gets put on a Nova instance on launch, and the Trove guest agent communicates over the same RabbitMQ bus as the control plane. And then applications have direct access to the database. That's key too: Trove is not in the data plane, as I call it. It's control plane only. The data path is still your favorite database client communicating on the port exposed by provisioning. Now, at a high level, what does this look like when you go to VIO? Basically, VMware Integrated OpenStack provides the OpenStack services; Santosh is going to talk more about this in his part of the presentation, so I'll leave that there. But there are some other changes on this slide that may be hard to pick out.
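To make the control-plane-to-guest path concrete: the task manager drops an RPC message on the bus naming a method, and the guest agent in the VM dispatches on it. Real Trove uses oslo.messaging, so the envelope below is a simplified, illustrative shape, not the actual wire format:

```python
import json

def build_guest_cast(method, **kwargs):
    """Rough shape of an RPC cast the task manager sends to one guest
    agent over the message bus; fields simplified for illustration."""
    return {
        "method": method,   # handler the guest agent dispatches to
        "args": kwargs,     # keyword arguments for that handler
    }

# Tell one guest to take a backup and upload it to Swift.
msg = build_guest_cast(
    "create_backup",
    backup_info={"id": "b1", "location": "swift://backups/b1"},
)
print(json.dumps(msg, indent=2))
```

Note that no application data ever flows over this bus; it carries control messages only, which is what "Trove is not in the data plane" means.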
The guest image ends up being a VMDK or OVA as opposed to QCOW2. While the Nova instance is still a Nova instance, under the covers it's likely running on ESXi. And the Trove control plane would also be set up in a VM in VIO, as an ESXi VM. I've talked about this a bit; I just want to expand on it. Multi-database support is critical, and I think I've said that enough that everybody gets it. But key to that is that the API is datastore agnostic. Trove's view is to present a datastore-agnostic API, so that you provision MySQL the same way you provision Oracle or Postgres or whatever else. The end-user experience through the UI and the APIs is the same; that's important to the vision. If you had to do everything database-specific, what would be the point? You might as well just use database-specific tools for everything. On the left is the Horizon dashboard that comes with it; you'll see that in Santosh's demo. The Trove controller, the control plane, is those three services. And then there is datastore-specific code in the guests, but that's not exposed to the people using the system; it's under the covers. So when you run trove backup-create, the task manager sends a message down to the guest telling it to do a backup, and the guest knows how to do a backup in the appropriate way for, say, MySQL versus Postgres versus Cassandra. Okay, so I want to show you, and this may be a difficult demo, how the installation and configuration of Trove goes. I'm going to do it in the context of Tesora's product, because one of the things we provide is simple installation and configuration. I have a video I'm going to run and talk through. We'll see how it goes. Did that show up? No, it's not on my screen. That's annoying.
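That "generic request in, engine-specific behavior out" pattern can be sketched as a simple strategy lookup. Trove's real guest agents are structured differently in detail; the class and function names here are illustrative only:

```python
# Minimal sketch of how a datastore-agnostic call like
# "trove backup-create" fans out to engine-specific code in the guest.

class MySqlBackup:
    def run(self):
        return "mysqldump/xtrabackup-style backup"

class PostgresBackup:
    def run(self):
        return "pg_dump/pg_basebackup-style backup"

BACKUP_STRATEGIES = {
    "mysql": MySqlBackup,
    "postgresql": PostgresBackup,
}

def create_backup(datastore):
    """Control plane sends one generic 'create backup' message; the
    guest picks the right implementation for its engine."""
    strategy = BACKUP_STRATEGIES[datastore]()
    return strategy.run()

print(create_backup("mysql"))
```

The user-facing API stays identical across engines; only this dispatch table, buried in the guest, differs.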
I don't know if I can do this. Okay, now we went out of mirroring again. Sorry, folks. I wanted to blow that up, but I can't seem to get to that screen. Wait a minute, that's your demo, right? Yeah, I don't know why it did this. Let me just... there we go. I think Amrith out there is going to be making jokes about my rotten-fruit laptop after this presentation. Okay, here we go. There we are. So we basically provide a setup shell script that will install the product, but first, the installation steps for our product. I'm setting this up in a VM that's running Ubuntu, so I have to do some things to tell Ubuntu to pull down our packages. Just a couple of steps: set up the APT repos, point at our key server and put our key in there, and then run apt-get update. That's our repo for our Enterprise 1.9 product; the Enterprise Edition is basically a distribution of Trove. Now it's doing the update, and that fell off the bottom of the screen. Basically, we provide packages for the various pieces of Trove in Tesora DBaaS, and this is doing apt-get install on those pieces. It's blasting through that; I think I sped this part of the video up, so that's good. Once it finishes the package install... unfortunately it fell off the screen. The last part is a package install of some of the Mistral pieces, because in Mitaka, or in Newton, I can't remember which, Mitaka, we added support in Trove for scheduled backups, and we implemented it by leveraging Mistral, because it seemed kind of pointless to rewrite workflow. So this installs Mistral. From the point of view of Trove it's actually completely buried: Trove has the APIs to tell Mistral to do the right things.
That's just doing the last bit of the install, and then we'll go on to configuration. Some of these Python packages are pretty big. Okay, here we go. Now that we're done with the package installation, we're going to run our setup script, which basically asks a bunch of questions. So I guess that means it's not actually opinionated. It's going to take you through configuration. Under the covers, what this is doing is collecting some information about your OpenStack setup and then setting up the Trove conf files appropriately. Everybody who's installed any OpenStack service knows that figuring out what to put in all the conf settings across the various services is pretty challenging, and usually you get it wrong and then stuff doesn't work. Here it's configuring the metadata store: you point it at a MySQL instance, and whether that's dedicated to Trove or shared with the rest of your OpenStack is a deployment choice. It's registering the endpoint in Keystone: much like every other OpenStack service, Trove is the "database" endpoint, so that gets put into Keystone; it sets up some credentials and specifies the endpoint. It's creating a user called trove, which is effectively the service user for the Trove service. And now it's doing the Mistral config: setting up the endpoints, creating the services. Then it collects some information about where your RabbitMQ is, gets the user ID and password information to get to Rabbit, and then sets all the conf files and restarts the services. That trove list at the end is my key debugging step: you do all this, you type trove list, and if it doesn't work, you did it wrong. So hopefully trove list works. In this case there's nothing to list; we have no instances.
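What the setup script is doing can be shown in miniature: collect a few facts about the cloud and render them into Trove's config file. Exact option names vary by Trove release, so the ones below are illustrative rather than authoritative, and the addresses and credentials are placeholders:

```python
import configparser
import io

def render_trove_conf(rabbit_host, db_url, auth_url):
    """Render a toy trove.conf from the answers a setup script collects."""
    conf = configparser.ConfigParser()
    conf["DEFAULT"] = {
        "rabbit_host": rabbit_host,   # message bus shared with the guest agents
        "trove_auth_url": auth_url,   # Keystone endpoint for the service user
    }
    conf["database"] = {
        "connection": db_url,         # metadata store (a MySQL instance here)
    }
    buf = io.StringIO()
    conf.write(buf)
    return buf.getvalue()

print(render_trove_conf(
    "10.0.0.5",
    "mysql+pymysql://trove:secret@10.0.0.6/trove",
    "http://10.0.0.7:5000/v3",
))
```

Getting a handful of values like these consistent across several conf files is exactly the part that's easy to get wrong by hand, which is why trove list is the debugging step of choice.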
For the tail end of this demo, I'll quickly show you how this sets up in Horizon. This is the Horizon dashboard; we'll just sign into it. One thing I did that I didn't show in the demo is load a datastore, which I'm going to show you: I loaded a MySQL 5.6 image. Our tooling doesn't, at this moment, create the VMDK, so that was done manually and put into Glance, and I didn't demo that. But you can see here's the MySQL 5.6 datastore and datastore version in Trove, and when you do a launch, you can see MySQL there. With that, I'll turn it over to Santosh; he's going to carry on and show you the rest of this demo. I'll close mine down so you don't trip over it. I think it should be good to just flip to here when you're ready, and you can full-screen it by double-clicking there. Go to those slides now. There you go, and you should be able to see it down there. Yeah, there you go. So before I jump into a bit of a VIO overview and talk about how VIO and Tesora's Trove together provide a really stable database as a service, I want to do a quick OpenStack 101 and set the context for what I'm going to talk about. OpenStack, as all of us know, provides a bunch of tools for developers to consume infrastructure in a programmatic way: APIs, CLIs, Heat the orchestration layer, and a bunch of SDKs. So developers can now automate their entire application, and they don't have to wait for IT to provision their infrastructure. The moment they have their code ready, they can automate all the next steps to deploy that code and put it into production. The developers have what they need to develop their application, test it out, and run it in production. But what about this guy, the operator?
As much as OpenStack is a tool for developers, the operators are equally important: they have to maintain that OpenStack cloud and make sure the developers get what they want 24/7. There's a lot the operator has to do: deploy OpenStack in a production-grade manner; keep monitoring it, and if something goes wrong, be the first to find out, then troubleshoot to figure out what went wrong and fix it; do maintenance on a regular basis, say adding more storage, or swapping out a failed storage disk; patch OpenStack whenever there's a bug fix or a maintenance release; and upgrade OpenStack from one release to another. So there are a lot of complex activities the operator has to perform, and it's really important that the operator has the right tools, tools that make it easy to keep operating and maintaining OpenStack on an ongoing basis, so that the developers get the level of service they expect. This becomes even more important as we start adding more value-added services on top of OpenStack, like database as a service or file system as a service. The more projects we add on top of OpenStack, the more the operators need the right tools to maintain it, and the more important it is that OpenStack itself is built on a stable platform, because OpenStack as a whole is only as stable as the platform you build it on. And that's where the VIO product comes into the picture. Our goal with VIO is to attack the problems I mentioned earlier. One is: how do we make sure that, as we keep piling services one on top of the other, it's easier for the operator to maintain OpenStack, to deploy, patch, upgrade, and run operations, all of that.
One of our key focuses is simplifying all of those operations by providing simple, canned workflows for deployment, upgrade, and any other operation that has to do with OpenStack. That's one of our goals, and we did a really good job of integrating a lot of the OpenStack workflows into vSphere, so that typical IT admins, the ones used to running vSphere in their data centers, can use the same interface to operate OpenStack in a familiar manner. That takes care of the simplicity aspect. The other aspect of making the operator's life easy is providing a stable platform on which to run OpenStack. There, what we do is provide integrations with vSphere, NSX, vSAN, or any vSphere datastore, and we try to expose as many of the enterprise, production-grade features of those platforms through OpenStack as we can, so that the operator has a platform they know how to operate, and the developers get a solid, resilient OpenStack distribution built on battle-tested infrastructure. So that's our strategy with VIO: provide an OpenStack distribution that's super simple to operate and that runs on a stable platform, so the operator isn't continuously troubleshooting the infrastructure to keep it up and running. That's our strategy with the VIO product, VMware Integrated OpenStack. And what exactly is VMware Integrated OpenStack? It's a standard distribution of OpenStack, meaning the OpenStack code that ships with VIO is the standard upstream code you'd see with any other distribution, or that you'd get directly from upstream on GitHub. We are DefCore certified, meaning the APIs exposed through VIO are the exact same OpenStack APIs you'd see with any other OpenStack distribution. In fact, we were the first distribution to get DefCore certified.
Along with that, the distribution also includes the right management tools to operate OpenStack, to do the things I mentioned earlier that an operator would typically have to do: deploy, upgrade, patch, maintain, things like that. In a nutshell, that's VMware Integrated OpenStack. Going back to our approach: one of the main pillars was providing an OpenStack that's simple to operate, and the other was providing a stable platform to run it on. On the first, providing an OpenStack that's simple to operate, the way we do it is, first, we provide a lot of workflows that let typical IT admins who are used to vSphere in their environment use the same tools to start operating OpenStack. There are workflows to deploy OpenStack using a very simple ten-to-twelve-step GUI-driven process: no CLI, no meddling with config files, no downloading packages, none of that. The entire process is streamlined through simple GUI-driven workflows. Likewise, there are workflows for operating, patching, and upgrading OpenStack. That's a key component of VIO that makes it simple to operate. Second, we've added a lot of nifty tools to our distribution of OpenStack: simple CLI tools that let the admin get a quick snapshot or summary of the OpenStack deployment, to answer questions like: what's the health of my OpenStack cluster? Is everything up and running? Are there any services down that I need to go take care of as an operator? There are also tools that enable the operator to do performance troubleshooting. If a developer comes to me and complains, "hey, my Nova VM boot is taking twice as long as it did last week," we have tools that help the administrator look at the entire trace of what happens when any OpenStack operation is triggered, figure out where the bottlenecks are, and start troubleshooting from there.
We've integrated the entire OpenStack log stream with a syslog server. And if the syslog server being used is Log Insight, which is our own log analysis product, we've also built custom dashboards that help the admins pinpoint exactly what's going on in their OpenStack deployment. If there's an error in Nova, say, they'll immediately see it in their Log Insight dashboard, which lets them start troubleshooting and maintaining their OpenStack deployment much more easily. And the last bit: we've also built a custom management pack for vRealize Operations, vROps for short. vROps is basically a monitoring tool that allows admins to monitor their infrastructure, and the custom management pack we've built for OpenStack gives the admin a deeper view into the health of their OpenStack clusters. For example: are all the datastores allocated to OpenStack healthy? Are they running full? Does something need to change there? If a tenant deploys a VM, the administrator may want to find out which hypervisor the VM got deployed to and which datastore on that hypervisor it got placed on. The management pack we've built for OpenStack provides integrations that help the operator keep tabs on exactly what's going on in the deployment. By providing all of these integrations, workflows, and tools, we've made it really simple for an operator to provide OpenStack to their developers, without babysitting OpenStack on infrastructure that isn't stable, or changing things they aren't familiar with. We've made OpenStack consumable for the average IT admin. On the other piece, providing a stable platform, I want to point out a bunch of capabilities that apply specifically to database as a service. One of the key things is the ability to leverage tiered storage.
By that, what I mean is: as we saw in Trove, there are multiple databases being supported, and as an operator I may want to place different database types on different storage. Say I want to place my Oracle database on traditional SAN storage, and some of the newer NoSQL databases like Redis or Cassandra on a commodity storage back end, say hyper-converged storage, where I put them on disks in the hypervisor. I can easily do that when I'm running OpenStack on vSphere by leveraging vSphere's storage policy-based management, where I define policies that direct different types of workloads to different kinds of datastores. I can say: my Oracle VM has to go to the gold datastore, which is traditional SAN; my Cassandra database can go to commodity storage. Having that capability makes it easy for the OpenStack operator to define which database gets placed on which storage back end. Likewise, when running databases in production, it may be very important to make sure the database instance is not starved by noisy neighbors. The operator may want to define reservation policies that set aside dedicated capacity for the database instance: say, reserve a certain number of IOPS for my database, so that when there's heavy I/O elsewhere, my database isn't starved; or likewise for the network, if the database is writing to back-end storage connected over the network, reserve a certain bandwidth so it doesn't starve under contention. All of that can be done through well-known vSphere features that let the admin reserve dedicated capacity for virtual machines or workloads, so there's no starvation when multiple workloads compete for resources.
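One plausible way to wire the tiering idea into OpenStack is to map each database type to a Cinder volume type whose extra spec names a vSphere storage policy. The tier names and mapping below are invented for illustration, and while the vmware:storage_profile extra spec follows the vSphere Cinder driver's convention as I understand it, treat the specifics as an assumption:

```python
# Sketch: route each database engine's data volume to a storage tier.
TIER_FOR_DATASTORE = {
    "oracle": "gold",        # traditional SAN-backed datastore
    "mysql": "silver",
    "cassandra": "bronze",   # commodity / hyper-converged storage
    "redis": "bronze",
}

def volume_type_spec(datastore):
    """Build a Cinder volume-type definition whose extra spec points at
    the vSphere storage policy for this engine's tier."""
    tier = TIER_FOR_DATASTORE.get(datastore, "silver")
    return {
        "name": f"{tier}-db",
        "extra_specs": {"vmware:storage_profile": f"{tier}-policy"},
    }

print(volume_type_spec("cassandra"))
```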
Also, as Doug mentioned, Trove enables developers to deploy database clusters. One thing developers want when they deploy a database cluster is to make sure the multiple nodes that make up that cluster don't all land on the same hypervisor; that sort of defeats the purpose. vSphere already has the capability of providing affinity and anti-affinity rules between instances, so when developers define their database cluster, they can define policies that place the database instances on multiple hosts, so that if one host goes down, it doesn't take the entire cluster down with it. So those are some of the capabilities that apply very specifically to the databases deployed through database as a service. But there are also a ton of other features that vSphere and NSX as a platform provide for running a really stable OpenStack cloud: things like HA at the hypervisor level, and vMotion and DRS, where the OpenStack developer doesn't have to worry about what happens to their workloads if one of the hosts goes down, and the operator can evacuate a host and take it down for maintenance. All of those features are provided at the vSphere layer, and the admin can do all that maintenance, say evacuating a host or migrating virtual machines, at the vSphere layer without impacting anything at the OpenStack layer. So by running OpenStack on vSphere, operators can provide a stable OpenStack platform for their developers.
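From the OpenStack side, anti-affinity is usually expressed as a Nova server group plus a scheduler hint on each boot request. The payload shapes below follow Nova's API as I understand it, and the group name and ID are placeholders:

```python
import json

def build_server_group(name):
    """POST /os-server-groups body: with the anti-affinity policy, every
    member of the group must land on a different host."""
    return {"server_group": {"name": name, "policies": ["anti-affinity"]}}

def boot_hint(group_id):
    """Scheduler hint attached to each cluster node's boot request so it
    joins the group and is spread accordingly."""
    return {"os:scheduler_hints": {"group": group_id}}

grp = build_server_group("cassandra-cluster")
print(json.dumps(grp, indent=2))
print(json.dumps(boot_hint("GROUP-ID"), indent=2))
```

On vSphere, hints like these can map down to DRS anti-affinity rules, which is where the host-level enforcement actually happens.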
So, to quickly recap: I wanted to highlight how VIO addresses some of the complexities around operating OpenStack, and put that in perspective with Database as a Service, because Database as a Service is an additional service that runs on top of core OpenStack. As we start adding more services, it's important that the underlying platform is stable and easy to operate, and that's exactly what VIO provides. With that, I'll quickly jump into a demo, once I figure out how to switch this. Nope. That's great. Sorry about that. Thanks, Doug. So Doug showed us an example of how to deploy Trove using the Tesora packages. Once we deploy Trove, how does a developer actually consume Database as a Service to start writing their application? I have a really quick example to show how a developer would use Database as a Service to build a web application. This is our standard Horizon dashboard. I've already deployed a web application in one of my instances, which I just called webapp. Trove, Database as a Service, can be accessed through the Database tab in the Horizon dashboard. Here I have two terminals: the lower one is my web application, and right now I'm trying to connect to a database there, and it's going to fail because I don't have any database provisioned. Next, I have a tiny shell script that calls the Trove CLI to create a database for me. Here, as you can see, I'm going to do a trove create of a MySQL instance. Inside that MySQL instance, I'm going to create a database called django, and I give it a username and password for the MySQL user. And I tell Trove which network to connect it to, so that I place it on the same network as my web application for this simple example.
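What that shell script's trove create amounts to on the wire is a single instance-create request that also seeds a schema, a user, and the network to attach. Field names follow the Trove v1.0 API as I understand it; the flavor, network ID, and credentials are placeholders:

```python
import json

request = {
    "instance": {
        "name": "mysql-django",
        "flavorRef": "2",                           # Nova flavor for the guest VM
        "volume": {"size": 5},                      # Cinder volume, in GB
        "datastore": {"type": "mysql", "version": "5.6"},
        "databases": [{"name": "django"}],          # schema created at first boot
        "users": [{
            "name": "webapp",
            "password": "s3cret",                   # placeholder credential
            "databases": [{"name": "django"}],      # grant on that schema
        }],
        "nics": [{"net-id": "PRIVATE-NET-ID"}],     # same network as the web app
    }
}
print(json.dumps(request, indent=2))
```

One request, and the guest comes up with MySQL installed, the schema created, and the user granted, which is what makes the "fail, then succeed" connection test in the demo work.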
Once I do that and give a password for the MySQL instance, Trove is going to deploy an instance, install MySQL on it, and configure it based on the input values I just gave. And once I do the trove list that Doug used to check his Trove deployment, we see the MySQL database instance I created show up. The key thing to note here is that the Trove instance is actually a Nova virtual machine. If you do a nova list, my Trove database instance shows up as a running virtual machine, because the database instance is actually running inside a Nova instance. That's what we see here: once this completes, I have the web app that I created before starting the database instance, and I also see my database instance VM. If I go to my OpenStack dashboard, I can see that Trove is spinning up the virtual machine and bringing it up with the MySQL packages on it, and I can see the same in my Nova VMs as well. So we can either wait, or, since this is a recorded demo, I can fast-forward. I hope I can do that in real life too. Now our Trove database instance has been created; it's up and running. And we can go back and see how I can actually use this database instance in the web application I deployed before this demo. This is my web app instance. I've already logged into my web application, and here I'm trying to connect to the MySQL database that was created a little while back. Because the database now exists, I'm able to connect to my database instance, and I see my MySQL prompt, which you'd typically see when you access a MySQL database. Here I do a show databases to list the database schemas that have been created, and I want to use the django database as the back end for my web application. It's a new database, so there are no tables or anything in it yet.
So what I'm going to do is run a script I have that creates all the tables my web application needs. I'll just run that script, and it creates a bunch of tables on the MySQL instance that was just deployed using Trove. Once that's done, I log back into my database instance just to make sure the tables I created are actually there. I can go into my database, show all the tables, and see all the tables my web application needs to store its data. Now that my database back end has been configured, I start the web server on my web app, which is going to use that database back end to store the data users submit. Here we see that the web app has a floating IP, which I need in order to access the web application from outside the private network. So let's copy that and open the web application. It's a very simple web application that takes whatever string I give it and stores it in the database. So this is the web app, and any string I enter here gets written back to the database. In essence, as Doug mentioned, what I've gotten from Database as a Service with a single click, or in this case a single CLI call, is a running database instance with all the necessary packages configured the right way. As a developer, I don't have to worry about going and setting up MySQL, or, if it's another database, pulling the packages and setting that up. I do one click, or call one CLI command, and I have the database ready, and I can focus on my web application or whatever application I'm developing that needs a database. And in VIO, what we've done is make it very easy to deploy and operate OpenStack itself, so the operator does not have to go learn anything new when a new service like Trove, or some other service, comes along later.
The operator does not have to worry about learning new tools or new skills to provide those services to developers. With VIO, they can use the existing tools and workflows they're familiar with to offer newer services to their developers. And Tesora does for Trove what VIO does for OpenStack. Together, it's possible to provide a production-grade development platform that gives developers compute as a service, network as a service, and now Database as a Service too. That's pretty much all I had. Any questions? Anybody have any questions? I have a question. Sorry, it's a little bit unrelated, but it's related to Trove. With the release of Ironic as part of Mitaka, is Trove able to run on or leverage Ironic? So we've done some testing of this. Basically, there's a driver in Nova for Ironic, and Trove hits the public API for Nova. So assuming you configure Ironic properly, it should just work. At Tesora, I know, I think, Amrith, is he here? Oh, he was here; he left. He had done some testing for us on this to prove that it would work, and I have no reason to believe it doesn't. As for containers, I think it was, I don't know if you were in it, but I think it was in Austin, there was a discussion about that in one of the design summit sessions. I think there are challenges because, and I'm trying to remember, the Nova Docker driver, I think, has been deprecated, right? Or something like that. Is that true? And I don't know that it supports things like Cinder volumes, so there could be challenges there, whereas with Ironic we believe it's fully functional. But it's a good question. It's really going to depend on the driver support for containers in Nova, because at this point Trove only knows how to talk to the Nova API.
You mentioned that with vRealize Log Insight there's some integration and some benefit from that standpoint. Is there any benefit in vRealize Automation, where Database as a Service somehow ties into it? For Database as a Service, not at the moment. But there are other capabilities that vRealize Automation has that can be leveraged on top of OpenStack. That I know; I know that vRealize can be used on top of OpenStack, whether it's VIO or somebody else's OpenStack. But I was asking specifically about Database as a Service. For Database as a Service specifically, not at the moment. OK. Although from a vRA standpoint, you're still launching an image and deploying it, so it can still be leveraged, because at the end of the day a Trove instance is just a Nova instance, and when vRealize Automation talks to OpenStack, a database instance would be no different from a web server. I mean, I think we've just started down this path with our VIO, our VMware friends, so I think there's still opportunity for us to collaborate on what makes sense in terms of the offering going forward. Anybody else? OK, well, thank you everybody for coming. Thank you so much.