Hi, folks. Welcome to the session on Trove. I wanted to start this session off by telling you a short story. A couple of years ago, I was working on developing a new application at a big company. Like most applications, this one had some data requirements, so we needed a database as a place to store the data. And like every big company, we had a dedicated IT team, so I shot off an email to IT saying this was the sort of data we needed to store in the database, and got back an email from them with a Word doc, three pages long, asking me to explain what sort of data requirements we had, what sort of backups we would need, things like that. It literally took me a day and a half to fill out the Word doc and send it back to them. So, where I'm going with this: let me tell you about Trove. My name is Nikhil Manchanda, and I'm the PTL for Trove. I'll be co-presenting this with Amrith, and I'll let him introduce himself. Hi, my name is Amrith. I'm one of the founders of a company called Tesora, and we focus on Trove; we work only on Trove for OpenStack. So, we're going to give you an overview of Trove and its architecture, follow it up with a short demo showing you some of what Trove can do, and finish up with what we have in Juno and what we're looking to implement in Kilo. So, let's get started. Amrith will carry the first part of the presentation and talk about Trove, its mission statement, and its architecture, so I'll hand off to him. Thanks, Nikhil. Okay. I hope you're able to hear me okay in the back. Like Nikhil, I've spoken with many companies that have tried to deploy a database-driven application, and one of the major pain points people talk about is that it can sometimes take months to get an IT organization to provision a database. So, we'll show you how Trove can change that.
Just to set expectations: how many of you here are already using OpenStack in some form? How many of you are using a database in OpenStack? How many of you have considered Trove? How many of you have tried it? Okay. You notice that number keeps coming down. Hopefully, by the end of this presentation, more of you will want to try Trove. Okay. My contact information is here; it's also on the last slide. We will be making these slides available, with copious notes pages, so you should be able to use them as reference material later as well. And, by the way, if you're tweeting about this, some hashtags for you. Okay, now it works; forgive the aspect ratio here. So, what is Trove? Trove, by its own mission statement, aims to make it easy for you to provision, operate, and manage databases in an OpenStack environment. The goal is to make this easy for people who are using OpenStack to provision and operate a scalable database framework. That's all that Trove does. A lot of times, people get confused when trying to understand Trove, so I want to spend a couple of minutes on this picture here. Shown in the middle is a database. It could be relational, it could be non-relational; it could be any database. On top is your application. Trove sits below the database, and it's only interested in what we call the management and provisioning plane. We provision databases, and we do some amount of administration and management. Trove, by default, does not touch your data, does not execute queries, does not get into the data plane. That's what your application does. Your application creates tables or collections, creates indexes, things like that. It queries the data; it inserts data. Trove doesn't do any of that, with one small exception: when we do backup and restore, we touch your data. With that exception, Trove only does provisioning, management, and things of that nature.
With that very broad distinction, it's important to understand that Trove operates in the provisioning plane and apps operate in the data plane. And stop me at any point if you have questions, all right? Okay. So I like pictures. Everybody has gone to a Coke machine at one point or another. Nikhil had to fill out a three-page document; nobody will fill out a three-page document to get a can of Coke. You go up to a machine, you press a button, you get a can of Coke. Trove is very similar to that, with a small exception, a little plus-plus at the end. So think about a machine where you walk up, and there's a button there which says MySQL, somewhere in the middle. You press the button, and the machine hands you a fully provisioned MySQL instance. And the plus-plus says it'll also manage it for you. It'll take backups for you. It will do replication for you. It'll maintain a replication slave and make sure that you have a highly available master-slave setup. This is what Trove does. And it does it for a bunch of different databases. You could do a clustered Mongo deployment with Trove: you have OpenStack installed, you have Trove, and you can click a button and say, Trove, give me a clustered Mongo instance, and it'll go off and do that for you. You do not have to fill out the form which Nikhil had to fill out; instead, when you need it, you get yourself a database instance. That's basically what Trove does. And Trove is an OpenStack project. I'm sorry about how the pictures came out; I hope that's not going to mess up your demo. Trove is a project which relies on the other core OpenStack projects: Nova, Cinder, Swift, Neutron, Glance, and Keystone, depicted here. All of the core OpenStack services are used by Trove in order to give you database as a service. Trove exposes an API, and you can program against the Trove API.
And Trove uses only the standard, documented APIs of these services. So if you already have a deployment of OpenStack, you deploy Trove on top of it, and that's it. Any questions? I'm going to pause and see if anybody has any questions at this stage. Let's talk a little more in depth about how Trove works. I hope folks at the back of the room can make out the lettering on the slide, but the layout is approximately as follows. The stuff on the left in green relates to Trove. The stuff on the right is the other OpenStack projects we talked about: Keystone and networking all the way at the end; Glance, where we store images; Swift; Cinder; and Nova. And on the left-hand side we have Trove. You'll notice there's a little overlap in the middle; those are the places where Trove interacts with the specific underlying project. So let's talk about the life cycle of a simple instance. The user comes along and says, give me a Trove instance. That comes into the Trove API. The Trove API is going to do something, Trove is going to do something, but at the end of the day an image gets picked up from Glance. That image is given to Nova, and Nova spins up one of those compute instances; we'll talk a little more about what that instance is. But that is the database instance you requested. So you hit the button and say you want a MySQL instance. An image for MySQL is loaded into Nova and executed, and the connection endpoint is handed to you. That IP address, port 3306, for example: that's your MySQL database. Okay. You could also go hit a button on the Horizon console, or use the CLI, and say, do a backup of a database for me. You created a database like this; do a backup. That backup gets sent off to Swift and stored there. You could, at a later point in time, come along and say, I took a backup the other day; give me a new MySQL instance based on that backup.
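The life cycle just described (API call, image from Glance, boot via Nova, connection endpoint handed back) can be sketched roughly as below. Everything here is a hypothetical illustration: the function names and the stub "services" standing in for Glance and Nova are invented, not Trove's actual code.

```python
# A hedged sketch of the provisioning flow: look up a guest image,
# boot a compute instance from it, and return a connection endpoint.
def provision_instance(datastore, glance_images, nova_boot):
    image_id = glance_images[datastore]          # pick the guest image from "Glance"
    server = nova_boot(image_id)                 # "Nova" spins up the compute instance
    port = 3306 if datastore == "mysql" else 5432
    return {"host": server, "port": port}        # the connect endpoint handed to the user

# Stub services standing in for the real OpenStack clients:
glance_images = {"mysql": "img-mysql-55", "postgresql": "img-pg-93"}
nova_boot = lambda image_id: f"vm-running-{image_id}"

endpoint = provision_instance("mysql", glance_images, nova_boot)
print(endpoint)
```

The point of the sketch is simply that Trove sits in front of the standard services: the user asks for a datastore by name and gets back a connect string, never touching Glance or Nova directly.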
In that case, we're going to take the backup off of Swift, load it onto your instance, and hand you the connection endpoint; that's your queryable database. And then you're going to create a new MySQL application built on top of these services. I'm not going to go into a lot of detail about each of these because Nikhil is going to be showing you a demo. But at a high level, you understand those services, the standard OpenStack services, and how Trove builds on top of them. The one interesting thing here is the compute instance up top. Nova uses guest images for compute instances, and a Trove guest image is slightly different from a standard Nova guest image. Okay, next one. This picture shows you that distinction again. On the left-hand side, if you will, are the database-agnostic components. These components work for all data stores: whether it's MySQL or Mongo or Postgres, these are common. On the other hand, if you came along and requested a Postgres instance, which is now supported, that guest instance is going to be Postgres-specific. So there's a guest image which has some operating system, let's say Ubuntu, and it has Postgres installed. But it also has the Trove guest agent, which is a Trove-specific component. Realize that the Trove API is a common API no matter what data store you have. There's an API which says create an instance; there's an API which says take a backup. But the way in which you spawn Postgres is different from the way in which you spawn MySQL. The guest agent is the thing which understands that distinction, and the guest agent is specific to the database. So you have a Postgres guest agent and a MySQL guest agent, and you install that on what appears to be a Nova image, but is now a Trove image. The distinction: a Nova image just has an operating system, and maybe a database on it. A Trove image also has a Trove guest agent, and the Trove guest agent talks to the rest of Trove over a message bus.
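To make the guest agent idea concrete, here is a minimal sketch of datastore-specific managers sitting behind one common interface, the way the talk describes: the API above never changes, and the datastore-specific behavior lives in the per-datastore manager. The class names and return strings are invented for illustration; they are not Trove's real internals.

```python
class GuestManager:
    """Common interface every datastore-specific manager implements."""
    def prepare(self):
        raise NotImplementedError
    def create_backup(self):
        raise NotImplementedError

class MySqlManager(GuestManager):
    def prepare(self):
        return "configured mysqld"
    def create_backup(self):
        return "ran innobackupex"

class PostgresManager(GuestManager):
    def prepare(self):
        return "ran initdb"
    def create_backup(self):
        return "ran pg_basebackup"

# The agent picks its manager based on configuration; callers only
# ever see the common interface above.
MANAGERS = {"mysql": MySqlManager, "postgresql": PostgresManager}

def get_manager(datastore: str) -> GuestManager:
    return MANAGERS[datastore]()

print(get_manager("mysql").create_backup())
print(get_manager("postgresql").create_backup())
```

Adding a new datastore in this model means writing one new manager class and registering it; the common API and the rest of the system are untouched.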
Very high-level, architecture-ish kind of thing. Does that make sense? I see a couple of people nodding, but everybody else seems to be thinking it's lunchtime already. No? Okay, go ahead. Yeah, all of the stuff in there is in the image. Correct. So that thing there, the green box, is the instance that is going to be spun up based on an image which is sitting in Glance. This, if you will, is the end result. Are we necessarily distributing all of that stuff? Potentially not. There could be sufficient code on that guest image which says, the first time I boot, go get me the database from somewhere. That might be some local PPA which you have in your organization; it might be some other licensed database which we're not allowed to distribute. We distribute images, HP distributes images, but we don't necessarily have to bake all the software onto the image. Does that answer your question? Okay. Standard Keystone authentication. And again, good point: database authentication is different from Trove authentication. The database user, the one which your application uses, is something which you set up in MySQL with create user. That's very different from the admin user who is potentially setting up your Trove image or spawning your instance. I saw another hand up there. Yes, they do. You could probably talk at some point about your deployment in practice where you do that. Do you want to do that now? No. We can talk after the session too. Only somebody with a specific set of privileges can delete the instance. But you really don't want to go and delete the Nova instance out from under the covers. That's really a Nova instance, and it's exposed: if you have credentials, you could go shoot the instance from Nova. Absolutely you could do that. Not recommended; your mileage will vary if you do that, but it's possible. You should hold that question until the futures section, because very shortly you will be able to do that. Sorry.
The question is, can you provision a Galera cluster using Trove? Nikhil is going to be talking about what is currently supported and what the futures are. So, okay. Now that I've convinced all of you that you should absolutely try Trove, I have to tell you where to get it. Trove has a couple of pieces. There's the CLI, which you have to get separately, and there's Trove itself. The first two links tell you where to get them: git clone, and you're off to the races. If you're looking for blueprints and other things, specs, what we're planning to work on: the third one. And I told you that you need to have guest images. There's a project called trove-integration which has all the piece parts for you to build a guest image. There are standard guest images available, and if you want production-ready guest images, contact me after the presentation and I'll tell you where to get them. Oh, great question. Trove was released in Icehouse and is now available in Juno. There's a significant bunch of improvements in Juno, so I would recommend you use Juno. But if you want to use Icehouse, sure, possible. If you happen to use Havana and you want to use Trove, we're not covering it in this presentation, but talk to any of us after the presentation; it's perfectly doable. But Icehouse or Juno are the official answers. Okay. I just want to end and pass it over to Nikhil by talking a little bit about modularity in Trove. Trove works with any data store. It works with MySQL, Postgres, Mongo, Cassandra, Couchbase; relational, non-relational. I'm playing with a graph database. So how do you do all this? The important thing to understand is that we do it using strategies. The previous slide showed you how to get the source code; once you get it, you'll notice there's a directory called guestagent, which is where all the data store specific stuff lives, and you'll notice there's a folder called strategies. And I'm going to talk about how we do backup.
Because a MySQL backup is very different from a Mongo backup. So under strategies, you'll notice there's a folder called backup. Well, you go into backup and you notice there's actually an implementation for each of the backups. MySQL has an implementation there, as does Postgres, as does Couchbase. So let's look at what MySQL does. There are three actual implementations of backup: the first two effectively do full backup, and the third one does incremental backup. Now, when you deploy Trove, there's a config file which says, when I do full backup, what do I do? You'd probably set it to InnoBackupEx, which means when you go hit the button and say, give me a full backup, that's the actual method which gets called. What this means is, if tomorrow I want to implement and support a new database in Trove, there's a small set of things I have to do, and one of them is implement backup. So if I had a new database, let me look around for a person here... okay, Oracle: an oracle_implementation.py would be a file in there, and that would have a mechanism to do an Oracle backup. That way there's no change to the rest of Trove: there's an implementation of a guest agent, and you've added support for a new database. We do the same kind of thing for other capabilities: if you do it for backup, you must do it for restore, and so on. That's how you make Trove easy to extend to other databases. I think that's about the point where I hand it over to you. Okay. So Nikhil's going to show you how all this stuff works in practice, and I'll hit the space bar. So I just wanted to let you know a couple of points on how you can get started with Trove. If you're a Trove user, there are a couple of distributions that Tesora and HP ship, and you can go grab hold of those: the HP Helion dev platform or the Tesora DBaaS platform.
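The config-driven strategy selection described above can be sketched like this. In real Trove the config holds a dotted import path to the strategy class and it is loaded dynamically; here a plain local registry fakes the same idea, and the function names and return strings are illustrative assumptions.

```python
# Toy strategy registry standing in for Trove's strategies/backup layout.
def innobackupex(instance):
    return f"full backup of {instance} via innobackupex"

def mysqldump(instance):
    return f"logical backup of {instance} via mysqldump"

STRATEGIES = {
    "InnoBackupEx": innobackupex,
    "MySQLDump": mysqldump,
}

def run_backup(instance, strategy_name):
    # The config file names the strategy; the rest of the system just
    # calls "do a backup" and never cares which implementation runs.
    return STRATEGIES[strategy_name](instance)

print(run_backup("trove-in-kilo", "InnoBackupEx"))
```

Supporting a new database in this model is exactly the extension story from the talk: add one entry (and one implementation function) to the registry, and nothing else changes.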
If you're a Trove developer, we talked about this a bit earlier, but there's the trove-integration scripts repo that you can git clone, and there's an overall utility called redstack that wraps DevStack and will take care of installing Trove on top of DevStack. But if you're interested in what's going on under the covers, you could install Trove using just DevStack, without redstack; you just have to add the enabled services to localrc. One caveat there: make sure that you enable Swift as well, because Trove uses Swift for backup and restore. So here's where I wanted to dig in and give you a little hands-on demo of actually going through and provisioning an instance with Trove. Just a minute while I bring that up. Okay, so here I have an installation of Trove on top of DevStack. Can folks see that, or does it need to be a bit bigger at the back? Good. Okay. So, talking a bit about the create workflow: Trove has the concept of flavors, just like Nova, and so to create a Trove instance you need to specify the flavor ID of the instance you're creating and the size of the instance. The size basically corresponds to the size of the Cinder volume that we use to store the data store specific files on.
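The create call just described can be sketched as the request body it might assemble: a name, a flavor reference, and a volume size, plus the optional parameters covered in the demo. The field names below loosely follow the Trove v1 instance-create API, but treat them as illustrative assumptions rather than the authoritative schema.

```python
# Hypothetical builder for a Trove instance-create request body.
def build_create_body(name, flavor_id, size_gb, datastore=None,
                      datastore_version=None, databases=None, users=None,
                      availability_zone=None, nics=None, replica_of=None):
    instance = {
        "name": name,
        "flavorRef": flavor_id,
        "volume": {"size": size_gb},   # size maps to a Cinder volume
    }
    if datastore:
        instance["datastore"] = {"type": datastore,
                                 "version": datastore_version}
    if databases:                       # pre-create databases on the instance
        instance["databases"] = [{"name": d} for d in databases]
    if users:                           # pre-create users
        instance["users"] = users
    if availability_zone:               # Nova AZ, if configured
        instance["availability_zone"] = availability_zone
    if nics:                            # Neutron network/port/IP
        instance["nics"] = nics
    if replica_of:                      # set up as a replica (Juno feature)
        instance["replica_of"] = replica_of
    return {"instance": instance}

body = build_create_body("trove-in-kilo", flavor_id="2", size_gb=1,
                         datastore="mysql", datastore_version="5.5")
print(body["instance"]["volume"])
```

Only name, flavor, and size are required; everything else defaults, which matches the demo where omitting the datastore falls back to the configured default (MySQL 5.5 here).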
So, if we actually do a trove flavor-list, you can see the list of flavors that Trove supports. Over here, let's go ahead and create an instance called test... or let's call it something more original: trove-in-kilo, with flavor 2 and size 1. Let's run that, and as you can see, we get back the instance. It takes a while to build, a few minutes, so you can see the status: it's in BUILD. And you can see your list of Trove instances over here. You'll see I have two instances: the trove-in-kilo one that I just created, in BUILD status, and a pre-created Trove instance that's ACTIVE. So, yes, very good question. I wanted to talk about the optional parameters. As you can see here, there are parameters you can pass in which tell it the data store type and version. Right now, if you don't specify anything, you get the default data store type, and I have the default set to MySQL 5.5. But you can also pass in whatever other data store images you have created, so if you wanted to, you could do the same for Mongo, along with the version. A couple of other optional parameters we support: availability zones, if you have them set up in Nova, using the availability-zone option. We also support Neutron, and you can pass in the --nic option along with the network ID, port ID, or the actual IP. You can look at the help for the command, and it will tell you what the optional parameters are. Some of the other parameters you can pass in: if you want your database to be pre-created with certain databases or certain users on there, you can pass that in as well. And then there are a couple of other parameters called configuration and replica-of, and I'll come to them. So that was the basic create workflow; any questions on that? Go ahead. So, you get back an IP which is part of the connect string through which you can access the database; you don't have SSH access. Or, let me take that back: depending on the deployment, the deployers could choose to give you access or not. In a DevStack setup, where you're setting everything up yourself, of course you could configure it so that you have access, and you can SSH onto the box and whatnot. But the deployments of Trove that I know of just give you back the connect string and don't really give you SSH access to the database, and I'll talk a little more about that a couple of slides later. So, I wanted to talk a bit about, once you have this database, some of the Trove operations you can do on it, and how Trove helps you manage this database. Some things you can do: you can resize your flavor, which under the covers uses Nova resize. So you created a database that was too small, and now you need more memory, more compute power: you issue a trove resize, which then does a nova resize, and you have a bigger database. You can resize the volume: if your Cinder volume is not big enough for your data store, go ahead and resize your volume, and now you have a bigger volume. There are also some data store specific extensions that we've implemented, and you saw some of these as optional parameters to create before: you can make calls to create specific databases on your Trove instance, create users on your Trove instances, or grant permissions. And if you want to manage your database offline, using whatever methodology the data store supports, you can enable a root user and then use that root user to create other users and other databases. To quickly show that: if we do a trove list, you can see the trove-in-kilo instance is now ACTIVE. We can do a trove root-enable, and that gives you back a root user with a root password. So, if everything went well... actually, I'm not sure what the connect string is... so if I use the password we had gotten earlier, there we go: we have access to our database, and this is a root user, and we can use it to do whatever we want. I want to talk a bit about
what Trove is doing under the covers here when you're actually setting up your database instance. So when the guest agent comes up, and the MySQL instance in this case is being provisioned, Trove does a few things. It sets up sane defaults for MySQL: we set it up as InnoDB-only, and we disable LOAD DATA INFILE and SELECT INTO OUTFILE. Trove also goes ahead and tunes your database configuration based on whatever flavor size you specified; you have the ability to specify configurations based on different flavors. So, for example, your max connections might be much larger if you're using a larger flavor, or your buffer pool size, and things like that. The guest agent goes ahead and tunes all of that depending on the flavor and what you've set up. Trove also has an API called the configuration groups API, and that was one of the optional parameters you saw earlier, where you can target specific configurations, or groups of configurations, to specific instances. Where this is useful is, for example, if you wanted to change the default character set on a particular database, or anything else that you could change in my.cnf: you can do that programmatically via the API as well. So apart from tuning via flavor, you can do that separately via the API. Any questions before we move on to the next slide? Other things that are going on: the guest agent is also securing the data store instance underneath. We remove the anonymous user; we remove non-localhost users other than the ones that you programmatically create via the API; we remove local file access; we mangle the root password. We also set up security groups in Nova, and depending on the data store type, the actual security group port is configured correctly for that data store type. You can set up default rules for your data store instance if you want it to be accessible only from a certain IP range, and you can also change that IP range programmatically via the API.
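The flavor-based tuning mentioned a moment ago, deriving my.cnf values like buffer pool size and max connections from the flavor's resources, can be sketched like this. The ratios and caps below are invented purely for illustration; they are not Trove's actual template values.

```python
# Toy sketch: compute my.cnf overrides from the flavor's RAM.
def tune_for_flavor(ram_mb: int) -> dict:
    return {
        # give InnoDB roughly half the RAM on bigger flavors,
        # with a made-up floor for tiny ones
        "innodb_buffer_pool_size_mb": max(128, ram_mb // 2),
        # allow more connections as the flavor grows, with a made-up cap
        "max_connections": min(2000, 100 + ram_mb // 8),
    }

small = tune_for_flavor(512)
large = tune_for_flavor(8192)
assert small["max_connections"] < large["max_connections"]
print(small, large)
```

The guest agent applies something like this automatically per flavor, and configuration groups then layer instance-specific overrides (character set, etc.) on top of it.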
So Trove does expose a security groups API that allows you to do that. User SSH access, as somebody mentioned earlier, is not required: all of your management happens through the Trove API, so you're free to turn off SSH access. If you're a Trove developer, for debugging reasons you might not want to do that on your DevStack box, so that you can SSH onto the Trove guest and check the guest agent logs and things like that. But in most Trove production deployments that I've seen, SSH access is turned off, and everyone either uses the database through the connection string or uses the Trove API to manage the data store. Yes. I wanted to give you a quick demo of backup and restore as well. So if we go back to the screen here: we have two instances right now, including trove-in-kilo, and we can create a backup for that particular instance. The command is just backup-create. If you look at trove help backup-create, it takes the instance ID and the name as parameters, and optionally a description and a parent. The description is just any description that makes sense to you, to tag the backup with. The parent: Amrith showed the incremental backup strategy earlier, and if there's an incremental backup strategy defined for that data store type and you're trying to create an incremental backup based on a previous backup, that's what you'd specify through the parent parameter. So I'm going to go ahead and create a backup here: trove backup-create... there you go. It says status NEW, so it kicks off the backup. This is actually talking to the guest agent and kicking off the backup. It happened quite quickly here because our database doesn't have very much data in it, but what you'd notice happening is the instance going into a different state: instead of ACTIVE it goes into a state called BACKUP. And if you list your backups by doing a trove backup-list, you'll see that a new backup shows up in the state NEW. The backup is already done here, so it's already in the COMPLETED state, and I also had a backup that was pre-created. As we mentioned earlier, what this is doing is talking to the guest agent, kicking off the backup strategy, and then the files that are created as part of that backup get streamed to Swift, where they're stored. Since this is DevStack, Swift is on the same box, and I can actually look at Swift to see what's there. You'll see there's a container called database backups where we store the backups. If you look there, you'll notice there's a bunch of files corresponding to the backups, which are streamed and gzipped, and optionally encrypted as well. Depending on your config file options in Trove, you can specify different types of encryption and different keys and things like that. You'll notice that the actual backup file corresponds to the backup ID that was created; so of the two backup files we have, one is the pre-created backup and the other is the trove-in-kilo backup that we just created. I already talked about the optional parameters: you can tag a backup with a description, and you can do incremental backups using parent. I wanted to talk a bit about what's going on under the covers. Backups in Trove are fully managed: they're triggered and tracked through the API, and as I mentioned, they're streamed to Swift, the object store. We support multiple formats per data store: for MySQL we support XtraBackup and mysqldump, and as we go further with the different types of data stores, including NoSQL ones, we're adding support for backups in each of those as well. So this is coming to the exciting part: stuff that we've been working on in Juno. In Juno we added support for replication, so in Juno Trove you can say, create an instance, but set it up as a replica of a different Trove instance.
So basically, under the covers, what Trove does there (and this is for MySQL) is set up a MySQL slave instance using the instance that you specify as the master. Later on, if your master goes down, you can manually detach that slave and promote it, using Trove's detach replica source command. This is all programmatic for now; in Kilo we're looking at trying to figure out a way to make it so that you could achieve auto-failover scenarios and things like that. But replicas are important not just from a failover or HA standpoint but also for read scale-out and things like that, and they already exist in Juno. So, how are we doing on time? I can demo replication as well, but let me take questions first if folks have any. Okay, good point; somebody here speaks from experience on that. Okay, go ahead. Any questions? How about we start up there. Yep: does Trove handle upgrades of minor versions of the database? No, currently Trove doesn't do any upgrades by itself. There have been discussions about whether Trove should get into upgrading minor versions or upgrading databases by themselves. Today, what folks usually do is publish a different image and then tell people to do a backup and restore to the other image, of a different data store type or version. There have been some discussions about Trove getting into database upgrades as well, but as of today it does not do any. I had a hand somewhere up here in the middle; yep, go ahead. Does Trove have its own... what, sorry, say it again... okay, does it have its own resource typing? So Trove today supports two different modes of operation. It supports creating the instances by calling the native OpenStack services themselves: if you have that option switched on in the config file, it will talk to Nova separately and talk to Cinder separately to provision the instance, and there's a Trove task manager piece that actually does this and orchestrates it. Or you can turn on Heat support, in which case it just goes through Heat. We've published Heat templates for the different data store types, and that's configurable as well, so Trove uses those to talk to Heat and get your instance up and running as part of a Heat stack. So, let me kick this off; you had a question up here in the front. So, the question was whether the backup and restore we do is based on the volume. We're actually using backup and restore tools to do the backup and restore. If you look at the backup strategies that we have, we could either do a logical backup using mysqldump, or an actual backup using innobackupex, which is a tool that Percona built that, at the MySQL level, gets the files and streams them to Swift; your actual data on the volume is not being touched. There's nothing that prevents anyone from building a strategy that's a volume backup that works across data stores, but you'd have to think about quiescing the database so that when you take a snapshot of your volume, it's consistent. At the end of the day, the backup is only going to be of your data; it's not of the metadata on the instance. That is an excellent question. If I understand the question correctly, it was: what's the current list of data stores and versions that Trove supports? There's a wiki page that I can give you a link for, and it depends on what you actually configure for a particular deployment. This deployment has MySQL 5.5 as the only data store configured, because it's on one single VM that's running. So in order to support another data store or version, what you'd need to do is create the images for those guest agents, with that particular data store on the image (or a way to go fetch it), and then tell Trove, using a register API, that this data store corresponds to this image and this version. Trove is extensible that way. So maybe one other thing to
add just the names not the versions MySQL including MySQL MariaDB Postgres MongoDB Cassandra, Coach, Redis Postgres that's the let me send Postgres twice that's the complete list of all the data stores which are currently supported each one is supported to a different level for example we know that replication is now supported with MySQL and variants clustering is supported sharding is supported with Mongo but we're working on rationalizing that thing and coming up with the full support you could run Trove in a container for example but if you run Trove in a container some of the capabilities are not going to be available in the container like resize things like that so it's really a combination of where are you going to be running the data store what data store version do you have and what do you actually want to do that's the full matrix that's why there's a wiki page for it so yes there is an API that Trove supports that's the resize volume API so in case your volume is insufficient for whatever reason you can you can issue the resize volume and then it will change the volume size underneath it using cinder so it will call cinder to do that it might be a little long to do it as a demo but we can definitely do it after okay sure great question so today we do not actually integrate with Solometer for monitoring it's something that we've been talking about and there's a design session on it we do have notifications for metering so there's config values for rabbit queues that you can turn on by specifying the name of the rabbit queue so when instances do get created and deleted it will drop messages into that queue specifying that that particular instance got created and deleted for monitoring the guest agent themselves Trove uses heartbeats so the guest agent code actually has depending on which manager for which data store you deploy has logic which checks on the data store, makes sure it's up and running and then sends a heartbeat back to the component called the 
The conductor records that the data store on that particular guest instance is up and running, so when you do a trove list, you see the word ACTIVE there, which means the data store is up and running. If the data store goes down for whatever reason, the status goes to SHUTDOWN, and you could use that for monitoring, to build out things like failover. But nothing happens automatically today based on that; we're talking about doing some automation there as well.

Yes, if you're using Heat separately, you can use Heat to auto-scale. I specifically haven't looked into that much, so I don't have much experience with it. Go ahead.

Great question. The question was: are there any tools to help with building the guest images? We use diskimage-builder, which is a TripleO tool, to actually build the guest images. If you go to the trove-integration repository, the link for which was posted earlier, and clone it, you'll see elements for each of the data store types that we support. All you have to do is run the kick-start script with a data store type; that will build the image for that data store and register it in your local Trove deployment so that you can then go ahead and test it out. It will also leave the image around for you, so if you want to pick it up and move it somewhere else, you can. If you want to build other custom images, I'm happy to help you with that as well. And to dovetail with the other gentleman's question about monitoring, we do have images which have other monitoring tools baked onto them, if you want to use those.

Quickly, to fast-forward since we're running out of time: something we added support for in Juno is clusters. We only support MongoDB clusters so far, and the call is pretty similar: cluster-create, with a data store type and data store version. Creating the cluster creates shards, and we support scaling that out horizontally through an add-shard API action.
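A minimal sketch of the MongoDB cluster call just described, assuming a Juno-era trove client; the cluster name, datastore version, flavor IDs, and volume sizes are all illustrative:

```shell
# Create a MongoDB cluster; each --instance becomes a cluster member.
# Name, version, flavor IDs and volume sizes here are made-up examples.
trove cluster-create products mongodb 2.4.9 \
    --instance flavor_id=7,volume=2 \
    --instance flavor_id=7,volume=2 \
    --instance flavor_id=7,volume=2

# Check the cluster once the members come up.
trove cluster-show products
```

Horizontal scaling is then a matter of issuing the add-shard action against the running cluster rather than provisioning new instances by hand.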
Before we're done, I also want to mention what else we completed in Juno and what we're planning for Kilo. Apart from replication and clusters, in Juno we added support for Neutron and for Postgres. We added some enhancements to configuration groups, such as configuration groups for MongoDB; these are basically the parameters for the data store that you can tune via the API. We also added backups for Cassandra and Couchbase, and Tempest tests.

The interesting pieces, where we're looking for help and where more developers can jump in on design, are planned for Kilo. We're planning to build out cluster support, including support for Galera, synchronous MySQL clusters; some of that monitoring and failover support for replication; associating flavors with data store types, so if you're deploying Mongo and want to make sure your Mongo instances only get attached to particular flavors, there's support for that; and accessing the data store logs via the API. For example, if you have a MySQL instance and you want to check the slow query log to diagnose why certain queries are taking a long time, how do you do that programmatically via the API? And then some housekeeping work: removing deprecated Oslo code, and upgrade testing using Grenade. That's not a finalized list; if you have an idea for how to make Trove better, please come talk to us and work with us on it. We're always looking for feedback and for folks to come along and help us out. We're a growing community; you can find us on our IRC channel on freenode.

Thank you all for sitting through this. Any questions? We're more than happy to take them now, or if you want to talk after the session is over, come find us and we'll happily talk about Trove. If you have specific questions for Nikhil, I think he's going to be at the Genius Bar in the HP booth later today. Sorry, they call it the Genius Bar. That said, I'm willing to talk about anything other than Trove as well, at any point in time. Buy him a beer and he'll do that.
Buy me two beers and I'll be clear about stuff. Any other questions? Go ahead.

Does it work with SQL Server, Oracle, and DB2? The answer for two of those is that I have them up and running; for the third, not yet. The community version currently doesn't have them; we're talking about how to do that right now. As of Juno, Trove doesn't have image elements to build any of those, so we don't support them yet, but there's work in progress to support some of that, and Amrith has been working on it, so he's probably the right person to talk to in that regard.

Yes. If you do a trove show, and maybe I glossed over this a bit earlier, it gives you some very basic information about how much of your volume is being used, and things like that. But apart from that, Trove doesn't really care about the data plane much. Let me just show you: if I take that instance and do a trove show, 0.13 is the volume used and the volume size is 1. Other than that, we don't really care about the data plane. To talk to the database, you can use MySQL directly; Trove really is more of a provisioning and management tool and doesn't want to get into the data layer at all. So if you were using MySQL, in this case, you can always run mysqladmin status and you'll get whatever you need, and there's a similar command for any other database, but not much more through Trove.

It's a great question, and there has been some discussion around getting some of the MySQL stats through Trove as well: hey, what's my replication lag, now that we have replicas? What's the health of my MySQL database? That's not there in Trove today; it's sort of minimal, but we're still talking about building that out. Great question.

Well, funny thing you should mention that. Yes, we do. If you've installed Trove, you get this pane in Horizon called Databases. I created a bunch of database instances in a different tenant; that's the reason they don't show up here at first.
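To make the provisioning-plane versus data-plane split just described concrete, here is a rough sketch; the instance name, database host, and credentials are all placeholders for a real deployment:

```shell
# Management plane: Trove reports only coarse facts,
# such as volume size and approximate volume used.
# "mydb" is an illustrative instance name.
trove show mydb

# Data plane: talk to the database directly, bypassing Trove.
# Host, user and password are placeholders for your deployment.
mysqladmin -h 10.0.0.3 -u admin -p status
```

Anything beyond coarse status, such as query statistics or replication lag, comes from the database's own tooling, not from Trove.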
But there you go: those are the databases we just created, up and running. You can create backups from here, you can do your resizes and restarts, and if you go to Backups you can actually see the backups that you have. So we do have support; we recently added support for replication to Horizon as well, and as we go along we're adding support for a lot of the new features that are coming up. We're fully integrated with Horizon. Go ahead.

This is a great question. The question is about the guest agents: it's not really request-response, since they're getting messages over AMQP, but is that over a separate management network, or over the same customer network? In the Trove deployments on Neutron that I've seen, it's always been a separate management network on which the guest agents talk to the Trove AMQP server, just to separate out the network access. The other aspect is that when you create a Trove instance, the customer can also specify a NIC for whatever other network the instance is attached to, and we really don't control that very much. So I've seen Trove instances talk to the RabbitMQ or AMQP servers on a separate management network, and that's usually how you deploy it. But that said, it's really up to the deployer of the Trove deployment to come up with that architecture and figure out how those pieces work together. Exactly: it's on a separate management network precisely for that reason. It's configurable, on a separate management network.

So that's time. Folks, I know it's lunchtime; if you want to leave, please feel free. I'm going to be here taking more questions, so maybe I'll come down and get de-miked and we can talk a bit more. Thank you all for coming; I hope you had an informative session about Trove.