Hello everyone, and thank you for taking some time to listen to me talk about OpenStack Trove and what we have planned for Kilo. To give you a short introduction about myself, my name is Nikhil Manchanda, and I'm the PTL for OpenStack Trove for the Kilo cycle. We're going to go through a short discussion of what Trove is, what we achieved in the Juno release, and what we're planning for Kilo. So let's get started. Next slide, please. To kick off the discussion I'd like to talk a little bit about what Trove is and give you a brief overview, and what better way to start than to reiterate the mission statement of Trove. Our mission statement is to provide scalable and reliable cloud database-as-a-service provisioning functionality for both relational and non-relational database engines, and to continue to improve its fully featured and extensible open source framework. I'll touch on a few salient points that I've highlighted in the slides, but overall Trove is a set of OpenStack services that lets you make API calls to the cloud to provision data stores of different types, and also lets you manage the instances of the data stores that you've provisioned, so that it's easier for you to work with them through the lifecycle of the instance. A few things I do want to call out: Trove specifically does provisioning. It does not get into the data plane at all. Once the instance of a particular data store type is provisioned, your application talks to that instance using the data channel that the data store supports. So if you were using MySQL, for example, you would just use the MySQL protocol over port 3306 to talk to that instance.
Where Trove comes in is that it helps you provision that MySQL instance, set up replication, do backups and restores, and manage the lifecycle of the instance. I also want to call out that Trove supports provisioning both relational and non-relational database engines. There is support in Trove for MySQL and PostgreSQL on the relational side, and we've also added support for MongoDB, Cassandra, and Couchbase on the non-relational side. We're also completely extensible and open source. We're an integrated OpenStack project, and we're built on top of other OpenStack projects: when you make a call to the Trove APIs we rely on OpenStack Compute (Nova) to provision the actual compute VMs, OpenStack Cinder for block storage, OpenStack Swift for object storage for backups, and so on. I also want to talk a little bit about Trove in Juno and give you an overview. Juno was a very successful cycle for us. We had a lot of new contributors contributing code: 322 commits from 71 contributors. We implemented about 30 blueprints and fixed about 200 bugs, with lots and lots of code reviews; thanks to all the folks in Trove who helped with those. In total, about 66,168 lines of code were changed. If you're interested in more detail on the statistics, there are a couple of links on the slide, and if you want to see exactly what went into the Juno release in terms of bugs and blueprints, you can visit the Launchpad page and get as detailed as you want. Without going into all the details, though, I want to give you an overview of what we accomplished in Juno, and which of those things we're still continuing to build on in Kilo.
One of the big things we accomplished in Juno was support for Neutron. Before Juno, Trove only supported nova-network, so we added Neutron support. Now, when Trove spins up the underlying Nova compute instance, it supports passing through NICs to Nova using network IDs or ports. Trove also supports creating security groups, and we've added Neutron support there as well: once your database instance is created, Trove will talk to Neutron and set up the appropriate security groups, and when you want to change those security groups to allow or disallow certain ranges of IPs from connecting to your database instance, Trove will talk to Neutron to do that, starting from Juno, if you have Neutron enabled in your environment. As part of the Neutron work we also made the relevant Horizon enhancements in the Trove dashboard, so that if Neutron is enabled and you're creating a Trove instance, it gives you the option to select which network you want your instance to be on and which ports should be attached to it. So we enabled Neutron support through the whole system: in the backend, in the python-troveclient, and in the Horizon dashboard. We also added support for replication for MySQL in Juno, in the form of asynchronous MySQL master/slave replication on creation. When you create a new Trove instance, you can specify that another Trove instance, either newly created or already existing, should be set up as the master and the instance you're creating should be set up as a slave. Once you make that API call, Trove takes care of creating the replication account and setting up replication between the master and the slave.
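Under the hood, asynchronous master/slave setup like this boils down to a handful of well-known MySQL statements. Here's a rough sketch of that statement set; the function, account name, and exact statements are illustrative of the general pre-GTID recipe, not Trove's actual implementation:

```python
def build_replication_statements(master_host, repl_user, repl_password,
                                 binlog_file, binlog_pos):
    """Sketch the SQL that pre-GTID MySQL master/slave setup boils down to.

    Returns (statements to run on the master, statements to run on the slave).
    Account names and the exact statement set are illustrative only.
    """
    on_master = [
        # Dedicated replication account the slave will connect with.
        "CREATE USER '%s'@'%%' IDENTIFIED BY '%s';" % (repl_user, repl_password),
        "GRANT REPLICATION SLAVE ON *.* TO '%s'@'%%';" % (repl_user,),
    ]
    on_slave = [
        # Point the slave at the master's current binlog position.
        "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='%s', "
        "MASTER_PASSWORD='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d;"
        % (master_host, repl_user, repl_password, binlog_file, binlog_pos),
        "START SLAVE;",
    ]
    return on_master, on_slave
```

The point being: the value of the API is that you never run these by hand; Trove drives the equivalent steps for you when you pass the replication arguments at create time.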
As part of this work item we also made sure there is a way for you to detach a slave from the master and cut off replication, in case, say, the master goes down or, for whatever reason, you want to detach the slave and promote it to be its own master. This is something we had talked about working on pre-Juno, and going forward into Kilo we're continuing to make it more stable; I'll talk a bit more about that when we reach the Kilo slides. We also added support for clustered Trove instances in Juno. We have a new clustering API, and our initial implementation is support for a MongoDB cluster. We discussed the new clustering API at the end of Icehouse and during the Juno summit, got very good feedback from users and operators, and decided to implement it in Juno. So we now have the ability to spin up a MongoDB cluster, and as part of that we spin up not only the MongoDB nodes that are part of a replica set, but also a query router and a config server. What that gives us is high availability through the replica sets, plus the ability to grow horizontally by adding new shards to your MongoDB cluster. When you provision the cluster you get back a couple of IPs (this is configurable) to talk to your cluster. Those are the IPs of the query routers: you talk to them, and the query routers figure out which shard a request actually needs to go to. All of that is transparent to you, and your application just needs those IPs to talk to the cluster. In Juno we also added some enhancements for configuration groups. We added default configuration templates on a per-data-store and per-version basis, so that you can now have different templates if you're deploying different versions of your data store.
Where this gets interesting is, say, you want to deploy both MySQL 5.5 and 5.6 in your environment, and your configuration template for MySQL 5.6 needs to be different. Trove didn't allow that before; we only supported configuration templates on a per-data-store basis, but now you can pick them on a per-version basis as well. You can also use configuration groups, a feature we enabled in Icehouse, to override those default configuration templates programmatically through the API, by specifying configuration parameters, grouping them, and targeting those groups at certain instances. You can target those parameters at instances on a per-data-store and per-version level as well. We also tightened up the validation of the values of those configuration parameters. They're now backed by a schema that specifies things like the minimum and maximum value for each parameter, so that when users override a value there is at least some basic error checking, and users aren't shooting themselves in the foot by specifying a value that doesn't make sense for that parameter. We also made some data store improvements: we added initial support for PostgreSQL, and backup and restore for Couchbase. This is something we're trying to do on a regular basis: figure out where the holes lie for the different data stores and patch them up, so that we have backup and restore for not just MySQL or just Couchbase, and so on. This has been an ongoing effort in Trove, and I'll get to how we're keeping it going through Kilo as well. So that was some of the work we accomplished in Juno.
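To make that schema-backed validation of configuration values concrete, here's a minimal sketch. The parameter names, bounds, and schema shape are made up for illustration and are not Trove's actual validation rules:

```python
# Illustrative schema: each parameter carries a type plus min/max bounds.
# In practice this would be scoped per data store and per version.
VALIDATION_SCHEMA = {
    "max_connections": {"type": int, "min": 1, "max": 100000},
    "wait_timeout": {"type": int, "min": 1, "max": 31536000},
}

def validate_overrides(overrides, schema=VALIDATION_SCHEMA):
    """Basic sanity-checking of configuration-group overrides.

    Returns a list of error strings; an empty list means everything passed.
    """
    errors = []
    for name, value in overrides.items():
        rule = schema.get(name)
        if rule is None:
            errors.append("unknown parameter: %s" % name)
            continue
        if not isinstance(value, rule["type"]):
            errors.append("%s: expected %s" % (name, rule["type"].__name__))
            continue
        if value < rule["min"] or value > rule["max"]:
            errors.append("%s: %s outside [%s, %s]"
                          % (name, value, rule["min"], rule["max"]))
    return errors
```

This is the "at least some basic error checking" idea: reject unknown names, wrong types, and out-of-range values before they ever reach the instance.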
Now I want to talk about some of the work we're looking forward to in Kilo, and some of the upcoming features and areas of focus. One thing I do want to mention is that in Kilo, Trove is moving to the spec process, like a lot of other OpenStack projects. In Juno we did our blueprints through wiki pages, and we found that was getting really hard and cumbersome. We were having problems doing reviews, because wiki pages are hard to annotate and it's hard to track changes: who said what, when, and who changed the wiki page based on which comment. The other reason we decided to switch was that with wiki pages it was really hard to get feedback from operators and users. A lot of that feedback was happening over email and through mechanisms outside the wiki page, which meant that everyone reviewing the page didn't have the full context of what was going on. So, like a lot of other OpenStack projects, we decided to do specs using the same Gerrit review process that we use for reviewing code. There's just another repository for specs, called trove-specs, exactly where you'd expect it. To propose a new spec, you push a Gerrit patchset with the information in your spec in a particular format; the template for that format is in the trove-specs repo, so it's easily available and you can take a look at it. If you want more information about the spec lifecycle and how projects are using it, there's a wiki page you can click through to. With that out of the way, I want to talk about what exactly we're concentrating on in Kilo. I mentioned this briefly: we're going ahead with the plan that we had for data stores, marching down the path of improving the existing data stores we already have and adding newer ones.
We're looking at adding initial implementations of at least a couple of new data stores: CouchDB and Vertica. We're also looking at incremental improvements to existing data stores. I know there are folks working on getting backup and restore for MongoDB, and a few other pieces, so that, for example, PostgreSQL support is brought up to the same level as MySQL support. We're also looking to add an API to fetch data-store-specific logs from instances. Where this becomes really interesting and important is when users run specific queries on their data stores and something goes bad. Users need a programmatic way of figuring out what went wrong, and since in most scenarios they don't have actual SSH access to the instance, the only way to figure that out is through the API. The idea is that they'd be able to get certain contents of those data store logs, for logs specified and allowed by the operator, such as the MySQL error log or the slow query log, programmatically through the Trove API. They'd make an API call and get part of the log back. We're still trying to figure out the exact details: whether that part of the log will be copied to Swift so they can download it, or whether pieces of the log will be streamed back as part of the response body. But it's something we're looking at doing. We're also looking at building on the replication scenarios we enabled in Juno. We're going to add Horizon support for replication, so you can create a new instance, specify that it's a replica of an existing one, and detach the replica. In Juno we added support for that, but only through the Trove clients; now we're going to close the loop and add Horizon support as well. We're also looking at improving replication based on global transaction IDs (GTIDs).
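Coming back to the log-retrieval API for a moment: however the transport shakes out (a Swift object or a streamed response body), the core of the feature is handing back a bounded portion of an operator-approved log. A minimal sketch of that tailing step, with a made-up helper name:

```python
def tail_log(log_text, max_lines=50):
    """Return the last max_lines lines of a log's contents.

    Stand-in for the part of a log-retrieval API that caps how much of,
    say, the slow query log gets handed back. The real design would also
    enforce the operator's whitelist of exposed logs and pick a transport
    (Swift vs. response body); this captures only the tailing idea.
    """
    lines = log_text.splitlines()
    return "\n".join(lines[-max_lines:])
```

A user-facing call would then wrap something like this server-side, so the client never needs SSH access to the instance.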
Global transaction IDs are a new feature in MySQL 5.6, and as we're adding MySQL 5.6 to the list of supported data stores, this is something we're looking into to improve the async replication we already have. We're also looking to add support for failover. GTIDs give you a much better story when it comes to failover, and Trove has a mechanism, through Trove heartbeats, for monitoring the state of your data store. So we're looking at how we can leverage that heartbeat information to fail over from a master instance to a slave instance, for example, or give you the tools to set up that failover based on timeouts that are configurable for your particular deployment of Trove. We're also looking to build out the clustering support we added. We have the clustering API and support for MongoDB, but there are folks interested in adding clustering support for other data stores. I know certain folks are currently working through the design of how we'd add Cassandra clusters to Trove, and also Vertica clusters, and a lot of folks are interested in adding Galera-based clustering to Trove as well, through Percona XtraDB Cluster. So that's some of the new features we're looking at enabling in Kilo. Apart from that, there's a lot of housekeeping work and paying off of technical debt that we want to get done in Kilo. Some of it we're already well on the way to accomplishing. We've removed the deprecated third-party external Trove CI. This was a third-party CI system that ran the Trove integration and functional tests, but it wasn't run in the same manner that the OpenStack Infra CI system runs the gate tests for a lot of the other projects.
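To make the heartbeat-plus-GTID failover idea concrete: one plausible shape is to skip replicas whose heartbeat has gone stale past the configured timeout, then promote the most caught-up of the live ones. This is purely a sketch with assumed field names, not Trove's actual heartbeat schema or failover design:

```python
def pick_failover_candidate(replicas, now, heartbeat_timeout=60):
    """Pick the replica to promote when the master goes away.

    Each replica is a dict with 'id', 'last_heartbeat' (seconds since epoch),
    and 'gtid_executed_count', a stand-in for how far the replica's executed
    GTID set has progressed. Replicas with stale heartbeats are skipped;
    among the live ones, the most caught-up replica wins.
    """
    live = [r for r in replicas
            if now - r["last_heartbeat"] <= heartbeat_timeout]
    if not live:
        return None  # nothing safe to promote
    return max(live, key=lambda r: r["gtid_executed_count"])
```

The configurable timeout is what lets an operator tune how aggressively a deployment declares a replica dead.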
The reason for that is that this CI was set up before a lot of those Infra building blocks were in place. Now that the Infra building blocks are in place, we've gone ahead and gotten rid of that deprecated Trove CI, and all of our functional and integration tests run in the DevStack VM gate environment, fully under OpenStack Infra. We're also working on cleaning out a lot of the deprecated Oslo incubator code that exists in the Trove code base and moving to the graduated Oslo libraries. An example would be switching to oslo.messaging for RPC calls between the Trove components, instead of using the common code from the Oslo incubator that exists in Trove today. oslo.messaging is just one example; there are a lot of other Oslo incubator modules we need to replace with the graduated libraries. This is still ongoing work, and while I think we can accomplish a big chunk of it in Kilo, it's one of those things where we have to stay vigilant and keep on top of the changes going on in Oslo. Another effort we're making in Kilo to pay down technical debt is adding initial support for upgrade testing using Grenade. A lot of you might be familiar with Grenade already: it's an OpenStack tool that lets us run DevStack for a previous release, use that to provision resources, upgrade DevStack to the new release, and then test that the resources provisioned under the previous version still continue to work as expected on the newer version. So we're working on adding Trove support to Grenade and making sure we have good upgrade tests for Trove. One more thing we're concentrating on is simplifying some of the ops side of Trove.
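The Grenade flow just described, provision under the old release, upgrade, then re-verify the same resources, can be captured in a few lines. The callables here are placeholders for the actual DevStack/Grenade machinery; this only pins down the ordering that makes an upgrade test meaningful:

```python
def run_upgrade_test(provision, upgrade, verify):
    """Sketch of a Grenade-style upgrade test.

    provision() creates resources under the old release, upgrade() moves
    the deployment to the new release, and verify(resources) checks that
    the resources behave as expected. The key property: the *same*
    resources are verified both before and after the upgrade.
    """
    resources = provision()   # stand up the old release, create resources
    verify(resources)         # sanity-check before upgrading
    upgrade()                 # move the deployment to the new release
    verify(resources)         # the old resources must still work afterwards
    return resources
```

For Trove this means, for example, a database instance created on the previous release must still be reachable and manageable after the control plane is upgraded.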
I had a lot of good conversations with ops folks and users at the Kilo summit, and one of the consistent pieces of feedback I heard is that it's hard to build Trove guest images today. So we're working on an easier way to build those guest images. Essentially, the only way to do it today, or at least one of the main ways, is using some of the older redstack scripts from back when Trove existed as Reddwarf, which are a bit convoluted and difficult to parse and understand. We're working on a standalone way to build the Trove guest images without having to break into the whole beast that is redstack. We're also working on documentation improvements, not just for image building but also for deploying Trove: getting a better set of instructions out there to give folks an easier way to deploy Trove, and better documentation for getting started with Trove development. That's some of what we're working on from a documentation perspective. And that's not all. We're a growing community, we're open to ideas, and there's a lot of room for improvement; frankly, there's always room for improvement. So if you have ideas, please find us. You can find us in #openstack-trove on Freenode IRC; just shoot someone a message. There are always new folks joining, and people in the IRC channel are very friendly and very receptive to feedback. So drop us a line and say hi. Even if you don't have an idea right now and just want to say hi, you're welcome; please come over and say hi. We're still growing, and the Trove project is growing as well, so we're really receptive to new ideas. Please come on by. If you have any questions specifically, you're always free to contact me. You can reach me through IRC, Twitter, or Gmail; all of those handles are the same.
I'm slicknik on IRC, @slicknik on Twitter, and slicknik at gmail.com. So feel free to shoot me a message; you can contact me any way you like, and I'm looking forward to hearing from you. I hope this presentation and these slides were useful and informative, and gave you a good, or at least better, idea of what Trove is and where Trove is going over the next six months in the Kilo timeframe. Thanks for watching.