Hello, everyone. My name is Nikhil Manchanda, and I'm the PTL for OpenStack Trove. Thanks for coming to take a look at this PTL recording series. Today I'm going to tell you a bit more about what we achieved in the Kilo release and what we're planning to achieve in the Liberty release for Trove. So let's get started.

I just wanted to start off with the mission statement of Trove. As you all know, Trove is the OpenStack database service, and its mission statement is to provide scalable and reliable cloud database-as-a-service provisioning functionality for both relational and non-relational database engines, and to continue to improve its fully featured and extensible open source framework. Keep that in mind as we go through our goals and plans for Liberty, so that we're aligned and we know where we're going. Next slide, please.

To give you a quick overview of how Kilo went for us: we had about 270 commits from 71 different contributors across a lot of different companies. We closed out about 22 blueprints and fixed around 150 bugs. There were about 3,000 code reviews and 46,000 total lines of code changed. You can see the details of which blueprints were implemented and which bugs were fixed on Launchpad, and more detail behind the numbers on Stackalytics. Next slide, please.

One of the big themes in Kilo was that it was really the first release where we introduced the notion of specs for Trove. Other OpenStack projects like Nova and Cinder have also moved to this format.
Before using specs in Gerrit for Trove, we used to keep specs on a wiki, and specifying specs on the wiki was getting cumbersome: it was hard to do reviews and get feedback, not just from the people writing the specs but from actual operators and users, and it was very hard to track changes, because on the wiki you get a date and time stamp, but you don't know who said what at what time, or how to respond to those comments. So we moved to writing specs using the same mechanism we use for code reviews, which is Gerrit, and we started that in Kilo. It worked really well for us, so we're continuing to do that in Liberty going forward. If you're interested in commenting on some of the specs I'm going to talk about, or even proposing a new spec for Liberty, there's more information about the spec lifecycle process available on the wiki at that location, so please feel free to look into that; we'd love to have your input. Apart from that big shift to specs, we also accomplished a lot of blueprint work in Kilo, and I'm going to talk about some of that. Next slide, please.

One of the big changes that came in Kilo was an improvement to replication. As some of you might know, we first had replicated instances in Trove in Juno, but the replication we supported in Juno was largely binlog-based. MySQL 5.6 came out recently and added support for a new kind of replication called GTID-based replication, which is replication based on a global transaction ID. So in Kilo we added support for this type of replication, which included a new replication strategy that makes use of it.
One of the advantages of GTID-based replication over binlog-based replication is that with GTIDs you can actually query the different MySQL slaves to see which one has received the latest set of replicated data. Apart from adding this replication strategy, we also added Horizon support for replication, so you can log into Horizon and create a Trove instance as a replica, or detach it from its replication source. I mentioned that with GTID-based replication you can query the different replica slaves to see which one has the latest data. That allowed us to add a couple of new APIs to support failover: detach replica and eject replica source. The idea is that if your replica source, that is, your master instance, goes down for whatever reason (hardware failure, the compute host going down, or whatnot), then eject replica source queries the slaves that were attached to that master to see which one has the latest set of data, and promotes that slave to master. So it gives a more complete failover story for the case where your master goes down and you want to promote a slave to master. We added a couple of these APIs in Kilo. Next slide, please.

We also did a lot of data store improvements in Kilo. We added implementations of a few new data stores: single-instance CouchDB and single-instance IBM DB2. We also added support for HP Vertica, both single-instance support and support for Vertica clusters with the community edition, which allows up to three-node clusters. This is a trend we've been seeing in Trove since Icehouse and Juno: a lot more SQL and NoSQL databases are getting guest agents in Trove.
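Going back to the GTID-based failover described above: the heart of eject replica source is picking the most caught-up slave. Here is a toy sketch of that selection, not Trove's actual implementation; the GTID-set parsing is simplified (a real MySQL executed set can span many source UUIDs and interval lists), and the instance names and data are invented.

```python
# Illustrative only: pick the replica that has applied the most
# transactions from the failed master, based on GTID positions.
# A MySQL 5.6 GTID set looks like "<server_uuid>:<intervals>",
# e.g. "3e11fa47-71ca-11e1-9e33-c80aa9429562:1-77".

def last_txn_from_master(executed_gtid_set: str, master_uuid: str) -> int:
    """Highest transaction number this replica has applied from the
    given master, or 0 if it has applied none."""
    for part in executed_gtid_set.split(","):
        uuid, _, intervals = part.strip().partition(":")
        if uuid.lower() == master_uuid.lower():
            # Upper bound of the last interval, e.g. "1-5:7-9" -> 9.
            last_interval = intervals.split(":")[-1]
            return int(last_interval.split("-")[-1])
    return 0

def pick_new_master(replicas: dict, master_uuid: str) -> str:
    """replicas maps replica name -> its executed GTID set string."""
    return max(replicas,
               key=lambda name: last_txn_from_master(replicas[name],
                                                     master_uuid))

MASTER = "3e11fa47-71ca-11e1-9e33-c80aa9429562"
replicas = {
    "replica-1": MASTER + ":1-75",
    "replica-2": MASTER + ":1-77",  # most caught up
}
print(pick_new_master(replicas, MASTER))  # -> replica-2
```

The real failover also has to repoint the remaining slaves at the promoted instance; this sketch only covers the "who is furthest ahead" decision.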
People want to deploy not just MySQL and Mongo and Redis, but also a lot of the newer databases that are coming up, plus a lot of the older traditional databases like DB2, and they want to extend Trove to support them as well. You'll see this trend carry through to Liberty, where we're doing more work along these lines to add support for new data stores. Next slide, please.

The other thing we did in Kilo was spend some time paying off technical debt we had accumulated. We used to have a Trove CI that ran as a third-party CI, called the deprecated Trove CI. We've gone ahead and removed that; as of Kilo, all the testing is done under OpenStack Infra. All of the functional and integration tests run in a DevStack VM gate environment as a functional job, and the unit tests also run completely under OpenStack Infra. Apart from that, we also cleaned out a bunch of deprecated Oslo incubator code and moved to the latest Oslo incubator code base. We switched to oslo.messaging for our RPC: we used to use the older kombu-based RPC code that was part of the deprecated Oslo incubator tree, and we now use the oslo.messaging library. So that was Kilo. Next slide, please.

Now I want to talk a little bit about what we're planning to accomplish in Liberty. Going with that theme of data store improvements: we have support today for MongoDB and MongoDB clusters, but we don't have support for a lot of common Trove scenarios for MongoDB, so we're looking to add those. For example, support for backup and restore for MongoDB, using mongodump as an initial strategy, and support for MongoDB configuration groups; we have configuration groups today for MySQL, so we're extending the same idea to MongoDB as well.
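To give a feel for what "using mongodump as an initial strategy" means: Trove backups are pluggable per data store, with each data store supplying the command that produces its dump. The class and method names below are invented for illustration and are not Trove's real ones; only the mongodump flags are the actual tool's options.

```python
# Hedged sketch of a pluggable backup strategy: each datastore plugs in
# a class that knows how to produce its dump command, and the guest
# agent would run it and stream the result to object storage.

class BackupStrategy:
    """Base class: each datastore supplies its own dump command."""
    def command(self) -> str:
        raise NotImplementedError

class MongoDump(BackupStrategy):
    """Backs up MongoDB with the stock mongodump tool."""
    def __init__(self, host="127.0.0.1", port=27017,
                 out_dir="/var/lib/trove/backup"):
        self.host = host
        self.port = port
        self.out_dir = out_dir

    def command(self) -> str:
        # With no --db given, mongodump dumps all databases to --out.
        return ("mongodump --host %s --port %d --out %s"
                % (self.host, self.port, self.out_dir))

print(MongoDump().command())
# -> mongodump --host 127.0.0.1 --port 27017 --out /var/lib/trove/backup
```

Restore would be the mirror image, with a strategy wrapping mongorestore pointed at the downloaded dump directory.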
The idea here is that if you deploy a MongoDB instance and want to tweak some of the values in the MongoDB config file, you should be able to do that through the Trove API, so that a user doesn't have to SSH onto the box and tweak settings there. In addition, basic user and database management for MongoDB is something we're looking to tackle in Liberty too: being able to create users and databases through the Trove API so that you don't have to actually log into Mongo to do it. Next slide, please.

Apart from Mongo, we're also looking to make similar improvements in Redis. The reason folks are looking to tackle Mongo and Redis is actually interesting: we have support for a SQL-based solution in MySQL, we have support for a document-based data store in Mongo, and with Redis we'll now have support for a caching solution in Trove as well, covering different bases so that people have access to each of these different types of data stores in Trove. For Redis, we're planning to update to the latest Redis 3.0.2 code base and add support for backup and restore, so that you can dump your data and load it into a new Redis instance when you restore. Also support for Redis configuration groups, which will let you tweak configuration file values for Redis through the Trove API, so that you don't have to SSH in and muck around with those config values. Next slide, please.

Another theme we're looking at, and from talking to a lot of folks they're really interested in this, is improvements to the clustering solutions we have today. In Kilo, we have clustering solutions for MongoDB and for Vertica.
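Before moving on to clustering: to make the configuration-groups idea just described concrete, a configuration group boils down to a set of validated user overrides that the guest agent merges over the data store's defaults and writes into the config file, so nobody has to SSH in. A minimal sketch, using a few real redis.conf directive names but an invented, much-simplified validation rule:

```python
# Illustrative sketch of applying a configuration group: merge user
# overrides over the datastore defaults, reject unknown settings up
# front, and render the resulting config file content.

DEFAULTS = {  # real redis.conf directives, invented default handling
    "maxmemory": "0",
    "appendonly": "no",
    "tcp-keepalive": "0",
}

def render_config(overrides: dict) -> str:
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        # The API can reject a bad group before it ever reaches the guest.
        raise ValueError("unrecognized settings: %s" % sorted(unknown))
    merged = {**DEFAULTS, **overrides}
    return "\n".join("%s %s" % (k, v) for k, v in sorted(merged.items()))

print(render_config({"appendonly": "yes"}))
```

The real mechanism also carries per-setting type and range rules, and attaching or detaching a group triggers the guest agent to rewrite the file and restart or reload the data store as needed.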
For MySQL, we also have replication, but that's asynchronous master-slave replication, and folks are really interested in getting synchronous multi-master MySQL clusters up and running in Trove. So we've been working closely with folks from Percona, Tesora, HP, and others to come up with a Galera clustering solution using Percona XtraDB Cluster, and we're looking at landing this during the Liberty timeframe as well. A lot of users have been asking for this, so we're excited about it, and hopefully it will make it into Trove during Liberty. Apart from Galera clustering, we're also looking to land support for Redis clusters. I mentioned that we're planning to move to Redis 3.0.2; one of the things Redis announced in its 3.0 release was support for clusters, so we're looking to enable this in Trove so that you can spin up not just a single-instance Redis but also a Redis cluster. Next slide, please.

Another blueprint we're looking at enabling in Liberty is flavors per data store. The idea here is to limit certain data stores so that they can run only on certain flavors. As we go through the milestones and increase support for different data stores in Trove, this is getting more and more important. To give you a quick example of why: MySQL, for instance, can run just fine on a small instance with a two-gig disk, but Vertica or MongoDB really has issues chugging along on an instance of that size.
So the idea is to be able to limit certain data stores, for example Vertica or MongoDB, so that they can run only on certain flavors. When users try to provision an instance of that data store through the Trove API on a flavor smaller than what's supported, they'd get an error up front from the API, instead of having a too-small instance spun up and then a bad experience trying to run the data store on an instance that can't support it.

Among the other areas we're looking to improve in Trove, Horizon is up there on the list. Again, this is based on a lot of user feedback from the Summit. We have a lot of features in Trove today that you can use through the CLI but still can't use through Horizon, so we have a couple of blueprints to turn these features on in Horizon. Some of these are: the ability to deploy Trove clusters through Horizon; user and database management through Horizon, including the ability to create a root user, so that you can initially create a root user through Horizon and then use it to do all of your user and database management offline with the tools the specific data store provides; and, another big one, using configuration groups through Horizon, so that through the database panel in Horizon you can set the config values you want for your MySQL or Mongo instance via the GUI. We're looking to enable all of these features so that the GUI experience in the database panel in Horizon is complete.
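Going back to flavors per data store for a moment, the up-front check described earlier could be sketched like this. The data store names are real, but the flavor names, the mapping, and the exception are invented for the example:

```python
# Illustrative sketch: validate the requested flavor against a
# per-datastore allow-list at API time, instead of letting an
# undersized instance boot and then limp along.

ALLOWED_FLAVORS = {
    "mysql":   {"m1.small", "m1.medium", "m1.large"},
    "vertica": {"m1.large", "m1.xlarge"},   # needs far more disk/RAM
    "mongodb": {"m1.medium", "m1.large", "m1.xlarge"},
}

class InvalidFlavor(ValueError):
    pass

def validate_flavor(datastore: str, flavor: str) -> None:
    allowed = ALLOWED_FLAVORS.get(datastore)
    # Datastores with no entry accept any flavor in this sketch.
    if allowed is not None and flavor not in allowed:
        raise InvalidFlavor(
            "flavor %r is not supported for datastore %r (allowed: %s)"
            % (flavor, datastore, ", ".join(sorted(allowed))))

validate_flavor("mysql", "m1.small")        # fine
try:
    validate_flavor("vertica", "m1.small")  # rejected up front
except InvalidFlavor as e:
    print(e)
```

The point of doing this in the API layer is that the user gets an immediate, explicit error rather than a provisioned instance that fails later.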
We've gotten a lot of feedback through the Kilo cycle, and at the beginning of the Liberty cycle, about best practices for deploying Trove in production, and a lot of emails on the mailing list from folks who want to deploy Trove, asking questions such as: how do I set up the messaging layer? How do I set up RabbitMQ for Trove? How do I set up the databases for Trove? How do I set Trove up in a secure way, so that users of Trove aren't able to break out of their instances and learn secrets they're not supposed to know? Based on that feedback from the mailing list and the Summit, we've decided to author a new operations manual to tackle some of these deployment questions: how to deploy Trove securely, and how to deploy it in multiple different configurations. Trove today is deployed in various places: it's in production at HP, it's in production at Rackspace, and eBay deploys Trove as part of its private cloud service, and each of us does it slightly differently. The basic idea is to pool our knowledge about how we're deploying Trove and come up with a consistent guide and manual that we can publish in the open, so that operators looking to take Trove and deploy it on their cloud have guidance on best practices. Hopefully that will allay some of the concerns people have about the security aspects of deploying Trove, and make it easier for folks to understand how to deploy it in a more production-like environment. Apart from that, we've got a plethora of other smaller improvements, tiny blueprints that tackle different areas of Trove strategically. Some of these are exposing the data store logs to users through the Trove API, and support for the management APIs in the python-troveclient.
The management APIs exist on the server today, but you can't really call them unless you use something like curl, so we want to make that story much easier: once they're exposed through the Trove client, you'd be able to just use python-troveclient to make those calls. Another is extending guest heartbeats to monitor data stores. Today we support heartbeats that basically tell you whether a data store is up and running or down; we're looking into extending that mechanism to give you more information about the data store, not just whether it's up or down. We're also looking into adding metadata support for Trove instances: say you want to tag your Trove instances with certain metadata that you want displayed in the UI for your own tracking purposes, whether it's part of a production cluster, or whether it's using a certain image or a certain data store type, you'd be able to do that going forward. So these are some of the other improvements we're considering adding to Trove, or at least starting work on. I'm not sure whether all of these will land in Liberty, or whether they will continue on into the next release, which I believe is called Mitaka.

So that wraps it up for what we're planning to do during the Liberty release. Of course, this is not a closed set of features, and we're more than happy to have your ideas here. We are a growing community of contributors: 136 contributors from 30-plus companies, more than 2,000 commits, and 150,000 lines of code. We're always open to new ideas and code, and there's lots of room for improvement. So come find us in #openstack-trove on Freenode if you have ideas, if you have code you'd like to contribute, or if you just want to come talk to us about deploying Trove or anything else about Trove. Next slide, please. And you can find me on IRC if you have any questions related to Trove.
I'm SlickNik on IRC, @SlickNik on Twitter, and slicknik at gmail.com. Feel free to email me with any questions you may have, and I'll be more than happy to help. Thank you so much for listening, and have a good rest of your evening, or morning, or whatever time it is in your time zone. Thanks.