Welcome to the PTL webinar series. This series evolved from sessions the PTLs held at each summit to give updates on their projects; we converted them into webinars to extend the reach of these events beyond the summit. Today our PTLs are going to update you on what's new for Juno, as well as detail any items of note for our users and operators. Our PTLs today are Nikhil, who is going to review Database as a Service, aka Trove, and Zane, who is going to review orchestration, codenamed Heat. Each speaker is going to talk for 10 to 20 minutes, and then we'll take questions if they arise. We are going to start with Nikhil. So I'm going to put your presentation into slideshow mode, Nikhil, and you can take it from here.

Thank you, Margie. Hello everyone, and thank you for attending the Trove Juno PTL webinar. My name is Nikhil Manchanda, and I'm the current Trove Project Technical Lead for Juno. Over the next few minutes, I want to take you through some of the new features and blueprints that we completed in Icehouse and that we're currently working on in Juno. If you have any questions about any of this, please save them for the Q&A section later, and I'll do my best to answer them. With that, let's get started.

One of the things that we did in Icehouse was come up with a mission statement for Trove, which we then presented to the technical committee, and which the technical committee accepted. I want to go over it and point out a couple of salient features. The mission statement for Trove, as it stands today, is to provide scalable and reliable cloud database-as-a-service provisioning functionality for both relational and non-relational database engines, and to continue to improve its fully featured and extensible open source framework. A couple of interesting things to note here. First, notice that it says we provide reliable cloud database-as-a-service provisioning functionality. It specifically does not call out anything related to the data API of the datastore we provision, so Trove tries to remain agnostic to the underlying data API and the data communication happening underneath. The other point worth mentioning is that we specifically call out provisioning both relational and non-relational database engines. There was quite a bit of back and forth in the Trove community on this, and we finally decided that, since we're talking only about provisioning and not about the actual data, it was fine to extend the scope to both relational and non-relational database engines.

Now that I've covered those two pieces, let's talk about some of the actual work that we completed in Icehouse and that is in Trove today. We added support for non-relational database types in Icehouse. Before we could do this, we first had to add support for datastore type and version. So when you provision a Trove instance today, you actually specify: give me an instance of a MySQL datastore type, or an instance of a different datastore type like MongoDB or Cassandra. In Icehouse, we added support for a few other database types: Cassandra, MongoDB, Redis, and Couchbase. Since this was just added in Icehouse, it currently only supports single instances.
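To make the datastore type and version idea above concrete, here is a minimal sketch of what a single-instance create request could look like against the Trove REST API, using plain Python and requests. The endpoint, token handling, flavor, volume size, and datastore version are illustrative assumptions, not a definitive reference for any particular deployment.

```python
import json
import requests

# Illustrative values; in a real deployment these come from Keystone and your cloud.
TROVE_ENDPOINT = "http://trove.example.com:8779/v1.0/<tenant_id>"  # assumed endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# Ask Trove for a single MongoDB instance by naming the datastore type and version.
body = {
    "instance": {
        "name": "my-mongo",
        "flavorRef": "3",                                  # Nova flavor for the instance
        "volume": {"size": 5},                             # Cinder volume size in GB, if volumes are enabled
        "datastore": {"type": "mongodb", "version": "2.4.9"},
    }
}

resp = requests.post(f"{TROVE_ENDPOINT}/instances", headers=HEADERS, data=json.dumps(body))
print(resp.status_code, resp.json())
```

The same call with a different datastore block would ask for a MySQL, Redis, Cassandra, or Couchbase instance instead.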
As we progress, we're looking into adding support for clusters of instances and things like that, but that is still in the works.

Another interesting blueprint that we completed in Icehouse was DNS support for Trove instances. Up until Icehouse, when you provisioned a Trove instance it came back with a public IP, and you could connect to your MySQL database through that public IP and do whatever you liked with it. Starting with Icehouse, we have a DNS layer. The way we integrated with DNS is that we implemented a Designate driver that plugs into the DNS manager. You put the settings for connecting to Designate into your configuration file, and then when you provision Trove instances you not only get an IP address, you also get back an actual DNS name, based on whatever configuration settings you put in. Trove actually talks to Designate to set up the DNS name for your Trove instance.

Another important blueprint that we added in Icehouse is support for configuration groups. This had been in the works since before Icehouse, back in Havana. The idea here is that a lot of different datastores have configuration parameters in configuration files, and users would like to tweak those parameters, but since they don't have direct access to the Trove instance, they don't actually have any way of doing it. So we had to allow them to manage those configuration settings programmatically through the Trove API. To give you an example with MySQL, this corresponds to the my.cnf settings. Say a user wants to change max_connections or some other my.cnf setting. In Icehouse, the user can now define a set of configuration parameters as a configuration group and then attach that configuration group to a particular instance, allowing them to choose what configuration parameters each one of their Trove instances is configured with.

Another important thing that we added during the Icehouse time frame was better integration with Heat. Prior to this, Trove talked directly to the native OpenStack APIs; it talked to Nova to actually bring up the virtual machine. After this change, you can set the Trove configuration file to turn on Heat support, and if you do, Trove talks to Heat instead of Nova. As part of this, we also ship some default Heat templates for each datastore out of the box. Trove talks to Heat using these templates to provision that particular datastore, set up security groups for it if you have security groups enabled, et cetera, so that Heat does the orchestration rather than Trove having to do all of it itself. We also added support for user-defined custom Heat templates; the Heat templates in Trove are extensible, so if you wanted your Heat templates to do something else as part of that workflow, you can just bring your own.

Trove had backup and restore functionality pre-Icehouse; I think it merged in Havana, when Trove was still called RedDwarf.
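Here is a similar hedged sketch of the configuration group flow just described: define a group of my.cnf-style settings, then attach it to an instance so Trove applies them for you. The paths and field names are assumptions for illustration, not an authoritative API reference.

```python
import json
import requests

TROVE_ENDPOINT = "http://trove.example.com:8779/v1.0/<tenant_id>"  # assumed endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# 1. Define a configuration group holding the my.cnf-style overrides.
config_body = {
    "configuration": {
        "name": "high-connections",
        "description": "Raise max_connections for busy app servers",
        "values": {"max_connections": 500, "wait_timeout": 120},
        "datastore": {"type": "mysql", "version": "5.6"},
    }
}
resp = requests.post(f"{TROVE_ENDPOINT}/configurations", headers=HEADERS,
                     data=json.dumps(config_body))
config_id = resp.json()["configuration"]["id"]

# 2. Attach the group to an existing instance; Trove pushes the settings to it.
attach_body = {"instance": {"configuration": config_id}}
requests.put(f"{TROVE_ENDPOINT}/instances/<instance_id>", headers=HEADERS,
             data=json.dumps(attach_body))
```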
Coming back to backup and restore: we've been improving that functionality in Trove, and something we added in Icehouse was support for incremental backup and restore, so it's no longer necessary to perform full backups every time. The way this works is that, post-Icehouse, you can specify a parent backup in the API call, and the parent backup can be either a full backup or an incremental backup. Under the covers, Trove leverages XtraBackup to perform incremental backups based on the LSN stored in your previous backup. Right now this is available for MySQL, since we're using XtraBackup, and as we'll see when we go through the Juno work plan later, we're also trying to get backup and restore working well for other datastores.

Another thing we added in Icehouse that's worth mentioning is a new component in the Trove control plane: if you deploy Trove, you will also get a component called the Trove conductor. The main reason for adding this was that the old guest agent needed a direct connection to the Trove management database, which was a security vulnerability, and we wanted to improve the design there. So we added the Trove conductor, and the guest agent now talks to the conductor over RPC, for things like heartbeat messages and backup status updates. All of that communication goes through the Trove conductor, so the guest no longer needs a direct connection to the Trove management database.

A few other miscellaneous features and blueprints that we completed in Icehouse are worth mentioning. We got rid of the XML API, and we now support JSON only, in line with the other OpenStack services. We added some basic Tempest API tests; this was very minimal in Icehouse, and we're continuing that work as we progress through Juno. We also did some good work around documentation: we came up with a Trove deployment guide and started work there to provide better documentation for people who are actually trying to deploy Trove. That's still a work in progress, and we hope to improve a lot of that documentation as we go through Juno as well.

I also wanted to cover what we are currently working on in Juno and what I expect to finish during the Juno milestones. One of the big pieces we're working on in Juno is support for replication, specifically asynchronous MySQL master/slave replication. The idea is that you have an existing or new Trove instance, and using the Trove create API you can say: create me another Trove instance as a slave, making this existing or new instance the master for that slave. Trove then takes care of connecting to the master and setting up the configuration so that it replicates its data to the new instance it created as the slave. At a high level, we're also working on the ability to programmatically promote a slave of a master to master via the API in case the master goes down, and on APIs to take existing slaves and detach them from the master so that you can use them as standalone Trove instances.
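Since replication is still in flight for Juno, the following is only a rough, hypothetical sketch of what a create-as-a-replica call might look like. In particular, the replica_of field name is an assumption made for illustration; the actual blueprint may spell the interface differently.

```python
import json
import requests

TROVE_ENDPOINT = "http://trove.example.com:8779/v1.0/<tenant_id>"  # assumed endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# Hypothetical sketch: create a new instance as a replica of an existing master.
# "replica_of" is an assumed field name for the planned Juno replication API.
body = {
    "instance": {
        "name": "my-mysql-replica",
        "flavorRef": "3",
        "volume": {"size": 5},
        "datastore": {"type": "mysql", "version": "5.6"},
        "replica_of": "<master_instance_id>",
    }
}
requests.post(f"{TROVE_ENDPOINT}/instances", headers=HEADERS, data=json.dumps(body))
```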
Another big blueprint we're trying to get merged during Juno, to align with the direction of the rest of OpenStack, is support for Neutron. Today, Trove works with Nova Network pretty well, and it sort of works with Neutron, but with a lot of workarounds. We're trying to make that a lot easier, and we're looking at specific tests and scenarios. One is being able to add Neutron NICs on instance create: when you create a Trove instance, you can provide it a Neutron network, and the instance then gets an IP on that Neutron network. A scenario this supports is that, whereas before a Trove instance needed to have a public IP, now you can have a private IP on your particular Neutron network; the Trove instance would not be world-accessible, but it would be accessible to, say, your API server on the same network. We're also looking at adding support for a set of default Neutron networks. If the people deploying Trove choose to have Trove instances come up on certain networks, for example for monitoring purposes, they would be able to set this in the Trove configuration file, and when Trove instances came up, they would not only have a NIC on the customer-requested network, but also on these management networks. We're also adding enhancements to Horizon, specifically for Neutron-related instance launch issues: updating the wizards so that, if Neutron is enabled, you can actually set these NICs or Neutron networks.

We're also enhancing configuration groups. Before, configuration groups were only per datastore, but while testing configuration groups and talking to a bunch of people who deploy Trove, we found that they also need to be version-dependent. There are configuration parameters, for example, in MySQL 5.6 that don't exist in 5.5, and so we need different configuration groups with different settings. We're also working towards adding configuration groups for MongoDB and making sure the feature works across datastores. We're going to allow users to add descriptions to groups, which has been a customer request, and to do better schema-based validation for the values of configuration parameters so we can do a better job of validating them.

We're also making a lot of incremental improvements to datastores. We're looking to add a couple of new datastores, specifically Postgres and Vertica, and while doing this we realized that specific datastores like Vertica need flavors that are specifically configured, for example with really high memory. So we need to be able to associate flavors with certain datastores, so that we provision instances of those datastores only on the associated flavors. Another thing we're looking at is the ability to view datastore-specific log files, so if a customer wanted to view the MySQL error log, for example, there would be some way of doing that programmatically. We're also looking to make enhancements to backups. I hinted at this earlier: we're working on backup and restore for non-MySQL databases, and datastores like Couchbase are things we've started working on and looking at. We're also looking at making backups that are stored in Swift today available to Trove deployments that are cross-region, so that you'd be able to restore a Trove instance in a different region based on a backup that you took in some other region.
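Similarly, the NICs-on-create work is planned rather than finished, so the following is a hypothetical sketch of the idea: the create request grows a field naming the Neutron network to attach. The nics field and its shape are assumptions for illustration only.

```python
import json
import requests

TROVE_ENDPOINT = "http://trove.example.com:8779/v1.0/<tenant_id>"  # assumed endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# Hypothetical sketch: ask for the instance to come up with a port on a private
# Neutron network instead of a public IP. The "nics" field is an assumption here.
body = {
    "instance": {
        "name": "internal-mysql",
        "flavorRef": "3",
        "volume": {"size": 5},
        "datastore": {"type": "mysql", "version": "5.6"},
        "nics": [{"net-id": "<neutron_network_uuid>"}],
    }
}
requests.post(f"{TROVE_ENDPOINT}/instances", headers=HEADERS, data=json.dumps(body))
```

An instance created this way would only be reachable from other machines on that Neutron network, which is the private-IP scenario described above.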
Another big drive in Trove in Juno has been to get better testing and to align the testing we have in Trove with the OpenStack testing requirements. As part of that, we're looking to add more Tempest tests, specifically guest-level API tests, client tests, and some scenario tests. We're also looking to support upgrade testing through Grenade, so basically Grenade testing from Icehouse to Juno.

There are also a lot of other smaller tasks and miscellaneous items that we've been working on in Juno. We're working on support for a capabilities API. This was born out of a requirement from, I believe, Horizon. The idea is that Trove has a lot of configuration tweaks you can use to set it up in a certain way, but there's no way to actually discover what those tweaks are. To give you a concrete example: you can set up Trove today to require the datastore instance bits to go on a Cinder volume, but if that's the case, the Horizon panels need to show that a Cinder volume is required, and Horizon has no programmatic way of knowing whether the Trove configuration is set up that way or not. The capabilities API answers that question by letting you dynamically query Trove: do you support volumes? Do you require volumes? Or, for a specific datastore type, do you support these features? We're making some good progress towards that.

We're also looking to migrate to Oslo messaging; the RPC code in Trove is pretty old, and the Oslo incubator RPC code is going away, so we're looking to do that. We're looking into Heat enhancements, specifically migration of a non-Heat Trove install to a Heat-based Trove install, using things like stack adopt to make that easier, and so on. We're also looking at some of the newer Heat features that are coming out, which Zane will talk about later, like lifecycle support, to make operations like resize easier and better. We've also worked a lot on improving logging and documentation in Juno, and that's going to be an ongoing effort; hopefully we can get our documentation to the point where it's a lot better for users and makes Trove easier to deploy and use.

I also wanted to take a minute to mention that we're in discussions about a clustering API. I don't think that's going to arrive in time for Juno; it's going to take longer, so perhaps the K release, but I wanted to throw that out there and mention that it's something a few folks are working on. Apart from that, if you're interested in joining Trove and making a contribution, we have a really big and growing community of contributors, and we're completely open to new ideas and new code. Find us in #openstack-trove on Freenode, and come let us know how we can make Trove better and how you can make your first contribution, or your subsequent contributions, to Trove. We'd love to have you. That said, I don't know if there are any questions in the chat room yet; hopefully we'll have some later. We'll see. But thank you very much.

Appreciate that. Thank you, Morgan. Sure, thank you. Okay, so we are going to move over to Zane. Zane, are you there? I'm here. Can you hear me? I can hear you. I'll put this into full-screen mode, and then you can take it away. Okay.
Go right ahead. Thank you. Thank you, Maggie. Hi, everyone. I'm just going to give a quick update on what's been happening in Heat in the Icehouse timeframe and what we have planned for you in Juno. I wouldn't put too much stock in these numbers exactly, but I ran the statistics on Havana and Icehouse, and as you can see, the project is continuing to grow. We had a lot of contributors in Icehouse; I think Heat is now second only to Nova in terms of number of commits among the integrated projects in the Icehouse cycle. So it's a very big project now with a very healthy ecosystem. We're happy to see everyone getting involved, and we're rotating the PTLs as well, so we're building leadership depth in the project. I think the project is in a very healthy state.

We've got a bunch of changes that happened in Icehouse to improve Heat. The biggest one, I guess, is the software configuration and deployment resources. These allow you to define your software configurations separately from the servers on which they're going to be deployed. This is a big improvement for reusability of configurations, and it also separates the infrastructure from the software side. You can use it by itself, or you can integrate it with your existing configuration management tools; it's not going to replace Puppet or something like that unless you want it to, but it will integrate on the instance with Puppet or whatever configuration management tool you already have. If you look in the heat-templates repository, Steve has already written integration points for some configuration management tools, and we expect to see more get added as development continues.

The HOT DSL: HOT is the Heat Orchestration Template format, our native OpenStack template format, which has been in development throughout the Havana and Icehouse cycles. We've now frozen that, so if you write templates to that version of HOT, they should continue to work in the future; if we make any breaking changes from now on, we'll be versioning it, and you can rely on your current templates continuing to work, hopefully. As part of that work, template formats are now pluggable. If you wish to write your own template format, you can plug it in in much the same way that you add resource plugins, so the operator can choose to do that. I would say that support is not 100 percent complete in Icehouse, but if we need to bump the version of anything, the support we have in Icehouse is sufficient for that; if you wanted to write a completely new template format, the remaining changes required should land in Juno. Part of that is that the intrinsic functions in the template, like get_param, get_resource, and get_attr, are now also pluggable, so you can create your own intrinsic functions, add them to your own custom template format, drop that in the plugins directory, and that should just work.

We also now have pluggable parameter constraints. This lets you verify earlier on that the parameters the user is passing are valid. For example, if you have a Glance image name being passed in as the image to boot a Nova server from, you can specify the glance.image constraint on the parameter, and that will check that it is actually a real Glance image.
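Here is a minimal sketch of what such a parameter constraint looks like in a HOT template, held in a Python string for illustration; the template version and property names follow the commonly documented form, so treat the details as indicative rather than authoritative.

```python
# A small HOT template (as a Python string) whose "image" parameter is validated
# against Glance via the pluggable custom constraint described above.
hot_template = """
heat_template_version: 2013-05-23

parameters:
  image:
    type: string
    description: Name or ID of a Glance image to boot from
    constraints:
      - custom_constraint: glance.image

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
"""

# The template could then be handed to Heat (for example via python-heatclient or
# the REST API), and an invalid image name would be rejected at validation time.
```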
With that constraint in place, Heat gives you the error message straight away, telling you that the problem is that this isn't a Glance image. These constraints are also pluggable, so operators will be able to deploy their own, and we expect to implement more as time goes on.

Autoscaling resources: up until now, we had to rely on the CloudFormation-compatible AWS autoscaling resources. With Icehouse, you can create OpenStack-native autoscaling resources, which allow you to scale any type of resource. The obvious one is to scale OS::Nova::Server resources instead of the AWS EC2 instance resources, but in fact you can scale any type of resource now, including provider templates; so if you want to scale a whole template of resources, that is now possible. And that is leading, eventually, to a native autoscaling API: the format of those resources reflects what we expect a native autoscaling API in OpenStack to eventually look like.

The heat engine: you can now scale out and run multiple copies of the heat engine. It uses a database locking scheme to make sure that no two engines are operating on the same stack at the same time, but you can now run multiple copies. So if you have many users hitting Heat with requests, you can distribute those requests across multiple heat engines. The Heat API was always stateless, so it could always be scaled out, but now the engine can be as well.

We've improved the API for operators. For example, if you're an admin and you're the OpenStack operator, you can now query the API for a list of all the stacks from all the users. I expect those improvements to continue into Juno; it's about adding things to make it easier for operators to find out what Heat is doing.

The last big one is that you no longer need admin privileges to use basically any of the standard built-in resources. Previously, we had to create users in Keystone in order to create certain resources, for example wait conditions, because we needed a user with minimal privileges so that we could put its credentials into the instance and have it call back to trigger the wait condition; and creating a user in Keystone requires admin privileges. What we do now is that when you deploy Heat, or rather when you deploy OpenStack, you create a separate domain in Keystone that is just for these users created by Heat. Heat has permission to create users in that domain, so it won't interfere with any LDAP integration or whatever you have in your main Keystone deployment. That allows us to finally remove the requirement to be admin for some resources, so ordinary users should be able to use all of the built-in resources now.

So that was Icehouse, and we've got a bunch more stuff coming up in Juno. Software configuration: the first stages are in place now, and we're going to continue to improve it, particularly around managing the whole lifecycle of software configuration. As the stack changes and the software configuration changes over time, we will give you ways to modify the software configuration, and, for example, when you delete a software configuration, you can have an action that cleans up the application on delete, that kind of thing.
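To tie the software configuration and deployment resources from earlier to the lifecycle work planned for Juno, here is a rough sketch of the config/deployment split in a HOT template; resource and property names follow the commonly documented form, and the lifecycle comment marks the part being extended, so treat it as indicative rather than authoritative.

```python
# A rough sketch (as a Python string) of the software config / deployment split:
# the configuration is its own resource, and a deployment resource binds it to a
# server, which is what makes configurations reusable across servers.
software_config_template = """
heat_template_version: 2013-05-23

resources:
  install_app:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "installing the application..."

  deploy_app:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: install_app }
      server: { get_resource: app_server }
      # Lifecycle handling (for example running cleanup on DELETE) is the part
      # being extended in Juno, as described above.
      actions: [CREATE, UPDATE]

  app_server:
    type: OS::Nova::Server
    properties:
      image: my-image
      flavor: m1.small
      user_data_format: SOFTWARE_CONFIG
"""
```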
Recovery from a failed update is the main one. This has been a long-standing problem: you're doing a stack update and something fails. Up to now there was rollback, and if you had rollback enabled and it worked, that was good; but if the rollback failed, or you didn't have it enabled, then you had effectively lost your stack. There was nothing further you could do to it apart from delete it. That will be changing in Juno: if a stack update fails, you'll be able to recover from it by doing another update, and similarly, if a stack create fails, you'll be able to recover by doing a stack update to whatever template you want. So that problem should go away in Juno.

A couple of things in autoscaling. One is that when you scale down an autoscaling group, you may want to select which server actually gets scaled down, rather than have Heat choose one for you more or less at random; that's something we want to implement in Juno. The other is notifications to warn you that a server is about to be scaled down, so that you can evacuate that server and drain whatever queues or processes are running on it. We're hoping to get both of those into Juno, and the notifications will eventually extend to things other than just autoscaling, which then goes on to allow the user to configure notifications on any event in Heat.

In-progress updates: if you start an update and realize partway through that things have gone horribly wrong and you want to stop it and roll it back, there's currently no API to do that. So we'll be introducing an API to cancel an in-progress update.

Notifications are a big one. I just talked about outgoing notifications; this is incoming notifications. Right now, Heat does a lot of polling on various resources to determine when, for example, a Nova server is fully built. We know we won't be able to eliminate polling altogether, but we can reduce it by a great amount by listening for notifications instead. That is planned, and that work leads into what we call the convergence specification: constantly monitoring the stack, seeing if a resource dies or something like that, and bringing it back into spec. That part of it is not a Juno thing; it's a future thing. But more reliance on notifications rather than polling is the first step towards that end goal.

Multi-region support: if you have Heat installed in multiple regions, you can obviously manage stacks in each individual region, but you might also want a way to collectively manage all of those stacks. So we're going to introduce multi-region support: you'll be able to create nested stack resources in a different region from the region you're creating your initial stack in, and so you'll be able to manage multiple regions from one top-level template.

Steve Baker has done a lot of work already on performance improvements. The TripleO project depends a lot on Heat, and their goal is to deploy some quite large OpenStack clouds using Heat, so performance is very important to them. Steve has done a lot of work on optimizing the access we do to the database, and we are also now able to profile Heat using the Rally project, which should help us track down any other nasty performance bottlenecks. So you can expect better performance from Heat in Juno than we had in Icehouse or Havana.
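Picking up the multi-region support mentioned a moment ago, here is a purely hypothetical sketch of what a top-level template managing nested stacks in two regions might look like once the work lands; the context and region_name spellings are assumptions used to illustrate the idea, not a finalized interface.

```python
# Hypothetical sketch of the planned multi-region support: a top-level template
# that creates nested stacks in other regions. The "context: region_name"
# property is an assumed, illustrative spelling of the idea.
multi_region_template = """
heat_template_version: 2013-05-23

resources:
  west_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionWest      # assumed property naming the target region
      template: { get_file: app_stack.yaml }

  east_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionEast
      template: { get_file: app_stack.yaml }
"""
```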
The last one I've got here is stack abandon and adopt, which Nikhil alluded to earlier. These are actually available now in Icehouse, but possibly a bit flaky there, so they're an interesting thing to experiment with but maybe not ready for prime time yet; we hope that for Juno they will be ready for prime time. Adopt is probably a dangerous thing to use manually, but for automated tools, like the Trove case Nikhil was mentioning, where you've already created resources and now you want Heat to manage them even though you didn't create them with Heat, and you don't want to go delete them first, you can adopt those resources into a new stack and let Heat manage them from then on. Abandon is obviously the reverse of that: if you want Heat to stop managing a set of resources, you can do a stack abandon and Heat will stop trying to interfere. The two are duals: when you do a stack abandon, you're given a document with all the metadata you need to do an adopt again. So for automated processes like Trove, where you can assemble that data in quite a reliable way, this should be a useful feature.

So that's the main stuff we have on the roadmap for Juno. Of course, there are a lot of other small things happening. I just wanted to go through some of the changes that might affect operators. A big one is that Keystone now has a version 3 API, and Heat is increasingly relying on it for new features like some of the trust support we use for autoscaling, and creating users in a separate domain to get around the admin requirement. For now, the v2 API is supported with a little plugin shim inside Heat. That won't be around for a long time; I think it will probably still be there in the Juno release, but all bets are off at this point for the K release. So I would definitely encourage people to start testing the v3 Keystone API and look to deploy it soon, because there are Heat features that are going to rely on it.

In Juno, we will switch over to Oslo messaging, dropping the existing Oslo incubator code and moving to the latest oslo.messaging library. That will potentially mean some config changes, so it's just something to be aware of. And the last one is that there will be a new process to deploy in Juno, we expect: as well as the heat engine, there will be the heat observer process. As I mentioned, we're going to be listening for notifications more, and that will be the job of that process. It should be horizontally scalable from the beginning. So that's another control-plane process that will be added to Heat.

That's all I have for you. Do we have some questions in the chat? I think there's one from Nikhil here. Hi, this is Nikhil. I just saw the question from Matthew in the chat room, and I wanted to address it. Just to read the question: "I wasn't clear if we'll be able to create databases with Trove from the Horizon dashboard in the next release." We're making a lot of changes to the Trove Horizon dashboard in Juno to make it more useful, and yes, one of the things we'll be working on is exactly that. Today I think you can list databases and delete databases, but it's not possible to create them, so we'll specifically be working on that task as well. There's a blueprint up for that already.
Let me just link to the blueprint in the chat room, and hopefully you can take a look at it, follow along, and we can get to it soon in Juno. Great, thank you. Are there any other questions? You are now unmuted, so does anybody else have any questions? Well, I'll give it a minute or so. Feel free to put your question into the chat box, or just ask it out loud; that's fine.

Well, if you do have any questions, you can send them over to the Foundation at marketing@openstack.org. You'll also get an email from MeetingBurner asking you how you thought the meeting went; you can send feedback directly from there back to us as well (I think that comes from Lauren), or you can ask our presenters directly. Otherwise, thank you very much for joining. Thank you, Nikhil and Zane; I know you're both very busy, and we appreciate you doing these readouts for us. This webinar actually concludes our PTL webinar series. We've had seven of them, so way to end strong. Thank you very much. If you would like to listen to this webinar or the other webinars in the series, they're all on the YouTube channel; this one will be up within a day or two, but there's a playlist out there and you can listen to them there. Otherwise, thanks again, everyone. Have a great day. Thanks, guys. Thank you.