Hello everyone. My name is Mike Perez. I am the Project Team Lead for the OpenStack Block Storage project, Cinder. Today we're going to be looking at a little taste of what's coming in the Liberty release. Looking back on Kilo, we have 45 volume drivers in total supported in Cinder, all of them covered by a continuous integration system. In the Kilo release we made official a requirement that has been a long time coming and that we've been working towards: we want all volume drivers to be tested against the same tests we run against the reference implementation in OpenStack, which is LVM. So we made the decision that all vendors would have to have a CI system in place that, for every single patch set that comes in for Cinder, pulls in that patch set, brings up an OpenStack environment with that patch, and actually tests it against their actual backend solution hooked up to that OpenStack setup. Of course, this is done in an automated way with CIs, since a lot of patches come in, but we're handling the amount of traffic coming in across the different CIs. I'm happy to say that all 45 drivers have passed these tests, and I'm hoping that means the drivers themselves are a lot more stable for operators to use. Along with that, though, we unfortunately had some volume drivers that did not meet the requirements in time. As I mentioned, the community is very committed to making sure that the volume drivers are tested, so we will continue to work with those vendors. Some of them have already been added back in Liberty, and others can be re-added at a later time in another release. Along with that, we have 75 blueprints done for Kilo, as well as 353 bugs fixed.
And I'm happy to say that our release notes include documentation links for each of the new features we're providing, so documentation has also been improved. Next slide. For Liberty, so far we've just tagged the Milestone 1 release, and we have 19 new volume drivers, all with CI systems. As I mentioned, this is a requirement for all new drivers, as well as for existing drivers to continue to stay in the Cinder ecosystem. Along with that, we have 29 blueprints done, as well as 134 bugs fixed for the first milestone. Next slide. So, nested quotas. I'm basically just going to go slide by slide through some important features that I think are good for operators and end users to know about. Nested quotas give us hierarchical quotas. This works together with Keystone, which provides a hierarchical structure of projects, sub-projects, and so on; we want to be able to set quotas on each of those nested projects. It's easiest to look at the example on the slide: project A has a sub-project B, and B has a sub-project C. A starts off with a hard limit of 100 and allocates 50 of that to its sub-project B. So B has a hard limit of 50, part of which it allocates to project C. You may notice that project B has 20 used while C has 10 used. That's because a project's usage takes into account the usage of its sub-projects: C is using 10, and B itself is using another 10. What do these numbers represent exactly? I didn't mention that earlier. In this example they're gigabytes, but quotas can be expressed as gigabytes, number of volumes, or number of snapshots, like we do today inside of Cinder. I definitely recommend reading the full spec if you want to see a bunch of different case scenarios; it's pretty well documented. So, happy to see that coming into Liberty. Next slide.
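The quota arithmetic from that slide can be sketched roughly as follows. The class and method names here are hypothetical illustrations of the idea, not Cinder's actual implementation, and C's hard limit of 25 is an assumed value the slide does not state.

```python
# Sketch of hierarchical quota accounting (hypothetical names).
class Project:
    def __init__(self, name, hard_limit):
        self.name = name
        self.hard_limit = hard_limit   # e.g. gigabytes
        self.own_used = 0              # usage by this project directly
        self.children = []

    def add_child(self, child):
        self.children.append(child)

    def used(self):
        # A project's usage includes everything its sub-projects use.
        return self.own_used + sum(c.used() for c in self.children)

    def allocated(self):
        # Capacity handed down to sub-projects as their hard limits.
        return sum(c.hard_limit for c in self.children)

    def available(self):
        return self.hard_limit - self.allocated() - self.own_used

# The slide's example: A (limit 100) -> B (limit 50) -> C.
a = Project("A", hard_limit=100)
b = Project("B", hard_limit=50)
c = Project("C", hard_limit=25)   # assumed value for illustration
a.add_child(b)
b.add_child(c)

b.own_used = 10   # B itself uses 10
c.own_used = 10   # C uses 10

print(b.used())   # 20: B's own 10 plus C's 10
```

The key point the sketch captures is that B's "20 used" is not B's own consumption alone; it rolls up C's usage as well.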
So, force detach. This in particular is something that comes up quite often. It also came up at the operators' meetup; I was there, and this was one of the few things, and I say unfortunately one of the few things, that came up as Cinder feedback. The problem is: "my volume is stuck in an attaching state, and I can't do anything about it, except maybe update the database to set the volume back to an available state." Don't do that. That's error-prone, and we would really prefer people not mess with the database. So what does this provide exactly? Operators and end users can safely detach stuck volumes. How does it do this? Cinder will communicate to the volume driver that it wants to put this volume, currently stuck in a detaching state, back into an available state. We leave it to the volume driver and Cinder to orchestrate terminating the connection correctly, and once that has been confirmed on the actual backend solution, we can safely set the volume back to an available state. Next slide. Generic image caching. For your popular images, we're adding the ability to actually cache those images. A problem that exists today inside of Cinder (not for all volume drivers; some drivers do smart things here) is this: you have your Glance backing store, which stores images, and you have your Cinder backend solution hooked up. When you want to copy an image to a volume you're creating on the backend side, unfortunately that copy goes over the network, and for big images that can be really slow. It also generates network traffic, among other things. We want to avoid that and make volume creation from images a lot faster. To do this, we will cache the popular images on the actual backend solution itself.
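As a rough sketch of that caching idea, a driver-side image cache might look like the following. The class and method names are hypothetical, and real drivers implement the fast path however their backend supports it.

```python
# Sketch of a generic image cache for image-to-volume creation
# (hypothetical names; not Cinder's actual driver interface).
class Backend:
    def __init__(self):
        self.image_cache = {}    # image_id -> cached volume name
        self.network_copies = 0  # count the slow-path copies

    def copy_image_over_network(self, image_id):
        # Slow path: stream the image from Glance to the backend.
        self.network_copies += 1
        return f"cached-{image_id}"

    def clone_volume(self, source):
        # Fast path: backend-side clone / copy-on-write.
        return f"clone-of-{source}"

    def create_volume_from_image(self, image_id):
        if image_id not in self.image_cache:
            # Cache miss: pay the network copy once, then remember it.
            self.image_cache[image_id] = self.copy_image_over_network(image_id)
        # Cache hit (or freshly populated): clone instead of copying.
        return self.clone_volume(self.image_cache[image_id])

backend = Backend()
backend.create_volume_from_image("ubuntu")  # first request copies over the network
backend.create_volume_from_image("ubuntu")  # second request clones from the cache
print(backend.network_copies)  # 1
```

However many volumes are created from a popular image, the expensive network copy happens only once per image.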
So then, when a volume creation request comes in and wants to create a volume from an image, we can instead do a copy-on-write clone of that cached image. The volume driver can do whatever it does to be performant in this area: perform the copy-on-write, reference that image, and create a new volume. It's pretty quick. The best thing about this is that, as I mentioned, it's generic; all volume drivers will be able to take advantage of it. Next slide. So, rolling upgrades, part two. I say part two because from Kilo onwards we allow schema upgrades to be independent of services. Essentially, you can update your Cinder database while your Cinder services continue to work properly; you don't have to bring them all down at the same time as the schema upgrade. That was the first step, and we were committed to making sure we didn't create any silos and instead came up with a generic solution. We actually took this solution originally from Nova and turned it into a generic library in Oslo for all OpenStack projects to take advantage of. It's called versioned objects, and it provides a separate layer for the objects that exist in your code; it essentially enables that first step of making schema upgrades independent. The second step we have to deal with is RPC compatibility. Messages being sent between different Cinder services, messages that are in flight, as well as the receiving service, all need to be able to handle upgrades as they're being rolled out. So this proposes the idea of a master version that Cinder will know about. Say, for example, you're at version O and you want to upgrade to version P. You can tell the Cinder services that, okay, the master version right now is O.
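The pinning idea can be sketched like this. The class below is a hypothetical stand-in; the real mechanism lives in Cinder's RPC layer and oslo's versioned objects, and the version strings "O" and "P" just follow the example above.

```python
# Sketch of RPC version pinning for rolling upgrades (hypothetical names).
class RPCClient:
    """Sends messages no newer than a pinned (master) version."""

    def __init__(self, pinned_version):
        self.pinned_version = pinned_version

    def send(self, message, version):
        # Cap the outgoing message at the pinned version so older,
        # not-yet-upgraded services can still understand it.
        wire_version = min(version, self.pinned_version)
        return {"version": wire_version, "body": message}

# While rolling out release P, keep the pin at O...
client = RPCClient(pinned_version="O")
msg = client.send("create_volume", version="P")
print(msg["version"])  # "O": still speaking the old version on the wire

# ...then, once every service is running P, bump the pin.
client.pinned_version = "P"
print(client.send("create_volume", version="P")["version"])  # "P"
```

The design point is that upgrading the code and upgrading the wire format become two separate, operator-controlled steps.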
You can then upgrade your services, slowly rolling out the code updates to the different nodes so they're at version P. Once the code has been rolled out to all of these services, you can tell Cinder, okay, go ahead and start sending messages as P, and all the different services should be able to receive those messages just fine. It will also wait for all messages in transit to be completed before it does this. So it allows us to avoid various rolling-upgrade pains: you can upgrade your Cinder services in any order you like. And of course, there are a variety of other projects we're working on this particular solution with. Just like we did with versioned objects, we will make sure this is something shared across OpenStack projects for everyone to take advantage of. Next slide: capabilities. Today with Cinder, you have your backend solution, and it provides a variety of different policies that you can expose through Cinder. In Cinder, these policies are set on volume types: you define different tiers with volume types, and each has different policies attached to it, which we call extra specs. The problem, though, is that as an operator you have backend solutions that all use different names for their different policies. Unfortunately, it's not easy to create one interface and have all the different vendors agree on terminology, go figure. So the next best thing we can do is ask the backend: what are you currently capable of doing right now, so that I can set up a volume type and my policies? This will actually mean you don't even need your vendor's documentation to know which different policies you can set on a volume type.
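As a sketch of how reported capabilities could be turned into volume-type extra specs, consider the following. The capability payload shape and the helper function are illustrative assumptions, not Cinder's exact API response format.

```python
# Sketch: build extra specs from a backend's reported capabilities
# (the payload shape here is hypothetical).
def capabilities_to_extra_specs(capabilities):
    """Build an extra-specs dict from what the backend says it can do."""
    specs = {}
    for name, info in capabilities.items():
        # Only include capabilities the operator filled in a value for.
        if info.get("value") is not None:
            specs[name] = str(info["value"])
    return specs

# Pretend the backend reported these capabilities, and the operator
# picked values for two of them (e.g. via a Horizon drop-down).
reported = {
    "qos:max_iops": {"type": "int", "value": 5000},
    "thin_provisioning": {"type": "bool", "value": True},
    "compression": {"type": "bool", "value": None},  # not configured
}

print(capabilities_to_extra_specs(reported))
# {'qos:max_iops': '5000', 'thin_provisioning': 'True'}
```

The operator never has to guess at spelling or consult version-specific vendor docs; the backend itself advertises the valid policy names.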
Instead, you can ask Cinder for the list of capabilities you can set on a volume type, and then enter them as they're presented to you. What's neat about this is that you don't need documentation specific to a particular version of your backend solution. Also, clients like Horizon can use this API to provide an interface with, say, drop-down menus for setting up policies based on what the backend solution reports it can do. Then you can fill in the values you want, say, your QoS max IOPS for a particular tier you're creating in a volume type. It's less error-prone, and I think people will definitely like this improvement as clients roll it out. Next slide: improving migrations. Today, operators can migrate volumes, and to be specific, this isn't live migration of volumes between instances. We're talking about migrating volumes from one storage backend to another, or within the same backend from one pool to another. That is something Cinder already provides today, but unfortunately right now there are some issues, in particular with knowing the progress of a migration. Sometimes it can take a while, so you want to be able to see that progress. We're going to provide the ability to poll for that progress, and we may also have that in the Cinder client, so you can have the progress pulled in and get periodic updates to know when the migration finishes. Along with that, operators may want the ability to force migrations. Take the scenario where there's an issue with one of your backend solutions, and you want to migrate the volumes over to another backend solution you have deployed in your data center.
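The progress polling described above might look something like the following on the client side. The objects and method names here are simulated stand-ins; the real Cinder API and client calls may differ.

```python
# Sketch of polling a volume migration for progress (simulated objects).
import time

class FakeMigration:
    """Stands in for a backend migration that advances on each poll."""
    def __init__(self):
        self.percent = 0

    def progress(self):
        self.percent = min(100, self.percent + 25)
        return self.percent

def wait_for_migration(migration, interval=0.01):
    # Poll periodically until the migration reports completion.
    while True:
        pct = migration.progress()
        print(f"migration at {pct}%")
        if pct >= 100:
            return pct
        time.sleep(interval)

result = wait_for_migration(FakeMigration())  # finishes at 100
```

A client built on this pattern could surface the periodic updates to the operator instead of leaving a long-running migration opaque.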
So in order to do a force migration in that scenario, we need to make sure the volumes aren't in an attached state or being used by the end user. You can start a force migration: the volumes are marked as being in a maintenance state, they start being migrated over to the other backend, and they can't be used by the end user in the meantime. That allows for those kinds of scenarios. Next slide. These are some smaller things we're adding that operators might like but may not notice, since they're behind the scenes: improvements to Nova's use of Cinder's API and error handling. We're currently identifying a variety of issues with how Nova uses the Cinder client, for example, and rolling out a variety of bug fixes. The plan is to work with John on communication with Nova about the different issues we need pushed along, to help in that area. Along with that, the plan is also to make Cinder Python 3 ready. So far, good progress is being made there; there are a variety of tools people have built to make the migration quick by identifying compatibility issues. Happy to see that coming along as well. Next slide. Ta-da, that's it. These are just a variety of pictures of the Cinder team. Anyway, thank you. If you want to reach out to us, we are on Freenode; our IRC channel is #openstack-cinder. Please reach out, we're a pretty nice group. All right, thanks.