Hello, welcome to the Cinder project overview and update. My name is Brian Rosmaita. I'm a principal software engineer at Red Hat. I was the PTL for the Victoria release, and I'm serving again as Cinder PTL for the upcoming Wallaby release. So what does Cinder do? Well, it's the block storage service. We implement services and libraries that provide on-demand, self-service access to block storage resources. You can see the basic layout in the diagram: we provide a REST API for clients to contact, there's a message bus, and there are several services that comprise Cinder, namely the scheduler and the volume managers. Long story short, we provide software-defined block storage via abstraction and automation on top of various traditional back-end block storage devices. So if you want a volume for your instance, Cinder is where you get it from. So what does the Cinder project do? Well, we produce software in quite a few repositories. The cinder repository is where the main Cinder code lives; that provides the REST API and all the services that make the block storage service work. We also have a library called os-brick, and that's what's used to actually attach volumes. Nova uses it to attach volumes to your instances, and Cinder itself uses os-brick when it needs to attach a volume to perform some kind of service on it. We provide python-cinderclient, which provides Python bindings to the REST API. We also provide python-brick-cinderclient-ext, which allows you to use os-brick to do attachments, but via the command line, for particular applications. And we provide the cinder-tempest-plugin and cinderlib, which I'll talk about a little later. So who does it? Well, we've been working on Cinder since the Folsom release of OpenStack. This is a photograph from the recent PTG, just about a week and a half ago.
It's not the entire team, just the people who stuck around for the photograph, but it gives you an idea of the people involved in working on Cinder. As far as contributors go, we had about 124 contributors from 42 companies in Train, about 97 contributors from 31 companies in Ussuri, and the stats for Victoria right now are 77 contributors from 25 companies. So you can see there's been something of a decrease, but it may also be due to the data in Stackalytics not being completely up to date, because the last time I presented, it looked like we only had 30 contributors in Ussuri, and it turned out that was wrong. So the project seems fairly healthy as far as the number of contributors goes. Now, who does it as far as companies go? You can see Red Hat did about 38.8% of the commits into Cinder in the Victoria cycle. Dell EMC was responsible for a bit over 20%, and then we had Inspur and NEC and Mirantis, and then a lot of other companies making a few commits. Red Hat's share is pretty large, but its share of OpenStack Victoria overall was about 38.2%, so Cinder is right on target there. My point in showing you this isn't to show off that Red Hat is the best just because I work for them; it's that we could use some more diversity, and it would be worth putting some resources toward Cinder if you have them. Okay, the latest survey numbers as far as who's using Cinder: it's deployed in 86% of production deployments. This is based on the 2019 survey numbers, I believe, which had 331 respondents. Just to give you some perspective, Nova was running at 90%, and I can't imagine how you're going to run OpenStack and not include Nova, so it looks like we're pretty far up there. As far as testing and proof-of-concept deployments go, we're deployed in about 73% of those, and Nova is at 75%. So basically, we're almost everywhere in OpenStack. Okay, why are we looking at the horse's back end?
So if you've been looking at the Cinder logo, you've noticed the horse is sort of facing the wrong direction, but from our perspective it's the right direction, because your block storage has to actually be stored somewhere. The storage back ends are essential to Cinder, and the way Cinder supports these back ends is through drivers. Drivers are code that mediates between the Block Storage API, which provides a consistent interface to users, and the particular back end where data is actually stored. As far as back-end drivers go, right now we have 79 volume drivers in Cinder, seven of which are marked unsupported; I'll explain that in a little bit. We've got six backup drivers, one of which is unsupported and going to be removed in Wallaby: the IBM Tivoli Storage Manager driver, which does not have a maintainer. And then there are two Fibre Channel zone manager drivers, one of which is unsupported. If you're interested in what Cinder drivers do: there's a basic set of functionality that all drivers have to implement, but drivers can also provide more optimized functionality depending on their design and what their back ends can support. If you want to see the support matrix for what functionality each driver supports, go to DuckDuckGo or your favorite search engine and type in "all about Cinder drivers," and you'll get a page that explains what drivers are and how we work with them in Cinder, and that also has links to information about the individual drivers. So what's an unsupported driver? All Cinder drivers must run third-party CI systems that test proposed patches against an OpenStack environment connected to the vendor's back end. OpenStack runs its own continuous integration system, but OpenStack does not own every kind of vendor hardware that people want to use with OpenStack, or that people want to sell to OpenStack clouds.
So the way we've handled this is we ask people who contribute drivers to also run third-party CI, so that continuous integration tests run on the code as it's being developed, and we can tell that a change hasn't broken anything in the driver. The third-party CIs have to report on every patch, whether it changes their own driver or not, because you never know what the effect of some change might be. If no CI reporting occurs within a two-week span, or some other issues are found and aren't addressed in a timely manner, the driver is marked as unsupported. If a driver is unsupported at the time of release, then an operator has to set a specific configuration option in order to use that driver. That way we get information out to the operators: you can use the driver, and as far as we know it's still working, but be aware that support is lacking right now. That also makes the driver eligible for removal in the next development cycle. Now, since January 2020, the Cinder team has decided to allow an unsupported driver to stay in-tree as long as it continues to pass OpenStack CI testing. So even though we don't have third-party CI for the unsupported drivers, they still have to pass the regular OpenStack CI testing, which includes all the unit tests and functional tests. So they are being tested, just not against their actual back ends, which of course is what you really want them tested against. The reason we did this is that our experience has been that most vendors address driver issues eventually, or fairly quickly, and dropping drivers and then restoring them was becoming inconvenient for operators, because a driver might be unsupported and disappear for a cycle and then be back again. And for the release where the driver was not included, you had to get it from somewhere, and that was causing problems.
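As a concrete sketch, that configuration option is `enable_unsupported_driver`, set per back-end section in `cinder.conf`; the back-end name and driver path below are made up for illustration, only the option itself is real:

```ini
# cinder.conf -- hypothetical back-end stanza
[acme-iscsi]
volume_backend_name = acme-iscsi
volume_driver = cinder.volume.drivers.acme.AcmeISCSIDriver
# Required to keep using a driver that was marked unsupported
# at release time; this is the operator acknowledging that
# third-party CI support is currently lacking.
enable_unsupported_driver = true
```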
So we've decided to go with this model, and we would be very interested in feedback on whether it's a good idea or not. Okay, so what tests are the third-party CIs running? The OpenStack integration test suite, which is called Tempest, but with the difference that Cinder is configured to use the vendor's hardware, instead of the test back ends used in the gate. We also run additional Cinder-focused API and scenario tests that are contained in the cinder-tempest-plugin, which was one of the software products I mentioned earlier. That allows us to add extra integration tests for drivers that focus on particular areas of functionality, or on regressions we've seen for particular configurations, so it's quite helpful. If you want to see what one of those looks like, take a look at the review listed there, change 737380 on review.opendev.org. It's just an example of a patch to the cinder-tempest-plugin that adds some extra testing. So that's what we have going on with the drivers. Now, an interesting thing is that Cinder drivers can be reused for container persistent volumes in two different ways. There's something called Ember-CSI, which uses cinderlib, which I mentioned earlier. That allows you to use the Cinder drivers, which have been developed and tested with vendor back ends, without actually having to run Cinder, so it's an interesting project. And there's also the Cinder CSI plugin, which uses the CSI interface to connect to an actual running Cinder. So if you're running Kubernetes on OpenStack, say, and Cinder is there as part of the OpenStack deployment, you can use the Cinder CSI plugin so that the containers running on top of OpenStack can get access to Cinder and to persistent volumes. There's also a way to run Cinder in standalone mode, but that's kind of heavyweight. Anyway, you can use your favorite search engine to find out more about all of this.
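For the Cinder CSI plugin path, the wiring on the Kubernetes side is essentially a StorageClass pointing at the plugin's provisioner. A minimal sketch, assuming the provisioner name registered by the cloud-provider-openstack plugin and a made-up Cinder volume type called `fast-ssd`:

```yaml
# Hypothetical StorageClass backed by the Cinder CSI plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-fast
provisioner: cinder.csi.openstack.org  # name registered by the plugin
parameters:
  type: fast-ssd                       # maps to a Cinder volume type (assumed name)
```

PersistentVolumeClaims that reference this StorageClass would then be provisioned as Cinder volumes by the running Cinder service.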
But this is a nice feature, because vendors have spent time developing Cinder drivers, we've been asking them to run third-party CI, and this gives them another context in which to use drivers that have been tested very carefully. All right, now for what you're here for: you want to know what's new in Victoria. Okay, a few things. Microversion 3.61 adds the cluster name to the volume detail response when it's called in an administrative context. So regular end users don't see it, but administrators do, and that can be very helpful when you're troubleshooting. We've also got microversion 3.62, which adds a default volume types API that allows management of a default volume type for any particular project. Operators asked us for a way to have particular projects use particular volume types that are tied to, say, a particular back end or a particular storage class, and this gives you an API by which you can do it. We also have improved handling of the Cinder default volume type, and this improved handling has been backported to Ussuri (16.2.0) and to Train (15.4.0) to keep the behavior consistent. Default volume types have been around for a while, and in Train they were made mandatory, in the sense that Cinder does not allow you to have untyped volumes anymore. You can consult the release notes to see in what way this handling has been improved, but trust me, it has. Also, Zstandard compression support was added to the Cinder backup service. The default is still deflate, also known as zlib, but now this very popular modern algorithm, zstd, can be used with the backup service as well. A couple of new drivers were added, too: Dell EMC added the PowerStore driver for iSCSI and Fibre Channel, and Hitachi added the HBSD driver for iSCSI and Fibre Channel. In addition to that, many volume drivers added features beyond the Cinder-required features.
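The precedence that the default-type work gives you can be sketched in a few lines. The helper below is hypothetical, not Cinder's actual code, but it reflects the lookup order described above: a per-project default set via the microversion 3.62 API wins over the deployment-wide `default_volume_type` from `cinder.conf`, and because untyped volumes were made impossible in Train, there is always a final `__DEFAULT__` fallback:

```python
def effective_default_type(project_defaults, project_id, conf_default=None):
    """Return the volume type an untyped volume-create request would get.

    project_defaults: dict of project_id -> type name, as managed by the
    microversion 3.62 default-types API (hypothetical data shape).
    """
    if project_id in project_defaults:   # set via the 3.62 API
        return project_defaults[project_id]
    if conf_default:                     # [DEFAULT] default_volume_type
        return conf_default
    return "__DEFAULT__"                 # untyped volumes no longer allowed

defaults = {"proj-a": "fast-ssd"}
print(effective_default_type(defaults, "proj-a"))              # fast-ssd
print(effective_default_type(defaults, "proj-b", "bulk-hdd"))  # bulk-hdd
print(effective_default_type(defaults, "proj-b"))              # __DEFAULT__
```

On the backup side, the new compression algorithm is selected with the backup service's `backup_compression_algorithm` configuration option (set it to `zstd`).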
So if you look at the Victoria release notes, you can see a list of what's been added. Okay, and we had some security issues also. There was OSSN-0086, the Dell EMC ScaleIO/VxFlex OS back-end credentials exposure. That was fixed during the Victoria development cycle and then backported as far back as Queens. So the vulnerability does not occur in Victoria, because it was fixed before Victoria was released, but we discovered it during the cycle and it's been backported. That's something to be aware of if you use Dell EMC ScaleIO. There's also OSSN-0085: a Cinder configuration option can leak the secret key from a Ceph back end. It only applied to Ceph deployments that were using the rbd_keyring_conf option, and that option has been removed in Victoria. It was deprecated in Ussuri, when the OSSN was issued, and then we removed it in Victoria. Okay, one other thing I want to bring to your attention: there was an upgrade-to-Ussuri issue discovered during the Victoria cycle. It does not affect the Victoria release, but I want you to be aware of it. Now, if you've already successfully upgraded from Train to Ussuri, there's nothing to worry about, because the problem would not have allowed you to upgrade; if you were able to upgrade, you're fine. And if you started with Train, that is, if Train was your first OpenStack installation, you don't have to worry about anything either. But if you upgraded from Stein to Train 15.3.0 or earlier, and you did not purge your Cinder database before the upgrade (not that you need to purge the Cinder database in general; it just so happened that if you didn't, you ran into this problem), then please read the release notes for Cinder 15.4.0 and for Cinder 16.2.0.
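For OSSN-0085, the remediation on the deployment side amounts to making sure the deprecated option is gone from your RBD back-end section before or as part of the move to Victoria. A sketch, with the section name and paths assumed:

```ini
# cinder.conf -- hypothetical Ceph/RBD back-end section
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_user = cinder
# rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring
#   ^ deprecated in Ussuri (OSSN-0085) and removed in Victoria;
#     delete this line and distribute keyrings to nodes by
#     other means instead of passing them through the API.
```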
So there are several ways you can address this issue, but you need to read through the release notes and decide what's the best way for your particular situation. Just be aware that your upgrade path from Train to Ussuri may require some actions in the Train deployment before you do the upgrade. I just want to make everyone aware of that. All right, so what's planned for Wallaby? Well, one major thing is we're going to remove version 2 of the Block Storage API. It was deprecated in Pike, and version 3.0 is just like 2.0. Why would you use version 3.0 when you can use version 3.62? That's entirely up to you, but if for some reason you have scripts that are expecting the responses from the version 2 API, you can get something very much like those responses if you specify version 3.0 when you make your requests to the Block Storage API. Consult the Block Storage API reference documents for more information about that, but we will remove version 2 during the cycle. There are some new drivers that have been proposed. The Open-E JovianDSS driver has already merged, so that's a new driver that's guaranteed to come. The Ceph iSCSI driver is most likely going to be delivered; it's very close. And then KIOXIA is going to be contributing a KumoScale driver. It's kind of an interesting driver because it uses NVMe-oF, and they're going to make some updates to the os-brick library's handling of NVMe-oF, both to bring it up to date and to support KumoScale. So that's going to be interesting. We're also going to be doing a consistent and secure policies initiative. Not too much to say about that, other than that we will be consistent with the other projects, and the policies will be as secure as we can make them. And then there are going to be various internal improvements in Cinder. We have a whole list that we discussed at the Wallaby PTG, the Project Teams Gathering that was held about a week and a half ago.
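Pinning the base version the way I just described happens through the microversion mechanism: you send the `OpenStack-API-Version` header with the `volume` service type. A sketch of such a request, with the host and token obviously placeholders:

```
GET /v3/{project_id}/volumes/detail HTTP/1.1
Host: cinder-api.example.com:8776
Accept: application/json
X-Auth-Token: <token>
OpenStack-API-Version: volume 3.0
```

With that header, the response bodies stay at the 3.0 baseline, which is intentionally very close to what the version 2 API returned; omit the header or raise the number to opt in to newer microversion behavior.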
And if you're interested in seeing what these various internal projects are, you can go to the OpenStack wiki and look for the Cinder Wallaby PTG summary; there's a list of everything we discussed and what we plan to do. And if you want to contact the Cinder team, I've given you this short link, tiny.cc/cinder-info. It'll take you to our base contributors page, which gives you a very nice listing of all the repositories the project contributes to, our various means of communication, and what our basic processes are. So it gives you a good idea of what the Cinder team is all about. All right, and get involved! There are some things we would like you to do, where we could use some help. For instance, the Cinder documentation could use an analysis by a good information architect, or even just an information architect, or even a high school student could probably do this. Basically, we have documentation that's been written by various people, aimed at various audiences, and it's kind of interleaved. We would like to separate out the things aimed primarily at operators running Cinder, at operators configuring Cinder, documents aimed at end users, and documents aimed at developers. We have all of those, and we actually have some pretty good documentation, but it's not always easy to find things because of the way it's organized. So we could use some help from somebody coming up with a nice plan for a good way to organize it. It would also be good to make your back-end vendors aware that you value Cinder third-party CI and their drivers. It's not easy for the vendors to maintain the third-party CI, as we can see, because the CIs are constantly going down and having to be fixed. So it would be good to let your back-end vendor know that you think it's important that their third-party CI runs consistently on Cinder changes, because it guarantees better-quality code.
There's always the possibility of adding tests to the cinder-tempest-plugin, if you're so inclined; you may have run into a scenario that would be good to have tested, and we're always looking for that. And then there's an interesting article I've been telling people about. It was written in 2013, I think, but it's still very relevant: "10 ways to contribute to an open source project without writing code." So if you don't want to write code for tests, and you don't want to write code for features, there are various other ways you can contribute to open source projects like Cinder, and I encourage you to check that out. That's all I've got, thank you very much. I'll be happy to take questions at the appropriate time. Thank you very much.