OK, let's get started. This is API microversions for operators: what, why, and how. We're going to talk about what a microversion is, why someone would want to use microversions, and how to use them, and then we'll leave a little time at the end for some Q&A. So my name is Clinton Knight. I'm a software engineer at NetApp, focusing primarily on our Manila and Cinder drivers. I've been in the application storage management space for about 17 years, the last two and a half having a great time in OpenStack. I'm a core developer in the Manila project, and I did the port of microversions from Nova to Manila. And I'm Scott D'Angelo. I'm a software engineer with Hewlett Packard Enterprise. I started off with our HP public cloud as a DevOps engineer and a deployer, moved on to be a distro maintainer with our Helion product, and I'm an OpenStack Cinder core. And I did the port of microversions to Cinder. And I'll say that the heavy lifting was done in Nova by Christopher Yeoh, who's no longer with us. He gets all the credit for the cleverness of microversions and the hard work; we just ported it. So why use microversions? Sean Dague, when they rolled this out in Nova, put out a blog post that described several actors and why they would care about microversions: Jackson, Emma, Sophia, Aiden, and Olivia. And hopefully these actors will represent some or most of you here. So Jackson, the absent, is the developer who writes an app and disappears. The app lives on in production, and it needs to continue to work as we bump the API versions. We can't break those existing apps. So this is a big deal today. We've argued for years in Cinder that we can't do things in our API that we'd like to do, because it would break existing apps, written years ago, perhaps. So we needed a mechanism to be able to change our API and still maintain backwards compatibility. Then there's the opposite end of the spectrum. We call Emma the active.
That's someone who's consuming new APIs as they come out, very interested in the new APIs, willing to write the code to toggle on the logic. Sophia is similar to Emma, although in her case she's writing a cloud application that has to run against multiple clouds. So she needs her application, at runtime, to be able to inspect what it's talking to and react appropriately. She needs to understand what all the different versions are, and what features are available from each of the clouds that she's interacting with. Aiden is a cloud operator. He's the one that's deploying the OpenStack services and managing the users. He needs to understand who is using the features of his cloud, or conversely who's not using certain features when they get old enough. He needs to know when it's safe to upgrade in such a way that some really old features might change in an incompatible way or disappear altogether. And then finally, like Scott and myself, Olivia is a contributor to the OpenStack projects. Historically, the major API versions in the projects only happened once every few years, because it was a significant effort to do that. But she doesn't wanna have to wait months or years for the next major version to get her feature into the projects. So she needs a way to do that more quickly. So what are API versions? Many of you are probably familiar with semantic versioning: major dot minor dot patch release. OpenStack doesn't use the patch release idea, but we've traditionally done major versions when you make an incompatible API change. A minor version would just be a new feature added that didn't break anything. But microversions are kind of like minor versions within major versions. In Cinder, we're at 3.17, and we'll keep incrementing that second, minor version number. But here's where it breaks with semver: we will never add a new major number. We will never go to a 4.0. And with Manila and Nova it's the same. They're at 2.x, 2.22. They will never go to a 3.0.
But we will be able to make a backwards-incompatible change within the major version by using microversions. So Cinder originally had an API endpoint: the Cinder URL, port, and /v1. We added a v2 endpoint in Grizzly, and it took until Juno for Nova to start using this new v2 endpoint. So that in and of itself was a bit of a problem for us. We wanted to make some changes, and we wanted Nova to be able to pick them up in less than a year and a half. What happens if we need to keep adding breaking changes and versions? Well, you end up with a proliferation of endpoints, and this drives the service catalog people crazy. We were begged not to add a v3 endpoint, which we did anyway for microversions. But it is a bit of a problem for deployers and the catalog maintainers to keep adding endpoints. So the answer is microversions. We added the v3 endpoint, which is exactly semantically compatible with the v2 endpoint. And the reason we did that is we want people to know specifically that they're getting microversions. So you'll always hit this v3 endpoint, and we'll get to the guts of how microversions work. One other driver for this was that we needed to make some changes in the API that Nova consumes. We wanted some new semantics for attaching and detaching volumes; we want to add volume multi-attach. So we needed some way for Nova to be able to discover what version Cinder is at, without having to see which endpoints are there and hit different endpoints. So this also solves that problem for us. So how does it solve the problem? Microversions are fundamentally different from the previous mechanisms, where you had just a v1 or a v2, in that a microversioned REST endpoint actually supports multiple versions of that API at the same time. With Manila v1 or Cinder v2, you get that one version and that's it. And so the projects did have some rationale for making compatible changes within v2, but their hands were really tied for making breaking changes.
But with microversions, you can have one endpoint, say the Manila v2 endpoint, which as of Newton can actually service API requests from version 2.0 all the way up through 2.22 and everything in between. And yes, as Scott said, there are potentially breaking changes along the way, but that's okay, because if something would break you but you're asking for an earlier microversion, you're not gonna have a problem. Not only that, but each API within a given microversioned endpoint is independently versioned. So theoretically, you could write an application to list the Cinder volumes at version 3.5 and create a new Cinder volume at 3.10. There's probably not a good reason to do that, but it's theoretically possible. These things are all independently versioned. So if you've got one endpoint supporting all these versions, how does it know what to give you? The answer is that it's all up to the caller. A specific version of an API is requested by whoever is calling the REST endpoint. So how does that happen? REST APIs from OpenStack are bound to HTTP, and so what the Nova guys said was: we'll define an HTTP header that you can send along with your REST request to specify the API version that you want to interact with. And when I say version, that means all of the arguments, all of the return values, all of the semantics within the server are supposed to be preserved correctly for each microversion that that server purports to support. So what you see here is Nova defined this header, X-OpenStack-Nova-API-Version, followed by a version string. Manila followed suit with X-OpenStack-Manila-API-Version. Ironic also did it; I think they were the second project to implement microversions. After Manila did it as the third project, the OpenStack API working group got hold of this idea and recommended some changes to the header, and that was in time for Cinder to pick up the new version.
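To make the header differences concrete, here's a small sketch. The header names are the real ones just described, but the helper function itself is purely illustrative and not part of any client:

```python
# Illustrative helper only; the header names are the ones described above.

def microversion_headers(service: str, version: str) -> dict:
    """Build the HTTP request header a project expects for a microversion."""
    if service == "nova":
        return {"X-OpenStack-Nova-API-Version": version}
    if service == "manila":
        return {"X-OpenStack-Manila-API-Version": version}
    if service == "volume":
        # API-WG style later adopted by Cinder: one shared header name,
        # with the service type and the version together in the value.
        return {"OpenStack-API-Version": f"volume {version}"}
    raise ValueError(f"unknown service: {service}")

print(microversion_headers("manila", "2.22"))
# → {'X-OpenStack-Manila-API-Version': '2.22'}
```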
So you see the header for Cinder is OpenStack-API-Version, followed by the service name, volume, and the microversion. So just be aware of that minor difference there. All right. So, with a little bit more detail, and we'll see this in a live demo a little bit later: if you didn't know, you can use the debug switch for some of the clients, including Cinder and Manila. It'll spit out the curl semantics for actually invoking something. So if you wanna re-issue the command yourself, or tweak it, or call a different API fairly easily, this is a good way to start. But you can see here, this is a GET call to Cinder v3, and in large font there you can see the header, OpenStack-API-Version, asking for volume microversion 3.2. All right. So just because we have microversions, and there's never gonna be a Cinder v4 and there should never be a Manila v3, that doesn't necessarily mean that deprecation is a thing of the past. We still have to talk about that. API deprecation within OpenStack has been pretty uncommon and is itself a controversial topic. You know, there are some folks in the community who believe that once you've released an API, it should be there forever, so that any application written to an API will never break. There are others in the community who understand that forever is a very long time, and that after some period of years, the weight of the baggage of carrying around really old semantics from old APIs begins to impede forward progress. So I think that's a tug of war that's probably gonna go on indefinitely. But specific to Cinder and Manila: Cinder is at v3 with microversions, and I think the Cinder community understands they've still got a lot of folks using the v1 API with Cinder. So I think they don't feel the freedom at this point to deprecate and remove v1. Manila is a little bit younger and behind Cinder on the adoption curve, and having gone to microversions in v2, v2.0 being functionally identical to v1.
In the Manila community, we do feel like we have the freedom to deprecate and ultimately remove v1 without causing any harm. Even if, several years from now, we've deprecated and potentially even removed the older non-microversion endpoints, that does not necessarily mean that deprecation is a thing of the past at that point. So again, that's gonna be something that's subject to a lot of discussion and decision. But nevertheless, deprecation within a microversioned endpoint would be accomplished by just raising the minimum version that a given endpoint accepts. And that mechanism is already in the code. That's not a technical problem; it's purely a social one, once most folks aren't using the really old microversions. A few comments about experimental APIs. So I think a lot of the project teams have realized that sometimes six months just isn't enough to develop a really complicated feature. On the Manila side, we've been working on things like migration of file systems and replication and other things, and we've needed multiple releases to get to a point where we're comfortable enough with an API to support it long-term. So one of the things that I did when I did the port from Nova was to introduce the concept of an experimental API. The way that works is the developer specifies in the code the microversion for a new API, but can also set a Boolean flag that says this is experimental. And so the benefit for users is that they get early access to features that are in development, beta-level stuff, if you will. So they can kick the tires and give the project team feedback before it's set in stone and subject to long-term support. But of course we don't want anyone to be surprised and develop an application calling an experimental API without realizing that.
And so to invoke something that's experimental, you have to provide this additional header, X-OpenStack-Manila-API-Experimental, as an explicit acknowledgement that you know what you're doing but you wanna try this out. On the other hand, the benefit to the project teams is that an experimental API is defined as something that can change or even be removed without being subject to the deprecation rules of OpenStack. So there's a win-win on both sides, and we would love to hear from you, from the deployers and end users, if you find that notion valuable: the ability to try stuff before it's fully baked, offer feedback, and be able to effect change. So just to give you an example between Cinder and Manila, which we work on: both projects have been working on replication features for quite some time. Cinder has actually had a couple of false starts along the way, and at least at one point decided that they needed to make a breaking change to the API for replication, and they did so without regard for the deprecation rules in OpenStack. I think, in the position that they were in, they did what they had to do. But nevertheless, Manila released its replication feature with the experimental flag turned on for those APIs, and we've been able to mature that feature over a couple of releases, and we've really enjoyed the freedom of doing so while still being able to talk about the feature, get folks using it, demo it here at Summit, and so forth. So anyway, if this is something that you find valuable, we'd love to hear from you. So how does this affect our actors? The Cinder API version 3 is identical to v2 semantically, so we'll always have v2, and in fact we'll always have v1. We've made that decision not to deprecate. So if you write an app and you disappear and your app lives on in production for years, it will always still work. There will be no breakage of anything no matter how far you advance the API.
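The experimental opt-in described a moment ago might be gated roughly like the following sketch. The experimental header name matches Manila's; the decorator and exception names are invented for illustration and are not Manila's actual implementation, and the 2.11 microversion is likewise only an example:

```python
# Hypothetical sketch of experimental-API gating; everything except the
# header name is invented for illustration.

class ExperimentalNotAcknowledged(Exception):
    """The caller did not opt in to an experimental API."""

def api_version(min_version, experimental=False):
    """Mark an API handler with its microversion and experimental status."""
    def decorator(func):
        def wrapper(headers, *args, **kwargs):
            if experimental and headers.get(
                    "X-OpenStack-Manila-API-Experimental") != "true":
                # Experimental APIs require explicit acknowledgement.
                raise ExperimentalNotAcknowledged(func.__name__)
            return func(headers, *args, **kwargs)
        return wrapper
    return decorator

@api_version("2.11", experimental=True)  # version number is illustrative
def create_share_replica(headers):
    return "replica requested"

# Succeeds only because the caller sent the acknowledgement header.
create_share_replica({"X-OpenStack-Manila-API-Experimental": "true"})
```

Calling `create_share_replica({})` without the header would raise `ExperimentalNotAcknowledged`, which mirrors the behavior described in the talk: no surprises for callers who never opted in.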
For our Emma, the active user, we have version discovery at the endpoint, so you can find out the highest version supported by the server, and she can use logic within her apps to decide whether to use the new semantics or not. For the multi-cloud integrator, similarly, they can probe the cloud using version discovery and see if they need to do anything that involves changing their apps. Our own infra team is happy with this model, because that's what they do: they run against, I don't know how many different clouds it is now, but five or six different clouds. Cloud operators can actually look at the inbound requests and see what their users are using. If the users are still using older versions, they know that they have to support them. If users are upgrading their apps and using the newer APIs, they'll be able to determine that programmatically. And then as contributors, we can put new API features in much more quickly. The core teams are much happier to approve changes to the API if they know that they're versioned in this way and not gonna break anybody. So it's changed the speed of development tremendously for new APIs. All right, so let's switch gears here and look at a demo of how microversions work. When an application interacts with a microversioned endpoint, the first thing it's gonna wanna do is call the versions API to learn what versions are available from that project. All right, so I'm gonna do a manila list command here with the debug switch so we can see what's happening. So there are a couple of things here. First of all, we see a GET to the Manila root without anything else there: no v1, no v2, no resource name or anything. So this is the versions endpoint. It works the same way in Cinder; you can just send a GET to the root of the endpoint. And what you get back is a 300, which means multiple choices, and the body is a version structure. So we can see a couple of things here.
We can see that the current version is available at the Manila root slash v2, so the current Manila version is version 2. And if we look at the version fields, we can see the minimum version this endpoint is happy to work with is 2.0, up to 2.22. There's also in this structure a supported value, which is the v1 endpoint, and its version fields are empty. So we can learn a number of things here. There are two different Manila endpoints, v1 and v2. V1 is not microversion-aware, but it's still supported. V2 is microversion-aware, happy to speak all the way up to 2.22. But we asked for a list of shares. So here's the GET to the v2 endpoint for shares, and here is that magic header that we talked about: API version 2.22. So this is us requesting the shares from the Manila endpoint using the 2.22 version semantics, okay? Well, the server says, okay, no problem, 200. But it echoes back in the response headers the same thing, 2.22. This is the server's acknowledgement: I understood what you asked for, and I'm giving you the version that you want. And I'll interject that the Cinder CLI does not do this automatic version negotiation. You'll have to say, I would like version 3.7, or 3.10. We're working on that for the Ocata release; I'm sorry, I always get that name wrong. For the Ocata release, we hope to have the same identical version negotiation, where it'll automatically determine the highest supported version. But as of now, you have to manually add the choice of which API version you're asking for. Okay, so what happens if you don't send the header? So I'm gonna do the same thing here, but not send the header. So here we see the GET to the v2 endpoint, but nowhere in here is the version request. Well, it's not an error condition; I still got a 200 back. But the server says, hey, you didn't ask me for anything, so I'm gonna give you 2.0. So that's the defined behavior of microversions.
If a specific request is not made, the microversioned endpoint will respond with the oldest version that it knows how to deal with, the theory being that that would be the one that's most compatible with the most clouds. You could have clouds with multiple different versions, different releases, different newest versions, but if the oldest ones are all at 2.0, your application should work across all of them. Okay, what if I send a microversion request to an endpoint that doesn't know anything about microversions? All right, so here I'm gonna specifically hit the Manila 1.0 endpoint, which we saw was not microversioned, okay? So here's the v1 request, and here's the header asking for 2.22, okay? That's not a problem. Now, I said that the v1 endpoint is not microversion-aware. Actually, it's just microversion-aware enough to echo the header back: you asked me for 2.22, but you've sent it to a 1.0 endpoint; I don't know anything about microversions, so I'm giving you 1.0, have a nice day, okay? All right, so it's important to understand, as features are added to the projects, the microversion at which the features were added, or at which a breaking change was potentially made. So let's do another call, okay? Manila, being the shared file systems project, can provide the network export location list for shares, all right? So I asked for this at microversion 2.8, okay? And the server came back with a 404: I don't know anything about that endpoint at microversion 2.8; I'm sorry, I can't help you. Well, it turns out that API was added at 2.9. So as the application author, you need to review the release notes for a given release if there's a new feature that you wanna use, because you'll have to specifically ask for a microversion at or newer than where it was added. So if I repeat the command with 2.9, it works fine. But you'll notice that there's an empty thing here, okay?
Well, with network access to file systems, sometimes one path is more efficient than another, but the data's not coming back. That's because the preferred-path field was added at 2.14. So one more time: if I shoot the command with 2.14, we get the data. And so some of our developers have asked, well, I just need to add a field to the response of an API; it's a compatible change; do I really have to bump the microversion? And our answer is, well, yes, you do, because if someone is depending on this new field, they need to be able to request the microversion and be guaranteed that it's gonna come back so that they can use it. So virtually any change to the API requires a microversion bump. But the beauty of it is that bumping a microversion typically doesn't require hardly any code change, and certainly no code duplication. It's designed to be a very lightweight mechanism. All right, so Scott mentioned a little bit about version negotiation, so I'll show you how that works, okay? So I've been using the Manila client, which is happy to work with version 2.22, same as the server. But what if we have a newer client? I'm actually gonna go into the client and change its version to 2.30, and then issue a list command. So we can see, when we go and get the versions, we actually send a microversion header. The versions API itself is not, in fact, versioned at all. But we can see that the client is starting with 2.30. Well, the server comes back, like we saw before: I'm good from 2.0 to 2.22. And so it's the responsibility of the client not to stray outside of what the server can handle. So the client says, okay, fine, I see what you can do, server, so I'll talk to you at 2.22. The server responds at 2.22, and everything's fine. All right. Conversely, if I have an older client, I can reduce this number to, say, 2.15. And the same thing's gonna happen.
We start out with 2.15, but the server says, hey, whatever, I'm good from 2.0 to 2.22. But the client says, well, I'm not; I would really prefer to stick with 2.15; that's the best I can do. And the server says, okay, fine, I'll talk to you with the 2.15 version semantics, no problem. All right, so with that, I think we're happy to take some questions. You've had 22 versions in like two years, or one year; what happens after 10 years? You will have 200 microversions. Do these expand the code base enormously? Well, it doesn't expand the code base much. You really version different functions, or even different code paths within a method, with a simple wrapper. And so what you're really doing as you bump these versions is indicating that you've added a change, which is often a backwards-compatible change. So in the past, every time we would add a new feature, we would write the same code; we just didn't bump a minor version. In other words, Cinder was at 1.0 for several years, then they went to 2.0, but we didn't have any minor versions within it. So just because we're bumping these versions for every addition to the API or every new field we're adding, well, if we were using strict semver, we would have been bumping that version for these changes anyway. So the code itself is always gonna be in there anyway; it's just got a simple wrapper saying, at API version 2.17, you execute here. It doesn't bloat the code. If you look at the Cinder code, or Manila, which has a lot more in it, you can see that it's just a wrapper around different methods, or different logic within a method. Yeah, to answer your question, Manila released microversions in Liberty, and so after three releases we're up to 2.22. So it's not something that we're gonna see bumping very, very quickly. But like Scott said, it used to be, with v1 and v2, that you tended to have significant duplication of code between the major API version bumps.
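That "simple wrapper" idea can be sketched with a toy dispatcher. This is illustrative Python only, with invented names; it is not the actual Nova, Cinder, or Manila implementation:

```python
# Toy per-microversion dispatcher; illustrative only, not project code.

def parse(v):
    """'3.17' -> (3, 17), so versions compare correctly as tuples."""
    major, minor = v.split(".")
    return int(major), int(minor)

class VersionedAPI:
    """Holds one implementation per microversion range for a single API."""

    def __init__(self):
        self._impls = []  # list of (parsed_min_version, function)

    def register(self, min_version):
        def decorator(func):
            self._impls.append((parse(min_version), func))
            # Keep newest-first so dispatch picks the latest eligible one.
            self._impls.sort(key=lambda item: item[0], reverse=True)
            return func
        return decorator

    def call(self, requested, *args, **kwargs):
        # Pick the newest implementation the requested version allows.
        for min_v, func in self._impls:
            if parse(requested) >= min_v:
                return func(*args, **kwargs)
        raise ValueError(f"no implementation for version {requested}")

list_volumes = VersionedAPI()

@list_volumes.register("3.0")
def _list_volumes_base():
    return {"volumes": []}

@list_volumes.register("3.17")
def _list_volumes_with_count():
    # A newer microversion adds a field; older callers never see it.
    return {"volumes": [], "count": 0}
```

So a request at 3.5 gets the 3.0 behavior, while a request at 3.17 or later gets the new field; nothing is duplicated, and each version's logic lives in its own small function behind the wrapper.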
With microversions, there's virtually no duplication. So the hardest thing a developer needs to do is make sure, when he's adding the functional tests as well as the unit tests, that he tests the semantics of the API on either side of the microversion bump that he made, to make sure he preserved the older semantics of the API as well as the new ones. The second thing is API compatibility: basically, let's assume we have some tester who decides to run a test script, increasing the API version one by one on each call. Would it pass or not, if all those calls are available in all API versions? I mean, when you have one request going to API version 10 and a second going to API version 15, will the internals of the project be consistent with those changes across different versions? So I think I understand what you're saying: your test will have to understand what those API versions are gonna return and what they should expect. Yes, yes, the client side is okay. I mean, on the back side, inside the project, because one API call was doing something less and another is doing something more, you can get different records in the database depending on the API code, and this can lead to untested scenarios. Well, it's the same old testing problem, right? You're adding more complexity; you have to have a more complex test matrix. The complexity is 22 versions now, I completely understand. But think about it this way. Let's say we didn't have microversions, and we just added a Cinder manage-list call to see what manageable volumes are on a backend array. Well, you'd still have to test whether or not that manage-list works. If you have an older version, a Liberty version, it doesn't have that. So how do you know, as a tester, whether you should test it? You know whether your server's running Liberty or whether it's running Newton. This way, you can program the logic: you can say, if the server says it supports 3.17, I know that the manage-list will work.
I could run my test; if it doesn't, I can't. But in the past, you would have to know manually that you were running Liberty or Newton. I mean, there was no programmatic way to discover what you were dealing with, unless you tried to run the command and got back a 404. This way, you can anticipate whether or not the method, the API, is supported by hitting the version endpoint and checking the version. So I don't know that it changes anything in tests; in the past, you still had to figure out whether or not it was supported. You just have, I think, a smoother mechanism. Yeah, and we wrestled with how to test this thing. Given infinite resources, we'd run Tempest against every microversion we have. But we're not doing that; we don't feel the need to do that. But like I said, every time we bump the microversion, we test around whatever changed. We'll test it before and after that particular version to make sure. If we add a field at a given microversion, we'll actually add code to strip that field out for previous microversions, so the API appears to work the same. Thank you. I wanted to get your opinion on an issue we had in Nova. Recently, we removed the Nova-network functionality, and we used a microversion bump simply as an announcement: after this point, now that this is out, you can no longer call Nova-network functions. You can still call the old microversions for any of the other previous things that changed. But this idea that microversions make everything backwards compatible, it raised some controversy. Should we do this, or should we bump everything up past that level, raise the minimum, but then all that old functionality is gone? So I was wondering if you guys had run into anything like that and how you approached it. So in Cinder, we haven't run into it, but we've started to discuss it, and people are very hostile to the idea of removing something in the way that Nova did.
We know Nova did it, and Manila has done this, and there are definitely people in the Cinder community who are completely against it. Now, there's a deprecation cycle, so we floated the idea: if we follow the standard deprecation, can we remove stuff? Well, yes, but we would probably be very hesitant to do what Nova has done, and Manila has done the same thing. In Cinder, people are pretty adamant against it. Even with the deprecation cycle? Well, with the deprecation cycle, maybe, because we've deprecated things that are clearly not being used, but to remove something in the way that you guys have, I don't know that we would do that. I mean, we're still supporting v1 forever. People raised their arms in protest when we talked about it in Tokyo, infra mainly: we're still using v1. So in Cinder, people are very much more conservative about removing things. Yeah, your question is very relevant to Manila, because with Manila we're providing file systems, network file systems, directly to instances, and we do so with Nova-network, with Neutron, and with standalone config options. So the removal of Nova-network was definitely of interest to us, and so we blame you, just to be clear. But you guys followed a clear deprecation policy for that feature, and we chose to do exactly the same thing. So at a certain microversion, we will strip out Nova-network support, but we are following the deprecation rules in that case. So I guess the removal is one thing; the other part of it is a microversion change that doesn't give you changed functionality, it's just a marker saying, this is where we removed this functionality. Is that consistent with your understanding of how microversions work?
Well, yeah. So as far as we can, we'll try to preserve older semantics, but you gave an example where it's just not gonna be possible for us to continue to support something. In this case, we couldn't if we wanted to: Nova-network is gone, right? And so in that case we have no choice; we can't provide that functionality in older microversions, so we'll announce it to the world with a deprecation announcement and then strip it out at a given microversion. So the unfortunate thing is that when, I don't know who first added a second endpoint, Cinder maybe, or somebody, they shouldn't have done that. If we'd had microversions from the start and we wanted to use semver, you could just bump up to 2.0 and go from there, and then bump up to 3.0, and you wouldn't have had this proliferation of endpoints if you'd just kept it all at the root endpoint. But since we started down that path, we'd confuse users if we ever bumped the major version without adding a new endpoint, is sort of the conventional wisdom. I don't know if that's true, but now we can't say, hey, breaking change, Nova, we just removed Nova networking, we'll just bump to 3.0, because people expect a slash-v3 endpoint. And that's unfortunate, because everything would be great if we'd had this from the start: you could just bump the major version, add no new endpoint, and just use the microversions to figure out what you want. Good, all right, thanks. Yeah, I don't think there's an easy answer there. I just wanted to ask: do you rely on a common framework to implement the microversions? No. Which framework? A common framework. I mean, ultimately this all should get moved to Oslo; there's a lot of commonality. There are some subtle differences in how people do it between Manila and Nova, and Ironic actually isn't even using WSGI, so they've got quite a bit of different code.
So ultimately it should be common code. But since Manila was forked from Cinder, we're already very code-compatible, and so those two are very similar. Nova is what we originally ported from, so it's very possible to do, at least with those three services, and it should be done someday. I mean, it's typical: somebody does something, someone else does it too, and sooner or later it goes to Oslo when someone wants to do the work. But it's not there yet. Other questions? Anyone else? All right, thanks everyone.