Okay, we'll start. Good afternoon. This is Naina Patel, and Ravi, from HP, and we're going to talk about the OpenStack API: what we have seen to date and where the issues are from our testing against Diablo final, Essex, and now Folsom. We need some community reaction on how we can get better at this, so this is meant as a discussion rather than just a talk-through of our experiences. We'll start with the first section, documentation. Ravi, do you want to lead that? Yeah. So the OpenStack community has done a great job on development, QA, and documentation, and we've come a long way: all the functionality is supported through the Python clients on top of the REST APIs. But there are still a lot of gaps and a lot of improvements we can make. That way, new users' lives become much easier and they can come up to speed quickly. The binding authors, who build on top of the Python clients, can know exactly which API changes happened, work accordingly, and see what improvements they can make. Large-scale deployments, likewise, can see which API changes happened and what the documentation says, and deploy with confidence. So primarily this is a brainstorming session; we expect two-way communication, we want to take suggestions, and then we can pass them on to the right teams. We have been involved in the OpenStack QA team's discussions and decision-making, so we have a say there. The session is divided into four sections: documentation, support, testing, and development. So first, documentation. Each of you may have different pain points. One pain point I have observed is that there are different sources of information, different sources of truth.
How can we have one single source of truth, so that a user can go to one place and get all the information they want? Right now they have to look at nova help, the API docs, and so on. Today, when you look at the documentation, it's out of date, and the only way to figure out what's actually implemented in the APIs is to dig into the code: you really have to install DevStack, look at GitHub, look at the code, and work out which changes are implemented. That's very tedious for somebody trying to get up to speed, and it's not user-friendly for anyone trying to figure out what made it into trunk or master. So it becomes a complex problem, and we want to address it by creating a process the community can follow: when you submit a blueprint that changes documentation or an API, or proposes an additional API or extension, we need discussion on the blueprint itself. And we want to solicit feedback from the developers and the people working with the APIs here: when the code is written, checked in, and reviewed, and at the point it merges to the branch or trunk, we need a process for getting the documentation in place, rather than later. Because when you wait that long, there is no one left to look at the documentation, no one to check whether the implementation went in correctly. So we are trying to foster a discussion here, to get ideas from everyone on the right way to get the documentation up to speed. I would like to hear from the audience. That's not happening today, right?
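As an aside on discovering what a deployment has actually implemented without reading its source: the Compute API exposes its loaded extensions at the `/extensions` resource, so a small script can enumerate them. A minimal sketch; the sample response body is trimmed and illustrative (in practice it would come from an authenticated GET against the compute endpoint):

```python
import json

def extension_aliases(extensions_doc):
    """Given the parsed body of GET /v2/{tenant_id}/extensions, return the
    (alias, name) pairs for the extensions this deployment actually loaded."""
    return [(ext["alias"], ext["name"]) for ext in extensions_doc["extensions"]]

# Sample response body, trimmed to the fields we use.
sample = json.loads("""
{"extensions": [
  {"alias": "os-keypairs", "name": "Keypairs",
   "description": "Keypair support.", "links": []},
  {"alias": "os-quota-sets", "name": "Quotas",
   "description": "Quota management support.", "links": []}
]}
""")

for alias, name in extension_aliases(sample):
    print(alias, "->", name)
```

This kind of runtime discovery is a workaround, not a substitute for documentation, but it does give an accurate picture of a given deployment.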
I mean, we all know that documentation bugs and refactoring of the documentation aren't keeping up. Yes, we do it today: we log a bug saying "this API has no documentation," and they address it. But we are still one release behind on documentation; the Folsom documentation is still not complete. It's always catching up, like a race condition. And the other challenge is that we have been working on this for the last couple of releases, since Diablo final. The QA team didn't exist at that time, and we've come a long way; the community is growing, with many more people, more developers, and different kinds of business units coming in. We need to understand what we're sharing with them, and getting into the code is not a viable solution for everyone. As the gentleman just pointed out, a new person coming in and looking at the API doesn't know what to expect, which means they'd have to understand the whole product first. Another thing we're finding, and I have an action item from Monty to talk about it, is that the DevStack we tag with the repo is not stable. When the repo is tagged, when the release is out, we don't have a DevStack that is tagged against that repo. Now, I've been told just recently by a couple of people, and I have to investigate a little more, that that's not true once we cut Folsom. Then it would be good to put a note out to the community stating that, because there's a lot of misperception around it. That's my to-do, because I found a lot of people talking about this over the weekend. I haven't personally verified it: we're still working with DevStack on Folsom and didn't hit any issues ourselves, but someone else pointed out that there are issues.
So this is a documentation issue: should we consider an API complete only if its documentation is available? If there is no documentation, we can't say the API is complete; only when the API plus its documentation are available should the release happen. So who would take these action items to the community? The PTLs, right? This has to be filtered down to the PTLs; it's a process improvement we're suggesting. And given that there aren't a whole lot of QA people involved in testing this, it's difficult for these voices to be raised. We had a parallel session; did you attend? A few people from our team attended. Ronald, you attended, yes. And Annie is here? Yeah, Ronald. Sorry, I'll let you answer it. No. So the thing is that there are no release... Yeah. So we are actively looking at that process as well. Like everyone else, we have challenges, and we need to get the message into the community that anyone who finds gaps should raise those bugs. Not only HP: our QA team is actively involved, but there are other parts of the organization, and other companies that have QA, and even the developers. We encourage everyone to participate in this process. It would be nice to put this on the documentation mailing list as an outcome of this session. Yeah, because monitoring every check-in is a nightmare for anyone. Good. So let's move on to the next topic, which is support: support for the API bindings. Right now we provide support for the REST API and Python, but beyond that there are many clients in many languages, for example jclouds, the Ruby bindings, Windows PowerShell, and so on.
In OpenStack, do we commit to supporting jclouds, or the Ruby bindings, that kind of thing? Right now users just keep guessing; they're not sure whether they should pick up jclouds. Well, at least certain ones can be considered; they're under github.com/openstack, correct? Yeah, so they're essentially in core. So what makes them easy to... Primarily, at hpcloud.com we provide support for Ruby bindings, jclouds, PHP bindings, Windows PowerShell, Unix shell, all of those. So it would be nice to know how much support exists upstream. Okay, next one: EC2 support. OpenStack provides EC2 support, but Amazon is making a lot of changes: they keep adding new APIs and modifying existing ones. We're not sure whether we're a hundred percent covered or whether there is a cap, so it would be good to know. There's a mixed bag of communication on EC2 support in the community, and we need to understand from the development community: are we supporting, through the EC2 layer, all the functions that are in the OpenStack API today with JSON and XML? Because we keep hearing that it's going to be deprecated, and I would like to hear where we're going and what the roadmap looks like. I thought they were pulling EC2 and S3 out of OpenStack. Yes, that's exactly the question I'm asking: I saw emails along the line of deprecating EC2. Is that true, and can we put a statement around it, so that the people using it at least understand the community's stance? Anyone want to volunteer? Probably not a whole lot. Does anyone care about the EC2 APIs anymore? It's a valid question. I don't think we can answer that question ourselves; there are distros, there are integrators.
So we have no idea. And there's a difference between removing it from OpenStack and saying you can never call it, that it will never exist; it could just be a compatibility layer maintained separately. Right, exactly. Big time. Now, at the last conference in San Francisco, during the Folsom summit, we talked about a contract on the API. Does anybody know where we are with that, what we're doing, where we're positioning ourselves with that process for an API contract? Maybe EC2 should be part of that discussion as well, because I haven't seen any further discussion on the topic. It's a big discussion, and we don't know how to take it forward, but we'd love to see some traction on it; maybe the PTLs are the right people to insert this into their design discussions. Which brings us to supporting EC2 data formats, and we also want to talk about JSON and XML support: where do we stand? Primarily, we support the JSON data format out of the box, but there is a need for XML too, and in the OpenStack meetings they have discussed providing XML support. We need a clear-cut definition of which data formats we are going to support: one may be primary, say JSON, and some may be secondary, but there should be a clear definition. The set of data formats may keep growing, but we should define clearly how much we support. If issues come up on XML, are we going to address only critical issues, or minor issues as well? That's the kind of support statement we need. It seems like we are moving toward the API being the central integration point for everyone; it's how the entire system talks to itself.
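To make the dual-format cost concrete: every request body a client builds, and every response the server renders, has to exist in both representations. A sketch of building one Compute server-create body both ways; the field names follow the v2 request format we have tested against, but treat this as an illustration of the maintenance burden rather than a normative schema:

```python
import json
import xml.etree.ElementTree as ET

def create_server_body(name, image_ref, flavor_ref, fmt="json"):
    """Build the body of a POST /servers request in the requested format.
    Two serializers must be kept in sync for every resource the API exposes."""
    if fmt == "json":
        return json.dumps({"server": {
            "name": name, "imageRef": image_ref, "flavorRef": flavor_ref}})
    elif fmt == "xml":
        server = ET.Element("server", {
            "name": name, "imageRef": image_ref, "flavorRef": flavor_ref})
        return ET.tostring(server, encoding="unicode")
    raise ValueError("unsupported format: %s" % fmt)

print(create_server_body("web-1", "img-123", "1", "json"))
print(create_server_body("web-1", "img-123", "1", "xml"))
```

Each new attribute has to land in both branches (and in both parsers on the server side), which is exactly why a clear primary/secondary support statement matters.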
We need to think, as a community, about the API contract, and get serious about it: how do we expose APIs, what data formats do we use, what are the clients? There's a whole set of questions around the API itself, and we'd love to see somebody in the development community lead this effort, to bring clear-cut transparency to standardizing the processes and formats. I don't know how we should take this forward, or who the decision-maker is. There have been several discussions in the community, in meetings, on IRC, in last year's contract sessions, but nothing has moved forward, and we'd love to see something along that line. This is one of our bigger chunks of challenges: as the product evolves, if we don't have standard processes for documenting and for understanding the data formats and clients, we're going to have a lot of issues with integrators and with large deployments, and with going from one release to another. There's a whole DevOps model here, integrator issues, distro issues, so we need to figure out where we want to go with this. Does anybody have suggestions? This is a big topic that we'd like to discuss at a higher level. And we're not talking only about our own needs; I don't know how many people in this room are using this, but they may need to speak up as well. These are our observations from dealing with it day in and day out, and I'm sure a lot more people are using it whose feedback and comments on these problems we'd love to hear. Silence means nobody's using it, I take it? That also brings up versioning.
And backward and forward compatibility, because we're looking at deployments, and people are trying to use this in large operations, so we really need to think about it seriously. There was a discussion earlier on data migration, and this is similar: how can you deprecate a function that was working? So there is deprecation of functions, and there is also versioning support for changes. This in itself is a big topic that we probably need to raise. So we'll put something out, a kind of proposal, for the PPB to review one more time with the technical committee, take it from there, and see where we get answers; we can include specific examples in some of these areas as well. That includes backward compatibility too. If you have many clients, say the latest Python client 2.9, and you don't want to use the latest APIs, what happens? If I want to use an old version of the Python client, do we support it, will it work? That kind of information should be available. What's the life cycle? How many versions, and which ones? And supporting multiple libraries becomes challenging. It's not a pretty picture when you have a dashboard sitting on top of it; how do you manage that through the dashboard as well? It's a whole layer of integration issues. And then also debugging: when there's an issue, how do you debug it? So, should we cover testing? No one from the QA team is here, but that's okay; in general we need to cover the testing side. We work actively with the OpenStack QA team, but we still want to mention some of the issues we run into.
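One concrete mitigation for the old-client-against-new-server question is version discovery: the Compute endpoint advertises its versions at its root, so a client can negotiate rather than guess. A minimal sketch; the sample document shape follows the version list we have seen, but treat the exact fields as illustrative:

```python
def choose_version(versions_doc, client_supported):
    """Pick the newest API version that is both advertised as CURRENT by the
    server and supported by this client. Raises if there is no overlap, which
    is exactly the backward-compatibility failure we want surfaced early."""
    candidates = [v["id"] for v in versions_doc["versions"]
                  if v["status"] == "CURRENT" and v["id"] in client_supported]
    if not candidates:
        raise RuntimeError("no mutually supported API version")
    return max(candidates)

# Sample of a GET / response body from the compute endpoint, trimmed.
sample = {"versions": [
    {"id": "v1.1", "status": "CURRENT", "links": []},
    {"id": "v1.0", "status": "DEPRECATED", "links": []},
]}

print(choose_version(sample, {"v1.0", "v1.1"}))  # -> v1.1
```

For this to work across releases, the deprecation life cycle of each version has to be documented, which is the point being made above.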
One is that we're doing extensive testing against the REST APIs, but our test coverage for EC2 or the Python clients doesn't exist today; that's one issue. Then Tempest: I think we need to take it a bit further. This is the framework we settled on for API testing, but today Tempest mostly supports positive tests. When we want to add the negative tests we care about from an API perspective, the notion is that we can't put them into the repos, can't check them in. So we need better testing tooling that allows negative tests as well as positive tests against the APIs; today Tempest supports mostly the positive side of the functions, so it's quite restricted. This is something to discuss with the QA team, to see how we can improve and get more tests added for the APIs: things that break, error codes, boundary conditions, error conditions, limits, quotas. We already have a lot of those tests, but we need to push them out, and we need to figure out the best mechanism for that given Tempest's limitations. Also, API performance: we don't know what the response time should be. If we had a baseline response time, we could compare across releases: are we doing well in the Folsom release for a particular API? Right now that kind of API performance work lags, and there are gaps. When you're writing the code, you're writing it to some kind of thresholds, and it would be nice to expose those thresholds, or document them somewhere people can use to measure their metrics. True, but at least filtering the restrictions up from the code would be nice. Yeah, but DevStack is the same environment.
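By "negative test" we mean asserting the API fails correctly, not just that it succeeds. A minimal sketch of the assertion side; in a real suite the helper would wrap an actual HTTP call, but here a stand-in status code keeps the logic runnable without a cloud:

```python
def assert_error(status_code, expected, body=""):
    """Minimal negative-test assertion: fail unless the API returned the
    expected error code. Catching a 200 or a 5xx traceback where a clean
    4xx belongs is the whole point of a negative test."""
    if status_code != expected:
        raise AssertionError(
            "expected HTTP %d, got %d (body: %.200s)" % (expected, status_code, body))

# Example negative case: GET on a server UUID that should not exist must
# return 404, not 200 and not a 500.
fake_response_status = 404  # stand-in for requests.get(...).status_code
assert_error(fake_response_status, 404)
print("negative test passed: nonexistent server returns 404")
```

Boundary conditions (over-quota creates, over-limit lists, malformed bodies) fit the same shape: issue the bad request, then assert the specific error code and message.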
It can run over and over again; the reference implementation is just DevStack. And we also, of course, have the one cloud we run with Rackspace, a multi-node system, but it doesn't test performance; it just runs Tempest, which is the positive API tests, and that's pretty much it. But we should have some benchmark and be able to compare: when we do a release, are we doing well? It's tricky, I agree; a better way to put it is "characteristics of the services under test." Okay, next: development topics. There are many changes happening on the architectural side; what's the impact on the front end, the API side? For example, UUIDs: some architectural changes happened there. For an existing deployment that wants to migrate to the new release, we don't know the impact on the back-end side, whether we'll lose data because of the changes, that kind of thing. If the blueprints that address architectural changes stated whether there are implications on the API or upgrade side, that would be very useful to the people deploying. Basically, filter all that information out to the wider community. Right now a blueprint doesn't give much information; if it described how the implementation affects the API, people could benefit. And bug prioritization lists for specific clients; yeah, maybe, we talked about that. So these are our observations, and these are areas where we need to improve some of the processes, so we can focus more on good, solid test plans and testing. We will write a proposal to the PPB on the API contract and see where it goes. But I'd love to hear from the development community: what else do you propose? Are these the issues you have seen?
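The per-release performance comparison described above can be sketched very simply: record a median latency per API call as the baseline at one release, then flag regressions beyond a tolerance at the next. The function names and the 25% tolerance here are our assumptions, not anything the projects define:

```python
import time

def time_call(fn, runs=5):
    """Time a callable over several runs and return the median latency in
    seconds; fn would normally issue one API request against a fixed
    environment such as a standard DevStack."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

def check_against_baseline(median, baseline, tolerance=0.25):
    """True if the median is within the recorded baseline plus tolerance;
    False signals a candidate regression worth investigating."""
    return median <= baseline * (1 + tolerance)

# Demo with a no-op standing in for an API call:
m = time_call(lambda: None)
print("median latency: %.6fs" % m)
```

The median and a fixed reference environment matter more than the exact tolerance; absolute numbers are meaningless across differing hardware, but release-over-release deltas on the same DevStack are comparable.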
Yeah, exactly. That's an extremely valid point, because to date we have only been addressing fresh installs. Now, as the releases go on and we keep adding more products and components, we need to start thinking seriously about the upgrade path and supporting it. A lot of companies are deploying this, including distros and integrators, and we need to understand how to take it forward without breaking existing customers' workloads. So with Grizzly we'd love to see those discussions for any new changes being proposed, and for any incubating projects coming up: the backward-compatibility implications for teams that have deployed, say, Folsom, and how they get to Grizzly. It would be a nice thing to have. Even for test development: if more details were provided in the blueprint, we could write tests accordingly. So I think that concludes our presentation and the observations we have. Any questions, feedback, comments? Very quiet group, huh? Okay, then I think that's it. If nobody has any questions, thank you for your participation. Thank you very much.