All right, good morning, everyone. Welcome to the session. We're going to talk a bit about how the tools we're using in the interop space and the RefStack project can help us identify whether a tool or a specific application works on an OpenStack-based cloud. We've done quite a bit of work in that space in the last years, and we have some ideas on how to move this forward, which we want to present to you. Maybe, Catherine, you introduce yourself.

Hello, everyone, and good morning. I am Catherine Diep. I work for IBM, and currently I am the PTL of the RefStack project.

Hi, I'm Chris Hoge from the OpenStack Foundation. I work on OpenStack interoperability and administer the interoperability program for the Foundation.

I'm Kurt Garloff from T-Systems. We run Open Telekom Cloud, which is a public cloud based on OpenStack, and interoperability is one of the areas where we first started to support the project and the community with our contributions. I've been in the OpenStack community for a while; we actually started our first OpenStack work in 2011, so it's been a fun and nice ride since then.

Good. One remark: the unfortunate thing we found out is that there's another interop-related session in a room close by, so we probably only have half of the audience we would like to see here. But there's also an interop working group meeting tomorrow morning, I think at 9 o'clock, where the interoperability working group will get together, plan the issues we're going to work on for the next few months, and work on some of the things we're going to talk about in this session today. If you're interested in that, please feel free to join. These sessions are also being recorded, so if you want to see what's happening in the session next door, you'll be able to find that online pretty soon.

Good. Well, let's get started. Talking about interoperability, let's look at and understand the problem we're trying to solve. On the left side, I tried to be artistic and do a little drawing of the problem that some of the other clouds have to solve. Let's call it the orange cloud: one big public cloud, with a number of regions maybe, but one big public cloud. And many users, who of course come with different expectations and needs, but they're all talking to the same APIs, the same cloud, the same installation managed by the same company. They have some challenges as well, because not all regions have the same list of services, but that's still a pretty simple problem to solve.

If we look at the OpenStack universe, it's a bit harder. There isn't one big public cloud that is the reference for everything else; there are many. There are a few public clouds out there and many private cloud installations based on OpenStack. And if you've done an OpenStack installation, one thing you learn pretty quickly is that you have a lot of choices. There are a lot of configuration options and a lot of design patterns you can follow to adapt OpenStack to meet your specific needs. I think that's great, because it's really one of the advantages OpenStack has: this great adaptability, so you can adjust it to exactly the needs you have and fulfill those. At the same time, it makes this whole interop story somewhat harder.
So if you look at some of the users we have, what they are using and what they expect: a lot of them are using more than one OpenStack cloud. Maybe they have an internal one they've built up just for development purposes, and now they want to go to production and have decided, okay, let's do production in a public cloud. Of course, the expectation is that the applications they've built and the automation they've written can all be transferred to that public cloud and just work. It's one of the problems we need to work on, to make sure that is actually the case. Or, if it's not the case, at least make it easy to see why not and what the challenges are, so at least to make it transparent. And then, of course, there are also people using multiple private clouds, maybe from different vendors, maybe from the same vendor, which makes the problem even a bit harder. So I think it's important for us as a community to understand that challenge and try to manage it.

We have this great freedom of choices that OpenStack gives you, literally thousands of possibilities to configure all your services. Maybe not all of those combinations make sense, but there are still plenty of them. That freedom is important to our community and to us; I think it's one of the values we actually have as a community to drive this forward. At the same time, we need to make sure we don't break people's expectations. You could imagine a very, very fragmented community, everybody doing their own thing, forking here and there, creating very strange configurations for very specific needs. And then the expectation people have, that an application written for one OpenStack installation can move to another one, will just not hold anymore. That's the kind of anarchy we need to avoid. The other option is that we understand this and come up with a menu of choices; I used a hamburger picture for this. What it actually means is: a hamburger is a hamburger, you know what to expect. There's some bread, and there are some pieces you can select. There are still a lot of choices, but they still have something in common, and you understand what the common parts are and what the choices are. You have some visibility and can then make your choice.

These are some of the things we hear from our customers, the kinds of questions they confront us with. They go through cloud transformation; a lot of our customers are really in a rather early phase of that. They understand they need to rewrite their applications to match this paradigm of scaling out, of really being suitable to run on the cloud. But given the investment, they want to know: what platform should we choose? How can we make this future-proof? That's one of the questions they ask us, and of course we as an OpenStack community should give them a good answer. People challenge enterprise readiness just by looking at the large variety of vendors involved in the OpenStack ecosystem and at the variety of projects in the Big Tent, and they say, well, this is chaotic; this cannot possibly be enterprise-ready. So we need to work on bringing some order into that.
A number of our customers are really looking into cloud-bursting scenarios. They have set up their own OpenStack private cloud, but they don't want to run into the same capacity over-provisioning they had on their legacy infrastructure. So they want the ability to say: let's run a private cloud because we need it, but let's make sure that if there are bursts in load, we can burst out and use some public cloud to handle those load peaks for us. And then ultimately, of course, there are application and tool developers who want to know: I have a specific application, I have a tool written, and I developed it against a certain implementation of OpenStack. I know it works there. How can I know whether it works on the broader OpenStack ecosystem? What do the other OpenStack-based clouds look like? Does my tool work there? They want an answer to that question.

So what we want to do, and it's the goal of this interoperability effort underway in the community, is to make sure we have standards we can measure against; what we have today for that are the interop guidelines. Then we want to make it easy to test against those standards; the RefStack client helps with that. And we also want a place where we can collect and publish the results, so we provide this transparency to the people who want that information and they don't need to do all that testing on their own to find out; that's what the RefStack server helps with. I'll hand over to Catherine.

Okay. Thank you, Kurt. What I will talk about next is what the OpenStack community has been doing to address these three areas. In the fall of 2013, the OpenStack Foundation board formed the DefCore committee to work on a formalized definition of interoperability guidelines; that is how the OpenStack community is trying to address the measurable-standard area. At the beginning of this year, the DefCore committee was renamed the Interop Working Group. The primary responsibility of this group is to define the interoperability guidelines. What does an interoperability guideline consist of? A guideline defines capabilities that must be present as part of the product's APIs, and the way to test the presence of these capabilities is by running the must-pass tests. And not only does a guideline define capabilities, it also defines designated sections, the OpenStack code that must be present in the product. Complying with these guidelines has been required to qualify for the OpenStack Powered trademark for public, private, and distribution clouds since the beginning of 2015.

So now, with the guidelines already defined, we need a set of tools to test clouds against them. And once we have the test results, we need a set of tools to upload those results to a centralized repository and share them with the community. For that purpose, the RefStack project was formed. The RefStack project actually has two parts: the RefStack client and the RefStack server. The primary responsibility and goal of the RefStack client is testing: having a tool that enables the user to test seamlessly, making it as easy as possible. Are we there yet? Maybe not completely today, but that is the primary goal of the RefStack client.
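To make the client's role concrete, here is a minimal sketch of scripting refstack-client, assuming it is already installed and that a working tempest.conf pointing at the target cloud exists. The subcommands and flags follow the refstack-client documentation of the time; the results path and API URL are assumptions, so check `refstack-client --help` for your version.

```python
# Minimal sketch: run a guideline's test list with refstack-client and
# upload the JSON results. Assumes refstack-client is on PATH; the
# results path below is illustrative and varies by client version.
import subprocess

TEMPEST_CONF = "etc/tempest.conf"            # points at the cloud under test
TEST_LIST = "2017.01-platform-tests.txt"     # must-pass list for a guideline
RESULTS = ".tempest/.testrepository/0.json"  # produced by the test run

# Run only the tests on the list against the cloud, verbosely.
subprocess.run(
    ["refstack-client", "test", "-c", TEMPEST_CONF, "-v",
     "--test-list", TEST_LIST],
    check=True,
)

# Upload the results to the community server; without a signing key
# the upload is anonymous.
subprocess.run(
    ["refstack-client", "upload", RESULTS,
     "--url", "https://refstack.openstack.org/api"],
    check=True,
)
```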
Once the test results are collected, the RefStack server comes in: a set of tools that can be used to set up a server that serves as a centralized repository for users to upload their results to, and that also provides a UI for users to check their results against the various guidelines. Both the RefStack client and the RefStack server can be installed on-premise at a vendor's location, so they can check their test results before uploading them to a public repository.

A little more about the RefStack client. The RefStack client is a command-line tool; as we said earlier, users can just install it on-premise and test their cloud in-house. Currently the RefStack client is based on the Tempest test suite. There could be other extensions later, but currently that is the underlying test suite we use. And because it is Tempest-based, the RefStack client will install a default Tempest version, the one the RefStack team verified at the time the Interop Working Group was working on the guideline; that serves as our default Tempest version. Of course, users can use any Tempest version to run the tests in-house. But a word of caution: like any project, Tempest is very dynamic, with a lot of updates and changes, and if you run with a Tempest version that is not the default, there are occasions where you will see issues because of code discrepancies. When that happens, we encourage you to file a bug in StoryBoard, against both the Interop Working Group and the RefStack project, and then we can work together with the community on what action should be taken; there may be a need to update the guidelines.

Once we have the results, we can upload them. RefStack results are in JSON format and can be uploaded anonymously or with a signature. Earlier we talked about the RefStack server: for the official OpenStack Powered trademark program, the official data is uploaded to a RefStack server built in the OpenStack infrastructure environment, and on the screen here you see that URL. If you have a computer and can get to that URL, we can look at some of the features this website provides together.

So now I'm going to switch to the website. What you see here is the RefStack website. One of the tabs is the OpenStack Powered guidelines; this is where you can see the guidelines. The Interop Working Group defines the guidelines in JSON format; the RefStack UI just presents them in a form that is easier to visualize. If you click on the version drop-down here, you will see all the guidelines that have been published by the Interop Working Group so far. The first one we published was 2015.03, and the latest one is 2017.01. At any time there will always be two active guidelines that users should test against, denoted by a guideline status of approved; you can see that guideline 2017.01 has the status approved, and 2016.08 also has the status approved. But when you go to 2016.01, the status is superseded: that means this is a previous guideline that is no longer in effect, and nothing should be tested against it anymore.
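For reference, the same guideline data the UI renders can be read straight from the server's JSON API. A small sketch follows; the /v1/guidelines/<id> endpoint reflects the RefStack API of the time, and the field names are assumptions about the guideline JSON schema, which may have changed since.

```python
# Sketch: fetch one published guideline as JSON and peek at its status.
# Endpoint and field names are assumptions based on the era's API.
import json
import urllib.request

BASE = "https://refstack.openstack.org/api/v1/guidelines"

with urllib.request.urlopen(f"{BASE}/2017.01") as resp:
    guideline = json.load(resp)

# "approved" marks one of the two active guidelines; older ones are
# "superseded" and should no longer be tested against.
print(guideline.get("id"), guideline.get("status"))
print(len(guideline.get("capabilities", {})), "capabilities defined")
```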
As we talked about earlier, guidelines have capabilities, and in this guideline, 2017.01, the capabilities are listed here. The first capability listed is compute image create, and the next one is compute instance action get, and so on. We also said that each capability is measured by a set of must-pass tests; this is that set of must-pass tests.

So now that we have a guideline, how do we check our test results against it? Let's click on the community results tab, and just click on any result set. For example, if I click on this one, you'll see that this test result was uploaded on May 9, and the number of passed tests here is 1,345. This is a very good user who ran the entire API test set and not just the must-pass tests. That's what we highly recommend: whenever possible, run the entire API test set; it gives us a lot of data which the Interop Working Group uses to define the next guidelines. And for this set of test results, if we check against guideline 2017.01, you will see a big green square with a yes inside, indicating that this result set has passed the requirements of this guideline. You can also check the result against a different guideline, et cetera. So that is what has been done so far for interop transparency.
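The check the UI performs boils down to a set comparison: every must-pass test of the guideline has to appear among the passed tests of the uploaded result. A rough sketch, where the test-list endpoint with its query parameters is an assumption based on the links the RefStack UI exposed, and the results-file layout is assumed from refstack-client output of the time:

```python
# Sketch: does a local refstack-client result satisfy a guideline?
# URL parameters and the results schema are assumptions, as noted above.
import json
import urllib.request

TESTS_URL = ("https://refstack.openstack.org/api/v1/guidelines/2017.01/"
             "tests?target=platform&type=required&alias=false&flag=false")

with urllib.request.urlopen(TESTS_URL) as resp:
    required = set(resp.read().decode().split())

# refstack-client results: a JSON object whose "results" list holds one
# entry per passed test, with the full test name under "name".
with open("results.json") as f:
    passed = {t["name"] for t in json.load(f)["results"]}

missing = required - passed
print("meets guideline" if not missing
      else f"{len(missing)} must-pass tests not passed")
```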
And I will pass it to Chris to talk about what will be done in the future.

So right now, the OpenStack Powered program has three different trademarks, all roughly based on the same guidelines. We have OpenStack Powered Storage, which is OpenStack Swift, object storage. We have OpenStack Powered Compute, which includes what you would consider the classic integrated projects: Keystone, Nova, Cinder, Neutron, Glance, so all of your compute, networking, and storage. And then we have OpenStack Powered Platform, which is a combination of the first two trademark programs, OpenStack Powered Storage and OpenStack Powered Compute together, so you're adding object storage with Swift to your system. As Catherine mentioned, these programs have been in effect for a couple of years now; we're coming up on our third year. We've done a lot of iteration on them, and we've hit the point where those programs have stabilized: we've added essentially all the capabilities that are going to be there, and there's not going to be a lot of change.

But there's been a lot of demand within the community for identifying different classifications of interoperability, and this leads us to the question of how we should expand the OpenStack Powered trademark program. Should there be new major programs? For example, NFV is a very large use case within the community; is there space to add a new program where we can talk about capabilities that are specific to NFV and attach a trademark program to that? There are also a lot of individual projects within OpenStack, and as you've seen in the keynotes and in a lot of the discussions this week, there's a question of whether we want to take projects like Cinder and make those standalone, and if we do, whether we also want to apply an independent major program trademark to them. And finally, we have OpenStack Powered Compute, Storage, and Platform, and there are a number of projects that integrate on top of that platform; Heat orchestration, for example, could be one of those projects. Do you want to be able to say that you have an OpenStack Powered cloud with an orchestration extension, or, say, a database extension?

This is something we presented to the board of directors over the last weekend, and they're pretty excited about it. Over the last few months and in the upcoming months, we've been working on a new version of the interop guidelines, version 2.0, and there are a few new features that are going to be added that we're pretty excited about. The first is allowing new major programs: going back to the NFV example, having a new OpenStack Powered NFV mark that can be applied across the industry to say that a cloud is ready to support NFV workloads. We're also adding program extensions where you can specify dependencies: orchestration could depend on the existence of compute, networking, and storage, and the same for database. But in a lot of ways, the programs mostly stay the same. We're going to identify capabilities, and these capabilities are identified with tests, so that users have a way to check an existing cloud to make sure it meets the capabilities they need for their workload; but also designated sections of code, to be able to identify that the products we call OpenStack are indeed OpenStack.

These guidelines are going to be community-driven. We're encouraging all of the stakeholders to build their own guidelines and to work with us on building them. We have a strong relationship with the NFV community and are actively working on building a guideline for them. We've also gone to the project PTLs, the Heat PTL, Trove, Designate for DNS, and they're working with us to build out the new set of capabilities for their vertical and extension projects. Now, the Interop Working Group will maintain control of the official trademark-eligible guidelines; these are going to be developed in conjunction with the TC and approved by the board of directors. But our ultimate goal, throughout the community, is to have project owners define the minimums for interoperability and allow them to publish them, so that even if they aren't part of an official OpenStack Powered trademark program, users will still be able to determine whether cloud products comply with the interoperability guidelines established by the community.

So I've talked about a few programs. Proposed platforms for 2018 include NFV. There's also been a lot of discussion about adding new platforms that capture just individual projects, so it's conceivable that we could have a new Cinder vertical platform for block storage, or Keystone for identity; a tremendous amount of interest in these has been expressed by the community, and it's something we're looking at adding and releasing in 2018. We also have a number of proposed extensions, trademarks that would be applied on top of existing OpenStack Powered clouds. There are a bunch of them: for DNS we have Designate, orchestration includes Heat, secrets Barbican, container orchestration Magnum, database Trove, and big data Sahara. These are just some of the programs we're looking at adding as extensions onto existing OpenStack Powered products. And with that, I'll turn it back over to Catherine, and she can describe how RefStack is going to integrate with these new extensions and programs.
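The extension idea sketches naturally as a dependency check: an extension can attach to a base program only if the base provides everything the extension depends on. The component sets below are purely illustrative, not the actual program definitions.

```python
# Illustrative model of "extensions with dependencies"; the component
# sets are made up for the sketch, not official program definitions.
BASE_PROGRAMS = {
    "OpenStack Powered Compute": {"compute", "networking", "block storage"},
    "OpenStack Powered Storage": {"object storage"},
    "OpenStack Powered Platform": {"compute", "networking",
                                   "block storage", "object storage"},
}

EXTENSION_DEPS = {
    "DNS (Designate)": {"networking"},
    "Orchestration (Heat)": {"compute", "networking", "block storage"},
}

def can_attach(extension: str, base: str) -> bool:
    # An extension attaches only if the base provides all its dependencies.
    return EXTENSION_DEPS[extension] <= BASE_PROGRAMS[base]

print(can_attach("DNS (Designate)", "OpenStack Powered Compute"))  # True
print(can_attach("DNS (Designate)", "OpenStack Powered Storage"))  # False
```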
Right. So far, all the guidelines you have seen are of one type: the OpenStack Powered guideline. Going forward, we want to enable users to define their own guidelines, what we call customized guidelines, as a framework for people to define their own criteria, and with the hope that they will do the testing on-premise, feed the results back to the Foundation and the working group, and maybe have some of those become official later. For that, RefStack will have to be updated to host guidelines other than the OpenStack Powered guideline. There is work to do, but the goal is to be able to host customized guidelines, and beyond that, to also accept test results that may not be Tempest-based in the future. So that is the future work for RefStack. I'll turn it over to Kurt.

Okay, so maybe I'll take the last slides to close this out and look a bit at how this could look in the future. What you see here, the two large black boxes, represents the currently existing interop guideline programs, OpenStack Powered Compute and Object Storage, with the Platform of course being the superset of those two. We're looking into covering additional services like orchestration, or Designate, or maybe bare metal; I think we heard that request as well in the discussion the other night. Another thing that happens over time is that the amount of interfaces and capabilities covered keeps expanding. We said we've reached a mostly stable state there, but as these services expand, we will of course adapt and cover more of the API functionality those services offer. What I also put on there are two yellow boxes. We could imagine specific guidelines or specific test sets covering very specific functions: say you use image management, you know your cloud can only consume certain image formats, and you want to test against that. It's possible to create these as well with the framework that's currently being worked on.

So this is how a new vertical program could look. At this point it's not completely clear yet, but it would be a superset of the currently existing OpenStack Powered Compute, or maybe share some pieces with it. The output from the board meeting was that a vertical program like this would not necessarily cover all the capabilities of the existing programs, but it would not be incompatible with them. So NFV may not have all the capabilities of an OpenStack Powered cloud, but it wouldn't have any capabilities in direct contradiction to the existing ones. That would honestly be stupid, to have such contradictions; it should of course be possible to certify against both guidelines and have both at the same time in one cloud. Yeah, absolutely.

So one of the questions our customers asked us early on: how can we make sure my tool or my application works on the OpenStack cloud? Sorry for the German on the slide. What this custom profile work that's currently ongoing would enable customers to do is actually create their own profiles. They could identify what their application requires, look at the existing Tempest tests that cover those things; maybe it's also a nice opportunity to contribute some new tests to Tempest, covering the things you've discovered that are not yet covered. And here is one example of this.
This is a common request that we've gotten from a lot of members in the industry: they provide a cloud that is compatible with Amazon through the EC2 APIs. So it's conceivable that a vendor would create an EC2 compliance profile, and then they would be able to verify that their tools work not only against an OpenStack cloud, but also possibly against, say, an Amazon cloud. Yeah. Obviously, creating those custom profiles involves a bit of work. The first and maybe most difficult piece is understanding exactly what the application or tool needs in terms of supported API calls, but also in terms of expected behavior, which might be less obvious than the API calls themselves. And then create those tests, reuse the ones that already exist, create the list of tests, and then create your custom profile.

The way I imagine this evolving is that once we have enabled customers to upload profiles to the RefStack site to test their clouds against, we were thinking about adding some sharing possibilities: I have created a profile, and I make it visible to somebody else who wants to test against it. The next step, and that's something we need to discuss carefully, is maybe allowing people to make profiles publicly visible. Say I upload a profile called a Terraform profile; everybody who sees it then has the expectation that using it has a defined meaning. So there needs to be some kind of quality assessment and curation process, to make sure that if something is officially shared with everybody, there's a certain amount of quality in there and the tests are meaningful, so it adds value instead of just confusing everybody. That's something we still need to define and decide how to do best. Of course, one thing that will also happen is that if we have a lot of custom profiles meeting our quality criteria, we can use them as input for discussions about whether they should become part of one of the future trademark guidelines. So this is the process we imagine: we want to enable a larger and more diverse set of people to contribute to interop transparency this way. That's the goal we're heading towards.

Yep. Good. I think that's what we wanted to show you. We would definitely appreciate your help, your feedback and questions right now, or you can talk to us afterwards, the people you see in the Interop Working Group and the RefStack project.
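In practice, such a custom profile could start out as nothing more than a Tempest test list naming exactly the API behavior the application depends on, fed to refstack-client with --test-list. A hedged sketch, where the test names are illustrative stand-ins rather than a vetted profile:

```python
# Sketch: compose a custom application profile as a plain Tempest test
# list. The test names are illustrative examples, not a curated profile.
PROFILE_TESTS = [
    # the tool authenticates, boots servers, and uploads images
    "tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token",
    "tempest.api.compute.servers.test_create_server"
    ".ServersTestJSON.test_verify_server_details",
    "tempest.api.image.v2.test_images.BasicOperationsImagesTest"
    ".test_register_upload_get_image_file",
]

with open("my-app-profile.txt", "w") as f:
    f.write("\n".join(PROFILE_TESTS) + "\n")

# Then run it against any candidate cloud, e.g.:
#   refstack-client test -c etc/tempest.conf -v --test-list my-app-profile.txt
```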
Are there any questions? Does anybody have questions about the programs?

Aside from the very good technical aspects of the entire program and the process side, which I highly appreciate, I think there is another component which is even more important: how do we communicate this plan and its content, in a form the community and the market can rely on for decisions? The RefStack site is a tool, but as it is built up today it is not the appropriate platform for that kind of transparency; not each and every single feature that is the subject of a test case, but rather the entire story. What belongs to an OpenStack standard, version 2017.01? Which functions need to be supported, in which form, on a higher abstraction level? Is there a plan for any work stream as part of your working group to address that appropriately?

May I start, Chris, and you can maybe then amend? One thing, of course, is that if you qualify for the OpenStack Powered trademarks, you get a listing in the marketplace. So there is some visibility we provide through the trademark, and an incentive for vendors and service providers to actually be listed on the marketplace. But what I hear you saying is that this may not be enough and we need to do more. Chris, maybe you.

So, yeah, as Kurt mentioned, the OpenStack marketplace is where most of this transparency happens. Any public cloud, private cloud, or distribution that passes these guidelines receives a mark that states the exact guideline they passed. We're in the process of adding links back to RefStack so that you can actually see a listing of the capabilities being provided by those clouds. We also have the OpenStack brand page, and the interop page talks about how the interoperability program works and includes information on the capabilities being provided. But that is something we can have a stronger message about, and in the coming year we're going to be expressing it more, as these come online, to the community through the mailing lists, through marketing materials, and through communications to the board and to vendors directly. And as this program goes out, we're going to be using it in a number of other initiatives to express that OpenStack isn't a single monolithic thing where you have to pull in all the pieces, but actually composable infrastructure, where you're able to take the capabilities you want and verify that those capabilities exist in your installation.

Any other questions or remarks? Yes. My name is Roman. I work for Mirantis. I have many questions; do you want me to ask here, or do you want me to come over and ask them later? Because I have, like, five questions. We have, like, one or two more minutes, I guess? I see. Okay, then I will start with the important ones. How is this different from Tempest? Okay, Tempest is a functional test suite. You've limited the set of tests to what you think is important, but what you think is important might be different for different clients. One client doesn't really care about object storage and maybe doesn't even care about block storage; he doesn't have it. How is this all meshed together, and how is it different from Tempest?

So Tempest is a fully functional test suite that covers the entire API, both administrator and non-administrator tests. One of the goals behind the interoperability program is that it's user-targeted. You, as a user, show up without administrator access to a cloud; you have user access, and there are certain actions you want to perform. So we are taking a subset of Tempest and verifying that those user actions can be accomplished. Currently the interoperability tests are a small subset of the functional tests. The goal is that you're able to test basic actions, but also that you have a trust-but-verify model: a vendor comes to us and says, we've tested our cloud, we passed all the required tests, and we qualify for the certification; but you, as a user, can show up, run those exact same tests, and make certain that those capabilities are available to you. So you're saying that, instead... so this is not a vendor-intended tool.
It's more specifically a user tool, which a user can run to verify that whatever was given to them is acceptable. And that's the reason we provide a tool like the RefStack client: a lot of users probably don't know Tempest; not a lot of people know about it. Our effort is to simplify the test system so that any user can pick up an installation and test it. Yeah, if every user had to set up Tempest just to validate that the cloud works, that would be... But that is the main difference from Tempest itself, which pretty much tests what I'd call a well-defined environment; this is testing an end-user cloud.

Okay, I still have certain doubts, but let's go with that, okay? So the next one: do you guys plan to add performance testing? Because what we found is that it's super important not only to make sure the APIs work, but that they work at a certain level, and that specific actions are done within a reasonable amount of time, because otherwise it all just becomes a mess.

So performance testing is outside the scope of this project, for a number of different reasons. But it's not outside the scope of future programs, especially where performance actually matters: for an NFV cloud, network performance matters; you need to measure a certain level of latency. So while in the current, more basic programs we're not going to be doing performance testing, we could see specific cases where, when you're testing for a particular type of application, scenario and performance tests are admitted as part of the test suite.

I'm not making a recommendation, just a general statement: please consider this, because for many applications it is absolutely critical. For example, yesterday they were talking about large files being written and read from storage; without any performance testing, this really doesn't fly for them, for the reason that they just can't do their work.

Yeah. So there are tools like Rally that will do performance testing, and if a client needs those tools, they exist and they're available. Right now, within the scope of applying a trademark to a product, performance testing falls outside of that scope.

That's okay. So you are saying that it's more of a trademarking, get-the-stamp type of program, versus an all-encompassing testing program to make sure that all APIs work and...

Well, the trademark is a guarantee of a base level of interoperability. You can add more on top of that; it's not exclusive. Having the fastest cloud becomes a vendor marketing point, and you can demonstrate that, but it's outside the scope of the interoperability and trademark program. But you can also see this as pretty much the base layer: if you don't even pass this layer, forget about performance. It's a very basic base layer.

Okay, two small questions. Do I have time for them? Do we have time to take the last questions? Okay. So you mentioned other projects, and this is super important because, again, some of the projects are expected to be there and some are not; some people expect Trove. How are you planning to deal with these different aspects, given that the set of projects people can expect varies?
Are you going to create a different baseline? Let me finish. Say this is a big-data type of application, so this capability needs to be there. Are you going to introduce different profiles for different types of workloads, or how is it going to be handled?

So we've described the vertical programs, and the one we have today, the OpenStack Powered Platform, would be one of those vertical programs. If you need something like Sahara for big data, you would call that OpenStack Powered Platform with Big Data. Can we do this? Yeah, and so we're calling these extension programs. The extension programs will have required dependencies: DNS would require networking, so any vertical program that had a networking component could attach the DNS extension to it.

But do you see it slowly morphing into the all-encompassing Tempest tests? If you want to have all the possible trademarks in the future, maybe then you have most of Tempest covered. But it's still a choice; some customers will have limited use cases, and that's valid. When you say all of Tempest: it's not pulling in all of Tempest. We're actually looking at the non-admin APIs, checking for the capabilities that are required for a user to have a base level of interoperability. Right now there are... how many tests are in Tempest, 2,000? Yeah, roughly 2,000 tests. The current OpenStack Powered Platform only checks about 230, I think. Yes. So we're only checking roughly 10% of Tempest, and it's the same across the other projects. There are lots of functional tests that go very deep into all of the APIs, but when you whittle that down to the user-facing APIs, that set becomes a lot smaller.

Okay. Just a comment before the next question. We found that it's important to use very high-level types of actions instead of checking API by API, and to run something on the cloud that covers the majority of what a user expects. I'm just making the comment that the higher you go up the level, the fewer actions you need to check all the functionality. And scenario tests like that, again, fall outside the scope of what we're trying to accomplish. But there is actually an upstream project within OpenStack that is essentially scenario tests against OpenStack clouds, and if you're interested in application scenario testing, that's a great repository to start from. If you're looking for the kind of scenario testing that things like Rally provide, you can also do that. Again, we're not exclusive of anything like that, and there are other projects out there that can help you run something against your cloud and make sure it's really going to run the application workload you want.

Okay, thank you very much. And I have a really, really last question. If you can scroll to the page where you had a certification result, it says accept flagged tests. My question is: what are flagged tests? So, flagged tests... In the course of building these guidelines, we make mistakes; we include tests that, for some reason, should not be in the program. An example: there's a capability for volume attach that is in the Nova API and is also in the Cinder API. The Nova API test is the one we want, because that one performs the entire task of attaching; the other is actually a helper for the Nova API. So one is implicitly guaranteed by the other, but that helper API may not be made public in a cloud.
But we were checking for that capability, and it's in the current guidelines. The Interop Working Group came to the conclusion that that test shouldn't be there anymore, so we flagged it. The test is still part of the guideline, but we no longer require it for compatibility testing. Any test that has been flagged within the current guidelines is not required to be passed. Another quick example is a Tempest change: sometimes a test name changes, and then an older uploaded test result doesn't contain that test. The correct thing in that scenario is to flag the old test and add the new test to the next guideline. So that is the way to make updates to an approved guideline. It's a safety valve for users and vendors; it's an understanding that our knowledge is broad, not necessarily deep, and sometimes we uncover things in the wild. It's a way for us to identify those problems and address them now, without having to wait for a future guideline. Thank you. Thanks.
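As a closing sketch of the flagged-test rule described above: flagged tests stay in the guideline for the record but drop out of the pass/fail requirement. The per-test "flagged" annotation assumed here follows the interop guideline JSON layout loosely and may not match every schema version.

```python
# Sketch: split a downloaded guideline's tests into required vs flagged.
# The JSON layout (capabilities -> tests -> optional "flagged" entry) is
# an assumption about the guideline schema of the time.
import json

with open("2017.01.json") as f:
    guideline = json.load(f)

required, flagged = set(), set()
for capability in guideline["capabilities"].values():
    for test_name, info in capability.get("tests", {}).items():
        if "flagged" in info:        # kept for the record, not required
            flagged.add(test_name)
        else:
            required.add(test_name)

print(f"{len(required)} required tests, {len(flagged)} flagged")
```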