Hello, everyone. Thank you for coming. I hope everybody was able to find a seat; we have a huge crowd waiting outside, so you're lucky. How many of you have not heard of DefCore or the Interop Working Group? All right, excellent, we have a couple of people. We're here to answer any questions you might have and provide a little bit of history. I am Egle Sigler, and Mark Voelker is my co-chair on the Interop Working Group; we've been working on this for quite some time.

A bit of history. A long, long time ago there were many different OpenStack deployments, and my OpenStack looked very different from Mark's OpenStack. The board decided that was not cool, and back then, I think it was in Hong Kong, the board passed a resolution to create the DefCore committee. DefCore was chosen as a word for "defining core," and the idea was that this committee would wrangle all the different OpenStacks and create a guideline that says: if you want to call yourself OpenStack, you must have these things.

So what's with the name DefCore, and why are we no longer using it? People kept coming up to us saying, "I don't understand what this DefCore is." We decided the name didn't really reflect what we're doing: we're about community and we're about interoperability. So we changed our name to the Interop Working Group; "interop" because it's easier for me to say than "interoperability," which I can only manage some of the time.

So why do we care who can call themselves OpenStack? As I mentioned, different deployments of OpenStack can look different, and that's really not great for interoperability. If I am a developer deploying applications on OpenStack clouds, and Mark's OpenStack has Neutron while my OpenStack has nova-network, an app I'm writing that expects Neutron is not going to work with nova-network.
That's not a great user experience. The OpenStack Foundation really does not want you calling yourself OpenStack if you're running things that are not part of the guideline. So as part of DefCore, now Interop, we have guidelines; Mark will go over how they're created and what they look like. Right now, if I want to call my deployment OpenStack, I have to pass a certain guideline, and those guidelines contain a set of tests that I must pass in order to call myself OpenStack.

The tests right now call APIs. So you may say: okay, I have this OpenStack deployment, it passes the API tests, therefore I can call myself OpenStack. Is that all that's needed? No. You also must have designated code as part of your deployment. For example, if you're exposing Nova APIs, you also must be running Nova code. You might think that's usually the case; well, not really, and we'll cover that a little later. So it's not just about APIs and it's not just about tests: you must be running OpenStack code.

If you're a user, why do you care about branding? You probably don't, unless you're looking to buy some OpenStack cloud. You think: okay, Mark has OpenStack and this other vendor has OpenStack, I'll buy one and I'm sure they'll be the same. But how do you know they're the same, or even have similar capabilities? That's where passing the interoperability guidelines comes into play. As a vendor, you are not allowed to call your deployment or your product OpenStack unless you pass these guidelines.

So why are we even talking about interoperability? As I mentioned, different OpenStack deployments can look different.
Even if you just look at policy: I think Nova has 400-plus policy options, so as you can imagine you can really fine-tune everything in a deployment, like who can access what. My OpenStack cloud might be very restrictive; Mark's private cloud will let you, as a user, do everything, while my public cloud will say the only thing you can do is create a VM and delete the VM. That's probably not great if you want to do something more creative and interesting with your OpenStack.

Also, OpenStack has several releases; we ship two a year. If you've been running OpenStack since 2012, you might still be on Essex, or Folsom, or whatever. Is that something your users will want to buy? If my cloud is on Folsom and I'm selling it to you as OpenStack, and you're not asking me the right questions, you might find yourself on an inadequate version of OpenStack. Our current interoperability guidelines cover roughly three and a half releases: the last three and an upcoming one.

Another thing I mentioned is that you must be running OpenStack code. If you're passing the guidelines, you can say: hey, I pass all of these tests, and they test APIs. But if APIs were all that mattered, and you're running, for example, Ceph with your OpenStack deployment, is that really full OpenStack? Is that pure OpenStack? You're very, very close. If you're running Nova, Keystone, Neutron, and Ceph, you can still call yourself OpenStack, but you cannot call your cloud OpenStack Powered Platform; you can just use the OpenStack Powered Compute part. And yes, there's the question of why Swift doesn't support different backends; we're not going to answer that question here.
That's a different discussion; you'd have to talk to the Swift folks. But since we are guarding the OpenStack logo, we have to make sure that whoever wants to say they have OpenStack storage, which means Swift, is actually running Swift and not just passing the Swift API tests. You could potentially pass all of the guideline tests while running Ceph behind them. That may meet your requirements, and it probably meets your users' requirements, but you cannot get the full logo. Which is fine: you still get the compute part.

So why do users and operators care? As I mentioned earlier, users want to know what they're getting. In the user survey, we heard that users don't like being locked into vendors, and that's one of the reasons they pick OpenStack. Interoperability also fosters a strong ecosystem of tools. Also from the user survey, we heard that people like how OpenStack accelerates their ability to innovate, and that common behavior across OpenStack clouds enables tools such as Ansible, Terraform, Puppet, and so on. And as I'm sure you've been hearing here at the summit, multi-cloud is real: people are using multiple clouds, whether that's multiple OpenStack deployments or OpenStack plus AWS plus Google Cloud. We want to make sure that at least the OpenStack part is consistent.

So how are we getting there? How do we guarantee that my OpenStack behaves similarly to Mark's OpenStack? Right now we have three different programs you can get OpenStack logos for. The top-level one is OpenStack Powered Platform, which covers both compute and storage. If you are running just Nova with something else, or you're not running Swift,
you can call yourself OpenStack Powered Compute. If you are running just Swift with Keystone, you can also get an OpenStack logo, but you'll be calling yourself OpenStack Powered Storage. And as I mentioned earlier, if you're running Nova with Ceph, you will qualify for the Compute program but not for the Platform one. Any questions about these programs?

Right now we have 34 distributions and appliances, 12 public clouds, and 16 managed offerings in the OpenStack Marketplace, all of which have passed the current guidelines, whether for OpenStack Powered Compute or the whole thing. You're able to pick and choose what you want: if you are a customer looking to buy a public cloud and you find these products on the Marketplace, you can be guaranteed that they have passed the guideline, and the website will show which guideline they passed.

I already mentioned some of this. OpenStack Powered Compute requires the Nova, Cinder, Glance, Neutron, and Keystone projects, and in the current guideline there are 214 tests. That's really not a lot of tests to pass; the total Tempest test code base is huge, and we've narrowed it down to the essential functionality you must have. OpenStack Powered Storage requires only 49 tests, so if you cannot pass that, you really can't call yourself OpenStack Powered Storage; it doesn't cover a lot of the more complex Swift functionality, so those tests are fairly easy to pass. And OpenStack Powered Platform, as I mentioned, requires both compute and storage.

So if you're trying to certify your cloud, how do you get there, how do you pass these guidelines? We have this wonderful tool, RefStack, and Catherine is the PTL of the project. It is a tool set for testing interoperability between OpenStack clouds.
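Conceptually, what RefStack-style verification boils down to can be sketched in a few lines: take the guideline's list of required Tempest tests, compare it against the tests a cloud's run actually passed, and report the gap. This is only an illustration, not the real refstack-client; the test names below are invented stand-ins.

```python
# Illustrative sketch of guideline verification (NOT the actual RefStack
# client): compare a cloud's passed Tempest tests against the guideline's
# required list. Test names here are invented examples.

REQUIRED = {
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.compute.servers.test_delete_server",
    "tempest.api.identity.test_tokens",
}

def verify(passed_tests):
    """Report how many required tests passed and which are missing."""
    missing = sorted(REQUIRED - set(passed_tests))
    return {"passed": len(REQUIRED) - len(missing), "failed": missing}

result = verify({
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.identity.test_tokens",
    "tempest.api.compute.servers.test_resize_server",  # extra results are fine
})
print(result["passed"])   # 2 of the 3 required tests passed
print(result["failed"])   # the delete-server test is missing
```

In practice, vendors run the real refstack-client against their cloud and, as the talk asks, upload the full Tempest result set, not just the required subset.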
It is a database-backed website, and it supports collecting your test results and publishing them on the RefStack site. The results can be anonymous, or you can enter your information and share it; I think most of the results right now are anonymous, but as a user you can create a profile there and register your company's test results. It is a community-developed project, so if you're looking for an open source project to get involved in, this is a great one.

This is what the test run results look like: there's a guideline version that you get to select, a target program, and a count of how many tests passed. This is just a screenshot, so it doesn't show everything, but if there are any failures, it will show how many tests failed, and you can click in and see exactly what failed.

So who uses RefStack? Vendors: if you are trying to certify your OpenStack against a particular guideline, RefStack is the way to do it. If you are a user or a cloud administrator, you can go and see which vendors have shared their results, and you can also compare how your own test results stack up against others. And if you are running RefStack, please submit all of the results, all of the Tempest tests, not just the ones that are required.

So how do guidelines get made? Here's where I hand over to Mark.

All right, so let's talk a little bit about what goes into making guidelines. We have a six-month cadence that follows the release cycle but is offset by a couple of months from the OpenStack releases themselves. That's why you get this concept of every guideline covering three and a half releases: there's a release train in development while we're writing a given guideline.
The cadence is shifted forward so that when a guideline does come out, the most current guideline covers the most current release. Vendors can use either of the two newest guidelines when they apply to the OpenStack Foundation for a logo and trademark license.

All of these guidelines are created by us and ultimately voted on by the board of directors, so all that work has to be approved by the board. That's a little different from most other OpenStack projects, where the governing body is ultimately the TC; for us, it's the board of directors, and they have to approve whatever we do. They don't vote +2s in Gerrit like other projects; every six months we roll out something at a board meeting, and there's an actual roll-call vote.

We'll talk more about the next couple of bullets in the next couple of slides. Basically, for all the capabilities we want to include in these interoperability guidelines, we go through the APIs or whatever other capabilities we're considering, and the group scores them based on twelve criteria that we'll talk about in just a minute. One key thing: because we are covering multiple OpenStack releases, whatever features we vote into these guidelines have to be present in all of those releases. And keep in mind, vendors can use either of the two most recent guidelines, so there's actually a pretty good spread of releases. That should tell you that what we're looking for is pretty core, stable stuff that people use, not the new shiny objects. It's got to be around and baked, with people actually using it, before we require everybody to have it in their products. Candidate capabilities also have to have tests.
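To make the shape of these guidelines concrete, here is a much-simplified model of what one encodes. The real guidelines are JSON documents maintained through Gerrit; the field names, capability names, and test names below are invented for illustration, though the "required" versus "advisory" distinction echoes how capabilities are actually staged in.

```python
# Simplified, illustrative model of an interop guideline. Real guidelines
# are JSON documents; the names below are made up for this sketch.

guideline = {
    "id": "2017.01",                      # six-month cadence -> dated IDs
    "capabilities": {
        "compute-servers-create": {
            "status": "required",         # must pass to use the logo
            "tests": [
                "tempest.api.compute.servers.test_create_server",
            ],
        },
        "compute-shiny-new-thing": {
            "status": "advisory",         # scored in, but not yet required
            "tests": ["tempest.api.compute.test_shiny"],
        },
    },
}

def required_tests(g):
    """Collect the Tempest tests a vendor must pass for this guideline."""
    return [
        test
        for cap in g["capabilities"].values()
        if cap["status"] == "required"
        for test in cap["tests"]
    ]

print(required_tests(guideline))
```

Only the required capability's tests are mandatory here; advisory capabilities signal what may become required in a future guideline, which matches the "baked and widely used before it's required" philosophy the talk describes.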
Obviously we can't verify that clouds actually provide this stuff if we don't have a test for it today. The TC has asked that for scoring we only consider tests that are in Tempest; that's an ongoing discussion we have with the community, and we'll see where it goes in the future.

All right, so let's talk about scoring criteria. I mentioned that there are twelve criteria, and you can see them in the pink boxes around the edges of the slide; in the middle, you'll see what we're trying to get at with each of them. If something is going to show proven use in the field, then we think it should be widely deployed, it should be used by tools, and it should be used by clients. If a shiny new feature comes out and it's not present in OpenStackClient and has no support in Horizon, chances are not many people are using it. Similarly, if a feature isn't supported by a lot of tools outside the OpenStack ecosystem, say Terraform or Ansible, or maybe even platforms that ride on cloud providers, like Kubernetes, then again, chances are it's not such a widely used feature, right?
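The scoring step the talk describes can be sketched as a weighted checklist: each criterion carries a weight, and a capability is a promising candidate if its weighted score clears a cutoff. The criteria names below echo ones mentioned in the talk, but the weights and the cutoff are invented for illustration; they are not the working group's actual numbers.

```python
# Hedged sketch of capability scoring: twelve weighted criteria, a cutoff.
# Weights and cutoff are invented for illustration only.

CRITERIA_WEIGHTS = {
    "widely_deployed": 3,
    "used_by_tools": 2,      # e.g. Ansible, Terraform, SDKs
    "used_by_clients": 2,    # e.g. OpenStackClient, Horizon
    "stable": 2,
    "has_tests": 3,          # no Tempest test, no point scoring further
    # ...the remaining criteria would be listed the same way...
}
CUTOFF = 8

def score(capability_flags):
    """Sum the weights of the criteria a capability meets."""
    return sum(
        weight
        for name, weight in CRITERIA_WEIGHTS.items()
        if capability_flags.get(name, False)
    )

# A baked, widely used capability clears the bar...
mature = {"widely_deployed": True, "used_by_tools": True,
          "used_by_clients": True, "stable": True, "has_tests": True}
# ...a shiny new feature with tests but no adoption does not.
shiny = {"stable": False, "has_tests": True}

print(score(mature) >= CUTOFF)  # good guideline candidate
print(score(shiny) >= CUTOFF)   # falls short on adoption criteria
```

The actual process is subjective and happens through Gerrit reviews rather than a mechanical formula, but the weighting captures why "wide adoption" dominates the outcome, as discussed below.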
For some of these criteria, we try to be proactive about looking both at documented sources, like the user survey, to see who's using what, and at the larger ecosystem. I've actually spent quite a bit of time fishing around in the jclouds code to see whether certain APIs actually appear to be supported by jclouds, same with fog, same with Ansible, and several others. The scoring exercise can be a little laborious, and that's why we only do one of these every six months: it genuinely takes time to look at all this. That's all we need to say about that.

So how does stuff actually get added? Ultimately, when we're scoring these things, it's a bit of a subjective process: it comes down to human beings looking at something and saying, I think this meets these criteria, or it doesn't. The funny thing is that in the OpenStack community we have a lot of subjective processes; whether code is good or not is kind of subjective as well. So we have reviews, we have core reviewers that vote things in or out, and we have a feedback system with code reviews. We use that same process for these guidelines: all the scoring goes through Gerrit. Somebody submits a patch and says, I think this meets these criteria and it's good enough to get into a guideline. The rest of the Interop Working Group members can chime in; anybody in the community can chime in. If you've got a Gerrit account, you can go vote on our reviews, and then ultimately the core reviewers +2 it, or they don't. At the end of the day, whatever we put in there rolls up to the board of directors, and they ultimately vote yes or no.

Again, we look at a lot of different data sources. One of the early criteria we apply is: does the thing being proposed actually have tests we can use? Because if not,
there's no point in doing all the rest of the work. So the first thing we do is go look at Tempest tests. One of our criteria is that users of clouds should be able to verify these things themselves: if I want to use Rackspace's public cloud, I should be able to verify that, hey, they said they adhere to this guideline, and it actually holds up. So the tests we look for in Tempest are ones that don't require administrative access and don't require multiple user accounts, and are therefore much easier for mere mortals to run against a cloud themselves.

As I said, there are twelve criteria, and they have different weights; a few are less important than others in the grand scheme when we figure out the totals. When people ask me, "I have this thing and I'd like to see everybody support it; what's the key thing I need to do to get it into a guideline in the future?",
the answer generally is wide adoption. A lot of the other criteria revolve around how widely adopted something actually is, and that's adoption from a product perspective, a user perspective, and an ecosystem perspective, so there are multiple angles. If it's a thing that's not supported by a lot of different clouds, chances are it's going to fall flat on a lot of criteria. The same goes for some of the other things we look at: if it's not stable, if it's changing a lot from release to release because it's a shiny new object, or if you can't show adoption in the tooling ecosystem, people can't use Ansible with it, can't configure it with Puppet, can't do what they need with Terraform or run Kubernetes on top of a cloud that has it, then chances are a lot of the other criteria are going to fall through as well. So wide adoption is the key metric we focus on early in the process: not a final answer, but an indicator of whether something is going to be a good candidate.

All right, so let's talk about future programs. Everything we've covered so far is what's out there today, and we gave you the numbers on tests: it's not a huge battery, and a lot of it is very basic functionality, stuff like: can I get my VMs up? Can I create my networks? Pretty simple stuff for the most part. What we've actually found is that there are a couple of emerging needs. We talked earlier about how OpenStack is a very rich, very flexible platform; it turns out we built something that fits a whole lot of different use cases, and some of those use cases are very different from one another. A general-purpose compute cloud actually looks pretty different from one that's designed to, say, run NFV, right?
NFV clouds have requirements around things like: maybe I need a NUMA-aware scheduler, maybe I need PCI passthrough, maybe I need certain things in orchestration, maybe I need certain performance out of the data plane. Those characteristics can look quite different.

The other thing we found is that there are a lot of people using some of the projects that aren't as widely adopted. They may be small in number compared to the whole population of OpenStack users, but they really care a whole lot about compatibility for the projects they are using. There again, if they're using vendor A, then vendor B should also support the same thing, in their minds. So we'll talk about that use case first.

Something we're working on developing now is what we're calling add-on programs, and the example we'll use is DNS as a service. If you look at the most recent user survey, production and non-production clouds combined, only about 16% of the people who responded are actually running Designate. That's a pretty small number, and if you look back at our criteria, it's probably not going to hit wide adoption, and a lot of the other criteria may fall away as a result. So it's tough to call something that only 16% of the population uses a core piece of OpenStack. At the same time, we hear feedback; in my day job I hear all the time from people who want Designate because they have a real need to hook DNS up to their clouds.

What we'd like to do is get to a point where projects that have an established user base that cares about this stuff have a way to define what interoperability looks like for that project. So if I just look at the 16% of the population that's actually using Designate, what is core for them?
What are the things they expect to work from one cloud implementation to the next? And then we want some way to reflect that in the marketplace. If I look at the OpenStack Marketplace today and see sixteen clouds, I want to know which ones actually support those Designate APIs, right? This is where the concept of add-on programs comes in. We have OpenStack Powered Platform, Compute, and Storage today; what we're thinking about doing is adding an additional badge. It will probably be something nicer than a small yellow block in the logo, but basically some way for providers to say: this product also supports DNS, meaning it has passed an extra set of tests for Designate. We're piloting this now with just a couple of projects, since it's early days, before we open it up more widely.

So, goals. Again, users that depend on those less widely used projects want interoperability guarantees as well, so hopefully this is a way to deliver that for them. Moreover, we'd like to make the whole process of determining what interoperability means less centralized. It turns out we have a fairly small number of people in the Interop Working Group, and there's a vast number of OpenStack projects that people use. So wouldn't it be great if the project teams, who really know the technical details and probably hear a lot from users, had a more direct say in defining what interoperability looks like for them? You can see that some projects have already done a bit of work in that direction; Cinder, for example, has capabilities that are required for all in-tree drivers, right?
So those are things that are core for them; that's one example. Generally, we want people to use the same general criteria as the Powered programs, but applied to the audience the project actually serves. Again, with the Designate example: if 16 percent of users are running it, let's look at those 16 percent and see what's common across them. It also goes without saying that we're angling to use the same tool sets: the same languages and schemas we use to define the interoperability guidelines today will apply to these programs as well.

All right, so the other use case I mentioned is what we're calling vertical programs. Now that we have this big, flexible platform that's good for a lot of different use cases, we're at the point in OpenStack's life cycle where we're seeing it move into niche use cases with quite different requirements. Here we'll use NFV as an example; NFV is something you hear about all the time in OpenStack circles now, and I'm sure nobody has walked down the hall without hearing people talking about it. Again, it's something that has very different requirements from general-purpose compute. People that actually care about running NFV on top of their OpenStack clouds would certainly like to be able to run workloads on different clouds and expect the same results, so these people care about interoperability as well.

What we're doing here has similar goals: use cases for which OpenStack is very popular should have the same interoperability standards, so that we get reduced vendor lock-in and all the other things from the user survey that people care about. We also want to help foster those vertical ecosystems, just like we do the general ecosystem. One of the things the
Foundation was hoping to get out of this whole interoperability push was to make OpenStack more accessible to people, help foster a rich ecosystem of tools around it, and make it more accessible to things that want to ride on top of OpenStack, like, say, Kubernetes or OPNFV or whatever else. We want to do that for these vertical use cases as well, and we want to work with adjacent communities to work out what capabilities are actually needed. Here again, people that know a lot about general-purpose compute clouds don't necessarily know what goes into making a good NFV platform. It turns out we have a lot of friends in the open source community that do, and you see some of them at the open source community days here at the summit; OPNFV is here, and a few others, I think. This is where reaching out to those adjacent communities makes a lot of sense. And again, it's the same general criteria as the Powered programs, but applied to these specific use cases: rather than the whole population, we're looking at the portion of the population with a particular use case.

All right, so where do we start? These are two big new programs, on top of the work we're already doing for the general core OpenStack stuff. As we develop them, we're starting small and working our way up. For the add-on programs, we're working mostly with Trove and Designate right now, and for the vertical programs, we're looking at NFV as the first use case. That is not a final list by any stretch of the imagination, but, you know, walk first, then run, right? We've just finished a lot of the scoring work on the next OpenStack interoperability guideline, so pretty soon
we should be able to get underway with some of this work for add-ons and verticals. One thing we're working on right now is the schema we use to define interoperability guidelines: it's not well suited for some of what we want to do in these new programs, so we're working on a new version of the schema that gives us the flexibility to add what we need, and that's under review right now.

We'll put these slides online later so you can click on all the links, but if you're interested in this work, either the existing guidelines or some of the new programs we talked about, there's information here about how to get involved or find out more. Of course, you can always find us here as well. And with that, we will take questions. Would you mind using the mic, so that people can hear and it's on the recording?

[Audience] I think so far the difficulty for interoperability is the brand. The first challenge is getting customers to realize that the brand is important; they don't have any idea what DefCore is or what interoperability is. For example, here in China, China Telecom uses OpenStack and has passed DefCore, but another cloud hasn't passed it; they use part of the OpenStack projects, but they didn't pass, and from the customer's side, they don't know the difference. So my first question is how we can promote this. It's very important, because without promotion, no customer is interested in the brand; nobody knows what it is. That's something for our people to do, but outside, nobody knows what's going on. Okay.
That's the first point. Second, you mentioned OPNFV, but OPNFV also has a test program doing the same thing. How can we align with them, and not only OPNFV? For example, in the manufacturing industry there's another organization with its own test bed, test program, and test brand; how can we align with that? And the third question: OpenStack is not only a distribution product; the cloud needs to align with applications, not only compute and storage. Customers mostly care about the service experience. For example, in China there's a program called Trusted Cloud. It's like hotel ratings, one star to five stars: maybe your cloud is only one star, maybe your cloud is five stars. That's important, because customers have the right to choose between them: if you want cheap, you choose one star; if you want something better, you choose five stars. So my question is, how can we cooperate with a program like that? Because, for example, the Trusted Cloud program doesn't think DefCore is important; they don't care about it, right?

[Mark] Well, let's take those one at a time, because that was a lot of questions at once. Starting at the top: recognition, promoting in the industry what this actually is and what it means. That is actually why, a few months ago, we changed the name from DefCore to Interop: there were a lot of questions from analysts about what DefCore means. "I hear this name, I don't know what it means." When we say Interop, the Interop Working Group, all of a sudden the analysts get it.
They know what it is, right? So the Foundation has heard that message already and is starting to work in that direction. Part two of that is that you just saw them rebrand the entire Marketplace; it got pushed live, I think, last week. They're actually moving toward promoting the products in the ecosystem as well, and a big part of that is making it clearer which products passed which standards. That's now front-page news: if you scroll down the list of providers, one of the first things you'll see is the tested logo and the number of the guideline they passed.

[Egle] Also, if you see a cloud that calls itself OpenStack and it has not passed the interoperability guideline, it cannot call itself OpenStack. So if you're talking to someone you just met and they say, "I have this great OpenStack cloud," your next question should be: great, which guideline have you passed? They can use the OpenStack code base, no problem, but to call themselves OpenStack they have to pass the guidelines.

[Audience] Yes, but frankly, I think only a few people know about interoperability; it's also difficult to understand. My suggestion is that we need local coordination to help us promote it, because frankly, many Chinese customers don't know English; they don't know what the difference is. Will you help us in China? Also in Europe, in Germany, does anybody there know what the difference is? In my personal opinion, when I communicate with my customers, they just say: "You are Powered by OpenStack; okay, what's that?"
Yes, exactly. So my point is that the name is not so important (well, of course it's important too), but we also need local coordination with the customers. They have a lot of customers, so they can promote us. That is important.

I think that's a great point, and maybe we can have you work with the Foundation to publish an article in Chinese that explains what this is. That would probably be very helpful. Yes, we need you.

Okay, so let's go to question number two, which, now that we've talked about all that, you're going to have to remind me what it was. It's that all of the vertical industries also have test programs that do the same thing; just in this session they were talking about their tests. Right. So part of that is why we started working with folks like OPNFV. As we move into these vertical use cases, what we actually find is that in many cases there are at least some programs out there testing maybe not interoperability, but at least core functionality. But those other programs don't address all these cases: what are the core commonalities that people should be able to expect? Our preference would be to work with them to help define the standards that we run. The OpenStack Foundation would actually like to own the logos that say "this is a product you can run NFV on," for our definition of whatever that is. We don't want to create that in a vacuum, and that's why we've started working with some of these outside organizations. Some of the NFV-specific testing that's out there now sits a layer or two above the infrastructure layer, and so is not as germane to what you need out of OpenStack in order to achieve that.
So there's a little bit of stuff to weed through as we go. For NFV especially, there is quite a lot of testing already being done. Every year there's a group (I forget which one it is) that sets up the interoperability testing that Light Reading publishes, for example, but some of that rides way above the OpenStack layer. It's great, but half of it is not germane to what we're trying to do with OpenStack and what an OpenStack product needs to provide. So that's the happy medium we're trying to strike there.

Okay, so question three was... oh yeah, okay. Right, so we actually talked a little bit about something similar earlier on in the development of the interoperability programs. One of the things we found is that it's very tough to measure operational metrics, especially for the breadth of products that we have here. If I'm a public cloud, it's relatively easy for somebody to externally monitor it and say, yes, that API is always up, or down, or whatever it is, or provides these things. For, say, private cloud deployments, with half the installers out there I can pick and choose which components I deploy: maybe I'm never going to deploy Solum, or maybe I'm never going to deploy, I don't know, Cinder, because I have no need for it. And I'm going to deploy on a lot of different hardware, in a lot of different network topologies, on a lot of different storage systems. So it becomes very difficult to measure quality, if you will, and that's something the Foundation has shied away from because of that.

Now, part of what we're doing with the add-on programs is trying to be a little more informative about what a commercial product actually provides, right?
So if you go to the website, there are usually links in every vendor's Marketplace entry where you can find more information about what it actually provides. We'd like to get that into a more centralized place, and then, for the things they do provide, actually say whether they interoperate with the other offerings in the Marketplace. So that's the nuance we're wrestling with here. It's not really so much about quality, or the amount of stuff that something provides; it's about the interoperability of the things they do provide. If we've got one provider out there who's providing, I don't know, Mistral or something, that's great, but I'm not sure it's really a good target for interoperability, since they're the only one providing it. In a quality-or-quantity scoring system they might get an extra star for providing that project; from an interoperability point of view, it doesn't really help.

But at the very minimum, for any product that people sell and call OpenStack, make sure it's in the Marketplace and that it links to the guideline it passed, and hopefully it passed the latest guideline, as opposed to five guidelines back, which wouldn't even qualify it for the logo anymore. It depends, but that will show you how old the product is and how recently it passed a guideline, so hopefully you can filter on that. If you want a private cloud, you know, search for one that has passed the 2017.01 guideline, or whatever the latest will be in the future. Yeah, exactly.

Yes, so basically there's a list of, you know, 250 or so tests, and if you fail to pass one of those: no logo.
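To make the "set of lists, set of tests" idea concrete: an interop guideline is essentially a document that maps required capabilities to Tempest test names. The sketch below uses a heavily simplified, invented structure (the real guideline JSON files in the openstack/interop repository have more fields and different nesting), just to show how a flat "must pass" list falls out of a guideline:

```python
# Simplified, hypothetical sketch of an interop guideline document.
# The real guideline JSON (openstack/interop repo) is richer; the field
# names and test names below are trimmed assumptions for illustration.
guideline = {
    "platform": {"required": ["compute", "identity"]},
    "components": {
        "compute": {"required": ["compute-servers-create"]},
        "identity": {"required": ["identity-v3-tokens-create"]},
    },
    "capabilities": {
        "compute-servers-create": {
            "tests": ["tempest.api.compute.servers.test_create_server"],
        },
        "identity-v3-tokens-create": {
            "tests": ["tempest.api.identity.v3.test_tokens"],
        },
    },
}

def required_tests(g):
    """Flatten a guideline into the set of tests a product must pass."""
    tests = set()
    for component in g["platform"]["required"]:
        for capability in g["components"][component]["required"]:
            tests.update(g["capabilities"][capability]["tests"])
    return tests

print(sorted(required_tests(guideline)))
```

In practice vendors run these tests with Tempest via tooling like refstack-client rather than hand-rolling anything like this; the point is only that the guideline boils down to a concrete, enumerable test list.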
Yes, and as you saw, the number of tests is not very big, and if you passed only those tests you probably wouldn't have a working OpenStack; you still need a lot of other stuff. Think of the guidelines as a minimum spec that you have to meet: you must implement creating VMs, you must provide networking, you must have Cinder. You may be many things without that, but you are not OpenStack.

Yes, so if I, as an end user, cannot call the OpenStack API to get a security group, that's not really OpenStack, right? If I have to call some third-party vendor API instead, that's fine; it may give me all the same functionality, maybe even more. But it's not OpenStack, and that means that when I go try to use the Ansible provider for OpenStack, or the Terraform provider for OpenStack, or try to run Kubernetes on top of it, or any of these other things, it's going to break. So it may be many things, but it's not actually interoperable.

All right, and if you find that you believe you are running OpenStack with all of the major functionality and it's failing one test, and you can persuade us and the community that that test is either a bad test or should not actually be required, then we can flag it. We have a process for that, so you can walk through it; it's not purely black and white. If you can persuade us that we made a mistake (and we do make mistakes in evaluating things), we can definitely work with you. So it's not like you don't have any options. Of course, if you come and say, "You know what, I don't believe that a user should be able to create VMs;
I will create VMs for them," we're like, ah, well, maybe you should actually go and make sure that your cloud is able to spin up VMs, or that a user is able to do that. Obviously that's an extreme example, but sure, you can pick at, you know, the finer points of the guideline. Okay, I think we're pretty close on time. Yeah, I think we have one minute, so if you have any questions left... Yes?

Let's talk about that. So ultimately, vendors run the tests themselves and send the results up to us; it's essentially a text file. You could game that system. The enforcement angle comes from two directions. One is community policing: we do hear occasionally from people who say, "Hey, this vendor said this was supported, and it doesn't actually work." Part two is probably the stronger one: when you get that logo from the Foundation, you sign a legal document. When I did this at my company, I had to get a senior vice president to sign that legal document, and boy did they go over it with a fine-tooth comb. So there are actual legal consequences for not adhering to that contract; it's enforced through the legal system. I'm not a lawyer, and I won't say how well that works across the world, but so far, so good.

All right. And you as a user, if you start using something called, say, "Mushroom Cloud" that claims to be OpenStack-based, and it's running something other than Neutron on the back end, you can raise a hand, send an email, and say, "Hey, this Mushroom Cloud claims to be running OpenStack when there's no way it would pass this guideline; can you investigate?" And sometimes people do make mistakes; they don't realize they have to pass the guidelines to call themselves OpenStack. Then the Foundation will reach out to them and ask,
"Hey, do you have plans to certify? If not, you know, consider removing the logo." And I'm sure they have a nice conversation first.

All right, we'll have to take that one to the hallway; we're right at time, but we've had some conversations around that, and there are some challenges there. Thank you, everyone, for coming. You know how to get hold of us.
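As a footnote to the enforcement discussion above: the verification step on a submitted results file is conceptually simple, because the rule stated earlier is "fail one required test, no logo." The sketch below is a toy version of that check; the function name and test names are invented for illustration (real submissions go through RefStack tooling, not anything like this):

```python
def eligible_for_logo(required, passed):
    """Toy check of the 'all required tests must pass' rule.

    `required` is the guideline's required test list; `passed` is the set
    of tests the vendor's submitted results show as passing. Missing even
    one required test means no logo.
    """
    missing = set(required) - set(passed)
    return (len(missing) == 0, sorted(missing))

# Hypothetical example: a vendor whose results omit the security-group test.
required = {
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.network.test_security_groups",
}
ok, missing = eligible_for_logo(
    required, {"tempest.api.compute.servers.test_create_server"})
print(ok, missing)  # False ['tempest.api.network.test_security_groups']
```

The gameability mentioned in the talk lives outside this check: the arithmetic is trivial, so the program leans on community policing and the signed trademark agreement rather than on the verification code itself.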