 Hello, my name is Rob Hirschfeld. Joshua McKenty and I are the co-chairs of the DefCore Committee for the board, where we have been on a long road to define what OpenStack Core is. We are probably halfway down the journey. Today's talk is going to be about what we're doing, how we're doing it, and most essentially how people here can get involved, which is really what we want to have happen, and how we're defining Core. We're going to go through a bunch of slides, outline what we have done already, answer a bunch of typical questions, and then go straight to Q&A. So if you have any questions about anything — although "what is Docker?" is not the right question; everything else is fair game. And if you feel like firing a small arm when we say the word Docker, tell us so we can duck. If you do have a question, please raise your hand. I think Josh and I both prefer an interactive style, and if we're going to answer the question in the next slide, we'll tell you. So this is the boilerplate for DefCore. We've been talking about DefCore for years, a lot of time, so we're not going to present the history in this presentation. We're going to try to give just enough information for you to understand what it is and figure out where we're going from there. It's important to understand this: DefCore sets the requirements for all OpenStack products by defining designated sections of code, must-pass tests, and capabilities, and these definitions use community resources and involvement to drive interoperability by creating minimum standards for products labeled OpenStack. Sorry, this is the only slide I'm going to read — I hate reading slides — but this is really important. I was going to go through and highlight the words that I thought were important on this slide, and it's just everything. Every single word. This is DefCore distilled to 37 words. 
So the really important caveat here: we're on the board, and the board controls commercial use of the mark. The other uses of the OpenStack trademark are protected in the bylaws. The community is always allowed to describe itself as OpenStack, and the code itself, the integrated release, can always be described as OpenStack too. So what DefCore covers is only using the word OpenStack in your product. Everyone who does that has a license from the foundation, and that license requires them to do some things — in this particular case, ship Core. And product — people get hung up on this — product includes distros, product includes cloud services, product includes anything that you are doing to make money off of OpenStack. Appliances. Appliances, ecosystem products, right? There are actually marks for that. Those are controlled. This doesn't attach to the software itself — anybody can use the code. This is how the OpenStack brand means something. And here's the side note: if the board does not control the use of the mark, we can actually lose control of the mark. So it's very important for the success of the foundation that there be controls on the mark. That's why we're doing it. All right. So this is how we've organized ourselves; this is the overview of what we're going to cover. There was a lot of debate as to whether DefCore needed to be split up into many subcommittees and working groups, and we didn't do that. Actually we did: two co-chairs. There are two co-chairs, and a bunch of other pieces. We've been moving as slowly as possible so that DefCore will become boring over time — it's actually a very tedious process. It's a stipulation of any board activity that it has to be boring. So we defined a bunch of principles ahead of time — we'll cover the principles — and we use those principles to create criteria for how we select what capabilities will be part of Core. 
We boiled all of the Tempest tests, of which there are hundreds and hundreds, down into capabilities, which are groups of tests, and then we produce a scoring matrix for every release that applies the criteria to the capabilities and decides which ones are in Core. I think we need a flowchart for that. So here's the idea, and I really like to stress this: this is not the way you're used to thinking of Core in the past. If you're used to OpenStack in the bylaws, it actually says Core is projects 1, 2, 3, 4, and 5, and that's not what we're talking about. After all this process we figured out that's not the best way to define Core. What we're doing is defining Core by the tests — the tests are an expression of the features and capabilities of OpenStack that are defined by the community. The community defines tests, those tests reflect features and capabilities of the product, and that's what we use. That means a project like Nova isn't 100% Core; only the capabilities covered by must-pass tests are Core capabilities, and that allows Nova to continue innovating, changing things, and adding new APIs and so on. It doesn't lock all of Nova into being a fixed thing that can't change. There's a really simple example of this, and it's one of the problems we set out to solve. Nova itself has always been in the core definition, but none of the extensions were. And yet almost everything that people do with Nova requires extensions. Key pair auth was an extension. Floating IPs were an extension. Most of what people did with Nova from a technical standpoint wasn't required to be in anyone's product. And when it wasn't in there, OpenStack didn't work the way people expected it to. The goal is to make OpenStack useful, and interoperability is the ultimate destination here. So we have to take the things that people find most useful and define those as what's Core for the product. 
Seems pretty straightforward, and it's really been exciting to do. The challenge is — we don't have a slide for dead puppies; I meant to put a dead puppy slide in here — that there will be features and capabilities that people write that aren't in Core, and that's okay. It doesn't mean they're not important; it just means that we're not controlling them from an interoperability perspective. We'll talk about that. These are the ten principles. We're not gonna read them. It took six months to put these together and then get a board vote on them. It ended up being a unanimous board vote, which was amazing. Rob actually traveled around the country to meet with OpenStack user groups to talk about what we were trying to do and how we were gonna do it. The most important piece that I will emphasize is that DefCore ended up being two things: one is the must-pass tests, and the other is the designated sections of code. And this is confusing for people who come into an Apache 2 community, because they're pretty sure Apache 2 means you don't have to use any particular piece of code — GPL means you have to use all of it. But we're using the trademark license to require the community to collaborate in certain cases. What code is designated is controlled by the Technical Committee. As a simple example, the Nova API is a designated section. If you're gonna use Nova, you've gotta ship that piece. The drivers are not. So if you have a different crazy hypervisor — whatever you wanna use; maybe it controls robots, awesome — you can use that code, you can include it in your product, no problem. And you don't have to include driver code for hypervisors you don't ship. We have some slides about that specifically. Sorry — one thing before we jump to the next slide, though. 
The reason why I traveled so much to talk about these, and why I continue to really work with users and love to get feedback, is that we are being very careful that what we do in defining Core drives the culture of OpenStack — that it drives positive behaviors that we wanna see. We intentionally chose tests because, after talking to a lot of people, we found that using tests to determine Core creates a culture of creating tests. And it creates commercial incentives to improve the testability of our product. We spend a lot of time thinking through the implications; that's why we go slow. So that when we roll out Core, everything we do creates positive, virtuous feedback cycles that make OpenStack stronger. Very deliberately chosen. If you have questions and wanna talk about it, it's one of the things I spend a lot of time thinking through and tuning, and you'll see that as we go through the capabilities. This is what Josh was talking about with designated code and tests. We put this together to try to make it very simple. There are two things in the matrix. There's designated code, which is code that the technical community has defined as must-be-included. And the TC has a set of principles — draft principles; I don't know if they've finalized them. I think they ratified them. They did? Awesome. Principles that define how you know what is designated code and what's not. So when you're working on a project, it should be very clear from reading those principles whether the code you're working on is designated or not. It's things like: did you mean it to be replaceable? Does it have multiple implementations? Things like that. And then the must-pass tests. We picked those tests. And if you have the designated code and the must-pass tests — remember, commercial use — so your product uses the designated code and passes the tests, you can get a license. You can call it OpenStack. 
If you pass the tests but don't use the designated code, there is a future plan to maybe have an OpenStack-compatible mark. It doesn't exist today. So today, in order to use the mark, you've got to use the code. And we'll probably keep it that way for a while. There are folks like Synnefo that have done total re-implementations of OpenStack and said, hey, can we call this OpenStack? Not really, no. Maybe in the future. If you are using the code and it doesn't pass the tests, you are misconfigured. And if you are in neither camp — why are you even having this conversation? You took out my more emphatic no. Yeah, well, there was a much more emphatic version. We'll just fix it here. OK, we're not going to read these ones either. Let's just say that this is the main work product of the DefCore committee. We had a lot of long meetings to come up with how we were going to score every capability. And the goal here is to be as objective as possible. It's tough to get in a room with a bunch of — arguably — vendors and say, how are we going to decide which capabilities everybody should have in OpenStack and which ones don't matter? We wanted it to be data-driven. We didn't want it to be emotional. Also bear in mind, this is criteria and scoring that the board uses to make a final decision. It's not that the score is an absolute measure of in or out; the board can use its best judgment if it has to. But so far we haven't had to, which has been a good sign for the criteria. What was interesting in coming up with these is that we actually started with something like 17 criteria, and we boiled them down after hours of discussion. We didn't plan for them to be in these four top categories; they naturally fell out that way. And if you go back to even before we did the principles, these four categories reinforce the cultural emphasis that we thought Core should have. 
So it's really nice to see: you do a lot of discussion, you talk to people, and over time you come up with things that actually reinforce your principles and goals, even coming at it from the implementation side. I do sort of miss criteria 13 and 14, which we dropped. 13 was about admin APIs, and we had to decide that things that use admin APIs can't be Core, because you can test them on private cloud software, but you can't reliably test them on public clouds. And number 14 was "if it weighs more than a duck, it's a witch," which was called the duck criterion. We couldn't figure out how to score anything on that one, so we had to take it out. And the ducks were not floating; that's the problem. Moving on. It is worth noting that these are release-by-release criteria. The purpose here is that we're sharing them with you. If you look at these and say, oh, you missed something obvious — please, the criteria will be adjusted release by release; people often get caught up on that. We're not trying to make Juno decisions today or anything like that. There's still time, but right now is a really good time for you to look at what we've done and give us feedback. That's actually our call to action at the end. Okay, we stated this before, but we took all the Tempest tests and grouped them into capabilities — actually, Troy Toman did all the work. There are almost 800 tests as of Havana. We can't score 800 tests; we would go insane. So we rolled them up into capabilities, and we ended up with 75 capabilities that we could score. And this is an example: "block snapshots" means these two tests pass. Some of the capabilities have dozens of tests; some of them have only one. Our expectation is that the technical community will own the capability grouping, not the board. This is now stored in a JSON file in the refstack project, so if you want to argue about how they're rolled up, as some of the PTLs do — awesome, we can do that in a review queue. 
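The capability-to-tests grouping described above can be sketched roughly like this. Note this is an illustration only: the capability names, statuses, and field layout are invented, not the actual schema of the refstack project's JSON file.

```python
# Hypothetical sketch of how capabilities group Tempest tests; the real
# JSON in the refstack project uses its own field names and statuses.
capabilities = {
    "volume-snapshots": {
        "status": "must-pass",
        "tests": [
            "tempest.api.volume.test_snapshots.test_create_snapshot",
            "tempest.api.volume.test_snapshots.test_delete_snapshot",
        ],
    },
    "compute-keypairs": {
        "status": "advisory",
        "tests": ["tempest.api.compute.test_keypairs.test_create_keypair"],
    },
}

def must_pass_tests(caps):
    """Flatten the capability groups into the list of must-pass Tempest tests."""
    return sorted(
        test
        for cap in caps.values()
        if cap["status"] == "must-pass"
        for test in cap["tests"]
    )
```

The point of the roll-up is visible even in this toy version: the board scores 75 capabilities instead of ~800 individual tests, and the grouping itself lives in a reviewable file.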
Oops, hey, this has rebooted — am I back? All right, this picture is a cheat sheet that I put together, because this is the first time we did the scoring matrix, so it's a little bit user-unfriendly. Our UX experts haven't weighed in yet. You can actually Google "DefCore cheat sheet" and find this matrix, and that'll sort of take you through it. We don't want to spend a lot of time in here, but capabilities boil down into test groups, and test groups were scored on these 12 criteria. Then we did a histogram of the scoring, and it broke into three pretty strong categories: these were pretty clearly the must-pass category, these were the maybe category, and these were the no-we-don't-think-so category, without a lot of shades of gray — a nice W-shaped histogram, like in statistics. So please review this; we don't have time here. Okay, this is the call-to-action part, and we're gonna get into TCup and RefStack next, but three of the 12 criteria rely entirely on community feedback. Are these capabilities widely deployed? In other words, do public clouds support these APIs? If they don't, there's no point in us saying they're in Core, because people can't expect them to work. Are they used by tools — meaning, are there PaaS tools or dashboards or orchestration tools that consume those APIs? And are they available in client libraries — fog, boto, jclouds, et cetera? So if you have an opinion, if you write one of those client libraries or you work on one of those tools, and you say, hey, we really need this API — bring that to our attention. It'll bump that score up in the next pass. Yeah, and this is — I think I'll speak for Josh too — that criteria is the criteria, right? 
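The scoring roll-up described above might be illustrated like this. Everything here is invented for the sketch — the real matrix scores each capability against the 12 DefCore criteria, and the board reads the must-pass, maybe, and no bands off the histogram of actual scores rather than using fixed cutoffs.

```python
# Illustrative only: hypothetical total scores (out of 12 criteria) and
# hypothetical band cutoffs; the real bands come from the scoring histogram.
scores = {
    "compute-servers": 11,
    "volume-snapshots": 9,
    "images-list": 5,
    "some-admin-api": 1,
}

def bucket(score, must_pass_cutoff=8, consider_cutoff=4):
    """Place a capability's score into one of the three histogram bands."""
    if score >= must_pass_cutoff:
        return "must-pass"
    if score >= consider_cutoff:
        return "consider"
    return "out"

for cap, score in sorted(scores.items()):
    print(cap, bucket(score))
```

The design point is that the criteria produce a number per capability, so the in-or-out conversation starts from data, with the board's judgment applied on top rather than instead.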
We have balanced criteria — I think we're actually gonna be able to keep all 12 of them balanced — but it is really important to us, and I can speak for other members of the board, because interop is what we're trying for. This is how we know that we're actually pushing for interop, and so everybody needs to help us collect this data, because we want to be able to stand up in front of the technical community, in front of the user community, in front of press and analysts, and say: this is what's getting used, this is what's important. And then come back and say: don't change it — or if you change it, it's gotta be for really good reasons, because it's gonna start breaking people. Until you've given us the data and helped us collect it, it's much harder to have that conversation, and then you get into a design meeting where they have incredibly strong technical reasons for changing an API. They always do. But they have no idea of the actual cost to the community, and we wanna give them that cost so that we can make better decisions about it. All right, we should go quickly through the examples so we have lots of time for questions. Let's pause for a second for questions, because it's a good breaking point. Anything confusing about that? Okay. We have more pretty pictures. Yeah? Oh, there we go. So if I go through the testing with one configuration, do I get the branding, or do I have to tweak it, or do I have to pass it for all the different configurations I could deploy? It's a really good question, and it has been debated hotly. The current thinking is: if you're selling a software product and you have a reference architecture that represents your typical configuration, you can certify against that and say, yeah, I get the branding, because my reference architecture configuration will pass the tests. Right. 
We actually do look for tests that would fail in different configurations, and those get scored down, so there's some capability scoring for that. It's a great question. It's something to watch for. Yeah. There's still open debate at the board as to whether we need a broader family of marks over time — you end up with "OpenStack certified for private cloud deployments for the medical industry" or something, I don't know. Until we have the data, we don't know what extra marks we need, so we like to talk about kicking the can down the road. We named the cycles for DefCore; this was the elephant cycle. We're trying to eat the elephant. There are a couple of ways we use the elephant analogy, but we intentionally don't solve problems that we don't have enough information for. You named these, not me. So we thought it would be really helpful to come up with three specific examples so that people could get a feel for the impact of what this would look like, and then it ends up being pretty specific. You wanna do Banana Cloud? Then I'll do Sprocket. Simple example: we had to go to the board and say, if everyone today had to match the DefCore criteria, who would be impacted? So we have an imaginary service provider that runs Banana Cloud. They use Nova, they ship the designated sections, the tests all pass — that's green. They implemented the Keystone APIs, but they don't actually ship Keystone, which most service providers don't; but because Keystone doesn't have any designated sections, that's also fine. They also ship Trove. Trove isn't in Core; they get karma for doing it — awesome — it doesn't interfere with their use of the mark, but it doesn't matter if other people don't use Trove. And again, they use Horizon; Horizon's not in Core. Actually, I modified this incorrectly, I'm sorry — they don't use Horizon. 
It also doesn't matter, because Horizon isn't in Core, and it turns out that's a deficiency in tests: there really aren't any Tempest tests for Horizon. No tests? It can't be in Core if there aren't any tests. I've actually now had conversations with some of the PTLs, and they're like, thank you, we now have a stick to beat our contributors with to get tests in. And commercially it is now important: if you're paying a developer to work on a project and they are not contributing tests along with their code, there is no avenue into Core. It's not necessarily the only important thing, but we want to reinforce that culture. I'm always gonna go back to the culture card. Mm-hmm. Oops, you and I are fighting over the slides. I have the remote; go ahead. So Sprocket is a private cloud, right? Somebody who's using the OpenStack code but chose not to implement — I think key pairs was the example we have. We're gonna post this as a blog post after we get it edited a little and make sure some of the statements in there don't freak people out. Sprocket ships as much of OpenStack as they possibly can; they missed one thing that they needed, so they have to add that, and that becomes a thing. They don't implement Swift in our example; they use Ceph, which is pretty common in private cloud implementations. We are currently discussing whether or not Swift has designated code. Once that's resolved, we'll be able to answer this question. If Swift has no designated code, but Ceph can pass the Swift API tests — rock on, you're good. If it's 100% designated, then you need to ship Swift. And if it's 10%, then we have to figure out what that piece is, and Sprocket would need to ship it. My expectation is that OpenStack projects should have a degree of designated code, and we'll eventually have a target percentage. And then they use Heat — great. There are no tests for it yet, but there will be some designated code in Heat. 
The Hyper-V sub-driver: no designated code, and it doesn't have tests specifically — actually, I think it does have tests. So they're all in good shape there. And this is one of the things: shipping products that have OpenStack projects at their core — we love that. That's great. It doesn't make it bad. We're just talking about use of the brand. And Mist: this one actually isn't impacted by DefCore at all, but we're deliberately trying to engage with the client library community, because if they need APIs, we can't deprecate those. And if they're consuming APIs that are outside of Core, it's a good indicator that maybe those belong in Core, right? So this fictitious API client called Mist — they should tell us what they're doing. And if they add tests to Tempest for capabilities they're using, that's very helpful, because then we can consider those in the future. I think I'm gonna rename this one Meow because I like the graphic. Yay — on to the tools. So this is a good place to pause again in case there's a question about those things. Does that help? The major feedback we got when we did the first presentations of this was: so what does it mean? Does this help people understand a little better what the impact is? I'm seeing there's a question back there. Okay. So I'm gonna repeat your question, rephrasing a little. The idea is: can people fake it out and cheat? I love that question, because we get asked that a fair bit. And the answer is yes — until your customer runs the test suite that you said you passed on that cloud and it fails. 
And so one of the things — and this is actually a great setup for this — one of the things that we're doing in making these tools generally available, and TCup is a big part of this, is making it incredibly accessible for people who aren't OpenStack developers to use this information and share this information. So if a vendor is saying one thing and publishing it to get the mark, and then their customers try to repeat that experience and don't have the same experience, that's gonna come out very, very quickly. If they truly did it in a way that hid everything, and they created this black box and stuck in code that they shouldn't have — we're not trying to be an enforcement agency. If they can pass all the tests for interoperability and somehow make it work behind the scenes, fine. The other thing is, there are not an infinite number of vendors using this process. Again, this is a process that applies to licensees. In order to be a licensee of the OpenStack trademark, you either have to be a corporate sponsor, a gold member, or a platinum member. So there are only 300-ish companies in that bucket, and not even all of them are licensees today. We have a small ecosystem with a lot of peer pressure. We're not super concerned about policing it, because the reality of getting caught is pretty extreme. There are 17,000 members of the foundation; it'll be pretty public if you're cheating. All right. So — once again, kicking the can down the road — if that becomes an issue, then we'll have to take action against it. But we hope that what we've got to show you with the tools and tests makes it very hard for somebody to say one thing and then ship something different. All right. RefStack. Okay, so we bought this domain name. We built a community portal. The idea being: when vendors certify by running RefStack, the code, they end up with a scorecard that lives on RefStack, the website. 
And this is so that when folks are thinking about using OpenStack, either software or services, they can go to one website and say: okay, which capabilities are actually gonna be available broadly? So if they're thinking about hybrid, or they're thinking about avoiding lock-in, they can say: okay, if I consume these APIs, I know that I can go and buy OpenStack from different people — I can get it from this public cloud or that public cloud — and what I'm using will work. I'm not getting locked into an API that is only supported by a small ecosystem of vendors. Conversely, the vendors can also highlight that they are beyond the bleeding edge — that they're already supporting Trove, that they're already shipping Heat, whatever those other sets of capabilities are — in their scorecard. And they might be able to use that to say: hey, if you're really excited about what's coming in OpenStack, we're already shipping that for you. We also wanna be able to show how people's compliance has improved over time, and particularly, again, for the API and tools community to consume that. And I wanna reinforce that we're not trying to post just the must-pass tests — that's 15% of the base. The goal is to show every test, or at least every positive test, because we suspect people are gonna prefer to post only their passing tests. But the idea here is that this will allow us to see up-and-coming capabilities that are widely deployed in OpenStack. So if a lot of people are interested in Trove and we start seeing the Trove tests passing, that's data that indicates it's on track to get into Core, and then the market can actually start to implement it more often in their clouds, and it becomes an interoperability candidate, which is really what we're driving at. If we get this data, we make better decisions about what to highlight and what to include. That's the whole point. 
One thing we skipped over earlier is the timeline — I think because we've given this talk so many times, we forget pieces. What we're doing right now is living in the past: the entire matrix that we just scored is for Havana, and obviously Icehouse just came out. We are hoping to have the Icehouse matrix done within 90 days, then by Juno to have the Core scoring matrix done at the same time as the release, and at about the same time as Juno to be enforcing those requirements against vendors. So right now this is advisory to the existing vendors, to say: hey guys, it looks like your product or service doesn't meet the definition of Core; six months from now we're gonna be having a tough conversation about your license and use of the mark, so you probably wanna do whatever remediation is required. And I'm hoping that in Paris, when we certify Juno, we will also retroactively certify — or at the same time certify — Icehouse as an advisory, and say: look, you can actually go and test your products right now and see if the Icehouse ones are in compliance. The other piece of that preliminary period is that we're not publicly posting the scorecards on RefStack. The tools are available; vendors can use them, generate their own scorecards, and use them to understand where they're at. But we're not gonna expose everybody's dirty laundry when they haven't had a chance to work on it. And this comes back — we'll return to the call to action, but it's worth reinforcing — it is really important, really helpful, for people to come in and say: yes, I think this makes sense. Because when the community says these things make sense, I understand them, the vendors start to listen. You're ultimately the influencers the vendors care about. So we'll come back to why that's important in the call to action, but read them, respond to them. And if you don't like them, that's okay. 
We wanna hear what's wrong; we're adjusting this process constantly. We're very receptive to feedback, positive and negative — in some ways the negative helps us the most. So do not think they don't wanna hear it; we do. We've made a lot of changes over time. All right. You wanna take this one? I'll take this one. So, RefStack: I've been working really closely with the code side, and actually Piston has some people on this here — David should be here too — and Huawei, IBM, some people from Dell, and a whole bunch of people who came up at the summit to get more involved have been helping with the code side of RefStack. This drives the public site. But what we found is that a lot of companies have been building Tempest results UIs. They run Tempest as part of their normal process, both for building deployments and for actual testing, like building their own gate. And they wanna be able to take those results and start comparing them, and see if they're improving over time as they tune their deployment engines. And they were repeating the work. So that work is synergistic with RefStack from the UI perspective, the public UI. So we have a lot of people who are investing in it because they wanna use it internally. It's really helpful to have the same thing that you're gonna get scored on publicly available privately too, and that's perfectly fine. We also have folks using it to validate deployments: when they're out doing a customer deployment of OpenStack, they just run it — it's a real quick way to see that everything passes green, that the configuration matches what you were expecting. The Dell team has actually been doing that for several releases; it's our final site-validation test. And there is this caveat, because we keep talking about Docker and TCup: there's no Docker in RefStack. Just so everyone's clear. 
We did not introduce a Docker dependency into a chunk of OpenStack, which I know a number of people were nervous about. But TCup does use Docker, and we're gonna talk about Docker and TCup next. TCup is a very small sliver of what we're doing, but it's an important one, because it's the community-facing piece. So RefStack is set up to create community feedback: run the Tempest tests against your product — remember when we were talking about catching cheaters. To do that, we need people who don't know how to do a git clone, who don't know how to do a pip install, who have never set up DevStack and are never going to, to be able to use this. And more importantly, we need our community members not to spend hours on the phone helping them figure out a Python dependency issue that they couldn't resolve. So what we did was take the RefStack project and put it in a container, because that's highly portable, very repeatable, and a much smaller surface to support. TCup is Tempest in a Container, and then I added Upload and Probe because I love the phrase. It's an acronym. Yes, it is an acronym. And so that's the idea with TCup. The flow is: you put your credentials in, it tests your cloud, and after the test is completed, it uploads the results to the RefStack database. Pretty straightforward. You could point it at another database if you want. That's the flow. What it looks like is this: you get the Python file — minimal dependencies by design. You source your OpenRC file, which is your credentials, your login, your Keystone API endpoint. You run the Python file. It takes a while — it runs the entire Tempest test suite. It builds the container, it runs the test suite, it uploads the results. 
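The flow just described might be sketched like this. The image name, the set of environment variables, the option names, and the upload URL are all assumptions for illustration, not the real tcup.py interface — the real thing reads the credentials that `source openrc` exported and drives the container for you.

```python
import os

# Hypothetical sketch of the TCup flow: pass the OpenRC credentials into a
# Docker container that runs the full Tempest suite and uploads the results.
def build_tcup_command(env, image="tcup/tempest",
                       upload_url="https://refstack.example.org/api/results"):
    """Assemble the docker invocation that tests the target cloud."""
    cmd = ["docker", "run", "--rm"]
    for key in ("OS_AUTH_URL", "OS_USERNAME", "OS_PASSWORD", "OS_TENANT_NAME"):
        cmd += ["-e", f"{key}={env[key]}"]       # hand the cloud credentials in
    return cmd + [image, "--upload", upload_url]  # container uploads when done

if __name__ == "__main__":
    import subprocess
    # In real use, the env vars come from running `source openrc` first.
    subprocess.run(build_tcup_command(os.environ), check=True)
```

The design point is that the container is the only dependency: the user never installs Tempest, its Python dependencies, or DevStack on their own machine.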
And there's a manual mode so that if we have to help people, they can do it, or if they just want to run the tests manually and see the results, they can do that too. But it's mostly selfish for us, because we don't want to have to play IT support for everybody who's trying to figure out their Scientific Linux variant that I haven't used yet. Okay. Are there any questions on that, by the way, before we move on? Yeah, there's one over there. So this isn't true at all, and this is where people get really confused. The question was: does this mean we can't test anything that has specific hardware drivers? Just to be clear, the container does not have to be in your cloud, right? Tempest doesn't actually have to be anywhere near your cloud. What Tempest is running is API calls against a cloud somewhere else. So typically TCUP runs on your laptop or in a VM somewhere. It's designed to be like, oh, I've got Docker on my laptop because I'm a dev, and lots of devs have Docker on their laptops. And so I just run TCUP on my laptop, but it's pointed at a cloud somewhere else. That cloud can have hardware drivers. It could have a magic Nova driver that uses robot arms, which I still think is a cool idea. You can point it at a public cloud. And that's deliberately what we want to do: have community validation that public clouds are providing the APIs that they claim they are, in different availability zones and regions, whatever else. So there's no limitation on the kind of OpenStack. What we did have to limit is that your admin API tests are probably going to fail unless you have admin credentials. And so we've said we're not going to have any admin APIs as part of core. And the reason is, I don't have admin credentials on the HP cloud. I would like to have admin credentials there, I think that'd be really fun, but they won't give them to me.
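Since core excludes admin APIs, a run against a cloud where you only hold ordinary user credentials has to drop admin-only tests first. Tempest keeps its admin tests in `.admin.` modules by convention (e.g. `tempest.api.compute.admin`); the helper itself is a hypothetical sketch, not DefCore's actual tooling.

```python
def non_admin_tests(test_names):
    """Filter out admin-only Tempest tests.

    Ordinary users of a public cloud have no admin credentials, so DefCore
    excludes admin capabilities from core. Tempest groups admin tests in
    ".admin." modules, which is the naming convention this filter relies on.
    """
    return [t for t in test_names if ".admin." not in t]
```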
And since this is true for most community members, who don't necessarily have admin credentials on any clouds, we can't get that kind of experience for interop on that side. And that reminds me of one of the things I keep taking for granted: our goal is to be able to test private clouds too. RefStack originally, if you go back in time, was about probing service providers' clouds where there was access. And TCUP came out of saying that there are more private clouds that we want to test. If we want interoperability, we need to help a company who has bought a cloud and installed it to test it and get it running. Because once again, I'll use Piston as an example: if a customer buys a Piston cloud, that customer, without any help from Piston, should be able to test that cloud, right? The CIO should be able to sit down at their desk, open their Linux desktop, and run TCUP without ever having installed anything else. And we felt like that was a really important thing, because that's where we're going to drive interoperability. And a lot of companies we talk to have multiple private clouds, so being able to test interoperability between their own clouds, between different versions of their clouds, ends up being really useful, and it lets us collect data about those sites and systems much more broadly than just probing the external interfaces of service providers. I mean, our original thesis was that if you were a software vendor, you would have to stand up a copy of your product, literally a public cloud of your private cloud software, in order for RefStack to test it. And nobody was willing to do that. Some of us couldn't. Yes, some of us are not allowed. It's sad. Any other questions? Did you have another question? Yeah. This question came up yesterday. So the question is: Tempest is non-deterministic. What then?
So we're not part of the QA team, although there's a debate about whether RefStack should actually join OpenStack QA or be put into the OpenStack Infra program. If we were part of the QA team, we would have to figure out how to fix this. Since we are a customer of the QA team, I would really like them to figure out how to fix this. That's a totally unfair answer. We had lunch with them yesterday. We're collaborating with them, and there's an element of, hey, you guys are encouraging tests and Tempest use. Oh my God, you're encouraging tests and Tempest use. So this is the whole thing, right? I'm a huge fan of test-driven development and more tests and automated tests, so we're deliberately creating a lot of pressure in the community to improve the test coverage and the test stability of OpenStack. Ultimately a good thing; short term, we're going to expose gaps. And the other thing is, if a test can't pass reliably, that capability probably can't be core, because it also means the capability isn't reliable, or the test is not exercising the capability adequately, and we should look at better tests. We should make that a requirement. We did; it's probably expressed in one of the criteria. No, in the criteria we didn't. Well, test reliability is definitely a requirement. So, the resolution process. Let's repeat the question: what is the resolution process for disputes over criteria and designated tests? Right now it's the DefCore committee. I think it will probably always be the DefCore committee, although we will cycle through other people, because if Rob and I keep spending this much time in this many meetings, we'll never do anything else. At some point we're going to sunset that. The DefCore committee has been expanded to include some members from the TC, so we have TC liaisons, which has been really helpful. Let me be much simpler.
We haven't worked that out yet, but I think it's a great question, and what we will do is set up at least two public meetings to review the final list, and send out information about it to the community list so that people can participate in a community review of the list. So thank you, yes. We try to engage the community as much as possible, and that's exactly the type of feedback that we want. We've done this before, when we went through the principles to get community feedback. There will be two meetings: one favorable to Asia time zones, so evening in the US, and one in the morning that's favorable to European time zones. This is normal for us, but you're right, we haven't scheduled it yet. Actually, it's really good to highlight that we've done a lot of outreach with the board, and we've done a lot of outreach with the gold members. We've not actually been talking to the rest of the corporate sponsors and licensees, although the foundation staff, I think, has been reaching out to them to keep them in the loop. All right, we need a lot of help. Oh, sorry, another question there. Last question. Versioning is fun. So we spent a lot of time this week with the Tempest team. There's a blueprint for branchless Tempest, which is designed to be able to test backwards across older releases with the current branch of Tempest. It's not done yet. Right now, RefStack has a separate set of capabilities JSON files for each release. So there's a Havana one, et cetera. And we use the Tempest branch as of that release, so you're really only certifying against the test suite as it existed when that release was cut. There's a lot of work still going on inside RefStack to figure out how we deal with stable. What about when we introduce new tests in stable that prove the capabilities are actually broken? What do we do there? From a license standpoint, the belief now is that if you certified as of that release, you're fine.
And if new tests get included later into stable that prove those capabilities don't work, well, you did pass at the time; that's fair game. But the DefCore capability set is a per-release statement, and vendors will be expected to certify each release: each time they come out with a new product on a release, they'll be expected to make a statement about that release. So that's gone back and forth. I'm happy to take the question, but we'll answer it in a second; we are technically out of time, so we need to wrap up. There's a lot of discussion about changing the marks. The foundation was just presenting to the board, and has some more work to do to present to us, about simplifying the marks and creating fewer marks, right: Compatible and Powered. The license requires vendors, including service providers, to declare what version they're shipping, but it's not incorporated in the mark; it's an additional requirement. So the name mark versus the brand mark versus the versioning pieces are all still in the license, but they're not incorporated in a very simple way yet. We've gotten the feedback pretty clearly: fewer marks is better. So it's going to be an uphill battle to add any additional marks. There's Training, Compatible and Powered, right? That's the direction things are trending. Actually, I want to flip back to this. We'll post the slides, but please, please help us make this process work: bring up concerns, bring up issues. If nothing else, just say, hey, this made sense to me. We'd love some plus-ones. That's going to help move this process faster, because if we only get silence, we go slower. If you want us to go faster, and most people we hear from do, we need the positive feedback, we need the conversation, the dialogue. That actually accelerates this process more than leaving the room and just saying, oh, it's nice, I like it, while assuming that everything's on track.
Speed comes from community consensus. We're out of time. Thanks. Thank you all very much.