All right, people, I think we're ready here. Hola, buenos días. All right, everyone. So we're gonna try to do things a little bit differently here today. We obviously have this wonderful, amazing panel full of wonderful, amazing people, but we recognize that we also have all of you amazing people in the room here. And to keep things open in that spirit, we actually even have an etherpad going. The link is up there; I think several people have tweeted it. The hope is that — since there are obviously a lot of ideas, a lot of opinions, a lot of definitions even of what interoperability is, and with one of the big themes of this summit being the Interop Challenge — it's just a great idea for everyone to get together and discuss some of these topics. So if you've got a question, like I said, don't be shy: put something in the etherpad, or shout it out and I can run up to you with a mic. You can do whatever you want in the etherpad. If you want to post your favorite ASCII art, I don't care, that seems exciting. So, all right, we might as well go ahead and get started. Obviously, with the name of this presentation being "Interop: what you think it means isn't necessarily what I think it means," maybe we should start out with whatever the official definition of interop is. And I'd like each of the panelists to introduce themselves. Chris, being that you represent the Foundation, I think you should go first.

Hi everybody, my name is Chris Hoge and I'm the interop engineer for the OpenStack Foundation. Working on interoperability has been kind of my full-time job for a couple of years now, and I think of a more formal definition of what interoperability means, which boils down to three things. Interfaces that are discoverable, meaning that you're able to discover what an interface is and how to use it. Durable, meaning that it persists over time — if you think about interoperable interfaces that we deal with every day in our lives, like power sockets: they vary from country to country, but within a country it's always the same no matter where you go, and it's always usable. And the final thing being open: interfaces that aren't proprietary and aren't restricted to only one particular group. So those are the three conditions that I see when we talk about an interoperable interface.

Awesome. So that's our official definition, but how about the rest of you? How do you view interoperability? Who wants to go first? Yeah, whoever's most eager.

I'll go first. So my name is Rob Hirschfeld. I served on the OpenStack board for four years, and I'm sort of considered the DefCore person, although others have since taken that over — Mark, thank you guys for carrying the torch. In my day job, I am a co-founder of a company that does hybrid infrastructure automation, so we really care about things like making OpenStack, Google, Amazon, and metal all work together — right, interop. My definition of interop, after going through the crucible of DefCore, is a contract. It really comes back to a contract that multiple people have agreed to enforce. And that means that when you are using a service, that service provides a contract that you can count on to persist, both over time and across multiple vendors.
So my name is Paul Czarkowski. I'm a cloud engineer at IBM. My perspective is the operator-slash-user perspective, and my thoughts around interop are less about the API specifically and more about the usability and the behavior. It doesn't worry me so much if one cloud has a slightly different API than another, or a slightly different way of giving me a network than another — just that it can give me a network, and that the way it gives me a network is documented well enough and supported in the ecosystem of tools for doing deployments on the cloud. That's where I'm focused when thinking about interop.

Okay. So my name is Catherine Diep. I work for IBM, and I have been working with Rob since day one on interoperability for OpenStack. As you can see, each one of us has a different definition of interop. So for me, the ability to test whatever definition of interop we all agree on — and there's still a lot of discussion, it's still evolving — that testability aspect is essential, essential for us to know whether we really have a common core or not. So whatever interop definition we agree on, we should never forget the testability aspect of it. And the reason my definition is so centered on the testing aspect is that I'm the PTL of the RefStack project.

Awesome, thanks guys. So now, as we discussed with our panelists, let's get some morning exercise here, because let's be honest, this is pretty early for Barcelona time. Everyone bear with me. If you're a user of an OpenStack cloud, raise your hand. Oh yeah, I kind of figured at an OpenStack conference that might be the case — keep your hands up, everyone, keep your hands up. All right, now, if you also operate an OpenStack cloud, raise your hand, but keep that other hand in the air if you're part of the first group. Yeah, yeah, get those hands in the air like you just don't care. All right, finally, how many of you are currently using some form of multi-cloud? If the answer is yes and you already have two hands in the air, you'd better just give yourself a hug, because you probably need one. So, all right, that's good to know about the room. To get into the meat of it, we've already netted out some of the topics that we want to discuss. I think the first one is: why are we even talking about interoperability when we have this amazing, wonderful universe of OpenStack — and it is just one code base? Why is interoperability even an issue when you have just one code base? So, you guys — who's most eager to... Paul, you look really eager.

I'll happily take that one. So, we deploy a lot of clouds, and there are something like 4,000 configuration settings — something ridiculous — that you can set inside of OpenStack, and each one of those, in some way, shape, or form, affects the behavior of the resulting cloud. Even if you're fairly cookie-cutter in how you build them, some choices — either yours or your customers' choices about whether they want Swift, whether they want block storage, what the networking in the data center looks like — make small to large changes to the behavior of the cloud they end up with. The APIs might all be the same, but the underlying behaviors can become quite different.

Right — and I'm not sure the APIs are all the same. I think it's a really interesting point, the idea that OpenStack is one thing.
Hopefully people in the room understand it's not, even just in terms of which projects you've implemented. The fallacy we sort of started with when we did the DefCore work — there was a lot of discussion about using code and APIs and requiring both. And as we got into the meat of the matter, it became clear that when you look at time drift and different versions, OpenStack has significant variation from version to version: code drifts, APIs drift, things like that. It becomes very hard to actually point at any OpenStack cloud and say what code is in it, what code it's running, whether it's the right code. What we really found as we got into testing is that it became very hard to use the code as the standard — the idea that if we all use the same code, that will solve all the problems. And that's not even counting config: you could use the same code and configure it a thousand ways. So we really found that interop was not solved by the assumption that same code means same behavior, same API.

Yeah, all right. Chris, Catherine, anything?

Yeah — kind of what they said a second ago: there are so many different configurations and so many different drivers, and this is part of the openness of our community. But I actually do think that the work we've done on interoperability has helped, by defining what behavior there needs to be to be able to call yourself OpenStack in some official, legal way. I think it's helped define a core set of APIs and some expected behavior. And while it may not cover all the behavior that you can build out with an OpenStack cloud, I think it's helped focus our community in general — not just the development community but also the vendor community — to make them realize that even though there is a tremendous number of configuration options, if you drift too far away from what the DefCore working group — under the board, the Foundation, and the community — has defined to be the essential capabilities that a cloud has to have, that's okay, because we're open, but it does mean that you don't get to call yourself OpenStack anymore.

So for me, being able to test things — my dream is that I should not care what you configure or how you build your cloud. If I want a VM with connectivity to the outside world, I should be able to just get that without worrying about the underlying cloud configuration. That is the goal we're trying to get to. Maybe one way to start — and we already did this — is to define a very finite set of behaviors that we all try to enforce. Just to give an example: if I would like to create a VM with external connectivity, as a tester I should not care whether you use a floating IP, or a VLAN with publicly accessible addresses, or anything like that. All I know is that I would be able to create a VM and I would have a connection. That's the goal we would like to get to.
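To make that example concrete, here is a minimal sketch using the shade library, which comes up later in this discussion. The cloud name, image, and flavor are placeholder assumptions; the point is that shade's auto_ip flag expresses exactly this behavior-level contract — the caller asks for a reachable address and doesn't care whether the cloud implements it with a floating IP or a directly routed network.

    # Minimal sketch: ask for "a VM with external connectivity" without
    # caring how the cloud provides it. Assumes shade is installed and a
    # cloud named "mycloud" exists in clouds.yaml; the image and flavor
    # names are placeholders for whatever your cloud actually offers.
    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')

    server = cloud.create_server(
        name='interop-demo',
        image='ubuntu-16.04',   # placeholder image name
        flavor='m1.small',      # placeholder flavor name
        auto_ip=True,           # floating IP or provider network -- shade decides
        wait=True,
    )

    # interface_ip is the address shade considers reachable, regardless of
    # whether it came from a floating IP pool or a directly routed network.
    print(server['interface_ip'])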
I'd like to discuss that a little bit more. One of the things I think is really interesting to think about here is that a lot of people at OpenStack and at the OpenStack Summit are using OpenStack for their own internal purposes, and they're trying to be successful. CERN is a great example: a huge pool of OpenStack resources that they consume internally. But that doesn't create an ecosystem on top of OpenStack. And part of the original vision of the board, going back to the first summit, was to create an ecosystem around OpenStack where a vendor could say: I can sell you a service or a product or an extension that is portable. For the vendor to create that ecosystem in a market, they have to be able to sell it across multiple vendors. And if they can't show up at your site with confidence that your OpenStack cloud is gonna work the way somebody else's OpenStack cloud has worked — or easily figure out what the differences are — then we haven't actually created an ecosystem on top of OpenStack, right? A lot of people here are focused on just using OpenStack for their private infrastructure, and they don't think about the benefits we get when we can actually move workloads or vendors or capabilities site to site to site, or customer to customer to customer. It's a very important thing to build.

Great — and that transitions to the next point. Obviously, we had this really amazing keynote yesterday, and it's hard enough to get 16 people to order pizza together and figure out how to make that work, let alone pull off an interoperability challenge. That's a real challenge. So if we could give a round of applause for everyone who participated in that, I think that would just be amazing. How many of the people who were on the stage do we have here? A couple? Yeah — really great job, everyone who was involved with that. A lot of work went into it. And so with that, I think it's interesting to understand the journey that led up to something like an interoperability challenge. Obviously we have things like RefStack and DefCore that have been looking at some of these interoperability issues for a while, but they've now come to be part of this challenge. How would you describe that journey? I think many of you have been around since the beginning. Okay, let's go down the line together.

Okay, so for me: DefCore defines a core that RefStack then tries to test and enforce. From that point of view, it's still at the foundation level. With the Interop Challenge, one of the things we try to find out is: with these criteria and the testing that we have done, where are we, in reality, in terms of interoperability? We go up one level, to the application level — in this case, a LAMP stack: how does it behave? Are our criteria good so far? If there are gaps, what are they?

Yeah, I'll go next. For me, it started about a year ago, when I was having an argument with Rob on Twitter — which, if you don't frequently have arguments with Rob on Twitter, you should do it, it's a lot of fun.

They're probably trolling me right now.

Which is good — please do. And out of that, I thought: okay, I really want to show where I think interop should work. So I put together a bunch of tooling using Terraform, which I put up on the OSOps Contrib GitHub repo, that showed using Terraform to install a few things — I think it was a Docker Swarm, there was Kubernetes, and there was an ELK stack. And I said, look, this is how we can show interop, by using even an external ecosystem tool. And then people can run this and say: hey, this works, this doesn't work —
— and PR changes so that it can work on their cloud as well as our cloud, and kind of figure out what some of the differences are, and maybe even push changes up to Terraform and say: hey, here are some changes we need to make to your OpenStack support to make it more suitable for more clouds. And given that the Interop Challenge actually ended up in OSOps Contrib, I'm going to take credit for most of it.

Yeah — there's a Twitter war starting right there.

This was a long-term vision. It took three years to go through the board process, right? We had to start with principles; we had to really work through what we were doing and why we were doing it, and there were a lot of compromises and pieces along the way. It was, obviously, a very political process. Because at the end of the day, for interop you have to be able to say no to things, and you can't say no in an arbitrary way in a community like this. You have to give reasons, you have to give rationale, you have to give weighting and a process for doing it. And one of the things that's worth thinking through with this interop question is that when you say "this is the standard part of OpenStack," that means there's a whole bunch of stuff that's not. Especially with the Big Tent showing up, we have this huge bathtub curve of APIs, where there's a small set that are gonna be common and a whole bunch of optional pieces. You have to have a process that says: yes, this has become a standard part, and this one hasn't. RefStack is a big part of that — that's why it's been in the works for a long time — and DefCore is the process by which we say yes and no to things. It has to be transparent and predictable. And actually, one of the things that was really important for us early on is that it has to drive community behaviors. If you create a system that allows gamification of the APIs — well, those APIs have commercial value. It's important, when you think about it: hey, we're talking about interop and making all this stuff work, and it's a big deal, but we also don't wanna create interop in a way that advantages one vendor, or has them pull the project in strange ways. You have to put that hat on when you think about this from a historic perspective.

And it's not just about the definition of interoperability that the DefCore working group has come up with — so much of the success on stage yesterday depended on tools that the community built in the OpenStack ecosystem. You can talk about the shade library, about which the original joke was that every line of shade should be considered a bug, because each line exists where there's a difference between clouds in OpenStack. But I think the authors of that library have changed their point of view on that, and they're actually seeing it as the interoperability library: once we accept that there are going to be some level of differences between clouds in our ecosystem, shade provides us a common way to access those clouds, and for the end user it hides some of those differences, like Catherine was talking about. If you only care about booting a machine with a network on it, then shade is a way that allows you to do that. So it's not just work that's done by the DefCore committee, but also work that's been done by our community, by the people who are using clouds on a day-to-day basis.
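As a sketch of what that hiding-of-differences looks like in practice — with two clouds defined in clouds.yaml under the hypothetical names cloud-a and cloud-b, and placeholder image and flavor names — the calling code stays identical even if one cloud hands out floating IPs and the other routes addresses directly:

    # Same application code against multiple OpenStack clouds; shade reads
    # the connection details for each from clouds.yaml. The cloud names,
    # image, and flavor below are placeholder assumptions.
    import shade

    for cloud_name in ('cloud-a', 'cloud-b'):
        cloud = shade.openstack_cloud(cloud=cloud_name)

        # Look up resources by name rather than by UUID, since UUIDs are
        # never portable between clouds.
        image = cloud.get_image('ubuntu-16.04')
        flavor = cloud.get_flavor('m1.small')

        server = cloud.create_server(
            name='interop-demo-%s' % cloud_name,
            image=image,
            flavor=flavor,
            auto_ip=True,
            wait=True,
        )
        print(cloud_name, server['interface_ip'])

        # Clean up, including any floating IP shade allocated for us.
        cloud.delete_server(server['id'], wait=True, delete_ips=True)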
And I think it really speaks to the strength and maturity of OpenStack that we had 16 different clouds on stage, including different processor architectures.

Great — I was just about to ask if anyone has any questions. We actually have t-shirts for those of you who are ready to get involved. One-size-fits-all t-shirts. Yeah?

Chris, we're happy that you mentioned shade, because when we did the Interop Challenge, that's the tool we used the most — well, that's the only tool we used, actually. So my question is: moving forward, how does the Interop Challenge group work with the shade project and make sure that whatever we find actually influences that project, so that the API is capable of doing what we want it to do?

I mean, it's an open project within our ecosystem, and that's one of the nice things about it: as people's needs grow, they can grow the capabilities of the library too. But also, from the point of view of the Interoperability Working Group, it informs us. We have criteria that help us define what we think are the APIs that are important to try to enforce across clouds, and shade in some sense becomes a tool for identifying which APIs real cloud users are actually using. We can take that into account when we're evaluating current and future APIs for inclusion.

Okay, thanks.

I don't share the enthusiasm for shade. I agree with the original idea that shade's a defect. And I think that while it gives us a way to cope with the fact that we have a lot of heterogeneity among the cloud infrastructures, it keeps us from actually solving the problems, because shade's a Python library. If you're consuming the clouds from other libraries, or even from the OpenStack CLI, you don't get those benefits. And it still puts the onus back on the user — the person creating shade's configuration file. It doesn't actually address the underlying interoperability problems that OpenStack has. Even more: when I look at interoperability — because I have tooling that works on Amazon, Google, OpenStack, and physical infrastructure — the behavior differences, going back to one of Paul's points, the behavior differences between those different clouds are very concerning. And the fact that we can gloss over that with a Python library keeps us from having the hard discussion about fixing those fundamental problems: making the APIs discoverable, so I can ask an OpenStack cloud what implementation it's running. So we're hiding it. It made the Interop Challenge work well, because we had a Python tool calling into OpenStack, but I don't think that's the same as actually creating real interoperability. When I talk to people in the field, they use Amazon and Google as much as or more than OpenStack, so interoperability with those platforms is important to our user community in a very significant way, and we need to consider that. That, to me, should be as much a part of this discussion.

Clearly — and I think there was a presentation at the summit where somebody showed going from the Horizon dashboard to Amazon or whatever — you're making some valid points. We're always customer-driven, and the crawl phase was: all right, make sure all these OpenStacks are interoperable. Then a second phase, like you're saying, is: okay, now let's make this work well with these other clouds.
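Some of the discoverability Rob is asking for does exist in pieces today. Here is a sketch, using keystoneauth1 with placeholder credentials and endpoint, of asking a cloud which Neutron extensions it actually exposes — one small way to ask a cloud what implementation it's running, rather than finding out by failure:

    # Sketch: ask the cloud what it supports, rather than trying and
    # failing. Neutron publishes its extension list; all credentials and
    # URLs below are placeholders.
    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://example.com:5000/v3',   # placeholder endpoint
        username='demo',
        password='secret',
        project_name='demo',
        user_domain_name='Default',
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)

    # The extension list tells you whether, e.g., floating IPs (l3) or
    # security groups are even implemented on this particular cloud.
    resp = sess.get('/v2.0/extensions',
                    endpoint_filter={'service_type': 'network'})
    for ext in resp.json()['extensions']:
        print(ext['alias'], '-', ext['name'])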
Yeah — I don't mean to diminish the challenge. The challenge was a significant accomplishment, and it shows how far we've gotten with API interoperability, and that DefCore and RefStack are helping people get to a point where the APIs are consistently working, which is a big deal. When we started this stuff back at the San Diego Summit, you couldn't count on OpenStack APIs to work together at the most basic levels. There were all these extensions; it was crazy. So: yay, huge. But I also think that in a community like this it's easy to take something that's sort of a balm, say "all right, we don't have to worry about that anymore," and find it harder to keep the focus on the hard problems.

But I don't think the two exist in isolation from one another. One thing that I've seen in the two years since I started this work is what I would almost call a recommitment from the community to preserve API compatibility across versions. You look at things like Cinder, which just announced its version 3 API, which is actually entirely backwards compatible with version 2. Or you look at work being done within the Glance community — in part as a direct response to the concerns raised by the DefCore working group — about how to build a discoverable image API that still allows you to have multiple implementations. Or the work that has recently landed in Neutron on a single API for attaching a network. We don't wanna take all of the credit for these things, but the work we've been doing with DefCore has helped drive some of that development.

And I'd like to point out that, in many ways, DefCore kicked off the whole awareness of the multiple ways of doing things and the need to actually converge. Now there are multiple projects working towards that: we've got DefCore and RefStack, we've got the Interop Challenge, we have the OpenStack client, we have the API working group. I think the real key to keeping this going and spurring it on — and it's also happening right now — is that nodepool is being expanded toward much more multi-node testing, as opposed to the old DevStack model of everything on a single machine. When the developers start feeling more of the pain and seeing the differences — and they are starting to — I think we as a community start to converge, and I think that's a sign of maturity. And to me, the API is the foundation layer. If the API level does not interoperate, then shade, Terraform, Ansible — they will not work. So that foundation layer that DefCore has been carving out — this limited set of important APIs that need to be interoperable and compatible across the whole OpenStack ecosystem — is an essential foundation layer.

So, back on the shade topic a little bit: I don't really feel like each line of shade demonstrates a defect. I think it's more that each line of shade demonstrates somewhere we could probably improve things. The way I look at it, if you're using the CLI or Horizon or whatever, you're probably not doing a ton of complicated things. If you're doing complicated things, you're gonna be using something out in the ecosystem like Terraform or Ansible or Chef or Puppet or whatever, or one of the many cloud SDKs, and going that way — doing the whole infrastructure-as-code thing. And if you're doing that, then we have ways of working with the differences in the different clouds.
We just need to know that those differences exist, and we need to find out what the capabilities are. Which is kind of what Rob was touching on: can we ask the cloud to tell us what it supports, or can we at least get good pointers to documentation? And can we get people working with the ecosystem — with Terraform, Gophercloud, libcloud, all those things — to help bridge those gaps and bring shade-like usability into some of the other tools that are commonly used? We probably have information about which of those tools people use in the OpenStack user survey and such, so that may help us target which of them are the best ones to really show off as best of breed — these are the best ones to use with OpenStack — and then hopefully that encourages the rest of the ecosystem to improve, because it forms some healthy competition.

So I think that's an interesting point — and Mark has a question next. The issue is that the shade configuration file is maybe the real defect. If you could inspect an OpenStack cloud and get that configuration information directly, that would be sufficient in my mind. It's the fact that you have to put an entry into a file that says "these are the behavioral characteristics of my OpenStack cloud." One other thing that we sort of gloss over, but was a huge point when DefCore came out: it's not a version — it's not an OpenStack version spec. And this is a really important point for people to think about with OpenStack. The DefCore standards are dates. They say: on this date, this is what we expect your behavior to be. And then you can go back and say, I wanna go back to the March 2016 spec, and I need you to conform to that. We're not saying it's Cactus or Kilo or Mitaka or Liberty. We're saying: this is the API spec you conform to as of those dates. So it becomes a sliding window of date conformance. It's important, when you think about interop, that you're not supposed to care what version somebody's running. And that's a very different thought process if you're used to thinking, "oh, I started running Liberty and it's gonna solve these problems." From an interop perspective, it's not about Liberty or any version. It's about the behaviors that you get.

So one of the themes you're both hitting on there is discoverability, which has been a hot topic with DefCore lately. And especially on the operator side — operator choices seem to impact discoverability a lot. It's things like: I can make a change to policy.json that says you can't upload an image to my cloud, right? And there's actually no good way for a client to discover that, other than to try and fail. Which is kind of a terrible thing for end users.
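Today that discovery-by-failure looks something like the sketch below — again using shade, with the cloud name and file path as placeholder assumptions. There is no API to ask in advance whether image upload is allowed, so catching the error is the only discovery mechanism:

    # Sketch of the "try and fail" problem: there is no API to ask whether
    # image upload is allowed, so the only way to find out is to attempt
    # it and catch the error. Assumes shade and a clouds.yaml entry named
    # "mycloud"; the image file path is a placeholder.
    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')

    try:
        image = cloud.create_image(
            'interop-demo-image',
            filename='cirros.img',   # placeholder local image file
            wait=True,
        )
        print('Upload allowed:', image['id'])
    except shade.OpenStackCloudException as e:
        # Typically a 403 driven by policy.json -- and, as noted above,
        # the error text rarely says *why* the request was refused.
        print('Upload refused or failed:', e)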
So, since we have an operator at the end of the bar here — one of the interesting things is that neither the DefCore guidelines today nor the interoperability challenge really addresses the operator side of things. DefCore doesn't accept tests that require admin credentials, and the interoperability challenge was an end-user workload, right? And one of the things you said, Paul, was that the APIs aren't necessarily the thing you care about; it's more the behavioral differences. So what does a set of tests look like for an operator that would be good for interoperability?

I might have to talk into this mic over here.

You know what, I'll stall for you. So, Paul thinks this is a really challenging question.

That actually gave me a couple of moments to think. So — when we're doing CI for our tooling, we are checking: can I spin up a VM? Can I attach a Cinder volume to it? Can I attach an IP address to it? We're doing a lot of that sort of stuff. So I think it's about codifying scenarios and really testing out the behaviors — maybe even doing something a little bit complex like Docker Swarm, or all the way up to something crazy like Kubernetes, where there's a lot going on and you're testing a lot of the capabilities of the cloud rather than just a couple of things, and you're testing them all together as a unit, rather than saying: I spun up a machine, I spun it down; I created a Cinder volume, I destroyed the Cinder volume; I put a thing in a bucket, I deleted the thing in the bucket.

Right. I've become a big fan of splitting the DefCore tests from Tempest, because Tempest really tests at that utility level. I think DefCore should add tests that actually test behaviors: did I get a Linux machine? Do I have CentOS available? Do I have an externally accessible network? Those are the operational issues that somebody trying to use the cloud actually cares about, and I'm a big fan of splitting those out. From an admin perspective — this is deep legacy, but DefCore initially scored admin tests, and we, for good reasons, pushed them off into the future. The goal was not to never have admin tests; it was to say that we would have different classes of tests, and that we would eventually get to specialized silos: core plus admin, core plus telco, core plus something else. I would love to see that come back in. We're just not moving as fast as I was hoping a couple of years ago.

And what you mentioned about not being able to upload a Glance image until you try — that's pretty bad, and it's pretty common across a lot of things. And even then, the error messages you get back don't say "you can't do this because of a policy setting"; they just say "error," kind of thing. So getting better errors into the clients is very important. A lot of the time, the only way to find out what happened is to read the logs, and you don't necessarily want to give the user access to read the logs on the hypervisor they're trying to launch a VM on.

And one last thing about operators of clouds: from the point of view of watching over 40 vendors now running the tests and submitting passing test results, the clouds that have the most difficult time passing are the ones where the operators haven't been testing — where they've made implementation decisions, or made changes to code, and they're not running the tests to understand how those changes impact the behavior of the APIs. So one of the best things you can do if you are operating a cloud, or if you are selling a cloud to somebody, is to be continuously testing what you're doing, to make sure the changes don't have unexpected side effects for your users. That's been the biggest pain point for everybody trying to pass their interoperability testing: they come in, they make a change, and they may think they've covered how that change impacts one API, but they don't realize that it changes the behavior of other APIs.
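A sketch of the kind of behavioral scenario test Paul and Rob are describing — and the kind of check an operator could run continuously, per Chris's point. Shade again, with the cloud name, image, and flavor as placeholder assumptions; the assertions are on behaviors (a reachable address, an attached volume), not on individual API calls in isolation:

    # Behavioral scenario rather than isolated API calls: boot a server,
    # attach a volume, verify we got a reachable address, then clean up.
    # Cloud name, image, and flavor are placeholders; a real test suite
    # would guard each cleanup step individually.
    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')

    server = cloud.create_server(
        name='interop-scenario',
        image='ubuntu-16.04',
        flavor='m1.small',
        auto_ip=True,
        wait=True,
    )
    volume = cloud.create_volume(size=1, wait=True)

    try:
        cloud.attach_volume(server, volume, wait=True)

        # The behavioral assertions an operator actually cares about:
        assert server['interface_ip'], 'no externally reachable address'
        attached = cloud.get_volume(volume['id'])
        assert attached['attachments'], 'volume did not attach'
        print('scenario passed on', server['interface_ip'])
    finally:
        cloud.detach_volume(server, volume, wait=True)
        cloud.delete_volume(volume['id'], wait=True)
        cloud.delete_server(server['id'], wait=True, delete_ips=True)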
And I just want to add: if you do run the tests, please run the whole set of API tests, not just the must-pass tests. The whole set gives us a lot of data, so that we can define a more meaningful next set of must-pass tests.

So maybe what we need is a wall of shame somewhere, with a list of the logos of the vendors that aren't fully testing — or aren't testing at all — with DefCore. And we can put that up on the big screen at the keynotes.

The board actually took an action, when we were in this process, to specifically not have walls of shame. But it was a very good idea.

It looks like we're taking an action today, so...

That's actually a funny thing, too, because there are members of the community who will come out very vociferously against particular clouds and say: you're behaving badly, you're doing something this way. And again, in my personal experience, it's useful to know when a product has a problem, but it's also worth recognizing that I don't think anybody in this community is out to create vendor features that are exclusive to them. Generally, when they're part of a community, they want to do the right thing. And in my personal experience, when I communicate the problems in a way that understands the vendor's concerns — because we are a community made up of users, developers, and vendors — they typically want to go back and do the right thing in their product, and I've seen that several times. Once we told them what the right thing was.

We're just about out of time, so I need to get you guys off here. I think we've gotten into some of the other questions already, but unless there are any other questions in the audience right now, I'd love to hear: where do you guys think we go from here? What direction do you think things will take?

I need a moment to think. Okay — so for me, interoperability is a journey. We started three years ago, and we are getting somewhere with the latest guideline: we have more than 200 tests, and we're including the five most important projects. So that is an improvement, but we know this is a journey, and there's a lot of good input here. That's what we need — community involvement — to keep working on interoperability.

I think the biggest success we'll have is when interoperability is just entirely boring, when people don't think about it: you've tested, you run, you know that OpenStack clouds work. I think we're on that path, and that we're gonna reach that point in a year or two. I think we're doing good work.

I agree with those points. I think it's important for the users — the people consuming OpenStack clouds — to ask their vendors and their ops teams to conform to the tests. The biggest liability or danger I see with interoperability in OpenStack is that it's not a user- and consumer-driven thing right now. That is what will make the vendors conform and adopt and move faster. So we have to have people in this room, people in the community, asking and demanding that the interop tests are being followed. That's where the power of DefCore — of the interoperability committee — is gonna come from. Without that power, they're not gonna be able to enforce things, and OpenStack will go back to being a multi-headed beast.

Yeah — I would really like to see a lot more people looking through the stuff in the OSOps repo.
There's Terraform, there's Ansible — I mean, there might be some Heat in there as well — for deploying a bunch of interesting tooling. I'd like to see more people, whether they're users or vendors or operators, testing those out on their clouds and maybe making pull requests to improve them and make them more interoperable. I know when I was doing the Terraform ones, I made a few decisions that were pretty Blue Box specific, where I could either guess at what other people were doing or just leave those specific things in there. So I would bet that a bunch of them, if you tried to run them on your cloud, would almost work — not quite work. I would like to find those parts and keep improving them, and have a place you can send people to: here are some really good, reference-level versions of the ELK stack or whatever it is, that you can install using these tools, to help you learn what the cloud's capabilities are and also how to use tools to interact with them in a very DevOps-y, air-quotes, way.

And I'm pretty sure we're all out of time here, but I did wanna remind people — I believe it's at 11 a.m. — that Catherine and Brad have another session discussing the Interop Challenge. And Rocky. Everyone is invited to that, because I think there's still a lot more to be discussed, and it'd be great to continue the discussion. Are there any last questions from the room? Like I said, let's be respectful of the next set of presenters in this room.

So, hi, I'm the guy who's basically responsible for ensuring that Cloud Foundry installs on OpenStack.

I'm so sorry.

Yeah. And from the most recent user survey, we saw that this is like the second most popular workload people install on OpenStack. And still, it's almost impossible to make sure that it installs on Blue Box as well as on all of the other OpenStacks out there, right? So after following this discussion, I'm still not sure: how are we going to improve that? I mean, you're saying that vendors should adhere to a set of tests. Unfortunately, everything covered in RefStack or DefCore is far from sufficient to make sure it installs, which led us to basically develop our own test suite, right? So we are now pushing that out to people, saying: hey, if you run that and everything is green, then you can be reasonably sure that it actually installs. But this should work out of the box, right?

So, as we said, this is a journey, and absolutely, we're not there yet. What you describe is where we want to get to. When we started this — and Chris and Rob and Mark can talk about this — when we defined the set of DefCore criteria, the intent at the start was to get community involvement. We didn't mean to fail a lot of people at the beginning; attracting community involvement was goal number one. Of course, getting to the stage where you can be sure your Cloud Foundry installs — and it's just a workload that calls the underlying APIs — that is where we want to get to. Are we there yet? I don't think so. And you know that.

Yeah — and just out of personal interest, would you put a link to the Cloud Foundry tests into the etherpad we have up here? Because I'm interested in seeing them.

Right. This, to me, reinforces what I've been trying to say about interop having to be client-driven. The end users have to be putting pressure on the vendors: hey, wait a second, you passed DefCore, you passed RefStack, but it still doesn't work — it's still not compatible.
That lets us bring more tests into it. It lets us continue to tighten what those compatibility requirements are. But it can't just be something that's done from the board down. It really has to be the consumers saying: I want compatible clouds, I want a tighter spec. And the vendors have to do it. There are vendors who don't participate in this effort — major vendors who don't participate in this effort. And I'll be brutally honest: there are vendors who can meet the spec and ship an OpenStack, set it up, and it won't conform in the field. Yeah, exactly — that is your experience; that is what you are seeing. And the only way we're gonna get that to stop is when the people who are buying that cloud run RefStack themselves and say: WTF, vendor, fix this. That's how that gets fixed. Otherwise, all we're doing is creating a nice marketing blitz — until the users care and tell the vendors they're not gonna accept it.

Okay, thanks.

All right, I think we're painfully over time here. I just wanna thank all the panelists and everyone in attendance. I think this was a great session. Thank you very much.

Thanks. Thank you. That's actually a good closing comment.