Can I have the mic? Thank you. Hello, everybody, and welcome to the DefCore session. I'll stand so everybody can see me. We had a great shout-out today during the keynote; I hope everybody saw that. We are going to spend a lot more time talking about what DefCore is. I'm going to do a five-minute intro, and then we're going to talk about futures with our awesome panel. Tomorrow we have several hours of sessions talking about DefCore and where it's going. DefCore is going to become the front lines of interop discussions in the next six months; it's where the rubber is going to meet the road on interop. So we're going to talk about that, and what I'm hoping for our panel to do is set you up to understand where we're going and what we see happening in the next couple of months. My job is to talk about the history as fast as we possibly can.

Can everybody see it? I'm not doing this presentation right now. This presentation is something we gave twice to the community a couple of weeks ago. It's recorded, so you can watch it two times, hear us say almost the same thing twice, and hear the questions. In that presentation, we go through what DefCore is.

Very briefly, DefCore is the board process to drive interoperability. It's taken us two years to define this process, because fundamentally our job is to tell people whether they are or they aren't OpenStack, and that is a very high-stakes question. As a consequence, we went very deliberately and carefully in addressing how we would make those decisions, so that it was a community-driven process and people understood what the decisions were. The thing that's important for people to understand, and is often confusing, is that this is a commercial distinction. It's about the OpenStack brand. It's not about whether you're a project or not. There are several good talks,
a lot of good talks, about the Big Tent and how projects get into the OpenStack community. This is about how vendors get to use the brand, and that's a very important distinction. We talk about that a lot.

The other thing in all of this material, two years of material, that I think is worth pointing out as background is that DefCore is by design a subset of the OpenStack project, and it's meant to be a trailing indicator. It's about which APIs are safe, stable, long-term APIs for the product. It's about interoperability. So when you look at what we do at DefCore, what we're really doing is picking the components of OpenStack that are time-proven and stable, and then providing longevity information about them. DefCore is largely about saying which things meet those criteria, and we spend a lot of time figuring out how to make those choices. In five minutes, that's all I'm going to pick out from everything we've been doing for OpenStack. There's quite a bit more about how that works.

Okay, and then somewhere in here I actually have the... cool. There it is. All right. I was going to put a slide together, but priorities being priorities, in here somewhere we have the list of questions. Here we go. Can y'all read it? Okay. These are the questions we worked out in committee to figure out what we were going to do. We have open committees, so if you're interested in joining the process, it is a board committee with open community membership. Anybody can come and participate in the discussions. We love to have people participate; the more we have, the better. And so what we wanted to do is say: all right, we've gotten all this accomplished, we have interop, we're being presented as part of the keynote, it's really awesome. What happens now? How does this impact you? What's the impact going forward? With that, I wanted to turn it over to our panel.
I'll let y'all introduce yourselves. In the interest of time, we are going to allow them to pick four of the six questions. So think that through, then we'll pass the microphone, and you can answer the ones you want, and we'll keep going in rounds. I'm rushing because we're trying to save time for questions, so if you have questions, please, we'll take the mic for that. And if somebody from the audience knows the Etherpad link from our last meeting and could tweet it, people could actually add questions under "bonus" if you want. That would be sort of fun. Did I miss something? Excellent.

So I'm Mark Voelker. I'm the OpenStack Architect at VMware, and I've been active in the DefCore community since about December.

Rochelle Grober, also Rocky Grober; you'll see Rocky every so often. I'm with Huawei, and we've been working on this since January of last year.

My name is Chris Hoge, and I'm with the OpenStack Foundation. I work as an interop engineer. I've been with them since about September, hired to work specifically on DefCore and help people out with their testing and becoming DefCore-approved.

Hi, everyone. My name is Egle Sigler, and I work at Rackspace. I joined Rob as co-chair at the beginning of this year in January, and it's been quite a ride.

Hello. My name is Catherine Diep. I work for IBM, and we have been with DefCore since day one or day two: January 2013.

And my name is Rob Hirschfeld. I'm the CEO of RackN, and I've been the co-chair of this committee for quite a long time. I don't know if Joshua McKenty's around. No? Okay. Josh was the co-chair when we founded this way back when, and then Alan Clark, who's the chairman of the board for the OpenStack Foundation, has also been instrumental. A lot of people in this room have contributed, and thank you; I'd spend too much time if I called everybody out.
So, what I'll do is read the question out if that helps. First: why do you think this was an important effort? Who wants to answer?

I've got to be the first one. Okay. At IBM, we think that ensuring interoperability between OpenStack deployments is essential for any open-source project. It is also critical for the continued wide adoption of OpenStack. With that, we have been committed to joining DefCore, and not only DefCore but also the tools: the RefStack project, which is the tool that helps realize the goals set forward by DefCore.

It's worth noting we've talked about RefStack in the past quite a bit. RefStack is a community project that's used to collect the results from the tests, and then allow people to compare them, see the results, and score them against the DefCore standard. So if you're interested in testing your cloud, RefStack makes that job easier.

Our second question is: what is the hardest thing about getting DefCore approved? Which of you wants to take a crack at that?

For me, just because I joined in January, the hardest part was consensus, and all of the work that happened beforehand, and catching up. New people that join the conversation are like, "Def-what?", and they have all of the questions and discussions that have been covered over the last couple of years, and if you ask Rob, he'll give you the background behind every single decision. I wonder if you should answer that question, Rob, because you've seen it like a lot of us. I know that when I arrived, there was a question of: is the board going to approve DefCore? Is it going to be there or not? So there was a question of what my work would be, defining work as an OpenStack board member.

Yeah, the interesting thing about this is that it was really just being patient and working through the process.
You know, when Alan and I sat down... if you're interested in the history, I blogged about every step, so my blog has a lot of DefCore history on it. It started with this picture that we called the spider: very convoluted, where we basically said this is a tangled mess, it looks like spaghetti. And so the hardest thing was really getting to that aha moment where we just sort of said we're going to have to pull one thread at a time, and then helping people be patient as we went through that process.

I'll take that from a slightly different angle. I work for a vendor, a vendor which now has a DefCore-approved product, so we had to go through all the steps of the DefCore process. We had kind of an easier time with that, I think, because I'd been involved in DefCore already and knew what to expect. But as we went through that whole process, there were times when, you know, I'd talk to one of our engineering managers and say, okay, we'll need to do the DefCore thing. "That's great, where do I start?" And I'd say, oh, there's not a document that I can just point to that has all the steps. So one of the things that we wanted to do was continually improve the documentation around that process, so that all the other vendors out there can focus on the capabilities more than the process itself.

Y'all, are you saving the last questions? Well, and I think, maybe to speak a little bit to what Mark is talking about, there's very much a sense that this is the first time something like this has been done. So you have an idea of what your goals are and what you want to accomplish, but you don't necessarily have a complete understanding of the reality of the tools that are there.
And so you say, this is what we want and this is the way we're going to go forward, and then you realize that maybe that doesn't work for a particular reason. Maybe you assumed a particular tool would be perfect for what you were doing, like "this test is going to test this thing perfectly," and then it turns out that it doesn't. So there's this constant re-evaluation and learning while trying to stay true to what your goals and values are.

And I'll put a slightly different spin on it. I think part of the journey getting to where we are now was literally getting all of the community involved, getting each segment of the community to step up and recognize that they needed to participate, and having them each define their own roles in the process. There was a lot of back and forth, two steps forward, one step back: getting the TC to discuss what their responsibility was within DefCore, and I think that will still change over time as we feel our way through this; getting the board to say, this is important, this part is not, we need to be very clear on this, and this is something we don't need to address at the moment; and then getting the developers in various areas, and the vendors, all of them to step up and say, okay, this is my responsibility, now let's discuss how to implement. So this first round is extremely important in getting that first stake in the ground.

Our next question, actually a nice segue, is: how will DefCore change OpenStack in the next six months? What's the impact going to be? Catherine wants to kick it off.

Again, just like Rocky was saying, community involvement is the key. What we need now, since we have the framework, all the steps, all the way down to how we score the tests, is the data. This morning we talked about data. We need the data.
In RefStack, there are tools to do data collection, and data analysis will come with that data. It will really help DefCore to define the common, smallest denominator that will be a common platform for OpenStack. So I think that DefCore will be really huge for OpenStack.

I think operators going forward will ask: will this have an interop flag? Will I be able to claim that, hey, I am passing all of these tests and I will play well with other OpenStack clouds? So that's the operator perspective. From the developer perspective, I think they will have to think about how they want to stay involved in the community. Will they try to fork some feature, or implement something that does not work well with the rest of the community? So I hope that PTLs of all of the projects will be very mindful and actively involved. I would like to give a big shout-out to John Dickinson, PTL of Swift, who has been part of this process and is asking, hey, how can I make sure that Swift is part of this, and how can I help?

I think one of the interesting things about the next six months is thinking about the past six months as well, because six months ago DefCore was not an enforcing process. If you wanted to use the OpenStack logo on your product, you didn't have to pass DefCore tests; you just needed to ship Nova and Swift. So just in the past few months it's actually become a process where, if you want to ship a product with the OpenStack logo, this is now part of your game plan. And to make that happen, you need to take a look at where the community is today. We didn't have the luxury of starting DefCore at the same time as the whole rest of the software projects. So in the next six months, we've got a very small, minimal set now, and I think we'll actually start to expand that set, now that we have community feedback about where people are today.
I think we'll actually see products changing a little bit to meet those increasing standards of interoperability over time.

I see a lot of things happening over the next years. On the developer side, we're in the process of going to the Big Tent, and the Big Tent means we're going to have lots of projects out there. That means the board is going to have to evaluate how the trademark is used on these projects and these products. Some of them are close to the maturity point that DefCore needs. But do we say that it has to ship, or do we have lots of differently focused products that are defined by subsets and supersets of the tests? That's one issue.

Another thing that's going to happen in the next six months with the Big Tent is that the API testing is going to be pulled out of Tempest and into the individual projects. As a single organization that needs access to those API tests across multiple projects, we've got to get the developer community to come together and agree on some standards for what these API tests are going to look like and how they're going to run. Otherwise the vendors won't be able to run the tests, we won't be able to run the tests, and it will become very difficult to get back to the trademark issues.

So we've got that on the developer side. On the vendor side, we've got vendors coming together and saying, we're going to get this trademark, we're running the tests. Well, they're also going to start saying, you're not capturing a key part of interoperability. They're going to start proposing tests, writing tests perhaps, and they're going to get more involved in the development of OpenStack and in the testing of OpenStack.
So we've got that happening too, and we'll have more interop tests, not just DefCore. This opens the door for interop testing with AWS, with the Microsoft cloud, with other open-source components that aren't part of OpenStack. Those tests will be allowed there and will be available for folks to say, this is something we want in OpenStack.

And then the third thing is that these tests are going to mature, and as they mature they will move from just functional tests to asking: is this cloud really working interoperably? What are the minimal performance standards required for this to be a usable, interoperable cloud? So it's not just going to be function; it's going to have to be performance. We're going to have to step up the game over time, and there's just so much in the future that needs to come together. The community is going to need to actually do interoperability amongst themselves to make this really work and make this gel.

Yeah, and I think I'll apologize in advance to Matthew Treinish, who's the PTL of the OpenStack QA team and does the Tempest work, because I think in the next six months there's going to be a lot more attention on his work, and I think overall that's a good thing. The gate is often, in some ways, taken for granted; it's the thing that we just have to get by, and now it's going to become one of these front-and-center projects. That's kind of neat to see. I'm a big fan of testing.
So it's worth noting here that in some ways what we've done is take Tempest, which is designed from a developer-gate perspective, and morph it into an implementation validation test. It is important to realize we're putting a lot of stress on OpenStack by using the community tests in this way. We always knew it would be stressful, but we felt it was a much more authentic way to solve the problem than coming in with a second set of tests or an alternate set of tests. And one of our charges for the community is to help us write additional tests that provide coverage for interop. We've given you a way to do it; we just need help creating those tests.

And we're back; everybody else stop using the bandwidth. There we go. Okay. So: is DefCore vanilla OpenStack? Is that a reasonable term?

This is my favorite question. Just like vanilla gets a bad rap for being known as plain, I think this can also mean: okay, is this plain OpenStack? Just like vanilla is a great base for all the baked goods that we love, or ice cream, the same goes for DefCore. I see DefCore as a spec for you to start from to build your OpenStack: you start with DefCore and all the DefCore capabilities, and on top of that you add all of the great things, all the additional flavors and all the additional features. So is it vanilla OpenStack? Sure. It's the spoon that makes sure you can eat your OpenStack ice cream.

We're getting tortured analogies. Anybody else? Is Randy Bias in the room?
I saw Randy walking over; he's all about OpenStack flavors. It was a conversation we were having back in San Diego: we had a lot of talk about OpenStack flavors, and Randy would tell you that we absolutely need this as the base, and then we need additional flavors on top. Every once in a while it's important to channel Randy Bias when you talk about DefCore, because he's been a really good voice in helping keep this on path and shooting towards the longer-term objectives.

In the analogy? Yeah, it's actually fascinating, because right now where we are in DefCore, we are the minimum needed to have a working, interoperable stack. But what does the future hold? We already have three flavors of vanilla: we have the storage stack, we have the compute stack, and we've got the platform. So we already have more than just vanilla. Moving forward, the real question, and this is what Rocky is referencing, is that yes, at some point DefCore will need to be able to say what's the minimum for a trademark, but also what else can be added and should be added, and what are the minimums for each extra flavor. The platform is the whole banana split, but as the tent keeps growing there's going to be the database, there's going to be the Hadoop, Spark, whatever, and there will be all these different implementations on top of the core initial DefCore state. So we're going to have to walk a fine line of making sure that the necessary stuff is defined as interoperable without putting in stuff that doesn't belong.

I'm going to take that as your answer to number five and segue to number five. The next question is: what are the hard problems in front of DefCore? This is one we've talked a little bit about. As developers, we like the DRY principle: don't repeat yourself. As an OpenStack community, we have violated that standard to death. We have a whole lot of different ways to do a lot of different things. We have two networking stacks today; we have two different ways to upload
images, and alternate workflows. Part of the criteria that we evaluate for DefCore is whether a capability is widely used and widely deployed. So in those cases where there's more than one way to do a thing, and in some cases they're mutually exclusive (we can't run nova-network and Neutron together, for example), we have to make a call as to how we're going to handle situations where both are fairly widely deployed, and we have to ask whether they meet those bars. So I think we've got some choices to make in the community there.

Second, to what you said, the next hard thing will be to define a platform that truly represents all of us, all the OpenStack deployments out there. To have a reliable data point for DefCore to define that, we really need the data, and we encourage all the vendors to submit the data: not just the small set that we define today, but a complete set, so that we can have a reliable view of what capabilities are out there and what the common point is that represents everyone. At IBM, we test all of our products through this process and we submit the data to the community, and we encourage everyone to do that.

One of the hard problems, and I think Catherine mentioned this, is that it's a pretty small test set, and it's not difficult to pass right now. In some ways that is by design: you have to start somewhere, and you have to understand the challenges you face. But as DefCore becomes more central and more important, at some point you're going to have to tell somebody no. And when you're talking with the vendors and the community leaders and the companies that are a huge part of what your community is about, if you imagine the community as a triangle with the developers and the users and the vendors all part of that triangle, at some point, for the standard to mean anything, you have to tell somebody no. I think that's going to be really hard.
Right, and I think DefCore will have several really hard problems. Some of them are capabilities; some of them are nova-network versus Neutron, because obviously your cloud needs networking, and right now DefCore is not opinionated about which one to use. But going forward I think people will start saying, hey, please pick one, and that's where we need the community's involvement. If you want to be part of that discussion, on Wednesday we have two working sessions.

So I think it's important to note: the reason it took so long to get DefCore done, and we heard for the last two years "why is this so hard?", is that we have to tell people no. That is what's going to start happening in the next six months. We're going to have to make decisions where one company's cloud is not compatible and other people's are. The reason we took our time is that we have to have that discussion based on principles and based on process. It has to be a fair decision; it has to be an open decision. So it's really important that people participate in this, because as we make these decisions, your clouds will be impacted. Your development efforts if you're a developer, your vendor efforts if you're a vendor, your user efforts if you're a user: we need your opinions so that we can make good decisions. That's the data Catherine's talking about, but it's also your participation in the process, so that if you think something's wrong, if you think it should be nova-network instead of Neutron, those are things we need to hear from people.

I'll move on to the next: what happens if this doesn't work out? What happens if DefCore fails?

I think if DefCore fails, we will have OpenStack clouds that don't look like each other, where you cannot deploy your application on one OpenStack cloud and on a different one and expect them to behave the same way. And worst case, probably an OpenStack fork, if DefCore fails. I hope that does not happen.
I hope DefCore helps with interoperability issues and some other questions as well.

I think it's an interesting question. I think DefCore is incredibly important. When I took this job at the Foundation, someone who was very involved in the open-source world came up to me and said, you know, Linux has tried this and Apache has tried this, and he rattled off this list of foundations that have tried to do this thing and couldn't do it. And he goes, what makes you think that you and OpenStack can do this? I think one of the reasons why we've been successful up to this point is because we have such an open, community-driven process. So if DefCore fails, it's because we as a community have decided that it doesn't meet our end goals and we don't need it anymore. And is that a failure, or is that us growing and adapting to what our needs are? You know, the philosophy behind DefCore was encoded into our bylaws, where we said that if you want to have an OpenStack product, it needs to pass a faithful implementation of testing standards. And so we're living up to that; we're living up to our values. And if our values change, then it's not failure.

And it's important to note that was in the original bylaws. We've amended them; actually, "faithful implementation" is still in the bylaws. We've just been extending it and trying to figure out how to do that the right way. But you're right, that's a core value from an OpenStack perspective.

Is it time to answer the bonus questions the audience is asking us? Yes, we've got a couple of minutes for questions. We have some showing up. If you're in the audience without access to the Etherpad and want to stand up at the mic, we'll add questions and go with that also. Do you all want to take a shot at the bonus question? The bonus question was originally: which project or capability should be the next component?
Let me define what this means a little bit, because I was very brief in the introduction. DefCore has two levels. It has a platform level, where you basically have all of the capabilities. And then we have people in the community who just want compute or just want an object store, so the idea is you could license just OpenStack Compute or just OpenStack Object Storage. The design is such that we could bring in a new component, where somebody could say, I am an OpenStack something that was not part of the platform; they could be their own standalone service, if you would. And so I'm interested, statistically, in what people think the next one might be.

To me, the next one, and it's also the challenging one, is the network area. I think it is a capability that we really need to work on. It's a challenging one.

And I agree that we need to get the network right. But it's not in any state to make it into the next couple of, at least the next two, DefCore guidelines. I don't think we will have an answer for that. We also have various versions of APIs, and getting the more robust, versioned APIs in there... I really think the next step is to get to the versioned APIs of the various projects. And it's solidifying, increasing in quality. We might see performance before we see Neutron.

Since I have a line: if you have a short answer for which one, then I can answer the three questions really quickly.

Identity is an obvious choice as a standalone component too. So that would be Keystone, effectively, as a standalone? Well, I was going to say identity as well. I actually think Trove is on track to potentially be a standalone project.

I just want to put in a reminder that right now DefCore is only concentrating on the user side, so admin is not part of the equation. Looking at things from just user space is something you need to keep in mind as we move forward.
That would be a logical one of the extensions; one of the flavors I would expect us to add would be to include admin, so there'd be a DefCore version that included the admin capabilities. Yeah, that would make perfect sense. Though I guess it wouldn't be vanilla; it would be a flavor.

So, just to respect the people on the stage, I'm going to very quickly try to answer. Vendor-specific extensions are handled because we don't require all of the code; we allow people to have substitutions. That's specifically handled: if you look at what's called designated sections, that is how we handle it. We specifically do not version based on release. DefCore trails releases, so we switched to date-based guidelines. The DefCore guidelines are based on dates, and they cross releases. That's an important thing.

Right, and to add to that, the Foundation will accept test results from the last two releases of the DefCore standard. The last two DefCore dated releases can be used for your testing suite, so if for some reason you don't match the most recent, you can step back one. And the same is true for the releases of OpenStack itself: each release of DefCore is guaranteed to work on the three most recent OpenStack releases.

To get started with DefCore, I'd recommend looking at the Tempest project; sorry, the RefStack project. Tomorrow at 2 we have a session on how to get started with testing. Chris and I hope to see you there. I don't know the room number. Also, if you go to openstack.org/interop, there's a page that'll give you a rundown of the process and some pointers about how to get started.

Okay, go ahead. So I'm head of engineering for a company called DataCentred in the UK. We've obviously just been through this process.
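The date-based acceptance rules described above are easy to misread in spoken form, so here is a minimal sketch of the logic as stated: a submission counts if it targets one of the two most recent dated guidelines, and each guideline covers three OpenStack releases. The guideline names, dates, and release lists below are hypothetical illustrations, not the actual DefCore data.

```python
# Illustrative sketch of the date-based guideline rules described above.
# Guideline dates and release coverage here are hypothetical examples.

GUIDELINES = {
    # guideline date -> the three OpenStack releases it covers (hypothetical)
    "2015.01": ["havana", "icehouse", "juno"],
    "2015.04": ["icehouse", "juno", "kilo"],
    "2015.07": ["juno", "kilo", "liberty"],
}

def accepted(guideline: str, release: str) -> bool:
    """True if `guideline` is one of the two most recently dated guidelines
    AND `release` is one of the releases that guideline covers."""
    recent = sorted(GUIDELINES)[-2:]  # date-formatted keys sort chronologically
    return guideline in recent and release in GUIDELINES.get(guideline, [])
```

So with this data, testing Kilo against the 2015.07 guideline passes, while results against the older 2015.01 guideline are no longer accepted even for a release it covered.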
So I thought I'd offer a couple of bits of feedback and, hopefully, some things to think about. I think the documentation on running the test suite is something that everybody probably knows could do with improving. Chris was a fantastic help with that, but it certainly took a little bit of tweaking before we worked out exactly what needed to run. Listening to this conversation, especially when we start going into the realms of production, I think there are things that are very different about testing a private cloud versus testing a public cloud that's got paying customers on it. And that's also, to a certain extent, modified by the size of that cloud and what the impact of that testing is likely to be. So that's obviously a consideration for us as an operator. And again, in the realm of performance, as you start to move into multi-architecture, how do you set the boundaries of expected performance? We've just launched an ARM64 cloud, and it's got a very different performance characteristic from an x86 cloud.

That's something we have discussed, and it's coming. We have a long view, and issues like actual performance testing and requirements end up becoming things that we expect to absorb in the future. Great questions. Okay, cool. Do we go until 15 of? One more.

Well, I wanted to congratulate you guys for working so hard on this. This is a huge effort; thank you for doing it. I have a question, but I also want to say: you talked a lot about interoperability. Another way to link to that is the applications. People developing apps want to be able to point their app at any OpenStack cluster and really get that. So I hope that DefCore becomes a vehicle for people working on applications to understand which clouds they can point their app at. And that sort of dovetails into my question, which is about innovation.
I worry that DefCore may, by necessity, have to trim down as much as it can to that widely deployed thing you can really depend on being in a lot of places. But innovation is going to push at all of those edges, and application developers are going to be targeting the newest features, vendor extensions, to take advantage of new stuff. I wonder how long it's going to take for some of those things to eventually work their way back into DefCore. Do you consider that a challenge? Will this slow down innovation?

Alan's smiling; if you go back to the spider, that was the fundamental challenge we started from: don't compromise innovation, but keep stability. The very short answer, to respect time, is that the reason we want to collect all test results, not just the passing ones, not just the required ones, is so that we can get an early indication of things that are gaining in popularity. This is really important, and it's actually a good closing point. DefCore is about certifying clouds, validating clouds. It's very important that when you run the tests, you run all of them and share your results. Part of this is to collect that data, because if every single cloud out there is sharing the results of which tests it passes across the whole suite, not just the DefCore subset, then we can start asking questions like yours. We can start saying, oh wow, this Trove capability is deployed in a whole bunch of places. We don't wait for the user survey to tell us; we get real data from real clouds, real implementations, and that allows the innovative things that are gaining traction to be identified as standards you can start to depend on. So we gave a lot of thought to that. It's a great question, and we're always happy to tune the process to make sure that we don't lose the edge and become stale.

And with that, that's my cue that we're ending.
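To make the "share all your results" idea concrete, here is a tiny sketch of how full result sets from many clouds could surface capabilities that are gaining traction. The test IDs and the data shape are hypothetical illustrations for this discussion, not RefStack's actual schema.

```python
from collections import Counter

# Hypothetical result sets: the set of passing test IDs reported by each
# cloud. (Illustrative only; not RefStack's real data format or test names.)
cloud_results = [
    {"compute.servers.create", "compute.servers.list",
     "db.instances.create", "network.lbaas.create"},
    {"compute.servers.create", "compute.servers.list"},
    {"compute.servers.create", "compute.servers.list", "db.instances.create"},
]

def adoption(results):
    """Count, for each capability test, how many clouds report passing it."""
    counts = Counter()
    for passed in results:
        counts.update(passed)  # each cloud contributes each test ID once
    return counts

# Capabilities passing on at least two clouds: candidates for a guideline.
widely_deployed = sorted(
    test for test, n in adoption(cloud_results).items() if n >= 2
)
```

With data like this, the emerging database capability shows up in two of three clouds, long before a user survey would report it, while a test passing on only one cloud stays off the candidate list.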
Panelists, thank you, that's fantastic.