Hi everyone, welcome to the DefCore 101 session. I hope you all came for this session specifically; if not, please stay, don't leave. My name is Egle Sigler, I work for Rackspace, and I am also co-chairing the DefCore committee with Rob. — Hello, I'm Rob Hirschfeld, and I work for RackN, a startup doing physical provisioning; I used to do the old Crowbar stuff. So today what we're going to do is give a quick intro to DefCore, and then we'll have some really hard questions for you all, or you can ask us and we'll have Sam answer them. Tomorrow we have two working sessions back to back. This is how DefCore works: it's all about the community, and the work gets done in our meetings, or if we meet in person, in working group sessions.

So what is DefCore really about? Hopefully it's about user experience, and we're here to represent users. Because if I implement a version of OpenStack and Rob implements a version of OpenStack, they may not look exactly the same, and the applications that Sam writes may not be able to work the same way on both of them. With OpenStack being so huge, how do we control and make sure that one OpenStack is interoperable with another? Someone has to decide what's going to win and what's going to lose. Who's going to pick the winners and losers? Anyone? Anyone want to pick them? You could be Speaker of the House. Anyone want to decide which APIs will be required?
Yeah, so that's the fun part, and this is what the DefCore committee tries to do. What we do is try to get feedback from everybody — everybody that speaks up, either in person, over email, on IRC, or in any other format you can think of. If you don't speak up, we will not hear you; it's just the way it is. You have to make sure that we know what it is you have issues with, or if you disagree with a particular API being required for interoperability, someone has to tell us about it. We're very anti-star-chamber: we don't want a select group of people deciding how everybody's cloud should look; that's not OpenStack. Which means we might be hearing conflicting opinions. Someone might say, no, no, I really want this API, and others are like, well, it's really an extension, it should not be part of DefCore, it should not be required for every single cloud out there. Even though it is someone's baby, it's just not going to be in the main core group of APIs. So are we being fair about these APIs? Hopefully you can tell us.

All right, how many people know what DefCore is? Almost unanimous. My job here is to spend as little time as possible educating the people who didn't raise their hand — which is not very many people. So this is the obligatory "what is DefCore" slide. It's a process that sets the base requirements for all OpenStack products. Products have two pieces: they must pass tests and include designated code. The definitions use community resources — this isn't external, it's within the community — and it drives interoperability through minimum standards. And it's about products labeled OpenStack, so it's a branding component.

All right, so to parse that out a little bit: DefCore is about commercial use. This isn't about "I created a new project and brought new code into OpenStack"; this is about "I'm selling something using OpenStack." That's what we control.
That's what DefCore is for now. It obviously echoes backwards, but that's where our jurisdiction is. If you just want to have your own great flavored OpenStack but not call it OpenStack — not have the logo for it and not sell it as an OpenStack product — then you can do whatever you like. Now, if you're selling OpenStack code and you don't want to use the brand, that's actually a problem for the board to deal with, because it means people are using the code but not claiming to use it. It's Apache licensed, that's okay, but it means the OpenStack brand wasn't valuable enough for you to feel like you needed to use it. Does that make sense? So there's a virtuous cycle in here.

So what does DefCore deliver? We have a process, and we have artifacts. The artifacts look like these guidelines. This is the text version; we also have a JSON version of the guidelines. The guidelines basically give you a definition of all the things you have to have, and what that looks like is a stack-up of platform, components, and capabilities — I have a slide in a minute that talks about this, so we're just a little out of order. What happens is we publish these guidelines every six months, and we give them very clever names like 2015.07, which is the month we passed them, so things are very clear. They're not about releases; they are about points in time. So if this graph shows you the release building up code, adding more and more capabilities into the release, we pick a subset of the overall capabilities and add those into the guidelines. The guidelines aren't the whole of OpenStack — that would be impressive; we'll talk about that, it's one of our questions. It's the subset that we think is the required minimum, and those build on each other over time. So it's literally time snapshots.
That's what a guideline is about. Internally, the guideline has the platform concept at the top; the platform decomposes into components, like compute and storage, and inside a component there are capabilities, which you might call features — we call them capabilities, and they're API-based. Those capabilities are then validated using tests, and the tests come out of Tempest. So it's this upward stack: the community builds tests, and we lump those tests into capabilities, then components, then platforms. It is possible, though we don't have any examples yet, to have a component that is not part of the platform. Heat is one of the things coming up as a potential new component, and that component may or may not be licensed as a standalone thing; we have mechanisms to deal with that.

I'm going to pause for a second before we jump into the bulk of the questions. Does that help people understand DefCore a little bit? Any questions? Did we lose anyone yet? It's okay if we did. So what does DefCore cover? There are only two components right now: object, which is basically Swift, and compute, which is Nova, Glance, and Keystone.
We're adding Neutron and Cinder. So when you see a product labeled "OpenStack Powered Compute", it will be focused on just the compute aspect, which is Nova and the related projects you need to have in there. When you have a platform, that means it has both compute and storage. Or you can have just a Swift-based product, and it will be called "OpenStack Powered Object Storage". The thing that's confusing sometimes when we talk about a component is that it really is about licensing, so you have to take off your technical hat and put on a vendor hat. The components are all about somebody being able to license parts of OpenStack and certify them as a product: Swift alone would be OpenStack Powered Object Storage, or if somebody wants to run compute without Swift — DreamHost, who uses Ceph instead of Swift — that would be OpenStack Powered Compute. So they each have a licensing mechanism, and most people do both, which is the platform. And so you know what you're getting with that.

That's actually a good segue to the lessons learned. One of the lessons we learned is that there are never enough tests — sorry, I'm going out of order, it's number two on my list — because these tests were not written with DefCore in mind; DefCore came way after most of those tests were written. When we started thinking about it: do we write a whole new set of tests with just DefCore-specific stuff? Nobody really jumped up and said yes, we want to do that, so we went with what's already in place, which is the Tempest tests. When we look at the tests, for the main things there's pretty good coverage — can you spin up an instance? That's pretty well tested, and things like that. So we do look at things and make sure the coverage is there. We're really considering making sure there's more than one test per capability, not just one test. Also, we have a process called test flagging.
So, for example: say we have something in our guideline that says you must pass these tests, and someone comes up to us and says, you know, this test is great, and it does create-VM, but it's actually calling, I don't know, Heat — terrible example, but we actually had something similar, where one test had a requirement on something that wasn't required in the platform, so that test would break in certain cases. They say, hey, I would like this test to be flagged, making it not required to pass for that particular guideline. We have a list of acceptable reasons why a test can be flagged; you can't just flag tests arbitrarily. It has to be that the test is broken, or it's calling something that's not required, and a few other things. We actually use Gerrit as part of this process, so it looks like a development project: we have hacking rules — that's what that was referring to — we have Gerrit reviews, and all our files go through those processes. So it's very community oriented, from an OpenStack perspective. That's number three.

Great question. So yes, API and implementation matter. If a test fails 90% of the time because of performance, it's probably not a very good test. Right now we don't have performance-specific tests in DefCore, but if someone comes up to us and says, hey, this test is really terrible because it almost never passes —
— that is probably a good reason for flagging it. It comes down to whatever the test is testing, and we only pick a percentage of the tests, so a test that isn't usable as an API test gets left out. Ideally we would bring in as many tests as possible, and this is why flagging exists: sometimes we scoop up a test and a vendor comes along saying "I can't pass this test because it's not fair." We actually go through a scoring process to evaluate the tests and see which tests cover which capability, so it's all tied together. We have some great people doing a lot of work on scoring, actually looking at these tests and telling us "this is a terrible test for this capability" or "yes, this is a great test for that capability." For those of you who don't know, this is Catherine; she's the PTL of the RefStack project, and RefStack was written just to make it easier to run these DefCore tests.

It's Gerrit, actually, with rules and checks on everything. So what happens is, if we have a test in the system, a vendor would add a patch that flags that test, and then we would review it as part of our normal review process. Everything's very public; when people ask for things they have to provide justification. We've made the process incredibly transparent by conforming to OpenStack process. Correct.

Excellent question. The guidelines themselves go through a six-month scoring process, and we have what we call a "next" file where we build up this review. We're constantly adding to the next file — we do patches and reviews and things like that — and then at the summits we clone that file into a proposed guideline, or review guideline, and that gives the community three months to review it. We did that just last week: we showed it to the board, which kicks off this review cycle, and then people submit patches against it.
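Flagging, as described above, amounts to subtracting a justified exclusion list from the guideline's required tests. A minimal sketch — the test names, flag reasons, and structure here are assumptions for illustration:

```python
# Hypothetical sketch of test flagging: a flagged test stays listed in
# the guideline, but is excluded from what a vendor must actually pass.
# Each flag carries a justification, reviewed in Gerrit like any patch.
required_tests = {
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.compute.servers.test_list_servers",
    "tempest.api.identity.test_tokens",
}

flags = {
    # test name -> accepted reason (reasons are illustrative)
    "tempest.api.identity.test_tokens":
        "exercises a capability not required by the platform",
}

def must_pass(required, flagged):
    """Tests a vendor must pass: everything required minus the flagged."""
    return required - set(flagged)
```

If every test backing a capability ends up flagged for valid reasons, the capability has no verification left — which is why, as mentioned above, one capability was removed from a guideline entirely.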
Process-wise, we have a long process document. Things just don't get pulled out of guidelines, because that ends up not being fair for the community — we don't want to change the rules on vendors. When we create a guideline, we've gone through a slow process to get there; it doesn't change very quickly, and we don't just take things out. Now, if we find an error — usually it's a test error — we flag the test: we wouldn't remove a capability, we would remove the tests. We had one capability where all the tests were flagged for valid reasons, and we removed that capability, because there were no tests left for it.

So there are two different phases. One is in the guideline, when the test already has required status. But at this moment we also have a lot of tests going through review, and they're going to become advisory next time unless someone submits a patch and says, hey, can you remove this, I disagree with this test. So now is the time to review all of the upcoming changes — the ones that are not advisory yet and are in the next file — for everyone in the community to review, submit patches, submit comments, and say, hey, I think this test is terrible.
Right now, I don't know how many tests we have in the dot-next guideline that are not advisory or required yet, and some will not be going into the next guideline, because we've had a lot of conversations in the last three months about the process and about those tests. Today, the dot-next file and the next guideline are the same. So we're actually about to start taking patches to pull things out of what we call 2016.01 — because that's when it's supposed to be approved — and the 2016.01 guideline will start having capabilities removed based on the community process. The action for people out of this session is to go look at that file, 2016.01.json, and if there's something in there that looks bad, submit a patch removing that entry, and we'll start a discussion on it.

Between required and advisory: we'll take patches on the advisory ones, yes. If it's already required, that's a different process — again a process where you flag it, to either deprecate it or give a good reason, like, hey, this test is terrible, or it no longer exists, something like that, and we'll look at all of those that come in. But for capabilities that are not yet required, this is the time — now, until January. Afterwards it'll be a lot more complicated.

I think we're getting great questions. We're getting a lot of sort of 101 questions, which is fine, so we'll slow down and go back through, because the DefCore review went super fast. I love these questions and we'll make sure we explain things. The only one that really catches people's eye is Keystone, which has not that much testing. We always need more tests — you'd look at a capability and say, oh, I'd like to have more tests on that, and you could write them, and we'd encourage that.
There's no capability that couldn't use more tests — every capability needs more tests. We could call out individual ones, but at the end of the day, if you have the ability to write a test for something, just write tests on whatever capability, and we'll start pulling them in. I don't want to single one out — except Keystone, which we could always use more tests on, because we added the Keystone tests specifically to give it coverage. The reason we had to do that is people felt Keystone was already covered, because it has to be exercised by everything else. So people were like, well, we don't actually have to test Keystone because it's already thoroughly tested — but we actually needed direct tests for it. It's a fine suggestion, but it would only bias people toward that one piece of work; anytime you want to write tests, just write tests, and we'll pick them up.

So, it depends on what your objective is. This is actually the number one item on this list of lessons learned. Right now, technically, what you're supposed to do for the logo is certify once a year. Now, how many products are you certifying? This is a hard question: if it's just a slightly different version of one product, is that really a new product?
I don't know. So there are a couple of answers to this, and I don't want to be squishy — I actually want to try to be specific for you. The foundation has rules that apply these guidelines to your certification as a vendor, and they are very specific: this is what I need to do to keep my license compliant to use the brand. That will require you to submit a result once a year against one of the two most recent versions of OpenStack, and they have all sorts of logo requirements and things like that.

But what we'd love to see is this: for every cloud deployment you do in the field, you run your tests. So Canonical, say, is already certified — you've got your brand, the pressure's off — but I would love to see everybody in the room doing this: every time you do a deployment, you run the test suite against it, you take those results, and you upload them to RefStack. Even if they don't pass everything, they give us data about what you've implemented. Logo aside, what we really want to accomplish as a community is to know if you've turned things off and are no longer in compliance, or if you've added things that aren't required but still pass the tests. If you upload that data to RefStack, we actually start collecting data about what's in use or not in use, passing or not passing, and that turns into things we can act on. RefStack is designed to take whatever results you want, so just upload them. The more data we collect the better, and the vendor gets to choose which of those results are representative of their product. And if you took the same cloud and tested it 20 times, we also track that it's the same cloud over and over again, so that doesn't create false positive results.
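The "run the suite against every deployment and upload the results" loop might look roughly like this. The payload fields and the upload mechanics below are assumptions sketched for illustration, not a definitive description of the RefStack API:

```python
import json

# Hypothetical sketch of packaging one test run for a RefStack-style
# upload. Field names (cpid, duration_seconds, results) are assumptions.
def build_result_payload(cpid, duration_seconds, passed_tests):
    """Package one test run for upload. The cpid (cloud provider id)
    identifies the cloud, so repeated runs of the same deployment are
    tracked together instead of counting as many separate clouds."""
    return {
        "cpid": cpid,
        "duration_seconds": duration_seconds,
        "results": sorted(passed_tests),  # only passing test ids
    }

payload = build_result_payload(
    cpid="my-cloud-123",
    duration_seconds=5400,
    passed_tests={"tempest.api.compute.servers.test_create_server"},
)
body = json.dumps(payload)  # this body would be POSTed to a RefStack server
```

The point of the cpid in this sketch is the de-duplication the speakers mention: twenty uploads from one cloud should read as one cloud tested twenty times, not twenty compliant clouds.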
Right now, all of the tests are in Tempest, or runnable by Tempest. Going forward, I think they're working on Tempest plugins, where tests that are not in the Tempest repo can still be run by Tempest. So they do not have to be in the Tempest repo, but they do have to be OpenStack controlled, and they do have to be runnable by Tempest or its plugins. So if you wrote some — you know, Rackspace had a list of tests that were completely outside of Tempest — those tests we wouldn't consider. We might one day; today we made the decision that it was too confusing to take multiple sources. There's nothing inherent in what we do that limits us to only using Tempest; it's a policy decision for the present, to keep people's heads from exploding. They could come in if they can be run by the Tempest framework.

That is actually one of the questions we have for the audience to discuss, because there are pros and cons to how fast we absorb new capabilities into the system, and what the repercussions are. We don't want to shortchange these questions because they're really good, but we want to do a little show of hands about some of these issues, because what you're describing is a non-trivial question.
It's shades of gray on that one. Other question — that is also one of our audience questions. Historically, we have been a small core, a trailing indicator, meaning we try to identify a small set — the minimum set needed to run a workable cloud — and things that are established in the market, not things that we think are coming and want to force people to adopt. However, there are elements where people want to expand what we do, for good reason; people want to be more forward-looking on API conformance, not trailing. So part of the reason for this session is to have people think through the implications of forcing API adoption — Keystone v3 adoption, say. If DefCore makes it a requirement, then vendors are going to have to implement it. That might be awesome, but it's also going to hurt users, because now users are going to find that they're not compatible: if Rackspace implements Keystone v3 to be compliant with the new guideline, everybody who's running an older cloud is going to have compatibility issues, because v2 and v3 aren't compatible. So we have to thread the needle of how those things work: what's the future technical direction, and are other tools using it? If all of the tools are using one version, it would be really hard for us to move forward.

So technically, that's it — there's a technical issue that keeps them from being compatible. I would love for the technical deliverables to always be backwards compatible across multiple versions, and then we wouldn't have to have this conversation. But we had a long discussion about it, and the simple answer is no; it causes all sorts of challenges and problems. And here we're thinking about interoperability, right?
So from a user's perspective: what are they going to code to, and how do we make sure they don't have to do extra work going from one cloud to another? They expect OpenStack to behave the same way, and that's what this is all about: interoperability. Now you see why we start the slide with "are we picking winners and losers?" We take that very seriously; we are very contemplative about what we're doing. And here's the dilemma: we have to pick something. It is a bigger harm for us to say we're not picking and just let it ride, because then everybody does whatever they want. We have to make a choice to create interoperability.

I strongly agree with that statement, which is why DefCore was designed as a trailing indicator — so that we wouldn't hurt users. This is why the Tron reference: we fight for the users. The challenge is that we have a significant amount of pressure to adopt leading APIs, which would cause exactly that problem. So part of our ongoing discussion is this tension within the spec: which API are we going to pick? Do we hurt users who are using the product today, or do we make it harder to migrate to future APIs? Because now we've basically told users they have a two-year window to keep using Keystone v2. The windows work exactly that way: at any moment in time you can certify against the last two guidelines, which hopefully covers about a year, and that can even cover previous releases. So you don't necessarily have to be on the latest guideline or the latest release; you will be able to certify even if you are not running the newest version of trunk. And that's why, when we do the scoring, we ask the users what it is that they're using. We're not just deciding this in our closed little chamber, saying, hey Rob, do you think v2 or v3 is better in this case?
So if we saw a lot of Keystone v3 — and we're just picking on v3 here; we have this issue with multiple APIs — if we had a whole bunch of data showing people adopting Keystone v3, then it would be much easier for us to say, you know what, there's actually a tipping point in adoption. One of the things that would help us is the user surveys, which are based on roughly 350 people volunteering information. We could actually be getting much more comprehensive data — not just "are you using Keystone or not," but are you using Keystone v2 or v3, are you using the token API or the catalog — and we'd actually know, down to the capability, what people are doing. Then we could give very fine-grained answers. I sympathize with the angst; we have this problem all the time. We have to make the decision; somebody has to say "this is the required API and this isn't."

To the extent that it's technically feasible, we strongly encourage people to adopt as much of the API as they can and want, to provide utility for them. We're just trying to say what the minimum set is if you want to be a product — for companies that want to certify to have the OpenStack logo on their product. If someone is running Essex OpenStack, they will not be able to get the new logo for it; that doesn't mean they can't run it or can't provide it to their customers — the "can't" part is optional. And we're not forcing users to implement the newest feature in Neutron or Nova or whatever; it's an established feature set.

Yeah, totally right. And one of the things I want to do — I'm going to extract this for a second and then we'll keep going back to questions. How much time do we have?
So actually, we have a couple of slides I want to get to. This intentionally creates pressure to have these conversations — so please come to the working sessions. The questions you are asking are exactly the questions we want people to ask, and we did this knowing it would put backwards pressure on the technical community to create migration paths and things like that, because at the end of the day somebody does have to say "this is the API that we're going to require for interoperability," and it's going to flow backwards.

Could you jump to — I don't think we have time; we'll upload these slides and we can do this later. But I would ask you to think about what success for OpenStack looks like. The short version is: we don't know. My thought is that if we have an ecosystem of people building products that require OpenStack, then that's success. Counting installations is nice, but if vendors are building products for OpenStack, that is actually an indication that we have a market. That was the fourth question on this list; we're beginning to talk about it. It is a challenge, yes.

Yeah, they're much less equal than we thought. Remember, DefCore became official this year, in March, so we're still refining the process, and we're trying to figure out what works and what doesn't. These are the lessons that came up through the last half a year, when the OpenStack Foundation actually started requiring these tests to be passed, and this is what we saw.
These are the issues we heard people bring to us. Yes — there's a patch right now, and we have started talking about it. The actual process for the foundation is currently once a year, and we're looking to see if it's going to be required more often than that.

This is actually a nice segue for me, and this is what I like, besides the Tron reference. DefCore is not going to win — the foundation is not going to win, the community is not going to win — if it's the foundation with a bat hitting vendors on the knees to make them comply. The users have the power. If the users are demanding that vendors comply with the latest guideline, or the advisory guideline, that's where we'll start seeing, you know, hosts showing their score on a daily basis: I'm compliant, I'm compliant with the latest, I'm compliant with the next. If the users are asking that question, that's where the power of these guidelines comes in. It is absolutely not from the foundation enforcing the brand — that's the weakest way to do it. It's the users saying: use these guidelines, and if you're not passing them, we're going to get mad. That's where the power is going to come from; that's where the actual movement from the vendors is going to come from.

We need your input. If you are a user, we definitely need your input; if you're an operator, speak up during our meetings, because what we decide will affect your product and your deployment. We definitely don't want to be that bat that goes around and breaks your product, or that stick the foundation gets to use to withhold your logo. We do have a lot of challenges — we covered some of them — and it's still a huge work in progress. We wanted this to be an interactive session; thank you for providing a lot of interaction. That's what we want. Yes — join us tomorrow.