Come on in, come on in — shut the door behind you. Thank you. All right, so welcome, everyone. My name is Jay Pipes. I work at Mirantis, and I work on the Nova project upstream. My colleague over here in the API working group, Chris Dent — the one with hair — is going to join me on stage in a bit to demo the Gabbi tool, but today we're going to talk about why APIs matter. Let me see if I can get this working... there we go.

So, the biggest reason APIs matter: they're the first thing that developers see when they interact with OpenStack. You could say, well, developers also use the Python clients — but not all developers are Python people. We have SDKs for Java and all sorts of other languages, and all of those things are making calls to the APIs. So if you think about it, the RESTful APIs that the OpenStack projects publish are our first impression. We want to look professional, the way my little pug friend here looks. We want to seem well put together, consistent, mature.

Unfortunately, this is kind of the impression that our APIs tend to give — not all of them, but when you look at the inconsistencies among our RESTful APIs, a lot of the time these are the sorts of impressions you get.
They seem kind of immature, unprofessional — toy APIs, that kind of thing — and especially these last two points: not deliberate, and incoherent. When I say not deliberate, I mean that sometimes our APIs seem like they just kind of appeared. There wasn't a full design process; it was just, "let's see if this works," and throw it out there. And the incoherence comes in because you'll be using one REST API for one service, then try to do something almost identical in another service's API, and it's entirely different — for reasons that you as a developer can't comprehend, or that we can comprehend, for that matter.

So here's an example of where things are incoherent and not deliberate: admin actions. What do I mean by admin actions? Things that cloud admins can do. Just taking Nova as an example, we have things called API extensions in Nova that extend the API in various ways and allow different types of actions to be performed against it. Well, we have an API extension in Nova called os-admin-actions, and there are various things in there, like pause server and lock server. (These are actions — it's not really RESTful, but we can get into that later.) And yet other things that are administrative actions aren't in the extension that's called os-admin-actions. For instance, changing the admin password: you can do that, but in a completely separate way. It's not in the os-admin-actions extension, so you can't discover it in the same way — it's actually os-change-password or something.

Some admin actions are embedded into other API calls, like create server. For instance, as a normal user you can't force a particular placement onto a specific host, but you can as an admin — and that's all within the POST /servers call. It's not an API extension; it's not discoverable in that way. But also, remember the change-password thing.
Well, there's also a different API extension, called os-server-password or something like it, that lets you kind of do the same thing — instead of changing the password, it lets you reset and clear it. I have no idea why one is in a different extension than the other, but it's completely incoherent and inconsistent. And unless you're going through a lot of the code — and Anne Gentle and the docs team have made superhuman strides in getting the APIs documented on the openstack.org site — unless you go through it with a fine-tooth comb, honestly, you're going to miss really subtle inconsistencies like this. It's just kind of all over the place.

Then there are things that have their own extension entirely. So again, we have this os-admin-actions extension, so you can pause a server, do this and that, and other resources have their own entire API extension. But some extensions, like os-guests, only work with the Xen hypervisor — and there's no indication of that through the API. You'd have to go look at the documentation, or just sort of discover it by accident. When you see that an API only applies to a specific driver, that is what I consider to be implementation details leaking out of the API. And that's the kind of thing that gives the impression that our APIs are toys — that they're not really deliberate, that they haven't been thought through much.

Okay, so finally, to end on the admin-actions theme: Keystone has an entirely separate endpoint for its administrative actions, like adding a user, adding a role to a user, groups, things like that. But only in the v2 API — in the v3 API it's all back in one endpoint again. No idea why, but that's the case.
So: surprise. Example number two of where we are completely inconsistent in our APIs: metadata. I really hate the term metadata, and for a reason. The definition of metadata is data about data. A good example of metadata is the size of an int — it's data that describes other data. But we use "metadata" to refer to key-value pairs — just random strings and all sorts of other things that are actually not metadata. We call it metadata anyway.

So: arbitrary collections of key-value pairs that we call metadata. Glance v1 has something called image properties — arbitrary key-value pairs attached to image resources. Glance v2 also has the same concept of image properties, but it also has simple string tags, which is great. Both of them can be manipulated via the images object resource, but only one of them can be manipulated via its own collection resource. Again: surprise. No real coherency around it. If we were implementing the tags collection resource in Glance now, with the API working group around — as I'm going to get to — we might advise: both of these things, image properties and tags, are resource collections, so why don't you make the APIs look the same? That's one thing we might advise.

Okay, so the Nova and Cinder APIs also have this thing, but they call it metadata, not properties. In Nova you have instance metadata and system metadata. System metadata is stuff that the Nova system itself attaches to instance records; instance metadata is stuff that the user attaches to the instance. Cinder has the exact same concept, but they don't call it system metadata.
They call it admin metadata. There's also Glance metadata in Cinder, which is even more confusing. Depending on who you are, you see some or all of it. So, not completely inconsistent between the two — and it kind of sounds reasonable that you'd have this sort of segregated metadata. But when you really look into it, it doesn't make a whole lot of sense. For instance, in Nova there are certain types of system metadata — specifically, stuff around the flavor that an instance was spawned with — that are injected into system metadata. They were done that way because it was easier to just add a new key-value property in system metadata than to actually change the database schema in Nova. So again, this is where implementation leaks out of the API: it was easier to throw something into this little blob of key-value pairs than to make an actual attribute on the instance resource. So now we have a whole set of system metadata entries that are really just attributes, but we threw them into key-value pairs because it was easier.

Staying within Nova, we also have something called extra specs. Are extra specs any different at all from metadata? Anyone? No — they are exactly the same. They're just called extra specs. Host aggregates — same thing; literally, they're key-value pairs. And host aggregates and server groups have something called metadetails. No idea why. It's the same thing.

So anyway, those are two examples of ways that our APIs — even within a single API, like the compute API — are inconsistent. Why is it this way? Well, I've already described one of the reasons: it's just easier, right?
If you have this sort of freeform key-value thing, it's a lot easier to just throw stuff in there than to change the instance schema and add a real attribute to it. That's some of why the API looks the way it does. But there are other reasons too: good people disagree about how things should look, and because there hasn't really been a group that looks out for the consistency of the APIs across OpenStack projects, these good people on different teams go in different directions, and they don't necessarily communicate with each other. So you get these wildly different ways of doing virtually exactly the same thing.

So, about — what was it, four or five months ago? Okay, seven or eight months ago; thanks, everyone — we started this effort called the API working group. Our responsibilities involve looking across all of the OpenStack REST APIs, providing guidance to new projects, and showing existing projects how they can evolve their APIs over time to become more consistent with each other. Raise your hand if you're in the API working group — I see a few people in here. Yeah, so all of these folks are people who are concerned about what our APIs look like. What we want is for them to seem professional and consistent — and not baby pug. All right. So we work with the project teams to evolve the APIs.
That's what we do. What we don't do, though: we're not some kind of Gestapo secret police going in and forcing people to change things, or anything like that. We just discuss what the guidance should be for a particular rule — say, the response code for a particular HTTP call — and that guidance goes into a set of documents that gets published on the openstack.org site. When we see something in one of the OpenStack projects that either isn't quite aligned with that guidance, or where there is no guidance yet in the API working group repository, then we create that guidance and help the project teams — and the person who submitted that patch — start to become more consistent with what the API working group recommends.

So here's some example guidance. This actually comes from the guideline on how to do tagging. A couple of projects have implemented simple string tagging: Glance has it, and Nova has a series of patches currently going through that implement server tagging. We wanted to make sure that we had some guidance there, so that the folks submitting these patches would have something they can go look at to determine how to make their REST API consistent with how other projects are doing it.

Apologies — I thought I had something after this, but this is the last slide before I introduce Chris and he explains the Gabbi tool. What he's doing in Gabbi is a tool that will functionally test the RESTful APIs of OpenStack projects and highlight, in a declarative fashion, where the inconsistencies currently are. So anyway, it's Chris Dent; he'll continue from here. Thanks.

Hello, everyone. Let's see if I can find my screen. Right — so my name is Chris, as we've established, and I've been working on OpenStack for just over a year — four days more than a year. And I'm one of those people who came along to OpenStack and did in fact look at the API and think: hmm, what's going on here?
This is a bit confusing, a bit chaotic. And that was unfortunate, because I came onto the internet long before there was a web, and when the web did show up, I found it to be one of those things with such huge promise — such a glorious opportunity for people to be able to do stuff — and it became even better when there was the idea of doing web APIs. So when I came to OpenStack, I struggled, because the web APIs were not so great.

I think of web APIs as a conversation between a client and a server, where the client is actually you, the person, and the server is some set of things that you want to do. There are the technical implementations of the client and the server, but they're not quite the same thing. There's the thing you want to do, there's the stuff the server will allow you to do, and there's a conversation going on between the two of you. If you follow the rules of HTTP on both sides of that conversation, then you will be allowed to do things well. But in order for you to do those things well, your code has to work properly, and you have to use HTTP properly — and to create systems that are like that, we have to be able to test them. That's why I created Gabbi. We'll get to that in a minute.

So this is how I perceived OpenStack when I first got there, and I think a lot of people have this experience. This is a really sad guy looking at a dead clown. I guess the clown is probably the promise of OpenStack. Maybe that's a bit of an overstatement; maybe that's just me. It certainly was like that for quite a while — so long that I finally got frustrated enough that I needed to create a tool to help me. I needed to go from sort of that, which is the chaos, to this, which is elegance and balance and nice, proper web-ness. So how do you do that? How do you go from bad to good?
Well, the first thing you tend to do is identify some of the things that are bad about the existing system. In OpenStack, for me, the biggest problem was that the stuff is very hard to learn. There's a lot of convoluted stuff in both the active code and the test code. When you look at the code, it's very hard to find any clear source of authority. Where are the things that define what endpoints exist in the API? Is it the docs? Is it the code? Where in the code is it, if you're using a system that has object dispatch instead of explicit routing? How do you know?

The tests themselves — at least the ones I've inspected — have been horribly subclassed. You look at a test and you don't know what it's doing; you have to chase the code through several levels and several steps. There's client code that has been custom-designed within those tests to do exactly what you want it to do, and so the tests of course pass, because you're doing exactly what you said you were going to do. Well, out in the world, people writing client software are not going to do exactly what you want them to do. You need your server code to be resilient in the face of more complex input.

That follows on to the next point, which is basically that the testing in much of OpenStack is regression testing. It's there to make sure that things haven't broken, that things haven't gone wrong, that when we've put part A together with part B, it doesn't blow up. That's good; it's important to have those things. But you also want testing that allows you to write things well in the first place.

So, some of the solutions that would fix those problems: you want to be able to easily evaluate what's going on with the API, and you want to write tests that aren't verbose — tests that focus explicitly on what you want to be testing. You can read all of that; I don't think I need to go into too much detail, because the next slide shows an example. On the top is a traditional test.
I think it comes from Ceilometer, but it has the usual things: it's a method on a class, which uses a special method for doing a web request, constructs a query, and then evaluates the response. I don't know, from looking at that, what the full URL is. But if I go to the Gabbi example here, the URL that's being requested is just right there, in the forefront. That's what I wanted — that's what I wanted to create, and so I did. And now we're going to have a demo, which I hope will work.

The first thing we're going to do is start a live server. Gabbi doesn't need a server to do its testing; it can use wsgi-intercept, which is a tool that basically allows you to test against the code without a server in the middle. But in this case, for the sake of the demo, I'm going to use a server, and this is just a little helper script to run the different test files against that server.

So, what a Gabbi test file is, is a YAML file with a sequence of tests. A test is required to have a name and a URL, and nothing else. If you run this, it will pass, because the status code of the response is by default evaluated as 200, and that's really the only thing it will check. So I'm going to run it now — and there, you see that it ran. This verbose output is just something it can do if you ask it to: it shows you the request it's going to make, and then some of the response information.

This next one shows one of the things called a response handler. You can evaluate the response in various ways with code called handlers. This one is evaluating strings in the response, with the content at
So this tells you the html and evaluates that response handlers are Either built in and do a variety of things and I'll show a few of the other ones or you can write your own and add them to Your your test harness so that you can do things like Pre-process a DOM and then evaluate it against pquery or something like that So that that one's go that's passes This one shows a little more verbosity with Evaluating the response. We're going to send a different method instead of get and we're going to check for a different status than 200 and In this case, we're checking for the response headers now This demonstrates a bug either in Gabby or in my server code 50% of this time of the time this will fail because post and get will be in a different order So let's see what happens this time and it fails. So there's a trace back of the failure Down here at the bottom you see Why it failed and The the response codes Here is a more complex test. This one is Creating a container in the API. This is basically sort of a swift for dummies this this little API that I created In this case you can send some headers you set the content type With this you're sending data if the value of data is not a string it is translated into JSON before it's sent to the server So that we're going to send a little object with an owner of Sam. It's going to respond with the status of 201 This next test which will only follow after this one. 
They are run in order — and this one will evaluate that the body contains JSON with owner "sam", using JSON path. Is that something people are familiar with, JSON path? It basically allows you to do queries into JSON objects. And this is a magic variable that will be replaced by the location header from the prior response, so you can make a request of whatever you just created, to confirm that it has the things you want. So let's make sure that worked — it did.

This is another one that just shows a different JSON path. We're checking to see what objects exist in the shed I just created; there aren't any yet.

Here we have creating an object in the shed — some information about a car, apparently. We're going to evaluate the response headers to check and see that it has a legitimate location header. There are several things going on here. One is that if you bound the value of a header with slashes on both ends, it turns it into a regular expression; this regular expression is saying, does the thing on the end look like a UUID? And $SCHEME and $NETLOC are replaced by those values from the server — in this case that will be localhost, and http. We get it again: we're checking that the response headers are what we expect, and that the objects look like we expect.

This is creating another object. In this case we're doing a PUT, because we know the name of the thing that we're putting into the shed. If you're sending data, you have to set the content type; otherwise it doesn't necessarily know what to do with it. In this case we're sending a file that we're reading in: if you use this little set of symbols here, it will read a file from the current directory and send that. So this is going to post a kitten, we hope. And I can actually check that — that's something I wanted to make sure was working.
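Pulling the pieces of the demo together, a Gabbi file exercising the features just described looks roughly like this. The paths, payloads, and filenames below are illustrative stand-ins, not the actual demo files:

```yaml
# Tests in a file run in order, so later tests can build on earlier ones.
tests:
- name: front page
  url: /
  # method defaults to GET, expected status defaults to 200

- name: create a shed
  url: /sheds
  method: POST
  request_headers:
    content-type: application/json
  # non-string data is serialized to JSON before sending
  data:
    owner: sam
  status: 201
  response_headers:
    # a value bounded by slashes is treated as a regular expression;
    # $SCHEME and $NETLOC are replaced by the actual scheme and host:port
    location: /^$SCHEME:\/\/$NETLOC\/sheds\/[0-9a-f-]+$/

- name: look at the new shed
  # $LOCATION is replaced by the location header of the prior response
  url: $LOCATION
  response_json_paths:
    $.owner: sam

- name: put a kitten in the shed
  url: /sheds/1/objects/kitten
  method: PUT
  request_headers:
    content-type: image/jpeg
  # the <@ prefix reads file content from the test directory
  data: <@kitten.jpg
  status: 201
```

A file shaped like this can be pointed at a running server with Gabbi's gabbi-run command-line tool, or wired into a regular test suite, as the next part of the demo shows.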
So here's the front page, and there's our kitten, who doesn't like puppies.

There's a final object — a final example — with just some more JSON path stuff: you can do slices on arrays. You can use JSON path in the construction of URLs, and in the construction of queries against the previous response. So in this case, what we're doing is saying: get the first element out of the list of objects that was in this response, and use that to create the URL.

And just another example of what you can do in the YAML: in this case we're doing content negotiation, to ask for text/plain instead of JSON, which is what we've been doing up till now. That all worked.

Now, the fun thing with this is that that server code was created with test-driven design, using Gabbi itself. If we look at the server's test code, we can see how you would use this in your own tests: it uses the unittest load_tests protocol to create tests and then provide them to the test harness. I'm going to run this, because it should be relatively fast, just so you can see what's going on — maybe it'll be fast. There we go. In an environment where you have concurrency happening, the tests will be divided up by the name of the file that the tests are in. So if you have multiple files, they will be distributed across the processors, and each test file will be run in order on only that processor.

Back to this. So, all of this works because underneath, the code is translating the YAML into unittest test cases. Each of those test cases is assembled into a test suite per file, and then that's run through a custom test suite that allows you to use fixtures to create data and configuration. The response handlers are built in at test-creation time, I believe.

So I want to get into this part, which is how to use Gabbi well. One of the best things about Gabbi is that it makes things easy to write. You can write tests really easily, and once it
becomes easy, it becomes fun just to throw crap at your API and see what happens. And this is a fantastic way to break things; once you break things, you've got bugs, and once you've got bugs, you can fix them, and once you fix them, you've got better stuff — which, in the end, is the entire reason for all of this.

It's usually the case that if you are using this tool against an OpenStack project, you will need to establish some configuration, and the best way to do that is in a config fixture, which is associated with a test. The config fixture's job is basically to tell it where the API server is running — what host and port, important things like that — and to do things like, at least in some cases, disable the Keystone middleware, depending on what kind of authentication tests you want to do.

It's tempting to try to do too much in any one test. Because the tests are so easy to write, you have a tendency to want to just sort of put everything in there: you request every single thing and evaluate the entire body of the response in your test. If you do that, the tests become unreadable — and then what's the point of using Gabbi? The whole point is to make your tests more useful and more readable. It's also tempting to want to use Gabbi tests to do things like test your persistence layer. That's probably not a good idea; you want your persistence layer to be a known good thing, based on other tests, rather than on the tests you're creating now.

Gabbi is pretty useful for contributing to existing OpenStack projects, because it's an easy way to get into a project and learn about it: you can write these API tests easily and quickly, and in the process teach yourself something about the project while doing something good for a project you're interested in. Oh, and — this is why it's in the API-working-group-related track — you can also, of course, validate working group guidelines with Gabbi.

Gabbi itself needs better docs — doesn't everything? It
could also do with additional response handlers that allow it to evaluate specific types of content. Right now the input data is always YAML, but there's no reason why it has to be: you could use whatever you want, as long as it eventually becomes a dict that has the same structure. And I think, for me, one of the things that's really critical for the tool to become especially healthy is that it get input from a variety of communities. Right now, I wrote it, and there have been maybe four or five other people who have used it, and it works for everything the five of us have tried — but that's only five people out of a whole big world. There could be a lot of things that it doesn't do that it should do.

That's basically the end of the demo and the end of my talk. I wanted to point you at these things, though. The thing in the middle, the Gabbi demo — that's the code I used for the demo; the other ones are kind of obvious. I think I've left some time for questions for both of us, I hope. If you do have questions, please use the microphone over there, or if you can't reach the microphone, I will repeat them for you. You — you have a question?

"So, right now, as far as I understand, you have integrated Gabbi tests into Ceilometer and Gnocchi. Are there plans to integrate Gabbi into Nova and Neutron and Cinder?"

That's the thing I actually forgot to mention, which is that I invite anyone who's interested in that kind of thing to find me in the hallway, so that we can sort of get you started together. The other option is probably Friday afternoon — I don't think many people are doing anything then, but I'll still be here — and if you want to pair, if that's still allowed in this kind of environment, on doing this work, then I'd be happy to do that. It's certainly something that I want to see happen.
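For anyone who wants to try that integration, the wiring described earlier — the unittest load_tests protocol handing a directory of YAML files to Gabbi's driver — looks roughly like this sketch. The "gabbits" directory name, host, and port here are illustrative assumptions, not from any particular project:

```python
# test_gabbits.py -- a sketch of wiring Gabbi into a project's test suite.
import os

from gabbi import driver

# Directory of Gabbi YAML files, relative to this module (name is made up).
TESTS_DIR = os.path.join(os.path.dirname(__file__), 'gabbits')


def load_tests(loader, tests, pattern):
    """unittest load_tests hook: turn each YAML file into a test suite."""
    return driver.build_tests(TESTS_DIR, loader,
                              host='localhost', port=8001)
    # Or, to use wsgi-intercept instead of a live server, something like:
    # return driver.build_tests(TESTS_DIR, loader, intercept=app_factory)
```

With that in place, an ordinary test runner discovers and runs the YAML-defined tests alongside everything else.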
I think it will be very useful, but I do think it's something that's best done by a community of people, rather than just me running around from project to project to project. And it helps, so — yeah.

So the question is: can Gabbi be used for the unit-test API testing versus the Tempest API testing? I think what you're asking is, can Gabbi maybe replace some of what Tempest is doing with API testing? I think so, yeah. But it doesn't necessarily need to: there are certain goals that the API tests within Tempest are trying to accomplish that aren't necessarily the same as what Gabbi is trying to accomplish. There has been some discussion of using Gabbi for what they're calling negative tests. There hasn't been any progress on that, but it's certainly something that I think would be useful, because it's sort of a fast and easy way to do that job. Whereas — I think you'll recall from my earlier slide, I was complaining about how some of the tests use a specialized client — well, Tempest is the example there. I mean, Tempest tests don't even really bother to look at the response code; they just return the body, and that's not really a very good test of the entire system. Yeah?

"First: you took my first question, Jay, so thanks for nothing. The other thing I want to make you aware of: there's an effort over in docs land about generating API docs right from the Python source code. I'm not entirely convinced whether this is a good idea or not, but it seems like there might also be a way of generating these kinds of tests — the YAML files — from that."

Yeah. So, when Gabbi was first talked about, there was some discussion.
Well, why not Swagger, for example? And then we could have magic docs, magic tests, magic everything. At the time I didn't go with that, because I wanted to make something that was small and focused. But as you're saying, it would be very easy to take anything that auto-generates anything — it can be transformed — so it certainly would be possible. I think it's probably a good idea.

You know, with API Blueprint or Swagger, it's not as declarative in the same way: it's declaring the schema of the API, as opposed to a test case that's clearly declaring what a sample interaction looks like, almost. So it's a little bit different there. I mean, I like both Swagger and API Blueprint, but I think reading through the Gabbi files is quite a bit clearer than reading, say, JSON Schema from Swagger. I think we're basically in agreement.

"Hey — I don't know how useful it would be yet, but is there a way you could run this as unit tests and functional tests from the same set of specs?"

It really depends on your definition of unit test — against the live server, and against a test runner? Yes. So it is possible: basically, you'd use the same set of files in two different contexts. It's perfectly possible.

And apparently we're out of time. If you've got more questions, I guess we'll both be kind of around. Thank you.