Hi, this session is about OpenAPI, which is a new way forward for API documentation, design and tools. This session presents work to implement OpenAPI in multiple different areas like documentation, Nova, Magnum and QA. We discussed OpenAPI during the preparation for this session, and the discussion was very interesting and very productive. We have gotten many ideas for OpenAPI, and I am already satisfied before this session even starts. Thanks to all our members. And I hope all of you in the audience will be satisfied too. Okay, let's start this session. Alex, please introduce yourself. Okay, hello everyone. I'm Alex Xu. I'm mostly working on Nova as a Nova core reviewer, and I'm most active on the Nova API team. And I come from Intel. And I'm Anne Gentle. This is the first time I've had a slide with my new employer, Cisco. I recently, after five and a half years at Rackspace, decided to really challenge myself in the area of APIs. So that's my new goal. Hello. I am... Oh? Oh. Motohiro Otsuka. I come from NEC. I am a Magnum core reviewer. Hi. I am Ken'ichi Ohmichi. I am the QA PTL and a Nova core reviewer, from NEC America. So this is today's agenda. First of all, we will give an introduction, an overview of OpenAPI. Next, we will explain the reasons and merits, why we need to apply OpenAPI to OpenStack projects, from multiple different viewpoints, like the documentation side, the implementation side and so on. At the last, we will explain how to apply OpenAPI to each project, like documentation and Nova and Magnum. Sorry, the end of the slide is cut off now. I'm not sure. So, okay. The first part is the introduction of OpenAPI. We... the OpenStack community has an open API working... sorry, an API Working Group, for consistent API design across all OpenStack projects. In this working group, we have defined OpenAPI as a standard way for describing RESTful API documents. And we need to work together to apply OpenAPI across OpenStack projects.
In this session, you will learn the overview and merits of OpenAPI for end users and contributing developers, and how to use and implement OpenAPI in each project. And at the last, what is the next step to use and apply the OpenAPI standard? So, there are several standard formats to describe RESTful API design and documentation, like OpenAPI, WADL, RAML, and API Blueprint. So, compared with the other formats, what is OpenAPI? OpenAPI used to be called Swagger, but the name was changed from Swagger to OpenAPI last year. OpenAPI is a simple format for describing a RESTful API, and OpenAPI is very readable for both machines and humans. So it is easy to share documentation between server-side and client-side developers. In addition, OpenAPI is very flexible for fitting complex APIs like the Nova API. The Nova API is very huge and very complex due to some inconsistency. And there is a lot of real-world usage, for example IBM Watson, Amazon API Gateway, Kubernetes, and more. Next, Anne will talk about the reasons for using OpenAPI from the documentation viewpoint, please. Yes, thanks. That was a great introduction to the formats, the standards, and I bring up really retro things like keyboards and typewriters because there's a really good reason to apply these standards to documentation: to help out our users. So of course we want to provide consistency, we want the documentation to be helpful, and that builds trust with the consumers of the APIs. So how do we get there? I'm going to talk a little bit about the current state, some of the tools, and the movement towards a sustainable process. So this is the plan and the future, but let's talk a little bit first about what we've seen so far. In the Kilo release, we had over 120 doc contributors, and in Mitaka, it's still well in the hundreds. And so in each release I'm seeing, even in the last two years, more and more API doc contributions, which is amazing.
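To make the format concrete, here is a minimal sketch of what an OpenAPI (Swagger 2.0) description looks like, written here as a Python dict. The `/servers` endpoint, its parameter, and its response are illustrative placeholders, not taken from any real OpenStack spec:

```python
# A minimal OpenAPI (Swagger 2.0) description, as a Python dict.
# All endpoint and field names below are illustrative, not a real spec.
minimal_spec = {
    "swagger": "2.0",
    "info": {"title": "Example Compute API", "version": "2.1"},
    "paths": {
        "/servers": {
            "get": {
                "summary": "List servers",
                "parameters": [
                    # Query parameters are declared per operation.
                    {"name": "limit", "in": "query",
                     "type": "integer", "required": False},
                ],
                "responses": {
                    # Each status code documents one possible outcome.
                    "200": {"description": "A list of servers"},
                },
            },
        },
    },
}
```

Because the same structure is plain data, both a human reading the docs and a machine generating a client can consume it, which is the readability point made above.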
I was actually surprised to find that there was a release where API docs were the most contributed documentation across the entire doc suite. So we had the most commits in the last release, nearly 650 commits just to the API site repository. And it's just great to see the number of contributors. And I just updated this slide by doing a count on all of the documented operations. An operation is what I'm calling a GET, PUT, POST, PATCH, DELETE, or HEAD call for a particular endpoint. And that includes a version, that includes a resource. And so combined across the 12 services that are documented on developer.openstack.org, this is how many operations I have counted by web scraping every page. Now, 12 services have operations documented, but there are 25 services at last count in the projects.yaml governance file in OpenStack. So the API ref today doesn't even cover all of the services. I don't want to dwell on it, but I want to make sure people understand the magnitude of what we're working with. So I've talked a little bit about our current state. This is the site. This is how many contributors we've had. It's actually done really well, but there is no way to keep that kind of work going. It is an antiquated format right now. It's XML, and it is large. It's 1.8 million lines, 1,800 JSON files. 161 bugs, which was actually down from 180, so I had this cool trick with ones and eights, but now there are more bugs fixed. This is a good thing. But it also means that we need to look at how we can make this sustainable by spreading it out across the 25 services that need to write this doc, right? It's a lot for one core team. I think we can all accept that as truth. Now, many of our APIs were designed in 2010, and that was when WADL was an upcoming standard for how to describe your REST APIs. However, it allowed us to do things like have 22 POST /action operations on a single compute resource.
This started to spread into other areas of the OpenStack ecosystem, and that's not really able to be described by something like OpenAPI. The idea originally was that we would be able to use vendor extensions, which are an extensibility mechanism in Swagger, in OpenAPI, that would allow for this, but we're still finding that there are, you know, a handful of APIs that are not going to be easy to describe directly with Swagger, so we have to build a path to that. The other thing we discovered is microversions. That is where you can actually get incremental version changes: you make a call to the service, and it will tell you what microversion you are running, and then you can do the call based on that microversion. Our docs don't support telling people about that at all. They literally have to grab the source code to find out. So, we did do a Swagger migration of all 12 services that were documented in WADL, and you can go see that on developer.openstack.org slash draft, because these are drafts, slash swagger, and you can actually point to those in the Swagger editor, editor.swagger.io, to look at a side-by-side rendering, and because we were able to use vendor extensions, all of them will render. Something like Compute takes a very long time. It's 231 operations just for the Compute Swagger file. Something like this one in the screenshot is 13 operations, so I think that there's a little bit of difference between the services in what we'll be able to do. So, what are we going to do? And there will be a deep dive on this. The intent of this session is to give you information so that you know where and what to deep dive into, but we actually have a working proof of concept now that converts WADL to RST, and then Sean Dague has built some Sphinx extensions that let us write these parameters files. We'll talk in a little bit about how we can eventually get to the OpenAPI standard and start to test against these. So, write that one down. Wednesday, get to the Hilton.
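The microversion idea described above, where the client asks for a version and the call behaves accordingly, can be sketched in a few lines. This is an illustrative sketch of the comparison logic only, not Nova's actual negotiation code; the point of the tuple comparison is that a lexical string compare would wrongly treat '2.3' as newer than '2.10':

```python
def parse_microversion(header_value):
    """Parse a microversion string like '2.25' into a comparable tuple."""
    major, minor = header_value.split(".")
    return (int(major), int(minor))


def supports(requested, introduced_in):
    """True if the requested microversion includes a feature that was
    introduced at `introduced_in`.  Tuple comparison is numeric, so
    '2.3' correctly compares as older than '2.10'."""
    return parse_microversion(requested) >= parse_microversion(introduced_in)
```

For example, `supports("2.25", "2.10")` is True, while `supports("2.3", "2.10")` is False, which is exactly the distinction docs generated per microversion would need to express.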
And so, this is what we have today as a proof of concept for migrating in this next release. So, if you go to developer.openstack.org slash api-ref slash compute, that is what you'll see. Now, I've taken enough time on this, and I want to make sure we get to hear these other awesome ideas about getting to implementation. Thanks. Hi. I will talk about the implementation viewpoint, especially the problem which made us recognize the necessity of OpenAPI in Magnum. Do you know Magnum? Yeah. Magnum is known as container-as-a-service, which deploys a container orchestration engine, such as Kubernetes, Docker Swarm, or Mesos. So, we have three options to use in Magnum. If you want to use Magnum, you must choose the container orchestration engine which you want. And also, a client program, like Horizon or the Python client, should show which container orchestration engines are available on the server side. So, this is the problem. Currently, we have no way to get which container orchestration engines are available on the server side. Hard-coded values on the client side don't make sense, because these parameters are different in each cloud. It means some clouds may support Kubernetes and Mesos, but other clouds may support only Docker Swarm. So, the server should tell the client which parameters are available. But how do we get the available parameters from the server side? JSON Home was a candidate for notifying clients of available API resources from the server, but the original JSON Home doesn't cover parameters in its specification. OpenAPI covers both available API resources and parameters, and some open-source projects, Kubernetes et cetera, provide this info via REST API. A client application fetches the OpenAPI data from the server side and shows the available options to users, based on the server's version, extensions and configuration. And, ultimately, the maintenance cost of the client application will become almost free. So, these are the reasons why we need OpenAPI. Next, Alex will explain how to implement OpenAPI in Nova.
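The Magnum idea just described, a client fetching the server's OpenAPI data to learn which container orchestration engines are available, might look roughly like this. The `/baymodels` path and the `coe` parameter with an `enum` list are hypothetical names used only for illustration:

```python
def available_coes(spec):
    """Given an OpenAPI document fetched from a Magnum-like server,
    return the allowed container orchestration engines.  The
    /baymodels path and the 'coe' parameter carrying an 'enum' list
    are hypothetical, for illustration only."""
    post = spec.get("paths", {}).get("/baymodels", {}).get("post", {})
    for param in post.get("parameters", []):
        if param.get("name") == "coe":
            return param.get("enum", [])
    return []


# A fabricated server response for illustration: this cloud offers
# only Kubernetes and Docker Swarm, not Mesos.
example_spec = {
    "paths": {
        "/baymodels": {
            "post": {
                "parameters": [
                    {"name": "coe", "in": "body",
                     "enum": ["kubernetes", "swarm"]},
                ],
            },
        },
    },
}
```

With this shape, `available_coes(example_spec)` yields the per-cloud list, so Horizon or the Python client could populate a drop-down without hard-coding anything.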
Okay, I will give a short update about the current Nova status. So, first thing, the expectation for OpenAPI from the Nova side is more about the document: our API reference document is totally out of date, and users really need to read our source code before using our API. I think part of the reason is that the Nova API reference document is not in the Nova repo, it's outside the Nova repo. So, most of the Nova reviewers won't track another project. That's a little too... No one can track the quality of the API reference document. And, as Anne mentioned, there is the microversions problem in Nova. Up to the Mitaka release, we already have microversion 2.25. In each version, we make a little improvement to our API. This means we already have 25 API versions, without any document, because the current WADL toolchain doesn't support microversions. So, we wanted to resolve that problem, and we gave OpenAPI a try. We wrote some proof-of-concept code, and we found some good things in that. It's really good that we can auto-generate all the API endpoints from Nova's home-made WSGI stack. And we can keep the document close to our code. That's really good for the reviewers: they review the code and compare it to the document. It's helpful for improving the document quality. And we also can discover all the microversions from the code. And we also can use the generated OpenAPI to improve our testing. For example, we can generate the OpenAPI spec and compare it with the previously generated one to ensure we are not breaking something in our API. But from the PoC we still found a lot of trouble. Like Anne already mentioned, there are the action APIs. In Nova, you pause a server, start a server, stop a server: all those actions go through a single API endpoint. That's not really a RESTful style, and OpenAPI doesn't support that.
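The auto-generation idea Alex describes, walking the routing table and emitting an endpoint skeleton, can be sketched like this. The flat route-table input is an assumption for illustration; the real PoC would walk Nova's WSGI router objects instead:

```python
def routes_to_openapi_paths(routes):
    """Build an OpenAPI 'paths' skeleton from a route table.

    `routes` is a list of (HTTP method, path) pairs standing in for
    what Nova's home-made WSGI stack knows about its endpoints; this
    is an illustrative sketch, not the real PoC code."""
    paths = {}
    for method, path in routes:
        # Each path groups its operations, keyed by lowercase method,
        # matching the OpenAPI document structure.
        paths.setdefault(path, {})[method.lower()] = {
            "responses": {"200": {"description": "TODO: document me"}}
        }
    return paths
```

Even a skeleton like this is useful: comparing a freshly generated one against a previously generated one is the regression check mentioned above, since a vanished path or method means the API changed.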
So, we thought about using the OpenAPI vendor extensions to support that. But that means we are using something nonstandard, which means we drop out of the ecosystem of OpenAPI tools. And that is not good. Also, although we can discover a lot of things from the Nova API code, there are still some things we can't, like the query parameters and some headers. These things are coded deep inside the code, so we cannot get them out. And we also have the response bodies: those are in Tempest. Another huge problem is how to migrate the existing WADL documents to our new documents. As you know, Nova has a lot of APIs. So, if we start from zero to fix our API document, that's a huge amount of work. So, thanks to Sean for pointing out that we shouldn't make users wait a few more cycles before all these things are fixed. We should move our API reference document back into our repo; then the Nova contributors and reviewers can help fix the current document. And once it's in our Nova repo, we'll begin to fix the bugs in the API content and catch up the document for the microversions. This is based on what Anne mentioned previously: Sphinx plus reStructuredText. So, this is not OpenAPI. But the really good thing is that the API guidelines and the developer reference are already based on this workflow, and RST is already familiar to most developers. So, people can contribute to the document without too much background, and Anne already has a tool for converting the WADL to RST, which resolves the migration problem. And the current API reference is already based on this new tooling; really, thanks to Anne and Sean, it really works, and in a very short time, yeah. And next, Ken'ichi will introduce the further steps. Thank you, Alex. So, as Alex said, there are a lot of content bugs in the RST files, and we need to fix these bugs in this cycle. There are several hints for fixing these bugs. Nova has API routing information like URI, HTTP method, and body.
In addition, Nova has JSON schemas for request validation. A JSON schema contains the parameter types, parameter names and available values. That should be very useful for fixing these bugs. But the JSON schemas for responses exist in Tempest, which is the integration test suite, because right now Tempest validates response bodies with these JSON schemas. So, as you see in the diagram, in Tempest each test sends a request to a target service like Nova. When Nova receives a request, Nova selects an API operation based on the URI, HTTP method and body. And before executing the API operation, Nova validates the request body with the JSON schema. If there isn't any problem, Nova executes the API operation and sends the response back to the client, in this case Tempest. When Tempest receives the response, Tempest checks the response body with its JSON schema. That is the flow. So, the JSON schemas of both request and response are super valuable and a great hint for fixing these bugs, because we are using these JSON schemas every day in the gate tests. One idea is to migrate the JSON schemas for testing responses from Tempest into Nova. By migrating all the pieces into the Nova repository, we will be able to fix these bugs within the Nova repository. In addition, there is a Tempest plug-in interface which can use code external to Tempest. So, even after migrating the JSON schemas into Nova, Tempest can still use the migrated schemas in its tests. In addition, the Nova team will be able to use the migrated schemas for unit tests and functional tests without any DevStack environment. That is good for improving the quality of Nova also. After migrating the JSON schemas into Nova, it is possible to convert a JSON schema to request and response parameters like this, and we can fix these bugs in the RST contents. I created a prototype for converting these JSON schemas to RST files last week, and it seems to work fine now. So, in the future, or in the long term, this info can be used for the OpenAPI data.
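The JSON-schema-to-parameters conversion Ken'ichi mentions can be sketched as follows. This is a simplified illustration of the idea, not the actual prototype; it flattens a request-body schema into the (name, type, required, description) rows that an RST parameter table needs:

```python
def schema_to_params(schema):
    """Flatten a JSON schema for a request body into rows of
    (name, type, required, description), the shape an RST parameter
    table wants.  A simplified sketch of the conversion idea, not
    the real prototype."""
    required = set(schema.get("required", []))
    rows = []
    for name, prop in sorted(schema.get("properties", {}).items()):
        rows.append((
            name,
            prop.get("type", "object"),   # default when type is omitted
            name in required,
            prop.get("description", ""),
        ))
    return rows
```

Because the same schemas gate every Tempest run, rows generated this way stay consistent with what the API actually validates, which is the whole appeal of reusing them for docs.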
We already have a prototype for this JSON schema to RST conversion, and in OpenAPI, the JSON schemas for responses can be used without any conversion. So, that is the next step for implementing OpenAPI in Nova. Next, Motohiro will explain how to implement OpenAPI in Magnum, please. So, I'll talk about the implementation details on the Magnum side. Unlike Nova, Magnum uses the Pecan and WSME frameworks for its API implementation. So, it's slightly easier than Nova to support OpenAPI. We already have a prototype which supports OpenAPI in Magnum. The basic concept is that the API routing is gotten from Pecan, and the response and request bodies are gotten from WSME. Pecan and WSME know almost everything we want. On the Pecan side, we are using the pecan-swagger library to get the API routing information. Its author is out there. And it provides a decorator to support getting the API routing information. But currently, it's still insufficient for our use. For example, there are several ways to customize API routing in Pecan, and the API routing of Magnum is customized. However, the pecan-swagger library doesn't support such customized API routing. So we are trying to extend the decorator which pecan-swagger provides to adapt to our rules. On the WSME side, WSME has all the information about the HTTP entities. So no magic is needed to get the information. WSME also provides a decorator declaring which parameters each method requires and what the response body is. For example, we can see that the get_one method requires these parameters for the request, in this case a UUID or name, and it returns a Bay, which WSME defines. The Magnum API knows everything necessary for OpenAPI. So we can provide OpenAPI data from Pecan and WSME. So next, please summarize the implementation. Okay, I'll try to do that. The summary is: in the current situation, the WADL files were converted into RST files. We have already started fixing the contents of the RST, and JSON schema is useful for fixing these bugs. But there are several next steps for implementing OpenAPI.
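The WSME approach just described, decorators on controller methods declaring parameter and return types that a generator can later read, can be mimicked with a toy decorator. This is an illustrative stand-in, not WSME's real API; the attribute name and controller are invented for the sketch:

```python
def expose(param_types, return_type):
    """A toy version of the WSME idea: the decorator records which
    parameter types a method takes and what it returns, so an
    OpenAPI document could be generated from the code later.
    Not WSME's real signature -- an illustration only."""
    def wrap(func):
        func._api_info = {
            "parameters": param_types,
            "returns": return_type,
        }
        return func
    return wrap


class BaysController:
    @expose(param_types={"bay_ident": "string"}, return_type="Bay")
    def get_one(self, bay_ident):
        # The real method would look up the bay; omitted here.
        return {"uuid": bay_ident}
```

A generator can then walk the controllers, read `_api_info` off each method, and emit the parameter and response sections of an OpenAPI document without parsing any source code.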
And we haven't yet found how to implement microversions and the action APIs, or how to generate the header and query parameters from the code. We need to find the best way for this kind of thing. That's all. So, are there any questions or comments or something? Please use the microphone, for the audience. So you're saying that OpenAPI, as a Linux Foundation community, doesn't yet support microversions. Is anybody going to talk to the Linux Foundation about either implementing that, or about what we need to do so microversions can be supported inside of it, or maybe share viewpoints on that? Yeah, microversions are OpenStack specific, and the OpenAPI Initiative does not know about them at all at this time. So we need to communicate this microversion thing with them. That is the best way as an open source community. That's a nice point. Thank you. Are there any other comments? That's right. Have you guys used the swagger.json to automate testing or generate test cases for your APIs? You mean we could generate tests from the OpenAPI specification? Yeah, to test your APIs based on your documentation. Ah, nice point. That is an interesting thing. So, yeah, I think it is possible to do that. Right now, in Tempest, all tests are implemented with static code, and if we do that, we would need to create a test generator based on OpenAPI as a previous step. After that, it is possible to implement these test cases. This point is very interesting to me. Hi, so you looked at Nova and Magnum, and it seemed like it was never quite a perfect fit. For newer projects, are we putting out guidelines so that it would fit perfectly with OpenAPI Swagger? Oh, great question. Thank you.
Yeah, the API Working Group has a documentation guideline, and I've recently revised it: if your API can be described perfectly by OpenAPI Swagger, by all means use it. The last thing we still need developer work on is publishing HTML from that, but if just copying up a swagger.json like we've done in the draft format would be okay, then that's fine. And there are... I'm trying to think of what percentage could be described by OpenAPI. I haven't done that exact test, but I bet 10 could be, out of the 25. Yeah. It's giant. One follow-up question, with the test data generation, right? Is it possible to develop an API sandbox also, in addition to the test data, so that anybody can play around with the API with the test data in that environment? I have a blueprint from 2012 that asks for an API sandbox, so that would be excellent. And actually, the way I look at it: if you don't have compute, you don't really have anything you can do that's interesting in a sandbox. So the hurdle would be getting a compute OpenAPI that could then do the sandbox, and you might have more ideas. Sorry, I couldn't catch the conversation. So the vision would be TryStack: it's a free OpenStack cloud, you could get an account and you could just do the sandbox on TryStack. So, an idea. The other option would be the OSIC, building that thousand-node cluster or something, so we could use some portion of it at least. Thanks for coming.
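As a footnote to the test-generation idea raised in the Q&A, driving test cases from an OpenAPI document could be sketched like this. It is only a sketch of the audience member's suggestion, not anything Tempest does today:

```python
def generate_test_cases(spec):
    """Walk an OpenAPI document and emit one test descriptor per
    operation.  A sketch of generating tests from the spec, as
    suggested in the Q&A -- not real Tempest code."""
    cases = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            cases.append({
                # e.g. "test_get_servers" for GET /servers
                "name": "test_%s_%s" % (
                    method, path.strip("/").replace("/", "_")),
                "method": method.upper(),
                "path": path,
                # Documented status codes become the expected outcomes.
                "expected_status": sorted(op.get("responses", {})),
            })
    return cases
```

A runner would then issue each request against a live deployment and check the status and body against the documented responses, which is the "test your APIs based on your documentation" idea from the question.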