Okay, it's time for the next presentation: Stéphane Carrez, who will present some Ada-related work in a slightly different area, web services. So we're pleased to hear that Ada is not only for real-time systems, embedded systems, safety-critical systems and so on, but also for other important applications.

Just a bit of history about me. I have been doing open-source development for the last 20 years. I did a lot of work on GCC a long time ago, and during the last seven years I have been doing a lot of work in Ada, mostly interested in web applications. So today I will present REST APIs and the OpenAPI standard, and I will show you how OpenAPI and Swagger can help in creating a REST web application. I will briefly introduce OpenAPI and Swagger, then describe how to create a REST Ada client, then how we can write a REST Ada server, and how we can handle security with OAuth. And hopefully, if it works, I will try to make a demo.

A little bit of background: RPC is 40 years old, it's not new. I have listed a number of interesting RPC mechanisms that have been used. Sun RPC is an old RPC mechanism which was introduced for NFS. It used a description file whose syntax was close to the C language, and with this description language you describe how the client and the server interact: you describe the protocol and the operations, and from that you generate the client and server code. In 1991 there was a standard with CORBA and its IDL, the Interface Definition Language. Again, in IDL you describe the interaction between the client and the server, and from this high-level description you generate client and server code. Ada 95 has a Distributed Systems Annex, so it was standardized a long time ago, and you had nothing to do except declare some pragmas to make your operations remote. But it was Ada-centric. Java, on its side, did the same with RMI around 2000, but again it is Java-centric.
Around 2000 we saw the emergence of WSDL with the SOAP protocol. All of the above, Sun RPC, CORBA, the Ada annex and Java RMI, use more or less binary protocols, which are difficult to debug: when you have to understand the interaction on the wire, it's difficult. So people said it would be easier with XML, and they created SOAP for that. The WSDL describes the interaction between the client and server, and from this WSDL you generate code. More recently there is Google with the gRPC mechanism and protocol buffers: they are pushing their own RPC mechanism, and again they have their own description language.

So what we see during these 30 years is the same goal. We want to simplify the developer's job, first of all. We want to describe the protocol between the client and the server, and we want the client and server code to be generated. We also want to hide the communication details so that the programmer just has to call a function, and that's it. Last is documentation: documenting the interaction, documenting how you can call your server with your API, is also important.

So why REST and OpenAPI? REST also dates from around 2000; it was first described in the PhD thesis of Roy Fielding. It was a simplification, easier to use: an organization of URLs which lets you make requests more easily than SOAP, which appeared at the same time. The good thing with REST is that you can use it from your browser natively, because it uses the standard HTTP GET, POST and PUT operations. What we also see is an increased usage of REST APIs with all the mobile applications: Google, Facebook, they all provide REST APIs because they would like more mobile applications to be developed with their APIs. With all of this, we need documentation; we need to document our APIs. And you also need language bindings: each time you create your own API, you have to write a Java binding, a C binding, a JavaScript binding and so on. It's complex.
And here OpenAPI and Swagger are a solution. This started with a company which created Swagger around 2010, and which later formed a consortium with Google, Microsoft and a few others to standardize the way we describe REST APIs: the OpenAPI specification. They published the third version of OpenAPI recently, last year, and the specification is available on GitHub. Here I'm focusing on version two of OpenAPI, because most of the tools that I will use support this version; version three support is not yet ready.

So what is an OpenAPI document? It is just a YAML or JSON file which contains well-defined tags. `info` is a tag in which you describe generally what your API is doing. You have a description of the security with the `security` and `securityDefinitions` elements. You define `host`, `basePath` and `schemes`, which tell you what the entry point of your server is, what the host name is, and whether you can reach it with HTTP or HTTPS. You describe what your server produces and what it consumes: whether it generates JSON, XML or binary, and whether it consumes JSON, XML or forms. Then one important part is `paths`. In the `paths` part you describe each operation in terms of URL path and in terms of HTTP operation: get, put, post, delete. So this is really HTTP-centric, REST-centric. Then you have `tags` and `externalDocs`, which provide additional documentation and bring some semantics to your API. You also have a description of data types, parameters and responses; all of them are used in the paths. I will show you a little bit.

So what can you do from this description? First, you can generate static documentation from the Swagger file. You can also have online documentation.
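As a rough sketch of these well-defined tags, a minimal OpenAPI 2.0 (Swagger) document has this shape; all names and values below are illustrative, not taken from the talk's example:

```yaml
swagger: "2.0"
info:
  title: Todo API
  description: General description of what the API does.
  version: "1.0"
host: todo.example.com
basePath: /v1
schemes:
  - https
  - http
consumes:
  - application/json
produces:
  - application/json
paths: {}                 # one entry per URL path and HTTP operation
definitions: {}           # data types used in parameters and responses
securityDefinitions: {}   # OAuth scopes, API keys, ...
```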
The online documentation is interesting because within it you can also make queries and use your API: it's an interactive documentation which allows you to play with the API. You can also generate the client binding so that you can immediately use the API, and you can generate the server part so that on the server side you have less code to implement. Swagger also allows you to generate an Apache configuration, and provides support to automatically validate your API. Question from the audience: validate the API, meaning validate the server implementation or validate the API specification? Answer: validate the server, by injecting requests and validating the responses of your server.

So Swagger is in fact a collection of tools. An editor which allows you to edit your OpenAPI specification; the editor will tell you a lot about the mistakes that you could make. The generator, which takes the OpenAPI file and generates the code for you. And the Swagger UI, a dynamic UI that you can include in your web server so that you publish your API online and everything is documented. Everything is published on GitHub.

This is the Swagger editor: on one side you edit your text, and on the other side it shows you the documentation, including the errors. The generator takes your document and is able to produce output for around 25 different languages, including static documentation and REST clients: an Ada client, a Java client, Python and so on. On the server side it is also able to generate Ada, Java with several frameworks such as Spring, Python and others.

So I will briefly describe the process to write a client with this model. The first thing you do to write a client is get an OpenAPI specification.
Either by going to SwaggerHub, where a lot of OpenAPI specifications are published that you can pick and use, or by looking at APIs.guru, a repository which publishes, today, more than 500 APIs from Google, Facebook, Twitter and others. These documents are available; you can pick one, generate the Ada client code from it, and then all you have to do is use the generated code.

So let me briefly describe the OpenAPI file. As I said, this is a YAML or JSON file; in this example it is YAML. You have the specific tags I spoke about: `info`, `host` and so on. Here you describe your API in terms of general information: which server will respond to this API, what the base path is, a number of tags, and the fact that you can access it with HTTPS and HTTP. So this is general-purpose description.

Then we have the really interesting part: the description of one operation. What it says is that my API is accessed by going to the `/todos` path (the base path is prepended), and the operation is an HTTP GET. Then I have a description of my API; this is documentation. This API produces JSON, and it accepts an optional parameter: `required` is not true. The parameter's name is `status`, and it is passed in the query (you can specify query, form or others). You specify the type of this parameter, and here you also state that this string is in fact an enum which can only take one of these four values. Then you describe the response, and in the response you describe the schema, what is really returned by your API; here it refers to a definition that I will present a little later. You also list the possible response codes that the API can return. You do this for each operation that you have. Here I have the description of what is returned by my GET operation.
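An operation description of the kind just walked through could look roughly like this in OpenAPI 2.0; the path, parameter names and enum values are illustrative reconstructions, not the exact file shown on the slide:

```yaml
paths:
  /todos:
    get:
      summary: List the todos
      description: Returns the list of todos, optionally filtered by status.
      operationId: listTodos
      produces:
        - application/json
      parameters:
        - name: status
          in: query
          description: Filter the todos by their status.
          required: false
          type: string
          enum: [waiting, working, done, all]
      responses:
        200:
          description: The list of todos.
          schema:
            type: array
            items:
              $ref: '#/definitions/Todo'
```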
So you have a syntax where you describe a record. This is a record composed of id, title, create date, done date, status, and so on. Each of them has a type and a description. There are a number of predefined standard types in Swagger; you cannot introduce new types, so there are some limitations. You also describe whether a parameter is optional or not; I will come back to that later.

So here is what you have to do: you have your Swagger file, and you just use swagger-codegen to generate the client part. What you get is this set of files: your input is the file todo.yaml, and it generates all of this, separated into a client part and a model part. In the model part you have the data types which describe the responses and inputs of your data model, and in the client part you have all the operations that allow you to invoke your API. These four files are generated automatically; you don't modify them. Your own code then goes in the part which you can change, and this is just a sample program which shows you how to start and call the API.

In terms of the components your client uses when it runs: your application is at the top level. You have the generated part, which takes care of the interaction with the server and of the data which is passed between the client and the server. Below that we have a small runtime which uses a number of libraries. Ada Security is a library that I created to bring OAuth support. Ada Utility Library is a collection of support code that I wrote to do JSON and XML serialization; it also provides an HTTP abstraction on top of CURL and AWS, so you can choose to use either CURL or AWS, and this is transparent at this level.

So the models package contains the types that I described in the definitions. In the YAML file you have a description of what is returned in the response, and from this description we generate an Ada record.
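As a sketch, the generated model could look roughly like this; the package, type and member names are assumptions based on the talk, not the exact generated names:

```ada
with Ada.Calendar;
with Ada.Containers.Vectors;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

package Todo_Models is

   --  Ada has no null for these types, so an optional value is
   --  represented by a boolean paired with the value itself.
   type Nullable_Date is record
      Is_Null : Boolean := True;
      Value   : Ada.Calendar.Time;
   end record;

   --  The record generated from the YAML definition.
   type Todo_Type is record
      Id          : Long_Long_Integer;
      Title       : Unbounded_String;
      Create_Date : Ada.Calendar.Time;
      Done_Date   : Nullable_Date;      -- optional member
      Status      : Unbounded_String;   -- enums are mapped to strings
   end record;

   --  Operations returning an array of todos use a vector.
   package Todo_Type_Vectors is
     new Ada.Containers.Vectors (Positive, Todo_Type);

end Todo_Models;
```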
The name of the Ada type is taken from the name of the definition, and each record member comes from a property of the definition you see here. Some parameters are optional: typically, as I said, there is a `required` clause which says that id, title, create date and status are required in the todo definition, but the done date is a parameter which is optional, so it may have no value at all. In a language like Java you can use null to represent the fact that there is nothing; in Ada we have no such concept of null for these types, so you have to use a specific type, here a nullable date. We could have a nullable string which says: there is nothing, it's optional. In fact this is just a boolean plus the value. We also generate a vector which is used to manage collections of todo types. As a reminder: I have my Todo type, but my operation returns an array of todo types, and to implement this, Ada container vectors are used.

Now, the client API is represented by the client type. This generated tagged type provides each operation that was described in the YAML description; here we have Create_Todo and List_Todos, whose names were part of the description, and then you have of course the client instance, all the parameters, and the response of your API call. Using this API is very simple: you just declare an instance of the client type, set up the server you want to connect to, and then perform the operation. With these two instructions you retrieve a list of todos.

Okay, let's now look at the server side. It's a little bit more complex, because on the server side the first thing is that you have to write the YAML OpenAPI description yourself.
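Before moving to the server side, here is a sketch of what using the generated client could look like; the package, type and operation names are assumptions, not the exact generated API:

```ada
with Ada.Text_IO;
with Ada.Strings.Unbounded;
with Todo_Clients;   -- hypothetical generated client package
with Todo_Models;    -- hypothetical generated model package

procedure Client_Example is
   Client : Todo_Clients.Client_Type;
   List   : Todo_Models.Todo_Type_Vectors.Vector;
begin
   --  Point the client at the server declared in the OpenAPI file.
   Client.Set_Server ("https://todo.example.com/v1");

   --  One call: the generated code serializes the parameters,
   --  performs the HTTP GET and deserializes the JSON response.
   Client.List_Todos (Status => "working", Result => List);

   for Todo of List loop
      Ada.Text_IO.Put_Line (Ada.Strings.Unbounded.To_String (Todo.Title));
   end loop;
end Client_Example;
```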
Then you have to generate the code and implement what is behind your server. What you can also do is publish your OpenAPI document on SwaggerHub so that you share it with others. On the server side, the command to generate is this one, so it's very simple: you just have to change the generator. It generates a number of files, and the server is very close to the client model: client and server share the same data model. Among the server-specific files you have a skeleton, and the skeleton in fact contains all the work to extract the parameters from the request, call the server instance, and respond to the client. Then you have some files that you can modify, which contain the implementation of your server; of course, it is initially empty.

Question: Stéphane, what happens if you implement the server and then figure out, well, I have to make some change to the API: can you re-run the generator? Yes. Swagger in fact has, I don't remember exactly the name, something like GitHub's .gitignore: a swagger ignore file where you insert the names of your files. If you run this command several times it regenerates everything, but if you decide to edit a file, you add its path to the swagger ignore file and then the generator will not overwrite your own file. Question: But then you have to adapt it yourself? Yes, you have to adapt it yourself. Question: How much code would you have to adapt, and how much code would you have to add on top of what the generator produces? In general you don't have to modify this part and this part, and the part that you have to modify on the implementation side is in general very light. I will show you.
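The ignore file mentioned in the answer is, if I recall the tool correctly, named `.swagger-codegen-ignore` and works like a `.gitignore`; the listed paths below are only an illustration:

```
# .swagger-codegen-ignore
# Paths listed here are left untouched when the generator is re-run.
src/server/todo-servers.adb
src/server/todo-servers.ads
```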
On the server side we also have a Swagger runtime, using almost the same components plus a new one, Ada Servlet. Ada Servlet is in fact an Ada implementation of the Java servlet API that I did some time ago, and it is the part that brings REST support for the whole system; it runs on top of AWS. Above that you have the generated code, and what you have to write sits at the top.

So, the skeleton. The skeleton generates a server type, a limited interface with one procedure for each operation defined in your OpenAPI file. In each operation you get your server instance, the input parameters, and what your server has to produce. You also have a context parameter which gives you information about the request and the response, so that you are able to have more control over them. What you have to do is simply implement this server type interface, so it's very simple: implement all the operations declared by that interface. All the serialization and deserialization, everything related to JSON, XML and so on, is handled by the skeleton, so you don't have to do it.

The last part is the server model. There are two server models: one where we create one server instance per request, which is a simple model, and another where your server instance is shared among all requests; you have to choose between the two. When you choose the shared instance, you have a single instance of your server and it is protected by a protected object, so that concurrent operations are safe. You just have to instantiate the skeleton generic package, register this package with the application, and then when the server is started it will be able to serve the requests which are defined.

A word about security. You describe the security within the Swagger document in two parts.
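Before turning to security, here is a sketch of what the server-side implementation could look like; the package, type and operation names are assumptions, the real generated names may differ:

```ada
with Swagger.Servers;   -- runtime context type (assumed name)
with Todo_Models;       -- hypothetical generated model package
with Todo_Skeletons;    -- hypothetical generated skeleton package

package Todo_Servers is

   --  Implement the generated limited interface: one overriding
   --  procedure per operation declared in the OpenAPI file.  With
   --  the shared-instance model, a single instance of this type
   --  serves all requests behind a protected object.
   type Server_Type is limited new Todo_Skeletons.Server_Type with record
      Items : Todo_Models.Todo_Type_Vectors.Vector;
   end record;

   overriding
   procedure List_Todos
     (Server  : in out Server_Type;
      Status  : in String;
      Result  : out Todo_Models.Todo_Type_Vectors.Vector;
      Context : in out Swagger.Servers.Context_Type);

end Todo_Servers;
```

The body of each operation only has to populate or update the server's own data; parameter extraction and JSON serialization are done by the skeleton before and after the call.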
First, a general description of what security you expect: here this is OAuth security. In the security definitions part you describe the scopes; the scopes are related to permissions, so here you allow reading todos and writing todos. Then, on each API operation, you specify which scopes are required for the caller to invoke this operation.

On the client side, if you want to use OAuth, you create a credential object, and this credential object will hold the OAuth information. OAuth has a number of pieces; I will not describe it completely because it's really complex. You set an application identifier and an application secret, which identify the application from the server's point of view; you describe the OAuth access point; and then with this call you authenticate to the server with your username and password, asking the server: this application wants the read-todo and write-todo scopes, I want these two permissions. Once you are authenticated you obtain a token, and you just have to set your client credentials and you are done. Now when you make a call like this, the authentication is passed in the Authorization header, in the bearer token which is here; this is RFC 6749 for those who want to look, and this is what grants you access to the API.

On the server side, each scope generates the definition of a permission, for example a read-todo permission, which is in fact the instantiation of this package. Then the generated code uses the context to verify that you are authenticated; if not, it rejects the request with a not-authenticated error. It then verifies with the context whether the client application has the given permission.

The last part is more configuration. You have to register the applications which are allowed to connect, which means the server has to know the client ID and the client secret of the
applications, so that it can verify that these applications are granted access. You also have to configure the possible users: user names and passwords. This is, I would say, some kind of toy setup; you can do it programmatically, but for the presentation that was a little bit complex.

Okay, so, I can do the demo. Not sure. [Good luck!] Yeah, thank you, keep on praying. [You are lucky.] I don't see anything on my laptop, so it's not really easy. Okay, I'm not going to make it work, so I will do it differently.

Here I have my todo example; it's a complete example with a working client and a working server. In this part I have the generated code on the client side; here you see the operations that were created, Create_Todo, Delete_Todo: these are the operations defined in my OpenAPI file. Just to show you the implementation of Create_Todo: what it does is serialize the parameters, make the HTTP POST call, and deserialize the results. So it's very simple.

I will show you the client. Here I have an example: I have my instance of the client, I configure the credentials of this client, and then at this line I am calling Create_Todo with my title; I obtain a todo object, which is at this level, and with this my todo is created. Here you have an example of getting a list of todos: you obtain a list and then you can easily iterate over it.

On the server side, what you have to implement is very simple. Here I am defining my server type instance: it implements the server type skeleton that I presented. I have used the shared instance, so I am holding all the todo items in the object which is here. All I have to do is populate the information, return it, and insert it in my object. So the code that you have to write on this side is really simple; deleting a todo is the same. And just to show you the skeleton:
the skeleton does the same: it extracts the parameters before calling your implementation, and it serializes the results before returning. Okay, I cannot show you more, it's too complex.

Limitations and possible improvements for the future. Strings were a problem. Why? Because we have many, many types of strings: we have String, Wide_String, Unbounded_String, Unbounded_Wide_String, Bounded_String, Bounded_Wide_String; do we have more? I don't know. It's nice to have them all, but each time you have to pass a string, at some point you have to do a conversion. I made the choice to use Unbounded_String for every parameter, which means that when you have an operation which takes a string as parameter, you have to convert it to an Unbounded_String. So you end up with a lot of To_String or To_Unbounded_String conversions from one type to the other.

Enums are treated as strings, which is not really good: we don't get all the benefits of having a real enum. The problem is that in the YAML, every operation can define its own enum, which means that we would generate one type for each enum definition, and we would end up with other problems, like having a lot of enum definitions which almost say the same thing, and you would have to convert between them.

Circular dependencies are also difficult. In Java you can have circular dependencies in the YAML definitions and you can pass such objects; in Ada the representation of a circular dependency is a little bit complex. You can do it with access types, but then you would have to free the memory and decide who owns what; it's not easy.

The error model needs some improvement. When a 404 error is returned by the server, we can raise an exception, but when you raise an exception you lose some information about the API call that you are doing; it's not always easy to manage errors with just exceptions.

Some possible improvements: there is no support for files, uploading a file is not yet supported, and
asynchronous operations are possible: some client generators support asynchronous calls, so this is just a matter of implementing it.

To conclude: with OpenAPI you describe your REST API, and Swagger generates the code for you. It's really easy: you can publish your API, you can use APIs published by others; just do it. GraphQL is another way to make requests to a server; it was introduced by Facebook two or three years ago. It's another interesting challenge: you describe the elements of data that you want to retrieve, you push some kind of graph query to the server, and the server replies. This could be a next challenge. Thank you.

Question: What you describe is a lot like SOAP and what AWS is doing for SOAP. In AWS you are able to generate a description file from an Ada specification. Is it available, or do you plan to make it available? No. In fact, the tool I'm using is implemented in Java by a community which is not Ada-centric; this community is oriented towards PHP, Python, JavaScript. They focus on starting from the OpenAPI specification and generating code for different languages; they don't do the reverse. Yes, but you could talk with Pascal and ask him to adapt his tool to generate this instead.

Comment: I looked into this problem recently. We actually used the Ada package specifications as the source for some SOAP services, but when we looked into OpenAPI, the first observation was that we actually have a better ability to document an interface with OpenAPI than with Ada package specifications. So we decided to go with OpenAPI and a custom, unfortunately still proprietary, implementation similar to what Stéphane has presented, and to make the API specification the reference, generating the Ada from that.

Question: Going further, my question is: how is it better than SOAP? It's different. In SOAP, if you want to do a SOAP query, you have to create an XML document in which you describe the
parameters and the operation that you want to call, and then you use a POST, only the POST HTTP command, to invoke your request. REST is easier to use in general because you just do a GET of a page, a PUT of the same page, a DELETE of the same page: the HTTP keyword expresses the operation that you want to do, getting, updating, deleting, creating. When you create an object, you POST some content on a path; when you want to delete an object, you DELETE a path; when you want to update an object, you update a path. So it brings something easier to use, and easier to construct, because you just have to construct a path instead of a complete XML document in which you express: I want to call this operation, I want to pass this parameter and that parameter, and then send your SOAP XML document to the server.

Question: My question is about SwaggerHub and the OpenAPI websites. They publish their own REST APIs, so could we just call Swagger on all of these and get all the libraries at once, iterating over all these descriptions using their own API? I'm not sure I understand. The Swagger website, the one which collects the APIs for all the services: could we easily collect all these APIs? They allow you to publish your API, but to collect them, so that we are able to generate all the Ada code? Yes, you can.

A remark: the Ada code generator was contributed to the official repository; the Ada generator is fully integrated in the official swagger-codegen distribution. William Cheng is an official maintainer, and each time we fix something he integrates the fix immediately. He is really proactive at integrating fixes and improvements in his core project.