Thanks for joining my presentation. My name is William Cheng, and I'm a core team and founding member of OpenAPI Generator. Today, I want to share with you how we scaled the test coverage of OpenAPI Generator to support the more than 30 programming languages in the project so far.

Here is the agenda for today's presentation. To start with, we'll talk about what OpenAPI Generator is. Then we'll talk about how we test the many generators that we support in the project. Then we'll talk about how we scale the test coverage, as well as how we scale the community. At the end, we'll have the Q&A.

A little bit about myself to start with. I'm a core team and founding member of OpenAPI Generator, and I happen to be the top contributor to the project so far. I've been working on this project for more than five years, with thousands of commits and millions of lines of code changed. I've also published several ebooks on code generation in different languages such as Japanese, Chinese, English, and French; please feel free to grab a copy to support my work. I'm also the founder of a company called RESTUnited.com, which makes it much easier for developers to generate SDKs, documentation, and code samples for their REST APIs in only five steps. I was previously working at Morgan Stanley, and here are my GitHub and Twitter handles.

So what exactly is OpenAPI Generator? As shown in the diagram here, it is as simple as a process that takes some sort of input in the form of YAML or JSON, and then generates something useful for you, such as API clients in different languages, documentation, and more. For the input, we support the OpenAPI specification, formerly known as the Swagger specification, in versions 1.2, 2.x, and 3.x.

Let me show you what the OpenAPI specification 2.0 looks like. As you can see, this is for one particular endpoint: for this URL, it's a GET method. We have a tag to classify the endpoint, and a description. For this endpoint, the server will produce either an XML or a JSON payload. For parameters, it's expecting a path parameter here, and it needs to be a 64-bit integer. If the operation is successful, it will return a Pet object back to you. And for security, you need to pass the API key.

So that's OpenAPI specification 2.0. For 3.0, as you can see, it looks very similar: we also have the URL here, the GET method, the description, and the parameters. One major difference is how the payload is described in the response: if the call is successful, that's a 200, and we have two different types of payload, XML or JSON; either way, you get back a Pet object. We also have the same security annotation here to indicate that the endpoint requires API key authentication.

As for OpenAPI Generator itself, there are different ways to use it. You can include it directly in your Java applications as a Java dependency. You can use it as a CLI, a command-line tool: if you're using Homebrew on macOS or Linux, you can simply type brew install openapi-generator to install it. If you do not want to install Java as a dependency on your machine, you can use our public web API server by simply issuing an HTTP call to generate the code for you. And if you are using Docker, we have published images to Docker Hub under the openapitools organization, both for the CLI and for the web API server, so you can host it locally in your internal environment. For the CLI alone, we have reached 1 million downloads already, which I think is pretty good.
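To make this concrete, here is a quick sketch of the Homebrew, CLI, and Docker options described above; the spec file name and the output directory are just placeholders.

    # install the CLI via Homebrew (macOS or Linux)
    brew install openapi-generator

    # generate a Ruby client from a spec (petstore.yaml is a placeholder)
    openapi-generator generate -i petstore.yaml -g ruby -o ./ruby-client

    # the same generation step via the published Docker image
    docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
        -i /local/petstore.yaml -g ruby -o /local/ruby-client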
There is also another way to do it, which is using the npm package manager; I think this is one of the more popular methods. If you go to our page here and type npm install @openapitools/openapi-generator-cli, you will be able to install it very easily. We also added something called a version manager, which lets you set the version very easily, so you can simply roll back to an older version without much effort. We have reached 110,000 weekly downloads so far, which I think is pretty good, pretty active. If you're using something like Maven or Gradle, we also have plugins available. And if you want to do it right from your IDE, for example Eclipse, we have a plugin available for that as well.

For the output, we support more than 30 programming languages, many server-side frameworks, and API documentation formats. You can also convert your OpenAPI specification into a GraphQL schema, Protocol Buffers, and more. You can even turn it into an Apache web server config, or generate database schemas directly; we have a MySQL schema generator that you can try out. For the full list of outputs that we support, please go to our GitHub page and scroll down a little: right after the sponsor section, in the overview section, you can see all the languages and frameworks that we support. Popular languages such as Java, C#, and TypeScript are supported, and so are some not-so-popular ones: Haskell, Ada, Apex, even Bash or ActionScript. There are also documentation generators, config files, and schemas. You can find the latest list of everything we support right there.

So you may wonder who is actually using OpenAPI Generator in production. There are many companies and open source projects already leveraging OpenAPI Generator; here are some of them, such as GoDaddy and Datadog. For companies in Japan, we have Sony Interactive Entertainment and Yahoo Japan. Kubernetes is also using OpenAPI Generator. For the full list, you can go to our website, openapi-generator.tech. Feel free to add your company to the list if you are using it in production: simply go to our GitHub page, scroll down a little to the section listing companies and projects using OpenAPI Generator, and add your company there.

I hope by now you have a better idea of what OpenAPI Generator does and how it can help you. In the following, we are going to talk about two things: how do we test, and how do we scale?

Here is what the test setup looked like on day one. On the left-hand side, we have the OpenAPI specification 2.x or 3.x test files in YAML or JSON format. In the middle, we have OpenAPI Generator, written in Java. For the output, we have API clients in several languages, such as Ruby, Java, PHP, and Python. For the test input, we use the OpenAPI specification 2.x and 3.x files in YAML or JSON format. For 2.0, we have more than 50 files with more than 30,000 lines; for 3.x, we have more than 100 files with about 25,000 lines. Moving forward, we'll be focusing on the OpenAPI 3.0 tests as 3.0 goes mainstream, and we will not be spending more time on the OpenAPI 2.0 tests.

I think it shouldn't be a surprise that we add unit tests for the OpenAPI Generator modules, such as the CLI, the online generator, the Maven plugin, the core, and more. We also add unit tests for generators such as Ruby and PHP and for the corresponding mustache templates. Here is what a DefaultCodegen test looks like.
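What follows is a simplified sketch rather than the project's actual test code: the parser and codegen calls are real APIs, but the exact signatures vary between releases, and the spec path and assertions here are illustrative.

    import io.swagger.parser.OpenAPIParser;
    import io.swagger.v3.oas.models.OpenAPI;
    import io.swagger.v3.oas.models.Operation;
    import io.swagger.v3.parser.core.models.ParseOptions;
    import org.openapitools.codegen.CodegenOperation;
    import org.openapitools.codegen.DefaultCodegen;
    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class DefaultCodegenSketchTest {
        @Test
        public void formParametersAreProcessed() {
            // 1. load the spec to start with (the path is a placeholder)
            OpenAPI openAPI = new OpenAPIParser()
                    .readLocation("src/test/resources/3_0/petstore.yaml", null, new ParseOptions())
                    .getOpenAPI();

            // 2. process one operation with the default codegen
            DefaultCodegen codegen = new DefaultCodegen();
            codegen.setOpenAPI(openAPI);
            Operation updatePetWithForm = openAPI.getPaths().get("/pet/{petId}").getPost();
            CodegenOperation op = codegen.fromOperation("/pet/{petId}", "post", updatePetWithForm, null);

            // 3. do the assertions: the form parameters (name, status) were extracted;
            //    the real tests also check default values where the spec declares one
            Assert.assertEquals(op.formParams.size(), 2);
            Assert.assertEquals(op.formParams.get(0).paramName, "name");
        }
    }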
In this test, we try to make sure the form parameters have certain default values: we load the spec to start with, then process it, and then do the assertions. For language generators such as the Ruby client generator, we do something similar: we load a spec, instantiate the generator, perform certain operations, and then make sure the output files contain what we are looking for.

To ensure the auto-generated clients such as Ruby and PHP work as expected, we have added integration tests that run the API clients against a REST API test server. For example, we will make a GET request to the test server to make sure we get back something that we expect. We have added these tests for the PHP, Ruby, Python, and Java clients.

Let me show you what an integration test looks like; here is the test for PHP. In this method, we are going to test getPetByIdWithHttpInfo: we specify the pet ID, call the function, and assert that the return value is what we are looking for. We inspect every field of the payload one by one, and we check the HTTP headers as well.

So far, so good. We have added unit tests for the OpenAPI Generator modules, and we have added integration tests for the various API clients. What about continuous integration? We run the tests in Travis CI, which offers free continuous integration services for open source projects, and we run them against a public web API test server to ensure that the API clients are functioning normally. Here is what Travis CI looks like: we have a list of jobs here, triggered by pull requests or by changes to master. If a job fails, it will show that something is wrong; if it succeeds, that means all the tests ran successfully. Also, as part of the integration workflow, we publish the snapshot version to Maven and to Docker Hub, to make it easier for our users to try out the latest master.

But here come the challenges. We need to keep adding more tests, both in the modules themselves and as integration tests for the auto-generated clients. We have added more and more client generators: Perl, TypeScript, Haskell, you name it. More server generators as well: Ruby on Rails, Rust, ASP.NET Core, and more. We also started covering other types of generators: documentation, schema converters. And what about language versions? Ideally, we want to test against JDK 7, 8, and 11 as well, to make sure the auto-generated clients work with different versions of the JDK. What about testing clients on particular platforms? We want to test the Swift clients on iOS, and the C# clients on Windows, for sure.

And here come the problems. As we added more and more tests to the project, the CI jobs hit the timeout because they take more than 15 minutes to complete, so a job would never finish; it would always fail. What are we going to do? And how do we test Windows- or Mac-based clients, such as the C# or Swift ones? Travis CI does not offer any Windows-based image at the moment; I think it did offer a Mac-based image, but we tried it out and ran into other issues. So how are we going to handle different OSes, different platforms? And since we started with a single public web API test server, if there are multiple PRs open at the same time, that triggers multiple builds testing against the very same test server, and most of the time it will fail.
And for the public module repositories, we found that they were throttling our connections, because the project was just so active: there are so many awesome contributors submitting PRs to add features or bug fixes that the public module repositories simply throttled our connections, saying, you know what, we cannot allow you to download things. Using caching may partially solve the problem, but from time to time we still found them throttling or resetting our connections, which doesn't allow us to install the dependencies or tools we need for testing. And most importantly, because it's an open source project, we don't have any budget, right? We can't just buy the commercial offering from Travis CI, which would solve some of these problems. So what are we going to do?

One way to address these problems is to scale out the CIs, as Travis CI is not the only one providing free services for open source projects. So we use other CI providers such as CircleCI, Shippable, Drone.io, and GitHub Actions, so that we can run all these test workflows in parallel and no single workflow exceeds the 30-minute limit set by Travis or the other CI providers. For different OS platforms, we found that AppVeyor offers Windows-based testing and that Bitrise.io offers testing for Swift clients, so we test the Swift 4.x and 5.x clients on Bitrise.io. And we now run a local REST API test server inside each CI job, so that there are no race conditions: each build tests against its own server, everything is good, and all the tests are isolated. To test different JDK versions: in Travis, for example, we install JDK 8 so that all those tests run against JDK 8, and in CircleCI we use JDK 11, so that we can also make sure, for example, that the Java clients work on multiple JDK versions.

And now there's another challenge. We have more contributors joining, which is a good thing, right? We welcome them to submit PRs and open issues to help us move the project forward. From time to time, they ask: how do I contribute a new generator? They want a generator in Rust, maybe in Visual Basic, or something else. How do they do that? And now we have so many PRs to review, right? Every day people submit PRs; every day people open issues. How are we going to handle that? And you know what, some PRs accidentally remove some tests, or simply wipe out all the test files for particular clients. How are we going to handle that?

Here are a few tips to onboard new contributors to your project. First of all, prepare a PR checklist. What exactly is that? When someone opens a pull request, a template populates the content of the pull request: a comment explaining what we are looking for, followed by the checklist. Make sure people read the contributing guidelines. Make sure people clearly describe the PR. If they are making code changes, make sure they update the samples so that the CI can test the change. And also cc the technical committee, for example. All of this is to make sure the PR is ready for review. So this is what a PR looks like when it's ready: all of these items are completed. This puts everybody on the same page, and we don't need to go back and forth asking users: can you do this? Can you update the samples? Can you cc the committee?
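As a rough sketch (the wording here is paraphrased, not the template's exact text), a PR checklist along these lines looks something like:

    - [ ] Read the contribution guidelines.
    - [ ] Clearly describe the PR: what it fixes, or what it adds.
    - [ ] If there are code changes, regenerate and commit the updated samples
          so that the CI can verify the change.
    - [ ] cc the technical committee for the affected language or generator.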
So this checklist makes everybody's life easier, both for the one who submits the PR and for the one who reviews it. And treat your new contributors like VIPs, because if they have a very good experience contributing their first PR, it's more likely they will come back and contribute more enhancements to your project.

One good starting point is to use other PRs as references: show them, hey, if you want to make changes or fix something, here is a very similar one that you can use as a starting point. For example, if someone wants to create a new generator, the best way is to look at how other generators were created. Say we previously added a C++ Unreal Engine 4 client generator; if someone wants to add another C++ client generator, then one very good starting point is that particular PR. They can, for example, look at the files changed, so that they know which files are expected to be added, which files are expected to be modified, and how the change is going to be tested. It's much easier to show them what needs to be done to add a new generator this way.

And try to help them get the new generator or the new change in as soon as possible. By that, I mean you can test the PR locally, and you can help them fix minor issues in their PR, so that at least the generator compiles and the output meets the minimal viable product (MVP) requirement and is ready for testing. We want to get it out as soon as possible so that the community can test it, right? For example, when we previously added the PowerShell generator, we wanted to ship just the MVP version so that we could collect feedback from the PowerShell community to further improve the generator. We can also take care of the CI setup for them, because it's not easy for new users to understand how we test the project: we have so many CIs, so many different client and server generators. We try to take care of all that for them so that they don't need to worry about testing; they only need to focus on the generator, and we can get things going this way.

Do contributors need to know Java in order to make a contribution to this project, say, creating a new generator? The answer is no. This is how the Rust client generator was created. An engineer from Google asked: was he supposed to deep dive into the Java code, or was he supposed to just focus on the Rust code? Our answer was: yes, just focus on the Rust code, just like how we previously worked out the PowerShell generator. The contributor only needs to create the sample client code; then we take the output they create and reverse engineer it back into the generator as well as the templates. That's how it works. So he was happy to contribute the Rust client code, we reverse engineered it, and we successfully created the first version of the Rust client generator without the contributor knowing anything about Java. He didn't need to write a single line of Java and could still create a generator. This is the kind of experience, I think, that makes these users come back and contribute more, because it makes their life so easy: they don't need to worry about anything else, they can just focus on one particular language, and we take care of everything for them.
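To give a feel for that reverse engineering step: OpenAPI Generator renders mustache templates into source files, so turning contributed sample code into a generator largely means replacing the concrete names with template tags. Here is a hypothetical fragment, assuming a hand-written Rust sample like pub async fn get_pet_by_id(pet_id: i64) -> Result<Pet, Error>:

    {{! hypothetical mustache fragment for a Rust client method }}
    {{#operations}}{{#operation}}
    pub async fn {{operationId}}({{#allParams}}{{paramName}}: {{dataType}}{{^-last}}, {{/-last}}{{/allParams}}) -> Result<{{returnType}}, Error> {
        // issue the HTTP request for this operation here
    }
    {{/operation}}{{/operations}}

Each operation parsed from the spec is fed through fragments like this, so get_pet_by_id falls out of {{operationId}}, pet_id out of {{paramName}}, and so on.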
After receiving so many PRs, here are some tips on how to better review the PRs contributed by your community. First and foremost, understand the change: what does it do? Is it trying to fix something? Is it trying to add a new feature? If it does not have a good description, ask the submitter to revise it and to provide more information, so that you can make a better judgment on whether to merge the PR or not. Ask good questions of the PR submitter: how can you reproduce the issue locally? Have they actually been using this fix? Have they tested it locally? Are they using this fix in their production environment? This gives you a better picture of whether the fix is mature enough to be merged into master. You can also play with the change locally: you can always pull the change to your local machine, compile the project, try out the fix, and test it yourself. You can also try to break it, covering edge cases not yet handled by that particular PR.

It's also good to build a community of reviewers. This is what we call the technical committee: in the project README, there's a section called the OpenAPI Generator Technical Committee, and anyone who has had three PRs merged into master is eligible to join. The committee is organized by language: if someone submits a PR for, say, the C# client, we will cc the C# technical committee, and if someone submits a PR for the C# server generators, we'll cc the same people as well. Some of these reviewers are actually very active, and we even give them merge permission so that they can merge PRs directly without intervention from the core team.

And try to release often: review the PRs often, and release often. Contributors do not want their PRs to sit there for a year before getting reviewed; that's why it's important to have other people help review, because there's no way a handful of people can review that many PRs for the project. As for releasing often: we have a bi-weekly patch release, a bi-monthly minor release, and a yearly major release, which I think has worked pretty well for the community so far. For your project, you may need to adjust this a little, but the idea is that you need to release often to keep your users engaged. Contributors do not want to see their changes, their fixes, included in a release a year later; that would be far too long for them to wait.

And what about tests being removed as part of a PR? Sometimes people accidentally remove a test, or simply wipe out all the test files when they regenerate the code. Our first attempt at fixing this was to restore the test files from a separate folder: we copy some of the tests into a particular folder, and as part of the CI workflow we restore these test files by copying them back into the current folder. This works, more or less, and we are at least able to ensure the tests are there and are run as part of continuous integration. But the problem is that it's pretty counter-intuitive to contributors that they need to update the test files in another folder instead of the one they actually use to test the client.

So here comes our second attempt: we now monitor the test files instead. What I mean by that is that we have a config file like this: we are going to monitor, for example, the C# test files and the image used for upload tests, and we specify each test file here.
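As a sketch of what such a config file can look like (the paths and the truncated hashes here are placeholders, not the project's actual entries), each entry pins a test file to the SHA-256 checksum of its current content:

    # hypothetical watchlist of manually written test files
    - filename: "samples/client/petstore/csharp/.../PetApiTests.cs"
      sha256: "8f5a1e...d41c"
    - filename: "samples/client/petstore/csharp/.../linux-logo.png"
      sha256: "3b0d92...77aa"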
Each monitored file contains manual tests that we have written so far. If anyone changes it, let's say they remove a test, then the SHA-256 will be different: you have a different value here. And if someone completely removes the file, we will be notified as well, because the value no longer matches; the file is gone. This way, we get notified whenever people make changes to the test files. Even if people add a new test case, which is a good thing, we get notified too. At least we know about it and can do something, instead of reviewing a PR that changes a lot of files and trying to spot the one little change that removes a test file or a test case.

Last but not least, we would like to take this opportunity to thank our sponsors for their financial contributions to this project; you can find the sponsor list in our project README. Our sponsors are Namesaw, LiveUp, DocSpring, Datadog, and Pheliz. We would also like to thank GoDaddy, Linode, and Checkly for sponsoring the domain name as well as the monitoring. And if you find this project useful for your work or personal projects, please consider making a financial donation: you can go to our Open Collective page, or just search for "Open Collective OpenAPI Generator", and consider making a one-time or monthly donation. Please also star our project on GitHub; we have 7k stars so far, and about 1,700 contributors who have made changes to the project. We look forward to a PR from you to help make this project better, and we sincerely hope that you will find it useful for your work or personal projects. Thank you.

And here are the credits. A huge thanks for all the awesome contributions from our contributors; there are now more than 1,600 contributors to this project, and we look forward to a PR from you as well. The presentation template, as well as the icons and images, come from these parties. Thank you for your time attending this presentation; I hope you found it useful. If you have any questions, you can find me on Twitter or via the email address listed on my GitHub profile page, and I think you can also ask questions here in the chat room. So thank you.