Hey everyone. Welcome to our next session, API Management as Code, a declarative approach to handling API artifacts. My name is Richard Stroop and I'll be your moderator today. It's my pleasure to introduce our speakers for this session. We have Hugo Guerrero and Vamsi Ravula, and they will be going over 3scale and how we use it. A few logistics before we get started: if you have any questions during the session, you can submit them in the chat. We'll try to get to them if we have time. If not, we can always follow up with you afterwards, so don't worry. There will also be a recording of this session on YouTube after it's done, so you can check it out again or share it with people you think would be interested. Make sure to take advantage of that opportunity. And with that, let me turn things over to Hugo.

Thanks, Rich. Perhaps you can help me out sharing the screen. So yeah, we're going to cover a very interesting topic today, related to API management and APIs in general, as well as the DevOps process, GitOps, and a declarative approach to all of it. With me is Vamsi, as you heard in the introduction. We'll be covering this topic over the next few minutes, and toward the end we'll see a video recording of a demo that shows how this can be managed in real life. So let's get started. When we talk about API management, think of a desk so crowded with cables and ports that you can't find anything; it's like that when you are building a lot of APIs, right? You start with one or two APIs as you shape your strategy, but then suddenly you realize you have a lot of them and they are getting out of control. That's where you need management.
When your API landscape looks like a tangle of cables, it means you need some management. A lot of people think an API is just an exposed REST API, an HTTP endpoint, but there are many things around it that help you with monitoring, provisioning, contract management, access control, policies, and so on. These things surround our API endpoint or implementation, because remember, the I in API stands for interface, not implementation. So there is much more around it. And when we think about those things: what are the policies, and how are you configuring them? How are you configuring your application plans? Each of your users will have a plan that grants a certain rate limit, through an account, through certain kinds of keys, through applications they need to create, register, and sign up for. You have services, you have specifications. So you have a ton of things to handle, to care about, and to manage. It becomes complicated; there is a lot to think about. And this is just an example: this diagram shows how those different objects interact, but there are a lot of dependencies and correlations, so there is a lot to manage there. It's not just code: you as an API owner need to carry all of this from initial dev ideation, through development, across the different environments. It gets complicated. And it gets more complicated when you suddenly realize that APIs do not deliver any value until they reach production. You can have 800 APIs in development, integration testing, or pre-production.
But if they're not getting all the way to production in a secure and well-managed fashion, most of the time it's just a wild west: no value, no real management. So this is where deploying and launching the implementation, as well as the contract, becomes super important. And remember, DevOps is a set of practices, structures, and activities that allows us to increase the quality of, and our capacity to deliver, our APIs. So we're going to see how DevOps can intersect with APIs to deliver these kinds of benefits. One of the things we're going to focus on is the operator pattern. The operator pattern is the capability of a controller to observe a declared desired state, analyze how it differs from the current state of a system, a cluster, an environment, and then take actions to remove that difference, close the gap, and move the actual state toward the desired state. That's the concept of the operator pattern. This type of pattern allows us to take a declarative, manifest-based approach. And one of the things we will use for that is Git. Git gives us a place to manage our single source of truth in a very mature way, where we already know how to handle PRs, how to keep a history of changes, and who to blame when changes happen. We don't need to reinvent the wheel; we can just reuse Git. And then we create everything as code: not just the implementation that we keep as code in Git, but also the API management objects and artifacts, which we want to treat as code so we can leverage all the benefits of Git.
Then, through Git workflows like PRs, promotions, approvals, and reviewers, we get all the benefits of this mature process for the release of our API artifacts. In this case, we have Git, where we update our objects, our desired applications, and then a controller sees what's going on and moves our system, our cluster, to the desired state by taking the necessary actions. So how does it look when we execute this kind of application? Well, we have our implementation repository where our application code lives. We have continuous integration, and we generate the binary that is going to be deployed into a registry, perhaps a container image, a zip file, or a JAR file. And we manage the same thing with the repository of our API management artifacts, where we also have a continuous delivery model: we know there is going to be a new implementation, but there are also changes to the contract, or even changes to the contract alone that don't involve any change in the implementation. And we can push and update the different objects into our target environment. So this is basically a workflow to deploy the API implementation as well as the API management artifacts we want. Now, you will wonder, what kind of tools do we have to do that? Because this is very high level. Well, there are a ton of options right now, from traditional, very well-established tools like Jenkins, Travis, or even GitHub Actions, to a more GitOps-oriented approach with cloud-native options like Argo CD, Flux, Jenkins X, and Tekton Pipelines, to build as well as deploy in a more cloud-native way.
So this is the landscape of tooling available to implement this kind of solution. And how do we approach it? If we take a cloud-native approach, we usually rely on two things. First, operators: using the Operator SDK and implementing the operator pattern to deploy on the cluster. Second, those operators rely on Kubernetes custom resources, so we can extend the Kubernetes native primitives and increase the set of resources that can be managed through the Kubernetes API. One example is a Product custom resource definition, which defines a custom resource carrying information that operators like the 3scale operator can take into account to process and update our target environments. We can define a lot of information in those resources. In the case of 3scale, we can define the product name, the backend names, information for the policies, the application plans, and the limits and methods we want to record in our processing and analytics. So there is plenty of information, and there are different kinds of resources and different approaches; in the case of 3scale, we define some basic custom resources with this information. So now it's time to show you how we can do this using solutions like 3scale and the 3scale operator. Let me share my video and present how this works. First things first: we have already set up three different 3scale tenants, the development tenant, the testing tenant, and the production tenant.
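To make the custom resources described here concrete, the following is a rough sketch of a 3scale Backend and Product pair, based on the 3scale operator's `capabilities.3scale.net/v1beta1` CRDs. All names, URLs, and limit values are illustrative, not taken from the demo environment:

```yaml
# Hypothetical Backend CR: the upstream API that the product will proxy to.
apiVersion: capabilities.3scale.net/v1beta1
kind: Backend
metadata:
  name: echo-backend
spec:
  name: "Echo Backend"
  systemName: echo_backend
  privateBaseURL: "https://echo-api.example.com"
---
# Hypothetical Product CR: the managed API, with an application plan
# that enforces a daily rate limit on the built-in "hits" metric.
apiVersion: capabilities.3scale.net/v1beta1
kind: Product
metadata:
  name: echo-api-product
spec:
  name: "Echo API"
  systemName: echo_api
  applicationPlans:
    basic:
      name: "Basic Plan"
      limits:
        - period: day
          value: 30
          metricMethodRef:
            systemName: hits
  backendUsages:
    echo_backend:
      path: /
```

Because these are plain YAML manifests, they can live in a Git branch per environment and be applied by the operator, which reconciles the tenant to match them.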
So we have created three different Argo CD applications that use custom resources, in the form of YAML files, to configure these different tenants. If you look at the Argo CD application used to configure the dev tenant, and go to the details: for those of you familiar with Argo CD, you already know this, but the repo URL is the repository that the Argo CD application will track. We are telling it that for the dev tenant, it should monitor the YAML files in the dev branch for changes, this is the folder or path to look at, and it should apply only the files that go out of sync. We've done the same for the other environments: for production, target the production branch, and for testing, target the testing branch. So what happens initially, once we've configured the Argo CD applications, is that they show out of sync, which means something new was added to the repository that Argo CD wasn't aware of and that isn't configured in 3scale yet. If you look at 3scale, let's go back to our tenant here: you see only the default API. And when you look at the repository, there is a file in the dev branch which creates a 3scale product and a 3scale backend. The name of the product is "operator product echo API", with a rate limit of 30 per day. As soon as I hit synchronize on my Argo CD application, it takes the updated custom resource and applies it to 3scale. And when I refresh my development tenant, I can actually see the operator product echo API with the relevant rate limits applied. Let's go ahead and check that it's done properly. Good. Now let's do the same for test, and the same for production. Let's see if that's reflected. It is.
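The per-tenant Argo CD application described above could look roughly like this; the repo URL, namespaces, and application name are hypothetical stand-ins for the demo's actual values:

```yaml
# Hypothetical Argo CD Application tracking the dev branch of the
# API-artifacts repository for the development tenant.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: 3scale-dev-tenant
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/api-artifacts.git  # repo to track
    targetRevision: dev          # branch monitored for this tenant
    path: threescale             # folder containing the CR YAML files
  destination:
    server: https://kubernetes.default.svc
    namespace: 3scale-dev        # where the 3scale CRs are applied
  # No automated syncPolicy: the app goes "out of sync" on new commits
  # and waits for a manual synchronize, as in the demo.
```

The test and production applications would be identical except for `targetRevision` (testing and production branches) and the destination namespace.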
Yeah, everything works well. That's simple enough, right? But this is not what we typically do. We don't keep adding files to dev, test, and production separately. We add changes to dev, then merge dev to test, and then merge test to production after we are sure everything works well. So let's make a change. For example, say the business comes to the development team or the API team and says: we want to change the name of the API, and at the same time we want to change the rate limit. Currently it is 30; we want to change it to, say, 50 per day. Let's see how they do that. First things first, let's go to our development branch, which tracks the changes for the development tenant; if there is any change, it notifies Argo CD to apply it. So let's change the rate limit and the name of the product in our dev branch, save it, and commit. As soon as you do that, when you go back and refresh, Argo CD will realize that changes were made to the repository that tracks the dev tenant, which is the dev branch, and it says it's out of sync: the current state of 3scale and your repository are different, do you want to synchronize those changes? As you see here, the product name is still the same and the rate limit hasn't changed yet. As soon as I hit synchronize, right here, you see the product name and the rate limit have both changed. The product name is changed to "modified", and the rate limit is changed to 50. Now our next task: let's go back to VS Code and say, okay, I'm done developing, I want to merge the changes from development into the test environment. Let's do that: check out test, then git merge dev.
I'm merging the dev branch into test, and then git push origin test pushes the changes to the remote repository. As soon as you do that, the test Argo CD application, when you refresh it, goes out of sync. Again, it's saying the current state of your 3scale test tenant doesn't reflect the name of the product and the rate limit you changed: it's still 30, and the name is just "API", with no "modified". Do you want to synchronize? I go ahead and synchronize, and you see the changes immediately reflected. And last but not least, just for completeness, let's do the same for the production environment. Check out the production branch and merge the changes from the test environment: we're done testing, now we want to push to production. Then push prod to the repository on GitHub. It goes out of sync; refresh, synchronize. And as soon as you do that, the changes are reflected in your 3scale production tenant: the name is "modified", and the rate limit has been modified too. So all in all, what you're doing is creating a single source of truth for your API management configuration that both business and engineering can track, so you don't end up with some people working with custom resources and others working with the UI. There is a single source of truth, and everybody can look at these repositories to see what is actually happening in your API management program. And because we are using Argo CD, custom resources, and Git repositories, you can include all of this in your pipelines as part of your GitOps strategy. That brings us to the end of the demo. I think we can open up for questions now.
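The branch-promotion flow from the demo can be reproduced locally as a small sketch. The repository, file name, and contents below are illustrative, not the demo's; a real setup would push each branch to a remote that Argo CD tracks, which is what flips the applications out of sync:

```shell
# Sketch of the dev -> test -> production promotion flow on a throwaway repo.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo

# Initial state on the dev branch: a product CR with a daily limit of 30.
git checkout -q -b dev
printf 'rateLimitPerDay: 30\n' > product.yaml
git add product.yaml && git commit -qm "add product CR"

# test and production start from the same state.
git branch test && git branch production

# The business change lands on dev first: raise the limit to 50.
printf 'rateLimitPerDay: 50\n' > product.yaml
git commit -qam "raise daily rate limit to 50"

# Promote dev -> test, then test -> production.
git checkout -q test && git merge -q dev
git checkout -q production && git merge -q test
cat product.yaml   # the production branch now carries the change
```

In the demo, each merge is followed by a `git push origin <branch>`; Argo CD then reports the matching tenant as out of sync and a manual synchronize applies the updated CR to 3scale.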
Okay, thank you very much, Vamsi, that was really enlightening. Now, looking at the different pieces Vamsi covered during the demo, there are some questions you should be asking yourself. For example, when we talk about the different artifacts: who is the API owner? Who is going to manage and approve those flows? Who decides who does the approval on each of the Git branches? Who is going to publish these changes, and who is going to manage those Git repositories? Perhaps it's the API owners; or, if your organization is set up that way, perhaps it's the same teams that are doing the implementation, as in the demo. There is also the question of how many of these API artifacts we want to manage this way; perhaps all of them. Then the other thing: should I be able to modify those artifacts once they are in production? Because we do have access to the UI. Most of the time you will try to avoid that: you don't make the UI available in production; you make all the changes in development, perhaps through the custom resources, and then promote them toward production. So these are the kinds of questions we need to ask ourselves when implementing this kind of strategy. As a summary: you can handle your APIs in a declarative way. That will help you manage the components of your API that are not just the implementation; those artifacts are part of your API management strategy. It will certainly help you with the definitions, letting you see in a single place all the configurations and aspects of your API management artifacts.
It helps you collaborate and share those artifacts with the rest of the teams and with the owners that need to approve each of the promotions. And one of the most important things is that this enables automation. Instead of going to each environment and having a person do all the updates directly in the UI, or having to write custom scripts to apply all those changes, you can rely on the automation embedded in tools like Argo CD to handle this. And it fits your way of working: whether your organization relies heavily on a design-first or API-first approach, or you are still doing code-first, it doesn't matter. In the end you will have an API contract, API plans, and API management artifacts that you need to manage, deploy, and move across different environments. This is what you can implement with these kinds of strategies. If you are already a 3scale user, you get this out of the box. It plays well with OpenShift GitOps and OpenShift Pipelines, so you can deploy your API implementation and your API management artifacts across the different environments. That's almost everything for now. We just want to invite you again to use the Developer Sandbox, available from the Red Hat Developer site. It can help you get started with OpenShift and with a lot of the application capabilities we have in the Red Hat portfolio. It's a very good way to start trying out Kubernetes and OpenShift. Thank you very much. If there are any questions or comments, Richard, perhaps you can help us with that. If not, enjoy the rest of the event.

Yeah. Thanks again, Hugo and Vamsi, for the video. There are no comments in the chat currently.
I'll give everybody 30 seconds to type something while we go over a few things. But again, if you have any questions later, you can always follow up or add comments to the video when it's posted on YouTube, and we'll be following up there as well. I don't see any other comments. Thank you again, Hugo and Vamsi, and thank you all for joining us today for this session.