Let's get started then. Yeah, welcome everyone to DevCon day two. We are in the application development and containerization track. We have with us today Khaled and Michael, who will be presenting a session on the lifecycle of API versioning and operators. So please stay tuned, and I'll be standing by for the session. The agenda for today is gonna be pretty quick. We don't have a lot of slides; what we wanna do is just glance over the current state of the product and where we think we are right now, the architectures that we know are being used right now for 3scale, then a quick demo, and then a quick run through some of the challenges that we found doing this. And then we can open for questions and any discussion or feedback that you guys have. So we wanna start by setting up the premise that we absolutely love the concept of API management as code. We absolutely agree with the whole lifecycle for API management as presented by the BU. We really like some of the tools that you can see on the screen right now, and we've been playing with them. But we wanna get a little more specific about 3scale: how you actually take one service that already has its OpenAPI spec, how you integrate it in here, and what are the good things and the challenges that we found. So, as you guys probably already know, we have a lot of 3scale deployment options. The first one is the hosted one, which I put a big X on. The reason is, I know that there are a lot of customers using this, but we think it's probably the most challenging one, where it's hard to explain to some customers, at least based on my experience, the architecture of having the API gateway hosted or embedded within 3scale, right?
Most customers have their services in different clusters or in different environments, and you want to get your API gateway as close to your service as you can. So as you get into customers with hybrid architectures, with services in different places, this becomes challenging. We're fine with all the other deployment options, where we just have the gateways someplace else — self-managed, on-premises, and so on; we have no problem with that. I will say the reason we wanted to quickly touch on this is that we think, based on the current state of the 3scale operator and 3scale itself, it's a little bit challenging to actually deploy APIcast. Even if you're self-managing your APIcast, there is a whole process you have to go through just to make sure that the APIcast communicates with the API manager. We found that a lot of customers just want something simple that you can put next to your service, have it run, and have it be easy to configure; instead we have to go through the UI or the management API or the toolbox to try to do a lot of things, and that creates a challenge. So we decided we wanted to tackle how we could improve this through the operator. Basically, the idea is: how could we take the operator and modify it so it would implement the reference architecture that Michael had up, the API manager plus the self-hosted or on-premises APIcast gateways, right? And the idea was to both change the custom resource definitions and modify the operator to allow that. That's pretty much the gist of the proposed solution. We also made some adjustments to the other custom resource definitions for creating an API, metrics, mapping rules, and plans. The biggest changes were basically in the API. And Michael, did you want to leave that for later or go into it now?
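To give a rough idea of the shape of those resources, a sketch of what such an API custom resource could look like follows. This is purely illustrative: the API group, version, and field names here are our assumptions, not the official operator's schema.

```yaml
# Illustrative sketch of a hypothetical API custom resource of the kind
# the modified operator would consume. Group/version and field names are
# assumptions, not the official 3scale operator schema.
apiVersion: capabilities.example.com/v1alpha1
kind: API
metadata:
  name: person-api
  labels:
    api: person              # label that a Binding selector matches on
spec:
  description: Person service, v1.0
  privateBaseURL: http://person.api-dev.svc:8080
  publicBaseURL: https://person-apicast.apps.example.com
  metricsSelector:
    matchLabels:
      api: person
  mappingRulesSelector:
    matchLabels:
      api: person
  plansSelector:
    matchLabels:
      api: person
```

The idea is that metrics, mapping rules, and plans live as separate objects and get pulled in by label selectors, which is what lets a pipeline stamp them out per environment.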
Yeah, no, that's what I was covering on the operator side, but pretty much what we wanted to incorporate, as the title said, is: what happens when you have a mature service that is generating its own Swagger, its own OpenAPI spec, through JSON, and you want to take all these versions of your service — you already have a mature state where you're promoting your service through different environments — and you want to actually update your API product definition on the API manager? That's where we got to the point where we needed an easy way to not only use the CRDs for the different resources like tenant, API, metrics, plans, and so on, but to do it in a way that quickly aligns with agile, API-management-as-code, and CI/CD best practices, so we can quickly recreate APIs and configurations that can be pulled by the different APIcasts in different environments, like in a pipeline, right? So, going back to our initial loop: what happens when you already have a service that you want to promote from version 1.0 to 2.0, and then you have to go back to the API manager and do all the mappings, either through the toolbox or the management API? We're going to dig a little more into exactly what happens with versioning — we're still working on that — but we want to show you what we have right now in terms of how this aligns with a Jenkins pipeline using the CRDs of the modified operator. So this is a quick demo architecture; it's not a pretty diagram, but it quickly exposes what we wanted to do. What we present here may be familiar to some of you guys: we just have different namespaces, or projects, where you have your service; we call them dev, UAT, and prod.
And then, as I mentioned before, you want to be able to automatically deploy gateways to each one of the namespaces where you have your services, to keep them close to your service. What we have is an API manager in multi-tenant mode, where at the same time that you configure where you want your APIcasts, we're also creating the tenants that you want for them. I don't know if you guys are familiar with this, but when you do this by hand, after you get your API manager deployed, you need to go and create your tenants, then you need to take some of the tokens and create secrets and use the template — or, if you're using the APIcast operator, deploy the specific gateway in the target destination — and then you need to go back and forth a bunch of times, expose the service of the gateway, get the route, then go back to the API manager and wire all of these things up. So what we did is modify the operator and the way that it deals with the CRDs specifically. One of the main challenges we found is that the operator right now only works in one namespace: the default operator can only deploy the API manager and the tenants into the same namespace it lives in. We modified this, and now our customized operator can deploy the API manager to the namespace that we want. We have one namespace where the operator lives, a different namespace where the API manager lives, and then we can define where we want the tenants, and the whole configuration is done automatically. Then, through the modification of the other CRDs, what we're doing in the pipeline is using a template we call create-api: at a certain point in the pipeline, we go and actually create a specific API definition under a specific tenant in the API manager. And then we just do a basic curl to test that the service is working.
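For reference, tenant creation can be driven declaratively by a Tenant custom resource roughly like the following. The field names here follow the upstream 3scale operator's Tenant CRD as we recall it; treat this as a hedged sketch rather than an exact schema, and the URLs and secret names are placeholders.

```yaml
apiVersion: capabilities.3scale.net/v1alpha1
kind: Tenant
metadata:
  name: api-dev-tenant
spec:
  organizationName: api-dev
  email: admin@example.com
  username: admin
  systemMasterUrl: https://master.apps.example.com
  # Secret holding the master admin credentials
  masterCredentialsRef:
    name: system-seed
  # Secret holding the new tenant admin password
  passwordCredentialsRef:
    name: api-dev-admin-password
  # Where the operator writes the tenant's admin URL and access token
  tenantSecretRef:
    name: api-dev-tenant-secret
    namespace: api-dev
```

The secret written to `tenantSecretRef` is what lets the rest of the automation talk to that tenant without anyone copying tokens around by hand.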
So it's not fancy; it's just a way to show: okay, are we actually able to expose the gateway the way it is, and are we actually able to access the service through the gateway using the user key? That was another challenge we found. We modified the operator so we can generate the secrets we need, with the specific user keys we need to access the API definitions, right? Right now there are no CRDs for that, so we had to do it in kind of a hackish way: create the secrets in the actual API manager namespace so we can access them through the pipeline. So that's just a quick intro to the architecture of the demo, and now we're on to demo time. Let me bring this screen up — I apologize for all the tabs. So this is just an OpenShift cluster, version 4.2 if I'm not mistaken. What we did, by the way, is a combination of some scripting, the modification of the operators, and also the Ansible OpenShift Applier for creating a lot of the resources we needed for the services. And then we're just using the old-school BuildConfig pipeline technique to deploy the services into the different namespaces. So go ahead, Alan. And as you can see, we've separated things out. Like Michael mentioned earlier, the current official version of the 3scale operator works only within one namespace, but we split it out: there's a namespace for the operator, there's a namespace for the API manager, and we have a namespace for each of the APIcast gateways — api-dev, api-uat, and api-prod. That way you get a more realistic separation. Obviously, if you have different clusters, it's gonna have to work a little bit differently, and that's where the APIcast operator could fill that gap. But for a lot of customers and a lot of people, having everything in one cluster is enough, because they don't need that complete separation, or they share a lot of resources between different teams.
So this would help them get up and running a lot quicker. Yeah, and obviously, I'm pretty sure a lot of you guys are thinking: well, what if there are separate non-prod and prod clusters? Yeah, we're not there yet, but everything was designed with the view of being able to point the operator at whatever namespaces you target. So the next step is to actually work with different clusters, but as of right now, we have everything within just one cluster. The ideal scenario in the future would be to keep working on this so we can have something like a non-prod cluster for the lower environments and a prod cluster for the upper environments. So, just a quick walkthrough of what we have right here. Like I said, a lot of these resources are created by the Applier; we're just using the Ansible OpenShift Applier playbook. We have a service — let me just switch quickly here to the code; we're gonna share this repo in a few. Pretty much, we have a service that already has an OpenAPI spec. It's a simple Spring Boot service, just called Person, and it already provides the Swagger definitions for everything we need. Then we have a different repo — sorry, a CI/CD folder — where we have everything from the Applier to the different templates we're using that reference the resource definitions. So when we provision the operator, our scripts then provision the API manager. In the API manager we have a configuration with the specific target environments that we want, so we create the different objects that we need. Then, through the provisioning of the API manager, we provide the different APIcasts, and we use the Tenant custom resource definition to create a tenant for each one of the namespaces that we want.
Then, also using the Applier, in each of the target namespaces — api-dev, api-uat, api-prod — we have the service. Like I said, the service just serves a person, and we're doing versioning through the URI, so we do something like /person/1.0/1. In the same namespace we have the APIcast, and as soon as we get the APIcast provisioned in the target namespaces, we create the routes, and that's how we configure it against the API manager. Finally, we have another namespace called api-cicd; it's just Jenkins ephemeral with a basic pipeline, the old-fashioned way. You guys are gonna see a lot of builds because we were just testing some of the changes we recently made. Pretty much, this pipeline does what I have over here: it's a simple pipeline that checks out the service from the same Git repo, builds it, creates an image out of it using a basic S2I build, and then promotes the service through the environments, from dev to UAT to prod, just by doing an OpenShift image tag. Here are the key points. When we get to configuring the 3scale tenant, we actually use one of our templates, called create-api, and pass it the parameters we need. The create-api template uses the custom resource definitions defined by the operator — but this time, it's the modified operator. So we have CRDs for metrics, CRDs for mapping rules, CRDs for plans, and then the CRD for the API, where we can just pass the private URL and the public URL. And the key to making this work is a CRD called Binding, which, as its name says, binds everything together through the API selector label. Then you can just get this into all the different tenants and test your service.
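Putting those pieces together, the mapping rule and the binding look roughly like this. This is an illustrative sketch modeled on the old capabilities-style CRDs; the exact field names and casing in the modified operator may differ, and the secret and metric names are placeholders.

```yaml
apiVersion: capabilities.3scale.net/v1alpha1
kind: MappingRule
metadata:
  name: person-get-v1
  labels:
    api: person
spec:
  path: /person/1.0/          # static mapping for the versioned URI
  method: GET
  increment: 1
  metricRef:
    name: hits
---
apiVersion: capabilities.3scale.net/v1alpha1
kind: Binding
metadata:
  name: person-binding
spec:
  # Tenant admin credentials the binding uses to sync with the API manager
  credentialsRef:
    name: api-dev-tenant-secret
  # Everything labeled api=person (API, metrics, mapping rules, plans)
  # gets reconciled into this tenant
  APISelector:
    matchLabels:
      api: person
```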
So before the promotion gate, after we create the specific API, we just get the secret with the user key, and then we loop, doing a curl against the public base URL with the key in the header, just to make sure it's working. Then we have a promotion gate to UAT, where it does the same, and a promotion gate to prod. So, Khaled, do you have anything else that you think we should mention? No, we can touch on it later when we talk about the challenges and what we had to do. All right, so here is the master console. We're gonna provide a video where we do the whole provisioning of this, as you guys can see. Unfortunately, it takes anywhere between 10 and 30 minutes, so we didn't want to spend 30 minutes with everyone just watching pods come up. Exactly — so the demo we're gonna show today is just running the pipeline and showing you guys how all this gets exposed. But for the provisioning, even though the CRDs get created, there is a whole lot of work between the operator and the API manager, and it takes a lot of time. And we'll send a video so that you guys can fast-forward through the boring parts. All right. So here in the master console, we have the three different tenants, and they're just regular tenants we created using the Tenant CRD. Nothing fancy, just a tenant with an admin user. And then we have each one of the different tenants: this is the dev tenant, this is the UAT tenant, and this is the prod tenant. When we run the creation of the tenants, we wire up everything related to the APIcast automatically using the operator. So let's just run the pipeline, and while this gets started, we'll go through the challenges that we found.
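The smoke-test gate described here is conceptually just "retry a keyed request until it succeeds". A minimal sketch in Python (the function name and the injected `fetch` callable are ours, not from the demo repo — the real pipeline does this with curl in a Jenkins stage):

```python
import time

def wait_for_api(url, user_key, fetch, retries=10, delay=1.0):
    """Poll the gateway's public URL with the tenant user key until it
    answers 200, mirroring the curl loop used as a promotion gate.
    `fetch(url, headers=...)` must return an HTTP status code."""
    for _ in range(retries):
        # The user key travels in a header here; 3scale's default
        # user_key credential can also go as a query parameter.
        status = fetch(url, headers={"user-key": user_key})
        if status == 200:
            return True
        time.sleep(delay)
    return False
```

From a Jenkins stage, the equivalent is a retry loop around something like `curl -s -o /dev/null -w '%{http_code}' -H "user-key: $KEY" "$URL"`; the Python version just makes the gate's logic explicit.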
But the idea here is that we can just run a pipeline, as I showed before in the code, where, as part of provisioning your service, you can just define where your OpenAPI spec is. We're still working on mapping it properly: as of right now, it's just one specific route being mapped, but we're trying to figure out how to make the operator take the OpenAPI spec and create the mappings based on that. We found some people supporting this idea, and we think that should be the direction to go, right? If you guys go to OperatorHub, there are different operators at different maturity levels. And we really think we shouldn't have to go to the API manager every time we configure an API and redo the mappings, or do URI versioning the specific way 3scale wants. We think there should be an operator we can just tell: hey, here is my OpenAPI/Swagger spec — take it and do all the mapping, all the response codes, everything for you. In addition to not being able to use the OpenAPI spec out of the box right now, another thing that's lacking is that while you can create an API, mapping rules, and plans, to actually make use of your API you need to map it to an application within 3scale. Since that part is not available as a CRD, you can't use CRDs to create your API and your application and start using it right away. You need to either go into the 3scale UI to create that application and map it to the API, or use other tools such as the toolbox, and that kind of breaks — what's it called — the illusion of CI/CD. If I can create the API with a CRD, why can't I start using it right away? That was something else we changed; it was a little bit opinionated.
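The "derive the mappings from the spec" idea being advocated here can be sketched in a few lines: walk the OpenAPI paths object and emit one 3scale-style mapping rule per operation. This is our illustration of the concept, not code from the operator, and the output dict shape is an assumption.

```python
def mapping_rules_from_openapi(spec):
    """Derive 3scale-style mapping rules from an OpenAPI/Swagger spec:
    one (method, pattern) pair per operation, each incrementing 'hits'."""
    http_methods = {"get", "post", "put", "patch", "delete", "head", "options"}
    base = spec.get("basePath", "")  # Swagger 2.0 field; empty for OpenAPI 3
    rules = []
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method.lower() not in http_methods:
                continue  # skip non-operation keys like 'parameters'
            rules.append({
                "method": method.upper(),
                "pattern": base + path,   # e.g. /person/1.0/{id}
                "metric": "hits",
                "increment": 1,
            })
    return rules
```

An operator doing this for real would also have to handle response codes, per-method metrics, and spec updates on re-reconcile, which is exactly the lifecycle gap the speakers are pointing at.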
We basically said each API has one application, and we just automatically map it so you can start using it right away. Obviously that leaves some room for improvement, but at least for now it's functional. So here, as you guys can see in the pipeline, we just run the template that creates the APIs. On the dev tenant, it's gonna keep creating more, but we already have the actual API definition presented, and then it's gonna promote it; the integration and everything is already set up. The backend is set up based on the parameters we're passing, so we're just using the cluster-local service of the service we provisioned with the pipeline. And the mapping rule, as I mentioned, is just a static mapping rule as of right now. We ran into a lot of problems trying to get service discovery to work: sometimes service discovery would auto-discover and create the mapping rules, but it wouldn't create them properly. Then we have the validation of the service, where we just curl the public APIcast and pass the user key that we got from the secret. And then we have the same over here on UAT, and then we have prod. There you go. As you guys can see, there's still a lot of room for improvement. We had a lot of problems trying to get the proper secrets updated in the proper namespaces; 3scale right now is really opinionated about where this needs to be and how it needs to be managed. The reason you saw the pipeline waiting is that even though we're processing the CRDs through the template and creating them, we need to give the operator time to actually create the secret we need in the API manager namespace so we can access the different keys.
So here you can see the specific name, the user key, the name of the service, and the actual binding. This is for dev, this is for prod, and this is for UAT. And you can see that this is the value of the actual dev application — we created a bunch of applications because we ran the pipeline in different tenants — but for dev, this is the key, and back here you can see it's the same number. So, although we're doing this through the CRDs in the operator, we're just creating a secret in the API manager namespace so that the pipeline can get it and use it. Right, so troubleshooting any of the changes was a bit difficult, just because it was a little tough to figure out what was going on with 3scale: specifically the number of configuration options, why the operator did things a certain way, and how that maps to deploying the template manually instead of using the operator. That took a little getting used to and caused a little trouble. There are some components that don't survive a cluster restart well, and we're still looking into why that is and whether there's anything that can be done about it. You know, we'd expect them to survive node restarts, but it may just be a limitation that the whole cluster going down isn't healthy for any of the 3scale components to begin with. We had a little trouble with the Jenkins ephemeral getting up and running. Some of the custom resource definitions lack completeness, like we mentioned earlier: you can create an API, mapping rules, and plans, but you cannot use them to create an application, or other resources you need to get complete use of the 3scale API. And the first thing we actually had to go and change was that the operator does not support clusters that don't have a ReadWriteMany storage option.
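Pulling the user key out of that secret from the pipeline is just a base64 decode of the Secret's data field. A minimal sketch, assuming the secret stores the key under a `user_key` entry (the entry name is illustrative, based on how the demo describes it):

```python
import base64

def user_key_from_secret(secret):
    """Extract and decode the 'user_key' entry from a Kubernetes Secret
    object as returned by `oc get secret <name> -o json`. Secret data
    values are base64-encoded strings."""
    encoded = secret["data"]["user_key"]
    return base64.b64decode(encoded).decode("utf-8")
```

From a Jenkins stage, the shell equivalent would be along the lines of `oc get secret <name> -o jsonpath='{.data.user_key}' | base64 -d`.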
That was the initial change we made: even if you use the reduced-resource configuration, it still requires ReadWriteMany for some PVCs, and that limits you to clusters that have NFS, Azure Files, or one of the other ReadWriteMany storage options, which is usually not the case. If you provision something in RHPDS, it's not gonna have it. If you use the default OpenShift installer in AWS, it's not gonna have it either. So that really limits the places where you can run this operator. All right. So before we go to questions and feedback, this is what we have in mind for future work. We really wanna keep enhancing the operator for day-two and day-three operations. We really think the operator, in combination with service auto-discovery, should be the way to define your API based on the OpenAPI spec. Instead of using the toolbox or the management API — all of these are great tools when you need to do other things — when it comes down to aligning with the normal DevOps flow of your service, you need something that can create this for you. So we really wanna get to a point where we can properly parse the OpenAPI spec and do all the mappings based on that. Hey guys, can you hear me? Yep. All right. I just put the public GitHub repo for this demo in the chat, and my partner Alex just put the repo for the operator. Let us know if you have any questions. Obviously, as you can see, it's a work in progress. Last time we checked, we didn't find any big changes regarding the operators in the latest 3scale release. So I think as we as an industry move towards API-contract-first and APIs as code, I'm pretty sure these efforts will become more relevant. Let us know if you have any questions. Khaled, do you wanna say anything else? Nope, you pretty much covered it.
Does my audio sound all right? Sounds like it's all right. Oh, okay. How about now? No, sounds like it's still dead. It does. All right, I don't see any big questions popping up in the chat box, so we'll be wrapping up the session, and I thank Khaled and Michael for taking the time and doing this wonderful presentation. We have the next session lined up, on the improvements in the OpenShift Python S2I, which will be presented by Fredo Lynn at 2:30 p.m. So stay tuned and join our breakout room. Meanwhile, if you have any questions, Khaled and Michael will be available there. Thank you very much, everybody. Yeah, have a nice day. Bye bye.