Hello again, my DevNation friends from everywhere in the world, welcome to another DevNation Tech Talk. We had a break last week because it was Thanksgiving here in the US, but we've brought you some amazing content today. Today, directly from Madrid, Spain, we have a very special guest: Davi is going to be talking all about serverless, Tekton, and Argo CD. Without any further delay, Davi, the stage is yours.

Okay, thank you very much, Edson. Let me share my screen... so I'm trying to share my screen.

No worries, we know it happens.

Yeah, that's why I can't...

That's streaming for you: it's hard to share the screen, sometimes we're on mute, or the camera doesn't work. So if you're watching this, you know that we're doing this live, because sometimes these things just happen.

It's strange. Let me see, I don't know if I can rejoin the call.

Probably. If you want to try, you can drop and reconnect.

Yes, it's strange, just give me a second.

No worries. I'm sure everybody is super excited to learn more about serverless, Tekton, and Argo CD, so just stay tuned. Sebastian is suggesting that I should sing a song, but I don't want you to leave the channel, so I'll spare you that horrible situation. I'll just keep talking: if you're here in the US, I hope you enjoyed the turkey last week. If you're not, well, I'm pretty sure it was a normal week, if you can call it normal with COVID-19 and everything that is happening. And while we're waiting for Davi to rejoin... he just rejoined. We're having some screen-sharing issues, so I hope we figure them out, and... yes, awesome, we can see your screen now, Davi. I'll stop talking and let you do your presentation. Thank you.

Yeah, apologies for that, I don't know what's going on. Okay. So thank you, Edson, for the introduction. My name is Davi Sancho. I'm a senior architect at Red Hat Services, based in Spain, and as you can tell from my accent, English is not my native language. Today's talk is based on a two-part article that was published at Red Hat Developers some weeks ago. The title of the article is "CI/CD Workflows for Serverless Applications with Red Hat OpenShift Pipelines and Argo CD." This means that if you miss any part of the session, or you want to reproduce it yourself at your own pace, feel free to visit the blog post and follow the instructions step by step.

So let's start. First, I'll give a brief introduction to get a high-level understanding of the CI/CD workflow and of the technologies and frameworks involved in it. I'd also like to mention that during the session I'll jump between the demo and the presentation to explain what's going on under the hood.

So which are the main technologies involved in the CI/CD workflow that will be shown today? The first one is Tekton, a Kubernetes-native open source framework for creating continuous integration and continuous delivery pipelines. Tekton pipelines are cloud-native: they run on Kubernetes and use containers as building blocks. They are also decoupled, in the sense that one pipeline can be used to deploy to any Kubernetes cluster, tasks that are part of a pipeline can be reused across different pipelines, and resources such as parameters and workspaces, pointing for instance to Git repositories and image registries, can easily be swapped between different runs. In this session, Tekton will be used only for the continuous integration part of the workflow.
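For readers following along, here is a minimal sketch of what a Tekton Task and Pipeline along these lines might look like. All names, images, and parameters are illustrative assumptions, not the demo's actual definitions; only the git-clone Task reference is a standard reusable task from the Tekton catalog.

```yaml
# A minimal Tekton Task: runs the Maven build inside a container.
# The task name, image, and workspace name are illustrative.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-package
spec:
  workspaces:
    - name: source                  # shared volume holding the cloned sources
  steps:
    - name: package
      image: maven:3.8-openjdk-11
      workingDir: $(workspaces.source.path)
      script: |
        mvn -B package              # runs the unit tests and builds the JAR
---
# A Pipeline wires tasks together and passes parameters and workspaces.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-push
spec:
  params:
    - name: git-url
  workspaces:
    - name: shared
  tasks:
    - name: fetch
      taskRef:
        name: git-clone             # reusable Task from the Tekton catalog
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared
    - name: package
      runAfter: [fetch]
      taskRef:
        name: maven-package
      workspaces:
        - name: source
          workspace: shared
```

A PipelineRun, typically created by the trigger behind the webhook, supplies the actual parameter values and a workspace volume for each execution.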
Then we have Argo CD, which is a declarative GitOps continuous delivery tool for Kubernetes. Argo CD follows what we call the GitOps pattern of using a Git repository as the source of truth for defining the desired application state. In our CI/CD workflow, Argo CD will be in charge of synchronizing the application state whenever it detects differences between the desired manifests in Git and the live state in the cluster.

And finally, we'll use Kustomize, which provides a purely declarative approach to configuration customization. We'll use Kustomize as the glue between the continuous integration delivered by the Tekton pipeline and the continuous delivery provided by Argo CD; we'll see that in detail in the following slides.

Then, about the serverless part, we'll be using Knative, mainly Knative Serving, which provides a very simple and rapid way of defining and deploying applications. In this sense, we'll be using a single object, called a Knative Service, instead of having to manage multiple objects such as the Deployment, Services, Routes, and more. Knative will also provide automatic scaling, including scaling down to zero. And finally, it will provide a canary route so that new revisions can be tested independently.

Our demo application is implemented in Quarkus. For those of you who don't know Quarkus yet, it provides a container-first approach for building Java applications, and Quarkus applications have a very small memory footprint and fast startup times. Thanks to this, I think it's the perfect choice for developing serverless applications; that's why I chose Quarkus. Also, as we'll see later in the demo, the application configuration is based on three different config maps: we have defined three different levels with different priorities. In order to consume these config maps, Quarkus includes a very powerful extension called kubernetes-config, which gives us the chance, let's say, to prioritize properties from different config map sources.

So, about the demo workflow: let's look at each step in more detail. The first step is a developer pushing a new change to the application source code repository. In the source code repository we have configured a webhook that triggers the Tekton pipeline. Once the Tekton pipeline has started, the first task fetches the source code from the repository, and then a Maven task packages the application into a JAR file and runs the unit tests before building the container image. For building the container image, we use a Buildah task that builds and pushes the container image to the OpenShift internal registry. So far, it has all been about generating the artifact of the application and the image, so we could say that this is basically the continuous integration part of the workflow.

Then the pipeline continues by cloning the repository where we keep the desired state of the application manifests, which will later become the Knative Service created in OpenShift plus the additional config maps we talked about before. Initially, this Git repository might be empty, so this task is smart enough to initialize the repository with the first version of the application manifests. Once the manifests have been created, or modified if it's the second run, the task pushes the changes into this repository. These files and folders are structured in a Kustomize layout.
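A minimal sketch of the kind of Knative Service this workflow produces; the service name, revision name, and image reference are illustrative assumptions, not the demo's actual output.

```yaml
# A single Knative Service replaces the usual Deployment + Service + Route
# trio. Names and the image reference are illustrative.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quarkus-hello-world
spec:
  template:
    metadata:
      name: quarkus-hello-world-rev-1        # explicit revision name
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/development/quarkus-hello-world:latest
  traffic:
    - revisionName: quarkus-hello-world-rev-1
      percent: 100                           # the main route
    - revisionName: quarkus-hello-world-rev-1
      tag: candidate                         # tagged route with its own URL,
      percent: 0                             # usable as a canary endpoint
```

Knative Serving scales the underlying pods to zero when the service is idle and back up on the first request, which explains the scale-up delay seen later in the demo.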
As we'll see later in the presentation, this is mainly the glue between what we call the continuous integration delivered by the Tekton pipeline and the continuous deployment managed by Argo CD. And finally, Argo CD will pull from the configuration repository and synchronize the existing Kubernetes objects in the OpenShift cluster.

At a different point in time, completely outside of this regular CI/CD workflow, there might also be changes produced by the operations team, for instance to change one parameter in a config map of the target microservice, or some information that is unknown to the development team. This last step would also put the application out of sync in Argo CD, which would lead to a new synchronization process; those are steps number 10 and 11.

I'd like to spend a minute briefly explaining the structure of the repository once it has been initialized. As I just mentioned, the Tekton pipeline finishes its execution by pushing the desired state of the application, in a purely declarative approach, into Git. What we see on the left is the structure of this repository, which is organized with a base directory containing global descriptors shared by all the existing environments. Then, for every environment, each represented by a different folder, we make use of the Kustomize concept of overlays: for each environment, we have a kustomization.yaml identifying which resources are used for the composition and customization process. These files are then consumed by Argo CD, which synchronizes and creates those objects in the OpenShift cluster.

Well, so far it has all been about the workflow, technologies, and so on, so I'll move now to the demo, but I'll come back to the presentation later to continue explaining certain parts of the workflow while the pipeline keeps running.

First of all, since we have just 30 minutes for the session, I have already pre-installed the main tools, such as OpenShift Pipelines, Knative, and Argo CD. I have also created three different projects, or namespaces, for development, staging, and production, which are where we'll deploy the applications. And I have created a CICD project where I keep the Argo CD instance, the Argo CD server, running. In this Argo CD server (let me move to the main dashboard of Argo CD) I have created three different applications. Each application points to the same repository, but with a different path, development, staging, or production, which is where the Argo CD application takes the manifests from to be synchronized into OpenShift.

If I move back to the CICD project and the pipeline resources, I have also created the knative pipeline, which is the pipeline I described before, with all the steps identified as tasks within the pipeline, plus some steps within each task. I have also created an event listener that basically serves as the entry point for triggering the pipeline from the webhook we have configured in the source code repository of our application. Everything here has been installed through the OpenShift Serverless Operator; it's just a very basic Knative Serving instance for this demo. And that's pretty much all of the pre-installation. So now let's try to run the pipeline from a change applied to the source code.
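A sketch of the base-plus-overlays layout being described; the directory and file names here are assumptions modeled on the talk, not the repository's exact contents.

```yaml
# Assumed layout of the deployment repository:
#
#   base/
#     kustomization.yaml
#     knative-service.yaml          # shared Knative Service skeleton
#     global-configmap.yaml
#   development/                    # one overlay folder per environment
#     kustomization.yaml            # the file sketched below
#     env-configmap.yaml
#     traffic-routing.yaml
#     revision-1/
#       revision-patch.yaml
#       routing-patch.yaml
#
# development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base                         # pull in the shared descriptors
  - env-configmap.yaml              # per-environment ConfigMap
patchesStrategicMerge:
  - traffic-routing.yaml            # sets the main route's traffic split
  - revision-1/revision-patch.yaml  # revision name and image
  - revision-1/routing-patch.yaml   # canary route for the revision
```

Argo CD detects the kustomization.yaml in each application's path and runs Kustomize itself before applying the rendered output to the cluster.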
So I have checked out the project, the Quarkus Hello World application. I'm going to, for instance, change the message here; it's going to be, for instance, "DevNation". I'm going to copy this message because I also have to change the unit test here. Just for a quick verification, I'll run the unit tests. Yeah, they pass, perfect. OK, so what I'll do now is push these changes to the application source code repository: git add, git commit, git push. Once I push the changes, I'll see the pipeline running in our OpenShift cluster. If I move back to the OpenShift cluster and the pipelines project... here it is, the pipeline is running. The pipeline is going to take, hopefully, between three and five minutes.

Meanwhile, I'd like to give more details about the structure of the output, the files generated for Kustomize. The main object that is going to be generated from the Kustomize files is the Knative Service, which is on the right side of the screen. This Knative Service takes its main structure from the Knative Service YAML located in the base folder. This means that every final Knative Service deployed to any environment is built from this base Knative Service; as you can see, almost the whole Knative Service is highlighted, because it takes its whole structure from the Knative Service YAML in the base directory.

Then, for every environment, we have a traffic-routing YAML that basically adds the main route for the service, targeting the desired revisions with a different percentage of traffic per revision. And as the repo has just been initialized (remember that this is the first run of the pipeline), we'll only have a single revision behind the main route. Also, for every revision, we set some specific details, say its name and image; these elements come from the revision-patch.yaml file. And then we also have a canary route for this new revision, which is defined in the routing-patch.yaml. That's interesting because, well, the first time, the canary route goes to the same revision as the main route, but once we have two different revisions, it gives us the chance to canary-test the new revision.

So that's all about the Knative Service composition, but we'll also have some configuration at different levels, as I said when I explained the Quarkus application. In the base folder, we have a global config map that will be synchronized in every environment. As I said, this is the first iteration of the pipeline, so this config map is empty; we are just initializing it. Further changes could be made to this config map, for instance by the ops team, to include some global parameters, and of course Argo CD would synchronize this new configuration in all environments; remember that what we have in the base directory is synchronized in every environment. Then we also have a config map per environment, and same as the global one, it is currently empty; any new change would be synchronized by Argo CD as well. And finally, we have the config map at the revision level. This config map has been copied from the application source code repository, so we let developers provide some configuration defaults for the application, and this configuration may be overwritten by the other two config maps I explained before.
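A sketch of the three configuration levels just described; the ConfigMap names and the property key are illustrative assumptions.

```yaml
# Revision level: copied from the application source repository, so the
# developers' defaults travel with each new revision.
apiVersion: v1
kind: ConfigMap
metadata:
  name: quarkus-hello-world-config-rev
data:
  application.properties: |
    greeting.message=Hola
---
# Environment level: empty at first; ops can override values per environment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: quarkus-hello-world-config-env
data: {}
---
# Global level: lives in the base folder, synchronized to every environment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: quarkus-hello-world-config-global
data: {}
```

On the Quarkus side, the kubernetes-config extension is pointed at these ConfigMaps through the quarkus.kubernetes-config.config-maps property, and the order of that list determines which source wins when the same key appears in more than one ConfigMap.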
Well, let's move back to the pipeline. See, perfect, it's finished, everything is green, that's great. So we move now to Argo CD, and we refresh the applications that are managing the development environment and the staging environment; we do the same in production. But there is one main difference between these three environments: while in development and staging Argo CD synchronizes automatically all the changes that it gets from the deployment repository, in production we have to synchronize manually. I'm not going to do it yet, because I'll make a change after testing development and staging.

If we move to the Topology view in the Developer perspective, and look at the topology for the Knative Service, we can see that in development we now have a single pod, which comes from the Knative Service we have deployed and the revision we have just synchronized. So if I move back here, I'm going to get the URL, the main route. Right now it doesn't make much sense to try the canary route, which is the other one, targeting directly the revision we just deployed. So I'm going to do some tests. Let me move the terminal here... probably, yeah, much better. So if I run the test against this URL (don't forget about the path)... well, now it's taking some time because, while we were moving from one screen to the other, Knative was scaling this pod down to zero, so now it's scaling up again to one. OK: "DevNation rocks." That's true.

Next, I'm going to test the staging environment. I'll get the URL from staging, the same as here, and I'm going to use the main route, because the canary route doesn't make much sense now, since we have just a single revision. Again, the pod was scaled down to zero... OK, good: "Hola."

And in production... first, I'm going to pull the changes to my local file system from the Quarkus Hello World deployment repository. In this case, in production, I'm going to make a change: I'll override the configuration that comes from the application. I'm going to copy what comes from the application source code repository, the configuration that was set by the developer, and I'll override, for instance, the message for production. OK. Once I have these changes, I'll commit them to the repository: so basically I have made some changes in production, I commit, let's say, "overwrite message in prod environment", and git push. Oops. All right.

So if we move back to Argo CD, let's refresh it, and we'll force a manual synchronization for this environment, production. If we move to OpenShift, the Knative Service should appear soon. Here it is, the Knative Service, and then we should see the Knative Service revision. Let's wait a little bit... no revisions yet... yeah, there it is, we have the revision. I'll copy the route for this revision and test this endpoint as well: "Hi production." Perfect. So the main difference here is that instead of having "Hola" as the message, we are getting "Hi", because we have overridden the configuration for this environment.

OK, good. So next, I'm going to change the code again, because I'd like to have a new revision. The new message is going to be "This is awesome." I'll copy the message over to the unit test as well and run the tests. OK, they pass, perfect. So I'll push this change, and what's going to happen is that the pipeline will be triggered again.
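A sketch of one of the three Argo CD Application objects behind the sync behavior described above; the repository URL, namespaces, and names are illustrative assumptions.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quarkus-hello-world-development
  namespace: cicd                  # where the Argo CD instance runs
spec:
  project: default
  source:
    repoURL: https://github.com/example/quarkus-hello-world-deployment.git
    targetRevision: main
    path: development              # staging/production point at their own folders
  destination:
    server: https://kubernetes.default.svc
    namespace: development
  syncPolicy:
    automated:                     # development and staging sync automatically;
      prune: true                  # a production Application would simply omit
      selfHeal: true               # this block and be synchronized manually
```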
Any message suggestions? "Awesome message." Cool. Then we move back to the pipelines, to the pipeline runs. Here it is, the new pipeline is running. Again, I don't know how long the other one took, around three minutes, so I probably have slides to talk through for another three minutes.

So let's see what's going on when we run the pipeline again. Basically, there will be a new folder in each environment for this new revision, and all the overlays for this new revision go into two files: the revision patch and the routing patch. The first one, the revision patch, adds the new revision name as well as the new image. As for the routing patch, in this case we are including a new canary route, as you can see in green at the bottom right of the screen. But if you look at the main route, it still goes to the first revision. This is what I was talking about: 100% of the traffic still goes to the previous revision on the main route. What we'll do next, once the pipeline has finished, is change the amount of traffic for the main route, so we'll see how we can split traffic between different revisions in a very easy way.

Let's see... the pipeline still has some stages to run. What I could also do is show the details of what is, for me, the most important task of this pipeline, which is push-knative-manifest. This is the custom task I developed for this demo, and it basically does a couple of things, starting with cloning the deployment repository. Let's go to the spec here. The first step creates the base manifests (remember the structure of those files and folders), then we create the manifests that are specific to each environment, and finally (well, there is a lot of stuff here) we push all these changes to the Git repository.

The pipeline is still running; we are now fetching the deployment repository, and then it will run the task I was talking about. One interesting thing about Argo CD, by the way, is that you can do a rollback based on the history of deployments, of synchronizations. Here we have just a single revision, so I cannot roll back to any other revision; this is just the first one.

OK, good, we have the second revision now. Let's look at Argo CD again, at development and staging, and refresh them. They're auto-syncing, so the synchronization is already going on, and in production we still have to do it manually; we'll wait on that, as we did previously. Let's move back to the topology, in the development environment. OK, here it is: the second revision in development, as well as the second revision in staging. If we look at the topology, we see two different pods. That's cool.

I'll take the canary route, which I copied, just to test the canary route in isolation. If we move back to the terminal, let's make it bigger. As I said, the main route has not changed at all, even though we have deployed the second revision into the development and staging environments. But if I test the canary route for this new revision... I missed the path... OK: "This is awesome." This is the second revision. So let's do exactly the same for staging; I can just copy this for staging. It seems that everything goes fine... well, it's starting again, it's scaling up the pods... ah, the pods again...
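The push-knative-manifest task is custom to this demo and its spec is not reproduced in the transcript; what follows is only a rough sketch of how a task with the three described steps (create base manifests, create per-environment manifests, push) could be shaped. Every name, image, and command here is an assumption, and it presumes Git credentials are already available to the pipeline's service account.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: push-knative-manifest
spec:
  params:
    - name: deployment-repo        # URL of the deployment (config) repository
    - name: revision-name
    - name: image
  steps:
    - name: clone-deployment-repo
      image: alpine/git
      script: |
        # /workspace is shared between all steps of a Task
        git clone $(params.deployment-repo) /workspace/deploy
    - name: create-manifests
      image: alpine/git
      script: |
        cd /workspace/deploy
        mkdir -p base development/$(params.revision-name)
        # ...write or refresh the base Knative Service, each environment's
        # kustomization.yaml, and the revision/routing patches here...
    - name: push
      image: alpine/git
      script: |
        cd /workspace/deploy
        git config user.email "pipeline@example.com"
        git config user.name "tekton-pipeline"
        git add .
        git commit -m "Update manifests for $(params.revision-name)"
        git push
```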
"Hola staging." So once I see that the second revision works perfectly, I'll pull the changes to my local file system, and I'm going to split traffic between the different revisions; that's going to be very easy to do. This is the new revision, I'll copy this. Basically, I have to modify the traffic-routing file. Remember that if we don't modify it, the main route keeps sending traffic to the revision that was previously committed. In our case, I'm going to change it: in development I'll set, for instance, 20% to the previous revision and 80% to the new one, while in staging, just to see some differences in the output, I'll do 50-50. This file is really important because it's the one that manages the main route. And in production, I'll keep 90% on the old revision and send 10% to the new one. I'm going to commit these changes, "split traffic revisions", and git push.

Let's keep testing here. I'll just run the tests again against the main route, this one, and this one as well; production still has to be tested. And what is important... well, let's refresh Argo CD, and we'll refresh production as well. Since we have a manual synchronization process in production, one interesting thing is to be able to see what has changed between pipeline runs. In our case, in production, what we just did was set 90% of the traffic to one revision and 10% to the latest one, and part of these changes came from the pipeline automatically, because we're deploying a new revision. So these changes are fine; I'll synchronize production as well. And what we should see here... OK, here it is: in development, we have almost all the traffic going to the latest revision; in staging, it's about 50-50; and in production, we are sending just, let's say, 10% of the traffic to this latest revision, which is this one. This is awesome.

So I think that's all from my side; I think it's time for questions, if you have any. These are also the resources you can consult, mainly the blog posts (as I said, you can run this whole demonstration at your own pace, because everything is documented there) and the list of the repositories I used for this demonstration.
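For reference, a sketch of what the edited traffic-routing patch for development might look like with the 20/80 split described above; revision names are illustrative.

```yaml
# Edited traffic-routing.yaml for the development overlay: the main route
# now splits traffic between the previous and the new revision.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quarkus-hello-world
spec:
  traffic:
    - revisionName: quarkus-hello-world-rev-1
      percent: 20                  # previous revision
    - revisionName: quarkus-hello-world-rev-2
      percent: 80                  # new revision
```

Per the talk, the staging overlay would use a 50/50 split and production 90/10, with Knative weighting requests on the main route accordingly.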
Thank you, Davi. We had some questions, but I guess most of them were answered in the chat. We have one last one from Marcel Mauricio: are these revisions written manually for every build?

Oh, sorry, could you repeat the question?

Are these revisions, I suppose the Knative revisions, written manually for every build, or is it something automatic?

It's automatic; that's something that is done by the pipeline. Every time we run the pipeline, we create a new revision: we remove the old revision, let's say, and update the Kustomize files to reference the new one. When Argo CD builds the final output, it takes the new revision as the source for the Knative Service. The only thing we do manually is in production, where we just tell Argo CD to synchronize when, let's say, we are sure that we want those changes. In staging and development, everything is done automatically.

And one last question, from James Date, who is one of our DevNation champions: other than this being native to Kubernetes, what is the advantage of doing this over doing CI/CD with Ansible Tower and Git?

Yeah, well, I think it's probably a matter of preference. In my case, well, I don't know Ansible Tower that well, and I wanted to test Tekton and its integration with Argo CD. So I would say there is no single answer, like "do it with Argo CD and Tekton" or "do it with Ansible Tower"; they probably fit different use cases, but I think it's also a matter of preference.

Yeah, I guess that if you already have a Kubernetes or OpenShift cluster, it's better to stick with the native features; otherwise, you can use something like Tower and Git.

All right, we are already at the top of the hour. I'm glad that, even though the screen share didn't work at the beginning, all of your demos worked perfectly, so we had a very nice outcome for this tech talk. If you're watching this, I hope you enjoyed it a lot, and I'd like to thank you for watching us. Thank you, Davi, for this awesome presentation, and I hope to see you at the next DevNation Tech Talk. Bye. Bye-bye.