Hello everybody, it's really nice to see so many people here; I'm honestly humbled. As you can see, I changed the title of my presentation. It was "Tekton versus Jenkins X", but that was just a pitch to get accepted at conferences. There is actually no competition here, because the second project is based on Tekton, so it's more of a comparison.

Who here is practicing CI/CD? You can raise your hands. Wow, what a good audience. At other conferences it was usually just one guy, and I think he was following me to every conference, but he's not here today.

So why do we do CI/CD? In my opinion, we do it because we want a quick and continuous feedback loop. Why? I believe none of us can write perfect code straight away. Coding is a sequence of really small experiments, and we want to gather feedback as soon as we can, because we want to delight our customers and guarantee that our software works properly in production.

We know that reality sometimes looks like this slide. It's war: you can see smoke, you can see gunfire, you can see people screaming, and in the bottom left corner you can even see the product owner dead on the floor. It's a mess. But our goal is to continuously improve, so we should try.

Let's take a step back. We have seen massive changes in the last five years or so: the rise of containers, microservice architectures, and container orchestrators to deploy and manage containers at scale, where the de facto standard nowadays is Kubernetes. This is a big part of the paradigm shift, and it is also influencing the design of CI/CD tools. Why? Because the tools we normally use were not designed for the cloud. You can see the logo of Jenkins here, for instance.
Jenkins is the most used CI/CD tool out there, and it's awesome; it has allowed us to go fast for more than ten years. But it was not designed for the cloud: we have a big JVM running with gigabytes of memory, continuously, wasting CPU cycles. Since in the cloud we pay for CPU and memory usage, we would like something with a serverless, on-demand mode.

That's where the Tekton project comes in. It's a pretty new project, announced roughly one year ago, and in my opinion it is trying to redefine CI/CD for the cloud and for Kubernetes. It stems from the Knative Build project, which was used to build functions into containers, in order to have serverless workloads on top of Kubernetes. The community was so excited that they decided they wanted more, and that became Tekton.

My view of Tekton is that it is basically the first Kubernetes-native pipeline execution engine. How does it work? Let's go through the basics. We have a few Custom Resource Definitions; I identify five building blocks, although the number is actually increasing. There is also a tiny controller watching for these CRDs in our Kubernetes or OpenShift cluster.

This slide is a bird's-eye view of the resources, so let's go quickly through them. But first, a question: who here is already using Tekton? A few people, I believe; I think there are even a few Tekton developers in the room, so I hope not to say any bullshit.

We have a Pipeline which, as usual, is a collection of Tasks. A Task in Tekton is executed within a pod in Kubernetes. You can execute Tasks sequentially or in parallel; normally you define the so-called directed acyclic graph. And of course the pods are orchestrated by Kubernetes, in other words,
they can run on different nodes. A Task, as we said, runs within a pod and defines a sequence of steps; each step is executed within a container, and normally the steps execute in the order we define them in the YAML definition. I know you all love YAML, especially for the indentation.

If we want to run a Pipeline, we normally use a PipelineRun. It's the CRD for instantiation: if you think about object-oriented programming, you can think of a class and then an object of that class, more or less. A PipelineRun also allows us to bind resources, the PipelineResources, at runtime: for instance git repositories, Docker images, and so on.

Tasks can also be run in isolation; you can see TaskRuns at the bottom. So you can do that, or you can execute the Pipeline through a PipelineRun.

This is again the complete picture. In my opinion, I'm pretty amazed by Tekton, because pipelines are now cloud native: they are based on containers orchestrated dynamically by Kubernetes, and they are decoupled, because we can execute pipelines on any Kubernetes cluster, on any cloud provider. We can run Tasks in isolation, compose them as we wish, and bind resources dynamically at runtime. It's pretty cool, in my opinion.

But this was the boring stuff, so now I pray to the demo gods and try to wake them up. Hopefully they are with me, Wi-Fi and all. Just shout if you have any trouble seeing from the back, no worries, and I will also zoom in during the steps. For time reasons I already have a Kubernetes cluster with Tekton installed. We can hope.
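To make the building blocks just described concrete, here is a minimal sketch of the three core CRDs. All names are invented for illustration, and the apiVersion reflects the early v1alpha1 API that was current at the time of this talk (newer Tekton releases use a different API version):

```yaml
# Task: a sequence of steps; each step runs in its own container.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: say-hello
spec:
  steps:
    - name: hello
      image: alpine
      command: ["echo", "hello"]
---
# Pipeline: a directed acyclic graph of Tasks.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  tasks:
    - name: first
      taskRef:
        name: say-hello
    - name: second
      taskRef:
        name: say-hello
      runAfter: [first]        # an ordering edge in the graph
---
# PipelineRun: the "object" that instantiates the Pipeline "class".
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: hello-pipeline-run-1
spec:
  pipelineRef:
    name: hello-pipeline
```

Applying these three manifests is enough for the controller to schedule one pod per Task, in the order the graph dictates.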
Let's see if the Wi-Fi is still working. I have a Kubernetes cluster with three nodes here, and we can see that the Tekton controller is running, luckily, and watching for custom resource definitions. We can see the CRDs here, for example the tekton.dev ones, and so on.

Okay, so what do I want to do? I want to build, test, package, and deploy a really simple application: the Spring Boot PetClinic. I have the code over here in the pet clinic folder. As you can see, this is just a super simple Spring Boot application: you can see the starter parent and a few dependencies, the Actuator, the starter for relational databases, the web starter for REST endpoints, and so on. A classic Spring Boot application; nothing really fancy.

Now I would like to deploy this application on my Kubernetes cluster. To do that I have another folder; these are git repos on my GitHub, if you want to have a look at them. The deploy folder just contains Kubernetes definitions. You can see here a classic Deployment: we run one replica, we specify our container, pulling it from a container registry, I defined a few resource limits and requests, and the classic readiness and liveness probes. This is what we will use to deploy our application.

The third folder, pet-clinic-tekton, contains my pipeline. Before going through the tasks and the pipeline, I would like to apply the resources, so the pipeline can build while we go through the YAML definitions. Everything is in pet-clinic-tekton, as I said. So: let's apply a service account for access rights, then deploy the PetClinic pipeline, then my resources, and finally, after applying these objects, I would like to run the pipeline.
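A PipelineRun for a pipeline like this one would look roughly as follows. The pipeline, resource, and service-account names are invented for illustration, and the field names follow the v1alpha1 API of the era:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: petclinic-pipeline-run-1
spec:
  serviceAccount: petclinic-sa        # grants registry-push and deploy rights
  pipelineRef:
    name: petclinic-pipeline
  resources:                          # bind concrete PipelineResources at runtime
    - name: app-source                # logical name used inside the Pipeline
      resourceRef:
        name: petclinic-git           # a git-type PipelineResource
    - name: app-image
      resourceRef:
        name: petclinic-image         # an image-type PipelineResource
```

The same pipeline can be re-run against a different repository or registry just by binding different resources in a new PipelineRun.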
For that I'm using the PetClinic PipelineRun. Okay, let's see if we have our stuff. We can see three tasks; don't worry about them now, we'll go through the YAML definitions. Let's see if we have the pipeline. Yes. The same can be done with the Tekton CLI; Tekton also comes with a CLI, so you can play with that too. And we can see we have our stuff.

While the pipeline is building, which will take three or four minutes, let's go through the pipeline definition. As you can see it's really short, even though I have a really complicated application. Any questions so far?

Okay, we said we want to build, test, package, and deploy a small Spring Boot application. To do that I defined three tasks: the Maven build, the container build, and the deployment, and then I grouped all of them together in a pipeline.

Let's go through the tasks. You can see a Tekton Task here; this is the Maven build task. A Task normally must specify at least one step (that is mandatory) and can specify inputs and outputs. In this case, as you can see over here, we have a few inputs: a workspace, a git resource that contains our code, and a parameter. A parameter can be anything you want, for example the artifact name, the artifact version, a git token; in this case I'm just specifying a working directory.

To build our code we specify steps, which are executed within containers. Here you can see the important step, the Maven build, which is executed in a container that contains Maven and the JDK. As you can see, we execute the classic command mvn clean install, and here I'm also executing the tests, because I like to do TDD. If we wanted to do something else, we could even comment them out, but that would be called JDD, Jesus-driven development: just push something
to git and pray that it works. I have actually seen that in projects.

Now we have another task, the container build. The task is named build-kaniko, as you can see here. We also specify a few input resources; the input resource comes from the previous task and is basically the workspace containing the artifacts. We also have a few parameters, for example the Dockerfile path over here. The output, as you can imagine, is a container image.

I have one step, the Kaniko build. Kaniko, by the way, is a tool for building container images inside a Kubernetes cluster without mounting the Docker socket, which, as you know, has some security issues; that's why I'm using Kaniko here. As you can see, I mount a secret, in order to be able to push to the container registry, and expose the secret as an environment variable.

Then we have the third and last task, the deployment. Here the input is a git repository, because we said we have our manifests in a git repository separated from the code. This is at least my personal preference, so that, for example, I don't need to rebuild the code if I just change a YAML definition; but there are other strategies as well if you want to use a single repo. We also have a few parameters, for example the deployment YAML file we want to use.

As for the steps, the first one doesn't really matter, it's cleanup stuff. This one is the important one, the kubectl deploy: I'm just using a container that contains the kubectl CLI, and I'm executing kubectl apply.

Now we have three tasks.
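The two tasks just described might look roughly like this. Image names, parameter names, and the v1alpha1 substitution syntax are illustrative reconstructions, not the speaker's exact files:

```yaml
# Container-build task: Kaniko builds the image without the Docker socket.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-kaniko
spec:
  inputs:
    resources:
      - name: workspace              # source and artifacts from the previous task
        type: git
    params:
      - name: pathToDockerfile
        default: Dockerfile
  outputs:
    resources:
      - name: builtImage             # the resulting container image
        type: image
  steps:
    - name: kaniko-build
      image: gcr.io/kaniko-project/executor:latest
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS   # registry secret exposed as an env var
          value: /secret/registry-key.json
      args:
        - --dockerfile=$(inputs.params.pathToDockerfile)
        - --context=/workspace/workspace
        - --destination=$(outputs.resources.builtImage.url)
      volumeMounts:
        - name: registry-secret
          mountPath: /secret
  volumes:
    - name: registry-secret
      secret:
        secretName: registry-credentials
---
# Deploy task: plain kubectl apply of a manifest from the deploy repo.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: petclinic-deploy
spec:
  inputs:
    resources:
      - name: manifests              # git repo holding the Kubernetes YAML
        type: git
    params:
      - name: deploymentFile
        default: deployment.yaml
  steps:
    - name: kubectl-deploy
      image: bitnami/kubectl         # any image containing the kubectl CLI
      command: ["kubectl"]
      args: ["apply", "-f", "/workspace/manifests/$(inputs.params.deploymentFile)"]
```

Keeping the deploy task generic like this lets the same task deploy any manifest the pipeline hands it via the parameter.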
We group them together in a pipeline. You can see the Pipeline definition here, named petclinic-pipeline in my case. We define a few resources, which will be passed to the tasks; in this case you see two git repositories and one Docker image. We could have defined parameters, but I didn't really need that for the demo.

Then we start listing the tasks. The first task is the Maven build, the mvn clean install; we reference our Task and specify the resources. At this stage in the pipeline, when you start passing resources, you can think of them, at least that's my perception, as logical resources; the physical resources are represented in another CRD, which we will see later. So here we specify which resources the tasks will use, and of course the pipeline can override resources and parameters defined within our tasks.

We have seen the first task reference. The second task reference, of course, references the Kaniko build, the container build, and we have a few resources here. There is something interesting in the input resources: at this line we are receiving a repository, and this repository is coming from the previous task. With this `from` keyword we can define sequential execution, an ordering of the tasks: this task cannot start before we have a jar, for example. The output, of course, is the Docker container image.

For the last step, we said we want to deploy with kubectl, the petclinic-deploy; the task is referenced here. But how do we guarantee ordering in this case? We don't have a logical binding between input and output resources, because we are just applying a Kubernetes manifest. Therefore I need to specify explicitly that I want to deploy the container image only after I actually have a container image.

Then we have a few PipelineResources, as we said; in our case, three. You can see the first one: in the spec, a git repo which is pointing to my GitHub.
It contains the PetClinic application. Then there is another one which points to the container image stored in the container registry, as you can see the type here is image, and the last one is again a git repository, which contains the Kubernetes manifests.

To run the pipeline, the last manifest we look at is the PipelineRun. Here we are instantiating, we are starting, our pipeline. We can specify a service account in order to have the right permissions, and of course here we perform the binding between the pipeline resources we defined previously and our tasks; these will be the resources provided to the task executions. That's it. Okay, I hope everything is clear.

So let's see what's going on here. Wow, the demo gods are with me, at least for now. We can see our application up and running, and we can also see three completed pods; these are the pods which executed the three tasks: you can see the Maven build, the container image build, and the deployment.

We can use the Tekton CLI to get some information, for example about the TaskRuns. Okay, we have three TaskRuns: started, for example, seven minutes ago, status Succeeded. If we want more information, we can use the taskrun describe command; I want to describe, for example, the Maven build. Here you can basically see what we defined in the YAML definition: the status, success luckily, the input resources, the output resources, still a git repo, and a few steps. We can see the one we care about, the Maven build; some of the others are automatically generated by Tekton.

If we want some logs... well, let's list the TaskRuns again. If I want to get logs, I can ask for the logs of my TaskRun, and we should be able to follow all the logs. So, hopefully, if the Wi-Fi is fast enough, we can see the classic Maven logs:
downloading the world, basically. Okay, cool. Our application is now deployed to the Kubernetes cluster, and I'm exposing it using a Kubernetes Service; I will show that to you. We have a Kubernetes Service here of type LoadBalancer, so a controller inside my Kubernetes cluster is automatically provisioning a cloud L4 load balancer with an ephemeral IP address. I can get the service, this is the external IP, so let's try to access it. It was port 1990. Yes, so, a super complex application, but it's here, up and running.

Okay, now let's continue with a few slides. We have seen Tekton pipelines, which are awesome: they are scalable, they are portable, they are decoupled. But in my opinion, as a consultant dealing with code and middleware, what I see is that for the average developer, which in my opinion is more than half of the developer population, defining such pipelines is a bit complicated. Kubernetes in general is a little complicated, and when something is complicated it becomes error-prone. A normal developer, from what I see at customers, just cares about building the code and deploying it; they don't really want to understand the underlying infrastructure and all the tricks.

Also because, in my opinion, everything is awesome with Kubernetes, especially extended with Tekton, but Kubernetes, and of course OpenShift, are like a baby: really nice to play with, but you don't know when it will start crying. Something can always happen.

That's where Jenkins X comes in. Even though the name makes it look like the old Jenkins, it is basically a completely new project. The CloudBees folks are trying to rearchitect Jenkins for the cloud; at the beginning it was actually even based on a static Jenkins master, but then it evolved. Jenkins X basically tries not to reinvent the wheel: Tekton is pretty good already.
So Jenkins X is trying to build an abstraction on top of Tekton, with a few additional controllers which basically translate the Jenkins X YAML pipelines into Tekton resources. I think most of us are used to the classic Jenkinsfile with the Groovy DSL; the new Jenkins X pipeline plays exactly the same role, but is defined as YAML and translated into Tekton resources.

Something I find really awesome about Jenkins X is the use of GitOps. What is GitOps? We don't just keep the code in a git repo, or the infrastructure and the pipelines, but also operational knowledge: the configuration of the CI/CD ecosystem. Which environments do we have, staging, production? Everything is represented in git. Which applications are on staging, in which versions? Everything is in git. We have an audit log, we can revert in case of trouble (we have seen the picture at the beginning about real-life deployments), and it's also pretty cool for disaster recovery, in my opinion.

Jenkins X has drawbacks too, so I would like to start with this slide. It's awesome, and I will try to show a little demo, but of course it has a few drawbacks: it is opinionated, and in my opinion it is not yet rock solid. It has a lot of moving parts: Tekton, the Jenkins X controllers, the webhook handler, Prow for chat operations. Sometimes being based on so many tools makes the tool really shaky. As you can imagine, while I was preparing the demo in the last few days there were a few regression bugs, which will not allow me to show the complete application deployed to production, but I will try to show you the best I can at the moment.

I already have Jenkins X installed in another Kubernetes cluster. Let's make it a little bit bigger. Jenkins X also comes with a CLI; as you can see, I have the CLI here. Let's switch the context now.
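As a side note, a Jenkins X YAML pipeline (a jenkins-x.yml file in the application repo) has roughly this shape. The build pack, stage, and step shown are illustrative, not a definitive schema:

```yaml
buildPack: maven                 # inherit default stages from the Maven build pack
pipelineConfig:
  pipelines:
    release:
      pipeline:
        stages:
          - name: build
            steps:
              - name: mvn-install
                image: maven
                command: mvn
                args: ["clean", "install"]
```

The controllers mentioned above take a file like this and expand it into the corresponding Tekton Pipeline and Task resources.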
I want to use the Jenkins X cluster. So how did I install it? The best way to install it is jx boot: with jx boot you start from a git repository which represents the CI/CD configuration, and you install it on your Kubernetes cluster. I have an example here, the jenkins-x-boot-config repo; as you can see there is a lot of stuff, and probably the most important file is the requirements. There we can say, for example, what the cluster is, who the owner on GitHub is, which environments we have, for example a development environment and a staging environment, whether TLS is enabled, configuration for the ingress controller, storage, and so on. Everything is pinned down with jx boot. Alternatively, since, as I said, it is not super stable yet (although I truly believe it is really promising), you can use the jx install command.

Now let's see what is running on this Kubernetes cluster. We have a bunch of stuff, as you can see, and everything has been installed just by typing jx boot or jx install. Out of the box you have the Tekton pipeline controller, then a few Jenkins X specific controllers here, which cooperate and translate the Jenkins X pipeline into Tekton definitions. Don't worry about the names for now, but just so you know: this is basically another application which is used as a webhook handler, and this is Prow, which is used by the Kubernetes project itself. So we have a bunch of components already installed. We can also get the CRDs, and as you can see I'm not lying: everything has been installed by Jenkins X. I believe this is pretty awesome.

Now let's try to get the environments. We said we are using GitOps, so our environments are represented in git. Cool.
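For reference, the requirements file mentioned above (jx-requirements.yml in the boot config repo) looks roughly like this; every value here is an illustrative placeholder:

```yaml
cluster:
  clusterName: demo-cluster            # name of the target Kubernetes cluster
  environmentGitOwner: my-github-user  # GitHub owner for the environment repos
  provider: gke                        # cloud provider
environments:
  - key: dev
  - key: staging
  - key: production
ingress:
  tls:
    enabled: false                     # whether TLS is terminated at the ingress
storage:
  logs:
    enabled: true                      # persist build logs to cloud storage
```

jx boot reads this file and drives the whole installation from it, which is what makes the setup reproducible from git.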
So Jenkins X is opinionated and starts by default with two environments: a staging environment and a production environment. The staging environment has automatic promotion, since we are doing CD; you can see "auto" promotion here. Production, by default, is manual here, but if you want to do continuous deployment, if the team is mature enough and everything is automated, why not?

At the moment everything is still on a single Kubernetes cluster, and the environments are represented by different namespaces, as you can see in this column over here. As far as I know (I cooperate a little in the open source community with the Jenkins X folks), the team is also working on multi-cluster support.

As you can see, we have a git repo for staging, for example; this is on my personal GitHub. The staging configuration is completely under version control, and everything that is deployed to this namespace, this environment, is represented here as a dependency. When we add applications, they are added automatically to this repository through a pull request; automatically for staging, that is. For production, of course, you would open a pull request manually: I want to promote, for example, microservice X version 2 to production; you open a pull request, and it gets reviewed. This is pretty awesome, because everything you do, even the promotion of microservices,
goes under version control.

Now, I said that the latest version is affected by a few regression bugs, which means I will not be able to show the complete application running on production, but I would still like to show you how you would create or import a microservice. I have a copy of the PetClinic application here; as you can see, it's exactly the same application, just a copy.

To import this application, what do we do? First of all, we enter the directory. Then we say jx import. Please, demo gods, be with me. In this case we want to import this microservice, and I want to use my GitHub. Jenkins X guides us through the process of importing microservices: it initializes the GitHub repo for us with an initial commit, and in a few steps, here it is, it applies a build pack. It basically scans the pom, realizes that in this case it's a Java project, and applies a build pack providing everything we need to build and deploy our application to Kubernetes: in other terms, for example, a Dockerfile, Kubernetes manifests, and of course the CI/CD pipeline.

So let's see if at least this works. We are now importing our microservice to my personal GitHub, and Jenkins X is also registering a webhook, which will notify Jenkins X, and therefore Tekton, in order to start the pipeline. The Wi-Fi is probably a little slow here, but it looks like it's actually creating and pushing the repository, and here it is: we have the webhook registered as well. Everything is already on GitHub now. If we want to see the activity, we can do that. As you can see, the pipeline is running, but unfortunately it's failing because of the bug
I was telling you about before. What's important to notice here is the automatically generated step which creates the Tekton CRDs. If we want logs, assuming it can fetch something, we can get the build logs for the pipelines associated with the repository, with our microservice. It's trying to fetch logs now; of course, there is a small failure here.

Cool. This was just a really short introduction; I really encourage you to try it out, together with Tekton. Okay, we already talked about the drawbacks, so let's go forward.

Let's recap what we introduced today: cloud native CI/CD, which is awesome. We have seen Tekton as a Kubernetes-native pipeline execution engine. Afterwards we have seen Jenkins X, and we have seen that it is based on Tekton, so there is actually no competition; the CloudBees folks are also cooperating with Red Hat and other companies on Tekton. This is pretty awesome. Not yet production ready, I would say, but really promising, and it really allows developers to speed up operations.

So, cloud native CI/CD is pretty cool, but in my opinion it is not a silver bullet; nothing in IT is a silver bullet. Why? Because the biggest challenges we see are not technical challenges; normally the biggest challenges are human challenges. So my last message for this short presentation: always be curious, always keep learning, and keep sharing, because knowledge, in my opinion, is something that doesn't make us poorer when we share it. It's not like money. Thank you.

If you have any questions, I would be happy to try to answer. Yes? Okay, so the question was: we have seen staging and production as different environments, as namespaces in the same cluster. What happens if we have different clusters? At the moment this is work in progress, as far as I know. The trick here is this:
the Jenkins X folks are building a custom controller which allows the clusters to communicate with each other. We will have one cluster which is just for CI/CD, and other clusters with a tiny controller, connected through Istio, so everything is basically encrypted, and, like in a multi-cluster solution, they will be able to communicate. But as you have seen, it's still a bit shaky, even though it's promising. So the final answer, as far as I know, is: it's under development.

Yes? Not necessarily; I will repeat the question for those in the back. Do you still need a Dockerfile? Not necessarily. It depends on how you build the application. For example, in the case of Tekton I used Kaniko, but we could also use Skaffold, or we could use build packs. It depends on what you configure. In my case I'm using a Dockerfile here, but it's not mandatory. And that's actually what Jenkins X is doing: if we check, for example, what I imported, my PetClinic application, which at the beginning was pretty tiny, now has, for example, a Dockerfile and also a Skaffold definition; this is what it uses to build the Docker containers. And by the way, this is the Jenkins X pipeline. It's really complicated, as you see.

Sorry, so, yes, uh-huh. So the question was about Argo CD and its use with GitOps. I have played only briefly with Argo, so I'm not really an expert, but as far as I have seen, Argo is just a CD solution; it's not a CI solution. It is used to deploy things continuously to Kubernetes or even OpenShift. So the comparison, in my opinion: Argo is more specific to CD, while Jenkins X tries to do a little bit of everything. Next question, please. Okay, so this is a really interesting question.
What I have shown is basically the open source version. There is support for security and all the enterprise stuff, but that comes as enterprise add-ons which you have to install on top of it. It's possible, for example, to integrate it with Helm, and it's even possible to integrate it with Vault for secret encryption, but some add-ons are not free.

So I think I'm out of time, unfortunately, but I will be hanging out a little bit outside. Thank you.