So, yeah, welcome everybody, and wait, I tried this: Nǐ hǎo, Shanghai! Thank you all for joining my talk. The talk, in case you didn't catch it, is the tutorial "From Cat to Lion: A Practical Guide to Building a Large, Secure CI/CD Platform with Tekton." This slide I took from the CNCF; they sent it out and said, hey, please use this. But actually, I like this one much more, because that's the thing I like so much about Tekton. And if you're wondering what Tekton has to do with a lion or a cat, you will see later, if you haven't already noticed. Shortly about me: my name is Engin Diri, I'm from Germany, as you may hear from my weird accent. I'm working at Pulumi as a customer experience architect. I do everything: cloud transformation, cloud enablement, so whatever questions the customers come up with, I'm here to help. So that's me, shortly. If you want, you can follow me on some of the social media; I even have a GitHub handle. Let's switch to the agenda for today. Okay, I'll give you a short introduction, and I'll keep it short: I'm not going to bore you with extensive details about Pulumi, Tekton, Backstage and so on, but give you just enough information that you know how to follow and what we're going to build here. Then we come to the tutorial part, where we see all the pieces brought together and actually working; that's the story behind this. And then we do a wrap-up and Q&A. The idea of the talk is this: throughout the day we saw different CNCF projects, and sometimes it's interesting to see them working together, to ask, okay, how can I use one tool to enable a second tool? And what does the second tool do to deliver more user experience, or to speed up my deployment process? That's what this is all about. There are some pieces missing; I left them for you folks to check out, to say, okay, how can I add, for example, a missing piece?
But as an idea, it gives you a very good user journey from beginning to end. So let's start with the first tool. I chose Pulumi as the infrastructure as code tool. You could also use Terraform, or your preferred tool of choice; it doesn't change the idea. The idea here with Pulumi, or with our infrastructure tool in general, is to say: okay, I have different providers I can use, different possibilities to connect to cloud providers and create Kubernetes deployments, for example. If you use a different tool of your choice, yes, the language will maybe be different, but the idea is the same: I write my infrastructure as code and can save it for collaboration with my colleagues. Here, in this case with Pulumi, as you can see, Pulumi offers languages like Go, Python, TypeScript, or Java. So you can choose your programming language of choice to write infrastructure as code. This is useful, for example, when you say: okay, we are heavily using Python for our application, let's use Python also to create our deployment. So this is the part here. Of course, Pulumi is an open source tool. It has many, many integrations with existing systems: you have webhooks, you have secret management, so everything comes out of the box. That's Pulumi, but now let's dive into what we're going to use with Pulumi, because when we see the code later, you will recognize it and say, ah, okay, that's what Engin meant with the high-level overview. So this is the Pulumi architecture. As you can see, we have a language host. If I choose, for example, Java as my programming language to create infrastructure as code, the Pulumi CLI detects that you are using Java and starts a gRPC service in the background with your language host, translating Java to gRPC calls. Everything goes into the CLI and engine; the CLI and engine are the main core.
And then, depending on what provider you're using, for example you say: I want to provision infrastructure on Alibaba Cloud. The Pulumi CLI detects: okay, you are using Go and you want to use Alibaba Cloud. It automatically downloads the Alibaba Cloud provider from GitHub and starts it, also as a gRPC service. Then they start to communicate: create a resource, update, delete, and the result of every action gets stored in a state file. And then you can decide what to do with the state file. You can save it locally (maybe don't do that, because you could delete it and end up with orphaned resources), you can upload it to a bucket in an object storage, everything is supported, and you could even use the Pulumi SaaS offering for free and say: hey, Pulumi, here's my state file, please take care of it. But again, I talked with somebody during lunch and he said they have a heavily regulated environment; they cannot go out to a SaaS service. Then they use, for example, an object storage and just save the file there. Okay, that's the high-level architecture of the Pulumi engine. What does a Pulumi program look like? Everything starts here, on the right side, with the diagram. A Pulumi project is the wrapper around everything. The moment you put a Pulumi.yaml file in your folder, Pulumi detects it as a Pulumi project. You will see later in the code what I mean with the Pulumi.yaml file. Inside the Pulumi.yaml file, I can put some default values in. For example, you say: my Kubernetes cluster should always have six worker nodes per default. You can put it there. The config in the Pulumi.yaml file is always the default config. And then we head over to the stacks on the right side. Now you can say: okay, I use this Pulumi program to deploy different environments. Dev, QA, prod, you name it. You can use it for ephemeral deployments. And inside a stack, you can override the values.
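As a sketch of what this default-plus-override setup looks like on disk (the key names and values here are illustrative, not taken from the demo repository):

```yaml
# Pulumi.yaml: the project file, with a default config value
name: my-infrastructure
runtime: nodejs
config:
  nodeInstanceType:
    type: string
    default: t3.large      # the project-wide default
---
# Pulumi.dev.yaml: the dev stack overrides the default
config:
  my-infrastructure:nodeInstanceType: t3.small
```

Any stack that does not override the key simply inherits the default from Pulumi.yaml.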
So imagine you have a dev stack, and you don't want your developers or your team using the expensive worker nodes they could use in production. You could say that per default everybody gets the AWS t3.large, for example, but development doesn't need this, it's a very small Kubernetes cluster, so you can override this stuff. That's what stacks are for. The next point is resources. These are our building blocks, our Lego pieces. We can say: take the Kubernetes resource, take the S3 bucket resource, and stitch them together. And the glue between the different resources are the inputs and outputs. Every resource creates outputs, and every resource has inputs. So I can say: when the Kubernetes cluster is up and running (and this is what I have in our tutorial), the Kubernetes cluster gives me the kubeconfig. I can now take the kubeconfig and feed it into my deployment or Helm deployment. I can say: here's the kubeconfig, use it. And that's the story we're going to follow here: build the resource, get the output, put it into the next resource. From that, you can build a graph. In the background, Pulumi creates a DAG and says: okay, Engin wants to create this infrastructure, this part I can provision independently, and for this one I have to wait until that resource is finished. So Pulumi takes care that everything happens in order and gets executed. And the interesting part, which we will also see in the tutorial: you can use the output from one project as an input for another project. So now comes the interesting part: we can model separation of concerns. We can say: okay, the database team owns the code for creating databases. Me as a user, I can maybe use a ticket system (Jira, ServiceNow, whatever) to tell them: please create me a database. They create it, and then I can just reference their Pulumi program and get the values out, like database URL, password, and so on.
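That cross-project handover can be sketched with a stack reference. For brevity this example uses Pulumi's YAML runtime; the org, project, stack, and output names are made up:

```yaml
# A consuming project reads outputs from the database team's stack
name: app-team
runtime: yaml
resources:
  databaseStack:
    type: pulumi:pulumi:StackReference
    properties:
      name: acme/database-team/prod   # org/project/stack, illustrative
variables:
  # Pull the exported output out of the referenced stack
  dbUrl: ${databaseStack.outputs["databaseUrl"]}
outputs:
  dbUrl: ${dbUrl}
```

The same pattern exists in every Pulumi language (for example `new pulumi.StackReference(...)` in TypeScript).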
So that's all about Pulumi for now, and this is all we'll see during the tutorial. Now we come to the next one, and now you see why I say the lion, because the Tekton icon is a nice cat. When we use Tekton very cleverly in our projects, it can become a lion, because it has superpowers. Shortly on Tekton: Tekton is an open source framework. The main contributors are Red Hat and IBM, and Google is also contributing. And it belongs to a foundation. In the last weeks we all heard about licensing and all that stuff; here with Tekton we know it's in the CD Foundation and will be governed there. So from this side, it's very, very safe to use. What does Tekton offer us? Tekton offers us composability, let me say it like this. I can create tasks and bundle them together. That makes it very powerful. I can really build my own steps and then distribute them inside my project, inside my company. Nobody needs to create the Docker build for my Go application ten times. I create it once and I distribute it. The newer versions of Tekton also support OCI bundles, so I can create OCI bundles, upload them to my container registry, and people can then just reuse them. I'm going to use the OCI approach in the demo too. Declarative: I don't need to say much about this. Anybody who has worked with Kubernetes knows the power here, and declarative also works for Tekton. Everything is visible, everything is changeable, and what you see in your Git repository is probably what you also deployed on your Kubernetes cluster. Reproducible: it runs in containers. This is very, very helpful. Some CI/CD systems don't run in containers; they have specific worker nodes. Here we know every task runs in a container, and it will always be the same container, same task. That's very good; we ensure immutability here. And of course cloud native. I mean, yeah, it runs in Kubernetes, so it's cloud native for me.
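To make the "declarative, runs in a container" point concrete, here is a minimal Tekton Task sketch of the kind you could write once and share instead of rebuilding the same build step ten times. Task, parameter, and image choices are illustrative:

```yaml
# A minimal Tekton Task: each step runs in its own container image
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: go-build
spec:
  params:
    - name: package
      type: string
      default: "./..."
  steps:
    - name: build
      image: golang:1.21        # same image every run, so the step is reproducible
      script: |
        go build $(params.package)
```

In a real setup the Task would also declare a workspace for the source checkout; this sketch only shows the shape.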
And again, on the benefits of using this, I mentioned some of the points. It's customizable: yes, we can adapt it to our needs. It's reusable: that's what I love so much, you can create tasks and share them, I just mentioned this. It's extendable: Tekton offers a Tekton Catalog and a Tekton Hub. So you can create your own components, as I mentioned before, and share them inside your company through the Tekton Catalog. People can browse and just reuse them in their own pipelines. Very, very powerful. It's standardized: yes, it uses the Kubernetes resource model, so we get standardization out of the box without thinking about it. And it's scalable, because it grows with your cluster; it really grows seamlessly. What are some other benefits? Beyond Tekton Pipelines there's a whole ecosystem waiting for us to use, so I put some of them up with the icons. We have Tekton Pipelines, that's the main one. We have Tekton Triggers: we can create our own triggers on basic stuff like Git changes. Somebody pushes new stuff to a GitHub repository, or to your Gitea, or to your Azure DevOps; we can configure it and say, hey, please execute this, please run the Tekton pipeline. You can connect it to your Jira, you can connect it to ServiceNow, it doesn't matter. As long as you can get information out, you can send it to Tekton. You can do some interceptor work, transforming the body of the payload, and then execute whatever you think is right. Very powerful. The Tekton CLI: if you need a CLI, you can execute everything through CLI commands too. The Tekton Dashboard, which you will see: I deployed the Tekton Dashboard because, from a developer perspective, I like to see dashboards. It's also there. The Tekton Catalog I just mentioned, and the Tekton Hub: if you ever use OpenShift, for example, you will see the Tekton Hub is connected to OpenShift. You can then use, out of the box, the tasks other people in the community wrote.
The Tekton Operator, which I also use in the tutorial, makes it easier to deploy all the pieces of Tekton. I just need to deploy the operator and define in my CR what I want, and it automatically deploys it for me. And the latest one I like very much, and it's really important for our security needs, for our security posture: there is now also support for Tekton Chains. So I can sign my images, I can check for artifact provenance, I can even sign Tekton tasks and say: okay, this task is signed, and I can tell Tekton to only use tasks which are signed. So you really exclude the situation where you maybe execute Tekton pipelines you don't want. You can globally disable unsigned ones and say: only signed ones, and the signing procedure is up to you to define. Okay, what does a pipeline look like from a high-level perspective? We have the pipeline; every pipeline consists of different tasks, and every task has steps. And you can run them independently: task A runs, and then you can even say, hey, task B has a dependency on task A, so please run after task A is finished. So you can really create a nice structure depending on your needs. If you, for example, need to update a ticket or close an issue, you can do this before you proceed to your next step. So you have a really, really powerful way to create dependencies here. And the entity which executes my work is the so-called TaskRun, where I can execute a single task, or the PipelineRun, where I can execute a whole pipeline. PipelineRuns also offer further possibilities; for example, I can define a service account. I can say this pipeline doesn't need cluster-admin rights; it just needs rights to write to a storage, or to create a deployment in Kubernetes. You can configure this on the PipelineRun and say: the pipeline runs with this service account. We have it in the demo.
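A hedged sketch of both ideas, the runAfter dependency between tasks and the service account on the PipelineRun; all names are illustrative:

```yaml
# A Pipeline where "deploy" only starts after "build" has finished
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: go-build
    - name: deploy
      runAfter: ["build"]       # explicit ordering dependency
      taskRef:
        name: deploy-app
---
# The PipelineRun executes it with a least-privilege service account
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  pipelineRef:
    name: build-and-deploy
  taskRunTemplate:
    serviceAccountName: pipeline-deployer   # no cluster-admin needed
```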
Or you can say a specific task runs with a certain service account, because it needs to access a secret, maybe. And then, at a high level, for Tekton Triggers: as we said, there's an EventListener, an event comes in, we look into the TriggerBindings to see what we configured, and then, depending on your interceptors and your binding, you can say: please execute the following pipeline. And then the pipeline starts to run. So Tekton Triggers is really, really powerful functionality. Okay, so far for Tekton. Now, for this tutorial I chose an engine for policies. I wanted to be sure that we can define policies, and I wanted a policy engine which also works very well with Tekton, for example. We'll come later to why I chose Kyverno in this case. Kyverno, for everybody who doesn't know it, is a very Kubernetes-centric policy engine, and the name derives from the Greek word for "govern"; look it up. And what are the key capabilities here? What I like very much about Kyverno in this case is that it treats policies as Kubernetes resources. So I can define a policy the same way I write my other Kubernetes resources, and just carry on with the Kyverno resource. It has different capabilities for validating, mutating, or even cleaning up resources. So a deployment comes in and I see somebody put in five CPUs, and I can say: hey, wait a minute, that's not something I would like, so I reject this deployment. Or you can mutate the deployment: somebody forgot the resource requests, and you say, wait a minute, if you forgot to set your resources, I'm going to set them for you. So you mutate the request. Or another interesting thing we just saw in the talks before: sidecars. You could use Kyverno to mutate a deployment to automatically inject a sidecar into your pod. Or you could use it to update a sidecar. Imagine you own the sidecar and the development team doesn't even know that it exists.
But you can take care of delivering people the latest sidecar version all the time. Just some ideas. Also powerful, and we just had it with Tekton Chains: container image verification. That's a big plus for me when choosing a policy engine. And you can set everything up in tools like Git, for example. The policy rule types I just mentioned: we have generate, mutate, validate, image verification, policy exceptions, and cleanup. Exceptions are also very interesting: depending on certain conditions, you can exclude resources from policies. And cleanup is also very nice. This is how Kyverno works; I'll jump over this quickly, okay? In a nutshell: an API request comes in via kubectl, for example, and before it gets applied, the admission webhook intercepts it and executes the rules. There's a lot going on in the Kyverno engine, but for us as users: a request comes into the API, Kyverno intercepts it, looks it up, runs all the engine stuff, says yes or no, and then things either get deployed or not. And now comes the part why I love Kyverno in conjunction with Tekton: it already offers out-of-the-box policies for Tekton. That's really, really cool, because maybe I don't need to think about writing rules at the beginning. Maybe I don't even know how; in my old company we had this situation. Compliance was owned by a team that doesn't know how to program. For them, even YAML is like rocket science; they look at you and ask, can I not just click on a button? So inside a larger company we could argue with these people and say: hey, let's use Kyverno, let's use some of the default policies already listed here, and then, if we see the benefits, we can start to write our own policies. And here we see some out-of-the-box policies for Tekton already. I will use one in the tutorial, and that's very cool. Okay, so you could ask: why Kyverno and not something else?
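To make the validate capability concrete, here is a small policy in the spirit of the community catalog samples, blocking pods whose containers omit CPU and memory requests. Policy and rule names are illustrative:

```yaml
# A Kyverno validate policy: reject pods without resource requests
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests
spec:
  validationFailureAction: Enforce   # "Audit" would only report violations
  rules:
    - name: check-container-resources
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "CPU and memory requests are required."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    memory: "?*"   # any non-empty value
                    cpu: "?*"
```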
What are some of the features that make Kyverno more than just an admission webhook? Background scans and reporting, event creation; that's some nice functionality on top of it. I like the offline usage of the Kyverno CLI: I can now run the Kyverno CLI in my Tekton pipeline to validate stuff without even being connected to a cluster. So I don't need to wait for somebody to say, hey, your deployment failed; I can do it shift-left, while creating the stuff, or at least when the pipeline runs. We have real-time visualization of our violations (difficult word combination) with the policy reporters, or there is something built in. And again, I just mentioned this: it has a huge policy catalog. This helps you grow adoption very quickly inside the company. Nothing is more annoying than having a nice, cool tool for your use case which then misses some stuff, so you need to build everything yourself; it kills your story. When you have something to show management out of the box (look: hey, you forgot the resources, Kyverno blocks it), that's very nice. So you don't start completely empty: take some policies from the catalog and replace them later with your own stuff. Okay, next one, I'll hurry up: KubeVela. And this, I think, is a Chinese project, I think from Alibaba, from Alibaba Cloud; somebody correct me. So I'm very happy that I'm here talking about my tutorial and I have a piece which is maintained here in China, very cool. KubeVela is heavily connected with the Open Application Model (OAM), so I cannot talk about KubeVela without talking about the Open Application Model. Just shortly, again, I don't want to bore you too much with details, and there are many, many talks, I think also in this area. What is the Open Application Model? It's a way to manage our cloud native applications, a specification; you can look it up.
And it provides a very flexible way to define and deploy our applications, and it's vendor agnostic, which is also very cool. So with KubeVela I create a layer, whoops, and people may not even know what is underneath it, because the platform team takes care of the integration. I can just say: hey, please expose my application of type web service on port 1990, and it will take care of it; it will create the load balancer, it will create the ingress. I don't even need to think about what the ingress is. It takes care of everything and makes portability very easy. And the idea of KubeVela is that developers think in application architecture, not in infrastructure. I work at Pulumi, we do infrastructure as code every day, and yeah, it's true: most of the time, most developers really don't care about this. They're like: hey, here's my code, I don't care whether you did the VPC very perfectly and so on, just run it. It becomes a problem when the application is not running. Think about it: nobody in the company comes to you when the infrastructure is working perfectly fine. But the moment the application, the thing that generates the money for your business, is not working, then you have a problem. So that's the point: being application-centric, and this is what OAM is, an app-centric approach, can help. So what does KubeVela do in these terms? KubeVela is the runtime of the Open Application Model. There are different runtimes too, but I think KubeVela is the reference implementation of the Open Application Model. KubeVela leverages Kubernetes as the delivery control plane. It's a CNCF project, and it offers you a multi-tenant, multi-cluster approach. So you can really define your application as a developer: you know clusters A, B, C are available, you can just reference them in your application, and KubeVela takes care of deploying it to your multi-cluster environment. And it's extensible; that was also a good point.
So if you're not happy with the current add-ons, maybe they don't fit your needs, you now have the option to create your own add-ons using CUE, for example, or to create custom definitions. We will see this in action. Here is an overview map of how KubeVela fits into the system. As we can see, we have our CI part; we use the Tekton one. Where is it? No, it's not working. Okay, it's not working. Ah, yeah, okay. So, well, okay, that's the Tekton part. It looked better in the preview than it does in real life. There's the add-on catalog we just talked about, and then of course we have our day-two operations on the right side. And KubeVela is continuously working on new functionality, so I don't know when you last worked with it, or maybe you haven't had a chance to look at it at all. It offers a very cool UI; I have it in the tutorial. It has a GitOps approach with Flux, so it now automatically adds Flux as a GitOps engine, and then you can also deploy everything via the GitOps approach using KubeVela. And of course observability; you need observability. Okay, what does an application in KubeVela look like? The Application is the top-level entity, and then we start with the components. A component is mostly associated with a microservice, and I can choose from different types. I can say: hey, this is of type web service, for example. And components, they are not just one; you can have n components and define your whole structure. You can say: my application, the shopping UI, consists of the shopping cart and the catalog; everything is a component. And then to this top-level application I can add traits, which enhance your components with additional functionality. You can say the web service also has an ingress. That is a trait, for example.
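A trimmed KubeVela Application sketch showing one web service component plus an ingress-style trait; the image, domain, port, and names are made up:

```yaml
# An OAM Application: component (what to run) plus trait (how to expose it)
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: shopping-ui
spec:
  components:
    - name: frontend
      type: webservice             # a built-in component type
      properties:
        image: ghcr.io/example/frontend:latest
        ports:
          - port: 1990
            expose: true
      traits:
        - type: gateway            # trait adding ingress-style routing
          properties:
            domain: shop.example.com
            http:
              "/": 1990            # path -> container port
```

The developer only states the intent; the platform team's definitions decide how the load balancer and ingress are actually wired up underneath.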
You can also add policies to it, and, I did not do this for this tutorial, it also offers some kind of workflow engine. But then again, there are so many workflow engines out there; I was not sure how that would work out. Maybe it makes sense, have a look into it if you like it. Okay, last but not least, and this one is going to be very quick because everybody knows Argo. If you don't know Argo, please look it up. It's the GitOps engine next to Flux, and it fulfills all the GitOps practices: we can declaratively create our deployments, it's versioned and immutable, it's automatically pulled when there is a change in your system, and it's continuously reconciled. So Argo CD completely fulfills the requirements of the GitOps Working Group; all four principles are fulfilled. This is the architectural overview of Argo CD. We have the Argo CD UI, everybody needs a CLI, and then Argo CD itself consists of different pieces: the API server, the repository server (oh, I have the repository server in there twice), Redis, Dex, and Argo CD Notifications, and this lives on the cluster. I can even set up Argo to do multi-cluster deployments, so what I create in Argo is then hub-and-spoke. I can say I have a central control-plane Argo, then I attach different clusters to it, and then I can use ApplicationSets to deploy my applications, depending on requirements, to different clusters. Very, very powerful, and you should see this; we're going to use it for our tutorial too. And last but not least, to close the loop: Backstage. Backstage is a relatively new project in the CNCF. It was initially created by Spotify, and you have to think of Backstage as an internal developer portal. So you now have a guided way to talk to your customers. There are two parts to Backstage. First of all, you see the catalog of all your applications.
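Staying with the hub-and-spoke Argo CD setup for a moment: the ApplicationSet mechanism mentioned above can be sketched like this, with the cluster generator stamping out one Application per cluster registered in Argo CD. Repo URL, paths, and names are made up:

```yaml
# One Application per registered cluster, generated from a single template
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
    - clusters: {}                      # enumerate all clusters Argo CD knows
  template:
    metadata:
      name: "{{name}}-guestbook"        # {{name}} is the cluster's name
    spec:
      project: default
      source:
        repoURL: https://github.com/example/platform-gitops
        targetRevision: HEAD
        path: apps/guestbook
      destination:
        server: "{{server}}"            # the cluster's API server URL
        namespace: guestbook
```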
You can configure Backstage to scan your Git repository, to scan wherever you save all the artifacts of your work, and it will automatically create a catalog. Then you can see who's working on what, what products we have, who the owner is. So you get a whole structure; you will see this in the demo, where it makes more sense. And we also have the situation that we can offer so-called software templates with Backstage. So your development team or your infrastructure team can self-serve their infrastructure or projects. They say: I need a Go application. They go to Backstage, they answer some questions you can define (it's completely up to you), and Backstage provisions everything for them. So with Backstage we cover our different requirements: we want speed, we want scale, and we want to reduce chaos. We don't want the fragmentation where everybody does what they want: hey, I created a Git repository here for whatever, and I use Makefiles, and so on. No, we have a central way now, and this is Backstage in the middle: the best of speed, scale, and chaos control. And on top of it, Argo CD and Pulumi heavily support this approach, because you can define your infrastructure, you can deploy applications, everything is possible. So, as I just mentioned, Backstage is very cool for developers. And because it's an open source tool, it's heavily customizable; you can extend it with plugins. Underneath it uses Material UI, TypeScript, and, was it React? Yes, the React framework is underneath. So when you come into a situation where you want to extend it, this is the software stack you're going to need. I created a Backstage plugin, and it's not that difficult. The moment you've got it, you can start creating plugins for your company so people can see everything without leaving Backstage. Okay, tutorial time. Enough of the theory; everybody talks about theory.
Let me talk about the architecture, okay? How does the architecture look? I created here a so-called GitOps platform. On top of the GitOps platform we have the platform team. I created an EKS cluster; I use Amazon, and the EKS cluster is created with Pulumi using TypeScript. So everything, the VPC, the gateways, everything I need, is all defined in Pulumi. And then it creates the Kubernetes cluster for us, with some of the deployments already in it. The middle column is the deployed workload. Maybe you can't see it, but Backstage, Argo, Tekton, and KubeVela are already deployed. And Backstage is built to listen to a specific GitHub organization; everything that happens in this GitHub organization will automatically be picked up by Backstage. So now the development team comes and says: hey, I want to create a new project. That's the green arrows. They go to Backstage and order their project. What Backstage does in the background: it creates a pull request in the platform team's repository for Argo CD, and it also creates the blueprint of the Go application in the team's own repository. That's the green arrow going up here. This one is where the code of the Go application lives; the development team can work with this. And this is the pull request to the platform team's Git repository, which belongs to them; they decide. So they get a pull request, and the platform team is like: oh, there's something new. They review it and say: oh, that's nice, we'll do this. They accept the pull request. Argo CD says: hey, there's new workload I need to deploy. And this new workload is our Tekton pipelines. And when the Tekton pipeline runs, it will automatically deploy this application via KubeVela into our Kubernetes cluster. So much going on here. Okay, that's the initial bootstrapping, all the day-one stuff. And now you can imagine: every time the development team works on new code, we talked about Tekton Triggers.
The Tekton Trigger will see the changes and trigger the Tekton pipeline again: either rebuild the image, because there were changes in the logic, or just execute, because they changed something in the KubeVela definition (we just talked about how you can add traits or components). So every change gets detected and executed. Okay, I will show this now, and I hope it works. But so far, any questions? Feel free to interrupt me; the session is for you folks. So if there are any questions before I switch to the code, ask; otherwise we can also do this afterwards, if that makes you happier. Hello. As you know, when we use the Tekton pipeline to do the deployment, sometimes our task is a very, very large task, and we have the scenario that we need to separate the task into pieces. For example, we separate the task into task A and task B. And we know that in a publisher/subscriber system, when task A has passed, its status gets promoted to a channel, and task B subscribes to the channel so it is triggered after task A is successful. But as far as I know, task B can only subscribe to the status of task A, and in some scenarios we may need some material, some result, from task A. Let me use an example: in task A we provision our cluster; we create a cluster and wait for the cluster to be ready. And in task B we need to do some actions on that cluster, so in task B we need to know which cluster it is. My question is: is there any way, or any global variable, so that we can pass on task A's result? So it can pass a cluster ID from task A to task B. So, just so I understand: you're talking about the cluster creation, and one cluster is still reconciling or not fully ready.
And you want to know how it's possible to get feedback about this? No, sorry: task A provisions a cluster and waits for the cluster to be ready, but task B needs to do something on the cluster. So task B needs to get something like a cluster ID or cluster name from task A. That's how we can do something on that cluster in task B. Maybe we discuss this afterwards, because that's a little bit bigger; sorry for this. I'll come to you again after the talk and we can talk a little bit. Is this okay? Okay, no problem. Then I'll just continue, because it's a bigger question. Okay, so let me show you everything in action. What I created here is the demo code; you will see the QR code, you can look it up. I created two folders, 00-infrastructure and 01-kubernetes. We talked about Pulumi; we said we can create different kinds of stacks. You see the Pulumi.yaml file is inside, and it defines that everything in the folder is a Pulumi program. So I create one Pulumi program in the infrastructure folder, where I can define everything related to the creation of the infrastructure: the gateway and everything is here. I created a stack called dev; this is here. I said you can define stack-specific config which differs from the defaults. So my development stack will be deployed in eu-central-1. If I create a second stack, a prod stack, I could then say, hey, this one is in US West, for example; it doesn't change anything else. Inside is the code, and here we create our Kubernetes provider. Where are you? Here: here we create our EKS cluster, and we see the EKS cluster also has outputs. Where are you? Here, this is the kubeconfig. As I said, every resource has outputs and gives us, for example, the kubeconfig. So I can say: create a Kubernetes cluster in AWS, and then create a Kubernetes provider using this kubeconfig.
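For the record, coming back to the audience question about handing a cluster ID from task A to task B: Tekton's task results are built for exactly this. A hedged sketch with illustrative names:

```yaml
# Task A writes the cluster ID into a declared result
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: create-cluster
spec:
  results:
    - name: cluster-id
      description: ID of the provisioned cluster
  steps:
    - name: provision
      image: alpine:3.19
      script: |
        # ... provision the cluster and wait until ready, then:
        echo -n "my-cluster-123" > $(results.cluster-id.path)
---
# In the Pipeline, task B consumes the result as a parameter;
# referencing the result also implies task B runs after task A.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: provision-and-configure
spec:
  tasks:
    - name: task-a
      taskRef:
        name: create-cluster
    - name: task-b
      params:
        - name: cluster-id
          value: $(tasks.task-a.results.cluster-id)
      taskRef:
        name: configure-cluster   # must declare a matching "cluster-id" param
```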
So this is the definition. Let's have a look at how we deploy the single applications. Here again it is a TypeScript project. And here, this is very important for the whole idea: with a stack reference I can reference values from the 00-infrastructure program and say, please give me the kubeconfig from that stack, because I am not the owner of the Kubernetes cluster. Maybe the infrastructure team is; I separated this. Somebody runs the infrastructure deployment, and I, as a platform engineering team or a platform developer, can just reference it and start deploying all my applications. So we see here, for example, I deployed NGINX as a Helm chart, then external-dns because I wanted to show everything with DNS, here I deployed Tekton, and so on, and Argo CD. What we created here, which is also very cool, as you can see: I created Kyverno, but this is not coming from Pulumi directly. Pulumi does not know about Kyverno. What I did in the background is create a composable abstraction: a component resource called Kyverno, which has the real deployment inside. So now I hide the implementation of Kyverno, and somebody else can use Kyverno as a component in their Pulumi code without needing to know the implementation. If things change one day, the person including Kyverno does not notice. They say: I included version 1.0 of the Kyverno component as an npm module. Then I, as the platform team, say: I created a new Kyverno version, and they can just include it. So the trick here is creating transparent abstractions using component resources, to hide the implementation and reduce the blast radius, and people can just use it, because sometimes a person does not know what the best setup for Kyverno is.
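A minimal sketch of what the stack reference and the Kyverno component resource could look like in TypeScript. This is not the exact demo code: the stack name, the `kubeconfig` output name, and the `Kyverno` component API are assumptions based on the walkthrough, and as a Pulumi program it only executes inside `pulumi up`, not standalone.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Reference the infrastructure stack owned by another team.
// "myorg/00-infrastructure/dev" is an assumed stack name.
const infra = new pulumi.StackReference("myorg/00-infrastructure/dev");
const kubeconfig = infra.getOutput("kubeconfig");

// Build a Kubernetes provider from the referenced kubeconfig.
const provider = new k8s.Provider("eks", { kubeconfig });

// A component resource hiding the Kyverno Helm deployment.
// Consumers only see this class, never the chart details inside.
class Kyverno extends pulumi.ComponentResource {
    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("platform:index:Kyverno", name, {}, opts);
        new k8s.helm.v3.Chart(name, {
            chart: "kyverno",
            fetchOpts: { repo: "https://kyverno.github.io/kyverno/" },
        }, { parent: this });
    }
}

// The consumer's side: include the component, pass the provider, done.
new Kyverno("kyverno", { providers: { kubernetes: provider } });
```

The design point is the one from the talk: consumers depend on the component's npm package version, and the platform team can evolve the internals without breaking them.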
So what they do: they just include Kyverno in plain TypeScript with a new Kyverno(...), change some properties, maybe set some dependencies, and that's it. In this way I deployed Argo CD, Backstage, everything. I will share the code as a QR code so you can have a look into it. How does it look in action? Now we come to the last part, where we see everything working together. Okay, so this is Backstage deployed, this is Argo CD deployed, this is my GitHub repository that I connected Backstage to, this is KubeVela deployed, and this is the Tekton dashboard deployed. Okay, so let's say we are the development team. We go back to Backstage and I log in here; in this case I chose GitHub as the auth provider. So I log into the system, and hopefully the internet does not break on me. Okay, now I am signed in. We created our catalog. As you can see, I also created a test application; this one is our test app, and I can now see the relationships between them. This app belongs to the development team, it is owned by this person, very good. If it had an API, I could also publish its API. But the interesting part is: I see all the components available, and I can also browse through domains, locations, and all the users. What happens if I want to create a new one? We go to Create, and we see the different templates I mentioned. This template is completely written by myself; you can write every template to your needs. In this case we have a brief description: it creates a new Go app with the following features: a GitHub repo, Tekton CI, a KubeVela application, and an Argo CD application. So let me choose this one. It asks for some input; what it asks for is up to me.
So I name it kubecon-cn and I can add some demo text to it. And as you see, the owner is mandatory: I have to say who owns this component, and I can say the whole development department owns it. In the next step it asks me where to create the new GitHub repository; I say in my organization, and the name of the repository should be kubecon-cn. Then the next step shows me the information. You can create as many steps as you want; it really depends on your process. I click Create, and now the magic happens in the background. As I mentioned before, it creates a pull request in the Argo CD repository, it creates the application's GitHub repository, and I get all my links here. I can go to the PR and I can see my application code. So let's see how it looks. Now I am the platform owner: I get an email, a notification, Slack, whatever, saying somebody created a new application, the new KubeCon application. I can review the code and say, yeah, okay, that looks fine; I can run a pipeline for config checks and so on, and then accept the pull request. Before we accept it, we also check our Argo CD: we see Argo CD currently only has the test app from before deployed. Okay, that's fine. Now we also check the application code. Backstage created this folder called kubecon-cn. It also templated the README, so the templating of the project worked perfectly fine. I have my Tekton pipeline here, I have my KubeVela application definition, perfectly fine. So as a developer I can start to work. The only thing missing is the platform team saying, yeah, that's fine. So we squash and merge, we confirm, and now we go to our Argo CD, let me refresh, and we see it automatically detected that there is a new application with new Tekton pipelines.
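A hedged sketch of what such a Backstage software template could look like. The action names (fetch:template, publish:github, catalog:register) are real scaffolder actions, but the repo owner, paths, and skeleton layout are illustrative assumptions, not the template from the demo.

```yaml
# Sketch of a Backstage scaffolder template; names and paths illustrative.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: go-app-with-tekton
  title: Go app with Tekton CI, KubeVela and Argo CD
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Application details
      required: [name, owner]        # owner is mandatory, as in the demo
      properties:
        name:
          type: string
        owner:
          type: string
          ui:field: OwnerPicker
  steps:
    - id: fetch
      name: Template the skeleton (Tekton pipeline, KubeVela app, README)
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          owner: ${{ parameters.owner }}
    - id: publish
      name: Create the GitHub repository
      action: publish:github
      input:
        repoUrl: github.com?owner=my-org&repo=${{ parameters.name }}
    - id: register
      name: Register the component in the catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml
```

The pull request against the Argo CD repository mentioned in the talk would be one more step, typically a publish:github:pull-request action against that repo.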
It automatically deployed the new pipelines, and it already started the build of the image. When I go into my Tekton dashboard now, you can see the pipeline runs: it started the KubeVela deployment, and it also started the pipeline to build everything. When we look into this, this is the pipeline we saw in the picture, and these are the different tasks; every task has steps. You see, I created a task for cloning the repository, I use Kaniko to build the image, and, most importantly, I write the image digest back into the repository. And then we see the KubeVela task is also executed: a vela up to apply the application. This takes some seconds; it should not be that long. By the way, with Kaniko, if you don't know it, you can build a Docker image without a Docker daemon, which is very powerful. So the build of the Docker image happens in my EKS cluster; I don't have a dedicated worker node running somewhere. So now it is pushing the image to GitHub, because I want it pushed to the GitHub Container Registry. And now I tell KubeVela: please deploy the image with this digest. Tekton then also pushes the changes to the repository. So let's look at how it looks. This is my application code. And if we have a look now, we see there is a package, the package Tekton just created, and we see this is the image I created here, with the latest tag, fine. And when we go into the KubeVela files, you see there was a change just now, and there in the kustomization YAML, yeah, there is the SHA. So everything should be working now. Then we go back to our KubeVela UI: you see the kubecon-cn app is deployed, and we see all the information here. These are the components; I can now add additional traits to it. And the application is deployed so far; if I need any changes, I can just update the Git repository.
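Sketched in Tekton YAML, the build pipeline described above could look roughly like this. The git-clone and kaniko tasks exist in the Tekton catalog; the update-manifest and vela-up tasks, and all names, URLs, and parameters, are illustrative assumptions rather than the demo's actual definitions.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
    - name: source
  tasks:
    - name: clone
      taskRef:
        name: git-clone            # Tekton catalog task
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://github.com/my-org/kubecon-cn.git  # illustrative
    - name: build-image
      runAfter: ["clone"]
      taskRef:
        name: kaniko               # builds the image without a Docker daemon
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE
          value: ghcr.io/my-org/kubecon-cn:latest          # illustrative
    - name: update-manifest        # write the new digest back into Git
      runAfter: ["build-image"]
      taskRef:
        name: update-manifest      # custom task, illustrative
      params:
        - name: digest
          value: $(tasks.build-image.results.IMAGE_DIGEST)
    - name: vela-up                # apply the KubeVela application
      runAfter: ["update-manifest"]
      taskRef:
        name: vela-up              # custom task, illustrative
```

The ordering matters: the digest from the Kaniko build is committed to the repo first, so the subsequent vela up deploys exactly the image that was just built, keeping everything GitOps.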
Okay, let's head back to our slides. Wrap-up: what did we see? We created a platform entirely in code, using an infrastructure as code tool; in this case it was Pulumi. We created Backstage as a front-facing portal to interact with the platform, so we now have a UI where people can interact with our platform through the golden paths I provided them. We saw that Tekton and Argo CD play very well together; because everything is Kubernetes, it just works, which is brilliant. What is missing, of course: we still have to implement triggers, we should add some kind of authentication for the users, Dex for example, and we should think about security and observability. Yes, there are some missing pieces, but I think the idea of creating a full story using Tekton with other parts of the ecosystem works perfectly fine. And yeah, that's it so far. Thanks to everybody who stayed to the end. Xiexie! Any questions? Thank you, I have a question about KubeVela in the graph. I see that Argo CD is the last thing that triggers the deployment of the new version, right? And Tekton is the runner that makes the change in the KubeVela configuration so that Argo CD can trigger the deployment. So my question is: basically, I could replace KubeVela with something like Terraform, right? Yes, of course. Yeah, so when does the KubeVela run happen in your example? I did not follow. Yes, of course, sorry, maybe it was too quick. In my Tekton setup I have two pipelines. When the image gets built, it automatically runs a vela up, because the image changed and I want the application definition to carry the latest image. And every time I change something in my KubeVela file without changing the logic, maybe I want to add a trait or a component, only the deploy pipeline executes, and it has just one step inside: the vela up, you see?
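For reference, a minimal KubeVela Application of the kind discussed above could look like this sketch. The webservice component type and scaler trait are standard KubeVela catalog entries; the app name, image, and port are illustrative assumptions, not the demo's file.

```yaml
# Sketch of a KubeVela Application; specifics are illustrative.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: kubecon-cn
spec:
  components:
    - name: kubecon-cn
      type: webservice                        # built-in component type
      properties:
        image: ghcr.io/my-org/kubecon-cn:sha-1a2b3c4d  # pinned by the pipeline
        port: 8080
      traits:
        - type: scaler                        # traits can be added by the team
          properties:
            replicas: 2
```

This is the file the application team owns: adding a trait here, without touching the application logic, is exactly the change that triggers only the deploy pipeline.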
I use Kustomize to change the image tag, because the image gets built by Tekton and the SHA has to be brought back into the kustomization YAML, since we want to do everything GitOps. So what the one pipeline does: the moment the image is built, it takes the SHA of the new image and pushes it into the Git repository, into the kustomization YAML. And when I do a vela up, I am not doing a plain vela up: Kustomize first replaces the SHA with the latest one inside the application YAML file of the KubeVela app, and then it deploys with the latest SHA. Okay, okay, thank you. So that is the chain here. And yes, of course, you can replace KubeVela with vanilla Helm charts. So I just need to update the template to use the other tool? Sorry? I just need to update the Backstage templates to use something else, like Terraform, as a replacement for KubeVela, right? Yeah, if you want to offer a new Backstage template where people can choose between KubeVela and Terraform, you could create a checkbox here: KubeVela, Terraform, or Helm. As the platform owner you can give people the choice, because, as I mentioned, all of this is code you wrote. So whether it is KubeVela for everyone, or a choice between KubeVela and whatever else, it is up to you. I see, thank you. Okay, I will be around. Hello, thank you. I just want to confirm that the configuration of the application and also the application code are in the same repository? Yes. The idea was, let me show it here: why do I use KubeVela? Because KubeVela is easy for developers to change. So everything is inside, like a monorepo: I have the Tekton files, they are not somewhere else, and the application team can still change the Tekton files. Argo will deploy it.
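A hedged sketch of the Kustomize piece: the kustomization.yaml the pipeline rewrites could look like this, assuming the image name and file layout below (both made up for illustration) and assuming the Application's image field is wired into Kustomize's image transformer.

```yaml
# kustomization.yaml (sketch; names and layout illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - application.yaml          # the KubeVela Application definition
images:
  - name: ghcr.io/my-org/kubecon-cn
    newTag: sha-1a2b3c4d      # rewritten by the pipeline after each build
```

The pipeline step would then run something like `kustomize edit set image ghcr.io/my-org/kubecon-cn:sha-<new>` and commit the change, so the rendered application always points at the freshly built image.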
So I just gave them this: here is the good setup, and if they want to change it, they can change it. And now comes the thing: with Kyverno, you can check it. You can say: if they do something I don't want, I can block it. But the application team is completely independent now, because they own the KubeVela definition, they own the Tekton pipelines. If they want, they can change it; if not, they leave it as is. That is the idea behind this: making them independent while still conforming to the rules, you know? Because if you don't deliver this, a new project has to keep copy-pasting the stuff from somewhere else and so on. And now you say: I create the Git repository, I put everything in, this is all created by Backstage. So everything is created according to my rules, but you are free to change it, and with Kyverno I will check it: if you do something not good, I will block the deployment. So it is scalable; people have speed, as we say: speed, chaos, control, everything inside. I hope it makes sense. Thank you very much. Okay, then thanks again, and I will be around if you want to talk, so feel free.
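The Kyverno guardrail described above could be a validation policy like this minimal sketch. The specific rule shown, blocking images that use the latest tag, is an illustrative assumption, not the policy from the demo.

```yaml
# Sketch of a Kyverno guardrail policy; the rule itself is illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # block the deployment, don't just audit
  rules:
    - name: require-pinned-image
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must be pinned to a tag or digest; ':latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```

This is the speed-with-control balance from the talk: teams freely edit their own Tekton and KubeVela files, and policies like this catch anything that violates the platform rules at admission time.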