 And off we go. I want to thank everyone for joining us today. Welcome to CNCF's live webinar, Modern CI/CD with Tekton, Kaniko, and Kustomize. I'm Liddy Schultz and I'll be moderating the webinar today. We want to welcome our presenter, Jason Smith, an app modernization specialist at Google. A few housekeeping items before we get started. During the webinar, you will not be able to talk as an attendee, but there is a chat box on the right-hand side of the screen where you can drop your questions. Please feel free to put them there and we'll get to as many as we can at the end, or during, if Jason prefers — however you want your flow to go. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They are also available via your registration link, and the recording is also available on our online programs YouTube playlist under the CNCF channel. With that, I'll hand it over to Jason to kick it off. Thank you very much, Liddy, and thank you everybody for joining on an early Tuesday morning. Well, not too early, but we'll learn some interesting things today. Let's jump right into this. So today we're gonna talk about modern CI/CD with Tekton, Kaniko and Kustomize, as the title suggests. My name is Jason Smith. Some people know me as Jay; I respond to either. As mentioned, I'm an app modernization specialist over at Google Cloud. That's where you can find me on Twitter. You can see a little picture of me and my dog, and just putting it out there, forgive me if you hear barking in the background. She's still young and random noises cause her to go crazy. So we'll power through it. So let's look a little bit at our agenda here. 
We're gonna talk about the perfect tool, modern code pipelines, building the pipeline, maybe do a quick little demo, and then Q&A. Well, let's first talk about this so-called perfect tool. I mean, we are all looking for the best tools for our job. You know, if you're a mechanic, contractor, something to that effect, doing some kind of construction, that perfect tool may be a hammer, screwdriver, saw, any myriad of things. If you're a chef, the perfect tool may be a whisk or some kind of futuristic technology like you see in Back to the Future, where you just pop something in this microwave and it comes out a fully cooked meal. Maybe that would be the perfect tool for a dream chef. But we're all looking for the perfect tool to do the job. One thing I see a lot when we're talking about cloud native — moving to the cloud, moving our apps to the cloud — is we talk about how great it is to be on the cloud. We don't talk about how to actually use the cloud, how to deploy applications to the cloud in a way that is beneficial for the cloud. So what is that tool that will help us actually deploy modern applications? Well, the best tool for application design doesn't exist. So I'm gonna give you guys back about 55 minutes. Thank you for your time. Not really. What I usually get a lot of people asking me, either in my role or just in general, is everybody seems to want a solution — kind of a bundled solution for building code, for deploying code, that I can just kind of one-click install and everything's good to go. What I usually find with a lot of these use cases is there's a lot of customization that tends to happen after the fact. So there really is no such thing as a perfect tool, because every tool you deploy, you are gonna have to do some, let's call it aftermarket configuration. So you deploy it, now you're loading it up with shell scripts or just different types of commands. 
And sometimes I've seen people — myself included, I'm just as guilty as anybody else — who are just like, yeah, I don't even wanna touch this code pipeline anymore, because I'm pretty confident it'll be like a pile of Jenga blocks, and the minute I pull one out, the whole thing will collapse and nothing will work. So what I usually say is, instead of looking for the best tool, look for the best components and the best platform. What does this mean? Well, this means look for something that gives you the building blocks you need to build the best tool, something you can iterate on top of, rather than trying to find the best tool and then trying to tweak it. 'Cause if you're gonna wind up tweaking and customizing anyway, from my perspective, it's easier to start lower level than to try to customize on top of an opinionated system. Now, one great example of this whole platform-component idea, I like to say, is Kubernetes. Kubernetes is a platform for building platforms. That's how I've always chosen to see it. You hear a lot of people talk about how Kubernetes is the future, Kubernetes is the cloud, Kubernetes is great, everybody wants to use it — let's containerize my apps, microservices, yada, yada, yada. And that's all true. But if somebody asked me to define what Kubernetes was in one phrase, it would be: it's a platform for building platforms. It gives you a lot of different tools to run containers, a lot of these different declarative ways. It ultimately became the de facto platform for running containers, and a lot of that is because it abstracts away the infrastructure. You're still running on servers, you're still running on nodes, there's still a Kubernetes master, you're still dealing with load balancers. We didn't get rid of those components. We just abstracted them away, and you're able to declare it in the YAML files and then Kubernetes does its thing to make it work for you. 
So now I can build a lot of different things on top of Kubernetes in the cloud using traditional VMs — traditional objects, really. So I've seen people do machine learning, I've seen people do sentiment analysis, just a variety of different things on Kubernetes, doing additional customizations, because the API is declarative — as we all know here, we're all Kubernetes users — and it's extensible. So you can create your own controllers, create your own objects, create your own CRDs. A lot of people have done that. I mean, if we just look at the ecosystem — and apologies that this is an eye test so early in the morning, I'm not gonna ask you to read the smallest line — this is just a landscape of everything that is CNCF-related as of a few days ago when I added the slide. For all I know it's probably changed, but this is just a collection of all the partners, projects, et cetera that are part of the Kubernetes ecosystem. And a lot of it is because they were able to iterate on top of it. So you'll see some companies here that have been around for years and years and years, but they've been able to turn their product into something cloud native because Kubernetes gave them the API to essentially extend their application to be more cloud ready. But it's also not a magic bullet, because you can't just throw something into the cloud and call it done. I've seen a lot of people come up to me and say things like, oh, let's move to the cloud, we want to containerize our application, microservices — and then you start asking questions, and then it's like, oh, okay, well, let's take a step back, I don't think we've thought this plan through yet. So Kubernetes isn't a magic bullet, and you really shouldn't be thinking about your CI/CD pipeline as a magic bullet either. You shouldn't be thinking, okay, well, I want this solution that's just gonna make everything happy, gonna make all of my developers happy, and whatnot. 
So now that we've covered kind of a primer, if you will — talking about Kubernetes, talking about best practices, talking about how Kubernetes is a platform for building platforms rather than just being the perfect platform — how do we build applications on it? Well, the way we build applications on it is we have to look at it the same way we look at Kubernetes: as a platform for building platforms. So we want to build a code pipeline, but we want it to be declarative. We want to be able to iterate on it. We want to be able to templatize it. We want to be able to expand upon it as needed and make changes with the least amount of friction possible. So if we look at maybe just a very basic diagram: we have what our data center can look like, what the cloud can look like. We have the infrastructure, we have our servers or nodes, we have Kubernetes which is abstracting that away. We need a tool to abstract away the code deployment portion too, but we could build that on top of Kubernetes. So we have a tool called Tekton — or there's a tool called Tekton, it is open source. It is part of the CD Foundation, which you could probably call a partner foundation to CNCF, as they're both kind of subsidiaries of the Linux Foundation. It uses Kubernetes-native components that are declarative, reproducible and composable. Basically everything is an extension of Kubernetes. So everything you declare in Tekton is creating pods, it's creating containers, it is using Kubernetes components to do it. So if you know Kubernetes, you can figure out Tekton. There are then triggers for automating build processes. So let's say you want to create a trigger: if something is pushed to a specific Git repository, maybe with a specific tag or on a specific branch, it needs to do X, Y, Z, but if it's on a different branch, do this. It comes with a concept called the catalog, which is a bunch of reusable tasks and pipelines. 
We're gonna dive into a little bit of what tasks and pipelines are. But basically there are a lot of components that are gonna be very similar regardless of whether it's your pipeline, another company's pipeline, or some project's pipeline — something like deploy to Docker Hub or deploy to whatever your repository is. Those will probably be pretty similar, so no point reinventing the wheel. There's a catalog of common build tasks where you can just plug your variables into the different parameters. And there are a lot of products that are starting to integrate it. So Jenkins X comes to mind, Knative as well. The fun fact about Knative: Tekton actually used to be part of Knative as a product called Knative Build. A quick primer on that story: over time, people realized, oh, you know what, it's so good, it shouldn't be limited to Knative, it should be something more. So it spun out to be its own project called Tekton. Let's talk a little bit about what makes Tekton work. Now, Tekton, you know, isn't a single product. If you went to GitHub and you looked at the Tekton project, you're not going to just see one repository that has everything you need. There are a bunch of different ones. Obviously there are some things like the website and community and whatnot, but there are, I would say, two major components, and those are the pipelines and the triggers. There are a few other ones — there's a dashboard, there's a CLI, those are kind of self-explanatory — so let's talk a little bit about these other ones. So a pipeline has three primary components. There are obviously other ones, but let's talk about the primary ones. You have a step, which is a single operation in a CI/CD workflow. So that could be like running pytest on a Python application, or running a build, or something to that effect. A task is a collection of those steps, and these are instantiated on a Kubernetes pod. 
So whenever a task is executed, it spins up a pod, completes said task and then spins that pod down. Now, a pipeline is a collection of tasks in order. So once task A is complete, do task B; once task B is complete — you know, so on and so forth. Triggers are the other component. Now, a trigger is the component for eventing, as I like to call it. So it's responding to an event in the world. Basically, you have these multiple components, such as the event listener, which is essentially a CRD that enables a declarative way to collect HTTP events with a JSON payload. So let's talk about GitLab, let's talk about GitHub, let's talk about any Git repository, really. You know, you can set up webhooks where, when a specific event takes place, like a pull request or a push, it will then push to a specific endpoint to do something. So you can actually expose your event listener to the worldwide web, set up a password or a security key, and whenever a certain event happens in your Git repository, the webhook within that Git repository can then trigger an event. And for what it's worth, it's not limited to Git repositories. Obviously that's the most common iteration, because that's how we code — we push code to our Git repo and then do our CI/CD — but there are other things you can use to trigger builds. Then there's the trigger template. Now, this is where you declare the resource for the trigger. So an event happens — a Git push happens and there's new code on the main branch. The trigger template is: okay, great, what are we gonna do with that new code? What is the action that I want to take place? Run a task, run a pipeline, do we wanna push this code, do we wanna containerize it? And then the trigger binding essentially binds the trigger template to the event listener. And it also can pass parameters from the JSON payload. 
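To make that step/task/pipeline hierarchy concrete, here's a minimal sketch in Tekton's v1beta1 API. The names here (run-tests, build-image, build-and-deploy) and the Python image are illustrative, not from the demo repo:

```yaml
# A Task: runs as a pod; each step is one container, executed in order.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests            # illustrative name
spec:
  steps:
    - name: pytest           # a single step = one container
      image: python:3.9
      script: |
        pip install -r requirements.txt
        pytest
---
# A Pipeline: tasks wired together in order.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: test
      taskRef:
        name: run-tests
    - name: build
      taskRef:
        name: build-image    # assumed to be defined elsewhere
      runAfter:
        - test               # ordering: build only starts after test succeeds
```

Applying these with kubectl creates the definitions; nothing actually executes until a run object references the pipeline.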
So things like the Git repository URL or the branch — things that are gonna show up in that JSON payload — you can pull out and essentially turn into a variable and pass along to the trigger template. You'll get access to the slides later, so if you can't see this, I apologize. But basically, here's kind of an idea of what a task would look like. This is a build task. As you can see, there are two main steps here, the Kaniko one and the pytest one. We'll dive a little deeper into what happens with the Kaniko one — it's actually pretty cool and we'll talk about it. But as you can see here, it looks like your standard Kubernetes object. You know, I'm calling the specific API, it's the Task kind, give it a name, the parameters, resources — these are essentially inputs and outputs. So in this example, the input is the Git repository that it's gonna be getting, and then the image is the name of the image that I want built, that I'm gonna be pushing to my container registry. Of course here, you just declare the steps and give each one an image. Very important to point out: every step is its own container. And because of that, you can actually create incredibly intricate steps if you have, like, a very common thing. So you can see here, I'm using a Python container — I can pull that from anywhere — run pytest, good to go. And then you're also able to pass along arguments, as you can see, just kind of standard pytest arguments. But let's say I have a very niche use case where there isn't a current container that exists, or perhaps there is a container that exists but it's kind of 80% of the way there — I need to add a few extra lines, a few extra features to it in order for it to work. That's fine. You can put whatever container you want in the image field there, and that container can be its own step. So when I say building — having the components to build what you need — this is exactly what I'm talking about. 
You're able to actually build your own step, your own job. If you wanna do a specific type of analysis as part of the pipeline, you're able to do that. And then here, kind of, is a pipeline. So as you can see, it takes in the resources. It passes along Git, it passes along the image — the variables, parameters. Then we list the different tasks that are part of the pipeline in order, and this is what it does. So here we have our build task; I have a separate task called deploy which does the push to Kubernetes. I mentioned Kaniko a little bit. Now, one thing a lot of us have probably had to deal with is building Docker images — and how do we do that? Most of us probably do Docker build or Podman build or whatever tool it is we wanna use, but ultimately it boils down to: I need to have some kind of CLI on a machine that is running some kind of Docker daemon or Docker machine, whatever tool I'm wanting to use, run that command and then do the push. So we're spinning up VMs just to do that. Kaniko is an interesting project that is open source and governed by Google Cloud. Essentially it allows you to build containers right inside the cluster. So Kaniko is like a container image for building container images. There's an actual Kaniko image, and what happens is the image will execute the Docker builds, Docker pushes and whatnot within your Kubernetes cluster, build an image and deploy it. And so you don't need to worry about a specific type of Docker daemon or anything like that. It supports your standard Dockerfile format, so no surprises there. At this point in time it does not support Windows containers. I can't say that it never will — I'm sure as the demand goes up, it might. It is also open source, so if anybody has extra cycles to help develop this, please, by all means, join in. 
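A build step using Kaniko inside a Tekton task might look roughly like the fragment below. The gcr.io/kaniko-project/executor image and its --dockerfile/--context/--destination flags are Kaniko's documented interface; the parameter names here are made up for illustration:

```yaml
# Sketch of a Task step that builds and pushes an image in-cluster
# with Kaniko — no Docker daemon needed on the node.
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    args:
      - --dockerfile=$(params.pathToDockerfile)   # where the Dockerfile lives
      - --context=$(params.pathToContext)         # the build context directory
      - --destination=$(params.imageUrl):$(params.imageTag)  # registry target
```

Because the executor runs as an ordinary container, it slots into a pipeline like any other step.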
Now, one thing we also have to think about is iterating on top of our code, on top of our Kubernetes deployments. The day-one application is gonna look one way. The day-500 application is gonna go through a lot of different changes, a lot of different patches. Heck, there might even be some changes within the actual Kubernetes API over the course of two years that you might benefit from — like how we just had the Gateway API alpha the other day, or last month. Maybe you wanna take advantage of that for the new version of the application. How do we actually iterate our application so it can constantly change without just becoming a mess of YAML? Well, that's where Kustomize comes in. Kustomize is a Kubernetes SIG project. It is essentially — I'd say it's kind of templating, but kind of not. It's essentially creating configs and then building on top of them. You can do a dynamic resource build. It's actually built into kubectl now. So in the past you had to download the Kustomize program and run kustomize build, yada yada yada. Now, as of, I wanna say, 1.14, it is actually a part of kubectl — kubectl, kube-cuddle, kube-control, let's not get into that — apply dash k to do the build. And it is not a traditional packaging tool in the same way you'd probably think of, say, Helm. It's more a way to organize your configurations and make it easier to iterate on new versions, create different patches, and so on. So what you would do is create your normal resources — your deployment files, your services, config maps, all that stuff — and then you'd create a kustomization.yaml. That kustomization.yaml essentially declares what is needed for the build: it needs these resources, it's gonna do this, gonna do that. Then of course you have your overlay. Now, this is where things get interesting, because the overlay allows you to add patches on top of what is already deployed. So you've set the Kustomize base, which is the base application, what you originally deployed. 
And then from there you can create a patch — let's say I want to increase the amount of CPU used on a given pod, or a new deployment to be part of that — and then, of course, name the resources. So here's just kind of a high-level overview, which comes from the GitHub repository. You know, I have some app; this is my directory and I have the base app, the base code. I create an overlays directory, and now we have a development application and a production application. So for the overlays, if I want to do a development or a production push, the kustomization.yaml will take the default base application — the base deployment, the base service — and on top of that, it will deploy a different CPU count or replica count for development than it will for production. So a lot of times people find this easier. I'm not necessarily trying to say that it is a replacement for Helm or Jsonnet or Skaffold or whatever tool you may be using today. It's really just a different way of thinking of things. I know sometimes charts can become difficult as time progresses and you start adding more to them. This makes it a little easier to iterate on top of it. But then again, it is a tool, not the tool. It's one that I like to use, though, because I find it easier to manage the code long-term. Building a pipeline. So as we mentioned, we have the reusable tasks, we have the reusable pipelines. What I can do — for this example, we're doing a Go app — is run a Go test when I push the code. So here's my pipeline, all the tasks: you know, I push my code here, run a Go test, build the image. We can also scan the image based on whatever parameters we set; if the scan completes, move forward, if the scan fails, stop and give us an error; deploy to canary. So there are actually a lot of good use cases with Tekton of people using Istio to do canary deployments. 
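The base/overlay layout being described can be sketched like this — the file names and the replica patch are illustrative, and the patchesStrategicMerge field reflects the Kustomize syntax of that era (newer releases prefer the patches field):

```yaml
# Directory layout (illustrative):
#   base/
#     deployment.yaml
#     service.yaml
#     kustomization.yaml
#   overlays/
#     development/kustomization.yaml
#     production/kustomization.yaml
#
# overlays/production/kustomization.yaml — patches the base for prod:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base               # pull in the base deployment and service
patchesStrategicMerge:
  - replica_count.yaml       # e.g. bumps replicas/CPU for production only
```

Then `kubectl apply -k overlays/production` renders the base plus the production patches in one shot, while the development overlay applies its own numbers against the same base.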
I do not have that in the demo, but I am gonna continuously iterate on top of the GitHub repo I have, so at some point it will be there. And then of course, once the analysis is done, do some kind of deploy. As you can see, it's not a singular line — there's some branching that takes place here. So you are able to program some responses in there, some kind of intelligence where if X happens, do this, do that, so on and so forth, which is awesome. And basically this is what it can also look like. So here I am: I'm writing code, push to Git repo, I have my Tekton pipeline, Kaniko is building the actual container, turning the code into a container, pushing it to our container registry — whatever it is you wanna use, whether it's Artifactory, Docker Hub, GitHub, there are so many to name. Then Kustomize deploys, and hey, I have a happy application. Let's see what that looks like practically. Give me one second. This is always the fun part, like jumping from one screen to the next. So while I do that, let me see if there are any questions in the chat. Yes, I am gonna share the repo in the slides. I'm working backwards here — in the slides, I am putting the bit.ly link to my repo. Let's see, can Kustomize be used with Helm? I believe so. I've personally never done it, but I've heard of people doing that. Can I increase the font size? Sorry, I've already — no. In the URL, you'll have the slides. So, all right, let's see here. And then also just as a side note, as somebody mentioned integrating Helm with Kustomize — I've heard of people doing that; as I mentioned, I've never personally done it. That being said, because what we're talking about is components to build a pipeline, I don't see any reason why you couldn't do that, realistically. On top of that, I've seen people use Tekton with other CD tools to do specific tasks, such as people doing Tekton to Argo to push Helm charts and whatnot. 
So, let me share my repo first, because we have people asking about code. All right, let me try to expand the font here. Yeah, there we go. Let me see what it looks like in the live webinar. Let me do one more. Perfect. Oops, maybe that's a little too big, because I can't see everything in it. There we go. So, here we have my different Tekton components. What we're gonna do is talk about each one individually. So, first, let's talk about resources. Resources are the items that we're gonna pass along in the pipeline. In this example — I mean, you can have multiple resources, but in this example — I'm wanting to tell it: okay, this is where the Git repository is, where the code lives that I want you to build. And then this is saying: this is what the resulting image should look like. And here, obviously, I can replace that with whatever, but yeah, here's what the image should look like when all's said and done. When I jump back to tasks, I have a build task, which is the one I showed earlier. I set some parameters, such as the Dockerfile path. Workspaces — I should have mentioned this earlier — a workspace is essentially a path on the volume, while the container is running in the pod, where it will store the code temporarily while it is doing the build. Obviously, when it's pulling code down and trying to do a build, it needs the code to live somewhere, so that will be your workspace. Here, I have an app directory, Dockerfile, source path — where's the source code — the Kaniko context, and then here are the inputs. So it's input: I'm telling it, okay, go to this Git repository in that resource I showed you earlier; output: create this image. It's gonna run a test, and then it's gonna use, as you can see, the Kaniko project image called executor — I guess, tomato, tomahto. 
And then it will use Kaniko to essentially run this command to build and push the container to a specific registry, where I listed image. Deploy is pretty much the same thing, only it's deploying code. So, as you can see, I have the apply -k there, and this is using a kubectl image. So basically, this is a Docker image — or container image — that exists purely for the purpose of executing kubectl commands. Google offers a lot of different ones, and there are just a lot of different ones out there, and of course, as I mentioned, you can create your own. If you don't like what this kubectl one does, you can customize it and put it in your own registry, or create something entirely different. Now, this is not necessarily best practice, largely because if you deploy this way, it's hard to know who owns what, but this is just kind of for demo purposes, so it doesn't matter. There are different ways to do this. And then, of course, we have the pipeline, which I showed earlier. Now, an interesting thing here that I didn't talk about: we have this concept called a run. You have task runs, pipeline runs — and then there are some new versions being tested now to kind of replace some of the resources, but that's not important at this point in time. This is basically what tells the pipeline to execute, because we don't want the pipeline to just randomly run. A pipeline run is a single file here; if I do a kubectl apply on this file, it will essentially tell the pipeline to run. Oh — triggers. That was a way to manually trigger things, but what if I want to automatically trigger things? So we have a listener, and here's our event listener. As I mentioned, I gave it a name, and because I don't want just anybody accessing my event listener, sending a payload to it and getting it to do whatever, I put in a secret. I give it the value of what kind of event type. 
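As a sketch, a PipelineRun — the file being kubectl-applied here — looks roughly like this, assuming a Pipeline and PipelineResources like the ones described above (all names illustrative, v1beta1-era API where PipelineResources were still in use):

```yaml
# A PipelineRun is what actually executes a Pipeline — applying this
# file is the manual trigger.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run-1
spec:
  pipelineRef:
    name: build-and-deploy         # the Pipeline defined earlier
  resources:
    - name: source-repo
      resourceRef:
        name: my-git-resource      # PipelineResource pointing at the Git repo
    - name: image
      resourceRef:
        name: my-image-resource    # PipelineResource naming the target image
```

A plain `kubectl apply -f pipelinerun.yaml` kicks the whole pipeline off.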
You can use any event type that is provided by your Git repository. You have to read their documentation, though, because I know GitLab and GitHub name them differently. Then there's the binding and the template that it's gonna use, the service account, for security purposes, and the resources — you can actually set resource limits for the containers. This is the binding, and what it's gonna grab from the JSON payload from the trigger event: it's gonna pull the Git revision and the Git repository URL and pass them along to the trigger template. The trigger template then — the trigger template is essentially a pipeline run or a task run. So you're essentially saying: okay, the event happened, what do I want it to do next? It's pretty straightforward, but it's pretty nice too. And it's nice because you can just build on top of things. And real quick — you might have noticed that I had the app on there. Just a simple app. As you can see, I have a Dockerfile. It's just gonna look at the Git repository, which is this app, and then do the build. Let's see here. And then there are some manifests. Nothing too exciting, but as you can see, here's a kustomization.yaml file. Very basic, but it's declaring the resources. It's matching with a specific application, and these are the simple resources for a Hello World application. And now — I do need to update the readme. There's actually a script that I wrote that automates it, and I'm essentially just gonna decompose it into the readme, so bear with me on that one. And while I connect to my cloud registry here, let me see if there are any other questions. Let's see. Oh, thank you for using Tekton and Argo CD. So, your question about Tekton positioned as a tool to build other CI/CD tools: yes and no. 
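The listener, binding, and template just walked through fit together roughly like this sketch, using the triggers.tekton.dev/v1alpha1 API of that era. Secret names, payload fields, and the referenced pipeline are all illustrative, and field spellings (e.g. ref vs. name) have shifted between Triggers releases:

```yaml
# EventListener: exposes an endpoint that receives the webhook payload.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa   # the service account for RBAC
  triggers:
    - name: on-push
      interceptors:
        - github:                          # built-in GitHub interceptor
            secretRef:
              secretName: github-secret    # validates the webhook secret
              secretKey: secretToken
            eventTypes: ["push"]
      bindings:
        - ref: github-push-binding
      template:
        ref: build-template
---
# TriggerBinding: pulls fields out of the JSON payload into parameters.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: gitrevision
      value: $(body.head_commit.id)        # field names follow GitHub's payload
    - name: gitrepositoryurl
      value: $(body.repository.url)
---
# TriggerTemplate: what to create when the event fires — typically a
# PipelineRun with the bound params substituted in.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: build-template
spec:
  params:
    - name: gitrevision
    - name: gitrepositoryurl
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: triggered-run-       # each event yields a fresh run
      spec:
        pipelineRef:
          name: build-and-deploy
```

Point the Git repo's webhook at the listener's exposed URL with the shared secret, and each matching push yields a new PipelineRun.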
I would say that yes, it still fits that category in the sense that, if you look at Tekton as being a platform for CI/CD, then you can build your own CI/CD tools on top of it — such as Jenkins X and whatnot. However, the individual building blocks of Tekton can be used to build your own pipeline. I've seen both use cases. So I've seen people extend their Jenkins using the JX plugin to do specific use cases, but I also have seen some people who have just used straight Tekton to do all their deployments. There's no right or wrong way. Really, the point here is that we're giving you the basic components to build what you need. So there are some people who only use Tekton for the CI component and then use Argo or Flux for CD. There are some people who use Tekton for both CI and CD. There are some people who just use it for testing. There is no right or wrong answer for it. It's supposed to be the components, and yeah, a lot of people just build on top of that. 'I see Tekton mainly on CI. Jenkins X is good, is based on good ideas, but actually it's never been stable for CD', as you say — yeah, as I mentioned, there are people who actually do that. It's a very common use case. And yeah, from a security perspective, there's definitely a benefit to using Tekton with Argo CD for the kubectl apply of Kustomize. So, how would you bootstrap Tekton without external CI/CD solutions? If you can give me more information on that, I would be able to give you an answer. But yes, let's see here. Let's jump into this screen and then I'll show you what I've got. Oh, while I see this question just pop up: is it best practice to use Tekton in the same cluster with a different namespace? Yes, I would say that's the preferred way to do it. A lot of people, I've noticed — so one of the greatest use cases I've seen of Tekton, just in my field and whatnot, is people who want to build on-cluster. 
So people who don't want to have to reach outside to go to a third party to do their CI/CD — they're already having to reach out to a Git repository, granted; you can deploy GitLab or Gitea or whatnot in a cluster. But there are a lot of people who wanna have just everything inside of the cluster. So I've seen people use, say, Gitea or GitLab, deploy that into their Kubernetes cluster in one namespace and then deploy Tekton in another namespace. Their code never leaves the Kubernetes cluster, but then of course, for security purposes, they use RBAC and all that fun stuff. Let's see — 'how to deploy your CI if you don't have a system like, say, GitLab to deploy to K8s; how do I deploy it initially?' You can — oh, if you can elaborate on that a little bit. 'I've used Spinnaker but managed the pipeline specs through Terraform' — yep, cool. All right, so let's jump to this real quick. All right, so now, I'm using GKE because I work for Google and I have access to my Google Cloud Platform and whatnot. But I want you to know, Kubernetes is Kubernetes — you can use Tekton on anything, you can use my Git repository on pretty much anything. If you find that there is some kind of weird feature that I'm not noticing, just, you know, create a GitHub issue and we'll make it work. All right, so what do I have here? I'm gonna go ahead and actually show you the Tekton cluster — nothing interesting to see here today, of course, until I actually go through and set up Tekton, which you can do by running these simple commands. So here's just installing the pipeline. As I mentioned, there are all these different components. Tekton's also evolved — or at least the trigger portion has evolved a little bit — to where there are some built-in interceptors for very common event types: GitHub, GitLab, all of that. I'm going a little slower than I thought, but you know, it happens. Anyway, I'll just go back to answering questions. 
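For reference, the "simple commands" are roughly these — the latest-release manifest URLs documented by the Tekton project at the time (worth verifying against current docs before running):

```shell
# Install Tekton Pipelines (the core CRDs and controllers)
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Install Tekton Triggers (EventListener, TriggerBinding, TriggerTemplate)
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml

# The components land in their own namespace
kubectl get pods --namespace tekton-pipelines
```

Two applies and you have the building blocks on any conformant cluster; everything after that is just declaring your own Tasks and Pipelines.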
I don't know if anybody's ever had a random epiphany about how to improve a demo, gone and done it, and then it didn't actually make anything better. It's always a fun experience. So there's the Tekton CLI, which is... oops. I installed the wrong one; I installed the Mac version on a Linux machine. Alrighty then. And then I actually need to set another variable. Basically... actually, I don't think I do. So let me go ahead and take a look at our Tekton files. I think I need to replace the variable in one of the resources. Coffee hasn't fully kicked in. All right, so let's go ahead and do this. I usually like to build it one piece at a time; it makes it a little easier. So let's jump... oh wait, I want to be in the tekton namespace. Let's do some kubectl. Let's apply the resources first. All right, now I have the tasks, the same tasks I showed you earlier, so I'll go ahead and apply those. All right, now I'm going to apply the pipeline and its components individually. So let's take a look. In the past, you would just have to run the logs to figure out what's going on in the build, but because of the Tekton CLI tool, I can actually list that, and... hey, it failed. That's always fun. Yep, I made a fun change. But hey, we can also diagnose in real time, because all we do is run the Tekton pipeline run logs. Okay, so the Git resource is missing from the task run there. Didn't I deploy it earlier? I'm pretty sure I did. If there is a bug, I'll fix it, and that way you guys will have it available to actually try in real time later today. And I'm also going to continue iterating on this, so hopefully in the future we'll have material on how to do canary analysis and whatnot. Yes, I created a Git task, I'm pretty confident. Oh well, it might also just be a weird setting. Yeah, it's always something, I guess.
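The piece-at-a-time flow from the demo looks roughly like this; the folder names and the `tekton` namespace are illustrative, matching how the demo repo appears to be laid out rather than anything required:

```shell
# Apply the Tekton objects one piece at a time.
kubectl apply -f resources/ -n tekton   # PipelineResources (e.g. the Git repo)
kubectl apply -f tasks/ -n tekton       # the individual Tasks
kubectl apply -f pipeline/ -n tekton    # the Pipeline that ties them together

# List recent runs instead of digging through pod logs...
tkn pipelinerun list -n tekton

# ...and follow the logs of the most recent run in real time.
tkn pipelinerun logs --last -f -n tekton
```

This is how a failed run gets diagnosed live: `tkn pipelinerun list` shows the failure, and `tkn pipelinerun logs --last -f` streams the offending step's output.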
Well, because I want to get to your questions, I'll go ahead and jump back to the slides, we'll do Q&A, and we'll take it from there. So let's do this. All right. Oh, somebody's deployed Tekton on K3s. I have never tried that, and that sounds interesting. Oh, so first, for all the resources you can do a tkn... what's the word... it's tkn, and then it lists a bunch of different things that you can list. You can list resources, you can list tasks, you can list task runs. In fact, let me go ahead and just show you. I'm still learning this whole screen share on this platform, so bear with me. So here I run the tkn command. And as you can see... in fact, here, there we go. As you can see, I have this resource option, so I can just do tkn resource: list, describe, create, delete. So, list my resources. But I can do that with just about anything. I can list the EventListeners, or ClusterTasks, TriggerBindings, Conditions, all of that good stuff. Let's see here. "You need to deploy both the Tekton Operator and the pipelines; it simplifies some management." That's good. "Is there debug functionality that allows you to connect and execute steps inside the build container, like CircleCI?" I do not have an answer for that. I don't know if it exists today; if it doesn't, I'd be willing to bet there are some tools, or there's probably a pull request or something on that. "Does Tekton run on Minikube and kind, to test locally?" You can. I've never tried it on Minikube; I've tried it on kind, and you can. At the end of the day, Kubernetes is Kubernetes, so it doesn't matter. I just run on GKE because, you know, I work for Google. I mean, one, I like GKE, but I work for Google, so it's easy for me to get access to it. But this could just as easily be anything else. I mean, I actually have a Kubernetes cluster running on eight different Raspberry Pis, and I've run it there using kubeadm.
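For reference, the listing commands being demonstrated look like this. Each resource kind supports `list` and `describe`, and most also support `create`/`delete` (note that PipelineResources and Conditions were later deprecated in Tekton, so some of these apply only to the version current at the time of this webinar):

```shell
# Running tkn with no arguments prints the resource kinds it knows about.
tkn resource list        # PipelineResources
tkn task list
tkn taskrun list
tkn pipeline list
tkn pipelinerun list
tkn clustertask list
tkn eventlistener list   # Triggers components
tkn triggerbinding list
tkn condition list
```

So "I can do that with just about anything" means exactly this: the same verb set applied uniformly across every Tekton object type.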
So, you know, Kubernetes is Kubernetes. Let's see: "Some CD tools use a convention for this. Have you seen common patterns for organizing pipeline definitions?" Not off the top of my head. In theory, you could even use Kustomize. Ironically, you could probably use Kustomize to define all the resources to deploy Tekton and to manage the pipeline files. So that would be an option. Let's see here, Tekton with Armia... I'm pretty confident it's possible to do that. Yep. "What's the pattern people follow for this setup: GitHub accessible only on-prem, and you need to deploy to the cloud?" Usually service account keys, things like that, just to authenticate and grant the right permissions. "Then can you recommend a way to store Kubernetes secrets in the Git repo?" I don't know if there's a single best practice for storing secrets. Obviously I'm not following best practices, because in my version, if you actually look at my secret file, the password is right there. But I don't really care: it's "howdy y'all". There are a few different ways to manage secrets, though. You can use a secret manager, such as what you might see from different vendors, and there are also Kubernetes-native ways. Let's see here, any other questions? We have about 10 more minutes, so if anybody has questions, I'll be happy to answer them as best I can. Otherwise, I'll cut you guys loose and give you back your lovely Tuesday. Yes, there is a dashboard. It's still relatively new, but it is being built upon, and it didn't exist a while ago, so it's nice to see it going in that direction. It is an open source project, so obviously it goes through the same struggles that a lot of open source projects go through, like getting people to commit time. That being said, Google, IBM, Salesforce, and a lot of different companies are contributing to Tekton, and quite frankly are using it internally.
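The Kustomize idea mentioned above, using it to manage the Tekton pipeline files themselves, could be sketched like this. All file and folder names here are hypothetical, just showing the shape of a `kustomization.yaml` that treats pipeline definitions like any other Kubernetes manifests:

```yaml
# kustomization.yaml — manage Tekton pipeline definitions with Kustomize.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: tekton
resources:
  - resources/git-resource.yaml
  - tasks/build-task.yaml
  - tasks/deploy-task.yaml
  - pipeline/pipeline.yaml
```

Then a single `kubectl apply -k .` deploys the whole set, and overlays could vary the definitions per environment, which is one answer to the "organizing pipeline definitions" question.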
So, I can only expect to see better things coming down the way. Here is where the Tekton Dashboard is; I dropped it in the chat. Best practices for organizing? It really depends. What I do: I tend to live in the world of folders and subfolders. So I might have a subfolder called tasks, and that holds every task, but then within that folder I'll have further subfolders: these are tasks related to building my development branch, these are tasks related to building such-and-such, and so on and so forth. The very base of Tekton is Tekton Pipelines; that was the first component deployed as part of Tekton, and everything's built upon it. What I usually recommend really depends on what you're going for. If you want automated triggers, say somebody pushes to a Git repository or something like that, then you can install Triggers. That being said, you can just use manual pipeline runs or task runs to trigger things if you want to; it's up to you, and it depends on what makes sense. I know automation is what most people are going for, which makes it better to have Triggers, but if you want to run everything manually, then you can forgo it. I do recommend using the CLI tool, because it just makes it easier to view logs and see what's going on. As for the Dashboard, that's purely a personal-preference thing; I honestly barely use it, but if you want it, go for it. That would probably be the vast majority of the tools you'd actually need to use Tekton. That being said, I think there are some new features coming out. There's the Tekton Catalog, as I mentioned, and there's, I think, Tekton Chains, which is security related. But yeah, there are a lot of different things coming down the way as well. Coming down the pipeline. I do not have the... oh, I do have... did I deploy it? I don't think I deployed the Dashboard. It's a separate component.
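A manual run, the trigger-free option mentioned above, is just a PipelineRun object you create by hand. A minimal sketch, assuming a pipeline named `hello-pipeline` exists (the name is illustrative):

```yaml
# pipelinerun.yaml — kick off a pipeline manually, no Triggers needed.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  # generateName lets you create many runs without name collisions.
  generateName: hello-pipeline-run-
spec:
  pipelineRef:
    name: hello-pipeline
```

You'd start it with `kubectl create -f pipelinerun.yaml` (note `create`, not `apply`, because of `generateName`), or skip the YAML entirely with `tkn pipeline start hello-pipeline`. Triggers simply automate creating these same objects in response to events.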
So I can't really share that, because it's not installed. I believe it does. Cam? I'm not familiar with that product; I'm going to look it up later. I always like learning about fun new open source projects. Any other questions? Otherwise I'll give you guys back seven minutes. Okay. Well, I want to thank you all for your time. Follow me on Twitter, follow me on LinkedIn. As you can see here, the slide deck has a Bitly link to my GitHub repo. I'm going to constantly iterate on it, because I just like sharing, in the name of open source. It's always good to share, create demos, all that good stuff. And yeah, thank you very much. Thank you so much, Jason. Thanks, everyone, for joining. And like I said, the recording and slides will be online later today, and we'll see you at a future CNCF webinar. Thanks so much, everybody. Thank you.