All right, thanks everyone for coming. I'm going to present "Mario's Plumbing". No, I'm not going to play video games in front of you — the title will make sense during the presentation. My name is Chmouel Boudjnah. I work on upstream Tekton, so I work directly on the project, mostly on the CLI: we've been developing the CLI, and as features land in Pipelines, the main project, we try to make them available and expose them in the CLI. I'll probably say "upstream", "midstream", "downstream", which can be confusing. When I say upstream, I mean the main Tekton project. Midstream is what we do as engineers at Red Hat: we take that project and validate it on OpenShift, making sure it works there. So if I mention upstream or midstream, that's what I mean. The agenda: I'll tell you about Tekton. There was another presentation about Tekton just before this one, so I'm not going to go as deep as that one did, but I'll still give you an overview, at least for the web viewers, so you know what Tekton is and how we built it. The main goal of this talk is to tell you how we do our own plumbing — the CI and all the other machinery that runs the project — what problems came out of it, where the process is coming from, what we're working on, and what's coming next in our plumbing, in our CI, which uses Tekton itself, and so what the end user is going to use in the future as well. So let me start with an introduction to Tekton.
First, the motto of the project: provide a set of shared and standard components for building Kubernetes-style CI/CD. It's cloud-native, Kubernetes-native: we plug directly into Kubernetes via the CRD system — Custom Resource Definitions, the extension mechanism that lets you expose your own object types alongside the built-in Kubernetes objects. And we're part of the CD Foundation, the Continuous Delivery Foundation, together with companies like Google, Red Hat, CloudBees, IBM, Pivotal, and many more. So, Tekton in a nutshell. It's Kubernetes-style pipelines, which means it's YAML — a bunch of YAML. You write a declarative pipeline, and those definitions get picked up as Kubernetes Custom Resources: you define your own types of objects — pipeline definitions, task definitions — and those get picked up by an operator, a controller that reacts to events. That's how you extend Kubernetes in general. Tekton runs the pipelines in containers. Every step you do in Tekton — you could write it as one huge shell script, but you can split it into multiple containers. The advantage over the big shell script is that you can reuse the facilities of different containers for different kinds of tasks, have small dedicated tasks, and share them with others. That all comes from Kubernetes — the Kubernetes style of doing this kind of CI.
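To make the "steps in containers" idea concrete, here's a minimal sketch of a Tekton Task — the names (`test-my-app`, the repo URL) are hypothetical, and the API version shown is the `v1alpha1` one current around the release discussed in this talk:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: test-my-app            # hypothetical name
spec:
  steps:
    # each step runs in its own container image
    - name: git-clone
      image: alpine/git
      command: ["git", "clone", "https://github.com/example/app", "/workspace/app"]
    - name: go-test
      image: golang:1.13
      workingDir: /workspace/app
      command: ["go", "test", "./..."]
```

Because each step picks its own image, the Git step and the Go step reuse existing containers instead of one big script in one environment.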
To build images, Tekton doesn't have the ambition to provide all the tools itself and be one huge monolith; it leverages other build tools — Source-to-Image, Buildah, Kaniko, Jib (for Java applications) and so on. Usually you leverage those tools to build your containers. So when we say "Tekton is building my container": we don't build the containers, we just launch the tools we leverage, and there are different ways to do that. We run on Kubernetes — we're Kubernetes-native — but we can deploy anywhere. It doesn't have to deploy to Kubernetes, even if deploying to Kubernetes is easier because you're running next to your CI; if you want to deploy to OpenStack or to Amazon or wherever, you'll be fine. And we provide a powerful command-line tool — well, I would say it's powerful — that makes it easy to list your pipelines, create them, and so on. It's very interactive and very nice, and the next release is even going to have emoji support. Now, the concept of a pipeline. I gave you the general overview, but Tekton has five main building blocks. The first, at the bottom, is the step. A step is one little part of your pipeline that says, for example, "check out my Git project", or "run my Go tests", or "run make build". Going from the bottom to the top: steps live inside a task, like in all the CI/CD tools you've seen before. The task lists all the steps and defines how it connects to your Kubernetes cluster.
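As a sketch of what "leveraging other build tools" looks like in practice, here is a hypothetical Task that launches Kaniko to build and push an image — the task name and parameter are made up, but the Kaniko executor flags are the tool's real ones:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: kaniko-build           # hypothetical name
spec:
  inputs:
    params:
      - name: IMAGE            # destination image reference, supplied by the run
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=/workspace/source/Dockerfile
        - --context=/workspace/source
        - --destination=$(inputs.params.IMAGE)
```

Tekton's job here is only to run the container; Kaniko does the actual building and pushing.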
And a pipeline is a set of tasks. So your full pipeline runs a lot of tasks, each of which has a lot of steps — that's the high-level definition at the top. I have another slide coming that gives you a better overview. Then you have pipeline resources. PipelineResources plug into pipelines, and they are the inputs and outputs of tasks. An input can be a Git repo, or a pull request can be an input as well; the output could be, for example, a Docker registry or any container registry. So you define your resources and how they act as input and output — some tasks have inputs and outputs together. For example, there's the pull-request resource: you say that as input you want a pull request from somewhere, so it checks out that pull request. It also lets you make changes from your task — you can add a comment by writing a file on the filesystem, and that pipeline resource, as output, posts the comment back to GitHub or whatever VCS you're using; we have multiple drivers. So that's my main point: you have inputs and outputs. After that you have the runners: PipelineRun and TaskRun. You have your pipeline definition, and then a bunch of runs with different arguments if you want. So you can have a pipeline like "build my code" — say a task that does Go testing — and as a parameter it takes a Git repo, any Git repo, but as part of the run you give the link to your actual repo. So it makes a bit more sense now. Coming back to what I was saying before, starting from the top:
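A PipelineResource is itself just a small object. A minimal Git input, in the `v1alpha1` API of the time (the name and URL are placeholders), looks roughly like:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-app-git             # hypothetical name
spec:
  type: git                    # other types include image, pullRequest, storage...
  params:
    - name: url
      value: https://github.com/example/app
```

A run then points a task's declared input at this resource, so the same pipeline can be replayed against any repo.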
We have the pipeline, which is the biggest thing and contains everything. Inside a pipeline you can have multiple tasks, and those tasks are ordered — there's an ordering, a DAG, that decides which task runs when. Inside your pipeline you say "run this after that task", and so on. The way it works is not complicated at all — very straightforward, very much in the keep-it-simple spirit: we create a file on the filesystem, and in each task a small daemon-like thing called the entrypoint waits for that file to be available (or deleted) and then runs the step. So the ordering works through a kind of file locking. It's very simple, but very efficient, because you don't need any persistent storage or extra daemons running; everything waits for everything else via a file on the filesystem, using emptyDir-type volumes. So you get ordering there, and everything flows through that. And then you take some runners — TaskRun or PipelineRun, it depends — and you can run them with different pipeline resources to run your pipeline again; you compose all your different tasks and different resources. So what do we provide currently — because there's new stuff happening and new stuff to come? Currently there's Pipeline, which is what we call the core of Tekton; that's what provides the whole machinery of building, and watching for whatever the user's configuration says. There's another project called Catalog. The Catalog is a set of shareable task definitions.
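The DAG ordering just described can be sketched as a Pipeline plus its runner — the pipeline and task names are hypothetical, using the `v1alpha1` API of the time:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-test         # hypothetical name
spec:
  tasks:
    - name: unit-tests
      taskRef:
        name: test-my-app      # an existing Task in the cluster
    - name: build-image
      taskRef:
        name: build-my-app
      runAfter:                # DAG ordering: only starts once unit-tests succeeds
        - unit-tests
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun              # the runner: one concrete execution of the pipeline
metadata:
  name: build-and-test-run-1
spec:
  pipelineRef:
    name: build-and-test
```

You'd create more PipelineRuns (with different resources or parameters) to run the same definition again.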
You get a repository with a lot of different tasks — building with Buildah, building with Jib, deploying with Terraform, lots of things like that. People can pick up those tasks and integrate them into their projects. There's no automatic way to reference a remote task or a catalog of tasks yet, but it's something we've been designing and that we want to have. Then the CLI, which is pretty straightforward, is the piece I'm personally involved in. It's what you'd usually use to interact with Tekton from the command line. Obviously it's only one way to do it — there are other tools; Jenkins X, for example, is a higher-level tool that does a lot more and also interacts with Tekton — but the CLI is currently the official Tekton way. Another interesting project is Triggers. Triggers 0.2 just came out yesterday, so it's pretty brand new. The way it works: it listens for HTTP requests — a webhook, for example from GitHub or GitLab — and it has definitions for how you want to handle that request and what you're going to create as a PipelineRun. So whenever there's a pull request on a given repo, with a given comment or whatever, a new PipelineRun gets started. That's the binding between the two. Triggers is very new, but it's a very interesting project, and there's a lot of work going on so that it's not just a GitHub webhook listener but much more than that.
And the language inside Triggers I find really interesting: the way you do filtering is with a proper small language called CEL, where you can express a bunch of conditions, and everything — how you want to handle the webhooks and web requests — lives inside your definition. The Operator is another thing our team at Red Hat has been working on: an operator to easily install and upgrade Tekton. I'm not going to talk about that, because I'm sure you've heard plenty about it here. And we have a dashboard — a really nice web UI that's being worked on heavily right now — which gives you a nice overview and lets you create and run your pipelines. Other people have developed different UIs as well — OpenShift, for example: OpenShift Pipelines has its own dashboard as part of the OpenShift console — but the Tekton Dashboard is the official one. It's also very easy to deploy: you just run a kubectl create and you get everything set up. So, what's Mario's plumbing? That's the title of my talk. What's plumbing? If you look at the dictionary definition — which contains a word I can't even pronounce — plumbing is what you have in a house: the pipes that hold everything together. For us, it means all the shell scripting, all the release setups, all that stuff. We have one repo where all those scripts live, and that's what keeps all the projects going: validating the projects, making the releases, testing the CI. All the configuration is there as well. That's what we call plumbing. Kubernetes and Knative use the name test-infra; I think plumbing is a bit more fun.
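A rough sketch of that CEL filtering in a Triggers EventListener — treat the names as placeholders, and note that the API group and the bindings syntax have shifted between early Triggers releases, so this is the general shape rather than an exact manifest:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-pr-listener     # hypothetical name
spec:
  triggers:
    - name: on-pull-request
      interceptors:
        - cel:
            # CEL expression: only react to GitHub pull_request events
            # that open or update a PR
            filter: >-
              header.match('X-GitHub-Event', 'pull_request') &&
              body.action in ['opened', 'synchronize']
      bindings:
        - name: pr-binding     # maps webhook payload fields to parameters
      template:
        name: pr-pipeline-template   # stamps out the PipelineRun
```

The interceptor decides whether a request matters; the binding and template turn the payload into a concrete PipelineRun.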
And we can have Mario on the slides, so that's nice. So, the initial plumbing. One thing I didn't mention is that Tekton came out of Knative — Knative being the serverless, functions-as-a-service, Lambda-style project on Kubernetes, whatever you want to call it. We were part of it at the beginning: Tekton used to be called Knative Build, and then it became Knative Pipeline — it was a bit messy — but it came out of Knative. Then we became our own project, part of another foundation, the CDF, and everything. So we graduated; we're not in Knative anymore. We can interact and have integrations between the two, but there's nothing tying them together. The thing is — not a problem exactly — since we came out of Knative, all our release scripts and everything were tied to Knative. So the main challenge we've been facing is to get away from Knative, to graduate, and to use our own things with our own knowledge. One of the things we've been using a lot is Prow. Prow is a project that came out of Kubernetes. I don't know what we were using before — that was three years ago — but it's the main CI system used by Kubernetes and OpenShift. We are still very tied to it, and one of our goals — and what I'll cover in this presentation — is how we can untie ourselves from Prow, really do Tekton on Tekton, and dogfood ourselves. Because a lot of the stuff we're doing is the stuff people have been asking us for. So we're trying to get there. That's the initial plumbing. Prow — yes, I was mumbling — Prow is a very interesting project. It has almost everything by now, because Kubernetes is such a huge project with a lot of use cases. And it's a bit the same kind of thing:
it's a native Kubernetes CI/CD system with its own jobs, called ProwJobs, but it's really written and optimized for the needs of the Kubernetes project — and OpenShift as well. We currently use Prow in some ways, and we use it midstream too, like I mentioned earlier. But our goal, the work we've been doing upstream and midstream, is to move away from that, do Tekton on Tekton, and really use our own tools for our CI. So let me describe Prow and its components a bit more — I think there's another talk about Prow soon, or maybe tomorrow... ah, it was earlier; sorry, I was in here. Good that it was covered. For people like me who weren't there, I'll explain what Prow is. Prow is made of different components. You have Hook — all this terminology comes from boating — which handles all the GitHub webhooks, the same job as the Triggers project I explained before. You have Plank, which figures out job execution and lifecycle: which jobs run, in which pods, and really controls the hardware resources. You have Deck, which is the dashboard: you get a full view of recent jobs, the plugins, and all that information — as a PR author you can see the whole history of your PRs and when they started to fail. I have another slide that shows it. And you have Tide. Tide is the merge gatekeeper: you configure Tide in Prow and say "for this repo you need this many code reviewers — one, two, or three depending on the project — and this kind of approval" to decide that a PR is going to be merged.
You've got Horologium, which basically kicks off the periodic jobs when necessary. And you get Sinker, which cleans up old jobs and pods — and that's actually really important. If you don't have something that cleans up your cluster, you run into problems very easily; it's something you never think about, but you always end up with a huge namespace full of pipelines — and we have a lot of PipelineRuns. So having something that cleans up is really nice, but we don't have anything like that built into Tekton right now; that's very specific to Prow. Here's the Prow dashboard. On it you can see what's been running: you have Deck and the status page, the merge requirements, and all those things. You get logs as well, with some highlighting — basically it greps for "fatal" and "error" and then shows the ten lines of context around them to explain the failure. But usually the failure is taken from the exit code: whatever exit code comes out of the Prow job, if it's one rather than zero, it's a failed job. So that was Prow. Now, we're in the process of using our own pipes, as I was saying. The first step toward independence: we created our own tektoncd/plumbing repo, we started to document all our plumbing needs, and we started to own all our CI scripts. Before that, we were getting all the plumbing and test infra from Knative — all the scripting and so on.
Now we have our own, and a lot of it is based on the same semantics as what Knative does — we don't get to reinvent everything — but it's tightened up and focused on Tekton's CI. We also have our own CI images: a lot of the base images used to be Knative-based, and now we have our own that are only for Tekton, not for all the Knative projects. So, the real Tekton on Tekton: we started developing a few tasks and made them available in the Tekton catalog. Those tasks build, test, and lint the code, and even do coverage. They publish images with ko. You're probably not aware of what ko does: it's a tool from Google, which is really nice when you're a developer — and even for production. It takes a full set of configs — the Kubernetes configs for deploying your service — builds your Go code in a very efficient way, then deploys it and rewrites the image references inside your templates, so you can use those images directly. And we generate a release.yaml. What's release.yaml? It's the big YAML template file that the end user typically takes and runs kubectl create on to get started. We publish releases on Google Cloud Storage, and that's what the task does. All these tasks are available to others, and we're aiming to be executable everywhere — to be able to do this anywhere. But the problem is that currently we are very, very tied to the Google infrastructure, and a lot of things run there. So part of our work is trying to untie ourselves, especially from the midstream perspective, because we don't run on GCS.
So we're trying to make things more agnostic to the GCS infrastructure. Now, a comparison of what's available in Prow versus Tekton. You have Hook — for us that's Triggers, and we call the logic that handles the web requests an interceptor. You have job execution and lifecycle — that's Pipeline for us. You have Deck — that's our Dashboard; I'm not sure why it's not on the slide, sorry. We don't have a merge bot, that's for sure. We don't have periodic jobs, but you can emulate that with Kubernetes CronJobs. And we don't have garbage collection, which is really annoying — I just end up doing oc delete --all (or kubectl), but it's not really efficient. Anyway, to get our way off Prow, we started to work on the integration — how Prow and Tekton work together. You take a job definition in Prow and say the agent is going to be a Tekton pipeline. We have a small agent that watches for those ProwJobs and generates the pipeline CRDs — the PipelineRuns — which start the pipeline and the tasks via the Tekton controller. That's how our current integration with Prow works. It's pretty straightforward — in the Kubernetes world everything is watchable, operator-lifecycle style: you just watch for the objects and adapt them to whatever you want to do. But we have some issues with it. The logs are not integrated — that was really hard, because we're doing one thing and they're doing another, so it's not very integrated. And the integration is tied to a very old version of Tekton — it depends on 0.3.
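The Prow-to-Tekton handoff just described looked roughly like this in a Prow job config — the job name and repo are placeholders, and the exact field names are my recollection of the test-infra Tekton agent, so treat them as assumptions rather than a verified manifest:

```yaml
# Hypothetical Prow presubmit: setting the agent to a Tekton pipeline
# makes the watching controller create a PipelineRun instead of a plain Pod.
presubmits:
  tektoncd/pipeline:
    - name: pull-tekton-unit-tests   # hypothetical job name
      agent: tekton-pipeline         # handled by the Prow/Tekton agent
      pipeline_run_spec:             # embedded Tekton PipelineRunSpec
        pipelineRef:
          name: unit-tests
```

The controller watches ProwJobs with that agent, stamps out the PipelineRun, and reports status back to Prow.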
And the current release is 0.10, which just came out yesterday as well — and it's incredible. And there's a complexity to Prow that comes with any really large project like Kubernetes: you need to cover a lot of use cases, but that brings a lot of complexity that's very hard for many people to figure out. So we stood up our own dogfooding cluster — a new Kubernetes cluster where we started to experiment, running these services there so we can test them without disrupting the main infra. The first thing that's really important, especially when you do CI, is logging. So: logs for everyone. That logging component is unfortunately very tied to the Google Cloud Storage infra — it uses some logging system whose name I can't remember — and it works for us because we run on GCS, but it won't work for everyone. We acknowledge that we need a more portable component there, and we've been working on it. Currently we have jobs that collect all the container logs — those containers are ephemeral, so we need to collect them — and we put them on GCS. To start moving away from Prow, we began with our releases: the releases now run on Tekton. We have different tasks for the different release types, and a cron job that triggers the nightly. Everything is code, on GitHub; every push gets deployed, and we work by pull request, obviously. Then, to get to CD — to do our continuous deployment inside Tekton — we started with one thing: continuously building images.
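The nightly trigger can be sketched as a plain Kubernetes CronJob that creates a PipelineRun — this is one simple way to emulate periodic jobs, with every name here (service account, image, file path) hypothetical:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-release        # hypothetical name
spec:
  schedule: "0 2 * * *"        # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          # hypothetical service account with RBAC to create PipelineRuns
          serviceAccountName: release-bot
          restartPolicy: Never
          containers:
            - name: kickoff
              image: bitnami/kubectl      # any image shipping kubectl
              command:
                - kubectl
                - create
                - -f
                - /config/nightly-pipelinerun.yaml   # hypothetical mounted manifest
```

Another option is to have the CronJob post an event to a Triggers EventListener instead of calling kubectl directly.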
We have a tool called Mario which takes care of building all the images together. We started to deploy all the Tekton resources with pre- and post-deploy testing, and all those steps were incremental on the way to running our own Tekton services. Let me go quickly over the missing parts. There's a lot of stuff missing, and it's all very new — we're only on 0.10 — and there's service configuration missing. We need chatbot integration: we don't have those chatbots, from Slack or wherever, to interact with Tekton, which is something Prow has. We don't have any merge bot, any final merger. We do have the Tekton Dashboard, but the problem is that access control is wide open — anyone can take actions — though I think there was a PR just two days ago to allow read-only public access. We need the tests to be portable: the point is that everything is very tied to GCS — for example building images and uploading them to GCS — and currently we are very tied to it; we need to be vendor-neutral. And we don't have monitoring, or tracking of resource versions and config. There are other little bits missing too. We don't have a way in a task to say "finally" — so that on failure or success you do something at the end. We don't handle failures well currently either, but how to do that properly is being actively worked on right now. We don't have output params: you can specify a parameter as input, but you can't say a task produces another value as output — say you build a new image, or you deploy a new cluster and generate cluster resources; you can't emit a parameter out of that. And we don't have optional inputs.
Inputs are compulsory, though you can set defaults. We don't have notifications, but we're in the design phase around CloudEvents. We're not yet able to have remote tasks — what I was talking about earlier — but there's a huge PR a colleague of mine has been working on heavily. And we don't have hooks, conditionals, or loops — things we may or may not want — and we've been working on those too; whoever is interested is encouraged to join the discussion. So: demo — well, a video that should take five minutes and land right on time for the end of the talk. Let me find it. It's a small demo. Normally I would demo the Mario bot that builds the images, but in this case I'm going to present a user-facing use case for Tekton, which was very easy to build. What I wanted to do: you have a web developer who needs to develop and deploy their application and keep working on it. Every time they open a PR, we want a preview environment to spin up, test that PR, and show what it does. The main point isn't the feature itself — a lot of CI/CD systems have that. The point is that it's very easy to integrate the deployment, because it's Kubernetes-native: you're reusing Kubernetes, so your CI and your deployment live very close to each other. It's very easy to implement, instead of needing external resources or external plugins. So what it does: it goes ahead, opens a PR, and hopefully submits it — well, not "hopefully", it's recorded, so hopefully I filmed it properly.
So it does that, and then it uses the webhook system I was talking about — Triggers — to get notified, create a new test, and start testing, building, and deploying. Let's see. Now the PR is sent — and I have a message from Vincent, my co-speaker, who was supposed to be here but is only here in a Slack message. Here I'm watching what's going on: it just created a new pipeline preview URL, and it's going to comment on the PR. Here — it posted a comment saying the checks are pending, and it goes on. A moment later, as it builds, you get a link to the UI — the OpenShift UI here, but the Tekton Dashboard works just as well. Now it's running: you can see the PipelineRun and everything getting built. Here's the CLI — the new CLI, with the emoji — and you can watch all the steps and all the building with it. It's building the image itself, doing the full build, and pushing it. There's the push. And now a live environment gets a new URL generated; that URL contains the change tied to your PR. So that's the demo. My main point is that it's very easy to build these things inside Kubernetes, with Kubernetes-style deployment. And that's mostly it from my side. I don't know how many seconds I have for Q&A. Here's the recap — you can read it, or ask me questions quickly. I'll take a few questions; if we run out of time, I'll be at the booth. Any questions?
Yep. Yeah — CloudEvents. So, we don't have any custom resources from Knative. We have CloudEvents, which is a standard for notifications of... You've seen one? Which one is it? I'm pretty sure I build this thing every day. Maybe you're talking about the APIs — the API definitions, the API versions, that kind of thing. What's the definition? Yes, in those cases, maybe you're talking about the namespace: you can have knative.dev/v1alpha1. But I don't believe we're directly using the CRDs from Knative, and we're moving away from that. Maybe I missed one — but in any case, our goal is definitely to move away from Knative; we're not tied to Knative. That's the main goal. Yes — that's a good question. So we... Can you repeat the question? Ah, yes, sorry about that. So the question is: what's the advantage of Tekton compared to Prow — why would you use Tekton over Prow? As I was mentioning just before, the difference is that Prow itself has been designed very much for Kubernetes — for Kubernetes upstream and for OpenShift. So it's very heavy and not very user-friendly, I would say, and it does a lot of things. We're aiming to provide building blocks for others to plug into, in a nice way: our YAMLs are very easy — a bunch of steps, very clear — while the other one's YAMLs have a lot of different things, a lot of configuration. It's a really big piece to run, and once you run it, it's really hard to get away from it. Yeah, that's correct: it's to execute CI on top of Kubernetes, in a native Kubernetes way. So — the question was whether your CI can work with anything — Tekton itself runs inside Kubernetes.
So you can't get away from that. But you can deploy anywhere: your deployment target can be something else entirely; you don't have to deploy inside Kubernetes. That was my point before. Yep — the question was whether Tekton manages artifacts. Tekton doesn't have an artifact manager; you can plug in an artifact manager from whatever project you want. We do upload artifacts to GCS directly. To come back to storage: we don't provide our own currently. The way you do it — inside a pipeline, if you want to pass information between tasks, you usually use a PVC. In 0.10 we have a new concept called workspaces, which hold the files you pass through. Those workspaces can be backed by a PVC, but PVCs aren't available everywhere, so for passing small amounts of information you can use a ConfigMap, and that ConfigMap acts as your storage. But if you can have a PVC, you can use that, and you can use Google Cloud Storage as well as a resource for the workspace. So the workspaces concept addresses that. I think I'm out of time, but I'm available — I'm here, and I'm friendly, I think. I'm not out of time? OK — now I'm out of time.
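As a closing aside, the workspaces mechanism from 0.10 mentioned in that last answer looks roughly like this — the task name and PVC name are placeholders:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: share-files            # hypothetical name
spec:
  workspaces:
    - name: source             # the task only declares that it needs a workspace
  steps:
    - name: list
      image: busybox
      # the resolved mount path is exposed as a variable
      command: ["ls", "$(workspaces.source.path)"]
---
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: share-files-run
spec:
  taskRef:
    name: share-files
  workspaces:
    - name: source
      persistentVolumeClaim:   # the run decides the backing: PVC here,
        claimName: my-pvc      # but emptyDir or a configMap also work
```

The task stays storage-agnostic; each run picks whatever volume type the cluster actually offers.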