All right. Welcome everyone. Thanks for joining us today. Today's CNCF webinar is titled Helping App Developers Adopt Kubernetes with Tekton and Argo Automation. I'm helping host this webinar; I'm Phil Estes, one of the CNCF ambassadors. And interestingly enough, I actually work in IBM's cloud platform group as a distinguished engineer, so it's kind of fun to lead off a webinar hosted by a couple of my colleagues at IBM. I'd like to welcome Roland Barcia, who's CTO of Solution Engineering, and Sean Sundberg, Lead Architect, Cloud Native Toolkit at IBM. They both work in the IBM Garage, which they're going to tell you more about, but it's really about connecting a lot of these great cloud native technologies that some of us work on in open source and some of us use, and getting them into the hands of practitioners, developers, and users. They'll talk to you a lot more about that in a few minutes. First, before I turn it over to them, a few simple housekeeping items. Obviously this is a webinar, so you're not going to be able to talk to our presenters today as an attendee. What we'd love for you to do is use the Q&A box that's there for you at the bottom of your screen in Zoom. Feel free to drop any questions in there anytime throughout the presentation. The presenters are going to try to keep an eye on that; since there are a couple of them, somebody can be watching the Q&A, and they'll either deal with it in the presentation or type out a response to you there. If we see generic questions popping up in the chat, we'll remind you to post them in Q&A just to make it easy on our presenters. This is an official webinar of the CNCF, and therefore the CNCF Code of Conduct is in effect. Basically, what we'd love for you to do is not add anything to the chat or the questions that would be in violation of that Code of Conduct, and of course that just means being respectful of fellow participants and the presenters. All CNCF webinars are recorded, including this one.
So if you miss part of it or want to share it with others, check it out later at cncf.io/webinars. They're usually up pretty quickly. And so with all of that, I would love to hand it over to Roland and Sean to kick off today's presentation. Thanks guys. Thank you Phil. Hopefully everyone can hear me. Phil did a great job of introducing our roles. Just a little bit more: as Phil said, I'm Roland Barcia. I'm also a distinguished engineer at IBM and CTO of our solution engineering team. I lead a team, really a tribe of different squads, in the Garage. We started over five years ago inside of startup community spaces like WeWork and Galvanize, and we built out a methodology to get to quick outcomes for clients, bringing together things like design thinking, extreme programming, lean startup, the Spotify method, and a bunch of DevOps and SRE practices to get to quick outcomes and build minimum viable products, but do it in an enterprise production context. That includes everything from building new cloud native applications to modernizing existing applications. IBM delivers a lot of solutions based on the really cool open source projects out of the CNCF: a lot of emerging technology with Kubernetes and Tekton and Argo, which we'll show some examples of. One of my lead engineers is Sean. Sean, do you want to quickly introduce yourself? Hey, everybody. I'm a lead developer here in Austin from our Garage team. I've been working in customer engagements for all of my time here at IBM, a long time at IBM, and within the Garage I have led a couple of Kubernetes-based, OpenShift-based engagements as well. I've been working on and leading a team building out the Garage Cloud Native Toolkit, which helps make our developers more productive and our projects more effective on OpenShift and Kubernetes, using the best of what's available in the tools. And we'll walk through some of that here in a little bit.
Yeah, and I guess part of the goal here comes from a practitioner perspective, a consultant-type role that uses a bunch of different open source projects with clients. It's interesting: when you work with large banks, insurance companies, government agencies, et cetera, there's always a mix of multiple open source and commercial products that need to be brought together to deal with things like security, compliance, testing, and an automated pipeline. And there's always a level of risk: do I start using new open source projects at the right point of maturity? Do I introduce one or two of them into a larger pipeline? Out of necessity, working with our clients, we had to create various different ways of working. We might go into a client and they say, well, we have a Jenkins farm already deploying legacy things and doing our security scans using a bunch of plugins, and we need to integrate with that, but we'd like to adopt Argo CD to start managing our Kubernetes configuration. Sean is going to take you through the toolkit, but first I'm going to give you, as a CTO whose engineers try to keep me out of the code because they're scared I'm going to break something, a getting-started point of view around Tekton and Argo. Tekton is, to me, a really exciting project, because first of all it's Kubernetes-native. There's a lot of great opportunity not only to run it in a cluster, but to actually treat Tekton pipelines the same way I treat my application. The ability to define YAML, define infrastructure as code inclusive of the pipeline, and deploy that like an application (pipelines for pipelines) creates great opportunities for something like Argo to actually push out the pipeline itself. Argo is another great tool from a CD perspective, and not just CD but really GitOps, driving how clients might work with infrastructure as code.
So developers have for a while been used to having Git as the source of truth, leveraging CI/CD to do continuous integration and deployment. Operations teams coming from a VM background have different degrees of comfort working in that space, so we work with kind of an adoption model there. And so here's a very simple example. This is just integrating two open source projects together, and I'm using OpenShift as my Kubernetes environment here, though you can do this with any Kubernetes environment. The idea is to build a simple application, grab a hello world in Node.js, build out a pipeline from Tekton artifacts, and use Argo, which can be installed as an operator using the Operator Framework and CRDs on a Kubernetes cluster, and then have an integration where Tekton does the build and publish of images into an environment and Argo drives the deployment for a particular use case. So we're doing a couple of DevOps flows, and I did my own hello world example, which we'll come into here. And here's an example of a couple of things that you get with Tekton. First of all, I have an OpenShift cluster here, and this is the OpenShift environment. Operators provide a great way of installing capabilities, and I can actually manage operators themselves through a pipeline. But in general, since I'm installing some of the tooling around that, I've installed Argo as an operator and I've installed OpenShift Pipelines, which is a layer on top of Tekton that gives some visual help and some automation on top of Tekton, plus integration into the OpenShift Kubernetes environment. And they give you a few different levels of things, like ClusterTasks, right?
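Installing these operators can itself be captured as code. Here's a minimal sketch, assuming the Operator Lifecycle Manager is present; the channel, package, and catalog names are illustrative, so check your cluster's OperatorHub for the real ones:

```yaml
# Hypothetical OLM Subscription for the OpenShift Pipelines operator;
# a similar Subscription (with a different package name) would install Argo CD.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: stable                          # channel name varies by version
  name: openshift-pipelines-operator-rh    # package name as listed in OperatorHub
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Because this is plain YAML, the subscription itself can live in Git and be applied by a pipeline, which is the "managing operators through a pipeline" idea mentioned above.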
So, integration with Buildah, for example, which is another cool open source project for building container images, as well as things like Podman and other tools, and different levels of ClusterTasks that we make use of. From a pipeline perspective, I can put together a pipeline, which is a template, and I can create some pipeline resources: for example, where is my Git project and where is my image repository? That could be something like Quay, it could be Docker Hub, it could be anything. And I create some hard-coded ones because I want to be able to run a pipeline manually, let's say. I can take a pipeline, which is made up of tasks, and I'm using predefined tasks here, which shows the reusability of Tekton: I want to use a Buildah task to build and publish my image, maybe getting my base image from something like Quay with my standard Dockerfile (obviously hard-coding this is a bad practice from a security perspective). And then there are other things you can use from an integration perspective: Argo CD and Argo CD sync tasks, the ability to actually use Argo as the CD step from within a Tekton pipeline. I'll get into Argo in a minute. So I should be able to take a simple application and a simple pipeline. I'm going to switch namespaces to an application node-web project. I have the pipeline already deployed with its steps, and I should be able to take that pipeline and run it, which I'll do in a moment. First I want to get a little bit into Argo. I'm using the visual side of Argo because it demos better; there's a full command-line interface to it as well. It takes all of my deployments. I have two applications: a Node application, which is the hello world I just talked about, and my pipeline as well. And right now, yep. Sorry, Roland, could you zoom in on your screen a little bit? Sure. Maybe a little more. Yep. Thanks.
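The shape described here, a pipeline template plus pipeline resources for the Git project and the image repository, might look roughly like this. This is a sketch using Tekton's older PipelineResource API (which is what OpenShift Pipelines shipped at the time of this webinar; newer Tekton replaces it with workspaces and params), and all names and URLs are illustrative:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-git
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/node-hello-world   # hypothetical repo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-image
spec:
  type: image
  params:
    - name: url
      value: quay.io/example/node-hello-world   # Quay, Docker Hub, etc.
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: node-build-deploy
spec:
  resources:
    - name: source
      type: git
    - name: image
      type: image
  tasks:
    - name: build-and-push
      taskRef:
        kind: ClusterTask
        name: buildah          # predefined ClusterTask from OpenShift Pipelines
      resources:
        inputs:
          - name: source
            resource: source
        outputs:
          - name: image
            resource: image
```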
And so you have the various resources defined. So let's look at that. Besides the pipeline and the resources, the next thing is that I want to trigger a build from the outside world: a webhook, usually from GitHub or GitLab, and there are different techniques for that. I'm using a very simple webhook for illustration, around a Git push; obviously there are all different ways to do this with branching and tagging. A TriggerTemplate (and this is an area where I'm hoping the open source community drives toward needing fewer artifacts to create things from the outside world, though there are reasons for the current design) gives me a way to define a template for how I want builds to run dynamically. So I recreate some of the pipeline resources, and I might want to change, for example, the tag; here I'm hard-coding it, but I can grab the input from the webhook request, do more dynamic things, and then create a PipelineRun out of these resources. Then there's an EventListener. That ends up being a pod that runs in my Kubernetes environment listening for webhook requests, and I expose it through an OpenShift route; you can use Ingress, you can use NodePorts, whatever. And then a TriggerBinding to bind the template to that request, binding the incoming webhook request to the build. That's a good decoupling practice, because I might have different types of source control repos and I want to reuse pipelines in a large organization, and these are different techniques to do that. I've also configured some secrets. I won't show the secret itself, but I have a template for it. Even though we talk about GitOps, Git is probably the wrong place to store secrets and credentials; you probably want to grab those out of some type of vault environment and pull them from there. But the template has where I connect to my server, the user ID, and the password. And so I check all that in, and you see that Argo CD actually has my pipeline in sync, et cetera.
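Those three trigger pieces can be sketched as follows. This is a hedged illustration with made-up names; the payload field assumes a GitHub-style push event, and the service account name is an assumption:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: node-build-template
spec:
  params:
    - name: gitrevision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: node-build-       # a new run per webhook delivery
      spec:
        pipelineRef:
          name: node-build-deploy
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: git-push-binding
spec:
  params:
    - name: gitrevision
      value: $(body.head_commit.id)     # field from a GitHub push payload
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: node-build-listener             # becomes a pod; expose via Route or Ingress
spec:
  serviceAccountName: pipeline          # needs rights to create PipelineRuns
  triggers:
    - bindings:
        - ref: git-push-binding
      template:
        ref: node-build-template
```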
And I can make changes to that. So if I make a quick change to a resource, it will actually go ahead and check whether it's in sync. Let me pick something easy to change here: let me, for now, take out the tag and save that. Then I'm going to go into my environment, commit that, and push it, and if I refresh (I refresh manually because I don't have automatic syncing turned on here), you see that it's now syncing my pipeline over to my environment. I'm going to go ahead and put the change back, and you see that it's syncing again. So I can have a full pipeline for my pipeline, and there's a lot of great potential in that. Tools like Jenkins and others are great as well; there are different advantages to each, and some people are very comfortable with Groovy scripting and things like that. It's just preference; I really like this approach. So now I have that pipeline deployed, and I'm going to go into that project. The next piece, from a Tekton perspective: I showed my pipeline deployed, and I have pipeline runs that have already been kicked off. I'm going to actually stop those runs and take a look at the application. From an application perspective, it's a very simple hello world in Node.js. I was going to make it say hello to CNCF; now let's make the version 1.1.6 and save that. I'm going to update my deployment as well. If you're going to have different versions to do things like canary testing and blue-green deploys, it's good to have very static deployment files for each deployment that you're really going to represent. I'm going to go ahead and push that into the environment, and you'll instantly see a pipeline running. So that's going through my trigger, and it's going to actually create a build and push it out. I'm not doing any tagging or testing.
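The "pipeline for my pipeline" idea boils down to an Argo CD Application pointed at the Git folder holding the Tekton manifests. A minimal sketch, with a hypothetical repo URL and paths; the `syncPolicy.automated` block is what toggles the manual-versus-automatic syncing being demonstrated:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: node-pipeline
  namespace: argocd          # wherever the Argo CD operator installed itself
spec:
  project: default
  source:
    repoURL: https://github.com/example/node-hello-world-ops   # hypothetical
    path: pipeline           # folder containing the Tekton YAML
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: node-web-project
  syncPolicy:
    automated: {}            # remove this to require manual syncs, as in the demo
```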
I'm going to get into that in a moment with Sean's demo. So I'm going to let the build complete; it should take a couple of minutes, and then we'll test it. But meanwhile you can look at a few things. OpenShift, as a PaaS on top of a Kubernetes environment, has a lot of interesting views. From a developer perspective, I have my trigger and I have my Node app running. You see this is the old version while it's still building, so I'll close that out. And you can get a lot of different views. If I go to my Argo CD namespace, you can see the different Argo pieces all installed with a single operator, including my Argo dashboard as well. You see things are syncing up again. What's interesting is that Argo detects actual pipeline runs, and it would flag them as out of sync, so I needed to put a resource-ignore rule in there, because pipeline runs are things you're going to have a lot of. And as the builds execute, if I go back to my administrative view and look at my pipeline, you notice that it's creating (let me go back to my namespace) that pipeline run, but it's also creating dynamic pipeline resources. If I kick it off from the web console, I have some statically defined resources, or I can input them. But if you want to be able to track the resources together with the pipeline run, you can do that. So here are the two stages: it's doing my build and my push. My Argo sync should have run; the Zoom controls get in the way, so let me go back to my project. You see that it's already detected that it's out of sync, and the sync is going to be executed very shortly after that finishes. And now you see that it's completed the build stage, and now it's going to go in, detect that the Node deployment is out of sync, and actually put things back in sync.
So if I go back to my developer view, you see here that it's ramping up my pods, because I had three replicas. And if I go out, you see "Hello CNCF, hello new pipeline, 1.1.6". So the takeaway here is that there's quite a bit of work in integrating these things. One of the things that we work on a lot with our clients from a Garage perspective, and I'm going to hand this over shortly to Sean, is this notion that it's never just two tools, right? It's always a mix of things. You get into things like Helm; you get into an ops stack. And one of the things my team is focusing on is really coming in and saying, hey, what is the tools profile for a particular project? How do we bring these together? How do we integrate them? How do we do that so we can get development going quickly and not be coupled to certain clouds, whatever type of cloud platform it is, doing it in an open way? So this gives you an intro and an overview of how you can use Tekton and Argo together. And now I'm going to hand it over to Sean to start sharing and show a fuller demo of a pipeline stack. You're up, Sean. Thank you, Roland. Let me share my screen. All right. So as I mentioned earlier in the introduction, and as Roland mentioned as well, we're working on this Cloud Native Toolkit to help stitch together these tools. There are a number of tools; this group knows better than anyone the landscape of the many different tools, the different options, and how to put them together to make a solution. So we've put the toolkit together. It started initially as a way to help get our development teams up to speed with building containerized applications.
And then it has turned into something we use on projects to accelerate them. The way our projects work in the Garage is not that different from many others: we work in an agile way, using lean startup and other approaches with very data-driven decision making. We come up with a hypothesis at the beginning and build a very quick MVP to prove that hypothesis, maybe in four to ten weeks. We really don't have time to spend a lot of it building out infrastructure, making these tool choices, and making them work together, so this toolkit helps us do that rapidly. We start with a default set of tools and then customize based on the environment. In an environment where there are no existing tool choices, or where the team is new to cloud native, you might just take all the defaults, but then we can make choices on top of that. This is all being built in the open. This page is our developer guide, which covers a lot of what I'm going to go through; this will be kind of a walkthrough of what's found in that developer guide, and there's a link at the bottom to the GitHub org that has all the other content. When we start, we're really looking to build an environment to support best-practice DevOps in containers, so building containerized applications within the container platform. We support Kubernetes and OpenShift for running these tools. And then for each type of tool there's a choice; continuous integration, for example. We're working in an MVP approach where we start with the common ones and then add on as we encounter them, or through community contribution to get more tools added. So some of the basics: continuous integration, code inspection, contract testing, continuous delivery with Argo (it could also be done with Jenkins or Tekton, but we separate those out conceptually), artifact management, and the image registry.
And so we have a set of automated Terraform scripts to provision this environment: you make your tool selections, and they lay down all of the tools and hook them together through config maps and secrets within the container environment, so that the pipelines know where those tools live and how to connect to them. That way we can get up and running quickly. We'll walk through one where I've already set up the cluster. We have Jenkins and Tekton both there, using Artifactory, Jaeger, Pact Broker, and SonarQube, so we'll walk through that environment and see what the development experience is like. As a part of this, we've also created two other components. One is a CLI, to help smooth over some of the more complex pieces, particularly for developers used to a more PaaS-like environment, simplifying the interface and the access to the tools. The other is a dashboard, which does a similar thing: it surfaces those tools, the activation links, and some of the other components for the developer. So with our tools in place, the first thing I'll do is look at the dashboard. We've integrated our CLI as a plugin to kubectl and the OpenShift CLI, and it takes us to the dashboard. You can see here some information about the cluster: the version and the type, the resource group and region (some of that is specific to IBM Cloud, where this is located), and which image registry we're pointing to. In this case we're using the IBM Cloud image registry, but that could be the internal in-cluster registry or some external one as well. And then the tools that have been installed. We'll be going through an example with Tekton and Argo, using Artifactory as our Helm repository: our default pipeline will package up your Helm artifacts into a Helm repository in Artifactory and use that to deploy, with SonarQube for code scanning and code quality, Pact testing for the contracts, and Jaeger for the end-to-end traces.
As a part of this dashboard, we also surface some activation links, some Katacoda and other content, to help educate and bring the team up to speed on the technologies we're using. And then we provide a set of code patterns, starter kits, for getting up and running quickly when you're starting a new project in a particular technology. We also give you the ability, if you have your own starter kit that you prefer or you have existing code, to enable that code by laying down a Helm chart and a Jenkinsfile so that it can be built and run in the pipeline. So let's take an example: we'll make a TypeScript microservice. If I click this, it takes us into the repository. This is all in public GitHub, and we're using the templating engine that's available in GitHub, so I can pick my org and create my repository. The templating does basically a fork but squashes all of the history; it still gives me the link back to where it was generated from, but it gives me my own copy to get started with. So I'll clone that down, and now I've got my version of the application. I could run it locally, but really what I want to do now is get it into a pipeline. I need a namespace first, so we give this sync command, which essentially creates the namespace (or the project) if it doesn't exist yet, but also synchronizes the pull secrets and sets it up with the config maps that describe where all the tools are. So I can set that up and get my environment, and now it's ready to go. If I go back to my dashboard, I can look within the OpenShift console and see my pipelines here. So I've got my namespace; it's ready to go. The next thing I need to do is register my pipeline. I'm going to use Tekton, and we already have Tekton installed because the tools were installed, so I just run the pipeline command.
It will ask what type, since we support both; in this case we want Tekton. First it's going to set up a resource, like what Roland just showed, a resource that contains the Git information. It read from my local file system which repository I'm connected to, and I tell it which branch I want to build from. Then it's going to read the template pipelines that have been created, reading those from the environment, and ask which one I want to apply. Since this is a Node-based application, I'll pick our Node pipeline. So it has created the pipeline resource with the Git repository and a pipeline resource for the image, and now it has created a pipeline, which you can see here. As part of that, it kicked off the build with a pipeline run, so it's going to run through the stages of our build. You can see there's the build and test: since this is TypeScript, there's a build stage where we do the transpiling of the TypeScript, and we run our unit tests. We have the Pact testing: if the repository includes any Pact tests (the contract testing), they will get executed; if not, it will skip them. It runs the SonarQube scan. We tag the release in Git, so we're using Git as the source of our versioning, using Git tags. When the build runs, it reads from Git whether any tags are available; if there are, it takes the latest, increments it, adds that tag back to the Git repository, and uses that tag to version the Helm chart and to version the image, so they're all tied together. It also means that our build process is completely ephemeral: if we kill this Tekton instance, or kill the Jenkins instance, and then recreate it, we'll pick up our builds where we left off. So this will run for a little while. While that's running, I'll show you the tasks.
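The tag-and-increment step described here can be sketched in a few lines of shell. This is an illustration of the idea, not the toolkit's actual script; `1.1.5` stands in for what `git describe --tags --abbrev=0` would return:

```shell
# Take the latest semver tag and bump the patch number.
latest="1.1.5"                                  # e.g. from: git describe --tags --abbrev=0
next="${latest%.*}.$(( ${latest##*.} + 1 ))"    # strip the patch, add one
echo "$next"                                    # prints 1.1.6
# The real pipeline would then push the tag back and reuse it everywhere:
#   git tag "$next" && git push origin "$next"
```

Storing the version in Git tags rather than inside the build tool is exactly what makes the build ephemeral: any fresh Tekton or Jenkins instance can recompute the next version from the repository alone.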
Now, this is a Git repository that's linked in the dev guide. And I see there's a question: you don't need OpenShift for this. We support CodeReady Containers, so if you run CodeReady Containers, which is OpenShift on your machine, you can run all of this there locally. We have a cut-down version for running locally, because you don't need as many resources and there are some parts that don't make sense. You could also run it on a Kubernetes instance, minikube or another Kubernetes service. So this is the task. This is a nice thing about Tekton; let me make this a little bigger in case it's too small. The nice thing about Tekton is that it's very modular in terms of the tasks. We're able to build both Java and Node and reuse many of the tasks that are part of that process. Between the Java and the Node pipelines, we differentiate on the build phase, because obviously building Node is very different from building Java, and even this one is Gradle, so building with Maven would be different still. But once you've built it, the logic to build the Docker image, or container image I guess, is the same regardless of what the source is; we're using Buildah to do the build down here. So that's now a generic task that's reused. Same thing for deploying into the cluster: that's a shared task that we use. Now, the package verification is specific again to the platform, so we vary there. But packaging the Helm chart and the GitOps step are common regardless of the language, so we're able to reuse those tasks quite a bit. And we compose those tasks into a pipeline. These are the template pipelines that we chose from earlier; we make our own copy into the namespace, so we create our own copy of it here. As a part of that, I have one that's been built before with some components. So again, as a part of that run, we're doing a SonarQube scan.
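A reusable, language-agnostic image-build task of the kind described might look like this. It's a sketch, not the toolkit's actual task; the builder image and step names are assumptions, and Buildah typically needs privileged or storage-driver settings that are elided here:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image          # illustrative name
spec:
  params:
    - name: image-url
      type: string
  resources:
    inputs:
      - name: source
        type: git
  steps:
    - name: build
      image: quay.io/buildah/stable    # assumed builder image
      script: |
        buildah bud -t $(params.image-url) $(resources.inputs.source.path)
    - name: push
      image: quay.io/buildah/stable
      script: |
        buildah push $(params.image-url)
```

Because nothing here is Node- or Java-specific, the same task can be referenced from both pipelines; only the compile and test tasks ahead of it differ.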
We're doing package verification; this one uses Pact package verification. If I go back here: Pact, if you're not familiar, is a contract testing framework for modeling your APIs and then validating them between the consumer and the provider. So this is a BFF consumer and a provider, with the contract defined between them. And you can define these: basically, a contract is a series of interactions, and each interaction says, when I send this information, this is my expected response. We see that this ran, and in fact there was an error in validating the service the last time it ran, which would have failed the build of the service at that time. (This is a different one; I have a couple of versions out here, so this is a different version than the one we're looking at.) But it will validate and prevent you from deploying code that breaks the contract between the consumer and the provider, and we're validating that with each deployment. We're also using SonarQube to do a code coverage scan and look at duplications, and we have quality gates set, by default on new code, so any new code has to meet a standard in terms of coverage. There's an example here from a different build that failed because the new code had a code coverage of 74% and had some duplicate code in it, so it failed the build because of that. So we're enforcing that quality, and again, it can be configured based on the environment; we have some defaults. We're essentially using the open source version of all these tools as a starting point, but that can be swapped for a licensed version to get the full features. Similarly, we're using Artifactory as our Helm repository. As the build progressed, we deployed to the CI environment to validate it, and we published our Helm chart into the Helm repository. Since this is the open source version, we're using generic local storage.
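A Pact contract is ultimately a JSON document of those interactions. Here's a minimal illustrative one between a BFF and a backend service; the consumer and provider names, the path, and the body fields are all hypothetical, since JSON itself allows no comments to mark them:

```json
{
  "consumer": { "name": "inventory-bff" },
  "provider": { "name": "inventory-service" },
  "interactions": [
    {
      "description": "a request for the stock items",
      "request": { "method": "GET", "path": "/stock-items" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": [ { "id": "item-1", "name": "Sample item" } ]
      }
    }
  ]
}
```

The consumer's tests generate a file like this and publish it to the Pact Broker; the provider's build then replays each interaction against the real service, which is the verification step that failed in the example above.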
So this is the one you saw get built, and this is the Helm repository with the version that was built previously (or, I think, that one's still in progress), and then the ones that were built for the inventory app we were showing. So it's based on the tag: I can go back to that repository and validate that tag against what was built, and this is what gets deployed into the environment. And then there's the last stage, for GitOps, which I've linked here. It's essentially hooking into our GitOps repository, which was created previously. For each component, what the build is doing is updating that repository. Since we're using Helm, we have a reference to the Helm chart that was built as part of the process; that Helm chart includes the matching image version, so the automated build is just updating the version number with each build that occurs. It gets incremented, and that kicks off a deployment in Argo into our test environment. This test branch is the default, and the staging environment is handled through the staging branch. We've opted, in this case, to use GitFlow as the approach for handling versions across environments, so that we can use standard things from Git, like pull requests, to promote changes from one environment to the next. So we've got the deployment in; you can see there's a different version of the UI in staging than is currently in test, and that all got deployed through Argo. Each component has been deployed into both the test namespace and the staging namespace as separate applications. If we look at any one of these, we can see (this is a fairly straightforward application at this point) the deployment and the pod; there's one pod there. With each new build, it will get refreshed. This is all set to auto-sync at the moment.
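Concretely, the per-component file in the GitOps repo can be as small as a Helm dependency reference, where the automated build's only edit is the version line. This is a hypothetical fragment in Helm v2-style requirements.yaml form, with an assumed chart name and Artifactory URL:

```yaml
# requirements.yaml for one component in the GitOps repo (test branch)
dependencies:
  - name: inventory-bff                                 # hypothetical chart name
    version: 1.1.6                                      # bumped by each successful build
    repository: https://artifactory.example.com/helm    # assumed Helm repo in Artifactory
```

Promoting to staging is then just a pull request that carries this one-line diff from the test branch to the staging branch.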
So if I were to change, this is our test BFF, if I were to change our BFF, and again we're taking the defaults at this point because it's simple, whatever custom configuration to the Helm chart is required would be added here in the values file. Whatever varies by environment would be updated here and managed as part of that process. Let's upgrade the replica count; that's an easy change to make. Typically in a real environment you'd let the autoscaler do this, but for the purposes of the demo it works pretty well to show how, again, in this GitOps environment, Argo is watching. If we waited, it would automatically sync, but I'll go ahead and sync it to have it find those changes. It will detect that there was a change, see that it's out of sync, and bring it into sync just like that. So with each deployment, it's going to synchronize the changes, see that a new version is available, and push that version; if there's new config available, it would push that too. And then, again, with the value of GitOps, we can use Git pull requests and change management to manage those changes across the environments. The last thing I'll show here is that Jaeger is also installed. I've got a simple example; a lot of these are the health checks, which I can filter down. You can see the trace of what we built. So for what's built, we're building a UI component, a BFF that is GraphQL, and a Java Spring backend application. Right now it's not connected to a particular backend; in this case, for the purposes of the demo, we're using a canned response. With the starter kits that we provide, every one of them has a Swagger UI, and by default the ingress is turned on so that you can see it. So we can look at the data coming back from our service. This Swagger UI, the OpenAPI UI, can also be used as part of the contract testing.
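The environment-specific override described here would live in that values file, for example (the subchart name is hypothetical, and as noted, in a real environment an autoscaler would own this setting):

```yaml
# values.yaml override for the test environment
inventory-bff:        # hypothetical subchart name
  replicaCount: 2     # demo-only change; Argo CD notices the diff and syncs it
```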
The BFF actually supports both REST and GraphQL, so I can run those same queries with GraphQL, and it has a nice built-in playground to try them out. You can see those are the same items being returned from the service we're connected to. And then the UI, a very simple UI, returns those same items going through the BFF. So we've validated what's happening at each level. We're also showing the topology view, where we can look at those components. This is an OpenShift feature, not currently available on plain Kubernetes, but for developers it's a nice view of what's been installed and what the relationships between the components are. In this case, you can see our UI, our BFF, and our service, with a link into each component and a link into the source repository. CodeReady Workspaces is installed here; I was having a storage issue, but I think it resolved, so let me see. We can launch into the workspace and see the code running there, so it's direct access into web-based development. BFF stands for backend for frontend; it's an architectural pattern. In this case, we've used it more as an example of building a three-tier application and showing how the components work together. For an application this simple, one service and one UI, it's a little bit overkill. But if you had an application with multiple backend services that needed to be aggregated, or where there's a large payload coming from the backend and you only need a handful of fields, then instead of sending all of that to the frontend, you have a layer in between that massages the data to fit the UI.
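A playground query against the BFF might look something like this; the type and field names here are made up for illustration and are not the demo's actual schema:

```graphql
# Ask the BFF for only the fields the UI needs. The GraphQL layer
# trims and aggregates the backend payload, so no custom mapping
# code is required for each new view.
query {
  stockItems {
    name
    stock
    unitPrice
  }
}
```

The query itself does the field selection, which is exactly the "massaging the data to fit the UI" role the BFF layer plays.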
That's where GraphQL works really nicely: it refines the data without you having to code that into the BFF layer, letting the query that's sent do it for you, and it also aggregates multiple calls. So, how is this toolkit different from Jenkins X? We do use Jenkins, and we're tying together some other tools, like security scans; we're incorporating Twistlock and Aqua, leveraging those platforms and providing out-of-the-box pipelines with some of the more advanced features, where most of the time you only get a hello world, if anything, out of the box. The other part of it is how all these tools got here: the automation of provisioning those environments and getting all the pieces assembled. We still support Jenkins running in a container because we have a lot of customers that are still using Jenkins and are familiar and comfortable with it. And compared with Jenkins X, which leverages Tekton, our goal is to bring together a set of tools covering not just DevOps but also the ops side: things like Jaeger, an ELK or EFK stack, Grafana, Prometheus, the full lifecycle of a project. And then we go into a client, like Sean talked about, and client X says, you know what, I have a standard: I'm using Nexus instead of Artifactory for a repository, and I'm using these test tools, and all these different tools. So the problem is broader than just who orchestrates the build and the tests; it's a broader issue when we go out and work in the field, if that makes sense. And there are multiple deployment models we'll use, for example if Jenkins is on premise. The next question is whether it's beneficial to hide Tekton. I know there's some debate about that, even about the toolkit itself.
The goal is to help facilitate projects. I find that if you try to hide details too much from people, eventually you get yourself in a bad way. By using the toolkit, we're not absolving developers from understanding containerized applications or the DevOps environment. Somebody needs to understand it; maybe that's your SRE on the project, and developers don't need to focus on it as much, but there still needs to be a fundamental understanding of how everything works. The goal, to borrow a phrase that's popular now, is to flatten the curve of learning the environment: get up and running, and learn as you go. We give you enough to get started. Like I said, in our projects, we don't want to spend four weeks just piecing the parts together before we can be productive; we'd like to make those tool selections up front. In our process, we have a discovery where we lay out: these are the tools you, as a customer, are already using and want to continue to use, and these are the new tools we'll incorporate. We call that provisioning part iteration zero, because iteration one is when you start the project and do the work. We'd like iteration zero to get all of that installed, so that on day one of iteration one the development team is already writing business logic and not infrastructure plumbing. But at some point you can't hide all of the details; somebody somewhere has to know what's happening within Tekton itself. I know Tekton has helped make it easier to hide the details, or to make the visualization of the pipelines better. The nice thing with Tekton in particular is the modular nature of the tasks: as long as you've got a good understanding of what they do, you can plug and play.
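The plug-and-play nature mentioned here comes from Tekton tasks being standalone objects that a pipeline merely references by name. A minimal sketch of that shape (the task names are placeholders for illustration, not the toolkit's actual tasks):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-scan-demo        # illustrative name
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: build
      taskRef:
        name: build-image      # placeholder task; implementations can be swapped
      params:
        - name: url
          value: $(params.git-url)
    - name: scan
      runAfter:
        - build
      taskRef:
        name: scan-image       # e.g. plug in an Aqua or Twistlock scan task here
```

Swapping the scan tool, for instance, means changing one `taskRef`, provided the replacement task exposes compatible params, which is what makes the tool choices in the toolkit interchangeable.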
So there are two questions there. On open source: everything here is open source, and there are no licenses required apart from getting yourself a cluster. You can use CRC, or you can get a cluster from somewhere. Beyond that, all of these tools are the open source versions. Now, there are paid licenses you can get for most everything we're using that give you additional functionality. With the open source version of SonarQube, for example, you get most every scan language, at least that we have needed; Swift you have to pay for, and some of the others. Artifactory has some paid features. But everything I just showed is the free, open source version of everything. Hey, Sean, can you talk about one of the goals of the project: if someone says, I want to use Nexus instead of Artifactory, how would I plug in the tool I use for my project? Right, so there's a tool selection up front, and we go through that in the administrator guide: how to get the environment up and going and how to make the tool selections. Picking, say, Nexus over Artifactory would be a matter of a tool choice. In many cases, we have the alternatives already there as existing modules; in the provisioning, you would just pick one over the other, and when it gets provisioned, we set up the configuration accordingly, so the pipeline knows where things come from. If there's a tool you'd like to use that doesn't exist yet, we have some examples of how a new one can be provisioned and added, and it would be great to get some of those contributed back. As I mentioned, we went with a core set, kind of an MVP model, of the tools we encounter most often.
In the case of Argo, we went with it because, at the time, it was fairly new, but it was the one with a lot of buzz and a lot of interest, and it has proven to be a great tool for using in this manner. So those tools can be plugged in, there's some configuration that incorporates them, and there's a model for how to build your own and contribute. Hopefully that addresses the question, Roland. Yeah, perfect. I think we don't have any more open questions, and there are about three minutes left, so unless there's one more question, go ahead; do you have something else? I'll just make one more point. I've focused very much on the developer's story, but part of our objective in building MVPs is that, while they're quick and we're trying to prove a particular functionality, the goal is for them to be production ready and to go into production. So build-to-manage is part of what we do: logging and monitoring, which we get installed as part of the platform on day one, and, within the starter kits and the code that's built, thinking about those things from the beginning rather than leaving them to the last minute. That's also why we incorporate things like JMeter and make performance testing part of the process early. So it's not just about the developer experience; there's a whole SRE journey that's part of this too, to help build applications that can be managed and run well in production. And the last question, a pretty loaded one: any life lessons from the DevOps world? There are a lot of lessons, too many to cover in one session. There are a lot of different scenarios that clients work with: some people want their DevOps outside of the cluster because they're doing it across multiple clusters; some people want to leverage managed versions of services that exist.
So we kind of detect and use what's there. And the last piece, knowing where all your troubleshooting logs are, all of these things are important. There are just tons of lessons. I don't know, Sean, if you have any final words? Well, I'd say there's a lot of opinion in this space; the development space and the UI have more, and DevOps has not as many, but there are still quite a few. On tool selection, that's where we try to be somewhat open, but there are still some principles that have formed, especially in a containerized environment: your CI pipeline is separate and distinct from your continuous delivery pipeline. Whether you're using something like Argo or not, you still keep them separate, because in this model your CI pipeline is building a versioned, immutable image and packaging up Kubernetes resources; in this case, we're using Helm, but you can use other technologies. Keeping that separate matters; it gets problematic when all of that is part of the same build. Doing the image scanning up front at build time, but also periodically, or continuing at deployment time in the cluster, is important, because even in building up this toolkit we've seen that things age fairly quickly just sitting on the shelf. So there's a lot of monitoring and validation that has to be done throughout the lifecycle of the code. Okay. All right. Thank you. Yeah, I think we're at the top of the hour, maybe a minute past. Lots of great information. Thanks so much to Roland and Sean for their presentation. Lots of questions got answered as well, so that was great. And given we're out of time, that's all we have time for. Thanks for joining us. I know some folks showed up late and were hoping there'd be a recording; that will be online later today. And we look forward to seeing all of you at a future CNCF webinar. Have a great day. Thanks.