Alright, hi everyone, thanks for joining us today for another CNCF live webinar, Building a Cloud Native CI/CD Pipeline. I'm Lydia Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand things over to Jason Smith and Mariel McCabe, both with Google. A few housekeeping items before we get started. During the webinar you're unable to speak as an attendee, but there is a chat box where you can list all your questions, and we will get to as many as we can at the end. This is an official webinar of CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all fellow participants and presenters. Please also note the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under Online Programs. They're also available via your registration link, and the recording will be available on our online programs YouTube playlist, which I just linked in the chat. With that, I will hand things over to Jason and Mariel to kick off today's presentation.

There we go. My computer was just like, "I don't want to work right now," so, alright. Alright, well, thank you for the introduction, Libby, and thank you everybody for joining us today. As mentioned, we are going to be talking about building a cloud native pipeline, CI/CD platform rather, and it is going to be very open source heavy and, we hope, very cloud native heavy, and we hope you enjoy it. Just to get a few introductions out of the way: looks like I'm having some issues, so my camera keeps getting turned off, so we'll just do it this way. So just a brief introduction here: I'm Jason Smith, you can call me Jay. I'm an app ecosystem specialist customer engineer at Google Cloud. You can find all of my links, and there's probably a few I need to update, at Linktree.
My peer reviewer is Winry; she is often heard barking in my meetings or sitting in my lap. Right now she's sitting at my feet. And Mariel, I will turn it over to you to introduce yourself.

Thank you very much, Jay, and thank you as well, Libby, for having us. My name is Mariel and I am an app ecosystem specialist as well. There at the bottom is my peer reviewer, Orion the Russian Blue, who loves to, at very inopportune times, walk across the keyboard, but of course he's just doing that to make sure that all my code is linted and sanitized. So thank you all again for being here, and just as a final note, we are excited to be here; our opinions are our own and not necessarily reflective of our employer. So we can jump right in and get started.

So today, as mentioned, we are going to be covering a cloud native approach to CI/CD, but there are, you know, very many things that fall under this world of cloud native and CI/CD, so we are going to scope that pretty definitively and look at some specific tooling in that space, in particular Argo CD and Tekton, and then from there we're actually going into some live demos. As Libby mentioned, we will try to get to all of the questions at the end, but since we want to make sure we can cover all of our material, we will take those once we have finished. All right, so let's dive right in. So as mentioned, we're going to kind of scope down within the world of both cloud native and CI/CD. In particular, what we wanted to focus on is deploying containerized applications to Kubernetes in the cloud. So we're looking, one, at this very container-first approach, and then also, as far as the platform goes, looking at these hosted or cloud-running Kubernetes instances. So let's start with the Kubernetes bit of it. So I love this, because it's the simplicity, the complexity, and the beauty of Kubernetes all in one. When we're looking at the power of the system, you know, hey, it's just a bunch of YAMLs, how hard can that be?
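Since the talk leans on that "it's just a bunch of YAMLs" idea, here is a minimal sketch of the kind of manifest being alluded to; the image and names are purely illustrative, not taken from the webinar's demo:

```yaml
# A minimal Kubernetes Deployment: one of the "bunch of YAMLs"
# a CI/CD pipeline ultimately renders and applies to a cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # illustrative name
  labels:
    app: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: registry.example.com/hello-web:1.0.0  # placeholder image reference
          ports:
            - containerPort: 8080
```

Simple on its face, and because it's plain declarative text, it can live in version control alongside everything else.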
Well, for all of you folks that have been running Kubernetes in different capacities, you probably have experienced firsthand just how not easy this can be. But when we're also looking at cloud native and CI/CD, that's actually one of the powers of having your configurations and your applications defined in code: those YAML templates now empower you to do a lot of really great things when it comes to continuous delivery and continuous integration. So, moving on. Now, when we're looking at the world of DevOps, you know, there are also very many things that fall under this DevOps world. We're looking at the slice that in particular falls under the realm of both release engineering and DevOps, particularly the continuous delivery and deployment part. But every team implements this in a slightly different way, and this means very different things across teams. So for some, we're looking at different processes such as, you know, agile, or stand-ups, or continuous integration. So, well, what does DevOps actually look like when it comes down to your organization? So as mentioned, we're going to now take a quick definition check. These all actually come from the CNCF glossary; it's an open source glossary, you can actually contribute definitions, but really these are different terms that fall under the cloud native world. So when we're talking about things like continuous integration, integrating code changes as regularly as possible, this is really looking at the build portion of your CI/CD and release pipeline. So building your images, going through that inner loop of development where you're taking your code, committing it to your repository, having builds generated, doing testing, and so on. And ideally this is something that is happening on an ongoing basis, very frequently, multiple times per day.
Now, what we're going to be focusing on today, as I mentioned, is really this second half of the CI/CD pipeline. One, there's the continuous delivery portion of it, where now it's taking those changes and the code that has been created in this inner loop of development and deploying that into your environment, where you can actually see and test out the code, or see the application running. And then a step further beyond, the other CD, continuous deployment, is now taking that software and being able to create methods and processes to run that software in production. So we'll continue on. There we go. So now, when we're looking at some of these characteristics or guidelines around cloud native DevOps or cloud native CI/CD pipelines, one portion is GitOps, and you may be saying right now, wait, we're talking about our code and build, doesn't GitOps have to do with infrastructure? Well, yes. So one really key principle is that your build and your release systems are actually systems of their own that need to be managed in certain ways, and you can manage those systems using a lot of the same methodology, such as infrastructure as code and so on, to be able to ensure the resilience, the scaling, and so on of your build systems themselves. So that's one key component: you have this tooling that can also be managed in very much the same way as your code is managed. Another really big principle, which also kind of falls in line with the GitOps methodology, is having a separation of both your application source code, so the actual functions and features and logic of your application, and the application configuration.
So maybe your destination, if we're looking now at the Kubernetes world, is the repositories with the actual manifests and so on, because now your application is a bit more than just the code that you're running; you actually have configurations such as the Services, the endpoints, any of your ConfigMaps and Secrets, and other configurations as well. So these are all components that now are decoupled from your application source code. When we're looking at the centralized DevOps tooling, that kind of goes back to GitOps and being able to have all of your configurations and infrastructure as code. We're taking a very container-centric development approach to really lean into the portability and the reusability of your systems. And essentially, with this portability, you're able to now decouple certain parts of your CI/CD pipelines: your build and your deployment pipelines don't necessarily have to be tightly coupled; these can be decoupled as well. Automation, you know, as another key DevOps principle across the board, is also very important in the cloud native world: the frequent merges and deploys. And additionally, looking at that movement of being able to shift left, and do more of the testing and security early on in the build process rather than after the fact when something is already running in production, that security and testing is now going to be baked in across the board.
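One common way to realize that source/config separation is a dedicated config repository where only the image tag changes between releases. Here's a small, hypothetical Kustomize sketch of such a repo (the repo layout, registry, and image names are illustrative assumptions, not from the talk):

```yaml
# kustomization.yaml in a separate app-config repo (hypothetical layout).
# The application source code lives elsewhere; the build pipeline only
# bumps the image tag here, keeping build and deploy decoupled.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: webapp
resources:
  - deployment.yaml   # base manifests checked into this config repo
  - service.yaml
images:
  - name: registry.example.com/webapp   # image produced by the (separate) CI pipeline
    newTag: "1.4.2"                     # the only field a release change needs to touch
```

With this shape, a deploy is just a one-line Git change, which is exactly what makes the GitOps-style auditing and rollback described above possible.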
Thank you, Mariel. And so one thing we want to talk about that's out of scope, at least for this conversation, not for the concept of DevOps in general, is deployment strategies. We're not going to really talk about blue-green, canary, rolling updates, deploying functions, ML, data pipelines. DevOps is kind of a large scope, so a lot of times the concept of DevOps can be integrated into other forms of computing, not just in terms of deploying applications but functions, ML, data pipelines; we'll talk a little bit about that conceptually but not in depth. And we're not going to talk about infrastructure automation, so, you know, infrastructure as code, anything like that. This is all stuff that can be covered under DevOps; we might do a webinar series or blog series on it later, but for the purposes of this conversation we won't be covering that. So, we've been talking about DevOps, and this is called a CI/CD platform webinar; what's the difference? One thing I always point out is that DevOps isn't really a tool. When we hear the word we often think of it as a tool, or, you know, you might go on to LinkedIn and look up DevOps engineer jobs, stuff like that. DevOps is really more of a philosophy and a platform: a philosophy, and an execution of said philosophy into a set of tools and whatnot. It's how we decide to deploy applications, and how we make that work in such a way that it is repeatable. That's why we have kind of this infinity symbol here, where we go from planning to coding, building, test, release, deploy, operate, monitor, and then we start all over again. And that's how we continue to iterate and improve our application, fix bugs, all of that, and continue to have that application constantly evolving over its life cycle, and then also integrating new applications or new workloads into the life cycle over your entire stack. This is not an exhaustive list by any stretch of the imagination, but here's a bunch of DevOps tools, provided by devopscube.com, and this kind of gives you an idea
of how extensive DevOps is as a philosophy, as a platform. These are the tools that we use to execute DevOps within our organization, in the same way that we might use certain systems and networking tools to execute infrastructure in our system. So again, a non-exhaustive list, but you might see a lot of names here that you've seen before. Most of these are open source, but there's also proprietary stuff out there. But we love open source here at CNCF and Google, and personally I've been a huge open source fan for decades now. So this just gives you a quick idea. You might see here that towards the top there are two particular ones, continuous integration and continuous delivery; those are what we're going to focus on mainly today in the DevOps stack. And I also took this screen grab here: if you go to landscape.cncf.io you get this, as I call it, eye chart of all the different projects and companies that are supporting the entire landscape of CNCF. Here in particular are the various companies and projects that make up the application development and CI/CD toolkit. Some of these, again, you might be very familiar with; two in particular we're going to talk about today are Tekton and Argo. But any one of these is great; we're not necessarily saying that Tekton and Argo are the end-all-be-all, but whatever works for your organization. That's why we say DevOps is a philosophy versus an actual platform; we don't say these are the absolute tools you need to use, this is how we believe in using it. Figure out the culture of your company, figure out the culture of your organization, and figure out what tools exist that fit that culture. And as you see, there's a large list of cloud native tools. And this is kind of a high-level idea of what we see the CI/CD process looking like, or the DevOps process, which CI/CD is part of, using a variety of different open source tools. So almost all coding begins in some kind of IDE, whether it is hosted and web-based
or on your own laptop, so something like VS Code or IntelliJ, but then there's also, you know, Replit and a bunch of different tools built into GitHub and GitLab. Then you have your source control, whatever you choose to use: GitLab, GitHub, Bitbucket, Gitea, so on and so forth. There is deploy; as you can see we have Argo and Tekton, but there's other tools out there, a large list of tools that you would have seen in the previous slide. Packaging and storing, security, runtime, operate. So for runtime, Kubernetes, or maybe you want to deploy serverless using Knative, Dapr, KEDA; these are all open source tools, by the way, or open source concepts as well. Now, this is a lot of words that I'm throwing out here; what does all this mean? Well, when we talk about cloud native, you know, we could have a huge long definition about it, but the way I always see it is: containerize all the things. Yes, that is a very reductive definition, but it's an apt definition. When we talk about cloud native we are talking about making containers and deploying them, because when applications are containerized they are faster, they're lean, you're bringing your own runtime, and they are easy to make portable across different platforms, whether it is Kubernetes, Cloud Run, you know, another containerized platform; as we're starting to see, the Docker, the OCI, image runtime is a very popular use case. So there's a lot of different ways you can use it. So when we talk about cloud native CI/CD, we're essentially using containers to build containers. So everything's a container, and so, yeah, we are using a lot of containers to build containers. That's what you will see with Argo and Tekton as we get into the demo: the different steps that actually build your application are containers themselves. And with that, I'm going to actually turn it back to Mariel to talk a little bit about Argo. Awesome, thank you very much, Jay. Um, so let's get into Argo, and we're going to go through things just from a very
introductory and high level here. But the best place to start is actually: what is Argo? Because, while we're focusing today specifically on Argo CD, the Argo project itself is a collection of open source tools that were purpose-built for Kubernetes, for everything from running workflows to managing clusters and so on, but using a very Kubernetes-centric point of view, and I'll talk about that a little bit shortly. So the Argo CD portion is really looking at this declarative GitOps model for continuous delivery within Kubernetes. But there are also some other components, such as Argo Workflows, which is the workflow engine using directed graphs and step-based workflows, Argo Events for event-based dependency management, and Argo Rollouts. So all of these tools can be used on their own or in coordination with each other, but they are all projects within the Argo project itself. So, really briefly, a little bit of the history behind Argo: it was originally developed at Intuit, kind of by way of a startup that had gotten absorbed, so within Intuit this project had started. It was officially added to the CNCF in 2020 and graduated as of the end of last year, in December. It has several contributors and has been really, really gaining a lot of popularity, due, in my opinion, to this very Kubernetes-centric approach, using the same and similar constructs to manage your applications within Kubernetes. And essentially, when we're looking at these Kubernetes-native tools, they're using that same grammar: the Kubernetes resource model, controllers, custom resources, and so on. And a little bit about the naming history: Argo is actually the name of the Greek ship that carried Jason and the Argonauts on their quest, staying in line with a lot of the Greek naming conventions within the Kubernetes world. And that is actually a photo of a literal argonaut, which is actually an octopus, and you can read more about it; it's a very
interesting animal. So, Argo CD, as mentioned; we'll focus on the CD part of this, where all of your application definitions, the configurations, everything within that environment is declarative, in version control. So the same structure, the same YAMLs and so on that you will be accustomed to seeing and using when you're managing and coding your applications themselves are going to be the same pieces that you can use to manage the application definition, your pipeline definition, and so on. So this creates a lot of ability for automation and auditing; you have the single source of truth for your configuration; and then also, even from a resilience point of view, when you're looking at being able to restore your configurations, all of these live in this version-controlled system, so you do have your backup within source control. And also you have the ability to get more granular as far as your actual access to the systems themselves, for your roles and so on, determining who actually needs to have access to the Kubernetes clusters themselves versus just to the code. So there's a lot of real flexibility there within the Argo CD world. And, I guess as another note as well, this allows for decoupling from your build systems. So within your workflows and everything you are able to do continuous integration, but, for example, if you've got a team and you're already invested in things like Jenkins, or you have other tools doing your builds, these can remain decoupled, where now you have a tool that has access and is set up within your cluster to manage your actual deployment into Kubernetes. And the way that this really works, and where this is really powerful, is that you actually now don't have to get a lot of these credentials and so on set up, in secrets with access to your cluster and everything, within your build systems; your build systems don't need to have that connectivity back into the cluster, which now creates that
additional level of security, reducing your risk footprint. With Argo you now have the server components, the API server, repository server, and application controller, living within your cluster, directly within your cluster, and so now it's doing this sort of pull-based model, and it makes it very flexible for being able to manage, as I mentioned, your security. And if we go on to the next slide, actually; so I'll talk about the architecture and patterns in a second, can we go forward one more, I have these reversed. So when we're looking at the patterns now of managing your Argo, you can actually have a few different models for this installation. You can have that entire server architecture, as shown previously, deployed within your application cluster, deployed directly into that cluster itself, and you can have each instance separated that way. Or you can have a single, maybe a main, instance of Argo deployed in a dedicated cluster, and then your various other clusters that are being managed by Argo just need to have service accounts that communicate back with the dedicated cluster. So you could have this distributed model where you have a deployment system that now is able to manage multiple clusters as well. So you have a few of these installation modes, and there also are various services out there that offer fully managed Argo CD as well, so that now it's just using a sort of agent-based model to manage your infrastructure. And we'll go into a little bit of what this looks like in the demo as well. So I think we'll go back one; I'm a little bit out of order. When we're looking at some of the key resources that you use to define your pipelines within Argo, there's this concept of an Application, and you can see a sample of that on the right. And this Application is essentially an instance of, you know, whatever that application, that business application, is; it's essentially just
defining the Git source and the destination namespace. So it's kind of as simple as that: where do your actual manifests or definitions of the application live (and I'll talk a little bit about the different forms that that can take), and also, what cluster should that go to? So it's using this very declarative, Git-centric model; now you can do things like control, via pull requests and Git access, who can deploy to certain environments and so on, without necessarily having to deal with the roles and permissions within your Kubernetes cluster. Then we also have the idea of a Project, which is essentially a group of applications; a Repository, which is the source repository, as you can see here it's just using a GitHub repo, but the repository and the credentials are defined there as a Kubernetes Secret; and then, when you're looking at the manifests themselves that are in your source, those can take the form of just your standard YAML, or you can use things like Kustomize, Helm, or Jsonnet for your application definitions.
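The slide's sample isn't reproduced in this transcript, but an Argo CD Application resource of the kind described looks roughly like the following sketch; the repo URL, path, and names are placeholders rather than the demo's actual values:

```yaml
# An Argo CD Application: "which Git source, to which cluster/namespace".
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook              # illustrative application name
  namespace: argocd            # Argo CD's own namespace
spec:
  project: default             # the Argo CD Project this app belongs to
  source:
    repoURL: https://github.com/example-org/app-config.git  # placeholder config repo
    targetRevision: main
    path: environments/prod    # where the manifests (YAML/Kustomize/Helm) live
  destination:
    server: https://kubernetes.default.svc   # deploy to the local cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

The `syncPolicy.automated` block is optional; without it, Argo CD reports drift but waits for a manual sync.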
Alright, as far as the setup goes, it's actually pretty quick and straightforward; the documentation on the Argo CD site is really great, so kudos to them. But essentially, once you have a cluster that you'd like to get this set up on, it's just a matter of creating the namespace and applying the YAML, which is here on the Argo project GitHub, and then you're essentially up and running. You can pull that down and customize it, or, if you want, do a version lock and pin it, but this is really just the quick start. Alright, and then as far as using Argo, and we'll really dial into this when we demo, there are kind of three main modes. One, there's the argocd command line tool that allows you to interact with the Argo CD server, view your application lists and so on, which is really flexible for doing some scripting. There is also a dashboard that is built in; once you get that Argo server deployed, as we did in the last step, it's just a matter of exposing it, either through, you know, some type of Ingress, or port forwarding, or so on, but that dashboard is already built in and available for you to have a UI-based way of interacting with the server. And then there is the Argo API as well. So from here, I believe we will hand over to Jay to talk about Tekton.

Thank you very much. So, yeah, we talked a little bit about Argo; now we're going to talk about Tekton. These two are not necessarily exclusive of each other; in fact, I have seen a lot of different industries, and you'll see a lot of examples online, where people actually use them together: they use Tekton for the CI portion and Argo for the CD portion of their DevOps pipeline. So let's talk a little bit more about Tekton. This is actually a pretty famous tweet by Kelsey Hightower, when he was talking about Kubernetes; he said Kubernetes is a platform for building platforms, it's a place to start, not the endgame. So it's not the idea that, you know, oh, I have Kubernetes and it's ready to be deployed and all my applications are going to be
there. Instead, think of it as the building blocks to build the platform that you want to host your application on, rather than starting from the ground up with bare metal servers and trying to write your own orchestration tool to build that platform. Another way to think of it is: if you're trying to build, you know, a model house or something like that, it'd be much easier if you got an instruction booklet and the balsa wood and stuff, versus having to go out into the forest, cut down your own tree, and cut the lumber in order to make the sticks. So this is the platform for building platforms, not the endgame. In the very same way, I always try to say that Tekton is a platform for building your DevOps tool, not the DevOps tool itself. It gives you the same basic primitives that you will see in Kubernetes, to build on top of Kubernetes, specifically for the use case of DevOps, or CI/CD. So the idea is that it's composable, declarative, reproducible, and cloud native. Everything in Tekton is a container; it sits on top of Kubernetes. It is becoming an industry standard, and it's already utilizing the Kubernetes API, so it is standard in that sense, and it's also extensible; it's declarative, everything's extensible, and there are a lot of different benefits to it that we'll dive into. Oh, and even dogs love it: so we had a little Tekton cat there, and I left it on the bed while I was doing something, and she decided that it was her new toy, and I think she is right now building a pipeline to help get some of her dog food into the bowl without me there. Now, one thing I want to point out: Tekton is part of the CD Foundation. Some of you may have heard of it, some of you may not; you can think of it as a sibling foundation to the CNCF, so both are kind of child foundations of the Linux Foundation. They work together, but whereas CNCF has kind of a larger, overarching viewpoint on cloud native technology, the CD Foundation is really focused on continuous delivery, and continuous delivery exists in and outside of
the cloud: it can exist on premises, it can exist for MLOps, it can exist for data pipelines, so on and so forth. So that's why people decided that there's a benefit to having a separate foundation specifically for CD, and Tekton was kind of, if you will, one of the founding projects around it, in the same way that you can say Kubernetes was kind of the cornerstone project around CNCF. Tekton actually used to be part of Knative, which is a CNCF project, under the name of Knative Build, but eventually the creators, in this kind of very short story, decided that there was so much to be gained from what Knative Build was becoming that it really should be its own project, so it evolved into Tekton as we see it today. Tekton has multiple components; this is actually not an exhaustive list, but these are some of the key ones, and it is actually growing, because some of these did not even exist at the earliest stages. What most people know Tekton for, and if you were to say, hey, what's Tekton, people would assume Pipelines in most cases, and that wouldn't be wrong. But there's also a Dashboard, a CLI, Chains, which is supposed to help with your secure software delivery, Triggers, which is event listening, so automatically trigger if X event takes place, and of course a Catalog, which we will talk briefly about later. To give an idea of Tekton Pipelines, which is part of what we're going to be talking about: Pipelines is the main CI/CD component. And so what makes up a pipeline? You have a step; each step is the individual thing that happens in a CI/CD workflow, like run a pytest on an application, pull something from GitHub, push to a Kubernetes cluster. A Task is a collection of the different steps that are initiated in a single pod, so a step is a container that executes in the pod, and then a Pipeline is a collection of Tasks. One way to look at it is: here you have the pipeline, and this is the entire thing, like, I want to take my code from GitHub, do what I need
to do with it, test it, containerize it, push it to my Kubernetes cluster; that's the overall pipeline, that's the thing I want to happen. The tasks are the individual items, or the individual actions, that take place, each representing a separate Kubernetes pod. So you have a task for pulling from GitHub, you have a task for containerizing, you have a task for running tests, and then each task has its different steps. So, to give you kind of an idea, we're going to also talk about the Triggers component, which is for eventing. If it's just me working on a personal project, executing the runs of each pipeline isn't that big of a deal; if I'm a large organization, I want some automation, I want it to execute based on XYZ events. So we have what we call the Triggers component of the project, which works together with Pipelines. So you'll have an EventListener, which is a CRD that enables a declarative way to collect HTTP events with JSON payloads. There is now a new protocol called CDEvents; if you're at all familiar with CloudEvents, it's very similar: in the way CloudEvents is a common protocol for cloud communication, CDEvents is a common protocol for CD communication, and we are starting to see it integrated a lot more now. A TriggerTemplate declares resources for the trigger, you know, things such as the secret that I'm going to use to pull from GitHub and whatnot, and then the TriggerBinding binds the TriggerTemplate with the events and passes the parameters from the JSON payload. So, excuse the diagram here, we will look at a real one later, but this kind of gives you an idea of what a Task with steps looks like. If this looks very familiar to you, that's the point; it looks very similar to what I would do if I were declaring something in Kubernetes. Same thing with the Pipeline: very simple, I'm declaring all the different tasks that are supposed to execute in order. And it's pluggable, and that's one of the big things
too: each one of these tasks that make up a pipeline is an individual component, and it can just be moved from pipeline to pipeline across different projects. In fact, earlier I mentioned this catalog: Tekton has a Catalog available today that is a bunch of reusable and tested tasks; there has been testing done by the community, so there's some assurance you can rely on when using them. They're containerized tasks that already do a specific item: if I want to deploy a 10% canary, if I want to do a canary analysis, if I want to deploy to Kubernetes or pull from GitHub, instead of having to rewrite those tasks from scratch I can go to the catalog, pull them down, they're probably already 80% of the way there, I tweak them with my specific branch or whatever my variables or specific use cases are, and boom, it's ready to go.

So now we're going to take a quick moment to do demos. There are two demos here, one for Tekton, one for Argo; in the interest of time, because we do want to give you a chance for questions, I am going to do this really quickly, so bear with me right here as I get this going. Let's see, let's just do a window here. All right, let me expand this too. So this is Google Kubernetes Engine; it used to be called Container Engine. We're Googlers, so we use it, but I want to be very clear that this will work on any kind of Kubernetes platform; it's not just limited to Google, any kind of Kubernetes platform will work. I also want to highlight that Google is very committed to Tekton; we are involved in its development and we want to see it succeed. So here are some examples of what you would see with a Tekton pipeline. All of these are actually in the demos too: if you go to the Tekton GitHub, the Tekton Triggers and Tekton Pipelines repos, you can find a lot of these demos as well and play with them. But as you can see, this is just a Kubernetes object called a Pipeline, using the Tekton API; I can go ahead and name my pipeline, put a namespace, I set parameters,
what I want it to do, where it's going to pull from, all that good stuff, build the Docker image. So right here it's going to deploy something called event-display, and it's actually going to build that Docker image using Kaniko, which, if you're familiar with the project, is a way to build containers in your Kubernetes cluster, rather than having to pull the code down to a VM and execute docker build or whatever build tool you utilize. And then it deploys it as a pod. So here are all the different tasks and steps, as you can see. So if we decompose it, we have a task, and each part here is a step; this is actually a single-step task where it runs kubectl on the specific item. I create an Ingress; there are some RBAC settings that you can set up, which you will find in the documentation. I have an event listener here, which is listening with my GitHub secret; I'm not going to share my secret on a YouTube channel, but you understand the concept of secrets. And it will use my trigger binding here, which has the variables for the GitHub URL and whatnot, and the trigger template, which declares my GitHub URLs and everything like that. Simple enough, and I can deploy. So this kind of just shows you the benefits, or how it works. Let me see if I can actually get it to deploy, to run something here real quick; I should just be able to run the event listener, let me see here, this should work. Live demos are always fun; ah, I named it differently, if I preferred to call it that; the namespace, that is... okay, well, anyway, you gotta love live demos, right? So what it would do is display... let me actually go ahead and display the dashboard, that way you can actually see a good visualization here, so bear with me one second, because I did deploy that dashboard. Okay, oh, that's in a different pod; I can actually expose it simply by doing this. It's not port forwarding the way I wanted it to... well, this is fun. Oh, here we go, alright, there we go. And so here
So here we'd have the dashboard: we can see the pipeline runs, the tasks — I deployed in the wrong namespace, so apologies for that — the different deployments, the task runs, the custom runs that I've created, the event listeners, the triggers. I can also use the tkn CLI if I prefer to live in the terminal rather than the dashboard; that's perfectly fine, but I also have this nice dashboard if I don't want to live in the terminal, so I have both options available. In the interest of time, rather than have people watch me debug, I will update the GitHub repo to show you what I have deployed so you can follow along. I'm going to go ahead and stop sharing and give Mariel a chance to show her Argo CD demo. All right, perfect — let me go ahead and share the window over here. Thank you very much, Jay. So I do have a combination of a couple things that we're going to try live, but we've also got a fallback just in case. I'm here in my workstation — sort of a web-based workstation that I have for interacting with our Argo cluster. So just to start, the configuration that's laid out, or the simulation, is that I have one of my main clusters here, which is going to be the primary cluster running those Argo CD components, and then we also have two application clusters: this Argo app cluster, and then a brand new cluster that we're going to try to deploy to. So if we wanted to take a look right now at some of the workloads running within the Argo cluster itself, I'm going to pull this up, and we have everything here in the argocd namespace. You can see we've got our application controller, application sets, and some other components running, along with this guestbook UI — and I'll get back to that as well in just a little bit. So you can see here some of these different components that exist within the Argo world. But now let's make sure that we're using the right cluster. So I'm here in the cluster itself.
If I run this — all right, so you can see here that I have my Argo Applications, ApplicationSets, and AppProjects, which are now part of these custom resource definitions. So if we look at that, over here we're going to look at the actual applications themselves. You can see we have this very Kubernetes-centric method of interacting with the Argo CD server, so if you wanted to use a lot of these native commands, you are able to do that. We have the boutique application in this list; you can even see the health status — this one is degraded, but we have these other two that are synced and healthy. And if we wanted to go in and describe this application that has been set up, you can see that you have very similar definitions to the ones you would use for managing the actual deployments of your application logic. So you can actually even use Argo to manage Argo, since we have all of these same constructs — it's kind of this infinite loop, so to speak. So now we are going to attempt to add this new cluster. As I mentioned — so here, let me go back into Argo CD — you have your command line tool, and if we wanted to see some of the different commands available for Argo CD, you've got a lot of these things available to manage and interact with the server. I've already logged into this server that I have set up, so if I run, say, argocd cluster list, you can see that I have two clusters set up right now: the in-cluster one, which is the actual cluster that Argo is deployed to, and then this other remote cluster, so to speak — the Argo app cluster. So you could essentially build out a collection of all the different clusters that have been set up to be managed by Argo. Now we're going to attempt to add that new cluster, the one I've just created.
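Under the hood, Argo CD tracks each registered cluster as a plain Kubernetes Secret in the argocd namespace, which is why the fleet can also be managed declaratively. A sketch — the cluster name, server URL, and credentials are all placeholders, but the label and field layout follow Argo CD's documented cluster-secret format:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: new-app-cluster            # hypothetical cluster name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # marks this Secret as a cluster entry
type: Opaque
stringData:
  name: new-app-cluster
  server: https://1.2.3.4          # placeholder API server URL
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-cert>"
      }
    }
```

The `argocd cluster add` command shown in the demo effectively creates a Secret like this for you, along with the service account it references on the target cluster.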
To be able to do that — and of course this is just one method of interacting with Argo CD, a little more command based; what this would look like as you're running in production is that you would have various YAMLs and configurations defining your setup as code, and we'll look at that in a moment — but if we did want to add in that new cluster, we would run the cluster add command. You can see we've just got some service accounts and roles and so on being created in the new cluster, and now it has been added in. So when we go back to our cluster list, you can see we have both the Argo app cluster and this new cluster in here; there are no applications on it and it's not being monitored just yet. So if we wanted to create one via the command line — and actually, let me update it so that we are using this here for the destination server — we're just going to deploy into the default namespace, keeping this really basic for now. There are actually various sample applications available on the Argo project site, so I'm going to page over here. When we look at these example apps, each of these folders is set up with the actual YAML and deployments and so on, some of them with Helm and Kustomize. Essentially, all you really need to do — I don't have pull or push access or anything to this repository itself, but because those YAMLs are there — is point to that folder and path as the source for the application, and define within Argo what the destination server is. So you can see we have the repository, the path, the destination server, and the destination namespace; by default the namespace should already exist, but there are also a variety of flags and options if you wanted to have it auto-generate the namespace for you. And we can just go ahead and try to create that. So let's do this, and now we've got this application, helm-guestbook, created.
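The application created here via the CLI can equivalently be written as a declarative Application manifest — roughly like this, where the repo and path are the public Argo CD example-apps repository she is pointing at, and the destination server is a placeholder for the newly added cluster:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: helm-guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    path: helm-guestbook          # folder containing the Helm chart
    targetRevision: HEAD
  destination:
    server: https://1.2.3.4       # placeholder: the newly added cluster
    namespace: default
```

Because the Application is itself a CRD, this YAML can live in Git alongside everything else — the "Argo managing Argo" loop mentioned earlier.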
I hadn't even gone into the application view yet, but you can see it just showed up — this helm-guestbook. So this is the actual Argo CD application interface, the dashboard. I have a few applications that I created previously — this remote guestbook, this guestbook, and the cluster one — and here is the brand new helm-guestbook that I just created. It's missing and out of sync, so what I'm going to do is set up a sync. Basically I'm synchronizing against the manifest in the repository. It's set up right now by default to sync manually, but of course this can be set up to sync on an automated basis, so that it's just polling the repo for any changes that might be happening. Again, this is very much centered on the deployment side of things; a lot of the CI — the builds and so on — happens before this point, so it's mostly once that manifest lands in the destination directory. But now you can see this is healthy and synced. We have a few different views here in this dashboard — as I mentioned, it ships out of the box with Argo when you do your baseline installation. You can see for each of these there's a source and a destination, and when you're looking at multi-cluster management, that's where things like Kustomize and some of these other templating structures come into place, to manage, say, your dev, staging, and production environments, or multiple distributed cluster footprints. So you have this sort of project view, and then you also have a bit of a higher-level view if we wanted to see what's in sync, what's out of sync, what's healthy, what's not. And you can see here, with this boutique application, I actually had intended for it to be in a healthy state when I first deployed it — it was more of a cluster sizing issue — but it actually is really helpful for demonstration purposes.
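The automated sync she mentions is switched on in the Application spec itself. These field names are the real Argo CD syncPolicy API; prune, selfHeal, and the namespace option are shown as common choices, not as what was configured in this demo:

```yaml
# excerpt of an Application spec enabling automated sync
spec:
  syncPolicy:
    automated:
      prune: true              # delete cluster resources removed from the repo
      selfHeal: true           # revert manual changes made in the cluster
    syncOptions:
      - CreateNamespace=true   # auto-create the destination namespace
```

With `automated` set, Argo CD reconciles on its own polling interval instead of waiting for a manual sync, which is what turns the repo into a continuously enforced source of truth.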
Because now you can see how Argo CD, with this very Kubernetes-centric and integrated point of view, gives you visibility into the system using the same kinds of constructs and objects you're accustomed to. You can see the deployment status, the application health and so on, and then we can also see where some services are experiencing issues. So if we wanted to go in here and see some events and logs, we're able to do that as well. Let me just pop over here — I'm going to wrap this up soon, because I know we wanted to save five minutes or so for questions — but I'll go back to my workloads. I see, okay, my ad service is broken. So — something you wouldn't want to do in production or another environment — I'm like, let me just delete this and see what happens. I'm going to delete this, and we'll come back to it in just a second. But what I also wanted to show: those were a few methods of adding an application, and of course you're also able to add one here through the UI, where you go in and create another app, another guestbook, with the project name and so on. One thing that is also great about adding it through the dashboard is that you have the native ability to see the YAML that's being created. So even though you're using the UI-based method, you now have this as a reference for the future. Say you want to go through the UI the first time because you're not sure of all the different parameters: you can build it out, have the YAML, and use that as a template going forward. And of course, this becomes part of that CRD, that actual definition within Kubernetes, so you can always output it as well. So that's one piece, and then the final piece is looking at this other method.
Now, for the new cluster, we have this definition — we just saw that within the UI, but here I have it for an application that we're going to deploy into this new cluster, the one that was just newly added into Argo CD. I've got this in the repository: we've got some information like the namespace and the server, the source, and the target revision — that's actually the branch I'm going to be using, because I want this to pull off a specific branch — and you can have some other parameters as well. Essentially, what I'm going to do now is take that YAML, make sure I'm using the Argo cluster and deploying into that namespace, and just apply it much like I would any other deployment. So now you can see I've created this new application, and if I go back in here and click — that was a little too quick — you can see we have this new boutique-prod configuration that was just applied. I'm going to synchronize all of these manually once again, and as you can see, this boutique application is syncing once more. And actually, if I go back here and refresh for that cluster, what should happen is that those two deployments I just deleted are back again, because it's taking that declared YAML and saying, okay, I'm going to sync. So even though I went in and deleted them, it's using that repo as the source of truth to deploy them again. And right now I could go and debug this — it just needs more resources on the cluster. So that was a very quick demo of some ways of interacting with Argo CD and how it really loops into the Kubernetes-centric point of view. I think from here, maybe we can open it up for questions. Okay, great, we have a few in the chat; let me back up a little bit. Okay: "to containerize, do we use OmniDec or Docker or any container platform?" So, for containerization, I believe we can use pretty much anything.
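A sketch of what that applied manifest might look like — the application name, repo URL, branch, and server are all placeholders standing in for her setup, but the fields match the Application CRD she describes (namespace, server, source, targetRevision):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: boutique-prod             # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/boutique-config  # placeholder repo
    path: manifests
    targetRevision: prod          # sync from a specific branch
  destination:
    server: https://1.2.3.4       # placeholder: the new cluster's API server
    namespace: boutique
```

Applying a file like this with `kubectl apply -n argocd -f app.yaml` registers the application; a sync (manual in this demo) then reconciles the cluster back to what the repo declares — which is exactly why the deleted deployments reappear.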
If you want to use Docker, if you want to use Podman, anything like that — as long as it creates an OCI image and follows the OCI image standard for the runtime, then you're fine. I personally use Podman on my machine. I'm not sure if that answers your question, but you can use anything; in fact, we mentioned a little earlier how you can use Kaniko to build containers in the cluster in lieu of Docker. All right, this is a long one, get ready: "I have implemented CI/CD using Tekton and Argo CD while creating a pipeline where I am working right now. What I've noted is that it takes more time to build images from Dockerfiles — specifically a JavaScript-based front-end application — than with any other tool when compared to Tekton. Is there any recommendation on how it can be improved? I'm using Kaniko for building the image. Secondly, while running the pipeline, it takes a huge amount of underlying node disk space; checking with Prometheus and Grafana, it can be noted that it consumes close to 90% of node disk space. Is there any way to improve this?" Without knowing a little more about your specific environment, it may be difficult to debug why this one in particular is taking longer. If you've got similar applications or similar images that use roughly the same stack and framework, and those build a lot faster, then I think dialing back and going in a stepwise manner to look at whether some dependencies or something like that are creating this really huge footprint would help. So I think it's mostly about optimizing and building a slightly leaner image. I don't have it handy, but there are some guides online about streamlining your builds and your images, maybe looking at the base images as well.
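On the Kaniko build-time question specifically: one knob worth trying, if it fits your setup, is Kaniko's layer cache, which can cut rebuild times for dependency-heavy images (node_modules layers, for example). This is a hedged sketch of a Tekton Task step, not the asker's configuration — the image destination and cache repo are placeholders, while the Kaniko flags shown are real executor options:

```yaml
# excerpt of a Tekton Task step running Kaniko with layer caching
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    args:
      - --dockerfile=Dockerfile
      - --context=/workspace/source
      - --destination=registry.example.com/app:latest   # placeholder image
      - --cache=true                                    # reuse unchanged layers
      - --cache-repo=registry.example.com/app/cache     # placeholder cache repo
      - --single-snapshot                               # fewer snapshots, less disk churn
```

Caching unchanged layers in a registry also reduces how much Kaniko has to materialize on the node, which may help with the disk-space symptom as well.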
There may be a few different things you can do to optimize that portion of the build itself, and I think that might address both issues: maybe there are some things happening in the background, or dependencies that aren't necessary for the actual application — how can you minimize it and make it a little more modular, for example. All right: "Another problem I faced is, if the pipeline is triggered more than once at a time, is there any way to create a queue of triggers, so that once a pipeline run is completed, the next in the queue is triggered?" I would answer — and maybe Jay can type in for Tekton as well — but the short answer is yes. There are ways to create these stepwise dependencies, or triggers based off of events, so that if you do have something that is required to run before another, you are able to define that sequence. Cool, yeah — you can usually link the final step of one pipeline to trigger the first step of another pipeline, and you can usually use event bindings; there are a lot of different ways to execute it, but it is doable, and there are some demos in the Tekton community specifically on that use case. All right, well, we are at time. I know there were a couple other questions left; if Mariel and Jason want to pop their handles into the chat, then maybe y'all can follow up with them after the show. But thank you, everyone, for joining us today, and thank you, Jason and Mariel, for an awesome presentation — lots of great interaction and conversation. I'll leave this up for just a minute while everybody logs out, so that y'all can be sure to get their info and follow up with them. Thanks again, both of you, and we'll see y'all again for another live webinar tomorrow. Thank you, everybody, for letting us spend some time with you. All right, thank you all — and for the questions that we didn't answer, yes, feel free to reach out to us.
We'll try to circle back on any of those items — 100 percent. All right, thanks, everybody. Bye-bye, thank you.