Let's get into the big topic of open source, something that we actually get to say about it: this is so awesome. We are an open culture that is actually able to improve the processes that we develop, or, let's say, that the communities around the system really bring. All right, and we are live. Welcome, everyone, to another exciting episode of GitOps Guide to the Galaxy. I am joined by my co-host here, Hilary. Hilary, how are you doing? Doing good. Yeah, doing good. Sorry, I was still muted. It happens. Yeah, I mean, the mute button moves all the time. I swear it does. It just constantly moves. Yeah, plus I've got so many dogs, three of my own and one on loan for a month, that I habitually mute just in case of barking. That's right. So yeah, so I'm joined by... You can get dogs on loan? Wait a minute. Dogs on loan. Yeah, dogs on loan. Yeah, it's like a rented dog for a month while your friend moves. I love it. You know, everyone says there's an Uber for X, right? We should do an Uber for dogs. That'd be cool. So we are joined here by a special guest, Jamie, from the University of Michigan. So why don't you introduce yourself real quick to those who aren't familiar with you, who haven't gone to the OKD meetings, who haven't gone to the OpenGitOps meetings. Sure. So I'm Jamie Maguera, and I am a longtime engineer, 25 years doing software and sysadmin stuff, and now DevOps stuff as they call it, about 25 years working in education research technology. I focus on higher education, and I've worked for the University of Michigan for about 18 years now. I started working with Kubernetes probably about 2017-ish and really just became interested in Kubernetes and the possibilities that came with that, which eventually led to GitOps, right?
And I'm a co-chair of the OKD working group. OKD is the community distribution of Kubernetes that is the sister to OpenShift, OCP, right? The commercial version from Red Hat. And I'm also a member of the OpenGitOps working group, and occasionally make an appearance in the Fedora CoreOS working group, and now, apparently, I'm going to be getting involved in the environmental sustainability working group that's going to be launching out of the CNCF. So a lot of working group stuff. That's it. Busy, busy, busy man here. So I'm excited to have Jamie on. The reason I had him on: he did a presentation at the OpenShift Commons Day of GitOps, and it was a fantastic presentation about what he's doing over there at the University of Michigan and all the stuff he's doing with the open source aspect of it, which is really cool. What I really like about education, and about what Jamie's doing, is that open source kind of has its roots in the education system, right? If you're a nerd like me and read history about BSD and Unix and Linux, all that stuff happened in universities. And open source software is still being used in universities. So I always like people who are using OKD, right? Using the open source version of OpenShift, and using Argo CD directly, which is the core of OpenShift GitOps. So that's always really cool, always really exciting. He did a great presentation there, and I thought, you know what? Let's bring him on and have him explain what he's doing there. Two quick notes, though, before I hand it off to Jamie and have it be the Jamie show. One: GitOpsCon is on. GitOpsCon is happening, right? It's on schedule, and the schedule is out. I'll be there, Hillary will be there. So we'll both be there at GitOpsCon and at KubeCon, so stop by and say hi.
And hopefully we'll see Walid there, right? Walid, I know I think you're going, one of our longest-time viewers here. So hope to see you there. Also, CFPs are open for ArgoCon. ArgoCon will take place in the Bay Area. It's not officially announced yet, so I won't tell you exactly where it is. I might get in trouble if I tell you what we decided on and it doesn't go through. But it will be in the Bay Area, the Mountain View area near San Francisco, right? If you're in the San Francisco, California, Bay Area. It'll be a three-day event, and CFPs are open. So if you're doing anything cool with Argo, please let us know. Jamie, maybe I'll call you out here, maybe you can do a CFP. It'll be a hybrid event, right? There'll be a virtual component and an on-site component. So check that out. I put the link for ArgoCon in the chat. So I think with that, I will pass it over to you, Jamie. Kind of just go over what you guys are doing over there at the University of Michigan with everything, right? OKD, Tekton, Argo. Everyone loves an end user story, and I'm pretty sure you get tons of questions all the time about how you're using things in production. So yeah. So you're right, there's a long history at the University of Michigan in terms of open source and standards and the technologies that underlie the internet and software development and whatnot. LDAP was in part designed here at the University of Michigan. The initial sketches for how the internet would be laid out, how the nodes would work out in the network architecture, the star network, were on a napkin at a hotel here in Ann Arbor, actually, during a conference very early on, back in the early 60s.
And so a huge history here at the University of Michigan. And currently there are a variety of colleges within the university that are utilizing Kubernetes and starting to embrace GitOps principles. So, for example, there's the College of Engineering; the whole health system is embracing Kubernetes and starting to embrace GitOps. And the area where I work is very much heading in that direction. So I work, within the university, for an entity called ICPSR. This is the Inter-university Consortium for Political and Social Research. We provide leadership and training in data access, curation, and methods of analysis for the social science research community, and we provide data for research. So it's like a clearinghouse, basically, of research data that other entities can use to fill out their research, and researchers can then submit their research into these repositories. So we write software that allows folks to submit research data and also to pull research data out, and also to do the sort of paperwork of it: signing off disclosures and stuff like that. And so this is something that is very web application heavy, and Kubernetes fits right in with that process. And then GitOps is almost essential, I think, in managing large endeavors such as this with a lot of microservices. And I think as we go through this, you'll see the different ways in which I've leveraged Argo CD to sort of maximize every aspect of this process. Some of the problems that we had at ICPSR: we originally started out with just a handful of developers and engineers, literally just under six engineers and a handful of developers. And then those respective groups have grown to dozens just in the past couple of years, as access to large amounts of data becomes more and more important in political and social research.
And so although development was being done in Git, the changes were not accurately synchronized to the clusters continuously. The deployments were not being modified and synchronized in real time like you really need to do to get that CI/CD going and really realize the benefits of CI/CD. And microservices were not deployed in unison for applications. Some microservices work really well as a globally available microservice: you have an API and everyone calls the API. In some cases, particularly if you're transitioning from old-world monolithic applications, you may not have an API that can service all of the different microservice clients from one instance. You may have to have separate instances running, and that was actually necessary to do at ICPSR. So you would have an instance of an API deployment for one application, and then we'd have an instance of an API deployment, maybe of a different version, for another. And this allows developers to have a little bit of breathing room to then get to the current version of that particular microservice. And promotion of applications through the stages was not as efficient, as I said, in terms of getting from development to QA, then what we call UAT, and then prod. We've since renamed our QA stage to system integration testing, because really that's the second stage, after developers have sort of played around; that's where we want them to really test their microservices against everything else. And GitOps is really important for this, because then the developers can make changes within the repos, Argo can sync those back out to the cluster, and DevOps engineers can make changes to the deployment configurations and Helm charts that get synchronized down, et cetera. Another issue that we needed to resolve is configuring external resources.
So if you have a microservice, it might have a database in AWS. It may be relying on traditional Apache Solr, which is basically a data indexing type of service, right? Or Elasticsearch, things of that nature. You might have those running outside of the cluster, and you need to interact with them. So we said, okay, we can automate the process of updating the configurations for the Solr and whatnot. But how do we make sure that the developers and the DevOps engineers can make those changes in Git and have the pipelines, and the pipeline runs, properly synchronized into the cluster, to then launch those pipelines to affect those external resources? You'll see what I'm talking about in a second. And then, of course, the underlying infrastructure of the clusters: getting those synchronized very quickly, getting them brought up very quickly. More and more we're seeing GitOps being applied to the actual infrastructure of the cluster as opposed to just applications, and I'll show you a little bit about that as well. A little bit of a note: I'm going to be doing a lot of talking and just a little bit of screen sharing. The reason for that is that ICPSR has a lot of government contracts, and a lot of contracts with organizations that are doing research that involves, in some cases, information that has to be signed off on, non-disclosures and things like that, or information collected from those requesting it. So for a variety of reasons, I will not be doing a walkthrough of our system; it'll be screenshots that have been sort of cleansed to be sure that I'm not exposing anything. So some sanitization of the screenshots, right? Got you. Right, exactly. We want to be respectful to the people who are supporting our work at ICPSR and follow along with that.
So a couple of the solutions that we came up with to really work with the GitOps model and with Argo CD: we separated out the application code from the deployment code. It used to be that each project, each microservice, had sort of everything all in one; the Helm charts were all in with the code. And Christian, I know you've got an opinion on that as well, and I share that opinion that those two should be separated out. So we separated out the application code from the deployment code. And we created an external dependency configuration, so that these external resources I was talking about, dependent resources in AWS, or VMs that are running, you know, Elasticsearch or Apache Solr, whatever, those configurations can be stored in the repositories as well, separate from the code. And we created Helm charts for the services and for the infrastructure as well. So I created Helm charts for various parts of the infrastructure that Argo can then synchronize. We can, for example, version our pipelines, and then Argo synchronizes those Helm charts for the pipelines out there, and those are versioned and we can keep track of them really well. And we use Argo in all steps of this to provide that synchronization. There are a lot of tools out there. We landed on Argo because of the feature set, because of the community around it, and because of the availability of the GitOps operator in OCP and being able to install Argo CD directly from its own community operator into OKD. So we landed on Argo CD, and I'll be focusing on that today, mostly. And so, in terms of clusters, let me share my screen here real quick. So what we have in terms of our clusters is, let's see if I can do this. Are you seeing the full screen or the presenter screen? I am seeing no screen yet. The presenter screen. Okay, let me flip that.
Let me actually do it this way. Let's see if I can get this to work. I've always liked the OKD logo of the pooping panda. Okay, so do you see the full one now or the presenter's? Still presenter's. Still presenter's? Okay, hold on. Let me see if I can do this. I'll try one more time. I'll do it like this then, just to get it. Okay, so there you see full screen. Yes, there you go. Okay, great. So we have a variety of clusters. We have development clusters that are OKD and OCP, and these are used for our dev and quality assurance stages, or, as I said, we're changing QA to SIT. And then we have production clusters that are OCP, and these are used for our user acceptance testing and production stages. So in essence, we have four stages. We actually have a fifth tangential stage, or a side stage, I should say, to production, which is testing and training. So for example, if a group that utilizes our applications wants to do a training of their employees, we have a separate stage for them to be able to use a copy. Yeah, so it's like a copy of production essentially, or like a sandbox copy of production. Right, a sandbox copy of production with some dummy data, basically, something that's not production data. So I don't know if you knew this, Jamie, but before I was a reliability engineer at Red Hat, I was a quality engineer for more than a decade. And so my little quality engineer heart right now, I'm geeking out so hard, just silently. Awesome. Yeah, so this is what we have in terms of clusters. In terms of the infrastructure, much of our day two, or as I call it hour two, stuff is syncing configuration: Helm charts of configurations. So we have Helm charts that contain manifests for Active Directory binding, backup configuration, project templates, console links, session timeouts, the cluster monitoring operator configs.
And we also have internal routes for the ingress routers; some of our stuff is in AWS, so we'll have ingress routers, and OpenShift spins up a new AWS network load balancer. And so using Argo CD, we're able to replicate those configurations across all of the clusters for things that need to be the same across all of them, and then also individualize them per cluster. So this is our use of Argo CD for that day two, or hour two, as I call it, configuration. And here's our overall process. You can see cute little icons, basically, that lay out how everything is working. We've got Helm, GitLab, Argo CD, AWS, and OpenShift in its various flavors here, and they're all talking to each other. And I think that's where Argo CD is providing that crucial step of communication between GitLab and the cluster. And there are a lot of webhooks in there that you don't see; I simplified this down quite a bit, otherwise there'd be arrows and lines going all over the place. What I'd like to call out here, I guess this is a question slash statement, is that I think it's pretty cool that you're running OKD in dev and QA, right? Which I think a lot of people do, but not a lot of people say that they're doing, which, by the way, I think is great. It's kind of like the nature of Red Hat's open source, right? We just use upstream bits; we just package the stuff, right? And we have engineers upstream. So I think it's kind of cool that you're running your tests with OKD and then doing production with OCP. Cool. Yeah. So let me delve into that for a second. I'm going to put on my OKD co-chair hat. Yes. Yeah. We're switching hats back and forth, you know. Exactly. So one of the reasons we're using OCP is because Fedora CoreOS, the operating system underneath OKD, is not FIPS compliant, right?
There isn't a switch that you can turn on to use the FIPS-approved encryption. And that's something that they're aware of; there's an issue opened up with the Fedora CoreOS folks on it. Whether that will get addressed in the future is unknown. But the majority of our work requires FIPS-compliant encryption, so that's one of the reasons. And the other reason is it allows the developers to just really bang around in things at a lower price point than licensing OpenShift, right? You just pay for the compute, right? Essentially you're paying just for the compute. Exactly, exactly. And there are a couple of operators, quite frankly, that are not available; for example, some of the operators like logging, I think, are not available in OKD. They haven't been created and provided in a community repo yet, because there are some specific Red Hat bits in some of those operators. The OKD working group is in communication with Red Hat on resolving that situation and making more of the operators available, more parity, I guess, between the operators. So, okay, switching back. Now I'm back. All right. Now you've got your University of Michigan hat back on. Got you. Yeah, exactly. And so here's an example of our Argo CD interface with just a few of the projects that we have, or apps as they're called in Argo CD, connected to projects. And so you'll see we're synchronizing pipelines, Helm charts that contain pipelines. We're synchronizing pipelines for builds within the cluster. We're synchronizing pipelines for those external resources like AWS databases. And so this is an example, I think, of going beyond just synchronizing and doing GitOps with your application deployments; it's that full picture of all of the aspects of it. And one thing this screen doesn't capture is sort of that day two stuff, but that's in there as well. I don't know if you're going to go over it, but I'm curious.
And by the way, for those watching, Hillary as well, and whoever, don't let me bogart all the questions, right? If you have questions, please drop them in the chat and I'll push them on over. But I have questions because I'm geeking out here too. He started talking about day two operations, and then my little heart started going. Yeah, exactly. And I'm like, oh, I've just got to let him get through this stuff. When should I interrupt? I don't know. So from my point of view, specifically for Argo applications, right, and you're running this in production: I wonder how you slice up what makes up an application? Because an application in Argo land, it can be whatever you want it to be, right? So some people put their entire stack in as one application. If I have a dumb three-tier application, some people just stuff it all in one application. Some people break it up into three individual applications so they can manage those individually. So I know you have some government stuff that you need to keep protected, but what can you tell us about what you guys do and how you came to that decision? Sure. So one of the things that we found is that some stuff works really well in a single application, like multiple Helm charts all in one. Let me just talk about how we do it. You'll notice that we have here destinations that are in a single cluster. For some things that we know will be across multiple clusters and stay the same, we will put them in a single Argo application, because in your project you can set multiple destinations, and then we attach that project to a particular application, and then we can go across multiple clusters. In other cases, for things such as pipelines for particular namespaces, we give those individual applications.
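A minimal sketch of the pattern described here, with illustrative names and URLs: an Argo CD AppProject lists the destinations its apps are allowed to target, and each Application attached to the project picks one of them.

```yaml
# Hypothetical AppProject: the project enumerates every cluster/namespace
# its applications may deploy to.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: cluster-config
  namespace: argocd
spec:
  sourceRepos:
    - https://gitlab.example.edu/devops/*
  destinations:
    - server: https://api.dev-cluster.example.edu:6443
      namespace: '*'
    - server: https://api.prod-cluster.example.edu:6443
      namespace: '*'
---
# An Application bound to that project; a per-namespace pipeline app would
# instead get its own Application with a single, narrower destination.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shared-cluster-config
  namespace: argocd
spec:
  project: cluster-config
  source:
    repoURL: https://gitlab.example.edu/devops/cluster-config.git
    targetRevision: main
    path: chart
  destination:
    server: https://api.dev-cluster.example.edu:6443
    namespace: openshift-config
  syncPolicy:
    automated: {}
```

Note that each Application still has exactly one destination; spanning several clusters with the same content means one Application per cluster (or an ApplicationSet), all constrained by the project's destination list.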
You'll see right in the upper left corner content service project pipelines, the content service's microservice. So that has its own specific application just for the build pipelines. And then we'll have a content service database pipeline, and then we'll have deployment resources. And those are all separate because they have different uses. Some of them are going to be deployed via Kustomize, some of them are going to be deployed as straight Helm charts, some of them are going to be deployed in the directory style, as they call it in Argo. So it really depends on the split of how many things need to be modified. And I'll actually get into that, and you'll maybe understand a little bit more when you see some of the Kustomize things that we're doing as well. Yeah, and that's another thing, like with Kustomize, I know there are a few ways to skin that cat, right? And what works for me may not necessarily work for you, because your production is going to look way different than mine. So I'd definitely be curious to see that. Now I'm geeking out harder. So thank you for that explanation. Sure. This entire stream, sorry, this entire stream is just Christian and I geeking out. Yeah. We just need to sit here and geek out the whole time. So this is really more for me than for anyone else, right? Yeah. Well, I hope other folks are enjoying this too. Okay. So here's an example going into the app details. You'll notice that there's a build pipeline that we synchronize, and that pipeline has a bunch of associated files. These are Tekton pipelines, in case that wasn't clear. And so Tekton pipelines have like four different components. They have an interceptor, which is like your webhook, basically.
And then they have a binding that connects a pipeline template to a listener, and then you've got the pipeline template. So there are like four different components involved in a webhook-triggered Tekton pipeline. And we have all of those resources in a Helm chart, and then we deploy that. So these are the resources for a build pipeline. We install it with Argo into the namespace for the particular microservice, then we copy the webhook route over to the repository as a merge-based webhook, and you'll see that in a little bit. And what's really cool in this case is that these build pipelines tend to be the same. And as we add, for example, more build tests into a pipeline, we just update the Git repository. Argo picks that up, picks up all the changes, and all of the individual cluster project namespaces for each microservice then get that updated pipeline, and all of the updated tests. So this is an example of not just the build itself, but the build resources, the pipelines, et cetera, taking advantage of that synchronization via Argo CD. And so here's an example of those pipelines for a build. I have what I call a gate, a merge request gate. For folks that don't know GitLab: when you do a merge, it will trigger the webhook with one of like four different actions. It's like open or create, approve, close, and delete, or something like that. And the webhook gets called regardless of which of those four actions it is; you have to actually read the payload to tell. So our pipeline has a gate, basically, that I wrote, that looks for a particular action before it runs. And then we clone the repository, get some meta information about the repository, generate a location for the image, then we do an S2I, source-to-image, Java build, and then we tag it.
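Jamie's gate is a task he wrote inside the pipeline itself; purely as an illustration of the trigger-side wiring he describes (interceptor, binding, template, listener), here is a hypothetical Tekton EventListener that does a similar action filter with the built-in GitLab and CEL interceptors. All names, the repo, and the secret are made up.

```yaml
# Illustrative Tekton Triggers wiring for a merge-request-driven build.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: content-service-listener
spec:
  triggers:
    - name: on-merge-request
      interceptors:
        - ref:
            name: gitlab            # validates the webhook token and event type
          params:
            - name: secretRef
              value:
                secretName: gitlab-webhook
                secretKey: token
            - name: eventTypes
              value: ["Merge Request Hook"]
        - ref:
            name: cel               # a trigger-side "gate": only some actions proceed
          params:
            - name: filter
              value: body.object_attributes.action in ['open', 'update']
      bindings:
        - ref: content-service-binding    # TriggerBinding: extracts payload fields
      template:
        ref: content-service-template     # TriggerTemplate: stamps out the PipelineRun
```

A gate task inside the pipeline, as Jamie does it, has the advantage that the skipped run is still visible; an interceptor filter simply never creates the run.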
Christian, this is sort of similar to, I think you did a show last fall where you showed this sort of process. Yeah, I was going to comment that this is similar, because Tekton doesn't currently have a gating feature; essentially what you need to do is write the gate for a PR into the pipeline yourself, right? So I was going to say, I'm glad that I'm not crazy, and someone else is doing this in production. So this is great. Yeah. And so what I did is I wrote a merge request component and also a merge request gate task, and then I also wrote one for comments, so that you can basically do bot-type stuff. So we have commands that you can issue as a comment on a pull request to trigger an update to the Helm chart, which then in turn gets synchronized to the cluster. And so you start getting into sort of Git bot type stuff, with Argo CD playing that synchronization role there. We do a fair amount of this in the services SRE org, which is the layer of SRE that I'm in at Red Hat. And yeah, we have fun with bots and this exact type of stuff. Everything comes down to, you know, simple commands, and magic happens. Yes. Yeah. It's very cool. So with GitOps, you've got the one component where you're constantly syncing with what's in your repository, but there are times when you need to have commands issued to get something done, and you want those commands stored in Git, basically, so that it's recorded that someone issued this command at this time. So there are two benefits to doing sort of Git bot type stuff. It also bridges the imperative tasks and the declarative tasks. There are going to be some imperative steps that you're going to need to do, and so I think having those bots and those integration points is extremely helpful, especially when you need to run a few commands.
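One assumed way such comment-driven bot commands can be wired up with Tekton Triggers (a sketch, not necessarily how Jamie implemented his comment component): a CEL interceptor that only fires on GitLab note events whose text starts with a slash command.

```yaml
# Hypothetical trigger for "Git bot" commands: fire only when a GitLab
# comment (a "note" webhook) begins with a slash command such as /retest.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: bot-command-listener
spec:
  triggers:
    - name: on-slash-command
      interceptors:
        - ref:
            name: cel
          params:
            - name: filter
              value: >-
                body.object_kind == 'note' &&
                body.object_attributes.note.startsWith('/')
      bindings:
        - ref: bot-command-binding    # would pass the command text to the template
      template:
        ref: bot-command-template     # would dispatch on /retest, /cancel, etc.
```

The binding and template names here are placeholders; the point is that the payload, not the webhook URL, decides whether anything runs.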
Some things you just can't do declaratively; you just have an imperative task to do. Yeah. There's one that we use pretty commonly, it's slash retest. It's very simple, easy, straightforward, but just having that as an option makes life a lot easier when you've made a little mistake, or forgot to rebase before your PR and you have to go back and change some configs. You don't have to close and do a whole brand new merge request; you can just use the bot commands to refresh things, retest them, restart that whole process of getting things to a final state. Yeah. Also slash cancel: lifesaver. Lifesaver, please cancel. Like, you know what, we're all human. We all make mistakes. That's right. Write in that slash cancel function, dudes. It's worth it. And so here's just a quick look at deployment resources. We have separate repositories for each microservice. Each repository has a Helm chart for the code. The Helm charts are packaged and published to your registry. And then the application-level repos and Helm charts allow for quickly collecting microservices into an application. Christian, I know you've talked about this before on the show; essentially we're talking about Helm charts that are sort of parent Helm charts that reference child Helm charts, subcharts, right? So, actually, sorry, quick side note, I found out what they're actually called. They're called umbrella charts. For those who are curious what the technical name is: they're called umbrella charts. It'll help in your Googling, so just a little tip. Excellent. That's fun. Because when we were doing our Helm talk, Christian, it's actually in the documentation, or was at the time.
I did see umbrella charts referenced, but I also saw parent-child nomenclature used as well. Yeah. So it's a good way of kind of folding in your dependencies, right? If you have a lot of Helm charts but you want to reference them as a unit. I do that a lot; it's very helpful. Yeah. And so we have a Helm chart library that I created that allows us to create Helm charts really easily. It's even easier templating, because you can basically drag and drop a template from this library that I created and then add some values, and you don't even have to mess with Helm templates very much. And all the deployment repositories have pipelines and pipeline runs, like I mentioned, that get synchronized. Here's an example of the deployment repositories with my ubiquitous taco deployment for testing purposes. You always have to have a taco repository for testing. And these are the umbrella Helm charts, right? You'll see on the right-hand side different microservices with different versions that are all Helm packages. And so this allows that umbrella chart to pull in dependencies of different versions for particular applications. And you'll see I've got some testing scripts in the repo as well to make sure everything works. And this is the template library that I talked about: basically, very easily being able to create secrets of particular kinds, et cetera, and then pulling that into an umbrella repo. And so, as I mentioned: new builds using Tekton, modifying deployment resources and publishing the Helm charts, and Argo continuously monitors the Helm charts and other resources and reconciles as needed. And then also, as I mentioned, Terraform for AWS, email templates for AWS SES, schemas for databases.
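The umbrella-chart idea can be sketched as a parent chart's Chart.yaml that pins each microservice subchart at its own version. Chart names and the repository URL here are illustrative, riffing on the taco example:

```yaml
# Hypothetical umbrella (parent) chart: the application-level chart
# declares each microservice chart as a dependency at its own version.
apiVersion: v2
name: taco-application
version: 1.4.0
dependencies:
  - name: taco-api
    version: 2.1.0            # each microservice can ride a different version
    repository: https://helm.example.edu/charts
  - name: taco-frontend
    version: 1.7.3
    repository: https://helm.example.edu/charts
  - name: taco-worker
    version: 0.9.1
    repository: https://helm.example.edu/charts
```

Running `helm dependency update` pulls the pinned subcharts into `charts/`, and Argo CD can then reconcile the whole application as one unit.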
These are all the external types of resources that we want to manage via the pipelines, and we use Argo to synchronize the pipelines, and actually the pipeline runs, for that. And here's Kustomize. This goes a little bit to what we were talking about, Christian. We found Kustomize is best suited for singular resources that are not part of a collection: you can modify pipeline runs existing in the deployment configuration repositories, and you can modify YAMLs which require only very small changes. So we would break out into a separate application in Argo something that, under these circumstances, would use Kustomize, because of what you see listed here. And, you know, Helm charts for managing pipeline resources. Here's an example at a high level. There are those four files that I was mentioning, and Argo CD can reconcile against the chart for full GitOps, right? Syncing the pipeline resources. This is an example of a pipeline file here. Argo just copies these over within the Helm chart. Some of them we actually just do as individual pipeline runs separately that are in the repos. And I understand there's some new stuff coming from Tekton that actually puts pipeline runs in the repos themselves. Christian, you may know about that. Yeah, yeah. So actually, I was thinking about maybe having a show on that. It's called Pipelines as Code, right? And Pipelines as Code in Tekton is kind of what it sounds like: Tekton reads these configurations from a Git repo. You just provide a Git repo, and it can interact with your repo to be able to do these things. What's interesting is that you're actually storing the pipeline runs. Because what I end up doing is I always do a trigger template, or a, you know, I forget what they call it now, the object.
But anyway, with a TriggerTemplate you basically templatize things: it takes a payload from the Git repo, parses it out, and then runs that pipeline for you. So I'm interested — are you storing the actual PipelineRuns themselves? In some cases, yes. It depends on the underlying storage needs, and it's because of this thing with Kustomize that's a bug or a feature, depending on how you look at it. Kustomize uses the name metadata for its modifications. And if your PipelineRun needs to use generateName — say that run won't be deleted — here's the issue. If you're going by name with Kustomize, and you give it a name like you see here, it won't run properly in OpenShift. If you run Kustomize and pull this in again, and the previous run hasn't been deleted, it won't run again, because it has the same name. There's a little hack you can do using sed: you run your Kustomize, pipe it to sed to pull out the name metadata and turn it into a generateName item, and then feed that in. But obviously that wouldn't be something we'd do through Argo. So that's a case where we wouldn't use Argo: when we need unique names, because we don't expect the PipelineRun to be deleted before the next time it needs to run. Yeah. And we are going off on a tangent, but it's part of the geeking out, right? I think you can provide both — you can provide name and generateName in the same manifest. Can you? Okay. So if you supply both — and this is kind of a hack, maybe they took it out — when you apply the manifest, generateName takes precedence. But if you have name, you can reference that name in a Kustomize file.
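The sed workaround Jamie describes might look something like the sketch below. The PipelineRun content is illustrative (in the real setup the input would come from `kustomize build .` rather than a file written inline), but the transformation is the same: rewrite the fixed `metadata.name` into a `generateName` prefix so each apply creates a uniquely named run instead of colliding with the previous one.

```shell
# Write a sample PipelineRun like the one Kustomize would emit, with a
# fixed metadata.name that would collide on the second run.
cat <<'EOF' > pipelinerun.yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy
spec:
  pipelineRef:
    name: build-and-deploy
EOF

# Rewrite the top-level "name:" (two-space indent, i.e. metadata.name)
# into a "generateName:" prefix with a trailing dash. The deeper-indented
# pipelineRef name is left untouched.
# In the real pipeline this would be:
#   kustomize build . | sed '...' | kubectl create -f -
sed 's/^  name: \(.*\)$/  generateName: \1-/' pipelinerun.yaml
```

Because the anchored pattern only matches a two-space indent, nested `name:` fields (like the `pipelineRef`) survive unchanged, which is exactly what you want here.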
So you know how, when you're patching in Kustomize, you have to reference the name so it knows what it's patching? Yeah. So you can reference the name, but when you apply it, generateName overrides name. So you can. I was searching around and came across — I think it was two years ago — someone in the Tekton issues having a back-and-forth conversation about this, and I didn't get the sense it ended up with a resolution, or that it was even possible. Okay. Well, I think it's a workaround, because ideally, in Kustomize, you should be able to reference a generateName, right? All it is is a prefix — who cares what the suffix is? That's applied at runtime; Kustomize is never going to know it. So anyway, test that out and we'll see. I know that was a workaround I had. Yeah, exactly. And so here's syncing pipeline resources via Kustomize. If I update that template library, all of the microservices that use it need to be updated as well, and I can use Kustomize, run from Argo, to change the references in all of those. And here are pipelines for external resources. This is deploying a database in AWS, so this is something where we can use Argo to synchronize the Pipeline, and also the PipelineRun itself, for a particular application and a particular database for that application or its microservices. Here's an example of deploying databases, a kustomization for that, with the different stages there, and then that can be synchronized. And this is a publishing-package example. So, as I've mentioned before, the result — in particular using Argo CD for this — is that the microservices can be reused really quickly and efficiently, speeding up the creation of the web applications.
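One way the "update every microservice's reference to the template library in one place" step could look, as a hypothetical sketch (file paths, kinds, and version strings are invented for illustration, not taken from Jamie's repos):

```yaml
# kustomization.yaml — bump the template-library version that every
# microservice Pipeline references, with a single targeted patch.
resources:
  - microservices/taco-api/pipeline.yaml
  - microservices/taco-frontend/pipeline.yaml

patches:
  - target:
      kind: Pipeline          # applies to every Pipeline listed above
    patch: |-
      - op: replace
        path: /spec/params/0/default
        value: chart-library-0.6.0
```

Because Argo CD can point an application at this kustomization, changing the one `value:` line in Git propagates the new library version to every microservice on the next sync.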
Configuration for external dependencies is deployed consistently. Developers can build new applications without server administrator intervention, without DevOps intervention — including sort of one-off instances of functionality — because Argo can be running in the background and synchronizing their changes from the Git repository. They don't even necessarily need to know how to modify things within the cluster itself. And the process of promoting applications through the different stages is efficient and way less prone to errors. Resource usage can be tracked at a higher level of resolution. Even for those commands we were talking about — if you're issuing commands to trigger or modify things, those are stored in the repo, which in turn can be synchronized to the cluster in various ways. And clusters themselves can be scaled up really quickly by using GitOps principles and Argo CD for that day-two stuff — or hour-two, as I call it. And so, next steps: separate out deployment code and create more top-level applications. These are things that DevOps and developers can do to embrace GitOps and utilize Argo CD to its fullest, I think. Create top-level applications — the umbrella application Helm charts — and clarify the delineation between software for one application and software for multiple applications. Some microservices might be used for only one application; others, like health checks, might be used for multiple. There's a mental shift to thinking of the microservices you create as being consumed not just by users, but by other development teams as well. And think and communicate in terms of Git commit hashes.
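Communicating in commit hashes pairs naturally with pinning an Argo CD Application to an exact SHA, so "what is deployed" is a precise Git revision rather than a moving branch. A minimal sketch — the repo URL, path, namespace, and SHA here are all illustrative placeholders:

```yaml
# Hypothetical Argo CD Application pinned to an exact commit.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: taco
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.edu/deploy/taco.git
    targetRevision: 9c4f2ab3d1e07f6a8b25c9d4e3f1a0b7c6d5e4f3  # commit SHA, not "main"
    path: chart
  destination:
    server: https://kubernetes.default.svc
    namespace: taco
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With the revision pinned, the question "what's running in the cluster?" always has a one-line answer: the hash in Git.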
This is a great thing for synchronization with Argo CD and with Tekton, because when you start thinking, coding, and synchronizing in terms of Git hashes, it becomes very clear what is where: what has been copied over to the cluster, what's been built into a container, et cetera. And improve your documentation — READMEs and change logs — to better understand how these pieces work together. Because if you're using Argo CD for your synchronization, Tekton for your pipelines, Git, and commands — and if you're using the automated sync of Argo CD, really fully realizing that continuous reconciliation of GitOps principles — you're going to want to document that, so people know that when they make a change, it will automatically show up in the repo and show up in the cluster somewhere. It's good to let folks know that. And here are some resources folks can utilize. I didn't change the title on the bottom of the slides, but you get the gist. And of course, thank you. Yeah, no, thank you. I have a ton of thoughts, but first I'll hold to see if Hillary has any thoughts or questions. Also, those watching, drop your questions in as well. Yeah, I have so many thoughts. Some of it I don't know if you can even answer — some of the logistical splits between applications that need to run more monolithically versus applications that can be made up of a series of microservices. I don't even know if you can answer that. Most of my questions are going to be around stuff like that, because it's super fascinating. Sure. So some applications are monolithic out of the sheer fact that they were written years ago and haven't been ported over yet.
They are monolithic because they were written monolithically, and breaking them into microservices is a task that would take many development teams many hours, and we just haven't gotten there yet. We're in the process of moving over a lot of the old monolithic applications, breaking them where appropriate into microservices, and moving them into the cluster. In some cases we're taking whole applications and moving them into the cluster. We often talk about Kubernetes hosting microservices, but it can also host monolithic applications, in a way, and there are times when that's appropriate. I've got Sam Newman's books, and I think Sam Newman does a good job of pointing out that not everything is a microservice — or not everything should be. You want to do a cost-benefit analysis: what are the benefits, and what are the downsides, of doing this as a microservice? We have a whole set of criteria for that, which would probably take us right through another show to go through. We really analyze the benefits and the downsides of switching to a microservices approach. Right, that would be a really fun show. It would definitely be. I was just thinking — first of all, that would be a series, right? Not even one episode; that could be a whole series of talks. And I think sometimes you just have tech debt. Like you said, it's a cost analysis: do we spend the time and money to break this up into microservices when we're not really going to see much benefit from it? I think a lot of where that benefit question comes in is something like — and this is a shout-out to Andrew, who's watching — OpenShift Virtualization, right? Where maybe, instead of, like, running —
So, because Hillary and I had this conversation on Slack about something completely separate — about why you'd shove your monolith into a container when that may not be the best move. Which is where something like KubeVirt comes in, right? And I guess, Jamie, you would use KubeVirt rather than OpenShift Virtualization. But OpenShift Virtualization has that aspect of: well, you can still kind of manage those types of things in their own native operating system, and just let them live there. Yeah. Because some things you do need to get off of the old infrastructure, and maybe you can't break that application up just yet, but you need to get it off the old infrastructure. In that case, maybe doing it that way, in a VM, is the way to go. Yeah. I think my favorite thing about what you just took us through — and this is always my favorite thing — is that there are several different technologies you've orchestrated so that you're applying the best tool for each job. You didn't go over it in your slides, but I know you mentioned you use Operators, and I see you using Helm charts. Christian and I have done a talk about these two topics before; these are things I work with every day. You've got Kustomize and Argo, and everything is being used to its best potential, which I really love, because I think there's a tendency for people to stick to one tool because they know it, and sometimes that means putting a square peg in a round hole, right? I just really loved that this tour you gave us used the best tool for each job, even if there's technically a Venn diagram of overlapping functionality. I really appreciated that, and that it's all orchestrated so cleanly. And also a shout-out to Rick — hi, Rick! We used to work together before he left Red Hat. Hey, Rick.
I was actually going to mention that too, but I didn't want to interrupt you even further. I also like that fact. I've had the "Kustomize or Helm?" discussion, and — to borrow from Andrew — it's a "yes, and" conversation, not a "but." Using the best tool for the job is always the way to go, and taking us on that journey is something that stood out to me as well. I was going to comment that you're using a little bit of both, just playing to the strengths of each tool for what you're trying to do. I think that's another great takeaway. Excellent. Yeah, and that's something that does take a little bit of work, because you have to familiarize yourself with all those tools, and you have to get your teams — your DevOps teams and your developers — familiar with the different ways things are going to be done. So there's a bit of a learning curve, because you have to learn all the tools, but it's worth it, I think, to use the strengths of all of them. Yeah, for sure. And of course I missed my mute button, sorry. So, if anyone has any questions, please drop them in. We're almost at time. Again, thank you, Jamie, for joining us. I always enjoy end-user talks. People always enjoy end-user talks — they love seeing how these things work in production. I personally like seeing some of my suggestions being used. That's the Leo in me: seeing my stuff actually being used. Oh yes, someone has an interesting question about security. I know we've had that acquisition of StackRox, which has been open-sourced. I don't know if you've been looking into that, or have been using other tools.
Like Snyk, or — I forget what the other one is — Black Duck? Something with a duck. Anyway, are there any specific tools that you're using? We are using a handful of tools. I can't really talk about them, but it's a combination of open source tools and commercial tools that we utilize. Yeah. I had Michael Foster on, who's part of the StackRox community — he's the ACS guy here at Red Hat — talking about supply chain. Maybe it'd be cool to revisit supply chain: that whole factory of delivering applications and constantly scanning. I think that's an important thing. Black Duck — is that it? Okay, yeah, Black Duck, that's what I was thinking of; I knew it had something to do with a duck. So thank you. So, unless there's anything else — Hillary, do you have anything else? No, I mean, you just started talking about a bunch of stuff that I deal with every day, and I'm like, we're out of time! We need to talk about supply chain. That's right, I have thoughts. Oh yeah, so of course I drop a very important topic right as we're going out. That definitely means we need to revisit it. Maybe we'll bring someone on; maybe Hillary will drop some knowledge on us. Cool. So, Jamie, anything else you want to talk about? Tell us where folks can find you. Sure. We can put my email in the description of this one when it gets posted, and folks can have it — it's basically just jamielm at umich dot edu — and I'm happy to answer any questions folks have.
If you want to reach out, you can find me in the OpenShift Slack, in the Kubernetes Slack — in the GitOps channel there — and in the OpenGitOps channel on the CNCF Slack. And check out any of the working groups I'm involved with: the OKD working group, the OpenGitOps working group, and the Fedora CoreOS working group. A lot of great stuff happening in all those places, for sure. Awesome. Thank you for sharing that. So, we are out of time. Again, I appreciate everyone coming on — Jamie, I appreciate it; Hillary, I appreciate it — and I appreciate you all joining and geeking out with us for the hour. Stay safe out there. Hopefully I'll see you at KubeCon; I think we have one more episode before the KubeCon break. And remember to like, subscribe, and share — especially share. I appreciate the shares. If you go to our YouTube channel and find an episode you think is cool, please reshare it. You can find it at — I'll actually drop it in the chat while I'm talking about it — red.ht/GitOps. If you like a video, please feel free to share; I appreciate every share. And with that, as I always say: unless it's in Git, it's only a rumor. Thank you, everyone. Cheers.