Alright, and we are live. Welcome everyone to another episode of GitOps Guide to the Galaxy. I will be your captain, Christian. So today I have a co-captain — my co-captain today, I don't know which way the camera is pointing, I think I could go like this — it is Shoubhik Bose. Shoubhik Bose is a principal software engineer at Red Hat, one of the main people working on OpenShift GitOps, and one of the guys I'm always bugging about it. I try not to bother him too much, but as he knows, I'm a little bit opinionated and I always get excited when he talks about GitOps. So Shoubhik, how are you doing today? How's Diwali treating you? Doing great. I have some plans in the evening for Diwali, but so far I'm here with all of you, enjoying Diwali with GitOps for a change. Nice, there you go. Exactly. Thank you for joining us. So today's topic: we're going to talk about OpenShift GitOps version 1.3 — all the new things we get with OpenShift GitOps 1.3, and a bit of a look into the future. A lot of people ask, in the community too: "Hey, I installed Argo on OpenShift and it's kind of acting weird." "Well, did you install it via the operator?" "No, I just installed Argo." So I always like to bring up OpenShift GitOps, because it is the entry point for getting Argo CD installed on OpenShift, and a lot of the headaches of configuring Argo are taken out of the way. But before we get to that, I do want to talk about a few top-of-mind things. First and foremost — and I will put it here in the chat — the ArgoCon schedule is out. So there it is, the ArgoCon schedule. If you want to take a look, I threw it in the chat there. I have a talk.
So, kind of a selfish plug: I have a talk at ArgoCon about stateful applications and GitOps, because that's always a hot topic. I've talked about it on the show before, and if you want to catch that episode, the past episodes are on the YouTube playlist. Another thing I want to mention — let's see if I can find it real quick — is GitOpsCon itself. The videos are now on YouTube. Let me find that playlist... there we go, found it. Let me put that in the chat as well. This playlist has everything that went down at GitOpsCon. GitOpsCon was something that the CNCF OpenGitOps project, which Red Hat takes part in, put on as a day-zero event. All the talks are up there — great talks. Selfish plug: I gave a talk there too, actually about another project that Shoubhik is on, Pipelines as Code. But we'll talk about that maybe another day — we'll have a whole episode about Pipelines as Code. So you can check that out as well. Your talk was good. So yes, do check it out. When I say Pipelines as Code and you're a little confused — that's actually a good intro. I did a whole intro about Pipelines as Code and the idea behind getting your CI GitOps-ified, I guess that would be a good way to say it. And to round things out: a friend and colleague of ours runs a developer group, a Women in Technology network. If you want to check that out — she's done a lot of great work there. If you're a developer or engineer and you want to join a great community, that's a great one. I promised her I'd mention it on the stream today, and I think it's great. Last but not least, another selfish plug, because as you know, Shoubhik, I'm not ashamed of doing a plug here. Yeah, right. It's my show, why not? Speaking of open source and Argo CD in the upstream community, I have a little pet project called GoKP — the idea is a GitOps-friendly Kubernetes platform.
So this is kind of a pet project of mine. It's on Reddit — if you want to join the OpenGitOps discussion on Reddit, it's right there, and you can see my posts about it if you want to get more involved. There's already Reddit hate on it, which means it's great. I love Reddit hate — that's exactly what I wanted: feedback. Sometimes the hate is good. So Shoubhik, what do you have for us with regards to OpenShift GitOps? What is this new update? What do you see in the future? Do you have some inside information for us? Absolutely, thank you very much. I think we are probably running with OpenShift GitOps now, not just walking or crawling. Full screen. Things change every day — while I'm talking, we are doing a release right now, so in a few days we are going to have a new release. We are seeing a bunch of changes going into OpenShift GitOps, from how you run the GitOps control plane, so that admins are a lot more comfortable doing that, to ensuring that in the long term, when it's running on your cluster, it's not killing your cluster because you have a ton of developers syncing things from Git to your cluster. So we are making a bunch of improvements, from how you install it, to how you configure it securely, to how you manage performance over time. And today I'm going to talk about a bunch of those. I'm going to make sure you have some context about the different caching mechanisms and what we're doing to make them better — there you go — as well as how the components securely talk to each other in the control plane. So whether you're a developer or an admin in the audience, there's a lot of interesting stuff going on, which I'll talk about today. Sweet.
I think one of the engineers — Jonathan, I think his name was — made a great post, a README somewhere, about how the caching works. I'll go see if I can share that with people, because I read it and thought: there's a lot going on there. Yeah, there's a lot going on. I didn't even realize it. Actually, we do have it here. This may be old, maybe not, who knows, but I put it in the chat so you can check it out. Yeah, it's definitely going to be helpful — it's about a point-in-time caching mechanism. So I think I'll go ahead with showing what I've got. See, here we are, doing technical things, trying to share the screen as always. So this is kind of weird: if you look down, it looks like your share screen is disabled, but it's actually not — you actually have to click on it. I don't know if you see it. I see a "stop screen sharing" here. Okay, let me just try it out. Yeah. Awesome, you can see it now, Christian. Let's see here... yes, there we go. Awesome, that tip helped. There you go, always right. Awesome, thank you very much, Christian. So yeah, we'll dive straight in today. We'll be talking about some of the salient improvements we've made in OpenShift GitOps 1.3. It won't be demo heavy — rather, it'll be conversation heavy and slides heavy, but not generic slides. I'd like to use this opportunity to tell the audience how a lot of things work inside OpenShift GitOps and Argo CD, and how things are improving over time. And if you see something that's not working, I'll hint at why it's not working — which is why I dive into some of the details on this show. Let's see, next slide. So, a quick recap: what is OpenShift GitOps? The charter of OpenShift GitOps is very simple.
You read manifests from Git, you look for the equivalent resources on the cluster, and you determine if they're different — if they're not present, they're definitely different. And if something is different, you flag it as OutOfSync. Then, if auto-sync is enabled, it's going to reconcile to your desired state. And you get to repeat this till eternity. That's all OpenShift GitOps is going to do for you, for a bunch of complex Kubernetes YAMLs that you put in your Git repository. Now what is OpenShift GitOps, really? Basically, it packages Argo CD in a way that you can deploy with multiple deployment topologies, and the Argo CD engine is the one that ensures this charter on the left is well taken care of. Before I go to this slide, I'd like to show this. So what's in OpenShift GitOps? We have different modes. If you know what Argo CD is, this is going to be helpful for you: OpenShift GitOps ensures you can effectively deploy Argo CD in multiple variants of topologies. One is where you have cluster configuration done across your whole cluster using a central mechanism, which is one Argo CD on your cluster. The other is when you say, hey, I have a development team who needs an Argo CD — and yes, we can do that for you with OpenShift GitOps, working within the isolation of specific namespaces. And of course the other mechanism is: hey, I've got these 100 clusters sitting out there and I need to centrally manage all of them. You could have OpenShift GitOps on one cluster and manage 100 clusters with it, even though you may not necessarily be managing that cluster itself with it. So these are the different mechanisms that OpenShift GitOps makes easy to use on an OpenShift cluster with Argo CD.
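The namespace-isolated mode just described can be sketched with the operator's `ArgoCD` custom resource. This is a minimal illustration, not the full CR reference — the namespace `team-a` and instance name `team-argocd` are made-up names for the example:

```yaml
# Hypothetical team-scoped Argo CD instance managed by the
# OpenShift GitOps operator; it works within its own namespace.
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: team-argocd
  namespace: team-a        # assumed namespace for this example
spec:
  server:
    route:
      enabled: true        # expose the Argo CD UI via an OpenShift Route
```

Creating a CR like this in a team's namespace makes the operator stand up a dedicated Argo CD instance there, separate from the cluster-configuration instance.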
Yeah — the folks at Intuit, I had them on, and they're actually doing that whole central hub push mechanism for hundreds of clusters and thousands of applications. If you're worried about scale, don't worry about scale, because Intuit beats it up a lot. Yep, absolutely. If you've done financial transactions online, you've definitely gone through a system powered by Argo CD at Intuit. Yeah, especially here in the U.S. — when tax season comes, they're the company. Right. So before we go deeper into different things, I had to show this compatibility slide — thanks to William Tam, who started putting this up a few days ago. We are at 1.3.1 now, and this is the map of the different component versions. You don't have to memorize it; we're going to put it in the OpenShift docs and have a way to get to it. But the idea is, if you suddenly feel, "hey, this seems to be not working but it should be working — which version do I have?", a reference like this comes in handy, and I'm going to share it with you as well. Sweet — that's a great matrix, by the way, because I'm always asked, "which version are you on? Well, which OpenShift version?" So it's good to have the matrix, and people love matrices. It looks scary when you look at it, but when you need it, it's your best friend. Exactly, absolutely. So the first thing I'm going to do is install OpenShift GitOps on my cluster — again, a refresher to make sure we all know what the installation experience looks like. Can you see my OpenShift screen? Yeah, you can. Can you make it a little bigger? There we go. Cool. Let's go to OperatorHub, type "OpenShift GitOps"... I need that logo on a t-shirt, man. I know. I'm trying — I know corporate doesn't like logos like that on a t-shirt, but I really want one. This looks good, yeah.
So as soon as you go in here, it gives you a nice clean install button, and it gives you a bunch of information about itself. But effectively we're just going to hit install. We're not going to choose any other options along the way — we're just going to say install, install, with sane defaults, right? Yeah, totally. And while this is happening, I'm going to quickly give you an overview of what's going on, in case you're an admin who wants to peel back the layers and know what's happening behind the scenes. This effectively installs an operator — a controller — for you to manage Argo CD control planes themselves. And it also installs an Argo CD instance for you to manage cluster configuration. What that means is, if you've just provisioned the cluster and you need to set up user management and those kinds of things, this ensures it's all done for you — your Argo CD is warmed up to do all that for you, to be precise. Sorry about that. So yeah, I see that it's installed. I'm going to wait maybe four or five seconds to make sure the Argo CD behind the scenes is actually coming up, but we can go ahead and check it out. While that's happening, there's a question here: "Hope this version fixes the missing permission for Argo CD to deploy in any namespace. Just installed the GitOps operator 1.3.0 and getting 'namespace not being managed'." I think that was addressed in 1.3.1. That's correct — the fix is in there. Time to upgrade, absolutely. I'm running 1.3.1 now, as I showed you, so you should have this. Definitely. I see a bunch of things running here — that's always a good sign. I can still see something not yet ready... and it's ready now. Awesome. Let's try it out. Worst case, it's not going to work, nothing more than that. Worst case it doesn't work — which people like seeing, by the way.
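For folks who prefer GitOps all the way down, the same install that was just clicked through in the console can be expressed declaratively as an OLM Subscription. This is a sketch — the channel name here is an assumption and varies by release, so check OperatorHub for the current one:

```yaml
# Hypothetical declarative install of the OpenShift GitOps operator via OLM.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable                    # assumed channel; verify in OperatorHub
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying this with `oc apply -f` should give the same result as the console's install button: the operator, plus the default cluster-configuration Argo CD instance.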
It's always a weird thing — people love it when things break live. Including myself. It's a human thing. Awesome. So right here I've got OpenShift GitOps installed, for those who haven't seen it before, and I'm going to go and click here, and... awesome. It tells me that I need to authorize OpenShift GitOps to use my OpenShift authentication to log in to GitOps. This is one of those things where you see the value of using the operator, because the operator does all this for you. This used to be a very manual step — setting up Dex with OIDC, or setting up OIDC directly with SSO. Now it's all integrated. If you go to the user tab, it actually shows that you're kube:admin; all that information comes through, which is pretty cool. One of the values you get with the operator: automatic setup. Absolutely. We have this installed, and then we can do a bunch of interesting things. This is one of the first things, folks: you don't necessarily have to go in and install Red Hat SSO to be able to log in with OpenShift. Of course, you could if you wanted to, because Red Hat SSO gives you a bunch of other powerful features you wouldn't have without it. But if you're looking for a simple login that pulls in the groups, this will just work for you. With that, I'll go back to slides for a bit to set up more context. One of the first key things: we're trying to make the whole login, authentication, and authorization experience a lot smoother than it has been in the past, and we are executing on multiple fronts for that. This time we added support for Dex, only for the OpenShift login bits. So a quick disclaimer: do not use Dex for purposes other than OpenShift GitOps login with OpenShift and authorization — that's my disclaimer. Yes, you can use it, but only with OpenShift GitOps. Yes, please. Right.
So we have that here. If you're interested in how the CR looks, you've probably seen this already with the community operator. We have the policies set up out of the box so that you don't have to worry about them. And interestingly, if you're somebody who already has Red Hat SSO set up in your environment, or you have a Keycloak setup you want to reuse, you could potentially use that as well. Some key improvements are happening in that space — it's not available right now, but it's going to ship in a month or so: Red Hat SSO, or Keycloak, is going to have support for syncing OpenShift groups into Argo CD. For those interested, the patch has already been merged, so it's there, it's been tested; we're probably just waiting for the next release of Red Hat SSO to have that feature. I mean, we need to test software, so it's coming — we just want to make sure it passes all that. Developers have tested it, but it has to go a bit beyond that. Yeah, it has to go through CI, so let's see that showing green. We have this improvement, and then there's another work-in-progress improvement that's going to land on your cluster soon — we're going to keep improving this. And the general idea... my slide is a bit messed up on the left. No worries — it's YAML, right? When you type YAML, that's the problem. If someone can look at it and say "that's invalid YAML," I'll give them a prize. I only know it when there's an error — I'm like, oh, okay, it didn't work. That's the only time I know.
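To make the Dex-based OpenShift login concrete, here's roughly how that part of the CR looks. This is a sketch based on the default instance described above, not a verbatim dump of the shipped CR, so treat the field values as illustrative:

```yaml
# Sketch of the Dex/OpenShift OAuth portion of the ArgoCD custom resource.
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  dex:
    openShiftOAuth: true          # log in with OpenShift credentials via Dex
  rbac:
    defaultPolicy: 'role:readonly'
    policy: |
      g, system:cluster-admins, role:admin   # map OpenShift admins to Argo CD admin
    scopes: '[groups]'
```

With `openShiftOAuth: true`, the operator wires Dex to the cluster's OAuth server — which is why the "Log in via OpenShift" button and the kube:admin identity showed up without any manual OIDC setup.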
So, one of the requests we've gotten is that you should be able to run the GitOps control plane on nodes designated as infra nodes. We wanted to make a lot of progress on that, and we've made a ton of progress, but do expect more improvements. Right now we've got something for you: there is a toggle that's been hidden from all of you, and we're going to gradually open it up. The idea is that the cluster-config GitOps instance is the only one we'll allow to move to infra nodes right now, because if you did that with the others, we might get into a territory where developers are moving their stuff around on different nodes, and they should not be doing that. In general, there's something called GitOpsService that gets bootstrapped on your clusters all the time — you probably just haven't checked, but if you go look, there is a GitOpsService resource named "cluster" on your cluster already if you've installed OpenShift GitOps. You can now put runOnInfra: true on it, after attaching the right labels to your nodes, and it's going to move your workloads onto those nodes. Further, if you have tolerations set up, you can specify those on that resource as well. And this will only impact your cluster-config GitOps instance, for obvious reasons. Nice, nice. So this was kind of a hidden object, and now you're opening it up to control where those workloads land. For those that don't know: in OpenShift there's a concept of infrastructure nodes, and infrastructure nodes don't count against the subscription — they're free cores, right; we don't charge for those. So to free up some of the resources that Argo CD takes, we're adding the ability to say, alright, let's move those workloads — like, you know, Argo CD, like
the Argo CD repo server, the application controller, the ApplicationSet controller — all that stuff that runs Argo. Cool. Yeah, the idea is that if you're paying for OpenShift, you should be paying for the applications you deploy, not for the control plane — and this kind of fixes that. Yeah. Cool. But yeah, please do expect a lot of non-breaking improvements happening in this space, just to ensure that we can gradually run the control plane isolated on a set of nodes that don't count toward your subscription. With that, we'll move ahead to some of the security enhancements. But before that — you were just talking about this question a while ago — the different components of Argo CD: roughly, we've got the application controller, the repo server, and the Argo CD API server. These are all different components that live on the control plane today. If you're an admin who installed OpenShift GitOps so that other admins can configure GitOps for the cluster, or an admin who's allowed developers to install their own Argo CD instances, you would definitely be concerned about one thing: the control plane had better be secure, because if the control plane is not secure, you could potentially have an escalation of privilege into everything else. That's why we're constantly investing, behind the scenes, in integrating good security practices into the control plane. It should really be boring for you — not something you should have to care about — but if you do care, I'm going to talk about some of the tuning, some of the knobs you can turn to provide specific levels of security. Right, so let's talk about routes. We've had a bunch of feedback on this, and I see, again, a place where the slide could have been a little better formatted — it's
okay. But yeah, so in general, the first thing I'd like to mention is that in OpenShift today we have an amazing model for securing routes and services, and the capability I'm going to talk about now ensures you're able to use it in the way you think works for your setup. On that note: when you went in and installed OpenShift GitOps, you could open the console, and you had a bunch of services there. Now, we want to ensure that anything exposed to the internet is well secured, and anything not exposed to the internet should also be secured. What that means is any intra-service communication happening in the control plane should be as secure as anything outside — we call it zero trust. So here are the different options. When you say passthrough, that's where your encrypted request from the outside world to your control plane goes directly to the service, still encrypted — that's passthrough, and that's the default today. We did not want that to be the default, to be very frank; we actually wanted re-encrypt to be the default. But we didn't do it because we're sure a lot of our users and customers already have something set up, and we didn't want to break that — that's the only reason. If we can find a nicer way to make re-encrypt the default, with a clean migration for folks who already have something, we're going to do it in upcoming releases. But the good news is, if you're looking for something like re-encrypt, it's there. Let me quickly cover the differences between the three. Edge is where your traffic is decrypted at the edge of your network, before it reaches your internal services — that's edge. Re-encrypt is when the traffic is intercepted at your edge and then re-encrypted when it's sent to your internal service — that's re-encrypt. And of course, passthrough is
when you don't decrypt it at all at the edge — you just send it along and let the service deal with it. Yeah, it's basically a TCP socket, right? The actual service — the Argo CD pod or whatever — handles the encryption. Yeah. So these termination types are basically about where the encryption and decryption happen, because depending on what you're doing, you care where it happens, right? Right. And I can see that this update is very useful for me — part of my job is creating a lot of training content, and some of these platforms don't like it when you do passthrough; the edge router wants to do the termination — "I'm here for a reason and you're passing through" — so this is really cool. Yeah. So since we're talking about different services, I'm going to use this diagram to show the different components. The API server is something that's fairly exposed — sorry, the Argo CD server is the one that's a little more exposed in general. Your controller is probably the only non-HTTP thing; the others are all either HTTP or gRPC, and you want to ensure all of these are secure. So if you want, you can go into your setup — let me show you exactly where you should be doing this. You can go in here, where you have route: enabled: true, and modify it with one of the three options here. You didn't have so many options in the past, but these just work now. And a key thing, just to explain why some of this hasn't been trivial: with re-encrypt, you also need to teach the router — or rather, you need to ensure that your internal service trusts the traffic from your router. So the internal service actually needs to have, in its possession, the CA certificate, and
the Argo CD server — let me see if I can show you that component here. I go to my deployments, and there is this server component. You need to ensure this component has the right TLS certificate configured. Why? This is where your OpenShift CA has to land — you can't just give it an OpenShift CA and expect it to know where to pick it up from. Why? Because this is the service that actually has to trust the OpenShift external router, since that's what re-encrypt does. This update not only gives you the certificates, it tells Argo CD "here's where you need to look for the OpenShift CA certificates," and it also ensures your router is configured with the right certificate pair so that external requests can be encrypted. So it does a bunch of things behind the scenes when you go for re-encrypt termination. In the future we're also going to come up with better migration mechanisms between these, primarily because you may have gone for re-encrypt but for some reason want to go back to edge. Today we're going to document these — there are a couple of manual steps, not a lot: you have to remove the old certificate before you move to edge. The main reason is that we don't want to mess with something you put in yourself, rather than something OpenShift put in. Some of these are boring details, but I wanted to let you know these things were considered when we designed this. What that means is you can safely go in and turn these knobs: set it to re-encrypt and have a multi-level encryption setup, go to edge for an edge setup, or live with passthrough the way it is right now. And it really depends on what industry you're in, what vertical you're working in. For me, I've always liked edge, because if someone gets into
my network and is sniffing my Argo CD traffic, I have bigger problems — or they're not very smart, because there are much more interesting things on my network. But that's not to say re-encrypt isn't probably the best thing, because you have zero trust: "I only trust this specific certificate you're giving me" to re-encrypt the traffic. So it gets decrypted, then encrypted again, and you have end-to-end encryption. If you can swing that, I think that's the way to go — don't be lazy like me, is what I'm saying. And the best thing is, OpenShift actually provides a bunch of those certificate generation mechanisms out of the box, which is very cool. To be honest, as an OpenShift user, sometimes when I deploy a simple Go REST application I'm like, damn, I need certs for these — and then you just need to put in an annotation and it's there. It's pretty cool. The one thing I would ask about in some of the large installations is: do you care about the fact that re-encrypt is an extra CPU-intensive operation? If you don't, then you're good; beyond that, re-encrypt is really cool. If you're re-encrypting, there's definitely a CPU cost. With that, there is another area we have to secure. You're probably not exposed to this component of OpenShift GitOps, or Argo CD, but I'm going to go back to this diagram: there's something called the repo server, on the right here. That is the component that talks to Git — it caches a bunch of the stuff you have in Git; it knows how to talk to Git, and it is basically the gateway to the world of Git internally. Which is why it's of prime importance to encrypt the traffic that goes there as well. So you can specify autotls: openshift for the repo section in your Argo CD config, and that will ensure you have secure traffic directed to your Argo CD repo server component. By default we keep it as true, so
that you don't have to make a choice and it stays boring for you. But of course, if for whatever reason you don't like it, you have to go in and attest to that — set verifyTLS to false, which says "I am happily making this insecure." Because, as you've probably heard, in OpenShift we are secure by default; going non-secure is an option you choose. We let you make that mistake. Yeah, you have to consciously make it. Yes. So — actually, I have a question, me being the end user: this autotls: openshift, does that mean it just takes whatever OpenShift trusts and uses that? Yeah. Okay. So for those that don't know: in OpenShift, if you have a self-signed certificate in your environment — a lot of places have their own custom CA server and use it to sign all their certificates — you can actually upload that to OpenShift and have OpenShift trust it. So it sounds like with this option, whatever OpenShift is trusting, Argo CD is going to trust as well — so when you connect to those Git repos, you automatically trust them. Correct, yes. And the most important bit here is that OpenShift is going to renew those certificates at the right time for you, and ensure they're made available to your components. So you won't be in a situation where a certificate expires and you're stuck — which you probably would be if you were doing it yourself. That's a huge advantage we have in OpenShift in general, and this feature knows how to utilize it to ensure you're running a secure repo server. Some may argue, "well, nobody is exposed to this service apart from Argo CD itself, so why do we care?" If you're going through an audit of your systems, I'm pretty sure you would care — but even without that, you should care, I would say. Yeah, there's actually a question, and I kind of know this answer. By the way, Walid, welcome — he's been around a long time, he was there
since the beginning, let's say. He asks: "Can you explain the certificate annotation? Not sure if I've used this in the past — is it OpenShift specific, Let's Encrypt, something else?" So, that is OpenShift specific, and I'm going to show it to you right away. This is not Let's Encrypt. And since we're talking about security, I'll take 30 seconds to go through it. Definitely — let me try to bring this up. Ad hoc demos, absolutely, this is what we love; this is what Walid always brings to the table, the ad hoc. Awesome. That's lovely. Right, so let's see if I have the right services in here... you have to pick one. I think I'll just go ahead with something we have in the docs — that'll be nice and quick. Can you control-plus this a little bit, make it a little bigger? Sure, absolutely. There we go. You know what, I'm going to try one more time to find the right one. Let's do it — we'll do it live, as they say. Awesome, I have something here. So let's take a look at this YAML. If you put in this annotation here, the OpenShift cluster will provide a certificate for your service, make it available, and ensure that you can then mount it onto your pod. So all you have to do to request it is put an annotation on your service. What that tells the OpenShift controller is: hey, somebody is requesting to have their service secured — let's issue this person a set of keys and certificates and make sure they have them. So in this case, let's take a look at what we have here. Yeah, so that basically says "hey, I need a certificate, give it to me," and then the controller will provide that for you. Sounds like it, yeah. So for example, in this case, the controller actually went in and, you
know, generated this CA certificate for me so that I could actually use it to ensure other services can trust me. Now, where did the CA certificate come from? It came from the OpenShift system that gives me these certificates. What that means is, if I'm deploying on OpenShift and I want secure traffic coming to my service, I not only have the right certificates, I also have the CA certificate that other people can use to trust me. And to do that, you're effectively using this annotation, which I just showed you here, which is this one. Here you go. So yeah, speaking from a non-GitOps context for a moment, I would say just go to your developer console, deploy a Node.js application from Git, and a service is created for you out of the box. Just add this single annotation and you're going to have these certificates show up in your namespace; you just need to mount them into your pod in your deployment and you're good. It should be that simple. Sweet. Yeah, so that's actually pretty cool; it saves a lot of the guesswork, right? We say we're secure by default, and now we're passing that on to you as a developer: hey, here, we'll give you everything you need. Pretty cool. And there's a funny bit on that, which is: I think these have an expiry of 24 months, but it actually renews them every 13 months, which means you have a good 11 months to restart your application to pick up the new certificates. Yeah, there you go. Which means you will never be in a position where your certificate has expired. That has happened to me in the past with some other systems. Which tells you: deploy often, so that way you always get the latest certificate. I mean, if you haven't deployed something for 13 months, you're probably in a good state already, your application is super stable, but then you might
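The annotation being shown on screen might look something like this; a minimal sketch using the standard service-serving-certificate annotations, with the service name, port, and secret name made up for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Ask the OpenShift service-CA controller to issue a TLS
    # key/cert pair for this service and store it in this secret
    service.beta.openshift.io/serving-cert-secret-name: my-app-tls
spec:
  selector:
    app: my-app
  ports:
    - port: 8443
      targetPort: 8443
---
# And to get the cluster's CA bundle injected into a config map,
# so other workloads can trust the service above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-ca
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
```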
have other problems if you never restart it. There's an absolutely great question in the chat: is there a plan to support something like Argo Rollouts, or is there another path to canary releases? So you have the perfect person on for this. What is the plan for canary releases? For those of you who don't know, the Argo project is a collection of toolsets, Argo CD being one of them, probably the main one, but there are others like Argo Rollouts, Argo Workflows, and the Argo CD Image Updater, which Jann is a big contributor on. So what is the plan in terms of having canary releases handled with GitOps? Yeah, I think right now we are exploring what we should be adopting for our customers. We do have some folks who actually use Argo Rollouts while they have OpenShift GitOps installed, and that works great. We did dabble with some of these strategies, DeploymentConfig initially; it solves a tiny amount of use cases, but it does solve them. But I think we are still evaluating the plan to support something like Argo Rollouts. Whether it would be Argo Rollouts or something else, it may not necessarily default to Argo Rollouts; it could be something else as well. But yeah, we'll keep you updated in one of these streams on that. Yeah, this is part of the cool thing about being an early adopter, especially with Red Hat emerging technologies: we take a lot of this feedback to heart. So Siamak, who is a product manager, and myself, we talk to a lot of customers, and we take that feedback, and if it's a need, we will definitely put money and engineering behind it. So right now it sounds like there's a lot of conversation going on about how we're going to handle something like canary releases. And I think we do have folks inside Red Hat who are evaluating that a lot right now, so just to let you know, that's an area we
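For context, a canary strategy in upstream Argo Rollouts looks roughly like this; this is a sketch of the upstream API, not something OpenShift GitOps ships today, and the image, weights, and pause durations are made up for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  strategy:
    canary:
      steps:
        - setWeight: 20            # send ~20% of traffic to the new version
        - pause: {}                # wait for manual promotion
        - setWeight: 60
        - pause: {duration: 10m}   # soak before the full rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: quay.io/example/my-app:v2
```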
should probably be expanding into for our users and customers as well. So look, this is real time; this is what's cool about doing a live stream: Celie has an update. They updated to GitOps 1.3.1 and it indeed fixed the permission issue with the operator, and it worked out of the box. So there you go, a success story. And in general, like I said, if you can communicate to us any serious issues you're facing, we try to push out fixes within days if needed, so this would be one of those cases. Yeah. I actually want to get to the actual repo you guys use. So you guys are under Red Hat developer, right? Correct, redhat-developer/gitops-operator. Yes, so let me put that in the chat. By the way, if you're a customer, do a support ticket, always; but if you're in the upstream just testing things out, that's the repo that Shubik's team works off of. Straight from the mouth, right? Awesome, thank you for dropping that in the chat, Christian. Yeah, so there's that. Cool, cool. So what else is an update in 1.3, or can you maybe talk a little bit about the future in these last 10 minutes or so? Yeah, let's do a quick thing in the last 10 minutes on some of the cache-related work we're doing right now. Perfect time for asking about the future, thank you. That's right. So in general, I'll quickly give you an idea of some of the things we're doing right now with caching, and they're going to continue for a while. In short, today when we set up Argo CD, it does a few things behind the scenes that you don't know about: it creates its own cache of the whole cluster state, and when it does so, it has high CPU usage because of heavy JSON marshalling, and high memory usage because it's reading a bunch of Kubernetes resources across your cluster. We've actually had some very nice upstream improvements on those performance metrics. What that means is, for those who have dabbled with the
Kubernetes APIs, you would understand that we've paginated those list calls to the API, and that's been a good improvement. We've also controlled the number of concurrent API calls that go to the API server when we warm up the cluster state. It's called warming up because when Argo CD comes up, it basically says: get me all the information on the clusters I can work with, and load it into memory, which is a lot of information if you think about it. So to be able to do that and still be productive with the real work, which is GitOps, there are a bunch of improvements happening, and we are gradually going to open some of these up as knobs you can use for performance tuning. Yeah. So this slide has a few details. There are multiple levels of caching that we do, from no caching of your Git repositories to absolute caching of your Git repositories where we only check the cluster state. Because you might have a situation where you know you don't push your applications more than once a week, so we are not going to go ahead and bombard your Git repo with a request every three minutes. You could potentially say: hey, nothing is going to change for the next week, so just look at it once in four days, but other than that, whatever you have in memory, ensure nothing on the cluster has been messed with. There are multiple levels, and we are going to gradually expose them. There are also some interesting optimizations that have happened, and they will be available in this release, 1.3.0 or 1.3.1: we've actually been able to reduce the number of Git polls that happen from your cluster. This is an example we took from a test. The way it works today is that we poll Git per Git repo. What that means is, if Christian is using the same Git repo I use, we just poll it once to know if something has changed; not what
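That three-minute figure is the default app reconciliation interval; in upstream Argo CD it can be tuned in the argocd-cm config map. A sketch, assuming the standard timeout.reconciliation key (the 300s value here is arbitrary):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # How often Argo CD checks each Git repo for changes.
  # The default is 180s; raising it means fewer git ls-remote
  # calls against your repositories.
  timeout.reconciliation: 300s
```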
has changed but to know whether if something has changed that's git ls remote previously we used to do it per application cr so which means if the 10 application crs with the same git repo it would be done 10 times but now we see hey all those 10 application crs have the same let's just do it once that makes sense right so previously you were doing an ls remote regardless of if we're using the same repo that would happen twice right instead of once gotcha nice that'll definitely I can see why it's trending down so there's a best scenario worst scenario there best case there's just one git repo worst case everyone has a different git repo but then it just optimizes it to ensure that whatever we can do to reduce the number of requests let's do it with the amount of polling we do yeah there's a question about caching and by the way Celia I do see your question I'll ask it in a little bit but William asks are cached levels configured on a per-argo city instance or can it be set on a per app basis so right now it's a per-argo city instance it's not on the app basis yet but there's been work going on okay so it's a one-shot so it's like globally right yeah okay so that's a good thing and a bad thing the good thing is it's one-shot globally the bad thing is it's one-shot globally yeah exactly good news bad news right yeah right and then the other bit is there's an improvement we did you don't really have to put your secret in some of the existing configuration secrets if you have a git repo you would actually put into a new secret and just put in this label and our city would know that you're talking about credentials for repositories you don't have to by the way this is my favorite thing by the way this is update because well because if for those that you don't know if you used it in the past this wasn't like in different places and now there's like a central way to manage not only your your your git credentials but access to that git credentials right it kind of serves 
like two purposes. So this is one of my favorite things; I'm using it all the time now, especially in a declarative approach to deploying Argo CD. Yeah, absolutely. And this is something that makes it easy for you to configure an external secrets mechanism and pull the credentials in, because this can be an isolated secret altogether; you don't have to lump it into other secrets you don't want to. Yeah. And the other thing is, as I mentioned, previously you needed some of the internal config maps that we use here; now you can reference any secret rather than very specific secrets, which I think is interesting. But yeah, the other interesting thing we did is add health checks for OpenShift DeploymentConfigs and OpenShift Routes. Use them and let me know if they're working for your needs; we'll be happy to improve them. It could be in one of the patch releases, or one of the minor or major releases, but we are going to listen to feedback on this and ensure we act on it. Yeah. And I don't want to say you're building Argo CD just for me, but, I mean, you could, right? One of the things I'd like is to hear other people's feedback on what would be a good default. I think for a lot of engineers it's really hard to say what a good default is, because obviously there are so many snowflake cases, edge use cases, and it's hard to choose a sane default. But what I always do is: I install the operator, and then I have to patch the Argo CD instance to ignore the host field for my Routes. That's because I always set that host field to blank, because if I'm deploying an application to multiple clusters, that host field is always going to be different, and I don't want to store the FQDN
in my Git repo. Oh yeah, and I think that's the best part about Routes in general, right? You don't have to provide that host, compared to an Ingress object. And so when Argo CD says, wait, you set the host to blank, but this is the actual Route, the FQDN, it'll show the differences; so I always have to patch it to ignore those differences. So that could be a potential default. Agreed, yeah, I think that's the same thing I do, and my repo actually has a bunch of them as well. It's just that it's in the repo all the time and, hey, this works; but then I realize that people who are starting out probably have to do it afresh, and we need something out of the box for that. I think that's good feedback, thank you. So there is a question. Celie, I know we might go a little bit over, but that's cool. Celie asked about the Argo CD Image Updater: any plans to include that as part of OpenShift GitOps? Yes, we do; it should be very soon. There you go; Jann would be happy to hear that. So here, just some more questions. William's wish list is: make the defaults dynamic, so they figure it out, "they" meaning the process, based on what it needs to process at a given time. Also kind of like AI-driven? No, William did not mean that; you're putting words in his mouth, Christian. Yeah, maybe, I'm just saying. So I like ignoring the specific fields, because ignoring specific fields means you can tailor it to your workflow, so that's really cool. And HPA for the Argo CD processes, right? Cool. So that's it for the updates from my side, Christian, over to you. Yeah, sweet. So thank you, by the way, for sharing all that. I think the viewers always enjoy having a sneak preview, or having the engineers talk, because people like
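The patch being described might look something like this in Argo CD's argocd-cm config map; a sketch assuming the upstream resource.customizations mechanism for system-level ignoreDifferences, so treat the exact nesting as an assumption to verify against the diffing docs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  # Tell Argo CD to ignore the host field on OpenShift Routes,
  # so a blank host in Git doesn't show as permanently OutOfSync
  resource.customizations: |
    route.openshift.io/Route:
      ignoreDifferences: |
        jsonPointers:
          - /spec/host
```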
me in product management, we tend to give you a lot of fluff, right? But when engineers talk, they talk tech, and I think people appreciate that. So I do appreciate it. We have a few minutes here; I don't know if you guys have any more questions, we'll give it another couple of minutes or so. Ask your questions in the chat, either via YouTube or Twitch; we'll get them all, they'll be aggregated. So, Walid said it was informative, thank you. Remember to subscribe, like, share; I feel like an influencer already. Please hit subscribe, like, share, do all that stuff; I certainly appreciate it. And I actually have a bit of something you guys can help me out with. Let me see if I can find it. There we go, I can copy it, there we go. We are rounding out our year here; I believe there are only about three episodes left this year, then the show's going to take a little break, and we're coming back next year, stronger and better than ever. We have an amazing team at Red Hat, this new team that is handling all the streams for us, and I appreciate it; Andrew Sullivan appreciates it, we've talked about it. It's been great to see how much the show has grown. So what I'm asking, and I'm dropping this in the chat here, is that I'm actually conducting a survey about the show. We've been doing this for a little over a year, maybe almost two years now; this is the second break we're going to be taking. I'm running a survey, so go ahead and take it: just a quick five questions about what you like about the show, what you don't like, what you want to see, and what you don't want to see. We did this kind of ad hoc during the whole pandemic, and it's really great how the show's been
growing here. So please take some time; I appreciate you filling out the survey, and a few people have already filled it out. I'm going to be mentioning this until the new year; I'm going to try to collect some information over these next months, so I do appreciate that. Also, if you don't see it, my Twitter handle is here, so you can follow me on Twitter or GitHub, same handle on both. And then I think I had one last thing here. Oh yeah, next episode we're going to have Alex Collins; I think you know him, Shubik, from Intuit. Alex Collins from Intuit is coming along to talk about Argo Workflows. We kind of mentioned workflows and Rollouts before; Alex Collins is one of the engineers on Workflows, and he'll talk about Argo Workflows. So that's cool, don't miss that. So I guess with that, we're at the top of the hour. Appreciate you guys watching. Again, Shubik, appreciate you stopping by. So, as always, as I like to close out the show, or at least as someone told me I should close out the show: if it's not in Git, it's only a rumor. So thank you everyone, and stay safe out there. Bye everyone, cheers, bye.