Good afternoon, good evening, and welcome to the Red Hat portfolio. And we are live. So welcome, everyone, to GitOps Guide to the Galaxy. I'll be your captain on this journey, Christian Hernandez, technical marketing here at Red Hat, and you out there watching are my co-captains. First, I'd like to thank the fine folks at MinIO for this shirt. I got it at KubeCon, I really like it, I really like MinIO, and I realized I hadn't worn it yet, so I'm wearing it now. I'm still kind of buzzing from KubeCon, because registration is open for KubeCon EU 2022, so that's exciting.

This is the last episode of the year. We're going to take a break, like we did last year around this time. For those of you who don't know, Red Hat does these recharge days, and one of them is actually a recharge week: we take the whole week off. I'm going on PTO around that time, and hopefully all of you out there are taking plenty of time off too. Burnout is pretty real. This year I worked with producer Stephanie, who's in the background there, and we decided to do something new, something a little fun I haven't done before. We'll try it out and see how it works: a year-in-review, a top 10 of the coolest things that happened in the world of GitOps in 2021. So I have ten things, and we have sort of a clip show going on. It'll be really cool. I'm going to get right to it, because we have a lot to get to. All right, coming in at number 10 is episode seven, where we talked about sync waves and hooks with Argo CD.
So this is what I like to call controlling non-declarative things declaratively. The question is, how does this work? When Argo CD starts a sync, it orders the resources in a particular way... All right, we're having some technical difficulties. We're going to try that again. This AWS thing, I think it's really messing everyone up. So we'll try it again: number 10, sync waves with Argo CD. Bear with us.

For those of you who just joined, I see some people coming on: we're going through the top 10, and we're having some technical difficulties with the clips. This is the first time we're doing a clip show, so we're still figuring out how to move things back and forth. And I know a lot of people are having issues because of the AWS thing. I don't know about you, but I'm pretty sure I'm ready for this year to be over so we can all reset. We're going to take a really quick break, figure this out, and we'll be right back. All right, we are back. We're in this journey together. Ready? Number 10: episode seven, sync waves and hooks with Argo CD.
So sync waves and hooks. This is what I like to call controlling non-declarative things declaratively. The question is, how does this work? If you're watching this, you're probably asking, okay, how do I put this together? First, when Argo CD starts a sync, it orders the resources in a particular way. It orders them by phase: it takes all your manifests and sorts them into batches, all the PreSync hooks in one batch, the Sync resources in another, and the PostSync hooks in another. Then, within each phase, it sorts them by the wave they're in: it starts from the lowest wave number and applies up through the highest, first in PreSync, then the same for Sync, and so on. That's how it orders the YAML.

So number 10 was sync waves and hooks. My favorite thing about them is that they give you the ability to order things, because not everything is cloud-native ready on day one. Like I said in the clip, you can order your manifests zero, one, two, three, because not everything is ready, not everything is declarative. What's really cool is that you can start your declarative journey, your GitOps journey, with sync waves already built into Argo CD and, by extension, OpenShift GitOps. Really, really cool. With that, we're going straight on to number nine: episode 11, working with Helm. Helm is a way to define a set of manifests in a predictable, release-oriented manner.
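Before we get to Helm, here's a quick sketch of what that ordering looks like in practice. These are the real Argo CD annotation keys, but the resource names are illustrative:

```yaml
# A Job annotated as a PreSync hook: it runs before the main sync,
# e.g. a database migration. HookSucceeded cleans it up afterward.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate            # illustrative name
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
---
# Within a phase, resources apply lowest wave number first, so this
# wave "0" Deployment lands before anything annotated with wave "1".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend               # illustrative name
  annotations:
    argocd.argoproj.io/sync-wave: "0"
```

Argo CD waits for each wave to become healthy before moving on to the next, which is exactly what lets you sequence the things that aren't cloud-native ready on day one.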
So with Helm, it's like, okay, I have these manifests that provide this functionality, and there's an order that needs to be followed, and all that fun stuff. Helm was a way to install things, essentially. When I, and I think a lot of people, first started working with Helm version 2, people were a little nervous about Tiller, which is why a lot of people, especially here at Red Hat, especially on OpenShift, used OpenShift Templates instead of Helm. But with Helm version 3, here at Red Hat we're all in: it's the package manager, a first-class citizen on OpenShift, and it's also supported in Argo CD. So you can still use your Helm charts just fine, and you can integrate them with Argo CD and make them GitOps-friendly. That was number nine. I love Helm; I use Helm exclusively now.

Moving right along, number eight is episode 15, introducing the app-of-apps pattern and ApplicationSets. The idea with ApplicationSets is essentially a graduation, the next iteration, of how app-of-apps worked. It's like, all right, since everyone is doing app-of-apps with these Helm charts and mixing things together, let's actually make this a real thing. So they called it an ApplicationSet. An ApplicationSet is nothing more than, I like to say, an application factory. I was talking to one of the Argo CD engineers, and I said, yeah, an ApplicationSet is nothing more than: you give it this ApplicationSet resource, it goes brrr, and a bunch of Applications come out. Sound effects and everything, like the money counter going.
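That application factory can be sketched as an ApplicationSet resource. This is a minimal sketch using the list generator; the cluster names, server URLs, and repo URL here are all made up for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook-fleet                             # illustrative
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: dev
            url: https://dev.example.com:6443       # hypothetical API server
          - cluster: prod
            url: https://prod.example.com:6443      # hypothetical API server
  template:
    metadata:
      name: "guestbook-{{cluster}}"
    spec:
      project: default
      source:
        repoURL: https://example.com/gitops-repo.git  # hypothetical repo
        targetRevision: main
        path: "overlays/{{cluster}}"
      destination:
        server: "{{url}}"
        namespace: guestbook
```

The controller stamps out one Argo CD Application per generator element: brrr, and two Applications come out.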
Yeah, the money counter going, and it spits out Applications. It's essentially a templating agent for creating Applications. That clip brought a smile to my face, because I remember that episode. Part of it also reminds me of Chris. Chris, wherever you are... well, we know where you are: you're at AWS doing great things. We miss you here. So yeah, ApplicationSets: people were taking app-of-apps, creating an application that contained many applications, and I think it was a great call by the engineers to say, hey, let's actually make this a real thing. It makes deploying applications even easier than it already was using Argo CD on OpenShift. Very inception-like, which I really like. So, cool.

Number seven: episode 20, multi-cluster management with ACM, moving beyond GitOps. Advanced Cluster Management had a way of provisioning apps and doing GitOps with subscriptions and Helm. So which do I use? Do I use Argo, or do I use the ACM subscriptions? How do I make the choice? The answer used to be, well, it depends what you're doing, and that answer is still the same. But in both cases, we want you to use both. We want you to use them together, and what we showed is how ACM and Argo are like this: they're fully integrated. ACM brings the fleet management that was sort of lacking in the Argo space. Argo is very much contained to the cluster it's working on, or if you have a bunch of clusters, you still have to bring those in. ACM brings all those pieces together and gives you visibility into Argo that, number one, we never had before, and number two, lets you operate Argo, through OpenShift GitOps, at scale across the fleet.
And so, as Christian mentioned, you're doing everything from day zero through day infinity with your cluster. For those of you familiar with ACM, I just navigated from the welcome screen down to Applications. What we see here are some tiles, some labels that say Argo and some that don't; I'm not going to get into that just yet. What I want to target here is this application deployed in ACM called "infrastructure buildout." The backstory is that I got an OpenShift cluster. I ran an OpenShift install (to be honest, internally I just requested one and it came up), but you deploy an OpenShift cluster, you go into OpenShift's OperatorHub, you say "I want ACM," you hit the install button, and in seven minutes you have an ACM cluster. At this point I have no applications and no other clusters, and I want to create my first application in ACM. So I go through, I click Create, I choose Git, and I pick my infrastructure Git repo; in this case it's labeled "gitops fleet samples." I pick the repo that has all the definitions for my security, my configuration, the clusters I want to provision, and the applications that need to go out to them. All of that with just this single "infrastructure buildout" application. And if we take a quick look in Git here, I was able to build out a bunch of policies, provision some clusters, and apply policy to those clusters. In this case we actually go out and deploy OpenShift GitOps, which is Argo, on all of those clusters as well.

So it was really, really cool. ACM. Working at Red Hat on the Argo CD project, I get asked a lot about ACM, because, I'll be the first to admit, there's a lot that's very similar in terms of application delivery. Argo CD is very focused on application delivery.
Whereas with ACM, we kind of have to step back and think about things that exist outside the cluster, things you want to automate alongside your GitOps practices. So whereas Argo CD can manage the continuous delivery aspect, what manages the cluster lifecycle? I always tell people, if you call ACM a GitOps tool, you're selling ACM really short, because it's a really, really cool tool for deploying and managing your clusters, and it can integrate with Argo CD. You can also, from the get-go, deploy ApplicationSets with Argo CD through it, and we're working on deepening that integration. It's really, really cool. I was so happy to have Josh Packer, one of the architect engineers for ACM, come on. So that was really cool.

Moving right along to other things that integrate with GitOps on your GitOps journey: in episode 24 we talked about GitOps with ACS, Red Hat Advanced Cluster Security. It was all about security, security, security. Back when StackRox started in 2017, it was a more container-focused security service; you'll hear about the cloud platforms and container-focused platforms out there. Then we really pivoted to Kubernetes, because Kubernetes presents a specific challenge to developers and operations teams that's slightly different from, let's say, a serverless process. There's a whole array of configurations in Kubernetes that are challenging, and if you want to deploy at the speed at which developers can introduce functionality into your applications, you need a security process that can keep up. So it's about securing the supply chain, securing the infrastructure, and making sure that your workloads, that whole process, fit in with the developers and the containerization techniques you have, so that you can scale securely. To start off, we have an OpenShift cluster set up. This is fairly recent.
I think I set this up a week ago. I installed some demo applications and a couple of operators; ACS can be installed as an operator in your cluster. I think I have four here: we have Quay, we have OpenShift Pipelines, we have GitOps, and we have ACS in the cluster. It's pretty much as simple as coming in, enabling Advanced Cluster Security for Kubernetes, and picking the namespace you want. I called it "stackrox" because you've got to keep the name alive somehow. That's right, you've got to keep that flag flying.

Once you get in here, you'll see part of the architecture differentiation I showed you: there's a Central component, which is the UI and all the tools for analyzing the information, and then there's the Secured Cluster component. The Central component is the one you want to install first; it initializes things and gives you the UI you can go into. Then you can bring on clusters as you go, and you can do that in an automated way by generating the certificates, so that as clusters come online they automatically get registered and reach out to Central to say, hey, I'm here. Or you can do it manually, so that nothing reaches out to your UI automatically, but you have that capability. So I'll go right into it. I have a couple of applications installed on here; I think I was actually given some pretty insecure ones, and I haven't seen them yet, so I'm interested to see what pops up. This is the main dashboard, which I should probably zoom in on to 200% so everybody can see. We only have one cluster; if you had multiple clusters, they'd obviously show up here. You'll see system violations and compliance standards: it's your clusters-at-a-glance dashboard.
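Since ACS installs as an operator, the install itself can be declarative too, which means it can live in the same Git repo as everything else. The general shape is an OLM Subscription like the sketch below; I'm hedging on the exact package and channel names, which change between releases, so check the OperatorHub entry for the current values:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: stackrox           # keeping the name alive, as in the demo
spec:
  channel: stable               # assumption: channel names vary by release
  name: rhacs-operator          # assumption: package name as listed in OperatorHub
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

The target namespace also needs an OperatorGroup, and once the operator is up you create the Central instance first, then add Secured Cluster components as clusters come online.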
So what's funny, I don't know if you're following the chat, is that Michael Foster actually joined this stream right as his number came up. So Michael Foster, right as he tuned in, can keep me honest here as I summarize. We talked a lot about automation and using GitOps as a working model for your DevOps practices, and you should never forget about security. Automation doesn't mean you slow down for security; security needs to be part of that process. Check out episode 24, where Michael went into that whole supply-chain aspect, and also kind of made fun of me for the applications I gave him being so insecure. It was a fun stream, it was great having him on, and yeah, check out ACS, it's really good. To build that whole idea of DevSecOps, you need to build security into the automation, and with GitOps being the cornerstone of automation, you should be thinking about security as well. Really cool. So remember to hit like, subscribe, and follow me on Twitter, @christianh814, especially if you disagree with anything on this list. Let me know; I want to hear your thoughts. So please like, subscribe, and share as we go into number five.

Number five is actually one of the things I really like: our first GitOpsCon. If you don't already know, I am a member of the GitOps Working Group, the working group that manages the OpenGitOps sandbox project in the CNCF, and we had our first GitOpsCon co-located with KubeCon EU 2021. It was a virtual event, all virtual, and it was really exciting. I was part of the programming committee. It was really hectic; for those who haven't been on a programming committee, it's a lot of work, but it's a labor of love.
It was really cool to get DevOps practitioners and people who are curious about GitOps together, hearing all their thoughts and their end-user stories. There was a lot of excitement around it. I had to wake up really early because of the EU time zone, but it was fantastic.

So, on the heels of GitOpsCon, number four: the very first ArgoCon. What was really cool is that not only did I help with ArgoCon, I also did a talk there. It was really cool to both help with the planning and get my talk accepted. For those of you who think the KubeCon CFP process is rigged or whatever, I can tell you it's not, because my talk got rejected at GitOpsCon and I was on the programming committee. So you know it's all fair. But I was lucky enough to get selected for ArgoCon. I talked about stateful applications, which is funny, because my stateful applications talk is something I've done on the show before, but I didn't put it in the top 10. Interesting; I'm not sure what that tells you. But yeah, ArgoCon was a lot of fun. We had over 3,000 people registered for the event, we held steady at about 500 active attendees at a time, and there were lots of great people. For a virtual event it was really well received, and I connected with a lot of really cool people from all over the world. Excited for ArgoCon 2022; we're still talking about that.

Going into number three: our second GitOpsCon. At our second GitOpsCon I was actually lucky enough to do a keynote, a sponsored keynote. Usually our product manager does the sponsored keynotes, but I was lucky enough to do it, and it was actually our first in-person GitOpsCon. It was in person, and it was really cool.
It was here in LA, right in my backyard, so I got to go. It was really cool to connect with everyone in person. A lot of the people I interact with virtually, I actually got to see in person: shout-out to people like Scott Rigby and Dan Garfield, who are members of the working group. They were there, and we got to meet in person. Chris Short was there too. It was a lot of fun. My favorite part of GitOpsCon was actually the end-user talks. It was really cool to hear from people from Spotify, from Starbucks, from State Farm: great, great end-user stories there.

All right, cool. Moving right along to number two: episode 23, directory structure battles. I get opinionated on this one. The first thing is DRY. Just like with coding, infrastructure as code, GitOps, it's all the same thing: we're all talking about code, and we're all talking about don't repeat yourself, which basically means avoid duplicating your YAML. So the question is, how do I deploy all this YAML across multiple clusters without copying and pasting YAML everywhere? It's like a trap, a vicious cycle. With my example, I dumped everything in one YAML. Well, if I want to deploy this Deployment to another cluster and I just want to change the scale, replicas one to replicas two, do I just copy that whole directory and change one line? It seems like a big waste; it seems like I'm just repeating the same YAML over and over again. So in comes Kustomize to the rescue. Kustomize has been around for a while, and it's even built into the CLI, kubectl (or oc), since about 1.14. It's been there for a while.
And what Kustomize is, is a patching framework. It's essentially: okay, I want to take this YAML and patch it. So you can make environment-specific changes while reusing the same YAML. It's not templating per se; although you can use it like templates, it produces raw YAML that you can directly apply. The basic concepts of Kustomize are that you have a series of bases and overlays. The base is essentially what's common among all clusters. If you think about it, the differences between the Deployments from one environment to another are actually very minimal: it could be the Secrets you use, the image you use, maybe the scale, but the rest of it, the skeleton, is essentially the same. You don't want to be copying all of that just to change a few things. So that's the concept of an overlay: I want to overlay my changes on top of the base configuration. And the thing is, the base has no knowledge of the overlay. It's just raw YAML; Kustomize uses raw YAML for everything, and the overlay is essentially just a series of patches against the base.

So, to wrap it up with a little bow: get familiar with Kustomize, get that power under you, because it's going to dictate a lot. Ask yourself questions like: what does my environment look like? What does my organization look like? Who's controlling what? Beyond Kustomize, that will tell you what kind of repos you're going to be using. All right. So anytime I talk about best practices, directory structure always comes up.
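The base-and-overlay idea described above can be sketched as a small directory layout. The file names and app name here are illustrative, and the `patches` field shown is the current kustomization.yaml form (older layouts used `patchesStrategicMerge`):

```yaml
# base/kustomization.yaml: the skeleton every environment shares
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml: prod stores only its delta
resources:
  - ../../base
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: myapp          # must match the name in the base
      spec:
        replicas: 2          # the one line prod actually changes
```

Rendering it is just `kustomize build overlays/prod`, or `kubectl apply -k overlays/prod` with the built-in support mentioned above; the base never knows the overlay exists.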
I should really say that best practices for directory structure are really best practices for Kustomize. I'm a big fan of Kustomize. When you first start with GitOps and try to lay out your directory structure, you start with a bunch of YAML and quickly realize you need some sort of templating agent: either you're all in on Helm, or, if you're using raw YAML files, Kustomize is just going to have to be there somewhere. So I always say DRY, don't repeat yourself: why have the same YAML hundreds of times if the change is minuscule? As you'll realize when you're copying YAML everywhere, the YAML doesn't change that much. You'd like to think it does, but between environment and environment it really doesn't: maybe the image tag on the Deployment, maybe the replicas, maybe a Secret. You really should just store the delta, and I always highly recommend minimizing your YAML, because there's going to be YAML everywhere. So: don't repeat yourself. Anytime I talk about directory structure best practices, it's really Kustomize best practices, and I'm going to start saying that; anytime someone asks me, I'm going to start off by saying directory structure best practices are basically Kustomize best practices. Actually, on the next stream, when we come back in January, I'm going to show you how I lay out my directory structure, so stay tuned.

Cool, all right, moving right along. So, number one. I feel like I should have a drum roll, or confetti when I say this, or maybe next year.
We'll figure out a drum roll or some confetti, maybe hire some dancers, something really cool. Number one, the number one cool thing, the best thing that I think came out in the GitOps world this year, was the GitOps principles: OpenGitOps. OpenGitOps is a CNCF sandbox project driven by the GitOps Working Group; there's the working group, and then the actual sandbox project is OpenGitOps. What's really cool about the principles, which are by far my favorite thing to come out this year, is that this is version 1.0 of the principles. GitOps is no longer just an idea or a buzzword; it now has a charter and some guidance for those who want to dive deeper. This is a huge milestone for us. I was part of the principles committee; a lot of blood, sweat, and tears went into this, and a lot of argument. And when I say argument, I mean argument in the adult sense, not the child sense: heated, opinionated discussions. Great, huge milestone. I'll drop a link in the chat if you want to read more about it: go to opengitops.dev; I put a link to the blog announcement there.

Going really quickly through the principles: the first principle is that it's declarative. Reading it straight out: "A system managed by GitOps must have its desired state expressed declaratively." Kind of self-explanatory; this is infrastructure as code as you see it today, stored in Git. That's what number one is telling you; it doesn't call out Git specifically, but that's the idea. The second principle is that it's versioned and immutable.
"Desired state is stored in a way that enforces immutability and versioning and retains a complete version history." This is where Git, or any source control management system, shines: you want that immutability, you want that version of the desired state to be immutable. The third principle: it's pulled automatically. When there's a change, that change is pulled onto the system; software agents automatically pull the desired state declarations from the source. Whether that source is Git or an S3 bucket, whatever it is, as long as it satisfies principles one and two. And the fourth principle is that it's continuously reconciled. This is where GitOps differs. A lot of people say, hey, I've been doing GitOps forever: anytime I make a commit, it triggers a CI process that does everything GitOps does. Well, no, not exactly. Continuously reconciled: this is the nuance that differentiates it from a traditional event-based CI/CD process. Software agents continuously observe the actual system state and attempt to apply the desired state. This is the idea of a closed control loop. If you look up control theory (I'll look for a link maybe a little later), there's a feedback loop: the input is a desired state, and the system continuously checks whether that desired state is satisfied and reconciles it. Kind of sounds like Kubernetes, right? Think about a Deployment: if the Deployment says there are two replicas but there's only one, it'll spin up another one.
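The closed control loop described here can be sketched in a few lines of Python. To be clear, this is a toy model of the idea, not any real agent's implementation:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One pass of the control loop: drive actual toward desired.

    Returns the new actual state. A real agent (Argo CD, Flux, or the
    Kubernetes Deployment controller) runs this continuously, not once.
    """
    new_actual = dict(actual)
    # Create or correct anything that drifted from the desired state.
    for name, spec in desired.items():
        if new_actual.get(name) != spec:
            new_actual[name] = spec  # e.g. scale replicas 1 -> 2
    # Prune anything that exists but is no longer declared.
    for name in list(new_actual):
        if name not in desired:
            del new_actual[name]
    return new_actual


# The Deployment analogy from above: desired says two replicas,
# the live system only has one, so the loop corrects it.
desired = {"myapp": {"replicas": 2}}
actual = {"myapp": {"replicas": 1}, "leftover": {"replicas": 1}}
actual = reconcile(desired, actual)
```

Principles one through three describe the inputs to this loop; principle four is the loop itself running forever.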
So it's the same idea, but taking it to the application level and seeing it holistically. I really love what we did there, putting control theory in. I actually wrote a blog on The New Stack that calls this out a bit; if someone could put that in the chat, that would help. And a good comment from the chat: principle three can be argued pull versus push. I think that's something for maybe version two; I think either one works, pull or push, as long as that reconciliation loop happens. That's the idea: control theory. I always like to spend a lot of time on this one because, one, it's number one, it's one of my favorite things, and it actually gives GitOps an actual model to follow.

Lots of great things. Remember to get involved at opengitops.dev. It's an open community; anyone can join, there's no formal process, you just join a meeting and volunteer for stuff. Speaking of things coming up: I'm on the program committee for GitOpsCon EU 2022 in Valencia. We're going to have it in person, and we're planning something cool there: hopefully dual tracks. We'll see; it's still a little early. You can register for it now, though: if you're registering for KubeCon, you can add on GitOpsCon right there when you register. Hope to see all of you there. And there's the New Stack article in the chat now; I talk about Argo and Flux, but I also briefly mention control theory there, where you can read more about it. So with that, I think that's it. I just want to thank everyone: first of all, thank you for bearing with us through the technical difficulties at the beginning.
I like to blame AWS because they're an easy target, but who knows what the issues were; I'm just picking on them because they had those outages this week. So thank you, first of all, to those who put up with that. Second, like I said at the beginning, I'm going to be taking a little break. Let me pull up my calendar, because I'm not sure when we're coming back... I think it's the 13th, and yes, it is. We're coming back January 13th, and we're going to start an hour later. There's going to be a special episode of In the Clouds, so I won't ruin that for you, but instead of noon Pacific we'll start at 1 p.m. Pacific, which is 4 p.m. Eastern. I'll remind everyone before that episode comes out that, just for that one episode, we're starting an hour later.

Oh, and one last thing: we have a survey. I'm keeping this up here; I did have a Bitly link, maybe it's just a day of technical difficulties... there we go. I'm going to keep the survey up till the end of the year, so if you haven't filled it out, go ahead; we'd like to hear how we can improve the show. And actually, one more thing (I keep saying "one last thing" over and over, but here we go): we are looking for a co-host. As you know, Chris left, and we are diligently looking for a co-host. We do plan on having one, so if you're a Red Hatter and you're interested, reach out. We've already talked to a bunch of people, and I think I've narrowed it down to a few, but things to look for in 2022: hopefully a permanent co-host, so it's not just me rambling.
I can ramble, I don't have a problem rambling, but it could be helpful for you to get another perspective as well, so it's not just me pontificating. With that, I'd like to close out 2021. Again, a crazy year; hopefully 2022 gets better. I'm excited for what the show brings in 2022. And like I always say: if it's not in Git, it's only a rumor. Take care of yourselves out there. Bye-bye, everyone.