All right. Well, after an eventful 15 minutes or so, we're back. Never a dull moment. Yeah, it's ironic that on a day when our topic is troubleshooting, we're having all kinds of issues and having to troubleshoot what's going on. Just makes sense. Yeah, such is life, right? C'est la vie. So yes, Stephanie, it has been a lively morning for a number of reasons, not the least of which is this US West issue happening in AWS that is apparently very much affecting us as well. For those who don't know, we use a service called Restream, which relies on some AWS services. So even though YouTube doesn't run on AWS, Twitch certainly does, and Restream apparently does too. So apologies for the technical difficulties. Hopefully we can persevere, although I do have a hard stop at the top of the hour today. We've got 34 minutes and we're gonna make the most of it. You'll note that Mark is not back with us yet. As I said, he was having some laptop issues and is having a technician come out to his house. So he'll join us when he's able to finish out the topic around OVN-Kubernetes versus OpenShift SDN. Somebody did ask about Geneve: yes, OVN-Kubernetes uses Geneve for the tunneling protocol, as opposed to VXLAN with OpenShift SDN. And I think what Mark was getting ready to say before he got cut off is that one of the driving factors behind the change is the community behind OVN-Kubernetes. OpenShift SDN, as I said, has been the default in OpenShift since like 3.0. But as the name implies, it's only used by OpenShift, so the community, the user base, is as substantial as OpenShift itself, which is not to say it's a small community. But when we compare that against OVN-Kubernetes, and specifically if you take the Kubernetes out of there, just OVN, we're talking about a much, much larger community of folks. Just within the Red Hat portfolio, you've got things like OpenStack and Red Hat Virtualization that both use OVN. More broadly, there are a number of other things out in the community, other Kubernetes distributions and so on, that use OVN. So there's a much bigger community that we can participate in, that we can join and ultimately contribute to, and effectively deliver a better set of capabilities. And if you look in the docs, there's a comparison chart, I need to grab a link to it, that lists capabilities available with OVN-Kubernetes that aren't available with OpenShift SDN. Things like IPsec: you can't do node-to-node encrypted communication with OpenShift SDN; you have to use OVN-Kubernetes for that. The performance and scale team also does a lot of testing and comparison of the two. Performance-wise and scalability-wise, everything I've seen says they're basically equal: they trade back and forth within a few percentage points on which one is better at certain things. Probably the biggest difference I've seen, and unfortunately we can't share the graphs or the specifics because they don't let us, is that it takes a few more milliseconds for a pod to make its initial connection through the SDN. So instead of taking, say, 10 milliseconds, it takes 12, or whatever that number happens to be; I don't remember off the top of my head. But performance, throughput, and latency-wise, they're all the same, or at least the same if not better, with OVN-Kubernetes, and scale is certainly a positive as well.
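Since IPsec came up: as a rough illustration, node-to-node encryption with OVN-Kubernetes is configured through the cluster network operator, and in this timeframe it had to be enabled at install time with an extra manifest. This is a hedged sketch from memory of the docs; the manifest filename and field structure are assumptions you should verify against the documentation for your release.

```bash
# After running `openshift-install create manifests`, add a cluster network
# operator config that selects OVN-Kubernetes and enables IPsec.
cat <<'EOF' > manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig: {}   # an empty object enables pod-network IPsec
EOF
```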
All right, so keeping in trend with it being an interesting and lively week: last week it was the AWS US East outage; this week Monday kicked off with a bang with the Log4Shell vulnerability being announced. If you are not aware of Log4Shell, I'm impressed, because it has been everywhere this week. I have had more questions about Log4Shell than literally anything else in recent memory. So Red Hat has been publishing information as much as possible, as soon as it becomes available. From what I understand, this was not one that was embargoed. A lot of these vulnerabilities we know about for days or weeks beforehand; they're embargoed, we can't talk about them, they aren't shared outside a very small group of engineers because it's a security vulnerability. And then as soon as it goes public, the Red Hat CVE page lists the affected products and all the fixes. This one was not like that: the whole world found out about it at the same time. So Stephanie very helpfully put a couple of links in the chat with details on this particular issue. With respect to OpenShift, John, did you have the CVE? Let me get it really quick. Yeah, so there is a CVE page off of redhat.com that lists all of the affected products. With OpenShift, the big one to be concerned about is the logging service, so Elasticsearch and the components used inside of there. They are working diligently to patch, update, and release those updates out to everyone, which is what the two links Stephanie just posted cover. The first link, the access.redhat.com solution, explains how to do a temporary remediation. Let me share my screen, actually. Right button, right button. To share a window. And I want to share, are you the window I want? Where's the window I want? This window. Every single week I struggle to share a screen, which you'd think after this many times would get easier, but it's such a tiny preview that I have trouble identifying which window is which. Anyways, it's never getting easier. So again, I'm using Windows today, please don't judge me; Windows 11 has been an experience so far. So this is that first KCS link that Stephanie had, and you can see it walks through how to set the environment variable to effectively mitigate the vulnerability. That includes, down here, setting the operator to an unmanaged state, so that it won't clobber the change we make up above. What's happening here is, remember, with most things OpenShift, the operator manages the configuration. So if I go in and customize or change something like the pod environment variables, the operator will reset that back to its known good configuration. So until you have the operator fixes, and I believe those have been released by now, but if you haven't gotten them yet, say you're disconnected or something like that, you can set the operator to unmanaged, set the values shown up here, and the operator won't overwrite those changes. Then once you do have the operator update in place, you set it back to managed, and the operator will do its thing and apply all of the relevant fixes.
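To make that flow concrete, here's a hedged sketch of the unmanaged-then-patch pattern. The authoritative commands are in the KCS article, so treat the specifics below, particularly the deployment selector and the environment variable, as illustrative placeholders rather than the exact remediation.

```bash
# 1. Stop the logging operator from reconciling (and reverting) manual changes
oc patch clusterlogging instance -n openshift-logging \
  --type merge -p '{"spec":{"managementState":"Unmanaged"}}'

# 2. Set the environment variable that disables the vulnerable JNDI lookups
#    (selector and variable here are assumptions; use the values from the KCS)
oc set env deployment -l component=elasticsearch -n openshift-logging \
  ES_JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true"

# 3. Once the patched operator is in place, hand control back to it
oc patch clusterlogging instance -n openshift-logging \
  --type merge -p '{"spec":{"managementState":"Managed"}}'
```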
So that's the first of those links, the 6578421 solution. The other one is a security advisory, the 5129. This one is for OpenShift Logging, a security update for the OpenShift logging service, version 5.3.1. One thing to point out here is that OpenShift Logging's release cycle is decoupled from OpenShift proper. So there's OpenShift Logging 5.3, one for 5.2, one for 5.1, and so on. This is the errata specific to that operator, and there are also ones for the other logging versions. Similar thing: it tells you exactly what's going on, links to the documentation for the solution, et cetera. Make sure that you apply this update. I know a lot of us, despite it being the holidays, are gonna be working diligently on fixing this and getting everything patched and verified. Dmi3 says the OCP update graph application was not working yesterday because of update service issues. That's interesting, I wonder why; I hadn't heard there was an outage. So, thank you for finding that link, Johnny, and posting it, Stephanie. The reason I had this up is that the next thing I wanted to talk about is that updates from OpenShift 4.8 to 4.9 are now in stable. So if I go to stable-4.8 and select 4.8.23, I can now update to 4.9.10. We talked about this last time with 4.7 to 4.8. Usually the window for stable updates between minor versions, so 4.8 to 4.9, 4.7 to 4.8, 4.6 to 4.7, is somewhere between 50 and 60 days. I think last week when we talked about this we were right at 50 or 51 days, something like that. So keeping in line with the historical trend, we now have those updates available. And the telemetry that they've shared with us looks pretty good. I don't think we've seen quite the same magnitude of issues that we'd seen before. And even that's not really a fair comparison, because before we were encountering specific issues on specific platforms, like the VMXNET3 driver with vSphere, that type of stuff. Amit asks: can we make the EFS provisioner connect to multiple EFS servers at once and provision a PV accordingly? We are on 3.11 and it is tech preview. So I will admit, I am not familiar with EFS. My understanding is that Amazon ships an EFS CSI provisioner that will work with OpenShift 4. Whether or not you can connect it to more than one server is not known to me, and it may be a question for Amazon themselves. And on 3.11, I'm definitely unfamiliar with how it works on that platform. Johnny? Yeah, I've only used it with 4, and I know it was only recently supported, in like 4.6 or 4.7. Okay. So Amit, you can reach out to me, on Twitter, Practical Andrew, or email Andrew.Sullivan at redhat.com, with your question, and we'll see if we can track down some better information, something more authoritative than "Andrew thinks" or "Johnny thinks," and we'll let you know.
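Circling back to those stable-channel updates for a second, here's roughly what moving between minors looks like from the CLI; the channel and version numbers below are just examples matching what we showed on screen.

```bash
# Point the cluster at the stable channel for the next minor release
oc patch clusterversion version --type merge \
  -p '{"spec":{"channel":"stable-4.9"}}'

# Show the updates the update service currently recommends
oc adm upgrade

# Kick off the update to a specific recommended version
oc adm upgrade --to=4.9.10
```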
Also, keep an eye out for the blog posts. Yes, I know we haven't had a blog post in like a month. That is partially my fault, because I was lagging behind, and now it turns out that all the folks who run the blog are on well-deserved PTO. I am certainly not going to hijack their PTO just to get a blog from little old me published. But keep an eye out in the new year; a whole bunch of those will land at once, and then you'll be able to go back and search and find all of that. So please don't hesitate to ask your questions. We'll do our best to get those answers, and as we just showed, if we don't know the answer, we'll track it down for you. Let me find the right tab here; I opened a whole bunch when we started having issues, so now I have to sort through them. Anything else to add, Johnny? No, not to that, no. Okay, so in our last 20 minutes here, 21 minutes, today's topic is troubleshooting. And this is definitely something I want to revisit, since we got cut short, because it is both a skill as well as a concept, if you will. When we think about troubleshooting, most of us do it based on intuition, kind of follow-the-breadcrumbs type of thing, or based on previous experience: I saw it do this before. We've all had those experiences where it's, why do you stand on your left foot and hop around in a circle three times before you do this? Well, because last time I did that, it worked and it fixed the problem. Or the infamous, and I'm gonna poke fun at Windows today because I'm using Windows, the infamous "reboot fixes everything." Why do you do that? Well, because it works. But troubleshooting is also a science in addition to an art. What I mean by that is there's a whole set of philosophies and practices we can apply to make the troubleshooting process a little more cohesive, a little more organized, and a little more planned, even though when we're actually doing it, it's often a very chaotic and hectic thing. Nobody likes to get called in the middle of the night because something's wrong, and then you're trying to figure out what's going on. So there are a couple of folks in the support organization that I hope can join us in the new year, so we can have a much more thorough conversation about this, and in particular get some of the mindset, the behind-the-scenes of how our support folks work. They don't ask for a must-gather or all of those log files for no reason; there's rhyme and reason behind it, and I hope we can share that.
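Speaking of must-gathers: when you open a support case, this is the usual collection command, run from a workstation with cluster-admin access. The scoped-image variant below uses a placeholder rather than a real image name; check the docs for the right image for your component.

```bash
# Collect cluster state and logs into a local directory to attach to a case
oc adm must-gather

# Optionally scope the collection to a layered product with a dedicated image
# (the image name here is a placeholder, not a real reference)
oc adm must-gather --image=<component-specific-must-gather-image>
```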
The other thing I'll quickly point out: yes, for two or three weeks now we'd been talking about having a VMware guest on today. Unfortunately, they had a last-minute conflict and were unable to join, but we will definitely have them on in the new year; we've already rescheduled for the middle of January. So if you're interested in that, or in any of the content we have, be sure to subscribe on whatever platform you're watching us on so you can get those updates. All right, troubleshooting. So, Johnny, you and I were both administrators for a long time. You were on the consulting side, and I know you helped a lot of folks with a lot of things when it comes to this. So I want to get your perspective, your thoughts. What is troubleshooting to you? Where do we start? What does it look like? Yeah, for sure. So when you say it's an art and a science, it definitely is, because over time, as you establish your maturity with how you do things, you come up with a process that works for you. For me, what I typically do, especially with OpenShift 4 and all its different components, is start at the very bottom. All right, where do I start? The install-config. If you watched me do my disconnected OpenShift install, you saw that I was blowing up on the install-config, and it was because I had spaces in my pull secret. So I start with the basics, and I keep it as simple as possible. Okay, here's the problem I'm having. Let me work one step backwards. Then, if I can get beyond that, I try to work one step forwards, and I just try to identify where the issue is. Once I identify the issue, I obviously implement the fix, and then I try to understand why I had that issue. What happens is, when you understand why, it helps build that logic into your troubleshooting. So the next time you see it, you're like, okay, I'm not getting to this point, so most likely it's because of this, this, and this. Or at least you have two or three or four things you can go back to in your memory: okay, last time I was able to get here and here, and if I can get to these things, that means X. Yeah, I have to say that one of the most terrifying things for me was always: well, it just started working again. Why? I don't know. Yes. But then we don't know what broke it, and we want to figure out what caused the outage, what caused it to break, so we can stop it from happening in the future. Yeah. So awesome is also terrible. Yeah. But magic is a bad word, right? It's a four-letter word when it comes to things getting fixed. So I don't mean to put you on the spot, but I'm going to put you on the spot. What are some tools, some key things that you use when troubleshooting, when looking into OpenShift issues? And I want to separate this into a few areas. One is the install process, because if I talk to the support people, they tell me the install process is the part of a cluster's lifespan with the most issues, the most tickets being created. Two would be the cluster, the day-to-day running of it. And three would be the application. So for each of those phases, install, day-to-day maintenance, day two plus, and then applications, are there different perspectives, tools, et cetera, that you use? Oh yeah, absolutely. So just starting at the bootstrap: when the bootstrap node is coming up, maybe you can't connect via SSH, or you're not getting any of the normal feedback you'd expect, like, okay, hey, I should be seeing something.
Then I would try to get into the bootstrap node. First thing: can I SSH in? Because if I can't, that means my Ignition files weren't passed correctly. If I do get into the bootstrap node, I immediately follow the bootkube service with journalctl and start watching the logs there, because, especially if you're doing a disconnected installation, it's gonna tell you right away whether it can reach the registry and pull down the images. You'll see it trying to reach your registry and failing, then reaching out to quay.io and failing because it can't get out. So the bootkube service on the bootstrap node, to me, gives you so much information, because even as it's going through, if it fails at some point, you'll get a lot of info out of there. The next place I'd look during the bootstrap process is the kubelet service, because you'll get a lot of information out of there as well: you'll see any errors when it's communicating with AWS or Azure or whatever your provider is. Then, if it's a deeper issue, I'd look at crictl. crictl works a lot like Podman or even Docker, so you do crictl ps, crictl ps -a, and crictl logs. That way you can follow the logs on the containers themselves and isolate down to the thing you think you're having a problem with, like the bootstrap API server, for example, and then look in there and see where things are going. A good example of where you can do that: if you're doing a single node OpenShift install and you really don't know what's going on, you can look at the crictl logs for the cluster version operator, and you'll get a ton of information on where the cluster is in its deployment process and whether it's having issues, like it can't resolve something. So then moving on to the day-to-day cluster operations, we have a lot of customers that... So I'm gonna interrupt you real quick, Johnny. I think we still might be having some random issues, so I'm gonna post a link on YouTube, and I'm also posting it into Twitch, because I didn't see it pop up on our Restream. Okay. So that's the overall troubleshooting article for OpenShift. It's maintained by support and links to a bunch of other articles. We've talked about it before here on the stream: it's a good starting point and a good resource. One thing I do wanna highlight real quick: if I go back to my browser window and go to github.com/openshift/installer. You were talking about troubleshooting the installation process, and one of the great things about working for an open source company, and with open source software, is that you can go and look at the code. You can see exactly what the installer is going to do, how it behaves, and how it works. Yes. One thing to be aware of, though: be cognizant of what branch you're on. In particular, OpenShift has a branch for each release. So right now, if I'm trying to deploy 4.9, I would come in here and look at the 4.9 release branch to determine what that precise behavior is.
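For instance, a quick sketch of inspecting the installer source for the release you're actually deploying; openshift/installer uses release-4.x branch names.

```bash
# Read the installer code for your release, not whatever the default branch is
git clone https://github.com/openshift/installer.git
cd installer
git checkout release-4.9   # one release-4.x branch per minor version
```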
Yeah, and that comes from experience, from learning, and all of that. There's a bunch of different components that go into the install process. The installer is one component, the machine config operator is another, the machine API operator is another one. They all play a role. So yes, it takes time; we all have to learn how to do these things. I also want to highlight and remind folks that you can always open a support case. That is always a very good option when you're encountering issues. Don't draw out an outage longer than needed, or have issues longer than needed, when you can open a support case and get help, or reach out through the various community channels. So, Davo asks: what is your most loved feature in OpenShift? That's a good one, I don't know. I'll say that, because I'm near and dear to the installer and do a lot with it, I find deploying OpenShift, once you have the prerequisites in place, super easy, especially IPI. Yeah, mine is machine sets. I think that's just the coolest thing of all time, where you can just scale a machine set up. That's by far my favorite thing. Yeah, because I work with the community and do a lot of other things, I do periodically go and deploy vanilla Kubernetes with kubeadm or something like that. And as complex as OpenShift is, when it works right, which is the vast majority of the time, it is really easy by comparison. Yeah, especially with the feature set you get out of the box with an OpenShift install compared to native Kubernetes. I'd also point to the community around OpenShift, if you wanna call it that: the Kubernetes Slack team has OpenShift channels where a bunch of folks hang out, always willing to answer questions, plus all of the GitHub issues and so on. Even as an employee, I'll open issues on GitHub, and we get responses very quickly from the engineers. Yeah, that's awesome. All right, so just to recap real quick. With the install process: depending on where in the process you are, connect to the bootstrap node and check the bootkube log. If you SSH in, the message of the day very helpfully includes the exact journalctl command, so you can copy and paste that. Beyond that, check the relevant operator logs. For example, if you're deploying IPI and the control plane deploys but the worker nodes don't, check things like the machine API operator logs. If it doesn't make it that far, if you're having issues just getting the control plane running, check the pods using crictl, however you want to pronounce it: cry-cuttle, cree-cuttle, C-R-I-C-T-L. However you think it's pronounced, I'm gonna butcher it and offend everyone. It's used very similarly to Podman or Docker, so crictl ps, crictl logs, to look at those things.
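Pulled together as commands, that bootstrap-phase triage looks roughly like this. The bootstrap node's message of the day prints the exact journalctl invocation for your release, so treat the unit names below as approximations.

```bash
# Get onto the bootstrap node as the 'core' user
ssh core@<bootstrap-ip>

# Follow the bootstrap services; the MOTD shows the exact units to watch
journalctl -b -f -u release-image.service -u bootkube.service

# Watch the kubelet for cloud-provider/API communication errors
journalctl -b -f -u kubelet.service

# Drill into individual control-plane containers
sudo crictl ps -a                 # all containers, including exited ones
sudo crictl logs <container-id>   # logs for a specific container
```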
Okay, day two. So, hey, things are cranking along, and all of a sudden, let's pick on the logging operator, the logging operator starts misbehaving. What are some things you'd recommend? Yeah, definitely just looking at the operator itself: going into the namespace, trying to figure out what the status of the pods is, doing an oc get pods. One thing that I think a lot of people overlook is oc describe. And you made a note of this in the document, which I was like, yeah, plus 100 on that, because oc describe pod, with the pod name, will give you so much information about the deployment and the pod. It'll tell you why it's not being scheduled, if it's not being scheduled. If it's a CrashLoopBackOff, it could be that your registry went down, if you're using a disconnected registry, or any number of things. The not-scheduled case is always the most baffling and the most helpful, right? Because it's: unable to schedule, zero of 12 nodes available, three have taints, six don't have enough capacity. It'll tell you exactly what the issue is in many cases, or at least give you a strong pointer towards why it can't schedule that pod. Exactly. And in case you don't know, scheduling issues typically come down to selectors, labels, taints, tolerations, or resources. I think that's probably the biggest one we see with logging: people provision small and then put a big service like cluster logging on top of it, and they don't tune the instance down enough for it to actually fit on the nodes. For example, Elasticsearch, I think it needs like six gigs of RAM out of the box, or maybe it's Fluentd, but you can tune that down and set the memory request to a level that's actually usable, and by usable I mean low enough that the scheduler can actually place the pod on a node. And a perfect example, was it last week or two weeks ago, we were showing single node OpenShift, and I'd deployed a single node OpenShift instance with six CPUs. Then I was doing some other things, I keep banging into the microphone arm, and I came back and wanted to deploy ACM onto my single node OpenShift instance. So I deploy everything, and I've got a bunch of pods just stuck in the pending state. What's going on here? Forcefully terminate, restart, and finally I ran oc describe: I didn't have enough CPU. And it's one of those things where actual utilization was only like two CPUs, but the requests were over what was available. Yep. And Dmi3, I'm sorry, please don't hate me for that. Yeah, oc get events is another one; that's the other one we were alluding to, right? That'll give you a lot of information. But yeah, oc get events and oc describe are outstanding. Yeah, and yes, you can filter the output, but it requires using JSONPath, effectively. As Christian can attest, Andrew is terrible at JSONPath queries, but you can use it to filter as part of the command, directly through the oc command; you don't have to pipe it into jq or anything like that. All of that is an option. Yeah, and there's always just logs and stuff like that. I'm an old-school Linux guy, so the first place I go for almost everything is: one, how do I know if it's running? And two, where are the logs? I always look at the logs, so that helps me.
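As a quick sketch of that day-two triage, using the logging namespace as the example:

```bash
# What state are the pods in?
oc get pods -n openshift-logging

# Why is a pod Pending or in CrashLoopBackOff? The Events section at the
# bottom of the describe output usually says exactly why.
oc describe pod <pod-name> -n openshift-logging

# The namespace-wide view of the same information, newest events last
oc get events -n openshift-logging --sort-by=.lastTimestamp
```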
Yeah, so my recommendation, especially if it's an operator-related thing: there are about four layers you need to check, depending on where the error might be. To your point, I always start at the lowest level, which is the Kubernetes objects created by the operator, and I say low level, but I mean the deployments, the pods, the services, the routes, all of those things. There could be an issue with one of those, so do an oc describe, an oc logs, whatever it happens to be. If it's an application that's deployed and managed by an operator, you also want to check the operator itself, because maybe it's having an issue interpreting the CRD, maybe you provided a bad field, or maybe the operator itself is encountering some sort of error and can't create those Kubernetes objects. You should also check operator lifecycle manager if the operator itself is having issues. Maybe OLM is unable to update it for some reason, or unable to deploy it, because an operator is just pods, right? Just like anything else in Kubernetes. So maybe OLM is unable to fully instantiate the operator. And then last but not least, and definitely the least common unless we're talking about updates or upgrades, is the cluster version operator. If it's one of the core operators, the ones you see with oc get co, oc get clusteroperator, those are all managed by the cluster version operator, so you'd want to check the cluster version operator's logs and such. I'm gonna just go with Quinn for that username, my apologies. When you have a stateful workload on your cluster, how would you upgrade or evict the underlying node with minimum impact to the workload? So, there is no zero impact, right? At minimum, the pod terminates and then is rescheduled and reinstated on a different node. That's the way it's always going to be. So minimum impact is achieved through something like a redundant service: two or three or five or however many instances, so there's no single point of failure. Then, assuming that's true, say there are two instances, you'd use something like a pod disruption budget to ensure that only one of those is affected at a time. So when we do an upgrade, and it goes through and does a cordon and drain of each node in the cluster, it knows it can't affect both nodes at the same time that have this application running. Pod disruption budgets can be a huge benefit there. Boom, that's the word I was looking for.
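As a minimal sketch of that, assuming a two-replica stateful service whose pods carry the label app: my-stateful-app (the names and values here are illustrative):

```bash
cat <<'EOF' | oc apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-stateful-app-pdb
spec:
  minAvailable: 1            # voluntary disruptions must leave one replica running
  selector:
    matchLabels:
      app: my-stateful-app   # must match the workload's pod labels
EOF
```

With this in place, a cordon-and-drain during an upgrade evicts the two replicas one at a time instead of both at once.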
So, real quick, in our last, say, 90 seconds: applications running inside of the cluster, any thoughts there? Yeah, generally it's going to be an ImagePullBackOff, so check your registry, check your image, check the tag. Or it's going to be scheduling, so taints, tolerations, labels, node selectors, all that stuff. One last thing I was trying to get to a second ago: if you turn your cluster off and turn it back on, say over a weekend, check your CSRs, so oc get csr, and approve them with oc adm. What happens is they expire after a certain amount of time, I can't remember the exact timeline, but a lot of times your cluster will come back up and seem fine, yet your nodes are not ready. If you check your CSRs, you'll most likely see the node-bootstrapper requests sitting in pending; you just need to approve them, and then everything will come back up. Yeah, and I always try to remind folks to look at your application logs too. I've worked with a number of customers who were way down in the weeds, troubleshooting the SDN, why isn't the SDN allowing this traffic to pass, and it turns out that if you look in the application log, it just has the wrong service name for what it's trying to talk to, or something like that. Yes, it's simple. Yeah, the make-sure-the-network-cable-is-plugged-in kind of 101 stuff that we forget about because it seems so obvious and simple. Yeah, absolutely. And especially as we get more advanced, it's harder to think that small; as we get further along, we're like, okay, it's gotta be this. But just start at the base and work your way up. Come up with a process where you start at the bottom, so that you actually go through it every time. Yeah. All right. So unfortunately, as I said back when we started the first iteration of the stream, I do have a hard stop at noon today. Thank you so much to our audience for making 2021 a really great year for the stream. When I started at Red Hat, when I started this career in tech marketing eight, nine years ago now, I never thought I would be sitting here talking with an audience of folks on a weekly basis, interacting and doing all of these amazing things with all of you amazing people. So thank you so much. Have a great holiday season, regardless of what you're doing, or what you celebrate or don't celebrate. I hope the end of the year is really phenomenal for you. Please stay safe; COVID is still COVID and still doing its thing. And ultimately, again, thank you so much to everybody. Johnny, thank you for joining me this year. Yeah, thanks for having me, I love it. I really appreciate you being the co-host. With that, I will leave you with the last word, and I will see everybody in 2022. Yep, same as Andrew said: happy holidays, everybody. Thank you for all the questions that we get. And Andrew, I will see you next year, buddy.