Okay. Well, thank you so much for the introduction, and thank you everyone for attending. Let me just make sure all the computer stuff is working. I think you can see my screen and video, and can you hear me? Hopefully. Okay, good. That seems like all the ingredients we need to get started.

So I'm really excited to talk to you all about Rancher Continuous Delivery. Normally I would ask who here has heard of this, but I can't really see if you raised your hand, so that's kind of pointless. I'll just assume most of you haven't heard about it. We're going to start with a high-level overview of what Rancher Continuous Delivery is. Then we're going to talk about why we built the technology behind this new feature, called Fleet; that's the underlying engine. Then we'll go into some architecture and tease apart the technicals, which is always fun. And finally, hopefully the most exciting part, we'll do a live demo, which is always risky. But we like to live on the edge here at Rancher, so hopefully this will be fun for us all.

All right. So what is Fleet? Fleet is an engine we have developed, an open-source project at Rancher, designed for the problem of GitOps at scale. It's suitable for one to one million clusters. And that means one to one million, not one or one million, because obviously there are lots of scales in between. We think this is really important for the future of where Kubernetes is heading.

Let me take a step back, actually, and bring up another concept. When we started Rancher, five years ago now, Rancher was developed to address the problem of treating our servers as pets. We, as an engineering community, have always had this tension of whether we're managing servers as pets or whether we can treat and address them in a more scalable manner. The analogy, of course, is cattle: we address servers as one single unit, modify them collectively, and get consistency and repeatability. If we lose a server because it fails, that's fine, because we just replace it with a new server that assumes the same functionality and role automatically. There's no more "well, the pet server named Freddy failed, and now I've got to go take care of the sick server and rebuild it, and that's a day of my time." We wanted to get away from that. That's why we started Rancher. Also, we all needed jobs; that was part of it, I suppose. But more importantly, we wanted to do something meaningful in our careers, which was to improve technology for the engineering community.

So we developed technologies to help solve this problem. We developed Cattle, our first container orchestrator, back when we just had Docker. And then we developed Rancher for Kubernetes: when Kubernetes became the de facto standard for container management, we built a technology that lets you manage Kubernetes more effectively and scale across many environments. Okay, so that's what we did five years ago, and that's been our evolution. What's happened now is that we've come full circle, back to the same problem, except servers are no longer pets. We've solved that problem.
They're definitely cattle now. With Kubernetes and containers and all the orchestration technologies that we've developed with the CNCF, and what the CNCF has developed at large, that's all solved. But now the Kubernetes cluster itself has become the new pet, right? It's replaced the pet that we got rid of; the problem has just moved up the stack. So we developed Fleet to address this very problem: the Kubernetes cluster is becoming the new pet, and it's becoming an administrative overhead that slows us down as engineers from getting to the interesting problems. Because once we get Kubernetes running and actually start using all the CNCF tools to build applications, that's where we want to get to. Let's be honest, we don't get excited about getting Kubernetes working anymore, at least I don't think so. We get excited about using service mesh, advanced logging and instrumentation, advanced CI/CD methodologies, doing blue-green deployments and A/B testing, proportional weight routing and circuit breakers between our microservices. That's what we want to get to, not this part. So again, we hope we can help remove some of the burden and administrative overhead that has started to develop at the cluster level.

That's a long-winded way of explaining why we think Kubernetes needs Fleet, or needs something, because we have more than one cluster. We know engineering groups that have thousands of clusters already, and they need consistency across them. How do you manage policy? How do you manage application deployment? How do you manage infrastructure maintenance? If you're using CRDs, for instance, you need those CRDs to be consistently deployed across all of your clusters. Or say you have special ingress settings; a lot of people have to have Nginx tuned a certain way, or they want TCP forwarding enabled. The moment you do that, you're going to need special YAMLs deployed at the kube-system level for your ingress. (I'll show a sketch of one such file below.) Who's going to keep track of all those files and make sure they're consistent across your clusters? These are the types of problems that need addressing. And then when you start to push those settings out, how do you make sure they're consistently deployed? You need to monitor the deployment of those things and make sure there's eventual consistency. And that ties in, of course, with visibility, role-based access control, and a controlled rollout method, because when you get to large cluster scale, you don't want to do these things atomically. You want to do them gradually, in a controlled way.

Okay, so why millions? Maybe that's a question that doesn't even need to be asked in some people's minds, because you're already living it. But for those who haven't experienced this yet, the trend I'd like to tell you about is this: as companies embrace Kubernetes in new use cases, near edge and far edge especially, we suddenly have an order-of-magnitude increase in the number of clusters managed by an organization. It's not just these use cases, but they're the ones that really turn the dial up quickly. We know companies that are using Kubernetes on windmills and energy farms.
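To make that ingress example concrete, here's a minimal sketch of the kind of file that has to stay identical everywhere. It assumes the ingress-nginx controller's convention of reading TCP forwarding rules from a tcp-services ConfigMap; the namespace, port, and service names are illustrative, not from the talk:

```yaml
# Sketch: enabling TCP forwarding on ingress-nginx via its tcp-services
# ConfigMap. Every cluster in the fleet needs an identical copy of this.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # some distributions place this in kube-system
data:
  # "<external port>": "<namespace>/<service>:<service port>"
  "9000": "default/example-service:9000"
```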
There are just hundreds and hundreds of endpoints and hundreds and hundreds of clusters now because of that. And you've got the telco space, which has devices in the field everywhere, right? Telcos have cell towers, tens of thousands of them. Have you ever driven up to a cell tower and peeked over the fence at the electronics there? I'm not suggesting you do; those are restricted areas, and I'm not suggesting you interfere with anything. But if you ever happen to notice, there's usually a full 42U rack or so, sometimes smaller, but a sizable rack and cabinet, full of servers. Those are data centers sitting out behind your Walmart parking lot. And those are going to become Kubernetes clusters. So now we're talking about tens of thousands of clusters. Existing solutions get to about here. And why is that? Well, it's because the existing solutions rely on artisanal craftsmanship to maintain those clusters. They rely on a subject matter expert who can put together a Kubernetes cluster very well over a few days, from their skills and the tools they have, but they have to do it manually every time. And that scales to about the 10 to 50 cluster range before things start to develop friction. So that's why we're doing this, essentially.

So let's talk a little bit about existing GitOps scaling approaches, because ours isn't necessarily the only way to approach it. There are models where people have built things that, for instance, have something inside the Kubernetes cluster just pull from a Git repo, so you have a repo per cluster. This can work in some cases. The disadvantage, though, is that you have a lot of repos to manage. You have a one-to-one relationship between repo and cluster, so you've almost just transferred the administrative overhead of your clusters to a different system, your Git repos, and now you have to deal with the problem there. So that's not great. Furthermore, how do you visualize and see who's pulled what? If everything's pulling asynchronously, then as soon as you commit, the change starts getting consumed, and you don't really know how fast; you have no control over the rate of that consumption. So that can be a problem.

The mirror image of that is people respond by creating one repo to solve the problem. Now I have a single point of control, and I don't have the repo sprawl problem. Okay, that's great. The disadvantage, though, is that you still have no way to control rollout. In fact, in some ways you've created a new problem: an atomic lever is now your only way to make changes. All I can do is make an atomic commit, and all of a sudden everything gets the change at once. And if you're talking about a global cluster footprint, a thousand clusters in 25 points of presence across the world, which is kind of the minimum before we even get into the fun stuff, that's not going to work so well. You don't want that to happen all at once, because you really want to make sure you're monitoring the impact and effect of those changes.
So then another way to solve that problem of control is: okay, let's not let it pull. Let's instead push to it. So we've got a single repo, and in theory we can control things because we're pushing out the changes and controlling when they're seen across the world. Okay, that's good. The challenge there, of course, is that the push model just doesn't scale well unless you have good tooling. And push also requires network ingress. If we're talking about globally distributed systems, the moment we start talking about WAN and edge, we're talking about links that are untrusted by nature, going over significant terrestrial distances, links that aren't always on a private backhaul. The moment that happens, we have to be much more defensive about what we let reach into that environment remotely. I don't want something reaching into my windmill generator station from across the world if I can avoid it. So that's where ingress is a security obstacle; not impossible, but it makes things harder.

That's why we chose an agent model. An agent model doesn't require ingress, because the agent sits within the downstream cluster. So in the windmill, the agent lives there with the cluster, periodically checking for new updates it might want to consume, but it's initiating an outbound request to a trusted endpoint. That's much more secure; I don't have to have my firewall open at all. So that's one advantage. But also, because it's not just checking a Git repo, it's actually checking in with an internal engine that's making decisions and telling it whether it should have updates, we get the control that we want. We get role-based access control, and we get conditional deployment. So when this cluster checks in with the controller, the controller may say, "Hey, you have an update." But when this other one checks in, it will say, "No, you don't have an update yet," because it hasn't decided to roll out to that one yet. So we get that control. And of course, we also know when a cluster checked in, and when it reported back that it applied the change successfully. So now I have a transaction. I don't just have "commit and see what happens, and maybe get a phone call." I have a transaction, with all the benefits of transactions that we value in our data systems, right?

Okay, so normally I would stop for questions here, but the way this webinar works is we do questions at the end. I know this is a lot of information, so take notes if you can, or try to have better memory than I do, because I know I would forget if I had to wait. We'll get to questions soon.

Really high level, though, this is the architecture of Fleet. We've got our downstream clusters in a cluster group, and they're checking in with the Fleet controller cluster, which has a definition set, a bundle definition of state, that it's trying to propagate based on what's in Git. And we've got GitHub here because it's a recognizable logo; it's not just GitHub, it's any Git, but you know, these days GitHub is essentially just Git, right? They've sort of claimed that nomenclature for sure.
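For reference, the object that ties this architecture together is Fleet's GitRepo custom resource: you declare which repo and paths to watch and which clusters to target, and the controller turns the contents into bundles for the agents to pull. A minimal sketch, with illustrative names; exact fields may vary by Fleet version:

```yaml
# Sketch of a Fleet GitRepo: "watch this repo path, and keep the
# targeted cluster group in sync with it."
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: cncf-rocks
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  branch: master
  paths:
    - simple
  # targetNamespace: guestbook   # optionally force where resources land
  targets:
    - clusterGroup: north-america
```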
So, practically speaking, you've got your GitHub repo, and it's the source of truth now. It's got all of the state that you want to see in your clusters. And the controller's job is just to follow the directions of that Git repo and make sure they're reflected out in the real world.

Here's the roadmap we're working on. Our 2.5 release, this is now GA, so you can use this today in Rancher. Now, Rancher, by the way, if I haven't mentioned it, is an open-source project. Rancher itself is not a CNCF-incubated technology, but we have others that are: Longhorn is a technology we donated to the CNCF, and K3s is a technology we donated to the CNCF. So Rancher as an engineering group is very involved with contributing to the CNCF and values that heavily. And Rancher, the product, is 100% open source, so you can download it today if you ever want to try it, and this is available now with 2.5. In future versions we're going to add things like support for private Git repos, plus some housekeeping things around different use cases: proxies, advanced security settings, credentials that are needed. And we're really going to try to dial up the UX in our UI to make it more comprehensive, although I think you'll see it's pretty good now. Things like automatic deployment rollback: something fails, we can automatically roll back. And probably much more, because we're just starting this journey. That's for spring of 2021; that's what you can expect.

Okay, we don't need you anymore, PowerPoint; you've been useful enough. So that's a ton of information. Did I lose everybody? Okay, we still have people, that's good. I have spoken to an empty audience for a long time before, so it wouldn't be the first. Okay, so what time is it? 11:20? Okay, we can do this. So now we're going to do a live demo. Who likes live demos? The silence is encouraging. Okay, good.

So let's take a look at our Rancher UI here. Why is it refreshing? Okay, that's not good timing. One sec. We should have known this would happen in a demo, right? All right, there we go. Now we've got our cluster back. So imagine this world: we're in a Rancher server, and we have two downstream clusters, cluster one and cluster two. And I have a Git repo on GitHub, and this repo has some examples for Fleet. There are a bunch of different examples here. We're going to do something really simple today, because that's the best way to start one of these demos live; I don't know if I've had enough coffee to do anything crazier. So this here is a set of Redis containers, and you can see these are actually just YAML files. Has anyone seen one of these before? I hope so, right? If you use Kubernetes, you've probably seen one of these. So one of the first things you'll notice is that there's no special syntax. We're not introducing a new language. There's no new Fleet configuration language and syntax you have to follow, no variant of TCL or something where, you know, I hope you like semicolons. None of that. It's just YAML, normal YAML. You can just drop it into a repo. Now, that being said, there are some situations we discovered where you might want some more metadata. So there is an option to... is it here? Where are our bundles? Here's our fleet.yaml.
There is an option to have this metadata file and do some more metadata-type things, like target customizations, and specify all of that in the repo. So that's an option too; it's more than just pure YAML, but it is pure YAML if you want it. And then finally, you can actually have plain Helm charts in here. So this is not a replacement for Helm, and it's not trying to do anything Helm does. It just takes Helm and uses it for a greater purpose, along with the other things we're trying to accomplish, right?

Okay, so this is our repo. Let's say we wanted to use it to control these clusters. What would we do? I wonder if I know? I hope I know. We'll find out. I'm going to copy this Git repo URL, and first I'm going to go to the Cluster Explorer, because this all happens within our new UI, also known as the Cluster Explorer. You want to go in here right away, and this is probably where you want to stay as you get used to Rancher 2.5. So now I want to create a Git repo in Fleet, and notice I'm going into Continuous Delivery; this is the new feature, right? Now I'll put in a unique name. What should we call this, "cncf-rocks"? How's that? No objections, good, because we all love the CNCF. And the repository is that URL, which is not that; it has to be HTTPS, probably. Actually, we might support the Git protocol, but why find out right now? I could use a branch if I wanted, or a revision; this is Git-based, so expect Git-like features. I am going to use the path "simple", because I want to use just this one simple set of configs, not the whole thing.

Okay. And then, where do I deploy to? Here's where some of the fun stuff comes in. I can deploy to all clusters, to specific clusters, or to a cluster group. I want a cluster group, because that gives me the most control. So I'm going to create this, and now it's pointing to a cluster group. A cluster group is basically a pointer, a way to address multiple clusters based on labels. It's the classic label selector idea, which lets me add and remove things without messing with the parent repo setting. So this cluster group here is defined by... did I already have these labeled? Ah, yes, they're already labeled. This cluster group is defined by "cluster location equals North America". It's just a label selector, and it could be any arbitrary label, "is open source equals yes", anything I want here. In this case, we did "cluster location equals North America". And my two clusters here, can you guess what I labeled them? I wish I had giveaways or something, give you a shirt if you got the answer. "Cluster location equals North America", right? So this cluster group has a quantity of two clusters. That's how that works, and I could add or remove a cluster very easily just by changing labels.

Okay. So now that that's applied, it's telling me what just happened. When I applied that, it started deploying to those clusters, and in the meantime it has completed on both of them, right? Let's go take a look at a cluster now and see if it's there. There we go. So where are we now? Cluster two. Okay. Cluster two has frontend, redis-master, redis-slave. Let's see if that's what we expected.
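While we pull that up, for reference: the fleet.yaml metadata file I mentioned looks roughly like this. A minimal sketch, loosely modeled on the fleet-examples layout; the labels and values are illustrative:

```yaml
# Sketch of a fleet.yaml: optional per-directory metadata that layers
# target customizations on top of otherwise plain manifests.
defaultNamespace: default
targetCustomizations:
  - name: north-america
    clusterSelector:
      matchLabels:
        cluster-location: north-america
    # per-target overrides (Helm values, kustomize dirs, etc.) go here
```

And the cluster group we just targeted is itself a Kubernetes object under the hood, something like this sketch, again with hypothetical names:

```yaml
# Sketch of a Fleet ClusterGroup: a label selector over registered clusters.
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: north-america
  namespace: fleet-default
spec:
  selector:
    matchLabels:
      cluster-location: north-america
```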
Frontend deployment, it's called frontend, yeah. It's the Redis thing, great. There's a service; the service wouldn't show up there, but that's the routing behind the scenes. There's another deployment, yeah, redis-slave, replicas two. Do I see two there? Redis-slave, let's take a look. Replicas two. Okay. Cool. So we just configured these clusters.

Now, I know what you're asking: hey, you never showed me the clusters before this, so how do I know you didn't just put all that stuff there beforehand, and you're pulling my leg? Okay, fair enough. Let's keep me honest here and remove one of the clusters from the... well, let's just remove the label, and then go back to the Cluster Explorer. We should see this stuff go away once it synchronizes. Why has it not saved my changes? That is odd. Let me force an update and see if that does it. It still thinks it's got two clusters, and it doesn't. Well, let's just do it this way then: I'm going to delete those objects, and they won't get recreated, hopefully, because it really should be ignoring that cluster at this point. Yeah, it doesn't even talk about that cluster anymore. So it's ignoring it, but it didn't clean it up. That might be a minor bug we have to look into. Okay, so the repo says ready: zero. That's what we want to see now. And yeah, there's nothing there. Okay, cool. So let me add that cluster back in, just to prove the point; it's "cluster location equals North America" again. Let's see if these things pop up now. Oh, there you go. Okay. So that's good.

All right, let's try another thing, another kind of common workflow. My clusters are connected to Git now, so I should be able to control things through Git, right? So, who wants to add another Redis node to our cluster here? I can just feel the excitement. I get it, I know the feeling: Redis needs more nodes. Let's make that happen. So redis-slave has two replicas now. What do you say we go to three? Again, let's make sure there's no man behind the curtain here, so we can all see what's happening: redis-slave replicas two, changing to replicas three. Let's commit directly to the master branch, like all good DevOps engineers do. Right. No, this is not CNCF-recommended. I'm being told right now by the moderators that I need to retract that statement: the CNCF does not recommend committing to master directly. You always want to use a pull request. All right, so I'm going to commit to master directly, which again is not great, but it's fine for our purposes. And in a moment, it's going to check and say, oh look, something changed, I need to go change the world around me. And look at that: replicas three. And now there are three. So there you have it; that is now working.

So what you saw here is that I just committed a change to Git, and it changed two of my clusters. If I had two or 2,000, it'd be the same. Oh, by the way, do you want to look at the other cluster, just to see what cluster one looks like? What do you think is going to be there? Okay, same things, right? How many should be here? Three. Perfect. So now we have this identical state across all of our clusters. Now, admittedly, I'm really focusing on the example of Redis as an application.
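For reference, the whole change committed in that demo is a one-line edit to a deployment manifest, roughly like the sketch below, modeled on the guestbook-style example in fleet-examples (the labels and image here are illustrative):

```yaml
# Sketch of the redis-slave deployment edit from the demo. Bumping
# replicas from 2 to 3 in Git is the entire change; Fleet rolls it out.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 3        # was 2; this is the only line that changed
  selector:
    matchLabels:
      app: redis
      role: slave
  template:
    metadata:
      labels:
        app: redis
        role: slave
    spec:
      containers:
        - name: slave
          image: gcr.io/google_samples/gb-redisslave:v3
```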
So you might be saying: okay, I've always been able to use Helm and just run a Helm upgrade across five clusters, no problem, not a big deal. I can have a script that synchronizes all my deployments with Helm, not a big deal. Yeah, that's fair, but that's not the only use case. The problem you're going to want to think about is: what if Redis needed CRDs, or modifications to kube-system, or storage classes provisioned? System-level things in Kubernetes. Kubernetes is now a complex piece of machinery, and those things can't necessarily be captured in a Helm chart easily. That's where this comes in, because now we're doing all of that in Git as well. The Git repo is cohesive; it's the whole thing.

This goes right back to the Docker thing. Why was Docker so powerful initially? Well, one of the reasons is that if it worked on your box, it worked on any box, because there was none of the dependency hell we've lived with for the last 20 years in engineering, where it's like: okay, everything will work, except, do you have the right version of libc and libxml on your box? Oh, if you don't have the right version, it will fail. Well, okay, but not everybody's going to have the same version, so that's not a complete solution. Docker solved that by saying everything's going to be in the image. It's a comprehensive solution that doesn't leave anything up for surprise. And it's the same thing here: now the entire cluster can be described. And notice I said can be. You don't have to; you're not confined to doing everything through Git. I can still edit these clusters, no problem; all my other tools are going to work. Nothing impacts my existing tool set. But I can, if I want to, follow that pattern and describe everything through code. So this is infrastructure as code for Kubernetes.

So with that, rather than doing any more demos, I see some questions coming in, and I'd really like to get to them. If it's all right with everyone, could we start answering some of these?

Sounds good, William. Can you hear me? Okay. Yeah, we can. Everyone, this is Connie Lynn, our events manager at Rancher, helping out with questions today. We do have quite a few questions coming in, so I'm going to go down this list. Some of these came in earlier during your presentation, William. Zolt here asked, and this is related to another question: how are secrets managed, especially ones that have to differ between clusters?

Yes, that's a very good question. I'm actually thinking about our new backup operator, because in the backup operator we have an encryption engine for when we pull a secret out of state and store it in the backup object; we're basically capturing every Kubernetes resource with the backup operator for preservation. I think that's the same thing we do here, so I think you can use that same encryption format, but I need to double-check on that. Now, as far as secrets that differ, you can still mutate things outside of Fleet, right? Fleet only cares about the things it knows about. That's part of the beauty of this.
Again, it's not an abstraction. There's no opacity on your cluster at all. So what I would do in that situation right away is: okay, secrets are different, so I'm going to deploy secrets through some Jenkins job or some other very secure, controlled deployment tool that exists just for that purpose. That's a small amount of entropy that I can deal with. So that's how I would address that.

Great, thanks, William. The next question is from Patrick. Patrick asks: can the continuous delivery configuration in Rancher also be done as code, with Terraform, for instance?

Yeah. So, I'm trying to think of how Terraform could be applied here, because Terraform is largely concerned with the infrastructure below. By the time we get to Fleet, we've already got Kubernetes clusters running. Now, Terraform could be producing the clusters that then get registered into the Fleet system. The registration of a working cluster is kind of here, right? This is where I assign clusters to be registered, to be assimilated by the Borg, if you will. Any Star Trek fans? The Borg would come and assimilate you, and now you're part of the collective. And that's actually not a far-fetched analogy, because what I've read is that the original Google technology was actually called Seven of Nine, and then later Borg, so both of those names have quite a Star Trek meaning. So yes, you can think of it as: something has to create the thing that's going to be assimilated by the Borg, and that could be Terraform. But Terraform doesn't really have a place once the cluster is running, because at that point we're controlling everything through Kubernetes. I hope that answers the question.

Sounds good, thanks. A couple of additional questions coming in here. Let's see: can we create clusters on GKE, AKS, or EKS from Rancher? Or should we register them after creation?

That's a great question. Basically, you can do both. You're kind of asking an opinion question, and I'd want to know a little more about your use case, but if I look at the Add Cluster page here in Rancher, it's probably easiest in those cases to do it right from Rancher. So I could go down here, click EKS, choose the region, put my Amazon keys in, fill out some more forms, and it will talk to Amazon, build the cluster, and register it. We're also finding, and this ties into the previous Terraform question, that a lot of people want to use Terraform because they like what it provides for building their infrastructure. In that case, use Terraform to build EKS, and then you can register or import it into Rancher and continue the journey there. So both are available; the easiest is to provision into EKS, AKS, or GKE from Rancher with one click.

Great. All right, we have a few additional questions here. One attendee is asking: will Fleet obey the HPA, or will it scale back down to the manifest's original replicas value?

That's a very insightful, advanced question. That'd be the advanced question on the test, wouldn't it? I like it, and I'm glad they're keeping me on my toes. I would think that it would see the transaction as complete; the transactions that are driven are driven by Git changes.
So in theory, once the transaction is complete and the HPA is deployed, Fleet is not going to keep checking whether the deployment's replica count changed from what was in Git. It's not doing that, because it's not trying to control those things; it's only pushing down changes that Git is signaling. Now, if the HPA does its thing and mutates the scale of that deployment, and then later you make a change in Git, the deployment will get reset back to that original number if you define it as a replica quantity. There might be a way in the HPA YAML to make sure that doesn't happen; I'd have to look into it, so we'd have to figure out exactly the best way to do it. Definitely feel free to follow up with me, William at Rancher, if you want to discuss it further, or just jump into one of our user Slacks and ask.

Great. Thank you, William. Another question here: in the case of a build service that uploads a Helm chart to a Helm repo and pushes an image to a registry, and then uses the Rancher UI or API to deploy the chart, kind of that workflow, what is the recommended way to use Fleet?

Yeah, so if you're using Fleet, you aren't necessarily going to use the catalog in the same way; there is a bit of overlap there. Adding a Helm chart to a repository and then going to our classic app catalog, here, where you can deploy apps: this is based on a repository, as you describe. You probably don't want to use both of these in the same way. Or, I mean, you could. If it's basically "hey, I just want Grafana for this one cluster," that's fine. But if I'm doing this across a lot of clusters and I want consistency, I wouldn't do it through here; I would take this code, put it into a Fleet repo, and control it that way. So they're tools that can do similar jobs, but one of them is better at a certain scale. Hopefully that makes sense.

Great, thanks for answering that. The next question, from Patrick, asks: how is the replica change in Redis from two to three using Fleet different from using Argo CD, for instance?

Yeah, I don't know of any difference from Argo CD there, and I probably don't know Argo well enough to comment on it. There is some difference in approach, I know, from how our engineering team developed Fleet; I think Argo is more of a top-down push model and less of an agent check-in model. But in terms of that one nuance, I don't know of any difference off the top of my head.

Great. So next, there are two questions here related to Git repos. Nico asks: can the Git repos be deployed into a new Rancher project as well? And a second question asks: how can accidental or forgotten manual changes be removed, to synchronize what's deployed with what's in the Git repo?

Okay, I'll answer the last one first, because that one I can show quickly. I can just do a force update, and that should override anything that's different or out of sync. So if I did make manual changes, I could force-correct them that way. The first question I'm not sure I understand; could you say it one more time?

Yeah. Nico asks: can the Git repos be deployed into a new Rancher project as well? So maybe the applications and deployments represented here can be deployed into a different project, I think.

Ah, yes. So right now, how did we define that? Right now, these were deployed to default.
We could define that, I think, through the Fleet bundle in Git; there's none of that in this example. I think we can also do it from the definition here. Let's see... I think we can set the target namespace. Yeah, there it is. So we can say the namespace we want to deploy into, and a namespace is just a member of a project in Rancher; the project is just a container around namespaces.

Okay, great. So it looks like we have two questions left here. The next one is, I think, more of a general Rancher question: does Rancher allow us to upgrade the Kubernetes version on all nodes for a cluster group, whether it's managed clusters or K3s?

So, upgrading the Kubernetes pieces on them, managed clusters or K3s. The moment we're talking about managed clusters, the moment we're talking about different Kubernetes infrastructure management, it's not going to be the same across the board, right? If it's K3s, there's an upgrade path for upgrading all those nodes. If it's Kubernetes managed by Google, by GKE, they control that. We don't get to control how the update process works; we can make an API call asking them to, in some cases, but that's it. So it is different for different cluster types. I think what you're envisioning is a world where you have one way of managing the Kubernetes distribution on every node across all your infrastructure types. If that's what you want, then you want to look at something like K3s or RKE2, and use that same technology to build everywhere, even in the cloud. So then I wouldn't use EKS; I would use RKE on EC2, on plain nodes, because then I control what the upgrade path is like, and I can have it consistent for EC2, for my on-prem, and for my edge. You'd have to use the same technology across the board, and that's what Rancher provides. But we also recognize that a lot of people have so much in the cloud that they might as well just use EKS. So we support both; they're both first-class citizens for us. But if your goal is consistency across infrastructures, then you need to choose a Rancher technology as your distribution.

Thanks, William. All right, it looks like we're getting close on time, so one last question here. I do see additional questions coming in, but hopefully we'll be able to follow up with attendees and get the individual questions answered that we won't be able to get to today. So, last question, William: what is the easiest way to try out Fleet?

That's a good question, and an important one, because I hope you can all try this ultimately; that's the best way to learn and see if it's right for you. The easiest way is to set up a Rancher server and then add one cluster to it. If I had to do this with two nodes, say on DigitalOcean or Lightsail or EC2, here's what I would do. I would actually check out RancherD. Now, this is not considered production-ready yet; it's brand new, so don't use it in production, but it's really fast to get started with. Just Google "RancherD" and this blog will tell you about it, and literally, running the installer is one command on a single node, and that will install your Rancher server.
It will look just like this, and that's actually what I'm running here on a VM at my house. Yes, I'm at home; don't tell my boss I didn't come into the office today. Just kidding, we all know most of us are working from home these days. So yeah, this is my home network, a single home VM running Rancher via RancherD. Then I need one additional node to be my downstream cluster, and for that I would use K3s, because it's really resource-efficient, so I can run a whole cluster on one node very efficiently. And guess what? That's also one command. So I would run that command, then go into my running Rancher server, say Add Cluster, and it would be another cluster, Fubar. I would copy this registration command onto the other VM where I just ran K3s, so that it gets registered into Rancher. At that point I have a Rancher server and one downstream cluster, and that's all I need to try out Fleet, plus a Git repo on GitHub. And just go ahead and fork the examples from rancher/fleet-examples to try it out. You don't even need to write your own YAML; just fork that one from our GitHub. So that's what I would do.

Thanks, William. I hope everybody can give that a shot. And William, do you have any more comments, or are we pretty much wrapped up?

I think I've done enough talking, haven't I? I'm sure you're tired of it. But thanks to everyone for participating, and I hope we can help you with more things in the future.

All right. Thank you so much, William. That was great.

Thanks, Connie. And I hope everyone has a great day. Bye-bye.