Hello, everyone, and welcome to Cloud Native Live, where we delve into the code behind cloud native. I'm Taylor Dolezal, head of ecosystem here at the CNCF, where I work closely with teams as they navigate their cloud native journeys. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, we have Andrew Rinehart from Sidero Labs, and today Andrew will be presenting Bringing the Edge Into Your Data Center. This is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct. Please don't add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful to all of your fellow participants and presenters. Be excellent to one another. With that, I would like to hand it over to Andrew to kick off today's presentation. Andrew, please take it away.

Cool, thank you, Taylor. All right. So today I wanted to take us through a pattern we're seeing emerge from our users, where, essentially, you have edge compute, but you don't have a whole lot of it. When it comes to Kubernetes, you want HA and all of these things, and that becomes much more expensive; and when things go wrong, they can go really wrong if your control plane is out there at the edge. So what we're going to do today is deploy a control plane in Vultr, add some nodes to that cluster running in DigitalOcean, and then add a Raspberry Pi to the cluster that's running in my closet. The goal is to run Redis on the worker nodes in DigitalOcean and prove that we can reach that Kubernetes service from the Raspberry Pi. You can imagine having a cluster hosting Argo CD or Flux that's responsible for deploying an application to your edge machine, and then the edge machine reaches out to what I'm finding people call "cloud services". They're not necessarily AWS or GCP services; they're services running in Kubernetes, where the workloads actually live in the cloud and the edge reaches out to them.

Let's see, am I sharing my screen yet? I'm going to take a bit of a shortcut today, because we could easily have made this stream about how you even go about connecting these remote machines in the first place. There are networking implications in the design I'm describing that make it very, very difficult: as I mentioned, we're going to have machines in Vultr, machines in DigitalOcean, and a completely private machine in my closet. So how does that work? WireGuard is a relatively new technology built into the Linux kernel, and we support it in the Sidero Labs product, Talos Linux. In Talos Linux it's a single Boolean value: you just say "enable KubeSpan", and all of the automated WireGuard networking gets set up for you. I didn't want to go through that manually, so I'm cheating a little bit there.
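For anyone following along at home, that Boolean really is a one-liner in the machine configuration. A minimal sketch, following the Talos docs; the file name is an arbitrary choice:

```sh
# Minimal KubeSpan patch, supplied as a machine config patch when the
# machine is provisioned.
cat > kubespan-patch.yaml <<'EOF'
machine:
  network:
    kubespan:
      enabled: true
EOF
```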
I'm also using another product of ours called Omni, which is going to let me manage these machines regardless of where they are. What you're looking at here is about seven machines. We have these demo SFO3 ones running in DigitalOcean; you can see the droplets here. We have these ones with generated hostnames, since Vultr doesn't do DHCP, it seems, but we do have three of those machines running in Los Angeles. I should point out the geography: those are in Los Angeles, these are in San Francisco, and I'm in Santa Barbara. Those three machines will serve as the control plane. And then we have this demo-sbc-01. Sounds fancy, but it's just a Raspberry Pi 4 running in my closet.

So let's start off by creating the cluster. I'm going to look for, let's see, I want Talos, yes. Let's focus on the control plane nodes first. I'm going to make these three in Vultr my control plane nodes. I do have some patches here, which I'll explain as I go, but let me just look here: two-seven, cool. Talos Linux is completely driven by a configuration file that you supply to it via the API. For those of you who don't know what Talos Linux is, it's a Linux distribution built explicitly for running Kubernetes. You can only communicate with it via an API; there's no Bash or SSH or anything like that. So I'm going to use the API in this product to push this configuration to the machine as we set it up. All I'm doing is setting up the networking according to how Vultr tells me to, and you can see here the "kubespan: enabled: true" that I was talking about earlier; that's all that's needed for the automated WireGuard setup.
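Since Vultr doesn't hand out addresses over DHCP, the networking part of that patch has to be spelled out statically. A hedged sketch of what such a patch can look like; the interface name, addresses, and gateway below are illustrative placeholders, not the demo's actual values:

```sh
# Hypothetical static-networking patch for a Vultr control plane node;
# the real values come from the provider's control panel.
cat > vultr-network-patch.yaml <<'EOF'
machine:
  network:
    hostname: demo-cp-01
    interfaces:
      - interface: eth0
        addresses:
          - 203.0.113.10/23
        routes:
          - network: 0.0.0.0/0
            gateway: 203.0.113.1
EOF
```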
Let me just double-check here: this is two-seven-one. I'm matching them via UUID, and I set up some patches beforehand to save us some time.

And really quick, Andrew, is there any way you might be able to bump up those font sizes just a hair or two? How's that? Good, that should be good for now. Cool, thank you.

Yeah, and the last one is F6C. All right, cool. So we'll do Talos 1.2.0 and Kubernetes 1.24.4, we'll call this "livestream", and I will create the cluster. What's really cool about this is that we're actually already leveraging WireGuard. It isn't the KubeSpan WireGuard network; it's what we call Siderolink, which is basically Talos streaming logs to this service for us. In theory I don't have Talos API access at the moment, so if I wanted to see what was going on, I could get something like the console access you'd have from a cloud provider via this. And this is just Talos using a WireGuard network, a point-to-point connection, so that we can stream events and logs and actually operate the machine remotely. You can see some transient errors coming up, but "node is not ready" means things seem to be forming. We'll wait for that to finish.

All right, cool. So the nodes are starting to pop up. These are the machines in Vultr, and they're going to serve as our control plane. The way we're doing load balancing here is that this product serves up a load balancer so that we can hit all three of these machines in HA fashion. So as long as your edge machine has egress and internet access, you'll be able to reach the Kubernetes control plane, and it will all be secured over WireGuard.

Cool, so the next thing I want to show is adding some machines. Let's imagine we have a control plane currently running in Los Angeles, and say your customer has an edge site somewhere in San Francisco. What we're seeing people do with this product and with Talos Linux is add what I guess you could call edge nodes, though they're still running in the cloud; they act as a supporting set of nodes that supply services the actual edge machine, the one running behind the customer's firewall, is going to need. In our case that machine is going to be my SBC-01. These SFO machines are just going to serve something; today we're going to use Redis, but imagine they supply some kind of support to the edge machine so that you don't need to run as much there. So we're going to set up these three as workers, and the only thing we need to do on them is patch in the KubeSpan capabilities. Cool, we'll add those nodes, and all we're left with now is the edge machine. It's physically in Santa Barbara, because I don't have an edge machine running in San Francisco somewhere, but let's just imagine it's the edge machine close to those support worker nodes.

Before we do that, let's make sure these other machines join. I always wish projects supported playing the Jeopardy theme music while they wait for things to join, but no one's answered that feature request yet, sadly. All right, cool. "Node is not ready." It looks like things are going to chug along just fine.

So one of the things we're seeing our users be concerned about at the edge is how to encrypt the disk, and thankfully this is simple enough with Talos. We're going to add this machine as well now, but we're going to do something slightly different with it this time: we're going to patch in some encryption. We're going to use LUKS to encrypt what we call the system disk within Talos, which is the disk where, more or less, ephemeral state lives. We could do a whole live stream on Talos, so I won't get into it, but suffice it to say we're going to encrypt all of the important partitions we need to worry about. And what I'll do here is just generate a couple of passwords. This is something you probably want to be thinking about for an edge machine, because people can walk away with these machines, and that's a common thing that does happen. So you can see we have KubeSpan enabled and system disk encryption, and I'm going to add that node.
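For reference, a hedged sketch of what that encryption patch can look like in the Talos machine configuration; the passphrases below stand in for the generated passwords:

```sh
# Hypothetical LUKS system-disk-encryption patch, following the Talos
# machine config schema. The passphrases are placeholders.
cat > encryption-patch.yaml <<'EOF'
machine:
  systemDiskEncryption:
    state:
      provider: luks2
      keys:
        - slot: 0
          static:
            passphrase: "<generated-password-1>"
    ephemeral:
      provider: luks2
      keys:
        - slot: 0
          static:
            passphrase: "<generated-password-2>"
EOF
```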
So the machine running in my closet should be joining this cluster very shortly, and it looks like the ones in San Francisco are up. Let's double-check whether we have some extra disks in San Francisco, because if time permits I want to push things a little and try to get storage going, since that's a common question that comes up: how do you do storage? To be honest, I think that's a weakness in this space, and I think these hybrid approaches are rare because it's been hard. But if you look at WireGuard and what it has enabled, just take the folks at Tailscale and what they're doing, I think the new types of things we're going to see will push the boundaries of the current implementations of things like storage. My hope is that in the future we can have storage that is geographically aware. But yeah, that's not really the case today. So the recommendation, and it's the pattern we're following today, is to have Redis backed by local storage, some CSI driver you decide to supply to the cluster, and then have your edge machine store its data there, if you will.

Okay, cool. Actually, let me double-check on this one machine real quick, because it's important that it works. No, it isn't ready. All right, cool; it looks like it's going to come up, though. Let's download our kubeconfig and our talosconfig. These will let us actually interact with these machines, and I'll move them where they need to go. And I'm happy to pause for any questions at this point, because we're going to shift gears: we've bootstrapped the cluster, I've explained the architecture, or at least the pattern we're seeing people use, and now we're going to prove that it can actually be functional.

Cool, if you do have any questions, please feel free to throw them into the chat and we'll get them raised. And we had another request to make the text a little bit larger in the terminal. That better? Yeah, just a little bit more would be fantastic. Perfect. All right, awesome.

So I'm just going to set some environment variables here so that I'm not having to supply a bunch of flags. All right, let's do a get nodes and show that things are working. Awesome.

We did just get one question asking: are there any plans to release this for air-gapped environments? Yes, absolutely. It's a single Go binary, and it's going to have etcd baked in, so you'll be able to run it in HA fashion. So yeah, it will be coming shortly, but at the moment it is just a SaaS.

Cool. So we have all of our machines, and at this point I'm going to do some labeling on the ones running in DigitalOcean so that we can do some node affinity. This is something you have to be really aware of when you're talking about the edge in this pattern, because imagine that this edge worker, this SBC-01 here, is one of five, and they're potentially different customers or different locations. You're going to have to really utilize labels and node affinity to make sure you have workloads running where you need them to run. I'm going to use node affinity today to make sure Redis runs on my DigitalOcean machines, but you can see how it applies to the edge machines as well. So if we get our nodes now, we can see that these ones have a role of "redis", an arbitrary name.
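The labeling step looks roughly like this; the node names below are stand-ins for the DigitalOcean workers, and the label key is just an arbitrary node-role:

```sh
# Hypothetical node names; the role label is applied after boot, since
# kubelets aren't allowed to self-assign node-role labels.
kubectl label node demo-sfo3-01 demo-sfo3-02 demo-sfo3-03 \
  node-role.kubernetes.io/redis=""
kubectl get nodes   # the workers should now show "redis" under ROLES
```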
Something else I want to point out, which is interesting here: oh, the internal IP is that, which is interesting. Another thing you're going to have to be very aware of is network collisions. In Vultr, my subnet is 10.5.96.0/20; in DigitalOcean it's 10.124.0.0/20; and at home it's 192.168.1.0/24. You've got to make sure you don't get network collisions here, because under the hood, with Talos at least, and I think with anything that implements something equivalent, you have the potential for IP collisions. So that is something to be aware of in this model as well. Let's just double-check. Actually, I need to get an IP address. So let's ask this machine what version it's running, just to make sure we have Talos access. Ah, I know what I need to do: I need to set the password on the Talos config, and I'll do that off-screen. So give me a second. If there are any questions, this is probably a good time to take them while I do some stuff off-screen here.

Awesome. If anyone has any questions, please feel free to get those to us. I really liked what you said about envisioning what the future is going to look like in terms of where we actually place storage. Are there any other projects or contexts, or really just any places where some of those conversations are happening, that you could refer folks to keep an eye on if they're interested in that space?

Yeah, they're not really happening yet. I think what we're doing here at Sidero Labs is pushing the boundaries. With something like Talos Linux, we can really start to imagine new types of architectures and new ways of doing things that just aren't there yet. So we're a little ahead of the curve here, I think, and the conversations aren't happening, but we plan on spearheading them. And they need to happen in multiple places; storage is one. If you even just look at the ecosystem within Kubernetes, it doesn't really support this notion of hybrid infrastructure, if you will: this idea, like what we're doing today, where I have machines running in Vultr, machines running in DigitalOcean, and a machine running on-premises. If you look at the tooling, cloud controller managers for example, it doesn't have this notion. How do you actually do that? Look at Cluster API: it's not a native idea within the Cluster API world. So we're still very early on here, to be honest, and the conversations aren't happening at the level I think they should, but we're proving out here that it can be done, and we plan on driving those conversations.

Cool, cool. It's an exciting space. I'm really curious to see, as we move to the edge, what gets unlocked. I know there are still many other problems to solve too, like multi-cluster federation and all of those kinds of concerns, as we start to go multi-cloud and multi-context on that front. Exactly.

I did get one more question in, and that was: does latency between workers and the control plane matter? Not too much. What does matter is the latency between etcd members; that is the most important thing. In this model you can see I'm deploying the control plane nodes right next to each other, and that's important. The amount of traffic going back and forth between the kubelet and the API server is pretty minimal; it's really just enough to check the state of the world and make sure the configuration is pushed to the machine. So it's not really in the hot path as far as latency goes. That is actually a very, very common question; I'm glad someone asked it.

Let's figure out why this isn't working; the joys of live demos. Okay, looks like I'm going to get another I/O timeout. That's fine; we can live without it. I would have loved to show off talosctl a little bit, but we can figure that out later. In the meantime, let's actually start crafting some of the things we're going to need for deploying Redis.
So I'll open up my editor here, and let's call this redis.yaml. My idea is that we're going to run Redis as a Deployment and target the nodes with the redis role, the ones running in DigitalOcean, so Redis gets deployed there. Then we're going to expose an internal Service, which will be routable from the edge machine thanks to WireGuard, or KubeSpan. Even though that machine is completely private, we'll be able to reach the Service and prove that this is a model that will work for us. So I think what we'll do is, how are we doing on time? We have about 40 minutes. Okay, cool. The reason I chose Redis is that I know the Kubernetes docs have a sort of how-to for running Redis, so we're just going to work from there to show how we can build this up from scratch.

So let's start with the Pod. I know that what I actually want is a Deployment, so we'll just change this: apps/v1. And then I believe what we need is the spec. I'm going to cheat just a little bit. We'll do one replica for now, because you can figure out how to make this HA later; maybe that's what we'll do if we have time. I'm just going to use this label here to set up my template, and then let's get these things going. I think that's what we need. And Deployment, cool.

So I mentioned node affinity. Node affinity, again, is going to be an important idea here, and you're going to want to make sure you have it on all your workloads in this model. What we're doing is setting up some node affinity here, keying on that node role label we applied earlier, and the values are just that. By the way, that's not really the node role; node-role is a special label. The kubelet does have the ability to set labels on nodes when they come up, but this one isn't allowed, because you could use it to escalate a machine into a control plane node. So the labeling has to be done after the fact. Anyway, here's the affinity: we're going to target those DigitalOcean machines and run Redis there. You can see we're referencing a ConfigMap here; I believe the docs have one somewhere around here, so we'll just use it as an example. Cool, so that will be mounted, and we have port 6379. The last thing we're going to need is a Service, because I want to be able to reach this from my edge machine. Let me just copy this one in to save us a little time, because no one wants to watch me be a YAML engineer.

Cool, so let's try to deploy this. What's the file called, redis.yaml? We might get yelled at about something. Yeah, okay, those are just warnings at the moment. Let's see where the pods are running. Cool, it has landed on one of my DigitalOcean nodes in San Francisco. So we have that running, great.
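Condensed, the manifest built up here looks roughly like the following. It's a sketch: the image tag is illustrative, the ConfigMap wiring from the docs is omitted, and the node-role label is assumed to match the one applied earlier:

```sh
cat > redis.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      # pin the workload to the DigitalOcean "support" workers
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/redis
                    operator: Exists
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
EOF
kubectl apply -f redis.yaml
```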
Now what I want to do is actually hop in and create a workload that simulates an application running at the edge on this SBC-01 machine. So let's just create a pod here; we'll call it redis-client. You can imagine this is your application, or whatever might need to reach the service we just deployed in the cloud but need at the edge. I'm using a different approach here: I'm setting the node explicitly. You saw me use node affinity before, which is a good way of targeting node types, if you will; in that case we were just using the role, and you could even arrange it so that if I scaled up, only one would run on each machine. There are ways to spread things out. In this case it's very, very simple: I know this is my application, my workload, that I want to run on this specific box, so we're going to target that specific box by hostname. And we have a bunch of security contexts here, because Talos secures Kubernetes for you out of the box. It applies STIG hardening guidelines, CIS benchmark guidelines, and our own opinions on top of those. You get a hardened version of Kubernetes, which I highly recommend for the edge and for this pattern as well, because there are just so many moving parts. That's where all of this extra security context stuff comes from; it's basically the most least-privileged pod you can possibly run, and the only thing we want to do is exec into it. What is the name of it? Let's just show where it's running. Cool. So I have my client and my server; let's reach it.
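A sketch of that client pod, pinned to the edge box by hostname; the node name is a stand-in, and the security context mirrors the sort of least-privilege settings a hardened cluster expects:

```sh
cat > redis-client.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
spec:
  nodeName: demo-sbc-01        # schedule directly onto the edge machine
  containers:
    - name: redis-client
      image: redis:7           # handy because it ships redis-cli
      command: ["sleep", "infinity"]
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
EOF
kubectl apply -f redis-client.yaml
# prove the cloud-side service is reachable from the edge:
kubectl exec -it redis-client -- redis-cli -h redis ping   # expect PONG
```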
Everything is ready; what could I be missing? Interesting. KubeSpan should be enabled. Okay, well, let's dig into it. Let's use talosctl; I believe we can use it here. Let's get the members here. So we'll use talosctl to debug what's going on. What I'm doing here is getting a list of the resource definitions. Talos, if you haven't noticed already, is very, very similar to Kubernetes: it operates on the same controller pattern, with resource definitions that you can supply and retrieve. I'm going to look at my KubeSpan peer statuses, targeting one of the machines I can reach. KubeSpan is not enabled here; what is going on? We'll look at the configuration. Oh, interesting, did I not enable it on these ones? Let's double-check. That's interesting. Oh, it looks like my patches didn't apply for some reason. Did I not save them? Well, let's just go through and do it; the other settings don't really matter. What I'll do is get these nodes. These machines, for one reason or another that I'm not entirely sure about right now, didn't get the patches I supplied. The most important one is enabling KubeSpan, so I'll start with five-eight, since I already have it here, and go down to network in the Talos configuration and enable KubeSpan. That should get us what we need. Give that a second; I think it needs 30 seconds or so. We'll edit the other machines in the meantime. I have 180 and 94 to do. Feel free to interrupt me if there are any questions at this point.

Yeah, not a question, but a comment: the excitement of a live demo. But that's what makes it fun, and that's why we do it. I can't wait to endorse you for debugging live on LinkedIn; I feel like that's one of the most undervalued skills you can possibly have. Operating under pressure is one of the most important skills in that case. A helpful learning experience, anyway. Yeah.

All right, cool. So we're getting some peers at this point. I wonder why some of my patches didn't go in; let's do a little bit of debugging real quick. Okay, so this one has peers now. The problem is going to be that I can't reach... here's what we can do: let's get the public IPs of our droplets and double-check. Interesting, okay. It looks like none of the patches worked; they didn't apply, for some reason or other. I believe we just rolled out an update to this alpha software just before I did this, so we'll have to check on that. But we'll copy this; maybe there's some hidden character or something. Okay, I guess so. All right, we applied that configuration, which means KubeSpan should start getting set up in the background. We'll do this one in DigitalOcean as well, and then we'll just edit the local one here in Santa Barbara, and then we should be able to continue. But hopefully you can see a little bit about Talos and how it works here. Cool.
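For the curious, the debugging flow above maps onto talosctl roughly like this; the node address is a placeholder, and the resource names follow the Talos docs of this era:

```sh
# Inspect the Talos resource API (analogous to Kubernetes resources)
talosctl --nodes 203.0.113.58 get resourcedefinitions | grep -i kubespan
# Which WireGuard peers does this machine see?
talosctl --nodes 203.0.113.58 get kubespanpeerstatuses
# When a patch didn't land, fix the live machine config by hand:
talosctl --nodes 203.0.113.58 edit machineconfig
#   ...and set machine.network.kubespan.enabled: true in the editor
```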
Yes, helpful to get a sense. I did have one question: someone just wanted a recap of the edge devices you have across the clouds and the edge. Oh, sure. What we're running here is a seven-node cluster. It's split across Vultr, where I have these machines running as my control plane; there are three of them, running in Los Angeles, by the way. We also have three droplets running in DigitalOcean, in San Francisco; these serve as the support nodes, if you will, for our edge machine. And our edge machine is this SBC-01, which is just a Raspberry Pi 4 running in my closet on a private network here at home.

So let's enable KubeSpan on the last machine here. This one. Let's try one thing. Let's actually... well, we may need to just improvise here and spin up something else to represent our edge machine.

One question that came in was: what are the advantages of KubeSpan over, for example, Cilium Cluster Mesh? Yeah, that's a good question. I think they're both after the same end goal, but the advantage within Talos is that it starts very early. It's not dependent on Kubernetes' status; it's not dependent on the kubelet being up. It's just baked in, and you get it as early as possible. We also have access to host-level networking and routing, so we can set up routes and everything we need so that even host-level traffic can go over this WireGuard network. Those are some of the advantages, and they're already being leveraged here in this Omni product I'm using, which would not otherwise be possible. If you think about it, these edge machines, and the fact that I can remotely manage them using this product, depend entirely on Kubernetes not being in the hot path; if I had to bootstrap Kubernetes on the machine, get Cilium installed, and do all of that, it would imply I could already talk to the machine. In this case it's just baked into Talos, so it can start as soon as the machine turns on, essentially.

Okay, so I think what we're going to run into here is, well, an issue. I can schedule onto it, but I can't reach it. Well, let's just do something: let's create a droplet somewhere else. I think I only have this image in a few regions, so let's do this. We'll do basic, we'll make this a simple machine, and, why not, let's do New York. So I'm going to spin up a machine in New York, in DigitalOcean, and it's going to represent my Raspberry Pi. I confirmed with my team in the background that patches were broken between my testing and now; that's why we do this, and that's why it's still alpha. We'll still be able to prove this out. What I'm doing is booting a custom image that you can download from here. So if I just go here and say DigitalOcean, I can download this image, and it's preconfigured to set up WireGuard so that we can reach the machine and start managing it from here. In fact, it's already joined: this is the one in New York right now. What we'll do is join it to the cluster. The automatic patches are broken, but that's fine: it'll have a public network, and I can patch it in manually. So let's add that node, and then, while we wait, I need to, what's the hostname of this machine? I believe it was demo-nyc3-01. Let's delete that pod; we'll get it off of that machine. Cool, so we still have the one in San Francisco. Let's just check in on demo-nyc3-01. No, it isn't ready, but it should be there. Okay, all right, there's NYC. We have an internal IP, we have that. So we're going to patch in KubeSpan here so that we can have this mesh network, and once the node is ready, we can continue.

So all I've done is spin up a new machine. This might actually be good, because you can see how things can go wrong and you can just spin up another machine and replace it really quickly; this is why we love Kubernetes. So we have that pod; let's exec into it. We'll use the service name to see if we can reach it. Cool, so this is a node running in New York. Just to recap: we have our control plane running in Los Angeles, and we do that because we want the latency between etcd members to be as minimal as possible. Thanks to WireGuard and KubeSpan, we're able to add machines running in San Francisco, which act as support machines offering storage or something to that effect, some service the edge machine might need. We planned on using the Raspberry Pi in my closet, but we ran into some trouble, so to replace what it was representing, I just went ahead and spun up a node in New York, and using WireGuard we have a completely meshed network between all of these. So you can imagine that this is one customer and this is another customer, and you have workloads dedicated to those machines. Maybe it's audio-visual, and you need to run some software in a conference room that supplies that service, and the room is represented entirely by a single machine. You can target it and run exactly what the customer needs. So that's where I wanted to get to, with some cheating, though we didn't necessarily cheat; we had to do some workarounds and patch things in manually. And then, if we have enough time, which we might, I was going to look at how we could use Argo CD or Flux to deploy this, or maybe even challenge this question of storage running at the edge. Let's see, we have about 10 minutes. Are there any questions? Anything I can answer?

I don't see any questions now, but folks, if you do have any questions or would like to get things answered, please feel free to throw them into the chat and we can get those surfaced.

Cool. So I want to expand on this a little bit. There's another pattern to this: you could tell I was doing everything by hand with kubectl, applying these manifests. But with the goal of being fully automated, we're starting to see that things like Argo CD and Flux become really, really powerful here. And one of the ways we can deploy that stack automatically is with Talos: we support the ability to supply manifests at cluster bootstrap time.
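A hedged sketch of what that can look like in the cluster configuration; the Flux install URL below is just illustrative of the idea:

```sh
# Extra manifests are fetched and applied once, when the cluster
# bootstraps; the URL is an illustrative example.
cat > bootstrap-manifests-patch.yaml <<'EOF'
cluster:
  extraManifests:
    - https://github.com/fluxcd/flux2/releases/latest/download/install.yaml
EOF
```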
So that means deploying your clusters could be as simple as what we did today: supply patches for whatever specific networking you might need, enable KubeSpan if you need to, and then have your workloads, with these node affinities and all these things, be managed by Flux. I've heard really good things about Flux, and I deliberately saved Flux to be completely new to me so that you could really watch me stumble with it, because I was told that's good for the live stream. So I kind of want to take a shot at seeing how far we can get in the next 10 minutes or so in automating this Redis client and this redis.yaml with Flux. I've used Argo CD before, and I am a fan of it, but I've never had a chance to run Flux, so I'm going to use this opportunity to do that today. All right, let's see how easy these docs are: Get Started. While that's loading, let's see. Okay, so we're just going to do some prerequisite checks. Bootstrap: I'm going to need a repository, so let's create one.

We did have one more question about Cilium and KubeSpan. The question was: any benefits to putting Cilium on top of the provided KubeSpan, or dangerous disadvantages? We did have a bug recently with Cilium and KubeSpan being enabled together, but I believe we fixed that. I probably wouldn't run Cilium's own WireGuard networking on top of this WireGuard network, because we're setting up some routing rules, and that combination is just untested. I'm not necessarily opposed to the idea in theory, but it's untested. Cilium and Talos is a very common combination, maybe partly because both are more advanced technologically, more cutting edge, if you will, and those types of folks tend to want to run those things together. But it is a very, very powerful combination, and I'm excited about what Cilium is doing, for sure. And then the other question was: is it your own Raspberry Pi? It is. I managed to get one before the big shortage.

So I'm going to take this off the screen real quick so I can set up my GitHub token. You can still see my terminal. Here you go. Cool. Let's do flux check --pre. Prerequisite checks passed; let's bootstrap. I have no idea what any of these flags mean, but let's see; I like to see stuff fail. flux-demo. That's actually a really nice experience; good job, Flux. So what I'll do is add things to this now and see if we can push it up and roll it out. I'll delete them first, actually. Okay, so Flux is going to create this. I'm assuming this is a namespace, but let's see. Does that make sense to you? I mean, looking at flux-demo: clusters, my-cluster, and then these look like namespaces, I guess. Let's see. I never read all the way through; I just want to run commands and see stuff work or break. How much time do we have? Five minutes, let's see. Okay, cool. Is this done bootstrapping? What is going on? Waiting for the Kustomization to be reconciled. Okay. I really wish --all showed everything; I think it doesn't. I think I've got a couple of GitHub gists for how to clear out errors and things like that across namespaces and whatnot, but yeah, that could definitely benefit from having something like that. Yeah, I'm wondering: when it says all, I want everything, but I think they actually don't list everything. Cool, a couple more minutes, and we won't be able to get that far, I guess, but let's see.
There was a question: once deployed, is there a UI for performance and management? I'm not sure if that's in reference to Flux, Steve Francis, but let us know on that front. I know that with Flux it's just using the CLI, and I'm not sure about the other stuff; I'd assume that's the case there too. Right, Andrew, as far as the SaaS and everything goes with Sidero Labs? I'm sorry, what was the question again? Once deployed, is there a UI for performance and management? A UI for performance and management: I mean, it depends on what you're talking about. I don't know if Flux has one, but I know that we have some things here. If we go into this cluster and click on a node, for example, or this is just the cluster view. I would recommend deploying something like Prometheus and Grafana to do full-on metrics that you can alert on and whatnot, but this can be useful just to see what's going on during bootstrapping. If we want to look at a node in particular, this is using the Talos Linux APIs at this point, so we're able to get a list of processes and a list of services. If I wanted to look at etcd logs, I could do that here. If I wanted to get some lightweight metrics, and this is all just gRPC streams, I can look at a list of processes, how busy they are over time, and a console. So hopefully that answers the question. And we can dig into pods here as well. We do intend to expand the functionality here, but it is focused on cluster management at the moment.

Well, yeah, I don't think I'll be able to get much further with Flux; it looks like it failed anyway, and I'll have to dig into that, but I was prepared for that. This was something I wanted to test with just a few minutes left. But I think that's it. So just to summarize: we're using Talos Linux and KubeSpan, which leverages WireGuard under the hood, to connect machines that are on disparate networks so that we can have a fully meshed Kubernetes cluster where the control plane runs in one region of the world. I can have support workloads running close to my edge machines, and my edge machines can then be smaller; I need fewer of them, and fewer things can go wrong. These edge machines can leverage the WireGuard network to reach internal Kubernetes services regardless of where they are.

Awesome. Cool. Should I stop sharing? Yeah, let me do that so we don't have an infinite mirror. Infinity is a fun number, though. Awesome, awesome. Well, thank you so much for stopping by Cloud Native Live today, Andrew. It was really great to get a sense of how we can expand the edge; every cloud has a silver lining, and it seems like that's the edge, the more we get to see it. So thank you so much for joining today. Thank you for having me; that was fun. And thank you to all of you viewing today, or watching this a little later, for joining the latest episode of Cloud Native Live. We really enjoyed the interaction and all the questions y'all asked. Thanks for joining us today, and we hope to see you again soon. With that, we wish you well and hope you have a wonderful week. Thank y'all so much; have a good one.