All right, we can get started. Hello everyone, thank you for attending this webinar. I'm Andrew Rynhard, CTO and founder of Sidero Labs. Let's hop right into it. Today we're going to talk about Sidero Metal, which is our way of doing Kubernetes on bare metal. We're going to talk about how you can manage Kubernetes on bare metal, some of the solutions we've cooked up, and some of the common issues you'll run into. First of all, server power management is something we commonly see users being apprehensive about when they're moving from VMs or the cloud onto bare metal, or seriously considering bare metal. It's a very human process: a human has to figure out when the machine needs to be turned on or off, and if it gets turned off accidentally, you need to set up alerting and all that good stuff to make sure you're notified. The next issue we commonly see is server lifecycle management: things like installing and uninstalling the operating system, and how you do that in an automated fashion. Then there's operating system management itself. How do I configure my operating system for Kubernetes? How do I do user management, patch management, upgrades, hardening, and so on? And then we have Kubernetes management in and of itself, which is a very large issue; people are still struggling with that alone, let alone everything else. We also commonly see issues in the networking space: node networking (how do I configure my node to be reachable from the outside world?), exposing services from Kubernetes to the outside world, and load balancing those services. And finally, storage, which is a common need when you're running stateful applications. So let's start off with a few definitions before we dive into things.
First of all, Sidero comes with the notion of a Server, aptly named. It's basically a custom resource definition in Kubernetes that represents your physical machines. Eventually you'll be able to do something like kubectl get servers and see all the servers registered with the system. On top of that, we have the notion of a server class, which is essentially a filter. You have all these disparate servers; how do you group them and treat them as a unit? For example, in AWS you have t3.smalls, t3.larges, m5.larges, and so on. Server classes are our way to give you that same experience, but on-prem. Not only that, they also give you the ability to configure a set of machines in the exact same way. For instance, if we know a particular make and model of server has, say, a regular SATA drive and an NVMe drive, we can say: for everything in this server class, install the operating system to the SATA drive. Next, there's the notion of a metal cluster, which is really just a way to declaratively define what your cluster should look like on-prem, specifically in the bare metal scenario: mapping server classes to your cluster definition so that you can handle all of this declaratively. And we also have Talos Linux, our reimagined Linux distribution, written from the ground up and purpose-built for running Kubernetes. It's an API-driven operating system: we've removed Bash, we've removed SSH, we've removed all traditional access to the system. It's roughly a 50 megabyte squashfs, so completely read-only, and it runs entirely out of RAM. So you have a very simple, clean, and efficient way to run an operating system. Not only that, it also manages Kubernetes for you. I should mention that Sidero Metal integrates very tightly with Talos Linux.
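As a concrete sketch, a ServerClass resource might look roughly like the following; exact field names can vary by Sidero Metal version, and the class name and qualifier values here are made up for illustration:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: g1-small-x86
spec:
  # Match servers by hardware attributes reported by the agent
  qualifiers:
    systemInformation:
      - manufacturer: Dell Inc.
        productName: PowerEdge R630
  # Apply the same configuration to every matching server,
  # e.g. always install the OS to the SATA drive
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/sda
```

Any server whose agent-reported attributes match the qualifiers is pulled into the class and receives the class's config patches.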
So let's talk about how you would get up and running with Sidero Metal in your data center. Imagine we have some servers that have been racked, networking configured, and you walk in with a laptop. The process is going to look like this. First, we want to create a Kubernetes cluster, because Sidero Metal requires one; it runs as an application on top of Kubernetes. Using our talosctl CLI, we can provision a Kubernetes cluster right here on our laptop using Docker. It's a very simple and quick way to get Kubernetes running on this network. Next, we use clusterctl, the Cluster API CLI (Cluster API being the technology Sidero Metal is built on top of), to install Sidero Metal onto that Kubernetes cluster. It's a simple one-line command in most cases. At this point, we have Sidero Metal installed. Sidero Metal comes with two Kubernetes services, one UDP and one TCP. The UDP service is for TFTP, and the TCP service actually serves multiple functions within the Sidero world: it offers up PXE and iPXE, it offers up the Sidero Metal API, and it exposes a way for machines to get their configuration files when they become part of a cluster. So at this point, we need to expose these services, which is just an exercise in how to expose a Kubernetes service. Once we've done that, and we've told the machines via DHCP to boot off of this exposed service, we turn on a machine. It PXE boots, and Sidero Metal says: I have never seen this machine before, so let's send back an agent. The agent is then responsible for capturing data off of that machine and submitting it back to Sidero Metal, which saves it in the form of a custom resource called a Server.
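A minimal sketch of that laptop bootstrap, assuming recent talosctl and clusterctl releases (flags and provider names may differ by version):

```sh
# 1. Provision a throwaway Kubernetes cluster in Docker on the laptop
talosctl cluster create --provisioner docker

# 2. Install Sidero Metal (a Cluster API infrastructure provider) onto it,
#    along with the Talos bootstrap and control plane providers
clusterctl init --bootstrap talos --control-plane talos --infrastructure sidero
```

From there, the remaining work is exposing Sidero's TFTP/HTTP services on the data center network and pointing DHCP at them.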
So now, within our laptop-based Kubernetes system, we can do something like kubectl get servers and see all the physical attributes associated with those servers. For example, imagine we have a random set of Dell R630s and some HP ProLiants. The next thing we want to do is submit a server class. As I mentioned earlier, the server class allows us to filter down all these disparate systems, using the physical data the agent pulled off each server, and group them into classes. In this case, we're going to filter them into what we're calling a G1 small x86 and a G1 medium x86. I should mention that the naming, and what qualifies for each class, is completely up to you; you can base it on labels attached to the server or on the physical attributes in the spec of the server definition. So now that we have our server classes and our Kubernetes cluster ready to go, we can define our cluster. As I mentioned earlier, we're going to define a metal cluster, push it to our local laptop cluster, and say: I want a cluster with one control plane node and one worker node. In practice, you typically want this to be HA, so three or more control plane nodes and as many workers as you think you might need. But in these diagrams, I'm going to say: give me a Kubernetes cluster made up of one G1 small x86 and one G1 medium x86. What Sidero does is choose a server at random from each of these server classes and power them on. It knows it needs to install Talos, so the whole PXE process gets initiated and Talos gets installed. Once Talos is up, it knows how to grab its configuration file from this management cluster. And now we have a Kubernetes cluster running on bare metal.
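Sketching that cluster-definition step: the Sidero docs drive it through clusterctl templates and environment variables, roughly like this (the endpoint address, cluster name, and class names are placeholders, and the variable names may differ by release):

```sh
# Server classes to draw control plane and worker nodes from
export CONTROL_PLANE_SERVERCLASS=g1-small-x86
export WORKER_SERVERCLASS=g1-medium-x86
# Address the new cluster's API server will be reachable on
export CONTROL_PLANE_ENDPOINT=192.168.1.100

# Render the cluster manifests and submit them to the management cluster
clusterctl generate cluster workload-1 --infrastructure sidero > workload-1.yaml
kubectl apply -f workload-1.yaml
```

Applying the rendered manifests is what triggers Sidero to pick servers from the classes, power them on, and install Talos.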
At this point, we can't have somebody living in the data center with a laptop, so we need to take these management tools and move them onto the cluster we just created. Essentially, we're pivoting from one cluster to another: we push all of the Sidero Metal objects, the cluster objects, and the Sidero Metal installation itself onto this new management cluster. Now we can remove the laptop from the equation, and in our data center we have a management cluster that acts as a command and control center for creating as many clusters as we want within this bare metal data center going forward. So now that we have our management cluster, I'm going to run through a quick demo of what it looks like to create a workload cluster. Let's take a look at our management cluster, now running in the data center, and list the servers that have been registered with Sidero Metal. Notice that this server is unallocated and not currently part of a cluster. Let's go ahead and create the cluster. Now, if we take a look at that server once more, we'll see that it is allocated, meaning it has been chosen to be part of the cluster. We can verify this by taking a look at the server binding; notice that the server was chosen from a server class. The Sidero infrastructure provider represents the physical server to Cluster API in the form of a metal machine, which is then mapped to the standard Cluster API machine resource. At this point, our cluster should be up and ready to go. We can see from the cluster status that our control plane has been initialized and is ready, and the infrastructure is also ready. This means we have successfully created our cluster on bare metal. Okay, so just to recap: what we've done so far is walk into a data center and bootstrap our entire management plane from a laptop.
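The pivot itself is standard Cluster API tooling; a hedged sketch, where the secret and cluster names are illustrative:

```sh
# Grab the kubeconfig for the freshly created bare-metal cluster
kubectl get secret workload-1-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > workload-1.kubeconfig

# Install Sidero Metal on that cluster, then move all Cluster API
# objects (servers, server classes, clusters) off the laptop onto it
clusterctl init --kubeconfig workload-1.kubeconfig \
  --bootstrap talos --control-plane talos --infrastructure sidero
clusterctl move --to-kubeconfig workload-1.kubeconfig
```

After the move completes, the laptop cluster can be torn down and the bare-metal cluster takes over as the management cluster.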
We've installed Sidero Metal, exposed its PXE service, registered servers, and then classified those servers. And in the demo, I showed how you can use those server classes and servers to create a workload cluster, a workload cluster being a cluster we intend to run our actual services on; not the management cluster, but a cluster that is managed by our management cluster. As you can see from this whole process, of the common issues I mentioned at the very beginning, things like server power management and server lifecycle management are entirely handled by Sidero Metal. Sidero Metal knows when a machine should be turned on and when it should be turned off. And if someone accidentally turns it off, it knows the desired state is for it to be on, so it will go ahead and turn it back on. Lifecycle management is also entirely automated and handled by Sidero Metal: it knows when it needs to PXE boot and install Talos, when a system needs to be registered with it, and when the operating system needs to be removed. In that case, the agent is sent back to wipe the server and prepare the disks so that the server becomes clean, unallocated, and available in the pool again. The fact that we're using Talos Linux handles all of the OS management we're going to need. Again, our goal with Talos is for you to forget about the operating system; our job is to deliver Kubernetes. We have a secure, hardened, API-driven operating system that aims to take the whole burden of operating system management out from underneath you and handle it itself. Not only that, but Talos Linux also handles the management of Kubernetes. It's going to install Kubernetes according to best practices.
And it's going to roll out the control plane changes you submit to Talos Linux. When you want to perform an upgrade, it knows whether or not etcd will survive that upgrade, so we have safeguards around protecting the Kubernetes data. For that whole Kubernetes management story, what we like to say is that we give you a cloud-like experience while also giving you flexibility. Within Talos, you have the ability to change and tweak certain parameters within the control plane, but not so many that you can shoot yourself in the foot. So as you can see, a huge portion of the overhead we face on bare metal is dramatically reduced simply by using Sidero Metal and Talos Linux. The last two things I listed in the common issues were networking and storage, and this is where the Sidero Labs team comes into play. We have a wealth of experience in getting Kubernetes running on bare metal: we know the patterns, we know what tools to use, and we have reference architectures for different scenarios. On my host, what should I do? Should I run routing on the host? How do I do BGP? We also have a feature within Talos that lets you load balance the Kubernetes control plane using a VIP that is handled and managed by Talos itself, so you don't need any extra infrastructure outside of that, external load balancers for example. Now, storage: Rook and Mayastor work great with Talos, and we commonly recommend them to our customers; Rook is the more battle-hardened one, and Mayastor is the one that aligns better with our philosophy of making the operating system less of a thing. And then finally, load balancing: a common thing we suggest to our users and customers is MetalLB, which allows you to expose Kubernetes services using BGP or, alternatively, ARP (layer 2 mode).
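That shared-VIP feature lives in the Talos machine configuration; a sketch, where the interface name and IP address are assumptions for illustration:

```yaml
machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          # Virtual IP shared by the control plane nodes; whichever node
          # currently holds it serves the Kubernetes API, so no external
          # load balancer is needed for the control plane
          ip: 192.168.1.50
```

The same stanza goes into the machine config of every control plane node, and Talos arbitrates which node holds the VIP at any given time.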
So we know when and how to run these various tools so that you can get Kubernetes running on bare metal. At the end of the day, we're capable of handling all of the common issues I talked about in the beginning. If you want to learn more, you can reach us at siderolabs.com, and if you want to check out the project-specific sites for Sidero Metal and Talos Linux, you can go to sidero.dev or talos.dev. And now I'll pause for some questions. I see we have a question already. The first question is: can you manage multiple clusters this way? The short answer is yes, you can absolutely do more than one cluster this way. In fact, that's why we built this: we wanted to make managing bare metal clusters easy and reproducible, so you can essentially print these clusters out within your data center. I should also add that since Sidero Metal is built on top of Cluster API, not only can you create multiple clusters this way, but you can use this same management cluster to create Kubernetes clusters within AWS, GCP, Azure, DigitalOcean, Equinix Metal, or wherever there is a supported Cluster API infrastructure provider. Another question: I saw your release of KubeSpan; does Sidero Metal work with KubeSpan? Currently, not yet. Sidero Metal is a bootstrapping system, which we are also working on making more dynamic across the lifecycle of the cluster, so today you would get the cluster up and running and then enable KubeSpan using our APIs. Since Talos is API-driven, you can script this all out in a very dependable way: simply enable it with an API call. You can set this up manually today, but we are working on integrating it with Sidero Metal in a much more seamless and automated way. Okay, next question: what happens when you remove a machine from a cluster? Ah, good question. I don't think I went over that.
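For reference, enabling KubeSpan manually is a small machine-config change on current Talos versions (KubeSpan also relies on the Talos discovery service being enabled, which it is by default on recent releases):

```yaml
machine:
  network:
    kubespan:
      enabled: true
```

You would apply this to each node with talosctl, for example as a machine-config patch, per the Talos documentation.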
So what happens is that Sidero Metal knows the machine has been deleted. If you remember from the demo, the machine is associated with the metal machine, which is then associated with the server; you saw that in the server binding. We know the machine has been deleted, so we know the server has to be cleaned up. And since Sidero Metal does the power management of the physical machine, we basically tell it to PXE boot on the next boot, power cycle it, and send back our agent. The server boots the agent, and the agent knows it needs to wipe all the disks and get the node prepped so that it can become available again for the next cluster. It's as simple as that, really. I'll give just a moment for maybe another question to come through. Okay, great. Well, thank you everyone, I appreciate everyone showing up. That was a lot of fun. I hope you learned a lot about how you can do bare metal Kubernetes using Sidero Metal. Take care.