Okay, we are finally going to get started. Apologies to everyone for the delay. We were having some technical difficulties, but everything seems to be in working order now, so we are going to get started. I'd like to thank everyone for joining us today and for staying on. My name is Jerry Fallon, and welcome to today's CNCF webinar: Deploying Kubernetes to Bare Metal Using Cluster API. I would like to introduce our presenter today, Sean McCord, Principal Senior Software Engineer at Talos Systems. Just a few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat that would be in violation of the code of conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I'll hand it over to Sean for today's presentation.

Right, thanks Jerry. Have you ever wanted to manage a fleet of bare metal boxes as easily as you manage Kubernetes workloads? Do you wish JBOS meant "just a bunch of servers"? The idea of declarative hardware compute management is the driving force behind Sidero, the bare metal lifecycle manager and CAPI infrastructure provider from Talos Systems. With Cluster API, all of your compute resources are represented as familiar Kubernetes YAML manifests. As with any other Kubernetes resource, you can then group them into classes, apply labels to them, modify their settings, and see their status. All of the hard work of dealing with the machines themselves and setting up highly available Kubernetes clusters is handled for you. Before we get too far, however, I should say hello. As Jerry said, I am Sean McCord. I'm a software engineer at Talos Systems. Before joining Talos, I was a refugee from CoreOS looking for a Kubernetes-focused immutable environment on which to build dynamic clusters for my real-world customers. After working with Talos from the outside for over a year, I joined the company full time last month. I have been living and breathing all things Kubernetes for about five years now, so I have formed a few thoughts along the way. I'll be sure to leave plenty of room for questions at the end of this talk.

So let's start with the basics. What is Sidero? How is it constructed? What does it do? How does it fit into the puzzle of bare metal Kubernetes management? Sidero creates Kubernetes clusters from a common pool of compute resources using Cluster API. Basically, it lets you have a dynamic set of generic servers which can be plugged into some racks and which will then be automatically assigned and used in any of a number of Kubernetes clusters within the domain of Sidero. So if the generic servers are the iron, Sidero molds that iron into usable Kubernetes clusters with whatever raw resources are available at the given time. It will continue to manage those server resources as time goes on, as servers are added or removed, as load comes and goes, and as whole clusters are created or destroyed. Kubernetes manages workloads for you; Sidero and Cluster API manage Kubernetes clusters for you. Sidero itself is made from all the good things.
It is fully open source, written in Go, and licensed under the Mozilla Public License. It runs on Kubernetes itself, and it's an infrastructure provider for Cluster API (CAPI), which allows higher-order tooling to talk to it just like any other CAPI provider. It is built on Talos, the Kubernetes OS, and it is purely declarative, using Kubernetes manifests for all user-side interactions. Thus, it is also easily version controlled.

At present, there are just four main pieces which define the operation of Sidero. Environments define boot environments for a server: to what network should the machine be booted? What PXE image should it use? What URL will provide its configuration? Basically, this defines a reusable environment which will tell any server how to start. Servers represent the machines themselves. They are automatically created when a machine first boots; its hardware details are discovered and recorded, and it can then be categorized and assigned by Sidero to whatever role it deems appropriate. Server classes are groupings of servers. These are akin to storage classes in Kubernetes, in that they can be used as names for pools of resources whose members' discrete identities are not important. Servers may be members of any number of server classes, and clusters may be defined to use specific server classes instead of enumerating discrete machines to fill out their numbers. Finally, the Talos config is generated by the metadata server to point to the assembled server configuration, folding in any patch data from the servers and server classes to configure the node with the cluster-oriented config data and credentials. These four sets of things then allow Sidero and Cluster API to create and manage clusters for you.
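To make those pieces a little more concrete, here is a rough sketch of an Environment and a ServerClass as they might be applied to the management cluster. This is only an illustration of the general shape of these resources; the API group, field names, and URLs here are assumptions, so check the Sidero documentation for the exact schema of your release.

    kubectl apply -f - <<'EOF'
    apiVersion: metal.sidero.dev/v1alpha1
    kind: Environment
    metadata:
      name: default
    spec:
      kernel:
        url: http://boot.example.internal/vmlinuz        # PXE kernel image (placeholder URL)
        args:
          - console=tty0
          - talos.platform=metal
      initrd:
        url: http://boot.example.internal/initramfs.xz   # matching initrd (placeholder URL)
    ---
    apiVersion: metal.sidero.dev/v1alpha1
    kind: ServerClass
    metadata:
      name: small-servers
    spec:
      qualifiers:        # servers whose discovered hardware matches these join the class
        cpu:
          - manufacturer: Intel(R) Corporation
    EOF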
We've mentioned Talos a few times now, but we haven't really defined what it is, other than a Kubernetes OS. Allow me to briefly explain it, since it is the engine on which our Kubernetes clusters themselves run. Talos is a modern Linux OS. I mean this rather directly: it is not some new OS, it's a Linux OS. There are no new components in Talos. It is a secure, immutable, and minimal OS. This may sound like buzzword bingo, but because of how Talos is constructed, we can actually make it really secure. We have enabled all sorts of features related to security, including in the kernel itself, which could not be enabled on a more general-purpose OS. Our entire OS is run from a read-only SquashFS image, meaning there really is no way to modify it from within user space. And it is really quite minimal: it literally comes with only the tools necessary to run Kubernetes. There are no extraneous daemons or binaries. Talos is an entire OS built from scratch in Go on the Linux kernel. No other services run on it; no other services are even installed onto it — not even so much as an ls or a cat. There is no SSH. There is no shell. Because there is no shell, there is no point in SSH, and because there is no SSH, there is no point in a shell. Everything runs inside a container other than the container runtime itself. The entire point of the OS is to quickly start and run Kubernetes. So now that we've taken everything away, here is what we give you instead: APIs. Everything in Talos is controlled by API. This makes it trivial to integrate into machine-controlled environments like Sidero and Cluster API, and it makes complicated config managers a thing of the past. There may not be a shell on the servers, but with the CLI tool and the API, you can script or code any kind of automation from anywhere else, in a way far more deterministic and certain than a haphazard set of independent shell utilities discreetly and hopefully installed on each server. APIs are the key to Talos, and they make Talos the ideal Kubernetes OS for Sidero.

Sidero is in active development. We have a lot planned for it in the future, but it already comes with a great set of features which make it really useful for controlling your bare metal Kubernetes clusters. Sidero has a bunch of automation features built in to manage servers. It bundles in an iPXE server to network boot all your servers. It has a TFTP server to shim all your non-iPXE machines up to the iPXE server. It supports metadata distribution through an HTTP-based metadata server. It includes an IPMI client to control power and booting from an out-of-band management network. Sidero itself acts as a Kubernetes controller to continuously manage your resources, and it is of course built to respond to changes over time. These dynamics include scale-up and scale-down of clusters, along with their creation and deletion. It is expected that servers will be added, removed, and reused, so Sidero includes a number of lifecycle features, such as making sure disks are wiped, collecting all of the hardware specifications for each machine, and seamlessly handling machine adds and removals to and from the network. And because Sidero is a CAPI provider, it will fit in well with hybrid cloud setups and with other CAPI providers.

So now that we've gone through what Sidero is and does, let's see if we can make it come to life with a live demo. The moment of truth: do we have visibility? First of all, we are starting out with a bootstrap cluster running our Sidero components. We have, first of all, to create a few servers; in this case we'll run them as VMs rather than having to plug them in directly. Now we have a number of machines running in the background — QEMU machines — and these machines will be picked up by Sidero and automatically added to the inventory list. As they come online, they're being cleaned. As we can see, three out of the four are clean, and the last one should soon be clean as well. Now that we have our servers, we have a minimal cluster definition here to get us started with a basic Kubernetes cluster. This was automatically generated by our provider, using just the defaults. We've created our cluster, and all four of the servers are now clean. The first thing we need to do is to get the Talos config — the client configuration — to allow us to access the machine. You'll have to forgive me if my typos are getting distracting. At this point, we're just configuring our talosctl client — our local client — to be able to talk to our new machine. Now that we have done that, we should be able to get our kubeconfig, and we can use this kubeconfig to get our list of nodes. The cluster still hasn't quite come up yet, so we'll just see where it is along in the process. So this is our allocated node. I'm going to get the logs of this one straight from QEMU; it looks like it has been enabled and it just came up. Getting nodes now, we see one master control plane node. It's not quite ready, but this should be quick as soon as the CNI starts. There we go. So we now have our cluster. It's pretty useless at this point, consisting of only a single node — a single control plane node.
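For reference, the single-node bring-up we just walked through corresponds roughly to a sequence of commands like this. The file, cluster, and address values are placeholders for this demo, and the exact resource names depend on the Cluster API and Sidero versions in use, so treat it as a sketch rather than a recipe.

    # apply the generated cluster manifests to the management (bootstrap) cluster
    kubectl apply -f cluster.yaml
    # watch servers get allocated and machines come up
    kubectl get servers,machines,clusters
    # pull the generated talosconfig (client configuration) out of the management cluster
    kubectl get talosconfig -o yaml                     # the client config is in .status.talosConfig
    # point talosctl at the new control plane node and fetch a kubeconfig
    talosctl --talosconfig ./talosconfig config endpoint 192.168.1.10
    talosctl --talosconfig ./talosconfig kubeconfig .
    kubectl --kubeconfig ./kubeconfig get nodes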
So let's scale this up. We're going to make this into a fully HA control plane by increasing the number of replicas from one to three. As we wait here, depending on where the other machines are in their boot cycles, we should see them come up and be allocated shortly. There is a short-term optimization we will be making to get this a little faster. But there — we have one allocated, and we should get one more. There we have the second. So if we watch the logs of this first one, we can see that it is downloading the installer and installing Talos to the disk. It's now rebooting, having installed Talos, then starting the container runtime and the Talos services, joining the etcd cluster... it has started the kubelet, the kubelet is up, and it has uncordoned the node. So if we get our node list again, we now have three control plane nodes, two of which are ready. The other one shouldn't be long. There we have it. And just to show that our control plane is in fact HA: we have three API servers, three controller managers and schedulers, and we now have an HA control plane.

We still don't have any workers, so let's go ahead and scale up our worker machines. We'll just create one because of my poor laptop's limitations. This last machine should shortly be allocated by Sidero — again, automatically, out of the pool that's available, which now consists of only one of the four machines. And it has now been allocated. We'll just watch this one becoming a worker node. It should be a little faster: installing, rebooting, starting the kubelet... it was hanging a little bit, but all is well, and the node is uncordoned. We've added the worker node, and there — the CNI is up. We now have a complete cluster with three master nodes in HA configuration and one worker node. If I had a bigger laptop, or real machines, we would then be able to expand the number of workers to fill out whatever we needed. We can also just as easily create new clusters entirely separately, and Sidero will manage any number of those.
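For reference, the scale operations in that demo amount to edits like the following. The resource names are placeholders from this walkthrough, and whether the control plane object supports kubectl scale directly depends on the provider version, so this is a hedged sketch rather than the exact commands used.

    # scale the control plane from one to three replicas by patching its spec
    kubectl patch taloscontrolplane demo-cluster-cp --type merge -p '{"spec":{"replicas":3}}'
    # scale the worker pool; MachineDeployments support the scale subresource
    kubectl scale machinedeployment demo-cluster-workers --replicas=1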
All right, let's get back to the last bit of the presentation. As I mentioned, Sidero is in active development. We are looking to add quite a number of features, both to it and around it. Among these is auto-scaling. As you saw, I manually scaled up the number of nodes in each class, but we want the Sidero controller to be able to automatically scale clusters up and down based on a number of criteria. First among these criteria are time and load. In the future, we also want to be able to coordinate competition for limited resources based on priorities between the clusters. Talos is API driven, and we will be building APIs around Sidero as well to make sure that it can be controlled with the same freedom as Talos itself. This would provide high-level APIs with wide-reaching effects; ideally, one could create a complete Kubernetes cluster with a single API call. We put a lot of work into getting key management right with Talos, and we want to make it even better with Sidero by coordinating PKI chains from there. We would be able to share, branch, or separate CAs between clusters to match any number of security layouts. Edge computing and Kubernetes may sound odd at first, but it is an incredibly hot topic these days. The idea of running small or even single-node clusters at the edge is compelling to quite a number of industries, and Sidero and Talos are well placed to fill this role. So too is the idea of clusters with distributed nodes, where one or a small number of workers are located at a number of sites, communicating with master nodes in the cloud or at a data center. With Sidero, we bring the power of managed Kubernetes to the world of bare metal machines in a simple but powerful way: automated and fault tolerant, dynamic and adaptive. Soon, JBOS really will mean just a bunch of servers.

Talos and Sidero are open source, community-oriented projects. As a community member, I felt how earnestly the team is invested in the community. As a team member, I know how much the community means and how much it drives the development of these projects. So join in. We have a Slack on which we discuss Talos, Sidero, Cluster API, and related things. All of our source code is, of course, on GitHub, and PRs are always welcome. You can find the documentation for Sidero at sidero.dev and for Talos at talos.dev. Thanks for your time, and I'm happy to open the floor now for any questions.

All right, thank you again for a wonderful presentation. We have plenty of time for questions, so everyone please feel free to drop your questions into the Q&A box and we'll get to as many as we can. We have a few here. First one up: is there any dependency on the bootstrap node after the installation is complete? No, not for the clusters themselves. However, as with many Kubernetes resources, you do want to maintain it over time so that all of the dynamic features are available. In the same way that you can briefly lose your control plane in Kubernetes, you don't have to maintain Sidero continuously; but if you want any of the dynamic features of Sidero, you do want to keep Sidero running to react to those changes.

Okay, next up: can one of the masters be used as a bootstrap once it's up, for maintenance and upgrades? I'm thinking this is not possible due to the absence of SSH. So I do not see what SSH has to do with it. As for using masters for the bootstrap: the bootstrap cluster really can be any Kubernetes cluster. For the demonstration, I just used a simple example. There's no reason the bootstrap cluster couldn't be single-node. There's no reason it couldn't be cloud-hosted — well, cloud-hosted would probably be difficult because of the network — but it can certainly be an HA control plane cluster; it can have workers; it can be any number of things. In fact, what I use myself is a third, legacy cluster running CoreOS, and it runs as my bootstrap cluster in my own network. So in that way, yes, you can absolutely run maintenance on the bootstrap cluster. But again, it's perfectly fine to lose your bootstrap cluster briefly for any kind of maintenance. I'm not sure if that's where your question was directed, but hopefully that answers it.

You may have already mentioned this, but what is the footprint of the Talos system? So yes, I did not mention the actual numbers. It changes over time a little bit, but we've made some optimizations, and I believe it comes in now at well under 100 megabytes.

Okay, next question: which CNI can I use? Any CNI is good with this. There's nothing special about the CNI; nothing in our system requires any particular one. Because you're on bare metal, of course, you won't be able to use cloud-based CNIs, but there are plenty of others out there that are well adapted, and it really just depends on how you want them to work.

Okay, next question: how are updates to components like etcd applied over time? Sure.
So these — etcd and the control plane components of your Kubernetes distribution — are controlled by Talos itself. All of those are currently managed directly with talosctl and the Talos API. However, that is one of the things that we want to build into Sidero over time, so that we can maintain them through the same declarative resources as we currently do for everything else in the cluster. So presently they're maintained by API directly against Talos; in the future, we'll be building that same system into the declarative manifests for Sidero.

Okay, does Talos support Cilium? Yes, absolutely. In fact, most of us use Cilium as our default CNI.

Is it production ready? Well, I should think so. Before I joined Talos, I managed a number of production Talos clusters. I did not use Sidero personally before I joined, but Talos Systems themselves use Sidero for all of their dogfooded clusters, and I am rapidly converting my own clusters over to Sidero. So it is stable; it simply doesn't have most of the features that we want for the final product. So can we use it in production? Yes, though it's probably a little early for most people to use it in production.

Can you give a little more insight on the role and architecture of the bootstrap cluster? Sure. As I said before, there's nothing special about the bootstrap cluster. The main thing it needs is at least Layer 3 connectivity; Layer 2 is better if you want to use the DHCP components, but Layer 3 connectivity is sufficient between your bootstrap cluster and the machines from which the clusters you want to create will be built. That's really the only requirement. It is otherwise just a plain old Kubernetes cluster, however that is built. It can be built by Talos directly, manually, or it can be hosted. There are any number of ways you can go about it, but there's nothing at all special about the bootstrap cluster.

Has MetalLB been tested on Talos, or Traefik? Absolutely. I use it myself. I use it everywhere.

How would you do a containers-on-bare-metal model with Talos that's verified and certified for a variety of HW ingredients? Oh, I didn't follow that at all. Sorry, can you repeat that? Sure, let me repeat it: how would you do a containers-on-bare-metal model with Talos, verified and certified for a variety of HW ingredients? Okay, let me read that. All right, I still don't really follow that question. Are we talking about arbitrary containers outside of Kubernetes? Are we talking about validation of containers? Sorry, go ahead, Tim. I think I understand the question. I think the question is how do you certify and verify Talos on bare metal for a lot of different compute environments, a lot of different brands of hardware, and so on. And the good news is that Talos is a Linux-based operating system, so we rely on the work that the Linux kernel team has done to handle bare metal installations and runtime. So anything that can run Linux can run Talos. There might be details in there in terms of hardware support, and if you need an obscure driver that's not in our default build, you might need to build your own version of Talos, but we have some tools and some advice for doing that.

Can a machine run a master and workers at the same time? With virtualization, yes. In general, you wouldn't. In general, what we term a machine runs a single role, but as with Kubernetes generally, you can run workloads on control plane nodes.
It's just not generally recommended to do so, but we have a number of places where we're running, for instance, single-node clusters where you absolutely need to do exactly that: run workloads on the master, because the master is the only node that exists.

Okay. How is this different from, or inspired by, CoreOS and Tectonic, or their successors such as Flatcar Linux? So that's a curious question. The original implementation of Talos had very little to do with CoreOS. Andrew didn't really build it on CoreOS; he didn't really have much reference to CoreOS itself. From my perspective, it is absolutely a next step from CoreOS — that is, it's a spiritual successor — even if technologically the two don't really share much. I do think a lot of the features and some of the directions we want to go absolutely follow the same processes that CoreOS did, but generally speaking, they don't share a direct history, curiously enough.

Do you have any experience using local storage platforms like Rook, Ceph, or TopoLVM? I don't know all of those. As far as storage, I think most of us have used Ceph for our storage systems. I don't know the ones you just referenced, sorry. Sean, it's Rook and TopoLVM. Oh, Rook, yes. Yeah, exactly. So Rook is what I believe most of us do use on a regular basis.

My bespoke bare metal cluster is using Vault's secret encryption, and as such requires customization of etcd on the masters running etcd. Additionally, I use TopoLVM to provide local storage, which requires a custom scheduler plugin. Could I customize my installation to this degree? Not as yet, but we have discussed this many times, and it is definitely something we are going to build out. A lot of people use Vault integration, and so that's something we definitely want to work through. As it stands right now, we have not had direct requests for this; we simply know that it exists. So by all means, if you have a feature need like that, we absolutely want to build it out. We just don't have users currently demanding it, so we don't have an enumeration of what those requirements might be.

Okay. Does it work with any kind of bare metal server? Any kind is probably a little broad, but we do try to work with all of the most common machines. As Tim mentioned, drivers are usually the main thing, and we have definitely run into cases where a driver required for a particular machine was not enabled in the kernel. That is usually a very quick turnaround: we just enable the driver, make a new release, and you can go from there. But in general, there's nothing special about the servers required to run Talos or Sidero.

Do we have any other questions at all? We still have plenty of time for questions, so please feel free to drop them into the Q&A box. What about Cluster API? Well, Sidero is built on it. It interfaces everything through Cluster API primitives, and of course Sidero is provided with Cluster API as a default available infrastructure provider. So you can actually use clusterctl to perform operations on the clusters, and Sidero will just handle those requests exactly like any other Cluster API provider would.

If I need to install some custom software on a host machine, can I do it? Yes. In all cases, we prefer that people build the custom software into Kubernetes itself, even if it is machine-oriented and machine-localized, either by using DaemonSets or by using Deployments with specific selectors.
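As a hedged example of that pattern, machine-local software can be shipped as a DaemonSet pinned to particular nodes with a label selector. The image and node label below are placeholders, not anything Talos-specific.

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-agent
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-agent
      template:
        metadata:
          labels:
            app: node-agent
        spec:
          nodeSelector:
            example.com/needs-agent: "true"   # hypothetical label on the machines that need it
          containers:
            - name: agent
              image: registry.example.com/node-agent:1.0   # placeholder image
    EOF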
That said, we do have a plugin system that we are building out to be able to, with some restrictions, run arbitrary containers at boot outside of Kubernetes. But up to this point, we haven't had any need to do that. We have the facility; it's not built out and not exposed as yet. But should the need arise, we do have that ability — we just have to build the infrastructure and the APIs to support it.

Can you perform node-by-node upgrades, i.e. seamlessly upgrading a cluster from Kubernetes 1.18 to 1.19? Yes. Both in Talos and soon with Sidero, we have managed node-by-node upgrades for both the kubelet and the control plane components.

Okay. Can I move my apps running on GKE to my bare metal servers with Talos and Sidero? Yes, with the caveats that you always have with bare metal systems: you don't have the cloud provider's storage systems, you don't have the cloud provider's CNI, and you don't have the cloud provider's general load-balancer-type features. Now, all of these have analogs that you can install on bare metal, but they're not done for you. So you do have to plug in your own implementations, simply because you're on a bare metal cluster and don't have a cloud provider to supply those. But as was mentioned in earlier questions, MetalLB for load balancing is supported. We have Rook for storage, as well as a number of other storage backends, which I don't think any of us use but which should be available. These are surmountable problems, but a simple workload translation without implementing those will likely be insufficient. You can absolutely get there with minimal effort, though.

How can I debug host-related problems without SSH? Do I need to install a DaemonSet with a debug pod? You can install a debug pod if necessary, but we have gotten it to where that is very, very rarely necessary. We have a number of APIs with Talos; as we mentioned, everything is designed to be controlled by API. We can get logs from all of the various components through the Talos API and the talosctl tool. With Sidero, we can also pull from IPMI: if you have bare metal machines with IPMI support, you can get the console logs from the machines themselves. And you have any number of control APIs from Talos, which cover things like getting statistics, getting process lists, reading any file on the system, and listing directories. Most of the things you would use to debug are available via API.
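To give a feel for what that API-driven approach looks like in practice, here is a rough sketch of the kinds of talosctl calls involved. The node address and installer image are placeholders, and the exact subcommands and flags can differ between releases, so check talosctl --help for your version.

    talosctl -n 10.0.0.5 services               # state of the Talos-managed services
    talosctl -n 10.0.0.5 processes              # process list on the node
    talosctl -n 10.0.0.5 dmesg                  # kernel log
    talosctl -n 10.0.0.5 logs kubelet           # logs for a single service
    talosctl -n 10.0.0.5 ls /var/log            # directory listing over the API
    talosctl -n 10.0.0.5 read /proc/cmdline     # read a file off the node
    # node-by-node upgrade to a new image; the installer reference is a placeholder
    talosctl -n 10.0.0.5 upgrade --image docker.io/autonomy/installer:v0.6.0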
Okay, next question. I should mention — oh, sorry, go ahead. No, you go ahead. So I should mention, in regard to that, that one of the main features of Talos is its relative simplicity as far as what is on the machine. The things that can go wrong are far more limited than on a general-purpose machine. Also, critically, everything from Talos is installed and operated on an image basis, so we can roll back an image. We have A/B partitions much as CoreOS did, but importantly, ours really are read-only, in a form that is image-based like containers rather than filesystem-based as CoreOS did it. So CoreOS and Flatcar use A/B partitions, but those are full filesystems, which are ideally extracted from images; we instead use the images directly as SquashFS filesystems. So these images are discrete and atomic in and of themselves, and if we have a problem, rolling back to the previous image is both simple and guaranteed. Same thing for upgrades: when we do an upgrade with A/B, we can roll back and everything is clean. The number of things that can go wrong is much, much more limited than with any kind of general-purpose OS, even a specialized container OS like Flatcar or CoreOS. And finally, we have designed Talos so that nothing is required on the disk. We can wipe the node and reinstall it, and everything will be exactly the same. In the worst case, you wipe the node and it will come back. Even a control plane node will work just fine with this, especially with Sidero handling the deployments. If something goes wrong with a node that you can't fix with the API, destroy the node, bring it up again, and all will be well.

Can you go over more of these bare metal alternatives to cloud provider services? Sure, let's cover the usual ones. The load balancer is the most common issue. The one that works with almost anything is MetalLB, which was mentioned before. It works in Layer 2 and Layer 3 modes; it has BGP support, or you can just have simple pass-through support — again, Layer 3 versus Layer 2. What it does is piggyback on kube-proxy, or if you're using something — sorry. No, you're good. Okay, I guess no one was talking. So MetalLB is a general-purpose load balancer plugin. You create Services of type LoadBalancer, and MetalLB will then proxy any requests into the cluster through the standard load balancing provided by the CNI and make the load balancing happen transparently into the bare metal cluster. Really excellent utility.
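To make the MetalLB piece concrete, here is a hedged sketch of the older ConfigMap-style Layer 2 configuration together with a LoadBalancer Service. The address range and app name are placeholders, and newer MetalLB releases configure pools through CRDs instead, so check the MetalLB docs for your version.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config
      namespace: metallb-system
    data:
      config: |
        address-pools:
          - name: default
            protocol: layer2
            addresses:
              - 192.168.1.240-192.168.1.250   # placeholder range on the node network
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer        # MetalLB assigns an address from the pool above
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
    EOF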
Other CNIs also have specific implementations, such as Calico, Cilium, and DANM. All of these have methods of exposing services on an IP, in some cases shared amongst the nodes, in some cases not. So there are a few different ways we can go about load balancers. Storage is another common thing. The usual answer to storage, if you want to keep it open source, is Rook. Rook has a number of implementations — EdgeFS, Ceph, CockroachDB — there are a number of different storage backends. Ceph is the oldest and most reliable, and Ceph implements the most common storage components, including block storage, so you can have storage classes and block storage add-ons, and volumes to plug into your clusters that will look exactly like any cloud provider's storage system: persistent volumes. It also supports S3-like storage, so an arbitrary object store that's available within the cluster and stored in the same backend storage cluster as everything else. So Ceph, while it may not be the most high-performance of the options, is one of the most flexible ones and the easiest to get started with. You have ingress controllers; those are frequently the same as in the cloud providers, often just using NGINX, but you can plug in any other ingress controller, and in many ways that's actually easier with bare metal clusters. As for CNIs, again, we have a number of options, but in general you have a broader selection with bare metal, and much of that depends on what your network infrastructure looks like. You can use something like kube-router with BGP support for a low-level system. You have Cilium, like I said, which most of us use, and which handles a lot of the other fancy features. I personally use a combination of Cilium and DANM — D-A-N-M, a Nokia project — which allows you to directly allocate external IPs to internal pods, which lets me run, say, ingress controllers internally, and so on. There are a number of different ways; these are just kind of the natural ones that I use. Other cloud services are not coming to mind immediately, but in general, you have translations on bare metal for most any of the cloud providers' services.

Okay. Is there a UI which I can use to interact with the API? Not as yet. There is the talosctl CLI, which allows you to do a lot of things; in fact, it has some console-based GUI implementations. We are working on a GUI, but we don't have anything yet.

Can you please let me know what type of API this is — do you mean a REST API, or is it using something else? True, I did not mention that. All of our APIs, both internal and external, are based on gRPC.

Okay. Well, normally this would be the time where we would end the webinar, but since we started late, Sean, I'd like to ask you if you're available to answer a few more questions; we still have some here in the Q&A box. If not, can you tell us where people can reach you if they have any questions they'd like to ask you? Absolutely. So the easiest way is on our Slack at... let's see, I'll just share that again. Too far. There we go. So the Slack is slack.dev.talos-systems.io. That is the most reliable place to get hold of me or any of the Talos people. Excellent. Can you stick around to answer a couple more questions? Sure, I have a few more minutes. Excellent. Thank you so much.

What container runtime does Talos support? So Talos is built on containerd. We have had some discussions in the past about making that pluggable, but we have yet to have any real need to do so.

Okay. Do you have NVIDIA GPU support? That is actually something that I am going to have to ask. I know we had discussions with one of our users, but let me ping one of the other people on that. Okay, I don't know that offhand. Not a problem. Okay, so no GPU support as yet, but we are working on that.

What is the best way to contribute to and follow this project over time? Definitely the Slack and GitHub are our primary sources, both internally and externally. So those are the best places to go to contribute, to interact, to ask for features, to work on features — to do really anything with the project. Those are the primary sources.

What if I need gVisor support? Good question. That is not one we have looked at before, to my knowledge. I do not know the answer, but I will punt to one of our other engineers on that and get back to you if I hear.

Okay, not a problem. Has MetalLB been tested with Sidero-managed clusters? I assume it would work as in any other Kubernetes cluster. Yes, definitely.

This is a long one. Speaking of Rook and Ceph: we use dedicated NICs on our nodes for the Ceph cluster storage, and we force Ceph to use them with a custom configuration override. As a result, we configure a pair of bonded NICs on the bare metal for a dedicated cluster network. Could I do something like this — arbitrary extra NICs on the nodes — using Talos? Yes, absolutely. All of my bare metal servers in fact do exactly that. I have bonded connections as well as management connections on each of my servers, and those are all handled by Talos. Excellent.

Do we have any other questions? How did Andrew Rynhard and Timothy Gerla meet? What is the origin story, given the background at Red Hat post-Ansible acquisition? I think I will let Tim answer that. Hi everyone, I'm Timothy, and I'm one of the co-founders of Talos Systems. Andrew is the creator of the project.
I joined him last year to see where we could take the project, find him some resources, and help him build out the open source community. So that's really the origin story. It's been around for quite some time now; Andrew started it, I guess, almost three years ago. So if you're interested in learning more, feel free to drop by the Slack and we'd be happy to chat. Awesome, thank you so much for that. Do we have any other questions at all? I think we'll call it a day here. I want to thank Sean for a wonderful presentation and for great Q&A facilitation, and I want to thank everybody for joining us today. We apologize again for the technical difficulties that we had earlier, but thank you very much for sticking around and for spending a few extra moments of your day on today's presentation. As I said before, today's recording and slides will be posted on the CNCF webinar page at cncf.io/webinars. Thank you again to everyone for joining us today, and everyone take care. Stay safe and we will see you next time. Great. Thank you.