Hello, everyone. I think it's time for us to get started. My name is Marsda Konolev, and I'd like to thank everyone who is joining us. Welcome to today's CNCF webinar, Metal³ ("Metal Kubed"): Kubernetes Native Bare Metal Host Management. I'll be moderating today's webinar. We'd like to welcome our presenters: Maël Kimmerlin, Senior Software Engineer at Ericsson Software Technology; Feruzjon Muyassarov, Cloud Developer at Ericsson Software Technology; and Pep Turró Mauri, Senior Software Engineer at Red Hat. A few housekeeping items before we start: as an attendee you are not able to talk during the webinar, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end of the presentation. This is an official webinar of the CNCF and as such it is subject to the CNCF Code of Conduct; please do not add anything to the chat or questions that would violate it. Basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and the slides will be posted later today to the CNCF webinar page at cncf.io/webinars. With that, I'll kick off today's presentation, and you're welcome to start. Great, let me quickly share my screen. I hope you can see it. Hello everyone, and thanks for joining this webinar. We're really, really happy to present Metal³, and thanks for giving us the chance to introduce the project. But before we jump into the actual topic, just a small introduction about ourselves. My name is Feruzjon Muyassarov and I'm working as an experienced developer at Ericsson. Yeah, and my name is Maël Kimmerlin; I'm also working at Ericsson. Yeah, I'm Pep and I work at Red Hat. Great, thank you. So: what is Metal³? Why do we need it? What problems does it solve?
So first of all, it's a bare metal host provisioning tool that lets you manage your bare metal nodes through the Kubernetes API. You might be wondering why we need it, since there are already a bunch of existing tools to manage bare metal hosts. The main difference, and the goal of Metal³, is to manage your bare metal nodes through Kubernetes-native APIs; we want something that lives in the Kubernetes ecosystem, let's say. Metal³ also offers a plugin for another Kubernetes subproject called Cluster API, which we'll talk about a bit more in the next couple of slides. Metal³ is also self-hosted, meaning that all the custom controllers and all the building blocks run within your Kubernetes cluster, which avoids needing extra tooling to manage Metal³ itself. You do need a Kubernetes cluster for Metal³, of course, but that eliminates many of the problems you might otherwise encounter. It's a very young project, but currently we're seeing more and more interest from different communities and a lot of contributions, which is really nice. And lastly, it's currently a CNCF Sandbox project; it's been, I guess, a couple of months since we entered that cycle. All right. During the talk you will hear the words "Cluster API", or CAPI for short, quite a few times, so I think it makes sense to give you a brief introduction to what Cluster API is, so that you have a good idea going into the next slides. Cluster API is a Kubernetes subproject focused on cluster lifecycle management. It lets you manage your clusters in many different cloud environments, but not only clouds; it could even be bare metal.
All the components of Cluster API run within a Kubernetes cluster, and it manages your target clusters, which are running somewhere in the cloud. So to start with Cluster API, you basically need some kind of Kubernetes cluster. That cluster goes by different names that all mean the same thing: in some contexts you might hear "management" cluster, and in some contexts, even from us, "ephemeral" or "source" cluster. Cluster API comes with its own client called clusterctl, which you use to spin up clusters in your desired environment. You usually start with `clusterctl init`, and among its different flags we'll focus on the infrastructure provider flag: here you pass the infrastructure in which you want to spin up your target cluster. For example, if you want to create a cluster on Google Cloud, you pass the GCP infrastructure provider, or the AWS one for AWS, or Azure for Azure. In our case it's Metal³, the plugin I mentioned earlier. So if you want to create a cluster on bare metal infrastructure, let's say in your own data center, you choose Metal³, which will take care of provisioning your real bare metal nodes. Let's take a small use case and see how it ends up on the bare metal infrastructure. Imagine you want to create a small cluster with three nodes: one master node and two worker nodes. You want these three nodes to be backed by physical machines, because we're in a bare metal context. Between a Kubernetes node and the actual physical server there are a couple of layers, or processes, involved. The first thing is that the Cluster API project brings its own custom resources and, of course, custom controllers.
One of those resources is called Machine, and it represents your Kubernetes node. Machine is generic across all providers, so it doesn't know about any specific provider; what it does have is a reference to the desired infrastructure. For example, if you're about to create a cluster on AWS, you'll have an AWSMachine object created by the AWS infrastructure provider; or if it's Google, it's handled by the GCP infrastructure provider for Cluster API. But let's focus on Metal³. After the Machine object has been created, Metal³ takes care of creating the Metal3Machine object, which is referenced by the CAPI Machine object. After that we have another controller, or operator to be exact, called the bare metal operator, which actually knows how to talk to your underlying infrastructure, your real physical machines. The bare metal operator controls another object called BareMetalHost, and a BareMetalHost has an almost one-to-one mapping to a server. It stores a lot of information about your actual server: the CPU, disks, RAM, and so on, and you can manage everything through the BareMetalHost. That's basically the chain of objects that helps you create a node on bare metal infrastructure. So, coming to Metal³ now: let's focus on the Metal³ stack and see what Metal³ actually brings and how it manages the bare metal infrastructure, or the servers.
Imagine you have a couple of physical servers that you want to manage, provision, and then bring into a Kubernetes cluster. The first thing you need, on another node of course, is a Kubernetes cluster, because Metal³ runs inside Kubernetes; you can start with a very minimalistic, small cluster. On top of that you install the bare metal operator, which, as I already mentioned, is the Metal³ component that knows how to talk to your underlying infrastructure. Just by running the bare metal operator, you are already able to manage your servers: you can provision them with your desired image, and so on. But if you want to extend Metal³'s capabilities with the features provided by Cluster API, you'll use another Metal³ component, the Cluster API provider for Metal³. This is the plugin I already mentioned that we plug into Cluster API, and with it, Cluster API knows how to create bare metal nodes, or a bare metal cluster, through Metal³. Right. In the next slides we'll briefly talk about the custom controllers we've built in Metal³ and some of the objects. But before we jump in, one more thing about navigation: if you go to the Metal³ GitHub organization, you'll see four main GitHub repos, which you might already recognize from the slides I showed earlier. The first is metal3-docs, where you'll find a lot of design documents. It's always growing, because we're getting more and more contributions and more features are being added.
So that's the place where we store documentation like design docs, and we've also currently started writing down user documentation for the whole project and extending the existing documents. The component I mentioned that knows how to interact with the underlying infrastructure is the bare metal operator, which lives in a separate GitHub repository, as you see in the middle here. Then you have cluster-api-provider-metal3, the plugin for the Cluster API project. And then we have another repository that we use for testing and development purposes, called metal3-dev-env, which we'll talk about shortly in the next slides. OK, now let's go through these repositories, or components, of Metal³ and see how they work and how they represent our objects. Yeah, sure, let's start with the bare metal operator. As Feruzjon has already mentioned, this is really the base building block of Metal³; it's what we use to manage the hardware. It's a standalone thing, so you can use it without the Cluster API integration. That would allow you, for example, to just provision some nodes without integrating them into a Kubernetes cluster, and then do whatever you want on top of the provisioned nodes, because we have a feature to inject cloud-init data into the nodes, which lets you adapt and run whatever you want on top of them. So: the bare metal operator is standalone and really the base of Metal³. Then, how does it actually work with the hardware under the hood? The bare metal operator has the representation that was already mentioned, the BareMetalHost, and the BareMetalHost represents the physical hardware. There are only two requirements to start managing that hardware.
First, you need all the details about your BMC, the baseboard management controller: the credentials, the address, maybe the CA certificate if you're using one; anything that allows you to manage that node directly. That also implies that you need connectivity between the cluster where you're running the bare metal operator and the BMCs of your hardware. The second thing you need is the host's MAC address, which is used to identify the node when it boots using Ironic, so we know which BareMetalHost we're talking about. Once you have those two things in the BareMetalHost, you're ready to go and can kick off the deployment process. So let's talk a bit about how the bare metal operator interacts with the different components; we can go through everything at once. There are actually two things behind this BareMetalHost: the BareMetalHost object itself, which represents your hardware, and a Secret attached to it that contains the username and password for the BMC. In the BareMetalHost CR you have a field, credentialsName, that references the credentials you're using, plus the address of the BMC. Then you put in the MAC address you want, and you can specify whether you want the node to be powered on or off. We'll dive deeper into the fields right after this, so let's go to the next slide. You have the BareMetalHost, and the bare metal operator keeps reconciling that object. Now let's look in detail at the fields of this BareMetalHost and how you can use it to manage your server.
Like any other Kubernetes object, there's an apiVersion and a kind: the API group is metal3.io, version v1alpha1 because it's still under development, and the kind is BareMetalHost. Then you have the spec part of the object, where you declare the state you want your BareMetalHost to be in. The first thing you have to give, as we already discussed, is the BMC: the address, and the name of the Secret in which you store the credentials. Then you specify the boot MAC address of the node; that's the MAC address the node will use to PXE boot, and it's used to match the node being booted to a particular BareMetalHost. Then you can specify the boot mode, whether you want UEFI or legacy. After that you have the consumerRef field: the object that is currently consuming the BareMetalHost, if any. It doesn't have to be set, but if you're using the Cluster API Metal³ provider, it will be set to the Metal3Machine that is currently consuming the BareMetalHost. The next field in the BareMetalHost is the image field: that's where you specify the image you want written to the disk of your hardware. It should be available over an HTTP request, and you need to provide the checksum, the checksum type, and the format the image is using. When you specify that image, Ironic will boot a temporary image, an ISO called IPA, the Ironic Python Agent. IPA will then download the image you gave, write it to disk, and reboot from disk, so the node starts with the provisioned OS directly from disk.
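Putting those spec fields together, a minimal sketch of a BareMetalHost and its BMC credentials Secret might look like the following. The names, addresses, MAC, and image URL here are invented for illustration; the field names follow the metal3.io/v1alpha1 API described above.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: node-0-bmc-secret
type: Opaque
stringData:
  username: admin
  password: changeme
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-0
spec:
  online: true                          # desired power state
  bootMACAddress: "00:11:22:33:44:55"   # MAC used to match the PXE-booting node
  bootMode: legacy                      # or UEFI
  bmc:
    address: ipmi://192.168.111.1:6230  # protocol depends on your hardware
    credentialsName: node-0-bmc-secret
  image:
    url: http://example.com/images/centos.qcow2
    checksum: http://example.com/images/centos.qcow2.md5sum
    checksumType: md5
    format: qcow2
```

Applying something like this with kubectl registers the host; once the image field is set, the IPA-based provisioning described above kicks in.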
It will also write the cloud-init data, which is what we're coming to now; there are a couple of fields for it, the next one being metaData. That's basically a set of fields you can provide that will be used to render the cloud-init user data and network data. For example, you can put the hostname of the node in the metadata, and since it's a map, any other field you want. Then there is networkData: a reference to a Secret containing the network configuration that cloud-init will apply on the node, so you can do all the networking configuration from there if you don't want to do it through the user data. The next field is the online field; it's basically a switch: is the node on or off. The following field is userData. This contains all the core cloud-init data given by the user, and it's really, really powerful: you can create users, run commands, install packages; it allows you to do a lot of things. Then, once the node is booted, cloud-init will kick in, so the image needs to have cloud-init installed. Of course, we're talking about cloud-init now, but it could work exactly the same way with Ignition. Cloud-init kicks in, reads the data that Ironic wrote to a specific part of the disk, and uses it to perform the setup of the node. The last field in the spec is called rootDeviceHints. On physical hardware you probably have multiple disks, and you probably want to specify one in particular to be the root disk, let's say the OS disk; it could even be a RAID device. For that, you give what are called root device hints.
These are basically hints that tell Ironic how to choose the disk on which it will write the image. You can give the device name, or things like HCTL, or some other identifier; there's quite a broad range of fields available that let you specify exactly which disk you want. Ironic has its own way of defaulting the selection, to a disk that is writable and larger than 4 GB, but if that doesn't fit, you can give anything in these root device hints and it will be matched against what's on the node; the disk that matches will be selected for writing the image. Then there's the status part of the object, which contains a lot of information about your node. When the node is created, it goes through a process called introspection, which gathers all the data about the node. This data is put in a field called hardware. You'll have, for example, the CPU, with details about what you have in your node; something about the firmware; the hostname as it was at the time the data was gathered; the list of interfaces, with details about each of them; the amount of RAM; and the different disks on the node. All the fields you have in this storage part can actually be used in the root device hints. There's also a field, poweredOn, that indicates whether your server is actually turned on or off. The rest just reflects what you have in the spec, except that you can also find the state of your node: either "ready", when it's waiting to be used, or "provisioned", once it's in use and running the workload you want on top.
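As a rough sketch of how the two parts relate (all values invented): the hints live under spec, the introspection results under status, and the storage entries in the status are what the hints get matched against.

```yaml
spec:
  rootDeviceHints:
    deviceName: /dev/sda      # could also match by hctl, wwn, serialNumber, model, ...
status:
  poweredOn: true
  provisioning:
    state: ready              # becomes "provisioned" once an image has been written
  hardware:
    ramMebibytes: 16384
    storage:
      - name: /dev/sda        # fields here can feed the rootDeviceHints above
        sizeBytes: 480103981056
```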
So that's it for the BareMetalHost; now we'll move to the Cluster API provider for Metal³ and the integration with Cluster API. This is basically the same slide Feruzjon already presented, but with a bit more information about Cluster API, the controllers, and how things work together. You can see here that different objects represent different things in Kubernetes. You have the Cluster, for example, which is the representation of a Kubernetes cluster, and then the Metal3Cluster, which is the infrastructure part of that Kubernetes cluster. Then you have the Machine, which represents a Kubernetes node, the Metal3Machine, which represents the actual infrastructure part of that machine, and then the BareMetalHost, which represents the hardware. There's also the KubeadmConfig object, which contains the kubeadm configuration that will be used to provision that specific machine. Each of these is reconciled by a different controller. We won't go too much into the details right now, but if you have any questions, please write them in the Q&A and we'll answer them at the end of the presentation. We're now going to have a short look at what those objects are. The Cluster is the description of the Kubernetes cluster, and I'd recommend the Cluster API book for more detail on that part, but we're going to focus on the Metal³ side. The Metal3Cluster is basically just a representation of the endpoint you'll have set up for your Kubernetes cluster: the only field it contains is the controlPlaneEndpoint, with a host and a port; that's where your API server will be listening once your cluster is up. Then we have the Metal3Machine, and this is the infrastructure part of the machine.
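A quick sketch of those two Metal³ objects, the Metal3Cluster just described and the Metal3Machine we're about to look at. Hosts, names, and URLs are invented placeholders, and the exact apiVersion depends on the CAPM3 release you have installed.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3Cluster
metadata:
  name: test1
spec:
  controlPlaneEndpoint:
    host: 192.168.111.249   # where the new cluster's API server will listen
    port: 6443
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3Machine
metadata:
  name: test1-controlplane-0
spec:
  image:                    # same image shape as on the BareMetalHost
    url: http://172.22.0.1/images/CENTOS_8_NODE_IMAGE_K8S.qcow2
    checksum: http://172.22.0.1/images/CENTOS_8_NODE_IMAGE_K8S.qcow2.md5sum
```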
The Machine contains everything related to the Kubernetes part, and the Metal3Machine contains everything in relation to Metal³. That includes, for example, the image you want deployed on the node; that's exactly the same thing as we've seen in the BareMetalHost. It also contains the providerID, which is the same as on the Kubernetes node: the exact same providerID will appear on the Kubernetes node when it pops up. In the status you'll find all the addresses of your node, that is, the addresses you can reach it on, and whether it's ready to go. So that was a very short overview of the different objects we have, and I think after that we can switch to the demo and show how everything works; Pep can take over. Thank you. Yeah. OK, so we'll show this in action using one of the repos that was mentioned in the Metal³ GitHub organization: metal3-dev-env, the development environment for Metal³, which actually simulates bare metal using virtual machines. This can get confusing, so I want to clarify that the target of this is bare metal; in the demo you will see virtual machines running, but they simulate bare metal hosts. A quick overview of what the environment looks like: we're going to deploy a brand new Kubernetes cluster, a small one, with just one control plane member and two worker nodes. All three of those will be on bare metal, which will actually be simulated; that's highlighted later. The deployment of this new cluster will be handled through a management cluster.
In metal3-dev-env, this management cluster is actually a small cluster of only one node, using minikube. In this minikube management cluster we'll have all the Metal³ components deployed, the bare metal operator and CAPM3, and also Cluster API. By the way, on acronyms again: we sometimes say CAPI and CAPM3; that's Cluster API and the Cluster API provider for Metal³. If we skip forward, the starting point of the demo will be the management cluster already deployed with all the components in place and a few of the resources already there; we'll be deploying a few more during the demo. OK. So just to summarize, we have a management cluster. Actually, I think I can take over the screen sharing here, one second. OK, I hope you're seeing this. By the way, this is a screenshot of the website metal3.io; there's a section there called "Try it" that goes through this developer environment and explains how to run it. Also, this is a recorded video, because bare metal provisioning does take time, time that we don't have here. This is a diagram of the environment from the slides you were seeing, or rather its actual implementation: you see a management cluster, a minikube cluster. There are a few networks here: one used for provisioning, one for the bare metal, and this one is the access network, the public network, let's say, of the cluster. Here, just let me pause quickly to see what we have: a virtual machine manager showing the virtual machines. The minikube VM, not open at the moment but you can see it running, is where the management cluster is; kubectl on this host is configured to talk to the management cluster running in minikube. And we have two nodes, node-0 and node-1; the consoles of those nodes are open here on the left. OK.
Just to confirm: as mentioned, we already have Metal³ deployed; this is the CAPM3 namespace. Cluster API is also deployed, in a different namespace, and those two nodes have already been represented in the management cluster in the form of BareMetalHosts. OK, that's the starting point: we have these two empty bare metal nodes on which we want to deploy a new Kubernetes cluster, a new bare metal Kubernetes cluster, using Metal³ from the management cluster. So first we start by declaring the Cluster; this is the Cluster API object that Maël described, and we won't get into its details. But I do want to mention that we also have here the Metal³ part, the Metal3Cluster, with the endpoint for the API, as Maël explained. We just declare that cluster; nothing much will happen other than the resources being created. Then we move on to actually deploying the cluster, starting with the control plane. Again, some of these objects are Cluster API objects that we won't get into the details of, like the control plane itself. But it references a Metal³ object, a Metal3MachineTemplate. We didn't talk about templates, and we won't now, but imagine it as a kind of generator of Metal3Machines; here we have the fields of the spec that make up a Metal3Machine, like the image. We'll be deploying CentOS on those systems. By the way, I didn't mention it, but what you see on the consoles at the moment is the Ironic Python Agent image, just waiting for instructions. We'll deploy CentOS on those systems as the operating system to run Kubernetes. OK.
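As a hedged sketch, a Metal3MachineTemplate is essentially a Metal3Machine spec wrapped in a template. The name and URLs here are placeholders, and the apiVersion depends on the installed CAPM3 release.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3MachineTemplate
metadata:
  name: test1-controlplane
spec:
  template:
    spec:
      image:   # each generated Metal3Machine gets these fields
        url: http://172.22.0.1/images/CENTOS_8_NODE_IMAGE_K8S.qcow2
        checksum: http://172.22.0.1/images/CENTOS_8_NODE_IMAGE_K8S.qcow2.md5sum
```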
Once we declare this control plane and this set of resources, what we immediately see happening is that one of the hosts is picked as the target for the control plane; it's a single-node control plane. And we actually see here that node-0, the BareMetalHost that was declared, has been picked as the target for the cluster, sorry, for the control plane, and Ironic is now provisioning it. This, by the way, is the part that has been highly accelerated; it does take time, time that we don't have, but if you look at the clock on the right you'll see it moves faster than reality. Anyway, we're already done: node-0 has been provisioned. Let me just pause quickly here. node-0 has now been provisioned with the image we specified, and that's what we see here as "provisioned". We see a Machine, the Cluster API object, down here; it's still being provisioned, and in the middle between the two we have the Metal3Machine object. The physical part, let's say the bare metal part, is already done, provisioned, but the Machine itself isn't done yet; in other words, we still don't have a Kubernetes node there. Now, this is an SSH session into the node: a fresh CentOS system that has just been provisioned. Taking a look after booting, cloud-init is still running, and you can see that, following the instructions from the Metal³ operator, it's actually running kubeadm to install the Kubernetes cluster here. Again, this is another part that has been highly accelerated; it takes a while to download the images. Well, it has started the API server, controller manager, etcd, CoreDNS, and so on. After a while we see containers actually starting to run the new cluster's control plane; here we have the API server already starting.
OK, well, you get the idea. At this point this system, this bare metal host node-0, is already a single-node new Kubernetes cluster. It's not Ready yet because we don't have a CNI plugin deployed, but we can move on and actually grow the cluster by adding workers. For this we'll use this manifest here; again, a Cluster API object, a MachineDeployment. By the way, I didn't mention the name: test1 is the name of the cluster we created, and it's referenced here. Again, we won't get into the details of the fields; just get the idea that, equivalently to what we did before, we also have a Metal3MachineTemplate that specifies how the machines that will actually become nodes will look, and again we're using CentOS. Applying this manifest, the consequences are relatively similar to what we saw with the control plane. We actually only have one bare metal host left, node-1, which is what has been picked here, and we're following basically the same process as before: the node is being provisioned, it gets imaged; let me skip ahead a bit faster. Just to recap, at this point we have the two nodes provisioned from the physical, or bare metal, point of view. You can see here the providerID of the control plane machine, while the new worker node is still being provisioned; well, sorry, it is provisioned, but it's still being configured as a worker node. Now, looking back into the control plane: this is our new cluster, and we still see that even though the physical machine, the bare metal host, was already provisioned, it wasn't a node yet.
Now it is configured as a worker node of this new cluster, so the cluster has two nodes. They are not Ready because, as I mentioned, they don't have a CNI plugin. That's what I'm doing here; this is completely unrelated to Metal³ itself, but we need CNI networking to make the cluster functional. In this case I chose to deploy Cilium. I can speed that up, but basically after Cilium gets deployed, we see that we have a fully functional, brand new bare metal Kubernetes cluster with those two nodes we have here. OK, both are Ready; we have a cluster. And just to recap, this is a picture of the current situation: we have two BareMetalHosts that have been provisioned; they back two Cluster API Machines, which actually host Kubernetes nodes. But OK, we're not done yet. You will have noticed that we have another bare metal (well, fake bare metal) host here, node-2. It's switched off and has no relationship with the cluster yet; this is a new server, let's say, that we want to add as a new worker for our new cluster. The first thing we do is declare it as a BareMetalHost. You saw in the presentation that the BareMetalHost object can contain a lot of information, but this is just the essentials: we declare node-2, its boot MAC address, and the credentials to access the BMC in the form of a Secret, and that's it. By creating those two objects, we'll see something we didn't see with the previous ones, because when we started the demo, node-0 and node-1 were already registered and inspected. Here is what we'll see now, after creating this brand new BareMetalHost.
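Those "just the essentials" from the demo boil down to roughly the following two objects. The MAC address, BMC address, and credentials shown are invented placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: node-2-bmc-secret
type: Opaque
stringData:
  username: admin
  password: password
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-2
spec:
  online: true
  bootMACAddress: "00:5c:52:31:3a:9c"
  bmc:
    address: ipmi://192.168.111.1:6232
    credentialsName: node-2-bmc-secret
```

Note there is no image field yet: without one, the host just gets registered and inspected, which is exactly what the demo shows next.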
We see it appear in the list of bare metal hosts, registering and then inspecting: the Bare Metal Operator noticed that we have a new BareMetalHost and is using Ironic to find out what the node looks like. You can see that the node rebooted and Ironic is now inspecting the hardware. Again, this is another thing that has been accelerated for demo purposes, but let me accelerate it even more. Maybe a bit too much. Okay, it became ready. This is the status we saw when we started the demo, when node-0 and node-1 were already ready; we have now reached that point with node-2.

Now we will use that new node to add another, second worker to the new cluster, and we will use a different trick here. A MachineDeployment is a Cluster API object; it's the equivalent of a Deployment, but for Machines instead of Pods, and the current cluster has a MachineDeployment with one replica. As a reminder, we have three bare metal hosts, the last of which, node-2, was just registered and inspected. What we will do is scale that MachineDeployment and ask it to have two replicas. By declaring this new state of two replicas, Cluster API sees that a new Machine is needed. This causes a new Machine to be created, which asks for a new Metal3Machine, which will be backed by the bare metal host we just added, node-2. And this is what's happening here: you see the same process that we followed before, with node-2 now being provisioned, basically the same process as for the previous node we deployed.

And that's basically the gist of the demo, so this is a summary of the final situation. We have a brand new Kubernetes cluster with three nodes, three bare metal nodes.
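The scale-out step described above amounts to a one-field declarative change on the MachineDeployment. A minimal sketch, assuming the illustrative name test1-md-0:

```yaml
# Either edit the MachineDeployment and bump the replica count...
spec:
  replicas: 2   # was 1; Cluster API reconciles by creating a second Machine,
                # which Metal³ backs with the newly added BareMetalHost
# ...or, equivalently, scale it imperatively:
#   kubectl scale machinedeployment test1-md-0 --replicas=2
```

Either way, the reconciliation chain is the same: MachineDeployment → new Machine → new Metal3Machine → an available BareMetalHost.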
Here I think we're checking how it looks from the new cluster itself. The new node has only been provisioned; the configuration has not finished yet, so it's just a matter of waiting a bit more. As happened with node-1, node-2 will eventually become a proper node. We see it pop up here as NotReady but, because we already configured the CNI plugin, it will soon be Ready. Okay, this is it: we have a new cluster. And I think that's it for the demo; that's all I wanted to show. I'll hand it back to Maël to continue.

Yeah, I'll take it over. Thanks a lot, nice demo. Let me share my screen again, if you can stop sharing please. Yep. Can you see the slides? Yes. Just one thing to mention that I also saw in the questions here in the chat, and forgot to mention in the beginning: we're not shipping any other OpenStack services with Metal³; it's just standalone Ironic. And even though Metal³ is using Ironic, you don't have to manage Ironic itself, and you don't have to use any other OpenStack services like Nova, because it's just Ironic itself that we're using under the hood to manage the actual bare metal servers.

Great. So if you're interested in contributing, or if you think the project might be interesting to you, we very much welcome any contribution, or even just trying it out. Contributions can take different forms. You might want to make documentation changes, which we're doing quite a lot of right now, so if you have documentation skills you're very much welcome to share them. You might also have feature requests that you think would be valuable to have in Metal³; those are also really nice to hear.
Or you might have found some bugs to report, which the maintainers and contributors of Metal³ will of course try to fix first; but if you have your own fix for them, that's even better. You can also take part in reviewing pull requests, sharing your comments and giving reviews, or help with talks, webinars, or presentations like the one we're doing right now, or write blog posts: we have a Metal³ website where people write blog posts about different features of Metal³. You might also have questions or feedback about Metal³, which is very welcome from the community as well. If you want to know how to get started with contributing, check the link below.

On the last slide, I would say that we have a very diverse and really interesting community. Right now we have contributors from different organizations across the world, to name a few: Red Hat, Ericsson, Dell, Fujitsu, and AT&T. If you want to reach the maintainers, the contributors, or the whole community, the best way is to join the cluster-api-baremetal channel on the Kubernetes Slack. If you have questions, you can also reach out through the mailing list. We have bi-weekly community meetings at 1:00 UTC, so you will have to convert that to your own time zone; they happen on Wednesdays over Zoom, and you will find the link below. Community meetings are recorded. Apart from the community meetings, we also have nice demos of Metal³ stored on the Metal³ YouTube channel. The actual code is hosted on GitHub under the metal3-io organization. We also have a nice website where, as I said, we publish blog posts and updates about what's happening in Metal³.
You can also follow updates on Twitter. So you will be able to find the slides at the link below, the Zoom link for the community meetings, the recordings on YouTube, and the Kubernetes Slack channel for Metal³ to join. And I think that's all from us for today; I hope it was informative and interesting to listen to. Thank you very much. I think we can now take some questions.

Yes, please. If you have any questions, please type them in. We have about nine minutes left for questions, and I think we already have a few of them.

Yeah, I tried to answer them while we were going, but there are a couple that I think would be nice to discuss in a bit more detail now, if no further ones come in. There were a couple of questions regarding what you deploy on top of the hosts you're provisioning. Metal³ gives you two options. If you go with Cluster API, you will deploy a cluster directly, where one physical machine is one Kubernetes node; that is the direct way. But if you don't use the Cluster API provider for Metal³, that is, the integration with Cluster API, then you can of course deploy anything you want on top of your nodes. You could very well deploy a hypervisor, whatever you want, on top of them and deploy your Kubernetes on top of that. You could even have a nested Metal³ setup: use Metal³ to provision the hardware nodes, then expose the virtual machines as fake hardware nodes, and use Metal³ again to provision them, this time through Cluster API. It's quite flexible like that; you can pretty much do anything on top of it. Then there were some questions about the operating systems we can deploy.
Ironic is pretty much only writing the image that is provided to the disk, so in that regard, as long as you have a disk image of the OS you're trying to install, it can cover anything. You just need the image to be available and downloadable over HTTP. The one limitation would be that if your image doesn't have cloud-init or Ignition or a similar mechanism, then you will have to bake all the configuration into the image, so you would need a specific image per node; it will probably not work out of the box.

Then there were some questions about the communication between the Bare Metal Operator and the BMC. The Bare Metal Operator embeds something called Ironic, which is an OpenStack project for the management of hardware, and Ironic out of the box already supports lots of different protocols and lots of different hardware: IPMI, Redfish, iLO, even some proprietary protocols. You just specify which protocol you want to use when you specify the BMC of your host, for example Redfish or IPMI or whatever, and the Bare Metal Operator will configure Ironic to talk properly with your BMC.

And then there are some open questions that we should probably address, because they don't yet have an answer. Yeah, go ahead. Sure. The first one was which virtualization product we have good experience with and could recommend looking at. What you've seen in the demo is libvirt. If anyone knows of other options, it would be great to hear about them; actually, I don't think we know that many options, since we really usually work with libvirt and build the nodes directly on top of that. So sorry, no, unless Pep has more insight there. No, I would say the same: up until now we've only tried libvirt.
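Coming back to the BMC protocol question above: in a BareMetalHost, the protocol is selected through the URL scheme of the BMC address. A hedged sketch of some common forms (the hosts, ports, and paths are made up for illustration):

```yaml
bmc:
  # The URL scheme tells the Bare Metal Operator which Ironic driver to use.
  address: ipmi://192.168.111.5:623                          # classic IPMI
  # address: redfish://bmc.example.com/redfish/v1/Systems/1  # Redfish
  # address: ilo4://bmc.example.com                          # HPE iLO 4
  # address: idrac://bmc.example.com                         # Dell iDRAC
  credentialsName: node-2-bmc-secret  # Secret holding the BMC username/password
```

Swapping the scheme is the only change needed to move between vendors; the rest of the BareMetalHost stays the same.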
Then the next question: does Metal³ support provisioning of any type of storage mechanism, or is that outside the scope of Metal³? I'm not really sure what is meant by "storage mechanism" here. If it's just about the disks on the node, well, it depends on the image you're using; it's the Ironic Python Agent (IPA) that writes it to the disk, and since that ramdisk is CentOS-based, it's quite supportive of different types of storage. But if you're asking about the Kubernetes cluster that is deployed on top of it, for example persistent volumes, then that is outside the scope of Metal³; that belongs to the configuration of your target cluster.

Yeah, then the next question: Red Hat is a contributor; does it plan to use Metal³ to provision OpenShift? Yeah, I think that's a question for me. The answer is: it already does. It's slightly different, in the sense that the control plane part especially is deployed somewhat differently from what you saw in the demo; the demo showed full Cluster API using the kubeadm control plane. By the way, you can always deploy OpenShift on bare metal by deploying your control plane nodes yourselves, but I understand the question is about automated deployment and management of bare metal. OpenShift uses Metal³ for, let's say, the worker part. For the control plane, OpenShift has its own installer, which takes care of deploying the control plane, and in Metal³ terms you will see the hosts that represent the control plane.
Whereas the demo provisioned everything through Metal³, the control plane nodes for OpenShift show up as externally provisioned, and the deployment of the rest, the worker nodes, is handled by Metal³ already. Yeah, thank you. And another question about the storage mechanism: the clarification was whether we can support things like Ceph or Rook. The answer is that it's outside the scope. However, we are working on features now to make it feasible to have this kind of storage deployed on your cluster. For example, until now we were always cleaning the disks on upgrade; now we're working on a feature that would allow you to disable this cleaning so that you can keep those disks, Ceph disks for example, so you don't have too much data flowing through your cluster from rebalancing of the drives during the upgrade. So it's outside the scope, but we are trying to be kind to it, to make sure it works smoothly on top of a Metal³-deployed cluster. Was there another one?

All right, I think that is all the time we have for Q&A today. So thanks a lot once again to Maël, Feruzjon, and Pep for a great presentation and Q&A. Again, thank you everyone for joining us today. The recording of the webinar and the slides will be online later today. We are looking forward to seeing you at future CNCF webinars, and hopefully you will have a great day. Thank you, everyone. Thank you very much. Thank you.