Happy Monday, and welcome back to an OpenShift Commons briefing. Today we are going to talk about Metal Kubed, or Metal3, or whatever we want to call it, but I like Metal Kubed the best; it's a new CNCF project. We have two folks here from Ericsson who are maintainers on the project, Maël and Feruzjon, and I'm going to let them introduce themselves. We'll have live Q&A at the end, so wherever you are listening, whether it's Facebook, YouTube, or Twitch, ask your questions there or here in BlueJeans and we will relay them to the guest speakers. At the end of the demo and the wonderfulness of Metal3 we'll just have a conversation. So, Feruzjon, take it away.

Hello, thanks a lot, and welcome everyone. Thanks for having us today and giving us a chance to share with you the project we've been working on. We are really happy to be here, and today we'll be talking about a quite young project called Metal3, which does provisioning of bare metal hosts in a Kubernetes cluster. So, shortly, who are we? My name is Feruzjon Muyassarov, I work at Ericsson as an experienced developer, and I'm one of the Metal3 project maintainers.

Sorry, a small mishap with the mic. My name is Maël, I also work at Ericsson, and I'm also a Metal3 maintainer.

Cool, thank you. So what is Metal3? What problems does it solve, and what does it really offer you? First of all, it's a CNCF sandbox project, and quite a young one, as I want to mention, but interest in the project has been increasing a lot. The community keeps growing, and we're seeing more and more people joining and contributing in all sorts of ways, which is really amazing. In short, it's a bare metal host provisioning tool that allows you to manage your bare metal nodes through Kubernetes-native APIs.
There are different ways and already existing tools that you can use to manage bare metal infrastructure, but the primary goal of Metal3 was to use Kubernetes-native APIs to do the management of bare metal hosts. Second, it's self-hosted, meaning that all the building blocks and controllers that Metal3 offers run inside your Kubernetes cluster, which eliminates the need for extra tooling to manage Metal3 itself. Metal3 also offers a provider for the Kubernetes subproject called Cluster API, under SIG Cluster Lifecycle, and we will talk a bit more about Cluster API in the upcoming slides. Now let's see what the Metal3 stack looks like and what it really gives you, starting with the high-level picture. Imagine you have bare metal infrastructure that you want to manage. Metal3 has a component called the bare metal operator that takes care of provisioning and deprovisioning your bare metal hosts. One thing to note here is that under the hood the bare metal operator uses Ironic, from the OpenStack community, but it's important to know that we're not shipping any other OpenStack services or components, because we run Ironic as a standalone tool. Ironic is somewhat hidden from the picture: the bare metal operator always takes care of managing Ironic itself, so you don't have to do that management, because Metal3 will do it for you.
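To make the idea of a "Kubernetes-native API for bare metal" concrete, a BareMetalHost resource might look roughly like the sketch below. This is a hedged example: the field names follow the metal3.io v1alpha1 API, but the namespace, BMC address, credentials secret, MAC address, and image URLs are all invented for illustration.

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-0
  namespace: metal3
spec:
  online: true                           # desired power state of the server
  bmc:
    address: ipmi://192.168.111.1:6230   # management (BMC) endpoint; could also be redfish://...
    credentialsName: node-0-bmc-secret   # Secret holding the BMC username/password
  bootMACAddress: "00:5c:52:31:3a:9c"    # lets Ironic match the booting server to this host
  image:
    url: http://172.22.0.1/images/ubuntu.qcow2            # OS image to write to disk
    checksum: http://172.22.0.1/images/ubuntu.qcow2.md5sum # checksum Ironic verifies
```

Applying a manifest like this is, in spirit, all it takes: the bare metal operator reconciles the object and drives Ironic to register, inspect, and eventually provision the machine.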
The next component in the Metal3 stack is called Cluster API Provider Metal3, which is another big component Metal3 offers. Before we jump into it, I'd like to mention that the bare metal operator can be used separately: you don't have to use any other Metal3 components to do host management, since the bare metal operator is the main component. But if you want to integrate your cluster management, specifically bare metal cluster management, with projects like Cluster API, then you use Cluster API Provider Metal3, which is the plugin that hooks your bare metal management into the Cluster API project. Both the bare metal operator and Cluster API Provider Metal3 run inside the Kubernetes cluster, which makes them really easy to manage; as I already said, you don't need extra tooling or components to manage these clusters. The last component in this stack is Cluster API itself. Cluster API is the high-level project in this stack: it offers Machine objects and Cluster objects, and those objects are realized by different infrastructure providers in different ways. In our case, for example, if you create a Machine object in Cluster API, which at the end of the day represents a Kubernetes node, that Machine object will result in a bare metal server being created through Cluster API Provider Metal3 and then through the bare metal operator, and you end up with a bare metal server managed by a high-level object like a Machine from Cluster API.

Great. So, a quick overview of Cluster API before we dive into the Metal3 project. Cluster API is a Kubernetes SIG Cluster Lifecycle subproject focused on cluster lifecycle management, and it allows you
to deploy and manage clusters in most cloud environments, with most cloud providers, and whenever you do that management, you do it in a declarative way: you use Kubernetes-native APIs and Kubernetes manifests. In short, you will be able to deploy and manage your Kubernetes cluster the Kubernetes way. That means all the components and building blocks of Cluster API, all the controllers, run inside a Kubernetes cluster, and those controllers manage your target clusters, which are running somewhere in a cloud or on bare metal infrastructure. As such, we always need a cluster to start Cluster API with, so we create a management cluster, also known as a bootstrap cluster or ephemeral cluster, which you see on the left side of the slide. In that ephemeral cluster we install all the Cluster API components and controllers, and once the bootstrap cluster is up and running with everything it needs, you can start actually creating your target cluster in your desired cloud environment, whether it's GCP, AWS, Azure, DigitalOcean, whatever. What we did with Metal3 is extend the list of infrastructure providers by adding a Cluster API infrastructure provider for bare metal. Metal3, as I already mentioned, allows you to do cluster management and bare metal host management on real bare metal servers, on real bare metal infrastructure, so we basically added one more provider to the list of Cluster API infrastructure providers. Now let's see how objects reference one another from one project to another. Imagine the Kubernetes Node object, the core object coming from Kubernetes: once you
want to create a Kubernetes node, you can use Cluster API to define where that node should be created from, like a virtual machine or an actual bare metal server. The Node is represented by the Machine object, which comes from the Cluster API project and is generic across all infrastructure providers. But once you create the Machine object, you have to tell Cluster API in which infrastructure you actually want to create the machine: for example, you can say you want a virtual machine on AWS, or a machine on Google Cloud, or a real bare metal server that will represent your machine and, at the end of the day, your Kubernetes node. Once you have defined which infrastructure the machine should come from, the actual cloud infrastructure takes care of creating virtual machines for the cloud providers, or the actual bare metal server in the Metal3 case. In most cloud providers the process ends in the cloud infrastructure itself, because those providers already have APIs to manage their underlying infrastructure. In our case, since we're doing bare metal host management, we didn't have any cloud provider behind us, and we had to create our own API to actually manage the real servers, the physical machines. For that we created the bare metal operator, as I already mentioned, which does the management of the underlying infrastructure, your bare metal servers, and at the end of the day it talks to the bare metal server itself. So, moving from left to right, we have the Node object, which is referenced
by the Machine, and the Machine is referenced by the infrastructure-specific machine, in our case the Metal3Machine. Then we talk to the bare metal operator and say: hey, I want to create one server in this data center, please go ahead and do it. That takes the place of the APIs other cloud providers already have. The next topic is the Metal3 custom resource definitions, the CRDs, and I will leave this part to Maël.

Thank you very much. Let's dive a bit deeper into the technical details of the Metal3 project. We're going to start with an overview of the different elements we have and how they work together. You've already heard all these terms: the bare metal operator, Cluster API Provider Metal3, Cluster API itself. For all of those, there are objects representing real items. For example, we have a Cluster object that represents the Kubernetes cluster, with its provider equivalent, which here is the Metal3Cluster. In the same way, we have the Machine, which represents the Kubernetes node, and the Metal3Machine, which is the infrastructure-provider equivalent of that Machine. All of these refer to each other: the Cluster points to the Metal3Cluster, telling Cluster API how to actually deploy that cluster with the infrastructure provider, and the Machine points to the Metal3Machine, telling it exactly how to deploy that Kubernetes node. In addition, as Feruzjon said, there's the BareMetalHost, which is referenced directly from the Metal3Machine to say that you want to deploy this Kubernetes node on this specific hardware. The different controllers are shown in this picture, and they each interact with their own objects; there's usually a dedicated controller for each of the objects, and the
controller may also edit other objects as needed to fulfill its role. That is the high-level view of the different CRDs we have in the Metal3 project. Now let's look specifically at each of the Metal3 CRDs. The first one is the Metal3Cluster. It consists of the usual elements of a custom resource; the interesting part for us is the spec. We have a definition of the control plane endpoint, which represents the Kubernetes API endpoint for your cluster, the load balancer endpoint. It needs to be defined beforehand, because on bare metal infrastructure we are not in a cloud provider environment where we can just create load balancers; unfortunately, we have to handle that as part of the deployment, so you give it up front: this will be the endpoint of the cluster. That is the Metal3Cluster. The next item is the Metal3Machine, which gives more detail about how the Kubernetes node will be deployed. If we look at the spec, the first thing is the image reference: you give a URL to an image, a qcow2 image for example, and the checksum of that image, so that Ironic can deploy the node with that specific image. There's also a host selector that allows you to choose which of the bare metal hosts you want to deploy on, and then a couple of other available fields, like the data template, which allows you to pass templates for the metadata that will be included in the user data, or for the network configuration, which will also be applied by Ironic through cloud-init. Of course, you don't have to use templates: if you want to give the metadata or the network configuration directly, you can also do that via the metadata and network data fields. So this basically
allows you to configure the deployment of your Kubernetes node at quite a fine-grained level of detail. It doesn't touch anything with regard to Kubernetes itself; it's more for when you want to do some customizations on your node, like deploying a specific image, and all of that lives in this specific object. The next object we're going to look at is the BareMetalHost. The BareMetalHost represents the physical server, so in the spec we have all the details needed to access, control, and manage that server, and for Ironic to work with it. The first, very important one is the BMC section: that's where you put the details of the management interface of the node. In this case it looks a little different because we're using virtualization, but if you're using an actual server, you would probably have something like Redfish, IPMI, or iLO, depending on what you're using. A lot of protocols are supported, thanks to Ironic, which is a project supporting many different management protocols. The other thing you have to give in the spec is the boot MAC address, because Ironic needs to identify the node when it boots: you tell it the expected MAC address for the node, so that when a node boots, Ironic can figure out that it corresponds to this BareMetalHost. That's how they are matched. Then again you can give the image, the URL and the checksum, and then the user data, if you have any, that you want passed through for the cloud-init run. These are the three core CRDs we have in the Metal3 project. We are not going to go through the Cluster API ones, because there are already a lot of available webinars and other material related
to them, and we can give you some references if you want. Let's keep diving deeper into Metal3 instead, and how things are actually working here. On the next slide we are going to talk about the bare metal operator and how it manages the node. The bare metal operator itself is a controller that interacts with the BareMetalHost CRs, and it can do a few operations. The first is inspecting the hardware: it boots the node with something called the ironic-python-agent, which goes through all the specs of the hardware and reports them to Ironic, and the bare metal operator is then able to fetch that data. At the end of the inspection you will have, for example, the NICs, the hard drives, the CPU, the firmware, a lot of different elements available for you to read from the BareMetalHost CR. The second operation the bare metal operator can do is provision the host: you give it the image, and it takes care of making sure that Ironic writes that image to disk and reboots your server to boot that specific image. The third operation is cleaning the disks, which usually happens during provisioning and deprovisioning of the host. Then there are a couple of other useful capabilities: the bare metal operator can of course manage the power of your node, so if you need to reboot it, power it on, or power it off, you can do that just by editing the BareMetalHost CR. Now let's talk in finer detail about the interaction between the bare metal operator and Ironic. The BareMetalHost CR also points to a couple of secrets, and those secrets contain, for example, the user data, the metadata, and the network data, all of them for cloud-init. You can give them as separate elements, but at the end they will be
combined into a config drive; Ironic takes care of this. When the bare metal operator instructs Ironic to start the deployment, Ironic starts talking to the BMC to turn on the server, the server boots the ironic-python-agent, and Ironic then talks directly with that agent, here called the deploy ramdisk, and asks it to download the image from whatever web server it is stored on and write it to the local disk. In addition to the image, it also writes the config drive to a specific part of the hard drive, and once that is done, it instructs the server to reboot from the hard drive, and the server boots into the image you asked for. That's how the magic happens. Now, if we want to go even deeper into the details, we can see what happens under the hood. When the bare metal operator registers the node, what OpenStack Ironic does under the hood is talk to the BMC to turn on the server. The server sends a DHCP query when booting, because it tries to boot over PXE. The dnsmasq instance, which in that case plays the role of DHCP server, answers, and in the DHCP reply gives all the details about the image the server needs to download to boot over PXE, and that image is the IPA, the ironic-python-agent ramdisk. The first time the server starts with the IPA, it will always do the introspection and send the report to Ironic, which the bare metal operator can then fetch directly from Ironic. Once the node is introspected, you have it ready, and you can start provisioning the node and deploying something on top. For provisioning, it's the same flow: the bare metal operator starts talking with Ironic, Ironic boots the IPA on the server again and instructs it to deploy the node, and the IPA downloads the image over HTTP and writes
it to the disk. The thing is, if you're downloading the image in certain formats, like qcow2, it will need to be uncompressed and converted, but basically at the end it writes the raw image to the disk. Once it's done, the agent sends a signal to Ironic, which triggers a reboot, and the bare metal operator then updates the status to say: there you go, your node is ready, you have it provisioned. So that's it for provisioning. Now, if we talk a bit about the integration with Cluster API Provider Metal3 and the different functionality it handles, we have the following. This is really the integration with Cluster API, where the Metal3 objects like Metal3Cluster and Metal3Machine actually interact with the Cluster API controllers. We have the different elements like Metal3Cluster and Metal3Machine, but there are also additional ones that we didn't go into much detail on, like the Metal3MachineTemplate, which allows Cluster API to generate Metal3Machines from a template; you can think of it like the spec part of a Deployment that is then translated into actual pods. There's also the Metal3DataTemplate, from which you can generate the user data, including the network data, for the node. All of these are linked to Cluster API Provider Metal3, and the result is a single cloud-init file that is handled as user data and passed to the bare metal operator, which then forwards it through Ironic onto the config drive on the node. When cloud-init starts, it will find this cloud config and run with it, deploying your Kubernetes node and getting it ready. Once the node boots, it joins the Kubernetes cluster; if it's the first control plane node, it might start with just a kubeadm init, then the next control plane nodes join, and then the workers, all deployed
through this process.

I think that's enough detail for now. I'll give it back to Feruzjon to present the demo of everything we just talked about.

Can we pause for a quick question? Peter asked a question about the provisioning, before we go into the demo. What he asked was: does the image get expanded to fill the hard drives on the server when you install, or do you need to specify a specific layout, etc.?

The expansion happens later, when the server actually boots the image; that's usually when it happens. Of course, the specific size of the image is what gets written to the disk: say you have a raw image that's actually 10 gigabytes, you will have those 10 gigabytes on the hard drive, but your hard drive might be, I don't know, 500, so it will of course get expanded when you boot the actual node. That's part of the image's own mechanism.

He's asking a bit of a follow-up question too: if you have multiple disks, etc., how do you manage or customize the image location, boot configuration, and so on? Or are you going to demo that for us now?

That's a very good question, and I don't think it's part of the demo. The way to do this is actually an Ironic mechanism called root device hints. Root device hints allow you to specify some identifier for the disk you want to deploy on. You could say, for example, I want the disk with this WWN to write the image to, or any disk over 500 gigabytes will do; there are really a lot of ways to figure out which disk to use. You could even select by path, for example /dev/sda, but the path is a bit tricky because it might change, and you're never really sure it's the correct one, so we really recommend using other identifiers, like the WWN.

And a final question from Peter. Peter's got a lot of questions; he's obviously interested in this. Can you
provision hardware RAID and other hardware configuration as part of this setup?

It depends. Software RAID is supported out of the box, but for hardware RAID it depends on which hardware you're running on. I'm not completely sure which models are really supported in this case, but some of them have a RAID configuration possibility, some Dell servers for sure; I'm not sure whether iLO really supports this, but that's something we can dig into. It is definitely possible in some of the cases.

Everything's possible with a little extra documentation, it sounds like. Something that needs a little documentation.

Yeah, it's just that personally I didn't have the need to do any kind of RAID configuration, so I haven't dived into it, but it definitely is possible for some of the hardware. So let's go to demo time now, then.

Great, thank you.

Thank you very much. Let me switch to my terminal quickly. Can you see the screen?

Yes indeed, thank you.

Thanks. We have recorded the demo, but maybe I'll first show something else before we jump into the actual demo. In the Metal3 project we have created a special repository called metal3-dev-env that contains a set of scripts you can use to test Metal3. Using metal3-dev-env you can, for example, deploy Cluster API Provider Metal3, deploy the bare metal operator, and create a couple of libvirt virtual machines and manage them as if they were your real bare metal servers. The reason we use virtual machines is, first, that we cannot always provide bare metal servers for testing, and it gets very complicated, but thankfully we have some tools that allow us to really replicate the real-world scenario with virtual machines. So, for
example, instead of the BMC you would use on real bare metal servers, we use VirtualBMC, which talks to the management interface of your libvirt virtual machines. What will happen during the demonstration is that we clone the metal3-dev-env repository, go into the directory, and run make. The make run will first install a cluster for us and then, inside the cluster, install a couple of components: Cluster API, the high-level core project I already talked about, then Cluster API Provider Metal3, and then the bare metal operator. All these components run inside this cluster; let's call this cluster the source cluster, also known as the ephemeral or bootstrap cluster. Once all these components are up and running, the scripts create a couple of libvirt virtual machines and then create the BareMetalHost objects, reconciled by the bare metal operator, and those BareMetalHost objects represent the virtual machines we created. Once we have virtual machines referenced by BareMetalHosts, we start provisioning the hosts: we install an operating system on them, inject some SSH keys, then go inside the VM and check whether the cluster is running and whether the BareMetalHost, in our case a libvirt virtual machine, is part of the target cluster. Once the operating system is installed and provisioning is done, the scripts create a target cluster for us and join those bare metal hosts, the nodes, into the target cluster. So in that case we have two clusters: one source, and the second the target, where the target is basically running on the bare metal
environment, but in our case it's an emulated environment, so we will be running on libvirt virtual machines. Once you have the cluster up and running with those nodes, you can do any kubectl CRUD operation, create, delete, update, whatever, on the bmh object, which is the short name for BareMetalHost, or any operation on the top-level Machine object coming from Cluster API. All right, now I'm going to switch to my terminal and start playing the recording we made. What happens first is that I have already cloned metal3-dev-env in this environment, and now I'm exporting a couple of environment variables before I run make. For example, here I'm specifying the container runtime to be used in metal3-dev-env; you can use different container runtimes, for example Docker or Podman. Then I'm specifying the target OS that will be used to provision the target nodes; I'm specifying Ubuntu, and of course I'm not saying which Ubuntu version, because we've made the scripts so that you just specify the OS and they pick the right version for you. Then we have another environment variable, EPHEMERAL_CLUSTER, which specifies which tool you want to use to spin up the source cluster; we currently support kind and minikube. The next variable is the CAPM3 version, the Cluster API Provider Metal3 version; we have different versions of CAPM3, and in this case we're using the latest one, v1alpha4. And then the number of nodes, which represents the number of libvirt virtual machines you want to create in your environment. Once we have exported those, we start running make, and this process will take a couple of minutes; it's actually going to take a lot of time, so we have
done the magic with a video, of course. Once the script has finished running, we run a script called verify.sh, which does some checks to make sure we have created the desired number of virtual machines and the desired replicas of BareMetalHosts, that all the networking is set up properly, and all those kinds of checks. At the end you can see a couple of containers up and running as Docker containers; these are mostly used for managing the actual virtual machines. You can see that all the checks have passed. Now we can see a few things here. First, we have four libvirt virtual machines in a powered-off state. At the same time, we have four Ironic nodes, which are referenced by those virtual machines; in the real case these would be your bare metal servers, for example. And at the bottom you can see we have four BareMetalHost objects, node-0, node-1, node-2, and node-3, in the ready state. The ready state means the host is ready to be provisioned: it has no operating system, so you can start provisioning, but introspection is already done for it, and it's registered with the bare metal operator. Manageable is the corresponding state from the Ironic perspective; it means you can start provisioning those Ironic nodes. In the same environment we have a couple of scripts we use to provision the bare metal hosts: you can see we have a cluster script, a control plane script, and a worker script. We first execute cluster.sh, which creates the Cluster object and applies it to the cluster, and also creates the Metal3Cluster object, which is Metal3-specific. Then controlplane.sh creates one Machine object, then a Metal3Machine object, and then the BareMetalHost object
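The chain of objects the control plane script creates can be sketched as two linked manifests. This is a rough, hedged illustration, not the exact output of the scripts: the API versions match the v1alpha4 era mentioned in the demo, and the object names are made up.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Machine
metadata:
  name: test1-controlplane-0
spec:
  clusterName: test1
  version: v1.21.2                           # hypothetical Kubernetes version
  bootstrap:
    configRef:                               # kubeadm bootstrap config for the node
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
      kind: KubeadmConfig
      name: test1-controlplane-abc12
  infrastructureRef:                         # Machine -> Metal3Machine
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: Metal3Machine
    name: test1-controlplane-abc12
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3Machine
metadata:
  name: test1-controlplane-abc12
  annotations:
    metal3.io/BareMetalHost: metal3/node-0   # Metal3Machine -> chosen BareMetalHost
spec:
  image:
    url: http://172.22.0.1/images/ubuntu.qcow2
    checksum: http://172.22.0.1/images/ubuntu.qcow2.md5sum
```

Once the bare metal operator provisions node-0, the BareMetalHost's consumer reference points back at this Metal3Machine, closing the loop.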
so they're all linked together. Then, when you run the worker script, it creates the same chain: a Machine, then a Metal3Machine, and then the BareMetalHost. So in this environment we now have two BareMetalHosts, two Metal3Machines, two Machines, and two libvirt virtual machines that we will use in our cluster: one of the machines will represent the Kubernetes control plane node and the second will represent the Kubernetes worker node. I'm going to skip how I run the scripts; basically, I have run them here, and after some time you can see that provisioning has started. Provisioning does not happen in parallel; it goes one by one, so you can see one of the nodes has started provisioning, and it started with the control plane, of course. The virtual machine representing that BareMetalHost is up and running, and from Ironic's perspective you can see that node is in the clean wait state, meaning it's currently doing the cleaning, as Maël mentioned earlier: it wipes all the disks available on that virtual machine. This whole process also takes some time, and after a while we can see that two of the bare metal hosts are now in the provisioned state, both libvirt virtual machines are up and running, and two of the Ironic nodes are in the active state. You can also see the consumer field on the BareMetalHost objects, which shows the Metal3Machine that is consuming each particular BareMetalHost. And you can see the online field: it's set to true for both of these BareMetalHosts, which means they are powered on right now. The next step we're going to do is to see
We mentioned a couple of times that we have some objects coming from the Cluster API and some objects coming from Metal3, and that they reference each other. The first, core object coming from the Cluster API is the Cluster; you can see that it is in the provisioned state. Then we have an infrastructure-specific cluster object that is referenced by this top-level Cluster object; in the Metal3 case that is a Metal3Cluster. You can see its name is test1, and it is referenced by the Cluster object. Then we have the Machine objects, also from the Cluster API: we have two Cluster API Machines, and two Metal3Machines that are referenced by those Machines. At the same time, we have two other objects from the Cluster API: one is the MachineDeployment, and the second is the KCP, the KubeadmControlPlane. A MachineDeployment is basically like a Deployment for Pods: you can use it to manage your Machines. The KCP is similar to a MachineDeployment, but it is specifically meant for control plane nodes. That's why you can see one replica for the control plane node and one replica for the workers; in total they represent the two machines here. Great. The next step, if you remember, is that in this environment we created four virtual machines, so we will try to scale up the MachineDeployment and see whether the Metal3Machines and BareMetalHosts get created and start provisioning. But before that, I would also like to show you that we now have two bare metal hosts provisioned and part of the target cluster: one of them is the control plane and the second one is the worker. We also have the kubeconfig for the target cluster.
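A trimmed sketch of the two replica-managing objects just described, the KubeadmControlPlane and the MachineDeployment. The apiVersions match the v1alpha3-era Cluster API and may differ in other releases; the object names are assumptions, and required fields such as the version, selector, and bootstrap configuration are omitted for brevity.

```yaml
# KCP: like a MachineDeployment, but specifically for control plane nodes.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: test1-controlplane
spec:
  replicas: 1                      # one control plane machine
  infrastructureTemplate:
    kind: Metal3MachineTemplate    # Metal3 supplies the infrastructure side
    name: test1-controlplane
---
# MachineDeployment: manages worker Machines the way a Deployment manages Pods.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: test1-md-0
spec:
  clusterName: test1
  replicas: 1                      # one worker, scaled up later in the demo
  template:
    spec:
      clusterName: test1
      infrastructureRef:
        kind: Metal3MachineTemplate
        name: test1-workers
```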
Here you can see that kubectl get pods with the target kubeconfig shows the basic Kubernetes pods up and running in your target cluster. If we also check the nodes of the target cluster, we should see two Kubernetes nodes running, which basically represent our two libvirt virtual machines. One thing to notice here: the status, as you see, is NotReady, and that's because we haven't installed a CNI in the target cluster yet. Usually we install Calico, but in this case we didn't; if you install Calico in the target cluster and set up the networking properly, then you should see the nodes in the Ready state. The last check: if we do kubectl get machines against the source cluster, we can see two Machines with exactly the same names as the Kubernetes nodes in the target cluster, and that's because, as I mentioned earlier, we created two Machines that represent your two Kubernetes nodes. OK, that was the first part; now we're going to play a bit with the MachineDeployment and try to scale it. Currently we have one replica of the MachineDeployment, so we have one worker node, and we'll try to increase the replicas to three, because we have two other libvirt virtual machines that are free and can be utilized. So first we increase the replicas; you can see that it now says it's scaled. If we then check the status of all the corresponding objects, we can see that the MachineDeployment is scaling up, trying to reach three replicas of the Machines. You can see new Machines being created; they are in the provisioning state, but they don't have a provider ID yet, so they are not yet consumed. And two Metal3Machines are created at the same time by these Cluster API Machines.
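The scale step itself can be done with an ordinary kubectl scale, since Cluster API MachineDeployments expose the scale subresource. The MachineDeployment name below is an assumption; check yours with the first command.

```shell
# List MachineDeployments in the source cluster to find the worker one.
kubectl get machinedeployments

# Scale the (hypothetically named) worker MachineDeployment from 1 to 3.
kubectl scale machinedeployment test1-md-0 --replicas=3

# Watch the chain of new objects appear as hosts are picked and provisioned.
kubectl get machines
kubectl get metal3machines
kubectl get baremetalhosts
```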
They are also part of the same cluster, test1, and you can see that for now the two new virtual machines are still powered off. This provisioning, as I said, takes a lot of time, so after a while we should see the two new bare metal hosts in the provisioned state and the two libvirt virtual machines up and running as part of the Kubernetes cluster. You can see that they are now up: the Machines are running, the Metal3Machines are ready, and the other two virtual machines are also up and running. We can also check the Ironic node status: there are two more Ironic nodes in the active state and two more BareMetalHost objects in the provisioned state, and you can see who is consuming each BareMetalHost here. All of them have online set to true, because they are all powered on right now. If we check the Kubernetes nodes again, we now see four nodes in our target cluster, including the two that we just added. That is basically the end of the demonstration, so I will switch back to the slides. So, that was the end of the demonstration; now, how do you contribute to Metal3? First of all, we very much welcome any contribution that anyone makes to Metal3. As I said, Metal3 is a young project, but we are growing really fast and we have a lot of contributions from different companies, as was listed on the previous slides. A contribution can be basically anything: documentation, requests for new features; you might have found some bugs and want to fix them, or you can report them as issues; you can help create talks or presentations like this one, or write blog posts. On the Metal3 website we have a lot of blog posts that
different people have written about Metal3 features, basically sharing knowledge with the community and beyond. Any questions, and even any feedback you might have about Metal3, are really appreciated; we would really love to have contributions from your side. Also, about the community: the Metal3 community is quite diverse, and as I said, we have contributors in different time zones. The GitHub organization is metal3-io, and we currently have contributors from AT&T, Dell, Ericsson, Fujitsu, Mirantis, and Red Hat. If you want to reach community members, ask questions, or chat with contributors, you can join the Metal3 channel, cluster-api-baremetal, on the Kubernetes community Slack; you can also reach the maintainers through the CNCF mailing list, or the community through the metal3-dev mailing list, and you can watch for updates and new features on Twitter. We have community meetings every other Wednesday at 1pm UTC on Zoom; you can find the link here. We also have recordings and various demos on the Metal3 YouTube channel, showing different features of Metal3 and some of the discussions from the community meetings that you might be interested in. And that is the last slide of our presentation today; thank you very much for listening, and I hope it was interesting and informative for you. Thanks a lot. Well, thank you very much, Maël and Feruzjon, for joining us today. A couple of things pop into my mind. I know a lot of the people watching this are probably OpenShift users, so I believe there's a set
of instructions for that; Peter, maybe you can pull that up. This is a slightly different deployment approach: we are using Metal3 for OpenShift's provisioning of bare metal, but there's a whole separate set of documentation on how to do it using OpenShift, so let's not confuse the two. And that's the beauty of open source: it gets used by lots of people in lots of different ways, and we all get to collaborate on it. I think it's pretty amazing, and it's wonderful that you're in the CNCF sandbox; I know that was a recent event, a couple of months ago, I don't know exactly. Yes, a couple of months ago. And it's a pretty healthy community that you have around Metal3 already. It definitely filled a gap in the pantheon of the CNCF landscape, which is amazing to think, because there's so much stuff in that landscape diagram, but bare metal was one of those things that really wasn't being addressed very well. So I think this is a perfect fit for the CNCF sandbox, and hopefully incubation sometime in the not-too-distant future. Can you tell us a little bit... and let me see in the chat if anyone else has questions. Peter was saying the BMC is also used with OCP, but it bootstraps all the typical OCP installation process. Yeah, there's a slightly different deployment methodology when you're using this on OpenShift, so definitely read the OpenShift bare metal docs if that's where you're coming from; and if you're doing this anywhere else, read what's on the metal3.io website, and contribute your feedback to that website too. I think that's the beauty of this project and a lot of other projects at the CNCF: they may get put out there by Red Hat initially, but then they get adopted by Ericsson and AT&T and
Mirantis, and everybody collaborates on it. Can you talk a little bit about the use case at Ericsson? What made it so important for Ericsson to get involved in this project and help move it forward? Two of you are maintainers on the project, so obviously Ericsson has a big commitment to using this; can you tell us a little bit about that, and how you got permission to participate so actively in the project? Yeah, I can take that question. We actually have quite a big commitment indeed: there are ten people working full-time on the Metal3 project on the Ericsson side. The reason is that Ericsson has its own Kubernetes distribution, called CCD, and there was a request for bare metal support. We were looking around and found that Cluster API had a very interesting idea that we wanted to adopt, so we were looking for a bare metal provider, and at exactly that point the Metal3 project popped up from Red Hat and got started. It took us a bit of time to get on board, because internally we have a lot of NDAs and things that prevent any kind of open source contribution; there's actually a second entity that was created just for open source contribution, so people had to move companies to be able to do open source work. But then we really got on board and started contributing as much as we could, and now Metal3 is really used as part of the core of the bare metal solution for Ericsson's Kubernetes distribution. There are a couple of other questions popping in here that relate more directly to the project. One is asking: what are the prerequisites before you start the installation, like DNS, network availability, BMC configuration, and so on? And it seems you have to know MAC addresses and other details about the physical hardware you're provisioning on; is that true? Yeah, there are a couple of things you need to have done before you
can start any deployment. First, of course, the networking needs to be in place, at the physical level but also in the configuration of the switching and everything. At that point you should already know the MAC address of the interface that boots over PXE, to be able to register the Ironic node properly. You don't need to know much more about the hardware; maybe some details about the hard drive, if you want to select one specifically for the installation. With regard to the BMC, it obviously needs to be configured and reachable: Ironic needs to be able to reach it, meaning the node where the Ironic pod runs in the cluster needs access, through routing or direct connectivity, it doesn't matter which, to the actual BMC. And then you store the credentials for that specific node's BMC in a Secret for the BareMetalHost. So yes, there's a bit of work to be done beforehand: you need to have your BMC configured and the credentials in place, and you need to have the networking done; but it's an initial setup, and once that is done you can deploy whatever you need on top. I hope that answers the question. I think it did, and these have been great questions, Peter, so thanks, because a lot of people really are trying out this project for the first time. I know one of your colleagues on the project, Hamesh, is working on a demo of deploying Metal3 on OKD 4, so hopefully he's coming to the OKD working group in the not-too-distant future to demo that as well. There's still lots to do in this project. You can use it in production, as you obviously are, but there are still lots of places where you can contribute and give back, and people are really looking for feedback on this project, so test it and deploy it anywhere, whether it's OKD or anywhere else.
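Tying back to the credentials part of that answer: the BMC username and password live in an ordinary Kubernetes Secret that the BareMetalHost references by name through spec.bmc.credentialsName. A hypothetical example, with a placeholder name and placeholder values:

```yaml
# Hypothetical BMC credentials Secret; Ironic uses these to control the host.
apiVersion: v1
kind: Secret
metadata:
  name: worker-0-bmc-secret
type: Opaque
stringData:
  username: admin     # placeholder BMC user
  password: changeme  # placeholder BMC password
```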
Then definitely give some feedback to the metal3.io group. I know you also have a CNCF webinar coming up, I think, or maybe Rose is going to back you up on that one; that's sometime in December. There are lots of opportunities. If you could throw your final slide with the resources back up, I think that's probably a great place to end today, so that people know how to get hold of you all and can participate in those community meetings. And I hope we'll see you in the incubation channel sometime; I know you just got into the sandbox, so it may take a little more doing, but it's definitely something. And I'm betting that you might have a few talks at KubeCon coming up? We don't have talks per se, but now that we are a sandbox project we got the benefit of being able to schedule a couple of office hours during KubeCon, so we're going to have two: one of them is on the 17th of December and the other one is on the 20th, I think, but I don't recall the hours; the time zones are such a mess right now. The 17th of November, probably. November, sorry. Yes, KubeCon is coming much faster than that; it's like a freight train coming right at us. So definitely look for those office hours; the first is on the 17th, which is coming up soon. And I think there's at least one community meeting before that as well on your schedule, if I looked right? Next week. Next week, so there are lots of places where you can participate. And if you are an OpenShift user, do check the OpenShift documentation on bare metal deployment, because it is a slight variation on this and you'll probably need some more details. But this is a great way to showcase how someone like Ericsson is putting in a huge effort, stepping up, contributing back, and collaborating with lots of other people to make something wonderful that we all get
to use and take advantage of, and hopefully contribute back to. So, Feruzjon and Maël, thanks for taking the time today; we really appreciate it. Keep us posted, and we'll have you back when your next release is out so you can tell us about all the new features. Look for a demo by Hamesh on deploying OKD 4 on bare metal using Metal3; I think that's one of his challenges of the month. There's also a great series of blog posts on the metal3.io website to check out. So if you have a use case for this and you're looking for somewhere to land and have a conversation about it, definitely reach out and join this crowd. Thanks again, everybody, and have a wonderful week; it's only Monday. Thank you very much.