Okay, well, thanks, everyone, for coming today to this session about running Kubernetes, or, as I'll show, OpenShift (I'm going to try to keep it, you know, compatible, so to speak) on OpenStack bare metal; that is, Ironic. My name is Ramon Acedo. I'm the product manager for a few things, one of them being the integration between OpenShift and OpenStack, and this is part of it.

And, well, let's start by talking about bare metal itself. We have been seeing in this OpenStack Summit and in the previous one, and probably over the last two years, that bare metal is on trend. Like, four years ago, the number of sessions about Ironic was limited. If you check now, there are so many more. We are seeing the same in the market: many customers, many partners. And, you know, you can see it all over the place. Amazon is providing bare metal. Why? Well, because customers require it. In the OpenStack user surveys, the one from last year and the one from this year, we can see users saying, yeah, I'm running much more bare metal than I used to. To the point that last year, what we saw is that 20% of the production environments with OpenStack had Ironic in them. That's quite an increase from previous years. This year, and this is something I want to talk about today, there's another data point that we suspected, right? Which is that most of this is driven by wanting to have Kubernetes, containers, on bare metal. This is one of the things that's driving this growth, and we're going to see some of the reasons why today.

I wanted to mention this blog post. This is from Joe Fernandes, who leads the cloud business unit at Red Hat. And, well, it describes very well, at a high level that anybody can understand, the reasons why bare metal is making a comeback, right? There are a number of use cases, and this year and the previous year, Kubernetes is one of the main ones driving this adoption. Traditionally, we have seen many of them. HPC is a very popular one; HPC has been running on bare metal for many, many years. When we started with OpenStack, we saw many HPC customers interested in adopting OpenStack and, in part, also replacing what they were running on bare metal nodes with virtual machines, right? Virtual instances. And you know, the trend has been moving a bit to half and half, or, you know, depending on the use case, VMs and bare metal nodes. When you need direct access to dedicated devices, well, it's obvious. With virtual machines you can use things like PCI passthrough, and for the network you can have SR-IOV; there are a few things you can do, but at times that's inconvenient, and having full access to the metal, if you can afford that, is always a little bit easier. Big data is nothing new; big data has been there for a long time, and we have seen many customers also using bare metal there. For example, you can have the control nodes in VMs and then all the data nodes on bare metal. That's a common use case. But I would say that in the last two years, Kubernetes is the one that's picking up and driving this increase, right?

Okay. Then let's see why. And in particular today, as you know, Kubernetes runs on OpenStack very happily; it's been doing that, well, almost since the beginning, for many reasons, and the integrations are great. But in particular on bare metal, which means OpenStack managing the bare metal nodes that Kubernetes runs on. So let's start by talking about Kubernetes, and Kubernetes slash OpenShift, right?
I don't want to focus on either of them. Kubernetes is workload driven, meaning that if I'm a developer, do I really care what's in the underlying platform, whether it's OpenStack, whether it's bare metal, virtual machines, or even whether it's public or private? As a developer, I don't really care. As a developer, what I need is to have access to my containers, to be able to keep working on my applications, and if I need to distribute them, I want to have ways to distribute them. And if you tell me that it's a physical load balancer, or Octavia in OpenStack, or HAProxy on some node that somebody set up, I'm happy with it as long as it works, right? So this is a premise, and what we want to do is an integration that's seamless, that provides this type of experience to developers.

Now I'm not a developer anymore; I'm the operator, so I need to have a platform that behaves in that way, that allows me to have this level of integration in a transparent way for my users, the developers, right? Well, Kubernetes and OpenStack are deeply integrated, and this integration, as you probably have seen in some of the sessions here, is only growing. Take storage, for example: we have Manila, and Manila in turn can use Ceph, via CephFS, right? And if we don't use bare metal, we can use Kuryr; Kuryr understands Neutron, and Kuryr understands CNI. So we have a level of integration in many of the areas where we need it. And again, it's great, when I'm the operator, to be able to do all of this, but all of this needs to be transparent, needs to be hidden from the end users of Kubernetes, which are the developers, right?

And then OpenStack itself, and I'm sure everybody in this room knows this, goes across the whole platform in my data center. It's an abstraction layer where, in the same way that developers need this level of transparency, the operators want the same when it comes to the data center. OpenStack abstracts the data center and exposes it as infrastructure as a service, you know? So these are reasons why it's a good choice to run Kubernetes on OpenStack.

Today, we're going to look at bare metal in particular. As you know, the bare metal service that we have in OpenStack is Ironic, and I want to cover a few things about Ironic. First off, when we need to manage bare metal in general, we need a platform that allows us to do the life cycle, right? We probably have experience with OpenStack if we are considering running Ironic, and we want to have a similar experience, if not identical, to the one we have with virtual machines. This is where Ironic comes into play. Ironic has its own APIs, but you can manage Ironic through Nova, which is exactly what you do with virtual machines. And when you do an openstack server create, that server can be either a virtual machine or a bare metal node, okay? And then today, we're going to cover a few of the things listed here, like routed spine and leaf networks, multi-tenancy, auto-discovery: super cool things that we've been adding to Ironic over the years, some of them this year.

And then again, for this level of, say, compatibility with the experience: when you need images for Ironic, well, you are used to uploading your image as a QCOW2 and then deploying from it. You can do exactly the same with Ironic, right? There are different ways of doing that. You can have a whole-disk image or a partition image, and play with that, depending on your needs.
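As a rough sketch, uploading those two kinds of images with the openstack CLI looks something like this (the file and image names are made up):

    # Whole-disk image: contains the partition table and bootloader itself.
    openstack image create --disk-format qcow2 --container-format bare \
      --file rhel7-whole-disk.qcow2 rhel7-whole-disk

    # Partition image: just the root filesystem, plus a kernel and ramdisk
    # that Ironic uses to make the node bootable.
    openstack image create --disk-format aki --container-format aki \
      --file vmlinuz rhel7-kernel
    openstack image create --disk-format ari --container-format ari \
      --file initrd rhel7-ramdisk
    openstack image create --disk-format qcow2 --container-format bare \
      --property kernel_id=<kernel-image-uuid> \
      --property ramdisk_id=<ramdisk-image-uuid> \
      --file rhel7-root.qcow2 rhel7-partition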
But all of this is available to you. Another important thing is, well, how does Ironic integrate into my architecture when I want to do this? Well, first off, we want to keep things as simple as possible, right? Here you can see a typical architecture in which we have a mix of compute nodes, where the virtual machines run, and bare metal nodes that are owned by this controller node where Ironic is running. This architecture is pretty standard, right? You can make it more complex or more simple; you could probably have just one node with Ironic owning everything. But here you can see how there's another cloud, right? This is TripleO, deploying everything. And then you have Ironic in your controller, managing the bare metal nodes through the BMC, just the second from the bottom. And all of this to provide to your tenants a platform where they can mix VMs and bare metal nodes, okay? All of this is documented; we don't need to expand on any of it. And if you want to try it, this is probably the simplest form of deploying and managing Ironic, architecture-wise.

Now everything is installed, everything works, the nodes speak IPMI happily, and I'm ready as an operator to offer this service to my tenants. What do I need to do? Well, again, the workflow should be simple. Basically, I go ahead and create networks, the networks that I have probably preconfigured in my switches, which the bare metal nodes managed by Ironic are connected to. I create flavors; this is if you use Nova, which, you know, in most cases you are going to be using with Ironic. So you create a flavor, and that flavor will be associated with bare metal. And you optionally upload images for your tenants, but your tenants can also go and upload their own images. And then you register the bare metal nodes. This is something that the infrastructure owner needs to do, usually the admin of the platform. So you have a new rack of servers, or a few racks of servers, and you want them to be owned by Ironic, so you go through the registration process. And pretty much you are done, right?

And then, now I'm the tenant. I want to consume these services that my operator is providing me. To me, it should be an easy workflow as well, right? I just pick the network that I want; maybe I've been given just one. I choose the operating system, that is, the image that's in there, or maybe I have my own image. I pick the flavor that I want, which even in a mixed environment will be the one associated with bare metal nodes. And I start the instance. And I'm ready to operate in the same way that I would with a virtual machine, right? So far so good. OK. So yeah, this is just to show that things, once they are running and we manage to keep them simple, should just work.
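To make that workflow concrete, here is a minimal sketch with the openstack CLI; all names, addresses and credentials are made up, and in a real setup you would also register the node's NIC MACs as Ironic ports:

    # Operator: register a bare metal node with its BMC credentials.
    openstack baremetal node create --driver ipmi \
      --driver-info ipmi_address=10.0.0.10 \
      --driver-info ipmi_username=admin \
      --driver-info ipmi_password=secret \
      --resource-class baremetal --name node-0

    # Operator: create a flavor that schedules onto bare metal nodes
    # via the custom resource class (Pike and later).
    openstack flavor create --ram 131072 --disk 930 --vcpus 32 baremetal
    openstack flavor set \
      --property resources:CUSTOM_BAREMETAL=1 \
      --property resources:VCPU=0 \
      --property resources:MEMORY_MB=0 \
      --property resources:DISK_GB=0 \
      baremetal

    # Tenant: boot a bare metal instance exactly as you would a VM.
    openstack server create --image rhel7 --flavor baremetal \
      --network tenant-net --key-name mykey my-baremetal-server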
Then let's review a few features that Ironic has that make it a really compelling solution for anything that needs to run on bare metal, but especially for Kubernetes, right? One that we released, I think it was in Newton, was one of the first integrations, where we completed the integration with Neutron. Basically, that allowed us to have a non-trusted tenant environment with multiple tenants. What does that mean? Well, I want my tenants using bare metal nodes to be isolated from each other, right? That's multi-tenancy, provided by Ironic and Neutron thanks to an ML2 driver capable of talking to the switches, as shown here.

So basically, I need to do a very simple operation on the switch when I want to isolate a tenant. Say tenant A has VLAN 100 and tenant B has VLAN 200, and they are going to use or reuse the same bare metal nodes. Well, if it's a non-trusted tenant environment, I don't want tenant A and tenant B to have access to each other's networks. I can do that with this; I just need an ML2 driver capable of setting the switch ports in this way.

Something else I can do thanks to this integration is to configure, well, bonding, right? Link aggregation. How do we do this? Well, in a similar way. Sometimes a user will want to use one NIC, and maybe the next time that user, or a different tenant, will want to set up a bond. With this, you can do it. Basically, at the OS level, you will configure it with cloud-init; you will tell cloud-init: hey, use these two NICs and configure bonding in this mode, right? And then the ML2 driver will go to the switch at deployment time, and it will know that you want to create these two ports, which translate into Neutron ports. And then you will have everything you need at the software level and at the physical level. Usually you will have two switches, and the node will be connected physically to the two switches, the usual. Most of us are familiar with this type of setup.

Something else that is implemented is support for setting ACLs, basically to be able to use security groups, right? I have to say that this is still a little bit early in the implementation. That's because of the drivers: not all of them do this in a way that we would consider production-ready, but we are working on it. And here you have some documentation if you want to take a detailed look at how to do this. This is all part of the standard documentation, for Queens, for Rocky, and some of it for Stein.

And in particular, I wanted to talk today about one ML2 driver that we are releasing with Rocky, and that is an ML2 driver based on Ansible networking. This is super cool, because one of the things we had been seeing since this integration between Neutron and Ironic started was that, OK, driver A will work with this set of switches, and driver B will work with that set of switches. Vendors will create drivers for their switches. But then we thought: look, there is Ansible networking, and Ansible networking is capable of talking to multiple switch families. You can talk to Junos, but you can talk to, I don't know, Cisco Nexus switches as well, right? So why not leverage that and implement an ML2 driver capable of doing this? And this is what we came up with. And it was amazing, because we were able to complete it in one cycle, during Rocky. And at Red Hat, we are going to start supporting this with OSP 14, which is the Rocky release.

Let's quickly review the workflow here. Basically, you boot your bare metal node on a tenant network. Remember, this is mostly for non-trusted tenant environments. So at deployment time, you configure the provisioning network physically on the switch, meaning that the ML2 driver goes to the switch port and says: now set up only the VLAN used for provisioning. No tenant has access to that VLAN, right? When the provisioning is done, the ML2 driver does the same again: it goes and changes the configuration of the switch port to the VLAN associated with that tenant, the VLAN that the tenant picked to deploy its bare metal node.
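To give you a flavor of what that looks like in a Rocky-era setup, the configuration is roughly along these lines; the switch name, credentials and port IDs here are made up:

    # /etc/neutron/plugins/ml2/ml2_conf.ini: enable the ansible
    # mechanism driver and describe one switch (Junos in this example).
    [ml2]
    mechanism_drivers = openvswitch,ansible

    [ansible:leaf-switch-1]
    ansible_network_os=junos
    ansible_host=192.168.10.2
    ansible_user=neutron
    ansible_ssh_pass=secret

    # Tell Ironic which physical switch port the node's NIC is cabled to,
    # and switch the node to the Neutron-managed network interface, so the
    # driver knows which switch port to flip between the provisioning VLAN
    # and the tenant VLAN.
    openstack baremetal port create 52:54:00:aa:bb:cc --node <node-uuid> \
      --local-link-connection switch_info=leaf-switch-1 \
      --local-link-connection switch_id=00:1c:73:11:22:33 \
      --local-link-connection port_id=xe-0/0/7
    openstack baremetal node set <node-uuid> --network-interface neutron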
And as simple as that; the concept is really simple. And I would say a big portion of it, if not half, was already implemented in Ansible. In our tests, this is working pretty well, and I guess next year we are going to see it expanding to many users, because there's a lot of interest and a lot of demand, and this is why we went ahead and implemented it. OK. You have this blog post by Dan Radez describing this on the RDO project blog.

OK. Something else that we've been learning from customers, and this is not new; this goes back at least four years that I can remember. Actually, you see here some of the users that initially were asking for this. And this is the spine and leaf topology that's very popular among network architects. We had to implement support for spine and leaf in OpenStack, and that means in Ironic, because this is where you start dealing with the physical nodes. I'm going to simplify a lot, because this talk is about Kubernetes on bare metal, but basically what you get is this: I can have multiple leaves, each of which has its own L2 domain, and they are connected to each other through L3, usually at a top-of-rack switch. And then you have your spine switches that connect everything with everything. If you are a network architect, I'm sure this is your day-to-day, very simple. If not, we want to make it simple.

But then, if we think about the way OpenStack works with Ironic, well, we need to be able to PXE boot. So we need to be able to cross from L2 to L3, go with my DHCP request all the way from the leaf to where the DHCP server is, and come back. And when you go through a top-of-rack switch, well, your source MAC won't be your source MAC anymore, as in the source MAC of the host; it will be the one from the switch. OK, so this has been solved for a very long time with DHCP relay; only we had to implement it in Neutron. We finished that implementation a few releases ago; I can't remember exactly, I think it was Queens, definitely. Well, now it's ready for you to use. And again, think about this: this is part of this abstraction layer that OpenStack offers us, that we want to hide from our end users. This is efficiency for the network; this is how network architects say things should work. We are capable of doing this with Ironic, and this is yet another reason why Ironic is a great platform for abstracting the complexity from the end user.

More things. Auto-discovery, another great piece of functionality that we have in Ironic. If you have a small number of servers, OK, maybe it's not a problem to go ahead and write what you need in order to register each node individually with Ironic. Basically, to register a node, you need to tell Ironic how to manage that node, and you do this through the BMC; that is IPMI, iDRAC, iLO, iRMC, et cetera. But say you have a lot of racks of servers and maybe a weekend to do all of this, because your customer was very successful, they grew too fast, and now you're under pressure to have all of this set up over a weekend. Well, you can do that. Just rack them up, make sure they are connected to the provisioning network, and have them PXE boot. And the next thing, the node is registered in Ironic. And then you can do things, magic things. This is really cool. There's an example here that I wanted to show where, basically, you say: for a discovered node, set these credentials, for example root and calvin as the login and password.
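As a sketch, a rule like the one on that slide would be JSON that you import into ironic-inspector; the condition and the values here are illustrative:

    # discovery-rule.json: applied by ironic-inspector to nodes it
    # discovers; import it with:
    #   openstack baremetal introspection rule import discovery-rule.json
    [{
        "description": "Set default IPMI credentials on discovered nodes",
        "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true}
        ],
        "actions": [
            {"action": "set-attribute",
             "path": "driver_info/ipmi_username", "value": "root"},
            {"action": "set-attribute",
             "path": "driver_info/ipmi_password", "value": "calvin"}
        ]
    }]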
And you can take things from the inspection, because this is another thing that Ironic does: Ironic inspects the nodes through ironic-inspector and extracts a lot of the specs and information, which you can use to do things like this. To do things like saying: if the node has an iDRAC, take the BMC address and configure it as the DRAC address for the driver. And that's automatic, so you don't have to do any of that by hand. And all of a sudden you have maybe dozens, if not hundreds, of nodes registered for you.

OK, more things about Ironic. Something we have seen is a lot of interest in Redfish. I'm sure most of you are familiar with IPMI, right? IPMI, well, it's a tool to manage nodes remotely: to power manage them, change the boot order, things like this in general. Well, Redfish is a similar concept, only it's API driven, it tries to do much more than just power management, and, more importantly, it's becoming a standard, a standard adopted by many of the vendors, if not all. Most new servers come with support for Redfish. That indeed makes our life so much easier, right? If we can rely on a standardized way of doing power management, because this is pretty much what Ironic will do, power manage our servers, then things are going to get so much easier. And implementing new features around this management will also become faster and easier.

For example, some of the things that we are doing for Stein include adding to Redfish things like out-of-band inspection, right? So you don't have to boot a node, put an image in memory, extract everything, and then report back to the central controllers; you can do that out of band. That's great, and Redfish should allow us to do it, so we're working on that. Or something really cool as well, and this goes along with edge use cases, which is booting from virtual media, without PXE, without DHCP. We have all been on an iLO or an iDRAC, mapping an ISO locally and then booting from that through the network, et cetera. Well, what if we have all this logic in the driver? It allows us to boot without DHCP. Sometimes we cannot do DHCP relay, right? Sometimes our network architects will say: no, no, I'm not going to allow you to do this, because I have some policies that don't permit it. OK, so that would sort this out for us. Edge use cases also fall into this.

BIOS configuration: if you were here yesterday at the keynote, this was shown as part of the new things that we are doing with Ironic. This is super cool as well. In this example, you can see how we can talk to compatible drivers; this logic is implemented in the drivers. But basically what you say is: hey, disable hyper-threading for me and enable the VT flag on my CPU. Every driver will tell you what you can do. To set it up, and to become a little bit more technical: this happens at the cleaning stage. Cleaning means that, when you are using Ironic, there is a way to say: please wipe the disks, either just the metadata or zeroing everything on the disk. There are a number of steps in there, right? So you basically plug this into those steps, and if your driver allows it, you will be able to interact with the BIOS settings. You can actually ask the driver: hey, which settings in my BIOS can I modify? And it will tell you, and then you can play with this, all right?
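To sketch that flow with the CLI, assuming a driver that implements the BIOS interface (the setting names below are vendor-specific, so treat them as illustrative):

    # Ask the driver which BIOS settings the node exposes.
    openstack baremetal node bios setting list <node-uuid>

    # Apply settings as a manual cleaning step: here, disable
    # hyper-threading and enable virtualization, using names as an
    # iDRAC might report them.
    openstack baremetal node clean <node-uuid> --clean-steps '[{
        "interface": "bios",
        "step": "apply_configuration",
        "args": {"settings": [
            {"name": "LogicalProc", "value": "Disabled"},
            {"name": "ProcVirtualization", "value": "Enabled"}
        ]}
    }]'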
And multi-site. Multi-site depends on a number of features, some of which we have finished. If you take a look at the Ironic conductor and node group affinity work, this is one of the requirements: you have a central site with your main controllers, hopefully highly available, right? Maybe you are doing Ironic and more things with this central control plane. But then you have remote sites, and in these remote sites you have nodes that also need to be managed by Ironic. OK, so with this, what you can do is, obviously, well, maybe not obviously: you are not going to control IPMI or Redfish or iDRAC from the central site to the remote sites. That would mean exposing IPMI, or whichever BMC you want to use, over the network, sometimes maybe over the internet or dark fiber or whatever it is that you use to connect your sites. But you can have an instance of Ironic doing that for you, and that instance of Ironic can be associated with those nodes. Well, in this example, I wanted to show you what we are doing so that we can achieve a cleaner way of doing multi-site with Ironic. And this is also part of the edge use cases that we see many users wanting, right?

OK, well, more things: Kubernetes on OpenStack and what we are doing here. This is pretty simple; I'm sure it's no surprise to anybody how you do this. Well, you install OpenStack, you have OpenStack up and running. Then you provision your operating system, say RHEL, CentOS, whichever you choose. And then you have your Kubernetes installer: you point Kubernetes at these nodes and you install them. What you are doing with this, well, you are managing your bare metal nodes that are used for Kubernetes from Ironic, right? The lifecycle, everything, is managed from Ironic, and Kubernetes is happily running on it. This is the simplest way to do this.

And if you go to the documentation for how to do this, and here I'm talking about one installer in particular, the OpenShift installer, well, you need to do this: provision the nodes, we all know how to do that. Add the DNS entries; this is more specific to OpenShift, which works with DNS names internally and externally, so you do this with your DNS service. You distribute the SSH keys. And after that, you are ready to install. There is an installer, the really cool openshift-ansible: you point openshift-ansible at these nodes and have it install. It's simpler than it sounds, right? And it's all documented.

But we wanted to make it even simpler. And we said: say I need to first test this, see if I like it; I don't have so many resources, maybe I don't have so much time. What if I could install just one node and have this node deploy OpenShift for me, in the same way, but without having to go through the whole process of installing OpenStack first in a highly available mode, et cetera? Well, what we did in TripleO in this release is integrate this logic, the openshift-ansible bits, into the TripleO code. So now you can tell TripleO: hey, install an OpenShift cluster for me, please. And TripleO will do everything for you. It will install the operating system, and after it installs the operating system, it will install OpenShift on it. And by the time it finishes, you have OpenShift running on bare metal, with the bare metal managed by OpenStack, which also assisted the installation of OpenShift itself, right? Let's see what this looks like, more or less. Installing TripleO is really simple, right?
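Roughly, and hedging that exact package names and sample file paths vary by release, the steps look like this:

    # On a CentOS or RHEL node, install the TripleO client.
    sudo yum install -y python-tripleoclient

    # Start from the sample configuration and adjust it (the sample's
    # location depends on the release).
    cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

    # Install the undercloud: an all-in-one OpenStack with Ironic,
    # Nova, Glance and friends.
    openstack undercloud install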
So you need a node, CentOS or RHEL, and you install the package that provides TripleO, then you do some configuration in a file called undercloud.conf, and then you say openstack undercloud install. You go to have a coffee, or you go for lunch, come back, and everything should be ready for you to start provisioning nodes, auto-discovering nodes, maybe using the spine and leaf use case, et cetera, right? All of this, remember, all of this is Ironic. TripleO uses Ironic. So whenever you are using TripleO slash Director (Director is the downstream name of TripleO), you are using Ironic, you're using Nova, and you're using Glance, everything that you know. It's like an all-in-one OpenStack with the minimum it needs to deploy OpenStack on the one hand, which is how TripleO started, and in this case OpenShift on the other hand, or Ceph clusters, right?

So this is an example of how you interact with this. Obviously, in OpenShift you have different roles, as you do in OpenStack. There are the master nodes, where you have the APIs, then there are the infra nodes, and then you have the worker nodes; those would be the compute nodes, compute as in hosting the apps with containers. In this case you tell TripleO: hey, install three masters, three workers, and three infras for me. Oh, OK, cool, I'll do that. And on top of this, you can also say: hey, and I will need storage. We're going to talk about storage in a minute. So, on the nodes, make a converged type of deployment, because I want to get up to speed really fast and use one disk to set up a GlusterFS cluster, right? You can tell all of this to TripleO and just have it deployed that way. If you have ever done this, you will be familiar with the bottom half of this screen, where you say: hey, deploy an overcloud, and call it, in this case, openshift. If you're interested in the code, go there and have a look. We are working on the documentation for this; some of it is there already, but soon it will be much, much nicer, right?

Now, storage. Let me see how much time I have. Yeah, hopefully we're going to make it. A few options here. Remember, this is Kubernetes on bare metal. When we run Kubernetes on virtual machines, we are used to presenting storage to the containers, to the pods, through persistent volumes, right? If you have ever done that, you know what I'm talking about. If you haven't, this is the equivalent of what you do with Cinder: with Cinder, you present a block device; well, with containers, you have a similar concept. You present that block device to your containers, and then they use it for persistence, or even share it, depending on how you orchestrate it.

OK, so looking at the options that we have for storage, I wanted to highlight those that are candidates for running OpenShift slash Kubernetes on bare metal. One is GlusterFS; GlusterFS is probably one of the simplest solutions in this combination. Yeah, there's hostPath, but hostPath is literally within the host, right, per node, so it's probably not ready for production or for many of the use cases that our customers are telling us they have. With OpenStack itself, you have Manila; here, I paired Manila with NFS. Well, when you create a Manila share, what you get is an NFS URL, right? Nothing prevents you from using that on bare metal. That's super simple, and it's a very cool integration because of the transparency that it gives.
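For instance, here is a sketch of handing a Manila NFS share to Kubernetes as a ReadWriteMany persistent volume; the share name, size, server address and export path are all made up:

    # Create a 10 GB NFS share in Manila and look up its export location.
    manila create NFS 10 --name k8s-share
    manila share-export-location-list k8s-share

    # Register the export in Kubernetes as a PV that many pods can mount.
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: manila-nfs-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.20
        path: /shares/share-k8s-share
    EOF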
This morning, or earlier today, there was a talk from CERN about the Manila advances that we are making with Ceph, with CephFS, and that is super cool, so this is another option that we can have. And local volumes are another option. Okay, let's review.

And before I forget: when I ask customers about what they need, they say, well, I would like to have a single storage backend. I don't mind which one, right? As long as it works, obviously. But having to deal with lots of storage backends, you know, if I can avoid it, I will try. Okay, if you are using OpenStack, most likely this is going to be Ceph, or maybe you have your own storage array from whatever vendor, and then you can consume it transparently through Cinder, or through Manila if your backend supports Manila as well. Okay, and something else that they are telling us is: look, some of the apps that I have containerized want to use the ReadWriteMany access mode. That means I want to have one persistent volume that's accessed by a number of containers at the same time, right? Then you go: okay, that reduces the options that I have a little bit. I end up with GlusterFS and NFS.

Okay, so let's see how this integrates. Well, when I deploy with TripleO, or manually, it doesn't matter, it will work the same way. I can create what we call a converged cluster of GlusterFS on the same nodes where I run the pods. Actually, GlusterFS in this case is containerized, so it runs as any other container, as any other pod. On the infra nodes, maybe you are going to do that to host your registry, the registry with the images for your containers. And on the worker nodes, you can have another GlusterFS cluster, and that will be for your persistent volumes, right? When your users start creating them, you can consume it directly from there. So with TripleO, you can get up to speed super fast with this. Now, what people that have a lot of experience with this tell us is: well, I would rather have the GlusterFS separate, for performance reasons on the one hand, and because maybe I'm using this GlusterFS for something else and I may be putting load on it from other services, right, besides Kubernetes. Well, you're free to choose your topology here, but to get you started, you can do this. So this is one of the options that we have.

Another option, when we are running Kubernetes on OpenStack, or rather on bare metal managed by OpenStack, is Manila. In this case, I have the nodes managed by OpenStack at the top, in blue, where Kubernetes is running. And then I have OpenStack with Ironic and Manila at least, and probably the rest of the services, running in there. And usually I will have a Ceph storage cluster, and the Ceph storage cluster will be consumed by Glance, by Manila, by Cinder. So we are solving the requirement for a single storage backend. This is a pretty neat solution. It's clean, it's easy to understand, and it's something we are probably already familiar with: OpenStack backed by Ceph. And we're putting on top, well, maybe not on top in this case, because they are bare metal nodes, but Ironic managing bare metal nodes, OK?

Right. And I'm going to finish with this: two minutes to go through the network. This should be really, really simple. If you have any experience with Ironic, half of this is almost boilerplate; it's what you know. Ironic in the OpenStack cluster will be managing your nodes how? Well, by interacting with the BMC, right? It will do what Ironic does.
Power on, power off, power cycle, change the boot order, et cetera. And for that, you will have to have access to the BMCs, so you will need that network on the one hand. Then you will need a provisioning network as well: you want to be able to put the images, whichever those are, on the bare metal nodes, for the master nodes, for the infra nodes, for the worker nodes. OK, nothing new here. And then, usually, you will want a dedicated data network. It's not mandatory; you can reuse the provisioning network if you want, but it's good practice to separate them, because maybe the data network is going to have more bandwidth. Maybe what you are going to do there is a bond with two NICs, for both bandwidth and high availability. So you will do that. And at the same time, and again, if you start doing this it will become obvious, you will need access to a public network, because you will want to expose your container applications to the world. So you will do that; probably you won't need to do much research for that.

Then Kubernetes itself, just like OpenStack with the virtual machines, can use Open vSwitch through the CNI, the container network interface that comes with Kubernetes, as a CNI plugin. And we all know that one of the things we can do with Open vSwitch, and this is what Kubernetes does, is create a VXLAN tunnel for the pod-to-pod and node-to-node communication. So one pod having containers on one node, and another one with containers on another node: the traffic will travel through the VXLAN tunnel. So this is pretty much how it works.

And with that, I'd like to finish this presentation. I'm not sure if we have much time for questions, but if anybody has questions, you're welcome. Yeah, maybe you want to use the microphone?

I wonder if you can connect those containers, instead of talking through VXLAN, through a physical bare metal network.

Yeah. So usually, the way it works with Kubernetes is you will have a CNI plugin, and this is one example. So you would need a CNI plugin that suits that use case.

For the bare metal network?

The bare metal nodes themselves are connected to the data network, right? So this is abstracted; the containers have no clue about that. Container to container, they use this abstraction, this network overlay, to talk among themselves, so they are not exposed.

Yeah, but if I want them to be exposed, I can, with the CNI, with my custom CNI. OK.

You didn't mention issues of authentication and authorization. How do you handle identities in OpenStack, or Keystone, and identities in Kubernetes?

OK. Yeah. So OpenShift itself will have its own authentication, right? In these examples with bare metal, we don't necessarily integrate authentication between the two, even though it's an option. When you run OpenShift on OpenStack, these are some of the things that you could do: you could configure it to manage authentication through Keystone. OK, so I can just mention that what we did recently was to provide a way to integrate authentication using application credentials. So you can get application credentials from OpenStack and use those on Kubernetes to be recognized as a user in Kubernetes. Yeah. This is not something that, in this environment, we tried a lot, because it's kind of separate. OK, I understand. But it's a possibility. Yeah. Thank you.

All right. Well, thanks, everybody, for your attention.