Well, hello everyone. Welcome to the Bare Metal SIG, and welcome to this hopefully quick, fun presentation on ML2, Neutron, and bare metal. When everyone thinks of Neutron, you think of a single cube, a single component. And then sometimes you think of the other services bolted next to it that handle DHCP or OVS. But ML2 is very different, and it's important to understand this difference when you start talking about managing ML2 drivers and infrastructure for bare metal. So say this blue cube is my ML2 driver in this case. The ML2 driver is bolted directly into the Neutron API, and that's kind of frightening for many people here, so hopefully I'll provide a little more clarity. In the case of bare metal, we also have a second ML2 driver to help with reconciliation. Let's say in this case this one is networking-generic-switch and this one is networking-baremetal. networking-baremetal provides the reconciliation, which allows you to take the switch configuration that is posted by Ironic on the port binding and keep it in sync. Long story short, what happens is the Neutron API gets the request, and then it goes through all the mechanism drivers loaded inside of it, saying, hi, I have an update, hi, I have an update. And if the VNIC type in the request matches one that driver supports, say this driver supports vnic_type "baremetal", the type indicating that the ML2 driver can manage bare metal ports, then that code executes inside the ML2 driver. So in this case, the update to the networking-generic-switch ML2 driver causes the driver to go log into the switch and insert switch configuration.
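The dispatch Julia describes, the Neutron API walking its loaded drivers and executing only the ones whose VNIC type matches, can be sketched roughly like this. This is an illustrative toy model, not Neutron code: the class and method names only loosely mirror the real neutron_lib MechanismDriver API, and the port dictionary is a made-up example.

```python
# Toy model of ML2 mechanism-driver dispatch inside the Neutron API process.
# Names loosely mirror neutron_lib's MechanismDriver; this is NOT Neutron code.

VNIC_BAREMETAL = "baremetal"


class MechanismDriver:
    """Base class: each driver declares the VNIC types it can handle."""
    supported_vnic_types = ()

    def update_port_postcommit(self, port):
        raise NotImplementedError


class GenericSwitchDriver(MechanismDriver):
    """Stand-in for networking-generic-switch: it would push switch config."""
    supported_vnic_types = (VNIC_BAREMETAL,)

    def update_port_postcommit(self, port):
        # In reality: log into the switch and configure the physical port.
        return f"configured switch port for {port['id']}"


class ML2Manager:
    """Walks every loaded driver and invokes the ones whose type matches."""
    def __init__(self, drivers):
        self.drivers = drivers

    def update_port(self, port):
        results = []
        for driver in self.drivers:
            # Only drivers that declare support for this VNIC type execute.
            if port["binding:vnic_type"] in driver.supported_vnic_types:
                results.append(driver.update_port_postcommit(port))
        return results


manager = ML2Manager([GenericSwitchDriver()])
print(manager.update_port({"id": "port-1", "binding:vnic_type": "baremetal"}))
# A port with vnic_type "normal" would simply not match this driver.
```

The key point the sketch captures is that the driver code runs inside the same process that serves the API, there is no network hop between the API and the driver.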
And then when we come down to the post-update step, the networking-baremetal ML2 driver gets the call and says, hey, I need to make sure that all the state is correct, and reconciles it in case there are any problems. So that's, at a very high level, how ML2 works and how it is assembled. And it's kind of scary when you think about how that works, because it is bound into the API, and your API service needs to be able to access the switches. But the Neutron folks have said in the past that if there's demand, they are willing to retool it to support remote ML2 execution. Any questions?

So this is the generic ML2 framework in Neutron, right, which was made for virtual machines? Exactly. So what is the difference when using this for bare metal machines and switches, for physical hardware?

Okay, so I'll kind of go through that; one has to comprehend the physical model. Say I have an ML2 driver here, and I have my Neutron API here, and here is, say, my Neutron OVS agent or DHCP service. Basically what happens is the request comes into Neutron, and Neutron begins to enumerate through all the endpoints that exist in all the loaded ML2 drivers. When it finds a match, it sends the request over. Meanwhile, this other service basically does the same thing, except it's not the ML2 model: it's done via an RPC bus. So the difference in this scenario is that the other Neutron services use the message bus established and used by Neutron, while the ML2 driver itself uses internal, in-process communication. Basically a data payload is being passed, and with a virtual machine, the payload Neutron or Nova sends to request a port binding is fairly simple. Hi, I want to bind this port to this virtual machine. Done.
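The two call paths contrasted above, an ML2 driver invoked in-process versus an agent driven over the message bus, can be sketched like this. This is a toy model: real Neutron uses oslo.messaging RPC, and a plain in-memory queue stands in for the bus here.

```python
import queue

# Toy contrast of the two paths a Neutron event can take.
# Path 1: an ML2 driver, living inside the neutron-server process.
# Path 2: an OVS/DHCP-style agent, reached over the message bus (here,
#         a plain queue.Queue stands in for oslo.messaging).

class InProcessDriver:
    """ML2 driver: a direct method call, same process as the API."""
    def handle(self, event):
        return f"driver handled {event} directly"


class RemoteAgent:
    """Agent: consumes events off the bus; never called directly."""
    def __init__(self, bus):
        self.bus = bus

    def poll(self):
        event = self.bus.get_nowait()
        return f"agent handled {event} via RPC"


bus = queue.Queue()
driver = InProcessDriver()
agent = RemoteAgent(bus)

# The Neutron API receives a port update:
event = "port-update"
print(driver.handle(event))   # path 1: plain in-process method call
bus.put(event)                # path 2: cast onto the message bus...
print(agent.poll())           # ...consumed later by the remote agent
```

The difference matters operationally: the agent can live on any host that reaches the bus, while the driver's code, and therefore its access to the switches, is tied to wherever the API runs.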
And this is actually fairly simple, because in essence the port binding request only has the VNIC type and then the name, I think it's the compute name or host name, of the actual server. So Neutron is able to say, oh, here's my Neutron service, I have a request to update or bind a port, let me send it over my RPC service. And that RPC service happens to be on the compute node, which is able to get this information from the message it received on the initial post. So the RPC request gets processed by the compute node, it does the port binding, and it sends "done" back on the message bus. The reason it's so different with bare metal is that we need the extra ML2 driver interface to actually reach out into the infrastructure and do the same thing, which is why in Ironic you have to supply extra port information to achieve the binding. It's why we have separate network interface drivers: "flat" for pre-configured networks, and "neutron" when we actually have to set more data on the Neutron port for the binding or configuration to succeed. Also, in newer versions of Ironic, you have to use the "neutron" interface if you're doing stateful IPv6, for reasons. And I think that reason is largely that the IPv6 specification does not do static address assignment by MAC; it operates completely differently. So we basically have to tell the DHCP server: here are all the possible addresses this host can get, to support IPv6. Hopefully that provides a little more clarity. Yes, thanks. Any other questions? Any typed questions in the chat?

Actually, do you know if this is the main deployment model for Ironic, with the "neutron" network interface, or is "flat" more common? For our deployment, for instance, as I said on previous occasions, our networking is pretty much a no-op. The nodes have pre-configured IP addresses in a different database, and the DHCP server is somewhere else.
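The payload difference discussed above can be made concrete. The field names below are the real Neutron port-binding attributes (binding:host_id, binding:vnic_type, binding:profile with local_link_information), but every value is a made-up example, and this is a sketch of the shape of the data, not an exact capture of what Nova or Ironic sends.

```python
# For a VM, the bind request essentially carries just the VNIC type and
# the target host (example values throughout):
vm_port_update = {
    "port": {
        "binding:host_id": "compute-01",   # hypothetical compute hostname
        "binding:vnic_type": "normal",
    }
}

# For bare metal, Ironic must additionally pass the physical switch details
# it stores on the Ironic port (local_link_connection), so the ML2 driver
# knows which physical switch port to configure. Values are illustrative.
baremetal_port_update = {
    "port": {
        "binding:host_id": "ironic-node-uuid",
        "binding:vnic_type": "baremetal",
        "binding:profile": {
            "local_link_information": [
                {
                    "switch_id": "aa:bb:cc:dd:ee:ff",  # switch chassis MAC
                    "port_id": "Ethernet1/1",          # physical switch port
                    "switch_info": "tor-switch-1",     # free-form switch name
                }
            ]
        },
    }
}

# The bare metal request carries strictly more information:
extra = set(baremetal_port_update["port"]) - set(vm_port_update["port"])
print(sorted(extra))  # ['binding:profile']
```

That extra binding:profile is exactly the "extra port information" mentioned above: without it, a mechanism driver like networking-generic-switch has no way to locate the physical port it needs to configure.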
So we're trying to more or less make sure that Nova and Ironic do not interfere with anything there; it's mostly a no-op. So whenever someone has more networking that is controlled by Ironic, I find it quite fascinating. Also, our network is relatively simple: it's really completely flat, and there's no separation of tenant networks, for instance. All we have is the BMCs on a different network; apart from that, everything else is on the same network, basically.

Interesting. Basically, the model people tend to use is flat, and there are two reasons for this: it's easy, and it's simple. Well, those are two reasons on their own. I guess there's an additional reason, which is that in organizations you end up with this assumption that you have to have separation of duties and delineation between humans and parts of the infrastructure. So you end up with people who are in charge of the switches, and they become very uncomfortable when software, especially software they don't control, is managing switch configuration. That's repeatedly been an issue we've encountered as a community, and I'm not sure there's a good path forward, largely because the only way to get people to really use it is for them to understand it and understand the value of supporting that sort of dynamic infrastructure.

Right. In our case, for instance, the networking in our data centers is managed by a completely different team, the one which also manages the network on our campus and is responsible for phones. So it's completely separate, and I think that doesn't help with introducing new tools that configure their equipment. I don't know if it's the same in other data centers, whether they have this more historical or traditional separation of responsibilities.
My experience is that it's extremely common, because organizations feel there's a need to separate and silo these roles, delineate that access, and restrict it, because if the network goes down, your business or your operation is down. Right. So you have to operate a highly available, highly trusted service. And so it becomes, oh, this is something I don't know, and the natural human reaction is: I don't know it, I must fear it.

Right. It's not only that. For instance, when the networking team gets servers through Ironic, especially servers that are critical for operations, they're a little bit worried that some other tool has access to these very critical machines. So they usually say no thank you and would rather have it completely independent. So yeah, this makes adoption more complicated.

And in that sense it's risk management: they perceive anything they don't directly control as risk. Right. Again, they don't know it, they don't understand it, thus it's hard to trust. So I hope I provided context people did not have before today regarding how this works. Are there more questions? Doesn't seem to be the case. So thank you very much, Julia. Rock on.