Hello, everyone. Welcome to this short talk on the evolution of Open vSwitch integration in OpenStack. My name is Numan Siddique, I work at Red Hat, and I'm presenting this with Daniel. This is the agenda for our talk: we'll talk about a little bit of history of how Open vSwitch is used in OpenStack, we'll also talk about OVN, what it is and its architecture, and we'll compare both.

This is the history of Open vSwitch in OpenStack. Right from the earlier releases, Open vSwitch has mainly been used for layer 2 switching: the Neutron OVS L2 agent uses Open vSwitch mainly for the switching part. Later, with the OVS firewall driver, Open vSwitch is also used for the firewall features, which use its connection tracking support. And with the OVN project, Open vSwitch is used for a lot more: OVN uses Open vSwitch for all its virtual networking, it uses OpenFlow flows, and it basically builds on the Open vSwitch features.

Before that, let's look a little bit at the architecture of OVN. At the top we have the networking-ovn plugin, which Neutron talks to, for example when you want to create a virtual network or a port. OVN has two databases, the OVN Northbound database and the OVN Southbound database. The Northbound database represents your virtual networking: whenever you create an OpenStack Neutron network, a logical switch gets created in the database, and when you create a port, a logical port gets created. That is the job of networking-ovn: it listens to the Neutron APIs and writes into the OVN Northbound database. OVN then has a service called ovn-northd, which is a centralized service: it listens to the OVN Northbound database, converts the contents into logical flows, and writes them into the Southbound database. On each of your compute nodes a service called ovn-controller runs, which connects to the Southbound database. All these databases speak the OVSDB protocol. Whenever a VM comes up on a compute node, ovn-controller programs those logical flows into actual OpenFlow flows and hooks up the networking. So it's all distributed. Daniel will take it from here. Thank you.

Yeah, I'm going to talk a little bit about the comparison between ML2/OVN and ML2/OVS, the major differences, and I'll go through it quickly. The first major difference is the components of each solution: ML2/OVS has a bunch of Neutron Python agents, while ML2/OVN basically just runs ovn-controller on all the nodes. So the complexity of the deployment gets reduced, and also the footprint, so it's a better approach in terms of resource consumption. Also, most of it is OpenFlow based, so we don't need most of the helper processes used in the ML2/OVS reference implementation, such as keepalived, dnsmasq, or haproxy. We are getting rid of those: most of the functionality is implemented natively in OpenFlow via ovn-controller, which is a huge advantage. For example, for L3 HA we don't need keepalived, VRRP, or the HA networks anymore, so we are also reducing the number of network devices.
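To make the pipeline described above a bit more concrete, here is a minimal sketch that walks the same path with the OVN command-line tools, wrapped in Python only for readability. The switch and port names, the MAC and IP addresses, and the bridge name are illustrative and are not the exact identifiers networking-ovn generates; networking-ovn itself talks the OVSDB protocol directly rather than shelling out to these tools.

    import subprocess

    def run(cmd):
        # Run an OVN/OVS CLI command and return its stdout.
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Northbound database: roughly what networking-ovn records when a Neutron
    # network and port are created (names and addresses here are made up).
    run(["ovn-nbctl", "ls-add", "demo-net"])                # Neutron network -> logical switch
    run(["ovn-nbctl", "lsp-add", "demo-net", "demo-port"])  # Neutron port -> logical switch port
    run(["ovn-nbctl", "lsp-set-addresses", "demo-port",
         "fa:16:3e:00:00:01 10.0.0.5"])

    # Southbound database: ovn-northd has translated the logical topology
    # into logical flows.
    print(run(["ovn-sbctl", "lflow-list", "demo-net"]))

    # On a compute node: ovn-controller has programmed those logical flows as
    # actual OpenFlow flows on the integration bridge (br-int by default).
    print(run(["ovs-ofctl", "-O", "OpenFlow13", "dump-flows", "br-int"]))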
Basically, that increases data plane performance, because we essentially have fewer hops in the network, and failover takes just a couple of seconds. That happens under the hood on the core OVN side, so Neutron doesn't have to worry about it anymore. With routing it is pretty much the same: east-west routing happens in a distributed way on the compute nodes via OpenFlow, while SNAT traffic still goes through the network nodes. And, as Numan said, we are using OVS connection tracking for better performance. Floating IPs are distributed by default. The same goes for DHCP and IPv6: we don't need radvd with the Neutron L3 agent anymore. Everything, again, happens natively in OpenFlow just by deploying ovn-controller on every node, and it happens in a distributed way, locally on the compute nodes, so we save a lot of network traffic. The same goes for internal DNS and load balancing: actually, we're implementing an experimental driver in Octavia right now for a distributed L4 load balancer; it got merged quite recently, and it's already available for experimental purposes.

We also had our performance team pull some figures, because we are trying to make ML2/OVN the default network backend for TripleO deployments, so we are trying to wrap our heads around the performance numbers. We have done control plane and data plane performance testing, and basically ML2/OVN outperforms ML2/OVS in both control plane and data plane. We don't have much time, but you can reach out later to either Numan or me. Also, with CPU utilization, we found that in some deployments RabbitMQ starts to be a bottleneck in terms of CPU and also memory consumption. Since ovn-controller and all the other components use OVSDB under the hood, we are not using RabbitMQ anymore, and the CPU utilization drops dramatically. That is also a very good advantage that we found so far.

So the next question would be: what's next? Numan will take over here. Yeah. So basically, we are planning a migration tool: if you have an ML2/OVS deployment, we can migrate it in place via Ansible scripts, so you can switch over from OVS to OVN without migrating your VMs. We also have QoS and other features coming up. In the Open vSwitch community, we are also talking about the fact that right now OVN is part of the Open vSwitch project, and whether we can split that up and have separate OVS and OVN so that they can be compiled and distributed separately. That work is also going on. Also, ovsdb-server now supports the Raft protocol for HA. Right now we run ovsdb-server for our databases in active-passive mode using Pacemaker, so we would like to move to the Raft implementation of ovsdb-server so that we have active-active HA for all the databases. And there is a huge amount of work going on in the OVN community to redesign the OVN services, both ovn-northd and ovn-controller, using Rust and Haskell with Differential Datalog. If you are interested, you can look into the Open vSwitch community; there is some very interesting work going on there. That would help us improve the ovn-northd and ovn-controller services.
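As a rough illustration of the Raft-based HA mentioned above, the sketch below shows how a clustered OVN Northbound database can be created and queried. The node addresses, ports, and database and schema paths are assumptions for the example and vary by distribution; in a real deployment this is normally handled by the ovn-ctl scripts or the deployment tooling rather than by hand.

    import subprocess

    def run(cmd):
        # Run a command and return its stdout.
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # On the first node: create a new Raft cluster for the Northbound database
    # (database file, schema path, and the 10.0.0.x addresses are illustrative).
    run(["ovsdb-tool", "create-cluster",
         "/var/lib/ovn/ovnnb_db.db",
         "/usr/share/ovn/ovn-nb.ovsschema",
         "tcp:10.0.0.1:6643"])

    # On each additional node: join the existing cluster.
    run(["ovsdb-tool", "join-cluster",
         "/var/lib/ovn/ovnnb_db.db", "OVN_Northbound",
         "tcp:10.0.0.2:6643", "tcp:10.0.0.1:6643"])

    # Clients such as ovn-northd and networking-ovn are then pointed at all
    # cluster members, so the database stays writable as long as a majority
    # of the members are up.
    print(run(["ovn-nbctl",
               "--db=tcp:10.0.0.1:6641,tcp:10.0.0.2:6641,tcp:10.0.0.3:6641",
               "show"]))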
Right now it takes a fair amount of CPU, because ovn-northd is single threaded and it recomputes all the logical flows if any change happens to the database, so we are looking forward to that as well. That should be it. I think we have just 30 seconds for any questions you may have, so you have to be quick. Otherwise, thank you so much for coming, and you can reach out to us at any time. Thank you. Thank you.