Hello everyone, I'd like to share my topic with you: Ironic and SDN integration in our bare metal cloud. The agenda has three parts. The first part is the focus of our cloud. The second part focuses on the topology. The third is our demo video.

We have already built a public cloud, and now we are building a larger cloud with two resource pools. For these two resource pools we have several considerations. The first is how to customize for our applications. The second is network automation. The third is network isolation and security. The fourth is HA.

Let's go through our bare metal server configuration. Each server has one BMC NIC, two 1G NICs, and two 10G NICs.

This topology is our overview. On the left side we have the management zone. On the right side are the DMZ zone, the test zone, and the core zone. All the bare metal servers (BMs) are located in these three zones.

This is the view of a BM's connections. This connection goes to our OOB network, just for IPMI control. The bottom is the data network; the upper is the management network. In this talk we focus on the two 10G NICs, just this pair. The data network consists of one underlay network, just for storage, and multiple overlay networks for inspection, provisioning, and tenants. The storage network is VLAN, and the overlay is VXLAN.

This topology is from the Ironic node's view. The Ironic node runs the ironic-api and ironic-conductor services; the ironic-inspector service also runs on this node. This node also has three types of network: OOB for IPMI control, and the data network, which carries the Ironic API network and the inspection network. These networks are VLAN type; the ToR switch maps them to VXLAN networks.

Let's go through the inspection phase. First, we send an inspect request for a BM node. Ironic powers on the BM via the IPMI protocol. Then the BM boots via PXE and performs hardware information collection.
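The inspection flow just described can be sketched with the standard OpenStack bare metal CLI. This is a hedged sketch, not the speaker's exact commands: "node-1" is a hypothetical node name, and it assumes python-ironicclient and ironic-inspector are deployed.

```shell
# Sketch of triggering inspection for one BM node (placeholder names).

# Move the node into the manageable state so it can be inspected.
openstack baremetal node manage node-1

# Ask Ironic to inspect the node: it powers the BM on via IPMI, the BM
# PXE-boots the inspection ramdisk, and the ramdisk collects hardware
# information and posts it back to ironic-inspector.
openstack baremetal node inspect node-1

# Watch the provision state go inspecting -> manageable.
openstack baremetal node show node-1 -f value -c provision_state
```

When inspection completes, ironic-inspector processes the posted data and Ironic powers the node off again, as described in the talk.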
Once this information is collected, the ironic-inspector API continues the processing. Once the processing is finished, the BM is powered off. We show the detailed process in the demo video.

The next phase is the tenant phase. It has two sub-phases. The first is provisioning a BM. The second is the network switchover.

This is a brief introduction to our Ironic network configuration. We have two bonded links. One bond carries the inspection network; Ironic can communicate with each of the BMs via this network. The other is the Ironic API network, just for BM provisioning. This network is only for the BMs' use.

Now I'll show our demo video. First I create a chassis just for the demo. There are no existing nodes. I create the first node, the second node, the third node. Now we have three nodes created. Then I set the provision state to manage, and the provision state changes to manageable. There are no ports and no port groups yet.

I open the KVM consoles, all three of them, for monitoring. Now all the BM nodes are in the inspecting state. Just look here: this BM is powering on, and so is the other one. The BMs will try to PXE boot. Just wait a moment. This is the BM's PXE network IP address. Going ahead, a debug message shows that we must collect the correct LLDP information for port group use. OK, IPA (the ironic-python-agent) posts the collected data to the ironic-inspector service. OK, this BM has finished inspection. Now we can see that four ports were created. After all three nodes finish inspecting, we can see there are three port groups. OK, hold on. Now there are three port groups.

OK, I check the port list. As the result shows, the local_link_connection field now has values: it records the chassis ID and the port ID. And these two ports are associated with the port group. Any questions here?

Q: Which switches do you use? A: These switches are from our vendors. In our resource pools we have Huawei, New H3C, and ZTE switches. Thank you. OK, the inspection demo video is finished.
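The enrollment-and-inspection steps in the demo can be sketched like this. All names, addresses, and credentials are placeholders, and the exact driver name varies by Ironic release; this is an illustrative outline, not the presenter's actual script.

```shell
# Sketch of the demo's enrollment flow; every value is a placeholder.
openstack baremetal node create --driver ipmi --name node-1 \
  --driver-info ipmi_address=10.0.0.11 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=secret

# Inspection (as in the video) PXE-boots the node, discovers the NICs,
# and uses LLDP data to fill each port's local_link_connection and to
# build the bonded port group.
openstack baremetal node manage node-1
openstack baremetal node inspect node-1

# Afterwards, check the result the talk points at: ports carrying the
# switch chassis ID / port ID, grouped into a bond.
openstack baremetal port list --node node-1 --long
openstack baremetal port group list --node node-1
```

The key outcome is that `local_link_connection` on each port identifies the physical switch and switch port, which is what later lets Neutron (or an SDN controller) program the ToR switch.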
The second video is about BM provisioning. First I check the image UUID, then upload the deploy kernel and ramdisk. Then I set the BM to the provide state. OK, switch to the demo tenant. Here we have a provisioning network and no instances yet. Now I create the first demo network and another one, and then I create the subnets. We have two tenant networks.

OK, let's boot some instances. The first instance uses demo-net-1. The other instance uses the other tenant network. OK, let's watch our node status and the nova instances. Here we can see the new PXE IP address. As you may remember, our inspection PXE IP address is different from this one. OK, the BM is deploying the image. OK, the deployment is finished, and the instance state changes to active.

OK, let's check the Neutron ports. This is the tenant port. Now the BM boots from hard disk. demo-net-1 is a VXLAN network; this is its VNI. The other network is also a VXLAN network. The provisioning network is just for Ironic's use, while demo-net-1 and demo-net-2 are for tenant use. And the Ironic API network is used by the Ironic node to communicate with the BMs. These two networks are also VXLAN networks. During the PXE boot, the BM node gets its IP address from this range. This is the route table on our Ironic node. This is also from the admin tenant, so we need to add this argument to see the instances.

Now I log into the BM's OS and try to test the network connectivity. First I ping the gateway: unreachable. The same here; the gateway IP is unreachable. Pinging the other network is also unreachable. So we need to create a router to connect the two networks, and add each interface to this router. OK, as we can see, the gateway IP is now reachable, and on the other interface the gateway IP is reachable too. Here we check the ARP table. As we can see, the MAC address is the Neutron virtual MAC; as we know, the prefix is fa:16. And we can see it can ping this BM.
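A small aside on that ARP check: Neutron allocates port MAC addresses from a default base of fa:16:3e:00:00:00, so a fa:16:3e prefix is a quick sign that the neighbor is a Neutron port (here, the router interface) rather than a physical NIC. A minimal sketch, with a hypothetical helper name:

```python
# Minimal sketch: classify a MAC as Neutron-allocated by its prefix.
# fa:16:3e is Neutron's default base_mac; deployments can override it,
# so treat this as a heuristic, not a guarantee.
NEUTRON_MAC_PREFIX = "fa:16:3e"

def is_neutron_mac(mac: str) -> bool:
    """Return True if the MAC uses Neutron's default OUI prefix."""
    return mac.strip().lower().startswith(NEUTRON_MAC_PREFIX)

print(is_neutron_mac("FA:16:3E:12:34:56"))  # True: a Neutron-style MAC
print(is_neutron_mac("00:1b:21:aa:bb:cc"))  # False: a physical NIC's MAC
```

This is exactly the eyeball test the video performs on the BM's ARP table.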
Now we can see the two subnets can ping each other. Let's check the route table in the BM. OK, the video is finished. Any questions?

Q: How do you deploy? A: You mean deploy the basic components, or provision multiple instances? You can use a script to do the nova boot, or another way if you prefer.

Q: Actually, I don't see a great difference between this approach and just using TripleO for the very same thing. For sure, TripleO uses this, but to be honest, it would be much easier to use TripleO; it's only five commands. Any more questions?

Q: Sorry, I didn't see the microphone. You mentioned earlier that you're using physical switches under the hood. Have you done any integration through Neutron so that you can use SDN through the switches directly? And have you done any performance testing on the end-to-end pipeline? A: OK. This video shows that we use a port group to bond the two individual network links; we can see the port group here. The second point is how the SDN controller controls the switches. Once we use a port group, we should define the port group's MAC address. We made some changes in our ironic-conductor code to tell Neutron to pass the local_link_connection to the SDN controller, and the SDN controller can control access to the switch and program the correct flow tables.

OK, and as in the previous question, we have switches from different vendors, such as Huawei, New H3C, and ZTE. As we know, the community has a project, the networking-generic-switch driver, but this driver can only program the ToR switch with VLAN creation and deletion through Neutron. So the switches are pre-configured. OK. Thank you.
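For reference, the community networking-generic-switch driver mentioned in the answer is configured as an ML2 mechanism driver in Neutron, with one section per managed ToR switch. This is an illustrative fragment only: the switch name, IP, and credentials are placeholders, and the `device_type` value must be one that networking-generic-switch actually supports for the vendor in question.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative sketch only
[ml2]
mechanism_drivers = genericswitch,openvswitch

# One section per managed ToR switch; all values below are placeholders.
[genericswitch:tor-switch-1]
device_type = netmiko_huawei
ip = 192.0.2.10
username = admin
password = secret
```

As the answer notes, this driver handles VLAN plumbing on the ToR switch; for VXLAN flow programming across multiple vendors, the talk's approach instead hands local_link_connection to an SDN controller.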