Okay. Good morning, everyone. Today we will talk about improving container networking performance with BGP. I'm Jilapath, from NIPA Cloud in Thailand. Hello, my name is Patim Bulot, also from NIPA Cloud. The gentleman sitting in front of the stage is Mr. Chan-Sin, our chief innovation officer, backing the two of us up. Okay, let's start.

For today, we will first talk about our environment and the considerations when we deploy Kubernetes on OpenStack in our public cloud service. Next, we will cover pod-to-pod network improvement, which basically concerns the IP-in-IP encapsulation between worker nodes running on top of OpenStack. After that, we will talk about advertising Kubernetes service IPs with BGP, and about inter-cluster communication with BGP. And last, we will talk about future work. Next.

Let me get into the first topic on the agenda. Our Kubernetes environment is based on our OpenStack public cloud. We use OpenStack Victoria integrated with the Tungsten Fabric SDN as the Neutron backend, with Rancher RKE1 and the Calico CNI on every setup. Our clusters are integrated with the OpenStack API, which gives us several benefits. Firstly, we can get persistent volumes from Cinder. Secondly, we have access to load balancers from Octavia. And lastly, we have the ability to scale a cluster up or down with a single click. By combining the power of Kubernetes with the capabilities of OpenStack, we have created a robust and scalable Kubernetes environment.

Because we run the Kubernetes clusters on OpenStack, created with Rancher calling into the OpenStack API, our Kubernetes nodes are instances running across multiple compute nodes. On the OpenStack compute nodes we already have a network underlay that uses VXLAN encapsulation between compute nodes. The picture on the left shows pod A communicating with pod B: pod A on worker A on compute node A needs to communicate with pod B on worker B on compute node B. The first issue we encounter is double encapsulation: each packet needs to be encapsulated twice, first with IP-in-IP at the worker level and then with VXLAN at the OpenStack compute level. Both encapsulations rely on the CPU. The second issue we identified is that pod-to-pod throughput across different nodes is not optimal. We conducted tests using iperf3 to evaluate the communication between pods.
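As a reference for that kind of measurement, a minimal pod-to-pod iperf3 test can be run with two throwaway pods; the pod names and the public iperf3 image below are examples only, not our exact test setup:

# Start an iperf3 server pod, wait for it, and grab its pod IP
kubectl run iperf3-server --image=networkstatic/iperf3 --restart=Never -- -s
kubectl wait --for=condition=Ready pod/iperf3-server --timeout=120s
SERVER_IP=$(kubectl get pod iperf3-server -o jsonpath='{.status.podIP}')
# Run the client from another pod (ideally scheduled on a different worker node)
kubectl run iperf3-client --image=networkstatic/iperf3 --restart=Never -- -c "$SERVER_IP" -t 30
kubectl logs -f iperf3-client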
Okay, next. When we need to access an application running inside the Kubernetes cluster, we normally need a dedicated external load balancer to reach the backend pods on the worker nodes. But that comes with some considerations and issues. The load balancer works in active-passive mode, so there is a bottleneck on a single node, and VRRP is needed to track the state between the active and passive nodes. And if we run on bare metal and there is a looping issue inside the layer 2 network, it has an impact on the load balancer as well. Okay, next.

The last consideration is inter-cluster communication. In practice we have multiple Kubernetes clusters, one for each team, which makes it easier for each team to manage its own cluster. However, we want certain applications to be able to communicate with applications in other Kubernetes clusters, allowing them to share some services and an external load balancer.

So the improvement topics are as follows. The first one is pod-to-pod improvement. The second is service IP improvement. And the last is inter-cluster improvement.

Okay, let's start with the first one. For pod-to-pod improvement we will describe basic Kubernetes network tuning: the concept of MTU jumbo frames, using kube-proxy IPVS instead of iptables, Calico BGP peering mode, and, last, the Calico overlay network.

First, MTU jumbo frames. Calico ships with a default MTU that works on various cloud providers, but it may not be suitable for every cloud environment. The first thing we do after setting up a Kubernetes cluster is configure a jumbo-frame MTU that suits the network of our OpenStack cloud environment. This adjustment helps reduce the CPU load caused by packet fragmentation and improves throughput for pod-to-pod communication. The lower-left picture is a before-versus-after comparison tested with iperf3: after we set the MTU, the pod-to-pod throughput went up to 9.6 Gbit/s on one type of our compute flavors.

On the right side, the second consideration is kube-proxy. The Calico documentation recommends using IPVS mode instead of iptables mode. IPVS is specialized for load balancing, offering better performance through hashing and lower CPU usage. It also provides more load-balancing algorithms. And lastly, IPVS mode handles rules more effectively in large clusters with many services and policies when new rules are applied.

The next slide is Calico BGP peering mode, starting with the left side. When you create a Kubernetes cluster with the Calico CNI, the default behavior of Calico is to run BGP as a node-to-node full mesh. This means every node needs to peer with every other node to exchange routes. However, as a cluster grows to over 100 or 200 nodes, we find this approach no longer scales, because we cannot keep increasing the number of BGP peerings in the cluster. The solution is on the right side: a route reflector topology. A BGP route reflector is a mechanism that removes the requirement for a full iBGP mesh while still providing all iBGP routing information to every iBGP peer. In my setup, I group the three master nodes to function as route reflector servers, while all worker nodes are route reflector clients. As a result, worker nodes no longer need to establish a direct connection with every other node; they only need to establish a connection with the master nodes. This approach reduces both bandwidth usage and CPU load in the BGP topology.
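As a rough illustration of the MTU change described above, assuming a manifest-based Calico install where the MTU lives in the calico-config ConfigMap and assuming the OpenStack fabric carries 9000-byte frames, the adjustment might look like this; the value and resource names are examples, not our exact production settings:

# Set the Calico interface MTU for jumbo frames (example value: 9000 minus
# 20 bytes of IP-in-IP overhead). Operator-based installs set
# Installation.spec.calicoNetwork.mtu instead; RKE1 may also expose an MTU option in cluster.yml.
kubectl -n kube-system patch configmap calico-config \
  --type merge -p '{"data":{"veth_mtu":"8980"}}'
# Restart calico-node so newly created pod interfaces pick up the MTU
kubectl -n kube-system rollout restart daemonset calico-node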
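For the kube-proxy change, a minimal sketch, assuming RKE1 as in our setup and the ipvsadm tool installed for verification, could be:

# IPVS needs its kernel modules loaded on every node
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
# With RKE1, kube-proxy is switched to IPVS through cluster.yml, for example:
#   services:
#     kubeproxy:
#       extra_args:
#         proxy-mode: ipvs
#         ipvs-scheduler: rr
# After reconciling the cluster, confirm IPVS virtual servers exist:
ipvsadm -Ln | head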
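The route reflector topology on the master nodes can be sketched with the standard Calico resources; the node names, cluster ID, and AS number below are examples, not our exact values:

# Mark each master node as a route reflector and label it so peers can select it
calicoctl patch node master-1 -p '{"spec":{"bgp":{"routeReflectorClusterID":"224.0.0.1"}}}'
calicoctl label node master-1 route-reflector=true --overwrite
# (repeat for master-2 and master-3)

# Every node peers with the route reflectors instead of with each other
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-with-route-reflectors
spec:
  nodeSelector: all()
  peerSelector: route-reflector == 'true'
EOF

# Turn off the default node-to-node full mesh
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false
  asNumber: 64512
EOF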
Okay, next. When pods on different worker nodes communicate, the Calico default is IP-in-IP mode. This needs encapsulation and decapsulation whenever a packet leaves or arrives at a worker node. In our environment we use Tungsten Fabric. Do you know Tungsten Fabric? Show of hands? Okay, right. So we need to reduce the encap and decap work for CPU efficiency on the worker nodes. In the picture above, this is IP-in-IP mode: you can see that the route to the pod CIDR of the other worker node points at the tunl0 interface, which means packets are encapsulated for pod-to-pod communication.

Then we reduced the CPU usage for encap and decap with cross-subnet mode. In Calico this means that when worker nodes sit in the same layer 2 subnet, pod-to-pod communication between different worker nodes no longer needs encapsulation or decapsulation. In the picture above you can see that the route to the other node's pod CIDR now points at eth0, which means there is no encap and decap for pod-to-pod traffic.

Next, we will talk about why we use service IPs with BGP, and why we do not use a dedicated load balancer for external access to the applications. These are the benefits of using service IPs with BGP. First, there is no need for an external load balancer. It supports equal-cost multipath (ECMP) load balancing, and when you need to control traffic you can use BGP attributes such as AS-path prepend to steer ingress traffic, while local preference can control egress. And lastly, the source IP of the traffic does not need NAT, so we can send the packets to a monitoring or security system for analysis. This is a benefit for us.

Before we start with ECMP, we need to know the default inside the Linux kernel. There are a few hash policies, but the default, policy 0, hashes on a 2-tuple, meaning it uses the source IP and destination IP. Below it, hash policy 1 uses a 5-tuple: it hashes on the source IP address, destination IP address, IP protocol, source port, and destination port. This gives better hashing compared with the 2-tuple, so in our environment we set it to the 5-tuple.

And next, we need a router to peer BGP with Calico inside Kubernetes, for redundancy and active-active operation. We need to configure the IP pool to allow the external prefix into Kubernetes. And lastly, we should select the service mode, Cluster or Local; in our example we use Local and select the LoadBalancer type. Note that if you set externalTrafficPolicy to Local, you can only use the LoadBalancer or NodePort service types. Next.

For the PoC, this is the picture. We have two internet gateways at the access layer, top of rack; sorry, these connect to the upstream providers. The grey box is inside OpenStack, where we have three nodes acting as BGP route servers; we use VyOS at this layer. The three BGP route servers peer over iBGP to receive routes from the internet gateways and from Kubernetes. The master nodes act as route reflector servers: when the route reflectors receive routes from the BGP route servers, they advertise those routes to the worker nodes, and the worker nodes act as route reflector clients.

Okay. This slide shows an example of the PoC configuration. First we enable the ECMP paths on the BGP route servers; this is the VyOS configuration. Then we configure Calico inside Kubernetes with a route reflector cluster ID to group the three master nodes as a route reflector cluster. And next, we configure the IP pool with the CIDR that allows the external IP prefix to be used for Kubernetes services and exchanged with the BGP route servers.
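That last step, making a service IP range routable over BGP, can be sketched with Calico's service IP advertisement settings; the CIDR below is a documentation example, not our real public range:

# Advertise service IPs over BGP. Which field applies depends on whether the
# service uses spec.externalIPs or a LoadBalancer IP; both are shown here.
calicoctl patch bgpconfiguration default --patch \
  '{"spec": {"serviceExternalIPs": [{"cidr": "203.0.113.0/24"}], "serviceLoadBalancerIPs": [{"cidr": "203.0.113.0/24"}]}}'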
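Going back to the cross-subnet point at the start of this section, that overlay switch is a one-line change on the default IP pool; the pool name is Calico's default and yours may differ:

# Only encapsulate when crossing a subnet boundary; same-subnet pod traffic is routed natively
calicoctl patch ippool default-ipv4-ippool -p '{"spec":{"ipipMode":"CrossSubnet"}}'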
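And for the kernel hashing behaviour mentioned above, the 5-tuple setting is a sysctl on whichever boxes perform the ECMP routing; a minimal sketch:

# 1 = L4 (5-tuple) multipath hashing; the default 0 hashes on source/destination IP only
sysctl -w net.ipv4.fib_multipath_hash_policy=1
# Keep it across reboots
echo 'net.ipv4.fib_multipath_hash_policy = 1' > /etc/sysctl.d/90-ecmp-hash.conf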
Continuing the PoC configuration: next, we configure keepOriginalNextHop on the BGP peering, which plays the role that next-hop-self plays on Cisco, Juniper, or Arista, so that when routes are advertised to the worker nodes, the worker nodes see the BGP route servers as the next hop. And lastly, externalTrafficPolicy is set to Local together with the LoadBalancer service type.

Okay. This is the result of the PoC. We deployed NGINX with a service of type LoadBalancer and an external IP as the test service for the PoC. The picture above shows the BGP route servers receiving the service IP from Kubernetes, and you can see multiple next hops for it, which means ECMP is enabled. Next. And if we look on a worker node, you will see the external IP prefix received from the BGP route servers with three next hops for load balancing, so ECMP is supported, and above that is a client accessing the service. Okay.

From the previous topic, we move on to inter-cluster communication to customers with BGP. In this topic we will showcase our use case for integrating Kubernetes with a router and connecting to customers using BGP. Let's start with the goal of this solution. As we mentioned before, we have multiple Kubernetes clusters, each dedicated to a specific operator or application. One of our requirements is to enable communication between applications across different clusters and to provide ingress access from the internet.

From the diagram, we have the router at the top and the main cluster. The main cluster acts as the external ingress that every team and every cluster uses together. Our implementation is to configure the Calico BGPConfiguration resource, telling BIRD to establish a BGP peering between the main cluster and the cloud router. This allows us to advertise the service IPs used by the ingress LoadBalancer service. When users access our website or a backend application, the traffic is sent from the internet through the router and routed only to the worker nodes where the ingress pods are running.

The cluster below it is another Kubernetes cluster, the one where the actual applications, the backends and websites, are running. To give the ingress and load balancer access to these applications, we use BIRD to establish peering between the main cluster and the lower cluster over eBGP. Given that the master nodes have fixed IP addresses, they are well suited to serve as the BGP nodes and to form the peering mesh between the two clusters. First of all, we configure the BGP configuration to advertise the service IPs to the main cluster for proper routing.

By implementing this solution we gain the following benefits. First, only a single public IP address is advertised to the router, which allows ECMP path selection and high availability and eliminates the bottleneck of a single active load balancer node. Second, we no longer need to manage an external load balancer or deal with the complexity of the underlay network. And lastly, the ingress layer is simplified, as there is no longer a need to expose public services through multiple ingress layers.
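A rough sketch of that inter-cluster peering, as seen from the main (ingress) cluster; the peer address, AS numbers, and the label on the master nodes are all example values:

# Peer eBGP with a master node of the application cluster so that its advertised
# service IPs are learned by the main cluster (repeat per remote master).
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: app-cluster-master-1
spec:
  peerIP: 10.1.0.21               # fixed address of a master node in the other cluster
  asNumber: 64513                 # the other cluster's AS number (eBGP, so it differs from ours)
  nodeSelector: route-reflector == 'true'   # peer only from our master/route-reflector nodes
EOF
# A mirror BGPPeer pointing back at this cluster's masters is created on the other
# cluster, and its BGPConfiguration advertises the service IP range it wants to share.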
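The peering from the cluster up to the BGP route servers in the earlier PoC, including the keepOriginalNextHop option just mentioned, could be sketched like this; addresses and the AS number are again examples:

# One BGPPeer per VyOS route server; the masters (labelled route-reflector=true
# earlier) do the peering and reflect what they learn to the workers.
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgp-route-server-1
spec:
  peerIP: 10.0.0.11               # address of the external BGP route server
  asNumber: 64512                 # same AS as the cluster, i.e. an iBGP session
  nodeSelector: route-reflector == 'true'
  keepOriginalNextHop: true       # Calico option: keep the original next hop instead of rewriting it
EOF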
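And the NGINX test service from the PoC results would look roughly like this; the external IP is a placeholder from the example range advertised above:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # only nodes running nginx pods attract traffic; client source IP is preserved
  externalIPs:
  - 203.0.113.10                  # example IP from the advertised external range
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF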
Okay, next. The last section is about future work. NIPA Cloud is a cloud service provider in Thailand, and we will provide features to support our customers running Kubernetes. We plan to offer two options. First, we will provide the Octavia load balancer integrated with Kubernetes. And second, we will provide a BGP route server service that integrates with the service IPs. Right now we are deploying the BGP route servers as a service on our cloud platform, deploying the internet gateway routers to integrate with the BGP route servers, and deploying a public IP pool for the Kubernetes service IPs. And lastly, we will improve the compute nodes with accelerated networking such as DPDK, SR-IOV, and SmartNICs. Do you have any questions?

I have a question. Have you managed to solve the problem of losing sessions when the members of a load balancer set change? So if you remove one of your workers, all your hashes change and traffic gets redirected. Have you managed to solve that with anything?

Do you mean the setup where we peer with the BGP route servers and traffic goes directly to the pods, right? With BGP, when traffic is load balanced by ECMP, we do not control individual flows, but we can steer traffic with BGP attributes such as local preference or AS-path prepend. If we have three nodes, ECMP is performed at the internet gateway, hashing across node one, node two, and node three. If we need to steer traffic, for example before maintaining a node, we can use AS-path prepend or local preference toward route server one or two, and then we can maintain the node. And I think failure detection is very fast compared with a traditional load balancer using VRRP detection.

Any other questions? The cluster, you mean the Kubernetes cluster, right? If you remember, we peer the BGP route servers with only the master nodes, and the master nodes act as route reflector servers. So if we scale the worker nodes, there is no need for reconfiguration or any additional BGP configuration between Kubernetes and the BGP route servers, because the master nodes are the route reflectors and propagate the routes from the BGP route servers to the worker nodes, keeping the original next hop toward the BGP route servers. Any other questions? Okay. Thanks so much. Thank you for joining us.