Hey, thanks, Gaurav. Hello, everyone. I hope my screen is visible and my audio is working properly. First of all, I'd like to thank everyone for joining this session. Let me start by introducing myself. My name is Rohit, and I work as a senior technical support engineer with Red Hat. It's been five years now working with different private cloud platforms and with some of the trending technologies around them. The topic I'm going to present and discuss today is around that same private cloud space, and more specifically a combination of OpenStack and OKD, or OpenShift as you may call it. The title is "Kuryr CNI and Octavia to manage OKD services". I have reserved the last five to ten minutes for Q&A, so if any doubts come up in between, I'd encourage you to raise them at the end, because we have a time constraint and the content we are going to cover has some fairly broad aspects.

Moving to the agenda, just a note that I'll be starting with the very basics of whatever I'm presenting today and then moving slowly to the intermediate and advanced levels; the agenda is designed in the same manner, from the fundamentals to the advanced parts. We'll look at terminologies like Kuryr Kubernetes, then Octavia, or load balancing as a service (LBaaS) in OpenStack specifically. Then we'll see what OKD and OKD services are. Then we'll look at the Kuryr integration design, how exactly the integration is made. Then comes the OpenShift-on-OpenStack architecture, and then the Kuryr architecture. And at the end we have a very interesting demo where we'll see how, using OpenShift services, the load balancing is done and how Octavia comes into the picture to manage all of the load balancing.

All right, let's start with Kuryr Kubernetes: what exactly is it? It's a project developed specifically for Kubernetes integration with OpenStack networking. It is an SDN solution that uses the Container Network Interface, also known as CNI, and OpenStack Neutron. The Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters that run on OpenStack VMs.

Then comes Octavia LBaaS, another project or component we are going to talk about. Octavia basically provides load balancing as a service in an OpenStack environment. In OpenStack, Octavia needs to interact with other components like Nova, Neutron, Barbican, Keystone, Glance, and so on. To make that communication between Octavia and the other OpenStack services easier, Octavia is designed around something called a provider driver. We have two provider drivers: one is Amphora and the other is OVN. Amphora is the default driver that comes as part of the Octavia deployment; it is basically just a normal Nova VM that runs HAProxy inside, and that does the job of load balancing. And then we have the OVN provider driver. The environment we'll see in the demo section uses the OVN provider driver, because it is RHOSP 16.1, based on the OpenStack Train release, so by default it comes with the OVN provider driver.
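As a quick aside, here is a hedged sketch of how you could check which provider drivers a deployment exposes; it assumes the python-octaviaclient is installed and the overcloud credentials are sourced, and it is just an illustration, not something shown on the slides:

    # List the provider drivers enabled for Octavia on this cloud
    openstack loadbalancer provider list

    # The provider column of existing load balancers shows which driver each one uses
    openstack loadbalancer list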
Now, there are many advantages and disadvantages of using the OVN driver, but the most important feature is that OVN load balancing can be done without VMs. When we say without VMs, it means there is no need for a special VM dedicated to load balancing sitting in front of the backend members; there is no such need. With OVN, the load balancing is completely managed by the virtual switch datapath engine, which means OpenFlow packet rules come into the picture. Whatever load balancing is done is done based on OpenFlow rules, and at the end we'll see exactly how OpenFlow rules are responsible for the load balancing.

Moving to the next slide. Most of you might be familiar with this diagram, as it is one of the very few diagrams available for Octavia LBaaS. This diagram is specific to the OpenStack load balancing use case without Kuryr involved; there is no Kuryr in this diagram. It is still important to understand this architecture, because Kuryr deals with OpenStack, Octavia, and Neutron. First, let's go through the terminologies used here, starting with the HTTPS listener, or simply listener. What exactly is a listener? A listener is the port on which the load balancer listens for traffic. Here in this diagram we have an HTTPS listener, which means a listener created for port 443, and yes, we can have more than one listener. Then we have the members, which sit inside the pool. Members are your instances, the Nova instances that serve traffic behind your load balancer, or behind the amphora. You can have multiple such members inside one pool; the pool is nothing but a grouping for your members. Next we have the pool health monitor, which is basically for monitoring the health of the individual members. In our scenario this might not come into the picture, because it is not supported when using the OVN provider driver; it applies only to the Amphora driver. And here we have the LBaaS v2 agent, or what we now call the Octavia API. So the HTTPS traffic enters the listener and reaches the load balancer. The load balancer block is not drawn explicitly, but you can think of it as sitting between the listener and the pool box. Once the traffic reaches the load balancer, one of the members serves it. The decision of which member should serve the traffic is made by the balancing algorithm: when using Amphora, the default driver, we have the round-robin algorithm, and when using OVN as the provider, the only available algorithm is source IP port (SOURCE_IP_PORT).

So that was the basic Octavia architecture diagram. Next we'll move on to the next terminology, OKD. OKD is the open community distribution of Kubernetes that powers OpenShift; many of you might already know this. It was previously called OpenShift Origin, but was later renamed to OKD. Everything is essentially the same between OKD and OpenShift; it is just that OpenShift has matured day by day as people really use it for enterprise use cases. Also, the demo we are going to see is based on an OpenShift lab and not directly on the upstream bits.
There is a specific reason for using that OpenShift lab: the Kuryr integration and deployment become much easier with OpenShift than with OKD directly. But the main purpose remains the same, namely how we are going to manage the services and load-balance the traffic, and that is what we are going to see. After implementing Kuryr, what we will see is that each OpenShift service gets a corresponding load balancer, and the load balancing actually happens via OpenFlow rules, as I already said, using the OVN rules. This will be the most interesting part, which we'll see in the demo section.

Moving to the next slide. This slide is about the OKD ClusterIP service. You might already know that a service is the way we expose an application running on pods: the service acts like a load balancer or proxy for the connections it receives and forwards them to the pods. There are basically two main types, the ClusterIP service and the NodePort service; there is one more type called LoadBalancer, but ClusterIP and NodePort are the main ones here. NodePort exposes the service on each node's IP. The one we are going to use for our case is the ClusterIP service type, which is also the default. Whenever we want to make a service available only inside a cluster, we expose the service on the cluster's internal IP. You can see that in the diagram, where we have a Kubernetes cluster with multiple pods hosting some application, and a ClusterIP-type service which makes sure the service is only reachable within that particular cluster.

All right, moving further, I just wanted to let you know that for whatever content I'm discussing, I've put a source link on the slide, so in case anyone wants to refer to it after this session, they can have a look at the source link. Now here comes the main part, where we'll see how the Kuryr integration is made with OpenShift and OpenStack. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on OpenStack VMs. First we need to understand what the OpenShift-on-OpenStack architecture looks like, and then we'll see the Kuryr architecture.

So, OpenShift on OpenStack: you can see the architecture diagram. We have two different platforms connected with the broken lines. The lower one is the OpenStack platform, which serves as the hypervisor for the OpenShift platform: OpenShift is deployed on OpenStack compute node VMs. That is the thing we need to make a note of, OpenShift is deployed on the OpenStack compute node VMs. Here you can see the Nova compute node, which is a bare-metal node, and the VMs created on it with the help of the KVM hypervisor. Then we have Cinder and Swift as the storage backends, and we have Octavia and Kuryr present as well. Coming to the OpenShift platform, we have applications hosted on the VMs, which are called worker or master nodes in OpenShift terminology. This diagram is basically just to help you picture the OpenShift-on-OpenStack platform and what it looks like.
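Just to make that layering concrete, from the OpenStack side the OpenShift nodes simply show up as Nova instances. A hedged, illustrative sketch, assuming the overcloud credentials are sourced:

    # The OpenShift masters/workers (and the bootstrap node, during install) are plain Nova VMs
    openstack server list

    # The flavors they use need to satisfy the OpenShift minimums (for example, 16 GB RAM per node)
    openstack flavor list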
On the same lines, we are using OpenStack-provided block storage and OpenStack-provided networking as well. The next thing we are going to see is the Kuryr architecture. As part of the Kuryr architecture we have components like the kuryr-controller, then the watcher, then kuryr-cni; there are other important pieces like the handlers as well, but these are some of the important ones. These Kuryr components are installed as pods in OpenShift Container Platform, in the openshift-kuryr namespace. Let's see what each of these components means. The kuryr-controller is a single service instance installed on a master node, and in OpenShift Container Platform it is modeled as a Deployment object. Next we have the watcher: it connects to the API server, observes the endpoints, and invokes a registered handler to process an event. Then we have kuryr-cni: it is a container that installs and configures Kuryr as the CNI driver on each OpenShift Container Platform node, and as an OpenShift object it is represented as a DaemonSet.

Okay, here we go with the Kuryr architecture diagram; let's try to understand it in detail. The Kuryr components, as we said, are deployed in the openshift-kuryr namespace. You can see the kuryr-controller, a single-container service pod installed on the infrastructure node as a Deployment OpenShift resource type. Then the kuryr-cni container installs and configures the Kuryr CNI driver on each of the OpenShift master, infrastructure, and compute nodes as a DaemonSet. So what does the kuryr-controller do? It watches the OpenShift API server for pod, service, and namespace creation, update, and deletion events, and it maps those OpenShift API calls to the corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift via Kuryr, and that includes open source solutions like OVS and OVN as well as some Neutron-compatible SDNs.

The last and most interesting slide is the service creation workflow: if I just run a service create command, what exactly happens with respect to Kuryr and how the backend machinery works is what we are going to see in this diagram. Let's try to understand the actual flow when a service creation takes place. First, picture Kuryr as a block standing in between the OpenShift block and the OpenStack block: OpenShift on the left side, OpenStack on the right side, and Kuryr in between. What Kuryr does is watch the OpenShift endpoints, the endpoints for pods and services, and each of these endpoints has a watcher. Whenever any job or task happens with the services or the pods, that event passes through the watcher. Kuryr connects to this watcher, and that is how Kuryr knows something is happening on the OpenShift side. This watching of the endpoints happens in a loop, over and over.
Whenever any job, task, or update event happens, this loop is continuously watching and reporting back. So whenever a new service is created in Kubernetes, a custom resource, defined by a custom resource definition (CRD for short), gets created, and it holds details like the service IP and the target port. This then gets enrolled into the KuryrLoadBalancer object: whenever a new service is created, it has its own definition, and that information, the service IP and the target port, gets enrolled into the Kuryr load balancer for that particular service. All of this is managed by the service handler. Something similar happens for the pod creation flow as well: whenever a new pod is created, Kuryr watches it and creates a Kuryr port, that is, a Neutron port, specific to that pod. Kuryr also tracks some other things, like the VIP tied to that pod, and so on.

So that was the last slide; let me switch to my terminal. All right. I hope the terminal is visible for you. Gaurav, just give a shout if it's visible. Okay, yep, it's visible, thank you. Before moving to the actual functioning of Kuryr, I wanted to give you an idea of how this environment is deployed, because this is OpenShift deployed on OpenStack. The version of OpenStack is the Train release, RHOSP 16.1, and I have deployed OpenShift 4.8 on the OpenStack compute node specifically. I'll show you what I have as part of the infrastructure. Right now I'm on the overcloud: these are my OpenStack nodes, one controller node and one compute node. Talking about the resources of this particular environment, it is a not-too-big, not-too-small environment with 128 GB of RAM, two or three TB of disk I guess, and a fair number of CPUs. All we really need to pay attention to here is the amount of memory we are assigning, because the minimum requirement for an OpenShift node, worker or master, is 16 GB, so whenever we host all these nodes on the compute node, the compute node needs to have at least that much memory available.

I'll now show you the OpenShift nodes. I have a total of six nodes, out of which five are my OpenShift nodes: three master nodes, one bootstrap node, and one worker node. Next I'll show you the install-config file for OpenShift and what we need to change or consider when we are deploying a Kuryr-enabled infrastructure specifically. Talking about this deployment, the steps I followed are already available in the upstream docs; I followed them as-is, without any change. So this is the install-config file I used for the installation, and the thing to notice here is the network type. By default the network type is OpenShiftSDN, but because we need to use Kuryr, we have to replace that with Kuryr. This is the important change to make whenever we are deploying a Kuryr Kubernetes enabled environment. Beyond that there are some other network-related settings: whatever networks I'm using in my OpenStack, I just need to mention them here.
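To give a rough idea, the relevant part of install-config.yaml looks something like the sketch below. This is a hedged reconstruction: the CIDRs shown are the upstream defaults and the platform values are placeholders; the one line that really matters for Kuryr is networkType.

    # Hedged sketch of an install-config.yaml excerpt (values are placeholders/defaults)
    networking:
      networkType: Kuryr        # default is OpenShiftSDN; changed for the Kuryr deployment
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      openstack:
        cloud: overcloud        # placeholder entry from clouds.yaml
        externalNetwork: public # placeholder external network name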
Apart from that, everything is default; there is no major change we need to make in that file. All right, that was all about the environment. Now let's create a service and check how exactly the load balancing happens. What I've done is create some of the services in advance for demonstration purposes, because due to the time constraint we cannot create everything live; I'll show you what I have created and how the load balancing happens. Let's look at the service now. Here you can see I have a service named devconf with type ClusterIP, and its IP. I'll show you the definition file. This is the definition file I used for creating the service; it's a very basic definition with kind: Service, and in the spec section we just mention the protocol, the port, and the target port the service will be using. For this service we are using port 80 and protocol TCP.

So now we have the service created, named devconf, and we have the associated cluster IP as well. When this service was getting created, if you remember, I mentioned earlier that there is a custom resource definition (CRD) where all the service-related information is stored. Using that information, at the same time the service is created, a Kuryr load balancer gets created at the backend. Now that the service creation has completed, I'll show you what the Kuryr load balancer looks like for this particular service. I'll run oc get klb: there is an associated Kuryr load balancer, with the same name as the devconf service, already created. So far we have the OpenShift service and the Kuryr load balancer created. For this operation, OpenStack is not yet aware of anything: OpenShift has created a service and a KLB, and OpenStack is completely unaware of this particular operation.

The next thing we are going to do is connect this service with some pods, and for that we'll need endpoints to be created. I have already created the pods, some basic pods running Apache: three pods named after devconf, all in Running state. Let's list them with -o wide. All three pods have an IP, and we need to note those IPs because we'll be using them for the endpoint creation; they end with .21, .242, and .222. So we have three pods running, each with its respective IP, and it is Kubernetes' responsibility to make sure each pod gets an IP, while each pod also gets a Kuryr port; that is another important thing to notice here. oc get kuryrport: for all three pods we have a corresponding Kuryr port created as well. Now we'll use these IPs and create the endpoints. I've already created them for demonstration purposes, but I'll show you the definition file. What I've done is use these three IPs in the endpoints definition file. Here you can see kind: Endpoints, the same name as the service, the IP addresses ending with .21, .242, and .222, and the same port, 8080, that I mentioned as the target port in the service. Once I applied this file, the endpoints got created: oc get endpoints.
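For reference, the two definition files were along these lines. This is a hedged reconstruction: the object name and the pod IPs are placeholders, but the shape, a ClusterIP Service plus a manually created Endpoints object pointing at the pods on port 8080, matches what the demo used.

    # Hedged sketch of the Service definition (ClusterIP is the default type)
    oc apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: devconf
    spec:
      type: ClusterIP
      ports:
      - protocol: TCP
        port: 80          # port the cluster IP (and the Octavia listener) uses
        targetPort: 8080  # port the Apache pods serve on
    EOF

    # Hedged sketch of the Endpoints definition; the addresses are the pod IPs noted above
    oc apply -f - <<'EOF'
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: devconf       # must match the Service name
    subsets:
    - addresses:
      - ip: 10.128.70.21   # placeholder pod IPs
      - ip: 10.128.70.242
      - ip: 10.128.70.222
      ports:
      - port: 8080
        protocol: TCP
    EOF

    # Objects Kuryr reacts to
    oc get svc devconf
    oc get klb devconf
    oc get endpoints devconf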
You can see the name and then the endpoints created for all three pods, with port 8080; all three endpoints point to three different pods. So far we have a service, we have a Kuryr load balancer, and we have endpoints. Now let's look at the Octavia side and see whether a corresponding Octavia load balancer has been created in OpenStack or not. I'll use the command openstack loadbalancer list. All right, as you can see, we have a load balancer created with the same name as the devconf service. And one thing to notice: the cluster IP, the one from oc get svc, is being used as the VIP of the load balancer. Now let's do a show on this load balancer. Its provisioning status is ACTIVE and its operating status is ONLINE.

There are a few more things to understand here as far as the OpenStack load balancer is concerned. The load balancer has a pool, which is where the pods should appear, so let's check whether my pool has those particular pods in it. I'll use the member list command with the ID of the pool. You can see that the same pods we created from the OpenShift side are now added as members of this pool, with the port, 8080, that we specified. Next is the listener: the service we created corresponds to a listener in OpenStack. Let's look at the listener: its operating status is ONLINE and its provisioning status is ACTIVE. Compare that with what we specified in the OpenShift service, oc get svc: we set protocol TCP and port 80 for the service, and the same is reflected here on the listener, protocol TCP and port 80. So up to now, for the service we created on the OpenShift side, we have its corresponding OpenStack load balancer, its KLB, its listener, its members, and the pool; everything so far is as expected.

Now let's see how the traffic actually reaches these three pods and how the load balancing happens. I'll get onto a node. All right, I'm inside the node; let's make a curl request to the cluster IP on port 80. You can see that when I make a curl request it is served by one of the pods, and since I have Apache (httpd) installed, this is the default Apache test page I get on every curl request. I'll grep for something because the output is too large. Every time I make a curl request, it lands on one of the pods; that is how the load balancing happens. In this scenario I have the same test page on every pod, so you cannot see exactly which pod each request is going to, but since the curl requests keep succeeding, the traffic is being rotated across these three pods. The next thing I'm going to show is very interesting: until now, what we have seen is, let's say, the front end of the load balancing.
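To recap the OpenStack-side checks in one place, the commands were roughly the following. This is a hedged sketch; the names, IDs, and IP are placeholders you would take from the previous command's output.

    # The Kuryr-created load balancer; its VIP equals the service's cluster IP
    openstack loadbalancer list
    openstack loadbalancer show <lb-name-or-id>

    # The pool members should be the three pod IPs on port 8080
    openstack loadbalancer pool list
    openstack loadbalancer member list <pool-id>

    # The listener should mirror the service: protocol TCP, port 80
    openstack loadbalancer listener list

    # From a cluster node, traffic to the VIP is spread across the pods
    curl http://<cluster-ip>:80/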
Now, what exactly happens at the back side, how this load balancing is done using the OVN flow rules, is what I'm going to show you next. I'll exit from here. This is my OpenStack controller node; let me increase the font. I have logged into the controller node as the heat-admin user, the normal user. Before moving to the actual OVN part, I'm not sure how many of you are aware of this OVN functionality, so I'll try to explain it in short. If you look here again, the load balancer has its provider set to ovn. As I mentioned earlier, there are basically two providers, the Amphora provider and the OVN provider, and because this deployment uses the OVN ML2 Neutron plugin, it is enabled with the OVN provider driver by default.

So whenever a new OVN-provider load balancer request comes in, the OVN driver creates a high-level entry in the OVN northbound DB. There are a few OVN terms we need to understand first: the northbound DB, the southbound DB, ovn-northd, and ovn-controller. Whenever a new request for an OVN-based load balancer comes in, the OVN driver creates a high-level entry in the OVN northbound database. Then there is ovn-northd, which converts that logical entry from the northbound database into logical flows and stores them in the southbound database. And lastly there is ovn-controller, which actually does the OpenFlow compilation job.

First, let's look at the logs for the OVN side of the creation, that is, how the database entry is made whenever a new load balancer is created. I'm on the OpenStack controller node, and the path to the Octavia logs, where we can see the DB entry being made for OVN, is /var/log/containers/octavia. We'll open the relevant log file there, and I'll just grep for the load balancer ID itself. Yeah, this one; I've highlighted the log entry. You can see ovsdbapp.backend.ovs_idl.transaction: it basically opens a connection to the OVN northbound DB using the ovsdbapp library. Whenever a new load balancer creation request is made, it creates a row in OVN's Load_Balancer table and updates that entry with the name and some of the network-related dependencies. So you can see the ovsdbapp library, then the actual transaction when the request was made: we have a DB entry here with fields like the ID of the load balancer, the VIP ending with .165, and the rest of the details like the VIP port ID. Whatever details you see in openstack loadbalancer show get enrolled here as part of that DB entry. Now let's check what the actual load balancer entry looks like from the OVN northbound standpoint. I'll get out of this file and list the load balancers from the OVN northbound DB. Okay, I have a long list of load balancers, so what I'll do is dump it into a file.
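For reference, a hedged sketch of the commands used here; on a containerized RHOSP deployment these typically need to be run inside the OVN database container, or with an explicit --db option pointing at the northbound database, so take this as illustrative only:

    # Dump all load balancer rows from the OVN northbound DB into a file
    ovn-nbctl list load_balancer > /tmp/nb_lbs.txt

    # Open the file and search for the Octavia load balancer ID
    less /tmp/nb_lbs.txt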
So let's see what the load balancer entry in the OVN northbound table looks like. I'll search for the load balancer ID and highlight the entry for you. As part of the external_ids field, I have all the dependent pieces for that particular load balancer: the listener ID, the pool ID, then all the reference IDs, the Neutron port ID, et cetera. And one thing we specifically need to look at here is the vips field: the endpoints we defined get enrolled as part of this database entry. So we have the IPs ending with .21, .242, and .222, which are the IPs of our pods, with port 8080, and the first entry is the actual load balancer VIP with its port.

So the OVN northbound table has these details, and what happens next is that these details get converted into logical flows. Let me show you what the OVN logical flows look like on the southbound side. The raw flow listing is not that readable, so I'll grep for the keyword backend. Okay, too many entries, so let me grep again for the IP of one of the pods. As you can see, the flows are organized into tables: here we have table=22 with a priority value, priority=120, then a match field containing ct.new, which stands for the connection-tracking (conntrack) new-connection flag, together with the VIP of the load balancer and destination port 80. Then we have the action for this flow, and the main action is ct_lb, the conntrack load-balancing action, which carries the IPs of the pods as backends. This is just one example of a flow; likewise, there are other flows as well.

Next, these logical flows get translated into OVS OpenFlow flows. If I check the flows on the default integration bridge, br-int, from the OpenStack point of view, we see a lot of flows. One thing I want to mention here is that in OVS there is a concept of groups, or group tables, used to process the load balancer flows, so I'll grep for group, and then for the IP. Okay, that's what I was expecting. Let's try to understand these flows. What I've done is extract the flow from the groups' perspective on the integration bridge. This is the part where we have group_id=126 and type=select, and for the selection method we have fields like source IP, destination IP, and TCP source and destination ports. Then we have the buckets: each bucket has a relative weight as an integer, and that weight is used by the datapath engine, the switch, when the group type is select. Then the NAT is done to the member, this one, which is our member IP: the destination is NATed to the member before the packet actually passes on through the routing. The same kind of entry exists for the other members as well, so let me show you the one for another member; the commands I used for this inspection are recapped just below.
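A hedged sketch of those inspection commands; on a containerized deployment they may need to run inside the respective OVN/OVS containers, and the grep patterns are just the pod IP and the word group:

    # Logical flows generated by ovn-northd, read from the southbound DB
    ovn-sbctl lflow-list | grep ct_lb | grep <pod-ip>

    # OpenFlow flows and select groups programmed on the integration bridge
    ovs-ofctl -O OpenFlow13 dump-flows br-int | grep group
    ovs-ofctl -O OpenFlow13 dump-groups br-int | grep <pod-ip>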
This is for another member, the one ending with .242; we can see the same sort of entry for the other member as well. So basically, what is happening here with respect to the OVN flows is that OVN uses a group action to select one of the backend members. When a packet is sent to the VIP, the OVS flows jump to the group action and select one of the buckets, the ones I mentioned, based on the weight. If, say, bucket 0 is selected, the packet is sent through connection tracking, the destination IP is NATed to that backend member's IP, and the connection entry is committed. In short, OVS uses the group action along with conntrack to do the load balancing here. So that was the logic behind how the load balancing is done using the OpenFlow rules, the OVS rules as we can call them.

That was the demo part, the CLI part. I'm now open to questions, so if you have any, feel free to ask. I'm stopping my screen share. Rohit, there's a question from Shrikant. Shrikant, if you wish to ask this question live, you can ask for permission; you can even come on live. Yeah, so what I'll do is read the question out for everyone and try to answer it; I hope that is fine with Shrikant. The question Shrikant has asked is: can we try using OKD on open source OpenStack along with Kuryr SDN to use the Neutron and Octavia services? The answer is yes, but frankly speaking, while I was preparing this lab, before moving to OpenShift I tried installing on OKD and faced a few challenges while deploying it, so for the time being I switched to OpenShift. But the answer is yes, we can do that with the upstream bits. If you have any other questions, you can ask directly or feel free to post them in the chat. I guess we are already over time, but I think we can continue for a bit.