Welcome to our session on coupling brownfield VNF deployments with CNFs. Before we start, let me introduce myself. I'm Sebastian Scheele, co-founder and CEO of Kubermatic, and my co-presenter today is Yousef. Yousef, do you want to quickly introduce yourself?

Thank you, Sebastian. I'm Yousef, a software engineer at Kubermatic. I work with the backend team, or distributed systems team, on the Kubermatic platform.

Okay, let's start with a quick recap: where are we coming from? In the early days we used physical hardware for our network infrastructure. Over time that moved toward virtualized infrastructure, so we put some components into VMs. The next step is to run those components in containers instead of VMs. But this has its challenges and problems, and we want to show you how the different components can be coupled together, because we don't believe in a big-bang approach: all the different components need to keep working together smoothly.

Let's talk a little about the evolution. Physical and virtual infrastructure will stay around for at least another decade; we can't simply replace it. Even if we start with containerized infrastructure now, we will keep some physical and virtual infrastructure for at least a decade. So the only really feasible approach for telco operators is to move smoothly from PNFs to VNFs, and then, in the future, to CNFs. I think we'll see a pattern similar to what we saw in the enterprise world during the move from monolithic applications to microservice architectures. There are different patterns for doing this. If you're starting completely from scratch, you can of course start with CNF infrastructure. But if, like an enterprise with its big monolithic applications, you have existing PNFs or VNFs, you need to refactor them slowly into containers so you can really leverage the container infrastructure. That takes a lot of time, so we need to do it incrementally, increasing our development velocity along the way and taking advantage of new functionality on the new platform.

But what is the main challenge in transforming VNFs into CNFs? For one thing, moving from physical hardware to VMs was in general much easier, because we could put everything into a VM and it would just run. Many network functions, however, rely on a specific kernel or on specific kernel hacks, and that is not so easy with containers, because we cannot run different kernels on the same host. So we need to think about what can be moved into user space, which other services we need, and how we can connect to DPDK or SR-IOV. It requires some new thinking. On the other hand, containers provide more or less direct access to the hardware, with little or no virtualization overhead, so there is also a big benefit in this new architecture. But looking at the move to CNFs, VMs now become our new legacy. We need to think about how to solve this and how to evolve smoothly, because sometimes we really do need a VM.
For example, when specific components need non-standard kernel modules, when the security folks ask for VM-level isolation, or when your application is still such a monolith that doing it right with containers simply isn't possible. Such an application first needs to be re-architected, and as long as that hasn't happened, it makes no real sense to put it into containers. What we believe is that VNFs and CNFs can live side by side on one platform, because a big benefit of that is a single operational model for containers and VMs. We can support legacy brownfield and greenfield deployments, and of course mixed deployments too. With this we can run VNFs in VMs and CNFs in containers. What we need, though, is to connect them, in the best case with Kubernetes networking, so that we also get a consistent model on the networking layer for VNFs and CNFs.

So what should the desired state be? CNFs should coexist with VNFs, because as we saw at the beginning, VNFs will be with us for a long time. We want to renew our underlying infrastructure without replacing everything, and make it infrastructure-independent and vendor-neutral, so that we can change things gradually and move more and more into a cloud-native ecosystem. Then we can really leverage orchestration for network functions: self-healing capabilities, heavy automation, and zero-touch deployments.

How can we embrace this? One thing we believe we can use is an open source project called KubeVirt, where you put your virtual machines into containers. With it you can run your VMs on a Kubernetes cluster alongside your containers, and let Kubernetes orchestrate the containers and, through KubeVirt, the VMs as well. With this we can build one centralized platform, and we can also use hardware acceleration if the application supports it. And now I want to hand over to Yousef, who will give you a demo of how this could look.

Thank you, Sebastian. Give me a second, guys, I'll just set up my environment. Can you see the slides? Yeah, it's working. So, to illustrate what Sebastian was saying and how VNFs and CNFs work together, we'll do a little demo. The demo uses a couple of the components Sebastian mentioned: Kubernetes clusters, obviously; KubeVirt as the hypervisor on Kubernetes; Packet for the bare-metal servers; and Kubermatic KubeOne, an open source tool to provision the infrastructure and build the Kubernetes clusters quickly.

The idea is that we have two physical sites, Amsterdam in the Netherlands and Tokyo in Japan, and we're going to connect those two sites with VMs running as VNFs, in this case using a tunnel, namely WireGuard. I'm choosing WireGuard because it illustrates what Sebastian said before about custom kernel needs. Typically, if you take an F5 load balancer VM or a FortiGate firewall VM, they all run custom kernels, so you can't containerize them as they are; you need to run them as VMs. So we will run WireGuard between the two sites and have a WireGuard tunnel.
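This is where KubeVirt comes in: the WireGuard VM is just another Kubernetes resource. As a rough sketch of what such a definition can look like (the resource name vm01 and KubeVirt's public demo Fedora image are assumptions, not the exact manifest from the demo):

    kubectl apply -f - <<'EOF'
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      name: vm01                   # hypothetical name for the WireGuard VM
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk  # boot disk shipped as a container image
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo
    EOF

Applying this makes KubeVirt schedule a virt-launcher pod that boots the VM under QEMU/KVM, so the VM is scheduled and networked like any other pod.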
On top of that, we'll have a router that does the NATing and also acts as a firewall, and finally an NGINX web server running inside the Tokyo data center. For the demo we'll have a couple of networks, and the goal is to be able to curl and ping the web server over an overlay network. We'll be using VXLAN and specific subnetworks running on top of the Kubernetes network, i.e. the classical CNI pod network and service network.

Let me bring up my terminal. So here we are. The terminal is split in two: on the left the Asia side, so Tokyo, and on the right the European side. Let's look at what we have. I won't go through the infrastructure part; again, it's using KubeOne and Kubernetes, so we simply have working Kubernetes clusters right now. I've already proceeded with the installation, because provisioning the VMs can take a couple of minutes, and for the sake of the demo I don't want people to wait for nothing. We have master nodes and one worker node; one worker node is more than enough.

If we go to the manifests, this is what we're installing. We start with the router. Opening the router's manifest, we have a simple deployment using a custom image, but this custom image is actually very simple: an Alpine Linux image with a couple of networking tools installed in it, like bridge-utils and tcpdump. What are we doing in this router? We are simply building the blocks we saw on the diagram. To put it simply, going back to the diagram, on this router we'll be building this half and that half, and we do the same for all the other items.

The first part builds the connection, an overlay, between the CNF router and the future VM, the WireGuard server running in the same Tokyo site. We create a VXLAN interface and assign it a specific subnet and IP, 192.168.255.x. This will be called the transfer network, since this subnet is used by the CNF router and the VM to talk to each other. And we use bridge fdb append with the all-zeros address, so basically this command here. For the sake of the demo we are not using multicast groups with VXLAN; I'm using unicast with static flooding. What does this mean? The VTEPs, the VXLAN tunnel endpoints, are static: you assign them yourself. In this case the VTEP for the CNF router will be what I call here the VNF peer, which is the WireGuard server running on KubeVirt inside the same Kubernetes cluster. The all-zeros address means that all the BUM frames, so broadcast, unknown-unicast and multicast frames, are sent to this VTEP. Obviously this setup is simple; it's fine if you have a couple of VTEPs, but if you start having a thousand of them you'll want to switch to multicast.

So this first part builds the interconnection between the CNF router and the VM; a condensed sketch follows below. The second part, which we'll look at next, builds the interconnection between the NGINX web server and the CNF router.
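That first leg, condensed into commands, could look roughly like this; the interface name, the UDP port, the transfer-network address and the peer placeholder are assumptions reconstructed from the talk:

    # VXLAN interface for the transfer network (ID 21, as in the demo)
    ip link add vxlan21 type vxlan id 21 dstport 21 dev eth0
    ip addr add 192.168.255.1/24 dev vxlan21    # assumed transfer-network address
    ip link set vxlan21 up
    # unicast with static flooding: the all-zeros FDB entry floods every
    # BUM frame to the single remote VTEP, the WireGuard VM's pod IP
    bridge fdb append 00:00:00:00:00:00 dev vxlan21 dst <vnf-peer-pod-ip>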
Here, in this second part, we create a new VXLAN interface using a different VXLAN ID, 2002, and expose it on a specific port, the same number as the VXLAN ID. With this we create another network, basically just to illustrate that we can have multiple subnetworks, and this will be the network used for the connection between the CNF router and the web server. Then we define a couple of rules, because this pod acts as a router. We say: for all packets going from the router to the web server through the VXLAN interface we just created, source-NAT them, or masquerade them, to be more specific and correct. And these three lines act as a firewall: we drop all packets by default and only accept HTTP traffic and packets in an established or related state, roughly as in the iptables sketch below.

That's it for the CNF router. If we look at the pods currently running, we have the CNF router, and it is running properly. The next step is the web server, which is, to be honest, pretty simple. It is also a deployment with the same custom image, which again is a very lightweight Alpine image with some networking tools added. Here we build the exact same thing to connect the two ends together, the CNF router and the web server, both on VXLAN ID 2002: we create a VXLAN interface, assign it an IP from the same subnet, and use the same unicast type of VXLAN traffic. That's the first container, a sidecar that sets up the networking; a second container runs the NGINX web server. This sets up the CNF web, and once we have it we can do a kubectl get pods and see that it is running, two out of two containers ready.

Finally, we need to install the VM itself. For the VM, again, we are using KubeVirt as the hypervisor, and it is a VirtualMachineInstance, the resource kind from KubeVirt. Here we use a very simple default image provided by KubeVirt themselves, a Fedora image running kernel 5.6.6, but you can use your own image and your own disk; in my case it only has to run WireGuard, so this works fine. We run two scripts, and before going through them: we first create a secret containing the base64-encoded WireGuard private and public keys. The public key is the one from the other side, the Amsterdam side in this case, since we are in Tokyo, and the private key is the Tokyo site's own. Both are mounted, and there is a startup script that adds my SSH key, mounts the secrets to a specific folder, mounts the config map to a specific folder, and then runs the scripts. If we go to the config map, it contains a shared script in which we set up WireGuard and the networking: we install WireGuard, create an interface of type wireguard, start it, and listen on the default WireGuard port, 51820.
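Here is that firewall and NAT sketch; the interface name vxlan2002 is an assumption, and matching HTTP as TCP port 80 is my reading of "only accepting HTTP traffic":

    # source-NAT (masquerade) everything the router sends toward the web network
    iptables -t nat -A POSTROUTING -o vxlan2002 -j MASQUERADE
    # firewall: drop all forwarded packets by default...
    iptables -P FORWARD DROP
    # ...then allow HTTP toward the web server, plus established/related replies
    iptables -A FORWARD -p tcp --dport 80 -j ACCEPT
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT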
Continuing with the WireGuard setup: it uses the private key we mounted, thanks to Kubernetes Secrets. We also set up the peering with Amsterdam; the peer here is basically the public IP of the Amsterdam site, hard-coded, together with the IPs we allow the peer to use. Then we set up the underlay address. By underlay I mean the data plane between the Amsterdam and Tokyo sites: we configure the subnet 192.168.102.0/24, and we create a static route saying that all traffic for 192.168.0.0/16 goes through the tunnel.

With that done, we also need to set up the overlay, so the VXLAN. Again, we're not using multicast, just for the sake of the demo, but unicast with static flooding. We create a VXLAN interface with a specific, different VXLAN ID, and we append the all-zeros address so frames are sent to the VTEP, in this case 192.168.102.2. If you've paid attention, the .2 is the Amsterdam site. Then we also set up the transfer network, another network running on top; as I said, this is the one that lets the two VMs and the router talk among themselves. So, to put it simply, we have configured this part; now we still need to configure that part and get the connection between the router and the VNF VM working.

Back in the terminal, I create yet another VXLAN interface with the same VXLAN ID as the CNF router, so 21, and do the same thing for the forwarding database. That sets up the VXLAN part. Now we do what you could call a layer-2 handover to layer 3: we create a bridge, which is basically a virtual layer-2 switch, add the interfaces we created before, and bring everything up. Once this is done, we apply the manifest, and we should see a pod called virt-launcher that has the VM running.

Next we need to expose some ports, because this is a WireGuard VM and it needs to speak WireGuard. I've exposed three ports. The SSH one is just for convenience, for me or anyone who wants to reach the VM; it's a ClusterIP service. You can see another one listening on, or rather targeting, port 21 on UDP: this is the VXLAN port, exposed as a service of type NodePort. That's because we won't be using any load balancer, so we target the worker node directly to get the tunnel working. And we expose the default WireGuard port as a NodePort too; a port is assigned randomly here, and we'll use that port on the Amsterdam side, as I'll show you in a couple of minutes.

With that, we have finished everything on the Tokyo side. In Amsterdam we do exactly the same. If we go to the manifests, there isn't much here: we have VM 01 to apply, and it is exactly the same as the one in Tokyo. The only differences, if you scroll down, are that the WireGuard keys are obviously different, and in the setup of the WireGuard tunnel we change the public key, which is of course the one from Tokyo.
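Condensed, the WireGuard part of that script could look like the following sketch; the key path, the peer placeholders, and the exact host addresses are assumptions:

    # create the WireGuard interface and listen on the default port
    ip link add wg0 type wireguard
    wg set wg0 listen-port 51820 private-key /etc/wireguard/privatekey  # mounted from the Secret
    wg set wg0 peer <AMSTERDAM_PUBLIC_KEY> \
        endpoint <amsterdam-public-ip>:51820 allowed-ips 192.168.0.0/16
    # underlay / data-plane address of the Tokyo side
    ip addr add 192.168.102.1/24 dev wg0
    ip link set wg0 up
    # static route: overlay traffic for 192.168.0.0/16 goes through the tunnel
    ip route add 192.168.0.0/16 dev wg0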
The other difference is the endpoint. Since we're using a NodePort to expose the port on the Tokyo side, if we do a kubectl get nodes, you can see the external IP of the worker node; so we target that external IP and port 30021, which you can see is the WireGuard NodePort. We also create the same underlay, with 192.168.102.2 on this side, so the same tunnel network for WireGuard, and we create the VXLAN interface, and so on, exactly the same transfer network. And finally, one little change: we add a static route saying that to reach the 192.164.0.0/16 network, the network we created between the CNF router and the web server, you have to go through this next hop, which in this case is the CNF router, and the CNF router has a route for this specific network. So we apply this one.

When we apply it and do a kubectl get pods, we have this virt-launcher pod, which is basically the VM, and with virtctl we can easily access it. Let me use virtctl console with the kubeconfig against VM 01. And yeah, we can connect to it. Now, logically, we should be able to target the web server, so CNF web. We kubectl exec -it into CNF web, and an ip a should be enough: its address is 192.164.0.100. So if we do a curl and target it, there we go: we are able to reach the NGINX web server.

Now, pinging is not possible, because if you recall correctly, we have not allowed this type of traffic on the router. So we can just exec inside the router and look at the rules in the FORWARD chain of the filter table. Yeah, we are not allowing anything: the default policy is drop and there is no ICMP rule. So let's add a rule to allow this type of traffic and let this router act as a firewall: insert into the FORWARD chain, simply with -p for protocol ICMP and target ACCEPT. And there we go, we get ping responses, and you can see the latency is pretty high, because, well, this is Japan, so we have about 240 milliseconds of latency. And that's it for the demo.

In summary, we were able to use VMs to set up a tunnel and an overlay network and have traffic sent from one site to the other through the VMs, the VNFs, and routed properly in the Kubernetes cluster to the target, using the Kubernetes CNI capabilities. To sum up the talk: for everything related to zero-touch networking, so provisioning a device without user interaction, and to service management, it obviously helps to have smaller slices, loosely coupled puzzle pieces, and CNFs under Kubernetes orchestration. But that is not always possible. As I said, there are lots of legacy vendors that are still very popular, like F5, or, for logs, Splunk, that don't offer these containerized solutions, so you still must run their VMs. We need those virtual machines, but now, thanks to projects like KubeVirt, we can run them side by side with cloud-native network functions and get the advantages of both worlds. Thank you very much for listening; that's it from our side. Yep, thanks a lot, and if you have any questions, feel free to reach out to us. Bye.
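For reference, the verification steps at the end of the demo map roughly to these commands; the pod names and the web server's address are reconstructions:

    virtctl console vm01                       # attach to the KubeVirt VM's console
    kubectl exec -it cnf-web -- ip a           # look up the web server's overlay IP
    curl http://192.164.0.100                  # succeeds: HTTP is allowed by the firewall
    kubectl exec -it cnf-router -- \
        iptables -I FORWARD -p icmp -j ACCEPT  # allow ICMP through the router
    ping 192.164.0.100                         # now answers, with roughly 240 ms latency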