My name's Danian Hansen. I'm a principal software engineer with Solo, I've been a contributor to Cilium for about a year now, and I'm also a reviewer in a couple of the SIGs within Cilium. It's pretty interesting to come full circle talking about IPv6, since I was part of a team that brought IPv6 alpha support to Kubernetes. IPv6 is probably not the best topic to discuss via slides, so I put together a demo and I'm hosting the instructions on GitHub. If you'd like to follow along, or access the demo later, you can grab that QR code. What we'll do is dive right into the demo. Give me one second here. If you got the QR code, this is the demo you're going to be seeing. What we're demoing here is the IPv6 capabilities of Cilium running on a Kubernetes cluster that is configured for IPv6 only. I'm using kind to spin up the cluster, and it's a simple two-node cluster: a control plane node and a worker node. Let's take a look at what we've got going on here. I've installed the cluster, and if we look at the control plane configuration for the kube-controller-manager, you'll see that the cluster CIDR is defined as a /56 IPv6 CIDR, and the service cluster IP range is also defined as v6, as a /112. What we'll see, if we go down, is that nodes are assigned pod CIDRs. Here's my control plane node, which has been assigned a pod CIDR of a /64, and same with the worker node; both come from the /56 cluster CIDR. When I install Cilium, Cilium IPAM will assign IPv6 addresses to pods that get scheduled to each of these nodes, from each of these pod CIDRs. Let's take a look at the nodes as well, and you'll see that the nodes are configured for IPv6 too.
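For reference, an IPv6-only kind cluster like the one described above can be created with a config along these lines. This is a sketch: the actual demo config lives in the GitHub repo, and the subnet values here are illustrative, not necessarily the ones used in the demo.

```yaml
# kind-ipv6.yaml (sketch): two-node, IPv6-only cluster with the default
# CNI disabled so Cilium can be installed instead. Subnets illustrative.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6            # IPv6-only cluster
  disableDefaultCNI: true   # we will install Cilium ourselves
  podSubnet: "fd00:10:244::/56"     # cluster CIDR; each node gets a /64
  serviceSubnet: "fd00:10:96::/112" # service cluster IP range
nodes:
  - role: control-plane
  - role: worker
```

A cluster like this would be brought up with `kind create cluster --config kind-ipv6.yaml`.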
So we've got the ::2 node and the ::3 node, and if we do a docker network inspect, you'll see that since these aren't physical nodes, they're virtual nodes: each node is essentially a container running on my Docker daemon. kind created this Docker network called kind-cilium and gave it this v6 subnet, and each node gets assigned an IP from that subnet. Now that we've confirmed the settings for our cluster, let's go ahead and install Cilium and wait for it to be ready. While we're waiting, let's cover some of the configuration that you see here in the demo. Most of it is fairly self-explanatory, but there are a few configuration knobs worth covering. Routing mode: by default, Cilium uses a tunnel routing mode, where each node creates a tunnel overlay to the other nodes, and that's how pods are able to communicate with each other across nodes. In this instance we're not using tunnel mode, we're using native mode; instead of creating and managing those tunnels, Cilium will manage the host's routing table, and I'll show you in just a minute what I'm talking about in more detail. Auto-direct node routes: when the nodes share a common subnet, Cilium has the ability, through node discovery, to learn the pod subnets of the other nodes and then create routes in the host routing table to get reachability to those other pod CIDRs. We specify a native routing CIDR, which instructs Cilium: when you see traffic within this CIDR, never masquerade the source IP. Essentially we're saying we don't need to masquerade any of the traffic that stays within the cluster. Instead of using iptables for masquerading, we're going to use BPF, and we're also going to replace kube-proxy and use Cilium for proxying services.
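The configuration knobs just described map to Cilium Helm values roughly like this. This is a hedged sketch: the value names follow recent Cilium Helm charts (older charts spell some of these differently, e.g. `tunnel: disabled`), and the CIDR is illustrative.

```yaml
# values.yaml (sketch): IPv6-only Cilium with native routing
ipv4:
  enabled: false
ipv6:
  enabled: true
routingMode: native              # no tunnel overlay; use host routing tables
autoDirectNodeRoutes: true       # learn peer pod CIDRs, install host routes
ipv6NativeRoutingCIDR: "fd00:10:244::/56"  # never masquerade inside this CIDR
bpf:
  masquerade: true               # BPF-based masquerading instead of iptables
kubeProxyReplacement: true       # Cilium handles Kubernetes services itself
```

Auto-direct node routes only works when the nodes share an L2 segment, which is exactly the situation here, since all the kind nodes sit on one Docker network.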
So hey, take a look, Cilium is up and running now, and I want to show you something here. I told you about that auto-detection of pod CIDRs from other nodes, and here's a perfect example of it. When I go into the control plane node and look at the routing table, I see that it now has a route to the 244:1 CIDR, and that 244:1 is actually the pod CIDR for the worker node. And vice versa: when I go into the worker node, I see that Cilium has added a route to the control plane's pod CIDR. Okay, now that we've verified that Cilium is up and running and has added the host routes, let's go ahead and run a sample workload. This is a curl client and an NGINX server that we'll be using for testing connectivity within the cluster, and you see that the client and server pods are running. In these manifests I used node selectors, an easy mechanism to make sure the client and server pods run on different nodes, so that when we test connectivity, we're actually going between nodes. Let's actually test connectivity here: we'll curl from the client to the server, and you see that we got a 200 response, so that's good to go. Now that we've tested pod-to-pod connectivity across the nodes, I'm going to expose the server pod using a Kubernetes service, and you see the service gets assigned a v6 address. One of the key features added to services in Kubernetes for dual-stack support is these two fields: the IP family policy and the IP families. The family policy indicates whether this service gets one or more IPs based on the defined IP families. In our instance it's single-stack, so the service gets a single IP address from the family we specify, which is IPv6. Now let's test connectivity through the service, and you see that works as well.
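The single-stack IPv6 service just shown would look roughly like this as a manifest. A sketch only: the service name and selector labels are assumptions based on the demo description.

```yaml
# Sketch of the demo's service; name and labels assumed for illustration.
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  ipFamilyPolicy: SingleStack  # one ClusterIP, from the single family below
  ipFamilies:
    - IPv6                     # allocate the ClusterIP from the v6 range
  selector:
    app: server
  ports:
    - port: 80
      targetPort: 80
```

With `ipFamilyPolicy: PreferDualStack` or `RequireDualStack` on a dual-stack cluster, the same service could instead receive one IP per listed family.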
So the next thing we want to test here is DNS resolution. Everything we've been testing so far is great: pods on different nodes over a v6-only network, v6 addresses for pods. We've got network connectivity working, but what about DNS? Does DNS work? We're going to do the same test, but this time we'll dive into the curl request and response in more detail. You see that now the curl request is to the DNS name of that server service, using the well-defined naming structure within Kubernetes: the service name, then the namespace of the service, then .svc. And what you see is that kube-dns has resolved this name: kube-dns has provided a AAAA record to the client, resolving that name to the service IP. Let's just verify here: this is the service IP, so we see that kube-dns has properly resolved the DNS name to the IPv6 address of the service. The client issues the request after resolving, you see the headers that are added to the HTTP request, and we get a 200 OK response with the payload. So far so good. The next area I want to talk about is Hubble. We've been looking at and testing network connectivity; Hubble is a platform built on top of Cilium and eBPF. There have been mentions of it in some of the talks earlier today. It allows deep observability into the network communications across your Cilium network, and it does this in a very transparent manner, since it uses eBPF, with very minimal impact on the traffic. So let's go ahead and enable Hubble. There's also an option to enable the UI as well, but since we're going to be working from the command line, I'm just enabling Hubble, and we can see what's going on here. What's already happened is we've added Hubble Relay to our Cilium installation, so now we have a Hubble Relay deployment. Let's go ahead and port
forward Hubble, and I'm going to do that so I can see some of the Hubble activity when we create some traffic. Let's observe traffic from this console, jump down here, and create some traffic that we can observe using Hubble. Here's the server IP; you see that the request was sent and a 200 OK response was received. And in Hubble we now see a bunch of information. Each of these lines is a Hubble observability event, providing a bunch of context: a timestamp, the source and the destination, with the details of the source and destination shown as the namespace and name of the pods, along with the source and destination ports. The to-network events show the request being observed as it goes to the networking stack and gets forwarded, along with the TCP flags, and the to-endpoint events show it being delivered to an endpoint itself. So we see the entire communication happening here with Hubble: the session gets created, data is passed, and then the session is closed. And if we wanted to, we can set the IP translation flag to false, and you'll see that we get the same events, but this time, instead of displaying the source and destination namespace and pod, we actually get the IPs. Now, network policy is a foundational feature of Cilium. It allows us to create network-wide policy for the applications that run in our clusters. To demonstrate network policy, I want to create a second client. Let's make sure it's up and running, it is, and let's test connectivity from the second client to our server. Great, so we can confirm that both clients can access the server. But now I want to create a network policy that ensures only client, and not client2, can access my server. What you'll see with the network policy is that it's using a label selector to apply the policy, and it's using the label app=server,
which matches the labels I have on the server pod. With --show-labels we can verify this: the server has app=server, and the clients have their own labels, app=client and app=client2. So you can see that we can create network policies without ever having to specify IP addresses. Going back to the specification of the network policy: we talked about the endpoint selector being used to attach this policy to endpoints that match these labels, in our case the server pod. Then we specify the direction, so it's ingress to that server pod, then we specify the source, again using labels, and then the ports. In plain English, this says: traffic from app=client to app=server, ingress on port 80, is allowed. Now if we go back to the client, let's verify that the client can still access the server, which it can, and now let's do the same thing for client2, and you see it's blocked. But how do we know that it's blocked? Let's go back to our Hubble Relay. Let me kill that first, port-forward Hubble again, and observe the traffic. Now let's run curl from client2 to the server, which, if our network policy is working the way we expect, we should actually see. We're already seeing some of it here, but let me create some space. Let's go down here, and we should see Hubble tell us, and there it is. Hubble is telling us: hey, that packet got forwarded to the network stack, got intercepted by the eBPF policy that says deny this traffic, and then it got dropped. Last but not least, I wanted to touch on BIG TCP. BIG TCP is a feature that was added to the Linux kernel in the 5.19 release, and what it essentially allows the kernel to do is batch a bunch of receive or send packets together, so that you have a larger payload and it becomes more efficient to use the network.
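The policy described a moment ago, allowing only app=client to reach app=server on port 80, can be sketched as a CiliumNetworkPolicy. The label keys and policy name are assumptions based on the description; a plain Kubernetes NetworkPolicy could express the same rule.

```yaml
# Sketch of the demo's policy; name assumed for illustration.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-client-to-server
spec:
  endpointSelector:
    matchLabels:
      app: server        # attach the policy to the server pod
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: client  # only "client"; "client2" does not match
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
```

Because the policy selects endpoints by label, traffic from client2 (app=client2) falls outside the allow rule and is dropped by default once an ingress policy applies to the server.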
If you're familiar with something like jumbo frames, it's a similar concept, but the difference is that jumbo frames are a layer 2 technology, so every layer 2 interface on the path needs to be configured for jumbo frames, whereas BIG TCP is a feature within the Linux kernel that Cilium is able to take advantage of to increase the size of that payload and, again, make communication over the network much more efficient. Let's take a look and see where we're at. Let me go back to my main console and view the config: you'll see that BIG TCP is disabled for v4 and v6. Before we enable it, take a look: we're running a kernel version that will support BIG TCP. What we want to do is take a look at the nodes, and you'll see that the nodes right now have a 64K GSO max size, essentially telling us this is the biggest payload you can send in a packet. We're going to use netperf to see what kind of performance we get when we run a request/response test between our client and server across the Cilium network. Okay, you see they're now up and running, so let's run the test really quick. While it's running: from the netperf client to the netperf server, which is running on this IP address, we're specifying the TCP request/response test, we're specifying the request and response payload sizes to be 80,000 bytes, and the output we want to see is the minimum latency and the P90 and P99 latencies, along with throughput. Let's save these results. All right, we see a minimum latency of 71 microseconds, 90 percent of the request/responses were handled under 225 microseconds, the P99 is 697 microseconds, and a little over 6,000 transactions per second. Now let's enable BIG TCP for IPv6 within Cilium. When we make this change, we need to roll the Cilium DaemonSet, and while that's happening,
one of the things we need to do is delete the netperf client and server: in order for the larger GSO size to take effect for those pods, we actually need to bounce them. All right, Cilium is back up and running, so let's kill those pods and recreate them really quickly. They're back up and running. Now take a look at our GSO max size: it's no longer 64K, it's now 192K. Let's run the test again. Because we bounced the client and server, our server is going to have a new address, but we're still running the same test, asking for the same output and the same payload sizes. What we see here, and we could run it a couple more times and see different results, is generally a nice increase in throughput along with better latency, really improvements across the board. And that gives you a quick introduction to BIG TCP for IPv6, and a general introduction to IPv6 with Cilium. I hope you enjoyed it, and again, feel free to use this demo if you'd like.
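For completeness, enabling BIG TCP for IPv6 in Cilium comes down to a single Helm value. A sketch: the value name follows recent Cilium Helm charts, and as discussed above it requires a 5.19+ kernel and native routing mode.

```yaml
# values.yaml fragment (sketch): enable BIG TCP for IPv6.
# Requires kernel >= 5.19 and native routing. After changing it,
# restart the Cilium DaemonSet and recreate workload pods so the
# larger GSO max size applies to them.
enableIPv6BIGTCP: true
```

This is what raises the nodes' GSO max size from 64K toward 192K in the demo, which is why the netperf pods had to be bounced before the second test run.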