Hello, good afternoon. I'm here to talk about Kubernetes with IPv6. I work at Covalent IO, where we develop an open-source project called Cilium, which gives you security at layers 3, 4 and 7, and it also provides IPv6 connectivity in your infrastructure. So what should you expect from this talk? I'll give a really quick history of IPv6, then talk about whether Kubernetes itself can run on IPv6 or not, and while I'm doing that I'll walk step by step through each Kubernetes component and what you need to change, or not, in each of its options.

So, the history of IPv6. Everything started around 1991, when the IETF began wondering whether we would have enough IPv4 addresses available for the future. As they realized we might have a problem, they created a working group called ROAD, and in 1995 they produced the first version of what was called "IP next generation", the draft for IPv6. About three years later came RFC 2460, the IPv6 specification. In 2005 Linux got a stable IPv6 implementation, and around 2008 we started seeing news about IPv4 exhaustion and that we should do something about it. Some ISPs looked around and decided the best way to deal with IPv4 exhaustion was NAT on top of NAT on top of NAT, instead of upgrading directly to IPv6. Around 2012-2014 containers started to pop up, and they were designed for IPv4 connectivity rather than going directly to IPv6. In July of this year IPv6 was made Internet Standard number 86, and hopefully next year, 2018, will be the year of IPv6.

So what about Kubernetes running with IPv6? The first question I came up with was: is it even worth making this talk? What about the infrastructure: do I have the infrastructure to run Kubernetes with IPv6? And what about Kubernetes itself? Does it run?
Do the pods, the services, the ingress, all the Kubernetes concepts, need any changes at all? And after that, if I have a Kubernetes infrastructure, what about my apps? Will they run or not? Let's find out.

When I started googling, the first result I got was the question of whether there is any benefit to using IPv6 at home. I'm not sure if you can see it, but the first answer here was a funny one: with IPv4 you cannot have billions of IP addresses for your home appliances, so Jian-Yang would not be able to have his smart fridge. But the accepted answer was this one: no, there is no benefit to using IPv6 at home, which I don't completely agree with. I agree more with the next answer: yes, there is a benefit if you are using it for education. If you want to try out IPv6, I think the safest place is at home, where you can break things and at least find out whether they work or not, and later on you can deploy it in your company's infrastructure. Of course, your house is not a data center. Unless it's a startup, of course; and if it is a startup, you start plugging things in and things might burn down in the end.

This concept of pets versus cattle is something that has been popping up in the container world. In your house you usually have pets. What does this mean? It means you have an IP out of a /24, for example, for each type of pet, and you know which pet has which IP address. But in your data center you have more than pets, you have cattle, and you start carving out a /6 or a /8 for each type of animal. You have cows and you have sheep, and you start splitting them up by color: black cows on a /16, and so on. But hopefully your data center doesn't have actual cattle.
It has containers. So you take the same approach with containers and start splitting them up by type, for example. As the number of developers grows, the number of containers grows as well, and so do your users, and you end up with millions of containers. Managing the addressing for all of that by hand would be impossible. So what is the best solution for this? Could it be IPv6? No, let's do NAT! Let's do SNAT, that always solves the problem. So we have multi-level NAT: you have containers on top of VMs, and then your cloud provider or your ISP also uses NAT. You are wasting resources, and your users will be so happy about their slow connections. Always a good idea, right? No. So let's go and assemble an IPv6 cluster.

You usually end up with two options in the beginning: either you deploy on premises or you deploy on the cloud. The first question you might wonder about is: does my operating system support IPv6 or not? Well, as I showed on the first slide, Linux has had stable IPv6 since 2005, and if you haven't upgraded your Linux since 2005, I think you have worse problems to worry about than IPv6. The next one is: does my server, or my cloud provider, support IPv6? For example, here I looked at AWS and GCE. AWS can at least provide a public IPv6 address to the VM itself. GCE, as far as I know, provides IPv6 only up to the load balancer, so inside your cluster you have IPv4, and up to the load balancer you can have IPv6. And what about your users? Will they use IPv6?
Well, I took this screenshot at the end of August, and we can see that one in five users worldwide is already using IPv6. So I think it's a good idea to start looking at it. That was worldwide; this chart, from Akamai, shows the top countries using IPv6. We can see that in Belgium at least half of the users are already on IPv6, and in the United States alone it's two out of five. So at least on the Belgian side, if you have Belgian customers, you should start paying attention to this.

Now let's get to the purpose of this talk and dive a little deeper. This is a normal Kubernetes cluster: you have a master (you can have multiple masters) and you usually have multiple workers. On the master side you have the controller manager, the API server and the scheduler, and you also have etcd, which stores all the data. On the worker side you usually have a container runtime, the kubelet, and the CNI plugin that manages all the networking for the containers running on the worker.

Let's start with etcd on the master side. etcd has 53 CLI options (and I'm talking about etcd, not etcdctl, which is the client used to connect to it), and only five of them are relevant for IPv6. We can see that these are mostly addressing: it's just an address, and we can simply replace it with the IPv6 localhost. It's that simple: if it works, it works; if it doesn't, you'll eventually find out and you can report it in a GitHub issue, since this is all open source. And if you're asking about HTTPS: HTTPS should not care whether it's running over IPv4 or IPv6. But if you're talking about certificates, then yes, the configuration needs to be aware of IPv6, so you can have the IPv6 address in the certificate itself. I will use HTTPS in etcd.
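To give an idea of what that can look like, here is a minimal sketch of starting a single-node etcd for IPv6 clients over HTTPS. This is not the exact unit file from the demo: the fd00::b node address and the certificate paths are placeholders. Note that IPv6 addresses inside URLs have to be wrapped in brackets.

```shell
# Sketch only: single-node etcd listening on IPv6 with TLS.
# fd00::b and the certificate paths are placeholders, not the demo's values.
etcd --name master \
  --listen-client-urls 'https://[fd00::b]:2379,https://[::1]:2379' \
  --advertise-client-urls 'https://[fd00::b]:2379' \
  --listen-peer-urls 'https://[fd00::b]:2380' \
  --initial-advertise-peer-urls 'https://[fd00::b]:2380' \
  --initial-cluster 'master=https://[fd00::b]:2380' \
  --cert-file /etc/etcd/server.pem \
  --key-file /etc/etcd/server-key.pem \
  --trusted-ca-file /etc/etcd/ca.pem
```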
So I'll have certificates with the IPv6 addresses in them, and I'll start by deploying etcd on the master node. I will have only one instance of etcd running. Can you see the fonts in the back? This is my service file for etcd. You can see I have lots of options here, but the important ones are the ones with IPv6 in them, and the certificates, which are aware of the IPv6 address. That's basically it on the etcd side. So let's start etcd. I always mess up this operation... and we can see it's up and running, since 40 seconds ago, so at least it's working so far.

Let's move on to the next component, the kube-scheduler. The kube-scheduler is a simple component: it only has around 30 options, and only about three of them are relevant for IPv6 in our cluster. We can apply the same approach we applied to etcd and see whether it works or not. It's a simple component as far as the IPv6 side of our cluster goes, because it only decides which node will receive which pod. I will not start the kube-scheduler now, because it needs the kube-apiserver running first; later I'll run both at the same time.

The kube-apiserver is a really important piece of the Kubernetes puzzle. It has 120 CLI options, and only five of them are relevant for IPv6. There is one new option that is also important for the cluster: the service cluster IP range, and I've selected fd03::/112. What does this mean?
This means I can have around 65,000 different services running on Kubernetes. A service allows you to have multiple pods serving the same thing, with the service acting as an abstraction over the pods running in the back end. The kube-controller-manager will automatically assign each service an IP out of that /112 range, and this way all the other pods can communicate with the service without knowing the destination IP, because pods come and go and you cannot be sure which pod has which IP at a given point in time. A warning here: please do not try this at home with Kubernetes older than 1.8, because this came from a recently merged pull request that will only be available in 1.8, which is released later this month. The service cluster IP range option itself already existed for IPv4; the pull request is what made it work for an IPv6 cluster.

So let's start the kube-apiserver first, and then the kube-scheduler. This is the service file, which we can see here.
I have the advertise address, which is the node IP; the etcd servers option, pointing at the etcd server I started three minutes ago; the certificates to connect to etcd; and the service cluster IP range I talked about previously. Let's start the kube-apiserver and check the status. Okay, it's up and running, so so far we have two components running: etcd and the kube-apiserver. Now I have to start the kube-scheduler. (Yes, it still binds to IPv6 as well.) The kube-scheduler only has 30 options; I will use a kubeconfig that contains everything it needs to connect to the master. A kubeconfig is a configuration file, the same kind you create for your kubelet, that tells a client how to connect to the API server. The screen is too small to show the whole file, but it has the certificate embedded, it has the server address here as you can see, and the user will be the kube-scheduler user I've set up. Let's start the kube-scheduler and check the status. Okay, it's up and running, so far so good.

The next one is another important component of Kubernetes, the kube-controller-manager. The kube-controller-manager manages the whole cluster; it's the brain of the whole thing. It has five relevant options, similar to the kube-apiserver, but three of them are particularly important because they specifically contain IP addresses. Let's take a look at them. We have the service cluster IP range, the exact same one we specified in the kube-apiserver previously; the services will be cluster-wide, the abstraction over the running pods. And now we have the cluster CIDR. The cluster CIDR is the CIDR for the pods running in Kubernetes. Do not confuse them.
Do not confuse the cluster CIDR with the physical addressing you set up for your VMs. All the pods running in my cluster will get addresses out of fd02::/80. And then we have the node CIDR mask size, which is 96. What does this mean? We have the cluster CIDR, fd02::/80 (I'm not sure if you can see the bold part, which is the prefix), and a node CIDR mask of 96. This means the first node that registers itself with the kube-controller-manager gets a /96 subnet out of that /80, the second node gets the next one, fd02::1:0:0/96, and so on, all the way up to fd02::ffff:0:0/96 for the last one. This is how Kubernetes deals with the networking: it carves a subnet for each node out of the whole network assigned to the pod containers. So we can have around 65,000 nodes, and on each node around 4.3 billion containers, if you ever want to run 4.3 billion containers. That is a lot of IP addresses, but you should not care about individual IP addresses at this point, because you have a controller manager that deals with that; you should forget about IP addresses. You should only care about where and how the pods will be deployed; you should not care whether pod A is running on worker one or pod B on worker two. If you want security, you should look at levels other than L2 and L3 at this point.

So let's start the kube-controller-manager. I have the cluster CIDR here, which is fd02::/80, and the node CIDR mask size. I also have allocate-node-cidrs, which is how the kube-controller-manager knows it should assign each node a particular CIDR out of the /80, and I also have the service cluster IP range here. Let's start the kube-controller-manager... up and running.
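The subnet arithmetic above can be checked with a short sketch using Python's ipaddress module. The prefix values (fd02::/80 cluster CIDR, /96 node mask, fd03::/112 service range) are the ones from the demo; the script itself is just an illustration.

```python
import ipaddress

# Demo values: pods come out of the cluster CIDR, each node gets a /96 slice.
cluster_cidr = ipaddress.ip_network("fd02::/80")
node_mask = 96

# Number of /96 node subnets that fit inside the /80 cluster CIDR.
max_nodes = 2 ** (node_mask - cluster_cidr.prefixlen)
print(max_nodes)          # 65536, the "65,000 nodes"

# Pod addresses available within a single node's /96.
pods_per_node = 2 ** (128 - node_mask)
print(pods_per_node)      # 4294967296, the "4.3 billion containers"

# The first node subnets the controller manager would hand out.
for _, subnet in zip(range(3), cluster_cidr.subnets(new_prefix=node_mask)):
    print(subnet)         # fd02::/96, fd02::1:0:0/96, fd02::2:0:0/96

# And the service range fd03::/112 gives the "65,000 services".
print(ipaddress.ip_network("fd03::/112").num_addresses)  # 65536
```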
So far we are good on the master side. Let's move on to the worker. On the worker side you have a container runtime; in my case I chose Docker. The network plumbing in Kubernetes is done by CNI, and you can find the reasons in the link on the slide, or by googling why Kubernetes doesn't use libnetwork. What is libnetwork? libnetwork is the plugin mechanism for Docker only: when you start a container with docker run --net and choose a plugin, it's a libnetwork plugin running inside Docker itself. Kubernetes instead lets you use a CNI plugin, which is a different kind of plugin that does not belong to Docker; it's a different choice they made for Kubernetes.

The CNI plugin I will choose is Cilium. As I said in the beginning, Cilium provides L7 security, and IPv6 is a first-class citizen. When we were creating Cilium, we thought: we need something for containers, and the first thing we talked about was high scalability, so we were not going to choose IPv4. Later on we found out we had to support IPv4 too, for environments that don't use IPv6 yet. Cilium will be aware of the options I chose in the controller manager, allocate-node-cidrs, the cluster CIDR and the node CIDR mask, so it knows which CIDR it should be using on each node, and it will route all the traffic across nodes. That's why I said before that you should not care about the addressing: the CNI plugin takes care of it, so you don't really need to be aware of it anymore. You also have service routing: the routing for the services I explained before will be done by the CNI plugin, in this case by Cilium. If you are using a different CNI plugin, it might not
be the CNI plugin itself that deals with this; it will be kube-proxy. I will not run kube-proxy, because Cilium already does the service routing, but some plugins rely on it. As far as I know, there is no option you need to change in kube-proxy to run with IPv6 if you are using a different plugin than Cilium.

Then we have the kubelet, which has 160 CLI options, but fortunately we only need three. We need the cluster DNS, which is the DNS that will run on the Kubernetes cluster; this IP needs to be chosen beforehand, which is why I will have kube-dns running with this IP. It's just an IP out of the service cluster IP range. I also specified the node IP of the node itself on each worker. The node IP support is still a pull request, that number over there; it hasn't been merged yet, but everything in this demo has that patch compiled into a 1.8 beta build.

So I'll run the kubelet. Since my machine is not that powerful, I will run the kubelet on the same VM as the master, so I'll have one master/worker node plus one dedicated worker. You can see the options here: the node IP, the network plugin, which is CNI, and the cluster DNS that you need to specify beforehand. Let's start the kubelet and check the status. Okay, it's been running for eight seconds, and I'll also start Cilium as the CNI plugin. Let's make sure Cilium is also working. Okay, it is. On the lower side of the screen is the dedicated worker, where I'll likewise start just the kubelet and Cilium. (Sorry? No, I will not be starting kube-proxy; I don't need it, so I will not use it.) Okay, so we have a Kubernetes cluster up and running with IPv6, and so far things have been working. But now we need kube-dns.
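Before moving on to kube-dns: as a rough idea, the kubelet invocation for a setup like this might look as follows. This is a sketch, not the demo's exact command; the addresses and the kubeconfig path are placeholders, and the IPv6 node IP only works with the patched build mentioned above.

```shell
# Sketch only: the three IPv6-relevant kubelet flags, plus the kubeconfig.
# fd00::b, fd03::ffff and the kubeconfig path are placeholders.
kubelet \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --node-ip 'fd00::b' \
  --network-plugin cni \
  --cluster-dns 'fd03::ffff' \
  --cluster-domain cluster.local
```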
kube-dns will be the DNS for the whole Kubernetes cluster. It serves the DNS requests for all the pods running in the cluster, and it's a Kubernetes deployment spec file that is available in the Kubernetes GitHub. I only had to make one small change in the deployment file: I had to change the health probe to query for AAAA records instead of A. What does the probe do? The probe was checking whether Kubernetes was up and running by querying for this name here, kubernetes.default.svc, the default name for the Kubernetes service. Since Kubernetes was running with IPv6 and not IPv4, the probe concluded Kubernetes was not working: it was making an A query for that name, and the query came back empty, so it thought something was wrong. I had to change it to query for AAAA, and then the reply was the proper IPv6 address serving the kube-apiserver.

So I will start kube-dns, which is in this directory, hopefully. Let me check... no, it's not. It's here. I'll just start it and then explain. This file is available in the Kubernetes GitHub, and this was the change I made: I changed it to connect to localhost on this port, and to request AAAA instead of A. I'll also deploy the application itself and an ingress; I'll explain what they do in the cluster. They just need a little time to come up. kube-dns will run on... I don't know which worker, maybe worker two or the other worker node; I don't need to care about that.

The next step is the ingress. An ingress allows you to expose your services, the pods running inside the cluster, to the outside world. What's the point of having a cluster that is not connected to the internet? The ingress gives you that kind of exposure to the outside world.
It's also a Kubernetes spec file, and I didn't need to change anything in the spec files available on GitHub. The NGINX ingress controller will run on worker two or worker one, I don't care which. This is part of the demo I've already deployed, and I'll play the user on the left side of the screen connecting to the NGINX controller. How do I find out which address it's available on? I'll type kubectl get ingress, and I can see the ingress I deployed previously: it serves the host foo.bar.com, and it's available on these two addresses here, fd00::b and fd00::c. So in theory, if I go to this address, I should see something in my browser. Let's find out; it takes just a little bit.

Okay, I just typed one of the addresses, and I see a 502 Bad Gateway. This is good news: so far we are able to connect to NGINX, which is running on one of the nodes, and I can reach this side. So why am I getting a bad gateway? The ingress has this host rule here, foo.bar.com, so if I change the Host header to foo.bar.com, I should get through. It takes a moment, but I'll change it on my side. This is a browser plugin that allows you to modify the Host header of your request, and that's why I'm changing the Host header to foo.bar.com. Let's see if there's an accept button here or not. Okay, there's not. And if I refresh the page, I should be able to see my service that is running inside my cluster, via NGINX. So what is happening here so far?
I'm hitting NGINX, which is running on one of the workers, and the request goes to guestbook. NGINX knows there is a service called guestbook, and that guestbook service is this one here: it has this cluster IP, which was automatically assigned by the controller manager. It also knows there are some endpoints serving this guestbook service, which is this address here, and that address is an actual container, a pod, running inside my cluster. The translation between the cluster IP and the pod itself is done by Cilium: when NGINX tries to connect to the guestbook service, Cilium knows the proper address to redirect the traffic to, so the request goes through Cilium directly to the pod itself. Guestbook then contacts kube-dns, because it needs to know the address of the Redis master; it receives the reply, and guestbook writes the name into the Redis master. The Redis slave replicates the entries from the Redis master on its own, and guestbook reads the data back from the Redis slave. So data is written to the Redis master, the Redis slave replicates it, and guestbook reads it from the slave. If I type "john" here, for example, and click submit, the data is written to the Redis master; and if I refresh (this happened really quickly, sorry), what happened was that guestbook queried the Redis slave for the data, and it was the same data I had written to the Redis master.

Final thoughts. Kubernetes has lots of CLI options, I know, but you should at least read through all of them to build up some Kubernetes knowledge and to try things out. And IPv6 is coming: you will start having users on IPv6, and you should be aware of that. You should try at least this tutorial.
You should try it at home to build some IPv6 knowledge. For example, I know developers who don't even know how to type an IPv6 address with a port in the browser; the bracket notation around the address is something a couple of developers are simply not aware of. And Kubernetes is getting ready, but there are a couple of to-dos. Dual stack would be nice to have, IPv4 and IPv6 on the same pod; right now I only have IPv6 running. These two pull requests were integrated into my demo, so I compiled all the components with them: the Kubernetes node IP option supporting IPv6, and the IPv6 prefix size limit for the cluster CIDR. And also kubeadm: for those who don't know it, kubeadm is a really, really good tool that lets you set up a cluster with basically a single line. I set up all of these services, the kube-apiserver, the kube-scheduler and so on, by hand purely for educational purposes; you can try that too if you want, but you should also try kubeadm, because you basically just type kubeadm init to start a master and kubeadm join to start a worker. That's it. And unless you try it, you will never find out whether you, or your infrastructure, are ready for IPv6.

We had a booth; I think the booth area is already closed now. If you want to ask me questions on Twitter, there's my handle, and coming up right after this talk, at 2:50 p.m., there will be a talk about Cilium itself if you want to know more. If you have questions, I think I have three minutes left to answer.

[Audience question]

So, the question was how Cilium knows how to route the traffic from one node to another. Cilium has an option for automatic IPv6 node routes.
I don't remember the exact name of the option, but it inserts the routes... not this one... yes, the IPv6 ones are first. It inserts the routes for each node, so Cilium knows which pod CIDR belongs to which node, and it also knows which node IP belongs to which node. Cilium automatically adds which subnet should be reached via which node IP. For example, the other node, I believe it's this one, should be reached via fd00::c, and fd00::c is on the device with this name here. So that's the way it does it.

[Audience question] Yes, exactly. Cilium connects to the API server, and since the kube-controller-manager allocates the CIDRs for each node, that information is available. You can see that for node one you have that pod CIDR, and Cilium is also aware of that CIDR. That's the way you can check it. It would probably be a bit big for the screen, but... correct, yeah.

[Audience question] Yes, I did. Because NGINX is only aware of the service IP from kubectl get; it's only aware of this cluster IP here, and this IP should not be on the wire. So Cilium does the translation to the right IP itself. Correct, exactly that.

All right, I need to stop, like the lady told me to. Thank you very much.