Alright. Can everyone hear me? Perfect. So, those of you in the back, there's a lot of room in the front, you know, if you want to come up front. If you don't like a particular answer from one of the panelists, you can even accost them. I'm just kidding. But thank you for coming. It's always a pleasure to be in person. I did a lot of KubeCon talks virtual, right? We got, what, three o'clock in the morning for one of these? But it's great to be in person. Four o'clock is a reasonable time. People are here. But this is your panel, you know, so be ready with the questions. I'm just going to do a quick round of introductions, and after that, it's all yours. Okay? So feel free to ask some of the tough questions. Make this panel squirm. They know everything about networking and IPv6, so I think it'll be cool. Yeah, I see that: nobody knows everything about it, right?

Before I get started, how many of you are already familiar with dual-stack networking? Okay. How many of you are already on the IPv6 bandwagon when it comes to Kubernetes? Whoa. Okay, that's very cool. Last question. On a scale of one to ten, where a networking geek is a ten and knowing nothing about any of this is a one, I'm somewhere around a level two, level three, something like that. How many of you are above a level six, a level seven? A few. Okay. So you guys can't ask any questions; they're going to answer the questions. No, but thanks again for coming. This is your panel. And I have a great, august panel in May, so it's pretty cool. I'll let them introduce themselves. So be ready to ask some questions. I'm going to run around with a handheld mic. Please wait for it, and don't yell out, because there's a virtual audience listening as we speak, and I don't want to ignore any of them. So thanks for coming, and thanks to the virtual audience as well. With that said, let's go from left to right, or right to left, or whatever. Start with Tim.

Hi, everyone. My name is Tim. I work at Google. I've been on Kubernetes for a long time. I'm one of the SIG leads for SIG Network. And if you hate the dual-stack API, it's probably my fault.

Hi, I'm Dinesh from Civo. I've been implementing some of the v6 dual stack into our cloud platform, which we then offer out to customers as a Kubernetes service.

Wonderful.

Hello, everybody. How's everybody doing? Good. Thank you. Thank you. I need the energy. My name is Lachlan Evenson. I work at Microsoft. I actually worked in SIG Network on the dual-stack feature in Kubernetes. So, really excited to hear from you all. We want to make it better, so please ask questions, either today or you can find me online. I'd really love to just hear about how it's being used. So, all these hands... you know, this is the year of IPv6, like Linux on the desktop. It's going to be this year. I've been waiting for 25 years. Kubernetes is that Trojan horse that's going to get v6 out there in the world. So I'm very excited to hear how you're all using it and learning and making it better. Thank you.

All right. Thank you, Lachy.

I will say that if Kubernetes having IPv6 is one really good thing that happens this year (I guess it was technically right at the end of last year, but if everybody implements it this year), then you may possibly single-handedly fix 2022. So please go back and do that. And I'm Bridget. I work on Lachy's team.
And if you would like to register a complaint about, or perhaps a pull request to, the docs for IPv6, or the code examples that I worked on with Lachy, or the blog post, or, you know... come and bikeshed the production readiness review. I feel like we could have done a little bit more with that. Why not? So, yeah, happy to chat about it.

Perfect. So, anybody have any questions? While you're warming up, I think that was a good segue into the question that I was going to ask, which is: I'm a developer, right? And Kubernetes networking was great because it was kind of like an extended VM. It got its own IP address, and all that I needed to do was wire up the IP address or something like that. But as a developer, do I really care about IPv6? Should I really care about IPv6? And if I do, what are my resources to get started? There are not a whole lot out there; I mean, I tried looking for them. So can you point developers in the right direction? Give some ideas? How many of you think that this is a problem? Yeah, I see a lot of heads nodding. So let's look at this from a developer perspective, and especially from an app developer perspective.

Yeah, that's a great question. And I hope the answer is that your cluster administrators can set it up correctly, so it's not a terrible burden. But I'm looking at Tim blinking behind his mask, and I think maybe he thinks, hmm, well, how much of a terrible burden do we expect this to be for people?

My feeling here is: if you don't know you care, you don't care. This is a space where most people should not have to care. For 98% of applications and users, this should be transparent. And for those people who do care, that's where the APIs that we've added are, but they're all optional. And we were, like, super rigorous about making sure that nobody breaks, nothing changes automatically. For most users, nothing changes behaviorally.

I don't agree with that, really. For end users, what we've kind of had with Kubernetes and v4 is that our pods are almost hidden by default, right? They're not addressable on the Internet. And going down the v6 and dual-stack route, all of your pods suddenly become addressable and routable and reachable from the public Internet, which is a really, really dangerous thing for the uneducated. So there's a huge amount of resources that we need to create as a community to give developers an understanding of what happens when you put, say, your Redis directly routable on the Internet.

Wait, so you're saying that the red teams should be paying attention right now?

They should be, yeah.

Yeah, I think, you know, from my perspective, one of the biggest barriers to adoption for IPv6 is the overhead to get started with it. And I really think a platform like Kubernetes, which can abstract a lot of those challenges in addressing away from being a burden on the developer, could be really empowering. It's as simple as setting a field on a Service to actually put something on the public Internet; there's a sketch of that just below. But there are some idiosyncrasies to IPv6 which are just a little different. And one of them is that all IP addresses are routable on the Internet. It doesn't mean that you can connect to them all. But if you don't set things up right, that could be a hassle.
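To make "setting a field" concrete, here is a minimal sketch of a dual-stack Service using the ipFamilyPolicy and ipFamilies fields the panel is describing; the name, selector, and ports are placeholders:

```yaml
# A Service requesting both address families on a dual-stack cluster.
# ipFamilyPolicy is the key field; ipFamilies optionally pins the order
# in which cluster IPs are allocated.
apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder name
spec:
  selector:
    app: my-app             # placeholder selector
  ipFamilyPolicy: PreferDualStack   # or RequireDualStack / SingleStack
  ipFamilies:
    - IPv4
    - IPv6
  ports:
    - port: 80
      targetPort: 8080
```

With PreferDualStack, the Service gets one cluster IP per family the cluster supports; RequireDualStack fails creation instead of silently falling back to a single family.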
And in the IPv4 Internet, the world we live in today, we've relied on this system called NAT to keep everybody safe out of the box.

Exactly.

And that kind of goes away. NAT actually breaks the Internet most of the time, so I'm hopeful that in the IPv6 Internet we can actually have better connectivity. But, you know, back to your original question: it should be as simple as setting a field to say, I want dual-stack networking.

So, from a developer perspective, what we hear from the community and from customers is: I need to present my application on both the IPv4 Internet and the IPv6 Internet. And why they're asking this is typically driven today by regulation. Some countries we operate in have regulations requiring services that are part of a government, or of some other international organisation, to be presented on both Internets. And that's why we think having dual stack gives developers a path to actually start onboarding and getting a feel for IPv6 without having to convert everything at the same time.

Yeah, that's great. And I think it's partly cultural too, right? For example, like you said, the perception that each of these IPs might be routable might itself be kind of a red flag, so to say. So can you address some of those issues of actually moving from IPv4 to IPv6, not necessarily from a Kubernetes perspective but from a general perspective, and talk about some of the challenges there and how you mitigate them?

I wonder if we want to back up just a tiny bit too, and warn people that if you have been in the Kubernetes space looking at this and thinking, oh yeah, I tried the alpha: just be aware there's some nuance there.

Yes.

Because we did re-implement the alpha specifically to reduce the complexity on the end user. So if you tried it and you thought, wow, this is way too hard to use, you might want to try it again. If you haven't tried it since 1.20, you may want to try it again, because it probably did change.

Yeah, I'd agree with that. So I've been working on implementing this stuff over the past few weeks, because, you know, nothing like last-minute homework. And it is pretty simple to get v6 onto a cluster now in the latest version, so the improvements are really good. It's just that we obviously found that danger out of the box, which is really key. The other thing is that GitHub isn't on the v6 Internet, which is a huge barrier to entry, because the first thing we did was spin up a v6-only cluster and a v6-only network and couldn't get to GitHub.

Yeah, the things you needed weren't on the IPv6 Internet. So the good thing is, now you can hold all those companies accountable.

Some cloud APIs aren't even on the v6 Internet.

Last I heard, I thought, you know, GitHub was bought by Microsoft.

Yeah, I think I probably have some colleagues that are waiting for this. They said they were waiting for this feature to land. I'm pretty sure I can send them a Twitter DM. That's the correct way to message your colleagues, right? Yeah.

So, to go back to your question: there are customers that want to use IPv6 only, but for everybody else, I think dual stack offers the best of both worlds to get started, because it's not just the transport; it's all your application software communicating on that.
And a lot of the pathways, even in different application code, those implementations on different networking stacks are a little less mature. So it's a really good way to say: hey, I'm presented on both, and I start to get a feel for IPv6 and how it operates. I start to get comfortable with operating different service discovery, all these different mechanisms, and I've got v4 there to serve the production workload. So you've got a great way to start getting a feel for how v6 functions. And I really think it's key, because we do get a lot of "well, I just want to go to IPv6." And I say, well, go and see what works and see what breaks. And sadly, most customers end up coming back and saying, well, I actually need dual stack, because I actually need to sometimes get to the IPv4 Internet, and how do I do that? And dual stack offers that kind of on-ramp.

And I think that's related to this: when all of us, in whatever capacity, talk to production users and customers, they aren't necessarily in a green field. They might have some backing data store that uses legacy whatever, or they might have some hybrid situation where they can't put v6 on everything. So setting up a "hey, we have an ideal v6 scenario" is great, and you can do that in a test lab, but you can't necessarily do that in production. And that's fine. That's where everyone is. It's not just you.

Audience, any questions? I have a few. Yeah. Let's get this gentleman a mic.

I'm going to see if I can jump in. So you're saying that IPv6 may not be for everyone and people need IPv4 too. I agree in part, but why isn't NAT64 a solution? Because to me it seems that dual stack kind of brings more problems than it solves: you get the problems from both IPv4 and IPv6. So my preference would be an IPv6 cluster with NAT64 to get outside. Thanks.

Yeah, that's a great point. And it reminds me: because we have this in person and virtual, there are two channels and I have to kind of look at those two channels, so it's all a little confusing. So, what do you think of just going IPv6, like the gentleman said?

Well, I can jump on this one, having done this before with NAT64. A couple of things. Your back-end services aren't actually running IPv6. That's fine, okay, but your application software makes no changes, and therefore even on the back end you're never actually testing the application software. And having done this for a service provider in the past, actually getting IPv6 end to end is a better state overall for many different reasons: transport reasons, overhead reasons, routing reasons. But if you never actually turn it on and plug it right into your application software, you don't actually know the challenges that you're going to face in service discovery and a whole bunch of other things. So I think getting that end-to-end connectivity matters. Because we come from the IPv4 world and we all grew up under it, we think we can bring all the old problems from IPv4 to v6 and say, well, let's just put NATs in front of everything, because that's what we did in v4. I think that's been one way to do this, but what we want to get out of is being hinged to NATting forever and keeping IPv4. Now, on the back end too, large clusters are becoming a thing. We can't go to a customer and say, give us a slash eight because you want to run 70,000 pods.

Why not? Why not?

You can. You can.
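As a sketch of the addressing headroom in question, this is roughly what dual-stack cluster CIDRs look like when configured through kubeadm; the IPv6 prefixes are illustrative ULA ranges, not recommendations:

```yaml
# kubeadm ClusterConfiguration (v1beta3): dual-stack pod and service CIDRs,
# each given as a comma-separated v4,v6 pair.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112  # v6 service range may be no larger than a /108
controllerManager:
  extraArgs:
    node-cidr-mask-size-ipv4: "24"  # a /24 of pod IPs per node on the v4 side
    node-cidr-mask-size-ipv6: "64"  # a whole /64 of pod IPs per node on the v6 side
```

On the v4 side, a /24 per node caps you at around 250 usable pod addresses; on the v6 side, a /64 per node is effectively inexhaustible, which is the accordion-style headroom being described.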
And we've done some fun things in Kubernetes, like non-contiguous blocks and a whole bunch of stuff, to stitch large clusters together and make them functional. I think, you know, with IPv6 we have a lot more headroom in addressing, and actually being able to utilize that addressing on the back end can help us meet scaling and cluster needs. So while there are the regulation and compliance pieces, there's also: do you really have enough address space to put 10,000 pods in a cluster, make sure they can go up and down like an accordion, and have enough headroom in your cluster to do that? So I also see dual stack serving that function, as well as getting us out of this addressing problem on the back end. You might actually put v4 on the front end and v6 on the back; you could do a four-to-six NAT so that you could have just a single-stack back end. Anyway, I'm getting into the weeds. I can talk about IPv6 all day.

I'm hopeful that dual stack is a transitory thing, right? And as people get more familiar with v6, and the v6 Internet becomes a complete thing, people will start to drain off their needs for v4, and we'll be able to see those things end-of-life over time.

I mean, that's been the goal for the last 20 years, but this time we're for real.

And having implemented v6 recently as well, and not just for clusters: v4 NAT versus v6 routable everywhere, v6 routable is absolutely a joy to use. When you get it set up and you understand it and you put a really simple addressing structure into your whole network, everything just falls into place.

Kubernetes was designed on an assumption that all pods were reachable and that they were on the larger network, right? And that was great at sort of small scale. It doesn't work everywhere, especially in big places where there isn't a slash 8 or a slash 16 to give you. And so we find a lot of customers and users doing what we call island mode (I've given some talks on this), where they put their clusters on an island, and then they only poke holes or build bridges off the island for specific services. And that works, but it's not what Kubernetes was really designed to do. And so we're seeing now, with dual stack, people who do island v4 and flat v6. And we think that's a really nice intermediate step.

So I don't want to forget the virtual audience. I'll come to you in a second, but here's a question. I don't want you guys to turn around, so I'll read it out: "My organization would like to use IPv6 ULA addresses in both on-prem and public cloud hosted Kubernetes, but this still seems to be on the roadmap, correct? What's the underlying issue for this coming later?"

I can take a shot. I'm not sure I know what the answer is here.

Yeah, so there are many different address types in IPv6, and ULA is one; there are just different levels of addressing. I think what we want to do, to throw the answer back, is we'd love to hear about your use case and actually understand how you want to do addressing, and which addressing: link-local addressing, ULAs. We want to understand how you want this to operate. I've heard this feedback. So if you're the one asking the question, please come talk to SIG Network and give us the feedback. Because if you want to do different addressing schemes: we basically came up with one that worked, to serve a function, to get the feature on the ground and start getting feedback. So I have heard this more than once, and I would love an issue in SIG Network so we can start to discuss.
But bring your use case. We really want to understand how you want to carve up the addresses.

And when you say an issue in SIG Network: you can put the issue in k/k, the Kubernetes repo, and label it. You can also come to the SIG Network meetings. We meet every two weeks on Thursdays, midday for Europe, and we're nice, so come to the meeting. Or you can send email to the SIG Network mailing list, which is linked from the Kubernetes community site. All of these will get to our attention and we'll be able to discuss. Yeah, we need a discussion about this. I can't answer it without knowing it all, but I think starting that discussion is critical. So if that's you asking that question, please come and ask it in SIG Network.

Gentleman over there. Yeah.

So I have a question around security, or default security. With NAT for v4, we had that default security. Are you currently working on a solution for v6 so that you can have a network policy as a default for the whole cluster, so that developers are not accidentally exposing services?

And I think there's a ramification in general: IPv6 and dual stack pretty much hit every component in Kubernetes, not just security. So maybe security is one manifestation of that, but maybe you can address it generally as well.

So we have some work going on that's not specific to IPv6 but happens to answer this question at the same time. There's a proposal for a new API called admin network policy, ANP. Its original name was cluster network policy, but the goal was to actually keep the same API for multi-cluster, and "multi-cluster cluster network policy" didn't roll off the tongue. So admin network policy is similar to the network policy API, but a little bit more sophisticated, aimed at cluster admins or fleet admins. And specifically, it's designed to impose guardrails above what network policy is allowed to express.

I can jump in there too, unless anybody else wants to jump in. So what we needed to do, because we kept... can you move that way, so I can at least address the person? Sorry. What we needed to do, because we kept getting stuck, was this: none of the upper-level abstractions could consume dual stack, or even IPv6, because nobody was using it. So we needed to create all the core APIs and componentry, so now all the network providers can have dual-stack networking if they choose to adopt it. And now I'm hopeful that we'll see all the CRDs and abstractions start to model that, because they were all blocked on not having the API in Kubernetes core. So if that's you, again, please come and talk to us, and talk to your cloud network provider, or whoever you're using to get your IP addresses. Because if it's you (sorry, I'm putting that back on you), we need to start showing examples and getting it throughout the documentation. But we couldn't put the cart before the horse, you know what I mean? We couldn't start writing about upper-level things that needed to come into play, like all the CNI providers and how that works. We needed to get the core APIs in, and that's what we've done. So now that people can play with this, we can actually rationalise: well, how do we model default network policies at an IPv6 level? Because everything was stuck on "well, I don't have it," and nobody was using IPv6 on its own, or very few people. So I'm hoping that this kind of gets the conversation going around how we model those things, in addition to what Tim was saying.

Thank you. Thank you. All right. I know there's a gentleman over there.
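One note on the default-guardrail question above: the existing NetworkPolicy API is already address-family agnostic, so (assuming your CNI plugin enforces NetworkPolicy) the same default-deny object guards the v4 and v6 paths alike. A minimal per-namespace sketch, with a placeholder namespace:

```yaml
# Deny all ingress and egress for every pod in this namespace by default;
# traffic must then be opened up with explicit allow policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app      # placeholder namespace
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

The admin network policy proposal mentioned above is about making this kind of guardrail cluster-wide and administrator-owned, rather than per-namespace.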
While he's walking down there, I'm going to add that SIG Cloud Provider also works pretty closely with SIG Network; there's a lot of overlap in the people who participate in both. So if you're thinking, "I have problems specific to my cloud provider," either talk to SIG Cloud Provider or SIG Network and we'll connect you up.

So, for public cloud providers, what are the top couple of things that we can do to move this effort forward?

I think making it very easy to get a cluster that's configured correctly out of the box. And I think we're seeing that. I'm seeing more and more of the tools; I think there was a talk from the folks that work on kOps about how they implemented IPv6. So: more tools and implementations, so people can see it. And then, for managed service providers, make it very easy to get a cluster that's either IPv6-only, which I'm seeing a lot of, or dual stack, out of the box. So making it really simple to deploy, so that people can start using it and giving us more feedback on these specific features, that would be my goal: making it really simple, and across all the tooling. I think the talk was on kOps; if you can do it in kOps, I think that's a great start. Just having a way to get a cluster without understanding all the different flags, and what order, and what addresses you need, and how to actually configure it all. That would be my suggestion.

If you're a provider, implement it, and do it this year. I'd like to see, by Detroit, all the major cloud providers have IPv6. We'll raise a toast to them. If you're a customer, ask your cloud provider. Demand it: when are you going to give me this? I need this.

Go ahead. Oh, I gave somebody a mic, right? Or did I? I lost my mic. Go ahead. Next question.

What do you see as the future for network components like load balancers and traffic management in an IPv6 or dual-stack world? I realise it's wider than Kubernetes.

I mean, they're still going to play a part, right? Because they're not doing just... do you mind repeating the question?

The question was about load balancers, whether they still play a part in either a v6 or a dual-stack world. And they're going to play a part, because they do more things than just the v6 or dual-stack piece, which is what traditionally we kind of used load balancers for. So load balancing, things like A/B workflows and deployments: that's still going to be a part of the world. It's almost built into how we run applications on Kubernetes.

Agree. I think there's one little nuance in there that's interesting: what we've actually done under the hood is allow pods to have more than one IP address. If you run virtual network functions on top, sometimes those services need more than one interface. So again, if that's you, we've created an implementation here that we can start to rationalise, and discuss whether pods can have more than one interface in the same address family. We've kind of limited that. Sorry to lead the witness here.

Lachy is saying the quiet part out loud. We have opened a door to a new adjacent set of possibilities which involve having multiple IP addresses. So that's the next can of worms. Yes.

So maybe hosting those applications, that network function virtualisation, may be an avenue that we can investigate through the dual-stack work.

There are no more virtual questions. I think we're kind of coming closer to the end. Maybe I'll take one more after this gentleman. One more here. We're going to be here forever.
Wait until they kick us out. So we're going to be here. Definitely encourage some participation later. But let's take these two questions, and then I'll give you guys some time to summarise, and then we can have the free-for-all for a while. All right. Go for it.

Now that IPsec is baked into the protocol itself in IPv6, do you see any possibility of having end-to-end encrypted communication between pods in Kubernetes by default?

The short answer is yes. The long answer is that the great thing is it can now be transparent to the actual user. The provider can actually do that handoff, and you as the user could have end-to-end encryption on your network without having any special software or third-party software required to do the negotiation of that IPsec tunnel. So I think it's a very interesting place, and that's one of the benefits of IPv6. And again, a lot of the community was stuck. Let's be clear: we took v6-only clusters and brought them along for the ride. They had been stuck in alpha, and when we picked up dual stack, we said we've got to take everything across the line here. Having all this core componentry in Kubernetes itself, I think now we can experiment with the bits and pieces, because the conversation before was, "well, Kubernetes doesn't support it, so therefore I can't do anything with it." And so, yes, you could bake that in. It should be transparent to the Kubernetes user, but we could get pretty crafty in IPv6 land, south of the CNI, where you set up point-to-point encryption out of the box and don't have to deal with that at the mTLS layer. That's just something you could do.

And when you say "you," are you talking about the cluster administrator, the managed service provider? Who do you see as the one who's going to implement this?

Yeah, so hopefully the managed service provider can implement all this on your behalf, but if you are running your own clusters on premises, then the cluster administrator is the "you."

This is a quick clarification on your quiet part said out loud, talking about multiple IP addresses per pod. Are you currently talking about limiting that to one per address family?

That's how it's done today. We've inserted this mostly artificial check that just says: if you have two, they must be one v4 and one v6, in no particular order (there's a status excerpt below showing what that looks like). But we made the API design on purpose, knowing that that rule will eventually probably be lifted.

Thank you.

Okay. All right. So I think we're nearing the end. Maybe I can take one more question if there's one. Any questions? No? All right. So let's go ahead and start from there. Do you want to summarize, maybe give a call to action? What do you think: if we have the same panel next year, what will we be talking about? IPv6 is done, right?

If we have an IPv6 panel next year, I'd really love it to be folks from the community who are implementing it, because we have one here, and I want to hear a little bit more about what you're implementing now. But keep in mind that these conversations are not a broadcast from us to you. These are conversations where you're telling your stories amongst each other over time, and we're building together what you want it to be. So this is definitely not "I receive the wisdom of IPv6; now I know how it is." This is something where, starting from these questions, you can construct the reality you want it to be.
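For reference on the one-per-family rule clarified earlier: a dual-stack pod carries at most two addresses in status.podIPs, one per family, with podIP mirroring the first entry. The addresses below are illustrative:

```yaml
# Excerpt of `kubectl get pod my-pod -o yaml` on a dual-stack cluster.
status:
  podIP: 10.244.1.7           # primary address; always equal to podIPs[0].ip
  podIPs:
    - ip: 10.244.1.7          # at most one IPv4 entry...
    - ip: fd00:10:244:1::7    # ...and at most one IPv6 entry, in either order
```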
Yeah, I think my goal here today was just to educate and get the word out that this exists, and I would really love to see end users up here talking about their experience. And SIG Network, we're listening as well. We want to hear whether this works and meets the needs of the people that are trying to utilize it. So I think, in six months, if it's across all providers and there's one click, "give me a Kubernetes cluster with dual stack or v6," I think that would be a great outcome.

I'd love to see more v6 clusters in the community, really, because there's a huge amount of the Internet, like GitHub, that you just don't know hasn't configured it. So getting more people using it, testing it, and feeding back to the entire Internet community about what works and doesn't work: keep pushing that v4-to-v6 transition down the line.

If we do another panel at KubeCon North America in Detroit, I want it to be people who are using dual stack and v6. Not the people who made it happen, but the people who are using it. And we can call it "Ten Things I Hate About Dual Stack in Kubernetes."

Yes. That would be easy to do. You just wrote the CFP.

Number six will surprise you.

So again, thank you all for coming. I want to take a selfie, and all of you are going to put up a six for IPv6. Do you mind, everybody? All right, you gentlemen want to come up here? Thank you very much for coming. And I really want to thank the panel.