Okay, I'll get started while Sushant joins. So today what we thought we would talk about is the work we are doing in Azure on support for container networking. Let me share my screen. Can everyone see my screen? Yeah, we can see it. Perfect. And that was Sushant, right? This is Anand, but I think Sushant is also trying to send some messages. Sushant, here you go. We can hear you, Sushant. Okay, Sushant, I think it's star six if you're having issues on the phone. Okay, let me start, and as Sushant joins, he can jump in. Sushant is a developer on container networking in my team in Azure networking, and he's put together these slides to update us on the work we are doing in container networking. Today we thought we would cover support for container network policies in Azure. As you may remember from some of the past discussions, Azure networking supports overlays, or what we call virtual networks, for containers. When you deploy Kubernetes containers in Azure, they can already be deployed into Azure virtual networks, get IP addresses in the private space, and talk among each other as well as to VMs and on-premises resources through the Azure Virtual Network stack that was previously available for VMs but not for containers. So what we are going to talk about today is how we are enhancing this virtual network for containers to support policies, and the component we are going to talk about is Azure Network Policy Manager. My slide is not moving forward. Okay. So the Policy Manager will support Kubernetes policies natively. Essentially, however you specify Kubernetes policies through YAML, the Policy Manager will plug seamlessly into Kubernetes, so we support the same policy specification language that Kubernetes defines. This will be open source. It will be stateless; it does not require any store or state replication as part of Kubernetes. And we support both Linux and Windows.
If you're able to join, feel free to jump in, or let me know and I'll hand off to you. Can you guys hear me? Yes, I can hear you now. I don't know what's wrong with my volume, my mic probably. Yeah, so you can continue from here. I've forwarded to the overview slide; why don't you continue from there? Do you want me to present my screen? That's okay, I can stay in sync with you as you speak. So next is the architecture slide, where we show that we handle all three types of events that Kubernetes sends to the Policy Manager. Something is wrong with PowerPoint, just one second. I'll just stay on this. Yeah, continue. So we have event handlers for all three types of events Kubernetes sends us: whenever a pod is created, updated, or deleted; whenever a namespace is created; and whenever a customer updates a policy YAML file and adds new policies there. We have event handlers for all of these events, and this Policy Manager is implemented as a DaemonSet, following the Kubernetes guidelines on how to write a policy manager. What we use is a combination of iptables and ipset to implement any kind of policy specified in the YAML on Linux, and on Windows we use something called the Virtual Filtering Platform; you can think of it as analogous to iptables on Linux. So that's the overview of what we do. Now let's look at the details on the next slide. It basically restates what I just said: client-go is what we use to register for the callbacks from Kubernetes. If you want to look into more detail, the next slide covers how it is implemented. It is completely stateless, so we don't save any state; we rely on the Kubernetes callbacks.
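The ipset-plus-iptables combination described here can be sketched roughly as follows. This is an illustrative sketch, not the actual open-source NPM implementation; the set and chain names (such as "azure-npm-app-frontend" and "AZURE-NPM-INGRESS") are assumptions for the example:

```shell
# One ipset per pod label; adding a pod under a label only touches
# the set, never the iptables rules.
ipset create azure-npm-app-frontend hash:ip
ipset add azure-npm-app-frontend 10.240.0.12

# A single iptables rule can then match every pod carrying that label.
iptables -A AZURE-NPM-INGRESS \
  -m set --match-set azure-npm-app-frontend dst \
  -p tcp --dport 80 -j ACCEPT

# When a new pod with the same label is scheduled, only the set changes:
ipset add azure-npm-app-frontend 10.240.0.27
```

This is the design choice that keeps rule churn low: pod creation and deletion become O(1) ipset updates rather than rewrites of the iptables rule chain.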
So let's say the Policy Manager crashes or restarts. When we re-register for the events, Kubernetes sends us whatever the current state regarding the policies on the nodes should be, and we work off of that. How it is implemented: in the FORWARD chain of the iptables filter table, we process every packet that comes in first through the AZURE-NPM chain, which is just a chain that we create so that everything we do is constrained within one chain. And within that chain we have separate chains again for ingress and egress policies. I don't know how much detail I should go into, but let me go a little further. So in the policy YAML you can specify policies based on pod labels or namespace labels and namespace names. I think it's my speaker. So what we do is create an ipset for every pod label, which helps us reduce the number of iptables rules we need to add. Whenever a new pod gets added under a certain label, we just update the ipset, and none of the existing rules need to change. So that's one thing we use. Also, in the policy YAML, policies are based on ports, like HTTP, and protocols, like TCP; for each port and protocol combination, along with each ipset, we have one rule. The same applies for ingress and egress: in both chains we forward packets, depending on which port, protocol, and ipset they match, to another chain. For ingress we create a new chain called ingress-from, and for egress similarly we have another chain for egress. Then we start by adding a reject rule. The first rules after that are for the ipBlocks. The way ipBlocks work, you can specify which IP blocks to accept packets from, and you can specify some exceptions. So we start by specifying the exceptions, and the rest we accept.
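The chain layout just described might look roughly like this. Again a sketch under stated assumptions: the chain names and CIDRs are illustrative, and the real NPM chain hierarchy may differ in naming and ordering:

```shell
# Hook a dedicated NPM chain into the filter table's FORWARD chain so
# all pod traffic is evaluated by NPM rules first, and everything NPM
# does stays constrained to its own chains.
iptables -t filter -N AZURE-NPM
iptables -t filter -I FORWARD 1 -j AZURE-NPM

# Separate sub-chains for ingress and egress policy evaluation.
iptables -N AZURE-NPM-INGRESS
iptables -N AZURE-NPM-EGRESS
iptables -A AZURE-NPM -j AZURE-NPM-INGRESS
iptables -A AZURE-NPM -j AZURE-NPM-EGRESS

# A per-policy "ingress-from" chain: ipBlock exceptions are listed
# first, then the allowed CIDR, with a default reject at the end.
iptables -N AZURE-NPM-INGRESS-FROM
iptables -A AZURE-NPM-INGRESS-FROM -s 10.0.1.0/24 -j DROP    # ipBlock "except"
iptables -A AZURE-NPM-INGRESS-FROM -s 10.0.0.0/16 -j ACCEPT  # ipBlock "cidr"
iptables -A AZURE-NPM-INGRESS-FROM -j REJECT                 # default deny
```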
And then similarly it goes through the namespace: the namespace selector label is also backed by an ipset, and if the IP address falls within the ipset for the namespace, we accept it, and similarly for the pod selector. What happens is, whenever we get a callback, like a new pod getting created in a certain namespace, we keep updating the ipsets. These rules we add once: whenever a new namespace or a new pod label appears, we update the rules once, and we never touch them again. Sorry, did anyone have a question? So these rules we add once, and whenever a new pod gets created we do not change the rules, we just update the ipset. And it's pretty performant, because we don't touch iptables again and again for the creation of new pods. I can also give a quick demo that we have prepared. So I'll stop sharing, and I'll share my screen. I'm connected to the master of one of our Kubernetes clusters. I'm going to show you all the pods that are in the cluster right now, and the namespaces we have. Okay, so the cleanup is done, and we have only the kube-system pods running. Let me deploy three namespaces. What I'm going to do in the demo is create three different namespaces and pods in those three namespaces, and first I'm going to show that these namespaces can reach each other. So right now, for example, we have three namespaces, NS1, NS2, and NS3, and there are some pods there. They're still getting created; once they are created, we'll get the IP addresses for them. And Sushant, these are from the Azure VNet? That's correct. These pods in these three namespaces are directly connected to the Azure VNet. They can reach on-premises over ExpressRoute, and all service endpoints work: if you have, say, a Storage or SQL service endpoint running somewhere in Azure, you can reach it from these pods. So these are NS1 and NS2.
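The demo setup described above amounts to something like the following. The namespace and pod names are assumptions matching the narration, not the exact commands run in the demo:

```shell
# Create the three demo namespaces.
kubectl create namespace ns1
kubectl create namespace ns2
kubectl create namespace ns3

# Run an nginx pod in each namespace.
for ns in ns1 ns2 ns3; do
  kubectl run nginx --image=nginx --namespace="$ns"
done

# List the pods with their VNet-assigned IP addresses once they are up.
kubectl get pods --all-namespaces -o wide
```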
So let me connect to a pod in one of the namespaces, let's pick NS3. I have a YAML that allows traffic from NS1 to NS3. So let me connect to one of the pods in namespace two. I'll show that from NS3 I can reach NS2, and from NS2 I can reach NS3; but when I apply the policy YAML that only allows NS1 to NS3, that communication will break. So let me connect through. I'm in the pod now; all these pods are running nginx. I'm in one of the pods in namespace two, and let's pick one of the IP addresses from namespace three. We should be able to reach it now, because right now I have not applied any policies, so we can get something from the nginx that's running. Now let me apply the policy. Can you show the policy, Sushant? It's a very simple policy that says: apply it on namespace three. And by default everything is blocked unless you specify something in the policy YAML to be allowed. If you look at the ingress part of the policy, we say any namespace that matches the labels of namespace NS1, only that will be allowed. We support everything that Kubernetes supports, but for demo purposes we have a simple one. So I'm going to apply this policy. The policy is applied, and now, because it only allows traffic from namespace one to three, this communication from namespace two to three is broken. I can remove the policy and it will start working again; I can apply it again and it will stop working, and remove it again and it will start working again. Like I said, it's pretty performant: there is almost no lag between when we apply the policies and when the data path starts enforcing them. And we support it in the cloud; I think we are the first public cloud to actually support Kubernetes policies natively. So that was the demo.
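A policy like the one shown in the demo, applied to namespace three and allowing ingress only from namespace one, might look roughly like this. The label key `name: ns1` and the policy name are assumptions; the demo's exact YAML was not shown in full:

```shell
# Apply a NetworkPolicy that selects every pod in ns3 and allows
# ingress only from namespaces labeled name=ns1. Once a policy selects
# a pod, all other ingress to that pod is denied by default.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ns1-to-ns3
  namespace: ns3
spec:
  podSelector: {}          # select all pods in ns3
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ns1
EOF

# Removing the policy restores open communication, as in the demo.
kubectl delete networkpolicy allow-ns1-to-ns3 -n ns3
```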
We were also hoping to discuss in this forum whether we can go further, because right now the policies can only express ACLs, port- and IP-address-based rules for what to block and what to allow. We were hoping to discuss whether there is enough interest in this forum to further enhance this, to maybe include routes, because those are also policies that customers generally want in a public cloud: routing their packets via force tunneling, or some other kind of richness in the policy. So I'll let Deepak take over; maybe he has something to add. Like Sushant was saying, in Azure VNet today for VMs we support a wide array of policies. Security groups are certainly one of them, as you just saw. But in addition we also support capabilities for service chaining, and something we call routes, which enables customers to specify a policy to forward traffic from one pod to another pod through an appliance in between. So routes is one such policy that we are considering enhancing the Kubernetes policy specification with; another is around load balancing and DNS. We support rich load balancing policies, and similarly rich capabilities around DNS, and then routing to on-premises. So right now, while security groups are possible, we would like to extend the YAML specification to include policy specifications for these other scenarios that are possible with VMs but not possible with containers today. The other thing I'd like to add to what Deepak said, and I don't know how much of it is policy definition: there is a lot of value in providing an integrated experience where the Kubernetes policies, or any policy, coexist with the Azure VNet policies we have. If we can use labels and tags back and forth between the two environments, that is very useful in hybrid scenarios.
Say you want to go from a VM to a cluster, or from a cluster to a VM. Yes, definitely; I think our group would definitely want to continue talking about the service chaining, load balancing, and DNS extension pieces for sure. We have those three areas, along with IPv6 as a fourth area, that we want to take forward into the working group to try to define some extensions we want to suggest. Yeah, that would be great; we would love to work with you to define those. The last thing that was just mentioned, the labels: is this going to be like a custom-defined label that can be mapped, or are you suggesting something different? Yes, so we support two kinds of labels. One is what we call system labels, which identify Azure services. The other is custom-defined labels, which customers can put on any of their containers or workloads. Both of those we would like to make available to customers with containers. I think those topics would definitely be something we should take up if you have something you could present. You can get together with me and come up with some type of definition we want to present to the working group; if you have something you want to present, we should definitely entertain that at one of the next meetings coming up. Sure, sure. We would love to; we probably won't be ready for the next meeting, but the meeting after that we should be ready. So how about we tentatively put ourselves down four weeks out, so next month, to present at least an initial proposal on routes and maybe load balancing. And I'll set up some time with you in the next couple of weeks to have some discussions, just between a smaller group of us. Yeah, that would be great. I'll work with you on that, Deepak. Yes, that would be great.
One question I had was: as a community, how do we want to approach this with respect to the CNCF, this working group, and the Network SIG in Kubernetes? Do we take the work jointly to the Kubernetes Network SIG, or do we expect the two communities to be one and the same? How do we see that happening? Yeah, for the most part we have, and I think this week Brian is on vacation, but we usually have Brian join us from the CNI contributors group. I don't know if we have anyone on here from the Network SIG, but we usually have one or two people from Google join representing the Network SIG; I don't know if they were able to make it today. We sort of collaborate together, and whatever we bring forward as a proposal from here we would probably present to the TOC in the CNCF, and from there, whether it's a CNI request or a Kubernetes request, we would work with the appropriate groups collaboratively. Sounds great. The last question I had for the Azure team: you said your Policy Manager is open source; is there any interest in presenting all or any part of it to the CNCF? We would love to. What would you like to see covered in that presentation: detailed architecture, or more code-level structure? We would love your guidance on what you would like to see presented. Yeah, I can definitely provide that; I think it's probably more along the lines of specifically the policy pieces and the extensions, and having an implementation to show how that works, kind of showing the running code and an example of how the policy definition works. We'll talk more about it offline, but that's along the lines of what I was thinking.
Sounds great, sounds good. The main reason for us to open source it is to have the community participate and contribute, so we would certainly love to present and get the community's feedback and participation. Definitely. Awesome. I'll talk with Chris to get you scheduled, probably a month or so out given the timing with the TOC, but I'll definitely work with you on when you can present and when they're able to fit you in. Okay, thanks. Cool. I don't see that Alexis Richardson from Weaveworks has joined. Before I jump to the next item, do you guys have any questions for the Microsoft team? Thanks, Deepak and Sushant, for your presentation and the overview of the work you've done on the Policy Manager and the different policies you've been working on. Any questions from anyone on the call? Any comments? Quiet today. Okay, no problem. So the next agenda item was around part of the work of this group. Looking at the networking space and the landscape that the CNCF has put together, there are quite a few companies and projects they've put into the network space. One of those projects, I know, Weaveworks has presented to the TOC, and they are interested in becoming an official CNCF network project, so one of the things I've asked them to do is come and present to this working group; I'll try to get that scheduled, maybe for the next meeting. But part of what we want to try to do, I think, is start looking at some of the network projects out there, especially in the areas we discussed previously, like the load balancing piece.
And as we think about some of the services that are needed in networking, IPv6-type services, we should look at projects that are addressing those areas and reach out to them to talk about what the cloud native aspects would look like in those projects. I'm definitely open to anyone on this call looking at those companies and suggesting ones that might be of interest, that are filling a gap in the cloud native ecosystem today, that we should talk to. Same thing with monitoring tools; there are other areas that are complementary to networking but not strictly in the network space that we should probably broaden the scope of the discussion to as well, areas complementary to what networking provides to cloud native infrastructure. Yeah, I agree. This is Lee. A couple of thoughts. One is, yes, Weaveworks is intending to propose Weave Net for adoption; I spoke with Alexis, who reached out a couple of times asking for project management help there. I just did not have the bandwidth to engage, but we've had them, or that project, present to this working group at least once in the past, right? Yeah, we did. Not that it doesn't make sense to do it again; I just recalled it. Speaking of monitoring-related tools and opening up the scope: Weave Scope is certainly very microservices, visual-topology oriented. Definitely. I agree; I brought that up with Alexis in the past as well. I think there are a couple of ways this particular working group could go, one being somewhat service-provider oriented in nature, you know, FD.io and that flavor of projects, whereas much of what's in the CNCF and the general focus is much more inside the data center, enterprise and end-user oriented.
So I'm happy to suggest things more toward that. Some of those things: service mesh is a decent topic; there's probably lots of education to happen in that area, and lots of interesting projects there. Very interesting. Other things, maybe, around... And I actually just missed Deepak's presentation, so I don't know if that was covered; I think Deepak presented before as well, if I recollect, around Microsoft's perspective on enhanced network policies, really in the context of Kubernetes policies. But there are certainly lots of QoS and other higher-level network services that probably need to be addressed. I don't know if any of that helps; I'm sort of riffing here on some things. I know, for my part, I've had a heavy focus on service mesh over the last six months or so. Right, I think that's definitely a good topic, so go ahead, Deepak. Yeah, I was saying, Lee, what we presented today was more around security groups and isolation. And before you joined, Ken and we talked about DNS, load balancing, and IPv6 as some of the topics; I think QoS is a good one that we should add to the list as well. Got it. Do you guys know the state of IPv6, which more or less is being stewarded within the Kubernetes Network SIG, is that what you said? So what was your question, kind of the state of IPv6 support in Kubernetes? No, we haven't looked at that, but that's certainly one area we want to look at next.
There's a big disconnect between the efforts going on with IPv6 and the vendors' ability to support it right now, as you probably know. It's something I'm very interested in helping with, especially from the end-user standpoint: if we can have a strong user voice back to the community space, I think it would be very beneficial. I know I'm being a little bit selfish: like you, I've spent a lot of time in this space, and Mastercard, being a transaction network, has no IPv4 at all. We rely on IPv6; we are out of IPv4 space completely, and every transaction in every place in the world has an IP address associated with it. So we're trying to really get our vendors to understand that IPv6 isn't optional for us. Getting it supported is on the roadmap for some, but even working with some of the cloud solutions out there today, they're all IPv4, and IPv6 isn't part of the capabilities yet. So I think it's a big area we can help drive. If you want to give an update to the working group on what Cisco is doing with Kubernetes there, that would be really good, I think. Yeah, I'm ignorant there; I just recognize that some of our former cohorts are being active, but I haven't kept pace. You know, that's interesting: so you're having to traverse the IPv6 and IPv4 stacks as you communicate outside your networks, and then if you go to try to use really any of the public cloud services, you've got that translation challenge.
I think we've shown, and Ken, you steward the serverless working group as well, so we've shown, and I don't know that this is going to happen all that often, but at least with CloudEvents there was a lot of hesitancy and reservation from the public clouds behind AWS to really partake and adopt, but almost, the peer pressure at this point might pay off. To the extent that we gain enough mass around IPv6, that might too. Yeah, definitely, we want to push for that. Certainly, Ken, I think your comments on the IPv6 requirement are very useful even for us in Azure, because, like you said, customers need to ask for it, and I don't think there is broad awareness. We have run out of IP addresses ourselves, just like you said, and so at an infrastructure level we've been moving toward IPv6, but exposing it to customers is something we have been treading toward quite slowly, partially because customers haven't been coming to us and saying they must have IPv6. It would be good to get that input from you over to Azure as well. Definitely. So, service mesh is one of the things I've been wanting to have a discussion on, so I might see if I can line something up for two weeks from today for a service mesh discussion; I think it's very interesting. I'll also start working on an IPv6 one with my friends at Cisco, and see if I can get somebody who's driving that need to join us and give us a briefing on what's going on there.
Yeah, you know, we never really came back to that. I don't know what your appetite is in this working group to potentially take on a bit of a white paper; maybe this would be more evident after we have a service mesh discussion. Particularly with respect to API gateways, there's lots of overlap: on paper, when you read down the feature lists, these two things do the same thing, so what's the difference, and why? You know: I've got a container orchestrator, it does health checks and load balancing, so why do I need separate firewalling? Anyway. I think that's a very important point, and within the serverless working group we kind of identified a white paper topic and went after it. In our working group I'd like to do the same, and it doesn't have to be just one. With serverless it was really more about how you position serverless and function-as-a-service, and what it means relative to cloud native and platform-as-a-service and that kind of thing. In the networking work we probably want a couple of different white papers, maybe more along the lines of the different services that we are trying to highlight from the end-user community: important services that have gaps today in their delivery and execution in a cloud native way. I don't have an idea of what that proposal is yet, but once we talk about some of the different services we want to look at adding extensions to for cloud native, and we look at service mesh, IPv6, QoS, and maybe a few other topics that come out of those discussions, I think, to your point...
We will have a much better view of whether we'd do a white paper, what we would do it on, and whether we would do one or two different white papers. We can take that up later. Yeah, it might even be a good litmus test, just as a group of insiders, as we go to grok the right answers to some of those questions. Yeah, at one point the CNCF had, I don't have the right word, I'll call it a test bed: Supernap had provided some servers, and resources were put toward building out an environment where we could actually host and test out projects and ideas and have them interoperate together. That all died for different reasons, but I still think there's enough interest here, especially if this working group has enough interest in doing something there. I know from talking with Dan at the CNCF that they'll definitely provide resources to work with us, whether it's doing a white paper or getting access to some kind of environment where we can test out ideas. For that matter, a lot of the things you're talking about here I have up and running in different environments today; I couldn't give access to them, but I could at least provide results, take input on what things you want to see tested, maybe get those things tested quickly, and provide the results of the tests. You know, actually, it's funny you mention it, because I put in a proposal for some time on the cluster, the CNCF cluster, I think that's what we were calling it, and the proposal was around performance testing of various container network drivers. There's a natural expectation that overlays bear some overhead,
but even beyond that, various host-level network drivers of the same genre still had different performance implications. As a matter of fact, this might be something good for us to take a look at on this call; I'll put the link in the chat. It's a free tool we created at SolarWinds at about that time, really created with that use case in mind: it facilitates throughput tests and performance tests across different networks, kind of like iperf but, you know, prettier, and it helps you compare. As a matter of fact, there's a little bit of Weave Scope-type visualization in here. It's been a while since we've updated it, but I don't know if folks want to take a look at the screenshot on the page and get a sense of whether a demo would be worthwhile, or if anybody wants to talk about this; I definitely think it'd be worth talking about, for sure. Weave Scope has gotten much more time put into it than this has; this is kind of a diluted version of that. But I bring this up, Ken, mostly to reinforce the point you were making about at-scale reports or tests. I know that at SolarWinds we were focused really heavily on CNI and the various network drivers there, which ones to use and why, and weighing them against each other was kind of why this project was created. Right. I like it.
We'll definitely get something scheduled to talk through this. I do think working groups within the CNCF still have some flexibility to define what outcomes we want, and then what sort of requests we want to take back to the CNCF in terms of having an environment for doing things, or having a tech writer work with us to help document a white paper, for instance. I'm not completely clear on the whole CNI integration or engagement piece yet; that's something I still need to work out with the TOC, but I think we have the ability to request updates. And I would hear some things that we think are gaps in the specification; they may not accept them, but I think it's still in our scope to define things we see as missing or needed. Yeah, I'd totally like to receive an update every so often and then allow space for suggestions. Yeah, I'm getting ahead of myself. Well, everyone, thank you for joining today. I was hoping to get a nice list of topics for the next couple of months, and you guys have helped me successfully fill out more than enough things for us to talk about. So I'll update the TOC page, I haven't updated that in a while, get a schedule together, and get speakers lined up to come and speak with us. Each meeting we'll try to have a presentation and also knock off some of the tactical discussions we want to have: maybe 20 minutes of presentation, 20 minutes of tactical next steps and discussions like we had today, and then leave 20 minutes for new topics or open items we want to discuss. Thanks. Thanks, everyone; have a great rest of your day and rest of your week. Alright. Cheers.