Okay, it's five after, so I think we can get started. Welcome everybody, this is the Telecom User Group meeting. We meet on the first Monday of every month at 1500 UTC. That was decided after a recent poll; we used to flip back and forth, but now we're just meeting at 1500 UTC. If you're joining today and you're new, or actually everyone, please feel free to add your name to the meeting notes so we can keep a record of who's been here. We don't have a ton on the agenda today. We're going to have a presentation from the great folks behind ENO, and I'll talk a little bit about what's going on in this working group to keep everyone up to date. But before we jump into that, is there anything that anyone else would like to add to the agenda?

It's more of a question than a request for an addition. Don't you want to discuss a bit how to continue with the network orchestration group, what was started or discussed in the last meeting? We had some discussions left over, but I'm not really sure how to continue with that.

Yeah, we could add that to the agenda, maybe give a five-minute update. Not too much has been done, but we can add that to the agenda.

Yeah, I'm more interested in how to continue.

Yeah, that's a good question. It's a little bit on me right now; there's one task I need to do. But anyway, let's add that to the agenda and discuss. Good idea. Thank you. Okay, great. Is there anything anyone else would like to add?

Okay, it seems I forgot to copy over the events, so just as an FYI for people that are interested: the CNF working group meets weekly on Mondays. The ETSI plugtest is now done. The Kubernetes on Edge event, if you're interested in learning more about running Kubernetes at the edge, is going to be Monday, May 3rd. The CFP is now closed, and I think there are a lot of really exciting talks, so register for that now and find out more. That's going to lead into KubeCon + CloudNativeCon Europe 2021 Virtual, which is the first week of May. There are links for all of that in the meeting notes. With that, I'm going to hand it over to the folks to give a little intro to their project. I'm going to stop sharing. And I saw Alok on the call earlier.

Hello, am I audible?

Yep.

Okay, yeah. Thanks, Bill. So let me share my screen.

Yeah, we can see your screen.

Okay, thanks. So hello, everyone. Myself, Alok. I've been working with Ericsson, and this external network orchestration work started as a study within our team on Kubernetes network orchestration: how we can fill the gaps we currently have in the Kubernetes networking model, how we can make it more dynamic so that networks can be orchestrated on demand. It started as a study; now we are at a point where we've been running a proof of concept, and we have the intention, or the ambition, to make it open source, so that the whole community can participate, we can get quick feedback, and we can expand it as required. So this is a quick walkthrough of the proposal, which I shared with Bill a couple of weeks ago, and we decided to bring it up here in this forum to collect feedback from the group: are we targeting the right thing, or is it something that doesn't serve a purpose or won't add value to the community? So, a bit of background to begin with. As we all know, the standard Kubernetes networking model relies on a single network interface per pod, which is also what Tal mentioned in one of his slide decks.
So, with that single network interface, when interworking with external networks, it doesn't really, or doesn't always, provide proper network separation, and above all, its implementation through the Linux kernel IP stack doesn't fulfill the performance requirements that many telco network functions are looking for. So as a workaround, or basically as a solution, special network attachments, what we call secondary network attachments, were introduced through Multus, and there are other solutions like NSM, to overcome these telco-specific limitations. So basically, the secondary network attachment was introduced in Kubernetes, through, let's say, the so-far-evaluated Multus, to overcome these limitations. It allows direct L2 attachments to provide multiple external networks, giving network separation, supporting those telco-specific protocols, and providing a choice of different network attachment types, like kernel interfaces, virtio devices, or SR-IOV virtual function devices. So it's a wide spectrum, to facilitate the performance requirements, including high-throughput user-plane network functions. But so far, this network isolation has been achieved through preconfigured, static fabric orchestration; it's not in general very dynamic, so we cannot automate it along with the lifecycle of the network function. When we deploy a network service or a network function, we rely on those preconfigured static tenant networks which were set up at cluster installation time, let's say. That is what we identified as a gap: how can we automate that, so it becomes more dynamic and networks can be created on demand when we deploy a network function? So, Kubernetes external network orchestration as we do it today, in both NFV-infrastructure-based deployments and bare metal CaaS-based deployments. There are a couple of steps in the NFVI-based deployments. It's a provider network that we create, or that we orchestrate, in NFVI-based deployments. It's a multi-segment network, right? It consists of VLAN and VXLAN segments, and the L2 gateway connections are created using the OpenStack API, right, to bridge the VLAN and the VXLAN on the hardware switch. Then we spin up the worker VMs with a trunk vNIC, associated with the trunk ports during VM boot-up. And then these external network connections are orchestrated as subports on those trunk ports in the worker VM, by creating the subports on the provider network, in OpenStack terms. In the case of bare metal, it's fairly straightforward. We first provision the fabric through the fabric-specific API as an L2 network. Then the VLAN IDs are configured on the fabric MLAG interfaces facing the compute hosts and on the MLAGs connected to the DC gateway. And just to note, in both cases the network attachment, or the association on the gateway, where the created VLANs are associated with the VRFs, can either be orchestrated through the NFVO, in terms of the MANO architecture, or it can be done through some scripts. And like I said, today it is all being done manually.
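To make the secondary attachment mechanism concrete, here is a minimal, generic Multus example (a sketch, not taken from the talk): a NetworkAttachmentDefinition wrapping a CNI config, and a pod requesting it through the standard Multus annotation. The macvlan plugin, the master interface, and the names are illustrative assumptions; an L2-only attachment like this omits IPAM.

```yaml
# A secondary network declared as a Multus NetworkAttachmentDefinition.
# The macvlan config on eth1 is an illustrative assumption.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: telco-secondary
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge"
  }'
---
# A pod consuming the secondary attachment in addition to its default eth0.
apiVersion: v1
kind: Pod
metadata:
  name: cnf-example
  annotations:
    k8s.v1.cni.cncf.io/networks: telco-secondary
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
```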
So we can ask ourselves: if we have to configure such networks thousands of times, is it an ideal solution to follow those steps one by one for that many network configurations? I think it might not be. It's error-prone, and it's a tedious task, even if it's done through some bash scripts or other scripts. So to solve it, to automate it, we introduce the concept of an operator for doing this. The operator concept is well known in the Kubernetes ecosystem, and there are lots and lots of efforts by other projects using it, for example to handle stateful applications. So what we are introducing here is the external network operator, to automate that network orchestration: the creation, and basically the whole lifecycle, of those external networks. So let's go through what exactly it is. As I said, it's a Kubernetes operator which sits inside the Kubernetes cluster to automate external network creation and lifecycle management. It's a controller component deployed as part of the Kubernetes cluster, and it is bound to the lifecycle of the cluster. It follows a fabric-agnostic architecture, which allows adaptation to multivendor fabrics, and it follows a two-facade architecture: one facade for the internal side, managing the custom resources, and one for managing the underlying fabric, doing the fabric orchestration and the external networks. So this is the overall architecture of ENO and the plugin API. Going from top to bottom: we have this northbound API, through which we feed in the CRDs, the custom resource definitions, into the fabric-agnostic operator. Then the southbound of ENO has pluggable fabric plugin support covering various fabrics: call them fabric A, fabric B, or Neutron in the case of OpenStack-based deployments, or an OVS bridge, which is a dummy fabric we are implementing for our PoC. So we have various fabrics, and for each fabric we have a corresponding fabric plugin to orchestrate it. The dotted line splits ENO into the two facades: the upper one handles, or manages, the custom resources, which we will discuss in detail in the coming slides, and the lower one is for the external fabric orchestration, the creation and management of those external networks. And as I said, we are working on a proof of concept, so we are using this OVS bridge as a dummy fabric, our stand-in realization of a fabric, with an OVS plugin to orchestrate it. So that's the overall architecture we have for ENO. Any questions so far? Maybe one, okay.

Yes, maybe one quick question. So, the dotted line here that separates the internal versus the external: the idea, as I understand it, is that these plugins are not just for orchestrating external networks but really for connecting them to containers running in Kubernetes, right? So even though it's not CNI, for example with OVS, the idea is that you will get this L2 network to your pods. Am I correct?

Right, exactly. The L2 connections go directly to your pods so that they can be used. Yes, Dimitris.
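As an illustration of what feeding custom resources through the northbound API could look like, here is a hypothetical CustomResourceDefinition skeleton for an L2Service type. The group, version, and schema are assumptions made up for this sketch, not ENO's actual definitions.

```yaml
# Hypothetical CRD registering an L2Service type for the northbound API.
# Group, version, and schema are illustrative assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: l2services.eno.example.org
spec:
  group: eno.example.org
  scope: Namespaced
  names:
    kind: L2Service
    plural: l2services
    singular: l2service
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                # one L2 service corresponds to one VLAN segment
                segmentationID:
                  type: integer
```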
So, the question is regarding the OVS plugin, your plugin, and all those colorful boxes, right? So actually it consists of two parts. The first part is the ENO controller, which handles all the internal cluster operations, and the second part is the fabric plugin. The fabric plugin also runs as a pod, or as a deployment, inside the Kubernetes cluster, and it has a northbound API and a southbound API. The northbound API takes instructions from the ENO controller, to create the attachments and so on, and the southbound API directly configures the fabric, which is located outside of the Kubernetes cluster. Is this clear?

Yes. Yeah, I think it was clear, but you have a lot of echo in your sound. It was kind of hard to understand, but I think I got my answer.

Okay, we'll try to fix that, because I have problems with my microphone. Thanks.

Okay, thanks, Dimitris. So, moving forward: we have actually modeled the northbound API as a data model. This is like a meta slide; we kept it to visualize how it looks, but our intention is to keep this as a general introduction session for you, so let's park the detailed data model discussion for further sessions, in the interest of time, and we can come back if there are discussions needed around this topic. Okay, so this is the example workflow which we kept to visualize the end-to-end flow and what the orchestration looks like. If I start from the MANO layer, the orchestration layer: we onboard the CSAR packages for each network function using the NFVO. The NSD, the network service descriptor, gets parsed, and that generates the input to ENO, which is nothing but your custom resources, fed into ENO through the northbound API.

Alok, I think your sharing stopped.

Yeah, it stopped for me too. Can you please try to reshare? Maybe there was some hiccup in the show.

How about now?

Now I can see it. Okay. I don't know where it actually stopped, but I think I was only on this slide. So, okay, let me start again. I was discussing the end-to-end workflow, to visualize the orchestration and all the other supporting components that complete the picture of this end-to-end orchestration. From top to bottom, let's start from the orchestration layer, the MANO, where we onboard the CSAR packages using the NFVO. The NSD, the network service descriptor, then gets parsed and generates the ENO external resource definitions, which are nothing but your custom resources, and those are then fed into the external network operator, ENO, through the northbound API. Then, in step three, ENO triggers the southbound API to configure the VLANs on the data fabric through the fabric orchestration: it creates or configures the VLANs based on the external resource definitions and assigns them to the trunk ports. Once the fabric orchestration is done, that is, the VLANs have been created inside the data fabric, ENO then creates the network attachment definitions, the NADs in Multus terms, using the OVS bridge; this is the example with OVS. So we create the network attachment definitions, and once that has been done, the gateway orchestration, which, like I said in the beginning, is associating the created VLANs with the VRFs, can be done either through the NFVO or through some scripts; that functionality is not in the solution yet.
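As a sketch of what the NADs created in step four might look like with the OVS CNI plugin: ovs-cni does support a single `vlan` access tag and a `trunk` range list, but the bridge names, VLAN values, and the exact objects ENO generates here are assumptions.

```yaml
# Hypothetical NAD for an access attachment: one VLAN tag on the pod interface.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-access-vlan13
spec:
  config: '{"cniVersion": "0.3.1", "type": "ovs", "bridge": "br-data", "vlan": 13}'
---
# Hypothetical NAD for a selective trunk: a bounded VLAN range on the interface.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-trunk-10-14
spec:
  config: '{"cniVersion": "0.3.1", "type": "ovs", "bridge": "br-data", "trunk": [{"minID": 10, "maxID": 14}]}'
```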
So once all of this has been done, from step one to step five, we have our tenant network in place, and the operator can then delegate to the VNFM to deploy the network functions: the VNFM deploys the CNFs, which will use the tenant network configured in steps one to five. So this is the overall flow and the end-to-end orchestration. I would like to highlight here that it's not just about ENO; there are the supporting components, like the NFVO in the orchestration layer and the gateway orchestration in the framework. And like I was mentioning about the proof of concept, we identified a couple of use cases, for which I would like to hand over to Dimitris. Are you still there, Dimitris?

Yeah, can you hear me?

Yeah, but we can still hear the echo.

Sorry, I don't know why. But if you can hear me better than before, say so, if that helps.

While you fix it, can I ask a question about the previous slide?

Yeah, sure.

So, is this example workflow actually something you built? Is there a POC of CSAR packages working with a specific NFVO?

So yeah, we have MANO architects and an internal orchestration team; we did some technical feasibility work with them, and they kind of agreed that it is possible. For the POC we haven't built the end-to-end part, but that's one of the requirements we have raised for our NFVO team, and during productization we have that requirement, which will provide this feature and make it end-to-end orchestration. So on technical grounds it is possible, but we haven't tested it so far.

Thank you.

Okay, so yeah. Daya, would you like to add something?

Yeah, I just wanted to clarify: from a POC perspective we would focus on everything inside the Kubernetes ecosystem, which would effectively be steps two, three, and four in this picture.

Right.

So I'll just self-promote here and say that if you're looking for an operator that can quickly get CSAR packages into Kubernetes, check out my Turandot orchestrator, and I can help you with that if you like; anyway, we'll continue to talk after the presentation.

Yeah, that would be really great, and it would be helpful to get some inputs there. So, yeah. Dimitris, over to you for the use cases.

Yeah, so thank you, Alok. So, to put everything to the test, we decided to create an ENO POC to implement everything that Alok has just shown, and for this ENO POC we have three main use cases: the access mode use case, the selective trunk use case, and the transparent trunk use case. For those use cases, we are going to use the OVS CNI and the host-device CNI to implement everything. So, for the first use case, we are going to create an L2 service attachment, and because it's an access mode case, we are going to assign a single L2 service, which corresponds to one VLAN, and we're going to test the creation and the deletion.
For the second use case, which is the selective trunk use case, we're going to create, update, and delete an L2 service attachment, and we are going to include a range of L2 services, which means a range of VLANs; for this one we are also going to use the OVS CNI. And for the transparent trunk use case, we have two branches: the OVS CNI branch and the host-device CNI branch. In those branches we are going to create, update, and delete an L2 service attachment of type trunk, and we are going to include ranges of L2 services, which again means a range of VLANs. So, next slide, please.

Okay, so here I'll say a few words regarding the POC setup, the ENO POC setup. On day one we have four worker nodes, which are arranged into pools: we have the blue pool and the red pool. The criterion by which we separate nodes into different pools is the networking characteristics of the nodes. In the red node pool we have OVS bridges, and in the blue pool we have the virtio pool, which includes a range of virtio interfaces underneath. That's why we have two different pools: in the blue pool we have the virtio pool, and in the red pool we have the OVS bridges. These worker VMs are connected through virtio trunk interfaces to the dummy fabric, which for the POC will be an OVS bridge fabric, but for a real deployment it could be an actual data center fabric. And to be able to expose the networking characteristics our nodes have to the Kubernetes API, we need to create three connection points. From left to right, we have a connection point which corresponds to the virtio pool networking object, a connection point which corresponds to the bridge transparent trunk, and a connection point which corresponds to the bridge data that we have in our system. Those CRs are going to be created through a Kubernetes lifecycle management system, or by an administrator. So we register those.

Since you've not mentioned what an L2 service attachment or a connection point is, maybe, if you can spend just a minute on what these concepts are, I think it will help the group.

I have L2 service attachments in the next slides, where we have a full-blown example, but yes: the connection points are custom resources that ENO understands; they represent the networking characteristics of our nodes. Because we have three different networking objects, the bridge data, the bridge trunk, and the virtio pool, we need to create connection points for each of those: two connection points, bridge trunk and bridge data, which are on the red pool, and one connection point for the blue pool, which corresponds to the virtio pool. So we register those with our Kubernetes system and we move forward. Next slide, please.

So on day two we need to create ten L2 services and subnets. Each L2 service actually represents one VLAN; here we have ten L2 services, so we are going to have ten VLANs, from 10 to 20. The subnet objects represent the IP address ranges that we want to associate with each of those VLANs. Those are also custom resources that ENO understands, so we register them with the system. On the next slide we are going to create some L2 service attachments that bind all of these together; for now, we register these with the system and we move on to the next slide.
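A hypothetical sketch of this day-one and day-two input as custom resources. The API group and field names are assumptions for illustration, not ENO's actual schema; they only mirror the concepts described above.

```yaml
# Hypothetical ConnectionPoint: advertises a node pool's networking object.
apiVersion: eno.example.org/v1alpha1
kind: ConnectionPoint
metadata:
  name: ovs-bridge-data
spec:
  nodePool: red              # pool whose nodes expose this object
  networkObject: ovs-bridge  # e.g. OVS bridge vs. virtio pool
---
# Hypothetical L2Service: one service corresponds to one VLAN.
apiVersion: eno.example.org/v1alpha1
kind: L2Service
metadata:
  name: l2service-vlan13
spec:
  segmentationID: 13
---
# Hypothetical Subnet: the IP range associated with an L2Service.
apiVersion: eno.example.org/v1alpha1
kind: Subnet
metadata:
  name: subnet-vlan13
spec:
  l2Service: l2service-vlan13
  cidr: 10.0.13.0/24
```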
So here, to bind all of these together, we need to create four L2 service attachments. With those L2 service attachments, ENO kicks in: it will open the corresponding VLANs on the fabric, and it will also create the corresponding network attachment definitions for the pods to consume. In the first L2 service attachment, we can see that it's VLAN type access; it's related to the OVS bridge data; it will consume only one L2 service, because it's an access-type service attachment, and that's VLAN 13; and the implementation we are going to use here is the OVS CNI. The second L2 service attachment is type selective trunk; we are also going to use the OVS bridge data connection point here, and because it's a selective trunk L2 service attachment, we are going to use a range of L2 services, from VLAN 10 to 14, and again we are going to use the OVS CNI for this L2 service attachment. For the third one, we have VLAN type trunk; again we are going to use a range of L2 services, from VLAN 13 to 16; here we have a different connection point, which is the OVS bridge trunk, and the implementation will again be the OVS CNI. And the last one is VLAN type trunk again; the connection point here is different, it's the virtio pool, corresponding to the blue worker pool; we are going to use a range of L2 services, from VLAN 12 to 20; and the implementation here will be the host-device CNI. So we register these with the API. ENO watches for those events, and when something like this happens, it goes to the fabric, to the appropriate trunk interfaces, opens the VLANs on those trunks, and also creates the network attachment definitions.

So now that we have everything in place, we need pods to actually consume all of this, and we are going to create four pods, three for the red node pool. For the first pod, we're going to have an access mode interface for VLAN 13, in the middle of the image, because we consume the network attachment definition that corresponds to the access OVS CNI case. The second pod will consume the selective OVS CNI network attachment definition, which means that on the right-hand side we'll have a pod spun up in the red worker node pool with a selective trunk interface for VLANs 10 to 14. For the third pod, we are going to use the trunk OVS CNI network attachment definition, which means we'll have a pod that spins up on a worker node in the red node pool and takes a transparent trunk interface from the bridge trunk. And the last pod will be spun up in the blue node pool, on the blue worker, and will take a virtio interface directly: a transparent trunk virtio interface gets attached directly to the pod, which consumes the network attachment definition corresponding to the trunk host-device CNI. So this is the main idea, and with this slide we reach the end of the presentation. So if you have any questions, please.

A very quick one: these are three different clusters, am I right?

No, it's the same cluster.

Same cluster, okay, alright.

So we have the worker node pools, which are classified by certain characteristics; each node pool has a set of characteristics. In this case the red node pool has certain characteristics versus the blue node pool, which is bound to host-device using the virtio pool and has its own characteristics, all running on the same cluster.
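A hypothetical rendering of the first of those four attachments as a custom resource; again, the API group and field names are assumptions for this sketch, not ENO's actual schema.

```yaml
# Hypothetical L2ServiceAttachment for the access case: VLAN 13 on the
# bridge-data connection point, realized with ovs-cni.
apiVersion: eno.example.org/v1alpha1
kind: L2ServiceAttachment
metadata:
  name: access-bridge-data-vlan13
spec:
  connectionPoint: ovs-bridge-data  # one of the red pool's connection points
  l2Services:
    - l2service-vlan13              # a single service means access mode
  vlanType: access
  implementation: ovs-cni           # CNI used for the generated NAD
```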
Dimitris, I have a question: how do you hint the Kube scheduler to deploy the pods on the workers you want? You're not using a node selector or resources?

No. So right now it's mandatory to have the network resources injector deployed in your cluster. The network resources injector is smart enough to understand that you want to get an OVS interface or a virtio transparent trunk interface, and it will get the pod spun up on the appropriate worker node without you having to specify anything more than the network attachment definition.

Thanks.

Another technical question. I don't know if you guys remember the KNAP demo that I did a while ago; I just sent a link in the chat. It's remarkable how similar our approaches are, and we identified the same kinds of problems. I think the difference is that mine is just much smaller, I worked on it on my own, and you guys definitely went farther, especially that slide with the dotted line: you went below the dotted line, and I stuck above it, at least for POC purposes. But one technical aspect where I had an issue was the custom resources and dealing with Multus, because one of the limitations of Multus, and of CNI generally (it depends on which CNI plugin exactly), is that you can't do day-two changes after the pod has already been set up. That is, if you change the Multus annotations, you'll want to recreate the pod, recreate all the pods, to make sure that they reconnect to the new service. So you have a bit of a chicken-and-egg problem, and you have to deploy things in a certain order. Did you guys hit this problem, and how did you solve it?

We don't solve it, actually. We assume that we have everything in place: we create the VLANs on the fabric through ENO, along with the network attachment definitions, and the pods just consume those network attachment definitions. If we want to update those network attachment definitions, then we need to bring down the pods and create new ones. We don't have any hot-plugging of interfaces.

So it's more of a rolling-update use case, right? You change or update your configuration for a particular VLAN, or if you change certain VLAN IDs or extend your network, then you bring up your version 2 network service, which then deploys, make-before-break, and does the transition from version 1 to version 2. So yeah.

Yes. So, if you're interested, the way I solved it, which is not necessarily a great solution, and other people have suggested other ideas: one solution I used was to monitor the relationship between certain Kubernetes resources that carry that annotation, so that would be deployments, replica sets, and pods, and to see the annotations that they have, and then you know which custom resources they actually connect to. So if the operator detects a change in the custom resource, it knows to restart those deployments. You have to do a little bit of trickery, because Kubernetes doesn't have a restart API; it will restart if there's a certain change to the resource, so you can update its version, for example.

From what I understand, you're just watching for update events on the custom resource, and if that happens, you spin up the pods again, something like that?

Yes. I think the reason it's so awkward is that it assumes users would only use a Deployment or ReplicaSet, the built-in controllers. But there are DaemonSets, there are StatefulSets, and those are built-in controllers too; and if somebody extends Kubernetes with something new, your operator wouldn't know about it. So it's not the best solution.
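A minimal sketch of the restart trick being described, assuming a standard Deployment: since there is no restart API, you patch the pod template, and the built-in controller rolls the pods so they reattach to the updated network attachment definitions. The annotation key mirrors what `kubectl rollout restart` sets; the deployment name is an assumption.

```yaml
# Patch body for: kubectl patch deployment my-cnf --patch-file restart.yaml
# Any change to the pod template triggers a rolling restart of the pods.
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2021-04-26T15:00:00Z"
```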
There's an issue here, I think, in Kubernetes, that we're all aware of: handling this through annotations might not be the best way, right? But this is how Multus works. It's weird; annotations seem like metadata, and this is not, we're dealing with something really intrinsic in terms of connectivity. Anyway, it's exactly this kind of topic that might lead us into the next item on the agenda, or what's available on the agenda today, to talk about networking orchestration. These are exactly the topics that I think we all want to talk about: how are we actually solving these things, at a low level and at a high level. And I just want to thank you for this work, too. I think this adds a lot to the discussion, and it's really wonderful, so thanks.

Thanks, Tal. And just to add: we actually evaluated KNAP in the beginning, along with all the different approaches available in the community, and there is one from Red Hat as well, the cluster network operator, if I'm remembering correctly. But I think the bottom line is that they are exclusively meant for the internal Kubernetes ecosystem, handling the custom resources, and they don't have this external, second facade which does the data fabric orchestration. So we kind of stretched it a bit, and like you said, extended that idea, to bring end-to-end orchestration down to the switch level, or the fabric level, and tried to automate that area. You could say it's a bit of an extension to what we have in projects like KNAP and the cluster network operator, but yes, we evaluated those before starting this effort.

I don't want to take too much time here, but I guess another challenge that comes out of this POC is how you create custom resources for these specific technologies, like L2 attachment. The challenge in Kubernetes is that all of these can look like one-shots: for a very specific use case you create a custom resource with its own operator. That works, of course, but the challenge is really how we unite all these different custom resources, which might be contributed by the community, into some kind of solution that could be more generic, where you can install these plugins in a generic way; how do we move beyond bespoke solutions to a really general solution? I think that's one of the tasks I see for the networking orchestration task force: to think about this problem and provide solutions.

Sorry, just one thing. I was thinking about this, and there is the way the CSI interface works, how CSI selects the storage implementation to use, and I think maybe we could use something like that to select the correct backend plugin. But I totally agree that we shouldn't have any technology specifics in the northbound API of this. And another thing: my worry is that we somehow need to separate the network administration tasks from the network consuming tasks, the administration as in the creation of these networks, versus the deployment or consumption of the networks, the assignment, or whatever.

Yes, we have a very thin borderline between the two there, and we try to keep them decoupled in the form of packages. So we have these VNFD packages, in NFV terms, which are specifically, or exclusively, for the network functions, and they hold only the deployments of your network functions; and then the networks, the external networks, are created earlier, through, let's say, the NSDs. And that's basically the administration part
which you were pointing to. That's how we kept the borderline between the two, trying to give each its own lifecycle.

Okay, that sounds good. And is there also interoperability of the VNFD, in the sense that if you run it on another Kubernetes cluster which uses something other than an OVS bridge, it's the same thing?

Yeah, it should support interoperability, and portability, so that it can run on different clusters, and so on. Yeah, that was all from our side for this small introduction of ENO.

And I just have a question regarding the process to install ENO. Is it straightforward? Do you need any specific requirements, any specific Kubernetes version, or do you just apply the YAML?

In general we don't have any dependencies on a particular Kubernetes version, so ideally it should work on any Kubernetes version, on commercial deployments, or on the different flavors and variants of clusters. No specific requirements at that level. You can think of ENO as an application that you can deploy in one click, and you then have network automation in your cluster.

What about step 5 shown here in this diagram? I heard you mention you haven't got to it yet, but any thoughts on how it might work? Do you plan to develop some generic plugin, something that can do NETCONF and YANG or something like that? How do you see that evolving?

Yeah, Daya, would you like to take that question?

Yeah, sure. So we have been pondering building something like an OpenConfig-based service, with either gRPC or NETCONF, that can talk to different devices for this gateway configuration. As far as the fabric itself is concerned, though (maybe, Alok, you can go back to the fabric plugin slide), we have not found any adequate API which works at a fabric level, something like Neutron. If there's a vendor fabric, fine, but we haven't seen an open source, standardized fabric API as such. So that's where the gap is: the southbound from such a service could be OpenConfig, which is a very device-configuration-specific interface, but there's no standardized fabric API, at least to our knowledge, which could help build that layer.

I mean, I'm aware of the issue, but I was wondering if you had a solution.

We're coming up on the top of the hour, Alok; we want to make sure that we have enough time for anything else. One of the actions going forward, related to several of the comments: there are multiple projects out there that are trying to solve this, so one thing that could happen would be an effort to list all of those and then potentially map out the differences, so that people can see, here's what this one offers, and here's the other thing. A potentially separate action would be to analyze the projects to see what parts they are trying to solve. We're kind of talking about it here, but there are potentially parts that could be broken out, and then we bring those forward. Like, we're talking about these APIs right now, and I know some of these items have been discussed for quite a while over in, for example, Cluster API: how are they going to handle provisioning the network fabric if you're at a point where you're doing bare metal? I know those discussions are there, because I was in them more than a year ago. That's only one group within the Kubernetes ecosystem, but if we look at all these projects and can split out and point out the problems that each of the different parts is trying to solve, then ideally we can go and reach out to some of the Kubernetes groups
and see if there's interest in getting involved in this as well. But we need a list of the projects, what they're providing and what they're trying to solve, before other people can get involved with projects like ENO or KNAP or any of them.

And then mapping them: I think that's exactly one of the goals of the task force, at least as I see it, to do that work of comparison.

Absolutely. One of the big things, and we've already talked about this, is splitting out the needs that we have, and what's missing, versus potential implementations, and we can work backwards; it's fine that ENO and some of these things already have implementations, of course, because there was something driving them. But at the top is what's desired, especially if we dive into the Kubernetes community and get more people involved: they're going to want to talk about the general driving needs before the implementation. They'll first want to hear all about the needs.

So, we are almost at the top of the hour, but for the agenda item about the update and the task force, I'll do it in one sentence: please continue on to the CNF working group meeting that is just after this, because the decision has basically been made to move the task force under the CNF working group governance, so the update will happen there, I guess, in the next meeting.

Well, thanks, Tal. And the last thing, for people that are interested: the call for leaders for the cloud native network function working group leadership is open, and there's a link to it in the meeting docs. So if you're interested in seeing who's running for leadership, or in getting more involved, there are more details in the link in the docs to the mailing list. With that, the CNF working group will be starting in about two minutes here, so unless anybody has anything else today, I'd say thank you all for coming, and then we can go about our days or switch over to the next meeting.

Thank you all for listening and giving us the opportunity to present this to the community.

Thank you for presenting today, I thought it was really insightful.

Thank you. Alright, thanks. Talk to everyone later, bye.