Okay. Let me go ahead and get started. Could I request folks to please add your name to the agenda, and also add any topics that you'd like to discuss. The background here is that I've been working with the team from Vulk and folks from Cisco and Intel and Red Hat and elsewhere for, believe it or not, 10 months now, but intensively for about six, to build out this CNF Testbed. The idea for it is to have a straightforward way to talk about increased use of Kubernetes by telcos. And it's conjoined with this concept of cloud native network functions, or CNFs, and how you could transition VNFs to CNFs, which of course sounds nice but in practice winds up having a lot of challenges to move through. I was a little hesitant to go through it all again, but I guess we have a few new folks on, so I think I'll just take five or eight minutes and walk through the beginning of this CNF Testbed presentation. I think that would be some useful context for folks to have about how this all comes together. Would you mind sharing it? And then I can just walk through some of our thinking, and I think that would lead naturally into the next-steps conversation.

So, on slide two. Yeah, thank you. I'll just remind folks that the Linux Foundation is much more than just Linux. CNCF has been collaborating very closely with LF Networking, and as we'll talk about, we've made use of a bunch of VNFs out of their ONAP project, and we have the hope or expectation of using more over time. Slide three is the overview of CNCF: we now have five graduated projects and 15 incubating ones, and there are 17 platinum members backing us. One way of seeing the CNF Testbed project is to say that CNCF has been extraordinarily successful in bringing in the whole public cloud community and also the whole enterprise software community, and the question then is what it would take for us to expand to telcos and their vendors.

Slide four is a look at LF Networking, and you can see that they have 10 carrier members at the highest level, and a number of platinum vendor members at the top level. They're now providing networking software to carriers representing more than 70% of all subscribers around the world. ONAP is their biggest and most important project. It is very much designed for what they call a multi-VIM world, that is, supporting multiple different virtual infrastructure managers like VMware and OpenStack and Kubernetes and others. One of the things we're trying to do with this work is to show a possible path forward for them, where they may get some meaningful advantages from optimizing that work for Kubernetes. Essentially, the vision would be to have ONAP do a lot less: emitting containers and YAML, and looking to Kubernetes to do more.

Another way of looking at that is slide five, which focuses specifically on the past: the first release of ONAP, Amsterdam, ran on OpenStack, VMware, Azure, and Rackspace, and it supported VNFs. Then the ONAP Casablanca that's available today supports Kubernetes, so you can run these cloud native network functions; it also supports VNFs on OpenStack. And of course the Kubernetes part can run either on top of bare metal or in a cloud.
But the future scenario that we have here is the one we want to focus on with this testbed, which is allowing Kubernetes to be the universal substrate beneath all the application functionality, one that abstracts away the details of both the bare metal and any public cloud, and specifically supports hybrid cloud functionality. On top of that you can run cloud native network functions, and you can run all your operations support system and business support system functions on the same clusters. And if you do have some need of VNFs that are legacy, or that you haven't been able to port over, there are these interesting technologies, KubeVirt and Virtlet, that allow you to run those and manage them via Kubernetes. And in this scenario the ONAP orchestrator is also running on Kubernetes.

Let me just stop there for a second, since we have a number of new folks: any questions so far on any of that, before I jump into exactly what we're building here with the testbed? This is the context for why we're building it, a vision of what we're trying to do. It's star six to unmute your phone if we can't hear you.

So anyway, slide six is the overview of what we've done, which is that we took several VNFs, virtual network functions, out of the ONAP project. Specifically, we're using the broadband network gateway function, which is part of the virtual customer premises equipment use case. We took that identical networking code and packaged it as a container. So in this image, it's a VNF on the left and a CNF on the right, packaged in a VM or a container, running on OpenStack or Kubernetes. And then, critically, the hardware is identical, and we were very grateful to get access from Packet, the bare metal hosting company, who has worked with us quite closely on a bunch of issues. The idea then is to be able to compare the performance of VNFs and CNFs, and to be able to talk about best practices, necessary changes, and whether there are any patches we need to upstream to the projects. And I think there's no expectation right now of this CNF Testbed becoming a standalone project; it's very much meant as a market development project to show the value of taking this approach. Our expectation is that any changes that are necessary we would be upstreaming to Kubernetes and the other projects we're making use of. That could change, obviously, but there's no plan for it to.

Okay, so slide seven. There's a whole build for this that we could get into later, but we found that this summary was the simplest version of it: moving between user space and the kernel is always going to be slow. The vSwitch here, for our VNF use cases, uses vhost-user to connect each of the VNFs together, and those connections are always going to be slower than the middle scenario, which we call the snake case, where you have a user-space data plane, still using the vSwitch, to have the CNFs talk together, so they get a performance improvement. And then the third case, in yellow, is what we call the pipeline case. In this scenario we're running a chain of three pairs of CNFs or VNFs, but in the pipeline case the CNFs can talk directly to each other using memif connections and get even faster.
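To make the pipeline case a bit more concrete, here is a minimal sketch, under stated assumptions, of cross-connecting two VPP-based CNFs over a memif pair from VPP's CLI. The container names, and the idea that both containers can reach a shared socket file, are illustrative assumptions rather than the testbed's actual wiring.

```python
# Minimal sketch (assumptions above): two VPP-based CNFs joined by a memif
# pair. One side creates the shared-memory interface as master, the other
# attaches as slave over a socket file visible to both containers.
import subprocess

def vppctl(container: str, *cmd: str) -> None:
    """Run a vppctl command inside a running CNF container."""
    subprocess.run(["docker", "exec", container, "vppctl", *cmd], check=True)

# CNF A owns the master side of the memif interface.
vppctl("cnf-a", "create", "interface", "memif", "id", "0", "master")
vppctl("cnf-a", "set", "interface", "state", "memif0/0", "up")

# CNF B attaches to the same memif id as the slave.
vppctl("cnf-b", "create", "interface", "memif", "id", "0", "slave")
vppctl("cnf-b", "set", "interface", "state", "memif0/0", "up")
```

Once both sides are up, packets move CNF to CNF through shared memory, which is what lets the yellow pipeline case skip the vSwitch hop entirely.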
And so slide eight is a very preliminary look at performance, where the CNF snake and pipeline cases are six to more than eight times faster than the VNF case. There's nothing particularly shocking about this; these are all the same reasons that people like containers over virtual machines, but on the other hand it is useful to see. I think for most of this call we're going to talk about how to move to slightly more realistic use cases, not just trying to maximize packet throughput, and to look at some of the other changes.

On slide nine, some of the other differences are in terms of the amount of time it takes to do the deployment from scratch, and I will mention that we're aiming to get the 16 minutes for Kubernetes down meaningfully by removing a reboot that's required right now. In terms of deploying the network functions, the idle-state RAM and CPU are all significantly better. At runtime you can see that it's actually using more CPU, but that's not a huge surprise given that it's moving six times more packets through. The latency is very low in both cases, and then it's the same performance numbers from the previous slide.

So then "how do you engage" is really the purpose of this call. The first piece is that there's no need for anybody to take our word for it on these performance numbers. We would love to have you replicate this environment, and in particular Packet is happy to make an API key available to allow you to use these pretty beefy machines to do so (there's a rough sketch of what that could look like below). Second, to the degree that we're just not doing things right, that we have suboptimal configurations, we would love pull requests that show ways to improve either the Kubernetes or the OpenStack deployments. The third one here is that this testbed isn't designed to be locked to Packet; we would love pull requests that make it able to run on your own bare metal servers in your lab, or on other cloud bare metal servers like the AWS ones. And the fourth is to package your own internal network functions as a VNF and a CNF and run them on your instance of the testbed. You don't have to share the code with us, but we would love to see the results. I will mention that this whole project is licensed Apache 2.0, so to the degree that it's useful to you, you can do anything you want with it. But it's not actually designed to go into production with telcos; our hope here is to engage your interest as vendors, to the degree that you would be creating your own versions of it. And then the final piece: right now all of our work has been focused around bare metal servers. We would love contributions or ideas on improving performance on virtualized hardware, such as the offerings from Google Cloud, Azure, and AWS.

So then, in terms of where to continue the conversation: we're going to be doing a number of meetings at the Open Networking Summit in San Jose, and I'd be very happy to meet with any of you or your colleagues there. Then we will have a booth at KubeCon Barcelona, and there are some other events beyond that: Shanghai, Antwerp, and San Diego. I'll just mention that Barcelona is on track to be a blowout event; we're extrapolating right now that we'll sell out with 12,000 attendees, which would make it the biggest open source developer conference ever.
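As a hedged illustration of that replication path, here is what provisioning one bare metal machine through Packet's REST API might look like; the plan, facility, and operating system values are assumptions for illustration, not the testbed's exact configuration.

```python
# Sketch: create one on-demand bare metal device via Packet's API, given the
# API key and project ID mentioned above. Field values are placeholders.
import os
import requests

API = "https://api.packet.net"
headers = {"X-Auth-Token": os.environ["PACKET_API_KEY"]}

device = {
    "hostname": "cnf-testbed-worker-1",
    "plan": "n2.xlarge.x86",          # assumed plan name for a "beefy" machine
    "facility": "ewr1",               # assumed facility
    "operating_system": "ubuntu_18_04",
    "billing_cycle": "hourly",
}
resp = requests.post(
    f"{API}/projects/{os.environ['PACKET_PROJECT_ID']}/devices",
    json=device,
    headers=headers,
)
resp.raise_for_status()
print("provisioned device:", resp.json()["id"])
```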
And I think I will stop there. There are some very useful appendix slides and other kinds of information that you should feel free to go through, but I think that's a reasonable overview of why we built the testbed. So could I open it up again for any questions? And I guess, particularly if some of the folks who've been involved have any edits to what I just said, where I wasn't quite being precise enough, I'd be very happy to hear suggestions on how to describe things more clearly. Correctly, I might even say. Ed, Maciek, Daniel, and Michael: any edits?

Hello. My name is Francois Turand. I'm working on CoreDNS, but I'm also part of the Infoblox company. I had one question. I was looking at this over the weekend, because it was presented somehow at the last CNCF TOC. I understood that the game is to show that the efficiency is there for running your networking function on top of Kubernetes, and then taking advantage of Kubernetes. I take the whole picture. I mean, you prove here, or you want to prove or compare, that it's not a networking limitation that should prevent you from moving from VNF to CNF. Is that correct, or did I misunderstand?

That's correct. I mean, the actual performance between machines shouldn't be meaningfully different between VNF and CNF, since the networking hardware and the bits over the wire and everything are going to be nearly identical. I think what we're trying to measure is the networking performance and scalability and memory usage and all the other kinds of metrics on how things run within a machine. But I'm not sure I'm addressing your question.

So my question is, finally, what is the full purpose of this testbed? Because you say, okay, we are going from VNF to CNF. Is the whole purpose that you can now verify that the same networking function deployed as a CNF has the same networking performance as when it's deployed as a VNF?

That's correct. We're talking about taking the same NF, as we call it, and packaging it as a VNF or a CNF.

Okay. So I was wondering: you say it's natural that a container is lighter and goes quicker than a VM. But my question was, is it because on the underlying network we are using this memif instead of the virtual, I don't know what they call it, vhost? Is it the underlying networking that makes it more efficient on containers than on VMs?

I don't know that that's the main advantage we're showing right now. Go ahead, Rick.

I don't know to what degree folks have broken down specifically this piece versus that piece in terms of the contributions to the improved performance in CNFs. But if you go back and look at the graph with the blue, red and yellow bars, Dan, we can definitely say that the difference between the red bar for the snake case and the yellow bar for the pipeline case is switching from looping through a vSwitch to using direct cross-connects, which makes a lot of sense. So that piece, the difference between those two bars, the red and the yellow, I think is pretty well understood: you have an entirely different way of connecting CNFs available that you don't have for VNFs, and it's a big improvement.
What all contributes to the difference between the blue bar for VNFs and the red bar for CNFs in the snake case would require more investigation, to track down what contributes what to where. And it's not entirely clear how much we can dig into that, because there are just so many more limitations in the VNF case than in the CNF case; it's a little hard to tease out.

Okay, thank you for the detail there. Maciek, is that something you can speak to, as far as the performance numbers that you've seen, you or Peter or someone else, on the difference between containers and VNFs?

Yeah, so I wanted to make two points, actually: one just a comment on what was just said in answering Francois' question, and the other one a question for Dan. If we compare the VNF topologies or service chains versus CNF service chains, the difference in performance we're observing, and we will observe this both as part of the CNCF CNF testing and also in the Linux Foundation Networking FD.io project that I'm working in, is this: if you compare a very simple scenario of a single VNF instance and a single CNF instance, both running on the same amount of resources with a virtual switch, we do indeed see a bit of a difference, the CNF configuration being faster, but it comes down to the cost of memory copy operations between the two, and there is not that much of a difference; it's at the level of 5 to 10%. However, once you load the topology, similar to what Dan presented on the slides, and you have multiple instances of the VNFs and CNFs, where the packets are being passed multiple times between those virtualized or containerized network functions, this difference grows apart. The reason is the efficiency of the memory copy to a degree, but also the complete envelope of resource usage, including the context switching for the VMs, with basically the hypervisor tax, KVM entries and KVM exits, completely not being present in a CNF configuration, plus the cost of the copy operation. And that's the approach we are taking in comparing VNFs and CNFs: instead of comparing a single instance of a VNF and a CNF, compare the topologies, as listed here, snakes and pipelines, filling up the processor socket with multiple cores, to really highlight the efficiency gains between the VNF and CNF scenarios. We're referring to it in our project, the FD.io CSIT project, as service density testing, and we're actually looking to drive standardization of the methodology in the IETF Benchmarking Working Group in that regard. There is more data available on the various scenarios tested in the FD.io CSIT project, and I'm happy to provide links, if that makes sense.

Okay, thank you. Yes, so I was wrong: it's not really the memif interface, it's really the whole configuration. And I understand that we avoid the hypervisor in the container case. Correct. Okay, thank you.

Okay, and so, Dan, I've got a question regarding your comment on running the same network functions as VNFs and CNFs from the ONAP set. Is there a clear view on what those functions are expected to be going forward?

I mean, I think the second half of this call is to decide what the next things are that we can do with the testbed that are useful, and that both our telco partners and their vendors would find constructive.
One thing that comes to mind is that we could implement the full vCPE use case, which, Taylor could say, has something like a couple dozen different network functions in it, and then try and send traffic through those and talk about the performance and the RAM usage and the CPU usage and such in both cases. That would almost by definition be a much more realistic view of the differences between these two testbeds. But in some ways it might be too realistic, or not optimized enough, because those network functions are not necessarily designed to scale, or they haven't necessarily been redefined as microservices in the way that might be optimal. So I'd say that's one thing we could go focus on.

Another area that I'd say is maybe a little duct-taped together, or not optimal, is that we're not really making use of the core Kubernetes primitives, at least on things like node affinities. In a more realistic scenario you might configure several machines with this high-performance Layer 2 networking hardware and software on them, and then you would want a way for Kubernetes, the kubelet, to know that those nodes have that capability and that containers should get scheduled to them (there's a sketch of what that could look like after this exchange). So those are the two immediate things that come to my mind of what to do next. But I'm curious, for Taylor and Lucina: is there a document today with the CNF Testbed that has the sort of backlog of things that you could go work on?

This is Taylor. We do have, I guess, a couple of projects. The GitHub issues are basically the main place, and then they're organized in the projects section. Thanks, Lucina, for bringing that up. And I guess the next one is kind of tied around, sorry, not Mobile World Congress, we just finished Mobile World Congress: the Open Networking Summit, ONS, which is in San Jose. A lot of this is tied into things that we've seen as we've moved towards the current test results and test cases, and trying to make it repeatable to deploy; OpenStack is probably one of the biggest pieces there, and now that we've gotten to a point where that can be brought up 100% open source, we can start getting improvements from folks. So there are a lot of things that are more like improvements. There are some items on the list, like IPv6 support. I know Michael had been adding support for that, he's on the call, and I think a lot of that's done. That'll allow us to do some test cases like segment routing; we've talked about use cases that may tie in with using IPv6 and MPLS. So there are a lot of things I think we're ready for, and we're putting those pieces together. Maybe the next step is to pick a use case that would be most desired or relevant to the community, and that may be the ONAP vCPE use case, or it could be another ONAP use case. We're also interested in other things: NSM, we've been attending and collaborating with the Network Service Mesh group, and there are some use cases there that might be complementary for the testbed. But this particular project, project 21, is probably the next set of things where we're looking to add various support.

Okay, I think that was a good answer. Thank you, Dan. Thank you, Taylor. And I do agree, specifically on the IPv6 side and the routing side, SRv6 as you said, that that's something we should be focusing on here.
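For the node-affinity point above, a minimal sketch using the Kubernetes Python client: label the nodes that have the fast NICs, then pin a CNF pod to them with a nodeSelector. The label key and image name are hypothetical.

```python
# Sketch: mark a node as having the high-performance L2 NIC, then schedule a
# CNF pod only onto nodes carrying that label. Names here are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Label the node that has the fast NIC wired up.
core.patch_node(
    "worker-1",
    {"metadata": {"labels": {"cnf-testbed/fast-nic": "true"}}},
)

# A pod that will only land on nodes with that label.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vbng-cnf"),
    spec=client.V1PodSpec(
        node_selector={"cnf-testbed/fast-nic": "true"},
        containers=[client.V1Container(name="vbng", image="example/vbng:latest")],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```

The same intent can be expressed with full nodeAffinity rules when the scheduling constraint needs to be softer than a hard selector.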
As for the ONAP use cases, I think it's something we'll need to discuss in more detail. But I think having a set of representative service chain use cases, with different functions in a chain, would be of benefit to the community. So I agree here with them. Thanks.

Maciek, could you point us to some better chains than the vCPE use case? Our only real requirement is that they be open source.

Understood. So from the networking use case perspective, I think there are a number of security-related cases, including firewall, NAT, and encryption. But in terms of getting a fully functional open source set for that, that's, I guess, trickier. We do have open source IPSes like Snort, and potentially other applications. But let me think about that, and I'll come back with some proposals. Thank you.

I do want to point out, on the ONAP slide, that we're not looking at adding the ONAP layer immediately on top. What we want to do is support all the pieces underneath, and then we'll eventually be collaborating with projects like ONAP on being able to run things. As Dan pointed out earlier, we want to contribute back upstream: ONAP has a demo that runs a lot of cases, and we want to contribute any patches that go up there. We'll be looking at adding other layers and doing more, but right now we're asking what we can implement. If we have a use case that we can review and look at what's there, then we can pull as much of that over into the testbed and re-implement it. As far as the firewall and security side, ONAP does have security use cases, if we want to take a look at those. And then, Ed, you may have thoughts on some security use cases that you all have been working on from the Network Service Mesh side.

Yeah, I mean, I think that'll start falling into place a bit more as we scale up. And you can also look at it from a different direction: often, a lot of the security cases we're looking at on the Network Service Mesh side are actually coming up from enterprise. But as the folks who do SP know very well, essentially the kinds of things you would do for enterprise become product that you sell if you're an SP, so they become interesting there as well.

I'd like to get some clarification on slide six, if I could, while Ed and Dan are both on the phone, because I've talked with Ed about this. When we say running on top of identical on-demand hardware from bare metal hosting, from the ground up, bare metal: are we making a decision where we're saying it's going to be a software data plane, so forwarding is going to be done in software, always, for this project? Or are we saying that it's going to be compatible with some type of ASIC, some type of hardware beyond a smartNIC? So, outside of the NICs being hardware: forwarding tables and things like that in hardware. I've talked with Ed about this before, and every time this subject comes up, I think it's a little bit hazy whenever I talk to operators or anybody about the project. They're always thinking hardware, as in a switch and an ASIC and things like that. So if we can somehow get clarification, either that we're never doing that or that we're open to doing that, I think that might help.

About sort of magic hardware in the box: one of the things you run into is that you've immediately moved into a completely bespoke world.
Generally speaking, I mean, there are some exceptions, but much past some basic acceleration, maybe stuff that can be taken advantage of on the smartNICs, it effectively becomes a build-your-own solution from the bottom up, at great expense. You could do stuff like that, but anybody who's going to go build their own solution is then going to turn around and do it somewhat differently. Does that make sense?

Yeah. So when we're talking about the bare metal, do you think it'd be a good idea to say, okay, we're talking about smartNICs here as well, that there are specific NICs we're saying this works with, but we're saying no ASICs? Can we just say that explicitly, so everybody knows what we're talking about and where we're going? Because at this point we're saying it's all, mostly all, software.

Watson, I might make two edits to that. One is that if the ASICs are publicly available, and in particular if they're available via an on-demand service from Packet or a similar company, then I'd certainly be open to having a version of the testbed that works with them. It's just that if they're not, then it's not something that we can test, or that anybody else can iterate on and see the impact of.

Yeah, so there are open ASICs, and some of them, I believe, are part of the Linux Foundation; I don't think there are any that are part of CNCF, though. This is something I've talked about with the group, and I never really could come to a clear, definite "no, we're not doing that" or "yes, we are." So it sounds like maybe it's open to us.

Yeah, it's more of a maybe, and particularly if a vendor wanted to come in and say, hey, we've made these cards available on Packet, they're going to be running in some class of machines, we'd like the testbed to show the performance improvement of using them, and, most importantly, we're willing to contribute the code that supports that use case, I would be thrilled by that scenario. That's not that far removed from saying, oh, well, can Mellanox come in and show the value of using a Mellanox NIC over an Intel NIC? As long as they're willing to make the contributions, and they're open source in the same project under Apache 2.0, and we're not requiring that the firmware be open source, just all the configuration parameters and such, then we'd be thrilled to accept pull requests on the subject. Okay. I mean, I do think it's very fair to say that most production use cases today are not using commodity hardware and commodity everything. Now, there's a sort of separate question of whether they should be doing more of that in the future, but I don't really consider it our job to decide that. If in fact we did get folks with smartNICs coming in and sort of helping to build out things that are replicable patterns, that would be an amazing thing, because that's typically not what's happened so far in the industry, and I think it would be good overall for the industry if that were to occur out of this.

But you're skeptical that it will. Well, you know, sometimes the horse learns to fly.

Could we chat for a second about where we stand with the Mellanox NIC, since we just brought it up, and I did happen to run into some colleagues from Mellanox in Barcelona?
Well, that was our first target on Packet, because that was what was publicly available. They're releasing Intel NICs in March, and we actually got pre-release access to those to start testing, and we helped Packet decide on the configuration for the Intel versions, given the networking goals. We have support for the Mellanox ConnectX-4; that's the version they're using. And we can deploy all the different configurations: we support OpenStack, Kubernetes, and KVM-and-Docker-only machines, all on the Mellanox, and we've been able to use those. The limitations are that the drivers are not open source, and there are some unexpected oddities with how the interfaces show up in Linux; they don't act like we would expect, but we've worked around those and understand what's going on there. And then there are some issues with, well, I won't say issues: performance is lower, I'll just put that out there, than it is on others, and we don't know exactly why that is. Michael, do you remember, it seems like there was a bug, or maybe, Ed, it seems like we had to patch something, maybe in VPP, because of a driver issue on Mellanox?

Yeah, so the Mellanox story and drivers continue to improve. They have dropped a note indicating that they've moved away from requiring the OFED stuff, so I'm quite hopeful that that will improve the situation greatly. It's been a long road with them, but God bless them, they've stuck with us through the whole thing. So it may turn out to be easier to get Mellanox support in the future; we'll just have to go see.

Yeah, but it would be worthwhile at this point for me to connect with them at the corporate level and see if they want to engage at all. I think we should engage any folks creating those cards and drivers, and try to encourage them to move towards an open source model for all of that and a common interface.

Okay, on the performance issue: I think that's mainly due to the way we have to configure it. Since we only have one port available, we have to do some of the VLAN encapsulation and decapsulation in the vSwitch. We don't have to do that on the Intel, but I think we're planning on setting up an environment using Intel that does the same thing, just so we have a better basis for comparison.

Yeah, to expand on that: it's dual-port, two ports, on the Mellanox, and quad-port, four ports, on the Intel for Packet, and we're going to test by limiting the Intel to two ports, to look at the performance compared to the Mellanox.

But there's actually another interesting option in the space as well. I believe right now you're using the DPDK drivers for the Intel NICs. Intel recently released AVF, which is a standardization of the binary interface for their NICs, both existing ones and, they've committed, well into the future. VPP now has direct support for AVF, meaning it can take advantage of Intel NICs without DPDK, and the preliminary performance results indicate that it's much faster than using the DPDK drivers. So there's a whole other branch that could be explored there that might give interesting results.
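To show what that branch might look like, here is a sketch, under stated assumptions, of attaching VPP to an Intel virtual function through its native AVF plugin rather than DPDK. The PCI address is a placeholder, and the VF is assumed to already exist and be unbound from its kernel driver.

```python
# Sketch (assumptions above): bring a NIC VF into VPP via the native AVF
# driver instead of the DPDK one. The PCI address is a placeholder.
import subprocess

def vppctl(*cmd: str) -> None:
    subprocess.run(["vppctl", *cmd], check=True)

vppctl("create", "interface", "avf", "0000:1a:00.1")  # placeholder address

# VPP derives the new interface's name from the PCI address, so list the
# interfaces rather than guessing the exact name before bringing it up.
vppctl("show", "interface")
```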
Just to use our remaining 15 minutes to chat about what else we should do: what other work here would be useful from a market development perspective?

There is something that I've not seen in the repository, or at least in the documentation: it's about how a VNF vendor has to transform its VNF to become CNF-compatible, and to benefit from the pipeline that you described earlier and the high throughput that should be available in that pipeline. I'm not sure it's in the scope of this project, but it could be interesting for vendors to better understand the limits and what needs to be done.

Yeah, that's actually a really interesting thing to document, because I think effectively what it comes down to is this: the number one thing a VNF vendor would need to do to transition to being a CNF is to move their data plane purely into user space, because you have an immutable, shared kernel underneath, so you can't keep doing things in kernel modules anymore. That would be the table stakes for being a CNF. And then for the pipelining, it turns out the memif stuff is pretty well documented. There's also libmemif, which can be used in pretty much any CNF you want to build in order to get the pipelining behavior, or you always have the option of simply building on VPP, which is free and open source and Apache 2.0 licensed.

Beyond some of the hardware performance and network performance work that would tie in with memif, other things would be looking at VNFs that may offer multiple services, some of them maybe really large, and then looking at breaking those down. That's just following the cloud native direction you would take anyways, and when you start doing that, the density and workload on the machines become more flexible. So that's probably going to be an area to look at, and some of that may come about when we start looking at other use cases. I don't think it's going to be one-size-fits-all; I think we'll probably have some guidelines here and then some specific things to start looking at. If there is a VNF out there, though, that anyone has, or that you'd like to see running, especially if you have a more minimal use case or something documented, or that can be documented, we'd love to take a look at that, and that could help create a guide for other folks to contribute.

You mentioned the vCPE use case. Why are you singling out that one?

The ONAP vCPE use case was just one that we kind of started with, as something we were looking at targeting. We ended up refocusing on components within it, to do benchmarking against the systems we're running and other things. If you actually look at that use case, which I'll drop in the chat, it has a lot of components, and the one that we were mainly focused on is labeled vBNG. Thanks for bringing that up. On the left there are home networks, and, it's kind of hard to see, but the BNG is the edge, where things come in, and that's where most of the tests that you're going to see right now have been focused over the last couple of months. That's doing IP routing, so we're just trying to get the performance and baseline numbers, and this also ties in with work that the FD.io project does in the CSIT lab. So we're able to really get the baseline, see if those performance numbers make sense, and then start building on that. So this use case may be something that we want to implement next and finish all the pieces, because we've actually done several of the other network functions. But if there's a totally different use case, either from ONAP, which has some, or another project, or a vendor or anyone else, we'd be happy to look at it.
Specifically, we'd give higher priority to ones where there's code available to review, you know, open source we can review, and/or specifications about how it's implemented, so that we can see what we would do to implement it. And of course, if someone wants to contribute a test case, that would be even better: code for implementing it.

So besides documentation on how someone could create CNFs, or migrate a VNF to a CNF, are there any other items that folks would really like to see? Okay. Dan, what would you like to do next?

I think we can stop there. I guess I'll just mention that we don't have a mailing list right now, but we do have the GitHub issues that folks can feel free to open, and we have the CNF channel on Slack. We'll need to see whether a twice-a-month frequency makes sense here, or maybe we'll want to drop back to once a month or something else, but we're definitely open to thoughts or suggestions. And, you know, hopefully we will be getting other organizations that are interested in replicating these results and ideally contributing some code. So we'll cross our fingers that there's going to be some uptake now that we're doing more publicity around it. I'd suggest we stop there, unless anyone else would like to suggest something.

That sounds good. Thanks, everyone. Thank you. Thank you. Thanks. Thanks.