once or twice in my lifetime, but then they were gone. Okay, well, let's go ahead and get started. I think the main piece is that I wanted to apologize for not being able to push through and get the white paper released before KubeCon. That had been my aspiration, but there were just too many things happening as the conference got close. So I am now interested in trying to drive it toward consensus, getting the initial chapters published, and then coming up with a plan for which chapters we're going to work on next. Unfortunately, there are still a bunch of holes and open issues that we're going to need to discuss in order to get there. As of right now, I'm not sure it actually makes sense to announce or publish it prior to Mobile World Congress in Barcelona. I think that's probably the best bet in terms of getting more awareness for it. And the advantage of putting it off a little bit is that after we do have a basically polished draft, I would like to run it by both SIG App Delivery, which is a new CNCF SIG, and the Technical Oversight Committee. I'm not looking for formal approval from either of those bodies, but I do think it would be worthwhile to let them have a look. And if it's not already on your calendar, Mobile World Congress in Barcelona is on February 24th.

So my agenda for the day: I'm interested in talking through a couple of issues in the white paper if we have time, but I was hoping that first Taylor and his team could give us an update on the CNF Testbed, what's changed recently, and what the plans are for the next couple of months. Then I wanted to have a short conversation, which I believe I've pasted into the TUG chat, around the question of TOSCA and Heat templates. I would love to hear from both the operators and the vendors on the phone: as you're planning with CNFs, are you planning to use either TOSCA or Heat? This is coming up in particular in the context of CNTT, and sorry to use all the acronyms, but the Linux Foundation Networking project has the OPNFV subproject, and they're launching the OVP, which is their verification process. They've already launched it for VNFs, and they're beginning to talk about how to do it for CNFs. I would like to understand people's perspective. I'll go ahead and throw mine out there: it doesn't make a ton of sense to me. My understanding of Heat and TOSCA templates is that they try to specify at the wrong level, around things like IP addresses and much more fundamental pieces of the computing. But if they don't apply, then it does raise the question of what kind of constraints we're expecting to put on CNFs, for example in terms of them being proper Helm templates, or other kinds of usage of the platform. But before we get to that question, which I am really interested in, I would love to go ahead and hand it off to Taylor, if you and your team would like to give a CNF Testbed update first.

Sure, thanks, Dan. So a lot of what's been occurring in the CNF Testbed is tied to work moving toward KubeCon and some of the stuff there. From the last ONS, one of the main things we were focusing on was trying to make it so the CNF Testbed could be used by more people as a kind of dev area for testing the different technologies and ideas. We did a tutorial at ONS, got feedback, and have been trying to roll a lot of that in and make it easier for others to reproduce everything.
Any of the missing pieces and documentation, any of the areas where things don't work as expected and are harder to troubleshoot. One specific change on that front was splitting the hardware provisioning from the workload platform provisioning. The latter is setting up Kubernetes and adding any of the add-ons, whether that's Network Service Mesh, the Intel device plugins, or, if it's Multus, any of those pieces you're going to add, and splitting that part out. Some of the hardware provisioning has to do with network provisioning; at Packet we're able to provision the actual VLANs and other pieces. So we now have those as independent parts: the hardware provisioning piece is separated, and we've been focused on breaking down the platform provisioning.

We have some more examples and use cases that have gone into place, including the example that we went over at KubeCon. We had some examples for using the Intel device plugin, both at the platform level and in some examples there. There's now a Multus SR-IOV example use case in the testbed, and we're continuing to build from that with some of the other use cases. We had one for benchmarking that we've ported over to the new structure, using more in-band items, meaning more Kubernetes-specific pieces rather than a lot of the Ansible code. As we do those, we're able to use the new structure as components or examples that can be used in more complex use cases.

On the roadmap from here are more of the smaller pieces that we can build. We would also like to have DANM added; a lot of spec-ing out was done before KubeCon, so that's on the agenda to get in place soon. We're also looking at maybe adding a broadband service orchestration use case, based on examples and information we gathered from folks over the last couple of months. We need a few more details on that, and there's potentially a network function available for it that we had talked about at ONS. And I think one of the bigger ones would be for Mobile World Congress: we're looking at putting together a 5G use case that also uses Network Service Mesh, and making that available, so we're talking with various folks to put that together. I think that covers the highlights, and hopefully it addresses some of the questions I've seen on the TUG Slack regarding NUMA zone awareness, topology, and other items. We are planning on doing these things. In fact, I think one of them would be the topology manager from Intel; we're going to be looking at adding those in as components that could be used in more use cases. Michael Pedersen, I think you're on the call. Do you have anything to add or call out specifically?

No, I think you actually covered it quite well. As you mentioned, we have some ongoing discussion on the TUG channel as well. I think we're getting to a place now where we have a good foundation for the testbed, and that allows us to scale it out with more complex use cases and functionality. And I think the topology manager and the NUMA awareness, the node feature discovery and the CPU manager, DANM, and whatever plugins and tools we can find could make life easier when it comes to setting up these CNFs and environments.
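[Editor's note: to make the CPU manager and NUMA discussion above concrete, here is a minimal sketch, not from the call, of the kind of pod spec these features act on; the image name is hypothetical. With the kubelet's static CPU manager policy enabled, a pod in the Guaranteed QoS class with integer CPU requests like this gets exclusive cores, and the topology manager can additionally align those cores and any devices to the same NUMA node.]

    # Hypothetical CNF pod; Guaranteed QoS (requests == limits, whole CPUs)
    # makes it eligible for exclusive cores under the static CPU manager policy.
    apiVersion: v1
    kind: Pod
    metadata:
      name: dataplane-cnf
    spec:
      containers:
      - name: cnf
        image: registry.example.com/cnf:1.0   # hypothetical image
        resources:
          requests:
            cpu: "4"
            memory: 8Gi
          limits:
            cpu: "4"
            memory: 8Gi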
It's definitely something that we'll have to look into and add, and if anyone has any ideas or any tools that they think could be a good fit for the testbed, please let us know. One of the items discussed recently on the TUG channel was SmartNICs and FPGAs, and it's definitely something that we're open to exploring. I think some of the questions were on how you would implement these and still do it in a cloud native way, and that's something that definitely needs to be explored. Some of the items would be which SmartNICs we could look at, and ideally which are going to be the most easily available for us to get. So if anyone has any feedback on that and would like to contribute.

I have a caution on this too, Taylor. We should pick a use case before we just randomly do SmartNICs. It's something that I've learned through trial by fire, but just doing SmartNICs for the sake of SmartNICs doesn't really show you a whole lot that typical SR-IOV wouldn't. So we should come up with some type of unique use case, like TLS offload, or, I don't know, putting your virtual switch on the NIC and doing the encap and decap there. We should actually pick something that we want to solve and then see how SmartNICs perform compared to doing it directly in the CNF with SR-IOV or something. Because I hear a lot of random stuff about SmartNICs from both vendors and inside my own company, and there's usually very little data or comparison to give you a strong justification for adding that use-case specificity into your architecture. So it would be cool to actually figure out what we want to test and then compare it, apples to apples, directly to straight SR-IOV or straight VPP or something like that.

That sounds great. I know that you posted about the TLS offload; if someone has a specific use case they'd like to see, we'd start there. Going along with that, Jeffrey, would probably be a spec, and on the CNF Testbed we have a project board that's specifically for this sort of thing. So if folks want to work on a use case that shows off where SmartNICs are going to be valuable, then we could start there and come up with something, and if it looks good we can start looking at implementation.

Yeah, I'll take over again. If I could start with the operators on the call, particularly Jeffrey. And everybody, if you could please add yourself to the meeting notes. I'd love to hear your current thinking about Heat and TOSCA and the CNFs that I guess you have in your proof of concept today. I see we also have Rob Fisher on the call from Verizon, a couple of folks from Sprint, and Herbert from Deutsche Telekom. I'd love to get any feedback from you on your thinking on that front.

You mean, like, using Heat stacks or using TOSCA to define what I want to deploy in Kubernetes?

Yes, exactly. The context for the question is that LF Networking has an OVP certification program that they were using for VNFs, and they're beginning to evaluate how to create a similar program for CNFs. As of today, as I understand that program (and there might be somebody on the call, such as Phil Robb, who I'm happy to see back in the LF world with a different hat on, who might be able to give a little more context), my impression is that the main aspect of the certification is that they're running TOSCA and Heat SDKs to ensure that people's templates are compliant, which makes perfect sense to me for VNFs, but I don't actually understand how it works for CNFs.
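[Editor's note: as an illustration of the level-of-abstraction point Dan raises above, here is a minimal sketch, with hypothetical names, of how a CNF is typically described natively in Kubernetes. Unlike a typical Heat template, nothing here pins IP addresses or compute placement; the platform assigns pod and service IPs and schedules the replicas.]

    # Hypothetical CNF described as a Deployment plus a Service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-cnf
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-cnf
      template:
        metadata:
          labels:
            app: example-cnf
        spec:
          containers:
          - name: cnf
            image: registry.example.com/cnf:1.0   # hypothetical image
            ports:
            - containerPort: 5060
    ---
    # The Service gets a cluster IP from the platform; no addresses are hard-coded.
    apiVersion: v1
    kind: Service
    metadata:
      name: example-cnf
    spec:
      selector:
        app: example-cnf
      ports:
      - port: 5060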
Yeah, so my opinion is, if you just want to use TOSCA as a generic modeling language, I don't really care, because there are a million different ways to convert TOSCA to YAML and vice versa, and to YANG and everything else. If you're talking about implementing a MANO stack above Kubernetes, then I would say I'm pretty against that. One of my main motivations for looking into the cloud native approach, and Kubernetes specifically, is to break myself free of NFVOs and VNFMs where I can. I don't necessarily have anything against them in the VNF space; they're what all the vendors support and kind of where things are at. But the whole concept of the way that configuration is passed down through the different SOL specs, SOL001 through SOL005 and things like that, brings lots of weird dependencies and ill-defined APIs. They are getting better, but they're not there all the way yet. So if I'm going to take the time to move into the Kubernetes space, do all of this in containers, and deal with all the data plane challenges, I'd rather leave a lot of the orchestration baggage back in the VM space if possible.

So, Jeffrey, that's super helpful, and I will remind you that I'm still very hopeful you can get some of your internal documents cleared, the charter-specific stuff removed, and publish those with the workgroup. But could you say a little more: if you do leave MANO behind, how are you envisioning that the cloud native aspects, the CNF architecture parts of your infrastructure, are going to interoperate with the existing parts of it? In terms of provisioning, in terms of managing; can you talk through which pieces of MANO you need to reinvent or recreate?

Sure. This is a very deep and complex thing that I have some emotional biases about, certainly.

Yeah, and if you could resolve it all in the next three minutes, we'd appreciate it.

Yeah, so I would just say the big thing is the concept of how things are abstracted between individual VIMs versus individual VNFMs versus individual NFVOs, how sometimes they're packaged, do I use Element Managers or don't I? The packet core wants something completely different than what a virtual firewall wants. As I start looking at virtualized CMTS, how do I run that in the same VIM as I do my RAN, things like that. The MANO thing is very pipelined, and I think a lot of assumptions were made when those block diagrams were originally drawn about how configuration would be consumed, but there are just all these weird restrictions. Like, what's defined in SOL003 versus what's defined in SOL005, or how do I provision a network in OpenStack or VMware? If I talk to Nokia, Cisco, and Juniper, they're all going to have different opinions on that. And whether an individual tenant network goes with the VIM and is part of its life cycle so the VNFM manages it, or is part of the network orchestration element and goes in the NFVO. There are all these assumptions that nobody keeps the same, that are present in the MANO space, that Kubernetes just kind of says: I'm not going to put up with that. You choose your CNI; that's how I'm going to do networking. If you want to do some networking outside of that with NSM or something like that, you bring your CRDs and this and that and you do your thing.
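[Editor's note: a minimal sketch of the "bring your CRDs" pattern Jeffrey describes, using Multus (which the testbed discussion earlier also mentions) as the example. The network name, interface, and subnet are hypothetical. A NetworkAttachmentDefinition custom resource defines a secondary network, and a pod opts into it with an annotation, without any MANO-style network orchestration element in the loop.]

    # Hypothetical secondary network defined via the Multus CRD.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: dataplane-net
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",
        "master": "eth1",
        "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
      }'
    ---
    # A pod requests the extra interface with an annotation.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cnf-with-dataplane
      annotations:
        k8s.v1.cni.cncf.io/networks: dataplane-net
    spec:
      containers:
      - name: cnf
        image: registry.example.com/cnf:1.0   # hypothetical image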
I just think the full life cycle of a VM in the MANO stack is very cumbersome. It doesn't leave a lot of wiggle room, and it really forces a monolithic approach to how you deliver these services. I don't think it fits well with trying to decompose some of these more complicated network stacks into multiple microservices. So that's my Reader's Digest version.

Yeah, that's great. I'm definitely looking for more feedback on this, and I need to figure out a more concise way of talking about what some of these things are leaving behind. Would any of the other operators on the call care to voice an opinion? And I will be clear here: when you share an opinion, it's not representing your organization. It's not the official Sprint or Verizon or whatever opinion. But if you could share some of your thinking, then I'd love to move on to the vendors as well.

Yeah, I can give my take here. So obviously Heat will not be a tool that you can use in the cloud native area, because it's an OpenStack service. The equivalent for Heat, maybe, is to say: okay, take the one which is the most natural one in cloud native, Helm charts or something like this. Regarding TOSCA and also the MANO discussion, I'm not the expert, but my colleagues who work on ONAP are also not in favor of all this ETSI-defined stuff; they see more that they need end-to-end orchestration and decent APIs for managing the domain of the network functions. So the target picture will look different. It will not be a MANO stack, and definitely not Heat. For the fraction of the people who prefer Heat, I would guess that they will go with whichever tool is then selected as the best practice in the Kubernetes world.

Yeah, I agree that Heat is very tied to OpenStack and unlikely to be broken free. I know that there have been efforts to look at what a TOSCA for Kubernetes would look like; in particular, Tal Liron from Red Hat has made a couple of presentations at previous ONSes about the idea. And I think we could all agree that it's feasible: for some amount of work, you could implement it. But the key thing that I'm trying to get at is, is there demand for it? Do folks feel that that level of modeling is essential? If we imagine a CNF such as the vBNG, the virtual broadband network gateway that's part of the virtual customer premises equipment, and you can package those different CNFs up into Helm charts, is there demand for an additional level of modeling, or component management and tracking elements, et cetera, beyond what you're getting from the Kubernetes-native aspects of the infrastructure? Can I call on, Herbert, feel free to say more, or can I call on any of the other operators on the call, maybe Todd?

I will take the question into one of the next conversations I have internally.

Great, yeah, I'd love feedback. I hope it was clear I was joking about the answer-in-three-minutes before. It's definitely an ongoing topic, but it is particularly relevant as we begin to look with our partners at LF Networking at what CNF certification might look like. What are the things we would even want to test for in order to demonstrate some level of conformance and interoperability?
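[Editor's note: a sketch of what "packaging those CNFs up into Helm charts" might look like for the vBNG example above; the chart name, registry, and values are hypothetical. The values file is where per-operator settings go, which is the Kubernetes-native answer to much of what descriptor templates have traditionally captured.]

    # Chart.yaml -- hypothetical chart metadata for a vBNG CNF.
    apiVersion: v1
    name: vbng
    version: 0.1.0
    description: Illustrative virtual broadband network gateway CNF

    # values.yaml -- operator-tunable parameters, overridable per site.
    image:
      repository: registry.example.com/vbng   # hypothetical registry
      tag: "1.0"
    replicaCount: 2
    dataplane:
      network: dataplane-net                  # secondary network to attach

[An operator could then install it with, for example, helm install vbng ./vbng -f site-overrides.yaml in Helm 3 syntax.]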
Yeah, what we definitely need is the definition of how the platform (not only Kubernetes, but maybe also the network backends and some of the other tooling around it that will be used for CNFs) looks, and we need to get to agreement on that across operators. Otherwise, every operator will have a different platform again, and it's integration effort for the CNF vendors.

Oh, I totally agree that that part is essential. If it's just saying, oh, my CNF is completely conformant, but it only runs on this specific hardware and this specific version of Kubernetes, also from my company, then we've taken a huge step backwards, even from the VNF world. And this is something we've heard very clearly from other operators as well: they are not interested in taking that step backwards. So part of this is trying to define, using things like the device plugin API, that if a CNF requires a certain kind of SmartNIC or a CNI plugin with a certain level of functionality, how can those kinds of things be specified in a generic way, so that any CNF platform with sufficient capabilities will be able to signal that and meet those needs?

The direction is simple: a CNF should not require specific hardware, because it cannot expect that hardware to be present everywhere it's rolled out. If there are comparable different vendors for the same hardware type, there should be some kind of abstraction, so that the CNF vendor does not have to bring a specific driver for that hardware.

Yeah. You know, Herbert, this might be a moment to take a quick detour for a second. I was hoping somebody on the call could fill me in on the conversation that's happening in CNTT about SR-IOV, because my understanding of the natural way within a CNF architecture would be... and Taylor, feel free to speak up here if you're up to speed on it. Can you remind me how we're doing SR-IOV on the CNF Testbed today?

On the CNF Testbed, we're trying to keep things open. One of the examples we're showing right now uses the Intel device plugin, the Kubernetes add-on for accessing it, and then there's some other tooling and packaging for using SR-IOV. We're trying to keep that as much as possible Kubernetes in-band. There are some pieces on Packet where we have to make sure we have access at the host level.

And are we using the device plugin to have the server advertise that it has that resource, and then to have a pod claim the resource?

Yeah, that was recently added. We have the device plugin, and then, as far as advertising, there are different levels of that which the tooling and pieces provide. We have one example that's very minimal. What we're planning to do is extend that and show the way that, say, Network Service Mesh can take the information from the device plugin to other parts and then make it available as a service. And there are other projects which would do something similar.

Great, so it would definitely be nice to try to write up the plans on that. Is there anyone here who's active in the CNTT group, who followed that issue and can speak to it?

So we have a lot of discussions there, and the core group of operators is quite aligned that we do not want to prohibit SR-IOV as a mechanism, because you need it, even with SmartNICs and all the stuff, but pass it through to the VNFs.
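[Editor's note: a minimal sketch of the device plugin flow Dan and Taylor describe above. The image is hypothetical, and the extended resource name depends on how the SR-IOV device plugin is configured on the node. The plugin advertises a pool of virtual functions as a node resource; the scheduler only places the pod on a node with a free one, and the kubelet attaches it, so the CNF never names specific hardware.]

    # Hypothetical pod claiming one SR-IOV virtual function via a device plugin.
    apiVersion: v1
    kind: Pod
    metadata:
      name: sriov-cnf
    spec:
      containers:
      - name: cnf
        image: registry.example.com/cnf:1.0    # hypothetical image
        resources:
          requests:
            intel.com/sriov_netdevice: "1"     # resource name set by plugin config
          limits:
            intel.com/sriov_netdevice: "1"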
PCI passthrough, though, we want to prohibit, because then it's no longer a cloud: you are really binding the VNFs to the machine, they need specific drivers for that hardware, and if we introduce new hardware, they have to test with new drivers and all this stuff. That is not what we want. SR-IOV underneath some kind of abstraction, with virtio or whatever mechanisms we can use here in Kubernetes (I don't know exactly which), assigned in the infrastructure, is okay. That is not the problem, but it should not make the CNF hardware dependent. That is our target there. We're trying to bring that into the documents, but there are also some parties pushing back and saying there will be no VNFs or CNFs which are compliant at all, because we need SR-IOV. The discussion is ongoing, but we want to move it in the direction that we have a cloud, and not just virtualization.

Okay. Could I ask some of the other operators on the call, maybe Todd from Comcast? And by the way, these discussions are all open on GitHub and all visible; if you look for the right issues, you'll find the discussions.

Great. Hey Dan, I pasted the link to the discussions into the Zoom chat.

And I do want to compliment CNTT: it's fantastic to see that open conversation on GitHub.

Hey Dan, can I ask a quick question too, before the other operators?

Please.

Are there any specific technical deficiencies within Helm charts and config maps which are pushing for TOSCA? And are there any technical merits for why TOSCA was chosen, as opposed to, like, YANG or something? Are we only choosing TOSCA because that's what was in the MANO stack and that's what people know, or is there some limitation in service chaining within the CNF space or something? I'm just kind of curious.

I think it's exactly the right question. And I would emphasize that in no way have we chosen TOSCA. I do agree with Herbert's point about why Heat is unlikely to be the choice. But no, I was sharing my understanding of where OVP is going, which is that they, for understandable reasons, really focused on VNFs to start. They're now gearing up to look at CNFs. That work has not really progressed at all, but their natural approach is going to be to use TOSCA unless they're encouraged to go a different direction. As for limitations in things like Helm for service chaining, and things like Prometheus for monitoring and such, I think the biggest one by far is just the operators and legacy systems. Obviously nobody is going to switch overnight from a VNF architecture to a CNF one. These two platforms are going to need to coexist for years and years to come. So the question is how you can manage the CNF platform, keep on top of what you have deployed there and its progress, and have that interoperate with some of your current systems. But I would emphasize that even though that's a real question, it's far from clear to me that requiring the use of TOSCA on Kubernetes is actually going to improve things.

I completely agree with the statement you made. Exactly that is the challenge all the operators will have for the coming years: there will be a heterogeneous workload in their environment.
And that's the reason why what has been working for them, like TOSCA, is something they have adopted along the way as they were building their NFV tooling, so they want to continue with it. But then there is another thing: the capabilities of Helm, or the other constructs which are already there in Kubernetes, they may not be fully aware of. So there is this gap too, which may lead them to just fall back to what they know.

Great. So Todd said that his audio isn't working, so we're going to pass him. I'll go ahead and open up the question to everyone else. I'd particularly love to hear from the vendors on the call. So Ericsson, Cisco, Juniper: you have obviously sold existing management systems that are working with OpenStack VNF platforms today. As your customers start to use CNF architectures in parallel next to them, what kind of management links are you going to need, or, put differently, are you going to offer or sell, in order to allow operators to have a view of what's going on in their network?

All right. This is Tomas from Ericsson; I'll start off, then. First of all, I think this question goes deeper than just what the management system is or what management systems we would like to sell, because fundamentally, as vendors, I would like to sell a management system that the operators would want to buy. I think this goes deeper in terms of what way of working the operator is willing to accept with cloud native. Is it okay to deliver individual services via a CI/CD pipeline that starts with my development organization and ends with the customer? That's one extreme, and that model would probably not benefit from TOSCA. It most certainly wouldn't benefit from Heat or any other abstract descriptor. That way of working would probably benefit from the most cloud native pieces of technology, like using the Kubernetes APIs and maybe having Helm somewhere. And I think that's where many operators are in the CTO office. But that might not be where many operators are when it comes to reality and operations and ways of working and the operational model and the whole thing around that.

So I think the reason this question comes up again and again is that the operators have invested in something and they would like to see a return on that investment. If that something requires or uses TOSCA today, then it's quite likely that we would have to support that with CNFs. And, you know, I think we discussed Heat; I don't think Heat is a very good choice either way. But when it comes to the artifacts to be delivered, I don't think they can be viewed as independent from the way of working and the operations model that the operators would like to have with cloud native. And I know that, from my perspective, if I would go to every customer like, hey, here's my product, and by the way, you have to change your complete operating model in order to be able to use it, I might not be the chosen one for that use case. So, long story short, I think we need both models, and then the challenge is to figure out the most cloud native way of delivering, the most cloud native artifacts that we can transfer to an operator environment, and then the legacy way that is friendly to whatever the operators have already invested in.

Okay, great. Can we get some other viewpoints, please?
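[Editor's note: a sketch of the "CI/CD pipeline from the development organization to the customer" delivery model Tomas describes, as a minimal GitLab CI configuration; the chart path, environment file, and stage names are all hypothetical. The point is that the deliverable is a versioned Helm chart flowing through a pipeline, rather than a descriptor handed to an orchestrator.]

    # .gitlab-ci.yml -- illustrative only.
    stages:
      - lint
      - deploy

    lint-chart:
      stage: lint
      script:
        - helm lint charts/my-cnf        # validate the chart before shipping

    deploy-staging:
      stage: deploy
      script:
        - helm upgrade --install my-cnf charts/my-cnf -f env/staging.yaml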
Actually, we have Bill Mulligan from Loodse. I'm curious if you're seeing CNF deployments at all among your customers.

I think our customers are just starting to experiment with CNFs. The way that we see it going in the future is kind of aligned with the way you see it, Dan, where Kubernetes is the underlying orchestrator and orchestrates both VMs and containers, basically providing one platform to manage legacy VNFs and CNFs together.

Sorry, and how do you manage the legacy VNFs?

So we're investing in KubeVirt, running VMs in a pod so that you can do that. The way we're looking at it right now is two different approaches. One is running VMs orchestrated by Kubernetes through KubeVirt as independent VNFs. The other is basically using Kubernetes and KubeVirt to set up a bunch of VMs and then creating another Kubernetes cluster out of those. So, two different ways of doing it.

Great. Okay. Yeah, and I think you've probably seen my diagram. I can't paste a picture into Zoom, but I'll paste into the TUG Slack channel the one about evolving from VNFs to CNFs. I remain a fan of KubeVirt as a potentially important transitionary technology. And I think we should be clear that transitions are likely to be around for a decade or so; this is not an overnight kind of thing, given the sprawling nature of many telco operators. Could we have... go ahead.

Yeah, sorry. And I think the operators that we're working with right now are actually aligned with us, and with what you're seeing, on using VMs to manage these legacy things. It's going to take a long time to get off them, but they see the same vision that you do of running VMs on Kubernetes through KubeVirt.

Could I get some other viewpoints, please?

Yeah, I'd be interested... this is Dan from VMware.

Hi, go ahead.

Oh, yeah. Hi. So this is actually a very interesting question, trying to apply TOSCA for everything. We've been looking at this carefully in ONAP especially, and also internally in VMware, for almost the last couple of years. The learning is that applying TOSCA for everything is not the right approach. TOSCA is probably rightfully the right modeling language for network services, where you want to be declarative, for example in expressing SLOs such as latency. But it's not right to just carry that forward to expressing resource orchestration, which is essentially Kubernetes. I think that's sort of what I'm hearing from several people. And another key point I want to stress is capabilities. For example, when you're doing the 5G transition, the service-based architecture is leveraging HTTP/2, and we want to make sure that we can leverage capabilities like service mesh, especially for the control plane network functions, 5G UDM as an example. Now, if you're trying to model all of this through TOSCA, I don't know where we're going with it; basically, it's going to take a while and a while to get all these things done. We're probably actually taking a step backward, is my feeling.

Hey, Dan. I can give you an example. The legacy conversation is one that, you know, I beat the vendors up on a lot, right?
Because I have all this VNF infrastructure out there. I think where things get lost is that you don't have to model everything in TOSCA. I would prefer that the SOL interfaces be a little more flexible, so I wouldn't have to write custom southbound plug-ins for my VNFM. But I don't understand this concept of having to do everything in TOSCA, versus having a small field in your MANO architecture such that, if part of a service chain has to do something in containers via Kubernetes, it makes an API call to Kubernetes, while all of that standalone configuration, the charts, everything, still maintains that infrastructure, and the VNFM services Kubernetes the same way that it would service anything else. So we have a couple of CNFs, not data plane intensive, but deployed in production for control plane type stuff, that actually work directly with things in VMs. And this is how we've approached it: in our upper layer models that provision all of our virtual infrastructure, we also write in some hooks that then make requests to Kubernetes and, you know, put a route where you need a route on an interface, and your service chain is there, and your containers and your VMs talk to each other. So I don't know why we would need to granularly define every single aspect in a single TOSCA model, or a suite of them.

Yeah, I appreciate that thought. Can you talk about what on the Kubernetes side is receiving the API call?

Yeah, so I mean, I have to talk in generic terms here.

Let me just echo that someone earlier mentioned CI/CD, which I really do think is a key concept: if you're not constantly redeploying, and in particular able to constantly redeploy your entire architecture, meaning both the Kubernetes platform itself and all of the CNFs running on top of it, then you really don't have, I would say, a cloud native architecture. And if the TOSCA definitions are locking you into a brittle enough infrastructure that they're preventing that, then you have a huge mismatch right there.

Right. I would say the CI/CD aspect is probably the main motivation for going through all the pain of shifting and lifting, which is, to Tomas's point, about changing. I work in the office of the CTO, and relationships with ops can sometimes be contentious when I try to turn everything upside down on them. The whole concept of the cloud native thing, and what I don't have in the MANO space, is the ability for ops to give me direct feedback and do pull requests against my internal repos. And YAML is super easy for them. So if a manifest needs to change, if they need an updated version, our ability to communicate and share resources is substantially easier in this model, versus, you know, we have only a few really sharp guys on the ops side who can really dive into TOSCA. As far as your earlier question, what we're using Kubernetes for is mostly a lot of the standard stuff: API abstraction for a lot of our end services, taking advantage of the different types of services. We do a lot with external load balancers and the Ingress resource type, to basically provide a little extra oomph, because the scale we have hitting these API ingests is astronomical.
And then we have some basic routing functionality, from a control plane aspect, sitting in containers behind these API gateways, and some stuff around IP mapping, things like that. I can't go into really deep detail on this because it's all proprietary stuff. But the long and short is that there are certain things that just don't work well in a container right now, even with a lot of the cool software-defined storage options out there: certain databases are super finicky, anything that wants state. And so we have a MANO stack, and we'll have a MANO stack, like you said, for at least another... I mean, we're cable, so we'll have it for at least another 30 years if we're being honest.

Some of the vendors on the call will be very happy to provide you support for that.

And we rely on them, right? But I just don't like this idea of mutual exclusivity. Even in my brownfield environments, I am deploying things in containers and finding ways to weave them in. I let Kubernetes do what Kubernetes is good at, and I let both VMware and OpenStack do what they're good at, and that's how I rock and roll.

Great. Anyone else like to chime in? I'll just mention, Ramki, if you could please add your name and an email to the minutes, we'd appreciate it.

Yeah, absolutely. Done. I was driving; I just got back.

Oh, I understand. And El Dico, I reached out to you on the TUG white paper; I'd love to engage with you a little more on that diagram.

I'll take a look. Thanks.

Sure. And moving it to the CNCF Slack is probably going to make for a better conversation, because we can do threading more easily there than on the Google Doc. Would anyone else like to dive in on this conversation? Because I think a good topic for next time would be: if the answer is not going to be Heat and TOSCA, what would CNCF cert, sorry, CNF certification look like? I think folks are familiar with the CNCF Certified Kubernetes program that I manage, and we've been pretty thrilled with how that's come together; it's actually exceeded all of our aspirations for it. We just announced at KubeCon last week that we now have over a hundred certified Kubernetes implementations. I just pasted in a link that provides some useful context. But interestingly, on what we'll call the enterprise or cloud side, we've never done the other side of that certification. If you think of us like Android, we're certifying that the phones are compliant; we've never certified that the individual apps are compliant.

I will point out that we do have a set of tools. In particular, some of you may have heard of APISnoop, which is being developed very actively right now, specifically to look at the conformance tests and to validate which of the Kubernetes APIs are being addressed or touched by which conformance tests. So this is under very active development, and there's a lot of work going on there. But one of the things that we could use APISnoop for is to evaluate Helm charts. If you imagine a CNF, or a group of CNFs, that you package up in a Helm chart, you could install and run those, and you could look at every API call being made.
And then you could validate that all of those API calls are, say, in the stable or beta APIs, and what version of Kubernetes you're depending on, and not an alpha API or any kind of private call. That would be a way of showing that your CNF conforms to a specific version of the Kubernetes API. So I wanted to bring this up, and I'm going to be writing it up much more, but these are very early days. I'm not convinced yet that there is demand for implementing it, and it would particularly need to be demand from telco operators and from the vendors, nor that it is hitting the right level of functionality in terms of defining what conformance would look like.

Dan, I think there might be a misunderstanding here regarding the OVP program and where this TOSCA and Heat is coming from. That was a question to operators: which of these has to be supported from an ONAP point of view, whether ONAP has to work with Heat or can work with TOSCA. And the operators said both have to be supported. If we look at CNFs, or at what we have toward infrastructure, that is not the discussion there.

I agree that there might be a mismatch here, but I have spoken to Arpit and to Heather Kirksey, who run the OVP program, and when I asked them, okay, how is OVP going to work for CNFs, their answer was: we haven't really figured it out yet, but it's likely to be similar to the VNF program.

Yeah, but they are two different certifications. There's a certification against the NFVI, or CNFI or however you call it. And there's the certification that says the VNF is ONAP compliant, and for that, ONAP has to support both.

Yeah, I mean, the NFVI side of it...

Yeah, then it's maybe not the question at all. If we now add Helm charts, then it's a challenge for ONAP, because ONAP then has to additionally support Helm charts.

Yes, which I think is something to look at going forward. But I think we should stop there. I would love to have you engage on the TUG Slack channel if you want to provide some additional context or links; I'm just beginning to come up to speed on this certification question, and that was really the purpose of the call today, to talk about some of these things at a high level.

Perfect. Okay, well, let's stop on time. Thanks, everybody, for the call today, and I will see you on the Slack channel and in a month. Bye now.

Thank you very much. Bye-bye. Thank you. Thank you all. Bye.