Hi folks, we'll begin in just a moment. All right, we're going to go ahead and get started. Welcome to today's LF Networking webinar. Our topic for today is NFV Deployments and the Path Ahead: An Operator's Perspective. Before I introduce our panelists, just a couple of housekeeping items. All of our attendees will be muted during the presentation; however, if you'd like to ask a question or have a comment for the group, there is a chat and Q&A function toward the bottom right of your screen, so feel free to type in any questions or comments throughout the event. We will have some time toward the end for live Q&A, and I will be pulling questions from the Q&A window at that time. A recording of today's session will be available beginning tomorrow; anyone who registered for the event will receive the link via email, and we will also share it on social media and the LF Networking website. All right, I'm going to go ahead and introduce our panelists now, if we could kick it off to our first slide. Yes, thank you. Today we have joining us Lee Wang from China Mobile, Beth Cohen with Verizon, and Randy Levensalor with CableLabs. So without further ado, I'm going to kick things off to Beth to get us started today.

Thank you, Jill. I'm very excited to be joining you along with Randy and Lee, and we're going to be talking about a white paper, or actually, it turns out, two white papers, that we wrote specifically around the topic of testing and automation in support of virtual network functions, or VNFs. With that, I'd like to start with a little bit of an introduction. I'm Beth Cohen; I work for Verizon as a product manager, heavily involved in deploying software-defined networking solutions for customers. With that, Randy, a quick introduction.

I'm Randy Levensalor, a principal architect at CableLabs. We're working on creating standards and specifications for deploying network virtualization in the cable space. And Lee?

I'm Lee Wang from China Mobile. I'm a researcher at the China Mobile Research Institute, focusing on NFV testing and some other fields, including industry work, and I'm currently the chair of the EUAG. Thank you.

That actually gives us an excellent segue: you mentioned the EUAG. I'd like to have you talk a little bit about the role of the EUAG within the LF Networking, Linux Foundation Networking, portfolio.

Yeah, it's a very good question. As you know, the EUAG is the End User Advisory Group in LFN, and it is composed of various end users, many of them operators. We analyze and discuss the latest industry trends, collect operator requirements, and put forward operator and industry recommendations, so as to promote the development of the various open source communities and standards organizations in the industry. Through the EUAG, the requirements of operators can help promote the development of the industry, and the technology of the industry can in turn be discussed and used by operators. Thank you.

So, Randy, you're not an operator, or rather your company is not an operator; what do you see as the benefit of the EUAG?

Yeah, so we really work on behalf of the cable operators, since we're a nonprofit owned by them. For us this is really a way we can get together with people in other related industries.
We can represent the views and collect feedback for all of our members, so we find that the End User Advisory Group is a great place to get together. There are a lot of great readouts, and with papers like this we can really help identify some of the opportunities and also dig into some of the gaps where we think we can improve what LFN is doing. So it's really a great place for a free exchange of ideas; there's a lot of great stuff going on.

And for me, speaking for Verizon: we participate in the development of ONAP and Anuket and some of the other projects, and from my perspective the EUAG is sort of pulling together all those requirements and feeding them back out to the projects, so that's the feedback loop that's so important. I know the EUAG started as the advisory group to ONAP, but of course it has expanded its role, which segues into the next topic: how do you see the VNF testing that we discussed in the white paper fitting in with that role?

Yeah, I think this is really one of those ways we can help provide feedback to multiple projects. We've had projects over the years, even pre-LFN, through ONAP and its end user advisory group, about the importance of testing, what certifications we need, and how we can really ease this adoption, because that's one of the big challenges with open source and these newer technologies. We need a reputable, confident way to take products from a very wide variety of vendors, quickly bring them into an operator network, and run them at scale. To do that we really need a lot of automated testing and checks to make sure that everything works. And one of the big benefits of virtualization and cloud native is an agile process with lots of releases: you're no longer going to qualify a piece of hardware for six months, drop it in the field, and run it with just a few minor updates for five or ten years. You're going to be rolling out new updates multiple times a year, maybe even multiple times a month, and there's no way to do that without automation around the testing cycle. And the more open source engagement there is, the better the foundation everyone can work from.

And Lee, your thoughts?

Yeah, I totally agree with this opinion. From my perspective, I would say that in the early period we organized a survey on automated testing, and the survey was supported by many of our EUAG CSP members. From the survey we analyzed and investigated many requirements. In fact, we have tried to put forward some recommendations in terms of NFV automated testing, and to summarize and collect most of the recommendations from our CSP members. From the survey paper, you can read and find these requirements and recommendations for reference.
So that actually brings up a good point, which is that things are changing very rapidly. I know we talk about that in the paper, and I think we'll talk about it more in a minute. Traditionally, as Randy said, operators put hardware out in the field and expected it to perform untouched for 10 or 15 years. That's just not reality anymore. We know that the gaming operators literally do code updates hourly, or within minutes. Yes, it's only three lines of code that they change, but still, that's a whole different mindset. So with that, Lee, could you talk a little bit about your findings on the current state of testing among operators and in open source? Here's a diagram of the testing framework; could you talk a little bit about that?

Indeed, this framework is one of the parts of our white paper. We tried to analyze and investigate the popular testing frameworks in the different SDOs, and the ETSI testing framework is one of them. As you can see from this graph, it puts forward a DevOps process: there is a VNF provider and a package repository, and a DevOps server collects the testing requirements from the provider; after that, it handles the data and passes the test jobs on to the test framework. This is the standard framework, and based on it we have tried to do some realization and implementation in the different industry open source communities. As you know, we have tools such as VTP, and most of them are based on this framework.

So that's interesting. You found that the ETSI framework was probably the common denominator. What other organizations were involved? I'll throw this out to Randy to talk about in terms of testing frameworks. I know that Anuket has Functest, for example, which is focused on the infrastructure, but I know there are others as well.

Yeah, thank you. ONAP has some testing within it as well, for looking at some of the network services and how they're composed. And edge is another area, because we find that edge and NFV are very closely related, so a lot of the tools we build for one tend to apply to the other, and vice versa, since these are all things related to position on the network, where you need very consistent performance. So we're really seeing a lot of overlap between those. And even just looking at a lot of the standards for the applications we're deploying, things like 3GPP and O-RAN, we're starting to see more test tools come out in those spaces, similar to what we're doing in the cable space as well.

And, looking at a question that somebody posed: would either of you say that the current testing tool sets are not well integrated? I think that's one of the recommendations we made in the white paper.

Yeah, I think so. There are certain functions you can only test with a proprietary test tool today, due to licensing and IP arrangements.
I know a lot of work is going on in that space to allow those licenses to be modified, or to have one-off licenses to bring additional testing tools into open source. The other issue is that there tend to be tests that focus on the infrastructure, like we did with OPNFV, and tests that focus on the application, like ONAP's, which is why having Anuket bring those together to try to bridge them really shows where we're moving. But right now, most operators end up writing their own test suites, because a lot of it is very dependent on their own network, so we're really hoping to get a flexible framework for this. A lot of the piece parts are there; there are a few gaps here and there that are filled in either by someone writing an Ansible script or by going with a private vendor. These are some of the things we tried to call out in the white paper, to bring this together and make it a much more seamless process with more consistency across the different levels of the network.

So, Lee, could you comment on the key takeaways from the paper that you feel we should highlight in this talk? What did you learn as you went through the process of writing it?

Indeed, as you know, the NFV testing white paper is divided into two papers, which cover the analysis of operators' automated testing status, gaps, and technical requirements. From the operators' perspective, we put forward some recommendations for the automated testing process, frameworks, open source, and SDOs, which are very valuable for the whole industry. At the same time, we also sorted out and summarized the current research status of automated testing technology in open source and in SDOs, and we give our evaluation model for automated testing. By referring to the evaluation model and the automated testing technology, you can try to find potential automated testing methods suitable for your company. As you know, CSPs have made many attempts at automated testing. We hope that through the white paper we can express operators' requirements, put forward industry development suggestions, and share advanced technologies and ways forward for the industry, to promote the development of NFV testing in a more automated and intelligent way.

A question came in that I think gets to the heart of it: how much of the testing framework is in fact automated today? Manual testing, of course, translates into significant resource strain and slows down the development process. So what were the findings about how much testing is in fact automated today, and how much can open source really help with making that a reality? Randy?

Yeah, I found a lot of the piece parts were there, and there were some test orchestration tools. But there are definitely gaps, and I think most of the gaps are really at the application layer. We've done a decent job covering the infrastructure, though of course there's always more that can be done: how do you standardize the tests, how do you test across multiple configurations, and things like that. So we have a foundation, a starting point, but we're by no means almost done; there's still a lot of work to do in this space.
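To make the kind of homegrown test suite Randy describes concrete, here is a minimal sketch of a thin test-orchestration harness: it runs a list of test steps and emits one normalized JSON report. This is illustrative only; the step names, playbooks, and target address are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""Minimal homegrown test-harness sketch: run a list of test steps,
collect pass/fail, and emit one normalized JSON report.
All step names, playbooks, and addresses are hypothetical."""
import json
import subprocess
import time

# Hypothetical steps an operator might glue together today.
TEST_STEPS = [
    ("instantiate_vnf", ["ansible-playbook", "deploy_vnf.yml"]),
    ("smoke_ping", ["ping", "-c", "3", "192.0.2.10"]),
    ("teardown", ["ansible-playbook", "teardown_vnf.yml"]),
]

def run_suite(steps):
    results = []
    for name, cmd in steps:
        start = time.time()
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append({
            "test": name,
            "status": "PASS" if proc.returncode == 0 else "FAIL",
            "duration_s": round(time.time() - start, 2),
        })
    return results

if __name__ == "__main__":
    report = {"suite": "vnf-smoke", "results": run_suite(TEST_STEPS)}
    print(json.dumps(report, indent=2))
```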
And how do containers really come into play? Traditionally the telcos have been doing a lot of their testing on OpenStack; it's not a secret that OpenStack is pretty close to the default infrastructure for telcos. As containers come in, how are we doing, are we ready?

Containers actually provide a lot of great opportunities in this space. In the cable market we're actually seeing a lot of Kubernetes on bare metal for our virtual CMTSs, our cable modem termination systems. With Kubernetes you can deploy tests within the network very easily, you can do rolling upgrades with container orchestration, and you have a little more control over what version you're running and how you load balance and manage. So that part makes it a little easier. The part that can make it more challenging is that you have the same kernel and often the same scheduler, and we've run into situations where, if you're running something like a vRAN and a virtual CMTS on the same physical server, they both have different scheduling requirements at the kernel level. You can test and validate that, and look at the performance trade-offs there. And again, as we run more and more of these scenarios, we then develop automated tests to capture that the next time. A lot of this is going to come down to more real-world deployments. One of the other questions asked how real NFV is. There's a lot deployed today, and 5G, and for cable the future upgrades to the cable plant, are all heavily reliant on virtualization. And most of the new applications are container based.

So containers will probably be very critical going forward, a lot more so in the next two to five years than VMs. Well, I actually want to touch on that, because what I have found is that you're absolutely right, there's a mix right now. I'd say containers are further along in the 5G space, because that's greenfield. What I have found is that the brownfield areas, routing and the sort of basic networking functions, are still VM based, and in some cases still physical. I think the industry is struggling to get to containerized; that's the North Star for everybody, but we need to get there, and I'm hoping testing will help us get there faster. Lee, would you like to comment on that?

I think I don't have any more comments.

I'll just say one other thing along that continuum. As you mentioned, Beth, OpenStack has a lot of adoption in the telco space today, and we're seeing things like Kubernetes deployed on OpenStack, where OpenStack then manages that Kubernetes environment. And we're also seeing the evolution of the other side, where with technologies like KubeVirt, once you get to a tipping point, you can have Kubernetes manage bare metal infrastructure and actually have VMs that are managed through Kubernetes, which is another interesting step as we progress toward, hopefully, everything container based and cloud native. The fewer infrastructure technologies you have to bring into play, the easier it is to manage and test; there are fewer permutations.
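Picking up Randy's point about deploying tests within the network with Kubernetes, here is a minimal sketch that launches a short-lived test Job next to the workload under test, using the official `kubernetes` Python client; the namespace, image, and test script are hypothetical.

```python
"""Sketch: launch a short-lived test Job inside a Kubernetes cluster,
next to the workload under test. Uses the official `kubernetes` Python
client; namespace, image, and test script are hypothetical."""
from kubernetes import client, config

def launch_test_job(namespace="nfv-testing"):
    config.load_kube_config()  # use load_incluster_config() when run in-cluster
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="cnf-smoke-test"),
        spec=client.V1JobSpec(
            backoff_limit=0,  # fail fast: one attempt per run
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="tester",
                        image="registry.example.com/cnf-smoke:latest",  # hypothetical
                        command=["/run-smoke-tests.sh"],                # hypothetical
                    )],
                ),
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)

if __name__ == "__main__":
    launch_test_job()
```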
So, I want to talk a little bit about what the next steps are. What do we need to move the test suites forward, what is required? There's a slide here about some of the work being done around ONAP, and Randy, I think you were going to talk more about it.

This is great; again, this shows how we are progressing as an industry, especially in some of the LFN projects. One of the key things for testing is that you actually have to design your tests, plan them out, and configure your network. With ONAP's SDC you can build a TOSCA-based topology that can then be used repeatedly to run the tests. Then, using an orchestrator like ONAP, you can deploy it; you can deploy many different variations of that same container, the same CNF or VNF, and test it repeatedly. And once you have that deployed, there are the virtual test platforms, where you can run scripts against it and collect results: what is the performance, is there failover, how does it handle failover? There are quite a few test cases there, and a lot of that just comes from real-world experience: oh, we ran into this problem, let's make sure we test against it next time in our regression suite. And finally, since we're talking about iterating through a wide variety of VNFs and CNFs across a variety of infrastructure options, with multiple deployment scenarios, different connections, networking types, and overlay networks, you really need a good way to validate that. As these test suites mature, we're able to give a certification, and those certifications really show that you've been through this: not only that your application is mature enough, but that the space is mature enough that there actually is a certification. You need both prongs there. And then there's the analysis, because you're going to have volumes of data. Some of these sessions may run every five minutes: every time there's a code check-in, you're going to kick off a pipeline that runs unit tests and does some deployment tests within a staging environment. You really need analytics to help interpret the test results. There's always going to be a human factor, but more and more we want something that will say "big red X, stop," so you can find the problem as soon as it was introduced, which makes it much easier to debug if there's only one change you're looking at, as opposed to a few thousand changes.
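A rough sketch of the "big red X, stop" gate Randy describes: the pipeline runs the suite on every check-in and fails the stage at the first regression, so the culprit is the single change that triggered the run. The result records and commit id here are hypothetical.

```python
"""Sketch of a per-check-in regression gate: fail the pipeline stage at
the first failing test so the culprit is the single change that triggered
the run. The result records and commit id are hypothetical."""
import sys

def gate(results, commit):
    failures = [r for r in results if r["status"] != "PASS"]
    if failures:
        print(f"X STOP: {len(failures)} regression(s) introduced by {commit}")
        for r in failures:
            print(f"  - {r['test']}")
        sys.exit(1)  # non-zero exit turns the pipeline stage red
    print(f"All {len(results)} tests passed for {commit}")

if __name__ == "__main__":
    gate(
        [{"test": "throughput_baseline", "status": "PASS"},
         {"test": "failover_recovery", "status": "FAIL"}],
        commit="abc1234",
    )
```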
And I have found that testing in real-life environments is very important, because the behavior of a real network and the behavior of a lab network, I have found, can have almost nothing to do with each other. So I think it's really important to have a lab that reflects the real behavior on the network.

That's great, Beth, and how can we do that? Let me take your spot for a second: what do you think we could do to help the open source communities out there have a better representation of what's actually running on a service provider network?

That's a really interesting question. Of course, within our environment we tap into our real network, which is how we do it; some of our labs actually tie into the actual network, so we do the testing on the actual network. But I think that's something the open source community should explore with the providers: at least access to the internet part of our real networks, to really get a sense of how things actually perform. And of course it's very important that integration testing, end-to-end testing, and performance testing all take place. I know Anuket is working on a badging system, and a lot of the telco operators want performance testing; yet many of the vendors are reluctant to provide that. It's not that we don't do those performance tests; we have to. But we tend to do them internally and in silos. So I think that's something the open source community does need to confront.

Oh, that's a good question: apparently we have a question about Lean NFV, which, Lee or Randy, I'm throwing out to you. I can read it out. Let's see: what approach would you recommend for addressing the data plane performance limitations, especially the networking between containers?

Yeah, I'm happy to talk about that question. For addressing data plane performance limitations, this is where hardware acceleration comes into play. I don't think you'll find many data plane functions out there that don't take advantage of something like DPDK to offload some of the networking functions onto SmartNICs: offload either on a SmartNIC or through a separate FPGA. In the vRAN we're actually seeing quite a bit of FPGA utilization for the upper L1, to do things like forward error correction, fast Fourier transforms, and all those fun functions.

So hardware acceleration is needed for some of that. The other part is making sure you understand the topology of your network, both within the computer and where it sits relative to other computers. Especially with Kubernetes you can do pipelining, so you can have CNFs talk to each other without going onto the network; you can pipeline multiple containers together on the same processor on a system. You also need to make sure that your processors are tied to the NIC they're using; otherwise you'll take a performance hit there. So there are quite a few things out there today to do that, and we're really looking at how we can use general-purpose hardware acceleration to improve a lot of these fairly common functions, rather than putting everything on the CPU.
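A minimal sketch of the NUMA-locality check Randy mentions, assuming standard Linux sysfs paths: it confirms that the CPUs a CNF is pinned to sit on the same NUMA node as the NIC it uses. The interface name and pinned CPU list are hypothetical.

```python
"""Sketch of a NUMA-locality check: verify that the CPUs a CNF is pinned
to sit on the same NUMA node as the NIC it uses. Reads standard Linux
sysfs paths; the interface name and pinned CPU list are hypothetical."""
from pathlib import Path

def nic_numa_node(iface):
    # -1 means the device reports no NUMA affinity (e.g. single-socket host)
    return int(Path(f"/sys/class/net/{iface}/device/numa_node").read_text())

def node_cpulist(node):
    return Path(f"/sys/devices/system/node/node{node}/cpulist").read_text().strip()

if __name__ == "__main__":
    iface, pinned = "eth0", "4-7"  # hypothetical CNF CPU pinning
    node = nic_numa_node(iface)
    print(f"{iface} is on NUMA node {node}; local CPUs are {node_cpulist(node)}")
    print(f"CNF is pinned to CPUs {pinned}; check these fall within that list")
```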
I'd like to get back to you, Lee; I know we wanted to talk a little bit about some of the testing tools that are available today.

As you know, the current open source communities have conducted research and exploration of automated testing tools and frameworks. In the NFV testing white paper we have listed the current advanced automated testing tools from several mainstream open source communities and standards organizations, and we have conducted technical analysis and research. You can read the white paper to learn about the automated testing technologies that operators are concerned about, which include some advanced testing tools, automated testing frameworks, interface standard requirements, and so on. As you can see on this page, we take the current testing tools of ONAP as an example and give a technical analysis, including VTP and Xtesting; you can see the descriptions of these testing tools. VTP is a common test platform for various VNF tests, which can be used in different test stages, including CI/CD, OVP certification testing, onboarding design, and active and passive testing. This platform provides execution and verification of test cases. And as you can see on this page, we have another testing tool as an example, called Xtesting. Xtesting was originally proposed by Functest to achieve smoke and integration testing, and nowadays is mostly used to build CI/CD toolchains. (A rough sketch of an Xtesting-style test case follows below.)

I also see one question from the chat window asking about OVP certification, which is relevant to the VTP testing tools; the project behind it is OVP. The question is: are there any OVP certifications for VNFs recently? Indeed, as far as I know, the OVP project is still in planning; they do VNF certification right now, and they have also considered trying to extend their scope to AI and intelligent-network certification as a next step, in OVP 3.0, as far as I know. That is my answer to this question. Thank you.
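The promised rough illustration of an Xtesting-style test case, as Lee describes it being used to build CI/CD chains. It follows the Functest/Xtesting Python framework, but the exact class and attribute names may differ across versions, and the check itself is a hypothetical placeholder.

```python
"""Rough illustration of an Xtesting-style test case, per the Functest/
Xtesting Python framework; exact class and attribute names may differ
across versions, and the check below is a hypothetical placeholder."""
import time

from xtesting.core import testcase

class VnfSmokeTest(testcase.TestCase):
    """One test case that a CI/CD chain can invoke like any other."""

    def run(self, **kwargs):
        self.start_time = time.time()
        try:
            # Placeholder check; a real case would probe the deployed VNF/CNF.
            assert 1 + 1 == 2
            self.result = 100  # percentage of checks that passed
            status = self.EX_OK
        except AssertionError:
            self.result = 0
            status = self.EX_RUN_ERROR
        self.stop_time = time.time()
        return status
```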
So let's see. As we sort of wrap it up, I know we'll be answering some questions as a follow-up, but I just want to take a moment to say: please read the white papers; there are two. One is just the recommendations and the high-level highlights; that's the one to share with your CIO. The second one really goes into the design methodologies and the requirements details, and also a survey of the test tools that are out there today, both proprietary ones and the open source ones that are available.

And one final question to the panel. Now that we've completed this testing and automation white paper, what are our next steps for taking what we learned here into our next project, which is focused on AI/ML and how that will affect telco networks? I know there's a lot of interest right now in using AI/ML, artificial intelligence and machine learning technologies, and bringing them into our networks to really allow the networks to work more efficiently; that's the promise, anyhow. Just a little teaser about where we're going with that over the next couple of months. Lee, do you want to talk a little bit about our initial direction?

Yeah, thank you, Beth, for this question. Indeed, as you know, artificial intelligence and network intelligence are the development direction of the industry, so we would like to bring up a more automated and intelligent testing process, and we would like to introduce intelligent networking and artificial intelligence certification into this landscape. Besides, in the next step we will continue to focus on operators' automated testing requirements and give our recommendations to the industry, and we will continue to use the industry's power to promote the development of automated and intelligent, agile NFV testing. In terms of AI and intelligent networks, as Beth mentioned, we plan to collect operators' network intelligence evaluation and testing requirements, and we plan to input them into the relevant open source communities as a reference as well as an industry driving force. As you know, OVP 3.0 has planned AI certification in their project, so we will try to collect the requirements and work with them to bring these AI testing and certification requirements into realization. Thank you.

And just from a cursory review of the survey so far, I think the sense in the community is that we've really just started down this journey and we have a lot more to go. So I'm pretty excited about that, and the operators are obviously going to play a major role in this; we're the ones with the data lakes and the data. And of course the vendors in the ecosystem also play a role, so I think it will be very interesting work over the next few months.

With that, we are going to take questions; I know we have a couple already in the queue. One of the questions is: has NFV really taken off since the first ETSI white paper that was done six years back? And what do you think about Lean NFV?

I can answer. I think Lean NFV is still an interesting concept. I'm not sure that it's 100% there yet; I think it needs to be a collaboration between the operators and the vendors to get there. I know we talked about 5G as being containerized, and again, that was a greenfield situation, but I see that most of the other basic network functions have not been cloud-nativeized, if you will. So, I'll open it to others to comment.

Yeah, I think it's an interesting concept; I haven't seen a lot of it in real-world deployments yet for Lean NFV.
But NFV itself, I think, is becoming the future and the de facto standard. We're not necessarily going out and ripping out legacy infrastructure that's still meeting the need, and a lot of that held up through COVID. But as we deploy new services and new functions and add capacity, NFV and CNFs are becoming more and more the default answer.

Yeah. Thank you. So Lee, I would like you to answer this one: can we have a framework for NFV plus VNF plus CNF testing? Is that a worthy goal?

Yes, it is a potential goal. We have made many attempts at different testing frameworks, and we wish to have this kind of framework for automated testing. So it's a goal; let's put it that way.

And I think we already answered the question about how much of the testing framework is automated. So, Randy, maybe you can talk about this one: there are diverse data models, and they vary quite a bit. How much does that break the concept of automated, open source, common testing frameworks? Did that come up during our analysis? I think it did.

Yeah, it did, because the more standardized the APIs are, for invoking tests, collecting results, and parsing results, the more effective it is. In order to support all those diverse data models, you need a translation layer that will go through and normalize all the data, or perhaps something will come out of this AI/ML working group that will make that a little more automated and less manually intensive. So relying on standards, either de facto or de jure, for reporting information is key: the easier it is for everyone to talk the same structural language for the data, for parsing it, and for invoking the processes, the easier it is to have a pipeline to handle everything.
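A minimal sketch of the translation layer Randy describes: per-vendor adapters normalize diverse test-result data models onto one common schema, so a single pipeline can parse everything. Both vendor formats shown are hypothetical.

```python
"""Sketch of a normalization (translation) layer: per-vendor adapters map
diverse test-result data models onto one common schema so a single
pipeline can parse everything. Both vendor formats are hypothetical."""

def from_vendor_a(raw):
    return {"test": raw["testName"],
            "status": raw["verdict"].upper(),
            "duration_s": raw["elapsedMs"] / 1000.0}

def from_vendor_b(raw):
    return {"test": raw["id"],
            "status": "PASS" if raw["ok"] else "FAIL",
            "duration_s": raw["seconds"]}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(source, raw):
    return ADAPTERS[source](raw)

if __name__ == "__main__":
    print(normalize("vendor_a", {"testName": "latency", "verdict": "pass", "elapsedMs": 420}))
    print(normalize("vendor_b", {"id": "latency", "ok": False, "seconds": 0.61}))
```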
So I think that's a good analysis. Lee, I'd like you to talk about this: I think you're familiar with the fact that a lot of the vendors provide their own testing frameworks, and they're proprietary, and each one is a little bit different, of course. How do you think the open source community can address that, to provide a more holistic framework that can be used across multiple vendors?

Yes, that is the role of the EUAG: the CSPs can lead this trend, and we can try to lead the vendors toward building this kind of common automated testing framework. That is what we are attempting to do.

Yes, I would agree with you. And there's one additional question; I think we have time for one more, and then we have a closing item. So, Randy, what do you think is the next step in x86 hardware, particularly as we're moving toward DPDK, and then into Intel SR-IOV and FPGAs, which is getting back into the proprietary? I think we've swung back and forth a little bit.

Yeah, that's a great question. We at CableLabs are one of the places actually spending a lot of time looking at this, and at how we can use Kubernetes to manage a lot of these different types of compute, with x86 and other processor types. We do see fairly substantial improvements: I've seen claims of some of the newer-generation CPUs having a 30% increase in throughput per core at lower power usage, so that will continue to progress. But for certain types of processing you need something beyond standard CPUs. And as Beth mentioned, right now writing to FPGAs is fairly proprietary, not just to the vendor but to the card type. There's some promise with things like OpenCL and some more intelligent design tools that will let you write more portable design code. P4 is something we're looking at, where you can program some network functions on switches, on NICs, on FPGAs, or in kernel space. These abstracted languages don't give us the performance we'd get writing natively for the device, but they do let us take advantage of the hardware offload, which gives us either better performance or at least better power consumption and higher density. So there really is quite a bit available out there, and I expect to see that growing, along with more standardization in this area to make it more portable to write code and have it run across a variety of acceleration types.

I also want to mention that the OVP group is being renamed; it will be changed to align with the Anuket project. The Anuket project, which of course is focused on the infrastructure to support VNFs, has created a whole set of test cases. We have requirements documents, hundreds of pages of them, and we have the test framework, which came from the OPNFV project. So interested parties should stay tuned.

I have one more question and then I think we will have to wrap. The last question is: are there any commercial NFV testing labs available for certifying our VNFs? The questioner writes that the community labs are more for test tool development; I'm not sure I would agree with that, but that's what the person wrote. So, setting aside commercial VNF testing, please let us know if there are any NFV testing labs for certification. Either Lee or Randy?

Yeah, I haven't worked with them yet. We've been looking at how we can standardize this process and do some of that at CableLabs for the cable industry, but I don't really have a good answer to that question, unfortunately. Beth, do you?

I think there are, actually, because there are a number of labs: the UNH-IOL lab, and I think Intel runs a lab, and there are a couple of others out there that are really focused on this. I wouldn't say they're only for tools development, though; I would say that the whole purpose of the Anuket badging is in fact to allow vendors to come in and get a badge that says the product meets the requirements, that it passed the tests, that it maps to the reference architectures and the reference implementations. Lee, do you want to comment on that?

Yes, I fully agree with both of your opinions. I think there are many commercial NFV testing labs out there, and you can choose whichever fits you best; it is a little hard for us to answer which is good and which is not so good. Maybe you can use some evaluation method; indeed, in the white paper we have the evaluation model for reference, covering the testing objects, the division of labor, the testing process, and so on.
You can rate against these criteria for reference and find a suitable approach for your own testing objects.

So with that, I think we've come to the end of our webinar. Thank you, Randy and Lee, for a very interesting discussion with some great questions, and I encourage everybody to download and read the two white papers. I think you'll find a lot of thoughtful reading and some really interesting insights.

Thank you, Beth, for helping facilitate this; it's been great. And I'll say we ended up with two papers instead of one just because we had so much we wanted to talk about in this area, and it's something the end user group was very excited and passionate about. Thank you.

Thank you. Thank you for your effort on these white papers. Thank you to all of our panelists and all of our attendees, and we'll see you all at another upcoming LF Networking webinar.