Hi folks. We'll get started in just a minute; we want to give people a little time to enter the room. All right, let's get started. Welcome to today's LF Networking webinar. The subject of our discussion today is Intelligent Networking and the Thoth Project: where do we go from here? Before I hand it over to Beth Cohen for introductions, just a couple of housekeeping updates. All attendees will be muted during the webinar; however, if you have a question, there is a Q&A box in the right corner of your screen, and you can use it at any time to type in your questions. A recording of the webinar and a link to the slides will be sent to everyone who registered to attend in the coming days, and we'll also post access to this webinar on the LF Networking webpage. All right, without further ado, I'm going to hand it over to Beth Cohen to kick off today's discussion. Well, thank you, Jill. I want to set up a little of what we're going to be talking about today. I'm really excited about this, and we have a powerhouse panel of people working in this area from around the globe, literally. This is all based on the white paper we published just a couple of weeks ago, specifically on intelligent networking, AI and machine learning in the telecom industry, and the state it's in. I encourage people to ask questions during the presentation, and there will be a panel afterward discussing some of the exciting new work we're doing in this area. With that, I am going to turn it over to Sridhar Rao, who's going to be speaking about the new project that was kicked off a couple of months ago by LFN within the Anuket project. Hey, thanks, Beth. Hello everybody. Good morning, good evening, wherever you are. I'm Sridhar, and I'm the project lead for Thoth.
So Thoth is an LFN project on AI and ML for NFV use cases, and I represent Spirent Communications. Thoth started sometime in early June. We actually began the discussion at the beginning of this year, and once we found some good researchers joining hands, we proposed this project and started it as part of the Anuket project. The name was chosen to be in line with the Anuket name: Thoth is an Egyptian god of learning and reckoning, the number six represents Thoth, and the ibis is one of his symbols, so the logo kind of captures both. I'll start with a quick overview of what this project is about. Around the time we started this project, MIT published an article on decision-driven data analytics, and it really captured the thought process of the discussions we were having at that time. It gave words to our thoughts, so we continue to use it as our philosophy in working on this project. It's mainly a software development project where we focus on developing source code, either models or tools, which I will quickly come back to and elaborate on. Considering the nature of the domain, AI and ML for NFV use cases, it requires a lot of research before we make any decisions, so we invest heavily in research studies; we have published a few and are working on more. Most important for the success of this project is collaboration, because without collaboration with academic researchers, other open source communities, and of course the end users, the telcos, there's no way this project can be successful. So we invest heavily in collaboration. Lastly, we are working to create something like model-as-a-service.
With model-as-a-service, the end users, typically the providers, can share their data set and the problem at hand, and the researchers on our team will work on building the model, assessing it, and delivering the machine learning model. Both the collaboration work and this service are mostly aimed at solving the data set availability problem. Okay, so quickly, these are the nouns and verbs of our project: we mainly focus on machine learning problems, build models and supporting tools, and try to create training and testing data sets, mainly through our own development and deployment activities, collaborations, and research work. I'll quickly run through the tentative roadmap we have as a project. As I mentioned, we started at the beginning of this year and formally kicked off in June, and we initially published a few research studies. I will share the GitHub URL; you can find it in the Anuket project list, and there you will find those interesting research studies by our researchers Girish and Rohit, from reputed universities in India. We also recently published the virtual machine failure prediction model, and currently we're working on log analysis and data generation using GANs. In parallel, we have started an activity around model-as-a-service, asking what the ideal framework should be, so we are evaluating multiple frameworks that can be used for this purpose. In 2022, we want to focus mainly on cloud native. Cloud native is our target, considering the trend and the direction that telcos are also taking, so we want to solve some of the interesting machine learning problems in the cloud native domain. Across the whole of 2022 we want to focus mainly on cloud native and try to come up with the model-as-a-service by mid-2022, so that we can have it along with the Moselle release of Anuket. Hopefully we will be successful in doing that. So maybe by the end of this year we should have a stable model, and next year a stable model-as-a-service. I'll quickly walk through the research studies we have published; as I mentioned, this is very important for us. We have published one on the machine learning problems that exist in the NFV domain and the corresponding techniques that researchers in academia and industry have used to solve those problems, and a second on the open source projects that exist for AI and ML for NFV use cases. Both of these research activities are published in both Excel and RST format for quick review and reading. Currently, as I mentioned, we are working on cloud native NFV: we are trying to come up with a short technical report on the machine learning problems that are most important to address in cloud native deployments of NFV. Another piece of work in progress is on data sources. This is important work: what are the important sources of data, what are their formats and their meaning, and how do they correspond to the different AI and ML problems? This is another research activity we are pursuing as of now, and we plan to publish it once we complete it soon. Next is the models. I just want to quickly summarize what we mean by models as of today. Of course, it's open source; all the models will be open source.
As of today, the models are published as Jupyter notebooks, or of course as separate Python files. We initially started with some machine learning frameworks, like the Linux Foundation's Acumos, but it was dragging a bit, so we switched our focus to Jupyter notebooks, and we do not want to impose constraints as of today. One area is data access: it could be a file system, databases, a repository, or even a data pipeline, if a framework has one in place. We also don't want any constraints for now on frameworks, tools, and libraries; we are heavily dependent on TensorFlow in the initial models, but it's not mandatory. We are working with different frameworks, as I mentioned, and I think Kubeflow is looking very promising as of today. We also do not want to constrain the problem domain or the technique to be used. We want to prioritize both the end users' preferences and our researchers' interests at the same time and see how we can build these models, but the main constraint is a focus on novelty and better performance. Of course, if you don't do better than the existing approaches, it isn't really of any use to the end user. Our models are bucketed into four categories: analysis, detection, prediction, and generation. For analysis, it's mostly log analysis and correlation; for detection, we are working on anomaly detection for OpenStack log analysis; for prediction, the failure prediction I mentioned; and for generation, we are working on synthetic telemetry generation using GANs. The published one is the VM failure prediction, and it's very interesting work.
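As a rough illustration for readers, the "prediction" bucket Sridhar describes can be sketched in a few lines of plain Python. This is not the published Thoth model (which uses TensorFlow on real telemetry); the function name, window size, and threshold below are all hypothetical, and it only shows the general shape of flagging a VM as at-risk from rolling telemetry.

```python
# Hypothetical sketch of the "prediction" bucket: flag a VM as at-risk when the
# rolling mean of a normalized telemetry metric drifts past a threshold.
# Not the actual Thoth model; names and numbers are illustrative only.
from collections import deque

def failure_risk(samples, window=5, threshold=0.8):
    """Return True if any rolling mean over `window` consecutive samples
    exceeds `threshold` (samples are normalized metric values in [0, 1])."""
    recent = deque(maxlen=window)
    for value in samples:
        recent.append(value)
        if len(recent) == window and sum(recent) / window > threshold:
            return True
    return False

# A healthy VM stays below the threshold; a degrading one eventually trips it.
healthy = [0.2, 0.3, 0.25, 0.3, 0.2, 0.3, 0.25]
degrading = [0.2, 0.5, 0.7, 0.9, 0.95, 0.9, 0.99]
print(failure_risk(healthy))    # → False
print(failure_risk(degrading))  # → True
```

A real model would of course learn its decision boundary from labeled failure data rather than use a fixed threshold; the sketch only conveys the input/output shape of the problem.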
As I mentioned, Rohit from BIT Mesra and Girish are the researchers who worked on this. They are pursuing research in both the computer science and mathematics domains, and they are doing very good work; thanks to them, we were able to publish it. Currently we are working on Google's BERT technique for OpenStack log analysis, and on GANs for synthetic generation. And, sorry, I missed this: we are evaluating the Kubeflow framework very strongly, I mean every day we are working on evaluating Kubeflow as a framework. Our contributors are academic researchers, so I will take this opportunity to quickly ask anyone in the audience who is interested: please join hands; we have a lot of open problems that we can solve together. We're looking particularly for contributors from the telecom industry who have data sets. Coming to the data sets: at the very beginning of this project we highlighted this issue; we knew we would eventually face it and have to solve it somehow. Currently we are taking a three-pronged approach to the data set problem. One is to collaborate with the different research labs and other open source projects who have test beds and are running them, and most importantly the telcos; we are making these requests through the EUAG, whose meetings I have attended. The next is creating test beds ourselves. For example, thanks to Intel, we have a few test beds with OpenStack and Kubernetes, and we are building tools to generate this kind of data; I will come to that when I talk about the tools. And finally, of course, emulation. This is our last option.
Currently, using GANs, we are trying to generate synthetic data, but the performance is not really there yet; honestly, it's quite poor, but we will keep iterating to get closer and closer to the real data. Finally, the tools we are building as part of this project. As of today, we have published a tool called the model selector. It's a Q&A-based CLI wizard: it asks the user a series of questions, and based on the data the user has, the problem at hand, and the answers they provide, it suggests which of the three categories, supervised, unsupervised, or reinforcement learning, is the closest or best approach to take forward. Our model-as-a-service is an enhancement of this model selector idea. I think Steve Casey from Verizon suggested this option: instead of just a Q&A-based answer, why not build a real tool, a framework that has all of these models as reference implementations, actually runs through a sample data set that the customer or end user provides, and makes a suggestion? That would be even better than Q&A alone. Currently we are working on two more tools: one is the data extractor and anonymizer, and the other is the time-varying and load-varying (TVLV) workload generator. The latter is for synthetic data generation on Kubernetes clusters using open source tooling; two students have started this work, it's going well, and they want to complete it as soon as possible, so we should have it soon. The TVLV tool is for data generation in the test beds, in both OpenStack and Kubernetes environments, whereas the other tools target the end users.
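For readers, the Q&A wizard Sridhar describes can be sketched roughly as follows. The questions and decision logic here are hypothetical, not the actual Thoth model selector's; the sketch only illustrates the idea of mapping a user's answers to one of the three broad categories.

```python
# Hedged sketch of a Q&A-based model selector, as described above.
# The real Thoth tool asks more questions; this logic is illustrative only.
def suggest_category(labeled_data: bool, sequential_decisions: bool) -> str:
    """Map two yes/no answers to a broad ML category."""
    if sequential_decisions:
        # The goal is to learn actions from feedback over time
        return "reinforcement"
    if labeled_data:
        # Ground-truth labels are available: classification/regression
        return "supervised"
    # No labels: clustering, anomaly detection, and similar techniques
    return "unsupervised"

def run_wizard(ask=input):
    """Interactive CLI loop; `ask` is injectable for testing."""
    labeled = ask("Do you have labeled data? [y/n] ").strip().lower() == "y"
    sequential = ask("Is the goal to learn actions over time? [y/n] ").strip().lower() == "y"
    return suggest_category(labeled, sequential)

if __name__ == "__main__":
    print("Suggested approach:", run_wizard())
```

The model-as-a-service idea mentioned above effectively replaces the hand-written rules in `suggest_category` with actually running candidate reference models against the user's sample data set.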
So, for example, the data extractor and anonymizer is mainly for end users to share data, if they want to share any at all, maybe a sample data set for both training and testing. They can use this tool to extract and anonymize data, perhaps removing certain columns, from some of the popular data sources like Prometheus and Elasticsearch. And of course the model selector is for any end user, whether new to the domain or already working in it. Coming back to the data: I cannot stress this enough; it is very, very important for us. So that is a quick overview of the Thoth project. As Beth mentioned, I ask the audience, especially those from the telco domain, to please join hands, similar to how Orange and others have helped us. For example, when we published the failure prediction, we needed a standardized data set of the kind used by different academic researchers, and the data Orange shared publicly really helped us. We are looking forward to something similar from other telcos who can help, so that we can contribute back by building models for the end users, for you, the telcos. I hope I haven't taken too much time. Thank you. Thank you, Sridhar. That was a great introduction to Thoth, and I know I am participating, as is Steve Casey from Verizon. We are hoping to address the data gap, so to speak. With that, we're going to move into the second part of this webinar: a panel of distinguished guests who are going to be talking about how AI is affecting telcos and why it's important to the industry.
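As a rough illustration of the extract-and-anonymize idea described above: keep only whitelisted metric columns from exported records, and pseudonymize identifiers with a salted hash so that the same host maps to the same token without revealing its name. The column names, salt, and API here are hypothetical, not the actual Thoth tool's interface.

```python
# Illustrative sketch of a data extractor/anonymizer for exported records.
# Column names and the salt are hypothetical; the real Thoth tool may differ.
import hashlib

SALT = "per-operator-secret"          # would be chosen by the operator
KEEP = {"timestamp", "cpu_usage"}     # metric columns considered safe to share
PSEUDONYMIZE = {"hostname"}           # identifiers to replace, not drop

def anonymize(records):
    out = []
    for rec in records:
        # Drop everything not explicitly whitelisted (e.g. customer IDs)
        clean = {k: v for k, v in rec.items() if k in KEEP}
        # Replace identifiers with a short, stable salted-hash pseudonym
        for k in PSEUDONYMIZE:
            if k in rec:
                digest = hashlib.sha256((SALT + str(rec[k])).encode()).hexdigest()
                clean[k] = digest[:12]
        out.append(clean)
    return out

rows = [{"timestamp": 1, "hostname": "node-a.telco.net", "cpu_usage": 0.4,
         "customer_id": "12345"}]
print(anonymize(rows))  # customer_id dropped, hostname pseudonymized
```

As Beth notes later in the panel, salted pseudonymization alone is not always enough (rare values can still re-identify records), which is part of why the community is still working out what anonymization should mean per data source.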
They'll touch on some of the new things we're all working on now: how we can make our networks more intelligent, and where we're focusing our research and our efforts today. With that, I'm going to open with a brief introduction to everybody on the panel, and I'll start with myself. I'm a product strategist for Verizon and have been involved in the open source community for many, many years. I was in OpenStack for many years, I have been instrumental in this project, and I'm very excited to be working on AI-type activities now. And we have a surprise guest, Massimo from Telecom Italia. I'd like to open with you, Massimo, since you're not listed on the panelists list. Please introduce yourself and tell us why you think intelligent networking is important to the industry. Well, it's nice to meet everyone here, and thank you for joining. I am Massimo Banzi, working for many years in Telecom Italia in the standards department. I work on innovation activities, following several standards organizations and, among these, some open communities, of which the Linux Foundation is for sure the most important one I am following. Anyway, why is this so important for us? Well, autonomous networks: the possibility of understanding the needs and desires of our customers before they even understand what they need. This is enabled by the use of new artificial intelligence techniques, machine learning, et cetera. And this is the reason why, for six months now or a bit more, we have had a specific department in Telecom Italia, we call it the data office, that is focused on collecting all the data lakes from all the sources in Telecom Italia, reporting directly to our CEO.
This department is focused exactly on this: collecting data and identifying new methodologies for analyzing and forecasting what our customers will need, and on the possibility of improving the assurance of the network and of our services. And this is the reason why I am joining these activities and am here now. Thank you. Thank you. So, Lei Huan from China Mobile, could you introduce yourself briefly? Yes, of course; thank you, Beth. Hello everyone, Lei Huan from China Mobile. I'm a researcher at the China Mobile Research Institute, and I'm now mainly focused on the intelligent networking industry ecology, intelligent network platform development, and other relevant work. As for why I think the intelligent network is important to the industry: as we all know, intelligent networking is a network empowered by AI technology, a systematic integration of AI and the communication network across hardware, software, systems, and processes. The realization of network intelligence depends on the equipment and data of operators and vendors; therefore, only through industry-ecology cooperation can we promote the further development of network intelligence technology. Thank you, everyone. Thank you. So, Sridhar, I know we didn't really introduce you at the beginning. Yeah, hi. I am a solutions architect at Spirent Communications. I'm the PTL of test and validation projects in Anuket, and Thoth is a new project I'm leading because it's equally important work for our organization. As you might be aware, traditionally we have been adding intelligence to legacy networks, for example for problems like customer churn. When the transition to the cloud happened, we also had to adapt our tools and products and bring these kinds of solutions into our products and solutions too.
So that's why we have full support from our organization to lead this project, and it has become a very important project for us; it's where I'm spending a lot of my time and interest now, and hopefully we can make it successful going forward. Thanks for the opportunity to be part of this panel. Thank you. Yuhan, finally. Hi, hello. I'm from Chenville, and I now work on AI and intelligent operations. I think the intelligent network is the deep integration of automation and intelligence technology with communication network hardware, software, systems, and processes, which promotes quality and efficiency improvements through these network technology reforms, and enables further business innovation, rapid expansion of business, technical evolution of the network, and optimization of operations management. Thank you. So I would like to open the panel with a question directed at Sridhar. Based on your work with Thoth and on the survey results, do you think the open source community is the right place to address the gaps we've identified? Yes, very much. I think most of the audience here may have gone through the white paper; if not, please do. In this excellent study, you will find that there is a need for a shared understanding of the intelligent network, and also for reliable data access. On these two points specifically, I want to highlight that the community is the right place to address them, because for the legacy network, for example, when I gave the example of customer churn, the problem is very well understood throughout the community and by all the stakeholders.
But when it comes to, for example, even the failure prediction in the first model we published, there isn't a proper shared understanding of the problem as such, of the data set that is used, of what each column in the data set means, or of the different tools in place; as Lei Huan mentioned, there are different types of hardware and tools. So we need a proper shared understanding of what an intelligent network really means and which problems we want to address, and the community is definitely the right place to have those discussions and to initially build the kinds of solutions that can be tried and worked on hands-on. The second thing, of course, is reliable data access. Consider the problem in academia today: for whatever research gets published, the first questions the researcher has to answer are how reliable the data set is, why anyone should believe the model, and what kind of data set was used in training. Every researcher, whether pursuing an academic PhD or anything else, has to answer those questions. So in our case, if we want to justify the credibility of our model, we have to say what data set was used for training and testing it, and for that we need a reliable data set for the particular problem. And believe me, it helps everybody, all the stakeholders. Take the example I gave regarding the Orange data set that was shared: we could circle back with our findings, the new OpenStack VM failure prediction model we built, to the Orange team researchers, share our output with them, and get very constructive feedback.
So it helps all of us, both the researchers and those building the models. Those are the two points I would highlight from the white paper where the community can really play a big role. I'd like to add, from my perspective, that I'm finding that even within our own telecom, within Verizon, we have different understandings of the data. I can only imagine that problem is magnified across the industry, so it seems very, very important to address it, and the open source community is the place to do that. Any other thoughts from the other panelists? I did see an excellent question related to real-time data that I think we'll address a little later in the panel. Any other thoughts before we move to the next question? So, the next question is directed at Lei Huan: where specifically do you think the community should be putting its efforts in contributing to intelligent networking? As we found from the survey results, there are a number of areas you can put your efforts into: one is algorithms; another is scrubbing the data; you can also put them into the operational side of the house, into maintenance and management, or into network performance. I'd like your thoughts on where the best place is to put our efforts at this point in the technology adoption curve. Thank you, Beth, for this question. From my perspective, everyone knows that the main challenges of AI and intelligent networking currently include data, tools, platforms, et cetera. As the open source community is an open place for operators and vendors, it can help promote the development of intelligent networking technology. So, where should the community be putting its effort to contribute to intelligent networking?
First of all, I think it is a good place to build a common understanding of an AI platform; we can jointly build a general network intelligence platform through the open source community. Secondly, promote industry collaboration and open source projects. In the process of deploying intelligent networks, operations and maintenance activity will gradually shift from people to autonomous network systems, and the basis for system decision-making will extend from expert experience to complex algorithms and models. That is not just a simple technical issue; it requires industry cooperation to establish fair and objective evaluation standards, open and reusable test environments, and organized certification services. Third, promote data and model sharing: vendors and operators need to develop common AI models for data through a mechanism for model and data sharing. An AI/ML model sharing project would be a good way to promote industry collaboration, promoting shared data and models through the joint construction of intelligent networking scenarios. Another is to establish a unified testing and certification program. From the feedback of the LFN intelligent networking survey, the highest priority for a common testing and certification service for intelligent networking is an effectiveness evaluation and testing system for intelligent applications, covering test cases, data collection, quantitative metrics, and so on. Therefore, we should invest in building a testing and certification program that can evaluate various intelligent applications with objective performance metrics and evaluation approaches, by scenario, category, and level. To build this kind of program, industry collaboration is critical. Beyond my opinion, you can also look for recommendations in our white paper, where we analyze in detail the technical bottlenecks of current network intelligence and how to better solve these pain points through industry collaboration.
Yeah, thank you. Other comments about this from the panel? So, following on from that, Yuhan, I'd like your thoughts on how the open source community can really create that common data set; it seems like a very hard problem. I think we're all in agreement that the problem is important for the open source community to address, as Lei and Sridhar both spoke about, but how we get there is the next step, and Yuhan, I'd like your thoughts on that. Thank you. Based on our white paper and the survey results, we found that data standardization and shared data sets and models remain long-term challenges for the adoption of intelligent networking, and that a basic algorithm framework is the most needed capability to be provided by a unified network platform. A shared understanding of the data model itself is a basic requirement; for example, how data is defined can vary a lot across operators, or even within a single operator, so even something simple is not standardized. A basic algorithm framework is the much-needed capability to advance the industry, and I think vendors and operators need to develop common models for data through something like a model and data sharing mechanism. If the open source community could establish an AI/ML and model sharing project, it would be a good way to promote industry collaboration and the sharing of data and models through the joint construction of intelligent networking scenarios. The EUAG has established the AI/ML and model sharing project, which is committed to promoting the sharing of data and models through the joint construction of intelligent network application scenarios, such as congestion prediction and mitigation, sleeping cell detection, traffic steering, and software fault detection and resolution. Thank you.
Any other thoughts before we take some live questions? One of them is addressed to Sridhar, and I think it follows up nicely on some of the thoughts Yuhan just shared, related to real-time data. How do we integrate real-time data? Because of course the ultimate goal is to have our networks be intelligent in real time; it doesn't do a whole lot of good to be intelligent after the fact. We want the networks to respond to changes in workloads in real time to optimize the performance of applications. So Sridhar, could you talk a little more about that? The question also asks how users integrate the models into their networks, and how to align expectations and testing. Exactly; it's a very good question. When we started this project, we wanted to start with a framework that could help us do this. When I use the word framework, I mean one that has data pipeline support, which helps us achieve this kind of integration with any kind of data source, or at least with the storage. Typically in most of these deployments there will be some kind of observability or monitoring solution where all the data is gathered and put into databases like Prometheus or Elasticsearch, and from there the data pipeline starts: it consumes this data, either truly live or over a short period of time, and the model consumes that data and works on it. For that framework, we initially wanted to start with Acumos; the Linux Foundation itself has Acumos as a good project. So we wanted to start with that.
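For readers, the pipeline step Sridhar describes (observability data landing in Prometheus, then consumed by a model) can be sketched as follows. The JSON shape matches Prometheus's `query_range` HTTP API; the fetch itself (e.g. via urllib against `/api/v1/query_range`) is omitted here, and the "model" is just a stand-in. This is an illustration, not Thoth's actual integration code.

```python
# Sketch of consuming Prometheus query_range output for a model.
# The response shape follows the Prometheus HTTP API ("matrix" result type);
# fetching and the downstream model are stand-ins, assumed for illustration.
import json

def extract_series(response_text):
    """Flatten a Prometheus query_range JSON body into (metric, [values])."""
    body = json.loads(response_text)
    for series in body["data"]["result"]:
        name = series["metric"].get("__name__", "unknown")
        # Each entry is [unix_timestamp, "string_value"]
        values = [float(v) for _, v in series["values"]]
        yield name, values

# In a real pipeline this text would come from the Prometheus HTTP API.
sample = json.dumps({"status": "success", "data": {"resultType": "matrix",
    "result": [{"metric": {"__name__": "node_cpu_seconds_total"},
                "values": [[1700000000, "0.2"], [1700000015, "0.7"]]}]}})

for metric, values in extract_series(sample):
    print(metric, values)  # hand `values` to the model at each interval
```

In a framework like Kubeflow, this extraction would be one pipeline component, with the model inference step wired in after it rather than called inline.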
As of today, we haven't finalized on a framework, because over the last three months we have seen that going container-based, I mean cloud native, would help us even here. We are looking strongly at Kubeflow as one such framework, which also has data pipeline integration options. So we are really looking at these kinds of frameworks, and as I showed in my timeline, in the next few months we should have a framework that helps us integrate these models; we will of course migrate our existing models to it so they can integrate with and work on live data. Thank you. Thank you. Any other thoughts from the panel? I know we have another question that's come in. I think the million dollar question here is: what is your strategy to persuade the providers to share their useful and actual internal data, even sensitive data, so that we have meaningful data sets and model results? I would like to talk about that, because obviously I'm representing one of the providers that has that data, and I have two comments about it. One: we do need to persuade the providers to share it; I know that Orange has shared one set of data. Two: we also need to reconcile the data internally. One thing we found out on our own internal projects is that the data is not uniform within our company, which means there has to be a framework to map the data so there's a common understanding. I think it's also important to have a way of anonymizing the data, and I'll use the example of data used in medical research: you have patient data, which is obviously sensitive, and it needs to be anonymized appropriately.
And it's actually a fairly difficult problem, because it turns out that, particularly if you're researching something like a rare disease, it's fairly easy for the researchers to figure out who the person is who actually had that rare disease. So we need to, again, come up with meaningful ways to anonymize the data: not only anonymize it but also keep it useful at the same time, so that we don't get biased results. I know we've had a number of conversations around bias within the data that renders the algorithms useless. So, do I have an answer? Not yet, but we are working on it. And one of the things is that the EUAG, as the group of people representing the telecoms specifically, has to take it on ourselves to persuade our own companies to share this data. Any other thoughts from the panelists on that? Yeah, I would like to share my opinion on this. There are two things that we are trying. One is the model as a service. The idea is to entice: there is something in return you can get out of it. If you share a data set with us, even a sample data set, and, as the question was, even if the sensitive parts are removed, that's okay. If you share the data set and the problem, we will give you the models. We will build, we will assess, we will develop, and we will even maintain and improve them for you. So this is one answer to what you can get in return if you share the data: we promise to build and give the models to the end user. I'm hoping this could be one point that helps the telcos consider sharing their data. And in fact, with one of the end users I even mentioned: share those problems and the data set if you don't want to invest your human resources on them, because most of the telcos have research labs already working on these kinds of problems.
So if you don't want to invest your resources on some of these problems, please share them with this community. We would be happy, our researchers would be happy, to work on those problems and build models for you. I'm sincerely hoping this offering from the project can help convince the telcos, and I fully respect their decisions on when to share, what to share, and how to share, but I hope this helps them do it. And the second point is on the tools that we are trying to build. As Beth mentioned, and thanks to her, she has been joining the meeting every week and sharing her thoughts and inputs on this. We are still working out what anonymizing really means for different kinds of data sources and how we actually achieve it. Our anonymizer tool, which is in progress, aims to address that problem, so that it can give some confidence in sharing the data. Thank you, Shridhar, for those thoughts. So just to sum up, we've come to the end of the program, and I'm going to turn it over to Jill to finalize and wrap it up. As you can see, this is a really hard problem, and we're just starting down the road. But it's also really exciting to be participating in the intelligent networking conversation. And I encourage all the listeners to this webinar to reach out to us and share your thoughts, so hopefully you can participate. And two quick questions, maybe we can answer those from Rob. If I can take a minute and answer that, yes, and then we'll turn it over to Jill. So yeah, the Orange data set is public; we have a link in our project to all the publicly available data sets that one can view. You can visit our project page, or this Friday we have a meeting; you can join the meeting and we will share all the URLs.
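The anonymization problem the panel describes, removing identity while keeping the data useful for modeling, can be sketched roughly like this. This is a hypothetical illustration, not the Thoth anonymizer tool itself: it replaces subscriber identifiers with salted hashes, so records from the same subscriber stay linkable for per-subscriber patterns, and coarsens timestamps to the hour to reduce re-identification risk. The record fields and salt are invented.

```python
import hashlib

SALT = "example-salt"  # in practice a secret, rotated per data release

def anonymize_record(record, salt=SALT):
    """Return a copy of a CDR-like record with identity removed but utility kept."""
    out = dict(record)
    # Salted hash gives a stable pseudonym: the same subscriber always maps
    # to the same token, so per-subscriber behavior survives anonymization.
    out["subscriber_id"] = hashlib.sha256(
        (salt + record["subscriber_id"]).encode()
    ).hexdigest()[:16]
    # Coarsen the timestamp to the hour to make re-identification harder.
    out["timestamp"] = record["timestamp"] - (record["timestamp"] % 3600)
    return out

record = {"subscriber_id": "+15551234567", "timestamp": 1625000123, "bytes_up": 4096}
anon = anonymize_record(record)
print(anon)
```

As the medical-research analogy suggests, hashing alone is not sufficient against a determined adversary with outside knowledge; real releases also need generalization or suppression of rare combinations, which is exactly the harder part the tool is working through.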
And on the question of the portability of a model, it's actually a very interesting point that you have mentioned. As you say, we work with, train, and build these models on a particular set of data. Now, whether we can put a model into your network, where it will consume the live data, depends on how it was trained, as I think the researchers from China Mobile mentioned. There are different interpretations of different fields of the data. If there is a one-to-one mapping between the fields the model was trained on and the fields where it will be running, with no translation or normalization required, then it will definitely work fine; otherwise we may have to do the kind of preprocessing that can suit the models. I'd also like to add, and I know we didn't really spend a whole lot of time on it, and I know we're running out of time, but there are two streams of intelligent networking: the operational side and the network performance side. They are really quite different data sets, they need different algorithms, and they need to be handled differently. That's again something that, as we get more into the research, becomes more obvious: some of them interact and some of them are actually independent. I know we've been doing a whole lot of work around natural language processing of queries and tickets, which obviously improves our operations but doesn't necessarily directly impact the network itself. So with that, I would like to thank everybody for all the great questions, and thank the entire panel, and especially Shridhar for his wonderful presentation on Thoth, and I'm going to turn it over to Jill to wrap it up. Great. Thank you, Beth. I just want to say thank you again to all of today's panelists and to everyone who joined us.
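The preprocessing step Shridhar describes, mapping an operator's fields onto the schema a model was trained on and normalizing them the same way, might look like this hypothetical sketch. All field names, unit conversions, and training ranges here are invented for illustration.

```python
# Hypothetical mapping from one operator's field names to the schema
# the model was trained on; both sides are invented for illustration.
FIELD_MAP = {
    "cpu_util_pct": "cpu_usage",   # source field -> training-schema field
    "rx_kbps": "throughput_mbps",
}

# Unit conversions and min-max ranges used during training (assumed values).
CONVERT = {"throughput_mbps": lambda kbps: kbps / 1000.0}
TRAIN_RANGES = {"cpu_usage": (0.0, 100.0), "throughput_mbps": (0.0, 10000.0)}

def to_model_input(sample):
    """Rename, convert units, and min-max normalize into the training schema."""
    features = {}
    for src, dst in FIELD_MAP.items():
        value = sample[src]
        if dst in CONVERT:
            value = CONVERT[dst](value)  # translate units first
        lo, hi = TRAIN_RANGES[dst]
        features[dst] = (value - lo) / (hi - lo)  # same scaling as training
    return features

print(to_model_input({"cpu_util_pct": 55.0, "rx_kbps": 2500000.0}))
```

When the two schemas already line up one-to-one, this layer collapses to an identity mapping, which is the easy case mentioned above; the translation table is where the per-operator interpretation differences get absorbed.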
I also want to mention, if you'd like to learn more about what was discussed today, visit anuket.io or lfnetworking.org. And as we mentioned at the beginning, slides and a recording will be available in the coming days. Thanks for joining us, everyone. Have a great day.