Hello, everyone. I'd like to thank everyone who's joining us. Welcome to today's CNCF webinar, Kubernetes in the context of on-premise edge and network edge computing, with Intel. I'm Libby Schultz and I'll be moderating today's webinar. We'd like to welcome our presenters, Amr Mokhtar, Network Software Engineer at Intel, and Prakash Kartha, Segment Director at Intel. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at www.cncf.io/webinars. With that, I'll go ahead and hand it over to Amr and Prakash to kick off today's presentation. Thank you, Libby. Hello, my name is Prakash Kartha. Like Libby mentioned, we are going to be talking today about OpenNESS, or Open Network Edge Services Software, enabling a high-performance edge for telco and enterprise. I'm with Intel's Network Platforms Group, where I'm a Segment Director for Edge, and with me is Amr. Hello, everyone. My name is Amr Mokhtar and I'm a software engineer with Intel. I'm currently an active contributor to the OpenNESS project, and I previously worked on other projects such as DPDK and FlexRAN. Okay, so let's walk through the agenda. We'll first start with some of the opportunities and challenges that we believe will exist as you deploy edge in a telco environment.
We'll introduce you to the project called OpenNESS, then walk through the architecture, various deployment scenarios, and how you can get started, and then we'll have some wrap-up thoughts before we go into Q&A. So first let's talk about our vision for the edge, and we see the edge as distributed across different locations. From a distribution perspective, there's what we call the on-premise edge, which typically sits in an enterprise: it could be private wireless, it could be an IoT-type scenario. And then there's what we call the network edge, which is distributed along the access edge, which would be something like a base station; the near edge, which would be where a 5G user plane might reside; and the regional data center, where you may have different types of applications. Now, one of the opportunities we see here is that as a telco, if you're looking at edge across these different locations, whether it's on-premise or the network edge, there is a tremendous opportunity to reduce your total cost of ownership by building a consistent environment for the edge, a consistent environment based on cloud-native platform approaches across all these edge locations. And it's quite easy to imagine why that would be the case, because if you try to build different cloud-native edge platforms for different edge locations, you're going to start to drive up your cost, not just to build the platform but also to manage an ecosystem and deliver services. Telcos are always looking for ways to build cost-efficient, reusable, scalable platforms, so we believe this is a huge opportunity to enable consistent cloud-native platform approaches for all of these different edge locations. As we move to the next slide, we'll look at some of the challenges with addressing this particular opportunity to reduce TCO. As you can imagine, each of these locations is going to look quite different from an edge perspective.
The platforms are going to look very different. For instance, if you are building a platform for an on-premise deployment, let's say private wireless, you are out in the field and your environment is going to be extremely constrained. Whereas if you move to the access edge, you're going to have a little more flexibility, but you're still pretty constrained in what you can do. And as you go deeper into the network, it's going to start looking more and more like a data center. So the first challenge is: how do you deliver platform consistency and scalability across all of these diverse edge locations, each with different requirements? That's your number one challenge. The next thing you come across, assuming this is going to be a fully cloud-native framework you're using across the board, is that pure cloud native is still not quite ready for the edge. There are a lot of stringent KPIs that need to be supported at the edge, whether it's latency or determinism, plus the ability to deal with a lot of the underlying network complexity around multi-access. In a lot of these scenarios you may have 5G or LTE or Wi-Fi; the access networks are going to be quite diverse, and your networking topologies may be quite diverse. For all of these reasons, the cloud-native frameworks we all know still need to be optimized quite a bit. That's the second challenge. The third challenge is that the standards haven't quite caught up, but they're almost there. From a 3GPP perspective, we have really started to see more edge-specific standards getting developed. So as a telco in this brownfield environment, where you already have a network and you're bringing on new capabilities, you're going to have to figure out how to bring on some of these standards-based deployments.
So with that, going into the next slide, we'll talk about how we are trying to address some of these challenges with this project called OpenNESS. OpenNESS stands for Open Network Edge Services Software. It is a CNCF Certified Kubernetes solution. OpenNESS is an edge computing software toolkit that enables highly optimized, secure, and performant edge platforms to onboard and manage applications and network functions with cloud-like agility across any type of network. Each of these statements is quite important: it is extremely optimized for these network workloads, it's secure, and it's performant. The solution is extremely modular; it's built on a concept of building blocks, and these building blocks sit on top of a cloud-native foundation. All the key cloud-native projects form the basic framework for OpenNESS, including Kubernetes, service mesh, telemetry projects, Helm, and the Operator Framework. These are the foundational elements, and then OpenNESS builds specific capabilities on top, like multi-access networking, multi-cluster orchestration, an edge service mesh, confidential computing, WAN overlay, and so on. Now, there are a few use cases that we target as the top use cases, starting with the 5G access edge, including a 5G RAN, a virtualized RAN, enabled on a cloud-native environment. That is one of the key use cases: you can basically run a 5G base station on top of this platform. We have support for a 5G distributed user plane function, support for SD-WAN, and various MEC, or multi-access edge computing, applications such as AI inferencing and media. Those are the top use cases enabled on this platform. From a key feature standpoint, the number one capability we address is ensuring that the edge KPIs are met, including throughput, determinism, quality of service, latency, and so on.
The solution is intended to run in different environments, including different cloud environments and different access environments, so it's multi-access and multi-cloud. And it's delivered via different types of reference architectures. These reference architectures are different combinations of the building blocks put together to serve certain use cases. This is quite interesting because you can choose a reference architecture depending on what type of platform you're trying to build, whether that's a base station versus a near-edge platform versus an on-premise platform. And of course, we address the industry standards like 3GPP, O-RAN, and ETSI, as well as the more de facto standards that the CNCF has been driving. With that, I'm going to pass this to Amr, who is going to go over the architecture in a lot more detail. Thanks, Prakash. So now let's take a closer look at the Open Network Edge Services Software architecture and its building blocks. As we look ahead towards a cloud-like 5G network, we are delivering the Open Network Edge Services Software; we call it OpenNESS for short. OpenNESS is an open and cloud-native architecture, as Prakash introduced, that is transforming the telco network edge and thereby allowing it to be managed and provisioned with cloud-like agility. It also enables building highly optimized and performant platforms that orchestrate workloads, network functions, and hardware resources while preserving the separation of the control and data planes. Following the Kubernetes way of doing things, OpenNESS builds edge clusters that are composed of one or more high-availability nodes that belong to the control plane and multiple worker nodes, which we call edge nodes. These edge nodes are the ones that are actually positioned at remote sites, close to the source of events.
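To make that control-plane/edge-node split a bit more concrete, here is a minimal sketch of how such a cluster might look from kubectl; the node names and the label key are invented for illustration and are not OpenNESS-specific.

```shell
# Hypothetical cluster: one control-plane node plus two remote edge nodes.
kubectl get nodes
# Mark the remote workers so edge workloads can be steered to them
# (label key/value chosen purely for illustration):
kubectl label node edge-node-01 edge-node-02 node-role.example.com/edge=true
# A Deployment can then pin its pods to edge nodes with a matching nodeSelector.
```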
The controller manages and onboards edge applications, services, and network functions efficiently according to the edge nodes' features and their platform capabilities. The controller also provisions acceleration resources to the workloads and provides highly optimized container networking for their data plane, using technologies such as Open vSwitch, eBPF, and PCIe SR-IOV CNI. All the Kubernetes capabilities are still included in the OpenNESS architecture, like node feature discovery, telemetry and telemetry-aware scheduling, the container runtime and virtualization infrastructure, and hardware accelerator discovery, provisioning, and management. What we are doing in OpenNESS is picking up the latest Kubernetes releases and other CNCF projects and then building various extensions on top of them at different levels for the purpose of promoting network edge deployments. At the platform level, we are integrating the bare-metal container components for resource-based deployments, such as node feature discovery, the resource management daemon, NUMA-aware scheduling, and the CPU manager. We are also extending the telemetry pipeline that is based on collectd, cAdvisor, Prometheus, and Grafana, and then topping that up with dedicated metrics exporters for the accelerators we are enabling for edge deployments with OpenNESS. We are also enhancing and integrating the hardware acceleration device plugins for FPGA-based forward error correction and Intel Movidius vision processing units. Also, apart from the CoreDNS that is part of the vanilla Kubernetes bring-up, we are provisioning what we call the Edge DNS service. The Edge DNS service provides domain name resolution for external devices that are attempting to discover and consume business-to-consumer software-as-a-service offerings that are hosted by the edge cluster and open for public consumption.
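As an illustration of how node feature discovery is consumed, a pod can select nodes by an NFD-published label. The `network-sriov.capable` label family is published by upstream NFD; the pod and image names below are made up.

```shell
# Schedule a pod only onto nodes where NFD detected SR-IOV capability.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sample-vnf                         # hypothetical name
spec:
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  containers:
  - name: vnf
    image: example.com/sample-vnf:latest   # hypothetical image
EOF
```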
Some or all of these services could be monetized by the service provider and only allowed for premium users. Deterministic cloud native is another crucial capability when deploying cloud-native radio access network functions. The radio access network functions have very stringent requirements for latency and timeliness of the messaging at the radio link, and for that we are providing a recipe with the best-known platform configurations and real-time characteristics that will achieve optimal performance on general-purpose compute. At the application level, we are delivering video analytics services that are available through a service mesh. These services allow third-party application vendors to quickly build optimized video inferencing pipelines without being experts in the media processing domain. We are also defining reference edge network functions for native integration with the public telco cloud infrastructure. The mobile network standardization body, 3GPP, has provided a software-based definition for all its network functions. Network functions such as the application function (AF) and the network exposure function (NEF) are key appliances hosted by OpenNESS in order to influence and steer the traffic propagating through the core network towards the edge nodes. In a related context, we are providing recipes for deploying reference cloud-native radio access networks with deterministic orchestration. We will explore more about the vRAN deployment scenarios in the following slides. At the orchestration level, OpenNESS ships with telemetry-aware scheduling and kubectl plugins such as CNCA. CNCA stands for Core Network Configuration Agent, and it is primarily used to set up the core network traffic influencing rules for the core network functions. Another kubectl plugin is called FPGA remote system update.
The remote system update plugin provides a cloud-friendly means to automate the RTL image upgrade for FPGA-based accelerators without a truck roll, which reduces the overall operational costs. This RSU plugin is also integrated with the Operator Framework for fully autonomous system upgrades. We will cover the FPGA operator and RSU in further detail shortly. By the way, if you have any questions, go ahead and submit them to the Q&A window; we'll get to them at the end. Yeah, absolutely. So now that we have an idea about the OpenNESS architecture and its extensions for integrating with existing network infrastructure, let's explore some of the reference deployment scenarios. As you may know, Rakuten recently announced a fully cloud-native 5G end-to-end network, including vRAN, core, and apps. OpenNESS provides various foundational elements for this kind of deployment in partnership with commercial partners like Altiostar and Robin.io. This is a realization of the industry's mission to advance the public mobile telco network towards a modern software-based architecture in which all the network functions are cloudified and dynamically positioned at various locations. Many of these software-defined functions run in virtualized environments at or near the edge of the network. The radio signal processing stack in both LTE and 5G NR is a pipeline of functions controlled by signaling derived from the core network. As the radio access network and the core network decompose, it becomes necessary to distribute their functions across various locations, as shown in this diagram: the radio access, the regional, and the central data center. OpenNESS enables the virtualization and orchestration of these functions across these locations. At the radio access, OpenNESS provisions cloud-native eNodeB and gNodeB workloads on deterministic infrastructure because of their time-sensitivity needs.
Also, radio-specific applications such as radio network information and location information services can be deployed side by side in the same OpenNESS edge cluster. OpenNESS also enables media-centric applications such as immersive media, content delivery networks, and smart city applications to be deployed at the regional data centers. And for the core network, OpenNESS provides extension APIs for smooth integration, harmonized control and user plane separation, and traffic influencing for the LTE and 5G core. In a private wireless telco setting, OpenNESS enables hosting edge applications and services at the customer premises. These edge applications are managed and orchestrated by a centralized office that could be sitting in the cloud. The various branch sites are connected over a software-defined wide area network, SD-WAN for short. The centralized office is commonly called the SD-WAN controller, as shown there at the top right corner. SD-WAN is a software-driven model for the WAN: instead of routing traffic just based on addresses, it is application-aware, meaning that it uses software to more intelligently route or steer traffic across the WAN based on the business requirements for those applications. The SD-WAN architecture logically separates edge applications and their management functions from the WAN transport services like MPLS, broadband internet, or LTE and 5G. Similar to 3GPP, SD-WAN separates the control and management plane from the data plane. In the context of SD-WAN, the data plane is also known as the data forwarding plane. The data forwarding plane is established over secure tunnels set up between the branch sites, while their control is managed by a centralized data center. Multiple access technologies are natively supported by OpenNESS-based deployments, regardless of which access technology the devices at the branch sites are using: private mobile LTE or 5G access, Wi-Fi, or even traditional LAN.
Therefore, SD-WAN network functions that are underpinned by the OpenNESS infrastructure will be able to provide seamless access to the on-premise edge applications and services. As we discussed earlier, a key part of the cloud-native radio access network functionality is the execution of the LTE and 5G physical layer pipeline in a deterministic manner. In this reference deployment, we are using FlexRAN. FlexRAN is a reference layer 1 pipeline for the 4G eNB and 5G gNB that is optimized for Intel architecture. FlexRAN executes baseband unit (BBU for short) threads in real time, and they are given direct access to Ethernet SR-IOV virtual functions to terminate fronthaul and midhaul 3GPP traffic. They are also given access to forward error correction acceleration queues that are memory-mapped and passed through as PCIe SR-IOV virtual functions. These forward error correction queues enable accelerating the channel coding operations using the Intel FPGA Programmable Acceleration Card. The BBU threads are a group of critical execution routines that must be executed on isolated CPU cores. Isolated here means that these are CPU cores that the BBU threads are given exclusive access to. When booting the Linux kernel with the correct set of parameters, the Linux kernel scheduler will take care of prohibiting any processes from being scheduled, or any interrupts from being serviced, on these isolated cores. By doing that, the BBU threads' execution is not impaired by any kernel interrupts or context switching. We've used the Kubernetes CPU manager, the topology manager, and node feature discovery to fulfill FlexRAN pod isolation and to provide optimal placement. Also, using the SR-IOV CNI and the SR-IOV and FPGA device plugins, the FlexRAN pods are correctly scheduled and provisioned with the hardware resources they need to transceive radio traffic and access the acceleration.
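As a rough sketch of the isolation recipe just described, the kernel command line and kubelet settings might look like the following; the core ranges are purely illustrative and would have to be tuned to the actual CPU topology.

```shell
# Illustrative kernel boot parameters: reserve cores 2-21 for BBU threads,
# keep housekeeping (scheduler ticks, RCU callbacks, IRQs) on cores 0-1:
#   isolcpus=2-21 nohz_full=2-21 rcu_nocbs=2-21 irqaffinity=0-1

# Kubelet flags enabling exclusive core allocation for Guaranteed pods:
#   --cpu-manager-policy=static --reserved-cpus=0-1 \
#   --topology-manager-policy=single-numa-node
```

With the static CPU manager policy, a pod is granted exclusive cores only when it has Guaranteed QoS, that is, equal integer CPU requests and limits.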
So for a cloud-native radio access network deployment, OpenNESS delivers a well-defined recipe for best performance on general-purpose compute. This recipe defines the optimal real-time kernel settings, the necessary BIOS configuration, and the most efficient CPU frequency and power management settings. The Intel FPGA Programmable Acceleration Card, also known as the SmartNIC, plays a key role in accelerating virtualized network functions, which in turn increases the overall compute density. The use of FPGAs brings the benefits of programmability, reduced time to market, and ease of integration. We use the FPGA to accelerate the LTE and 5G forward error correction operations at layer 1 of the communications protocol stack. That computation is one of the most compute-intensive operations and is substantially enhanced when hardware acceleration is involved. The FPGA SmartNIC integrates a 2x25 GbE network interface with an Intel Arria 10 FPGA in a PCIe card form factor. The FPGA here is used in the field to accelerate LTE and 5G forward error correction. These FPGA accelerator devices are installed at remote sites, so when an RTL firmware update becomes available, we run a process called RSU. RSU is shorthand for remote system update, which automates the FPGA flashing using a Kubernetes operator, and this process is enabled through a technology known as OPAE, short for the Open Programmable Acceleration Engine. OpenNESS delivers Kubernetes operators for the FPGA SmartNIC device that implement autonomous state machines for managing the SR-IOV NIC device, the OPAE stack, and the forward error correction FPGA device. The forward error correction FPGA operator invokes the FPGA device plugin and deploys a DaemonSet that is responsible for configuring the FPGA resources and the hardware queues, then exposes them as SR-IOV virtual functions.
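A FlexRAN-style pod would then request a FEC virtual function as a Kubernetes extended resource. The resource name below follows the pattern used in SR-IOV device plugin configurations, but it should be treated as an assumed example, as should the pod and image names.

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bbu-l1                             # hypothetical name
spec:
  containers:
  - name: l1-pipeline
    image: example.com/flexran-l1:latest   # hypothetical image
    resources:
      requests:
        intel.com/intel_fec_5g: "1"        # one FEC VF (assumed resource name)
      limits:
        intel.com/intel_fec_5g: "1"
EOF
```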
These SR-IOV virtual functions map the forward error correction encode and decode queues so that later on they are passed through to the vRAN network functions, as shown in this diagram. Based on the configuration provided by the user, the FlexRAN network function pods are individually and exclusively associated with the VFs, and accordingly allotted access to specific forward error correction queues. This association and allocation is done through the SR-IOV device plugin that is deployed by the operator. So the whole process of deploying the necessary device plugins and DaemonSets needed by the FPGA, the SR-IOV and queue configuration, and the remote system programming are autonomously controlled by the operator that OpenNESS is delivering for the SmartNIC. Just before leaving this slide, I would like to mention the two NICs shown on the diagram. These could be used for fronthaul and midhaul termination, depending on the split made in the radio signal processing stack and also on the geographical positioning of the gNodeB elements. Okay, so switching gears a little bit and moving on to a different deployment scenario, which is a bit more media-centric. This one is an essential deployment when developing applications for immersive media, content delivery networks, or smart cities. The objective here is to enable onboarding efficient and scalable media analytics and transcoding workloads at the edge, for the purpose of reducing the latencies to the end users and easing the pressure on the backbone of the network. With this deployment, a video analytics service mesh is provisioned to the OpenNESS edge cluster that enables third-party applications to define and execute Open Visual Cloud based media analytics and graphics pipelines. These pipelines are optimized for cloud-native deployments on commercial off-the-shelf x86 platforms.
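The VF-to-pod association described above typically runs through Multus-style secondary networks. A minimal sketch of a NetworkAttachmentDefinition backing a fronthaul interface could look like this; the object name and the resource string are hypothetical.

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-fronthaul            # hypothetical name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "name": "sriov-fronthaul"
  }'
EOF
# A pod then attaches to it via the annotation:
#   k8s.v1.cni.cncf.io/networks: sriov-fronthaul
```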
The video analytics service mesh is currently based on Istio and the Envoy proxy. It exposes multiple video analytics serving bundles with built-in FFmpeg and GStreamer media frameworks, and if hardware acceleration is available, such as the Intel Movidius High Density Deep Learning (HDDL) card, then the corresponding services are spun up in the mesh and become available for application consumption. As shown on this diagram, there are four service bundles available on the mesh: one bundle is video analytics serving with GStreamer, another is with FFmpeg, and two more are provisioned if HDDL accelerators are installed in the cluster. With the service mesh in place, the video analytics services are scheduled with Envoy proxy sidecars, which enables the mesh control plane to monitor, load balance, and scale the services according to the platform workload and the CPU utilization, transparently, without impacting the third-party consumer applications' performance or altering their business logic. At Intel, we've developed the vision processing unit, the VPU, which couples highly parallel programmable compute with workload-specific hardware acceleration in a unique architecture. The VPU technology enables accelerating deep-neural-network and computer-vision-based applications on edge servers and appliances. The Intel Movidius HDDL and the Visual Cloud Accelerator Card for Analytics are two vision-specific accelerators that we support in OpenNESS to onboard dense, accelerated media analytics and transcoding workloads at the telco network edge. When a media-centric deployment is needed, OpenNESS provisions the Kubernetes VPU device plugin and the HDDL service DaemonSet on all the nodes where these acceleration devices are installed.
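An OpenVINO-based inference pod would claim VPU capacity through the device plugin's extended resource. The `vpu.intel.com/hddl` resource name follows the Intel device plugins for Kubernetes, but treat it, along with the pod and image names, as an illustrative assumption.

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: openvino-infer                       # hypothetical name
spec:
  containers:
  - name: infer
    image: example.com/openvino-app:latest   # hypothetical image
    resources:
      limits:
        vpu.intel.com/hddl: "1"              # one HDDL VPU slot
EOF
```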
The HDDL service DaemonSet load balances and arbitrates multiple OpenVINO-based applications deployed on the edge cluster for efficient VPU resource utilization. The Visual Cloud Accelerator Card ships with an Iris Pro GPU on board and is specifically designed for pipelines that involve not only analytics but also transcoding operations. Okay, so how can you get started? In the previous slides we explored some of the deployment scenarios and their reference applications and network functions. The edge applications are designed to execute in proximity to where the data is being generated, for latency or data locality reasons. Example applications could be video analytics for IoT, content delivery networks, or location information services. Do you have a brilliant edge application idea? Please join us and contribute your application to the OpenNESS edgeapps repo that's available on GitHub. If you have a commercial application and you would like to contribute its Helm charts, please enroll in the Intel Network Builders program and get your application added to the commercial edge applications hub. The edgeapps repo welcomes not only applications and services but also the network functions that would invigorate the edge of the network and on-premises deployments. To get in touch with our team, please visit our website, www.openness.org, for the latest news and useful resources: white papers, trainings, and podcasts. Our project space is available on GitHub at github.com/open-ness. You can also join our developers mailing list to initiate discussions or just raise some questions that you may have. And finally, you can find the edgeapps links for GitHub and Intel Network Builders at the bottom there. So thanks, and now I'll hand it back over to Prakash. Thanks, Amr. Just to summarize.
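If you are preparing a Helm chart to contribute, a typical local workflow might look like this; the chart path, release name, and namespace are placeholders.

```shell
# Sanity-check and package the chart locally:
helm lint ./my-edge-app
helm package ./my-edge-app        # produces my-edge-app-<version>.tgz
# Install onto an edge cluster for testing:
helm install my-edge-app ./my-edge-app-0.1.0.tgz \
  --namespace edge-apps --create-namespace
```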
I know we have a few Q&A questions that have come up, and we'll try to answer those, but just to summarize. As you've seen, with OpenNESS what we have really tried to do is address different types of use cases. You saw very telco-oriented use cases like 5G, the core network, base stations, and SD-WAN, but also traditionally non-telco use cases like video analytics, AI, CDN, and so on. What we are hoping to do with OpenNESS is ensure that if a telco is looking to deploy an edge, OpenNESS gives a good starting point, because we've made sure that OpenNESS is optimized for all these use cases and for the specific edge requirements. So it's a very good starting point to build product, do pathfinding, go to market, and so on. And also, we've made sure that this is certified, so it's a Certified Kubernetes solution as well. Okay, so that's a summary, and Libby, I'll pass it off to you. I think we're ready for Q&A, if you have anything else to say, or we can go to Q&A directly. Awesome. Thanks, Amr and Prakash. Now we have some time for some questions. Be sure that you've dropped them into the Q&A tab, as opposed to the chat tab, and Amr and Prakash, if you want to open that up, you can just start from the top and we'll get to as many as we have time for. Absolutely. There are quite a few: in the Q&A I see quite a few questions, and in the chat also a few, so we'll try to answer as many as possible. Amr, I'll take a few, and if you want to look at the Q&A questions, pick a couple that you want to answer. I'll get started with the first one, which is a great topic: could you explain what types of applications benefit from this project? It came up in my summary.
The types of applications that we are trying to address really run the gamut in terms of workloads and different edge locations. Clearly, we're going to address the 5G use cases: the virtualized RAN, which is the base station; the user plane function as another application; and SD-WAN for more of the on-premise side. But we are also really starting to see a lot of non-traditional, non-telco applications like video inferencing, because video is turning out to be kind of a killer app. In some of these IoT use cases you're seeing a lot of video inferencing and media analytics type use cases, so those are the types of use cases we are enabling. Okay, the next question is: where does the Kubernetes control plane run? Is it at the core data center or at the far edge itself? The answer is, it depends. We have seen clusters where you would run the Kubernetes control plane on the same node. It is not typical for the control plane to run at a different location, but it is not too far-fetched: especially in an IoT-type scenario, you might see cases where the node runs in a much smaller, constrained environment, especially in multi-node environments, and the control plane might be farther down in the data center. But we don't see that a whole lot, because that's quite challenging to implement. So typically we'll see the cluster, the control plane as well as the nodes, running as close to each other as possible. Okay, the next question: for the CNIs that are being developed as part of OpenNESS, will they not conflict with a regular Kubernetes CNI deployment using the Operator Framework? That's a good question.
I'll try to answer that; Amr, please do chime in. What we are trying to do with OpenNESS is ensure we can enable multiple CNIs, so we'll have a default CNI. Right now the default CNI is a project called Kube-OVN, primarily because OVN, from an SDN perspective, becomes quite important. But we also support Calico, and eventually we'll be supporting Cilium. We've got a few other CNIs that we support beyond the default, because you may have some user-space networking functions that require different CNIs. So the goal is to ensure there are multiple CNIs supported in this environment; that's a design goal for OpenNESS. And also on that point, what we are doing is building on top of existing CNIs and then providing our own enhancements or improvements to the data plane and the CNIs, so there are no conflicts as such. When building operators, or even building service meshes, we are building on top of whichever CNI was chosen for the deployment when OpenNESS was set up. Amr, do you want to take any other questions? There is an interesting question there about quality of service: how is quality of service supported in the platform? This is a good question. In the case of an LTE or 5G deployment, quality of service is supported in the 3GPP standard, and it's managed and controlled by the network function responsible for it; in the case of 5G, that's called the SMF, the session management function. Similarly, if the deployment involves SD-WAN, the SD-WAN controller takes care of ensuring and fulfilling the quality-of-service agreements that were promised for the types of applications. So the short answer is that quality of service is managed by the network functions that are deployed on OpenNESS.
There's another one about the FPGA, so let me read it out: regarding FPGAs, what's the speedup / latency improvement provided by offloading error correction codes? For that, there is a framework in DPDK called BBDEV, the wireless baseband device API, and that's what we're integrating in FlexRAN. FlexRAN uses this framework to accelerate forward error correction through the FPGA, and it's showing a lot of improvement, especially because this operation is very compute intensive and quite repetitive, so it takes a lot of CPU power to do all this computation. The FPGA eases a lot of that strain on the CPUs and takes care of the acceleration. There are a couple of questions in the chat window — Amir, do you want to take the first one? Yeah, there's another similar one, about VPUs: how much density increase is expected when using the Movidius VPU? The same idea applies to the Movidius VPU: we're able to run multiple video streams and do all the processing through the VPUs. Of course it varies depending on the type of acceleration you're using — we've talked about the HDDL and the PAC card, and we also support the Movidius stick. When these hardware accelerators are available, we see a lot of added capacity through OpenNESS. Next question: how are OpenNESS components selected and configured per use case? We use Ansible playbooks to do the OpenNESS bring-up and installation, and there we have what we call flavors, or reference architectures, which provide predefined configurations for the typical or most common use cases.
So in the case of a service mesh, a smart city application, or a video analytics application, the key ingredients involved would be, say, the service mesh, the CNI to be used, and whether you'd like any type of hardware acceleration. These components are turned on or off in the flavors, or reference architectures, that we provide. We also provide guidance so that if you have your own specific use case and you'd like to customize these flavors, you have the ability to pick and choose which components to integrate and deploy in the installation. There's another question about any plans to support PTP or SyncE as part of the platform. I don't have a straight answer to that, so if you can send an email to the developer mailing list, I can ask around and find an answer. Prakash — we have about six minutes left — do you want to take another question? Yes, a new one came up: what is the implementation language, and is there a recommendation on using any particular language or paradigm? The main language we're using in OpenNESS itself is Go, but for the applications and network functions it's really up to the application vendor or the network function owner to decide on their preferred language. You can use Go, of course, but any other language is fine too, because everything gets put into a container, and these containers are deployed through Kubernetes, maybe through pods, services, or DaemonSets.
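The deployment path just described — write the app in any language, build it into a container, hand it to Kubernetes — can be sketched as the manifest the application ends up as. The image and app names below are placeholders, not real OpenNESS artifacts; the structure is a standard Kubernetes DaemonSet, a common choice when you want one copy of the pod on every edge node.

```python
import json

def daemonset_manifest(name: str, image: str) -> dict:
    """Build a minimal Kubernetes DaemonSet manifest as a plain dict.
    A DaemonSet schedules one pod per (matching) node, a common
    pattern for per-edge-node applications."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "metadata": {"name": name},
        "spec": {
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Placeholder image; the binary inside can be written in any language.
manifest = daemonset_manifest("edge-analytics", "registry.example.com/edge-analytics:1.0")
print(json.dumps(manifest, indent=2))
```

Whether it ends up as a Deployment, a Service, or a DaemonSet like this one, the point is the same: the platform only sees the container, so the language inside is the vendor's choice.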
So in the end, these applications are packaged or bundled into containers — you can even have your own binaries, ship them inside the containers, and deploy them on OpenNESS. So it's really up to the vendor or developer to decide which language they'd like to use for their edge applications and services. Okay, any other questions before we go? Right, well, thank you so much, Amir and Prakash, for a great presentation and for answering everyone's questions. If that's all you have, we can go ahead and wrap up, unless there's anything else you want to add. I think — thank you to CNCF for the opportunity, and thank you to everybody for attending. Please do check out the website openness.org and please do engage. Thank you. Absolutely. Thanks for joining us today, everyone. The webinar recording and slides will be online later today, and we look forward to seeing you at another CNCF webinar in the future. Everybody have a great day. Thanks everyone for listening.