Welcome folks. We'll just give people a minute or so to join the webinar before we kick it off. All right, welcome everyone to the second iteration of LF Networking's new webinar series. Today we're going to be discussing Building the Future of Open Networks: How LF Networking Provides the Building Blocks. We've got four panelists today: Abhijit Kumbhare, Chaker Al-Hakim, Davide Cherubini, and Ranny Haiby. All come from LFN member organizations, and they're going to give an introduction of themselves before they speak. Before we kick things off, just a couple of quick housekeeping updates. All attendees will be muted during the session. However, we encourage Q&A, so if you have questions, there is a Q&A window; feel free to type those questions anytime during the session. The speakers, depending on the question, may interject with a response, but we do have time at the end where we will go through all of them. All right, without further ado, I'm going to hand it over to Ranny to kick things off for us. Thanks, Jill. Hi, I'm Ranny Haiby. I'm a director in Samsung's Global Open Source Group and also a member of the LFN Technical Advisory Council. Let me start by saying a few words about what Linux Foundation Networking is and who we are. Linux Foundation Networking, or LFN for short, is an umbrella organization supporting a set of open source projects related to networking. It has an ever-growing community of developers who collaborate on improving and expanding the projects. The Technical Advisory Council of the LFN, or TAC, has representatives from the various projects. Its role is to facilitate communication and foster collaboration among the projects. The TAC works in the open through regular meetings, a wiki space, and mailing lists. Our goal is to share expertise among the participating projects, and through that improve the overall quality and value of the software.
The TAC recently published a technical white paper that discusses the roles of the projects in building modern networks. The paper discusses the entire networking ecosystem, consisting of open source projects, standards, and commercial products. This webinar provides a glimpse into the content, but reading the full white paper is recommended for getting the broader view. If you can go to the next slide. So to understand the need for open source networking, let's look at how networks were built during the past few decades. Typically, the network operators, or communication service providers, acquired standards-compliant technology from network equipment providers, often referred to as vendors. In most cases, the technology was proprietary, based on software that was not available in the form of open source code. Innovation tended to happen slowly and at a high cost. The traditional model became unsustainable with the exponential growth in demand for bandwidth and extended reach to new locations and new devices. Think about IoT, for example. There is price pressure, as consumers expect to pay less for the services, yet get more bandwidth, more coverage, and more devices. Other industries, such as web-scale or enterprise software, adopted a more open model of innovation. There is a desire in the networking industry to follow such models and reap the same benefits, hoping to get better quality software and faster innovation, as there is an open and common platform for collaboration. Go to the next slide, please. So why is the open source approach more efficient, you might ask? Let's take a look at the traditional process of building networks. It started with requirements for a new type of service or technology. Then standards development organizations, or SDOs, got together to specify the architecture, protocols, interfaces, and flows required for the new service.
Once that work was complete, the network equipment providers could each take the specification and create their implementation. Despite the best of intentions, the standards left a lot open to interpretation, so each vendor's implementation ended up being slightly different from the others. This required a process of interoperability testing, sometimes referred to as bake-offs or plug-fests. Each iteration of this process required going back to the implementation, making adjustments and improvements, and repeating the interoperability testing. Sometimes the findings during this process required going all the way back to the standards and making adjustments there. Finally, when all the issues were ironed out, the products were ready for deployment. If we go to the next slide, we'll see how open source software streamlines the process. There are several aspects to that. First, the work on open source software can start in parallel to the standardization. Open source software development is dynamic and agile, allowing fast trial-and-error iterations. Vendors and operators may experiment with ideas even before they officially get standardized. The SDOs may use the open source project as a testbed in which they trial their innovative ideas. There is a constant loop of feedback between the standards and the open source implementation, helping the open source projects produce software that is aligned with the standards from day one, and enabling the SDOs to validate their new concepts, leading to better quality standards. Network equipment vendors may start working on their product implementation using early drops of the open source software. This way, they do not have to wait for the standards to be finalized. Vendors will begin from a common core software, significantly reducing the chance of incompatible interfaces. As a result of this process, deployment of new technology and services by the communication service provider can happen much earlier than before.
We can go to the next slide, please. Let's speak about the role of the LFN. The LFN's mission statement is to increase the availability of quality open source software for networking, with the goal of reducing the costs of building and managing networks. The LFN offers benefits to both the communication service providers and the network equipment providers. For the communication service providers, benefits include better control over their network and product roadmap, faster time to market with new services, increased security, and eventually reduced costs. If you go to the next slide. Open source is not meant to put network equipment providers out of business. On the contrary, it offers several benefits that help them remain competitive and bring value to their customers. These include reduced costs through sharing the burden with the community, a platform for direct interaction with their customers or potential customers, and an opportunity to become part of a multi-vendor ecosystem. There are benefits for everyone, whether communication service providers or network equipment providers, who actively participate in open source creation. Research shows that active open source participants learn how to better use the software in their environment, and they also see a boost in their productivity. If you go to the next slide. Here you can see just some of the standards development organizations, SDOs, and other open source projects in the networking ecosystem. You can see the LFN projects highlighted with green frames in this diagram. As mentioned before, open source in general and the LFN in particular is not aimed at replacing standards. In fact, the LFN is working closely with SDOs, and this work is visible in the form of coordinators that share knowledge between projects and SDOs in a bi-directional manner. You can see the signing of MOUs, Memorandums of Understanding, between the LFN and SDOs, indicating the commitment to alignment.
And occasionally we publish technical white papers that highlight the alignment between the different initiatives. The LFN projects constantly maintain alignment and integration with the entities shown in this diagram. This ensures that the provided software is aligned and compatible with other elements of modern networks. You can go to the next slide. Here we can see the LFN projects and the functionality related to the different layers required for building the modern network. Starting from the bottom with the transport layer, also referred to as the data path, where speed and reliability are key, the FD.io project focuses on fast packet processing. FD.io's work applies to multiple layers of the network, from layer 2 to layer 7. The next layer up is the network operating system, where the essential software components required for building a network device are integrated and packaged together. The OpenSwitch project abstracts the complexity of hardware implementation details of network devices and exposes a unified interface towards the higher network layers. Next, the network control layer is where ONAP, OpenDaylight, and Tungsten Fabric take network service definitions as input, break them into their more basic building blocks, and then interface with the lower layers of the network to instantiate and control the service. The top layer of the network functionality includes the components which provide visibility into the state of the network as well as automated network management. PNDA and SNAS can collect high volumes of network data in real time and make it available to external management systems. All the collected data can also be used by ONAP and its policy-driven control loop, which automates and dynamically controls the network in response to changing demand. Hey Ranny, sorry to interrupt, we had a quick question about which project is the CI/CD layer here? So the CI/CD refers mostly to OPNFV and the CNTT initiative. Great, thank you.
Which brings me to my last point about OPNFV and CNTT, which focus on the integration of the different layers. They provide tools and reference architectures for building networks. In addition, they provide verification programs for network infrastructure and for the virtual network functions. They ensure that the different components of the network are fully compatible with each other. Next slide please. It's important to remember that each project under the LFN may be used standalone and provide full and rich functionality in the domain it's targeting. There are currently eight projects under the LFN, and we plan to add more in the future. Obviously, we cannot go into the details of each project today, but we do have representatives on this webinar from two projects, Abhijit and Chaker. So in the next part of this webinar, we will talk about ODL, OpenDaylight, and ONAP. We will highlight what value they bring and how their open interfaces may be used to build the broader network solution. So with that, I'll hand it over to Abhijit. Yes, this is Abhijit Kumbhare. I'm the TSC chair for the OpenDaylight project. I have been involved with the OpenDaylight project since its inception in 2013, and actually even before that, when it was getting formed, and it's been quite a nice journey. I think we'll just go through it a little bit, part by part, in as small a time as possible. So OpenDaylight, as such, was the Linux Foundation's first attempt at a networking project. It was the first project where networking companies, vendors, and providers came together and created a project in the open source in the Linux Foundation. This project is now part of the LFN umbrella. As I said before, it was founded in 2013. We just completed the 12th release in April, called Magnesium. We actually follow a naming convention based on the elements of the periodic table.
So this was a decision that we had taken at the start of the project, where we started with Hydrogen, then followed with Helium, etc. Since the release cadence is around six months, we have given ourselves around 50 years of release names, so we are unlikely to run out of them any time in the future. As such, OpenDaylight is also the most widely deployed open source SDN controller. Overall, it's a modular platform for customizing and automating networks of any size and scale. Right from the beginning, we designed it in such a way that it can be a foundation for commercial solutions addressing various types of use cases, not just a particular use case. Here I will just cover a partial list of use cases; if your favorite use case is not here, you can contact me later. Going on to the OpenDaylight architecture, the two key aspects of the OpenDaylight architecture that we need to look at are, first, that it is a model-driven architecture instead of a hardwired architecture. What that means is that in a hardwired architecture, for programming networking devices via different protocols, we would have taken the existing APIs of the different plugins for the different protocols and come up with an API abstraction that meets all of their needs. That would have been a much more complicated process, less flexible, and more prone to errors. So instead of that, we came up with this model-driven approach, where the SDN applications actually deal with software models of network devices instead of dealing with the networking devices themselves. The application deals with the models, and the platform then deals directly with the devices. Everything inside of OpenDaylight is represented as models; even the OpenDaylight applications and services are represented as models.
The interactions between the models are processed within what is the center, the kernel, of the OpenDaylight platform, called the MD-SAL, or Model-Driven Service Abstraction Layer. The MD-SAL is probably the key reason for OpenDaylight's longevity and flexibility. The second aspect of the OpenDaylight architecture is that the framework is quite modular and multi-protocol. It allows developers and users to install only the protocols and services that they need, rather than everything under the sun. You can also combine multiple services and protocols to solve more complex problems as the needs arise. The bottom line is that this allows us to address a variety of use cases, as I mentioned before. Just to understand how the MD-SAL helps you develop applications: the first step you would do is define your models for that application, or for the device or the device type. YANG is used for the modeling of this. You compile that model with a piece of software inside of OpenDaylight called YANG Tools, and the compilation results in the skeleton of the application, including the REST API and the model itself. If you look at the diagram in the middle, the highlighted elements are the application skeleton that YANG Tools has generated for you, and the model implementation, in green, is where you actually have to write your code: write your handlers, notifications, interfacing, etc., and do whatever your application needs to do. Next slide please. As far as the use cases, we have time for covering only a few of them. The first use case I'm covering is ONAP, or specifically the ONAP components SDNC and APPC. These are extended from the OpenDaylight controller framework to manage the state of network resources. SDNC is used for the layer 1 to 3 network elements, and APPC is used for the network functions at layer 4 to 7, like load balancers, etc.
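The model-driven workflow described above, where a YANG model drives both the generated API and the validation of instance data, can be sketched in plain Python. This is only an illustration of the idea: real OpenDaylight compiles YANG with YANG Tools into Java skeletons, and the module name, leaves, and helper functions below are invented for this sketch.

```python
# A tiny "model" describing a device feature, a generated REST-style path,
# and validation of instance data against the model. Names are illustrative.

MODEL = {
    "module": "example-interface",   # hypothetical YANG module name
    "container": "interfaces",
    "leaves": {"name": str, "mtu": int, "enabled": bool},
}

def restconf_path(model: dict) -> str:
    """Derive a RESTCONF-style config path from the model, as a generated API might."""
    return f"/restconf/config/{model['module']}:{model['container']}"

def validate(model: dict, instance: dict) -> list:
    """Check instance data against the model's leaves; return a list of errors."""
    errors = []
    for key, value in instance.items():
        expected = model["leaves"].get(key)
        if expected is None:
            errors.append(f"unknown leaf: {key}")
        elif not isinstance(value, expected):
            errors.append(f"leaf {key}: expected {expected.__name__}")
    return errors

good = {"name": "eth0", "mtu": 1500, "enabled": True}
bad = {"name": "eth0", "mtu": "big", "speed": 100}

print(restconf_path(MODEL))       # /restconf/config/example-interface:interfaces
print(validate(MODEL, good))      # []
print(len(validate(MODEL, bad)))  # 2
```

The point of the sketch is that the application never hardcodes a device API: both the URL and the validation rules fall out of the model, which is the essence of the MD-SAL approach described in the talk.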
Managing the state of resources is done via application-level configuration, via NETCONF, Chef, Ansible, etc., or via lifecycle management: stop, resume, health check, etc. The diagram to the right shows how SDNC is leveraging the OpenDaylight framework, including the API handlers, operational and configuration trees, adapters, etc. This diagram is pretty detailed, but it does show how OpenDaylight is actually used inside of the SDNC framework. Next slide please. The second use case is the network virtualization use case for cloud and NFV. In this, the OpenDaylight NetVirt application can be used to provide network virtualization, basically overlay connectivity inside and in between data centers, for the cloud SDN use case. It will let you create the VXLAN tunnels within the data center and layer 3 VPN tunnels across data centers, so you can have seamless network virtualization. Next slide please. The last use case I'm talking about is called network abstraction. OpenDaylight can expose a network services API for northbound applications, for network automation in a multi-vendor network. This addresses a problem for operators, or rather for web-scale companies and the like, who run network devices from different vendors: it's hard to configure them separately and still have a single automation. To that end, a new project inside of OpenDaylight was actually created last week. The effort has been going on for a while, so I did not have this part in the white paper, but it is exactly the instantiation of the network abstraction use case. The project is called the OpenDaylight Service Automation Framework, and it provides heterogeneous device management, so you can program vendor devices with varying interfaces: CLI, etc.
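The network-abstraction idea above, where one vendor-neutral call is translated into per-vendor device interactions, can be sketched as follows. The driver classes, vendor names, and payload formats are invented for illustration and are not part of the actual ODL Service Automation Framework APIs.

```python
# One vendor-neutral service call dispatched to vendor-specific southbound
# drivers: one driver emits CLI lines, the other a NETCONF-style XML payload.

class CliDriver:
    vendor = "vendor-a"
    def apply(self, intf, mtu):
        # Vendor A is configured via CLI commands.
        return [f"interface {intf}", f"mtu {mtu}", "commit"]

class NetconfDriver:
    vendor = "vendor-b"
    def apply(self, intf, mtu):
        # Vendor B is configured via an XML config payload.
        return f"<interface><name>{intf}</name><mtu>{mtu}</mtu></interface>"

DRIVERS = {d.vendor: d() for d in (CliDriver, NetconfDriver)}

def set_mtu(vendor, intf, mtu):
    """Vendor-neutral service call: dispatch to the right southbound driver."""
    return DRIVERS[vendor].apply(intf, mtu)

print(set_mtu("vendor-a", "ge-0/0/1", 9000))
print(set_mtu("vendor-b", "ge-0/0/1", 9000))
```

The automation layer on top only ever calls `set_mtu`, so adding a new vendor means adding a driver, not rewriting the automation, which is the multi-vendor benefit the talk describes.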
On the diagram on the right, of the two icons, one shows how you interact with the CLI devices directly, and the other shows how you could use the ODL Service Automation Framework to interact with the network devices using the same kind of framework. This is to simplify service provisioning. The second thing that is actually needed in network automation processes is a transaction management capability for device service provisioning: that is, pre-check of the configurations that you are going to send, post-check that the configurations actually succeeded, and, if there was any failure, a rollback of the configurations. So that's where my part ends, and I will hand it over to Chaker. Okay, thank you Abhijit. Hi everyone. My name is Chaker Al-Hakim. I work for a company called Futurewei, based out of Santa Clara, California, where I'm an executive director for open source and for virtualization. Within the LFN community, I am the chairman of the architecture subcommittee of the ONAP project, and I am also a member of the TAC, the Technical Advisory Council, of LF Edge, which is another umbrella project under the Linux Foundation. What I will present in the next few view graphs is the ONAP platform. Before I start, I'll give you a little bit of the history of ONAP. ONAP was created back in the day, back in 2013, as a closed-source project with a major service provider, and basically I was there since day one, through when it became an open source project under the LFN umbrella, and as it evolved to where it is today. So I've been with the project for a while, and I do have a very good background in terms of where the project started and where it is today. With that, I'll take you through some of the view graphs and explain to you at a high level what ONAP is. We'll go into some of the components.
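The transactional provisioning flow Abhijit described, pre-check, apply, post-check, and rollback on failure, can be sketched in a few lines. This is a minimal sketch under invented names; real frameworks would snapshot device state remotely, not in a local dict.

```python
# Pre-check / apply / post-check / rollback for device provisioning.

def provision(device, new_config, pre_check, post_check):
    """Apply new_config transactionally: validate before sending, verify after,
    and roll back to a saved snapshot if the post-check fails."""
    if not pre_check(device, new_config):
        return "rejected"            # config invalid, nothing was applied
    saved = dict(device)             # snapshot for rollback
    device.update(new_config)
    if not post_check(device):
        device.clear()
        device.update(saved)         # roll back to the snapshot
        return "rolled-back"
    return "committed"

# Illustrative checks: MTU must be in range before sending, positive afterwards.
pre = lambda dev, cfg: cfg.get("mtu", 1500) <= 9216
post = lambda dev: dev.get("mtu", 0) > 0

dev = {"mtu": 1500}
print(provision(dev, {"mtu": 9000}, pre, post))   # committed
print(provision(dev, {"mtu": 99999}, pre, post))  # rejected
print(provision(dev, {"mtu": -1}, pre, post))     # rolled-back
print(dev["mtu"])                                 # 9000
```

Note that after the rejected and rolled-back attempts the device still holds the last committed configuration, which is exactly the guarantee transaction management is meant to provide.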
And then we'll give you a view of the benefits of ONAP and where we are today with the current release. So what is ONAP? Sorry, if you could go back. Thank you. What is ONAP? It's the Open Network Automation Platform. It was created to address the common orchestration and automation needs of the service provider domain, where you may have a multitude of network elements that may or may not be from the same vendor. ONAP was created to basically provide a common way to orchestrate virtual functions across a multi-vendor base, to provide a lifecycle process, and also to support the DevOps model. Next slide, please. So what ONAP does is the following. It provides service providers, mainly large service providers but also smaller ones, with the ability to dynamically introduce a new service, with full service lifecycle management and orchestration. What we mean by that is everything from the way the service is designed, using well-tested and well-defined VNFs, or virtual network functions, to the way the service gets created, using an abstraction layer that can create the service from any vendor's VNF without impacting the way the service gets designed. Once the service is designed, it gets orchestrated. After the orchestration, you can set the policies and you can set the control loop, and now you have a service that is fully deployed in an automated fashion, with your policies and your control loop handling the lifecycle management of the service. The APIs that are provided are open APIs. The underlying technology is model-driven, so you can change the type of VNFs that you're using and the type of platform that you're orchestrating on; the fact that you have common APIs and a common data model gives you the flexibility to do that seamlessly. And then there is the scalability.
ONAP is carrier grade, which includes horizontal scaling and distribution to support a large number of services and large networks. That was the basic premise of ONAP. It has a data-driven and policy-driven architecture, which allows for significant flexibility: the process of orchestrating the virtual functions and the services is automated, and you have the ability to do that in a very expedited and efficient fashion. The architecture allows for sourcing of best-in-class components: you can pick and choose which components you want to use, which VNFs you want to use, and create the service by mixing and matching. The architecture of ONAP is perfect for those types of capabilities. It allows you to basically develop once and deploy many: you can create a template, or a recipe, to deploy a specific virtual function or virtual service, and you can reuse that template or recipe to deploy many instances of the same service or the same virtual function. Last but not least, it does support elastic scaling as needs grow or shrink. You can basically add resources on the fly, and if the resources are no longer needed, you can reclaim them, put them back in the inventory, and use them for a different service or a different VNF. Next slide, please. So what is the scope of ONAP? There are two paradigms within ONAP: you have the design time and what we call the runtime. The design time framework allows the service provider to design and create any service based on their needs and based on the type of VNFs, or PNFs for that matter, that they're planning to use; we call them xNFs. Based on their needs, they compose the service, and the service gets composed. You don't have a time frame for composing the service.
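The "develop once, deploy many" idea above, one template or recipe instantiated repeatedly with per-instance parameters, can be sketched as follows. The template fields, service names, and sites are all invented for illustration; real ONAP templates are TOSCA-based and far richer.

```python
# One service template (recipe) instantiated many times with different parameters.

TEMPLATE = {
    "service": "vFirewall",              # hypothetical service name
    "vnf": "fw-image-v2",                # hypothetical VNF image
    "params": ["site", "capacity"],      # per-instance parameters
}

def instantiate(template, **params):
    """Create one service instance from the template, checking required params."""
    missing = [p for p in template["params"] if p not in params]
    if missing:
        raise ValueError(f"missing params: {missing}")
    return {"service": template["service"], "vnf": template["vnf"], **params}

# Deploy the same recipe at three sites without redesigning anything.
sites = ["nyc", "lon", "tok"]
instances = [instantiate(TEMPLATE, site=s, capacity=10) for s in sites]

print(len(instances))        # 3
print(instances[0]["site"])  # nyc
```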
You can take whatever time you need to compose it. Once the service has been composed and created, a bundle of that service is created and passed on to the runtime framework. The runtime framework has two major components. One is the service deployment: how do I orchestrate the service? How do I make sure the service is up and running? How do I inventory all the resources that are associated with the service that I just orchestrated? Once that is done, the control is passed on to the operations side of the service that was created, and now the data collection and analytics take place. From the data collection and the analytics, you can derive whether or not you have an issue with the service. You have a policy engine and a set of policies that you designed during design time, and those take over and allow you to autonomously monitor the system and the application, and basically be able to recover from many failures by using the closed-loop automation, or the closed-loop notion, that is part of the service design and creation. Next page please. Taking the architecture down one level of detail: the left side, the side that is outlined in the diagram, is basically the service design and creation part of the ONAP service. That's the component that allows you to create the service. As you can see, there are many subcomponents; ONAP is a very, very large system. The service design and creation has all these components that are used during service design and creation, including the catalog, the way you do the data collection and analytics, the way you onboard the xNFs, and so on and so forth. Once that piece is done, and this piece is totally independent of the orchestration, right?
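The policy-driven closed loop described above, collect data, run analytics, and let a policy decide the corrective action, can be sketched in miniature. The metric, threshold, and action names are invented for illustration and are not real ONAP policy or DCAE APIs.

```python
# Minimal closed loop: a latency metric is checked against a policy threshold,
# and a breach queues a corrective action (e.g. asking SO to re-instantiate).

POLICY = {"max_latency_ms": 100, "action": "restart-vnf"}  # illustrative policy

def analytics(samples):
    """Flag an anomaly if the average latency breaches the policy threshold."""
    return sum(samples) / len(samples) > POLICY["max_latency_ms"]

def control_loop(samples, actions):
    """One pass of the loop: analyze collected data, act if anomalous."""
    if analytics(samples):
        actions.append(POLICY["action"])  # corrective action, policy-driven
        return "remediated"
    return "healthy"

actions = []
print(control_loop([20, 30, 25], actions))     # healthy
print(control_loop([150, 200, 180], actions))  # remediated
print(actions)                                 # ['restart-vnf']
```

The important property is that the loop itself is generic: changing the remediation behavior means changing the policy data, not the monitoring code, which is what "policy-driven" means in the talk.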
You could design many services at the same time, and when a service is ready, you basically push a button and that service gets moved over to the orchestration part and it gets created. The key over here is the way you plan the onboarding process. You need people that are familiar with your services, people that are familiar with the VNFs, and people that are familiar with your target environment, whether it is an internal cloud or an external cloud. All those items are part of the service design and creation process. Once you do that, you have to understand what resources you need and how the service gets composed. Then you do the distribution piece: the service is bundled, and then it gets distributed in the form of what we call a CSAR, a Cloud Service Archive, which is a TOSCA-based package that gets passed from the service design and creation on to the orchestration and the operations part. Next slide please. So once the service gets deployed, and again, what you see here is all the major components of ONAP, and there are 11 or 12 major components, the service gets deployed through the orchestration into the runtime environment, and that's where the orchestration steps start to take place. The orchestrator, which is the middle box labeled Service Orchestration, or SO, takes over and coordinates all the orchestration steps that are needed: it knows what resources it needs, where to get them, and where to deploy the service, based on the way the service was designed.
It understands what the network topology should be, so it passes control to the SDN controller to create these network topologies. As Abhijit said before, the SDN controller is custom developed for ONAP, but it actually runs within the ODL container, and so does the APPC, which is adjacent to it in the bottom row of the diagram. Once all these steps have been executed, one of the last steps that the service orchestrator performs, which is a very critical step, is to inventory all the resources that have been assigned to that specific service into the component that is called Active and Available Inventory, or A&AI. The first A is for active: those are the resources that have been used up. The second A is for available: those are the resources available for the orchestrator to use in case it needs to orchestrate another service or another VNF. Once that is done, the VNF is up and running, and data collection begins. Once the data collection starts, the analytics piece takes over, because you have a set of microservices doing the analytics, working on the data that has been collected. These analytics microservices determine whether an anomaly was detected, and if there is an anomaly, they know how to access the policy engine, which is yet another component, the second from the left at the top: the policy framework. The policy framework works very closely with the control loop automation process to make sure that whatever policies are defined, the control loop will implement them. It could be that the service has gone offline: you send a request back to the service orchestration component, and service orchestration will go ahead and re-instantiate that service. Next slide, please.
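The Active and Available Inventory idea described above, resources moving from "available" to "active" when a service is instantiated and back when it is torn down, can be sketched as a toy model. The class and resource names are invented; this is not the real A&AI data model.

```python
# Toy active/available inventory: allocation moves a resource to the active
# set (tagged with the service using it); release reclaims it for reuse.

class Inventory:
    def __init__(self, resources):
        self.available = set(resources)
        self.active = {}                 # resource -> service name using it

    def allocate(self, resource, service):
        """Mark a resource as active (used up) by a service."""
        if resource not in self.available:
            raise LookupError(f"{resource} not available")
        self.available.remove(resource)
        self.active[resource] = service

    def release(self, resource):
        """Tear-down path: reclaim the resource for future orchestration."""
        self.active.pop(resource)
        self.available.add(resource)

inv = Inventory({"vm-1", "vm-2"})
inv.allocate("vm-1", "vFirewall")
print(sorted(inv.available))   # ['vm-2']
print(inv.active)              # {'vm-1': 'vFirewall'}
inv.release("vm-1")
print(sorted(inv.available))   # ['vm-1', 'vm-2']
```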
I mean, sorry, next page. I know that I'm going very fast, just in the interest of time, but to give you an idea as to what the benefits are, and I hope you were able to see or notice some of them when I presented the previous view graphs: you have a common automation platform that enables common management of services, and allows for creating network topologies and connectivity independent of what type of VNF or what type of service you're trying to create. Whatever service you create, it orchestrates it for you, independent of the underlying technology or the underlying VNF that you're using. It gives you a unified operating framework, the DevOps model. It is policy-driven and model-driven. The lifecycle is well defined, and lifecycle management is handled within one component based on the needs of the service. It supports orchestration of both virtual and physical network components. It offers you a way to configure the VNFs in a well-defined manner. It is model-driven, no matter what you use, what service you're trying to compose, and what VNFs you're trying to use. You have one abstraction, called a service instance, that allows you to basically inventory any service, and that modeling also allows the operators to use the same deployment and management mechanisms for all the services that run on the platform. Last but not least, the upcoming release, which is coming out in June, is the Frankfurt release; the ONAP releases are named after major cities associated with major member companies. So the next one is the Frankfurt release, and it will be available in the June timeframe of this year. With that, I will turn it over to Davide to take you through the end-to-end use cases. Thanks, Chaker.
So my name is Davide, I'm the open source lead in the Vodafone group, and I'm also a member of the Technical Advisory Council of the LFN. In this section we will present an end-to-end use case example where the eight LFN projects, including the six that haven't been presented in this slide presentation, are used together to create an end-to-end service, and they will manage the whole lifecycle of the service: the whole design, build, and run. Before diving into the example, I'd like to make some important points from an operator perspective. Operators normally aim to provide innovative end-to-end business services to their customers through repeatable, simple processes with a high level of automation. Operators are also looking for solutions that enable them to innovate quickly and flexibly. So what is an operator expecting from the open source projects? Well, first of all, and most importantly, openness: for example, the adoption of standard open APIs in order to avoid proprietary solutions and vendor lock-in. We are also expecting future-proof innovation at pace, to quickly adapt to future networks as well as future products and services, and to offer the best to our customers. Finally, technical robustness, in terms of solutions and also in terms of high availability, while embracing modern software development practices like CI/CD and DevOps. And of course security is one of the most important pieces. There are many more things that we, the operators, are expecting from the open source communities, like automation, simplicity, reusability of the projects, and so on and so forth. So let's dive into the end-to-end use case. Next slide please.
So it's a very simple example where we use the eight LFN projects working in harmony to deliver an end-to-end service that includes virtual network functions, connectivity, including connectivity to the internet, and analytics-powered closed-loop assurance. Of course, this is just a very simple example and a suggestion; the flexibility of the eight LFN projects allows you to adapt it to the use case at hand. In this example specifically, we're going to deploy two VNFs, two virtual network functions, on top of our NFV infrastructure, which for example can be OpenStack. These virtual network functions might be interconnected in the manner of a service chain, and they will be provided with external connectivity to the internet in order to allow our customers to consume the VNFs. The network functions also require network acceleration, for example for data-plane packet processing, and the service must be assured using analytics and even closed-loop operations. So it's the whole lifecycle, plan, design, build, and run, that we're going to present, and we structure this use case in three phases. The first phase is where we build the infrastructure and prepare the network functions. What does it mean to build the infrastructure and prepare the network functions? As mentioned by Rani, there is a project called CNTT, the Common NFVI Telco Taskforce, and we recommend you join one of the next LFN webinars on May 27, which will be dedicated to CNTT. Following the CNTT reference model, an operator might decide which CNTT architecture, and you will learn about this in the webinar, may be best for the use case. This is followed by picking a set of infrastructure components that fit the CNTT reference implementation. The infrastructure is then built using the deployment tools and the CI/CD tools provided by OPNFV, as mentioned by Rani.
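The example service just described can be summarized in a small descriptor so the moving parts stay visible. This is an invented, illustrative schema, not a real ONAP or TOSCA descriptor; the keys and values are hypothetical placeholders for the elements named above.

```python
# Illustrative summary of the example end-to-end service. The schema is
# invented for this sketch; it is not a real ONAP/TOSCA model.
service = {
    "name": "example-e2e-service",
    "nfvi": "openstack",                    # phase 1: infrastructure build
    "vnfs": ["VNF1", "VNF2"],               # the two network functions
    "service_chain": [("VNF1", "VNF2")],    # chained interconnection
    "external_connectivity": "internet",    # so customers can consume the VNFs
    "acceleration": "data-plane",           # fast packet processing requirement
    "assurance": "analytics-closed-loop",   # phase 3: run-time assurance
}

# The three phases of the use case cover this structure end to end:
phases = [
    "build infrastructure and prepare VNFs",
    "design and deploy the service with ONAP",
    "run and assure the service with closed loops",
]
print(len(service["vnfs"]), len(phases))  # 2 3
```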
The infrastructure is certified using the CNTT reference certification and the OPNFV Verified Program, or OVP, which is part of the LFN Compliance and Verification Committee (CVC) work. Several LFN projects may be used as infrastructure building blocks in order to address the needs of the network functions, such as, for example, high-throughput or low-latency networking. For example, OpenDaylight and Tungsten Fabric can be used as third-party SDN controllers to provide network connectivity. OpenSwitch can be used to build the physical underlay network that connects the physical servers, the physical hosts. FD.io provides data-plane networking acceleration through its Vector Packet Processing (VPP) technology. The virtual network functions are then prepared for deployment and inclusion in network services. How does it work? A VNF vendor pre-validates and certifies two VNFs, in the example in the picture VNF1 and VNF2, through the OPNFV Verified Program, the OVP. The vendor also ensures that each VNF complies with the ONAP VNF requirements, the arrow you see at the top, and this will enable ONAP to properly control the lifecycle of the VNF as part of the network service. Once this is ready, we can move into the next phase, which is more the ONAP phase, in terms of design time and runtime. As mentioned by Chakir, ONAP here is central because it orchestrates the whole service. At design time, ONAP is used to onboard the VNFs that were made compliant in the previous step. These compliant resources, once they are part of the resource catalog, can later be reused to design any type of end-to-end service using the ONAP SDC, the Service Design and Creation component. At runtime, instead, ONAP orchestrates the deployment of the whole end-to-end service, and the ONAP Service Orchestrator instructs the underlying ONAP functions, like for example the SDNC or the APPC, in order to deploy all of the elements that compose the end-to-end service.
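The vendor pre-validation step described above is essentially a checklist gate that a VNF package must pass before it can be onboarded. The sketch below is hypothetical: the requirement names are invented for illustration and are not the actual ONAP VNF Requirements or OVP test-case names, which live in their own test suites.

```python
# Hypothetical sketch of the pre-validation gate a VNF package passes before
# onboarding. Requirement names are invented; real checks are defined by the
# OVP test suites and the ONAP VNF Requirements project.
REQUIRED_CAPABILITIES = {
    "packaging_descriptor",      # deployable artifact (e.g. Heat/TOSCA package)
    "event_reporting",           # can report fault/performance data
    "lifecycle_hooks",           # exposes start/stop/configure operations
    "no_hardcoded_credentials",  # basic security hygiene
}

def ovp_precheck(vnf_name, capabilities):
    """Return (passed, missing) for a candidate VNF package."""
    missing = REQUIRED_CAPABILITIES - set(capabilities)
    return (not missing, sorted(missing))

ok, missing = ovp_precheck("VNF1", list(REQUIRED_CAPABILITIES))
print(ok)  # True: this VNF could proceed to onboarding in the catalog
```

Only packages that pass the full gate enter the resource catalog, which is what lets the orchestrator later treat every onboarded VNF uniformly.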
So, for example, ONAP deploys the VNFs on the NFVI, in this case OpenStack, and creates the overlay network connecting them together, and it does that using the ONAP SDNC. The ONAP SDNC, as mentioned by Chakir, uses the OpenDaylight-based architecture to model and deploy the layer 1 to layer 3 connectivity. Next, the ONAP APPC is used to configure the network functions and their layer 4 to layer 7 functionality, and as mentioned by Chakir, this is also based on the OpenDaylight architecture. OpenDaylight itself, which you see in the picture, may be used to stitch together the physical switch fabric of the infrastructure with the virtual networking in the NFVI, for example OpenStack Neutron. Through the OpenDaylight northbound interface, the ONAP SDNC is able to instruct the OpenDaylight SDN controller for the underlying network management, whereas the southbound interface of OpenDaylight, for example NETCONF, supports interactions with OpenSwitch, another LFN project, running on the leaf and spine fabric switches in the NFVI. Finally, the ONAP SDNC southbound interface also allows instructing Tungsten Fabric, another LFN project, to create the external connectivity that will enable the customers to consume the service offered by the two VNFs. The policies that are predefined in ONAP, and that will control the lifecycle of the network service, are designed using the design-time components such as Service Design and Creation and CLAMP. The next phase is the run phase, the operations phase, where we have the closed-loop operations. In this case, ONAP, again being the brain, the orchestrator, the automation part of this example, plays a central role in delivering the end-to-end assurance of the service. So, for example, the VNFs, the virtual network functions, report their performance and fault data to the ONAP DCAE, the Data Collection and Analytics Engine.
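The closed loop built on that collected fault and performance data can be sketched as a tiny event-policy-action pipeline: events stream in from the VNFs, a predefined policy evaluates them, and a corrective action comes out. The event shape below only loosely echoes the idea of structured VNF event reporting; the field names and the CPU-threshold policy are invented for illustration and are not real DCAE or Policy framework artifacts.

```python
# Minimal closed-loop sketch: collect VNF measurement events, evaluate a
# predefined policy, emit a corrective action. Field names and the threshold
# policy are illustrative only, not actual ONAP schemas.
def evaluate(event, cpu_threshold=80):
    """Return a closed-loop action for one measurement event, or None."""
    fields = event.get("measurementFields", {})
    if fields.get("cpuUsagePercent", 0) > cpu_threshold:
        return {"action": "scale-out", "target": event["source"]}
    return None

events = [
    {"source": "VNF1", "measurementFields": {"cpuUsagePercent": 42}},
    {"source": "VNF2", "measurementFields": {"cpuUsagePercent": 93}},
]
actions = [a for a in (evaluate(e) for e in events) if a]
print(actions)  # [{'action': 'scale-out', 'target': 'VNF2'}]
```

In the real platform the action would not be a returned dict but an orchestration request, so the loop closes without a human in the path, which is the whole point of designing the policies up front.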
They do this using an interface called the VNF Event Streaming interface, the VES interface. This information is constantly analyzed in ONAP, and it may trigger predefined policies that we created at design time. The policies are then used to invoke closed-loop automation actions, such as scaling or healing one of the service components, in order to meet the service level agreement that we have with our customers. Finally, the closed-loop operations may be further enriched by combining the LFN real-time analytics capabilities such as SNAS.io and PNDA.io, which are two additional LFN projects. So, for example, information about changes in the network topology is collected and gathered by SNAS, and this information can be used to trigger ONAP policies that will spawn more instances of the packet-routing network functions. On the other hand, the data analytics capabilities of PNDA may be used to trigger ONAP policies based on data streams produced by all the layers of the infrastructure as well as the network functions. In this case, ONAP may respond to an infrastructure issue detected by PNDA by migrating, for example, VNFs from an affected location to one that is healthy and has available resources. Again, this is a really simple example, and I'm conscious of the time, maybe I was too fast, of how, for example, the LFN analytics projects can be used in conjunction with ONAP and the rest of the LFN family to meet especially the agreed service level, which is what is important for an operator. This was my last slide, and I would like to hand back over to Rani for the final remarks. I thank you all. Thanks, Davide. We hope that today's webinar gave you an idea of how the LFN provides the building blocks for modern networks, with standards compatibility and open interfaces.
The LFN projects are easily integrated with other elements in the network, and network designers may choose to use anything from just one project to the entire set of projects when building their networks, so it's not a package deal. And finally, we really encourage collaboration. You may start by just asking questions, proposing ideas, or helping with ongoing work. The getting-started link here can guide you through the first steps of your participation. You're welcome, and we always welcome new participants, new eyes, and new ideas. This concludes the presentation part of this webinar, and we will now be happy to address any of your questions. Thank you. Great, thank you to all of our panelists, and thank you to all who submitted questions. We have several that came in through the chat window; lots of them were already answered in writing, but we do have a few that we can address in our last few minutes here. To kick it off, somebody is asking: are you working to standardize the APIs between these components? If we want to go to a best-of-breed scenario with ONAP today, the integration efforts are really big. Yeah, so this question came in two parts, and I typed the answer to the second half, but I'll summarize it: some of the interfaces within ONAP already follow existing standards where they exist, such as the ETSI NFV specifications, and for the others, the APIs are well documented and of course provided open source. We are already seeing some successful use cases where people and organizations are using a subset of the components, successfully using those interfaces. Yes, if I may add, we have an effort underway working with SDOs, mainly ETSI, like Rani just mentioned: one for the orchestration piece, where you could get an orchestration request from outside ONAP.
And the second is for performance management data collection, through the VES collector that Davide just mentioned. So these are two efforts that are currently being worked on; we've done some work in the previous release and we are adding new features and functionality in the upcoming release. Actually, I would also like to add that OpenDaylight has, over a long period of time, been working with the standards bodies to standardize YANG models and things like that. So we do work with the standards organizations. Great, thank you. Just a couple of other questions. What's the level of maturity of the projects in the LFN? Are they ready to be deployed in production yet? Yeah, so most of them are already being successfully deployed. Abhijit, can you say a few words about OpenDaylight, and maybe Chakir can mention a few things about ONAP? Yes, so OpenDaylight has been deployed in production networks for a while now. In fact, some of those networks are as big as one billion users going through them, so it's been deployed at different service providers. As far as ONAP goes, there are a few major companies, service providers mainly, that have deployed it, and it's been deployed, I would say, for at least the past couple of years. The company that I used to work with deployed it three years ago, and other companies that have been major contributors have deployed it and decided to make it part of their future direction in terms of orchestrating and deploying services. So we have two major companies in North America, and two major ISPs in Europe and the Far East as well, that have made a solid commitment to move ahead using ONAP in their production environments, if they're not using it already. Great.
So this will be our last question, unless there are any more; please feel free to type them now. We've been asked whether you all think the model of free and open source software is going to drive equipment vendors out of business. Yeah, so I think I tried to address that, but the important thing is that the open source software itself is just one part of the product or the solution. It's an important part, but there is still a need for integrating the software, testing it, and providing support for it, and this is still the role mostly of the classic vendors or newly arriving vendors. So everyone is collaborating on creating the core software, but each vendor can still differentiate and provide its own unique value on top of that. Okay, great. Well, a big thank you to all of our panelists today, and thank you to everyone who joined us. A quick reminder: the slides will be available tomorrow, and anyone who registered for the webinar will get them via email. And if you have friends or colleagues who might be interested in this, they can watch it on demand at the same registration link that you signed up on. All right, thanks again, and stay tuned for more webinars coming from LF Networking. Have a great day, everyone. Okay, thank you. Thank you. Thanks, all.