If I can ask people to take their seats or take the conversation out to the corridor, ideally one or the other, it doesn't matter which, I don't mind. So our next presentation is about ONAP, which is a Linux Foundation project that has a lot of momentum right now. It's a platform for the automation and management of virtual network functions on an NFV platform. So without further ado, I leave it to Gervais Martial and Christophe to present ONAP. Thank you.

Hello. It is coming. Give us one second until the computer starts. Maybe we can introduce ourselves. So we work for AT&T in Namur. I graduated from ULB here, so I'm kind of home. And we've been working in software development for many years, 20 for me and 15 for Christophe. Yes, we've been in this industry for quite a few years now. Let me start the presentation right away.

OK, so ONAP. What we will do is try to introduce you to ONAP, because it's quite a young open source project. We haven't even celebrated our first year, so we are very young. We'll give you a quick overview of what ONAP is and what it's trying to do. And since this is all very theoretical, we'll go through a use case that is implemented in our first release, called vCPE. Then we'll touch base on what a control loop is and why we would want that in an SDN network. And we have actually recorded the demo, to avoid the demo effect and limit the risk. We have videos, so we are sure it's going to work.

So, as Martial was saying, we have both been working in the telecom industry for a few years now. I'm a contributor to ONAP, and Martial is the CLAMP PTL, which means the Project Technical Lead for one of the applications within ONAP. Yeah, the application for our control loops, actually.

So, what is ONAP? As I said, it's a young open source project that was formed in March 2017. It's actually the merging of two previous open source projects from two big network service providers.
One was called OPEN-O, built by China Telecom and ZTE, and the other was OpenECOMP, which is the AT&T one. We merged the two solutions to try to get to a common understanding of what an orchestrator for SDN would be.

So, what is ONAP really? It's a platform, meaning it's a collection of applications. You'll see in the next slide, don't get frightened, there are a lot of applications. It's meant to orchestrate the life cycle of VNFs, and it's divided into two main pieces. On the left of the graph, you have what we call the service and recipe design. This is really what we call the design time. When you speak about VNFs, it's not only about taking one and loading it up in the network. Most of the time, you want the ability to combine them, to create something a little more innovative or something that gives a competitive edge. So you want something that allows you to mix them up, test them, design them, certify them, and make sure you also leave some room for licensing, because obviously we all like open source, but there are commercial products out there and you want to be able to meet both worlds. Once you've done that, you want to push it to what we call the runtime. The runtime is really the thing that is going to do the orchestration, instantiate the function in the cloud, and do all the life cycle management.

So, this is the architecture. As I said, it's quite a few applications. On the left again, the design part, where you have the VNF SDK. That's an SDK that allows VNFs to be built for ONAP; we'll touch base on that a little later. Then resource onboarding, service design and creation, policy: all the things that you want to create and then store into a big catalog database. Once it's certified, it is pushed on the right to the runtime environment.
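The design-time flow just described (onboard a resource, certify it, then distribute it to the runtime) is essentially a small lifecycle state machine. Here is a minimal Python sketch of that idea; the state names, actions, and the `CatalogEntry` class are hypothetical illustrations, not the actual SDC implementation.

```python
# Illustrative sketch of the design-time lifecycle: an artifact is
# onboarded as a draft, certified, and only then distributed to the
# runtime. State and action names are hypothetical, not ONAP's real ones.

VALID_TRANSITIONS = {
    ("DRAFT", "certify"): "CERTIFIED",
    ("CERTIFIED", "distribute"): "DISTRIBUTED",
}

class CatalogEntry:
    def __init__(self, name):
        self.name = name
        self.state = "DRAFT"  # every onboarded artifact starts as a draft

    def apply(self, action):
        key = (self.state, action)
        if key not in VALID_TRANSITIONS:
            # e.g. you cannot distribute a package that was never certified
            raise ValueError(f"cannot {action} from {self.state}")
        self.state = VALID_TRANSITIONS[key]

entry = CatalogEntry("vCPE-service")
entry.apply("certify")
entry.apply("distribute")
print(entry.state)  # DISTRIBUTED
```

The point of the guard in `apply` is the same as the role separation shown later in the onboarding video: distribution only happens after certification.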
The runtime environment is mostly the orchestration part: the service orchestrator, which will execute the, let's say, blueprint or recipe, whatever you call it, that will really instantiate the thing on the network. You also have on the right something called A&AI, which is a graph database, to store and keep track of what you've created in the network. Then there is a data collection engine, DCAE, that is able to get events from these VNFs and run microservices on top of them to do some analytics and take clever advantage of the whole programmatic environment that we have. And the policy framework, which Martial will tell you about, which is involved in the closed loop area. Below that we have adapters, which are essentially the things the orchestrator triggers to connect to the different clouds and execute the recipe.

So, enough theory. Let's take a proper use case. vCPE is one of the first use cases that has been implemented in our first release, called Amsterdam. It's very new, because it's from November last year. This is about moving away from the traditional residential gateway that you probably all have at home. These little boxes are doing a lot of functions if you look at them, and it's actually been quite an arms race from the start. There are multiple versions of them; some are even doing DLNA or really advanced, cool stuff that you can have in your home. But the problem is that it's an expensive piece of kit, which also means it's very rigid, in the sense that most network service providers have one flavor of it, and to avoid you tweaking too much with it, they limit the access to it. And some people are also staying on older versions of these boxes and are unhappy. It's also a very time-consuming and resource-consuming thing for a network service provider to run call centers to fix these, or even to send people to your home to try and fix things if something goes wrong.
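The runtime pieces just described, a data collection engine feeding analytics microservices that in turn trigger policy, are the ingredients of the closed loop covered later in the talk. A minimal Python sketch of that pattern follows; the event fields, the `vGMUX` name, the threshold value, and the `policy_action` helper are all illustrative assumptions, not the real DCAE or policy APIs.

```python
# Minimal sketch of a closed-loop analytics microservice: collect
# events from VNFs, evaluate a KPI, and trigger a policy action when
# a threshold is crossed. Fields and names are hypothetical, not the
# actual DCAE/VES schema.

LOSS_THRESHOLD = 0.25  # assumed "unacceptable" packet-loss ratio

def policy_action(vnf_id):
    """Stand-in for the policy engine asking a controller to restart a VNF."""
    return {"vnf": vnf_id, "action": "restart"}

class LossDetector:
    def __init__(self, threshold=LOSS_THRESHOLD):
        self.threshold = threshold
        self.actions = []  # policy actions fired so far

    def on_event(self, event):
        # event is a dict like {"vnf": "vGMUX", "loss": 0.30}
        if event["loss"] > self.threshold:
            self.actions.append(policy_action(event["vnf"]))

detector = LossDetector()
for loss in (0.05, 0.10, 0.30):  # packet loss ramping up over time
    detector.on_event({"vnf": "vGMUX", "loss": loss})

print(detector.actions)  # only the 0.30 event crossed the threshold
```

In the real platform the detection and the action run in separate components (DCAE microservices versus the policy framework), but the control flow is the same: events in, threshold evaluation, action out.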
So, there is a specification by the Broadband Forum called NERG, the Network Enhanced Residential Gateway. What we do is essentially strip that box of the complexity it has, keep something very simple, a very basic piece of hardware that will be deployed at your home (or, for that matter, you could even think of not replacing the one you have and just putting it in bridge mode), and move all the networking functions as virtual functions into the cloud of the network service provider. This translates all of these into virtual functions. These virtual functions can run in the operator's cloud, which we call the edge cloud, so you'll see in the coming years more and more small data centers popping up in big cities, where these network functions will actually be running. The advantage is that you delegate all the maintenance of this software to the network service provider. People who don't need some of the specific super cool functions may not even have to pay for them and can have a very simple, plain internet gateway, while people who like to share their videos, or want access to content closer to their home on the edge data center, can stream directly from the network operator's data center.
So, if we want to orchestrate that in ONAP, we start from the specification, which does exactly what I said: it splits the functions of the residential gateway into one part which is virtual, on the right, and one which is physical, on the left. What we did with ONAP is try to simulate the use case. We didn't have an open source home to use, so we simply created some emulators to simulate the home network; we actually have two gateways there. And we created some infrastructure VNFs that are mostly already there in a network service provider environment. So this is all software, and we want to load that software into ONAP. That's the design part, remember: we take this approach and design our VNFs with that in mind. We need a function for each of these boxes, and then we load that into ONAP and run the orchestration so that it spins everything up. On the far right you have a small web server, an Apache web server, to show that we've established connectivity between the residential gateway and what would be the internet, or whatever provider you could think of. So what do we have to do? Well, we have to upload a descriptive model of the virtual function. Right now ONAP supports OpenStack Heat and TOSCA. We have to run through the whole onboarding piece, then create the catalog entry, do the certification, and distribute it to the runtime. So what did we use to simulate all that?
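As an aside, a descriptive model of the kind just mentioned, an OpenStack Heat template, boils down to a few resources: a network, a server with a flavor and image, and a boot script that installs and starts the VNF software. The sketch below builds a minimal Heat-style structure as plain Python dicts; the resource names, flavor, image, and install script are hypothetical placeholders, not taken from the actual vCPE templates.

```python
# Minimal Heat-style descriptor built as Python dicts. All names
# (vgw_network, vgw_server, install_vgw.sh, etc.) are illustrative
# placeholders, not the real vCPE template contents.
template = {
    "heat_template_version": "2015-04-30",
    "parameters": {
        "vm_flavor": {"type": "string", "default": "m1.medium"},
        "vm_image": {"type": "string", "default": "ubuntu-16.04"},
    },
    "resources": {
        "vgw_network": {               # the network the VNF attaches to
            "type": "OS::Neutron::Net",
            "properties": {"name": "vgw_net"},
        },
        "vgw_server": {                # the VM running the gateway software
            "type": "OS::Nova::Server",
            "properties": {
                "flavor": {"get_param": "vm_flavor"},
                "image": {"get_param": "vm_image"},
                # boot script that downloads, installs and starts the VNF
                "user_data": "#!/bin/bash\n/opt/install_vgw.sh\n",
            },
        },
    },
}

print(sorted(template["resources"]))  # ['vgw_network', 'vgw_server']
```

This matches the structure described in the talk: a VM flavor, the networking to establish, and scripts to download, start, and configure the software.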
Well, you have here the list of all the open source software that we used: VPP, which you heard about just before, to simulate the residential gateway, and some other open source software to create the vDHCP, vAAA and vDNS. The descriptive model, just to take a quick look at it, is a collection of YAML files with a tree of dependencies. You can define variables, and you see it's essentially, if you look at the parts in red, defining what flavor of VM we need to run the software on, what networking we need to establish, and then a series of scripts to actually download the software onto the VM, start it, and configure it, so that the cloud understands what we want to achieve. After that, once it's instantiated, it will run life cycle management, monitoring, metrics and so on. The ultimate goal, if you think of it, is that once we have run this use case once, you could think of this platform orchestrating tons and tons of network functions dynamically, and you would have a frightening picture like this, where your cloud is really something kind of organic and dynamic that can automatically change and be fixed by this huge automation platform.

I have a small video to try and quickly show you the onboarding process. What you will see here, I'll play the video once, but it will be quick because it's an accelerated view: you will see the whole onboarding process. Here we have created a zip file with all the YAMLs describing our software, and it will be loaded into ONAP. You will see the graphical interface, and you will see that the user logs out and logs in again. This is because there are actually different roles: you have a designer role, and then you have a tester role that will certify the package, and then the distribution will occur. So let me just run the video, and hope it shows up properly. You see, it's logging into the design part, selecting the type of VNF, loading the file. It will drag and drop, you'll see, it's very quick (this actually takes quite some time if you have to do it yourself). You see it's parsing the file and
telling you if there are errors, and it's creating and associating different variables for the environment's descriptive definition. Then logging out, logging in again, and running the whole certification part. You will see that it will combine different VNFs, the five VNFs that we have seen, to create, really, this piece: the complex virtual function. So it's really model-based. Then it will certify and run. In the interest of time I will speed up a little. Here you will see the actual instantiation: on the top left you have the theory, on the bottom left you have the script that is actually calling the ONAP API for the instantiation, and on the right you have the OpenStack dashboard, where you see all the VMs popping up and the networking being established between all the components. You see it's starting to spin up all the VNFs, and once it's done you'll have that. It's just running a series of tests; probably it's in the next video. OK, so that's it for the instantiation. Now Martial will talk to you about the control loop.

Yes, so after your VNF has been instantiated, you need to have some tools in order to take care of the life cycle of the VNF and to take actions in case of issues. That's where the control loop comes into the picture. It has different pieces. The first one is about the collection of the data you need to compute your KPIs and so on, and that collection is done on the DCAE platform, which is the analytics piece that Christophe mentioned in the ONAP list of applications. It's also on top of that analytics platform that you will have the microservices that are going to do the computation on your data to eventually detect issues. Then you need to take actions. Those actions are done through policy, which is a big set of rules that are triggered by the microservices running on DCAE to say, OK, this is the action that I need to take. And that action might be asking the application controller, which is also another part of ONAP, to
restart or to migrate to a new software version, depending on the type of issue. This is the closed-loop part. There is also another kind of action, which is more the open loop, where you send a message to an external ticketing system for a human to take an action, because that's the only way to solve the issue.

For the control loop there are also two main areas: the design time and the runtime. In the design time you stitch together the different microservices that you want to use to exercise your control loop, and after having stitched all those microservices together, you generate a YAML file that describes that control loop in a way that is understandable by the DCAE application, which is running Cloudify to spin up the microservices. Then you have the runtime part, which is really about configuring the control loop, triggering the deployment, and taking care of the life cycle of the control loop: if you need to start or stop the control loop because there is an issue with the control loop itself, or if you want to update the control loop with new parameters, that is done during the life cycle management.

So this is the overall view of the control loop. This is the DCAE platform, where most of the work is being done, and you see that here the VNFs are going to send data to DCAE. Right now we are trying to use the VES format, which is a format that has been put into place by AT&T to try to harmonize all the alarms and KPIs coming from the VNFs into a single format, instead of having 10,000 formats to support, which makes things more complicated. Once that data has been collected, it goes through different microservices, and at the end the last microservice will trigger a policy, depending on what action you want to take. That policy will then trigger maybe the application controller, or maybe SO: if, for instance, the action is to scale up a new VM, then you can trigger the service orchestrator to spin up that new VM. I think you will see this in
action, it's maybe more... This is also another video to show how the... So you see here packets going through, and one of the VNFs of the vCPE has some packet loss, which is acceptable, but at some point the threshold will be reached. You see it's reporting VES events to DCAE, which is not taking any action right now, but you will see that it will reach the threshold where it becomes not acceptable anymore, and that will trigger the action on the control loop. You will see that it becomes red in a few seconds, and DCAE will then forward this to policy. Here it comes: you see it's becoming unacceptable, and then DCAE triggers the policy, which will then actually trigger APPC to reconfigure the VNF, which is rebooting in this case. You see the reboot, and no telemetry during that time. When the conditions are fixed, the closed loop is complete. You will also notice that in the control loop, or closed loop in this case, sometimes the action will keep being retried until a signal is given back to the control loop to say, OK, the issue has been resolved, I can stop trying to solve it.

OK, that's the end of the presentation. Yes, right on time. I don't know if there are any questions. Any questions?

How big is ONAP? How much does it require in terms of resources? What's the minimum installation of ONAP that's useful to somebody, maybe outside of a telco environment?
So ONAP is pretty big. At the start it really required a huge setup to run, because it was designed to be the big scalable thing running on multiple data centers. We've been trying to shrink it. There is actually a very interesting project in ONAP called OOM, which runs all of this in a Kubernetes cluster. If you want to run all the containers and all the pieces you'll still need something big; I think they've gone from something that needs 200 GB of RAM to something more practical, like 53 GB. But you don't need all the pieces to run. Since it's a Kubernetes setup, you can choose the containers you want to run: if you want to try the design time, you just spin up the design time part. So it's shrinking down, and there is even a project to run this on a Raspberry Pi. We'll see how it goes, but it's in the works.

So the question is, do we want to support only OpenStack? Well, the answer is no. This is a multi-cloud, multi-VIM project. Right now we support OpenStack because that's our legacy, but we will extend to other SDN platforms, and actually that's one of the topics for a future release.