Good evening. I hope all of you are able to hear me. I am Akhila. I work as a solution architect with TCS, and today I will be presenting the application that we have developed on the OpenStack platform for NFV.

So what is this application about, and what have we done? This is a framework that enables validation of network functions. By validation we mean validating reference architectures, functional verification, verification of the various life cycle events that a virtual network function, or a service that comprises multiple virtual network functions, goes through, and also some real-time diagnostics that help the person using this framework, who is actually testing his solution, to simplify troubleshooting and debugging. So this is the platform that we are talking about.

Of all the features I mentioned, we'll first look at how we deploy a service. When we talk about deploying a VNF, or a network service based on these VNFs, there are a few important phases. It starts off with onboarding the network service, wherein the configuration of that network service is captured into predefined templates like the network service descriptor, which gives a high-level overview of what components are in this network service, what the different networks are, what the entry points into that particular network are, and what flavors are defined for this service.

The next phase is the deployment, which is probably the most complex, because it involves coordination across the different layers, like the NFV orchestrator, the infrastructure manager, your virtual network function managers, and the VNF itself. In this phase the service is brought up with day-zero configuration. The next important phase is monitoring the service for various aspects: whether you want to understand if you need to scale your service, or whether you want to recognize a fault, you can do it in this phase. Then comes the update, where we are referring to configuration updates of the network service only, not software updates. And the final phase is bringing the whole thing down and cleaning it up so that resource consumption comes down to zero. So this is what we mean by end-to-end service orchestration.

This is a high-level reference architecture of the solution that we have built. You see a big blue box here marked as VNFSVC. This is a centralized daemon that runs on the OpenStack controller. It provides the REST API and is responsible for onboarding. Now, what are the different things that we do when we onboard a network service? The descriptors are given to this component, and it acts like a typical compiler: it understands each and every aspect of these descriptors, converts them into metadata, and puts that into its internal repositories. Why do we do this? Because when we are asked to create a service, we save a lot of time by having defined it in a format that gives us data ready to be deployed on OpenStack.

Now, when we talk about onboarding the descriptors, though TOSCA is a well-known format, there are also existing service orchestration solutions, so you might need compatibility. What we have done is implement a plug-in based interpretation of these descriptors.
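As a rough illustration of that plug-in based interpretation, here is a minimal Python sketch. The names here (DescriptorTranslator, TRANSLATORS, onboard) are my own placeholders, not the actual vnfsvc code:

```python
# Hypothetical sketch of plug-in based descriptor interpretation.
# Class and function names are illustrative, not the vnfsvc API.

import abc
import yaml  # PyYAML


class DescriptorTranslator(abc.ABC):
    """Converts one descriptor format into internal metadata."""

    @abc.abstractmethod
    def translate(self, descriptor):
        ...


class ToscaTranslator(DescriptorTranslator):
    def translate(self, descriptor):
        # Map TOSCA node templates to internal VNF records (simplified).
        nodes = descriptor.get("topology_template", {}).get("node_templates", {})
        return {"vnfs": [{"name": n, "spec": s} for n, s in nodes.items()]}


# Registry keyed by the format announced in the descriptor metadata.
TRANSLATORS = {"tosca": ToscaTranslator()}


def onboard(raw_text):
    descriptor = yaml.safe_load(raw_text)
    fmt = descriptor.get("format", "tosca")  # metadata tells us the format
    translator = TRANSLATORS[fmt]            # pick the right plug-in
    return translator.translate(descriptor)
```

The point of the registry is that supporting a new descriptor format only means adding a new translator class; the onboarding flow itself stays unchanged.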
Reading the metadata in the descriptors, we understand how a particular descriptor needs to be onboarded, and the right kind of translator is invoked to convert it into metadata. That is the first phase. Some preparatory steps, like onboarding the images into Glance and computing the quota requirements for deploying the various flavors of the service, are also done during the onboarding of the templates. Once the onboarding is done, you have your service in your catalog and you are able to deploy it.

In the next phase, when the user requests a deployment, either through a REST API or through a CLI, VNFSVC coordinates all the different components that are involved, including OpenStack, the VNF managers, and the pushing of configurations; everything is coordinated by VNFSVC. It derives the deployment order, because services are not a flat deployment. You can't just launch the VMs in any order at any time; there are interdependencies that can exist between these virtual network functions. So this information is derived by this component, and it then ensures that the virtual network functions are deployed in the proper order. One simple use case: suppose I am deploying a network service which has a node that distributes IP addresses to each of the other VNFs. That node should go in first, and its configuration, the IP address, should be passed on to the rest of the VNFs. So when we are deploying these VNFs, we are able to extract such data dynamically. Typically you would hard-code or preconfigure such parameters, which is not required if you can extract them dynamically during deployment.

Once it starts deploying a VNF, it triggers the launch of a VNF manager. Before describing how we have implemented the VNF manager, I will first talk about the requirements we have for one. Because we are talking about a platform where we are able to validate any third-party VNF, it should be possible for us to talk to any of them. So this VNF manager is more of a framework, and the small rectangular boxes you see on this VNF manager are what we refer to as drivers. It is a framework which loads the drivers it needs to talk to the VNFs in that particular service, and we have well-defined APIs that need to be invoked. For example, when a VNF is first launched, the initialization API is what you would look at, and when you want to update a configuration, a similar API. On the other side, towards the VNF, it could be any protocol that runs on L3, like NETCONF or SNMP, or anything that is internal to the driver.

The VNF could be a prepackaged VM, like a firewall, or something where you need to go in and install the software on top. In either of these configurations, once the VNF comes up, the VNF manager pushes the day-zero configuration onto it. This is repeated for each of the virtual network functions in the service, and the service is brought up. This is what happens in the deployment phase.

The next one is the update. There is a REST interface that we provide for updating the configuration on the VNF. Decommissioning is the process of cleaning things up: the framework keeps track of whatever resources the service has consumed and makes sure they are released.
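To make that driver framework concrete, here is a minimal sketch of what a driver interface along those lines could look like. The class and method names are illustrative assumptions, not the actual OpenVNF Manager interfaces:

```python
# Hypothetical sketch of the VNF manager driver framework.
# One driver per VNF type; the names are placeholders.

import abc
import importlib


class VnfDriver(abc.ABC):
    """Speaks whatever L3 protocol the VNF needs (NETCONF, SNMP, etc.)."""

    @abc.abstractmethod
    def initialize(self, mgmt_ip, day0_config):
        """Push the day-zero configuration when the VNF first comes up."""

    @abc.abstractmethod
    def update_config(self, mgmt_ip, config):
        """Apply a configuration update to a running VNF."""


class NetconfFirewallDriver(VnfDriver):
    def initialize(self, mgmt_ip, day0_config):
        # A real driver would open a NETCONF session (e.g. via ncclient)
        # and push the rendered day-zero configuration.
        print(f"pushing day-0 config to {mgmt_ip}: {day0_config}")

    def update_config(self, mgmt_ip, config):
        print(f"updating config on {mgmt_ip}: {config}")


def load_driver(path):
    """Load a driver by dotted path, e.g. 'drivers.fw.NetconfFirewallDriver'."""
    module_name, class_name = path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls()
```

The framework only ever calls the two abstract methods; whatever protocol the driver uses towards the VNF stays internal to the driver, which is what allows third-party VNFs to be plugged in.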
Now, one aspect is monitoring. What exactly do we mean by monitoring? This is the phase where we are trying to understand the load on the system, or derive performance metrics that give us some indication of the state of the network service. It could be based on Ceilometer, but sometimes you need more information. It is possible to extract this from the VNF through the VNF manager and run it through an algorithm, and this algorithm is not standard; it differs based on the service you are deploying. So we have pluggable KPI algorithms. We have a few predefined ones, but it is also easy to add new ones, because we are trying to validate any third-party VNF. We fetch the value of the KPI and expose it; you can read it through a REST API, and based on this KPI value you can trigger scaling (a sketch of such a plugin follows a little further below).

There is an API available for scaling as well. You could scale using the OpenStack APIs directly, so why do you need a separate API for scaling a VNF? Because VNF scaling is not just launching a new VM. It has to go through the entire process that happened when you were creating the network service: it has to be configured, it has to be loaded with the right kind of software. These are the additional steps that go in. Of course, launching the VM is done through the OpenStack APIs only, but the rest of the work is implemented by the higher layer, which is your VNFSVC. This is the overall reference architecture of the OpenVNF Manager, and you will be able to find it on GitHub.

I will run through a quick demo now so that we can relate it to whatever I have explained. It is a recorded demo because it is quite lengthy. This is the setup that we have. The framework I was talking about is deployed on a VM, because this can be created quickly. We have a simple CLI for deploying this framework, because it is supposed to enable full automation. It can integrate with your CI/CD solutions, your DevOps pipelines, or with your existing system verification suite, where you enable the orchestration and also reuse the test framework that you already have. IMS itself has not inherently changed; the way you validate IMS, the functional verification part, has not changed. What has changed is how you actually build that IMS service. So you can even integrate this with your existing verification solutions. And we are using a third-party traffic generator, TeraVM, for executing this demo.

So we are triggering the service creation, where we onboard the network templates, and these small blue bars that you see here are what we refer to as event notifications. When you deploy a service, it takes quite a lot of time, and you have to understand what exactly is going on in the background. You also need updates on the nodes as and when they are deployed, so that you can extract their configuration and use it for configuring the rest of the system; that saves you a lot of time. So we have an event framework which gives you continuous notifications about the state of the network service as well as the metadata, whatever configuration has gone into that network service, or any such information. We are demonstrating the deployment of a Clearwater IMS here. You see that the nodes are coming up one after the other. Once the network service is deployed, it will trigger the traffic generator, which is the TeraVM.
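Before the demo continues, here is the promised sketch of the pluggable KPI idea. All the names here (KpiAlgorithm, check_and_scale, the sample KPI) are placeholders of mine, not the actual vnfsvc API:

```python
# Hypothetical sketch of a pluggable KPI algorithm and scale trigger.

import abc
import statistics


class KpiAlgorithm(abc.ABC):
    """Turns raw samples collected from the VNF into a single KPI value."""

    @abc.abstractmethod
    def evaluate(self, samples):
        ...


class SipLoadKpi(KpiAlgorithm):
    """Example KPI: mean SIP transactions per second over a window."""

    def evaluate(self, samples):
        return statistics.mean(samples) if samples else 0.0


def check_and_scale(algorithm, samples, threshold, scale_out):
    """Compute the KPI and trigger scaling when it crosses the threshold."""
    kpi = algorithm.evaluate(samples)
    if kpi > threshold:
        scale_out()  # would call the scaling REST API of the framework
    return kpi       # the value a client would read over the REST API


# Usage:
# check_and_scale(SipLoadKpi(), [120.0, 140.0], threshold=100.0,
#                 scale_out=lambda: print("scale out!"))
```

Because the algorithm is a plug-in, a service-specific KPI only requires a new class; fetching samples through the VNF manager and exposing the result stay the same.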
It will configure TeraVM to speak to the right node, which is the SIP proxy, and TeraVM will kick off the test verification for SIP: registering the users, making calls between the different users, and getting the results. So this is the TeraVM solution. It is running against the service that has just been deployed, and you will see the configuration come up. All of this configuration is coming through the test framework, and it executes the test cases. Once the test execution completes, you will see that it also performs the cleanup and logs the results. Yes, it has cleaned up everything, and the results are reported.

Apart from this, the other aspect that we are working on, for which I don't have a demo ready, is diagnostics. What do we mean when we say diagnostics? We mean that we are looking at capturing network traffic, correlating it with what exactly we are doing at that point in time, and giving some meaningful result on the screen so that the user is able to understand what is happening. If there is a failure, he will be able to derive some sort of conclusion from it, which will help him in troubleshooting.

Now, for diagnostics, there are a couple of things that we have executed. We used Ansible to deploy tcpdump on the host interfaces, capture that information, then analyze it and show it to the users, but that is a very offline approach; the data shows up too late. So what we have done is use a feature called Tap-as-a-Service, which is available in OpenStack. It gives you an API which enables you to mirror the traffic that is flowing on a network interface. We have built a small packet analyzer based on DPDK. This pulls in the mirrored traffic and quickly segregates it. We pick the packets that are of interest to us and wrap them up with just the information the user needs to see, because we don't want to throw a full packet, from L1 up to the application layer, at the user for debugging; that really doesn't help. We pick up whatever is relevant to that test scenario. So, if it is SIP that we are testing here, we can configure the analyzer to give us SIP messages from specific nodes. We do that and then pass this information on to the user who is executing the test case, and this is more real-time than what we could get with tcpdump. So, that is one other feature that we are working on.
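To give a feel for what that analyzer does, here is a minimal Python sketch of the SIP-filtering idea. The real component is DPDK-based; Scapy is used here purely as a stand-in capture engine, and the interface name and port are assumptions:

```python
# Stand-in for the SIP-filtering analyzer described above. The real
# component is DPDK-based; this only illustrates selecting relevant
# packets and summarizing them for the tester.

from scapy.all import sniff
from scapy.layers.inet import IP, UDP


def handle_packet(pkt):
    # Keep only what is relevant to the test scenario: endpoints and the
    # SIP request/status line, instead of the full packet from L1 upward.
    if pkt.haslayer(IP) and pkt.haslayer(UDP):
        payload = bytes(pkt[UDP].payload)
        first_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
        print(f"{pkt[IP].src} -> {pkt[IP].dst}: {first_line}")


# SIP commonly runs on UDP port 5060; 'tap0' is an assumed name for the
# interface receiving the traffic mirrored through Tap-as-a-Service.
sniff(iface="tap0", filter="udp port 5060", prn=handle_packet)
```

In our analyzer this kind of filtering happens in the DPDK-based component instead; the sketch only shows the select-and-summarize idea that makes the output readable in near real time.

So, that is pretty much what I have to share today. Thank you.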