Hi, my name is Andre Fredette. I'm with Red Hat, and I'm here with my colleague. Hi, my name is Srikanth. I work at Ericsson. We're going to be talking today about an activity that's going on within a few different groups, primarily OpenDaylight, but also OPNFV and FD.io, for integrating toward what's being called the Nirvana Stack.

Back around the OpenDaylight Summit late last year, some folks from AT&T proposed a consolidated stack for SDN control, particularly for telco environments, composed of OpenStack, OpenDaylight, and FD.io (VPP) as the data plane, with the OPNFV project as an integration project for pulling things together, testing, and defining requirements. So let's dig down a little into the work going on.

To back up just a minute on that: around the same time that stack was proposed, we were also looking within the NetVirt project, which is where we work primarily, at how we could integrate with VPP, and looking to leverage some of the work being done within the group-based policy (GBP) project, so we started talking to some folks working in that community. They were also looking at ways they could leverage a lot of the networking services available within NetVirt. So we started talking, started working together, and that's what we're here to tell you about.

Let's look at how OpenDaylight integrates with OpenStack. OpenDaylight acts as a Neutron provider, a Neutron back end. There's a networking-odl project and plug-in that resides within OpenStack and talks to OpenDaylight over a REST API. It provides the networking services for Neutron and the OpenStack-scheduled workloads. Looking into the networking-odl driver, it's composed of a set of components: an ML2 plug-in that handles L2, an L3 plug-in, and a set of service plug-ins. Together, these form the API, the interface between OpenStack and OpenDaylight.
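To make the networking-odl interaction described above concrete, here is a minimal Python sketch of the pattern: the driver mirrors a Neutron API call as a REST request to OpenDaylight's Neutron northbound. The URL and payload shapes are illustrative assumptions, not the exact driver implementation, and no request is actually sent.

```python
import json

# Hypothetical base URL for OpenDaylight's Neutron northbound REST API.
ODL_BASE = "http://odl-controller:8181/controller/nb/v2/neutron"

def build_network_create(name, tenant_id):
    """Build the REST request networking-odl would mirror for a
    Neutron network-create call (illustrative payload shape)."""
    url = f"{ODL_BASE}/networks"
    body = {"network": {"name": name,
                        "tenant_id": tenant_id,
                        "admin_state_up": True}}
    return "POST", url, json.dumps(body)

method, url, payload = build_network_create("demo-net", "tenant-1")
```

The point is simply that OpenStack remains the source of truth and OpenDaylight receives a faithful copy of each Neutron object over REST.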
Once we get into OpenDaylight, OpenStack talks to the Neutron northbound. This was developed within OpenDaylight so that we could have one common entry point into OpenDaylight and allow multiple different options to be developed for networking. Two of them, the ones I mentioned earlier, are NetVirt and GBP, and both can, to one extent or another, provide networking services for OpenDaylight. So: a single, common northbound, multiple providers, and then a set of southbound plug-ins to communicate with devices.

All right, Srikanth is going to take over at this point. So we have seen that there are two implementations in OpenDaylight providing the same OpenStack provider functionality. Let's look at the OpenDaylight NetVirt implementation. The key feature of this solution is that it provides a wide variety of services: layer two, layer three, security groups, ACLs, QoS, and also some advanced services such as L3 BGP VPN, EVPN, service function chaining, IPv6, and a layer-two gateway to connect to bare-metal servers.

If you look at the architecture, or the stack, for this NetVirt solution, as Andre was mentioning, the Neutron northbound is the single interface toward OpenStack that receives the Neutron API calls and translates them into MD-SAL YANG models. For beginners, just think of MD-SAL as a highly distributed data store within OpenDaylight. All the inter-component communication happens through this layer, and it is also used for high availability and state replication. Once this Neutron information is stored in the MD-SAL layer, NetVirt listens to the notifications from that data store.
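The MD-SAL pattern just described (providers registering listeners on a shared data store and reacting to Neutron writes) can be sketched in a few lines of Python. The class and method names here are illustrative only; the real MD-SAL is a Java, YANG-modeled, clustered store.

```python
# Toy model of the publish/subscribe data-store pattern: a write to a
# subtree notifies every provider that registered a listener on it.
class DataStore:
    def __init__(self):
        self.data = {}
        self.listeners = {}          # subtree path -> list of callbacks

    def register_listener(self, path, callback):
        self.listeners.setdefault(path, []).append(callback)

    def write(self, path, value):
        self.data[path] = value
        for cb in self.listeners.get(path, []):
            cb(path, value)          # notify subscribed providers

events = []
store = DataStore()
# A provider such as NetVirt would subscribe to the neutron subtree.
store.register_listener("neutron/networks",
                        lambda p, v: events.append(("netvirt", v)))
store.write("neutron/networks", {"name": "demo-net"})
```

This decoupling is what lets multiple providers (NetVirt, GBP) sit behind the one Neutron northbound without knowing about each other.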
Another important point to note here is that, since NetVirt is designed to support multiple northbound systems, OpenStack being one of them but also Kubernetes and other orchestration systems, it translates these Neutron-specific models into more neutral, intermediate data representations. The rest of the NetVirt business logic operates on this intermediate layer so that it can be agnostic to the northbound orchestration system.

One other thing to discuss: the NetVirt solution primarily supports OpenFlow- and OVSDB-based devices, like Open vSwitch, or Open vSwitch with acceleration via DPDK or SmartNICs, and also top-of-rack devices with VXLAN-to-VLAN translation capabilities to inter-work with bare-metal appliances. NetVirt uses the BGP protocol to inter-work with edge devices like data center gateways; today it uses the open-source Quagga implementation for its BGP operations.

Let's take a look at group-based policy in OpenDaylight. What is group-based policy? It is an intent-based policy framework where applications and users can define their networking requirements independent of the underlying infrastructure. It has defined constructs like endpoint, endpoint group, and contract. One or more endpoints with similar policy requirements can be grouped into an endpoint group, and then you can define contracts between those endpoint groups describing how they should communicate. One nice feature of group-based policy is its very good rendering framework for supporting additional device types, through which it supports the NETCONF-based FD.io devices that are a key component of the Nirvana Stack we saw in the previous slides.

So what we have seen so far is what we currently have in OpenDaylight: two implementations providing these OpenStack services.
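The endpoint / endpoint-group / contract constructs can be sketched as a small Python model. This is an illustrative sketch of the intent abstraction only; the names are hypothetical and do not reflect the actual GBP YANG models.

```python
# Toy model of group-based policy intent: endpoints with similar
# requirements are grouped, and contracts say how groups may talk.
class EndpointGroup:
    def __init__(self, name):
        self.name = name
        self.endpoints = []

    def add_endpoint(self, ep):
        self.endpoints.append(ep)

class Contract:
    """Allows traffic on the given ports from consumer to provider."""
    def __init__(self, consumer, provider, ports):
        self.consumer, self.provider, self.ports = consumer, provider, ports

    def permits(self, src_group, dst_group, port):
        return (src_group is self.consumer
                and dst_group is self.provider
                and port in self.ports)

web = EndpointGroup("web")
db = EndpointGroup("db")
web.add_endpoint("vm-1")
db.add_endpoint("vm-2")
allow_sql = Contract(consumer=web, provider=db, ports={3306})
```

A renderer would then take intent like `allow_sql` and translate it into device-specific configuration, OpenFlow rules for OVS or NETCONF config for a VPP node, which is exactly why the rendering framework matters for the Nirvana Stack.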
Both are intended to support multiple northbound systems and a diverse set of forwarding devices. Each has its own strengths: NetVirt has rich network control services, layer two, layer three, and advanced services, while group-based policy has a flexible rendering framework. Last but not least, we have two communities working on two different applications but with the same goal in mind.

So what do we want? We want Nirvana within OpenDaylight. What does that mean? We want a single converged control solution in OpenDaylight that can leverage the best of both NetVirt and GBP: a single control solution that supports this rich set of control services and, at the same time, broad support for forwarding devices like OVS, VPP, and hardware. And more importantly, you get two communities coming together and moving in one direction.

Here is one high-level proposed architecture for such a converged solution. As I was mentioning, the target is to take the best from both solutions, so it leverages components from both NetVirt and group-based policy such that it will have continued support for all the services we described, and additionally it will support both OVS and NETCONF-based FD.io devices.

The current status of this activity is that the design discussions are in full swing in the OpenDaylight community. We have also done a proof of concept supporting a simple layer-two service in this integrated solution, demonstrated with the FD.io data plane. On the near-term roadmap, we want to support more advanced services like layer-3 VPN, service function chaining, and some of the VLAN use cases. We also want to validate this converged solution in a hybrid deployment with some compute nodes running OVS and some running VPP, and show that a single solution can support all these device types.
So, we quickly looked at what the Nirvana Stack is, discussed how the OpenStack and OpenDaylight integration works today, and took a deeper look at the popular implementations available in OpenDaylight, NetVirt and GBP. We discussed the need for a converged solution that takes the best from both, and a proposed architecture for it. In this short time we could only introduce this activity, but for more information there is a full-day track on Thursday scheduled to cover many aspects of the Nirvana Stack, where you will find a lot more detail, along with some pointers to the current proof-of-concept work we did in the community. That's all, thank you. Thanks.