Hi, hello everyone. I'm Sam Hague. I'm a project lead on the OVSDB project in OpenDaylight, and we're going to talk about OpenStack and OpenDaylight integration. Let me introduce these guys first. Isaku Yamahata: he's a project lead on networking-odl and the Neutron northbound. Arvind Somya: he's working on networking-odl, the V2 driver. Vishal Thapar: he's working on the VPN service project in OpenDaylight. And Anil Vishnoi: he works on everything, so we'll leave it at that; he's a key guy.

Okay, first we'd actually like to say thank you for showing up. We know we're right about at hump day, it was a good party last night, and it's late in the afternoon, so we appreciate you coming out.

We wanted to start by showing the project contributors for network virtualization and the OpenStack integration. There are a lot of moving parts, so it's easier just to show the names; you can see the different areas each individual works in. During this presentation we're not going to go too deeply into the technical details, so if there is something you have a question about, the people are listed here and you can track them down.

Okay, so what is OpenDaylight? OpenDaylight is an SDN platform, a software-defined networking controller. Put simply, the same way OpenStack is an orchestrator for compute resources, OpenDaylight, the SDN controller, is an orchestrator for virtualized network resources. The key thing about OpenDaylight is that it's modular. From the beginning that's been a distinguishing factor: all the network functions and services are built in a microservices style, wrapped as small service blocks, so it's very easy to take those blocks and build up higher-level applications. The platform is open, diverse, and very inclusive. Because of that modular architecture you can include different features and build up whatever type of deployment you want. It's also multi-protocol: there's a bunch of southbound protocols, so you can control many different devices. And because it's so modular and open, we can support a broad range of use cases: government use cases, cloud, NFV use cases. The last point: the OpenStack integration has always been a key priority for OpenDaylight, to always be able to work with OpenStack, so we wanted to highlight that.

So here's an architecture picture of OpenDaylight. As you can see, it's the classical picture, top to bottom, if you've ever seen anything about SDN: you've got APIs on the northbound to access services in the middle, in the core, and then southbound protocols to drive the devices. Then there's that green block in the middle.
That's the core of OpenDaylight. There's a bunch of different types of services and functions in there, like L2Switch, host tracker, NetVirt, and the VPN service for the network virtualization pieces. They are shown as little blocks, and they really are all little blocks, as we said about the microservices architecture. On the northbound side, all those services expose APIs; you can see we have REST and RESTCONF interfaces so high-level applications can drive through them and get access to all those services and functions. Down at the bottom, the bluish blocks are all the network protocols used to control the devices and network elements: OVSDB, NETCONF, LISP, BGP. They're all pluggable.

Another block in there is that long blue one, the SAL, which we call the MD-SAL, the model-driven service abstraction layer. That's an important piece of the whole architecture. It uses a language called YANG, which is a modeling language; that's where you map all your network functions onto these devices and expose higher-level application interfaces. That's how all the little building blocks work: they all go through the MD-SAL, and it provides a consistent interface from top to bottom. Some other magic that comes out of YANG and the MD-SAL is that when the models are compiled they expose those RESTCONF interfaces, and they also generate the Java code that the blocks themselves use when they interface with each other. So again, it's a consistent interface from top to bottom.

Okay, so this is a deployment architecture. It's very similar to anything else you would see with OpenStack. You've got a control node, the OpenStack control node. The key piece there is networking-odl; that's the ML2 driver, which passes the Neutron API over to the OpenDaylight controller. Then we come across to the OpenDaylight controller node, which is where OpenDaylight is running. The key pieces there are the Neutron northbound, which receives all the Neutron API calls, and the OVSDB and NetVirt pieces, which drive the OVS switches. And then there are the compute nodes, any number of them, with the typical things: you've got a bridge there, OVS is running there, and the virtual machines get instantiated, and OpenDaylight sets up all the networking for those switches and VMs. We show a network/control node for OpenStack, but it's probably not a dedicated network node in the typical pre-DVR sense. We also support virtual routing, L3, and that's also distributed across the compute nodes, so that network/control node is really just there for extra agents, like the DHCP agent in this case.
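To make that flow a little more concrete, here is a rough sketch of the kind of call the ML2 driver conceptually makes when Neutron creates a network: it mirrors the resource to OpenDaylight's Neutron northbound over REST. This is not the actual networking-odl code; the URL, port, credentials, and payload shape follow the legacy ODL Neutron northbound REST API from memory and vary by release and deployment.

```python
# Rough sketch, not the real networking-odl driver: mirror a Neutron
# network to OpenDaylight's Neutron northbound REST API.
import json
import requests

ODL_URL = "http://odl-controller:8080/controller/nb/v2/neutron"  # assumed endpoint
AUTH = ("admin", "admin")  # default ODL credentials; change in real deployments

def sync_network_to_odl(network):
    """POST a Neutron network dict to the ODL Neutron northbound."""
    resp = requests.post(
        f"{ODL_URL}/networks",
        data=json.dumps({"network": network}),
        headers={"Content-Type": "application/json"},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()

sync_network_to_odl({
    "id": "3c4e...",       # UUID assigned by Neutron, truncated here
    "name": "demo-net",
    "admin_state_up": True,
    "tenant_id": "abcd...",
})
```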
Okay, so that's an overview of OpenDaylight. Now we're going to move on to the features we've implemented this release, and Isaku is going to take over.

Thank you, Sam. The community has been working hard on features, so I'd like to introduce what was delivered in the previous development cycle and what we are planning to work on in the next one; I mean the OpenStack Newton cycle and the OpenDaylight Boron cycle. This slide summarizes the features we delivered in the previous cycle, the OpenStack Mitaka cycle and the OpenDaylight Beryllium cycle. The main focus was HA, high availability, and stability. People are actually starting to use it and to test it; for example, the OPNFV project is continuously testing the OpenStack integration with OpenDaylight, and we get feedback from them. The OpenStack integration is the biggest use case of OpenDaylight, so we take it seriously, and this was the OpenDaylight community's response: we improved stability a lot and fixed many bugs.

Next slide, please. This slide summarizes our plan for the OpenStack Newton cycle and the OpenDaylight Boron cycle. In the last cycle we improved stability, so now we have widened our scope to include scalability and to fill the major feature gaps. Some features didn't get finished, so we have leftovers from the last cycle; we'd like to finish those unfinished features, like the V2 driver, and finish some of the HA functionality, and in addition we'd like to add more new features. In the following slides we'll explain the major items in detail.

The first one is port binding. The main goal is to support not only plain Open vSwitch but also DPDK-enabled Open vSwitch, and we'd also like to support other software switches like FD.io VPP. In the Mitaka cycle we added support for OVS with DPDK, which is much faster than plain Open vSwitch. So now the OpenDaylight mechanism driver understands not only plain Open vSwitch but also DPDK-enabled Open vSwitch: if it finds out that the Open vSwitch on the compute node supports DPDK, it switches the port binding to use DPDK so you get the accelerated performance. In the Newton cycle we're planning to support the FD.io VPP switch, and this logic will be enhanced to support other kinds of switches in the future.

The next feature is the lightweight test framework. This is the OpenDaylight community's answer to the fact that we really value testing; it's quite important to test every kind of logic. Right now, to test the OpenDaylight integration with OpenStack, you have to deploy full OpenStack plus OpenDaylight, which is quite time-consuming and sometimes difficult for OpenStack developers. So we introduced a new test harness that simulates OpenDaylight's behavior, so we can test OpenStack Neutron without OpenDaylight. That helps developers because it shortens the development cycle, and the test harness can also be used for HA cases. For example, the network can fail between the OpenStack Neutron servers and OpenDaylight; such transient network failures can happen. In that case the Neutron OpenDaylight mechanism driver should retry, and when the network comes back it should continue to work. The test harness can simulate that kind of transient network failure. To actually test this kind of failure you would otherwise have to set up a real network and, for example, unplug the physical Ethernet cable; with the test harness we can simulate the failure, exercise the HA logic, and it's quite important to exercise those code paths.
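As a toy illustration of the behavior that harness exercises (this is not the actual networking-odl test framework, just the idea behind it), here is a simulated ODL endpoint that fails transiently plus a small client-side retry loop with back-off that succeeds once the "network" recovers:

```python
# Toy illustration of transient-failure retry behavior; all names are made up.
import time

class FakeOdl:
    """Pretend ODL endpoint that raises for the first few calls."""
    def __init__(self, failures_before_recovery=3):
        self.remaining_failures = failures_before_recovery

    def create_network(self, network):
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("simulated transient network failure")
        return {"network": network}

def sync_with_retries(odl, network, max_attempts=5, backoff=0.1):
    for attempt in range(1, max_attempts + 1):
        try:
            return odl.create_network(network)
        except ConnectionError as exc:
            print(f"attempt {attempt} failed: {exc}; retrying")
            time.sleep(backoff * attempt)  # simple linear back-off
    raise RuntimeError(f"giving up after {max_attempts} attempts")

print(sync_with_retries(FakeOdl(), {"id": "net-1", "name": "demo"}))
```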
In the Newton cycle, we're also planning to introduce a test harness that simulates Neutron itself, so we can test OpenDaylight without OpenStack and easily exercise the OpenDaylight logic. We also want to simulate multiple compute nodes on a single physical machine, so developers can test a multi-node deployment on a laptop or desktop, and developers and users can get a rough idea of scalability with a single machine. Okay, I'll pass the mic to Arvind.

Thanks, Isaku. I'm going to give an overview of the ODL V2 driver, which is a new driver we introduced this cycle. Anybody who has used the V1 OpenDaylight Neutron driver knows that it works, but not very well. It doesn't scale at all: once you go beyond a certain number of networks you get stuck, because it tries to do these giant DB syncs. So our goal with this driver was atomic micro-transactions: every API call in Neutron is synced individually to OpenDaylight, with implicit retries. And, going down the list, we did it for Neutron HA and scalability: if you have multiple Neutron servers running and multiple ODL endpoints, you have multiple sync agents or sync threads working in an HA fashion. For the first phase we only did the mechanism driver and the L3 plugin; we just wanted to get a proof of concept out to demonstrate how it works and what the performance improvements were. Based on that, we're still fixing a few small issues with the syncs, but once that's ironed out we plan to convert the other service plugins, like LBaaS or VPNaaS, to the same architecture.

So how this plugin works is that it asynchronously synchronizes resources. I know it sounds like an oxymoron, but it synchronizes resources from Neutron to OpenDaylight in an asynchronous fashion, so the Neutron API is not blocked on every call. Whatever API call comes into Neutron is stored in a journal database, and separate worker threads work in the background to actually synchronize it. What you get with this is that your Neutron API is not blocked: before, if a large operation was going through, or if ODL itself was dead and you had to wait for the HTTP timeout, your Neutron API call was stuck waiting for that request to return. Once we make that asynchronous, Neutron is not blocked. And, as the slide says, it's atomic transactions per API event: each API event, like a create network, create port, or create subnet, is treated as an individual atomic transaction, and the driver tries to sync each one with configurable retries. If a sync fails, it retries up to some number of times, and if it still fails it is marked as failed.

We had to add a lot of validation to this, because some events in Neutron come in very close to each other. For example, if you use Horizon, a port create might actually show up in the database before a subnet create, because they're less than a second apart and we didn't have that kind of time granularity. So we added extensive validation to the model: do not send a port create before the network and subnet creates; do not send a subnet create before the network create; and the same applies to the delete logic, where all the hierarchical children have to be deleted before you can delete the parent resource.
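Here is a minimal sketch of the journaling idea just described: API events land in a journal as rows with a state, and a background worker picks up pending rows, checks dependencies (don't sync a port before its network), retries on failure, and marks each row completed or failed. The class and function names are made up for illustration; the real driver stores the journal in the Neutron database, and the state names match the ones mentioned on the next slide.

```python
# Minimal sketch of the journal-based V2 driver idea; not the real
# networking-odl code. States: pending, processing, failed, completed.
from dataclasses import dataclass

PENDING, PROCESSING, FAILED, COMPLETED = "pending", "processing", "failed", "completed"

@dataclass
class JournalRow:
    object_type: str            # "network", "subnet", "port", ...
    object_id: str
    operation: str              # "create", "update", "delete"
    data: dict
    state: str = PENDING
    retries: int = 0

journal: list[JournalRow] = []   # stands in for the journal DB table

def record(object_type, object_id, operation, data):
    """Called from the API path: just enqueue, never block on ODL."""
    journal.append(JournalRow(object_type, object_id, operation, data))

def dependencies_met(row):
    """Illustrative validation: a port create must not be sent while its
    network still has an unsynced (pending/processing) journal entry."""
    if row.object_type == "port" and row.operation == "create":
        net_id = row.data.get("network_id")
        return not any(r.object_type == "network" and r.object_id == net_id
                       and r.state in (PENDING, PROCESSING) for r in journal)
    return True

def run_worker(send_to_odl, max_retries=3):
    """Background worker body: drain pending rows in order."""
    for row in journal:
        if row.state != PENDING or not dependencies_met(row):
            continue
        row.state = PROCESSING
        try:
            send_to_odl(row)
            row.state = COMPLETED
        except Exception:
            row.retries += 1
            row.state = FAILED if row.retries >= max_retries else PENDING

record("network", "net-1", "create", {"name": "demo"})
record("port", "port-1", "create", {"network_id": "net-1"})
run_worker(lambda row: None)   # stand-in for the REST call to ODL
print([(r.object_id, r.state) for r in journal])
```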
There are retries for failed journal rows, and there's a back-off mechanism in case OpenDaylight itself is down: there's no point in hammering the CPU trying to sync, so we back off and keep retrying to contact ODL at regular intervals. It's still in the beta phase; like I mentioned, we introduced this driver a couple of months ago, and there's been quite good acceptance in the community, with people from other companies coming in and fixing issues in it.

This is a very high-level overview of how it works. In each of these blocks you can see we have multiple Neutron servers, each running a sync thread, and they're all talking to a common journal database populated by all the API events coming in. With the validation model in place, we ensure that one thread doesn't step on what another thread is doing. All of this is controlled by a state machine in the journal database: each journal row has a state of pending, processing, failed, or completed, which are pretty self-explanatory. And I put up a sequence diagram as a very high-level overview of the call logic; there's a lot more going on, but this gives a general overview of how the whole thing works, and we'll be posting these slides in case you need more time to look over the diagrams. With that, I'll hand it over to Vishal.

So I'm going to talk about the BGP VPN support and the L2 gateway support that's available in Mitaka and Beryllium. To give a brief history of what BGP VPN is, for those who are not familiar: there's already an OpenStack project, networking-bgpvpn, which is a plugin. What it does is this: lots of existing providers have tenant networks beyond the gateway running L3 VPN, which is MPLS BGP VPN, and this project aims to bring that VPN connectivity all the way inside the data center, up to the VMs. So networking-bgpvpn is the plugin on the OpenStack side. We had similar functionality available in OpenDaylight even in the previous Lithium release; what we've done in Mitaka and Beryllium is integrate them, so that the OpenDaylight component, the VPN service project, is now orchestrated by OpenStack. Arvind just mentioned the V1 and V2 drivers; when we started writing the driver for this, the V2 framework wasn't ready to use, so we decided to host the driver code in the networking-bgpvpn repo. It's a V1-style driver, it works, and you can test it. The VPN service project in OpenDaylight provides the back end, and we also have a Fuel plugin for it. Going forward, once the V2 framework is fully robust and ready (as he mentioned, it's still being tested), we'll move that driver to V2 as well, and you'll get all those semantics like syncs and retries on failure. Like I mentioned, the Fuel plugin is already available. I don't know how many of you attended the Gluon and OPNFV sessions this morning; they mentioned the VPN service and MPLS BGP VPN, and this is what they're already using.
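For a flavor of what the networking-bgpvpn API looks like from a tenant's point of view, here is an illustrative sketch of creating an L3 BGPVPN and associating a tenant network with it. The paths, field names, and placeholder values are assumptions from memory of the networking-bgpvpn API extension and may differ by release; consult the project documentation before relying on them.

```python
# Illustrative only: rough shape of the BGPVPN API extension calls.
import requests

NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# 1. Create an L3 BGPVPN carrying a route target the provider's PE routers use.
bgpvpn = requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns",
    json={"bgpvpn": {"name": "customer-a", "type": "l3",
                     "route_targets": ["64512:1"]}},
    headers=HEADERS,
).json()["bgpvpn"]

# 2. Associate an existing tenant network so its VMs join the VPN.
requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns/{bgpvpn['id']}/network_associations",
    json={"network_association": {"network_id": "<tenant-network-uuid>"}},
    headers=HEADERS,
)
```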
Next is networking-l2gw, the L2 gateway. We were hoping to get it fully functional in the Mitaka and Beryllium releases, but we ran into some issues. First, some background: L2 gateway is the project that provides connectivity between bare-metal machines and the VMs running on the computes, and it has been there since the Kilo release of OpenStack. What we did was, again, implement the back-end functionality in OpenDaylight plus an ODL driver for L2 gateway. We ran into issues because of the way the L2 gateway plugin was written: it did not support adding a new service driver. So we had to get help from the L2 gateway project to get that done, and by the time it was done it was too late for us to have the driver in Mitaka. We still might get it into Mitaka, hopefully; the code is up for review. The ODL back end could not be ready in time and will only be available in the Boron release. So, over to Anil.

Hi everyone. I think next time I need to bring my sunglasses to present here. And that was a bad joke to wake up the people sleeping in the corners. So I'm going to talk a bit about the NetVirt project, which is basically an OpenStack networking back-end provider in ODL, and then I'll briefly talk about the SFC support in NetVirt. If you look at this long list here, this is the work we did in the Beryllium release. Just to give some background: NetVirt used to be part of the OVSDB project in OpenDaylight, and recently we split it out, because the OVSDB project had both the OVSDB southbound plugin that talks to OVSDB devices and the network virtualization piece. We took the network virtualization piece out, and it now has its own project life. But since this list is for Beryllium, it covers both: some of this work is in the OVSDB southbound plugin and some of it is in the NetVirt project.

The MD-SAL migration relates to the diagram Sam showed for the fourth release, Beryllium, those big boxes. ODL uses the MD-SAL, and the old NetVirt code was using the POJO-based service abstraction layer, so we migrated it to the YANG-based MD-SAL, the model-driven service abstraction layer. That was one of the major pieces of work. We enabled support for clustering, because that's a no-go if you don't have it; it's the first implementation we have as of now and we're trying to harden it, so some of that work is still going on. We have support for hardware VTEPs; that's the OVSDB hardware_vtep schema support, and we added it because it's used by the L2 gateway service: folks who wanted to implement the L2 gateway service in OpenDaylight wanted that support. We have support for OVS with DPDK; that work was done by the Intel folks, and we have the support in NetVirt too, so you can deploy OpenStack using OpenDaylight with DPDK. We added support for QoS. We added support for security groups, and we support both conntrack-based and simple flow-based modes; the flow-based support is limited for the obvious reason that plain OVS doesn't support everything, but if you deploy an OVS that has conntrack support, you can enable security groups with conntrack. And we have a router manager that handles the IPv6-related pieces, neighbor discovery and so on.
Then we implemented SFC. It's not really a full implementation, but there is a version present in NetVirt that gives you some SFC support from OpenDaylight. That was POC work we did in Beryllium, so if anybody wants to try it out, they can. But we're trying to formalize this work so it can have a proper integration with all the orchestration pieces, and that's what I'm going to go into in detail.

The next slide shows what we're trying to do. Throughout this summit you've probably been attending the NFV tracks and other telco tracks, and they're all basically looking for NFV support, and SFC is something that pops up every time. It's one of the key use cases people actually want to realize. For OpenDaylight, because NetVirt is a pure OpenStack provider that goes by the OpenStack APIs, it's kind of an obvious next step for us to support the SFC APIs, given that OpenStack now has a first version of SFC APIs: networking-sfc is the work they're doing, and they've pretty much finalized the first set of APIs. So I think it's the right time for us to provide SFC support through the OpenDaylight controller. The other thing is that it creates more value now that there's a consumer in OpenStack that wants to use the networking-sfc API. If you attended a Tacker session: Tacker is basically a MANO and VNF manager, and the people in that project are working on support for VNF forwarding graphs, and they're planning to consume the networking-sfc API. So we already have a user who wants to consume networking-sfc; from our side, we just want to implement this in OpenDaylight so we can have an end-to-end deployment using Tacker, networking-sfc, and ODL. Tacker is just the one use case we want to start with; there are a lot of other potential consumers, and whoever consumes networking-sfc can get support from OpenDaylight.

And the good thing is that in OpenDaylight we already have a very mature SFC implementation, the OpenDaylight SFC project. It has two consumers as of now, the GBP project and the NetVirt project, and there's an external consumer as well, OPNFV, which consumes the SFC piece from OpenDaylight. Given that we have a mature implementation, there are three major pieces of work to do to provide the whole end-to-end integration. The first one is that we need to write a driver, a pass-through driver, that passes the networking-sfc API directly to ODL; that driver is one of the major pieces, we already have a blueprint for it, it has been discussed, and people are working on it. The second and third bullets are work that needs to be done on the OpenDaylight side. We need to define YANG models for the SFC flow classifier and for SFC itself, because ODL is all based on YANG; that's another major piece of work. And the third one is a translation layer. The reason we define these YANG models is that we want to keep them as generic as possible, so the integration isn't stuck with just one SFC provider in OpenDaylight.
So if you want to write your own SFC provider for the OpenStack APIs, you can plug your own bundle into it and start using that, rather than the current implementation we have in OpenDaylight. That's why we want to make it generic. Once we provide that pluggability, you need a translation layer, because the different SFC implementations have their own models, and you need a translation from the generic model to those specific models. So that's the translation layer we have to write. These are the three major pieces, which Tim Rozet from Red Hat and I are going to work on, and there are people from other organizations who are interested and will be joining us. If any of you are interested in this work, please feel free to contact anyone in the OVSDB, OpenDaylight SFC, or networking-odl projects and we'll hook you up. This is all currently work in progress, so if anybody wants to give feedback, not necessarily contribution in terms of code, but feedback on how we should go about it, how the design should be, or any other use cases we should accommodate, please reach out to us.

This is the high-level integration architecture for how this is going to work. As I mentioned, networking-sfc has the APIs, so whoever wants to consume networking-sfc calls the networking-sfc API. Those API calls propagate to networking-odl, and networking-odl passes them through the SFC driver we're planning to write, down to ODL. All the green boxes are the pieces I was talking about on the previous slide; those are the work in progress. The SFC driver passes the data to the Neutron northbound, and the Neutron northbound writes it into the data store, the MD-SAL layer where we store all the data. That's basically the user configuration: whatever we get through the APIs for flow classifiers and SFC chains, we store in the appropriate models. Then there is a provider for that. If you look at the two boxes, NetVirt and GBP, those are the consumers; they are basically the flow classifier providers. They listen to the flow classifier models and do the back-end flow installation, realizing the classifier on the SFFs and so on. And then we have the SFC implementation: we'll write a translation layer that takes the generic SFC model, converts it to the SFC project's specific model, and then SFC does the job of creating the chain. So this is the high-level architecture we're planning; most of these pieces still need to be done, and hopefully in the next release we'll have a working demo for you.

And this is one of the use cases we're trying to realize, to show end to end how this works. Tacker is the use case we're targeting first. We already have a working POC implementation, done by Tim, but in that case it's not really formal: Tacker goes directly to ODL rather than going through networking-sfc and then the Neutron northbound, so that direct Tacker-to-SFC communication is not a formal integration in terms of doing it through OpenStack. So yes, this is one of the use cases we're trying to realize, and hopefully we'll work on other use cases once we're done with this one.
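To give a flavor of the pass-through driver mentioned above, here is a hedged sketch of the idea: a networking-sfc back-end driver whose methods simply forward the flow classifier and port chain dictionaries to OpenDaylight, where a provider (NetVirt, GBP, or the SFC project) renders them. The class name, method names, URL, and payload layout are assumptions for illustration; the real driver would plug into networking-sfc's driver framework (and ideally the V2 journal) rather than calling ODL synchronously like this.

```python
# Sketch of the "pass-through" idea for the networking-sfc ODL driver.
# Names and endpoint are illustrative assumptions, not the real driver.
import requests

ODL_SFC_URL = "http://odl-controller:8181/restconf/config"  # assumed path
AUTH = ("admin", "admin")

class OpenDaylightSfcDriver:
    """Hypothetical pass-through driver for networking-sfc."""

    def _post(self, path, body):
        resp = requests.post(f"{ODL_SFC_URL}/{path}", json=body,
                             auth=AUTH, timeout=10)
        resp.raise_for_status()

    def create_flow_classifier(self, context, flow_classifier):
        # Forward the classifier (protocol, port ranges, logical source
        # port, ...) so an ODL provider can realize it on the forwarders.
        self._post("flow-classifiers", {"flow_classifier": flow_classifier})

    def create_port_chain(self, context, port_chain):
        # Forward the chain definition (port pair groups plus classifiers).
        self._post("port-chains", {"port_chain": port_chain})
```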
So yes, that's pretty much what we have. Just to summarize: given the kinds of services we support and the goals we have, ODL is a viable option, and I would say that if you're looking at SDN controllers, it's one of the main controllers you can target for deployment. We're definitely working toward making it more hardened, because people generally complain that it's not deployable, and that's our main focus; we know where we're heading. HA, scalability, and feature parity are the things we're working on currently. So for anybody who wants to deploy it, this is the time to reach out to us, so we can get more feedback and do the work that actually makes it a deployable controller. It's an active and diverse community: if you look at the contributions, there are around six to seven organizations contributing to this project, which is pretty diverse. We're not in a push mode; we're always open, we invite people to come and contribute and have healthy technical discussions, and that's a good thing about the whole ODL community, and specifically about the NetVirt project and the kind of community we have there. We're working on more features and functionality, so if you're looking for any new feature on the OpenDaylight side or the networking-odl side, please reach out to us. That's pretty much all the time we have, so I think we have seven minutes for questions. I'm not going to handle all the questions myself; if you can come to the mic, that would be good.

First question: you mentioned deployment; are you targeting that for an SR or for Boron? Some of the features are going to be in Boron. The stability part, whether it's HA or clustering, is available in both Beryllium and Boron, and that's the piece we're going to harden. The new pieces, like SFC and so on, will go into the Boron release. But stability, bug fixes, clustering, anything we need to harden for deployment, will definitely go into Beryllium as well.

Next question, from Comcast: you mentioned the FD.io project; can you share what the support looks like, either on the OpenStack side or on the OpenDaylight side? The FD.io support is being done under an OPNFV project called FastDataStacks, and the work needs to be done in Neutron, in OpenDaylight, and also in FD.io; we have to touch all of those components. Follow-up: you mentioned hardware VTEP support coming in; is that L2 only, or also L3? In Beryllium it's currently just L2, and Boron will add L3 as well; that's the plan. L2 is already there, and L3 will come in the next release.

Any other questions? Then I think we're good. Thank you, everyone, and thanks for joining us. If you still have questions, come to our booth tomorrow; we have an ODL booth down there. Anybody who has questions about ODL, what's coming and what's there, please feel free to come by, and we'll be happy to answer. Thank you.