Hello, everyone. Welcome to our session about OpenDaylight, OpenStack, and Kubernetes integration for high-performance applications. My name is Nir Yechiel. I'm a product manager with Red Hat, focusing on our OpenStack Platform product, and specifically on networking. And here with me is François.

Hello, nice to meet you. I'm François Le Marchand. I'm responsible for the NFVI strategy at Ericsson, which basically means OpenStack and SDN.

Yeah, the agenda for today. François is going to briefly cover why SDN, and some common use cases and requirements for modern SDN infrastructure. Then I'm going to dive into the OpenDaylight project: the project itself, the community, and the NetVirt project, which is what we are using for the integration between OpenStack and OpenDaylight. We're also going to give an overview of the Ericsson and Red Hat joint NFVI offering, and provide some links for further reading. If you want the slides, they are publicly available: check out the link here or just take a picture. It's on SlideShare; you can download it as a PDF or just view it in your browser. With that, I'm going to hand over to François. Thank you.

So one of the reasons Ericsson and Red Hat are presenting jointly is that we are providing a joint product packaging: our OpenStack distribution is based on Red Hat, and we are also integrating our SDN controller with the standard Red Hat OSP platform. But most important, we are working together upstream in the OpenDaylight community. Most of this presentation is focused not so much on the product, but on what we believe are the necessary properties and capabilities of an SDN controller to provide a good solution for virtualization. The second part is a bit of a recap of what we have upstream, what we are doing now with ODL and OpenStack, and then opens up to new requirements, including containers and bare metal, and how we see that integration with ODL.

A picture of myself: that's the Tech Museum in San Jose. I guess many of you have had a chance to go there, or to Santa Clara. I was very happy to see my very first router over there, a Cisco AGS, released in 1986, so it's about 30 years old now. It was my very first day as an intern in networking at university, and I got the task of taking an old AGS+, cleaning it up — it was full of dust, so I had to dismantle it in the parking lot — and upgrading it with the latest and greatest IOS release. At that time it must have been Cisco IOS 8 or 9, something like that. Because the topmost, cutting-edge feature we needed at that time was GRE, GRE tunneling. And that's very funny, because if you look at where we are today: what I was doing back then with GRE was for a complex campus with a lot of universities and very complex networks. All the labs had their own servers, their own private IP addressing, and they did not comply with the overall design rules. They also wanted to manage their own security, their own VPN, and so on. So the solution was pretty simple: build an overlay, build a GRE tunnel between the multiple sites of the same tenant, whatever labs in the university, then tunnel the IP packets through that tunnel, and you create as many VPNs and overlays as you want.
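To make that concrete, here is a minimal sketch of the kind of GRE overlay he is describing, driven from Python through iproute2. The site addresses, device name, and prefixes are made up for illustration, and the commands need root on a real system.

    #!/usr/bin/env python3
    # Minimal sketch: a GRE tunnel between two sites, so each "tenant" keeps its own
    # private addressing regardless of the campus/underlay design.
    import subprocess

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    LOCAL_WAN = "198.51.100.1"       # this site's public/WAN address (made up)
    REMOTE_WAN = "203.0.113.2"       # the other site's public/WAN address (made up)
    REMOTE_PRIVATE = "10.20.0.0/16"  # the other lab's private addressing (made up)

    sh(f"ip tunnel add gre-site2 mode gre local {LOCAL_WAN} remote {REMOTE_WAN} ttl 64")
    sh("ip link set gre-site2 up")
    sh("ip addr add 10.255.0.1/30 dev gre-site2")        # tunnel endpoint addressing
    sh(f"ip route add {REMOTE_PRIVATE} dev gre-site2")   # private traffic rides the overlay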
That was 30 years ago, right? So what we are doing now in OpenStack and cloud is not so different. We still have servers, we still have applications. The application runs in a VM; if they are VNFs, you typically only have one application per server. So from that perspective, it's not that different. And then we have an IP tunnel; the new thing is that it could be VXLAN, but don't worry, it could also be GRE. So we are not too lost 30 years later. And why do we do that? Same motivations: make sure that each of your tenants, each of your OpenStack instances, can get its own secure private VPN that works independently of the underlying infrastructure, independent of your data center fabric, independent of the WAN that connects those fabrics, and independent of the access layer as well. Same old solutions.

However, that's not to say nothing has changed, because networks have grown more and more complex, and so have the requirements. This is a typical service provider network. It's not a service provider presentation, because most of what we do for service providers, which is of course our focus market at Ericsson, also applies fully to enterprises, especially large ones: as you grow your OpenStack deployment, get more than one instance, more than one data center location that you want to integrate more closely with your existing IP/WAN network, you end up with very similar requirements. So, a typical service provider network: you have your access layer — enterprise, residential, mobile is missing but it's the same — then an aggregation or core layer, some peering points where you connect to the internet and some of the public cloud services, and the places where they deploy data centers. They have of course had IT data centers for a very long time for their IT applications, but the new thing with NFV is that it adds requirements in terms of performance, redundancy, the type of application, and how you onboard the application. On top of that OpenStack instance there are specific requirements tied to the way those applications are designed, in terms of redundancy and floating requirements.

What we also see is that those centralized data centers are focused on IT and some of the first wave of virtualized applications, but pretty much all the operators have plans to create more distributed data centers for NFV. And the reason they will build those more distributed data centers is something we have done in the past for IP as well, maybe not 30 years ago but maybe 20: if you have a lot of heavy traffic and want good performance, you need to move the content close to the user. It's like caching on the internet, but you ultimately want to do it at the central office, where your fiber lines, your DSL lines, your CMTS cable connections land, and inject the traffic right there, because then you're saving a lot of capacity: you don't have to carry that traffic from the peering point across your own network.
You also get much better latency, which is quite important for new applications such as augmented reality and virtual reality, where we start to see cloud rendering becoming possible: with fiber, with 5G, you're getting extremely low delays, so cloud-based rendering of those applications becomes a possibility, but it means you need extra-low latency, less than one millisecond round trip between the end user and the application. So some of those applications are getting distributed, and of course there are all kinds of shades of gray in between. You have the very centralized data centers; some of the distributed data centers could be 10 or 20 sites, maybe up to thousands — that's in the plans of some operators — and with that extreme distribution a very complex topology appears.

So what about SDN, and why do we believe SDN is important? Well, you need to interconnect the centralized and the distributed data centers, because you have sets of VNFs that need to work together: some could be the control plane or management plane, some the data plane of the same VNF, and they have to be distributed across data centers. You have requirements to connect your physical access: if you're running NFV, you're still getting the traffic from a physical base station, a physical GPON node that terminates your fiber lines, and you need to get that traffic from your physical network into the virtualized layer. For that you also need this first arrow, the NFV connectivity. You might host applications for enterprise customers, so you will need to provide them with guaranteed access for mission-critical applications, connecting the enterprise to those more centralized data centers where you have more scale. And then of course you also want to interact with public cloud in a hybrid setup, for the operator's own applications but also for the enterprises, which could benefit from content hosted locally by the operators but used in a hybrid mode together with public cloud.

So you get a more complex network setup, with a lot of those arrows, and each of those arrows has its own set of requirements. If you go to hybrid cloud, you will need it to be encrypted. If it's NFV connectivity, you might need one single IP address to handle maybe 100 or 200 gig worth of traffic. So each of those lines has its own set of challenges: converged and secure access and so on. You also have to deal with how you apply QoS in the core network, and not just simple DiffServ QoS: if you really want a very mission-critical application, you need to guarantee SLAs, guarantee minimum bandwidth, and compute the path of each and every application through the network, link by link. Those kinds of traffic engineering applications have been done in IP networks for a long time, but the challenge is how we integrate that with cloud, and that's where we are going.

So what does it mean? It means you need a new platform to build cloud connectivity, and that platform — SDN, as you know — needs to have all these characteristics. We really believe in openness: not only open protocols, which is what people usually associate with SDN, but also open source, community-driven open source. We are big believers in that ourselves, obviously, and many more are too.
Collaborative development is very important. API models too: make sure we define APIs well, that they are intent-driven and so on, because we need to create the right level of abstraction while at the same time making sure we don't hide any capabilities that could be helpful for the layers above. And then the evolution to DevOps, which is still a bit far away for the operators because it's complex, and I guess also complex for large enterprises with many legacy applications, but this is really a key enabler, not only for virtualization and cloud but also for SDN.

The challenge is, if you look at the original set of APIs we got from OpenStack, the Neutron APIs, they were pretty basic. You can do a bit of bridging, a bit of routing, and that's pretty much it: L2, L3, and you're done. Then of course you have much more advanced APIs when it comes to value-added services — load balancing, firewalling, and so on — but from a routing or simple connectivity standpoint they are very simple. I think one of the challenges of SDN is not only to have a platform that fulfills all those characteristics, but also to enhance, release after release, the set of capabilities we provide to the virtualized layer, from very basic connectivity functions to more modern IP routing capabilities. If you look at those modern IP routing capabilities, you have things like inter-domain and hierarchical constructs in order to scale; all kinds of topology constraints for your services: point-to-point, point-to-multipoint, multipoint-to-multipoint; being policy-driven in terms of routing behavior for redundancy and optimization; traffic engineering, which I mentioned before, making sure you can map packets onto a specific path in your network in a very advanced way; fast reroute for redundancy, which is very important — there are still those magical 50-millisecond restoration requirements for many applications; and new technologies such as segment routing that provide a lot of flexibility. All those things are not integrated today, or not in a proper way, into OpenStack and the virtualization solution. So one of the things we believe is that it's very important to build an SDN platform with very strong routing capabilities at its core, because over time those things are coming our way: as we want to make things more carrier-grade or enterprise-grade, as we want to make things more deterministic, and as network complexity continues to increase with all kinds of hybrid connectivity and distributed models, we will need all those tools to satisfy the requirements.

So, a practical example, a use case of routing. Some of the things you can do is route in a more efficient way inside the data center — intra data center, inside the same SDN domain. One thing is interworking well with your data center gateway, whatever edge router you have, so that when you have a virtual IP address and load balancers, and you get this fat pipe of 100 gig of traffic going to one single IP address, even before you reach a software load balancer you need hardware ECMP to load-share the traffic across your first-stage load balancers.
And in order to do that, you need to use routing, so that you speak the same language as the DC gateway, and you make sure this happens with the right hashing of the flows and the right level of redundancy and so on. Another thing that is typically done in MPLS networks is that you can attach a kind of group ID: with an MPLS VPN it's not only that you define a blue VPN or a red VPN; one endpoint can belong to more than one VPN at a given time. So you can create interesting applications like intranets and extranets, where a red VM and a blue VM don't see each other, but both see the purple VM that belongs to both networks at the same time. You can do things at a much more granular level. When we talk about micro-segmentation, very often you assume some kind of distributed firewall doing the filtering, but you can do even better than that: you don't even have to filter, you can just make sure you only leak the routes that you need, depending on the VM, and connectivity is no longer tied to the definition of a subnet. It's not because you belong to the same subnet that you see each other; you can create all kinds of constraints in the routing space.

There is even more interest, of course, when you go inter-DC, when you want to connect the different instances of your cloud, the different locations. It's also quite important — I don't have it on the slide — that it's not necessarily different locations. What we see more and more is people running different instances of OpenStack, because they have different projects that selected different stacks, or because, even with the same stack, you will have different versions and you cannot validate all your applications and migrate them overnight to the new version. So you end up having, even in one data center, many instances of OpenStack, and sometimes one solution, one VNF, is spread across multiple of those instances, and you want that connectivity to be automated. One of the things routing does very well is extend this VPN so that you can have a Neutron network that is extended over more than one VIM, more than one OpenStack instance, more than one location. And by the way, you can even do it between different SDN vendors. Of course we invest in OpenDaylight and that's our controller of choice, but you don't always have only one single solution: first, you don't necessarily only have OpenStack — we'll talk about Kubernetes, but there are also VMware deployments, and many different types of VIMs still exist — and it's very important that we have a solution that is also open. When you do inter-DC connectivity, when you set up a DCI, it's not necessarily OpenStack-specific, and it's definitely not specific to one SDN controller or version. That's one of the beauties of routing: it's standardized through the IETF and so on, so if you use the proper routing protocols east-west, you get interoperability between different VIMs, between different SDN solutions. The other nice property is that it scales. We all start small with OpenStack, but eventually we go big, and some are already big and getting very big.
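In OpenStack terms, the "extend a Neutron network across VIMs and locations" idea above is typically exposed through the networking-bgpvpn extension. Here is a hedged sketch of driving that API with plain REST calls, assuming the extension is installed; the endpoint, token, route target, and network UUID are placeholders.

    import requests

    NEUTRON = "http://controller:9696/v2.0"                     # Neutron endpoint (assumption)
    HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

    # Create an L3 BGP VPN identified by a route target that the WAN or the peer domain also imports.
    bgpvpn = {"bgpvpn": {"name": "tenant-red", "type": "l3", "route_targets": ["64512:100"]}}
    vpn = requests.post(f"{NEUTRON}/bgpvpn/bgpvpns", json=bgpvpn, headers=HEADERS).json()["bgpvpn"]

    # Associate an existing tenant network with the VPN.
    assoc = {"network_association": {"network_id": "<neutron-network-uuid>"}}
    requests.post(f"{NEUTRON}/bgpvpn/bgpvpns/{vpn['id']}/network_associations",
                  json=assoc, headers=HEADERS)

Once the network is associated, the backend (OpenDaylight in this kind of setup) advertises and imports routes carrying that route target, so the same VPN can be joined from another OpenStack instance or from an existing MPLS VPN.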
And at some point, if you want this to scale to the scale of the internet, you can use the same recipes: if you leverage BGP, you have a set of tools — nothing extraordinary from a technical standpoint, but very proven — for building very scalable networks, with all the tools in terms of security and so on to make it scale. One example is route reflectors, which you can build hierarchically so you can end up with billions of endpoints and still scale. Inter-AS, making sure you can connect different autonomous systems, for scaling but also to apply the set of policies you want to secure your network. And maybe most important, or finally: you could always do this with any SDN technology or with baseline Neutron, where you go to a DC gateway and manually cross-connect whatever VLAN to a VPN instance. But that becomes a problem with cloud, because every time you dynamically create new VNFs, new tenants and so on, you have to change your DC gateway configuration. A proper SDN solution needs to make this connectivity, this stitching to the underlay network, fully transparent, fully automated, and fully dynamic. And right now the only solution for that is to do routing. Why is it the only solution? You could make it fully dynamic by setting up overlay tunnels SDN controller to SDN controller, but then you are not able to map to the WAN in a proper way in terms of SLA and QoS, because you're building an overlay network and the WAN is not aware of your VPNs; it would also be proprietary, and you wouldn't be able to interwork properly between different domains. Or you could decide to do it on the DC gateway, and you get those two properties back, but then you need some very complex orchestration to make it automatic. So the right choice is to use a native routing stack in the SDN layer, because that avoids any kind of static configuration on the gateway and makes things very fluid and seamless between data centers, between different data center instances, and the WAN.

Other things: service chaining. Why is it important? It's important for operators, but what they need is not basic service chaining where you say, okay, for one tenant I want the traffic to go through a firewall and then parental control and be done with it. They want this to be policy-driven, based on their policy server, which is aware of the WAN, of the subscriber, of the type of radio access technology. So how you set up the service chain can be very dynamic, and very advanced classification is required, per subscriber or per destination or per application. Classification is quite important, and you want to do it at scale, again. The same way we started with centralized routing and then moved to DVR, and started with centralized firewalls and then distributed firewalls, it's the same for service chaining: you need distributed classification in the underlay, plus load balancing, symmetric forwarding, and redundancy functions. All those things are essential, and they are quite advanced features on top of it.
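As a concrete flavor of what driving a service chain through the cloud API can look like, here is a hedged sketch against the OpenStack networking-sfc extension: classify some tenant traffic and send it through a firewall group and then a parental-control group. The endpoint, token, classifier values, and port pair group UUIDs are placeholders, and a real deployment would usually drive this from the orchestrator or policy layer rather than raw REST.

    import requests

    NEUTRON = "http://controller:9696/v2.0"                     # Neutron endpoint (assumption)
    HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

    # Classify outbound HTTP traffic from the tenant's web subnet (values are made up).
    fc = {"flow_classifier": {"name": "web-out",
                              "protocol": "tcp",
                              "destination_port_range_min": 80,
                              "destination_port_range_max": 80,
                              "source_ip_prefix": "10.10.0.0/24"}}
    fc_id = requests.post(f"{NEUTRON}/sfc/flow_classifiers",
                          json=fc, headers=HEADERS).json()["flow_classifier"]["id"]

    # Chain that traffic through the firewall group and then the parental-control group
    # (the port pair groups are assumed to have been created beforehand).
    chain = {"port_chain": {"name": "web-chain",
                            "flow_classifiers": [fc_id],
                            "port_pair_groups": ["<fw-ppg-uuid>", "<pc-ppg-uuid>"]}}
    requests.post(f"{NEUTRON}/sfc/port_chains", json=chain, headers=HEADERS)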
Routing again: if you look at service chaining but want to do it across the network, you will have different data centers, different segments. And one of the places where you need tight integration with routing is this: you can do service chaining using whatever technology you want inside each data center segment, inside each SDN domain — you can use NSH, you can use BGP, whatever you want. But the best practice, if you want to do this service chaining end to end, between your physical routers, between different data center instances, connected back to the internet, is to stitch whatever technology you use for service chaining at the ingress and the egress with routing. That's also quite important because of things like geo-redundancy: you can monitor a service chain, and if one of the nodes is dead you can protect it in different ways, bypassing a service function or load balancing across service functions; but for geo-redundancy you can say, maybe a whole data center is dead and I just want to bypass it. What I will do is simply remove the route advertisement, or change the route metrics, and automatically the network will redirect the traffic to the right data center instance. So to combine those types of functionality, again, routing capabilities are very important.

Last use case: the other ones were explained in a service provider context but are completely applicable to enterprises; this one a little less, although we do start to see that the virtual CPE type of use case can also appeal to enterprises when they can build it themselves, more in an overlay fashion, such as SD-WAN. One of the things you may want to leverage your SDN solution for is to seamlessly interconnect your existing physical MPLS VPNs, the traditional ones, and add functionalities and services that are virtualized. So you don't have to upgrade your routers: you keep your traditional Cisco routers, physical boxes and so on, and your existing MPLS VPN, but you connect it in a smart way to a data center where you deliver the new and advanced security services, for instance, in a cloud model. There is another model, probably more applicable to operators, where you have a very dumb, very thin CPE, very cost-efficient, like a 30-dollar bill of materials, and you emulate all those functions in the data center, using a different type of connectivity: instead of a Layer 3 VPN you will probably use VXLAN or EVPN type connectivity, more like Layer 2 tunneling, between the CPE — which is very basic, just doing bridging — and the data center, and then emulate all the functions more centrally. And finally, SD-WAN is probably the most flexible, where you can virtualize both on premises, on the CPE itself, and in the cloud. That allows a lot of interesting properties, such as local turnaround of the traffic, so you can have CPE-to-CPE traffic and local breakout, which means you're not dependent on a local cloud but can still combine it with a centralized cloud for some of the more advanced services. So connecting your sites together through the cloud, or connecting the sites to the cloud, is also one of the interesting properties of the SDN controller.
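The geo-redundancy trick mentioned above — steering traffic away from a dead data center by withdrawing its route advertisement — can be pictured with a small sketch. This assumes a BGP speaker such as ExaBGP peering with the DC gateway and reading announce/withdraw commands from a helper process; the prefix, next hop, and health check are made up.

    #!/usr/bin/env python3
    # Helper process run under a BGP speaker like ExaBGP: the lines printed on stdout
    # are turned into BGP announcements/withdrawals toward the DC gateway.
    import time

    PREFIX = "203.0.113.0/24"   # service VIP range hosted in this data center (made up)
    NEXT_HOP = "192.0.2.1"      # this site's gateway/loopback (made up)

    def site_is_healthy() -> bool:
        # Placeholder: probe the VNF cluster, ask the SDN controller, etc.
        return True

    announced = False
    while True:
        if site_is_healthy() and not announced:
            print(f"announce route {PREFIX} next-hop {NEXT_HOP} med 50", flush=True)
            announced = True
        elif not site_is_healthy() and announced:
            # Withdrawing the advertisement makes the WAN reroute traffic to the surviving site.
            print(f"withdraw route {PREFIX} next-hop {NEXT_HOP}", flush=True)
            announced = False
        time.sleep(5)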
So all of those things are things that we have implemented in OpenDaylight today and that are available for consumption in a shippable product. The title of the presentation was also about high performance, which is critical for service providers, but if you think about enterprises, they also have data-plane-intensive requirements: they use security and firewall functionalities, and at some point you might want to virtualize some of your storage solutions and so on. In that case, if you don't run it on a flat network, you will also need high performance. And what we can do with ODL, in a very flexible way, is drive a black-box router, OVS, a SmartNIC, or solutions such as FD.io, which are new data plane options being introduced. All those options come with different southbound protocols and different functionalities, and the interesting property of OpenDaylight, and the reason we invested in it, is that it's very modular. You can build plugins for different types of services, you can model different types of services, you can configure black-box, white-box, or OpenFlow-centric SDN nodes, and you can adapt to different types of data planes. Those are just examples of what you can do, and with this set of data planes you can get a very high-performance virtual data plane, or a very high-performance physical data plane with acceleration.

So, coming to what's next. Checking the time — maybe I'm already a bit over — but what's next? Of course we have virtualization, but we still have bare metal, and bare metal is quite important. Why is that? You have a lot of legacy appliances that you're not going to throw away and that you want to keep working. You also have the new legacy, which is virtual network functions that you deploy but that use SR-IOV, either to get good performance or because there wasn't time to integrate them in a proper way. And SR-IOV is not always a good thing; it depends on the profile, or I would say the use case. SR-IOV can be good in the sense that you're independent from the forwarding layer and the NFVI layer: you just bypass it and go straight to the hardware, which gives you the best performance. But that's also what makes it a bad thing: if you want to use NFVI as a clean abstraction between your VNFs, your applications, and the infrastructure, you don't want to create dependencies on your NIC card, on the way your data center gateway is configured, on what your data center fabric is made of, whether it does Layer 2 or Layer 3, what kind of VLAN tagging, and so on. You want to abstract all of that, and from that aspect SR-IOV is very bad.
So you have to deal with those SR-IOV network functions regardless; they are a reality, and much of the initial wave of virtualization happened with SR-IOV, so that's something you need to integrate. From the NFVI layer they look more or less like a bare metal server, because they go straight to the hardware. There is also a lot of extreme performance that is not going to go away. You might see less SR-IOV moving forward as we get new and better ways of connecting VMs at high performance and as you get rid of your legacy appliances, but there will always be a set of functionalities, especially for the radios, that you cannot virtualize, or cannot virtualize efficiently: typically the access nodes, where you have specific physical interfaces — radio interfaces, GPON and so on — and specific accelerations that don't fit very well with general-purpose CPUs yet. Eventually we might get there, but maybe on different architectures than x86, and until then we have to make provision for that. So bare metal is here to stay, for legacy reasons but also for some of the high-performance requirements.

Virtual machines are still the mainstream approach, because it's easy to virtualize any kind of legacy application, and they're actually pretty good in terms of data plane performance. If you look at containers, they're more lightweight in memory footprint, but typically if you have one VNF running across multiple servers, you only have one VM per server, so the memory footprint doesn't really make a difference, and since those applications are designed in a cluster mode, the boot-up time doesn't make much difference either. So VMs are here to stay as well for quite some time. And then, finally, containers are playing a bigger and bigger role: already in the public cloud, where most applications are heading; a lot of traction in the enterprise; and in the service provider market there are applications that benefit from containers today, for example applications that are not natively multi-tenant, where you release a container per tenant, and that's where you get the best fit. So you have all three, and all three will coexist for quite some time.

If you look at your data center now, you have the hardware layer, typically a bunch of switches with a more or less SDN-driven fabric; most hardware vendors come with a way to make this fabric dynamic, so it can auto-discover and auto-configure itself, with its own tools for analytics, management, and so on. We just talked for a long time about the OpenStack layer and virtualization. And now you get container orchestration solutions such as Kubernetes and a couple of others. Each of those layers comes with its own networking solution. So what you end up with is a number of networking solutions in OpenStack, many different SDN controllers, more or less open; you get new things like OVN and Dragonflow, new initiatives doing a more or less SDN-like approach; and with containers you get things like Calico and Flannel, yet another set of services. Diversity and creativity are a good thing, but of course it creates practical issues for customers when you want to run an application that mixes and matches bare metal, VMs, and containers.
Some of these applications are pretty complex: you have the management, the data plane, the security layer and so on, and all of that is one application made of many VMs and many containerized components. From an end-user perspective you get a performance impact from running layers of tunneling over layers of tunneling. You can accelerate — I mentioned SmartNICs before — your NFVI, your OpenStack networking, but if you then run another software overlay on top of it, you lose the benefit of that acceleration and you break things like TCP offload. So you need to make sure you keep your best performance, a nice and easy way of provisioning and visualizing the network, and avoid this kind of loose integration. Also, from the administrative standpoint, correlation and troubleshooting are quite important; there is an interworking and connectivity burden when you have to manually connect each of those layers together, pull IP addresses, decide what you use on the DC gateway and what you use on the container layer, create your OpenStack networks, and then you have to learn multiple connectivity solutions, multiple SDN layers.

So what is a potential solution to that? We see two, and again we believe OpenDaylight is a great platform because of its flexibility. The simplest is one single SDN controller that serves the different layers at the same time: it configures your underlay and overlay, manages your physical switches and routers, provides virtualization to your OpenStack layer, and provides connectivity to your container and Kubernetes layer. Another approach would be as many SDN instances as you have layers; but then, when you provision a service across the different layers, you need to provision it with the same model at each layer and have those layers talk to each other, so no manual operations are needed. To do that, we also believe OpenDaylight is the right approach, because you can normalize the API, and it's all YANG-driven. It's also compatible: we now have a CNI plugin, I'm coming to that, natively for Kubernetes; we have of course a Neutron implementation; and we are pluggable into new things coming in OpenStack such as Gluon and other options. The idea is to keep the northbound API very open, but ultimately to have the same way to represent connectivity between the different layers — the physical layer, the virtualized layer, and containers — the same way to express network connectivity, security, and so on. That will take longer, because doing things like service chaining with routing as an east-west protocol between instances takes more time. What is probably going to happen is some kind of hybrid: you will get a single controller that can do underlay and overlay, that can do overlay and containers, but maybe multiple instances of it, and those instances will collaborate through those east-west protocols. So how do we do that?
Two options, again, if you look at containers. One is Magnum, basically Magnum plus Kuryr: you run your container instances but you use the Neutron layer to integrate with the same networking solution as the VMs. That's probably the simplest and most efficient solution: if you run OpenStack and also want to run container solutions alongside it, you can use the same backend, and it's more or less transparent, so that is a very practical option we have today. However, there is also a performance impact: with containers, depending on the application — if you're really into one of those applications where you need to fire up tens of thousands of containers per second in a very intensive way — a Neutron backend today is maybe not optimized for that. There may also be container-specific functionalities: the Layer 4 load balancing and so on that is integrated with Kuryr uses the LBaaS plugin on the backend, but there might be new and more advanced capabilities you want natively on the container layer. So the other option — and the two are more or less compatible — is a native CNI plugin for OpenDaylight, and we started an upstream project in ODL focused on providing that CNI plugin capability. As you can see, this plugin also has to work with a data plane that satisfies two things. First, for containers you want to use the NFVI rules to do service chaining, VPN, load balancing, analytics, whatever; but you also want this layer to support the native kernel networking services, such as iptables for security, that the container might request, and you need both at the same time. If you use OVS-DPDK, you're bypassing the kernel and not applying all of that, so there is some complexity in integrating these kernel-layer services together with DPDK. There is also the case where you want to run your VNF in containers: if it's a data plane VNF, you need DPDK, and integrating DPDK with containers is one of the challenges as well; this is coming, there are early versions of DPDK support for containers, and these are some of the challenges we are solving right now from a data plane perspective. But the great news is that once you do that with, for instance, OVS or one of the mainstream data planes, you can integrate it in a seamless way between your OpenStack virtualized domain and the container side.
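To give a feel for what "a native CNI plugin" means, here is a minimal sketch of the contract a CNI plugin implements, as defined by the CNI specification. It is not the OpenDaylight plugin itself — just the generic shape — and the returned address is made up.

    #!/usr/bin/env python3
    # Minimal shape of a CNI plugin: the container runtime invokes the binary with CNI_*
    # environment variables and the network configuration on stdin, and expects a JSON
    # result on stdout. A real plugin (e.g. an ODL-backed one) would create the veth/OVS
    # port here and ask the controller for the address and policy.
    import json, os, sys

    def main():
        command = os.environ.get("CNI_COMMAND")        # ADD, DEL or VERSION
        conf = json.load(sys.stdin)                    # network config pushed by the runtime

        if command == "VERSION":
            json.dump({"cniVersion": "0.3.1",
                       "supportedVersions": ["0.3.0", "0.3.1"]}, sys.stdout)
            return

        netns = os.environ.get("CNI_NETNS")
        ifname = os.environ.get("CNI_IFNAME", "eth0")
        container_id = os.environ.get("CNI_CONTAINERID")

        if command == "ADD":
            # Here the plugin would wire the pod into the overlay and learn its address;
            # this value is made up for illustration.
            result = {"cniVersion": "0.3.1",
                      "interfaces": [{"name": ifname, "sandbox": netns}],
                      "ips": [{"version": "4", "address": "10.1.0.5/24", "gateway": "10.1.0.1"}]}
            json.dump(result, sys.stdout)
        elif command == "DEL":
            pass  # tear down the port / release the address for container_id

    if __name__ == "__main__":
        main()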
Bare metal is another challenge. When I say bare metal, there is a need to integrate appliances, storage, and so on into the same overlay as your virtual services. Why would you want storage on the same overlay? For instance, if you want to distribute some of your storage capabilities so that a remote VNF can access storage that is more centrally located, having this go through a VPN and so on simplifies a lot of things: the same analytics tools for network behavior, the same QoS tools — we talked about QoS before, and I'm talking about QoS at the networking layer, making sure you tag the packets with the right DSCP and put them into the right queue. So there is interest in making those work in the same overlay, and that means the SDN controller can program rules into the overlay switch; but if you have an SR-IOV or bare-metal appliance, you have to configure the top-of-rack switch, and that's where it becomes more complex, because some top-of-rack switches you can program with OpenFlow, some with OVSDB, and some are more like IP routers where you can only talk NETCONF, EVPN, or those types of routing protocols to them. This is where, again, the modularity of OpenDaylight helps a lot, because we can adapt to different types of switching fabrics: we can program the ToR switch to connect the appliances to the overlay, but we can also bring up the DC fabric itself. That's the natural evolution of ODL — moving forward, to use it also for fabric automation. That has a lot of benefits as well: it synchronizes the different layers and gives you a complete, synchronized view of the overlay and underlay — correlation of what happens on the overlay, which link the traffic between two VMs is mapped onto — with all the analytics on top of it. Okay, that was the last slide on the new things coming, and now I'll leave it to Nir to cover the actual project. Thank you.

Thanks, François. Okay, I'm well aware of the fact that I'm standing here between you and lunch, so I'll do it quickly. OpenDaylight in a nutshell: it's basically this cool SDN platform that is highly modular and extensible, and we think that is the key benefit of OpenDaylight, because as you just saw, we are talking about very complex use cases, and nobody here really knows what the next thing is, what the next protocol is, and so on. We want this to be a truly generic platform that can adjust as you go. This is a typical picture of the architecture: the controller platform in the middle, pluggable on both sides. On the southbound side there are different protocols and interfaces to talk to the virtual switches, the physical fabric, and so on, and on the northbound side you can talk to different orchestration agents like OpenStack, Kubernetes, and so on. The beauty of OpenDaylight is that at the core there are the services and plugins, the runtime, the model-driven API, and the data store, which are common as a base platform, and then you can write different types of applications on top of that and cover diverse use cases: edge services, IP routing, optical transport, physical fabric management, overlay management, and so on. That is really the benefit of ODL.

In terms of OpenStack integration, we are primarily talking about one project in ODL called NetVirt, which is the technology we are using to integrate OpenStack and OpenDaylight via Neutron. Some key fundamental facts about this integration: Neutron is still where you define the networking API, so we are not replacing Neutron. Neutron is still the de facto API on the infrastructure, and we are just implementing the Neutron API with OpenDaylight as a backend, which is really important going forward, because we want to standardize on Neutron as an API and not replace it. One thing I wanted to highlight is something we did in the last upstream release, Boron, in OpenDaylight, which was to merge the NetVirt project with another project called VPNService, which used to be separate. This really shows the convergence in the upstream community around NetVirt as the major way of integrating OpenStack and OpenDaylight, which again is really important, because we bring all these use cases and all this know-how from Ericsson and others in the community into the NetVirt project, making it more feature-rich and robust.
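As a concrete flavor of the "Neutron stays the API, OpenDaylight implements it as a backend" point: on the OpenStack side the networking-odl ML2 mechanism driver points Neutron at the controller's northbound URL, and the same Neutron objects can then be read back from OpenDaylight over REST. A minimal sketch, assuming default-style credentials, port, and path, all of which vary by deployment and release.

    import requests

    # OpenDaylight's Neutron northbound mirrors the objects pushed by networking-odl.
    # Controller address, port, and credentials are assumptions; adjust to the deployment.
    ODL = "http://192.0.2.10:8181"
    AUTH = ("admin", "admin")

    resp = requests.get(f"{ODL}/controller/nb/v2/neutron/networks", auth=AUTH)
    resp.raise_for_status()
    for net in resp.json().get("networks", []):
        print(net["id"], net.get("name"))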
So NetVirt is basically an application developed on top of OpenDaylight. We have a pretty powerful implementation of L2, L3, access lists, NAT, DHCP, IPv6, and so on, and the idea is to configure and manage the overlay network — basically everything that Neutron provides today — but also top-of-rack switches, currently using the L2 gateway extension with OVSDB, and obviously tenant networks for OpenStack. It uses NETCONF and YANG to model the topology, which is a key architectural piece of ODL, but it's really modular and extensible: currently we're using OpenFlow and OVSDB in NetVirt, but we are looking into things like NETCONF, BGP, and other southbound interfaces to manage new southbound pieces as well. François mentioned the CNI plugin for Kubernetes; we're also working on VPP, which is part of the FD.io integration. This shows how powerful the platform is: we don't need to reinvent everything or add new agents; we can just reuse the platform, add more plugins, and integrate with new systems and technologies.

Just really quickly about Red Hat and Ericsson: what we are doing, together, is providing a converged NFVI infrastructure. We are working on different tracks — NFVI, SDN, software-defined infrastructure, and containers — and basically aligning what we are doing in terms of upstream initiatives and product. Maybe, François, you can quickly describe how you are certifying the platform.

Yeah, just very quickly: on this slide you see what I described, our joint solution with the OpenDaylight-based controller, but we are also selling it, and I think that's one of the ways we bring the technology to market: as a certified, turnkey solution, with all the Ericsson and Red Hat components pre-certified, including Ericsson VNFs, that can also be consumed in a modular way, so you can pick and choose best of breed. And it's all obviously open and based on the open-source, upstream model.

Yeah, and last but not least, for further reading you can check out the links here about the NetVirt project and some product documentation. We are running out of time — take pictures. Sorry, we're running out of time, but maybe we can take two questions; there are two mics here, and I have this one as well. Questions or comments, feedback, what did we forget? Thank you very much. Okay, thank you very much. Bye.