Hello, good morning, and welcome to the session. Thank you for coming. Today I'm glad to present the IPv6 project that we've done in OPNFV, together with my colleagues Sridhar Gaddam from Red Hat and Prakash from Huawei. Today we will briefly review what the project is, introduce the key project facts, its goals and deliverables, and what we released in the OPNFV Brahmaputra release in February. Then Sridhar will talk about what it really means in practice: how to use a service VM as an IPv6 router, what has been achieved, and why we do this. Prakash will introduce our planning for the Colorado release, the next big OPNFV release, planned for August. And of course we need to acknowledge all of the contributors to the project; it is a community effort, and we have had lots of good contributions from different companies across the whole industry.

Now, the key project facts. As everybody knows, OPNFV is the integrated carrier-grade platform to accelerate the introduction of NFV products and services. OPNFV was founded in October 2014, and the IPv6 project was approved as a formal project on November 25, 2014, about one month after OPNFV was founded. All of the projects in OPNFV are in the incubation stage; they go through different life-cycle stages and continue to develop, incubating toward maturity in terms of features, functions, performance, and so on. We have our own Gerrit repository, named "ipv6", and our own project wiki: you can go to wiki.opnfv.org/display/ipv6, which is our wiki page. I'm the project lead and the primary contact for the project.
I'm very happy to have support from many different companies across the industry, including AT&T, Brocade, Cisco, ClearPath Networks, Cloudbase Solutions, Huawei, Nokia, Red Hat, and Spirent. We hold bi-weekly meetings every other Friday at 8 AM Pacific time; anybody who is interested is welcome to join the conference call, and all the logistics can be found on our website.

So what is the goal of the project? IPv6 is very important for the future of the network infrastructure; it is the future of networking, no doubt. The project goal is to deliver an IPv6-enabled meta-distribution of the OPNFV platform, because OPNFV provides an integrated carrier-grade NFV platform for products and services, and from a future-proofing perspective this needs to be enabled by IPv6. We also need to provide a methodology for evolving the IPv6 capabilities of OPNFV. In terms of specific deliverables, we are targeting an integrated package built from the same basic upstream components that OPNFV integrates, including OpenStack, the different SDN controllers, the different hypervisors, storage, and virtual networks, running on the different OPNFV community labs. Because of the nature of IPv6, we need auto-configuration scripts to automate the configuration and provisioning of IPv6 features and enhanced functions on top of this platform, for those features that can be automatically configured. We also need a user guide and installation guide for features that are not necessarily automated, but need to be configured step by step, manually, on top of the platform after OPNFV is deployed by the installers.
We also need to develop test cases for those IPv6-specific features on top of the platform, in addition to the platform's functional testing and Yardstick testing. And we are doing gap analysis to see which features are currently not available from the platform perspective, and what the roadmap and methodology are to bridge those gaps in future releases.

At the end of February, OPNFV had its second release, called the Brahmaputra release, named after a river in India. What we delivered in Brahmaputra includes an integrated meta-distribution of the package, enabled by the installers. The installers install the basic OPNFV package, and we provide an installation guide for manually setting up the service VM as an IPv6 vRouter on top of that platform. In order to do that, we need to support two different scenarios. OPNFV supports more than 10 different scenarios for the Brahmaputra release, and for the IPv6-specific features we enabled two of them: one is "no SDN, no feature", and the other is "ODL L2, no feature". The reason is that "no SDN" means pure OpenStack, because Neutron supports IPv6 routing in its L3 agent; but with ODL, L3 routing for IPv6 is not supported yet, so we use ODL only for L2 switching while still using the Neutron L3 agent for the IPv6 routing functionality. Those are the two supported scenarios. Of course, we have four different installers (Apex, Fuel, Compass, and JOID), and depending on which installer you use, you give it different parameters, as described in our installation guide.
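As a concrete illustration of the pure-OpenStack ("no SDN") path just described, an IPv6 tenant subnet served by Neutron's L3 agent might be created roughly like this. This is a hedged sketch, not the project's actual commands: the network and router names and the 2001:db8 documentation prefix are made-up examples.

```shell
# Create a tenant network and an IPv6 subnet using SLAAC
# (names and the documentation prefix are illustrative assumptions).
neutron net-create ipv6-net
neutron subnet-create ipv6-net 2001:db8:0:2::/64 \
    --name ipv6-subnet --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac

# Attach the subnet to a router; the L3 agent then runs radvd in the
# router namespace to advertise the prefix to VMs on the network.
neutron router-interface-add router1 ipv6-subnet
```

These commands need a running OpenStack cloud, so treat them as a command fragment to adapt rather than a script to run as-is.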
We have an installation guide with step-by-step instructions, including how to use the different installers and which parameters to give them in order to achieve the basic scenario first, then add the IPv6 vRouter, and then do the Yardstick testing. We developed a test case with the Yardstick project; Yardstick is another OPNFV project targeted at testing the OPNFV infrastructure, building on Rally and Tempest, which test the OpenStack APIs and performance. And we have done a gap analysis, or at least an analysis, of the features that are supported or not supported in the OpenStack Liberty release and in the OpenDaylight Beryllium release; we'll talk about that later. So now I will introduce Sridhar, and he will give more details about our contributions and the service VM acting as an IPv6 router. Sridhar?

Thank you, Bin. We have contributed to some of the IPv6 features in the upstream components, namely OpenStack and OpenDaylight. Our contributions to OpenStack include support for IPv6 in Neutron L3 HA routers, and support for IPv6 address-spoofing protection, where we program flows on the OVS switches to make sure that a VM cannot send out neighbor advertisement packets for IP addresses and MAC addresses that it does not own. Recently we added support in the OpenStack Neutron code base to advertise DNS information as well as MTU information as part of router advertisements, and, working with the community, we also added support in the DevStack installer to set up an OpenStack distribution on an IPv6 infrastructure.
Coming to OpenDaylight: when we started using OpenDaylight as the SDN controller along with OpenStack, we noticed that when we create an IPv6 subnet, we hit some Java exceptions, because of which we could not continue with our use case. So we addressed those exceptions and were able to proceed further. Currently, we are actively working with the community to implement the IPv6 router in the controller. In the next few slides, we'll talk about the goal we set when we started this project, what the design looks like, the underlying network topology, the detailed setup steps, and the gaps we have identified in both OpenStack and ODL.

One of the important goals we wanted to achieve as the IPv6 subteam in OPNFV for the B release was to provide a platform that can run an IPv6 service VM capable of doing two things: one, periodically sending router advertisements to VMs that belong to different tenants; and two, providing IPv6 external connectivity to those VMs. Basically, the idea is to have a platform that allows you to try out other use cases, and to enable some open innovation. Apart from this, another important thing we had in mind was to identify the gaps and contribute as a team wherever possible to the upstream components.

Now, what does the design look like? Let me take you through this diagram a little bit. On the left-hand side you can see a network node, and you see two compute nodes, one in the middle and one on the right. The admin for this OpenStack would create two networks: one shown in brown, which is the external or provider network, and the other in blue, which is the shared tenant network. Now say, for example, you have tenant A, who is interested in providing an IPv6 service VM.
Tenant A spawns a service VM with two interfaces, one connected to the external network and the other connected to the shared tenant network. In this setup, other tenants who are interested in the service-VM use case can spawn their VMs on the shared tenant network and leverage this IPv6 infrastructure. Here, compute node one is hosting the IPv6 service VM, which acts as the IPv6 router for the VMs hosted on compute node two. We wanted to achieve this design both with pure OpenStack and with OpenStack plus ODL, and by the way, it can easily be extended to any other SDN controller that is integrated with OPNFV today. If you pick up the OPNFV installers, like Apex, JOID, and so on, you can choose the scenarios that help you replicate this setup and try out the service-VM use case, that is, the two scenarios mentioned in the previous slides.

The underlay network topology consists of three different nodes: one is a combined OpenStack control and compute node, another is a pure OpenStack compute node, and the third is an OpenDaylight controller node. We tried this service-VM use case with both OpenStack Kilo and OpenStack Liberty, and when using an SDN controller, we tried OpenDaylight Lithium as well as Beryllium. We identified some gaps, which we'll talk about in the next few slides. The OPNFV B-release notes provide detailed instructions on how to achieve this use case, the various REST APIs involved, the gaps, and some of the best practices we have identified. So I suggest that people interested in this use case please take a look and reach out to us with any questions; we'll be more than happy to help. Next is a simplified view of what the service VM is all about.
As I mentioned on the previous slide, you have a service VM connected to an external, physical IPv6 router in your infrastructure. It acquires an IPv6 address using SLAAC, so the default route automatically points to the upstream external router. On the internal, tenant-facing side, you have a shared network with a couple of VMs; in this diagram you can see VMs from tenant A and VMs from tenant B. The service VM acts like a router for these VMs: it periodically sends them router advertisements, and it forwards the traffic coming from these VMs to the external IPv6 router.

Now, in order to achieve this use case, we evaluated a couple of options. One was to use a metadata or cloud-init script to push the necessary configuration into the service VM. The other was to use a snapshot image with all the required packages installed and started on boot-up. We preferred option one, the metadata approach, because it is a script: you can customize it according to your needs and try out other use cases with the OPNFV installers. That was one of the main reasons. The other advantage is that you can pick any standard cloud image (Fedora, Ubuntu, CentOS, whatever image you are interested in) and try this service-VM use case with that QCOW2 image. At a high level, what the metadata script does is this: as most of you know, with a cloud image, usually only the default interface is brought up and not the additional interfaces. So the cloud-init or metadata script enables the additional interfaces, and it also sets the proc entries required for IPv6 forwarding inside the VM.
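To make the SLAAC step concrete: the service VM derives its address from the /64 prefix carried in the upstream router's RA plus an EUI-64 interface identifier built from its own MAC address. A small Python sketch of that derivation, where the prefix and MAC are made-up examples, not values from the talk:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the EUI-64 SLAAC address a VM would autoconfigure
    from a /64 prefix advertised in an RA and its MAC address."""
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen == 64, "SLAAC requires a /64 prefix"
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe
    iid = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_address("2001:db8::/64", "00:16:3e:33:44:55"))
# → 2001:db8::216:3eff:fe33:4455
```

This mirrors what the VM's kernel does on receiving the RA; privacy extensions (RFC 4941), if enabled, would produce a random interface ID instead.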
It downloads the necessary packages, which in our case means radvd, and it pushes the required configuration to enable SLAAC or DHCPv6 use cases inside the service VM. Now, while this is a simple use case, our main idea was to provide a platform, as I mentioned earlier: by making some minimal changes to the metadata script, you can extend this use case to other things. For example, if you are interested in a prefix-delegation use case, you can run a PD client inside the service VM, have a PD server external to it, or run a Quagga client. You can try out different use cases with minimal changes to this script. With that, I'll hand back to Bin to talk about the steps.

Thank you, Sridhar. So we summarized and documented the setup steps, and I'll give you a very brief introduction. Earlier we talked about two installer scenarios for installing the OPNFV platform. Here we will also talk about "scenarios", but the concept is different from the OPNFV installer scenarios: here a scenario is about how we deploy IPv6 on a specific combination of OpenStack plus OpenDaylight, beyond the standard OPNFV installation package. With the installers you don't have a choice; you get the standard OpenStack Liberty plus the OpenDaylight Beryllium release. But if you want to try it manually, without the automatic install, you can install OpenStack and OpenDaylight on your own, and that is how you can achieve the flexibility of different combinations of OpenStack and SDN controllers. That is what "scenario" means in this context. Basically, we have three different scenarios that we explored and want to share with you. The first is the pure OpenStack environment.
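The metadata/user-data script Sridhar described might be sketched roughly as follows. This is a hedged illustration, not the project's actual script: the interface names, package manager, and the 2001:db8 documentation prefix are all assumptions.

```shell
#!/bin/bash
# Illustrative user-data for the IPv6 service VM (all names assumed).

# 1. Bring up the second, tenant-facing interface; cloud images
#    usually configure only the first NIC.
ip link set eth1 up

# 2. Enable IPv6 forwarding so the VM can route between interfaces.
sysctl -w net.ipv6.conf.all.forwarding=1

# 3. Install radvd and advertise the tenant prefix over eth1.
apt-get install -y radvd
cat > /etc/radvd.conf <<'EOF'
interface eth1 {
    AdvSendAdvert on;
    prefix 2001:db8:0:2::/64 {   # assumed tenant prefix
        AdvOnLink on;
        AdvAutonomous on;        # lets tenant VMs use SLAAC
    };
};
EOF
service radvd restart
```

Passed via `--user-data` at boot, a script like this turns a stock cloud image into the router without building a custom image, which is the flexibility argument made above.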
The first scenario is simply OpenStack with no SDN controller; the native Neutron back-end is used. The second scenario is OpenStack plus OpenDaylight, but OpenDaylight Lithium release SR3 or earlier. Why do we call this out? Because when we experimented with the Lithium release, especially SR3 and earlier, before Beryllium was officially released, we found a bug: when we create the IPv6 subnet on the Neutron router, there are Java exceptions. Sridhar provided a patch to fix this bug, but it did not land until Lithium SR4 and later releases, for example the Beryllium release. So in this scenario, in order to work around those Java exceptions, we have to manually spawn the radvd daemon in the Neutron router namespace in order to advertise the IPv6 prefix to the service VM. In the third scenario, using OpenDaylight Beryllium, or Lithium SR4 and later, we don't have to manually spawn the radvd daemon in the Neutron namespace. That is the difference; that is why we have a specific scenario just to work around the Java-exception bug. If you want to try different SDN controllers, like OpenContrail or ONOS, you may have different findings; feel free to try it, share the experience with us, and help make it better. So once we understand those different scenarios and workarounds, and if we want to manually install OpenStack and the SDN controllers, these are the steps we are going to use.
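The Lithium SR3-and-earlier workaround of manually running radvd inside the Neutron router namespace might look like this. The namespace UUID, config path, and interface are placeholders; the real values come from your deployment.

```shell
# Find the Neutron router namespace (the UUID differs per deployment).
ip netns list | grep qrouter

# Run radvd manually inside it, advertising the subnet prefix on the
# router's internal qr- interface (config as sketched for the service
# VM, with the interface name adjusted). <uuid> is a placeholder.
sudo ip netns exec qrouter-<uuid> \
    radvd -C /tmp/radvd.conf -p /tmp/radvd.pid
```

On Lithium SR4, Beryllium, and later, this manual step is unnecessary because the fixed code path handles the IPv6 subnet without the exceptions.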
Also, to minimize the dependency on physical infrastructure, for example a physical lab and its resources: in the original design, the service VM directly connects to the external physical IPv6 router at the edge of the data center or the physical infrastructure. But we also experimented on a single laptop, where we do not depend on any physical IPv6 router. I'll describe on the next slide how we make that work, but basically we were able to run the experiment on a single laptop; of course, you need sufficient RAM and disk storage to spawn at least three VMs on the laptop and ensure adequate performance. We set up the OpenDaylight controller on one of the VMs on the laptop, and the OpenStack controller node on a second VM (more detailed instructions are available in the installation guide on our wiki). Then we set up the OpenStack compute node on a third VM. Then we follow the steps: create the different networks, create the namespaces, create the subnets, create the VMs, boot them with Nova from the image, pass the metadata into the VMs, and complete the setup. All the detailed instructions are available on the wiki. And as I just mentioned, to minimize dependency on physical infrastructure, so that you don't have to install in a physical lab, we added a Neutron namespace, which we call the IPv6 router, on the left-hand side. On the lower left you can see this IPv6 router, which is basically a Neutron namespace that simulates the physical IPv6 router that would typically be installed in the data center.
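The VM-creation step above could be sketched with the Nova CLI like so; the image name, flavor, and network IDs are placeholders for illustration, not values from the talk.

```shell
# Boot the service VM with two NICs (external + shared tenant network)
# and inject the metadata script as user-data. IDs are placeholders.
nova boot ipv6-service-vm \
    --image xenial-cloud --flavor m1.small \
    --nic net-id=<external-net-id> \
    --nic net-id=<tenant-net-id> \
    --user-data ./metadata.sh
```

The tenant VMs are booted the same way but with only the tenant-network NIC and no user-data, since they just consume the RAs.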
In this case, the subnets connect to the Neutron namespace that simulates the router, to make sure that our testing passes. And of course, on the right-hand side, that is the typical setup of the VMs. If we are using the OpenDaylight Beryllium release, the green tenant network can be shared by the different tenants, as Sridhar mentioned before. So that is the network topology after the overlay setup. The next slide is exactly the same thing, captured from the Horizon user interface.

Through this experiment, we analyzed what is already supported and what is still not supported in OpenStack and OpenDaylight. We are very glad that most of the useful features are already supported. Where features are missing, it is not necessarily a problem with OpenStack or OpenDaylight; it may just be a lack of driving use cases. From the OpenDaylight perspective, I think the biggest missing function is Layer 3 routing: OpenDaylight does not support IPv6 L3 routing, and it also does not support IPv6 address management. That is a big gap. Sridhar is driving the support in the NetVirt provider and OVSDB projects in ODL to close this gap, and to make sure that IPv6 L3 routing and IPv6 address management are on the roadmap for future OpenDaylight releases. The security-groups feature is also not supported for IPv6 in OpenDaylight; once L3 routing is supported, we will also drive the implementation to make security groups fully support IPv6.
We listed the shared tenant network here because Lithium does not support shared tenant networks, while Beryllium already does; we listed it only because we did the exercise for both OpenDaylight releases. From the OpenStack perspective, I would say pretty much everything is supported, especially from an overlay perspective; IPv6 is well supported there. From the underlay perspective, one feature not supported is statically assigned IPv6 addresses, at least not in the same fashion as for IPv4, because IPv6 addresses are usually assigned dynamically through SLAAC or DHCPv6. Floating IP is not supported for IPv6, and I think for a good reason: the IPv6 address space is much, much bigger than IPv4, so you don't need a floating IP translating an internal address to an external address. There is basically no urgent use case, because the need is naturally covered by the IPv6 address space. Additional IPv6 extension headers, IPsec for IPv6, and IPv6 anycast and multicast are not supported, but that is not specific to IPv6 in Neutron: IGMP is not supported, and multicast is not supported even for IPv4. If there is an urgent need and a use-case requirement from the industry, then once Neutron supports multicast in general, both IPv4 and IPv6 will be supported. So there is nothing specifically wrong there. As for VM access to the metadata server: the endpoint of the metadata server in the infrastructure currently supports only IPv4.
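The address-space argument against IPv6 floating IPs can be made concrete with Python's stdlib `ipaddress` module:

```python
import ipaddress

# A typical IPv4 tenant subnet vs. a single IPv6 tenant /64
# (the 2001:db8 prefix is the documentation range, used as an example).
v4 = ipaddress.ip_network("10.0.0.0/24")
v6 = ipaddress.ip_network("2001:db8::/64")

print(v4.num_addresses)   # 256
print(v6.num_addresses)   # 18446744073709551616, i.e. 2**64

# One tenant /64 holds 2**32 times the entire IPv4 address space,
# so every VM can carry a globally routable address directly;
# there is no scarcity forcing NAT, hence no floating-IP indirection.
assert v6.num_addresses == 2**64
```

This is why the talk treats the missing floating-IPv6 feature as a non-gap rather than something to fix.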
That is not something that can be addressed in the short term, because reaching the metadata server is rooted in the infrastructure itself. Distributed virtual routing (DVR) with NAT is not supported for IPv6, and that is because floating IPv6 is not supported. The way Neutron handles a floating IPv4 address is that all the traffic is forwarded to the network node, which performs the source NAT and sends the traffic through that centralized network node to the outside. So in that sense, NAT is not supported for IPv6; but since we don't need floating IPv6, support could be added fairly easily by distinguishing IPv6 from IPv4: IPv6 traffic, floating IP or not, can go directly from the compute node to the external network. And of course, the GRE and VXLAN tunnel endpoints still require IPv4. As of the latest status, the Linux kernel has supported VXLAN endpoints over IPv6 since kernel version 4.4, and OVS also plans to support IPv6 endpoints for VXLAN starting from OVS 2.6; once those are supported, there could be a patch in Neutron to support IPv6 tunnel endpoints. We don't know the details of the GRE endpoint yet; it would likely be on the roadmap to support it first in the kernel, then in the OVS user space, and then in Neutron. Now let me introduce my colleague, Prakash, who will talk about the release planning for the Colorado release.
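For the tunnel-endpoint point: on a kernel with IPv6 VXLAN support (4.4+ per the talk), an IPv6-addressed VXLAN endpoint can be created with iproute2 roughly as follows. The VNI, the ULA addresses, and the device name are illustrative, not from the talk.

```shell
# Create a VXLAN interface whose outer tunnel headers use IPv6
# (requires a kernel with IPv6 VXLAN endpoint support).
ip link add vxlan0 type vxlan id 42 dstport 4789 \
    local fd00::1 remote fd00::2 dev eth0
ip link set vxlan0 up
```

This is the kernel-level capability; what the talk notes is still pending is the OVS and Neutron plumbing to configure such endpoints automatically.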
Thank you. Bin and Sridhar have formed an excellent team from the beginning. What we are looking at for the Colorado release is: can we do a full IPv6 install automatically? Because today we have a manual process, steps one, two, three, as they described. There are two portions to it: one is the underlay and the other is the overlay. Can we do a fully IPv6 underlay installation? We believe we can, because the support is there, but there are some gotchas, like having to use the ODL L2 scenario, as we saw. That's one portion. The other is the overlay. Generally, when we talk about these things, we have to look at the traffic: is it north-south traffic, or is it east-west traffic? Those are the terms we use in data centers. The other portion is tunneling, because that is important for how the data-plane traffic is carried, which needs acceleration and support from the IPv6 stack. So the first item is automation of what we have done: simple and straightforward, with some changes that we expect in ODL or other SDN controllers. The second item is how we test when we expand to the WAN, that is, multiple sites: how do we route IPv6 between them, and what does it ride on in the underlay? For the underlay with multiple instances of OpenStack, we can have two of them, and we have asked for and been allocated some VLANs so that we can do it in the underlay. That's one. Now let me talk about north-south. You saw the vRouter, the service router, which is based purely on IPv6.
It forwarded traffic either externally, which is north-south, or east-west, which is within one tenant's VMs, to another tenant, or through the shared tenant network. In the overlay for multi-site, we want to do external, north-south traffic. One way of doing that is through service routers, or an external router, with all the prefix delegation already set up. The other case is the underlay: if we do a vRouter at multiple sites and want to connect the east-west traffic, that can be done either through Ethernet VPN (EVPN) or through L3 VPN. Doing it via EVPN is one use case for us, and doing it through L3 VPN is another. That is one of the ways of carrying east-west traffic over the underlay; it is the equivalent of what we do with DVR, but at the L2 layer. Another case is overlay end-to-end SFC for VMs through L3 VPN: when we have VM one and VM two on different compute nodes, and we want them chained for applications, generally carrier-grade applications, we want end-to-end service function chaining, and this can be done through L3 VPN as the overlay, based entirely on L3. So we have many combinations. At the same time, you saw at the recent summit that Verizon has implemented IPv6, so you are seeing an opening coming for IPv6 in a big way. Eventually we will also have to look at failure handling: if there is a failure, how do you handle it? So we will need routing HA through VRRP or whatever routing protocols are available; we will have to evaluate them, and we are evaluating them as we speak. And there is always a black swan; you don't know what comes. Suddenly there is a new project, such as routed networks, which is being brought into Neutron in a major way, so we are going to see a major shake-up there.
That may impact some of our plans, but as far as we are concerned, right now we are based on Mitaka, so we should be able to handle that; going forward with Neutron, we will have to see what changes occur. So you see that at the end of the day, it is important that we allow innovation, and that is the reason we chose the service VM. We hope to see this sometime in the release-D time frame, probably getting to some kind of lab readiness, then field readiness, and eventually making sure IPv6 gets its due from the industry. In the industry we see that mobility, whenever it comes, you get it for free with IPv6, because you just add the mobility headers. That is very important in the long term, going towards 5G. So we expect that we will evolve, and that is why we have kept it very open, and I'm very happy that our team has collaboration from all the players, all the vendors, and all the service providers.

I think at this time I will open the floor for questions and... oh, acknowledgements, sorry, I forgot; that's the most important thing as a community. We have contributors from ClearPath Networks, Mark and Medina; John from Nokia; Iben, who was at Spirent and is now at Rokitt; Meenakshi from Cisco; Kubi from Huawei, who has helped us with testing in their labs and in ours, I would say; and Christian. About community labs: we have the Linux Foundation lab from which we run, and the community labs are vendor-provided, like Intel's, Huawei's, and Ericsson's. I should also acknowledge Christian from Cloudbase Solutions for the setup and access support, and Hannes Frederic from Red Hat. And I cannot overstate the value of Sridhar, who has been leading the effort both on the OpenStack side and on the ODL side.
We couldn't get somebody on the ONOS side or OpenContrail, but we are trying. We know that ONOS does have IPv6 forwarding in its code base, but we could not get to the routing and so on, so we are addressing that; we are looking at the gap analysis there as well. And we are fortunate to have a very good manager who is very systematic; you can see from the presentation itself the detail-oriented work of Bin. We look forward to questions and help from the community. We invite everybody to help, because this is going to be a major requirement for 5G as we go. For questions, please go to the microphone in the aisle.

Q: You have created many good documents, so I'm wondering if you can upstream those documents, say to the Neutron, networking-odl, or OpenDaylight projects?

A: Yeah, absolutely. That's a very good suggestion, and I would be very happy to upstream those documents to Neutron. Are you the primary contact for us to do that, or which channel do you suggest?

Q: For networking-odl, yes, I am. For OpenDaylight Neutron Northbound, I am.

A: Okay, let's exchange contact information and we can do that. Thank you; that's a very good suggestion.

Q: One of the limitations I saw was regarding static IPv6 assignment to VMs when DHCP is being used. If you're not using the DHCP agents, are you able to statically assign at that point?

A: Right now we are using SLAAC, the stateless automatic address configuration. So statically assigning an IPv6 address is not supported yet.

Q: Okay, you showed a page on gap analysis. What's the communication path to upstream? As a community member, I'd like to hear that feedback, but so far this is the first time I am aware of it. I'd like to have a constant communication channel.
A: I think that's exactly why we are here presenting our project: we hope to establish a regular communication path going forward, so that we can collaborate, for example between the Neutron project and our project. Establishing a formal, regular communication channel is exactly one of the purposes here. Thank you. Maybe we should also talk privately.

Q: You showed a PoC with OpenDaylight; is there any plan to do the same with ONOS?

A: Yes. The plan is that, for now, we have only integrated OpenDaylight as the back-end SDN controller, and we want to support all the popular SDN controllers on the market, including OpenContrail and ONOS. But we don't have the resources, meaning experts on the ONOS side: someone who knows ONOS well, how to integrate and combine it with OpenStack in the same way as ODL.

Q: Regarding ONOS, I'm very willing to help here.

A: So we are looking for community support, especially expertise from different dimensions and aspects, to help us evolve the project to cover as broadly as we can. Thank you. Any more questions? Thank you very much.