Hello, good morning, and welcome to the session. Thank you for coming. Today I'm glad to present the IPv6 project that we've done in OPNFV, together with my colleagues Sridhar Gaddam from Red Hat and Prakash from Huawei. We will briefly review what the project is, give an introduction to the key project facts, its goals and deliverables, and what we released in the OPNFV Brahmaputra release in February. Then Sridhar will talk about what it really means in terms of how to use a service VM as an IPv6 router, what has been achieved, and why we did it. Prakash will introduce our planning for the Colorado release, which is planned for August and is the next big release for OPNFV. And of course we need to acknowledge all of the contributors to the project: it is a community effort, and we have lots of good contributions from different companies across the whole industry.

First, the key project facts. OPNFV, as everybody knows, is the integrated carrier-grade platform to accelerate the introduction of NFV products and services. OPNFV was founded in October 2014, and the IPv6 project was approved as a formal project on November 25th, 2014, about one month after OPNFV was founded. All projects in OPNFV are in the incubation stage, going through different life cycles, and we continue to develop and incubate the project so it becomes more and more mature in terms of features, functions, performance, et cetera. We have our own Gerrit repository, "ipv6", and our own project wiki; you can go to wiki.opnfv.org/display/ipv6, which is our wiki page. I'm the project lead and the primary contact for the project, and I'm very happy to have support from different companies across the industry, including AT&T, Brocade, Cisco, ClearPath Networks, Cloudbase Solutions, Huawei, Nokia, Red Hat, and Spirent. We have bi-weekly meetings, every other Friday at 8 AM Pacific time; anybody who is interested is welcome to join the conference calls, and all the logistics can be found on our website.

So what are our goal and deliverables? IPv6 is very important from the network infrastructure perspective: it is the future of networking, no doubt. The project goal is to produce a meta distribution of an IPv6-enabled OPNFV platform, because OPNFV provides the integrated carrier-grade platform for NFV products and services, and from a future-proofing perspective this needs to be enabled by IPv6. We also need to provide a methodology for how to evolve the IPv6 capabilities in OPNFV. Those are our goals. In terms of specific deliverables, we are targeting an integrated package based on the same upstream components that OPNFV integrates, including OpenStack, the different SDN controllers, the different types of hypervisors, storage, and virtual networks, running on the different OPNFV community labs. And because of the nature of IPv6, we need auto-configuration scripts to automate the configuration and provisioning of IPv6 features and enhanced functions on top of this platform.
Those features can be configured automatically. We also need to provide a user guide and an installation guide for features which may not necessarily be automated, but which need to be manually configured step by step on top of the platform after OPNFV has been deployed by the installers. We need to develop test cases for those specific IPv6 features, in addition to the platform's functional testing and yardstick testing. And we are doing gap analysis to see which features are currently not available from the platform perspective, and what the roadmap and methodology are to bridge those gaps in future releases.

At the end of February, OPNFV had its second release, the Brahmaputra release, named after a river in India. What we delivered in Brahmaputra includes an integrated meta distribution of the package, enabled by the installers. The installers install the basic OPNFV package, and we provide an installation guide for manually installing the service VM as an IPv6 router on top of this platform. To do that, we need to support two different scenarios. OPNFV supports more than 10 different scenarios for the Brahmaputra release, and for the IPv6-specific features we enabled two of them: one is the "no SDN, no feature" scenario, and the other is the "ODL L2, no feature" scenario. The reason is that "no SDN" means pure OpenStack, and Neutron supports IPv6 routing through its L3 agent; for ODL, L3 routing is not supported yet, so we can only use ODL for L2 switching while still using the Neutron L3 agent for the IPv6 routing functionality. Those are the two supported scenarios. Of course we have four different installers, and depending on whether you are using Apex, Fuel, Compass, or Joid, you pass different parameters, as described in our installation guide. The guide gives step-by-step instructions, including how to use the different installers and which parameters you have to give them to achieve those scenarios: deploy the basic scenario first, then add the IPv6 vRouter and do the yardstick testing.

We developed a test case with the Yardstick project. Yardstick is another OPNFV project, basically targeted at testing the OPNFV infrastructure on top of Rally and Tempest, which test the OpenStack APIs and performance. We have also done a gap analysis, or I wouldn't say a full gap analysis, but some analysis of the features that are supported or not supported in the OpenStack Liberty release and in the OpenDaylight Beryllium release; we'll talk about that later.

So now I will introduce Sridhar, and he will give more details about our contributions and the service VM we use as the router. Sridhar?

Okay. We have contributed to some of the IPv6 features in the upstream components, namely OpenStack and OpenDaylight. Some of our contributions to OpenStack: we added support for IPv6 in Neutron L3 HA routers, and we added protection against IPv6 neighbor advertisement spoofing, where we program certain flows on the OVS switches to make sure that a VM does not send out neighbor advertisement packets for IP addresses and MAC addresses that it does not own.
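The exact rules live in the Neutron OVS agent's own flow tables, so this is only a rough illustration of the idea; the bridge name, port number, and address below are made up. Rules of this general shape allow a port to send neighbor advertisements only for the address it owns:

```python
import subprocess

BRIDGE = "br-int"
VM_OFPORT = 5              # OVS port the VM's tap device is plugged into
VM_IPV6 = "2001:db8:1::5"  # the address the VM legitimately owns

def add_flow(flow):
    """Install one OpenFlow rule via ovs-ofctl."""
    subprocess.run(["ovs-ofctl", "add-flow", BRIDGE, flow], check=True)

# Permit neighbor advertisements (ICMPv6 type 136) from this port only
# when the advertised target is the VM's own address...
add_flow("priority=100,in_port={},icmp6,icmp_type=136,"
         "nd_target={},actions=NORMAL".format(VM_OFPORT, VM_IPV6))
# ...and drop any other neighbor advertisement the VM tries to send.
add_flow("priority=90,in_port={},icmp6,icmp_type=136,"
         "actions=drop".format(VM_OFPORT))
```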
Recently we added support in the OpenStack Neutron code base to advertise DNS information as well as MTU information as part of router advertisements. Working with the community, we also added support in the DevStack installer to set up an OpenStack deployment on an IPv6 infrastructure. Coming to OpenDaylight: when we started using OpenDaylight as the SDN controller along with OpenStack, we noticed that when we created an IPv6 subnet we hit some Java exceptions, because of which we could not continue with our use case. We addressed those exceptions and were able to proceed further. Currently we are actively working with the community to implement IPv6 routing in the controller.

In the next few slides, we'll be talking about what the goal was when we started this project, what the design looks like, what the underlying network topology is, the detailed setup steps, and the gaps we have identified in both OpenStack and ODL. One of the important goals we wanted to achieve as the IPv6 subteam in OPNFV, as part of the Brahmaputra release, was to provide a platform that can run an IPv6 service VM capable of doing two things: one, periodically sending router advertisements to VMs that belong to different tenants; and two, providing IPv6 external connectivity to those VMs. Basically, the idea is to have a platform that allows you to try out various other use cases and enables some open innovation. Apart from this, another important thing we had in mind was to identify the gaps and try to contribute as a team wherever possible to those upstream components.

Now, what does the design look like? Let me take you through this diagram a little bit. On the left-hand side you can see a network node, and there are two compute nodes, one in the middle and one on the right. The admin of this OpenStack deployment creates two networks: the one shown in brown is the external, or provider, network; the one in blue is the shared tenant network. Now say tenant A is interested in providing an IPv6 service VM. He spawns a service VM with two interfaces, one connected to the external network and the other connected to the shared tenant network. Other tenants in this setup who are interested in the service VM use case can spawn their VMs on the shared tenant network and leverage this IPv6 infrastructure. So here, compute node one hosts the IPv6 service VM, which is the one actually acting as the IPv6 router for the VMs hosted on compute node two. We wanted to achieve this design both with pure OpenStack and with OpenStack plus ODL, and by the way, it can easily be extended to any other SDN controller that is integrated with OPNFV today. If you pick up the OPNFV installers like Apex, Joid, and so on, you can choose certain scenarios which will help you replicate this setup and try out this service VM use case: the two scenarios which Bin mentioned in the previous slides. The underlay network topology consists of three different nodes: one is a combined OpenStack controller and compute node, another is a pure OpenStack compute node, and the third is an OpenDaylight controller node.
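As a rough sketch of how this design maps onto client commands: the names, CIDRs, image, flavor, and network IDs below are placeholders, and the Liberty-era neutron and nova CLIs are assumed rather than taken verbatim from the project's guide.

```python
import subprocess

def run(cmd):
    """Echo and execute one OpenStack client command."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. The admin creates the external (provider) network and the shared
#    tenant network. DHCP is disabled on both because the router
#    advertisements come from the physical router and the service VM,
#    not from Neutron.
run(["neutron", "net-create", "ipv6-ext-net", "--router:external"])
run(["neutron", "subnet-create", "ipv6-ext-net", "2001:db8:0:1::/64",
     "--name", "ipv6-ext-subnet", "--ip-version", "6", "--disable-dhcp"])
run(["neutron", "net-create", "ipv6-int-net", "--shared"])
run(["neutron", "subnet-create", "ipv6-int-net", "2001:db8:0:2::/64",
     "--name", "ipv6-int-subnet", "--ip-version", "6", "--disable-dhcp"])

# 2. Tenant A boots the service VM with two NICs, one per network,
#    passing the metadata script (discussed below) as user data.
EXT_NET_ID = "<uuid of ipv6-ext-net>"  # look up with `neutron net-list`
INT_NET_ID = "<uuid of ipv6-int-net>"
run(["nova", "boot", "--image", "Fedora22", "--flavor", "m1.small",
     "--user-data", "./metadata.sh",
     "--nic", "net-id=" + EXT_NET_ID, "--nic", "net-id=" + INT_NET_ID,
     "vRouter"])
```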
We tried this particular service VM use case both with OpenStack Kilo and with OpenStack Liberty, and when using an SDN controller we tried OpenDaylight Lithium as well as Beryllium. We identified some gaps, which we'll talk about in the next few slides. The OPNFV Brahmaputra documentation provides detailed instructions on how to achieve this use case, the various REST APIs that are involved, the gaps, and some of the best practices we have identified. So I suggest that people who are interested in this particular use case please take a look, and reach out to us if you have any questions; we'll be more than happy to help you.

This is a simplified view of what the service VM is all about. As I mentioned on the previous slide, you have a service VM that is connected to an external, physical IPv6 router in your infrastructure. It acquires an IPv6 address using SLAAC, so its default route automatically points to the upstream external router. On the internal, tenant-facing side, you have the shared network and a couple of VMs; in this particular diagram you can see VMs from tenant A and VMs from tenant B. The service VM acts like a router for these VMs: it periodically sends out router advertisements, and it forwards the traffic coming from these VMs to the external IPv6 router.

In order to achieve this particular use case, we evaluated a couple of options. One was to use a metadata, or cloud-init, script to push the necessary configuration into the service VM. The other was to use a snapshot image which has all the required packages installed and started on boot-up. We preferred option one, the metadata approach, because it's a script, as you all know, so you can customize it according to your needs and try out other use cases with the OPNFV installers. That is one of the main reasons. The other is that you can pick any standard cloud image, for example Fedora, Ubuntu, or CentOS, whatever image you are interested in, and try this service VM use case with that particular QCOW2 image.

At a high level, here is what this metadata script does. Most of you know that with a cloud image, usually only the default interface is brought up, not the additional interfaces. So the cloud-init, or metadata, script enables the additional interfaces, and it sets the proc entries required for IPv6 forwarding inside the VM. It downloads the necessary packages, radvd in our case, and pushes the required configuration to enable SLAAC or DHCPv6 kinds of use cases inside the service VM. While this is a simple use case, our main idea was to provide a platform, as I mentioned earlier: by making some minimal changes to the metadata script, you can extend this to other things. For example, if you are interested in a prefix delegation kind of use case, you can have a PD client running inside the service VM and a PD server external to the service VM, or you can run a Quagga client. You can try out different other use cases with some minimal changes to this particular script.
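As a minimal sketch of such a user-data script (not the exact script from the project repository; the interface names, the advertised prefix, and the Fedora package manager are assumptions), cloud-init will run anything with a shebang, so the same steps can be written in Python:

```python
#!/usr/bin/env python
# Sketch of a cloud-init user-data script for the IPv6 service VM.
import subprocess

def sh(cmd):
    """Run a shell command, raising on failure."""
    subprocess.check_call(cmd, shell=True)

# 1. Bring up the second NIC; cloud images usually configure only eth0.
sh("ip link set dev eth1 up")

# 2. Enable IPv6 forwarding, and keep accepting RAs on the external side
#    (accept_ra=2 allows RA processing even with forwarding enabled).
sh("sysctl -w net.ipv6.conf.all.forwarding=1")
sh("sysctl -w net.ipv6.conf.eth0.accept_ra=2")

# 3. Install radvd (Fedora image assumed; use apt-get on Ubuntu).
sh("dnf install -y radvd || yum install -y radvd")

# 4. Advertise a (placeholder) prefix on the tenant-facing interface so
#    the tenant VMs auto-configure their addresses via SLAAC.
radvd_conf = """
interface eth1 {
    AdvSendAdvert on;
    prefix 2001:db8:0:2::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
"""
with open("/etc/radvd.conf", "w") as f:
    f.write(radvd_conf)
sh("systemctl enable radvd && systemctl start radvd")
```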
With that, I'll hand it over to Bin to talk about the setup steps.

Thank you, Sridhar. So we summarized and documented those setup steps, and I will give you a very brief introduction. Earlier we talked about two different scenarios, meaning installer scenarios for deploying the OPNFV platform. Here the concept is different from those OPNFV installer scenarios: here a scenario is about how we deploy IPv6 on a specific combination of OpenStack plus OpenDaylight, beyond the standard OPNFV installation package. With the installers you basically have no choice; it's the standard OpenStack Liberty plus the OpenDaylight Beryllium release. But if you want to try it manually, you don't have to install automatically; you can install OpenStack and OpenDaylight on your own, and that way you can achieve the flexibility of different combinations of OpenStack and SDN controllers. So that's what a scenario means in this context.

Basically, we have prepared three different scenarios to share with you. The first is a pure OpenStack environment: a simple scenario with just OpenStack and no SDN controller, so Neutron runs with its usual backend. The second scenario is OpenStack plus OpenDaylight, but with the OpenDaylight Lithium release SR3 or earlier. Why call this out? Because when we experimented with the Lithium release, especially SR3 and earlier (we tried it before Beryllium was officially released), we found there was a bug: when we created the IPv6 subnet for the Neutron router, there were Java exceptions. Sridhar provided the patch to fix this bug, but it is not available until Lithium SR4 and later, and the Beryllium release. So in this case, to work around those Java exceptions, we have to manually spawn the radvd daemon in the Neutron namespace to advertise the prefix for the IPv6 address to the service VM. In the third scenario, if we are using OpenDaylight Beryllium, or Lithium SR4 and later, we don't have to manually spawn the radvd daemon in the Neutron namespace. That's the difference; we have that specific second scenario just to work around the Java exception bug. And if you want to try different SDN controllers, like OpenContrail or ONOS, you may have different findings; feel free to try it, share your experience with us, and help make it better.

So once we know those different scenarios and the different workarounds: if we want to manually install OpenStack and manually install the SDN controllers, these are the steps we are going to use.
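For that second scenario, the manual radvd workaround mentioned above might look roughly like this; the router UUID, the interface name inside the namespace, and the prefix are placeholders:

```python
import subprocess

ROUTER_ID = "<neutron router uuid>"  # find it with `neutron router-list`
NS = "qrouter-" + ROUTER_ID          # Neutron names the namespace this way

# A minimal radvd configuration advertising the tenant prefix on the
# router's internal port (the qr-xxxx name must match what
# `ip netns exec <ns> ip link` shows).
with open("/tmp/radvd.conf", "w") as f:
    f.write("interface qr-xxxx {\n"
            "    AdvSendAdvert on;\n"
            "    prefix 2001:db8:0:2::/64 { AdvOnLink on; AdvAutonomous on; };\n"
            "};\n")

# radvd expects IPv6 forwarding to be enabled where it runs.
subprocess.check_call(["ip", "netns", "exec", NS,
                       "sysctl", "-w", "net.ipv6.conf.all.forwarding=1"])
# Launch radvd inside the router namespace so its RAs reach the network.
subprocess.check_call(["ip", "netns", "exec", NS,
                       "radvd", "-C", "/tmp/radvd.conf",
                       "-p", "/tmp/radvd.pid"])
```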
We also wanted to minimize the dependency on physical infrastructure, for example a physical lab or physical resources. In the original design, as you saw at first, the service VM directly connects to an external physical IPv6 router at the edge of the data center or elsewhere in the physical infrastructure. But we also experimented on a single laptop, where we are not dependent on any physical IPv6 router. I will describe on the next slide how we make that work; basically we are able to experiment on a single laptop, and of course we have to have sufficient RAM and disk storage to spawn at least three VMs on the laptop and ensure adequate performance. We set up the OpenDaylight controller on one VM on the laptop, and the OpenStack controller node on a second VM; detailed instructions are available in the installation guide on our wiki. Then we set up the OpenStack compute node, and we follow the steps: create the different networks, create the Neutron namespaces, create the subnets, create the VMs, upload the image, spawn those VMs, pass the metadata into the VM, and complete the setup. All the detailed instructions are available on the wiki.

As I mentioned, to minimize the dependency on physical infrastructure so that you don't have to install it in a physical lab, what we did here is add a Neutron namespace, which we call the "IPv6 router," on the left-hand side. On the lower left you can see this IPv6 router, which is basically a Neutron namespace that simulates the physical IPv6 router that would typically be installed in the data center. In this case, the subnets are connected to the Neutron namespace that simulates the router, in order to make sure our testing passes. On the right-hand side is the typical setup of the VMs, and if we are using the OpenDaylight Beryllium release, the green tenant network can be shared by different tenants, as Sridhar mentioned before. So that's the network topology after the overlay setup. And the next slide is exactly a copy from the Horizon user interface: the same thing as shown on the previous slide, but captured from Horizon.
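As a quick sanity check of this overlay (the namespace name and target address are placeholders), reachability can be probed from the simulated external router's namespace once the VMs have configured their SLAAC addresses:

```python
import subprocess

SIM_ROUTER_NS = "qrouter-<uuid>"          # namespace simulating the router
SERVICE_VM_V6 = "2001:db8:0:1::<suffix>"  # service VM's external address

def ns_exec(ns, cmd):
    """Run a command inside a network namespace and return its output."""
    return subprocess.check_output(["ip", "netns", "exec", ns] + cmd).decode()

# Three pings from the simulated external router to the service VM's
# external interface give a simple end-to-end check of the overlay.
print(ns_exec(SIM_ROUTER_NS, ["ping6", "-c", "3", SERVICE_VM_V6]))
```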
Through this experiment, we analyzed what is already supported and what is still not supported in OpenStack and OpenDaylight, and we are very glad that most of the useful features are already supported. Some features are missing, and that is not necessarily a problem with OpenStack or OpenDaylight; it may be due to a lack of concrete use cases. From the OpenDaylight perspective, I think the biggest function not supported is Layer 3 routing: OpenDaylight does not support IPv6 Layer 3 routing, and it also does not support IPv6 address management. That's a big gap. Sridhar is driving the support in the NetVirt and OVSDB projects in ODL to bridge this gap and make sure that support for IPv6 routing and IP address management is on the roadmap for a future OpenDaylight release. The security groups feature is also not supported for IPv6 in OpenDaylight; once L3 routing is supported, Sridhar will also drive the implementation to fully support the security group features for IPv6. We also listed the shared tenant network here, because Lithium doesn't support shared tenant networks while Beryllium already does; we listed it simply because we did the exercise with both releases of OpenDaylight.

From the OpenStack perspective, I would say pretty much everything is supported, especially from an overlay perspective; IPv6 is well supported there. From the underlay perspective, one feature not supported is statically assigned IPv6 addresses: it is not supported in the same fashion as for IPv4, because IPv6 addresses are usually assigned dynamically through SLAAC or DHCPv6. Floating IP is not supported for IPv6, and I think there's a good reason: for IPv6 the address space is much, much bigger than for IPv4, so there is no need for a floating IP to translate an internal address to an external one. Basically, there is no urgent use case, because the need is naturally covered by the IPv6 address space. Additional IPv6 extension headers, IPsec for IPv6, and anycast/multicast are not supported, but not because Neutron refuses to support them: IGMP is not supported at all, so even IPv4 multicast doesn't work. If there is an urgent need and a use case requirement from the industry, then once Neutron supports multicast in general, both IPv4 and IPv6 multicast will be supported as well. So nothing is specifically wrong there.

VM access to the metadata service: the endpoint of the metadata server in the infrastructure today only supports IPv4. That's not something that can be addressed in a short time, because to use the metadata server you need to reach it over the infrastructure part. Distributed Virtual Routing (DVR) is not supported for IPv6, and that's because floating IPv6 is not supported: the way Neutron handles IPv4 floating IPs, all the traffic is forwarded to the network node, which performs the NAT and sends the traffic out through the centralized network node to the external world. That NAT is not needed for IPv6, so DVR support could be added fairly easily; it's just a matter of distinguishing IPv6 from IPv4 so that IPv6 traffic can go directly from the compute node to the external network.

And of course, the GRE and VXLAN tunnel endpoints still require IPv4. As of the latest status, the Linux kernel already supports VXLAN endpoints over IPv6 since kernel version 4.4, and OVS also plans to support IPv6 endpoints for VXLAN starting from OVS 2.6; once those are in place, there could be a patch in Neutron to support IPv6 endpoints. For GRE we don't know the details yet; it could be on the roadmap to support it in the kernel first, then in the OVS user space, and then in Neutron.
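For illustration, once the kernel (4.4+) and OVS (2.6+, as planned) support is in place, an IPv6 VXLAN endpoint can be configured by hand roughly like this; the bridge, port, and addresses are placeholders, and this is outside what Neutron automates today:

```python
import subprocess

# Create a VXLAN tunnel port whose local and remote endpoints are IPv6
# addresses. Requires a kernel and Open vSwitch new enough to support
# IPv6 tunnel endpoints.
subprocess.check_call(
    ["ovs-vsctl", "add-port", "br-tun", "vxlan-v6",
     "--", "set", "interface", "vxlan-v6", "type=vxlan",
     "options:local_ip=2001:db8:100::1",
     "options:remote_ip=2001:db8:100::2",
     "options:key=flow"])
```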
So now let me introduce my colleague Prakash, and he will talk about the release planning for the Colorado release.

Thank you. Bin and Sridhar have formed an excellent team from the beginning, and what we are looking at this time, for the Colorado release, is: can we do a full IPv6 install automatically? Because right now we have a manual process, steps one, two, three, as they described. There are two portions to it: one is the underlay, and the other is the overlay. So, can we do a fully IPv6 underlay installation? We can, because the support is there, but there are some gotchas, like having to use the ODL L2 scenario, as we saw. That's one portion; the other is the overlay. Generally, when we talk about these things, we have to look at the traffic: is it north-south traffic or east-west traffic? Those are the terms we use in the data center. The other portion is tunneling, because that's important from the point of view of how we carry the data-plane traffic, which needs acceleration and support from the IPv6 stack. So the first item is automation of what we have done: simple and straightforward, with some changes that we expect in ODL or other SDN controllers.

The second portion is: how do we test if we have to expand to the WAN? That means you have multiple sites, and across those multiple sites, how do we route IPv6, and what is it riding on in the underlay? If you look at the underlay with multiple instances of OpenStack, we can have two of them, and we did ask; we know that in Pharos we have been allocated, and have looked at, some VLAN allocation so that we can do it in the underlay. That's one. The second, and now I'm talking about north-south: you saw the vRouter, the service router, which is based purely on IPv6. It forwards traffic whether it is external, which is north-south, or east-west, which is within one tenant's VMs, to another tenant, or through the shared tenant network. So in the overlay, for multisite, we want to handle external, north-south traffic, and one way of doing that is through service routers or an external router, with all the prefix delegation already set up. The other is the underlay: if we run the vRouter at multiple sites but want to connect the east-west traffic, that east-west traffic can be carried either through Ethernet VPN, EVPN you can call it, or through L3 VPN. Doing it via EVPN is one use case for us, and doing it through L3 VPN is another. So that's one way of carrying east-west traffic over the underlay, and it is the equivalent of what we do with DVR, but at the L2 layer.

Another case is overlay end-to-end service function chaining for VMs through L3 VPN. When we have VM1 and VM2 on different compute nodes and we want them chained for applications, generally carrier-grade applications, we want end-to-end SFC, and this can be done through L3 VPN as an overlay, based entirely on L3. So we have many combinations. At the same time, you saw at the recent summit here that Verizon has implemented IPv6, so you are seeing an opening coming for IPv6 in a big way. Eventually we will also have to look at failure handling: if there is a failure, how do you handle it? Hence vRouter HA through VRRP or whatever routing protocols are available.
We would love to evaluate those, and that is what we are evaluating as we speak. And there is always a black swan; you don't know what's coming. Suddenly there is a new effort, routed networks, which is being brought into Neutron in a major way, so we are going to see a major shakeup there, and that may impact some of our plans. As far as we are concerned, right now we are based on Mitaka, so we should be able to handle that, but as Neutron changes we'll have to see what occurs. So you see that at the end of the day, it is important that we allow innovation, and that's the reason we chose the service VM approach. We hope to see these things sometime in the Release D time frame, probably moving from some kind of lab readiness toward field readiness. And IPv6 is important for the industry: mobility, whenever it comes, you get for free in IPv6, because you just add the additional mobility headers. That's very important in the long term, going toward 5G and all. So we expect that we will evolve, and that's why we have kept it very open, and I'm very happy that our team has collaboration from all the players, all the vendors, all the service providers.

I think at this time I will open the floor for questions and answers and give it to... oh, acknowledgements, sorry, I forgot. That's the most important thing as a community. You have contributors from ClearPath, Mark Medina; John from Nokia; a contributor who was at Spirent and is now at Roket; Minakshi from Cisco; and Kubi Gao from Huawei, who has helped us with testing in their labs and in ours, the community labs, as we call them. We have the Linux Foundation lab, and the community labs are vendor-provided, like Intel's, Huawei's, and Ericsson's. I should also acknowledge Christian from Cloudbase Solutions for the setup and access support, and we have Hennis Fedrick from Red Hat. And I cannot overstate the value of Sridhar, who has been leading the effort both on the OpenStack side and on the ODL side. We couldn't get somebody on the ONOS or OpenContrail side, but we are trying; we know that ONOS does have IPv6 forwarding in its code base, but we could not get to the routing, et cetera, so we are addressing that and looking at the gap analysis. And we are fortunate to have a very good project lead who is very systematic; you can see from the presentation itself that it reflects Bin's detail-oriented work. We look forward to questions and help from the community. We invite everybody to help, because this is going to be a major requirement for 5G as we go. So, questions: please go to the microphone in the aisle.

You have created many good documents, so I'm wondering if you can upstream those documents to, say, Neutron, networking-odl, or the OpenDaylight project?

Yeah, absolutely. That's a very good suggestion, and I would be very happy to upstream those documents to Neutron. Are you the primary contact for us to do that, or which channel do you suggest?

For networking-odl, yes, I am. For OpenDaylight Neutron Northbound, I am. And for OpenDaylight NetVirt, it's someone else.

Okay, let's exchange contact information and we can do that. Thank you; that's a very good suggestion.
One of the limitations I saw was regarding static IPv6 assignments to VMs when DHCP is being used. If you're not using the DHCP agents, are you able to statically assign at that point?

Right now we are using SLAAC, the stateless automatic address configuration. So no, statically assigned IPv6 addresses are not supported yet.

Okay. You showed a page on the gap analysis. So what's the communication path to upstream? As a community member, I'd like to hear the feedback, but so far this is the first time I am aware of it, so I'd like to have a constant communication channel.

I think that's exactly why we are here presenting our project: we hope we can establish a regular communication path going forward so that we can collaborate, for example between the Neutron project and our project. Establishing formal, regular communication channels is exactly one of the purposes here. Thank you. You want to go ahead? Maybe we should talk privately.

You showed us a proof of concept with OpenDaylight, but is there any plan to do the same with ONOS?

Yes. For now we have only installed OpenDaylight as the back-end SDN controller, and the plan is to support all the popular SDN controllers on the market, including OpenContrail and ONOS. But we don't have the resources, basically meaning experts on the ONOS side: somebody who knows ONOS well, how to integrate it and combine it with OpenStack in the same way as ODL.

Regarding OpenDaylight, I'm very willing to help here.

So we're looking for community support, especially expertise from different dimensions and different aspects, to help us evolve the project to cover as broad a scope as we can.

I see. Yeah, thank you.

Thank you all. Any more questions? Thank you very much.