Excellent. So before we move on, just a little thing as my massive head is in the way. The full version of the slides is at this bit.ly page. I will be posting slides throughout the session that reiterate this URL, but it's just bit.ly/openstack-ipv6-barcelona. It is about three times the size of the deck we have time to go through in here. So if you love tcpdumps, horrendous-looking debugs, and all of it in glorious color, that is in that deck. We've got a lot of content to cover with a lot of diverse topics around IPv6, so we're going to get rolling, as well as, hopefully, having a live demo in here. So: deploying IPv6 in OpenStack environments.

A quick poll: how many people here know anything about IPv6? That is fantastic. So that's good. We don't have time for a primer, and we're not going over a primer. If you have any questions afterwards about why this thing is in hexadecimal, I'll deal with your questions at the end of the session. But we're going to get rolling.

My name is Shannon McFarland. I'm a Distinguished Engineer at Cisco Systems working on things dealing with cloud and networking and containers and lots of stuff like IPv6 that no one else wants to work on. I've been at Cisco 17 years, I have worked on OpenStack since just prior to Diablo, and I have worked on and written books and so forth around IPv6 since 2002. I had a full head of hair before I worked on v6 and OpenStack together; when we put the two of those together, I was done for.

A quick look at our agenda. We are going to touch base very quickly on general OpenStack-plus-IPv6 considerations. It's really less about OpenStack specifically and more about general cloud stuff: what we need to take into consideration as we move into anything dealing with cloud, especially OpenStack with IPv6. Then we're going to jump into the tenant-facing side of IPv6, and that's where we'll spend the bulk of our time. I am not going over control-plane IPv6, meaning what you're doing from a Keystone endpoint perspective or what you're doing on the database side. Right now we're focused on the tenant space, because as a guy who talks to customers and engineers from companies all over the world, 99.9% of them care about getting v6 inside of the tenant space so that those folks can serve applications out of that tenant domain. And then we'll end on what we're hoping to do next cycle and some future stuff.

So again, here's that bit.ly link to go get the full presentation. Please get that presentation. I have no idea why, summit after summit, we post a video but we don't post slides, but you absolutely want my full version of this presentation because it goes into nauseating detail on literally every topic we're going to touch. So you definitely want that content. Also, I've got a bunch of IPv6-focused Heat templates out there spanning from Juno to Newton, so you'll want to grab those off of my GitHub page, and specifically you'll want to grab the new v6-only LBaaS v2 YAML. I'm very creative in my naming. That is the actual one we'll use in our demo today. Also, there are a bunch of Docker posts and OpenStack posts that I have at debug-all.com, which is usually where I dump my stream of consciousness.

So this is the only slide that even resembles a sales-y tone. I am not in sales, nor am I in marketing, so I don't give a shit if you ever use IPv6.
I have absolutely no need or wish to sell you anything or to convince you of IPv6. But if you have literally forced yourself under a rock for the last five years and don't know about this thing called IPv4 address starvation, then you should be fired. The reality is that this is an issue, especially in various geographies and market segments where you're facing some sort of public-facing v4 starvation, and all of the RIRs (RIPE, ARIN, AFRINIC, APNIC, all of these guys) have very, very concise information about how, if they've already exhausted their public IPv4 address pools, you get IPv6 address space for your organization and so forth. Also, as a guy who has written multiple CVDs and design guides and tech briefs and a book around deployment of IPv6, I'll tell you that getting IPv6 into your organization really sucked not too long ago, and it is much, much better now, especially in the context of what we're doing with OpenStack. So there are not many excuses for you to not be doing IPv6 in some context in your environment.

Now, the hard part is gluing together things that are probably new in your environment, maybe a cloud, whether it's a private cloud or a hybrid cloud tied in with AWS, and now maybe some sort of linkage to OpenStack that is new for you. When you add in the complexity of a brand-new protocol like IPv6, you put those two things together and bad things begin to happen. So if you plan it out, and you know where to begin, what to avoid, and what to maybe put off until the last possible moment, that will help you. Because inside a cloud we have some very complex things that are very protocol dependent: we've got API endpoints, we've got to communicate with various databases, and we have virtual and physical networking involved, for which we not only need basic connectivity but also need to take into consideration things like management, high availability, and securing those aspects. There's a lot involved for IPv4 alone, and when you add IPv6 in, it gets pretty gnarly.

We've had kind of a tumultuous time with IPv6 in the OpenStack realm. It's taken us a while to get to the fairly usable, production-quality IPv6 environment we have now, and we're going to talk about some of those features today. I feel fairly comfortable about where we are on the core IPv6 requirements, but there are some very important things we still need to do that we'll talk about in the future work here, such as IPv6 prefix delegation with high availability, and what we do about metadata in IPv6-only environments; currently we have no support for that at all, and we'll talk about it further in the deck. But we do have stuff we can work with today, and we're going to talk about the tenant-facing side of IPv6 and what our address assignment looks like: stateless address autoconfiguration (SLAAC), or do you want DHCPv6 involved in your address assignment?

We also want to talk briefly about how much IPv6 you really want to start with inside of your cloud environment. We'll talk about this notion of dual-stacking everything versus a conditional dual stack, where I just pick a part of my cloud and enable IPv6 there, because that's where my customers are demanding access. So we can graphically take a look at these two approaches, the dual-stack-everything approach and the conditional dual stack. Dual-stack everything is pretty brainless.
It is literally: put IPv6 everywhere there's IPv4. That sounds easy, but it really sucks, especially when you are looking at how to automate the deployment of your OpenStack environment in such a way that you have both v4 and v6 working side by side. You've got to make the endpoints work and the database access work, all of the stuff in the control plane plus the actual functionality that provides tenants access into the cloud. That's a really hard thing to do. So most of the customers and partners we know, and definitely those of you in the community, have echoed back that what we're really after today is just getting tenants IPv6 support so that they can run their applications and have those applications be available to the outside world. That is where the bulk of everyone I know of with a deployment, or working on a deployment, is going: a conditional dual-stack environment, where the control plane (all of the APIs, all of the databases, all of that stuff) is still IPv4, and they're enabling either dual stack, and when I say dual stack I mean two protocol stacks, IPv4 and IPv6, together in the same tenant, or they're creating a brand new tenant that is IPv6 only. We'll be talking about how we deploy some of these.

So we're going to start off with the three main address assignment types you have in the tenant space. This applies whether you are running provider networks or running with Neutron L3 and routers: there are three supported ways of getting address assignment out to your environments. But before we turn on our address assignment types, it's important to think, if you've not already implemented it, about how you actually handle your IPv6 prefixes for your cloud. The one there on the far left is the cloud-provider option: the cloud provider goes out to RIPE or ARIN or whatever your regional registry is and pulls a prefix to use inside the cloud, and every tenant pulls a prefix from the cloud. Super simple, straightforward, works beautifully, especially if you're using the IPv6 prefix delegation that we'll talk about. The second option is totally doable as well. This may be for those of you who are cloud service providers with enterprise customers that already have IPv6 and want to extend their prefix into your cloud domain via BGP policies, so all of their stuff looks like it's coming from their world from a prefix perspective. Totally doable; it's just BGP. So these two options are pretty mainstream.

Now, most of you do not look like dumb people, so I'm encouraging you not to do something dumb and pick the thing there on the far right. That is old-school legacy IPv4 thinking. That model takes ULA, unique local addressing, which is roughly the IPv6 equivalent of RFC 1918 private addressing, a non-routable space, and translates from that private space to a routable v6 space. Please never do this. It introduces a crap ton of really bad things in your cloud: you break a lot of application stuff, and it's not very performant. And if you do want something performant for this, well, I work for Cisco, so please buy these boxes; they are extremely expensive boxes built to optimize this type of translation. There is literally no justifiable reason for you to go down this path of translating between two IPv6 prefixes.
So once we've figured out that, yes, we've pulled a cloud prefix for our entire environment and we're going to use it to distribute prefixes out to our tenants, we're going to go through and start creating some networks in order to do this. You guys know this stuff as well as I do. We create a network; if it's a dual-stack environment, we create a public-facing v4 subnet and a public-facing v6 subnet. Really the only thing that changes here is that we've got a flag that identifies the protocol, ip-version 6, and that our addressing is now hexadecimal, which is cool because you can create names like you could in the old IPX days.

Now, if you are using SLAAC mode: I saw a lot of hands for knowing IPv6, and those same hands should know SLAAC, but fewer hands know SLAAC. Stateless address autoconfiguration, SLAAC, is simply what we call the EUI-64 standard way of providing IPv6 addressing to endpoints. Very briefly, SLAAC is a host asking, via a router solicitation to a multicast group, "I want the first 64 bits of my 128-bit address." Some router or routers on that same segment reply with the first 64 bits. Once the host gets that, it takes its 48-bit MAC address, splits it in the middle, puts FFFE in the middle to make a 64-bit interface identifier, glues the two together, and you're on the network. So that's what we've got with stateless address autoconfiguration: a very, very fast, simple, stateless way of getting addresses out to endpoints and letting them build the last part of the address themselves.

All we need to do to enable any of these modes is tune two knobs in our neutron subnet-create command: the first is the IPv6 address mode, the second is the RA mode, and for SLAAC we just set both to slaac. Basically this instructs radvd, which is the mechanism by which Neutron handles IPv6 router advertisements and address assignment within OpenStack, so it knows what mode it should be listening for and responding to clients with.
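To make those knobs concrete, here is a minimal sketch using the pre-OSC neutron client syntax from that era; the network name and prefixes are illustrative, and the two DHCPv6 variants shown alongside for comparison are the modes we'll walk through in a moment. A real subnet uses exactly one mode.

    neutron net-create ipv6-net

    # SLAAC: radvd advertises the prefix and the instance assembles
    # its own EUI-64 address
    neutron subnet-create --name v6-slaac --ip-version 6 \
      --ipv6-ra-mode slaac --ipv6-address-mode slaac \
      ipv6-net 2001:db8:cafe::/64

    # Stateless DHCPv6: SLAAC for the address, DHCPv6 for DNS and options
    neutron subnet-create --name v6-stateless --ip-version 6 \
      --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless \
      --dns-nameserver 2001:db8:cafe::53 \
      ipv6-net 2001:db8:cafe:1::/64

    # Stateful DHCPv6: address, lease, and options all come from DHCPv6
    neutron subnet-create --name v6-stateful --ip-version 6 \
      --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful \
      ipv6-net 2001:db8:cafe:2::/64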
So an example of this: we've got this instance down at the bottom. It's received its IPv4 address; it comes on the network and sends a router solicitation up to Neutron's radvd, which is listening on that subnet in SLAAC mode and gives back a 64-bit prefix, in this example 2001:db8:cafe::/64. The instance takes that, glues it together with its newly formed EUI-64 interface identifier, puts the two together, and away you go.

Now, some things about SLAAC mode: it does nothing with DNS. No options, no nothing; it is strictly an addressing mechanism. So if you want a DNS server entry, your domain name, or other options, you're not getting that through SLAAC; you would need to provide it through cloud-init or some other method. One thing most people get really obsessive about in SLAAC mode is that they don't see an IPv6 DNS server, because they put the DNS server and domain information in when they created the subnet. It is totally ignored with SLAAC; you can put it in the subnet, but it will never get sent to the client. But because the transport of DNS with IPv4 lets me do queries over IPv4 and get both A and AAAA records back, you can still use SLAAC and use your IPv4 DNS server to realize all of the DNS characteristics you need in v6.

Now, stateless DHCPv6 basically glues the old-school stateful DHCPv6 world, things like domain name, DNS server, and options, together with SLAAC: I get my address via SLAAC, but I go back and ask DHCPv6 for the DNS information. So again we've got our address mode and our RA mode indicating what's going on in that subnet, and in this case we're also indicating what our DNS server is. The client goes through the same methodology it did with SLAAC, but now it goes back and asks for option information.

And finally, we change our address and RA modes to stateful, and this is everything you ever knew about DHCP in IPv4, pretty much the same in DHCPv6: we have a database, we have leases, we can have lease timeframes, we can have options. All the things we had in the IPv4 world we can have in the IPv6 world.

Now, when you move over to provider networks, nothing really changes. We still have the capability of SLAAC, stateless DHCPv6, and stateful DHCPv6, the whole nine yards. But in the provider-network realm we are probably doing provider networks with VLANs, trunking the access of these instances into our physical infrastructure, and in this example the physical infrastructure is actually performing all of our routing functions for us. This is a reference diagram for you to take later, but when we recreate those networks like we did in the previous slide, we are saying all the same things we did before, except the instance is now using an upstream router, for example an aggregation-layer switch in your data center, to perform that IPv6 routing. These are the same types of examples we had before with SLAAC and so forth, but on provider networks, and you need to pay attention to IPv6 in your physical infrastructure upstream: when we do provider networks with VLANs, the upstream infrastructure has to be configured properly to support these three address assignment types.
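As a concrete illustration of that upstream requirement, here is a hedged sketch in Cisco IOS-style syntax on a hypothetical aggregation-switch VLAN interface; the interface, prefix, and exact commands will vary by platform, and you would pick exactly one of the three modes per segment.

    interface Vlan100
     ipv6 address 2001:db8:cafe::1/64
     ! SLAAC is the default: the RA carries the prefix with the M and O
     ! flags clear, and hosts self-assign EUI-64 addresses.
     !
     ! Stateless DHCPv6: set the O (other-config) flag so hosts keep their
     ! SLAAC address but come back to DHCPv6 for DNS and other options.
     ipv6 nd other-config-flag
     !
     ! Stateful DHCPv6: set the M (managed-config) flag instead, so hosts
     ! get both address and options from DHCPv6.
     ! ipv6 nd managed-config-flag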
So whether you've got Cisco or Juniper or Arista or, you know, PLUMgrid, whatever your infrastructure is, virtual or physical, you need to go into the interfaces that feed these VLANs and make sure your configuration for those properties is correct. We can see here that SLAAC, which pretty much everybody doing IPv6 addressing uses right out of the get-go, is the default. If you want stateless DHCPv6, you need to turn on your O flag, the other-config flag, which basically says "I'm giving you a prefix, but come back and ask for your DNS information." And under the stateful DHCPv6 option we see the managed-config, or M, flag.

Now, when we roll into an IPv6-only situation, it's pretty much the same as everything we just talked about; we're just missing the availability of metadata to inject information into our instances. This is really the only differentiator from everything we just covered: if you have custom information, a fully qualified domain name, specific SSH keys, those types of things you've historically leveraged metadata to provide, do you have that same capability in a v6-only network? The reality is no. Our friends at Amazon pretty much own the metadata service, and it is a v4-only service. Year after year, several people in the community have tried to work with Amazon to get v6 functionality, and we've even got some wishlist bugs that have appeared in the Neutron space over the years asking for a v6-enabled metadata service, and today it's just not happening.

Your workarounds are pretty easy, though. You can build into your images the things you would have put in metadata, or you can use something like config drive, where, as at the very top here, you just put in the basic properties you want. I've got this user-data file where I'm simply saying I want a custom fully qualified domain name, v6-only-instance.example.com, and some keys, and I reference that file in my nova boot statement. And just as proof that the config information is overriding things: I named that instance rhv6-only-drive, but when we log into the box we can see that we actually picked up the correct hostname from the cloud-config. So this is generally what most people are doing in v6-only environments: using a cloud-init file to inject all the parameters they previously relied on metadata to provide.
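A minimal sketch of that config-drive workaround; the file name, hostname, key, image, and network ID below are illustrative stand-ins, not the exact demo artifacts.

    # user_data.yaml: standard cloud-init cloud-config
    #cloud-config
    fqdn: v6-only-instance.example.com
    ssh_authorized_keys:
      - ssh-rsa AAAAB3...your-key-here

Then boot with the config drive attached so the instance reads this data locally, with no dependency on the v4-only metadata service:

    nova boot --image fedora-docker --flavor m1.small \
      --nic net-id=<v6-only-net-uuid> \
      --config-drive true --user-data user_data.yaml \
      rhv6-only-drive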
Any questions on this one? OK. So now, IPv6 prefix delegation. I know we're moving quick; we've got a lot of topics, so we're rocking and rolling, and most of you are still awake. How many people know of or use IPv6 prefix delegation? So, a handful of people. If you are doing IPv6, this is your goal inside of OpenStack, and many people do not fully comprehend the importance of this feature in production environments. Today with IPv4 we don't have to do day-to-day management of which tenant is going to create which private network with which CIDR; in IPv4 we have overlapping IPs, we have NAT, and we just don't care. But in IPv6 we have no NAT (yeehaw, that's a good thing), and the reality is that completely changes your operational model and how you distribute IPv6 prefixes to your tenants. Because if Sally gets one prefix, and Joe comes up with a prefix that happens to be the same one Sally got, bad things happen in your Neutron and networking environment. We need some sort of central control mechanism that allows us to dynamically distribute prefixes to people in a non-overlapping way.

The IPv6 subnet pool support that came around Kilo, maybe even before then, was a good first attempt at that. It gave us a non-colliding way to create a master pool people could pull from: they get the next available prefix out of the pool, with no collisions. In large production IPv6 environments we leverage IPv6 prefix delegation completely outside of the cloud space; it is a very important way for us to re-address even our IPv6 prefixes inside our network and have the change distribute its way all the way down into the host realm without us ever really touching a configuration on any box. So bringing that support into Neutron was very important. We've got some of the guys who authored that code up here in front, so if it breaks on you, they're up here; you can come talk to John and these guys. The only thing we've got to work on is that IPv6 prefix delegation today is not highly available from an agent perspective, so that is something we definitely want to be rolling on in the next body of work: making that agent functionality highly available.

Very quickly, the basics of prefix delegation: you have a requesting router that says "hey, I need some downstream prefixes for my tenants; can I have a prefix?" and it talks to a delegating router. That could be an actual physical router, it could be a router acting as a relay to another server, or you could have a server acting as server and relay agent on the same network. It responds, "sure, take this prefix," and assigns it downstream. Then, when a host comes on the network, it goes through SLAAC or DHCPv6 and says "hey, can I have a prefix?" and the requesting router, in this case radvd, says "sure, take this prefix"; the host creates an address, and off you go.

So let's look at some configuration on how you actually enable this. This is our example topology: we've got an all-in-one node with radvd and Neutron plus the prefix delegation client, our private network basically needs a prefix, and our router is going to act on our behalf to go get that information. From a configuration perspective, in whatever server you have (here's a Dibbler example), you would identify your PD class and say: this is the pool you are going to assign prefixes out of, in this case 2001:db8:face::, and we're going to hand those prefixes out in /64 increments. Then inside of Neutron you just need IPv6 PD enabled, and then you go create your networks. Now, the one aspect that's different from all of the configs we've seen thus far for SLAAC and so forth is that within the tenant realm we are not identifying our own prefix; we're not saying 2001:db8:bad:face::/64. We're allowing prefix delegation to do that. So the highlighted item to look at in the syntax is use-default-subnetpool: when you enable IPv6 prefix delegation, it uses that default subnet pool functionality, and via its request to the delegating router it learns what its prefix is, versus pulling out of a predefined pool.
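Pulling that together, a hedged sketch of both sides; the Dibbler server syntax here is from memory of that era's docs, and the interface, pool, and subnet names are illustrative.

    # /etc/dibbler/server.conf on the delegating router
    iface "eth1" {
        pd-class {
            pd-pool 2001:db8:face::/48
            pd-length 64
        }
    }

    # neutron.conf on the network node
    [DEFAULT]
    ipv6_pd_enabled = True

    # Tenant subnet: note there is no hard-coded prefix; the
    # --use-default-subnetpool flag lets PD supply one
    neutron subnet-create --name v6-pd --ip-version 6 \
      --ipv6-ra-mode slaac --ipv6-address-mode slaac \
      --use-default-subnetpool ipv6-net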
Now, once you've got your public network and your private network, and you've linked your public network to your router, the magic happens the second you associate your private network to the router. That triggers a bunch of behavior in the prefix delegation agent, and it will go up and begin the process of obtaining a prefix for its downstream. If you've got the PDF, or when you get the PDF, you'll see two solid slides of colorful debugs and tcpdumps, the whole nine yards, that walk you step by step through what's happening on the delegating router side, what's happening on the agent side, and what's happening between the two of them. If you really want to implement this, you really need to understand the flow of how things happen.

Now, IPv6 with Heat. We're building a trend here: start with the basics, do it manually, maybe do it with provider networks, maybe do our address assignment with prefix delegation, and now we want to take a lot of that pain away and do it with Heat. How many Heat users? Good. Heat with IPv6 is pretty much Heat with IPv4: basic parameters are the same, basic resources are the same, and there are a bunch of examples up on my GitHub account. We're basically going to go through and build either a dual-stack-enabled Heat template or an IPv6-only Heat template.

So let's take a look at the one we're going to run in our demo. We've got a YAML-format Heat template from the Newton timeframe. We've got a list of parameters, things like keys, images, flavors; we've got a public network; and we are creating a new private network with IPv6 on it, and you can see the prefix we've got here. Then in our resources we've got the network, we've got a private v6 subnet running SLAAC, we've got a router interface talking to that private subnet, and we've got a couple of server resources we're going to boot, in this case Fedora images running Docker with nginx inside a container, all v6-only. We're going to create a security group to allow all kinds of magic, and then we're going to create some LBaaS v2 resources: a health monitor, a listener, a pool, and the load balancer itself, and out of our IPv6 subnet we're going to create a VIP. And then we've got our member servers. My little 13-inch MacBook Pro is running two virtual machines with very fat images and several other things, so we're going to go ahead and kick this off, get it running, go back to our presentation, and then come back once it's done. We'll let it crank away on all of the orchestration it needs to do and come back to it in just a couple of minutes.
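To give a flavor of the template while it runs, here is a trimmed, illustrative sketch of the v6 subnet and LBaaS v2 portion, using the Newton-era OS::Neutron::LBaaS resource types; the names, prefix, and property values are my reconstruction rather than the exact YAML from my GitHub.

    resources:
      private_net:
        type: OS::Neutron::Net

      private_subnet_v6:
        type: OS::Neutron::Subnet
        properties:
          network: { get_resource: private_net }
          ip_version: 6
          cidr: 2001:db8:cafe:1::/64
          ipv6_ra_mode: slaac
          ipv6_address_mode: slaac

      lb:
        type: OS::Neutron::LBaaS::LoadBalancer
        properties:
          vip_subnet: { get_resource: private_subnet_v6 }

      listener:
        type: OS::Neutron::LBaaS::Listener
        properties:
          loadbalancer: { get_resource: lb }
          protocol: HTTP
          protocol_port: 80

      pool:
        type: OS::Neutron::LBaaS::Pool
        properties:
          listener: { get_resource: listener }
          protocol: HTTP
          lb_algorithm: ROUND_ROBIN

      monitor:
        type: OS::Neutron::LBaaS::HealthMonitor
        properties:
          pool: { get_resource: pool }
          type: PING
          delay: 5
          timeout: 5
          max_retries: 3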
So now, what about L3 HA? How many people are running L3 HA inside of Neutron, or do you even know whether you are? Because the beauty of L3 HA is that from a tenant perspective you have no idea your routers are L3 HA enabled; you don't know whether there's VRRP and keepalived humming along in a multi-router HA environment behind the scenes. But from an operator's perspective, it's very important to understand how L3 HA functions with IPv6. So this is my view of VRRP with keepalived in the L3 context: you have an external network, in IPv6 2001:db8:cafe:17::/64 in this case; we have tied physical NICs or bonds into bridges; we have at least two or more L3 agents acting in a redundant fashion; and then we've got another bridge southbound facing our tenant networks. VRRP uses a dedicated network, which basically follows the same tenant network structure you've defined, and we have a northbound-facing VIP, a southbound-facing VIP, and VRRP in between, so that we have high availability for traffic transiting north and south in both directions.

Now, one thing that hammers many people in IPv6 when they look at L3 HA is that they're expecting to see something like 2001:...::1 as the default gateway of their host in the tenant. That's not how we do things in IPv6. When we advertise a prefix out to a tenant network, we advertise the link-local address of the offering node, the router in fact. So you will not see a 2001-something address appear as the default gateway when you look at a route -A inet6 kind of view; you'll see the link-local address. In the L3 HA context, that is a virtual link-local address generated by VRRP and keepalived. You can see here in the tenant gateway that the address ending in f435 is the link-local address in the default gateway structure.

If you take a look at a keepalived configuration file, this should look familiar if you've done anything with L3 HA; I'm just adding the v6 context here. You'll have an L3 HA interface, you'll have a track interface, you'll have the VIP interface for VRRP (this is the well-known 169.254 network), and then you'll have all of the VIPs, both on the northbound side, that cafe:17 network, and on cafe:beef, the southbound side. Then you can see, all the way at the bottom, that the f435 link-local address is what actually gets distributed out to all of your instances as their default gateway, and it will float back and forth between the master and the backup depending on what fails.

OK, so now let's go back. Hopefully this has finished; it has, all right. In our topology, remember, we had a test network, we had two server instances, we had a brand new router, we've got all of this magic up and running, and we've got a load balancer. Now, the thing that takes us from "this is great" to "what do we need to do to make this easier operationally" is: how do we get people in the outside world to come talk to this VIP, or to talk to these instances directly? In an IPv4 environment we don't do anything to the upstream routing infrastructure. We know there is a router answering on behalf of all the private networks behind it; we pass traffic to it, and that router knows what it's translating for. That's how we do things with iptables and Neutron today with IPv4. But we are not NATting here, so we need to let the upstream infrastructure know what is going on inside our cloud space, which means we have to go tell that infrastructure about this routing. For example, if we take a look at this fixed IP, this is the internal, tenant-facing default gateway on our Neutron router. So if we come over to an outside node and add a route statement via our next hop, which in this case ends in 17::6 (oh mercy, caps lock, there we go), now we can ping6 the 2001:db8 address and get there. We now have access to the inside. And now we can go to the load balancer we created inside our environment, grab that VIP, come back here, and make sure we can get to it (OK, got a space in there), and if all goes well we're hitting our Dockerized nginx containers and load balancing back and forth between them.
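The demo commands from the outside node, as a sketch; every address here is an illustrative stand-in for the real tenant, router, and VIP addresses.

    # Route the tenant prefix via the Neutron router's external fixed IP
    ip -6 route add 2001:db8:cafe:beef::/64 via 2001:db8:cafe:17::6

    # Verify we can reach an instance inside the tenant directly
    ping6 -c 3 2001:db8:cafe:beef::10

    # Hit the LBaaS v2 VIP; -g and the brackets keep curl from treating
    # the colons as a port separator
    curl -g "http://[2001:db8:cafe:beef::100]/"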
The purpose of that demo was not to show how amazing Heat is. It's to show that we have to operationally restructure what we do when we stand up IPv6-enabled tenants in our infrastructure, because now, whether manually or via IPv6 prefix delegation, we have to go tell the external infrastructure, like we did with that route statement, how to get back into our environment. That leads to the next body of work: dynamically letting what we do inside our cloud be known outside our cloud. We've had some great work already in the neutron-dynamic-routing area within the OpenStack realm, doing some basic functionality with IPv6 BGP, and this allows us to tell Neutron how to participate with a BGP infrastructure outside of our cloud to let it know about the prefixes we've been creating inside our tenant domain. If you are not looking down this path and you are looking to deploy IPv6, then you need a very close operational relationship with the people who handle physical routing inside your network, because you will have to establish a process by which, when Sally is a new tenant and Sally gets an IPv6 prefix, someone routes to Sally's Neutron router. They have not had to do that historically in the IPv4 realm of OpenStack, but they now need to do it in the IPv6 realm. And it's a pretty basic set of functionality here: we can establish BGP speakers, we can associate networks with those BGP speakers, we can create peers, both v4 and v6 peers, with other BGP routers inside our domain, and then we can advertise prefixes in and out of that space.
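For reference, that workflow looks roughly like this with the neutron-dynamic-routing CLI from that timeframe; the AS numbers, names, and peer address are illustrative.

    # Create a BGP speaker for IPv6 in a private AS
    neutron bgp-speaker-create --ip-version 6 --local-as 64512 v6-speaker

    # Associate the external network so the tenant prefixes behind it
    # get advertised
    neutron bgp-speaker-network-add v6-speaker public-v6

    # Define the upstream router as a peer and bind it to the speaker
    neutron bgp-peer-create --peer-ip 2001:db8:cafe:17::1 \
      --remote-as 64513 upstream-peer
    neutron bgp-speaker-peer-add v6-speaker upstream-peer

    # Sanity-check what would be advertised
    neutron bgp-speaker-advertiseroute-list v6-speaker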
Then, future stuff. We want to expand this beyond BGP to IGPs; we would like to get some work done in the next cycle around what happens if you're not a BGP shop, if you're OSPF, or you're IS-IS because you're running a massive data center fabric on IS-IS. We want more dynamic routing fed into the community so that it's easier not only to deploy IPv6 but to manage, operationally, the way we get into and out of our environment. We definitely want to get prefix delegation HA enabled, because it's super important; if you're not already operationally tacking towards prefix delegation in your environment, you need to be looking at that. We also have this beautiful situation inside of OpenStack where stuff that worked in the last release doesn't work in this release. That's just part of how things happen when APIs are changing between releases and one group, such as the IPAM folks, does something different that another group didn't expect, and things appear that you didn't expect. So I beg of you: do not just go turn on IPv6 for a basic tenant, see it ping, and cheer victory. Go turn v6 on in your other project types. If you're doing Heat at all, if you're doing LBaaS, Firewall-as-a-Service, VPN-as-a-Service, we need as many people as we can operationally attacking the v6 functionality so that we can close the loop on bugs we may not even know about.

So I'm going to leave this here. We've got a couple of minutes for questions, or you can run for the door; that's totally appropriate. And remember to please pull the full version of this presentation, which gives you a lot more context on everything we've rushed through. Any questions? Yes, sir? "If we are using BGP to talk with the upstream router about our prefixes, that means we've got yet another point of runtime failure. If BGP goes down for just a second and all our routes are no longer available, how is that handled in Neutron?"

Well, so the question is: we're doing Neutron with BGP now, BGP is another element that can fail, and when it goes away, what do we do about it? That's why we want to work on the future stuff, to more tightly integrate the automation around what happens both on the Neutron side and external to Neutron on the BGP side. There are all kinds of things in BGP that help us with high availability, and we need to make sure that functionality is embedded inside Neutron enough to give us, one, the ability to tweak timers, and also the ability to actually watch peers and respond to them, maybe go to a different set of peer groups. So all of those things you mentioned: we need to extend the standard BGP functionality to exist in Neutron, which it doesn't today. Any other questions? Yes, sir?

OK, so the question is: what about radvd? The Neutron pairing with radvd is there to offload that functionality from an upstream router. For example, if you just want your local tenant virtual machines to communicate and be able to get their addressing, we needed a way to do that without relaying upstream. Inside the presentation is a very good table that tells you which flags to turn on and off in your RA modes to enable that functionality. So if you've already got routers upstream that you want to relay your requests to, you can set the M and O flags in that configuration to do it. But to answer your question on radvd: it is there to act on behalf of what a router function would be, absolutely, yeah.

Yes, one more question, and then I think we've got to roll for the next session. Go ahead, sir. "Can you confirm that the Keystone IPv6 endpoint works?" No, I can't. I mean, I did some all-IPv6 testing back in Mitaka, and that stuff seemed to work great for me, but I didn't beat the crap out of it, so I don't know if you'll run into a problem with it. The Keystone work I did in Mitaka seemed to work fine, though. OK, well, guys, we've got the next session coming in here. I'll be here all week, so tackle me and we'll talk about IPv6. Thanks for coming.