Alright, I guess we're on. Good morning everyone, and thank you for coming to our talk. My name is Carl Baldwin. I've been working with Neutron for about three years now. Last year, I guess about a year and a half ago, I was made a core reviewer on the Neutron team. More recently, I've been filling the position of L3 Lieutenant on the team, and I'm also a member of the Neutron drivers team. I'm talking today with Kiall Mac Innes and Miguel Lavalle, and I'll let you guys introduce yourselves. Right, so my name is Kiall Mac Innes. I'm a software engineer with HP. I started the Designate project maybe four years ago at this point, and up until this cycle I was the PTL. I finally handed the reins over to somebody else this cycle. And I guess I'll leave it at that. Miguel? So my name is Miguel Lavalle, and I'm a Neutron developer with the Linux Technology Center at IBM. I've been working with Neutron over the past three years, first doing testing with Tempest and now as a Neutron developer. Excellent, thanks. So our talk today is about getting three OpenStack projects to work together, to communicate. But the end goal we were after is really pretty simple: it's just getting to your instance by name. And I thought it was kind of ironic. We walked in, and Miguel had a piece of paper with the IP address of the machine we're going to do the demo on. So I said, you know, we've got to get this done and deployed everywhere so that we don't have to bring this paper with us anymore. I wanted to start today by just going over the background, what motivated us to get started on this. It was actually about two and a half years ago, I think, that I thought of doing this. What happened was I had just started working on Neutron and HP's public cloud. We were having good success standing it up, and I was starting to boot VMs and play around with them for various things, even a few personal things I was playing around with on these VMs.
And I noticed that whenever I typed sudo anything, sudo id, sudo make me a sandwich, whatever, the first thing it did was spit out this error: "sudo: unable to resolve host" and then the hostname. I thought, well, that's strange, why can't I get rid of that? I could get rid of it with a hosts file entry. But VMs in the cloud come and go. So you might fix it on one, boot up another one, and the problem's there again. So I started digging into it, and I actually found that there were a number of other things that broke because DNS didn't know the name of my instance. Miguel put a few of those others here. I realized that the problem is that Nova has an instance name. You say nova boot and you give it the instance name. But that stops at Nova; Neutron doesn't know about that name. And Neutron's the one that has the local DNS server on the tenant network. Since Neutron doesn't know the name that Nova has, it makes up its own: it takes the IP address, replaces dots with dashes, and sticks some bogus domain on the end. That's not very useful. It's actually not any more useful than the IP address itself, being just a strict mapping from the IP address. And so I thought, well, wouldn't it be cool if the two projects talked? And then later on, when I was in HP Cloud, I was using, I think, a predecessor of Designate that was deployed in HP Cloud. Is that right? There was an internal tool deployed before that. There was one that I was able to use with HP Cloud, anyway. I actually took my own personal domain, put my DNS up in it, and played around with the API a little bit. And I thought, well, wouldn't it be cool if I could boot my instance and not have to take that information manually and plug it in with an API or through the web portal or whatever. And so I wrote up my first blueprint ever. Actually, one for Neutron and one for Nova.
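As an aside, the hosts-file workaround mentioned above is just a one-line entry on the VM itself, something like the sketch below (illustrative values; my-vm stands for whatever hostname the VM reports):

```
# /etc/hosts on the VM: map the VM's own hostname to loopback
# so sudo can resolve it without asking DNS
127.0.0.1   localhost
127.0.1.1   my-vm
```

The catch, as described above, is that this has to be redone on every new VM.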
And I took them to Atlanta and talked to people. I found that people were pretty excited about it, but not quite excited enough to do anything about it. So why did it get stalled for so long? We'll get into that a bit later. I wanted to hand the talk over to Miguel, who's going to talk about some of the technical part. Okay. So the root cause of these anomalies that Carl is explaining is the way the internal Neutron DHCP is implemented. I'm not going to walk through all the ports and all the connections there, but the main point I want to make here is that when you have an instance and you have a port, that port gets its DHCP service from the network node. For each network, there's a dnsmasq instance that provides DHCP and DNS services to that port. The name of the namespace is qdhcp- and then the UUID of the network, and for each network you get that dnsmasq instance. So what happens is that whenever you create a port, the Neutron server notifies the DHCP agent that a new port has been created. At that point, the DHCP agent requests the port information through the RPC channel. With that information, it updates a number of files that dnsmasq uses to come up with the DHCP allocation and the DNS services for that port. Essentially, the DHCP agent updates the files and then notifies dnsmasq that it needs to read the files again. And that's how your Neutron ports got DHCP and DNS services prior to Liberty. So essentially, the DHCP agent is creating these names. As you can see in the lower right corner of the slide, the name that the DHCP agent creates is "host-" and then the IP address of the port. The same thing happens with the fully qualified domain name: it becomes "host-", the IP address that was assigned to that port, and then the default domain for that OpenStack deployment. So now over to Kiall. Thank you, Miguel.
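The old naming scheme can be sketched in a couple of lines of Python. This is a simplified illustration of the behavior just described, not the actual DHCP agent code, and it assumes openstacklocal as the default domain:

```python
def dhcp_hostname(ip, default_domain="openstacklocal"):
    """Mimic the pre-Liberty DHCP agent behavior: derive a hostname
    from the port's fixed IP by replacing dots with dashes."""
    hostname = "host-" + ip.replace(".", "-")
    fqdn = hostname + "." + default_domain
    return hostname, fqdn

# e.g. dhcp_hostname("10.0.0.5")
#   -> ("host-10-0-0-5", "host-10-0-0-5.openstacklocal")
```

As the talk points out, this name carries no more information than the IP address it was derived from.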
So I wanted to give you a little bit of background on Designate itself. What is Designate? What are the high-level components? What are the things you use it for, and so on? So, the 10,000-foot view. The simplest way of looking at Designate is that we have quite a simple REST API that you're able to use to interact with your DNS data. Architecturally, we're pretty similar to Nova and Trove. We don't implement a DNS server; we make use of existing ones. We'll tie into PowerDNS, BIND, third-party solutions like Akamai or DynECT. So we don't require you to run your own global DNS infrastructure. You can instead use the API, use Keystone authentication, all of that OpenStack niceness, and then push all of that content out to a third party if you want. So, the components. We have quite a number of components, the core ones being API, Central, Sink, Pool Manager, and MiniDNS. Basically these all communicate over RPC, standard Oslo messaging services. They all aim to do one task and do it well. So for example, the API service is just a shim over Central that understands REST. It doesn't do very much else. It understands how to validate incoming queries, how to parse JSON, how to return rendering errors, and so on. It has no idea how to create a domain; it passes the command off to Designate Central. Pretty much everything lives in there for Designate; it's the core service. It is the only service that has write access to the database. Everything else is either read-only or has no access. We have MiniDNS, which is a pure Python DNS server. You would never point a customer at this; you would never point an end user at this. What you do instead is point your PowerDNS or Akamai at it. PowerDNS, Akamai, BIND, they will slave the zone from us.
So for simplicity, rather than reinventing the wheel for each DNS server, it turns out there is a standard way of pushing zone content into DNS servers, called a zone transfer. So we decided we were going to implement that in Python, rather than implement something that would call rndc and render zone files, something that would write to PowerDNS's database, something that would sync with Akamai's API. We decided to try and keep that simple. Next up we have Sink, which was our first attempt at doing an integration with Nova and Neutron. Sink was an event listener. It would sit on the Nova and Neutron notification queues, the same queues that Ceilometer would be listening to for events. Those events have a huge amount of information in them, including the instance names, the instance IPs, which port is attached to which instance. And given all of this information, you could correlate it, produce a name, and push it out to the DNS servers. It was fundamentally flawed in that there is no guarantee of delivery on these events. So you might get the port create, but you might not get the instance create. Or worse, they might be totally out of order, they might be delayed massively, you might miss a delete. So it was quite difficult to make it really reliable. So we started looking at this and talking to Carl two years ago or so. And finally, the last main piece is the customer-facing DNS servers. We support more than BIND, PowerDNS, Akamai, and DynECT, but those are kind of the main ones that people look at. So what can you use Designate for? You've got all of these DNS servers, PowerDNS, BIND, Akamai; none of these things are multi-tenant. So the first thing we do is create a REST API that allows you to take your single-tenant DNS infrastructure and share it among all of your projects, without risk of one tenant destroying or modifying another tenant's data.
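A Sink-style integration is essentially a callback on notification events. The sketch below is a hypothetical handler, not Designate's actual plugin API, but it illustrates both the correlation Sink performs and why a lost delete event leaves stale data behind:

```python
def handle_notification(event_type, payload, records):
    """Illustrative Sink-style handler: react to instance lifecycle
    notifications by creating or deleting a DNS record.
    `records` is a plain dict standing in for the DNS backend."""
    if event_type == "compute.instance.create.end":
        # Correlate the hostname and fixed IP carried in the event payload
        name = payload["hostname"] + ".example.com."
        records[name] = payload["fixed_ips"][0]["address"]
    elif event_type == "compute.instance.delete.end":
        # If this event is lost or arrives out of order,
        # the stale record lingers forever
        records.pop(payload["hostname"] + ".example.com.", None)

records = {}
handle_notification("compute.instance.create.end",
                    {"hostname": "my-vm",
                     "fixed_ips": [{"address": "10.0.0.5"}]},
                    records)
# records now maps "my-vm.example.com." to "10.0.0.5"
```

The event types shown do exist on the Nova notification bus, though the exact payload fields here are simplified assumptions.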
So beyond that, we act as a sort of gateway to third parties, so you can have one account with the likes of Akamai or DynECT, and all of your projects will feed into that. We also have some more functionality. Let's say you've got an existing infrastructure inside, you've got Microsoft Active Directory running in your enterprise, and you really don't want to move the Active Directory domain to Designate. But you do want it to fan out to your global DNS infrastructure. We can slave from those existing machines, those existing DNS servers, suck the zone into Designate, and then push it out the far side to the DNS servers we manage. And finally, and most importantly, it's about automating the DNS provisioning. You're familiar with this: at the end of the day, you want to stand up a new developer environment, you press a button, you get five VMs, a couple of Trove databases, Neutron LBaaS, and all of this magic just happens. Except there's no DNS; you're figuring out all these IPs. By having a DNS API there, you're able to integrate DNS into the workflow the same way you have VMs, networks, firewalls, and load balancers automated. So I kind of touched on this one already: what is this Sink thing? It's our old way of doing this. We're not going to remove it, because there's still some value there. Effectively, it listens on the notification events. I actually covered all of this already, except that it's plugin-based. The interesting thing about it is you take all of these events in and you dispatch them to a plugin. The plugin can be customized to do whatever you like. You might decide to have it go off and gather some extra data from some third-party system and use that in deciding the name, and so on. And you can do that, with the trade-off of potential unreliability: potentially leaving stale data around because you missed a delete, potentially not creating a record because you missed a create.
If you're comfortable with that, then you can do some interesting stuff with this. So at this point, I'm going to hand back over to Miguel. Thank you so much. So now we are going to review how we integrated the different components. Essentially we took two major steps. The first step was to fix the internal DNS anomalies that we already explained. What we did is move the generation of the DNS assignment from the DHCP agent to the Neutron server. Now the Neutron server generates it. And we added an attribute to the API for ports, the dns_name attribute, which allows you to specify the name that you want assigned to your port. We made this optional: we added a new Neutron configuration parameter, dns_domain, that allows you to enable this functionality. If you don't specify that parameter in your neutron.conf, you fall back to the previous behavior and nothing changes. But if you do specify dns_domain in your configuration file, then you enable this functionality. So essentially what happens is you create the port, the Neutron server creates the DNS assignment for the port, as you can see in the middle of the slide, and then when the DHCP agent requests the port information, it gets that information from the Neutron server and writes into the files that dnsmasq uses the names that were received from the Neutron server. So now you, as an API user, have control of the names that your ports are going to be assigned by the DHCP agent and dnsmasq. Once we had that in place, we made a little change in Nova. Nova uses the Neutron client to create ports whenever you create an instance. So essentially what we are showing on the left side of the slide is that now Nova does a port create specifying that dns_name attribute.
And that dns_name attribute comes from the hostname of the instance. So let's say that your instance is my_VM; that name gets sanitized to become compliant with DNS names, and it becomes my-vm. That name is sent to Neutron, and then we follow the path that I already explained. So essentially what's happening is that now, in your internal DHCP server, you are getting the name of your instance; your port is known to dnsmasq by the name of your instance. You are only getting one name, because the DNS name of that port becomes your instance name. And with that, we go back to Carl's explanation at the beginning of the presentation, and we can see that all those anomalies in the behavior of the different commands in your instance are solved. The reason is that now each one of those commands is really asking the DNS servers of the instance for the fully qualified domain name of that instance, and that gets resolved. Go ahead. How do you handle collisions? Like, when many tenants call their instance foo, there will be a collision on foo plus the default domain, mydomain.org. Remember that you have a dnsmasq instance per network. Okay, so this is just private. Yeah, so it's really a non-issue from a tenant's point of view. Yeah, that domain name is global, we are going to share that domain name, but keep in mind, right now I'm talking about the internal DNS, Neutron's internal DNS. So bear with me for a few minutes. So now, once we had that in place, we used it as a platform to go work on the integration between Neutron, Nova, and Designate. When we were planning this, we envisioned two use cases. Use case number one is where the DNS name and the DNS domain are associated with the instance or the port. So essentially, you can now assign a domain name to your Neutron networks.
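The sanitization step just described, my_VM becoming my-vm, can be sketched like this. It is a simplified illustration of the kind of transformation Nova applies to the instance hostname, not the exact implementation:

```python
import re

def sanitize_hostname(name):
    """Turn an instance display name into a DNS-compatible hostname:
    lowercase, invalid characters (underscores, spaces, ...) replaced
    with dashes, trimmed to a 63-character DNS label."""
    name = name.lower()
    name = re.sub(r"[^a-z0-9-]", "-", name)  # underscores etc. become dashes
    name = name.strip("-")                   # labels can't start/end with '-'
    return name[:63]

# e.g. sanitize_hostname("my_VM") -> "my-vm"
```

Because this attribute is already guaranteed to be a valid DNS name, Neutron can pass it straight through to dnsmasq and, later, to Designate.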
So we also extended the API with a new dns_domain attribute for networks. And you can have as many domains as you want; that domain doesn't have to be the same domain name that we are using for the internal DNS implementation. So now you create a network, and that network has a dns_domain attribute. So far so good. Next, let's say we create an instance, and that instance happens to have a port on that network. What's going to happen is that Nova creates a port, and that port has a DNS name specified by the name of the instance. As you can see in the fully qualified domain name attribute, it got the domain name associated with the network. Now you assign an externally accessible address to your port, a floating IP. You create a floating IP associated with the port of the instance, and essentially what happens at that point in time is that the Neutron server pushes that information to Designate. We are going to get two records in Designate: an A record with the floating IP address, associating that floating IP address with the name and the fully qualified domain name of your instance, and a reverse lookup record, a PTR, again with the floating IP address and the fully qualified domain name of that instance. So that's use case number one. The other use case we envisioned is one where we want a name and a domain associated with a floating IP, regardless of the instance that is connected to that floating IP. In that case we also extended the API with two attributes for floating IPs: a dns_name and a dns_domain attribute. Those attributes are going to override whatever is defined at the port and instance level.
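The pair of records pushed to Designate can be illustrated with a small sketch. The helper below is hypothetical, not Neutron code; the record data is shown zone-file style, and PTR name construction follows the standard in-addr.arpa convention:

```python
def dns_records(floating_ip, fqdn):
    """Build the forward (A) and reverse (PTR) records that the
    integration publishes when a floating IP is associated."""
    # PTR records live under in-addr.arpa, with the IP octets reversed
    reversed_ip = ".".join(reversed(floating_ip.split(".")))
    a_record = (fqdn, "A", floating_ip)
    ptr_record = (reversed_ip + ".in-addr.arpa.", "PTR", fqdn)
    return a_record, ptr_record

# e.g. dns_records("172.24.4.5", "my-vm.mydomain.org.")
#   -> (("my-vm.mydomain.org.", "A", "172.24.4.5"),
#       ("5.4.24.172.in-addr.arpa.", "PTR", "my-vm.mydomain.org."))
```

The forward record lands in the tenant's zone; the reverse record lands in the shared pointer zone, which is why it is created under an admin user, as explained below.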
Even if you have a DNS name and DNS domain associated with that port, when we create the floating IP and push the information to Designate, the dns_name and dns_domain that we associate with the floating IP override whatever information might have been there before. So as you can see here, I'm associating my name with my floating IP, and that's the information that gets pushed to Designate in the A record and the PTR record. How did we implement this? Well, essentially we added an external DNS service driver to the Neutron server. We wanted this to be pluggable, so we defined an external DNS service class which defines the abstract behavior of the driver, and the plan is to have different implementations of this driver. We created the reference implementation based on Designate, but there are plans to create specific drivers for other external DNS services integrated with Neutron. So how do you configure all this? Again, you go to neutron.conf, and in the default section you specify the external DNS driver that you're using; right now we only have Designate. And we added a new designate section, specific to the Designate implementation, where you specify first the Designate endpoint and then the information for an admin user and tenant. The reason we do that is that the PTR records are created under an admin user, and maybe Kiall can explain the technical reasons for that. You have lots of tenants sharing the same namespace, the same pointer domain. You've got 255 IPs, at a minimum, that have to be in one domain, but those IPs are allocated individually to customers, so you can't allow a customer direct access to that zone.
So, I was told to do that, and I just implemented it. And we have a final parameter, allow_reverse_dns_lookup. The PTR records are optional; there might be users who are not interested in them, so we created a parameter that can be true or false and enables or disables that functionality. So, since I like to live in danger, and I want to add to the pressure now that my VP is here, I want to attempt a live demo. It works. So first of all, let's see what instances we have in there. Can you see it from there? It seems it's visible. As you can see, I have an instance already created, in the middle of the screen. What I did before creating this instance is associate a DNS domain name, mydomain.org, with its network. So what I got out of that is that I have my DNS assignment with the hostname my-vm and my fully qualified domain name my-vm.mydomain.org. We can also see in Designate that mydomain.org is essentially empty; I don't have any records at this point in time, and in my PTR domain I don't have any records at this point in time either. So what we are going to do is assign a floating IP to that instance's port. And there we go, we got a floating IP associated, and the floating IP ends with .5. Let's see what happened in Designate. As you can see, we now have a new record with my floating IP and my fully qualified domain name. Let's go to the PTR domain; we can see that we got a PTR record for my instance. It's important to mention that half of this functionality is already in place in Liberty, and that's the internal DNS integration. We are looking to merge the integration with Designate in Mitaka-1. That's the plan.
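For reference, the configuration walked through above looks roughly like this sketch of neutron.conf. Section and option names are as described in the talk, the endpoint and credentials are placeholders, and the dns_domain option from the internal DNS step is shown as well:

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
# Setting a domain here enables the internal DNS integration;
# leaving the default preserves the old host-<ip> behavior
dns_domain = mydomain.org.
# Select the external DNS driver; Designate is the reference one
external_dns_driver = designate

[designate]
url = http://designate.example.com:9001/v2
admin_username = neutron
admin_password = secret
admin_tenant_name = service
# PTR records are optional; disable if reverse lookups aren't needed
allow_reverse_dns_lookup = True
```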
So, going back to the presentation in full view, we had a safety net here with screenshots. Now we want to share a little bit of our experience of cross-project collaboration. Having started this over two years ago, cross-project collaboration got off to a rocky start, and I wanted to talk a little bit about how that has improved in OpenStack since then and enabled us to get this done. Miguel, you are now serving, or you previously were serving, as the liaison between Neutron and Nova? No, I was the liaison between Neutron and Tempest. Neutron and Tempest, that's right; it was Sean serving as that liaison. Correct. So the point is, we've created liaisons and resources that we can use to help us work and collaborate between projects, and that's been really great. When I started out, that was less mature and not a resource that I knew of; I didn't think it existed at the time. So it started out rocky, and I was really pleased to find out that over time it got a lot better. Working with Nova: it's a very large project, and me being pretty new to OpenStack, new to Neutron and to Nova, I found it a little bit difficult to navigate at first, but now we've improved that a lot. Working with Designate was great. I think a lot of the success of this project comes from the fact that they created a very easy-to-use API, very easy to integrate, and also, as a team, they were very excited to do this. You guys had made your attempt that you talked about, slurping things from RabbitMQ, and the Designate team was very keen to work with us on this and very helpful. They even talked me out of a few complicated things that I wanted to do that turned out to be completely unnecessary, just me overthinking things. That helped out a lot, and I really appreciate it. And in my case, I want to say that I work on the Neutron side, and the Designate team was always available, willing, and ready to provide all the guidance that I needed. And working with Neutron... I can't really talk about that.
Funnily enough, I didn't actually write the first one. I think that was the placeholder where I was going to fill in, but I decided to leave it there. Who in their right mind? So, Miguel and Carl, obviously the two guys we've mainly been interacting with, have actually been great. Some months ago, I can't even remember which summit it was, we most recently got together again and hashed out the updates. They were absolutely excited to come along; we got them into one of our work sessions and actually managed, right then and there, to come up with a plan to actually do it. We walked out of a work session with a plan, so that's an achievement. What happened was, I walked in and Kiall said, why haven't we done this yet? And I thought about it and said, I have no idea, let's do it, pretty much. So Miguel, obviously being the driver on the Neutron side, did a huge amount of the implementation, all of the implementation in Neutron, and refreshed Carl's two-year-old blueprints and specs, getting them up to date again. He came along every week to our weekly IRC meeting and gave us short updates, every single week, which was hugely useful to us. Not being on the Neutron team, we don't see a lot of the stuff that happens in Neutron; it's hard to follow. But when you have someone who's willing to drop in, come and say hi once a week, and give two or three minutes of "here's how far we've got, here's my current issues, let's figure them out," it was really useful. So, I'm not sure what slide is next. Well, with that, I think we can go to Q&A, if there are questions in the room. Yeah, we actually have about two minutes for questions. Thanks, I've got two questions. I noticed that there's an FQDN attribute on the port, but also a domain name on the network. Isn't that a potential for conflict, or why is there an FQDN on the port? Remember that once your DNS domain comes from your network, it doesn't have anything to do anymore with the default DNS domain for OpenStack. Once you enable that functionality and have a DNS domain assigned to your network, your fully qualified domain name is going to be constructed from that, so that port entry is pretty much ignored. Yeah, and each tenant can have its own thing. And by the way, it's important to clarify that the domain name needs to exist previously in Designate. Actually, that was my next question. What we want is for the domain names used in Neutron to be authorized, if you like, by the DNS service manager, whoever is managing that DNS, because you're absolutely right, it might be a source of conflict. So if I understood correctly, the Neutron side of this is there in Liberty, but the Designate integration will be coming in Mitaka? Remember that there are two pieces to this: the internal Neutron DNS is already in place in Liberty, and the rest of the integration with Designate is coming in Mitaka-1. Okay, and when that integration is there, say someone assigns a domain name, will there be validation against Designate to ensure, for example, that that tenant actually owns that domain, or will that fail at some point later down the road? When you specify a DNS domain for your floating IP or for your network, at the moment of pushing that information to Designate, that domain needs to exist. So it'll fail early in Designate if it doesn't exist, and we'll return an error code to the API. For things like instance names that aren't valid DNS names, does it work the same way, that there will be a failure in that scenario? Well, the reason we are using the hostname attribute of the instance is that that specific attribute is sanitized to be a valid DNS name. Okay, makes sense, thank you. Thank you, that was excellent. I just want to know, with this Neutron port create and other stuff, will it work with Heat orchestration as well, or do you have any plans to integrate it in future releases? Does it integrate with Heat orchestration, with the port allocation and the floating
IP port allocation? I'm not sure we've really considered it yet; I'm not 100% sure. Okay, so any plans for it? We haven't thought about it at all yet. It'd be worth pinging us afterward, and we can try to figure out whether we can work it in before we land the remaining pieces in Neutron in the next few weeks. This is actually good feedback; if you want to talk to us later today or ping us on IRC, we'll be more than happy to talk about your use case. I mean, we are always looking for ways to improve this thing. Sure, thank you. Thank you. Is there time for one more? So we have time for one more. Excellent, actually two, but hopefully two small ones. The first is: what happens for people who are running publicly routable IP addresses behind a Neutron router with SNAT disabled? So, if you have SNAT disabled on your Neutron router and you're using globally routable IPs there, at the moment they do not get pushed into Designate. That is probably a future enhancement, but for the start, when we were dealing with this at the last summit, we scoped it down to something that we felt we could reasonably achieve in the cycle, and we so very nearly missed the cutoff to land the full amount in Liberty. So hopefully more will come. If you have SNAT disabled and you're using globally routable IPs, then that should probably trigger the creation of records in Designate; today it doesn't. Okay, and since you are tying the name to the port, what happens to ports that are not allocated to VMs, like router ports and all that? If I'm not mistaken, we don't limit it to instance ports, so any port can be given a name; the dns_name attribute can be used for any port. Okay. Thank you guys very much for attending, and if you have any questions, come and chat with us. Okay, thank you very much.