Hi everybody, my name is Sean Madden. I'm with Piston Cloud Computing. Today I'm going to talk to you about Piston OpenStack integrated with PlumGrid, which is a Neutron plug-in. Just a little bit of background here. The first thing we want to talk about is moving away from the legacy IT architecture that exists now. Right now, a lot of people are still running their IT departments in silos: separate storage, separate compute, separate departments. We're trying to move to a much more converged infrastructure, as you'll see shortly. To give you a little background on what we've done at Piston: we've taken the OpenStack architecture and made it hyperconverged, running compute, storage, and networking on all the servers, and we try to make it highly available, scalable, and elastic on demand, all the buzzwords we hear all the time. As I mentioned, everything runs on every cloud node, so we can scale out very easily to tens, hundreds, even thousands of nodes. This is great on the compute and storage side, but on the network side a lot of people are still using Nova Network, which is VLAN-based, so we're very limited in what we can do. We define VLANs when we boot up the cloud, and we're stuck with those four VLANs: cloud, services, public, and host. Then when we want to expand the cloud even larger, we have a really hard time. That's where these Neutron SDN plug-ins come in. In a small-scale deployment, we have people deploying anywhere from five to 20 to 100 nodes, and you'll see them get stuck on their VLANs. The multi-tenancy support really isn't there. For example, if you have different projects and you want those projects to be on completely different address spaces, you can't really do that; people end up sharing the same address space, and it's not very scalable.
So what we've done is we've worked with PlumGrid. PlumGrid is one of the SDN solutions we have as a plug-in into Piston OpenStack. PlumGrid consists of three main components. One is the PlumGrid Director, which is basically the manager of the PlumGrid integration. It runs on what's called an infrastructure server, which I'll talk about in a second. There's also the IO Visor, which runs on all the different hosts; the IO Visor is what lets you create many networks and takes care of all the networking, the switching, the routing, that kind of stuff. And there's also a gateway, because you need a gateway to be able to get out to the internet. If you look at what a Piston OpenStack and PlumGrid integration looks like: originally we had our boot node, and now we've added this Piston boot node plus the PlumGrid Director. These run virtualized on the infrastructure server. Then we have our cloud nodes, which run Piston's converged infrastructure as well as the IO Visor. What we see here is a typical deployment example of Piston with PlumGrid. We have our management services running on these infrastructure clusters, and then these cloud cells, which are individual clusters; there can be many of them running this Piston plus PlumGrid solution. It's really cool because it can scale out as large as you want it; we don't know exactly how big it will scale yet. And you can make all these networks on the fly, which is one of the limitations we have now with Nova Network: we're limited to the networks we create at cloud boot time. With Neutron, you can create many networks.
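To make the plug-in relationship concrete, here is a rough sketch of how a vendor Neutron plug-in like PlumGrid's gets wired into the Neutron server via `core_plugin` in `neutron.conf`. The exact class path and the Director connection options are assumptions for illustration; they vary by PlumGrid and OpenStack release:

```ini
# /etc/neutron/neutron.conf (sketch; plug-in class path varies by release)
[DEFAULT]
core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2

# Illustrative Director settings; section and option names are assumptions
[plumgriddirector]
director_server = 192.0.2.10
director_server_port = 443
username = admin
password = secret
```

With this in place, Neutron API calls (network, subnet, router create) are delegated to the PlumGrid Director rather than handled by the built-in reference implementation.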
You can create multiple projects that have their own different networks, even multiple projects that share or reuse network IPs you've already used, because we can segregate them and keep them separated. So there's a lot you can do with Piston plus PlumGrid. This is the example you've seen if you've seen Neutron: what happens when you start creating VMs, networks, and routers, and they start showing up here. Everything you do on the network side shows up in the topology. The cool thing about PlumGrid and Piston is, number one, from the Piston side you can go into the Horizon dashboard and see what's going on with your topology. But PlumGrid also has a user interface you can use, and everything syncs via the APIs: if you spin up something on the OpenStack side, it goes into PlumGrid, and vice versa; if you create switches, routers, VMs, and things on the PlumGrid side, they go over to the OpenStack side as well. That first example was Horizon, and this is what the PlumGrid side looks like. As I mentioned, when you spin things up in PlumGrid's user interface, they automatically get transferred over into the dashboard, so it's two-way communication. So why Piston and PlumGrid? The architecture is hyperconverged. As I mentioned, from our side we run compute, storage, and networking everywhere, so any host can run anything. If you do have hardware failures, we don't worry about it too much, because we migrate services to other physical servers so your cloud keeps on running. The deployment model, as I'll show in the demo, is very simple.
Basically, we have the infrastructure host that boots up, we go out and auto-detect all the hardware so we know what kind of drives, RAM, and CPUs you have, we net-boot all the hardware, and then in about 10 or 15 minutes, depending on the speed of your network and how many nodes you have, you'll have a full Piston OpenStack cloud with PlumGrid up and running. There's on-demand creation; we always share the buzzwords: we want self-service, we want on-demand, we want elasticity. All these things are very true in this architecture because, like I said, you can spin up networks as you need them. When you first spin up your cloud, you don't have to go out and create all the networks; they're created on the fly. So if you have 10 physical servers, say, and you only spin up two VMs, the networks you create for those two VMs exist only on the hosts where they reside. We don't create networks across the entire cloud until you need to. It's kind of a just-in-time thing. Let me move on to the demo; I see my time is running by very quickly. In the beginning, the first thing that happens is we spin up this infrastructure virtual domain, which is what PlumGrid calls these virtual networks. On this infrastructure host, we spin up a host bridge and a services bridge, connect them with a router so they can talk to each other, and then add a port on top so that we can get external access outside of this infrastructure host to communicate with the rest of the network. And I'm probably going to talk a little faster than the demo, obviously. Then we boot the cloud. As I mentioned, the virtual boot node and the PlumGrid Director are sitting on the infrastructure node. The cloud boots up, and as you'll see, this is our web console interface.
You'll see that the hardware gets detected via the management port on the servers; we use IPMI for that. We detect all the physical servers as they start booting up. Once they're booted, they get an IP on their high-speed interface and we PXE boot our software into a RAM disk on all the servers. Prior to doing that, we've detected all the hardware, so we know what's out there. Then in this cloud boot console, you'll see that it tells you what state your servers are in: you know when they're booting, when they want their configuration, when they're provisioning and how far along in the provisioning process they are. Once they're done, you get a link into your cloud so you can go to the dashboard. Here we see provisioning, and it tells you exactly what we're doing: we're installing ZooKeeper, we install Ceph as our storage back end, we run MySQL. Some things run once in the cloud, and some run on every server in the cloud, depending on the service. Now we get into the dashboard, and we can start creating networks. Network creation via the Neutron APIs is very simple. Here in the GUI we can create a network and call it something like net10. That gets created. Once a VM is spun up, it pulls from the IP range we've defined: we basically give it a subnet name, the IP range and subnet mask we want to use, and a gateway. On the next page, you'll also see that we provide DNS and enable DHCP, so all the IP addresses for your VMs come from DHCP. All this happens, like I said, just in time, just when you need it. When the VM gets scheduled onto a host, this particular network ends up running just on that host; it's not a network installed on every host, so we don't have a lot of overhead in doing this.
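The network-plus-subnet creation done in Horizon above can equally be done with the Neutron CLI of that era; the names and addresses here (net10, 10.0.10.0/24) are made up for illustration, and the commands assume credentials for a live cloud are already sourced:

```shell
# Create a tenant network; the SDN plug-in realizes the L2 segment on demand
neutron net-create net10

# Attach a subnet: CIDR, gateway, and DNS server.
# DHCP is enabled by default, so VM addresses are handed out just in time.
neutron subnet-create net10 10.0.10.0/24 \
  --name net10-subnet \
  --gateway 10.0.10.1 \
  --dns-nameserver 8.8.8.8
```

Horizon drives these same Neutron API calls under the hood, which is why resources created either way show up in both the dashboard and the PlumGrid UI.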
So this is the one network that's been created. From this network, I'm going to spin up a virtual machine, at least in the demo here. So we have our network, and then we simply boot an instance in the GUI as well: we grab the image, give it a name, and pick that network. Then we'll see that the VM gets spun up with that network, and we can look at the network topology and see that VM on that network. Then we can create another VM, put it on another network, and on top of that add a router to connect those together. Beyond that, we can take individual OpenStack projects, make multiple projects, create their own networks, add routers, and run all these things together. So this is a very different solution from the original Nova Network. We still have some people using Nova Network because it meets their needs. But for people who do want L2 segregation, who want to define and create these networks on the fly, this new Neutron with PlumGrid is certainly a very, very good option. I'm creating the second network here in the demo for my second virtual machine, and then I'll show you exactly how we can make those two VMs talk to each other by creating a router. I'm going to speed this up slightly because I don't want to run out of time. As you see here, the first VM has been created on the first network, and I'm just creating the second VM so you can see how they communicate with each other. OK, so the VM is building, and then it's going to be active. Once it's active, we'll look at the network topology, and it's kind of cool: you see everything happening on the fly. You can also do this in the PlumGrid GUI to see that these two VMs have been created on these two networks.
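Booting the two instances from the CLI follows the same pattern as the GUI steps above; image and flavor names are examples, and the network UUIDs would come from `neutron net-list` on a real cloud:

```shell
# Boot the first VM attached to net10
# (<net10-uuid> is a placeholder for the UUID from `neutron net-list`)
nova boot --image cirros --flavor m1.tiny --nic net-id=<net10-uuid> vm1

# Second network and subnet, then the second VM on it
neutron net-create net20
neutron subnet-create net20 10.0.20.0/24 --name net20-subnet --gateway 10.0.20.1
nova boot --image cirros --flavor m1.tiny --nic net-id=<net20-uuid> vm2
```

At this point the two VMs sit on separate L2 segments and cannot reach each other until a router connects the subnets.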
Then we create that router and show that they're up and running. On the router, we have to add the interfaces we need, obviously; once those interfaces are added, the VMs can talk to each other. And then the scalability portion: for the people we've talked to at our booth, and our customers in general, scalability is probably one of the biggest things people want in their clouds that they don't have now. We hear from people who want, say, 5,000 networks; with VLANs they hit the 4,096 VLAN limit. Or they just want things, in general, to stay up and running: if your servers go down, you want your network to continue to run. So the scalability, the HA, the elasticity are all built in here. Now in the demo, once these VMs are up, we go into the console on one of the VMs to show that we can actually ping from one VM to the other, to see that it's up and running as expected. Both VMs, obviously, are good. The last thing is that we don't just want to ping VMs internally to the cloud; we want to be able to get out to the internet, or have people on the internet be able to log in via SSH and get into their VMs. So we have to attach a floating IP to the VMs as well, so we can get external access. This shows how we attach a floating IP on the instance, and then, as I'm showing here, we can get access into the VM from outside. We ping 4.2.2.2, just an external IP. So you can see that creating the networks, creating the VMs, attaching those networks to the VMs, adding a router, and then adding floating IPs is a very simple process. And the last thing I want to show you is what the PlumGrid UI looks like.
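The router, external gateway, and floating-IP steps in the demo map to CLI commands like these; the external network name `public` and the floating IP address are assumptions for illustration:

```shell
# Router connecting the two tenant subnets so vm1 and vm2 can talk
neutron router-create r1
neutron router-interface-add r1 net10-subnet
neutron router-interface-add r1 net20-subnet

# Uplink the router to the external network and allocate a floating IP
neutron router-gateway-set r1 public
neutron floatingip-create public

# Associate the allocated floating IP (example address) with vm1
nova floating-ip-associate vm1 203.0.113.25

# From inside vm1's console, verify external reachability
ping -c 3 4.2.2.2
```

The internal VM-to-VM ping only needs the router interfaces; the gateway and floating IP are what make the VM reachable from, and able to reach, the outside world.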
So everything we did in Horizon actually shows up here as well. You can go back and forth and see that that's the case, or you can just create things here on the PlumGrid side if you really want to dig deep into the UI. So in conclusion, that's what I wanted to show you. Just know that this is Piston OpenStack with PlumGrid: highly scalable, highly available, and available today. If you have any questions, feel free to ask now; I think I have about 30 seconds left. Otherwise, the Piston and PlumGrid booths are that way. Thanks a lot.