Thank you. So we're going to take the next 15 minutes or so and talk through how IBM can help you build an OpenStack cluster at scale, and then help you add some governance to it. Since we only have 15 minutes, I'll get right into it.

If you've been listening to IBM lately, you've probably heard us talk about three things. At the top level, there's the IBM Cloud Marketplace, recently announced at IBM Impact, which allows you to go find a full suite of software-as-a-service solutions. As you move down the stack, you'll see codenamed BlueMix, our Cloud Foundry-based PaaS. What I'm going to talk about today is at the very bottom, the infrastructure-as-a-service level: OpenStack and SoftLayer, and how you can use SoftLayer to get started with OpenStack.

If you're not familiar with SoftLayer, it's a broad portfolio of infrastructure that you can choose from. You want bare metal, virtual servers, dedicated, shared, physical or virtual, hourly or monthly? You can go to their portal and get that. If you want some bare metal machines, you can get them delivered in two to four hours. This is important because SoftLayer is a global cloud platform. They're going to have 25 data centers spread across the world at the end of the year, and it's the ideal platform to build scalable, available, reliable OpenStack clouds. But not only can you build them; I'll talk a little bit about how you can get started with OpenStack very easily at SoftLayer. No commitments. Go out, get started, build your first install, and that's what we'll walk through.

But why would you want to deploy OpenStack on SoftLayer? You can acquire the hardware you need when you need it. You can scale it in hours, not days, not weeks. You can add more bare metal in no time. You have an immediate global presence: like I said, there are going to be 25 data centers around the world at the end of this year, 40 if you count all of the IBM Cloud data centers.
There's low-latency, high-speed connectivity between all those data centers; they have a dedicated fiber network that connects all of their data centers together, and there are no transfer charges. If you build an OpenStack cluster at SoftLayer, you do it on the private network, and there are no transfer charges from any data center to another. If you take an interconnect into their private network, there are no transfer fees from on-premise either. And you can choose the OpenStack model that works for you. Do you want to bring your own OpenStack? Do you want to get it from us? Do you want to manage it? Do you want us to manage it? With SoftLayer and IBM, you have those choices.

From a hardware perspective, with the hardware SoftLayer has, you can go from entry-level, single-processor machines all the way up to GPUs and quad-processor systems, fully customizable: RAM, hard drives, SSDs, network. Do you want one gig, ten gig? And on-demand bare metal servers in two to four hours.

So this is interesting, but how do I actually get an OpenStack cloud, build it, and scale it globally? Here's what I did. I went to the OpenStack manuals and pulled up the installation guide for Ubuntu 12.04. I did this on Friday night. I went to the example architecture, and it gave me this diagram for an OpenStack Networking (Neutron) install. So this is what I was going to replicate. And the "before you begin" section listed out exactly what I need: a controller node, a network node, and a compute node. Easy enough. I went out to SoftLayer's website, went to the server section, and chose bare metal servers. Actually, I started with the virtual servers. For my controller node, I got exactly what it asked for: one core, two gigs of RAM, a 25-gig hard drive, 6 cents an hour. For the network node, I got one core, one gig, 25 gigs of hard drive, 4 cents an hour. And for the compute node, I got bare metal: a two-core, two-gig machine, which met the minimum, for 24 cents an hour.
So I had 34 cents an hour in my install, or about $8 a day, to get this set up and get started with OpenStack. But those specs aren't really useful; you can't really do anything with them, maybe start up a few CirrOS images. So to get something usable, I went back, got a little stronger controller node and network node, put one-gig uplink ports on them, 100 gigs of disk, a little more RAM, and it came out to 17 cents an hour. For the compute node, 16 gigs of RAM, four cores, again a one-gigabit network, 53 cents an hour. So now I'm up to 70 cents an hour, about $16 a day, to get started with OpenStack.

As I went through this, I didn't talk about one thing, which is the network connectivity. All SoftLayer systems come with three networks: a public network that is metered, a private network that is unmetered, and a management network for out-of-band management. In our case, if you look back at that reference architecture, there were three networks that the nodes connect to, and understanding this helps you understand how to map them. When I look at this, there's the management network, hypervisor-to-hypervisor communication, and the instance tunnels, which are guest to guest. Those are easy: they map right to the SoftLayer private network for all that connectivity. Again, unmetered, no charges. But what about that external network? How do I get outside of the guest? That was the piece I needed to add. I went back to the SoftLayer portal and got what's called a portable private subnet. In this case, I used an internal one, portable private, accessible only within SoftLayer. Again, no charge for those.

So that's it: I've got all the resources I need to build an OpenStack cloud at SoftLayer at 70 cents an hour. Going back to the install guide, I stepped through all of it. I actually did this in two regions. I first started in San Jose, where I had the two virtual machines and three physical machines for the cloud I built.
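For reference, that network mapping ends up in the Neutron ML2 configuration roughly like this. This is a sketch based on the install-guide era of OpenStack the talk describes, not something shown in the talk; the address is a placeholder you would fill in per node.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (network and compute nodes)
# Sketch only: the local_ip value is a placeholder, not from the talk.
[ml2]
type_drivers = gre
tenant_network_types = gre

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
# The instance tunnels ride the unmetered SoftLayer private network,
# so guest-to-guest traffic between nodes incurs no transfer charges.
local_ip = PRIVATE_NETWORK_IP_OF_THIS_NODE
enable_tunneling = True

[agent]
tunnel_types = gre
```

The external network is the separate piece: that is where the portable private subnet comes in, bridged on the network node so guests can be reached from outside the tunnels.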
That took about 90 minutes to get all the hardware up and running. I didn't do anything: I clicked, actually left, came back in 90 minutes, and it was all ready to go. I copied and pasted from the OpenStack documentation, just following it step by step. That took me about four hours of copying and pasting, just mindless work. I ran into problems with Neutron; I spent about three hours before I figured out that I really just needed to restart the DHCP agent, and it started working.

So then I did it again. I went and got hardware in Singapore. Again, I waited about 90 minutes for my bare metal to come up, then copied and pasted from the OpenStack documentation. I only had one machine I needed to copy and paste to instead of three compute nodes, so it went a little bit faster: about two hours' worth of work there. Again, something went wrong with Neutron. I spent about six hours before someone pointed out I just needed to restart my L3 agent, and then it worked.

So if you ignore all the Neutron time I spent, I had about nine hours in this. When I was done, I had Horizon running in a multi-region install, one region in Singapore, one in San Jose. It took me, like I said, about nine hours, and everything was fully functional. I could SSH into my guests from outside, and I had Neutron running with GRE tunnels. Everything was a pretty typical OpenStack install and a very easy way to get started. I will be putting out a blog post on how I did this in the next couple of days, probably early next week, so that you can try and replicate it. But yes, it was very easy, and if you want to get started, it's the way to do it.

And this was my dashboard running. You can see my two regions, RegionOne and Singapore; I didn't realize with RegionOne that I should have named it San Jose from the start. But when you look at this, one of the hard parts is that you can't really turn people loose on the Horizon UI in an enterprise, right? You need approvals and expiration policies.
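For what it's worth, both of the Neutron fixes mentioned above were plain agent restarts on the network node. A minimal sketch, where `restart_agent` is a hypothetical helper (not from the talk) that echoes the command in dry-run mode so it is safe to run anywhere:

```shell
# Both Neutron problems from the talk came down to restarting an agent
# on the network node. restart_agent is a hypothetical wrapper: by
# default (DRY_RUN=1) it only echoes the command it would run.
restart_agent() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "service $1 restart"
  else
    sudo service "$1" restart
  fi
}

restart_agent neutron-dhcp-agent  # unwedged instance DHCP in region one
restart_agent neutron-l3-agent    # restored external access in region two
```

On a live node you would export `DRY_RUN=0` to actually restart the services.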
And Nina from our IBM Cloud Manager with OpenStack team can talk a little bit about how you can use IBM Cloud Manager with OpenStack to add these features. There you go. We've lost it. Thank you.

All right, so thanks, Mike. As Michael said, we have Cloud Manager with OpenStack, which provides some additional layers of governance over OpenStack. This is the self-service portal that you see. We've signed on as a cloud administrator, and you can see all the features, functions, and capabilities that are available to the cloud administrator. You have some standard things that all OpenStack users would expect: the ability to configure the cloud, where you define which regions you want to monitor; the ability to create your projects; and the ability to manage users and roles, again, standard Keystone stuff.

But apart from that, if you look at the right-hand side of the screen, it shows you the cloud status. So here you have a single pane of glass which allows you to monitor and manage multiple regions in one shot; you don't have to work with one and toggle between them. As you can see, we have two regions, as Michael had: his Singapore and San Jose. Through the Cloud Manager you can manage all those regions, and if you look at the capacity utilization, it'll actually show you the nodes you have in each of the regions. In this case, we have two regions with the KVM nodes that Michael talked about. And Cloud Manager with OpenStack actually supports different architectures. On x86, we support KVM and Hyper-V, as Michael said. But we also support Power: PowerVM and PowerKVM, as well as z/VM. So you have the different filter options. Michael had talked about three nodes that he'd set up in his region.

But the big thing we're talking about is governance. What we do is allow you to apply governance policies at a project level. We have two kinds of policies.
Approvals, which Michael talked about, and expiration policies. This is a DevOps environment; look at the kind of projects I have. In the development project, most of us do agile development and follow two-week sprints. So we've set up an expiration policy so that at the end of two weeks of a VM being deployed, we stop the VM. We give the users a three-day grace period in case they have any defects or bug fixes they're working on, and at the end of the additional three days, we actually delete the VM and free up the resources. The development team is on to the next sprint; they fire up a new set of VMs and continue their work. So that's extremely convenient.

The next thing I'd like to show you is how easy it is to work these policies in when you're actually deploying your VMs. At this point, I've switched over to a test project, and since this is a demo, I've just used CirrOS because it goes really quickly. At the point of deploying, you see the expiration date has already been set; this is based on the policy that the cloud administrator had set up against that project. And that's it, you just go and create the VM. This tester needs five VMs for their test environment, so instead of submitting five requests, at the point of your deploy request you can just say how many VMs you want us to fire up. And that's it, we have gone ahead and done the deploy.

Again, as with everything else here, it's a single pane of glass: if you wanted to, you could look at all your VMs across all your clouds. In this case we're working with the Eastern region, so I've just filtered on Eastern. You can see we have five VMs already deployed by the development team; this request was for five additional VMs from the test team. You can see the deployment has started.
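To make the lifecycle concrete, the sprint-plus-grace-period policy described above works out to simple date arithmetic. This is a sketch of the arithmetic only, not the actual Cloud Manager API; the deploy date is made up, and the date calls assume GNU date as found on Ubuntu.

```shell
# Sketch of the expiration lifecycle described above (arithmetic only,
# not the Cloud Manager API). Assumes GNU date, as on Ubuntu.
SPRINT_DAYS=14   # two-week sprint: stop the VM after this
GRACE_DAYS=3     # grace period for defect work: delete after this

stop_date()   { date -u -d "$1 + ${SPRINT_DAYS} days" +%F; }
delete_date() { date -u -d "$1 + $((SPRINT_DAYS + GRACE_DAYS)) days" +%F; }

stop_date 2014-05-12     # the VM is stopped two weeks after deploy
delete_date 2014-05-12   # and deleted three days after that
```

The point of the product feature is that none of this is manual: the dates are stamped on the VM at deploy time from the project's policy.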
And at this point, I'd like to switch over to the OpenStack dashboard, because when you do your deployments, by default one tends to use the default Nova scheduler, but with IBM Cloud Manager with OpenStack we actually ship our own scheduler. We have something called the Platform Resource Scheduler, which has quite sophisticated policies that you can apply, so I'm just going to quickly show you that. It's an extension to Horizon, so you see it in your dashboard. We have two sets of policies. There's the initial placement policy, which takes effect when you make your deployment request. There are different kinds of policies you can specify; we've gone with striping, which is where you want to evenly distribute your workload across your different nodes, but as you can see, there are other kinds as well. And then we have the concept of a runtime policy: once your deployment is done, things can change in your environment. Your VMs could be deleted, and so your runtime policy monitors the nodes and dynamically adjusts where your VMs reside based on these policies.

Now, if you go back and check against your hypervisors: we had three nodes, we had the striping policy in place, we had five development VMs, and then five additional instances for testing. You can see it was evenly distributed, and all of this was transparent to the application. So I just wanted to show you that we've got the governance policies for expiration and approvals, and then we've got the resource scheduler, which is really how you optimize where you distribute your workloads. Thank you.

All right, thank you. If you have any questions, we'll be here for a couple of minutes. You can also stop by the IBM booth. So thank you.