All right, we'll go ahead and get started. Thanks for coming out. I'm going to spend the next 40 minutes taking you through IBM, SoftLayer, and OpenStack: what SoftLayer is, how it maps to OpenStack, and how it can help you with OpenStack.

First, a little bit of a level set. If you've been paying attention to the news around IBM over the last couple of months, there have been quite a few cloud announcements. Starting at the top, the most recent was the IBM Cloud Marketplace, a set of high-value, consumable services that you can get both from IBM and from third parties. That's the top level of the stack. A step below that, at the platform-as-a-service level, is Codename: BlueMix, our Cloud Foundry-based PaaS environment. If you go out to bluemix.net, you can sign up for the beta; I believe that was announced back at the end of February. It's built on top of IBM SoftLayer. At the very bottom of the stack sit SoftLayer and OpenStack. OpenStack is what we use on-premise: all of our on-premise cloud products are based on OpenStack. Our public cloud offering is SoftLayer, a company IBM acquired last summer. I'm going to map out what SoftLayer is, what they do, and how OpenStack plays with that. I won't spend much time up the stack on either the Marketplace or the BlueMix pieces, but you should be aware of both: the Marketplace is a set of high-value services IBM has picked that run in, or participate in, this cloud ecosystem, and Codename: BlueMix is our PaaS offering.

So, SoftLayer stands apart in the marketplace by challenging some of the common assumptions around cloud. One of the most common assumptions is that everything is virtualized and shared. That's the cloud model pushed everywhere: VMs on a shared hypervisor, shared resources. SoftLayer came from a little different background. They support that model, so if you want shared resources on a virtualized system you can do that, but virtualization is a choice. You can choose to be virtualized or not, and resources can be shared, dedicated, or mixed. They didn't start from the notion of "here's our model, adapt to it." SoftLayer grew up from the point of view that there is a set of choices that you, the customer, have to make, and they give you the pieces you need to make those choices and pick what's best for your workload. That's a fundamental shift from the way most cloud providers look at this, which is: here's the one model, figure out how to fit your workload into it. We recognize that some workloads fit the shared virtualized model and others don't, and we want to give you the choice of what is best for you.

With that, SoftLayer offers a broad range of infrastructure choices. You can choose bare metal with your own stack: go into their portal, order bare metal machines, hourly or monthly. They're bare metal, so physical and dedicated, but you still have the hourly-or-monthly choice. And you can configure them: single, dual, or quad processor? Hard drives or SSDs?
Do you want 10-gig or 1-gig networking? You can go in and build the system that matches the workload you have. There's also a shared virtual environment: do you want VMs or not? Do you want your VMs on a hypervisor dedicated to you, or on a shared one? These are all choices you as a customer can make to match your workload. If you're in a regulated environment, maybe you can't run on a shared hypervisor, so you can choose to create VMs on a hypervisor dedicated to you. The point is that you have the flexibility to choose.

When you deploy these systems, they are on the same network. I'll talk a little bit about SoftLayer's triple network architecture shortly, but all of these systems are in the same data centers and the same racks, physical and virtual, on the same Layer 2 network, connected on the same VLAN. They're not in separate data centers; there's no bridge between them. They exist in the same infrastructure. You could build a three-tier application with virtualized web servers sitting on top of virtualized app servers and a database on physical hardware, all in the same data center, on the same VLAN, with low-latency connectivity between all of them. So from a compute standpoint you can pick and choose the resources that best match.

The same applies to networking and storage. Almost everything in the portfolio is available hourly or monthly, dedicated or shared, physical or virtual, and that applies to the networking as well. You can get physical load balancers, a dedicated Citrix NetScaler for example, or a virtual load balancer that you manage in software, or a slice of a shared load balancer. It depends on your throughput requirements, uptime requirements, what you need; you can pick and choose as a customer. For storage, there's a wide range of iSCSI options, object storage options, and NFS options you can go in and provision. All of those resources end up on your VLAN as a customer, so all of your resources end up on the same network. If you have resources in multiple data centers, you can turn on VLAN spanning, connect the distinct data centers together, and share resources between all of them.

The goal of SoftLayer was to give customers the flexibility and choice they need to assemble the right hardware to solve the problem. It wasn't a one-size-fits-all model; it wasn't "here's a set of shared VMs, figure out how to fit your application in there." If you want to pick up and move your application while retaining the same topology, you can probably duplicate it at SoftLayer with the resources they have.

SoftLayer's triple network architecture is one of the fundamental pieces and one of their big differentiators, along with what I'll cover on the next page. Each of their systems can connect, on distinct interfaces, to each of three networks. The first is the public network: metered access to the public internet. If you're on this network you get a public IP, and the outside world can hit it, assuming you haven't set up a firewall or something to protect it; it's publicly accessible. Anything coming in or out of this interface is metered. The second is the private network, which gives you unmetered access. Anything on the private network is free.
Intra-data-center, inter-data-center, it doesn't matter: as long as traffic stays on the private network, it's unmetered, even between data centers. This also applies to Ethernet handoffs in a POP. I'll show you some of the SoftLayer POPs on the next page, but if you take a 1-gig or 10-gig handoff in a POP to the SoftLayer private network, you have unmetered access directly from your private network onto SoftLayer's. SoftLayer owns their own private network; I'll talk a little more about that on the next page, but it's helpful to understand that the latency between these facilities is very consistent. The third network is the out-of-band management network. You connect to it by VPN. It's distinct from the other two networks and gives you access to things like IPMI and many of the management functions. It's a separate VPN network intended for administrators recovering and managing systems: out-of-band management.

So, SoftLayer has a global footprint. This is probably one of the more compelling parts of SoftLayer: the breadth and scope of their data centers. These were basically the data centers SoftLayer had when we acquired them: Singapore, Seattle, San Jose, several in Dallas (I think five or six), Houston, Washington, DC, and Amsterdam. In 2014 we're adding another 14 data centers. Around the end of January, I think, they announced that IBM was investing $1.2 billion in this data center build-out, in equipment and facilities. The new data centers being added: India, China, Tokyo, Hong Kong, Melbourne, Sydney, Brazil, Mexico City, Toronto, Montreal, London, Frankfurt, and Paris. That's the 2014 build-out, and these will come online in stages over the next few months into the end of the year. So the global footprint is expanding rapidly. And if you read the announcement, this is just the start. You can look at the map and see some of the countries that aren't yet represented; the announcement specifically called out Africa and the Middle East as areas for future expansion. So you can see that we cover a wide range of the globe.

Overlaid on top of that, there's a handful of POPs in the US. In addition to each data center being a POP, you can take a handoff in Los Angeles, Denver, Atlanta, Miami, New York, and Chicago. Those are facilities where you can take a direct handoff onto the SoftLayer private network and reach your resources in an unmetered fashion. In addition, SoftLayer owns their own global fiber network. All of these facilities are connected by network owned and controlled by SoftLayer; if you want to go from one SoftLayer facility to another, the traffic stays entirely on fiber owned and controlled by SoftLayer. This matters when you look at network latency and consistency. If you had to traverse the public internet between any of these data centers, reliable connections and consistent latency would be hard to come by. You'll see this especially with other cloud providers if you try to cross between data centers.
A lot of times you'll end up on links that cross the public internet, and your latency can be impacted. SoftLayer owns their own network. They have their own peering agreements and put all of that in place: multiple 10-gig links into each data center, connecting all the data centers. As they approach capacity, they scale out and add more links between them. That consistency in latency is a big reason gaming companies have moved to SoftLayer. If you look at the news and press releases, gaming companies are consistently moving over from other platforms, and they cite two reasons. One is bare metal: you're not fighting over a shared resource, and lag, one of the biggest complaints in gaming, is exactly what shared resources affect. They can move to SoftLayer, get dedicated resources plus a consistent network connection, and avoid the lag that shared infrastructure can introduce.

SoftLayer is also designed for complete transparency. This isn't a black box where you put things in and have no idea where they are or how they work. At SoftLayer you know what data center you're in, what pod, what rack, what unit, what power port. You can see all the monitoring data on your system: the motherboard manufacturer, the RAM chips, the NIC cards. If you have performance differences between machines, you can go in and ask: maybe I have different RAID cards on these? Am I doing something wrong? These are all ways to introspect and figure out what's going on. Having networking issues? Look at the uplink path: what switch am I on, what router am I on? This is all laid out in their portal. It's no longer a black box where, if something doesn't work right, you have to kill it and start over; you can actually figure out whether you're doing something wrong. You don't have to start 50 VMs and keep the 10 best-performing ones. In addition, you have an audit trail of everything that's happened: who accessed what and when, who made which changes. You can pull all of that up in the portal. That gives you a level of visibility and control that isn't really possible with a lot of other cloud providers.

There's also predictable billing. First, there are no long-term commitments. As I said, everything is available hourly or monthly; the longest commitment is a monthly system. Most of the bare metal systems where you're doing heavy configuration, with GPUs or SSDs, are available by the month. Fairly standard configs, where you just want CPU, RAM, and some hard drive, are all available hourly. And there are no surprise usage fees. You order what you need and pay for what you use, on hourly or monthly contracts. There are no charges for inbound or private connectivity: you can send data in on the public interfaces with no charge, and only outbound is metered. Bare metal systems come with 20 terabytes of outbound traffic; virtual servers come with five. That's a fair amount of traffic before you see any charges. And support is included at no extra cost, with ticket, chat, and voice contact options.
The ticketing system is probably the easiest one to use, with 15-minute response times. If you have an issue and need something done, you open a ticket, get it done, and move on.

So, why IBM and SoftLayer for infrastructure needs? I've laid out quite a few reasons. They have more than 120,000 devices and 21,000 customers; I think that number is probably 22 or 23 thousand now. Predictable bare metal performance: if you need predictable performance and predictable latencies, it's here. If you need virtual servers on a shared hypervisor, you can get that as well. Speed of deployment: you can get bare metal in hours. I'll talk in a couple of minutes about one of my use cases, but typical delivery time for bare metal systems is two to four hours, on your VLAN. If you only need an hourly system for a day, it's available in two to four hours; use it for a day and discard it. Hundreds of configuration options: I've mentioned some already. One-gig or 10-gig networking. SSDs, up to a 4U chassis with 36 SSDs in it if you want. One, two, or four processors; quad-core, hex-core, or octo-core; a little RAM or lots of RAM. All of those are configuration options you can pick from, and GPUs are available. The last stat is somewhat interesting: there are more than 130 million online game players at SoftLayer. At the time we acquired SoftLayer, about a year ago, they talked about moving over something like 60 new gaming companies in the first half of the year alone, strictly because of the predictable performance and latency those games require.

So the real question is: what does this have to do with OpenStack? We're at an OpenStack conference, after all. SoftLayer's underlying infrastructure today is not built on OpenStack. They had a platform in place called the Infrastructure Management System, built starting back in 2005, long before OpenStack was around, which does all of the automation of both bare metal and virtual servers. Because they don't use OpenStack, one of the most common questions we get is: what do SoftLayer and OpenStack have to do with each other?

For one, using SoftLayer, I can build an OpenStack cloud at global scale for less than $2 an hour. I gave a demo theater presentation on this, and I'll take you through it here in a couple of minutes. Last week, getting ready for that presentation, I went to docs.openstack.org and pulled up the installation guide for Ubuntu 12.04. If you start walking through it, you get to an example architecture. I picked the three-node architecture, which uses Neutron, probably the most common way people are trying to get started with OpenStack today. The "before you begin" section gives the starting point you need to build a cloud, and it's pretty basic: one processor, two gigs of RAM, five gigs of storage. Easy enough. I went out to SoftLayer, where the server section offers bare metal servers and virtual servers. I went into the virtual server section and got my controller node for six cents an hour. My network node was four cents an hour. And for the compute node I got bare metal: a two-core, two-gig machine came out at 24 cents an hour. So it was about $8 a day to run this setup. But that's not really useful at all; you can't do much with a two-core, two-gig machine.
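As an aside, you don't have to click through the portal for that provisioning step. Here's a minimal sketch of ordering the two hourly virtual servers through SoftLayer's Python bindings; the hostnames, domain, OS code, and data center are illustrative, and the bare metal compute node goes through a separate hardware-ordering path that I've left out:

```python
# Sketch: order the two hourly virtual servers with SoftLayer's Python
# bindings (the "SoftLayer" package). Hostnames, domain, OS code, and
# data center are illustrative; credentials come from ~/.softlayer or
# environment variables. Bare metal uses a separate ordering path.
import SoftLayer

client = SoftLayer.create_client_from_env()
vs = SoftLayer.VSManager(client)

for hostname in ('controller', 'network'):
    vs.create_instance(hostname=hostname,
                       domain='example.com',
                       cpus=1,
                       memory=2048,   # MB, the install guide's minimum
                       hourly=True,
                       os_code='UBUNTU_12_64',
                       datacenter='sjc01')
```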
So I went out and priced what would give me something maybe useful. A dual-core with four gigs of RAM, 100 gigabytes of disk, and a one-gig uplink was 17 cents an hour for the virtual machines. On the compute side I got a bare metal machine, four cores and 16 gigs with a one-gig uplink, which came out to 53 cents an hour. So at this point I'm at about 70 cents an hour for a multi-node install.

At that point we have all the compute pieces; the one thing we haven't talked about is the network. If you look at the architecture picture, Neutron requires three different networks. There's the management network, which is for hypervisor-to-hypervisor communication. There's the instance tunnels network, which is what connects your guests together. And there's the external network, which is how you get from the outside world into your guests. The management and instance tunnels networks are easy: that's the private network at SoftLayer. That's what connects my VMs together and what connects my hypervisors together, the one-gig link I talked about earlier. I could easily have done this with 10-gig links; I didn't, but I could have used 10-gig links on the private interface and all of the management and instance tunnel traffic would ride them.

But what about that external network? SoftLayer has the concept of ordering subnets, so I ordered a portable private subnet. A portable private subnet lets me use those IPs on VMs basically anywhere in SoftLayer; when I'm on the SoftLayer 10.x private network, I can reach them. There's another option, portable public IPs: the same concept, except they're publicly routable and you can reach them from the internet. Portable private IPs are free; portable public ones you pay for, something like 16 of them for $16 a month, so relatively inexpensive. I used portable private ones in my example (there's a sketch of wiring one into Neutron just below). So I provisioned those.

At this point all of my resources were provisioning. I did this on a Friday night at six o'clock; about 90 minutes later I came back and they were all done. My bare metal machines were there, my subnets were done, my virtual servers were all running. So I went back to the install guide, and for about the next five hours I copied and pasted from the install guide into the systems to finish building my install. Following the install guide was pretty straightforward. I don't know if anyone here has tried it; this was the first time I had done it with the Icehouse release, and the install guides have come a long way. The first install I ever built was with Diablo, and it took me weeks to get working. With this, after about five hours of copying and pasting commands and config files, I had a fully functional cloud.

And I actually did it twice, once in the Singapore data center and once in the San Jose data center. When I was done I had multiple regions: RegionOne, which was San Jose, because when I first started copying and pasting I had kept the default region name. When I built the Singapore one, I renamed that region. I used the standard Horizon dashboard and a single Keystone, which I put in my San Jose region; that one Keystone also served the region running in Singapore.
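On the external-network piece from a moment ago, here's roughly what wiring the portable private subnet into Neutron looks like. This is a minimal sketch with hypothetical credentials and a made-up 10.x range; you'd use the CIDR SoftLayer actually assigns you:

```python
# Sketch: register a SoftLayer portable private subnet as the Neutron
# external network. Credentials, auth URL, and the 10.x CIDR below are
# placeholders; use the subnet SoftLayer actually assigned you.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# External network that the L3 agent bridges onto the portable subnet.
net = neutron.create_network({'network': {'name': 'ext-net',
                                          'router:external': True}})
net_id = net['network']['id']

# DHCP off: SoftLayer owns the addressing; floating IPs are handed out
# from the portable range.
neutron.create_subnet({'subnet': {
    'network_id': net_id,
    'name': 'ext-subnet',
    'cidr': '10.66.10.0/28',
    'ip_version': 4,
    'gateway_ip': '10.66.10.1',
    'enable_dhcp': False,
    'allocation_pools': [{'start': '10.66.10.4', 'end': '10.66.10.14'}]}})
```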
Start to finish, getting both of these data centers up and running took about nine hours. If you throw in Neutron debugging time it took a little longer than that; I had copied one of my config files wrong, and it took me a couple of hours to figure that out. But it was about nine hours to get this all up and running. The total cost of the install, to get started with OpenStack, was less than about $2 an hour, so about $25 a day, give or take.

So really, if you're looking to get started with OpenStack, SoftLayer gives you an easy way to do it. That's what I was doing this weekend. I had used OpenStack many times before; I just wanted to see how simply I could build an OpenStack install at SoftLayer, because that's one of the first things people ask: "I want to get started with OpenStack, how do I get started?" Especially in an enterprise, that typically means finding hardware somewhere, getting the network reconfigured, getting access to VLANs. Here's a way to do it very easily and very cheaply, get started with OpenStack, and understand how it fits. I built a sample config here for $1.19 an hour that's a fairly usable install.

And if you're trying to build something like a hybrid region, you can get some of the custom configs from SoftLayer. I talked about them: 1U or 4U boxes, however many hard drives you need, GPUs. All of those let you build a custom install and add it to an OpenStack cluster. If you want to figure out how some OpenStack feature works with custom hardware, go provision it at SoftLayer and try it. You don't need to buy GPUs and figure out how to get them into your data center; rent the capacity you need and see if it works.

More importantly, if you start at SoftLayer with a small config, you can grow into a full production install without starting over. I started with machines by the hour, but you can imagine saying, "This worked out well, I might as well get some machines by the month because I'm going to stay a while," adding them to the same OpenStack cluster, and over time taking the hourly ones out of rotation: building an entire infrastructure that started on hourly machines, supplemented with bare metal. I don't have to reconfigure my environment to do that. I can start small and move into bigger systems gradually, as needed.

Last year we did a bunch of work with Mirantis at SoftLayer. Mirantis demonstrated 1,500 servers in a single data center, in a single OpenStack install, working at SoftLayer. So you can see how you can go from a couple of systems, let it grow organically, get people using it, figure out how it works, and continue to add capacity without worrying about scale. They also had 75,000 VMs running in a multi-data-center OpenStack install, using, I think, San Jose and Dallas, with a control plane on each side. So I've demonstrated that you can get started very easily and inexpensively and figure out how it works, and it's not a dead end: you can grow this install and make it work, and we have the proof points to show it's possible.
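Going back to the two-region setup for a second: with a single Keystone, the second region is just another set of endpoints registered under a different region name. Here's a minimal sketch using the v2 admin API; the token, service lookup, and URLs are hypothetical:

```python
# Sketch: one Keystone serving two regions. Register the Singapore
# Nova endpoint under its own region name in the San Jose Keystone.
# The admin token, hostnames, and URLs are placeholders.
from keystoneclient.v2_0 import client

keystone = client.Client(token='ADMIN_TOKEN',
                         endpoint='http://sjc-controller:35357/v2.0')

# Find the existing compute service that RegionOne already uses.
compute = [s for s in keystone.services.list() if s.type == 'compute'][0]

keystone.endpoints.create(
    region='Singapore',
    service_id=compute.id,
    publicurl='http://sng-controller:8774/v2/%(tenant_id)s',
    internalurl='http://sng-controller:8774/v2/%(tenant_id)s',
    adminurl='http://sng-controller:8774/v2/%(tenant_id)s')
```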
In addition, SoftLayer's object storage is OpenStack Swift. If you go out to SoftLayer and sign up for their object storage offering, you'll see it's OpenStack Swift-based. It's globally available: they have clusters in North America, Europe, and Asia, and when you provision access, you get access to all three. There's an integrated content delivery network as well as metadata search. The CDN integration makes it very simple to use this as a platform for web serving. We actually use it inside IBM: if you're familiar with the CI logs from the community's continuous integration testing, all of the IBM CI testing publishes its logs to SoftLayer object storage and exposes them to the community; that's where the logs get pulled from. And with the metadata search you can tag, search, and index all of the content you put out there. There's been some work with the OpenStack community on using the SoftLayer API as a basis for the metadata search and integrating some of that into Swift.

But what if you don't want to manage OpenStack? So far I've talked about how you can get the source code, install it yourself, and get it running, with SoftLayer as a platform for that. Maybe you really just want access to OpenStack APIs. One answer is Project Zenith. Project Zenith is a fully hosted and managed version of OpenStack, managed by IBM at SoftLayer. It's a single-tenant, private OpenStack install, hosted at SoftLayer and managed by IBM as a service. You buy it on a monthly subscription basis, in half-rack units. You get the Horizon portal and API access; it's just like having your own on-premise install of OpenStack, except IBM manages it for you. There's 24/7 customer service and enterprise-level security, and you get access to the SoftLayer services: you can combine it with things like the object storage, global IPs, and DNS. Today it's in private beta, so if this is something you're interested in, come see me afterwards and we can talk about it. It's something we're working to get to market to give you an easier way to get access to OpenStack APIs.

There's another option for the OpenStack APIs if you don't need a full dedicated version of OpenStack: a project we started called Project Jumpgate. Jumpgate is essentially a translation layer that takes OpenStack APIs in at the top and, today, puts the SoftLayer API at the bottom. It doesn't have to be the SoftLayer API at the bottom; that's just the only implementation so far. The goal is to have an OpenStack-compatible endpoint at SoftLayer so that OpenStack tooling works with it. The design point is that these APIs pass the Tempest tests; the ones implemented today do pass them, so they're compatible with what OpenStack does. And there's a single scope of visibility between the two sides. If you set up the Jumpgate server today, basically a proxy server, and run "nova list", you will see all of your SoftLayer resources, the exact same set you'd see if you typed "sl cci list". You see the same resources on both sides. It's just a way to interact with the resources at SoftLayer using OpenStack APIs.
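To make that concrete, here's a minimal sketch of pointing the standard Nova client at a Jumpgate proxy. The Jumpgate endpoint URL is hypothetical; the credentials are your SoftLayer username and API key:

```python
# Sketch: standard OpenStack tooling against a Jumpgate proxy.
# The Jumpgate URL is hypothetical; user/key/account are SoftLayer
# credentials, not OpenStack ones.
from novaclient.v1_1 import client

nova = client.Client('sl_username',
                     'sl_api_key',
                     'sl_account_id',
                     auth_url='http://jumpgate.example.com:5000/v2.0')

# This should show the same resources that "sl cci list" shows.
for server in nova.servers.list():
    print(server.name, server.status)
```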
Think of the use cases as above-the-infrastructure targets. Say I want to use Sahara to run a Hadoop workload, but I don't want to build an OpenStack cluster because maybe I only need it for a short time. You could point Sahara at an instance of Jumpgate and have it create the virtual servers underneath to run it, or in the future potentially bare metal. You can also view it as a hosted region: with enough OpenStack APIs implemented, you could use this as an off-premise region alongside your on-premise install.

As for status, today it's a technology; it isn't a product, and it isn't a service. If you go out to github.com/softlayer, you'll find the code for Jumpgate. You download it and basically run it as a proxy server. It's something we use on-premise, and we're working to build out support for it internally. We've had teams successfully use Trove, Sahara, and Heat against Jumpgate. It's probably 20% of the OpenStack API that's implemented, but it's the 20% everybody uses: list VMs, list images, deploy a VM. That's the level of function it's at, but again, it's the function that's used 80% of the time. There are clearly going to be some impedance mismatches between the SoftLayer API and the OpenStack APIs as we get deeper into this, but our goal is to minimize those and make this as compatible as possible, sticking to those Tempest tests. So again: it's a technology, not a product or a service, and it's open source code you can go look at. It's something we're doing to make it easier to use OpenStack APIs against SoftLayer resources.

So what else can you do with OpenStack at SoftLayer? One option is to build a hosted off-premise region. If you already have an on-premise OpenStack region and want to get off-premise for whatever reason, you can extend your on-premise network to SoftLayer. One way, which I talked about, is taking a handoff in a POP and extending the private network, with no transfer charges or anything like that. Another way is to set up a managed VPN to a firewall or some sort of gateway appliance at SoftLayer: you can provision a Vyatta router, which lets you essentially VPN into SoftLayer, create a dedicated private connection, and extend your network. You can bring your own IP address space to SoftLayer and make it look like an extension of your on-premise network. Do that, build an off-premise OpenStack region, and use it to augment your existing on-premise capacity.

A couple of uses. If you have elastic use cases, say you're using something like Sahara and need a bunch of compute for a short time, build an OpenStack region at SoftLayer and integrate it with your on-premise install. Your users won't know any different; it will look and act just like your on-premise region. Give it a different name, tell them to deploy their Sahara workload out there, and it works. Another example is workloads that benefit from aggressively upgrading to the latest CPUs. If you're on a short refresh cycle, go to SoftLayer: in six months, when Intel releases a new processor and it becomes available, you move your workload over, discard the old system, and you're done. There's no commitment and nothing left behind; you just upgrade to the latest, greatest processor.
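For that off-premise region pattern, the only thing end users change is which region they target; the Keystone catalog routes them to the SoftLayer-hosted endpoints. A minimal sketch, assuming the hosted region kept the "Singapore" name from my install and hypothetical demo credentials:

```python
# Sketch: a user lands in the off-premise region purely by naming it.
# Credentials, auth URL, image and flavor names are placeholders.
from novaclient.v1_1 import client

nova = client.Client('demo_user', 'demo_pass', 'demo_tenant',
                     auth_url='http://sjc-controller:5000/v2.0',
                     region_name='Singapore')

image = nova.images.find(name='ubuntu-12.04')   # illustrative names
flavor = nova.flavors.find(name='m1.small')
nova.servers.create(name='sahara-worker-1', image=image, flavor=flavor)
```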
Then there's fast access to specialized hardware, which I touched on: SSDs, GPUs, 10-gig networking, that sort of thing, whether you need it for a short time or a long time. Or if you need a global web presence, maybe a global set of web servers and app servers with connectivity back to something still running on-premise, you can do that with this same setup, extending your network and building an OpenStack cluster.

Another option: when I built my cluster at SoftLayer, I used their existing OpenStack Swift service to hold all my images and snapshots. I built that small cluster, went to object storage, provisioned an object storage account (pay for what you use), modified my Glance API config file, and had instant, essentially unlimited storage for snapshots and images. No longer did I have to manage Glance capacity or worry about space; I pointed it at their Swift and it was taken care of. One less thing for me to manage, one less moving part, and basically unlimited capacity.

You can also use their iSCSI volumes natively with VMs running in OpenStack. If you go to github.com/softlayer, you can see the link there: there's a Cinder driver with two modes, pooled and non-pooled, that provisions iSCSI volumes directly from the SoftLayer pool and attaches them to your VMs. Again, one less thing to manage. You don't have to manage Cinder volume pools, capacity management, anything like that; volumes are served as needed from the SoftLayer pool, and you pay for what you use. The pooled driver reuses volumes: it shreds them and returns them to the pool for future use. You can also pre-allocate a pool, say "I only want to pay for 50 volumes," and it will never create more; if someone tries to provision when you're out of volumes, it just says so. That's one way to keep control of cost. Again, it's unlimited volume space if you need it, with no management necessary. One caveat: cloning a volume is slow, because there's no native clone support. Snapshots are fast; clone is not.

So with that: for the IBM technical sessions left this week, there's one this afternoon, and tomorrow we have three more. I encourage you to attend if you can. We have some t-shirts up here if you'd like one, and there are also some down at the booth. Thanks for coming. If you have questions, I'll be up here, and I'll also take some now. Go ahead.

Yep, agreed. So the question was that copying and pasting from the docs seems pretty primitive, and I agree. My goal was to replicate what somebody would do if they said, "I want to get started with OpenStack." Sure, there are options like Chef recipes, the Fuel installer from Mirantis, Juju from Canonical, and Packstack from Red Hat, but you have to go search for all of those. My assumption was that the docs at openstack.org would probably be the best bet for just getting started. And I find that if I start with tools like Packstack or Fuel, when things break, if I haven't done it manually, I have no idea how to start troubleshooting. So you're right: had I done this with one of those tools, I probably would have finished in three hours instead of nine, but I would also have no idea how any of it worked behind the scenes. By copying and pasting, I got a much better understanding of how the pieces fit together.
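To picture the Glance-on-Swift piece from a moment ago: this is a minimal sketch of talking to SoftLayer object storage with the standard Swift client, the same store Glance can point at. The data center endpoint and credentials are placeholders; SoftLayer object storage uses Swift v1-style auth with an "account:user" name and an API key:

```python
# Sketch: SoftLayer object storage via the standard Swift client.
# Endpoint and credentials are placeholders; SoftLayer uses Swift v1
# auth with an "account:user" name and an API key.
import swiftclient

conn = swiftclient.Connection(
    authurl='https://sjc01.objectstorage.softlayer.net/auth/v1.0',
    user='SLOS12345-1:myuser',
    key='sl_object_storage_api_key',
    auth_version='1')

# Same kind of container Glance would use for images and snapshots.
conn.put_container('glance-images')
conn.put_object('glance-images', 'hello.txt', contents='hello swift')
headers, containers = conn.get_account()
print([c['name'] for c in containers])
```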
Only from past experience: having been in the OpenStack community, I know most of the development is done on Ubuntu, and historically those docs were the most stable, so I picked it for that reason and no other.

And then, yes, they are assignable. They're basically routed to a VLAN, and you can use them where you need them on that VLAN.

Somebody else had a question? So the question was whether there's any SmartCloud Orchestrator integration with SoftLayer. There are a couple of different ways you can do it; I'm assuming you're referring to provisioning. I don't have a good answer for that. Is Andrew still here? Must not be. I'll get with you afterwards and get you the answer.

Yes: when will SoftLayer be production in China? Last I saw, it was either second or third quarter, so before the end of 2014; I'm thinking probably third quarter, if I remember correctly.

Any other questions? So, with Project Zenith, it's a dedicated environment, so the overcommit ratio would really be up to you as the customer.

Any other questions? All right, thank you.