Hello, everyone. Ooh, that's loud. Thank you for coming in this evening. It's amazing that we have so many people interested in hardware at a software conference. I know we had to bribe some folks to even get this presentation included in the voting, because they kept saying, wait a minute, this is OpenStack, what's hardware got to do with any of this? But somehow we got in, and America voted us in. Thank you. My name is Jay Hendrickson. I'm a product manager at Hewlett Packard. Steve Collins couldn't make it, so you're going to be stuck with the marketing guy. So let me just say up front that probably everything I say is somewhat of a lie. I've got 217 slides to show you, because that's what we do in marketing. They're all in color, and they're very much an eyesore.

But let me get to something serious. About a year ago, we decided that we would design a private cloud and put it into a box. That meant months and months of talking with HP public cloud operators and with customers. In fact, we even got a customer to help us design the hardware. It turned out really well, but it took a long time. When we first thought of this, we figured, OK, let's put an OpenStack cloud in a rack, we'll just throw some hardware in there, and it should be fine. It turns out that's not the case. It took us months and months to come up with a set of hardware that can do the work for an OpenStack private cloud.

So one of the things I want to start with: this is a hardware design for a private cloud. Everybody in here is going to have their own opinion about how that should work. This is one of them. A couple of caveats on why we designed it the way we did: it's designed for your first OpenStack private cloud. So for enterprise customers who are interested in standing up their first OpenStack private cloud, as we go through the design, keep that in mind. Obviously, if you're adding on to an existing OpenStack cloud, if you've got racks and racks in multiple data centers, your design would be somewhat different from this. So keep that in mind.

So this is the agenda. I'm going to talk a little bit about the case for a private cloud, but I'm not going to spend a lot of time on that. We need private clouds: rapid provisioning of resources makes things go faster, and then there are the compliance issues, the security issues, the perceived security issues we talk about, the performance issues. I think we all get that a private cloud is somewhat necessary in today's IT world, and it's certainly going to become more necessary as we go forward. Then I'm going to talk about some of the workloads and use cases we were thinking about as we designed this, because the design has a lot to do with workloads. Then I'm going to talk about what your plan should be when you do this, and then I'm going to show you what we did.

So without further ado, let's talk about the use cases. Again, this is your first private OpenStack cloud. It would be nice to just say, all right, let's design a private cloud, throw a bunch of hardware together, put it in the data center, dump all of the cloud-native applications we already have onto it, and presto. That's usually not how it's going to work, and the assumption here is that that isn't how it's going to work.
So the use cases I'm going to talk about: you're going to want a cloud that you can use to design cloud-native applications. You may want your private cloud to help you with basic dev/test work. And finally, when it all gets done, you're actually going to host these cloud-native applications on your private cloud. So we looked at it that way: this cloud that you build is not going to be a proof of concept. It's going to be something you can use. You can get your OpenStack wings, if you will. You can start working with it, you can start having multiple users on it, and as that happens, you grow. Those are the use cases we're going to talk about.

So what's the plan? You're going to design the hardware stack; what's the plan? Well, first, what OpenStack distro do you want to use? Do you want to go to the OpenStack Foundation and download all the bits yourself, put them all together a couple of months later, maybe get it installed, have it running? Or do you want to use a commercial distribution of OpenStack? There are lots of them, right? Red Hat has one. HP has one. Canonical has one. SUSE has one. My dog has one. There are tons of them. So you need to make that decision up front.

The second thing is that you really do need to realize that it's complex to architect this. And I keep bringing this up because we are talking specifically about your first OpenStack cloud. There's a general thought process that says, I know what I'm going to do: I'm going to look at what the public cloud providers use, and I'm going to use the same hardware they use. They know what they're doing; they've been doing it for a while. But it's not that simple, because a public cloud provider has multiple data centers, racks and racks and racks of servers. When a server goes down, they rip and replace and off they go, and they have all that OpenStack expertise. If you're building your first OpenStack private cloud, you may not have all of that OpenStack expertise. It is complicated, and worrying about hardware is a real concern. That's why we're going to talk about some of these things.

The other thing is that you need to expect your workloads to change. In traditional, virtualized IT, users call or email and ask for VMs to be stood up, servers to be stood up, this and that, and those requests follow certain behaviors. But once provisioning starts happening on demand, the behaviors will change, and your workloads will change. When we talked with our public cloud operators for the HP public cloud, one of the questions I asked was, what do customers come to you with? And by far the most common answer was: customers come in and say, we have these sets of workloads, we have these sets of use cases, and it turns out they had no idea, because things changed. You don't know what you don't know. That's important when you're designing your hardware, because you're going to need hardware that is somewhat flexible.

You're also going to need to expect to scale up and to scale out. You're not going to go in and say, OK, I want to buy a data center full of rip-and-replace hardware. You're going to start out with maybe a rack of servers, and you're going to put some Swift, some Cinder, some compute on it, all that stuff.
And as your users increase, and they will, you'll need to scale up and out. That's important.

The other thing you need to do is mitigate risk. What I mean is, when you put your badge down in front of the CIO and say, I want to spend half a million dollars on a science project that is going to take us on this journey, you want to reduce the amount of risk you're dealing with. So I would suggest that you get reliable hardware, because you don't want to be messing around with hardware issues while you're learning how your OpenStack private cloud works, your use cases, and all of that. You also want hardware that is easily managed. You're going to be spending a lot of time getting your OpenStack wings, getting this private cloud up and running, and putting all the other management utilities on top of it, so managing the hardware itself should be secondary, a no-brainer.

And finally, how much time do you have? Do you want to start out with a contraption of hardware lying around that seems to kind of work and then start playing with it? Or do you want something delivered, or something like what I'm about to talk about, some sort of recipe you can follow? Those are some of the things I want you to think about.

Now I'm going to get right to the heart of the matter. Again, we spent so much time talking with all these folks about how you do this, so I'm just going to show you what we did. This is an example, and it's based on the HP Helion OpenStack distribution. The key here is that the compute nodes, the Swift nodes, and the Cinder nodes are pretty much OK with any distribution. You're going to see some of the management control plane, and those are some of the things you want to think about.

What we did, keeping in mind that the workloads are going to change and you're not going to know what they are, was use platforms that were extremely flexible, extremely reliable, extremely easy to manage, and that could easily be reprovisioned if and when needed. In other words, we know what our control plane is for Helion OpenStack. We also know that as OpenStack grows, the management control plane will grow and change. So you want a control plane where, as its needs change, you can reprovision those servers to do other things.

So this is what we have here. If you're not familiar with HP hardware, let me give you a ProLiant 101. The two models we're using are DL360 and DL380 Generation 9 platforms. They have the same architecture: same motherboard, same processors, same memory. The 360s are 1U; the 380s are 2U. The difference is that there are a few more PCIe slots in the 2U unit, the 380, and the 380 can handle more storage. The reason we chose them is that the shared architecture makes them easy to manage. We can use HP OneView, or you can use your management software of choice, and you're managing the same architecture everywhere. So when it's time to change drivers, update firmware, those kinds of things, it works the same way on all the nodes in the box. That makes it a little bit simpler.

I'm going to talk about the management control plane last. I want to talk about the compute, Swift, and Cinder nodes first.
For the compute nodes, we used DL360s, 1U, and we wanted them as dense as we could get. So we used Haswell 18-core processors; each one of these nodes has 36 cores. We put 256 GB in the platform using 16 GB DDR4 DIMMs. There's room for additional memory to go up to 384 GB with those 16 GB DIMMs, and if you have some money and want to put in 32 GB DIMMs, you can go to over 700 GB of memory. When you think about power and cooling and everything else that goes on in hardware, you want to make your compute as dense as possible, just to keep the rack footprint as small as possible. We put four of these compute nodes in the rack. Each has four 1.2 TB drives, which take care of the OS as well as ephemeral storage. So it's a pretty straightforward compute node. If you want to add more memory, you can obviously do that as OpenStack changes, and you can begin to use block storage across all of these compute nodes and various things. You can even add more storage. But that's what we used to start with.

For Swift, again keeping in mind this is your first OpenStack private cloud, we used DL380s, and we only used two. I know Swift normally runs with a replication factor of three, but we only used two, and we did that mostly for cost. Later on, when you want to start adding more, you can. But we wanted to keep the cost down. We stuffed the DL380s with 15 drives: 13 of them are 6 TB large-form-factor drives, and for the OS we use two 600 GB drives. We did that because with object storage you're not necessarily looking for performance; you're looking for capacity. So each of these Swift nodes has 78 terabytes of storage. That gives you room for your Glance images, and it gives you object storage on top of the basic OpenStack environment.

Then Cinder. For block storage, we again used DL380s, but this time with small-form-factor drives, because we were interested in speed. They're faster, and they're 1.2 TB drives. When you use small-form-factor drives in the DL380 Gen9, you can have up to 26 of them. We put in 12: two in the back of the unit for the OS, and ten in the front for storage. That's about 12 terabytes to start with, and you can scale that out to roughly 31 terabytes in each one of the VSA nodes. The cool thing about the VSA cluster is that we have three of those nodes, so it's obviously highly available, and you can put those nodes anywhere you want. They don't have to stay in the rack. You could take one of those nodes, put it elsewhere in the data center, and it's still part of the cluster.

Hardware? What? OK, I'm sorry. Oh, I didn't even get to the management nodes. For Helion OpenStack, the distro uses five management nodes, and each one is configured specifically for the role it plays. We have a seed cloud host, which basically spins up the first VM for the deployment. After that it's just a DHCP server; it has a couple of other functions, but basically that's what it is. So we've got a single six-core processor in there with 32 GB of RAM.
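As an editorial aside, not from the slides: here is a rough back-of-the-envelope sketch in Python of how the compute, Swift, and Cinder numbers above translate into capacity. The 16:1 CPU and 1.5:1 RAM overcommit ratios, the 2 vCPU / 4 GB example flavor, and the way the Swift replica count is applied are assumptions made only for illustration; they are not part of the HP design described in the talk.

```python
# Back-of-the-envelope capacity math for the rack described above.
# Node counts and drive sizes come from the talk; the overcommit
# ratios and the example flavor are assumptions for illustration only.

COMPUTE_NODES = 4
CORES_PER_NODE = 36          # 2 x 18-core Haswell per node
RAM_PER_NODE_GB = 256
CPU_OVERCOMMIT = 16.0        # commonly used Nova default, assumed here
RAM_OVERCOMMIT = 1.5         # commonly used Nova default, assumed here

SWIFT_NODES = 2
SWIFT_DATA_DRIVES = 13       # 6 TB LFF drives per node
SWIFT_DRIVE_TB = 6
SWIFT_REPLICAS = 2           # the talk uses two copies instead of three

CINDER_NODES = 3
CINDER_DATA_DRIVES = 10      # 1.2 TB SFF drives per node to start
CINDER_DRIVE_TB = 1.2

# Hypothetical flavor: 2 vCPU / 4 GB per VM.
FLAVOR_VCPU, FLAVOR_RAM_GB = 2, 4

vcpus = COMPUTE_NODES * CORES_PER_NODE * CPU_OVERCOMMIT
ram_gb = COMPUTE_NODES * RAM_PER_NODE_GB * RAM_OVERCOMMIT
vms_by_cpu = vcpus // FLAVOR_VCPU
vms_by_ram = ram_gb // FLAVOR_RAM_GB
print(f"Schedulable vCPUs: {vcpus:.0f}, schedulable RAM: {ram_gb:.0f} GB")
print(f"~{int(min(vms_by_cpu, vms_by_ram))} VMs of {FLAVOR_VCPU} vCPU / {FLAVOR_RAM_GB} GB "
      f"(limited by {'RAM' if vms_by_ram < vms_by_cpu else 'CPU'})")

swift_raw = SWIFT_NODES * SWIFT_DATA_DRIVES * SWIFT_DRIVE_TB
print(f"Swift: {swift_raw} TB raw, ~{swift_raw / SWIFT_REPLICAS:.0f} TB usable at {SWIFT_REPLICAS} replicas")

cinder_tb = CINDER_NODES * CINDER_DATA_DRIVES * CINDER_DRIVE_TB
print(f"Cinder/VSA: ~{cinder_tb:.0f} TB raw across the cluster, before VSA's own protection")
```

Under these assumed ratios the limit ends up being RAM rather than cores, which is one reason the option to grow each compute node from 256 GB toward 700-plus GB of memory matters.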
Back to that seed node: a single six-core processor and 32 GB is all it needs; there's no reason to do any more than that. The thing about the way we designed this is that it will run your private cloud, and it will scale as far as OpenStack is going to allow you to scale. In other words, we designed it so that our customers would not be in a situation where they get to so many VMs, or so many compute nodes, or so much storage, and all of a sudden they're toast and they've got to start over or do something else. This allows you to scale up to whatever the Helion OpenStack distro will allow you to scale to.

The undercloud controller, which is used to manage the physical infrastructure, is again a DL360. We use two six-core processors and 64 GB of RAM, because it does a little bit more. And for the overcloud controllers, we have three of those; they manage the virtual infrastructure, which is your OpenStack management structure, and there are three of them for redundancy.

So let me show you what it looks like in a rack. We happen to have one in the Helion booth in the marketplace if you'd like to come by and take a look at it. It's not a real rack, it's a virtual rack, but it gets the point across. You can see the design, and one of the nice things about it is that there's lots of room left over for scale-out; you can see the little bars on the right-hand side, and there's a rough rack-space tally below. You can add additional compute nodes, additional object storage nodes, and additional block storage nodes. Obviously you can't max out every single one of them in the same rack; you'd run out of space. But that's basically what it looks like.

For the networking switches, we have a FlexFabric 5700 48G, which is a 1 GbE switch, and we use that for IPMI traffic only. Then there's a FlexFabric 5700 40XG; those are 10 GbE ports, and we have two 40 GbE ports for north-south traffic to get you out of the rack.

Now again, boy, I tell you, we talked to a lot of customers as we went through this. Everybody had an opinion, and they still do. And I know there are lots of you saying, well, why didn't you use blades? Why didn't you use denser storage, or denser compute? Why didn't you use this, or do that? What about this NIC and that NIC? The key here is that we wanted it to be as flexible as possible. Once you start figuring out what your workloads are, if it truly turns out that they are very object-storage-centric, then by all means get a much denser storage box for that. But one, you don't know what you don't know, and two, you are going to scale out. So this isn't going to hurt you. It's going to get you up and running, it's going to let you see what kinds of behaviors your end users have, and it will get you rolling.

OK, so a summary of best practices. First, mitigate risk: use known, reliable hardware. That is critical. Think about it in a system like the one I just showed: as you're playing around and getting ready to start deploying real-life workloads into production, and your end-user base is building up, a server failure is somewhat catastrophic when you only have four compute nodes. You don't want to run into those problems, because then there's this perception that the OpenStack private cloud doesn't work. And if you do have a problem, you're not really sure exactly where it came from. Was it the hardware? Was it the software? What was it? So use known, reliable hardware.
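Here is the rough rack-space tally promised above, again as an editorial aside. Only the node counts come from the talk; the 42U rack, the 1U switches, and the assumption that all five management nodes are 1U DL360s are mine.

```python
# Rough rack-space tally for the starting configuration described above.
# Node counts come from the talk; the 42U rack, 1U switches, and 1U
# management nodes are assumptions made for this sketch.

RACK_UNITS = 42

components = {
    "management nodes (DL360, 1U)": (5, 1),   # assumed 1U each
    "compute nodes (DL360, 1U)":    (4, 1),
    "Swift nodes (DL380, 2U)":      (2, 2),
    "Cinder/VSA nodes (DL380, 2U)": (3, 2),
    "switches (FlexFabric 5700)":   (2, 1),   # assumed 1U each
}

used = 0
for name, (count, height_u) in components.items():
    used += count * height_u
    print(f"{name:32s} {count} x {height_u}U")

print(f"\nUsed: {used}U of {RACK_UNITS}U, leaving ~{RACK_UNITS - used}U for scale-out")
```

Under these assumptions roughly half the rack is still empty, which is the headroom the scale-out bars on the rack diagram represent.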
The second is that the physical hosts should have similar architectures. Again, that has to do with ease of managing the hardware. This is your first cloud; this is something you want to spend your time socializing within your organization, within your IT organization. Expect workload variability. In other words, get servers that are flexible, so that you can redeploy them if and when things change, and they will. OpenStack is different than it was last year, which was different than the year before. It will keep changing. Plan to scale up and out. That's going to happen; that is absolutely going to happen. You get this thing running well, your end users start becoming extremely happy, they quit going to fill-in-the-blank public cloud to dump your proprietary IP on while they're coding and testing, and they're going to want more of this. And it goes without saying that your cloud should be highly available. You really need to think about making sure you have some redundancy built in.

And it was that simple. That was it. Questions? Anybody who has a question that can stump me gets a 16 GB thumb drive.

Sure, you can use SSDs. They're absolutely not a problem. Oh, the question was... because we wanted you on the mic. The question was, why didn't we use SSDs in the design? And the answer is, we could have. Absolutely we could have. The reason we didn't was cost. Again, you're going to stand this up and start building it out; there will be a time and a place for those SSD drives, but we wanted to make sure everything was as cost-effective as possible. It goes back to the mitigation of risk and all of that. You're spending half a million dollars, or somewhere in that neighborhood, to build this thing up and get things rolling, and SSD drives are a bit expensive. So that was fundamentally the reason.

The question was, if you were going to use SSD drives, where would be the best place to put them? Most likely in the VSA cluster. Again, Swift object storage is usually the digital parking lot; you don't necessarily need high performance there, and SSDs are very high-performing.

So let me get the mic to you so everyone gets the question. Are you deploying Neutron at all in this environment? Neutron is deployed. Oh, Neutron is deployed? Neutron is deployed, yes. The question was, did we consider Neutron? Yes, we did, and it is deployed.

That's a very good question. The question is, why did we use a single switch? We have one switch for IPMI and one switch for the management network traffic, so we're not redundant. And again, it comes down to cost. In terms of the product we sell, the Helion Rack (OK, it was bait and switch, I'm selling Helion Rack), it comes down to cost. After you've been doing your cloud-native development and you're now starting to do some testing, and maybe you're getting ready to take this cloud and move it into real live production, there's a journey there; it could be a month, two months, three months, some period of time. At that point, add the switches.

Yes, sir? I know they're going to scream at me. There's a microphone right there. This is being recorded, and your voice needs to be saved for posterity. Hi. So, I want to use a couple of your boxes as the back end for our Ironic driver. What do you prefer, going through iLO, or direct Ironic integration with your boxes? I don't understand what you're asking, I'm sorry. A couple of your blades, I want to put behind Ironic as a purely bare-metal back end.
So what do you prefer, going through iLO or through Ironic? For management, iLO. Yes. Yes, sir?

What kind of price tag? The famous half-a-million-dollar question: the price tag of the rack. A lot of it depends on how you configure it and how you sell it. Well, not how you sell it, but how you configure it. In the example I've shown, you have to think about it in terms of what the hardware is going to cost, what the software is going to cost, and what the professional services are going to cost as you get this installed. The Helion Rack product is designed so that we come in, set it up, get you up and running, and also train you. So if you look at all of that together, the factory integration, the cabling, the shipping, the professional services showing up, installing the product, and getting it up and running, you're looking at 430K. That's everything.

The question is, is it easy to scale from the networking side of it? Any network engineers in here? It's never easy to scale networking. It is what it is, right? I think your question is really about plugging in another switch and making the system redundant. Right, right. So, cabling up additional networking ports, you will have to bring down the servers you're recabling; they're not hot-plug network ports.

You did not mention the number of network interfaces for each of those nodes. Do you have any idea what they were, especially for the controller nodes? Especially for which nodes? The controller nodes. So the controller nodes have 1 GbE ports from the iLO. We use iLO on HP ProLiant servers, so that's what we use for hardware management. We have 1 GbE connections going from the iLO port into the 5700, which is a 48-port, 1 GbE switch. Then, for management of the OpenStack private cloud itself, we have 10 GbE ports, and because there is a single 10 GbE switch, each node has one 10 GbE port going to that switch.

So the controller has just one or two? Each node in the control plane has two ports. One of them is 1 GbE for management of the hardware, so powering on, powering off, those types of things. And then the 10 GbE port is used for OpenStack management, for managing the OpenStack services. Did that answer your question? Yeah, because when I look at the documentation, it shows two to three networking interfaces just for OpenStack: an internal network, an external network, a data network, and so on. Right, but this is the physical layer. You have a service network running underneath, so there are a couple of virtual networks running at the same time over the top of the physical network. I think that's what you're asking. Well, could they be separate? Could you have a one-to-one relationship? You could. Say you don't use VLANs. Then you would need distinct network interfaces, if you're not planning to use VLANs or virtual networking. See, with the single interface you're sharing it, especially when you want to do things like VXLANs per tenant. OK, so the 5700 switch doesn't support VXLAN. If you want VXLAN in your private cloud, then you would use a 5930. The thing about this design is that obviously there are some configuration changes you can make, and we have a 5930 FlexFabric switch that supports VXLAN.
You would use that switch, and then you would also swap out the NICs; you would want a NIC that can support VXLAN. In the base unit that we sell, we use an Intel dual-port 10 GbE NIC. If you wanted to use VXLAN, you could use a Mellanox NIC that supports VXLAN offloading. And you would guess right. Whoops. My Houston Astros are doing well, but I'm not on that team. Yes, sir. Here you go.

The question was, how many kilowatts does a rack take? I'm sorry, I don't know that answer. That might be a lie. I have one more. It's pretty easy to stump me; like I said, I'm in marketing. But you have to catch the lie. That's the key.

If it's not HP, what is my recommendation? If what's not HP? Oh, in the hardware stack. That's a good question. I think my answer is, why would you even consider anything else? But your question's well taken. The DL360 and DL380 are the most popular servers on the planet. They are; we have a huge share of that market, and that was one of the reasons we chose them. Because when it's all said and done, and you're trying to build your private cloud, and you're standing there with your badge in front of your CIO, and your CIO goes, I don't know about spending half a million dollars on this, you can look them right in the eye and say, you know what, if this thing goes horribly wrong, I have a rack of DL360 and DL380 servers. How bad can that be?

Pardon? No one's going to get fired for buying HP. The statement was that nobody's going to get fired for buying HP. I didn't say it. I don't know. You guys are saying no. Do you know for sure? OK, no. The answer's no.

No more questions? Oh, I didn't know that answer. Here you go. This is the last one. And by the way, there's stuff on these, but they're 16 GB, so you don't have to erase it; you could actually look at it at some point. OK, thank you.