Hello, welcome. My name is Jay Hendrickson, I'm a Product Manager at Hewlett-Packard, and this is Steve Collins, a Development Manager at Hewlett-Packard. And we're going to talk about hardware at a software conference. Are we in the right place? I don't think so. I don't think so, either.

So this is about designing the hardware stack for your first OpenStack private cloud. OpenStack runs on all kinds of hardware, so some people might ask: what's hardware got to do with it? A lot. We don't have software-defined hardware yet, so somewhere underneath everything there is hardware. So we're going to talk today about designing your first OpenStack private cloud.

Let me ask the folks in the audience: how many people in here have deployed a production OpenStack private cloud, running in production? Excellent. That's three times as many as in Vancouver, so we're getting up there. For the rest of you, this is about actually going through and deploying a private cloud.

I like this quote because Albert Einstein was a pretty smart guy: everything should be as simple as possible, but not simpler. The interesting thing is, Einstein also had the theory of relativity, and with OpenStack we're going to try to make this relatively simple. You all know that deploying and maintaining an OpenStack cloud is not simple, so we're going to try to make it simpler.

How many folks in here enjoy 30-to-50-slide presentations over the course of an hour? Excellent. Nobody does. So here's what we have for you. I hate PowerPoint; whoever invented it, I would personally like to break their knees. We're going to show seven slides, not counting the one up here; I'll tell you when to start counting. First is going to be some marketing fluff, so don't throw anything at me, I just have to get it out of my system. Then Steve is going to give you the real meat of the presentation. We'll first talk about the case for a private cloud (we probably all know it, but I always like to level set), then talk about what your plan might be, and then a summary of best practices.

One of the reasons I don't like slide presentations is that you put up a slide like this and people either listen to you or read the slide; they can't do both. So I'm going to let it sit there for a few seconds. All right, you've read it. And you get these slides at the end, so you can listen to me.

What I'm going to talk about is really what's on this slide: the emergence of the internal service provider. OpenStack and cloud are not going to take over everything; we're still going to have bare-metal workloads. There will still be SAP HANA deployments that just don't run on OpenStack or on a private cloud; they run on bare metal. And we heard Erica today talking about how, if you don't have your own cloud in your organization, there will be people who go outside the organization and use a public cloud. Think about SharePoint in your own organization versus just going to the web and getting Dropbox, or something like it from Google.
If you want more space, you pull out a credit card and presto, you have it, versus getting shared space in your own organization. Or, for that matter, getting an email address in your own organization versus getting it from Gmail or Hotmail or fill-in-the-blank provider. So there's a whole gamut of applications and places where they can land, and what I want to talk about today is the private cloud and the need for it. A lot of folks cite security, performance, and compliance as the main reasons for wanting a private cloud. I really like to talk about compliance for hospitals, sensitive data that you want within your four walls, and being able to create a private cloud where people can get to those resources as quickly as they can get to applications and data on a public cloud. So that's what we're going to talk about today, and where the internal service provider sits.

So, without further ado, let's talk about your plan for building a private cloud. The first thing is picking an OpenStack distribution. Right before I came in here, I was in my hotel room and I went to openstack.org and looked at the OpenStack distributions; there are 27. Now, you can do it yourself: go to openstack.org, start grabbing all the pieces, and patch them together yourself, and six months later you might have something that's actually running. Or you can go to an OpenStack distributor. I work for Hewlett-Packard, so we have one: HP Helion OpenStack. Red Hat has one. SUSE has one. Canonical has one. Mirantis has one. My dog has one; it's not listed on openstack.org, but he has one. He does.

So you pick an OpenStack distribution, and then you start thinking: okay, this is my first one. Do I just gather up a bunch of laptops and some old hardware that's lying around and pour it all on top of that, or do I actually think about it? It's tempting to say it's just hardware, don't worry about it, just throw it on there. But when you get right down to it and you start asking these questions (what OpenStack distribution did I choose? what's the control plane? what are we going to use for storage?), somewhere in there it all has to land on hardware. So we start thinking about that.

Now the big one: expect the workloads to change. When you're building a private cloud, you're probably looking at a situation where your users have not been utilizing a cloud before. Take developers as an example. The typical dev/test scenario is: a developer writes some code, compiles it, builds it, and throws it over the fence to QA. QA takes it and runs some tests on it, the bug reports come back, and the developer starts fixing bugs. That takes time, especially if they're building cloud-native applications or applications that require a lot of testing; that turnaround can last weeks. So the behavior becomes: let me get these next few lines of code in before I compile it and send it over to QA, because I'm going to be working on something else while they do their thing and set up their hardware.
So by the time the reports get back, it's been six weeks, and the developer is going: I don't even know if I wrote that code. Maybe Diane wrote that code, and Diane has since left the company. The point is that that timeframe shrinks very rapidly once you go to a cloud, so the behaviors change: how many lines of code I write before I toss it over the fence changes. Your workloads, and the behavior of the organization, are going to change once people can get resources like that instead of waiting, and that's important when you're choosing your hardware.

Expect to scale up and out. You're going to build your first private cloud and put it into some kind of production. It'll be small, but you're not running an OpenStack private cloud for six users; you're going to scale it up, and you're going to scale it out, and you need to expect that. All you have to do is get the proof of concept done and get some folks using it, and the next thing you know your CIO is throwing money at you to build out this cloud. That's how it works. We have data that shows that; we have customers who have done that.

You want to mitigate risk. What I mean by that is: you're going to put together a rack full of servers that will run a private cloud, and it's going to be expensive. It's easy to build a tiny proof of concept that shows OpenStack works but can't actually be used in production. Building something that will be in production takes money: half a million dollars, somewhere in that neighborhood, maybe less. So when you put your badge on the CIO's desk and say "I need half a million dollars to build a private cloud," you want to mitigate some of that risk. You might want to start thinking about hardware that is extremely reliable, robust, flexible, and easily redeployable.

And finally: how much time do you have? You can spend weeks and months thinking about what hardware to use. Okay, I'm going to use this OpenStack distribution; what hardware do I use for the management layer? For the compute nodes? For object storage and block storage? What network switches? Where might it scale? Which workloads might change? You can think through all of that yourself, or you can listen to Steve, who's about to tell you what we did and some of the reasons why. So without further ado, let's get to the big fancy table.

All right. Like Jay said, these slides have a lot of information on them, and you can either listen to me or look at the slides. I'm going to give you a second to look, and then I'll talk through the various points. Or take pictures, like a lot of people seem to do. I'm going to go through three usage scenarios, three use cases that we had, and the various hardware choices we made and why.

If you look at the various distributions out there, there's a lot they have in common. For example, they tend to have a deployment host; on this slide we call it the seed cloud host.
This particular example is for Helion OpenStack version 1.1, but other distributions are very similar, so if you're familiar with another one you should be able to map the pieces across. You've got a deployer node, which generally doesn't require a lot of resources. In this case it provides DHCP services and deploys the initial nodes; here it deploys just a single node, the undercloud controller. It just needs to be a basic server without a lot of resources, which is why we chose a single six-core processor, 32 GB of memory, and a mirrored two-terabyte boot drive. That's really all you need. In fact, there are cases where people use a laptop for this functionality, and that can work too, though if it's something you want to keep in production long term you probably want something other than a laptop.

Then there's an undercloud controller, which in the Helion OpenStack 1.1 scenario is used to deploy the rest of the overcloud: the overcloud controllers, compute, Swift, and Cinder, all the storage pieces. It needs a little more power, but not a whole lot, which is why we chose two six-core processors, 64 GB of memory, and in this case faster drives. That's actually a big consideration for each node and the role it plays: what kind of drives you want in there. Are you focused on speed, which we are here, which is why we chose 15k RPM drives? Or is capacity really the driver, in which case you'd want slower drives with higher capacity? I think 8 TB drives at 7.2k RPM are available now; in this case we're using 6 TB drives.

Then look at the overcloud controller. Most distributions will have something like a controller node, and in a lot of cases it'll be multiples; in this case there are three. These are the nodes running the majority, if not all, of the OpenStack services, so they take a big load in processing power as well as I/O. So here we're using two 12-core processors, 64 GB of memory again, and again fast drives.

I should also mention that with your OS and other volumes, you want to take into account what type of RAID you want to use. In the case of the seed host we're just using RAID 1; it's just a mirrored drive, nothing fancy. For the undercloud controller and overcloud controllers you want a bit better performance, so we chose RAID 10 for those nodes. The actual capacity there is 1.2 terabytes of usable storage in each of those cases (the quick sketch below shows the arithmetic).
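As an aside, here's the arithmetic behind those usable-capacity figures. This is a minimal sketch, assuming the usual definitions of RAID 1 (a two-drive mirror) and RAID 10 (striped mirrors); the four-by-600 GB controller layout is an inference from the 1.2 TB usable figure, not something stated on the slide.

```python
def usable_capacity_tb(drive_tb, num_drives, raid_level):
    """Rough usable capacity for simple RAID levels (ignores filesystem overhead)."""
    if raid_level == "raid1":
        # Two-drive mirror: you get one drive's worth of space.
        assert num_drives == 2
        return drive_tb
    if raid_level == "raid10":
        # Striped mirrors: half the raw capacity is usable.
        assert num_drives % 2 == 0 and num_drives >= 4
        return drive_tb * num_drives / 2
    raise ValueError("unsupported RAID level")

# Seed host: mirrored 2 TB boot pair -> 2.0 TB usable
print(usable_capacity_tb(2.0, 2, "raid1"))

# Controller nodes: 1.2 TB usable in RAID 10 implies four 600 GB drives
print(usable_capacity_tb(0.6, 4, "raid10"))
```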
Next on the list is the compute host. The count shown there is just how many we had in this reference architecture; obviously you can have as many compute hosts as you want, and they're all the same. In our case we're using DL360s, and I should mention that everything up to this point is a DL360, which is nice from a deployment and density standpoint because they're 1U servers: you get all the functionality you need for all of those roles in a single U of rack space.

For the compute node, you need to look at what types of workloads you think you're going to run, and as Jay mentioned, that changes, so it's really tough to know from the outset. You kind of have to make a best guess, and then, as your usage changes and you learn what you're really doing with the cloud, you adjust. Maybe you start out with these first four nodes at 18 cores like we have here, and maybe you find out that memory, or storage, is your limiting item, so you adjust. As a good starting point we used the maximum number of cores: two 18-core processors, which gives you 36 physical cores, or 72 virtual CPUs, with 256 GB of memory.

On the storage side we're using drives that aren't the fastest but aren't slow either. Drive speed is important here as well: you probably don't want 7.2k RPM drives, but you probably don't need 15k RPM drives either. A consideration is that, at least at the time we did this reference architecture, 10k RPM drives were roughly half the price of 15k drives, and if you look at the I/O performance, that's a pretty good sweet spot for these compute nodes. A good rule of thumb is to look at the ratio of your ephemeral storage to your memory: you want about a 10:1 ratio, which is what we have here, 256 GB of memory and roughly 2.4 TB of storage, because these are also RAID 10 (there's a quick worked check of that ratio below).
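A minimal sketch of that rule-of-thumb check, using the numbers from this reference architecture; the 512 GB figure in the second half is purely illustrative.

```python
# Rule of thumb from the talk: ephemeral storage to memory of roughly 10:1.
memory_gb = 256        # compute-node memory in this reference architecture
ephemeral_tb = 2.4     # usable RAID 10 ephemeral capacity

ratio = ephemeral_tb * 1000 / memory_gb
print(f"storage:memory = {ratio:.1f}:1")   # ~9.4:1, close to the 10:1 target

# Run the other direction: memory you plan to install -> ephemeral target.
planned_memory_gb = 512
print(f"target ephemeral ~ {planned_memory_gb * 10 / 1000:.1f} TB")
```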
All right, next up we have the two storage roles: Swift for object storage and Cinder for block. For those nodes we're using DL380s, for higher capacity. The big driver for the 380s is being able to put more storage in the box: it can hold 26 small-form-factor drives or 15 large-form-factor drives. In the case of Swift we're using large form factor, because we're really after capacity there. The drives don't have to be super fast; we just want as much capacity as we can get, so we're using 6 TB drives. You'll see I'm also listing 2 x 600 GB; those are the OS drives. They're faster, and they run the OS and the Swift services, but the rest of the storage drives are just 6 TB 7.2k RPM drives.

In the case of Cinder, here again you have to think about your storage. In this case you do want somewhat faster drives, which is why we chose that sweet spot of 10k RPM drives. Our implementation of Cinder here uses our VSA cluster, which doesn't require a whole lot of compute power, which is why we only have a six-core processor in there; it really doesn't need more. You could max everything out, 12-core or 14-core or 18-core, but then you're overspending. Or you could make the opposite mistake, go with a single six-core everywhere, and then you're going to have problems in some cases. So it does take some study and research, and knowing how these things play with each other, to really figure this out; or some real-world experience.

To touch a little on the networking side: the various distributions are all a little different, but in general you're going to have an IPMI-type network used for powering the servers on and off, and that's fine on just a 1-gigabit network, no problem there. For the rest of the networking functions you probably want at least a single 10 GbE link. There are cases where you can pair them up in a bonded pair, so you get 20 Gb or more. In this case we're using a 10 GbE network with a 40-port switch of 10 GbE ports, and that carries the rest of the network functions in OpenStack. Before I go on to the other two scenarios, any questions about this one? Let me pass around the mic so everyone can hear it.

Audience question: why have SSDs not been used at all? That's a good question. Obviously a big driver is cost; SSDs cost significantly more than spinning media. As you'll see in a follow-on scenario, we did use SSDs, so there are cases where you want them. But this is the starting-out configuration, and if you don't really know yet where you need that extra expense, you're probably okay starting without it. Later you can decide, for example, whether it makes sense to use SSDs in some of your compute nodes, or as a front end for some of the Swift functionality, which is what we did in another reference architecture, because you do get a performance gain if you use SSDs for your Swift account and container functions. On the nodes here you don't need that kind of performance; you'd be spending money you just don't need to spend, because the drives don't have to be that fast. You still have to find where the right spot is. And the good thing, as Jay alluded to, is that these are all just basic servers that can be used for a lot of different things. You may start out with these 15k drives, find out you really need SSDs, pull those 600 GB drives out, and reuse them in some of the other nodes. We actually tried to design this so that nothing was super unique: you want to tailor the specs for each node so that it fits, but not so far out in left field that the hardware can't be used for anything else, because as Jay mentioned, things will migrate over time, your needs will change, and you'll want to move things around. Any other questions on this one? I'm trying to get a mic back to you.

Audience question: is anyone using the Apollo systems for hyperconverged solutions? Can you hear me okay? That's a very good question, and the simple answer is: you can. What we were thinking about here, though, is your first OpenStack private cloud, and we wanted to keep the hardware architecture as similar as we could, so we used these DL360s and 380s for several reasons. First, they share the same architecture: same BIOS, same drivers, same chipsets, all of that. Second, the DL360 and DL380 are on everybody's standards list, whereas the Apollo may not be. And the third reason: you are taking a risk when you set up your first OpenStack private cloud. You're going to your CIO and saying, "I want to spend half a million dollars and I want to build a private cloud."
And your CIO is thinking: half a million, really? Okay, we can do that. And the sweet spot is when you say: look, boss, if something goes wrong with this private cloud and it doesn't work, we have a rack full of DL360s and 380s; how bad can that be? You were probably going to buy them anyway. It's the most popular server on the planet.

Let's go through the next one. I'm not going to talk to every single line, because you'll see there's a lot of similarity; in fact, I think the first three items are exactly the same as what we saw in the last use case. This one in particular is for the SUSE distribution, and again you've got the same concept: a deployment server, control nodes, and compute, and the sweet spot for all of those in this use case is exactly the same as the last one we talked about. The difference here is the storage. SUSE OpenStack Cloud uses Ceph instead of Swift and Cinder, so the requirements are a little different, but they're actually pretty similar: again two 12-core processors, 64 GB of memory, and a slightly different drive configuration, I think: two mirrored 600 GB drives for the OS, and then thirteen 6 TB drives for your storage (the sketch below shows how replication affects the usable number). Everything else is pretty common. On the networking side there is a little difference, because as I mentioned before, there are cases where you can bond the network ports, and SUSE does bond theirs, so they use a few more ports: double the 10 GbE ports of the last case. Questions on this one?
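One thing worth keeping in mind when sizing these storage nodes: both Ceph and Swift typically keep multiple replicas of each object (three is the common default in both), so usable capacity is a fraction of raw capacity. A minimal sketch, assuming the 13 x 6 TB data-drive layout above and a replica count of three; the four-node cluster size is illustrative, since the talk doesn't state one.

```python
def usable_object_storage_tb(nodes, drives_per_node, drive_tb, replicas=3):
    """Raw cluster capacity divided by replica count (ignores filesystem
    overhead and the headroom you'd leave for rebalancing after a failure)."""
    raw = nodes * drives_per_node * drive_tb
    return raw / replicas

# e.g. four storage nodes, each with 13 x 6 TB data drives, triple replication
print(usable_object_storage_tb(nodes=4, drives_per_node=13, drive_tb=6))  # 104.0
```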
All right, our third scenario is for Helion OpenStack 2.0, soon to be released. There are some differences here because the actual configuration changed a little. If you remember, in the 1.1 scenario we had a seed node, an undercloud, then the controllers, and separate Swift nodes. Here there is a deployer node, which is similar to that seed node (a little different, but similar), the controller is combined with Swift, so the Swift functionality in this scenario runs on those same controller nodes, and there is no undercloud node at all.

The deployer specs are exactly the same as before: a six-core processor and mirrored 2 TB drives to run the OS. It really doesn't need a lot of horsepower to do its job. The controller-and-Swift nodes, on the other hand, are taking on a lot: they're running all the OpenStack services, plus the Swift proxy, container, and object functionality, so you need to make sure you size them right. Here we're going with 12-core processors and 64 GB of memory. I should mention, as you see at the top, that this one is proposed; it's a work in progress, but this is what it looks like right now, and it's being proven out in our labs. Again there's a pair of fast 600 GB drives for the OS and 6 TB drives for the object storage. One difference, which is actually slightly incorrect on the slide: what we're looking at right now is not thirteen 6 TB drives for object, but eleven 6 TB drives plus two SSDs for the container and account data. As I mentioned earlier, you do see a performance gain when you do that, so we're at the point right now of measuring what that gain is to do the cost/performance trade-off, but it's looking good so far. Those are the things you have to keep in mind; sometimes it just takes experimentation and experience to figure out what really works best. Let me get this mic to the back.

Audience question: when you talk about performance gains, I assume you're measuring this; is there a particular suite or test bed you're using? Yes, there's a variety of tools we're using, and honestly I can't remember the names of all of them; I've got engineers doing that work. I think Rally has some tools in it, there's a variety of tools that are part of the OpenStack distro, and there's some tooling we've built in-house to measure throughput, the behavior between the various services, I/O performance, and those kinds of things. Afterwards I can track down the exact details and get them to you if you want.

Compute specs are exactly the same as before: a 360 server, the 18-core processors, 256 GB of memory. Actually, I lied, it's not exactly the same, because in the Helion OpenStack 2.0 case they split the OS volume from the ephemeral storage, which is not the case in 1.1. So here you've got a mirrored pair of 1.2 TB drives for the OS, and then four 1.2 TB drives in RAID 10 for your ephemeral storage, which keeps that 10:1 ratio again: four 1.2s mirrored as RAID 10 gives you 2.4 TB of usable storage. In the 1.1 case the OS and the ephemeral storage are combined on the same drives, which is why we only had four of them there.

Cinder specs are exactly the same as in 1.1; that seems to be a good sweet spot. It doesn't need a lot of processing power, so six cores is fine, plus that drive sweet spot of 10k RPM, 1.2 TB drives. In that case we're actually starting out with the box only half filled with storage; it can hold 26 drives, so you can get started with that much storage and then add drives as it grows.

Audience question: does it support Ceph? Yes, it does support Ceph; Ceph is still an option. And it's not necessarily that we don't recommend it; it's just that for this reference architecture we chose to use Swift and Cinder.

Audience question: roughly how many VMs are you planning per compute node? That's always the magic question nobody talks about with VMs. Okay, so here's the deal. It's really important to talk about the physical cores and the amount of memory that you have in your configuration. In this configuration you can run over 500 tiny VMs with four compute nodes. Now, we have competitors who will say that they can stand up X number of VMs on some small number of nodes. The catch is: are you really going to run 500 VMs through a 10-gigabit pipe?
Even if you bond up to 40 Gb, you still have over 500 VMs. So the question you have to ask yourself is: what are the workloads, and how much bandwidth is each VM going to need? The outbound marketing people at HP pounded on us and said, "You can run 144 VMs on each one of these nodes," and I just kept shaking my head, saying I don't know how to justify that in terms of real-world experience. It's a number, and it can be reached if the VMs are all really, really tiny. But would you be doing that? Probably not. So the key, in my mind, is to focus on how much memory you have, how many physical cores you have, and how much bandwidth you have, and then, on top of all of that, what workloads you're going to run. If you're running a VM that requires one whole physical core, you're never going to get to 500. It's a good question, and it gets asked a lot.

Just to add one more thing to that: if you have an idea of what your workloads are going to be, or what combination of VM flavors you're going to have, you've got a much better starting point for figuring out what those ratios ought to be. I'm trying to remember the sizes for a large VM, m1.large. We have a white paper that lists various scenarios, and in the large case I don't remember the number of VMs; I just remember that disk space is actually your limiting factor. You hit the limit on disk space a little before you hit it on memory, and you end up about 1.4x oversubscribed on virtual CPUs. So if you know what your mix is, you can run the numbers, figure it out, and tweak things based on that (the sketch below shows the shape of that calculation). That's the analysis we did, and it's also where that 10:1 ratio of storage to memory helps, because it matches the VM flavors and keeps you balanced. Any questions before Jay summarizes?
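Here's a minimal sketch of that kind of analysis: pack one flavor onto a host and see which resource runs out first. The host numbers are the compute spec from this talk; the flavor dimensions are the classic Nova m1.large defaults, and the oversubscription ratio is a hypothetical, so the exact output will differ from the white paper Steve mentions. Treat it as the method, not the answer.

```python
# Host resources (compute-node spec from the talk)
HOST = {"vcpus": 72, "memory_gb": 256, "disk_gb": 2400, "net_gbps": 10}

# Classic Nova m1.large flavor (illustrative; check your own flavor definitions)
FLAVOR = {"vcpus": 4, "memory_gb": 8, "disk_gb": 80}

CPU_OVERSUBSCRIPTION = 2.0   # hypothetical vCPU allocation ratio

limits = {
    "cpu":    HOST["vcpus"] * CPU_OVERSUBSCRIPTION // FLAVOR["vcpus"],
    "memory": HOST["memory_gb"] // FLAVOR["memory_gb"],
    "disk":   HOST["disk_gb"] // FLAVOR["disk_gb"],
}
vms = int(min(limits.values()))
bottleneck = min(limits, key=limits.get)

print(f"fits {vms} VMs per host, limited by {bottleneck}")   # disk runs out first
print(f"per-VM network share: {HOST['net_gbps'] * 1000 / vms:.0f} Mbps")
```

With these assumed numbers, disk is the first resource exhausted, which lines up with the talk's observation for the large-flavor case; rerun it with your own flavor mix to see where your bottleneck lands.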
All right, there you go. So what we've given here is three different configurations: one for HOS 1.1, one for SUSE OpenStack Cloud, and one for the proposed HOS 2.0, which is coming out very soon. And they're not just reference architectures. What we've done, to help stand up your first private cloud, is rack, stack, and optimize all of this and deliver a product. The reference architectures will have the bill of materials in the appendix, and you would be able to look at that and go buy all the cables and racks and power supplies from any vendor you want. Or you could buy the product: what's pictured is a very good representation of the Helion Rack, which is based on HOS 1.1; it would be slightly different for SUSE OpenStack Cloud and slightly different for HOS 2.0. The nice thing about this is that it's all cabled and optimized, it's tuned, the switch configurations are done in the factory, it's delivered to the data center, and then professional services teams come out and install OpenStack for you, or with you, depending on how you want to do it.

A couple of things about it. Look at the right-hand side there, at the barriers to adoption. One of the things that's changing is exactly why I asked that very first question about how many people have actually deployed a production OpenStack cloud. The first time we talked about this stuff, a year ago, I think there might have been one person in the audience who raised their hand to say they had actually deployed. And the question I then asked everybody else was: well, gosh, we have thousands and thousands of people coming to OpenStack Summit, there's this big buzz, it's the largest open-source project in the world, so why aren't there more OpenStack clouds being run? When I started asking people, the answer was: well, it's complex, and it's expensive. One of the nice things about this particular box, if you will, this cloud in a rack, is that it mitigates some of that complexity and some of the time to production. When we deployed our first OpenStack private cloud at a customer site, after it was deployed and in production, we talked to the folks who did the installation and asked how long a typical OpenStack deployment takes: OpenStack engineers going into a data center, evaluating the hardware, and installing a production, running OpenStack. The answer was usually about three weeks, which is actually pretty fast. When this rack was deployed, the very first one off the factory floor, delivered to the customer, it was three days to up and running, which is significantly faster. So again, when you're taking this risk of building your first private cloud, getting your toys unwrapped and being able to play with them quickly is a good thing.

We're getting short on time here, so let's go to the summary of best practices. Again, I've talked about mitigating risk a lot: use known, reliable hardware. I'm not saying you have to use HP, although I work for HP, so full disclosure, it's always good to use our stuff. But when you're building this first private cloud, you really can't afford hardware issues during your learning experience of maintaining an OpenStack private cloud. It's one thing to get it installed and deployed; then it has to be maintained. So hardware that is known, reliable, and easy to manage is a good thing.

Use a similar architecture for your physical hosts. As I mentioned before, not only are your workloads going to change, but as you scale out you may want to use these servers in different roles, and to the extent that you have a similar architecture, it's easy to redeploy the boxes into whatever roles they need to play. A perfect example is the change from HOS 1.1 to HOS 2.0: we went from a TripleO-style deployment methodology to an Ansible-based methodology, and we were able to reduce the number of nodes in the control plane.
So a customer who has HOS 1.1 and wants to migrate to HOS 2.0 can now go from five nodes to two or three nodes. And the question might be: well, what do I do with those other nodes? They're DL360s; turn them into compute nodes. You could have gone with what Steve said and used a laptop for your seed node, or some very small, not very powerful box, but then what do you do with it when OpenStack changes? And it will change; we're going through a sea change. So use a similar architecture.

Expect workload variability; that is absolutely going to change. When we developed this product we talked to lots of customers and partners, including folks running public clouds, and one of the things the public cloud folks said was: we have customers who say "these are our workloads, this is what we want stood up for us," and it was never right. Their workloads changed. So yours will change too, and at the front end you want hardware that can be, for example, either a compute node or a storage node, whether that's object store or block store.

Plan to scale up and out. That's going to happen; you're not going to build a single-rack private cloud and have it stay there.

And of course, configure for high availability. One of the things we've seen is that the control plane, at least at this point in time, is going to stay at around three nodes so that you can stay highly available, so you definitely want to start planning on that (the quorum sketch below shows why three is the usual floor). And on the physical network switches, think about what kinds of switching you're going to use, so that you have the port counts and the bandwidth.

Just one more point on that. We didn't talk a whole lot about high availability, but obviously, when you start thinking about how it factors in, a lot of other things flow from it. If you really want high availability, you're going to spread across multiple racks, or multiple zones; you're going to split up your object storage the same way, nodes in each rack, redundant switches, those kinds of things. We didn't go through that here, but it is also part of the thought process you need to put in up front, or at least build that first rack and initial deployment so that you know you can get to that high-availability scenario when you want to get there. Sorry, Jay.
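On that three-node control plane point: the usual reason control planes come in odd sizes is that the clustered services inside them (the database, the message queue) need a majority of members alive to keep serving. The talk doesn't spell this out, so take this as general quorum arithmetic rather than anything specific to one distribution.

```python
def quorum(cluster_size):
    """Majority quorum: smallest number of members that constitutes a majority."""
    return cluster_size // 2 + 1

def tolerable_failures(cluster_size):
    """How many members can fail while the cluster still has quorum."""
    return cluster_size - quorum(cluster_size)

for n in (1, 2, 3, 5):
    print(f"{n} node(s): quorum={quorum(n)}, survives {tolerable_failures(n)} failure(s)")

# Three nodes is the smallest cluster that survives a node failure,
# which is why control planes tend to stay at around three nodes.
```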
And just one more thing before we close. When we began selling the Helion Rack, we knew the scale-out would happen over time; it would be based on how successful the deployment went and how quickly the user was up and running. One thing we found out happens very quickly: one particular customer, a very large entertainment company on the west coast of the United States (you've all heard of them, but they haven't given me permission to say their name yet, so let's just say they do a lot of media stuff) went from their first rack to three racks in probably two or three months, maybe less. So it's happening very fast, and I would expect that if your first private cloud is successful, it will expand very quickly. So with that, any questions? Around the back; you got it.

Audience question: you mentioned that you're able to repurpose machines, so machines that used to be part of the control plane can be used for storage or compute nodes, or vice versa. In my experience, the biggest problem is discovery, of things like the IPMI interfaces. If you want to run the Ansible tooling, IPMI discovery and the like is easier if you have knowledge of at least which MAC addresses exist and which belong to the data plane versus the control plane. Do you have any provision for that in your rack, a fixed inventory, a priori knowledge? Can I grow this rack in place? Right now that kind of functionality is not in place as part of Helion OpenStack.

Follow-up: so, for example, if a particular network card has to be replaced and comes back with a different MAC address, do you accommodate that? Honestly, I don't know how that's handled in Helion OpenStack. That's a great question; I'll read more about it. Thank you. Any other questions?

Coming back to the Apollo question: that's why we didn't use the Apollo in the first reference architecture, because we wanted that similarity. But it doesn't mean the Apollos aren't supported; it just means the Apollos are not in that reference architecture and in this product. In fact, there are use cases where the Apollo is great. If Swift object storage is your workload, it's perfect; and equally for compute, there's a dense-compute version that would be perfect. Again, it depends on whether you know what your workload is and what you want to do; there are scenarios where those boxes are perfect.

Audience question: I understand that HP bought a company with software to include with Helion, something that makes it easier, where you include Apache and all the scripts and everything; I don't remember the name, but I thought you'd combine it with this, and you never mentioned it. That doesn't ring a bell. This reference architecture is a pure Helion product: it's OpenStack, with our deployment wrapper over the top of it. Obviously, since it's OpenStack, you can put all kinds of other tools and scripting utilities on top of it, but I'm not really sure what you're referring to; I'll find out.

Audience question: does the HP networking hardware support acceleration like RDMA, which is beneficial? So in HOS 2.0, the 5930 switch does, and the hardware included in the DL380s and 360s, the 10 GbE cards, all supports it. And that's a great question more broadly: in the base configuration we have straight Intel 10 GbE NICs, but one of the nice things about this particular configuration is that you can configure different types of NICs in the boxes. During the ordering process you can say, "well, we've standardized on this different NIC," or maybe you have low-latency types of workloads, so we put some very low-latency hardware in the box.
I appreciate it. Any other questions? All right, let's go to the booth crawl. Thank you.