Hi everybody, welcome. My name is Sean Madden; I'm a product manager at Piston Cloud Computing. I started out in a sales engineering role the past couple of years and just recently moved into product. So thanks for coming. This talk is about zero to OpenStack in 15 minutes. What that means for us at Piston is that we want to get you from bare metal servers to a running, complete infrastructure in about 10 to 15 minutes. It goes well beyond that, too, because we'd like to not only get your cloud up and running but also make sure that anything you put on top of it is easy to install. We want to automate everything we can: the installation, scaling out your cloud when you want to add servers, scaling in your cloud, and updates and upgrades, say going from Havana to Icehouse, or Icehouse to Juno and beyond. We want everything to be automated and simple, even third-party integration: you bring up your cloud in about 15 minutes, and then on top of that you want to run your platform as a service from Cloud Foundry or Stackato or AMP, or some SDN from PlumGrid or Juniper or OVS. We want to automate those as well, so this whole concept of a 15-minute install extends beyond just the infrastructure. Before I get into the demo portion, let me give you a bit of background on what Piston does and how we get to this 15-minute install. If you look at this block diagram, it basically shows what the Piston architecture looks like.
On the bottom you have commodity x86 servers. They're hyperconverged, from pretty much any vendor, so whether you want to use Supermicro or Quanta, or you're a Dell shop or an HP shop, wherever you want to get your servers from, we're very hardware agnostic. Basically every server in our architecture runs compute, storage, and networking: we fill them up with drives, a couple of CPUs, and some NICs, and these are the cloud nodes, as we call them, that run your cloud. On top of that we've built a micro operating system where we've basically taken Linux, stripped out everything we don't need, and all it does is run OpenStack. We install that into a RAM disk on every server, so it's stateless; we never install anything on your drives. On top of that we have our runtime environment, which does a couple of things for us. Number one, it's our cloud orchestration framework, so we can do this automated install for you; but it's also our high availability framework, to make sure your cloud stays up and running 24/7. We detect things like server failures and services not running, and we can migrate services around to keep your cloud running 24/7. The three other boxes you see are storage, compute, and networking. On the storage side we use Ceph as our storage back end for block and object storage, so you have a lot of control over how you want your storage handled; we distribute it completely across the cloud, keeping three replicas of all your data.
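The high availability behavior described here, detecting a failed node and moving its services onto healthy ones, can be sketched roughly like this. This is a simplified toy illustration, not Piston's actual Moxie implementation; all names are hypothetical:

```python
# Toy sketch of an HA control loop: detect dead nodes and
# reassign their services to healthy ones. Purely illustrative.
def rebalance(nodes, assignments):
    """nodes: {name: alive?}; assignments: {service: node}.
    Returns new assignments with dead nodes' services moved."""
    healthy = [n for n, alive in nodes.items() if alive]
    if not healthy:
        raise RuntimeError("no healthy nodes left")
    new = {}
    for service, node in assignments.items():
        if nodes.get(node):
            new[service] = node          # node still alive, leave service put
        else:
            # naive placement: pick the least-loaded healthy node
            load = {n: 0 for n in healthy}
            for n in new.values():
                if n in load:
                    load[n] += 1
            new[service] = min(healthy, key=lambda n: load[n])
    return new
```

A real framework would of course add health-check timeouts, fencing, and service restart ordering; this only shows the reassignment idea.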
For compute we run KVM as the hypervisor, with some extensions called virtual memory streaming that allow for deduplication of RAM. On the networking side we offer many different options, from the original Nova network that shipped with OpenStack to PlumGrid, NSX, Contrail, and OVS integrations. On top of all that, what you get is built-in OpenStack. We do run OpenStack; we're on Icehouse now, so all the main projects within OpenStack, and we pick and choose, depending on what our customers want, which other projects to add, like database as a service or Ceilometer or Heat, based on demand. Looking a little more at this no-tier architecture concept: there's a rack of commodity servers, and every server can run anything, compute, storage, and networking, in every box. We don't have separate tiers with compute boxes here and storage boxes there; everything can run anywhere. That makes for a very easy install, but also a very easy way to, for example, replace servers: you scale in, pop a server out, do maintenance on it, and put it back in. Very simple. Really quickly on the storage side: I mentioned Ceph is our back end, and I think people have probably heard of Ceph before. It takes care of both block and object storage on the back end. We do offer ephemeral storage as well for people who want local storage, and we have a very neat way for you to completely customize your storage if you want: you can put in different flavors of SSDs and SATA drives, different speeds of those drives, and decide on your own how you want those drives allocated for block storage, object storage, or even ephemeral storage. And as I mentioned on the
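The drive-allocation flexibility just described, deciding which drives serve block, object, or ephemeral storage, might look something like this as a policy. This is a toy sketch under my own assumptions, not Piston's actual allocator; the names and the policy itself are hypothetical:

```python
# Illustrative drive-to-role allocation on a hyperconverged node:
# SSDs for block storage, SATA for object, anything else ephemeral.
# A toy policy, not a real product's allocation logic.
def allocate_drives(drives):
    """drives: list of {"dev": str, "kind": "ssd"|"sata"|other}.
    Returns {"block": [...], "object": [...], "ephemeral": [...]}."""
    roles = {"block": [], "object": [], "ephemeral": []}
    for d in drives:
        if d["kind"] == "ssd":
            roles["block"].append(d["dev"])       # fast media for volumes
        elif d["kind"] == "sata":
            roles["object"].append(d["dev"])      # capacity media for objects
        else:
            roles["ephemeral"].append(d["dev"])   # local scratch
    return roles
```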
networking side, we support networking from various vendors, from OVS to Juniper to PlumGrid, and we're also trying to make those part of the automated, easy install. To talk a little more specifically about the secret sauce, if you will, in Piston: it's the Moxie runtime environment. We built our own cloud orchestration framework, so basically when you rack and stack your servers, you have a boot node in there; you take our software, plug it into the boot node, and power the boot node on. Our software goes out, detects all the cloud nodes that are attached, network boots them, and detects what hardware each one has, so we can see how much RAM is in every server, how many drives and what type of drives are in there, and what kind of compute power you have. Then we install our software into the RAM disk, install Ceph distributed across the cluster, and install all the OpenStack components, and all of this is hands-free. That's the basic zero to OpenStack in 15 minutes. On top of that, we've been working with our partners on third-party integrations, to build installers for things like the platform as a service on top, or any of the SDN providers we've been working with, to make it an easy install all the way through. Again, this micro operating system (we have a marketing name for it, but it's just a hardened Linux distribution) is completely stateless, runs in RAM, is less than 100 packages, and all it's built to do is run OpenStack. And Moxie is not only the cloud orchestration framework but also our high availability framework, so when there are problems in the cloud, when a server goes down, no big deal: whatever services were running on it, we just migrate them to some other server, and the
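The hands-free boot flow just described, discover nodes over the network, interrogate their hardware, then provision everything in parallel until every node is ready, can be sketched as a simple state machine. This is an illustrative toy with hypothetical names, not the actual Moxie code:

```python
# Toy state machine for parallel node provisioning:
# discovered -> booting -> provisioning -> ready.
STATES = ["discovered", "booting", "provisioning", "ready"]

def step(cluster):
    """Advance every node one state; returns True once all are ready."""
    for node, state in cluster.items():
        i = STATES.index(state)
        if i < len(STATES) - 1:
            cluster[node] = STATES[i + 1]   # every node advances in parallel
    return all(s == "ready" for s in cluster.values())

def provision(nodes):
    """Run nodes through the pipeline; returns final states and round count."""
    cluster = {n: "discovered" for n in nodes}
    rounds = 0
    while not step(cluster):
        rounds += 1
    return cluster, rounds
```

The point of the sketch is that the wall-clock cost depends on pipeline depth, not node count, which is why the install time stays near 15 minutes regardless of how many servers boot at once.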
cloud keeps running. The goal is that your cloud is up and running 24 hours a day, seven days a week. So that's a bit of background; let me get into the demo. I have a screen capture I made; I think the guys from Tesora made the great point of not depending on the Wi-Fi here, so I put together a screencast of the cloud orchestration, from when we first boot the cloud, and I'll show you the scale-in and scale-out of the cloud as well. Let me get that screencast up... that doesn't look very nice, hang on one second, I'm not getting mirror mode to work. I don't have any jokes, otherwise I'd tell one right now. Here we go. All right, thank you. So this is the part we call cloud boot. Cloud boot is where we first install the cloud: you've racked and stacked your servers, plugged them into your network, and powered on your boot node. What happens is we get the DHCP request from the BMC in each server, and as you'll see, we detect those servers and power them on through the network. Once they're powered on, they boot up, go through their BIOS, and then we do the auto-detection of all the hardware components and install our software into the RAM disk. Then we wait about 10 to 15 minutes, depending on the speed of your network and the number of servers you have; all of this happens in parallel, as you'll see. You'll see that we're in this provisioning state, which is where everything is provisioning and we're installing all the OpenStack components, and once that's done, all these servers go into the ready state and you know your cloud is up and running. This whole process takes about 10 to 15 minutes. You see the servers are ready, and now you can click to access the dashboard, put in your credentials, and you're in the Piston OpenStack dashboard. Next, scale-out.
Scale-out works in a similar way to the installation. Basically, when you're ready to scale out, you add some servers and attach them to the network; again, we auto-detect them, and you don't even have to power them on. When you're ready, you click on the hosts available to add, select the servers you want to add to your cluster, and everything happens behind the scenes. Your cloud is still up and running, all your workloads stay intact, and the people using the cloud can keep doing their work. We install our software into the RAM disk on the new server or servers; you'll see there's a lot of provisioning and change happening on the other servers as things get reconfigured, but there is no downtime to scale out your cloud. Once the configuration is done, the servers show up in the ready state again and you just continue using your cloud. This is the finalize state it goes through, and then they'll all say ready, and while all this is happening, your cloud stays fully up and running. Scale-in is a very similar process, and that's what I'll show you next. Say you want to remove some hosts because you're not using them anymore, or you want to do maintenance on them. You go to the hosts tab and select the host or hosts you want to remove from the cluster. That starts an evacuation process on that server or servers: any virtual machines running there get migrated off, and it's a live migration, so work can continue on those VMs. Anything stored in Ceph gets migrated as well, so those replica copies move to other servers if they need to, and the scale-in happens. All of this happens in the background, with zero downtime here as well, and any effect on the other servers is all shown in the dashboard; it shows you what the states
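The evacuation step described above, live-migrating every VM off the host being removed before it leaves the cluster, looks roughly like this. This is an illustrative sketch with hypothetical names, not the product's implementation; a real deployment would drive this through the Nova live-migration API rather than a dictionary:

```python
# Toy evacuation: move every VM off `leaving` onto the remaining
# hosts, round-robin. `placements` maps VM name -> host name.
def evacuate(placements, leaving, hosts):
    targets = [h for h in hosts if h != leaving]
    if not targets:
        raise RuntimeError("cannot evacuate: no other hosts")
    i = 0
    for vm, host in placements.items():
        if host == leaving:
            placements[vm] = targets[i % len(targets)]  # "live-migrate"
            i += 1
    return placements
```

Only VMs on the departing host move; everything else stays in place, which is what keeps the operation downtime-free for the rest of the cloud.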
are, and once that's done, your cloud is back in the ready state. Again, all of this happens in the background with zero downtime, and you stay up and running. In about five more seconds you'll see this finish. I've sped things up in all these demos; it really doesn't happen quite as fast as you see here, but it all happens in the background, and the number of servers you're scaling in or out determines how long it actually takes. About three more seconds... and we're ready and running. Those are just three of these automated pieces we do. Another one, which is very important and a big value-add in our product, is live updates. As I mentioned before, if you need to go from Grizzly to Havana, or Havana to Icehouse, or even upgrade our product because some security patches have come out, all the live updates happen without any downtime as well. We do rolling reboots of all the servers: we start with one server, migrate all the VMs and all the data off onto the other servers, upgrade that server, reboot it, and repeat for every server in the cluster. So again, there's zero downtime for upgrades too. The whole goal of zero to OpenStack in 15 minutes is, number one, to make all this installation, configuration, and management very easy; number two, to make sure anything we add on top of our product, whether it's platform as a service or, as you move into containers and other areas, anything that gets installed, is also a very quick, easy install; and to make sure that your cloud is up and running 24 hours a day, seven days a week. That's what I had for the presentation and the demo, and I'd be happy to open up for any
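The rolling upgrade procedure just described, drain one node, upgrade it, reboot, then move to the next, can be sketched like this. This is a simplified illustration with hypothetical names; a real upgrade would also verify cluster health before moving on to the next host:

```python
# Toy rolling upgrade: drain and upgrade one node at a time, so
# capacity never drops by more than a single host at any moment.
def rolling_upgrade(cluster, drain, upgrade):
    """cluster: list of node names; drain/upgrade: callables per node."""
    upgraded = []
    for node in cluster:
        drain(node)      # live-migrate VMs and data off this node
        upgrade(node)    # install the new version and reboot
        upgraded.append(node)
    return upgraded
```

Processing one node at a time is the design choice that makes zero-downtime upgrades possible: the rest of the cluster absorbs the drained workloads while a single host is out.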
questions; I have about five minutes left on my time. Yes? It is not. So the question was: we have a Linux operating system we've created, and is that secret sauce open source? At this point it's not open source. We kind of evaluate as time goes on whether or not we're going to open source things; right now we have not open sourced that part of our product. Yeah, so our OS runs in the RAM disk, and then you can put in your own images: once the cloud is up and running, people go into Glance and put in whatever Linux or Windows images they want, and then they can spin up their VMs from those images. So if you have a bunch of Red Hat... yes, yes, and our OS is still running in RAM, yeah. I don't know that I've seen any problems there; I haven't heard that per se, but as far as I know I haven't seen any problems with people running Linux flavors of any type. Yeah, I'm sorry, I couldn't hear you: is it possible to map different services to different...? Yeah, so right now, not at cloud install time or cloud boot time; we can't do that. Yes, sure. So we're seeing a lot of DevOps use cases; that's kind of where our sweet spot is. For people doing these DevOps deployments, that's the biggest use case we're seeing from most people. We have people on both sides, service providers over here and large enterprises on this side, which make up some of our use cases, but right down the middle is this really DevOps-y focus. What they're running is a lot of their own internal, in-house apps, and they have this sort of continuous integration, continuous development flow where they start in a PoC, move from the PoC to a dev environment, and move from the dev environment into production, and we kind of see
that happening a lot. Correct, yes. Do we use the...? Yeah, so for main cloud management you can do it all three ways. Some people like using the dashboard, so we do have Horizon there; some people use a command-line interface to do it as well; and then a lot of people use the APIs directly. So we still offer the three main entry points into OpenStack that are available. Other questions? Okay, well, thanks a lot for your time. Piston Cloud Computing has a booth right over there, so feel free to stop by. Thanks a lot.