So, yeah, I'm Howard Abrams, and I work at a company called Workday. We went public last year, and the company's been around for a few years now. I thought I'd give you more of a user's perspective on how we do cloud at Workday. We make an enterprise application for HR, and as such we're really concerned about security, to the point where it's really obnoxious to be a developer there. There are all sorts of stringent controls, because we hold things like our customers' employees' social security numbers and salaries and all kinds of incriminating information. So each of our customers has to be in their own VM, and consequently we've been doing cloud for a long time.

However, we had a bit of a problem, like everybody does, and this is where we started from. We had a private cloud, built on Xen, that was duct-taped together with a lot of shell scripts and a lot of tears from developers giving those scripts late-night personal attention. And late-night personal attention means something completely different to a DevOps person, right? We also had a lot of hand-rolled Chef recipes, and some of these scripts were thousands of lines long and required pages of documentation just to explain how to run them.

So along comes Carmine here, and he hired a bunch of us to help make a reproducible private cloud. We wanted a business-specific interface to it, to make our DevOps teams happy; we wanted what we call a promotable pipeline, which I'll explain in a minute; and we wanted a lot of feedback that we could use to improve things. And once we got going, they kept adding requirements. They had specific types of virtual machines, and when we asked what they meant by that, it sure sounded like flavors and images with quotas. Hmm, alright, sounds like we've got an idea here. They also had very defined limits, and authentication, and security. And then, well, since we were asking: we want to support a lot of data centers with the same tools. And, hey, can you support our existing scripts? Oh, and can you keep expanding it for all eternity?

So we came up with what we thought was a solution, a four-part solution. Now, these are not products that anybody else can use; they're just the code names we gave the pieces to keep them straight in our own minds. We began with OpenStack: the Swan deploys it, the Owl configures it, the Raven is basically our API on top of it, and the Dove is our monitoring and metrics.

The Swan is how we do a fully automated deployment of OpenStack in lots of data centers, all at one time. These are some of the key features. We were big into Chef, so we had to use Chef to install this thing. We started kicking out our old Chef scripts and using the ones we could get from Opscode, and of course Rackspace's collection, and then we added our own special sauce on top. Then we had our Workday projects, which had to be continuously built, integrated, and pushed out into production. And yes, we've been looking very closely at TripleO for a few weeks now, and it's like, we've got to start migrating over there.

This is our promotable pipeline. When a developer checks some code into Git, it goes through the build process and full testing using templates, plus a lot more, and if it all comes out good, we push it into our repository of gold images. The gold images get shoved over into the first of many environments in our data centers, and if QA blesses that, we promote it on down the line: from engineering to our staging environment, non-prod, and then into prod. And if that's good, we move it over to the different data centers we have all over the world, pushing it out little by little so we have a workflow that keeps going.

Next, the Owl. Once an OpenStack instance is running, we need to get it into a state where we can actually use it. So some of the features we started developing: we start with a very well-defined configuration interface that we can override for particular environments, and yes, for this too we've been looking at TripleO. We wanted it idempotent, so clearly we could just keep running the thing any time we needed to, and we wanted the configuration files under source control for each environment, letting us create particular tenants and flavors and all of that beautiful stuff.

What we ended up doing for this project is just writing some Python scripts using the Nova client library, which makes it pretty straightforward. This is a little bit of our configuration file, just to give you a flavor for it: we can specify both the connection for a particular thing and what we want to do with it. Set up a project, or in this case, here's another connection in the same file, but this one sets up the flavors. But the code, I mean, hats off to the whole team, the code was very easy to use. Slapping together some nice little Python scripts made it a lot easier for us than dealing with the REST client directly. The only downside, really, is that if a lookup doesn't find anything, it throws an exception. Really, is that a good thing?
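Just to give you an idea, the create-if-missing pattern is basically this. This is a minimal sketch, not our actual code: the credentials and flavor sizes are made up, and in our setup those values come out of that configuration file.

    # Minimal sketch: idempotent flavor setup with the Nova client library.
    # The connection details and flavor definitions below are invented.
    from novaclient.v1_1 import client
    from novaclient.exceptions import NotFound

    FLAVORS = [
        # (name, ram MB, vcpus, disk GB) -- hypothetical values
        ("wd.small", 2048, 1, 20),
        ("wd.large", 8192, 4, 80),
    ]

    def ensure_flavors(nova, flavors):
        """Create each flavor only if it doesn't already exist."""
        for name, ram, vcpus, disk in flavors:
            try:
                # find() raises NotFound when there's no match -- that's
                # the exception behavior I just complained about.
                nova.flavors.find(name=name)
            except NotFound:
                # flavorid="auto" asks Nova to pick an ID for us
                nova.flavors.create(name, ram, vcpus, disk, flavorid="auto")

    nova = client.Client("admin", "secret", "admin-tenant",
                         "http://controller:5000/v2.0/")
    ensure_flavors(nova, FLAVORS)

Run it once, it creates everything; run it again, it does nothing, which is exactly the idempotent behavior we wanted so we could keep it under source control per environment.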
Next, we created the Raven, which is basically a business interface to the entire stack; it looks something like this. We wanted to support lots of different DevOps teams, using different types of tools, all of them coming through the same interface, but with that interface able to configure multiple OpenStack instances all over the world. Well, once we started dreaming, we just kept going: we wanted a generic API, a hybrid one that could work with both public and private clouds. It had to be RESTful, to make it easier on ourselves, and support JSON; but of course, in order to support some of those shell scripts, we started adding other formats to it.

We ended up building it with Python Flask and, once again, the Nova client library, and it worked out pretty well. We were able to keep it all stateless, which meant we got high availability for free. This is the architecture we came up with: clients on one side push requests over to the API, which then talks to one or more OpenStack instances. The code for it, if you ever decide to do this, is pretty straightforward. We have one file that's full of just routing information, with some documentation, and that documentation ended up being something we could spit right back out. So this is the API documentation that actually comes out of our code: any of the DevOps who want to see what could happen can just go to this one URL and see all of the different things the API supports.
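To sketch what that routing-plus-documentation trick looks like: here's a minimal Flask example, again not our real code. The route paths and the connect() helper are made up, but it shows how the docstring on each route can be served back out as the API documentation.

    # Minimal sketch: a self-documenting Flask API over the Nova client.
    # The routes and the connect() helper here are hypothetical.
    from flask import Flask, jsonify
    from novaclient.v1_1 import client

    app = Flask(__name__)

    def connect():
        # Hypothetical helper; the real service looks connection
        # details up per environment instead of hard-coding them.
        return client.Client("admin", "secret", "admin-tenant",
                             "http://controller:5000/v2.0/")

    @app.route("/api/v1/flavors")
    def list_flavors():
        """List the machine types available to this team."""
        return jsonify(flavors=[f.name for f in connect().flavors.list()])

    @app.route("/api/v1")
    def describe_api():
        """Spit the route documentation back out for the DevOps teams."""
        docs = {}
        for rule in app.url_map.iter_rules():
            view = app.view_functions[rule.endpoint]
            docs[str(rule)] = (view.__doc__ or "").strip()
        return jsonify(routes=docs)

    if __name__ == "__main__":
        app.run()

Flask already keeps the route table in app.url_map, so the documentation endpoint comes nearly for free, and since nothing here holds state, you can run as many copies behind a load balancer as you like.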
We tried to make it more business-oriented, just for what we wanted at Workday, so we bubbled things up to be very specific to what our teams do, and underneath it just calls the standard APIs. This is just something Workday wanted to have: a much simpler, dumbed-down, very business-specific interface. So, like I say, this isn't necessarily something you would do, or something you'd take from us, but it's the kind of thing some companies would want to run. And it ended up working out pretty well: a good request works, great; a bad request comes back cleanly, great. It's hard to see that on the slide. So, just using the Nova client library was very successful for us. We could render the results in different ways, bubble up the information we got from the public cloud, or from OpenStack, the way we wanted, and reformat it the way the company needed. We also didn't have to worry about asynchrony: we just have the clients poll multiple times, and that seemed to work out pretty well. And then, using BoxGrinder, we could create the different images for the different layers, for both the virtual machines and the bare metal. But like I say, we've been playing around with TripleO, because we really want to move over that way.

The Dove is our project for monitoring and feedback. We pretty much had to use the same Nagios setup we were already using; we just extended it and added Kafka so we could collect our logs. In the future we want to start expanding that, since it's a key feature for us.

Now, if you're going to try doing some of this stuff yourself, there are a couple of other tools you might want to look at that seem to work out pretty well. Mirantis has a project called Fuel. It lets you stand up a cluster on bare metal, kick off a whole bunch of integration tests to validate it, and it has a really pretty UI. It's all based on Puppet, though, and since that's what goes on the bare metal, until we can talk them into converting over to Chef, we're not able to use it at the moment. Another approach, if you're going to manage multiple OpenStack instances, is Supernova. It has basically the same nova CLI interface; you just specify a different environment on each command, and it reads the connection properties for that environment from a configuration file. So you can say: go to tenant one, and give me a listing of all the flavors, and it uses that connection to get over there.
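If you'd rather do that from Python than from the CLI, the same idea is only a few lines. This is a rough sketch of the Supernova-style approach, not Supernova itself: the environment names and the ini file are invented.

    # Rough Python take on the Supernova idea: one ini file of connection
    # blocks, pick an environment per call. Everything here is invented.
    from ConfigParser import ConfigParser   # Python 2, the era of this talk
    from novaclient.v1_1 import client

    def connect(env, path="clouds.ini"):
        """Build a Nova client from the [env] section of an ini file."""
        cfg = ConfigParser()
        cfg.read(path)
        return client.Client(cfg.get(env, "username"),
                             cfg.get(env, "password"),
                             cfg.get(env, "tenant"),
                             cfg.get(env, "auth_url"))

    # e.g. list the flavors defined in the staging cloud:
    for flavor in connect("staging").flavors.list():
        print flavor.name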
So, some of the lessons we learned through all of this. OpenStack: definitely very good. However, the tools move very fast, and for an enterprise company that's a little rough on us. We have to make sure we can stay on trunk, and we certainly want to contribute back as soon as we've leveled up on it all. From a technical standpoint, there are certainly a lot of rough edges we've had to work around. Heat: almost. TripleO: almost. A lot of good things that are almost ready for us to use. We've also had to rethink a lot of our assumptions, especially when it came to networking. At Workday we have a very particular way we want our networking to be, and that's usually something you configure inside the OS; that's not always the case with OpenStack, where some of it lives elsewhere in the stack. So we've had to do quite a bit of leveling up to figure that part out. However, the scripts were easy to build up with the APIs.

Oh, and one other thing: our data centers do not have internet access. So standing up something like DevStack just to see if it works doesn't work, because it can't go out to the internet to download anything, via yum or whatever; it just can't get out there. So we've had to figure out ways to do offline installation.

So in summary, if you decide to follow suit on the same kind of business process: you can certainly use the Nova client libraries, they've worked really well for us; to enforce your own security policies, you can layer on top of Keystone; and you can use something like Fuel or Supernova to aggregate a lot of these different interfaces. So yeah, how about that? We can now go finish off that beer we promised ourselves. Any questions?