Hi, everybody. My name is Lee Thompson. I'm the director of cloud integration at Solinea. We're an engineering and design firm for cloud, specifically OpenStack. I go all over the world implementing cloud systems, and I'm very generic and unopinionated when it comes to implementations: any vendor. And I started picking up on the complexity of the OpenStack tool chain. I always want to get it right, and it turns out it's a bigger problem than me, so I like to use a community forum like this to start sharing best practices.

What I'm targeting this talk at is people trying to deploy clouds. This is not easy. Chasing trunk is something that I typically do; chasing trunk is when you're trying to use the latest version of OpenStack and the latest config, and everything tends to stay broken. So if you're trying to implement an OpenStack cloud at your company, don't do this. You want to stay with a stable configuration and a stable version of OpenStack.

OpenStack right now seems to me like what X Windows was back in the '80s. I'm showing my age, obviously. You could do anything with X Windows, and that was awesome. The problem with X Windows is that you could do anything: it did not have well-defined defaults or configurations. When you look at something like OS X, it makes decisions for you, and standard configurations and standard use cases get applied. We're still trying to figure that out with OpenStack.

Also, what you see going on in the demo hall behind you feels like the Linux distro wars that were going on back around 2000. It seems like every day a new distribution turns up and your company wants to look at it. That makes it very, very difficult to standardize on something that, like OS X, typically just works. There's really no one solution. The way we typically do this is with a tool-chain type of design, which is very standard in the DevOps world.
You don't just plop down one solution; you chain tools together. And what is a tool chain? A tool chain is a set of programming tools used to create a product. That's right off Wikipedia. Unix itself is just a collection of tools. It's one of those jokes everyone at the party is laughing at that you don't quite get, and that's how I always felt about Unix. It took me a long time to figure out it's really not an operating system; it's a collection of utilities that work together. Well, OpenStack is kind of like that. We've taken the big parts of a distributed compute cloud platform, broken them into pieces, implemented them separately, and federated them together as a tool chain.

Good programmers are lazy, and what lazy means is that you don't want to reinvent the wheel every time; you want to leverage your community. That's the best practice. The problem with being lazy is that you have to spend a lot of time at it. There are a lot of tools out there. Which ones are good? Which ones are bad? You have to use your community to figure it out. So, to try to figure it out, I spoke at DevOpsDays last week. I did it at night. I've done an Ignite session, the five-minute speech, which is a horrifying format for giving talks. I did that in Austin, and we had a breakout session where we started talking about different approaches to doing cloud solutions with OpenStack. And I brought it out here to Georgia.

Now, one thing Austin's got going for it is F1, top-of-the-breed racing. Out here they have something called NASCAR. I was never really into it until, well, last week. Did you guys see this? The Dogecoin guys sponsored a NASCAR. Now I have to watch NASCAR, because that was super, super cool. The Reddit community raised $55,000 and put this on the back of a NASCAR. So they're doing the bumper-cam drafting shot, and there's a dog looking at you. Freakin' awesome.
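The "chain of tools" idea can be sketched in a few lines of Python: each tool does one job and hands its result to the next, the way Unix pipes do. This is an illustrative sketch, not any particular product; the stage names and state keys are made up.

```python
# A tool chain as a pipeline: each stage is a small single-purpose "tool"
# whose output feeds the next one, Unix-style. The stages are invented
# placeholders for real deploy/config/boot tools.

def deploy(state):
    return {**state, "deployed": True}

def configure(state):
    return {**state, "configured": True}

def boot(state):
    return {**state, "booted": True}

def run_chain(stages, state):
    # Feed each tool's output into the next, like a shell pipeline.
    for stage in stages:
        state = stage(state)
    return state

result = run_chain([deploy, configure, boot], {"host": "node-1"})
```

The point is that no single stage knows about the whole chain; you can swap one tool for another as long as it accepts and returns the same shape of state.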
My daughter asked, "Daddy, can we do Dogecoin on an F1 car?" Yeah, maybe for one lap.

Let's talk about tool chains. This goes back to something Alex and I did at a Velocity conference a couple of years ago, where we started looking at how most of the DevOps work prior to that was on provisioning. What we were saying was that there are lots of different tool chains, and you have all of these in cloud as well. There's a provisioning tool chain: deploy, config, boot, install. There's the classic continuous integration, continuous deployment, continuous delivery tool chain, which is your software development lifecycle. You have your monitoring tool chain; we have a whole conference on that, Monitorama, and then the whole "monitoring still sucks" meme. There's a whole tool chain for that.

And then there's control, and this is what I always say about control: monitoring is data, just raw data. What you really want is control. You want to draw inferences from that data, make correlations, decide whether things are in control or out of control, and then have remediation, run-book automation, to get back in control. If you're talking about monitoring, you're probably talking about the wrong problem. And then model is where you look at your distributed system's ontology, the relationships of the endpoints in your distributed system. I'm not going to talk about that today, because that's beyond the scope.

Let's go through each of these areas except model, one by one, with regard to OpenStack. The provisioning stuff is probably the worst part of it. It's got a long way to go, I think, and unfortunately it's the first thing you want to do. If you're doing a private cloud, you want to get it all installed. There are six major ones I keep running into: Foreman and Packstack, which are typically Red Hat, and from there you get into TripleO and Tuskar; and FuelWeb, which is from Mirantis.
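The monitoring-versus-control distinction can be made concrete with a toy loop: raw samples come in, an inference step decides in-control or out-of-control, and remediation runs only in the latter case. This is a sketch under invented assumptions; the metric, threshold, and remediation action are all made up.

```python
# Toy control loop: monitoring produces raw data; control infers a state from
# it and triggers run-book remediation when out of control.

def in_control(samples, limit=90.0):
    # Inference step: reduce raw monitoring data to an in/out-of-control call.
    return max(samples) < limit

def remediate():
    # Stand-in for run-book automation (restart a service, reprovision, ...).
    return "remediation executed"

def control_step(cpu_samples):
    if in_control(cpu_samples):
        return "in control"
    return remediate()
```

The data itself (the samples) is never the end product; the yes/no decision and the automated response are.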
Juju and FuelWeb are probably the best user interfaces I've seen, probably the cleanest installs. Crowbar does provisioning very well on Dell hardware. It looks like Dell is pulling out of that, but as an open source project it's going to start supporting more hardware. And there are the Rackspace private cloud tools.

The really cool thing that happened in Hong Kong is that Symantec did an old-school proof of concept where they grabbed all of these and went through them one by one, and this is already up on the OpenStack website. They got the Crowbar guys in, they got the FuelWeb guys in, they installed all of these and baked them off for their use case, okay? If you were evaluating these tools for your use case, you would come up with a different result, right? They ended up going with Crowbar. What was so fantastic about what these guys did is that a lot of us have done proof-of-concept work for the corporations we work at. We spend a lot of time on it, maybe a whole quarter, maybe four months, and the result gets put into a file; you do some PowerPoint in front of your company, they file it away, and that's the end of it. These guys brought it out to Hong Kong and shared it with the community. It was just fantastic.

Going back one slide: most of these tools are based on Chef or Puppet. Juju has its charms; TripleO, which is newer, is pluggable and not opinionated about config management. When you get into the config-management side of provisioning, it's slippery and wet; there's still some work going on. The Puppet code is getting refactored and re-hosted, moving from Puppet Forge over to StackForge. So if you go out there and start Googling for your Puppet code, you might end up on the older code. Plus there are a few branches out there. There was a meeting yesterday, which is great.
There's already a weekly dev status meeting that started about four or five weeks ago, and starting next week we're going to have weekly dev meetings on it. The goal has to be to get all these branches out of the system and get to a community-resolved, standard, well-defined config; that's what we need as a community, I think. There's variation across all the different deployments: my configuration of OpenStack is going to be different from your configuration, and it needs to be different from your configuration. So how do we get all these varied configurations merged together into basically one piece of code that can inject all the different config options? The Chef code was in pretty much the same place a year ago that the Puppet code is in now, but Matt Ray got hold of it, started community wrangling, and got the code merged in, so there's less fragmentation on the Chef side.

Once you have your OpenStack cloud up, one of the first things you want to do is get the machines you want to host onto it, so you want to build up some virtual machines. Wow, there are a lot of tools here, and I run into pretty much all of them: VM Builder, Image Factory, SUSE Studio. The new kid on the block for machine image builders comes out of the TripleO project: diskimage-builder. Is anyone using that? Yeah, okay. Got one. I've been most successful in my own work just using Vagrant with Veewee or Packer, running on my Mac, cloning with VBoxManage, and then uploading that into OpenStack. That's just something I've been doing as a developer for a long, long time, and it's been the cleanest way for me. A lot of the OpenStack manuals talk about BoxGrinder and Oz, and there are a lot of pointers to them from the manuals. I have not had any success getting those running, and I haven't seen any commits there for a couple of years, so I don't know if they're alive or dead. But why not just use Vagrant?
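That clone-and-upload workflow can be sketched as a couple of command builders. This is a hypothetical sketch: the file and image names are made up, and the commands are only assembled here, not executed, so the shapes are easy to inspect (a real script would run each one through `subprocess.run`).

```python
# Sketch of the Vagrant/VirtualBox -> OpenStack image workflow described
# above. VBoxManage clonehd copies a VirtualBox disk; the glance CLI then
# registers the file as an image (newer clients use `openstack image create`).

def clone_cmd(source_vdi, target_vdi):
    return ["VBoxManage", "clonehd", source_vdi, target_vdi]

def upload_cmd(image_file, name):
    return ["glance", "image-create",
            "--name", name,
            "--disk-format", "vdi",
            "--container-format", "bare",
            "--file", image_file]

# The two steps of the pipeline, in order (names are illustrative).
pipeline = [
    clone_cmd("base-box.vdi", "web-01.vdi"),
    upload_cmd("web-01.vdi", "web-01"),
]
```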
Let's talk about the control side of the problem. Ansible and SaltStack are Python-based configuration-management and orchestration tools that collapse the two together, whereas Puppet and Chef do configuration management and leave orchestration to another tool in the tool chain. Those two have gotten a lot of adoption in the OpenStack space that I've been seeing, and I think there's a lot of affinity with the Python language, so that's really cool. Both of those guys are here this week. OpenStack Heat is getting a lot of traction, and it's heavily used by TripleO. I've also seen ActiVe, Fabric, and Rundeck; I use a lot of Rundeck, by the way. The Scalr and Tuskar projects provide elastic scale and a management console for folks who are spreading load across clouds. Oh, by the way, I'm covering mostly open source tools, not the commercial side. What I figured for this talk was that we'd do just the open source side, because the open source guys don't tend to have marketing departments. Scalr adds cloud abstraction, so you can host part of your load on, say, AWS and part on OpenStack.

On the release-automation side, all the normal tool chains apply. Jenkins and jclouds, which you've been seeing outside the OpenStack space, are really heavily used for OpenStack. You plug jclouds into Jenkins, and jclouds can attach to OpenStack because that's one of its providers, and almost everybody's using this. So if you're setting up a CI loop, that's pretty much the standard thing everyone's doing. You can do continuous integration, you can do continuous delivery. Now, TripleO I've listed here more as a lifecycle tool under release automation, not as a provisioner. A lot of people have been talking about TripleO as provisioning, and I see it more as a lifecycle tool.

Let's talk about lifecycles. When you're setting up a tool chain, and I do this a lot, I've been doing it for years, this is how I see the problem.
What we've been seeing is that tool-chain development has been happening for application development, and with the onset of automated configuration management, the infrastructure-as-code motto also applies: if your infrastructure is code, why doesn't it follow a software development lifecycle as well? The question is whether you develop a separate tool chain or one that leverages and borrows from the software world. It depends on the client, it depends on the particulars of the deployment, but we do start seeing the introduction of package repositories for configuration. You develop your code, and there's been a lot of focus in configuration management on unit testing your configuration, then you release it and put it in a package repository. It's very lean to do an intermediate-stage release of an artifact; then the people doing deployment can pull the latest versions, set up an environment, test it, and promote that into a release.

So typically when I'm talking to a client, I go through their software development lifecycle and figure out what they're doing at each of these stages, right? What are they doing for development? What are they doing for source repositories? How are they running their builds? What package-management systems are they using? And staging all those tools up. Are they doing push or are they doing pull? Some people do a release and want to push it right out to production; I think the more sophisticated shops stage it in a package repository and pull it out. When you get over to the right side, the deployment consoles, you're looking at something like Scalr or Rundeck that will orchestrate delivery of the artifacts into production across the distributed system in the proper sequence, right? Or you can have convergence that doesn't need orchestration, where you're just using Jenkins.
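The pull model can be sketched in a few lines: deployment asks the package repository for the newest released artifact rather than having the build push straight to production. The repository contents, package name, and version scheme here are hypothetical.

```python
# Sketch of the pull-based release flow described above: staging pulls the
# latest artifact from a package repository before promoting it.

def latest(versions):
    # Sort "major.minor.patch" strings numerically, not lexically,
    # so "1.10.0" beats "1.9.2".
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

# Stand-in for a real package repository (names and versions invented).
repo = {"myapp-config": ["1.9.2", "1.10.0", "1.2.7"]}

def pull_for_staging(package):
    version = latest(repo[package])
    # A real implementation would fetch the artifact here and converge the
    # staging environment onto it, then run tests before release.
    return f"{package}-{version}"
```

The numeric sort is the detail that bites people: a plain string comparison would call "1.9.2" newer than "1.10.0".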
The infrastructure manager on the right is OpenStack in this particular case. I said before that I think TripleO is a release system, not a provisioner. It's OpenStack on OpenStack; there have been a lot of talks about it this week. I see it as a CI tool, continually testing your bare-metal config. You can create different configuration versions with OpenStack and virtualize them, or you can put them on bare metal. If you're in a role like mine, where you're basically trying out the latest features of OpenStack and determining whether they're suitable for a client or not, you're chasing trunk. The best way to get some sanity in that process is to set up a test environment, have it run through the latest config, and give you a green light or a red light that tells you where you are. Are you deployable or are you not deployable? Any time you have that information, you have better control over your infrastructure.

On the negative side, a lot of people have been accusing the TripleO project of chasing DevOps unicorns, right? One of the best practices of DevOps is to reprovision instead of repair. If you have a problem with a server, don't try to fix the server; delete it and reprovision it. All your artifacts, all your systems, have a recipe-driven configuration that can recreate itself, so why sit there and debug it? If there's a problem with it, you treat it like cattle, not pets, as another one of the sayings goes. The problem is that we talk about DevOps and it's all rosy unicorns, we've got all the CI and CD stuff, but is it practical? Sometimes it isn't.

This is the slide Robert Collins gave about a year ago when he went over TripleO. It shows the lifecycle: by design it's a lifecycle tool that, at the top left, keeps track of the current changes to the configuration and the code of OpenStack, stages an environment, and runs the tests. That gives you your state, where you are.
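"Reprovision instead of repair" fits in a few lines once the configuration really is recipe-driven. This is a toy sketch; the `Node` class, the recipe path, and the `provision` stand-in are all invented for illustration.

```python
# Toy sketch of cattle-not-pets remediation: when a node is unhealthy,
# recreate it from its recipe rather than debugging it in place.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    recipe: str      # the config-management recipe that can recreate the node
    healthy: bool

def provision(name, recipe):
    # Stand-in for a real provisioner (Heat, Crowbar, FuelWeb, ...).
    return Node(name=name, recipe=recipe, healthy=True)

def remediate(node):
    # No ssh-and-debug branch at all: delete and recreate is the only path.
    if node.healthy:
        return node
    return provision(node.name, node.recipe)

broken = Node("compute-07", "recipes/compute.rb", healthy=False)
replacement = remediate(broken)
```

The whole pattern depends on the recipe being complete; if any state lives only on the node, deleting it is repair's opposite in the worst way.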
Are you in a deployable state or not? This is the information you've got to have. Yes, TripleO does provisioning, but this is a much more complicated lifecycle than provisioning. The standard slide you see in continuous delivery is: we have some code, we pull it out, did the build succeed or not? That's feedback. The build didn't succeed: you're not in a deployable state. Okay, we actually built it; now, with the new version, can we actually create an environment and run the tests? More information: where am I in the lifecycle? All the tools line up, and there can be a failure in any one of them. At some point you've got to get all those tools to line up so you can get into a deployable state. TripleO does that. So if it gives you the information you need, it's getting out of unicorn territory and into DevOps best practice.

Let's talk about testing. OpenStack Tempest is kind of the standard approach to certifying that you're in a deployable state, and it's an open source project. I consider it best practice to get it working in your internal environment, your pre-production cloud. Bunch and Lettuce are very similar to Cucumber. I'm language agnostic; Cucumber was kind of the standard behavior-driven development testing tool. I don't care whether Lettuce is written in Python or Cucumber is written in Ruby; I think you should pick one and use it. If you're doing a project, you should describe what the behavior is, use a tool like Cucumber to capture that, and make it runnable. Instead of writing a specification, why not write a test? You can do that with these tools. I do like the goals of TripleO as a lifecycle automation system: it gives you information about where you are in the process.

Let's move on to monitoring. The usual suspects I've been seeing in monitoring: people are using Zabbix, Zenoss, Cacti, Icinga, openQRM.
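The build-environment-test gating above reduces to a very small loop: run the stages in order, and the first failure is your red light. A minimal sketch, with illustrative stage names; a real pipeline would replace the lambdas with actual build and Tempest runs.

```python
# Minimal deployable-state gate: each stage passes or fails, and the
# pipeline's answer is simply "deployable" or the stage that broke.

def run_pipeline(stages):
    # stages: ordered list of (name, zero-arg callable returning bool).
    for name, step in stages:
        if not step():
            return (False, f"failed at: {name}")  # feedback: not deployable
    return (True, "deployable")

stages = [
    ("build",           lambda: True),
    ("create test env", lambda: True),
    ("run tempest",     lambda: False),  # pretend the test suite failed
]
state, why = run_pipeline(stages)
```

What matters is that every run ends in exactly one of two states, and a failure tells you which stage to look at.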
On log management, I see a lot of people doing Logstash and Loggly, if you're not using Splunk; Splunk is commercial, and I've been mainly talking about open source tools. The new kid is something we released at Solinea, which is Goldstone. It's similar to Logstash and Loggly, but it's focused on OpenStack, so it aggregates all the logs inside your OpenStack deployment and puts Elasticsearch on top to start correlating events in your running cloud.

Hey, this isn't just about operations. Devs, there are lots of tools for devs in the cloud space. Ruby developers have been using Fog and Aviator, Aviator being a little newer; Mark Maglana, who is probably around here somewhere, wrote that. jclouds and Cloud Foundry are the standard tools I've been seeing in the Java space. In Python you have a lot of choices: PyCloud, libcloud, or OpenStack itself; if you just want to go native, you can use the OpenStack libraries. Whatever language you want. I've done a lot of work with Bash and Rerun; that works just fine too.

Moving up the stack: platform as a service. I actually don't run into a lot of this in the work I do. Usually when I work with a client, they're pre-settled on their platform. They've developed a platform for their business, they know how to run it, and they want to get it onto OpenStack. But these are the more generic tools to run, say, Ruby code in your OpenStack environment, similar to any of the PaaS providers out on the net. There are four projects I've run into: Trove, Solum, Cloud Foundry, OpenShift. But I actually haven't had any keyboard time on any of these tools.

I'm a nervous speaker, and I blew through this quite quickly, so what I'll probably do is take questions. I'd like to get some feedback on what you guys are doing; maybe we can do that here and then move on outside. But I will end with the superuser metaphor, right?
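The kind of correlation Logstash or Goldstone does can be illustrated with a toy: collect lines from several services and group them by request id so one request can be traced across the cloud. The log format and service names here are invented; real OpenStack request ids look different.

```python
# Toy log correlation: group lines from different services by request id.

import re
from collections import defaultdict

# Invented format: "<service> req-<id> <message>".
LINE = re.compile(r"(?P<service>\S+) req-(?P<req>\w+) (?P<msg>.*)")

def correlate(lines):
    by_request = defaultdict(list)
    for line in lines:
        m = LINE.match(line)
        if m:
            by_request[m.group("req")].append((m.group("service"),
                                               m.group("msg")))
    return dict(by_request)

logs = [
    "nova-api req-a1 instance create accepted",
    "nova-scheduler req-a1 host selected",
    "glance-api req-b2 image download started",
    "nova-compute req-a1 instance spawned",
]
trace = correlate(logs)
```

In a real aggregator the grouping key lives in an Elasticsearch index rather than a dict, but the idea is the same: raw lines from many hosts become one story per request.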
I think what I'm trying to do is share different approaches to doing cloud, and this is my superhero, the Dude. So thanks very much. That's it for me; any questions? We've got two microphones here. Anybody want to share what they've been up to? We've got a victim.

I just have a quick question about the build-tools slide you had early in your presentation. I was involved in some work with Windows guests a while back. Do those tools apply? Could you speak to hosting and building Windows guests?

Great question. Two years ago I was involved in a project where most of the year was on Windows guests, and I did a lot of WinRM. I was running Rundeck on a Linux node with Java, dispatching over WinRM and orchestrating through PowerShell. I actually loved it. Having been on several projects that had Windows, it used to be very, very difficult to do anything with a Windows node; now I have a good shell environment. I found PowerShell to be very similar to Rerun, which I use a lot on the Bash side. The one thing that's kind of weird about PowerShell is that it's verb-noun versus noun-verb. When you get into a large distributed system that has maybe 1,500 endpoints, you know you're working on, say, a load balancer, so you want to say "load balancer fail" or "load balancer update" or "load balancer install". In PowerShell it's reversed: it's "install load balancer". But otherwise, I think it's a great approach. It makes your endpoints cooperate very well with your Unix nodes, especially if the application can talk REST over HTTP. I've had really nothing but good experiences in the last two years doing Windows and mixed environments. Great question.

Any other questions? Does everyone go to DevOpsDays and do that kind of community work? No? I do a lot of the DevOps stuff in Austin. Go ahead. Yeah, quick question.
The slide where you were looking at the Symantec bake-off for the tools... there you go. Are there other examples of those types of bake-offs that we can get our hands on, so we can see what other experiences are out there and go through it ourselves?

Yeah, great question. There's a ton of this. I mean, they're filming this presentation today. I think this community has been very open about what works and what doesn't. I found this one on the OpenStack site; I didn't go to Hong Kong, but I found it in the Hong Kong videos. I love this presentation. These guys did a great job, very thorough.

Thanks, everybody. I'm going to go ahead and yield the stage, and if anyone wants to catch me, I'll be just outside the door there. Thanks very much.