Hello, hello. Welcome, everybody. Welcome. I'd say welcome to Atlanta, but I'm guessing you've probably been here more than just this instant, so me welcoming you here is probably a little late in the game, but it's the first time I've seen many of your faces, so welcome to Atlanta. One of the things you may or may not know about me is that I was actually born here at DeKalb County General Hospital. I did not grow up here, but there are some roots, so it's sort of fun to be doing the OpenStack Summit here. Before we get started, I want to let everybody know, if you don't already, that we've got a really fun, awesome book that we put together. Lisa, who's the main author on it, is in the back and, oh, she's dancing around now. So everybody watch Lisa dance. Nope. She'll be in the back, available to give copies out and sign them and so forth, so definitely take a peek at it. There's a particularly snarky illustration on one of the pages that I particularly enjoy. Anyway, give that a look. I've only got 20 minutes and I typically don't get through my talks in the allotted time anyway, so I'm going to jump right in. And I'm going to, in theory, tell you everything you've ever wanted to know about the future of cloud and where it's going. So after this, you'll have all of the secrets and know everything, and that's almost certainly not true. My name is Monty Taylor. I work for Hewlett Packard, as you might be able to figure out from the very large Hewlett Packard logo on the slide and the hp.com after my email address. For those of you who don't know me as well, I also sit on the OpenStack Technical Committee and the OpenStack Foundation Board, and I'm a past PTL of the OpenStack Infra program, where I'm currently a core reviewer. So we do a lot of stuff with OpenStack on a day-to-day basis, and I'm going to talk about a little bit of that today.
But to start off, I'd like to show the stereotypical OpenStack marketing slide, because it's the best marketing slide in the world. It's also more graphically interesting than any of the other slides I'm going to show you, because it turns out I'm not a graphic designer and don't make slides this pretty. As I'm assuming most of you know, since you're here, OpenStack is open-source cloud software. That typically needs more explanation for other audiences, but I'm assuming at this point that most of you know it's software that provides compute, networking, and storage services on top of some hardware to your applications. It's the "your applications" part that is the interesting part to me at this point: that little gray box at the top that looks sort of unimportant, because it's not what we're here to build. We're here to design and decide what we're going to do with OpenStack over the next six months, and I think it's extremely important for us to keep that top box in mind, because the whole reason we have cloud, the whole reason we have hybrid cloud stories and things like that, is in service of your application. At its root, cloud is an abstraction layer, at least to me. It can be a force multiplier. It can allow you to get your thing done quicker and easier. It can allow you to make choices in an agile manner so that you're not, say, putting in a ticket for a new server that's going to come back in three months, and maybe it'll be working and maybe it won't. These things are tricky to deal with. But I think this is also a place where I'm a little bit at odds with the normal things people say. When you start talking to customers (and weirdly enough HP lets me talk to customers, which is distressing, I'm sure, for everybody involved), one of the things we'll typically tell people is: listen, cloud is a bit different.
You need to start thinking about the journey to get your applications ready for cloud. You've got to do things differently because clouds go away, or whatever. And so we talk a lot about the difficulties that enterprise customers might have running their traditional enterprise applications in the cloud. I actually think it's not as bad as all that. To illustrate that, and to illustrate where I think this is going, I'm going to talk about myself, because narcissism is my best quality. I'm going to talk about some of the stuff we do with the OpenStack Infra program, which it turns out is all one big cloud application. So if we're going to think about that gray box at the top of the OpenStack marketing slide, it's a readily available example. And it starts off with a very awful piece of enterprise-quality Java software called Gerrit, which we use as the basis of our entire OpenStack developer workflow system. This is just a screenshot of it, because I couldn't come up with anything else to show you while talking about Gerrit. And the point of this isn't to say things about Gerrit so much as about what we do with it in the project. If you're involved in OpenStack, you will have seen many screens like this and it's nothing new to you. It's a code review system. It has Git repositories baked into it. It has some APIs, and that's all great. The thing about it is that it is a monolithic Java application. It is not intended to run in a cloud. In fact, one of the times we were talking with some of the Gerrit developers about performance problems we'd had, they asked, "What type of machine are you running it on?" We said, "We're running it in a VM in the cloud." And they said, "Oh my gosh, you can't do that. That doesn't make any sense." We're quite happily and successfully running it. In fact, I believe it might be the largest Gerrit in the world. And it is running in a public cloud.
It is a single instance running on a single machine. So if it goes down, it's a bad thing: all of the developer productivity in all of OpenStack stops if this goes down. Luckily, it turns out that doesn't happen as often as the marketing materials might make you think. It turns out you can actually just run this in the cloud. Now, what you need to do is the same thing you need to do with all your other normal IT things. You've got to take backups. Don't just run it in the cloud and expect to never back it up. You've got to do the things you'd do to administer any system. But this is sort of step one: start with a single monolithic Java application running in a cloud, and it's going to work. That's great, but cloud is there to get us past that, to get us to the next step. Cloud is there to allow us to move at a higher velocity, to do things we can't do with just that single Gerrit running in a cloud instance. So we started off with just that Gerrit running, and we had a single Jenkins server connected to it, which I'll also point out is a single monolithic Java application that's basically akin to a normal enterprise app. It's also not particularly well suited for the cloud, and I'll talk about that in a second. That was great three years ago, when we had the entire design summit in a room half the size of this one and all of us were sitting at the same table. It scaled really well for that. But we've sort of hit a point now. We have 355 companies involved in OpenStack. In the last cycle we merged 17,000 patches. A single machine probably isn't the thing to take care of that anymore. We have 2,000 cumulative contributors, over 450 contributors on a month-to-month basis; on average, 466 people contribute to OpenStack every month. And that's a great success for the project.
It's one of those things where, for the people dealing with the developer infrastructure, it would maybe be better if we weren't quite so successful, because all of those people write patches and submit them to our system, and we have to deal with them. This is in fact a graph of people interacting with that monolithic Java application running in the cloud, Gerrit, over the last five days. You can see the weekend in there; apparently we don't work on weekends in OpenStack. Last week we were doing somewhere around 300 code review activities an hour. Even in the lead-up to the summit, we've got people uploading new patches at the rate of about 50 an hour, even in slow times, except on the weekend where it's only about 25 an hour. So this is all well and good. This is a lot of patches, and OpenStack is a really complicated piece of software. So we had to write some code to handle the coordination and management of the integration testing of this, because this isn't just one piece of code; it's a lot of pieces of code, and it's complicated. If you do much with OpenStack development, you'll know that we wrote a piece of software called Zuul. Zuul is the gatekeeper, because it helps us manage the gate. What Zuul does is help make decisions about which jobs we're going to run to test changes that come into the OpenStack developer ecosystem. And in what I think is a happy accident, the graph of Zuul's activity looks a little bit like the crossing of the streams from Ghostbusters. It doesn't look quite that much like it, and there's no Stay Puft Marshmallow Man at the end of the graph. If there were, it would be the best graph ever, but I don't really know what graph produces that. So we'll just assume that this one does.
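The kind of decision Zuul makes can be sketched in miniature. To be clear, this is not Zuul's actual code; the project names, job lists, and the `jobs_for_change` function below are all made up for illustration of the idea: for each proposed change, look up the test jobs its project requires before it may merge.

```python
# A miniature sketch of gate-style job selection -- not Zuul's real
# implementation. Project names and job lists are illustrative.

# Hypothetical per-project job configuration.
PROJECT_JOBS = {
    "openstack/nova": ["pep8", "python27", "tempest-devstack-full"],
    "openstack-infra/zuul": ["pep8", "python27"],
}

def jobs_for_change(project, changed_files):
    """Decide which test jobs a proposed change must pass before merging."""
    # Documentation-only changes can take a lighter pipeline.
    if changed_files and all(f.endswith((".rst", ".md")) for f in changed_files):
        return ["docs-build"]
    # Otherwise run everything configured for the project,
    # falling back to a bare style check for unknown projects.
    return PROJECT_JOBS.get(project, ["pep8"])
```

A scheduler loop would call something like `jobs_for_change` for every event coming off the Gerrit event stream and dispatch the resulting jobs to test workers.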
So this is describing a reasonably complicated amount of effort that has to go into ensuring that our testing system has the capacity to test all of those changes people are putting up, in all the combinations that need to be tested to be accurate. This is where the cloud comes in. Zuul itself is actually also kind of monolithic; it's in Python, and it connects to some message buses, but it's kind of like the normal app you'd write yourself. But we need to build clouds, right? That's what we're doing here. Every time we talk about testing OpenStack, we're talking about actually spinning up an entire cloud, and there are some great slides in the elastic recheck talk Sean Dague is doing later in the week with some numbers on this. We're spinning up something like 18 clouds per patch, and each one of those happens in its own discrete cloud server that we create especially for that purpose and then delete when we're done. This is in fact a lot of cloud activity. Our public cloud providers who give us resources don't always like us, but we try to do our best. Thank you, Ulf, for helping us out there. Because we do things like this: this is the last five days of the output of our nodepool system, which is what keeps nodes available for us to test on. You can see that roughly five days ago, there was a period of time where we were using between 900 and 1,000 VMs at a particular moment. And you can see that it varies quite wildly with the activity in the project; again, the peaks and valleys are there with the evenings. So this is actually much more your stereotypical cloud application. This is custom built for the cloud: I need lots of build resources and I need them in an elastic manner. What could be better suited for the cloud?
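The core idea behind nodepool can also be sketched in a few lines. This is a toy model, not the real nodepool code: keep a target number of booted, ready test nodes on hand, top the pool back up as jobs consume them, and delete nodes rather than reuse them. The class and method names here are invented for illustration.

```python
# Toy model of an elastic node pool -- not the real nodepool code.
# Keep a target number of ready test VMs; launch more as they are
# consumed, and never reuse a node once a job has run on it.

class NodePool:
    def __init__(self, target_ready):
        self.target_ready = target_ready
        self.ready = []      # booted nodes waiting for a test job
        self.in_use = []     # nodes currently running tests
        self._counter = 0    # used only to give nodes unique names

    def reconcile(self):
        """Launch or delete VMs so the ready pool matches the target."""
        while len(self.ready) < self.target_ready:
            self.ready.append("node-%d" % self._counter)
            self._counter += 1
        while len(self.ready) > self.target_ready:
            self.ready.pop()  # surplus nodes get deleted, not parked

    def assign(self):
        """Hand a ready node to a test job; it is deleted when done."""
        node = self.ready.pop()
        self.in_use.append(node)
        return node
```

A periodic loop calling `reconcile()` is what produces the rising-and-falling VM counts in the graph: as patch activity consumes nodes, the pool launches replacements, and in quiet periods it simply stops launching.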
I've got elastic resources, and my usage of them is rising and falling based on demand. Oh my God, it's the exact thing we wrote this for. But this is all tied back into that monolithic Java application we started with. Each of these cloud server activities is ultimately triggered by somebody having an interaction with that one Java server. We talk about hybrid cloud a lot; in my mind, this is actually a hybrid application. We've got traditional applications in here and we've got custom-built-for-the-cloud applications, and they're operating in, dare I say the word, synergy, corporate-word synergy, but something like that. And what's really exciting about this, from our perspective on the OpenStack Infra side, is that this is done in a multi-cloud manner. We have two public clouds providing us resources, and that node pool actually spans them. So I don't have a pool of Rackspace nodes and a pool of HP nodes; I have one pool of nodes that the system is using across both, and it has been running that way in production for a couple of years. So when people tell you that cross-cloud compatibility is a dream, I can tell you that we do it 10,000 times a day, pretty directly. And one of the reasons this is really important: each of those clouds, in and of itself, is a fantastic piece of software, but I don't want the OpenStack infrastructure to have this problem. Sometimes things happen, and when they do, the Netflix people get really unhappy and people can't watch Game of Thrones or whatever it was that happened last time they crashed. Being tied to a single cloud, a single ecosystem, means that you're in a monoculture, which means that if there's a systemic problem somewhere, your application sort of has to just suck it up and deal with it.
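The multi-provider part of that pool is conceptually simple, and worth a tiny sketch. Again, this is illustrative only: the provider names, the quota numbers, and the `pick_provider` function are invented, and the real system's allocation logic is more sophisticated. The point is just that node requests go to whichever healthy provider has spare capacity, so an outage in one cloud degrades the pool instead of stopping it.

```python
# Toy sketch of spreading node launches across cloud providers so the
# pool isn't a monoculture. Provider names and quotas are made up.

PROVIDERS = {"hpcloud": 400, "rackspace": 600}  # max nodes per provider

def pick_provider(current_usage, healthy):
    """Choose the healthy provider with the most spare capacity."""
    candidates = [
        (PROVIDERS[p] - current_usage.get(p, 0), p)
        for p in PROVIDERS
        if p in healthy
    ]
    if not candidates:
        raise RuntimeError("no healthy providers: the monoculture problem")
    spare, provider = max(candidates)
    return provider
```

If one provider drops out of the `healthy` set, every new node request simply lands on the other one, and the application riding on the pool never has to notice.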
Whereas if you can spread your load across multiple things, your application, which is the important thing, can weather more of those events. Then it gets more complicated. We're producing cloud software. So it's great that I can install a cloud in a VM in the public clouds, but what if I want to start testing things a little more deeply? What if I want to test the Ironic project, which boots bare metal? I probably can't boot bare metal in a public cloud, or at least not in any of the ones I've got access to. So there are lots of different characteristics. Even among the public clouds, we get different characteristics: some of them have chosen stability over speed, some have chosen speed over stability, and those are valid choices for an operator to make. As a consumer of those clouds, I can take advantage of understanding some of those semantics to put some of my load in a particular place at a particular time. And there may be features that one of them simply doesn't have, such as bare metal. Ultimately the point is that one of the reasons for having an ecosystem of multiple clouds is that one size in fact doesn't fit all. I need to be really clear, however, as has been made clear to me by product management at HP: I'm not saying that HP condones putting children in boxes. I get in trouble on stage a lot, and that's not the point of this. We're not selling children in boxes. So anyway, because one size doesn't fit everybody, we sometimes need to be able to do more things. And this is one of the reasons we started the TripleO project, because we've already got a whole bunch of things that know how to do elastic things on clouds, or to do orchestration with cloud workloads.
So if I can have a similar environment, a multi-node bare metal environment, where I can do testing similar to what I'm doing in the public clouds for single-node testing, then that's pretty spectacular, right? And it actually gets us to this graph, which is a blown-up version of the node pool usage graph I showed earlier. This is our current node pool usage broken out by provider. You can see that we've got usage on HP and Rackspace, but we also have two different cloud regions that are private cloud instances being run by the community using the TripleO software. So I've now got a dynamic cloud workload spanning public and private cloud instances, and I'm running production workloads on those. This is happening live today. I snapshotted it a half an hour ago, but this is straight out of the graphite.openstack.org graphing server; this is the operational stuff. So it happens. And what we wind up with, ultimately, is something a little bit like this. This is a very simplified version of our architecture; there are many more lines and many more boxes in the real version. Over here on the left, we've got some monolithic applications, some things that are single points of failure. If they go down, it's really bad, so we have to treat them with kid gloves and care for and feed them. In the middle, and I can't talk about this a lot because I'm running out of time already, we originally had a Jenkins server, and it turns out it doesn't scale, and that's fine; we're doing things with it that are evil, and nobody should really be expected to keep up with the evil things we do. So what we did was take it and make it a scale-out solution. We put a Gearman bus onto it, so we now run eight Jenkins masters, not just one.
And each one of those, we've determined, can handle about 100 slaves on average. So we attach those. Our nodepool software, which manages all of those nodes, attaches nodes to the individual masters; it pulls them out of, this is apparently what the clip art for a cloud looks like, by the way, pulls them out of the cloud and attaches them to Jenkins. So we have elastic combined with scalable combined with monolithic to get the entire thing, which is my application. Because at the end of the day, the future of cloud is your applications. The future of cloud is that you don't care about cloud. The future of cloud is that you run your workloads and do what you're doing, using cloud to get it done. And with that, I am done, and I will pass it on to the next people. Thank you very much.