So my name is Brian Aker. I've been working on open source for a couple of decades now, I think, on a bunch of different projects, a bunch of different pieces. For the last two years, this is what I've actually been spending my time on at HP. So a question people ask me, and it will come up since some slides are missing, is: who is HP? Why HP in the first place with regard to this? Don't you guys build printers and stuff?

Well, the thing to know about HP is that if you go back a little in time, HP was the company that actually indemnified Linux against SCO. I don't know how many of you in the audience remember that. There was a period when Linux was going along pretty nicely, and then suddenly SCO stepped in and started to threaten everybody over patent and copyright issues. At the time, HP looked at the market, sat back, and said, you know what? This is a market that needs to exist in the future. So they provided an indemnification that covered Red Hat, that covered SUSE. It wasn't just about HP and exactly what was on an HP server or laptop; it was something they decided to do broadly. The end result is that today HP ships one Linux server every minute. That, by the way, is the number I settled on after two or three people gave me figures. Somebody said one every 15 seconds. Really, one every 15 seconds? Three of you sent me one server a minute, so I'll go with that.

So what happened is that around 2011 I got a phone call, and some guys tried to talk me into coming to their offices. They're like, we'd like you to come and see something; you'll need to sign an NDA. And I'm like, no. You'll really be interested. Not really very interested. OK, we're going to do OpenStack. I'm like, really? So sometime in the summer of 2011 I went to the HP offices, and discovered there was one in downtown Seattle, which is where they were. They showed me all these notes on a board and said, yes, we're installing OpenStack. We're doing this as a public cloud. We'd like you to come work on it here. I was like, is that what you're really doing? Just want to make sure. And they're like, yes. So in August I actually joined HP, and then of course immediately went to the internal site, double-clicked on it, checked the REST interface, and went, yes, that is definitely OpenStack.

So HP had made a giant bet. Because at the time, and Chris talked about this earlier, an ecosystem matters: if it was only Google running Android, it's not really much of an ecosystem. At the time, Rackspace had taken OpenStack and tossed it out there, but there was not an ecosystem yet. There was Rackspace and what they were open sourcing. So by HP coming forward and saying, we are going to create a public cloud based on OpenStack, it actually begins the real ecosystem around this. At this point, 2011, HP starts letting people, anybody with a credit card, sign up and use OpenStack, and at the same time helps to actually start the foundation.

Which really comes back to people asking, well, OK, that's great, but what actually is OpenStack? And what is this whole stack business? Well, to go back in time, remember this? Linux, Apache, MySQL, PHP, Perl, Python: LAMP. For a decade or so, this is what we've been talking about when we talked about what a stack is and what open source is.
This is the thing everybody has lived, breathed, and cut their teeth on. And there have been many, many attempts to mold it differently. There was SAMP. We had WIMP. We had all kinds of things where people would try to swap in new letters, and it was kind of crazy at the time. Every time somebody did it, the reaction was, well, that's not really creating a new stack; you're just trying to insert your product into the concept. That's not a new stack. And sure, there are plenty of stacks: this stack, that stack, everything. But what we hadn't seen yet was something truly different, something a lot more universal. Because think about it: LAMP, awesome, great. But what have we seen since? We've seen Google. We've seen Amazon. We've seen all these companies enter, and where LAMP used to be the stack, now it's something that just runs on top. It's no longer the stack; it's just one component, one piece of it. And that right there should be telling, because really, why not have an open source one? That was the beginning of what we now see as OpenStack.

So we had Nova, Glance, and Swift. Nova is compute: ask for a server, get a server. That's Nova. Glance is the component that supplies the images, the ISOs, to run on those servers. And Swift is object store: I've got an object, I need it stored, I want a URL, I want to fetch it again. Those three pieces were the first pieces that made up OpenStack, and I'll show you in a moment roughly what driving those three looks like.

From those pieces, what do we see? Well, you need a little more than that to actually run a public cloud, so we start to see it grow. You see Horizon; a dashboard enters in. You start to see things like Keystone: federated ID, sorry, not federated, but identity as a service. This is still an extremely, extremely powerful concept, and I find very few people have fully baked it into their consciousness yet: we have a system here that is, as a service, identity. The next piece is Neutron, the network itself as a service. Because why not? It's end to end; let's control the entire piece. You've got multiple racks, everything, so you should be thinking not just about single-host instances but about the network that underlies them.

Now fast forward a couple of years and you start to see all these other pieces: load balancing, database, authentication and accounting, metrics, messaging, orchestration, and that's just a few of them. Today we keep a spreadsheet on this, and there are 32 discrete systems we track that make up OpenStack. Not all of those pieces are new, either; some are old things we know about, like Network Time Protocol. How do we make sure time is synchronized? We need to be able to send out email, send out alerts. So this is all open source, all built up, but it's a pretty large thing. And when you think about it, an open source, end-to-end set of components like this, running across multiple systems: we've only had three, maybe four open source projects that have ever shown off this kind of scale. Which really brings out the need for a whole new kind of testing, because you can't test services like these end to end in the traditional way.
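Going back to those first three pieces for a second, here is a minimal sketch of what they look like from a client's point of view: ask Nova for a server built from a Glance image, then store and fetch an object in Swift. It uses the openstacksdk Python library as a convenience, and every name in it (the cloud name, image, flavor, network, container) is a made-up placeholder rather than anything from HP's cloud.

```python
import openstack

# Authenticate (via Keystone) using a named cloud from clouds.yaml;
# "my-cloud" is a placeholder, not a real deployment.
conn = openstack.connect(cloud="my-cloud")

# Glance: find an image to boot from (all names here are placeholders).
image = conn.compute.find_image("ubuntu-server")
flavor = conn.compute.find_flavor("standard.small")
network = conn.network.find_network("private-net")

# Nova: ask for a server, get a server.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print("server is ACTIVE:", server.id)

# Swift: I've got an object, I need it stored, I want to fetch it again.
conn.object_store.create_container(name="demo-container")
conn.object_store.upload_object(
    container="demo-container",
    name="hello.txt",
    data=b"hello from the object store",
)
print(conn.object_store.download_object("hello.txt", container="demo-container"))
```

Keystone is hiding in there too: the connect call is what authenticates against identity as a service before any of the compute, image, or object store calls happen.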
So, back to testing: you need to be able to test edge to edge, which kind of brings us into the mentality of all this stuff. OpenStack itself is an ecosystem, obviously, because once you have this stack, you've got some software, and how do you actually keep moving it forward? There are a couple hundred companies involved, an awful lot of people, and an awful lot of commits, and the numbers were outdated the moment I put the slide up.

So what actually ties this whole thing together? Well, we can't just take the little pieces and say, Swift works, or Nova works, or libvirt underneath it works, and leave it at that. We have to do testing on a large scale, much larger than anything we've ever seen in open source. Continuous integration and deployment at this scale is not an already-solved problem. This gives you an idea of the daily patch volume. This right here is just the volume of patches coming in; it isn't everything that will actually be committed, it's just where we look and see how the volume goes up and down. So this is everything being sent in, and I think at this point we see one in three patches actually make it through. If you think about that, what does it mean for participation and velocity? This, by the way, is the growth in the number of accepted commits over time. So what you're seeing is participation at a very, very high rate: more and more participation, more and more changes that are accepted, not just stuff thrown over the wall with no idea of the quality, but stuff that's actually tested. Which also gives you an idea of the velocity at which this is happening. It's pretty amazing; I don't know of anything we've ever had that's been quite this big. And if you're really interested in how all the testing pieces work, you should look up Zuul, Z-U-U-L. I'm pretty sure Microsoft has something like it internally, and probably Google does, but it's the first time we've seen a system where whole batches of patches can be tested together at the same time.

So how exactly does HP participate in all this? What can we do? Well, we're a giant enterprise, so for one, we can provide a lot of people, which is kind of handy. In this case we supply people for multiple different projects; we also supply things like legal. We supply people to the CI team. We provide people for documentation. We provide people across the board for OpenStack. We also do some bits of development. For instance, Trove: database management as a service. This is one of the things we work on that's being incubated today. The idea is that PaaS should not just be left to, well, maybe we'll find some PaaS stuff somewhere. Integrated PaaS pretty much needs to be done within OpenStack; otherwise you end up with really bad impedance mismatches. So, database management as a service: a RESTful API, give me a database, give me a cluster of databases. It supports MySQL, supports Postgres, with a bunch of others coming, even some commercial ones being added. So that's one of the early services.

Libra: load balancer as a service. Load balancers. Turns out you need them to run websites. What this does is provision HAProxy instances and sit there managing a pool of them.
As a user comes in and needs another load balancer, they make another request, and what happens? They get another load balancer, and it gets fed back into the pool when they're done. It's crazy. You know what load balancers used to cost us? In Slashdot terms, since Jeff's out here, this might warm his heart: we had two load balancers, and we jokingly referred to them at one point as Lamborghini number one and Lamborghini number two, because that was the actual cost of the damn things. Think about where this is going now. Libra is providing what used to be a really high-end service as a commodity piece. And the software, by the way, has even been picked up by at least one other cloud vendor. Which is one of the things Chris also mentioned: does your software end up in a competitor's stuff? Well, I guess it does. That's fine.

A few other things. DNS as a service. We work on TripleO, which is the installer, because it's kind of nice to be able to install the software; handy that way. Ironic: bare metal servers as a service. Very few people seem to understand it at this point, but it's powerful, especially for any of you who have ever spent your time installing VMware only to put a single virtual image on top of it, just because you're trying to manage that one image. Well, scrap that. Think about it: data center power utilization on the planet is something like 7%, and you're losing around 40% of your capacity just to put an image on top of a machine that really only ever runs one image. Forget it. Heat: orchestration as a service. And we supply patches back to Nova, Glance, Swift, Horizon, pretty much anything inside of OpenStack; we have people pushing patches back in everywhere.

The thing I like about looking at OpenStack and tinkering with it is that there is so much more here than what often meets the eye. And we're really just inching into the age where more than a few of us have actually installed OpenStack. OpenStack has become simpler, and the number of folks actually running OpenStack deployments is getting larger every day. So we've started seeing people learn more about how the tool works.

Something like this: here's a standard kind of deployment. You've got Keystone, you've got Nova, you've got Trove, you've got Swift. You can see where Trove itself makes use of Keystone directly, makes use of Nova for doing deployments, and technically uses Swift for handling where its backups are stored. This is a service-oriented architecture, and all these little pieces can be molded together to do different things. And this is where we see people starting to get creative with installations. For instance, and this is a thing we actually do: we have Trove, we have Nova, but we can also create private Novas. We can have multiple Novas running. In our case, what do we do? We run a Nova in a private environment, just because we have a select set of hardware that we run the database as a service on. We're finding this to be a pretty useful thing, and obviously the other cloud vendors are doing this sort of thing too. The architecture is friendly enough that you can say, I want to take a Nova here and put it on top of a very particular set of hardware.
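To make the Trove piece concrete, here is a rough sketch of what "give me a database" looks like over the REST API: get a token from Keystone, then ask Trove for an instance, and behind that one call Trove leans on Nova for the instance and Swift for backups. The endpoints, tenant, flavor ID, and names below are all invented placeholders, not HP's actual values.

```python
import json
import requests

KEYSTONE = "http://keystone.example.com:5000/v2.0"  # placeholder identity endpoint
TROVE = "http://trove.example.com:8779/v1.0"        # placeholder database endpoint

# Identity as a service: ask Keystone for a token.
auth = {
    "auth": {
        "tenantName": "demo-tenant",  # made-up tenant
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}
access = requests.post(KEYSTONE + "/tokens", json=auth).json()["access"]
token = access["token"]["id"]
tenant_id = access["token"]["tenant"]["id"]

# Database as a service: "give me a database."
instance = {
    "instance": {
        "name": "my-first-db",
        "flavorRef": "1001",              # made-up flavor id
        "volume": {"size": 5},            # 5 GB data volume
        "databases": [{"name": "appdata"}],
        "users": [
            {"name": "app", "password": "apppass",
             "databases": [{"name": "appdata"}]}
        ],
    }
}
resp = requests.post(
    TROVE + "/" + tenant_id + "/instances",
    headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    data=json.dumps(instance),
)
print(resp.status_code, resp.json()["instance"]["id"])
```

From the user's side that is the whole interaction, and the private Nova on a particular set of hardware is exactly what a request like this lands on underneath for us.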
Or maybe, for a given Nova, I don't want to run with virtualization at all; I want to use containers, a trick we do, for less overhead. These little pieces let us start moving things around and deciding what we want from them. The architecture itself is pretty Lego-like: you can start tinkering these things together and come up with different systems. So that's an example, and this is the kind of stuff we're learning and, as we learn it, trying to pass along to others.

Which brings up the question: what do we actually learn from running a public cloud? One of the things I think we add, us and Rackspace, is simply answering: how does this thing actually scale? Where are the bottlenecks? One of the first times we ever had it up and running over a certain size, I remember watching it melt down. This was about a year and a half ago. So we started going in and asking, OK, why is this thing melting down? What's the problem? And what did we find? Oh, bad indexes on the database; Nova had some. Let's go get that fixed. So what do we do? We put the fixes together as patches and sent them back in. This is something public cloud vendors in particular can do, because we are running the stuff at such scale, and it is something we can give back. We did a recent run with Keystone. What did we find? It turns out nobody had considered what happens when tokens build up because you're creating thousands and thousands of instances at once, all generating tokens. Well, that has to be cleaned up. We find these sorts of bugs and we pass the information back, because that's one of the things we can do for the project.

Other things: we're trying to figure out how to apply our philosophy. It's something I push my engineers towards: find a common design philosophy, think about what these things are, then push that back into OpenStack and see whether others will share in some of our ideas. Continuous integration is thankfully the norm now; this was still a debate with folks a couple of years ago. I came to the whole conclusion about CI more than a decade ago, when I first joined MySQL as an employee and discovered: so, we build Windows binaries, right? But we don't actually test them. So we have no idea whether this stuff actually works. OK, we've got to fix that. Thankfully this has become kind of like version control. Once upon a time I would tell audiences, you must use version control, do you know what it is? And half the audience would raise their hands: we have no idea what this version control thing is. Thankfully, that's not the case anymore.

SSH into production is considered harmful. I can go into more of that, along with data security, encryption at the tenant level, and the value of open source being auditable. This is one we've learned along the way and are trying to push back: don't ever allow SSH into your environments. No shell, nothing. One of the things about Trove is that when Trove is running all those database instances, there is no SSH. There's no shell, there's no nothing. You're not going to get into those things. They are pretty little toasters, and that's it. Why is that? Well, you end up with these situations where some operator or developer comes back to you and says, well, everything was working. But what was that hiccup the other day?
Well, I just had to log in, and I run this command every so often, and it works. You did what? Hold it. You have to log in? What happens when you go on vacation, or get hit by a car? The process has to be restarted. All these things make for better software. The software needs to just work, not need maintenance. It needs to run. I just deleted the logs. Oh, the file system filled up. I'm sure many people in the audience have been a victim of, well, we had this one guy, and nobody knew that every three weeks he went in and deleted these logs that had just been growing. Forcing people off SSH, off the boxes, means this stuff has to get fixed, it has to go through a CI process, and it works better. Humans are essentially sources of bad entropy. A human logs into a box, it's over; just forget it.

We look at this stuff like SOC compliance. The day somebody said, oh, we definitely make sure everything is SOC-compliant: I love SOC compliance. If you're not doing it today on your own systems, just do it. It has so many rewarding effects; it forces you through best practices. And for any of you in the audience thinking about debugging, this is the kind of stuff we're trying to push back on and think about. As you operationalize OpenStack, you need to think about the long-term pieces. You don't have people logging in; you want centralized logging. You want centralized logging anyway, because what's the alternative, giving your support people shell access? That sounds like a bad idea. Kick the box from the fleet, that whole idea. If for some reason a human does need to go into a machine, what do you do? You take the machine out of rotation, you let them in, and then you format it and reinstall. This is great. It means people don't want to go into the system very often, because they're going to have to deal with the reformat too. And we get to find out whether the system is actually HA or not. OK, you got access to the database? We just pulled it out of rotation. Let's see, does Galera work? Excellent, it does. These are best practices that get people to think about things.

Provide snapshots. Run as many things as possible in VMs, the way I was showing you can use Nova: use Nova to deploy some of your services on top of itself. Give people snapshots. Allow them to look at those, but don't really allow logins. What we're trying to do right now, as we go into the software, is figure out how we can build out each of the services so they don't require these things in the first place. How do we make things into toasters?

Part of this involves security. We don't really want people logging into systems, because once they're in those systems, well, what do they have access to? And that also means: what can other people access? This is a new thought process I'm trying to get more open source developers into: when we're developing software, if someone can log in, if your system requires some kind of login, we have already kind of defeated privacy at that moment. So how do we rethink these things? Don't think about people logging in. As we develop software, what we're trying to do is think about how we do this a little differently. No one should be able to log into someone else's database, period. It's their database. Your vendor should not be able to log into it. It's your data; you don't want them to do it.
In fact, not only do you not want them to do it, you want the system built so they can't do it in the first place. And by the way, it saves a lot of headache if you're the vendor and you can say, sorry, I'd love to give you access to that data, but I don't have access to it. It just happens to run over here on this machine; I don't know. Would you like the encrypted, randomized bits that exist on our side? Things like that. Tenant data should be visible only to the owner. This is one of the things you learn as a public cloud operator when you think about this piece: data that belongs to the tenant, make it visible only to the tenant. That's how we should be doing encryption today.

Another thing to be thinking about in all this, something I've been pushing, is that we look at copyright, and we've been thinking about copyright for quite some time. But it's not only a changelog of who changed what, and it's not only about copyright in the narrow sense; I mean, did we actually get the correct sign-offs and all that? When we go back through history and have a question about something, we can go back and analyze all the patches and understand where things came from. And I can tell you, that is a big deal right now for everybody who's been doing open source: being able to go back and analyze where did it come from, what did we think, what did it mean, what did it mean in aggregate. So that's an important piece.

Anyway, those are a few notes I've got right now. As far as us actually running OpenStack, it gives you a little bit of a sense of what it means for HP to be doing this. It's one of our big bets. And for me personally, when I look at it, this is the next generation of the stack, and it requires, in my opinion, that there actually be an open source one. So as we go through it, we have to look at what new challenges exist, what's different about the world we live in today, and how we build a set of best practices around this and make it a really secure system. So anyway, thank you. If you've got any interest at all in the stuff I write, there's a blog, there's Twitter. I'm constantly trying to learn more about this every day, along with HP, and hopefully we publish enough that others can pick up on our lessons. So thank you very much.