We might as well start right on time. So good morning, everyone. I see some seats here and there for the people in the back, if you want to sit in the front row. My name is Yuri Budnik. I work for RightScale. For part of the presentation, Jacob is going to come onto the stage and help me as well. He's an architect with our friends from Rackspace. I've been with RightScale for four and a half years, so I've had the opportunity to see the public cloud starting early on, and then a lot of the private cloud, including the OpenStack things that we've been seeing lately. And I'm going to talk to you about this idea of the difference between virtualization and a cloud, and why it really matters. I'm going to show you some slides, but I'm mostly going to go into a demo. If you want to ask me questions during the demo, that tends to be more fun than saving them for the end. It's a bigger crowd, so it might be a little bit trickier to keep track of that. Maybe this is not working. So for my benefit, if you guys don't mind, of the people that are here in the room, how many are IT managers, or working in IT in some capacity, and are looking at maybe deploying OpenStack and want to understand what that means and see if they want to undertake that? So a few. How many of the people in the room are running some sort of virtualization in their data center today, VMware or anything else like that? So a lot of you. OK, let me see, I had another question that I wanted to ask. How many are running a private cloud already, whether it's not production, but maybe they're testing something or they're looking at using it for their developers? So a small number. OK, so that sort of fits with what I was guessing would be in the room. So the differences seem a little bit nuanced, and they're actually hard to define if you get very deeply into semantics. But what I hope to show you, in the demonstration especially, is that the difference comes into play in practice.
What you can do and the results that you can drive matter more than defining things in a very specific and exacting way. But let's make no mistake: cloud computing is possible because of virtualization. In many ways, it's a logical extension of what started happening with virtualization; it's sort of the next step. With virtualization, we saw that the advancements in hardware were outpacing software's ability to take advantage of them. Really powerful CPUs, lots of memory; the applications were not keeping up, so lots and lots of idle servers. And companies started putting in virtualization, and I learned yesterday that there's barely a dent in the large enterprises that have actually migrated to that, by the way. I thought everybody was doing that already. Turns out not. But it was about server consolidation for economic savings. And once you do that, you start discovering some benefits that you get by automating some things. That's why I talk about a matter of degree, because that's really where the cloud comes into play. But it's the differences in how you automate those things that can make a difference. So I think going from data centers to virtualization was more of an evolutionary change, doing the same things just a little faster, because you got that hypervisor layer that gives you that abstraction, so you can move things around. But what I hope to show you here is that the jump to cloud computing, with automation, is actually a fundamental change in how you do things. So maybe here's a good example to outline the difference between when you've got virtualization and when you have cloud. Let's do a hypothetical one here. Let's say you take a bunch of virtualized servers you have and you move them to a public cloud. What have you accomplished? Well, you're now paying by the hour for those machines. You're running in a shared environment, on probably less powerful hardware, and maybe a little bit more likely to fail.
So not a lot of advantages there. So obviously there's a difference between "I got this virtualized" and "I got that cloud," because if you just move it, there's no actual advantage. So here's what I really mean when we're talking about cloud computing. It's really about creating this self-service automation that makes it possible for the people, the clients of IT, to dramatically collapse the amount of time it takes for them to deploy the environment they need. And as I said a moment ago, it seems sort of nuanced, because there are a lot of these things that you can do with virtualization as well. So I'll show in the demonstration what I really mean. But it's not "I took it from six weeks to one week"; it's "I can provision things now, within the hour, in the next 20 minutes." That's what I mean when I say lightning fast. And by the way, for those of us that have been at this for a while, there's an interesting side effect that maybe you guys have encountered: when you put that much power in the hands of the different people in your different business units, you realize, boy, I really need to keep track now of who's launching what and how many resources they're consuming. Turns out cost tracking is really important, because that can get way out of hand with public or private clouds once you put that power into those people's hands. So in my view, it fundamentally changes the role of IT. It's no longer a request organization that's fulfilling tickets. It's about being a service-oriented organization that creates preconfigured assets that the different clients of IT can deploy at will. So it means IT can still maintain control, have the environments comply with the policies they need for security and other business reasons, but they're no longer a gatekeeper, an organization that mostly says no and you gotta wait, talk to them again in six weeks or in two months; you can deploy now as you want to.
So hopefully, if you do this right, you have an IT infrastructure, an internal cloud, that feels like this, and like this, but not like this. And by the way, those guys are not doing acrobatics. They have a flat tire and they need to get to a gas station somewhere, and the guy's holding on the front wheel with a wrench. So anybody that's worked in a data center knows that sometimes you're gonna do things with duct tape and chicken wire and bubblegum, and they're not fun. So hopefully it feels a little bit like this. You've got developers that need servers during the day. They don't need to be running them at night. So it'd be useful if they just sort of released them, or they terminate and back up automatically when they don't want them. But then you need to run your quarter-end. Why don't you reuse those same machines? But then it turns out that there's a new security patch you need to deploy. You wanna throw up some environments and smoke test that thing before you do it. And then it turns out that that's not working. So now you gotta rework that thing and try different ways. So you're developing and deploying and launching all these different environments at different speeds. And now you wanna back out of it. And I think the next thing we're gonna do is we're gonna run through each other. If you can create an infrastructure that behaves this well, you know you're doing the job right. By the way, that's a Japanese synchronized walking team. So I'd like to pass the presentation over to Jacob, who's gonna talk a little bit about the specifics of OpenStack, and then I'll come back. When you start to deploy your applications into an OpenStack cloud, you have to come in and consume all the different bits that you need to make your application work. So your application now has to know about where to go and get its compute resources, where to go and get its networking, and what those pieces are capable of.
And for a lot of folks, that requires thinking differently about how they're deploying their applications today versus how they were doing it yesterday. There's a big difference in the way that you think about deploying an application, making it highly available, making it scalable, when you make that transition from the virtualization world to the cloud world. For many people, the virtualization world has always meant VMware. And whether it's meant VMware or Xen or whatever else for you, you've probably leveraged something that looks like this, where, when you made the switch from having servers in racks running an operating system on bare metal to consolidating those servers into virtual machines, taking all those servers and making them containers, everything still kind of looked the same. Yuri mentioned the fact that it was an evolutionary step. There was not a whole lot that happened during that change. Maybe you saved some money on power, maybe you didn't have to buy as many boxes, whatever the case may have been. Now we're looking at this world where your application not only knows about the infrastructure that it's running on, but it's running on infrastructure that may be spread out across more than just your East Coast and West Coast data centers; it's spread out across clouds around the world. So we will keep coming back to this theme of application-driven, application focus versus infrastructure focus. That's what I meant specifically when I was talking about a fundamental change. And I'm about to jump into the demo in a couple more slides, but let me give you an analogy that maybe will help you see some of that. Let's think about TCP/IP. It's really simple because it does just one job: send IP packets where they need to go, with the addressing system and the transmission control protocol. It doesn't have all these complex features on it. It just makes sure the packets get to where they need to go.
Maybe it changes the speed, or retransmits, or changes the window sizes, for those of you that are more on the networking side. And then other people on top of it build things like ICMP and quality of service and HTTP and DNS. Those things are not baked into that layer of the network. They're additional services that run above and beyond that. So that's what we're talking about with these different approaches: have an infrastructure that's really simple, that delivers your virtual machines when you need them, and put the intelligence into the application, so it can request what it needs from the infrastructure and configure itself as it goes. And how do you do that? Well, there are a couple of things. I'll start with this concept of server templates. Years ago, I think it was five and a half years ago, now six years, when we started doing this, and back then it was public cloud only, we learned that it was really useful to be able to spin up a VM quickly, but it turns out it needs to be slightly different each time, most of the time anyway. And you start getting into this VM sprawl situation, so we came up with this concept of server templates. And it's a nice analogy: a VM is like a CD; make as many copies as you want, but they're gonna be identical. You can't change a single bit. A server template is like a playlist: you can add or remove songs, you can change the order they're in. So in a server template there are two basic components. One is a VM, but in the virtual machine we now put only the operating system; in fact, just enough for us to boot, that's it. And we abstract out all the applications and configuration things you were going to bake into the image. Instead, those will now be installed at boot time. We happen to be using Chef, by the way, so we're leveraging an open source technology there.
The idea is you have an abstraction layer now between the VM, the virtual machine, which is how that server communicates with the cloud it's running on, the resource pool, and the business logic, the scripts that will run at boot time that configure that machine to be what it needs to be. It's like DNA that builds a server as you need it. So if you were booting up a server for MySQL, we call it a RightImage, but it really just means a virtual machine. You send an API call to the Rackspace Open Cloud, you say give me this Ubuntu image that only has the operating system; it does nothing. And once it's running, you're quite literally installing and configuring MySQL, in this case. And because you do that, it gives you a tremendous amount of flexibility and configurability, and I'll show some of the things during the demo. Now, when it comes to the image itself, we have this concept of a multi-cloud image: having images for all the different clouds that a particular server might wanna talk to, and grouping them together into a multi-cloud image, so that you can build a server template, again for MySQL, on top of this multi-cloud image that can quite literally instantiate in your own private OpenStack cloud, in Rackspace Dallas, in Rackspace London, and in other public clouds as you may want, without changing the business logic of what that server itself does, by virtue of being able to configure itself at boot time. Now, that's one. The other thing: leveraging this idea of having OpenStack as a very efficient way to provision the virtual machines, the servers that you need, we add a layer on top of it, as you can see in this diagram, an orchestration system, a management control for the cloud resources, so we can automate what gets deployed there; from a user perspective, who's got access to what; from a monitoring perspective, what's happening to the servers; from an alerting perspective, what do I do when certain conditions are met, and react accordingly?
That's what makes it possible for the architecture to be application-driven instead of infrastructure-driven, finding ways to self-heal: the application noticing, hey, a server failed, I need to launch something to replace it, or I need to move something because I'm running out of capacity on this machine, I need to add more, perhaps in an application load-balancing pool. So let me switch to the demo here, and everybody please cross your fingers, because we've been having some fun with the internet access part. So let's see how we're doing. I opened up a whole bunch of tabs; oops, let me switch to mirroring to do this part of it. I opened up a bunch of tabs in advance, hoping that if something doesn't go well, we'll be able to leverage off of that. Can everybody see the fonts okay at that size? So what we're looking at is the management system, the control system, and what you can see there on the left is we have the credentials so we can authenticate into different Rackspace clouds and my own private clouds, and make API calls and stand up machines and take actions on those clouds. From one system, a single pane of glass, to control many different resources. On the right side, we have this concept of deployments. It's not super useful to just have a long alphabetical list of servers, so we group them into a container that we call a deployment, so you can manage systems, not individual machines. And it's really nice, with red and green dots, to be able to see what's running and what isn't. So for example, I have here my production environment, my hypothetical production environment that I'm running here. I already opened the tab with it so that I can show you what's going on here. So it's simple, and you can see the name of the deployment up here. It's a simple three-tier architecture, although here it's not in order. I have the database here, a couple of load balancers for redundancy, and what we call an array with the application servers.
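To make the deployment idea concrete before the walkthrough continues, here is a minimal sketch in Python. None of these field names are RightScale's actual API; the point is just the pattern described above: a deployment is a named group of servers managed as one system, with a status summary like the red and green dots.

```python
# Hypothetical sketch of the "deployment" grouping: manage a named set of
# servers as one system rather than a flat alphabetical list of machines.
# Field names ("state", "nickname", etc.) are invented for illustration.

def deployment_status(deployment):
    """Summarize a deployment the way the red/green dots do."""
    states = [s["state"] for s in deployment["servers"]]
    running = states.count("operational")
    return {
        "name": deployment["name"],
        "running": running,
        "total": len(states),
        "all_green": running == len(states),
    }

production = {
    "name": "Production",
    "servers": [
        {"nickname": "lb-1", "state": "operational"},
        {"nickname": "lb-2", "state": "operational"},
        {"nickname": "db-master", "state": "operational"},
        {"nickname": "app-1", "state": "booting"},
    ],
}
```

The useful property is that actions (launch, terminate, report) can then target the deployment as a unit instead of each machine by name.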
Here's a logical diagram of what that looks like. Your classic three-tier architecture. We have this term here, an array, so that you have a way to auto-provision or deprovision servers based on certain conditions. Notice here I have a master-slave database, which is not what I have going on in this particular environment. It's running in my own private cloud. So I have a nice simple environment that can serve as a good example of the kind of things you might wanna do. Well, let's say that I wanna have a DR environment running in a public cloud. It's a nice advantage to be able to have it there and not pay for the machines unless I'm actually using them. So how does this idea of server templates and cloud management come into play to bring me some benefits, to make it a little bit easier for me? Well, if I go back to my main dashboard, you'll notice I have another environment here that I call disaster recovery. And let me just switch to the tab where I have that. Oops, it's this one. So this, as you can see here, happens to be running in the Dallas data center for Rackspace. But notice that only the database is running; the load balancers are inactive. So this is a slave to my production database. It's synchronized with it. I'm pre-positioning my data in case I need to launch it. So if I was having a major failure in my internal data center and I needed to roll over somewhere else, a failure significant enough that I cannot recover even with a bunch of automation that I haven't quite shown you yet, I can roll over to a separate location. I can select the load balancers I have here. I can do several things to them, but I'm gonna launch them: apply to selected. It's gonna take a couple of minutes and those machines will launch, although that's not really what's germane here. What's really interesting is that the database server configuration is the same for both machines.
The one in my own data center, which happens to be OpenStack, and the one running in Rackspace, because I'm using server templates that have those multi-cloud images underneath them. So the underlying infrastructure could be dramatically different. Different hardware, different networking configuration. They could be nothing alike. Even the APIs of the two different clouds could be different themselves. We're normalizing that communication across them. In this case, it's not as difficult an example because they're both OpenStack, but they don't need to be both OpenStack for this kind of technology to work. Let me show you how some of this stuff looks in action. So I'm going to click over here to add another column to my view. Oops. So you can see the server templates that I keep mentioning here. You notice that the load balancer is based on a load balancer server template and the database on a different one. So let me, I actually already have the tab open for the database server template too. And I meant to go here. So you can see up here that it's the Database Manager server template. What you see here is the multi-cloud image for this particular machine. This is really, as I explained, the magic that makes it possible to communicate. Sometimes I call them resource pools to not say clouds over and over and over. We make that possible by starting with a bunch of baseline images. That's the list that you see there. And essentially, we have here CentOS 6.3, RHEL 6.3, and Ubuntu 12.04. And notice that for each of those, you can see which clouds they map into, because obviously those binaries are not identical. But what we've done there is to make sure that the content of all the images is the same.
The version of the OS, the patches that they might have; we put in an agent that we call RightLink, and the version of RightLink that they have, they all have to be the same, so that when the machine itself instantiates, we're starting from the same baseline no matter which of those clouds we're launching from. So if I make an API call to Rackspace Dallas or to Rackspace London, and remember, as I said a moment ago, we put in only enough operating system to boot, just enough OS to boot, I know that I'm gonna get the exact same version of the OS, with the same patches, with the same agent that I can talk to. We normalize that communication across all of them. Now, we do a ton of heavy lifting creating all those images for all the different clouds that we work on; that's the foundational part of that. And then if I go over to this tab here, these are the scripts that actually build the machine to be what it needs to be. These happen to be Chef recipes. So some of the stuff is pretty mundane, maybe perhaps even boring kind of stuff, as you can see with the stuff over here. But it gets interesting when you look at installing MySQL, configuring MySQL. There's a parameter in there, for example, where you can tell that node: are you the slave or the master? And what's the host name or the DNS name of the other machine, and the username and password, so you can start synchronization without having to go in and SSH into the machine and do it all manually. If I was launching this machine, I would see fields where I could put in that other information. And again, remember, none of that is dependent on exactly where you're launching it, because you're abstracting that logic above how it communicates with that particular cloud.
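The server-template and multi-cloud-image ideas described above can be sketched in a few lines of Python. To be clear, none of these class names, image identifiers, or script names are RightScale's actual API; this is just the pattern: a multi-cloud image maps each cloud to an equivalent minimal base image, and boot-time scripts plus inputs decide what the server becomes.

```python
# Illustrative sketch only, not RightScale's real API: a server template is
# a multi-cloud image (cloud -> equivalent base image) plus an ordered list
# of boot scripts with default parameters that inputs can override.

class ServerTemplate:
    def __init__(self, name, multi_cloud_image, boot_scripts):
        self.name = name
        self.multi_cloud_image = multi_cloud_image   # cloud -> image id
        self.boot_scripts = boot_scripts             # [(script, defaults)]

    def launch_plan(self, cloud, inputs):
        """Resolve the base image for this cloud and bind inputs to each
        boot script; launch-time inputs override the script defaults."""
        return {
            "cloud": cloud,
            "image": self.multi_cloud_image[cloud],
            "steps": [(script, {**defaults, **inputs})
                      for script, defaults in self.boot_scripts],
        }

mysql_template = ServerTemplate(
    name="Database Manager (MySQL)",
    multi_cloud_image={                    # hypothetical image identifiers
        "private-openstack": "ubuntu-12.04-minimal-local",
        "rackspace-dallas": "ubuntu-12.04-minimal-dfw",
        "rackspace-london": "ubuntu-12.04-minimal-lon",
    },
    boot_scripts=[
        ("install_mysql", {"version": "5.5"}),
        ("configure_replication", {"role": "master", "peer_host": None}),
    ],
)

# The same template, unchanged, can launch a DR slave in a public cloud:
dr_plan = mysql_template.launch_plan(
    "rackspace-dallas",
    {"role": "slave", "peer_host": "db-master.company.com"})
```

The business logic (the boot scripts) never changes between clouds; only the lookup into the multi-cloud image does, which is the abstraction the talk keeps returning to.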
And I keep talking about boot time, but there are also lots of useful things you might wanna do operationally: snapshotting the database, or backing up certain things that many regular users that are not DBAs might not wanna have to know, that you want to encode into operational scripts, to make it easier to do without worrying so much about making mistakes or fat-fingering a command and messing something up. And by the way, there's also this idea of decommissioning scripts. A little less useful on a database, but imagine an application server where, when you're shutting it down, you want to let the load balancer know that it's leaving the load-balancing pool, or disconnect from the database, and such things. So that's how the machines themselves get configured. Now let me go back and look at my, let me see, where was I gonna go here? Looking at my running environment, give me one second, I forgot where I said I wanted to go with the presentation. So, oh yeah, sorry, I've been going in different directions with what to cover here. So let's look now at my production environment, and I think I have it on this one. Let's look now at the load balancer. So I just opened the tab for the load balancer and I'm gonna look at the monitoring. In terms of the applications, it's about looking at what's going on and being able to address the infrastructure and make decisions based on that. So we happen to be running an agent, RightLink, in the machine, so we can monitor what's happening and collect information about what's going on. We also use collectd, an open source project; it's got a plug-in architecture, so if it's not a hardware metric or process thing that we're picking up, you can write your own collectors to grab that information. What's interesting about that, and I think, am I looking at the wrong server now? Oh yeah, this is the right one. So I'm collecting information about what's going on on the machine itself.
Now I can create alerts that can take action depending on what's going on in the machine. So for example, is Apache running properly? How many processes does Apache have? And what do I need to do if that's not going well? Or is the machine's CPU being overloaded? What should happen if a machine is being overloaded? So here's a really simple example: is Apache running at all, and what should I do in that case? And let me actually run this one. Oops, excuse me, I'm having some trouble here with my machine. Oh, this is what I wanted to do, I'm sorry. So I can create some simple rules. So if I detect that Apache is not running at that moment, what should I do? I can have it trigger a particular script that perhaps restarts Apache and sends me a warning via email about what's going on. Or I can make it even more intricate. I can say, well, have this repeat five times, and after 10 minutes I want to add another escalation, which happens in another 10 minutes, that sends me an email. And I can create exactly the kind of message that I want to send there. So I can have it perform a particular script if it doesn't work. I didn't add that here, but I could have it, after those five tries, restart the virtual machine if that's not working. And if that itself is also not working, then I can have it send me a warning email to let me know what is happening with the environment. So that's how you see the particular environments noticing what's happening within them and requesting things of the infrastructure: restart this machine, launch a different machine. Let me show you a different example with the alerts. What if a particular machine that I have here, so notice here I have two operational servers. That's what you see here in this array: two machines running.
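The escalation chain just described, script first, then a machine restart, then a final warning, can be sketched as follows. This is not the actual RightScale alert engine, just the logic pattern, with the thresholds and action names invented for illustration.

```python
# Illustrative sketch of alert escalation: a failing "is Apache running?"
# check triggers a remediation script first, then harsher actions as the
# failure persists. Thresholds and action names are hypothetical.

def escalation_actions(consecutive_failures):
    """Return the actions to take after N consecutive check failures."""
    actions = []
    if consecutive_failures >= 1:
        # first line of defense: try to restart the service and warn
        actions.append("run_script:restart_apache")
        actions.append("email:warning")
    if consecutive_failures >= 5:
        # after five tries, restart the whole virtual machine
        actions.append("reboot_server")
    if consecutive_failures >= 6:
        # if even that doesn't clear it, send a final warning email
        actions.append("email:critical")
    return actions
```

The point is that the environment itself decides what to request from the infrastructure (restart this machine, launch another one) rather than waiting for an operator to notice.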
So if the hardware fails on one of them and it's no longer running, most of the requests, all of the requests, are gonna be transferred to only the other server, and it's gonna cause the CPU load to spike. So that's where I'm tracking what's happening with the CPU load on the particular application servers. The application itself notices what's going on, and then I can say, you know what? Auto-provision more application servers into that load-balancing tier, because I'm noticing that they're running too hot. They're running too heavy. And because they're based on the server template architecture that I was mentioning a moment ago, they can be auto-configured and auto-provisioned. Right, when those machines come up, they will join the load-balancing pool by telling the load balancer that they're there, and then they will also connect to the database and be able to start transacting. That's the difference between doing some of that stuff manually and being able to do it completely automatically. Let me see, so this is the other part that I wanted to show you that gets really interesting. So I'm back in the environment that I'm denoting here as my production environment. Let's say that we've been working on a new version that we wanna test. I keep talking about the server templates and how they can self-contextualize when they boot, how they configure themselves. You can pass values into the scripts before you launch a particular machine, so you can make decisions about what sort of conditions are gonna be set up. We call those inputs, and we group these inputs into the categories that you see here, so that when I'm going to set up a particular environment, I can give it those parameters before it goes. In a sense, it's a configuration database of all the scripts that are running in all the servers that are part of a particular environment. That's what I meant earlier when I was talking about managing at the system level, not managing individual machines.
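The inputs idea, a deployment-level configuration database whose values cascade to every server, might look like this in a minimal sketch. The input names and the cascade rule shown here are illustrative assumptions, not RightScale's actual semantics.

```python
# Sketch of "inputs" as a deployment-level configuration database. The
# input names and override behavior are invented for illustration:
# deployment-level values cascade to every server, and an individual
# server can override them.

def effective_inputs(deployment_inputs, server_overrides=None):
    """Server-level values win over deployment-level values."""
    return {**deployment_inputs, **(server_overrides or {})}

production_inputs = {
    "db/host": "db.company.com",        # hypothetical input names
    "db/replication_user": "repl",
    "app/environment": "production",
}

# Most servers just inherit the deployment's inputs...
app_server_config = effective_inputs(production_inputs)

# ...while one special-purpose server overrides a single value.
reporting_server = effective_inputs(production_inputs,
                                    {"app/environment": "reporting"})
```

Changing `db/host` once at the deployment level then reaches every server, which is what "managing the system, not the machines" means in practice.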
You can make a configuration change about what database you want your application servers to connect to, and it'll apply to all the machines in there, without having to go into them one by one, and also without having to make a change to a VM anywhere. So that means that you can do things like this. If you're running this particular environment and you wanna clone it to make a staging environment for a smoke test or some sort of QA, you can quite literally clone the complete environment with just the click of a button that I have over here. And it's gonna ask me if I really wanna do that, and I'm going to say yes. And in about as long as it takes me to say it, it'll clone the complete environment. And because I've been making backups of my databases, if I were to start this environment, it will actually mount the storage from the latest, freshest backup that I've done of my database. So I can actually clone a running environment without stopping it, without having to do it at two in the morning, very quickly. Now, this is identical in every way to the one that I had in production. So before launching anything here, I would go to inputs, I would click on edit, and I would make some changes so that I don't have the new application servers that I'm running in the staging environment trying to connect to production. I would go down to my database category, and over here, did I not click edit? And over here, I would, oh, this is why. I would have it connect to qa.companydb.company.com, you know, whatever name that refers to. And just like that, I would make a few configuration changes that are dynamic. Once I have that, I click save, and I'm gonna cancel out of here. I select my servers, I launch them, and I have a staging environment that I made in moments, identical to my production one. So that's on the management side. I kept talking about having a self-service architecture.
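The clone-and-repoint workflow just shown can be sketched like this. The structure and field names are illustrative assumptions; the hostname is the one from the demo.

```python
# Sketch of cloning a deployment and repointing its inputs before launch.
# Deployment structure and field names are invented for illustration.
import copy

def clone_deployment(deployment, new_name, input_overrides):
    """Deep-copy a deployment and apply input changes for the clone."""
    clone = copy.deepcopy(deployment)
    clone["name"] = new_name
    clone["inputs"].update(input_overrides)
    # clones start with nothing running; servers are launched on demand
    for server in clone["servers"]:
        server["state"] = "inactive"
    return clone

production = {
    "name": "Production",
    "inputs": {"db/host": "db.company.com"},
    "servers": [{"nickname": "app-1", "state": "operational"}],
}

# Clone production into a staging environment that talks to the QA database
staging = clone_deployment(production, "Staging",
                           {"db/host": "qa.companydb.company.com"})
```

The deep copy is what lets you change the staging inputs without touching the running production deployment, which is the whole point of cloning rather than editing in place.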
So this is something that one of our guys, who happens to be sitting in the room, built to give an example of what an IT vending machine might be, that an IT department puts together for the people that need services from them. And I only have two examples, but imagine this as a grid with the different application setups that the people that interface with your IT group need. When you click on one of these, it quite literally launches the complete environment. With the architecture that I've been showing you, we have encoded the configuration not just of an individual machine but of the complete environment itself. So a programmer says, you know what? I need this environment because I'm testing the latest version of Tomcat or a particular patch. Instead of filing a ticket, going to IT, doing all those things, you can quite literally launch it quickly. Now, for financial control reasons, you might want to route that through somebody that's managing the project and looking at budgets and how all those things apply. That's what I was talking about when I was referring to the cost factor earlier. You put all this power in the hands of so many people. It's incredibly useful, but they need to remember to shut machines down when they're not using them. We happen to run scripts overnight to look at who's running what and shut down things that are idle, with some rules. And we also make it possible to keep track of the cost. Let me see if this goes well for me here. We realized that being able to keep track of the cost was supremely important. So in the environments themselves, I'm clicking through here to, I think we have it called the report manager. So on the environments themselves, whenever you create a server, an account, a deployment, you can assign it machine tags. So I could have all the deployments that are related to a particular division of the company, or to a particular project, or to a particular category of some kind.
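The tag-driven cost tracking just mentioned, filtering deployments by a shared machine tag and totaling what they cost, might look like this in a minimal sketch. The tag format, rates, and field names are all invented for illustration; this is not RightScale's actual reporting API.

```python
# Hypothetical sketch of tag-based cost reporting: filter deployments by
# a shared machine tag and emit a CSV summary. Tag format, hourly rates,
# and field names are invented for illustration.
import csv
import io

def cost_report(deployments, tag):
    """CSV of per-deployment cost for deployments carrying the tag."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["deployment", "cost_usd"])
    for d in deployments:
        if tag in d["tags"]:
            cost = d["server_hours"] * d["rate_per_hour"]
            writer.writerow([d["name"], f"{cost:.2f}"])
    return out.getvalue()

deployments = [
    {"name": "project-x-prod", "tags": ["report:project=x"],
     "server_hours": 720, "rate_per_hour": 0.12},
    {"name": "project-y-qa", "tags": ["report:project=y"],
     "server_hours": 100, "rate_per_hour": 0.12},
]

report = cost_report(deployments, "report:project=x")
```

A report like this could then be dropped into storage or emailed on a weekly or monthly schedule, which is the automated accountability the talk describes.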
I can assign them all the same tag on those deployments. So then I can do it in an automated way, once a week, once a month, once a quarter. I can say: give me a report for all the deployments that have this particular tag. So report equals department, I'm sorry, it's report colon equals project X. And you can actually have this become a CSV file that gets put into storage or emailed to somebody on request. So now you have the ability for people to launch things as they need to, and then to be able to come back and keep track of who's running what and how much that is costing. Any questions on this stuff, by the way, since I'm gonna wrap up the demo part and go back into other things? No? So, oh sure, please walk up to the microphone so we can record it. Hello. Yeah, so can you talk a little bit about how you manage licensing for applications? If there's an application that's part of the image, and specifically if there are keys or technology involved to get the licensing done. So I've been working on that myself from a few years back. That's tricky, in that it's not a technological question but a business and legal question. You're really looking at the way that software is sold evolving together with private cloud technologies and public clouds, because it changes the model. Not a lot of people wanna buy enough licenses to be able to hit their peak load when they only hit that once a year, or whatever that might be. So it really is a factor of how cloud-friendly that particular company is willing to be. We've seen companies evolving so that they license their software also by the hour, so they can track it accordingly, by API calls into those particular cloud providers, and do it that way. Otherwise, some vendors don't want you to do it, for different reasons. That's a tricky one. It's very much a case-by-case basis. Any other questions? So in essence, I'm talking about servers that are like stem cells.
I came up with this term when I was reading some stuff from biology: pluripotent. I'll give you an extreme example here. I was talking to one of our guys in product management. We have a company out there that's running 12,000 servers from only one VM image. One image. Many different server templates that do many different things, but only one image to support, with the patch level they need and their security policies and so forth. So that's a big component of the tremendous flexibility that you get: this idea that the machines configure themselves for their role.

The other thing to consider is, so how do you get there? By the way, this is an interesting cartoon that's meant to be a political cartoon, but I thought it was semi-appropriate here. The demonstrators in the crowd are saying: what do we want? Gradual change. When do we want it? In due course.

So what I'm saying is, as you're developing new apps, and this ties into the subtitle of the presentation, when you're developing new apps for your private cloud, or for the service providers that are building public clouds on OpenStack: don't do the exact same old thing on the new technology. This is what happens to all of us. We get used to exactly how we're working, and when new technology comes along, we just deploy the same way. Look at newspapers: when websites came along, they still only published once per night, and they were concerned about the length of the article, because in the print world, column inches and delivery trucks matter. Not on the web. Think of your environments in a new way when you quite literally have access to this infrastructure that you can configure and deploy easily and quickly, and empower the people. That's what I was referring to with the dramatic change. In the subtitle of the presentation I said: why you shouldn't just put on the same socks that you were wearing before after you take a shower. Why develop the same kind of applications when you now have this newfound flexibility?
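The "one image, many server templates" idea can be illustrated with a toy sketch: a single hardened base image is reused everywhere, and each template is just an ordered list of boot-time configuration steps that turn a freshly booted machine into its role. All names here are invented for illustration and don't correspond to any real product's API.

```python
# One hardened base image, reused by every server (hypothetical name).
BASE_IMAGE = "company-base-2012q1"  # patched, security-approved

# Server templates: the same base image plus role-specific boot steps.
TEMPLATES = {
    "tomcat-app": ["install_java", "install_tomcat", "deploy_war"],
    "mysql-db":   ["install_mysql", "restore_backup", "start_replication"],
    "haproxy-lb": ["install_haproxy", "register_backends"],
}

def launch(template_name):
    """Simulate a launch: boot the shared image, then run the template's
    steps so the machine configures itself for its role."""
    log = ["booting " + BASE_IMAGE]
    log += ["running " + step for step in TEMPLATES[template_name]]
    return log

for line in launch("tomcat-app"):
    print(line)
```

The payoff of this structure is exactly the maintenance story from the example above: patching one base image updates the foundation of every role, because the differentiation lives in the templates, not in the images.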
For the people here who are looking at deploying OpenStack-based private clouds and thinking about what those should be: you really should think about the way you want your applications to be able to run and the kinds of things you want your teams to be able to deploy. That's why I pulled a couple of quotes from Gartner, and that's why I showed a hybrid cloud as the main example here: what can I do now that wasn't practical or feasible or economical in the past? Have environments that run really well on my internal infrastructure but maintain the flexibility to move them over to Rackspace in case of an emergency, or for cloud bursting, or whatever that scenario would be, and go past the ideas of just pure virtualization and hypervisors.

So I see I'm coming close to the end, and I was leaving time for questions, although it doesn't seem like there are a ton of them in the room. Any other questions? So I'll mention a couple more things. We make it really easy to test the software, as you can see from the link there. We happen, coincidentally, for those of you that might live in Northern California, to have a company conference next week in San Francisco. It's about $1,000 to attend, but we're giving away free passes at our booth over there. Also, one of our senior product managers is giving a presentation later today. For those of you that might have attended the Samsung presentation: this is the technology and the tools that they use to deploy their own hybrid architecture, where they have things running on CloudStack private infrastructure and they have the flexibility to deploy things in a public cloud. Utpal is giving a presentation this afternoon where he's gonna talk about highly available architectures and how to engineer and deploy those in this kind of environment, for those of you that might be interested. So we have a few minutes left if there are any other questions? No? Well, thank you very much, everyone.
Thank you.