All right, all right, everybody come on in and sit down. There are a couple of seats left, two in the front, so squeeze together. We know it's tight; there are almost 3,000 of us here, and shout out to everybody in the back room and the breakout hall. My name is Diane Mueller. I'm a cloud ecosystem evangelist, and you can tell me what that means. My colleague here, Krishna Raman, is the open source lead programmer on the OpenShift Origin project, and we're here today to talk about putting the PaaS in OpenStack. How many of you here have heard of platform as a service? Show of hands. Yay, nirvana has arrived. We're really happy to be here at OpenStack, so here's a little bit about our agenda.

We're going to talk about what Red Hat sees as our vision of the cloud and where we're going with it. I'm going to talk a lot about why a PaaS matters in the cloud and a little bit about what OpenShift is. We're also really pleased to be at the OpenStack conference talking about deploying OpenShift with Heat, which is a really new infrastructure orchestration piece that's coming out, and there's a lot going on around it today, so we're going to show you a demo of deploying with Heat. Then we'll walk through the different pieces and parts of OpenShift so you understand how the PaaS architecture works at Red Hat and where it's all going. A big hint: if you're getting any internet access at all, you can go to openshift.github.io, because this is an open source project and we're always looking for more people to work with.

You're here at OpenStack, it's all about open source, and OpenShift is an open source project. We at Red Hat very firmly believe that the future is open and that open source plays a very big role in it, and we're not the only ones; a lot of pundits agree with us, along with plenty of analysts, bloggers, and everybody else. It's well understood by now that open source is what's driving innovation, and a lot of that is happening at Red Hat. We've got a lot of irons in the fire, all getting very red: Enterprise Linux, middleware, virtualization with oVirt, Gluster for storage, and RDO. All of you are, I think, invited to our party tonight to celebrate the open source community around RDO, our OpenStack distribution, so we'll be out tonight talking about that. Underpinning that stack is where we've been working very hard over the past two years. About a year ago we open sourced Origin, and that's one of the things that is near and dear to my heart, because what we're really talking about isn't just free as in beer; it's freedom to innovate. We're trying to bring some choice into your world, whether you're an application developer, doing DevOps, or building out infrastructure as a service. How many of you in the room were at the community day yesterday? Maybe one in the back there. Yay, thank you.
We are very much about trying to build a community of people collaborating with us on building out the platform as a service, as well as the infrastructure layer, and making that the driving force. We really feel that all of this open source work is what is driving the cloud. The cloud will never really be able to be a proprietary place; we need to make sure it's open and interoperable. You'll hear a lot of talk about that, but we really are trying to walk the walk. A lot of the clouds you've been playing with already have a lot of Red Hat in them: we've been under the hood with our Linux distributions and our offerings at most of the larger cloud players, and we're very committed to keeping it that way, because we believe very strongly that open innovation is where it's at.

What we're also seeing is that the role of IT is changing the way we do business. Traditional IT has really changed the way it provides infrastructure; elastic computing resources have changed the nature of what a resource is and how it's distributed and shared. We've looked at this from both the public and the private cloud side, and what we're seeing now is a lot of private cloud implementations with OpenStack. We've been doing a lot of work to make sure that those private clouds, your pilots, your POCs, and your production deployments, have everything they need to be successful. We've made the shift; we're definitely going into the cloud. I'm going to skip a few of these slides. We're taking it to the next level, mixing the private cloud pieces and the public cloud pieces and sharing resources: bursting into the public cloud when we need it, and keeping behind the firewall the things we need for compliance, risk, and audit purposes. So we've seen a big change in the marketplace and a whole IT transformation. The investment has shifted away from proprietary clouds; people are building clouds with open source, with OpenStack, with a platform as a service built in, and that's the direction things are going.

So really what I want to talk about today is why a PaaS should matter to you if you're running OpenStack. When we build out a private or public cloud at the infrastructure layer, we're also competing with the expectations developers already have. When a developer today decides they want to deploy an application into the cloud, they have lots of choices in the public cloud: they can go to Heroku, they can go to Engine Yard, they can use lots of other public cloud providers, including OpenShift Online. The expectation when we build a private cloud is that it will have that same capability built in. But when people start their first private cloud implementation inside an enterprise, most often they're not actually deploying a PaaS on top of it, and so they're not seeing the efficiencies that they expect.
So you've just installed OpenStack, and someone turns around and says, okay, great, I have elastic computing resources, I have my storage and everything else. What it hasn't solved for me is: how do I get my application running up there quickly? The expectation people bring from Heroku and the other public PaaSes is that there will be a push-button, quick-deployment experience. The point I'm trying to make, while I fight with my laptop here, is that expectation management is the most important part of deploying a private cloud, and the key to any successful cloud initiative is including a platform as a service layer on top of it. When you're putting together the roadmap or project plan for your cloud initiative, you should always include a platform as a service layer, because people's expectations are now set such that they assume the cloud will provide the frameworks, the languages, and the web servers automatically, and that's really what a PaaS on your cloud gives you. Most of you have played with PaaSes before, so you probably know what you get in a platform as a service: the frameworks, the languages, and the custom, extensible cartridges, which is our metaphor for the pieces we add on to the framework. You get all the middleware, and that sits on top of OpenStack or whatever IaaS layer you have. The OpenShift Origin project, which is the one I've been working on, is freely available on the openshift.github.io site, and it is also the upstream that feeds the OpenShift Online and OpenShift Enterprise experiences. I'm going to hand over now for the OpenShift overview, and I'm hoping we can switch over here. Go for it.

So OpenShift Origin, the open source project, can run on pretty much any hardware that you have. It can run on EC2, on OpenStack, on any sort of virtualization, or on bare metal. There are really two parts to it. The first part is the controlling agent, known as the broker, and all of the processes for your applications run on the second part, the node. You can have multiple brokers; the brokers are stateless and just backed by MongoDB, and you can have as many nodes as you need to run your applications. Of course, not everyone's setup is the same, so depending on how you set it up you can choose different plugins: there's a plugin for authentication, there are plugins for DNS, and a couple of others. On the node side of things, we take the big node and split it up into smaller containers. This allows us to pack a lot more applications onto the same hardware than you could with virtualization; with virtualization you end up with a lower density than you get with LXC or equivalent technologies. Each of the containers we have on OpenShift we call a gear. This is not just plain LXC, although it uses similar technologies: it uses cgroups, it uses pam_namespace, and a couple of others, and we wrap everything in an SELinux policy, which makes it that much more secure.
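To make that gear confinement a little more concrete, here is a minimal sketch of what you might poke at on an Origin node to see those layers. The paths, the cgroup hierarchy, and the SELinux details below are assumptions based on a typical node layout, so treat this as illustrative rather than the documented interface.

    # Rough sketch: inspecting gear confinement on an Origin node (paths are assumptions).

    # Gears typically live under /var/lib/openshift, one directory per gear UUID,
    # and each gear runs as its own UNIX user named after that UUID.
    ls /var/lib/openshift/

    # Pick a gear and look at its processes with the SELinux context visible;
    # the per-gear MCS categories are what keep gears separated from one another.
    GEAR_UUID=$(ls /var/lib/openshift/ | head -n 1)
    ps -efZ | grep "$GEAR_UUID"

    # cgroups put CPU and memory ceilings on each gear; the exact controller
    # hierarchy differs by release, so adjust the path for your node.
    cat /cgroup/all/openshift/"$GEAR_UUID"/memory.limit_in_bytes 2>/dev/null \
      || cat /sys/fs/cgroup/memory/openshift/"$GEAR_UUID"/memory.limit_in_bytes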
The more layers of security we have, the better off you are. That's great in a public PaaS, because there are lots of attackers all the time, but it's also great for a private PaaS where you want to keep your infrastructure secure. What about the processes? When you're running your application, you have different pieces of it, and all of those pieces sit within gears. Each process, say a JBoss process or a MySQL process, sits within that confined gear space. And you have a lot of choices: all of the packages are termed cartridges, and we provide a bunch, so you have JBoss, PHP, Perl, MySQL, and other databases. If we don't have something you want, it's really easy to go ahead and create your own cartridge and just plug it right in. This also works really nicely with scaling, and the way scaling works is through tiers. The top tier is HAProxy-based; that's your load balancer. The second is your application, which scales up and down depending on the amount of load you have. And you can optionally have a database underneath, or you can connect to an external database service.

When you're managing your application, you hit the broker first, and the broker exposes a very well-documented REST API. You come in and ask it to create an application; that request then goes through MCollective, over an ActiveMQ bus, to the nodes, and on the nodes is where the gears are created to run the processes for your application. Once everything is set up, this is basically what it looks like: an incoming request hits our front-end web server, which does the virtual host mapping from the incoming request to the gear it has to go to, and the request gets routed to that gear. Looking at the full picture, these are all the components on the system that give you a really nice PaaS experience. You have the broker at the top, behind the REST API; you have application traffic coming in through the proxy layer in the middle; but you also have full SSH access into your gear. And that's just fine, because even though you have SSH and can run any command locally, the SELinux policy will protect against malicious activity. Before we go into today's demo, there are a couple of resources that are really good to look at. You can go to openshift.github.io, where we have everything documented: there are procedures to spin up your own OpenShift instance, or you can download a pre-built VM and run it on your own virtualization.

Let's move on to the demo. We'll admit right up front that this is running as a recorded video, because we're not trusting the internet here today, so this demo will also be available as a video for you to play with later. I recorded it on Friday, so it's pretty recent. All of the work here uses the Heat API. Heat is an orchestration framework which lets you really quickly spin up machines, make sure they're started in the right order, and set up your firewalls and everything easily. It works off a CloudFormation-style template, so it uses the same template format you would use on EC2, and that's basically what I'll be demoing today. If you go to the Heat getting-started document, which is on the Heat GitHub site, it gives you a one-command option to really quickly install OpenStack for a demo, which is pretty much what I've used for this instance.
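As an aside before the Heat walkthrough, here is a flavor of the broker REST API described a moment ago. This is only a rough sketch: the hostname and admin/admin credentials mirror the demo defaults, but the cartridge name and the exact resources and parameters depend on your release, so check the published REST API docs rather than treating this as definitive.

    # Sketch: talking to the OpenShift broker REST API directly
    # (hostname, credentials, and cartridge version are assumptions).
    BROKER=https://broker.example.com

    # Discover the API entry point.
    curl -k -u admin:admin -H "Accept: application/json" \
      "$BROKER/broker/rest/api"

    # Create a domain (namespace) to hold applications.
    curl -k -u admin:admin -H "Accept: application/json" \
      -X POST --data "id=demo" \
      "$BROKER/broker/rest/domains"

    # Create a PHP application in that domain; the broker relays this over
    # MCollective/ActiveMQ to a node, which creates the gear.
    curl -k -u admin:admin -H "Accept: application/json" \
      -X POST --data "name=myapp&cartridge=php-5.3" \
      "$BROKER/broker/rest/domains/demo/applications"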
Back to the demo: you could also set up OpenStack by hand, but in this case I'm using that getting-started helper. I created a really small startup script, and there are a couple of different parts to it. There are some iptables rules that are needed to get Heat working properly; those are basically so that the software running within the node, the Heat agent, can contact the services running outside on the OpenStack host. The second command is the openstack start tool that comes with Heat to quickly get OpenStack going. And then there are three Heat services that need to be started. There are two points of entry here: heat-api-cfn, which is the incoming REST API, and the Heat engine, which does the actual orchestration. The third, heat-api-cloudwatch, provides the ability for the node to publish metrics out to Heat so that you can decide when to scale up or scale down. Once everything is started, the first thing you do is source the Keystone credentials, and then you can run the heat command-line tools to list what stacks are already running; in this case I have nothing started yet.

Now that everything is set up, the next step is to build the base images that Heat will use to start the machines, and there will be two of them: one for the broker, one for the node. You could take a basic JEOS image and run off that, but OpenShift installs a whole ton of RPMs, so it's much easier to pre-build these images and have them available. I'm going to fast forward a little bit, but basically at the end of this you get a command that you have to run, and that command registers the image that was created with Glance. If you go into the OpenStack UI, all of the images I just built and registered with Glance are available there; nothing special, just images with the cfn tools built in.

The next step is to use the CloudFormation-style command to kick off the whole process. There are a couple of parameters being passed in. There is the SSH key that I use to log into the machines. There is a DNS key: within OpenShift, once you start an application, it actually creates a DNS entry for that application so you have an easy way of getting to it, and the DNS key allows the broker to talk to the DNS server. The third parameter is the upstream DNS: if the node needs to contact a yum repository or RubyGems or anything of that sort, that's the DNS server it uses. So you kick off the command and it basically says, great, I've started, and you can do a quick status check to see what state your stack is in. I guess I fast forwarded too much. Once everything starts running, you can go up to the UI and you get a progress view: you'll see the broker start up, Heat waits for the broker to be completed, and then it runs the nodes. That's really basic orchestration within the Heat template. In the template I have the definition of a broker, which spins up the JEOS-based image, runs Puppet to install everything, and at the end triggers the node startup. The Puppet scripts used here are all open and published on GitHub as well. Once everything is running, you can go and check it out; listing the stacks from the heat command line will give you a status of complete. At that point you know everything is set up, OpenShift is up and running, and you can connect to it.
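The small startup script described at the top of this section might look roughly like the sketch below. The specific iptables rule, bridge name, credential file path, and the helper invocation are all assumptions for illustration; the real script depends on how your host and network are laid out.

    #!/bin/bash
    # Rough sketch of a Heat bring-up script (details are assumptions, not the demo's exact script).

    # Let instances (the in-node heat agent and cfn tools) reach the Heat APIs on the host;
    # the bridge name and ports here are placeholders to adjust for your layout.
    iptables -I INPUT -i demonetbr0 -p tcp --dport 8000 -j ACCEPT   # heat-api-cfn
    iptables -I INPUT -i demonetbr0 -p tcp --dport 8003 -j ACCEPT   # heat-api-cloudwatch

    # Start OpenStack itself using the helper that ships with the Heat getting-started docs.
    ./tools/openstack start

    # Start the three Heat services: the CFN-compatible REST API, the orchestration
    # engine, and the CloudWatch-style API that receives metrics from the instances.
    heat-api-cfn &
    heat-engine &
    heat-api-cloudwatch &

    # Load the Keystone credentials (path is a placeholder), then confirm
    # there are no stacks running yet.
    source ~/.openstack/keystonerc
    heat list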
The next step is to take the IP address of the broker and add it to the resolv.conf of the host machine. The only reason I'm doing this is so that I can connect to the broker; since I'm not running a global DNS server, I need to add it there. I then SSH into the broker machine, after clearing out my known_hosts, and get the hostname, and that's what I'll use to connect to it. The hostname defaults to instancename.example.com, but all of that is customizable in the template itself. When you connect to the broker for the first time, it asks you for a username and password; this is the OpenShift login, and it defaults to admin/admin. Once you enter that, it brings you to the entry page for OpenShift, and at this point developers are ready to deploy an application. Everything is set up, and from this point onwards it takes about five minutes to spin up an application and get going with it.

In this case I'm starting up a WordPress application. We have WordPress available on GitHub as a quickstart, but this could just as well be one of your own Git repositories; if it's within your company, this could be an internal Git repository. I give it a name and a namespace, and that basically defines the DNS name for the application. Once WordPress is set up and your application is up and running, it starts two services, PHP and MySQL, because that's pretty much all WordPress needs; but within your organization you can have many cartridges in your application, and that's fine. Any cartridge that starts up will print information about how to connect to it, but in this case we don't really need that; we connect directly to WordPress. There you are, that's WordPress up and running. I did fast forward a little bit, but it was about five minutes to this point.

You can do the same thing from the command line, since OpenShift provides command-line tools as well. You would install rhc, the command-line tools for OpenShift, and issue similar commands: create an application, add cartridges to it, do a little git push, and your code is up there. I'm going to run through this at eight times speed: I tell it which server to connect to, set up all of my keys, it sees WordPress is already installed, I create a new application, it gives me back a Git repository, I make a code change, push, check, and that's about it. So basically there's really no excuse anymore for not running a platform as a service on OpenStack, because we've got Heat now; it's awesome, it's pretty easy to use, and it makes automating all of those instances and machines pretty simple. It's very fast to deploy, and all of the scripts for deploying OpenShift are on GitHub right now, so there's no excuse for anybody not to give it a whirl. What we've tried to do is make it really simple to deploy a PaaS on OpenStack without any extra headaches, and to make it something we can all collaborate on and work on together. If you'd like to use this, again, it's on openshift.github.io, and we're happy to take any questions. There is a Google+ community out there called OpenShift Origin Developers, there are plenty of ways to get hold of us via email, we're always on IRC, and there are lots of forums; we'd love to have you come and talk to us. For reference, a sketch of that rhc command-line flow is shown below.
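This is a minimal sketch of the command-line flow just described, assuming the broker hostname from the demo; the cartridge names and versions are assumptions and depend on what your nodes actually have installed.

    # Sketch of the rhc flow (broker hostname and cartridge versions are assumptions).

    # Install the OpenShift client tools and point them at your broker;
    # rhc setup also uploads your SSH key and writes a local client config.
    gem install rhc
    rhc setup --server broker.example.com

    # Create a PHP application and add a MySQL cartridge to it.
    rhc app create wordpress php-5.3
    rhc cartridge add mysql-5.1 -a wordpress

    # rhc hands back a Git repository; push code to deploy.
    cd wordpress
    git add -A && git commit -m "first change"
    git push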
We do write tons of blogs about how to do lots of different interesting things, everything from writing cartridges on up. We have a new cartridge architecture that was launched just a couple of days ago, and our first release of Origin went out two days ago as an open source project. All of this is freely available online, and we'd really like to encourage you all to try it out on OpenStack, give us your feedback, tell us what you think, and work with us to build the next generation of PaaS completely in the open. So we'd love to take any questions you have about Heat or about PaaS. There's a hand here, go for it.

So it's up to the person that sets up OpenShift. You have the ability to control CPU, memory, quota, and network IO, but not disk IO; right now that's just quota-based. For network IO, it depends on the cartridge you're using: if it's Apache or something similar, we use something like mod_bw to regulate it, and you can also set up iptables rules. You do, yeah. Sorry, can you repeat that? Multiple OpenStack tenants. Well, it would depend. OpenShift itself doesn't directly talk to OpenStack; that's all performed by the Heat template, and as far as I know the Heat template operates on only one project at a time, but I might be wrong about that. Yes.

Right, so the question, if I understand it right, and please correct me, is: you can use Heat to directly deploy something like WordPress, or you can use Heat to deploy OpenShift and then WordPress on top, so what advantage does OpenShift provide over just the template? Okay, so let's keep Heat to the side and compare WordPress directly on OpenStack versus WordPress on OpenShift on OpenStack, just those two. The advantage the OpenShift option gives you is that you can pack many applications onto the same machines and utilize them much better. The assumption we make in OpenShift is that not all applications are going to be active at all points in time: applications scale up and scale down, and you want to balance resources amongst multiple applications using as little hardware as possible. Whereas if you're using OpenStack alone, you're going to have one virtual machine, or whatever you back OpenStack with, dedicated to WordPress, and no matter how much it is used or not, that one machine is always taken up. So you get better resource usage with OpenShift. That said, virtualization may not always be the best fit. Luckily, OpenStack will work with bare metal as well, and that actually works out really nicely, because then you don't incur the overhead of virtualization and you just let the OpenShift containers pack as many things as possible directly onto bare metal. That's another option you can take. So I hope I answered your question. Any others? Yes.

We'll upload them right after this; we have to combine the two decks, but we have the video too, so that'll be available shortly. The whole thing, yeah, it's all good. Yes. So if I understand your question right, you're asking about the individual cartridges and whether one cartridge can depend on another. You can go either way, it doesn't matter. Your application defines which cartridges it is interested in, and within each cartridge you can have additional dependencies, and it will resolve and pull them into your application. Yes, sir? Yes, it does. It works.
OpenShift, as far as developers are concerned, has two parts to it. There is application control, so starting, stopping, restarting, and of course creating and destroying. And then there is deploying your application, which is Git push-based. There is an Eclipse plugin which helps you do this, and, I don't see anyone from Cloudsoft here, but there will be integration with them as well for slightly more complex deployment scenarios. And of course you're welcome to create your own tooling: the REST API is documented, and the Git push is just a Git push, there's nothing special going on there. I'm not sure I understand the question. Within OpenShift, we don't change the software that's running at all; it's running whatever you get. OpenShift will run on Fedora 18 or RHEL 6, and we use whatever comes packaged with that distribution; we don't change it in any way. The only restriction you might run into is an SELinux rule preventing you from doing something like changing /etc/passwd. Any other questions? Thank you very much.