My name is Diane Mueller. I'm the OpenShift Origin community manager at Red Hat, and I'm really happy to be here at OpenStack once again. I've been coming since the days of Essex, and I'm a big fan of OpenStack and of all the wonderful things all of you do. My colleague here, Chris Alfonso, who is so nicely down on his knees, is going to drive the slides for us. And then once we're done with the slides, we're going to make him stand up and do a really awesome demo of taking OpenShift and deploying it on OpenStack with Heat. So how many of you here have heard of OpenShift before? My work is done, I can go. Great. I've been doing this talk, and versions of it, for the past three OpenStack summits, so that means we're doing something right. Next slide. Because you've heard of OpenShift, and we are one of the many open source projects that Red Hat backs, you know a little bit about our ethos at Red Hat of taking open source and making it work in the enterprise. And as you can see, we've created basically a wonderful opportunity to use a pure open source cloud. The folks from CentOS are here with us. I didn't get time to update the Gluster piece to add Ceph in, but the Ceph folks are here. We've got a lot of work going on in the JBoss community, and we're here with the RDO team. We're really pleased to be included in today's talk. We're the people at the bottom of that little panel. On to the next slide. This is the OpenShift Origin project. The OpenShift upstream feeds OpenShift Online and OpenShift Enterprise. Enterprise is our on-premise offering, Online is our hosted platform as a service, and Origin is the open source project. We eat our own dog food: we deploy the same code base on OpenShift Online, with no switching between code bases. It's been up and running for over 18 months now, live with over a million applications running on it. It is production ready.
It is being used by enterprises; we have lots of customers. We also have a few other cloud providers. GetUpCloud is doing a wonderful job down in Brazil, where I just came back from, and they have a hosted public PaaS for the media enterprises down in Brazil. So I don't need to explain to you what infrastructure as a service is, or platform as a service, or software as a service, because you guys are the ones building it for us. Hit the next slide. But what I am going to say to you is that from a developer's perspective, and from an end user's perspective, infrastructure really is not enough. What you're building here with OpenStack gives us all the cloud compute resources that we want, on demand, in an elastic and wonderful way, and now in a wonderful open source way. Hit the next slide. It gives you the network, the storage, all the compute, on demand. But basically what it's giving me is a server in the sky, a server in the cloud, or a server in some server farm. As a developer or an ops person, you're still on the hook for configuring and managing and updating those servers, and the environments the applications are deployed on. What platform as a service gives you is everything that application needs: the application runtime environment, the entire LAMP stack, configured for you in virtually every configuration. It gives you all of the tools to manage the infrastructure that the application needs to run on the compute resources, whether you're on top of OpenStack or bare metal or any other provider you can name. It really makes the cloud very useful for people. So the way that we think about it is: you code your app. If you're a developer like me doing Python, you can pick Django (you can call me out on Twitter and see how bad of a programmer I am), and you push from GitHub, whether you're using Eclipse or the command line or a Git repository.
You take all of the pieces, whether it's your databases, your Apache, all of those wonderful things, and you configure and deploy that application with its entire configured LAMP stack, and then you're just off and running. It's a really wonderful way to work; it makes me want to be a programmer again. It gives me everything I need to become a more effective and agile developer, and that's something that's being deployed in enterprises all over the world now. As I said, I just came back from Brazil, and I've been all over Europe; it's pretty much one of the standard things people are now deploying on top of their infrastructure as a service. So today what we wanted to do was talk a little bit about the many different ways to deploy on OpenStack. We've really been focusing in our team on making sure that the Heat orchestration tools work really well with OpenShift, and we've been doing that for quite some time. We put all of the Fedora 19 templates for deploying OpenShift into the OpenStack heat-templates repository. We have the OpenShift Enterprise ones, which we'll show you in a little bit. And because CentOS has been working with us, we have come out of the closet as CentOS lovers on the OpenShift team too: we've actually been running most of our test and build environment on CentOS for quite some time. So we just added, with the help of Jeff Peeler from the Heat team, who may or may not be in the room, the CentOS 6.5 templates as well. So basically, if you think of the cloud, the platform as a service is everything that your app needs to deploy in the cloud. I talked a little bit about some of the applications, but someone asked me a question a few minutes before we started about extending OpenShift. We have this metaphor, which we call cartridges. In that layered cake that is the cloud, our piece comes in many flavors.
And so, I know I'm very Python-centric, but OpenShift comes out of the box with a whole lot of languages already supported: Python, Java, PHP, Node.js, Ruby, Perl. But there's also a way for you to extend the platform, and it's a highly extensible one, by creating your own cartridges. The cartridges can then be made available to everyone in your enterprise, or they can be specific to a project. That allows us to extend our platform and add new things like Go and other languages pretty easily. They're also different from other approaches in that they're made available across the entire OpenShift platform. So if you deploy Go, for instance, you can push it out not just for your own application but for everybody's, and maintain and version it across the ecosystem that you've installed. As I said, we host OpenShift Online in lots of places. I will admit that at the moment it is running on AWS; we'll work on that. We run on OpenStack, we run on CloudStack, we run on bare metal, we run on RHEV. All you need is a Red Hat Linux, whether it's RHEL, Fedora, or CentOS, and we can create that for you. So what makes us different from other cloud platforms as a service? Well, one, it's the RHEL and Fedora capabilities, and that's built in. We also maintain a really high level of security on the multi-tenant containers that we're creating: we're using SELinux and cgroups for our multi-tenancy, and we are working very closely with the Docker folks. If you've been reading the press releases or seeing any of the demos we did at Red Hat Summit, we're doing a lot of work with Project Atomic and GearD to help bring true Docker support into OpenShift, and we can talk about that separately from here as well. The other thing we do really differently is something called automatic application scaling.
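To make the cartridge idea concrete: a v2 cartridge is described by a manifest.yml that the platform reads to learn what the cartridge provides and which ports it exposes. The sketch below is a rough, hypothetical manifest for the Go example mentioned in the talk; the field names follow the OpenShift v2 cartridge format, but the values are invented for illustration, not taken from a shipped cartridge.

```yaml
# Hypothetical manifest.yml for an illustrative Go cartridge (not a real one)
Name: go
Cartridge-Short-Name: GO
Display-Name: Go (example)
Description: Runs Go web applications
Version: '1.2'            # version of the packaged runtime
Cartridge-Version: '0.0.1' # version of the cartridge itself
Cartridge-Vendor: example
Categories:
  - service
  - web_framework
Endpoints:
  # The node assigns a private IP/port pair and wires the front-end proxy to it
  - Private-IP-Name: IP
    Private-Port-Name: PORT
    Private-Port: 8080
```

Once installed by an administrator, a cartridge like this shows up for every developer on the installation, which is the cross-platform availability Diane describes.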
So if you take your application in the cloud and deploy it, you say, okay, I have this much in set resources. A lot of other platforms as a service will scale things for the PaaS itself: they'll add more resources for the platform as a service. But we actually go to the next level and scale your application up and down. And if your application isn't getting a lot of traffic, we have another concept called application idling: we will acknowledge the fact that you're not getting a lot of traffic, spin your application down, and not charge you, or charge your department, for that. Can we save the questions for the end? Thanks, because we're squeezed for time. So we do a lot of that, and that makes us pretty different. There's the extensible architecture we have based on cartridges, and soon based on Docker. The other thing is we have really good support for and a relationship with the Java folks and the JBoss folks, so anybody who's interested in that, we do a lot of work with them. And the other thing that's new and interesting is our support for .NET. Yes, we did bring .NET to OpenShift, and a lot of heads turned at Red Hat. And I love the fact that the gears on the Windows nodes are called Windows prisons; I think that's probably the best name for anything. The fact that I'm going to have to put Visual Studio on my Mac is going to scare everybody soon. One of the things that we've done over the past couple of months is partner across the community with a company called Uhuru Software, who made a huge donation to the community of the whole code base to support .NET in OpenShift. We're really happy to have them on board as part of our community now. And that's a whole separate talk too; I'm here all week, so if you want to hear that or see that demo, I'll give it to you separately from here. So one more slide, my friend.
If you want to find out any more details about OpenShift itself, you can go to origin.openshift.com and check out the blogs and the articles, or send me a note there. One more. So what we're really here to talk about is the interesting thing for OpenStackers. How many of you have used Heat? Awesome. I don't have to talk today, do I? You've got it all down cold. This is really about some really cool cross-community collaboration, and as a community manager, I'm really pleased with the way that all of these communities have come together, whether it's the Docker community or the Solum community or the Heat group or the OpenStack folks. They really have been amazing people to work with. One of the projects that we're really happy to have been working with for quite some time now is the Heat group. As I mentioned, Jeff Peeler, Steven Dake, there's a whole crew of people I can't list, and they have been just marvelous to work with in terms of teaching us about Heat and about creating templates. And so my man here, Chris Alfonso, is going to take it over for a little bit, and he's going to talk about putting the PaaS in OpenStack.

All right, thanks, Diane. Just to give you guys an idea, I guess this is our third year since the Heat project came into existence. It's grown substantially over time and has integration points with many of the OpenStack features. Heat has really been the basis for many integration points, from the Red Hat perspective, for being able to get our projects deployed on OpenStack, and over time you'll see that integration basis expand substantially. We've got a project called TripleO to manage deployments of OpenStack. You have integration points around security, around storage, around monitoring, around image management, and command line tool and web UI integration points, and those all have touchpoints with Heat.
I'm going to show you an example of what I've really spent a lot of my time on: OpenShift Enterprise being deployed on OpenStack. Just to give you some background on where Heat came from and why it exists, its real mission is all about infrastructure orchestration. There have been a lot of conversations around, well, is it a configuration management tool? Where does the dotted line stop between different components of your infrastructure and what's actually running in your infrastructure? Heat has really held true over time to just being in the business of managing orchestration, keeping that as its core competency, and leaving what you actually run inside the instances in your infrastructure to configuration management packages. The user_data section of instances is really where that handoff happens, and I'll show you some concrete examples of how OpenShift actually utilizes that integration point. I won't spend too much time on this one because we're sort of doing a lightning talk here; I think we have about another half an hour. If you're interested in looking, after this presentation, at some of the Heat template examples that we've prepared that you can try on your own, you can go to github.com, look in the openstack namespace, and you'll see heat-templates. There are things like a highly-available directory with READMEs that tell you how to deploy a multi-compute-node OpenStack infrastructure with multiple brokers and multiple OpenShift nodes running all together as one OpenShift fabric. So go ahead and look through there, start with the basics that we'll go through today, and you can dig as deep into that as you like. This diagram is up here not really to teach you about OpenShift, but to give you an idea of the infrastructure that's required to successfully deploy an OpenShift Enterprise or OpenShift Origin infrastructure.
For all the services to coordinate together, be running, and be secured, there's a lot of infrastructure configuration that has to happen. You have things like mounted storage, you have security groups, you have instances that are able to use floating IP addresses, and they all need to talk to each other. What I'm going to demonstrate is how you would set up a two-virtual-machine infrastructure, where one is an OpenShift broker and one is an OpenShift node, and it actually becomes a multi-tenant-aware application hosting service. So I'll show you how you do that. And I've taken a time-lapse video of the package installation so that we don't have to sit here and watch paint dry, because it actually does take longer than the summit lunch lines. There are a couple of steps... I just bleached my retinas, that's great. So there are a couple of different steps to getting your infrastructure up and running in OpenStack. The first thing you have to do is have images registered with OpenStack Glance. You can either just download a prepared image, or you can create your own images, customize them, and save them as snapshots. My intent for the end state of deploying an OpenShift infrastructure is to be able to bring up an OpenShift node as fast as possible, so that when I'm in a production environment, if I start to run out of capacity in my OpenShift installation, I can bring up additional compute capacity very quickly. Normally when you set up an install, you create a virtual machine, install an operating system, install packages, edit configuration files, and get everything up and running, and that's a lot of latency. You have to wait for packages to download, and even if you have a local repository, you have to go through an installation process and a configuration management process.
So what I like to do is prepare a virtual machine image beforehand, get it all ready, and just maintain that in a library, such that when it's time to instantiate it, it comes online as fast as possible. One of the OpenStack tools that makes this possible is called Disk Image Builder. Disk Image Builder is essentially a binary that runs through a templatized version of every stage of what you need to install: from startup to package management to configuration management, whatever you want to have happen one time or every time a machine comes up. It's very extensible, and what I've done here, and I'll go ahead and just play this, is create a RHEL Disk Image Builder element. I'll go ahead and get a tree view of this going here so that you can see it. This is also on GitHub; you can look at many of the Disk Image Builder elements. Let's make sure this is actually... There we go. Okay. So I'm running this on Red Hat Enterprise Linux 6.5, and I'll go ahead and show you the Disk Image Builder tree. I loaded the wrong file, let me make sure... Okay, so I just skipped past showing you the tree, and I'm going to show you actually exercising the Disk Image Builder command line utility. This is going to build a broker image. I've blurred out a couple of things, passwords and such, but you can configure things like the disk image size and what elements you would like to load onto Disk Image Builder's path, and you can tell it what type of image you're going to build. I've said that I'm going to make a virtual machine, and it's going to be a RHEL operating system with all the OpenShift Enterprise broker packages installed. So I'll go ahead and... The lag on this is completely terrible. Let me see if I can catch up here. Okay. Well, not only did I time-lapse that, I quit it just now, so I can skip to the next one. All right.
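For reference, a Disk Image Builder run like the one in the video looks roughly like the session below. The `disk-image-create` command, the `vm` and `rhel` elements, and the `DIB_LOCAL_IMAGE`/`ELEMENTS_PATH` variables are real diskimage-builder conventions, but the `openshift-enterprise-broker` element name and the file paths are assumptions standing in for the custom element Chris describes:

```shell
# Point diskimage-builder at the custom element and a local RHEL base image
# (paths and the element name are illustrative)
export ELEMENTS_PATH=$HOME/elements
export DIB_LOCAL_IMAGE=$HOME/rhel-guest.qcow2

# Build a 20 GB qcow2 with the broker packages baked in
disk-image-create --image-size 20 \
    -o openshift-broker \
    vm rhel openshift-enterprise-broker

# Register the result with Glance so Heat can reference it by name
glance image-create --name openshift-broker \
    --disk-format qcow2 --container-format bare \
    --file openshift-broker.qcow2
```

Baking the packages into the image is what removes the download-and-install latency from instance boot time.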
So the end result of a Disk Image Builder run is an image that you are able to register with the Glance service. At the end of that clip, you would have seen a Glance registration, and then it looks like this: if I run a glance index, I can see that I have a broker and a node image registered. The reason I need to register those is that in every Heat template, for each virtual machine instance you need to reference the image registered with Glance by name, so that Heat knows which virtual machine image to instantiate. Before I show you this video, I'm going to jump over to a text file, and I'll blow up the text here, just to give you an idea of what we're talking about when I mention a Heat template. A Heat template is just an ASCII file in YAML format with a bunch of declarations; really, it's a descriptor of what your complete infrastructure is going to look like, or what your infrastructure is going to look like at any snapshot in time. You can pass in parameters at runtime, when you tell Heat to set up all your infrastructure, to specify things like: what is the domain name I want to deploy this infrastructure under? What name server do I want to use? What instance flavors should these machines use, and what resources should be allocated to the VMs? You specify things like auto-scaling parameters. Some of these are OpenShift Enterprise specific, so you'll see things like the RHN registration credentials and subscription information. But infrastructure-wise, there are things like Neutron networks and subnets and ports that you need to declare in your infrastructure, and you want to specify things like which security groups these instances should be set up in.
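The template structure Chris describes can be sketched as a minimal HOT file. This is a heavily pared-down sketch, not one of the actual OpenShift templates (those live in github.com/openstack/heat-templates and are far more complete); the resource and parameter names here are illustrative:

```shell
# Write a minimal Heat (HOT) template to a file for inspection.
cat > openshift-sketch.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Minimal sketch of a one-broker OpenShift stack

parameters:
  key_name:
    type: string
    description: Nova keypair to inject into the instance
  broker_image:
    type: string
    default: openshift-broker   # the name the image was registered under in Glance
  broker_flavor:
    type: string
    default: m1.medium

resources:
  broker_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: broker_image }
      flavor: { get_param: broker_flavor }
      key_name: { get_param: key_name }
      user_data: |
        #!/bin/bash -x
        # the configuration-management handoff happens in user_data

outputs:
  broker_ip:
    description: First address of the broker instance
    value: { get_attr: [ broker_instance, first_address ] }
EOF
echo "wrote openshift-sketch.yaml"
```

Note how the Glance image is referenced purely by name, which is why the images have to be registered before the stack is created.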
So, what should the firewall rules look like for ingress and egress? You specify things like, okay, you see the Neutron port that gets created here, the floating IP that gets allocated to the broker, and so on and so forth for the node. But where the real meat is, when you deploy infrastructure like this, is what happens when the VM comes online. We talked about how Heat is really helping us set up an OpenShift infrastructure; well, all the components we've talked about so far apply to any infrastructure. What makes this an OpenShift Enterprise deployment is what happens when a VM comes online: the user_data section of a broker instance, an OS::Nova::Server instance, is everything that gets initialized by cloud-init. Cloud-init is going to look at whatever's passed in that user_data. There are a lot of environment variables in there that get passed to an OpenShift installer; the installer interprets all of that, installs and configures a bunch of packages, updates the machine with any errata published between the time the images were created and now, makes sure that SELinux is on, things like that. And really, that's all that happens. When that full process is done, the last thing that happens in this user_data is that a cfn-signal is fired. That sends a signal to the Heat metadata server and says, hey, the server instance is running; go ahead and notify Heat that this machine is done being configured. So I'll just show you a quick video of what happens when we run this, and hopefully there will be no lag here. This is a command line utility for Heat. You can also do this through the Horizon web UI, but this one will just fire off the create command. I'm going to give it a stack name, and I have no idea if you can... you probably can't see any of that text. So I actually don't think this refresh is working at all.
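The user_data handoff and the cfn-signal at the end of it can be sketched as a template fragment. The resource types (`OS::Nova::Server` and the CloudFormation-compatible wait condition resources that Heat supports) are real; the resource names, timeout, and installer invocation are illustrative stand-ins for what the actual OpenShift templates do:

```yaml
# Illustrative sketch of the user_data / wait-condition handoff
resources:
  broker_wait_handle:
    type: AWS::CloudFormation::WaitConditionHandle

  broker_wait_condition:
    type: AWS::CloudFormation::WaitCondition
    depends_on: broker_instance
    properties:
      Handle: { get_resource: broker_wait_handle }
      Timeout: 6000          # seconds to wait for the install to finish

  broker_instance:
    type: OS::Nova::Server
    properties:
      image: openshift-broker
      flavor: m1.medium
      user_data:
        str_replace:
          params:
            $WAIT_HANDLE: { get_resource: broker_wait_handle }
          template: |
            #!/bin/bash -x
            # cloud-init runs this on first boot; the installer script does
            # the actual configuration management
            yum update -y
            ./openshift.sh 2>&1 | tee /tmp/openshift.log
            # tell Heat's metadata server this instance is done configuring
            cfn-signal -e 0 --data "Broker setup complete" "$WAIT_HANDLE"
```

Until that cfn-signal fires, the wait condition keeps the stack in a creating state, which is exactly what shows up in the event list during the demo.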
Well, we may be cutting this a little bit short if there's no... yeah, unfortunately this video is not even refreshing. Yeah, I could pull that up too; I know that one has a little bit of different editing. Let me go grab that real quick. Bear with me real quick. Oops, go to the blog, and I believe it's down here somewhere. Let's see. So you're getting a good tour of where all of the content is for OpenShift. Let's see what the resolution ends up being like on this. There you go. Yeah, we'll see. Okay, so here's the disk image part again; we'll just jump past all the package installation. Okay, so here's the actual Heat creation. Let's see how well the refresh is here. That would be considered a local laptop or monitor issue. Alrighty, that is awesome. Well, give it a couple of minutes and then I will ad-hoc a demonstration. Yeah, I'm working on that. And here I thought I was going to save you a bunch of time by time-lapsing a video. Okay, so maybe it's actually going to work at least. Okay, so I showed a create there, and that is really small and blurry, so let me just talk about what's happening in this video. Now I'm looking at the Horizon dashboard just to see that the instances were created. You can use the Heat command line tool to look at the status of a stack build: you can use heat event-list to look at the specific events that are actually being fired off. When you look at the list of events, it may be really confusing at first, because each record is not updated in place; it's actually a sequence of events. So you'll see create in progress, create in progress, create in progress, but you'll see event names or resource IDs duplicated as you look down the list, and eventually you'll see things like create complete; that way you know that resource has actually been completed. And you'll see things like a wait condition.
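The CLI session behind the video is roughly the one below. The `heat` client subcommands are the real ones from that era of python-heatclient; the stack name, template file, and parameter values are made up for illustration:

```shell
# Kick off the stack build from a template, passing runtime parameters
heat stack-create openshift-demo \
    -f openshift.yaml \
    -P "key_name=my_key;domain=cloud.example.com"

heat stack-list                    # overall status: CREATE_IN_PROGRESS -> CREATE_COMPLETE
heat event-list openshift-demo     # the per-resource event sequence described above;
                                   # duplicated resource IDs are successive events,
                                   # not updates to one record
heat resource-list openshift-demo  # per-resource state, including the wait condition
```

The same create can be fired from the Horizon dashboard's stack view instead of the CLI.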
When we look at that cfn-signal part of the user_data in a VM instance, whenever it gets fired, that's when you know the wait condition has been satisfied, and then you can look in the heat event-list and see that the event has actually been completed. So here in the video, I've finished setting up the OpenShift infrastructure, logged into the broker, and run rhc setup, which is the OpenShift command line tool. I'm doing an rhc setup: setting up my domain name, making sure I set up an authentication credential, an authorization token, and setting up a domain namespace; for this one I'm using funzo. The output of that is basically a list of application runtime languages, or web frameworks as we call them, that you can create an application with. For this demonstration, I'm going to create a PHP 5.3 application. As soon as that app create run is done, DNS will be configured and propagated, and I'll be able to pull up that application in a web browser, which we'll do in just a second, as soon as DNS is done propagating. Okay, the output of that is a lot of information about your application: an SSH URL, so you can log in to your gear and look at what's running in your gear, running in that node VM. I've set up my network preferences here to point at my broker, because I'm running BIND on my broker server VM, and because of that I can go to this PHP application and just pull it up. Another thing I can demonstrate with this is a simple update. If I go into that test PHP directory, I can update the source here and make a simple change to the index page: save it, commit it, and push it. I do a git push, and the application is just redeployed and up and running.
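The client-side workflow in that part of the demo comes down to a handful of `rhc` and `git` commands. The subcommands are the real OpenShift v2 client ones; the broker hostname and application name are illustrative (the demo uses the funzo namespace):

```shell
rhc setup --server broker.example.com   # interactive first-time setup: credentials,
                                        # auth token, SSH key upload
rhc domain create funzo                 # namespace that apps get created under
rhc app create testphp php-5.3          # creates the gear on the node, sets up DNS,
                                        # and clones a git repo into ./testphp

# A simple update: edit, commit, push; the push hook redeploys the app
cd testphp
echo "<?php echo 'hello from OpenShift'; ?>" > index.php
git add index.php
git commit -m "Update index page"
git push
```

Everything after the push (stop, deploy, restart of the gear) happens on the node; the developer never touches the VM directly.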
So it gives you a way to set up infrastructure on OpenStack, set up OpenShift on top of it, and, as a developer, have an application created with all the configuration around it: a proxy, the ability to scale the application up and down, have it idled, have it be one of many tenants on a node, have it secured with all the firewall rules set up, and your code sharing, your Git repository, all set up so that you can add members to it and set up teams of developers, without having to do any configuration other than what I just showed you. And then you can use the developer console to create other applications. We have runtimes, and if you do this online, you can see a lot of quickstarts that we don't ship with OpenShift Enterprise, because we don't support them, we just enable them. But if you go to OpenShift Online or grab Origin, there are a lot of quickstarts where you can just pick blogs and different types of bleeding-edge frameworks to create applications with. Here you can see the same process. It's like watching water boil here... there, if I use the right scheme. And so there's my Ruby application. In a nutshell, that is what Heat has enabled us to do: deploy an entire infrastructure. So the demo was canned; we'll admit it, and you can tell. If you throw up the slides there... Could you go to the Git repository for the templates too, just so people see where that is again? Sure, yeah. I think we have about five minutes left in our time, but this is github.com/openstack/heat-templates. So everything that he showed you there, the templates, I saw people taking pictures; I'll post the slides up on SlideShare under openshift, and you can have them all, and I'll tweet it out.
But everything that he showed you there, for Enterprise, for Origin on Fedora, for Origin on CentOS, is all in the heat-templates in the openstack repo. So what we're looking for is for you guys to go out and play with it and test it and give us feedback. Make pull requests against it if you find something you don't like. Fork it, create different versions of it, and give us feedback on it. We've been using it for six or seven months for Enterprise deployments, and a lot of Fedora folks have been playing with it as well. The CentOS stuff is relatively new, so if you're a CentOS fan, we would really like you to give that a try as well and give us your feedback. Let me go back to my slides; I'm not sure what slides I have left for you. Some of the best practices. But we've got about five minutes for questions. Yeah, so why don't we stop and see if we can get some questions. You want to stand up? That's correct. And really the idea behind that is that in order to have RHEL, you have to have a subscription, so we just allocated our time to what we thought people would want to use. But the stuff that's on CentOS should be directly applicable to RHEL. Yes. It looks pretty similar except for the user data. Yeah. Right. Well, the Origin stuff. So you saw that I was using an openshift.sh; that's a bash script for our OpenShift Enterprise installer. The differentiation with Origin is that you'll see Puppet-based installation and configuration. With Enterprise we do a few other things, and it actually predates our Puppet work. But in the next little bit, you'll see some other interesting things with a tool called oo-install, which will ship with OpenShift Enterprise 2.1 and will have compatibility with openshift.sh as well as Vagrant-based installation. Yeah, that's a known bug, and the Heat team has been working on that. I don't know if they made it before... What's that? It's Icehouse.
What was your workaround for the wonkiness? Okay. Right. Is there any other workaround? You can use nested templates, but that specific issue has been addressed in Icehouse. And for debugging that, here, have a USB key from us, and thank you for that. Send us the... Any other quick questions that we can fit in? In the back? The applications are actually being run in the VM itself. What I demonstrated was a two-virtual-machine infrastructure setup: one is an OpenShift broker and one is an OpenShift node. The applications that are created are created inside that application node virtual machine. We're using PAM namespaces, we're using cgroups for resource allocation, and SELinux labeling. Yes. That's not addressed with this. So there are a couple of different scaling ideas here. The one you're referring to is OpenShift-based scaling. Right, and most of that implementation is in our HAProxy control daemon, so there's that aspect; there's a long list of issues to address there. There's also making applications highly available, meaning, well, what happens when brokers are introduced or taken away, or actual nodes are introduced or taken away. And then there's the infrastructure auto-scaling that Heat has the ability to do as well. You can have multiple HAProxies now; that's a fairly new feature. Your PaaS admin has to set up the user to allow HA applications, and then when you create a scaled application, at your second scaling it'll introduce a second HAProxy. So that's a new feature. Yeah, the scaling algorithm tries to place those gears on different virtual machines or different bare metal machines. Yeah, we could spread that out a bit, but I've tested that a bit and I've seen some wonkiness there, some unexpected results around the scaling algorithm.
So it's definitely a possibility to end up on the same compute host. Do we have time for another question? One more question, anybody? If not, thank you all for knowing about OpenShift in advance, and thanks for your patience with my demo video lag. Thank you very much.