Okay, all right, I think we're good to go. We're going to talk really fast, because we thought we had 45 minutes and we only have 40. My name is Diane Mueller, I'm with Red Hat — I'm the community manager for OpenShift, the platform-as-a-service project — and my colleague here is Daneyon Hansen. He's from Cisco, and he's an engineer who's been working on our next generation of OpenShift. We're going to talk today about deploying OpenShift, a platform as a service, on OpenStack, using Heat and Docker and Kubernetes, in 39 minutes or less. So we've got some work to do here.

Today's agenda is pretty quick, pretty clear: why infrastructure as a service is not enough when you're delivering your clouds; what is PaaS — I'm going to cover that; and we're going to talk about how to deploy OpenShift on OpenStack today — the current release of OpenShift in all of its flavors, and how you can do that with v2. I'm going to do that really quickly, maybe with a few witty one-liners. Then we're going to switch over and give you a quick overview of Docker and of Google's Kubernetes, both open source projects that really power OpenShift v3, give you a demo of that, and then give you all the tools to go off and get started with OpenShift v3 on OpenStack today.

So, the important stuff: I told you who we are, and why we love OpenStack — we love open source. I'm the community manager, and we're really looking forward to this. We have a couple of other presentations that other Red Hat folks are giving; I'm giving another one tomorrow on cross-community collaboration. We've done some great work with the Heat team in OpenStack to make the templates available for OpenShift, both Origin and Enterprise.
We'll talk a little bit about that. All of our cool stuff is on GitHub, and we'll show the links to it and move through that pretty quickly, so that you know where to find all the resources. You can always follow me, @pythondj, on Twitter, and I will send you the URLs. If you look on Twitter now under @pythondj, you'll see that I tweeted out the link to this presentation, so if you'd like to follow along, the URL's up top. This is a reveal.js deck; we'll tweet out the link to it again after the presentation.

So, some of the assumptions I have here: I'm not going to explain OpenStack today, so you've got to be all OpenStack savvy. You know a little bit about Heat — I'll talk a little more about it. You're either a developer or an operations person, or both. You know what GitHub is — everybody know what GitHub is? All right, good, I'm not going to explain that one. And you've heard about Docker — everybody heard about Docker? Yes, good. Optional skills are Git and Docker. Anyone play with Kubernetes yet? A couple of you — yay. Anybody program in Go? Oh, even better — I'm not going to explain Go. Suffice to say that in v2, the open source Origin project was all a Ruby on Rails app, and in v3 we've rewritten the entire thing in Go. So I'm really happy, as a Python programmer, to get out of the Ruby world — but some of the Ruby people aren't so happy about that.

So, one of the reasons that you would put a platform as a service on infrastructure as a service is really the extra special sauce that platform as a service brings to the table, for your developers and for your people in operations trying to keep your developer community happy: we add that middle layer beyond just the compute resources. If you put a platform as a service on it, you'll be able to deploy and automate the entire LAMP stack for your applications. You'll be able to deploy your applications faster, in a more standardized way.
You'll be able to be more flexible. You'll be able to give your developers and your development teams the testing and QA environments and the different languages that they're looking to use, and a way to deliver all of that in a multi-tenant, elastic way, so that applications can scale up and scale down on demand and developers can self-service those resources.

Really, a lot of the reason why you would put a platform as a service on top of OpenStack is because developers expect that kind of functionality today. They don't expect to just get an instance, build their own LAMP stack, make all that happen, and manage it — they really expect that level of automation. They expect now to be able to take their credit card, go to a public cloud, get a LAMP stack, and deploy an application; and if you're building an on-premise private cloud, your development team, your QA team, your test teams all expect to have that same level of automation available to them.

So, as I said, infrastructure as a service really is not enough. You're just getting the servers in the cloud, and you're still on the hook for building and managing everything, right down from the OS to the app servers to the database. Platform as a service makes all of that automated. If you're just delivering software as a service, people can combine and integrate those — and we have some offerings that let you do that, xPaaS and CloudForms and other things — but you're really restricted to only using the services that are available to you. So platform as a service, in my humble opinion, is really the secret sauce in the cloud. It's what gives you the development agility to bring new products to market quickly, to scale them, and to leverage all the ease and scale and power of the cloud very rapidly — and to do that in a way that's compliant and easy to manage
from an ops point of view, and easy to self-serve from a developer's point of view.

So with OpenShift, there are three flavors of OpenShift. There's OpenShift Online — if you go to openshift.com, you can play with everything that we're going to talk about today. And we eat our own dog food: we host it, and we have over two million apps deployed on OpenShift, the public cloud version of it. We also have an enterprise version. But the part that I'm really happy about, because I'm the community manager, is that everything we use in the public cloud and in our enterprise product is available in OpenShift Origin, the open source project, and all of that is on GitHub. So if you wanted to go and deploy it today on RHEL, on CentOS, or on Fedora, you could do that.

So what can you do with OpenShift? Well, you can do just about anything you can imagine with OpenShift. You can deploy all the languages that you'd expect — if I tried to jam them all in here, you'd not be able to read this thing. It runs on OpenStack, it does run on AWS, it runs on bare metal, it runs on RHEL, CentOS, or Fedora. And it basically deploys the entire LAMP stack, all of the database services, and everything you need to get your applications running on the cloud. So how does it work? I'm going to be really quick about this.
It's basically taking that management interface, or the broker. It creates a broker — and you need to know this in order to understand what we're actually deploying onto OpenStack. When you look at the Heat template, you'll see this with v2: what you're getting is a broker, a messaging layer (ActiveMQ), and a node in which the cartridges and your actual applications live. So we've got two types of things that we have to deploy using Heat.

So, a little bit about Heat. We've done some great work with the Heat team — Steve Dake, Chris Alfonso, and a number of other folks have done some wonderful work creating the Heat templates. They're all up there in GitHub; some of them are in the GitHub repo for OpenStack heat-templates, and the enterprise ones are in the OpenShift repo. What we've done is taken everything you need to deploy both the brokers and the nodes and put it into Heat templates. That allows us to spin up and register things, spin up the compute nodes, scale them, and make them run with all the authentication capabilities — so we'll do all that HA stuff. And I'm trying to do this all in ten minutes or less — all right, so you have all the time you need.

So, the Origin Heat templates for Fedora and for CentOS are in the OpenStack heat-templates repo. They're easy to find, and this is all the stuff for v2; all the stuff for the enterprise version is also in GitHub. It's all open source, it's easy to run. So if you've got Heat installed on your OpenStack, you should be able to run these right out of the box, and you can watch a wonderful YouTube video at your leisure — I've tweeted out these links as well, and you can watch this whole thing run end to end. But instead of doing that today, we're going to talk a little bit about a new PaaS, a new generation of platform as a service. So one of the questions you might ask is: why do we need a new version of OpenShift? Well, how many of you are using Docker?
Everybody loves Docker. We've entered a new era of tooling: Kubernetes was open-sourced by Google, so we wanted to take advantage of that. We learned a lot from deploying those two million apps on openshift.com, and we wanted to incorporate that and put out the next generation of OpenShift. And one of the wonderful things about working in the open source world is that we get to work with great people like Cisco — and Daneyon, who on v2 did all of the HA Puppet scripts that built the HA capabilities into v2. So if you're using any of the install.openshift.com Puppet scripts for v2, you're using work done by Cisco, donated and pulled right back into the project. So what I'm going to do is step aside and let Daneyon talk about all the work that Cisco is doing and introduce v3, as opposed to me talking about it. So, Daneyon, take it away.

Thanks, Diane. Can everyone hear me okay? I don't know if this thing is working — oh, here we go, that's better. As Diane mentioned, v3, the newest version of OpenShift, is still under heavy development; it's considered alpha. It has gone through a major architecture change, all the way from the language that it's programmed in — from Ruby to Go. And as part of the architecture change, a lot of the tools within the architecture have changed as well. One of the first is how the applications within the PaaS are isolated from one another. Previously there was a technology called cartridges that used SELinux and cgroups and so forth to isolate those applications within the platform. V3, the newest version of OpenShift — again, still under heavy development — is using Docker, leveraging Docker. And so, to understand what
Docker's all about, let's just really quickly talk about what containers are. They're a form of isolating applications and the application dependencies, like configuration files and libraries, from one another. If we look at virtual machines, we were able to isolate virtual machines from one another all the way down to basically the hardware driver level. With containers, it's an isolation approach that lets us take a single kernel, or operating system, and isolate applications and their dependencies from one another on top of it. Part of that isolation mechanism is that Docker, or containers generally, leverage kernel facilities like cgroups and namespaces to create that isolation among the applications.

And part of the way Docker is able to do this is by using Docker images. Traditionally, with a root file system, the kernel will mount the root file system as read-only, do its checks, and then turn it over to a read-write file system. What Docker does, instead of changing that to a read-write file system, is mount a union file system, so that we can have multiple file systems layered on top of that root file system. I kind of think of it as a layered cake. So when you create your Docker images, you're creating an image based off of a base image, and that base image is read-only. If you go ahead and make any changes to that read-only image, those changes you make are read-write within your own image.
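That layered-cake model is exactly what a Dockerfile builds — each instruction adds a new layer on top of a read-only base. Here's a minimal sketch; the base image tag, package, and file names are just illustrative:

```dockerfile
# Each instruction adds a read-only layer; the base image is never modified.
FROM fedora:20                            # read-only base layer (illustrative tag)
RUN yum install -y nginx                  # new layer: only the package changes
COPY index.html /usr/share/nginx/html/    # another thin layer on top
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Building this with `docker build -t myuser/nginx:v1 .` and retagging it with `docker tag` gives you the version-and-tag workflow described below, and pushing uploads only the layers the registry doesn't already have.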
You never change any of the contents of that read-only image.

So, Docker images: I say they're more like git than tar. We can pull and push images, we can tag and version them, and from a developer standpoint — developers love git — you can imagine that when Docker came around with this image concept, developers very quickly embraced Docker images as well.

So here's what it kind of looks like. I take a host, I install Docker on that host, and I can immediately start pulling down Docker images from Docker Hub, the central public registry. I can also push an image: I create my own image, or I've pulled an image and made some changes, and now I want to push that image back up to Docker Hub, so that if I want to pull it down somewhere else, it's available. And again, that speaks to the power of the Docker workflow: being able to push and pull those images. If I develop on my desktop and make some changes, I push those changes up to my image sitting on Docker Hub, and then I'm able to go to a production system and pull the recent image down.

Versions and tags: I take an image, I make some changes to it, and I say, hey, let me tag this as something useful. So I can keep the existing image the way it is, and now I've also got an image just like it, but with my changes, under a different tag.

And then some typical container operations. I can run a container — I can use flags like `-i` and `-t` and actually get right into a bash shell within my container. I can list my containers using `docker ps -l`. I can diff a container: I run a container, make some changes, walk away, have lunch, come back, and say, oh, what did I change? What are my changes
running in my container, compared to the image I ran that container from? I run a diff on it, and I can see that when I did a `yum install wget`, it made all these changes to that image.

I can run my container as a daemon — that's typically how containers are run — using the `-d` flag. I can also add the `-p` flag to my `docker run`, and what that does is basically tell Docker: hey, Docker, go ahead and map ports (the `-p` is for ports). So here I've got my nginx container running, and it's listening on port 80 of that container. Now I want to map that port 80 on the container to the host, so the rest of the world can hit that container on an IP address and port that they expect.

So here are all the cool things Docker does. Portability: pushing and pulling images, so I can go from developing on my laptop, to a test/dev environment, to a production environment, simply by pushing and pulling all the way through. And that really aligns with the workflow — developers love that workflow of being able to push and pull and tag images, diff images, and so on and so forth. Really easy to use: I take a host — a virtual machine, wherever it may be — install Docker, and very quickly I can pull down an image and run a container. So I can go from a standard host to a Docker engine running an nginx container within literally a few minutes. And then speed: these Docker images are typically very lightweight, so it's very easy for me to pull and push images very quickly. Part of that is the union file system: if I make a change to an image and push those changes up to Docker Hub, I'm not pushing an entire image — let's say it's 500 MB —
I'm just basically diffing, right? I'm diffing the changes between my original image and the changes that I made, and when I do that `docker push`, it goes change by change and says, oh, let's only push these changes up.

So, what Docker doesn't do: Docker was really developed as a single-host type of solution. Docker doesn't have this concept of, how can I manage and orchestrate hundreds or thousands to tens of thousands of containers, across hundreds or thousands of hosts, across tens to hundreds of data centers? I can't bring together containers that I would want to operate together and manage those containers as a coordinated group. So, let's bring in Kubernetes.

Kubernetes is an open source project that Google open-sourced, I want to say, about four months ago. Google's been using containers for over ten years — I believe they're saying they're starting up and shutting down millions of containers a week — so they know a thing or two about containers. And what Kubernetes does is, it's pretty much a cluster manager, so that
it's a cluster manager for clustering and managing containers. In Kubernetes terms, there's the concept of a pod, and a pod is a collection of one or more containers that you want to have together on a single host. In this example, we see pod 1 with c1 and c2: c1 could be my Apache container, running my Apache process, my Apache configuration files, and any other libraries; and c2 could be a service that I normally tie together with the Apache service — maybe it's a log-rolling service, a data-loading service, something like that. Whenever I run Apache in my environment, I want to make sure these other services are together with it, so I bring them together, and that forms a pod. And it makes a nice unit of management, because — you're hearing the term microservices — instead of treating a container like a virtual machine, where we take one container and load up a bunch of different services on it, we take each of the services that would normally make up your application, containerize each of those services, and then group the common services together into pods.

Then there's the concept of a label, so that I can take a bunch of these pods — in a large environment I may have hundreds of thousands of pods — and say, wait a second, let's start managing these pods more effectively. I'm going to take these pods, and they're going to be my front-end pods, so I'm going to put a label on them called front-end. And I can manage them even further and say, within that grouping of pods called front-end,
I have test/dev, I have production — so, different environment labels. I can stack all these labels together, and taking all those labels allows me to manage these pods so that I don't have pod sprawl.

Then, all the different services that make up Kubernetes can typically be bundled into two different scenarios. One is your master, and another collection of services is your minion, or node. The master — think of that as your control plane; in OpenStack terminology, this is your controller node. And the minion — think of that as your worker bee. The master is talking to the minions, saying, fire up this pod, tear down this pod.

Another big piece of Kubernetes is etcd, and etcd is a highly available, distributed key-value store that's used for shared configuration management. So I can go ahead and define something — I could define a label, maybe foo equals bar — I define that within my master, the master stores it in etcd, and now any of my minions is aware of that particular label.

One of the minion daemons is called the kubelet, and if you think about the kubelet, it's like a translator between the Kubernetes world and Docker. The kubelet takes your pod definition and goes and talks to Docker, firing up the containers necessary within that pod; or, if I remove that pod definition — I say `kubecfg delete pod xyz` — it's going to tell that kubelet, hey, run the Docker commands necessary to remove that container from this Docker host.

Another daemon is the kube-proxy. Think of the kube-proxy as a distributed virtual load balancer. I create these pods, and now I want to expose one of the services from the pod. Again with the Apache example: I've got a bunch of Apache pods, and I say, all right, let me go ahead and expose Apache on port 80 to the rest of the world.
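Pulling the concepts so far together — a two-container pod, labels, and a service that selects pods by label — here's a sketch in modern-style Kubernetes YAML. The alpha-era API Daneyon is demoing actually used v1beta1 JSON with different field names, and the image names here are made up:

```yaml
# A pod grouping Apache with a sidecar, labeled for selection.
apiVersion: v1
kind: Pod
metadata:
  name: apache-pod
  labels:
    name: front-end            # stackable labels keep pod sprawl manageable
    environment: test
spec:
  containers:
  - name: c1
    image: example/apache      # hypothetical Apache image
    ports:
    - containerPort: 80
  - name: c2
    image: example/log-roller  # hypothetical log-rolling sidecar
---
# A service tied to those pods by label; the kube-proxy on every
# minion load-balances traffic for it.
apiVersion: v1
kind: Service
metadata:
  name: apache
spec:
  selector:
    name: front-end            # matches the pod label above
  ports:
  - port: 80
```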
I go ahead and create the service, I use the label I was talking about to tie the service to the pods, and the Kubernetes API will create that service endpoint and store it in etcd. The kube-proxies running on all my minions become aware of it and say, oh, there's this new endpoint — let's pull it down and now load balance for this service on all the minions. So think of it as a load balancer slash service discovery: I have these pods, I create the service, the kube-proxy running on the minions is now aware of it, and it exposes it to the rest of the world. And it's really interesting that that proxy runs on all the minions — so if I've got a hundred minions, say from 192.168.x.100 to .200, every single one of them is aware of that service I create and will load balance on the back end. It knows about the IP address or IP addresses associated with the containers, or the pods, and it will front-end the proxy on its own IP address.

So, cluster management — I've talked about some of these. There's the Kubernetes API: that's basically how clients and users interact with the system. The scheduler is very simplistic now, but it's meant to be more extensible in the future; the scheduler basically says, hey, where do I run these pods — throw this pod on minion one, minion two, or minion three. And then the controller manager basically works with the Kubernetes API and monitors the status of those pods. So if I say, all right, I've got this Apache pod and I want three replicas of it, for high availability or for scaling that service, the controller manager will make sure that all three of those pods are always running. If one is somehow deleted, the controller manager will know about it and say, let's spin that thing back up; if for some reason there are four or five of them, it'll say, let's tear those two extra down. And then there's kubecfg.
That's just our command line for interacting with the system.

So Kubernetes does a lot of really awesome things, but there are some things that it doesn't do. Kubernetes really does a good job at managing containers at scale, but it doesn't really look at how to take an application and manage it through the entire lifecycle of that application — how do I take my application source code and turn it into a running application? That brings us to OpenShift, bringing it all together.

So if we look at applications — what are they, or what do we want them to be, in the new era of application development? Really, distinct, interconnected services. We say distinct because, with the microservices concept, we want to treat each of the services independently: if we need to patch them, we patch them independently; we test and dev them independently. But they need to be interconnected, because complex applications are groupings of services, and we want to be able to deploy and manage those in concert — but we don't want them tightly coupled.

So, applications within OpenShift — just some key concepts. For example, a config: it's a collection of objects describing what the application is. It would contain our pod or pods, our service or services, our replicas — how many replicas of this particular application do we want. And then there's the concept of a template, which allows us to parameterize our application. This is kind of what a config template looks like: we give it a name, we specify the top-level parameters that we want to use throughout the application's services, and then the items that really make up the configuration of that application. And here we just dive down into the parameters.
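Roughly what a config template like that looks like — early v3 templates were JSON, and the field names shifted frequently during the alpha, so treat this shape (and the image name) as illustrative:

```json
{
  "kind": "Template",
  "metadata": { "name": "sample-app" },
  "parameters": [
    {
      "name": "ADMIN_PASSWORD",
      "description": "top-level parameter referenced by the items below",
      "value": "changeme"
    }
  ],
  "items": [
    {
      "kind": "Pod",
      "metadata": { "name": "sample-pod" },
      "spec": {
        "containers": [
          {
            "name": "app",
            "image": "example/app",
            "env": [
              { "name": "ADMIN_PASSWORD", "value": "${ADMIN_PASSWORD}" }
            ]
          }
        ]
      }
    }
  ]
}
```

Processing the template replaces each `${ADMIN_PASSWORD}` reference with the parameter's value, yielding a concrete config.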
So in this top section we define what the parameter is; in the template we go ahead and reference the parameter; and when we actually process that template into a config, you see that the environment variable actually picks up that expression from the template.

Then there's also the concept of builds, or build configs, within OpenShift v3, and this is a real powerful aspect of OpenShift. It allows us to take source code and turn it into a running image within our environment — a running Docker image. It basically interconnects the source code: I'm a developer, I create some source code, and the build config allows me to take that source code and turn it into a running image in the environment. If I make any changes to that source code, I also have the instruments necessary to keep that image updated in my environment — again, I believe a very powerful aspect, and it really goes to this slide on lifecycles. We can use, or leverage, triggers to always keep that image updated. My Apache image running in my Kubernetes environment — it's awesome that I'm able to put that thing there, but three months from now, six months from now, a year from now, am I keeping it updated based on the continuous iterations I'm making to that code base?

And then the deployment — the deployment really brings it all together. How many replicas do I want? What are my trigger policies? And what's my strategy for deploying the application?
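The build config just described might be sketched like this — again, alpha-era fields varied, and the repo URL, image tag, and secret are made up for illustration:

```json
{
  "kind": "BuildConfig",
  "metadata": { "name": "app-build" },
  "source": {
    "type": "Git",
    "git": { "uri": "https://github.com/example/app.git" }
  },
  "output": { "imageTag": "example/app:latest" },
  "triggers": [
    { "type": "github", "github": { "secret": "secret101" } }
  ]
}
```

A GitHub webhook pointed at the trigger endpoint tells v3 the source changed, which kicks off a rebuild of the image from source.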
Trigger policies: we can either trigger manually — saying, hey, go ahead and build this now — or I can trigger it based on a change in the image or in the config.

So, new concepts to keep in mind with v3 are the configs, the builds, and the deployments. The configs are collections of Kubernetes as well as OpenShift objects. The templates are what we use to take parameters that we're going to leverage throughout our application — take those variables and parameterize them throughout the application. The builds take the source code and turn it into a Docker image. And the deployment: how do we actually get this application up and running?

So let's actually see what we're talking about here. I've got an OpenStack Juno cluster — and I'll actually show you; I've got Juno running on Fedora 20 — and what I'm going to do, very simply, is go from my OpenStack environment all the way to running an application in v3. One of the first things I do is pull down the Kubernetes Heat templates, the master branch, and I simply put some stuff in the local.yaml: what are the keys I'm going to use, what are the flavor sizes, what are the IP addresses I'm going to use. And this Kubernetes Heat template is actually a nested template. The cluster, or high-level, template does some basic things: it's setting up our Neutron network, it's setting up our SSH keys to our instances, it's setting up the security, and it's also basically tying down to the lower-level Heat YAML template. So there's the high-level template as well as the node template — it's a nested template. And where are we at here?
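The local.yaml he fills in might look roughly like this — the parameter names here are illustrative, not the template's actual parameter list, so check the repo's example file for the real ones:

```yaml
# Illustrative local.yaml for the Kubernetes Heat templates.
parameters:
  ssh_key_name: kube-key          # Nova keypair injected into the instances
  server_flavor: m1.medium        # flavor used for master and minions
  external_network: public        # Neutron network for floating IPs
  fixed_subnet_cidr: 10.0.0.0/24  # subnet shared by the cluster instances
  number_of_minions: 2
```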
So let me just back up one second. I believe this is the second template that we're using here, and this configures our kube minions. It's very similar to the high-level, or cluster, template: it's configuring the network for the minions, the security for the minions, and then actually configuring the Nova instances. And within both of these templates there's a lot going on — basically we're using the user_data parameter within the Heat template to install a special repo, pull down certain packages, and run some scripts on the nodes as well.

You see that we fired off the Heat cluster, and it is building. We can take a look at the resource progress and see that it's still building out, and what we're doing now is just seeing that, oh, Nova has fired off these instances — they show active. But let's take a look at the console log and see where they're at. Well, even though they're active, they're still updating their packages, pulling down the latest packages that we need for Kubernetes. Well, now it's complete, so let's go ahead and SSH into our kube master.

Now that we're in the kube master, let's just see that the packages are actually there: we've got the kubernetes package, we've got the etcd package — really the two core packages on the master. Let's take a look, and we actually see the minions — oh, the minions are there; I've got two minions, at .4 and .5. Let's take a look and make sure that the services — the kube-apiserver, the kube-scheduler, the controller manager — are running; etcd's running; everything so far — yay, Heat did what it's supposed to do, things are looking really sweet.

All right, next step in this flow: now let's pull down another set of files that we're going to use to go ahead and deploy OpenShift v3 on my Kubernetes cluster.
I pull down the files, and I'm showing you what the files look like — you may want to modify the files if your master and etcd servers are at a different IP address — but otherwise, I go ahead and create that kube service. So now all the kube-proxies are proxying port 8083 and sending it to my v3 pod or pods. I go ahead and create a pod, and you see it's sitting there waiting: Kubernetes is basically saying, oh, Daneyon wants to create a pod; let's go ahead and schedule these pods out to the minions, download the Docker images, and start up the necessary Docker containers. It's done that, the pods are ready to go. I've got a container ID, and I go ahead and verify that the container looks good — you see that the services are running.

Now let me actually get into the container. I'm going to go ahead and get the container ID and use a tool called nsenter to basically console right into the container. And I'm in there — great. So now I'm in my OpenShift v3 environment that's running on Kubernetes, which was deployed by OpenStack Heat.

The next thing I'm going to do, now that I'm in v3, is again pull down some files, and I'm going to build out my Docker registry within my OpenShift v3 environment. OpenShift takes advantage of a private Docker registry, and you see what this file is: it's a config file, like we talked about. It's going to describe the services — so this is kind of like a higher-level wrapper within Kubernetes.
We're used to, or we're familiar with, the pod files and the service files; well, now we've got the config file that wraps that all together within the OpenShift world. Part of the config file is specifying what image do I want to use, what ports do I want to expose. Now let's go ahead and fire off that config, and it creates those services — and the service is being proxied by kube-proxy.

Now I'm going to go ahead and create a build config. Remember I told you about the build configs: what the build config does is allow us to create that connection between the source code and the Docker image. And so it needs some parameters to understand that: what's the URL of the source code sitting on GitHub, and what's the tag, or the image name, that I want to use when I go ahead and create that image. I go ahead and create the build config. And now what I'm doing is simulating a webhook. What this webhook normally does is: if I go ahead and set up a webhook in my GitHub repo, then whenever I make a change, it's going to let OpenShift v3 know — hey, something's changed in this source code — and that kicks off v3 to pull down those changes and rebuild that Docker image.

And now I've got an application template — so this is going to be my sample application. Instead of just creating an application config, I'm now using a template, and as I talked about just a few minutes ago, the template allows us to parameterize those variables that I want to share within my application. So things like admin usernames and passwords, and maybe certain settings within configuration files, I can go ahead and parameterize. It includes my pod template — so, what's the name of the container
I want to use, what's the image that I'm going to use — and this is where I'm actually consuming those parameters within the template. And now I'm going to go ahead and generate a config from the template and also run that config, all in a single command, and now you actually see I've got a running application. It's a simple application — a hello world application — but what I wanted to focus on was that whole process: being able to take an OpenStack deployment — a Juno deployment, though this should work on Icehouse as well — build out a Kubernetes cluster, run OpenShift v3 on it, and then run an application on top of v3. So here's where you can go ahead and get started with OpenShift v3: github.com/openshift/origin, and it's got a whole getting-started guide. We don't really have much time for questions, but I'll hand it back to Diane.

Yeah, so thank you very much, Daneyon — that was awesome. That's the kind of beauty of working in an open source community, and we would love you to come and contribute and test out OpenShift Origin v3. It's all on GitHub, all the getting-started stuff is there, and I'll tweet out the links. We're going to be over in the Cisco booth right after this if you have questions for Daneyon or myself on how to get involved, how to contribute, or how to start testing — and if you're interested in running the v2 stuff, we can show you how all that works as well. I'll be in the Red Hat booth all day tomorrow, so please come forward and ask any questions you might have. Thank you very much for your time and attention. Enjoy the show.