Hello and welcome to this OpenShift Commons briefing today. We're really pleased to have with us Daneyon Hansen from Cisco to talk about OpenShift on OpenStack and some of the work he's been doing as a contributor to OpenShift in the area of HA deployment and Puppet scripting, as well as to share his experience deploying OpenShift on OpenStack at Cisco, where he is gainfully employed. I've had the great pleasure of working with Daneyon, giving presentations at OpenStack Summits and to other groups within Cisco, and he's a wonderful speaker. What we're going to do is let him give a little bit of his presentation so you get a taste of what's going on there; he can talk through that, and then we'll do a little bit of Q&A and open it up for discussion. Somewhere around halfway through this call, Ken Owens, who's the CTO of Cisco Cloud Services, is going to be joining us, and he can tell us a little bit more about the roadmap ahead for Cisco, how they're seeing the use of OpenShift at Cisco, and how it plays out there. So I'm going to let Daneyon talk a little bit here now. Thank you very much for offering to brief all of the other members of Commons today.

Yeah, thanks, Diane. As Diane mentioned, my name is Daneyon Hansen. I'm a software engineer with Cisco, part of our CTO organization. I've been working with and contributing to OpenShift for about a year now. As Diane mentioned, some of my initial work was related to the automation aspects of OpenShift using Puppet, and then enhancing that further by including high availability aspects for OpenShift deployment, so that key OpenShift services can scale and are resilient. I've also been working with OpenStack since the Diablo release, and so what I've also done is work within Heat to orchestrate OpenShift deployments within OpenStack. We'll talk more about that here. So I put together just a few slides to help guide the conversation.
But this is very much an open dialogue, so please jump in, ask any questions, and let's make this interactive. For folks unfamiliar with OpenStack, or specifically Heat: Heat is kind of an integration point between OpenStack and OpenShift. If you want to deploy OpenShift in an OpenStack environment, there are definitely different ways of skinning that cat; the way that I thought was best a year ago, and where I put my efforts, was using this orchestration service, a.k.a. Heat, within OpenStack. What that allowed us to do is provide a very simple format for users to deploy a multi-node OpenShift environment very easily, without having to go in and monkey around with different settings for your different node types. It provides a very simple interface: you basically supply parameters to the Heat template, and with a single command you've got the orchestration service that goes and does all the hard work for you. In addition to the presentation, I also pulled up a demonstration that I put together probably about six months ago. It is a little dated, but it still illustrates the concepts that are important to this discussion. And again, just a quick overview of what Heat is: it's a service within OpenStack, you could call it a service or a project within OpenStack, and it is the orchestration service. It interacts with all the different services that make up the OpenStack system and talks to those different API endpoints to realize the workflow that you create. We'll dive into one of the template examples so you can see how it works and talk through it a little bit. But really quick: when Heat was created, it was originally based off the Amazon CloudFormation API, and it still supports the CloudFormation API, but it also has its own native API, and that's really where the focus of the development is.
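For context, a minimal template in Heat's native HOT format looks roughly like this. This is an illustrative sketch, not the actual OpenShift template; the image and flavor names are placeholders:

```yaml
heat_template_version: 2013-05-23

description: Minimal illustrative HOT template (placeholder values)

parameters:
  key_name:
    type: string
    description: Name of an existing SSH key pair

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: fedora-19          # placeholder image name
      flavor: m1.small          # placeholder flavor name
      key_name: { get_param: key_name }

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [my_server, first_address] }
```

Every HOT template follows this same parameters / resources / outputs shape; the OpenShift templates discussed below are just much larger instances of it.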
Part of the feature set of Heat is not only to orchestrate the initial deployment of whatever you want to orchestrate, typically a complex application, but it also integrates with Ceilometer, for example, to scale out that application. And here's the interface to access the Heat templates that were created. If you look at my browser here, you can actually just follow along. This is a Reveal.js presentation that I'm using, so if you go to danehans.github.io/v3_presentation in your browser, you can move along at your own pace if you'd like. The Heat templates are hosted on GitHub within the OpenStack repo, and the heat-templates repo has all sorts of different types of Heat templates. Whenever I want to create a new orchestration workflow for an application, I'll usually start here to see what templates are available to start my work from, to leverage the work that's already out there as a baseline instead of starting from scratch. There's also a video that was put together on how to use the Heat templates for OpenShift.

Daneyon, I would just interrupt for a second too. The OpenShift Enterprise Heat templates have now been moved from the OpenStack repo into the OpenShift repository on GitHub. So if people are looking for the enterprise-supported ones, they're now under OpenShift on GitHub. The Fedora and CentOS ones are in the OpenStack repo.

I'm going to dive into the Heat templates in more detail too, so everyone can see what's going on there. But I also added this page here where you go to github.com and, instead of the OpenStack heat-templates repo, you go to the OpenShift Origin repo, or maybe even start off at just the OpenShift repo. You can see all the different projects within the OpenShift repo that you can contribute to.
Again, the particular project that I contributed to was the Puppet modules for automating OpenShift deployments. So take a look at the OpenShift repo and see how you can contribute; there are a lot of different opportunities, whether it's directly with Origin or really any of the other associated projects for OpenShift. I want to throw it out there: any questions? Otherwise, I'm going to jump into a quick demonstration so you can actually see what I'm talking about, and then dive into one of the Heat templates so you have a better understanding of the template itself.

I'm not seeing any questions at this point in the chat, so why don't you go for that?

Yeah, you bet. Okay. The demo moves really quick, so I may pause it here and there. But really quickly, to show how simple the Heat template is, let me go back here: we've got a Heat template that a user supplies the parameters to, and that basically tells Heat to go and talk to all the necessary OpenStack services to realize the multi-node OpenShift deployment. And I added this here just to emphasize that there are a lot of different ways to interact with OpenShift: command line tools, IDEs, or the web interface. This demonstration, by the end, will actually log into the web interface and show that particular way of interacting with OpenShift. It really comes down to whatever the developer prefers; OpenShift provides different ways to interact with it. So what you're seeing here is: I'm an OpenStack user, I have an account, and the OpenStack environment that I'm using supports Heat. So when I interact with Heat, I use the heat commands. In this particular scenario, I'm doing a stack create, calling it oso-stack for OpenShift Origin, and then I specify the template file, right?
This template file is something that I would download; it's a template file from the heat-templates repo. So I tell Heat: this is where the template exists, and here are all the parameters that I'm going to pass in to customize this particular deployment. Things like: what SSH key pair am I going to use? What DNS prefix do I want to use within my OpenShift environment? What are the IP addresses or DNS names of my DNS servers? Where's my NTP server? All sorts of stuff. We'll dive into the parameters in more detail, but it's pretty simple: you say, hey Heat, create this stack, here's the template file, here are the parameters, and Heat starts to create the stack. As part of that creation process, the template file consists of a bunch of resources; that's really a core concept within Heat. This particular template has, gosh, close to eight different resources, and it'll go through resource by resource and let you know if that resource completed properly, if there were any issues, or if it's still in the process of completing that resource. You see that right now it's going through that process. A few minutes later, I do another heat resource-list for my particular stack name, oso-stack, and you see, let me move this arrow, that all the different resources created successfully. So if I do another heat stack-list, you'll see my oso-stack has completed correctly. Now, the next thing, I kind of jumped through this because I wanted to make a very quick demonstration; this was for one of the OpenStack Summits. But something that I glanced through there and didn't really show is: okay, the stack's now created. What you would typically do then is a heat stack-show oso-stack or a nova list, and you would see the instance details, the virtual machine details, that make up my OpenShift broker and my OpenShift nodes.
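The CLI sequence from the demo can be sketched roughly as follows. The parameter names here are illustrative placeholders, not the actual template's parameter list, and the commands require a live OpenStack cloud with Heat enabled:

```shell
# Create the stack from a downloaded template, passing parameters inline
# (parameter names are placeholders; read the template's parameters
#  section for the real ones).
heat stack-create oso-stack \
  -f OpenShift.yaml \
  -P "key_name=my_keypair;prefix=example.com;dns_server_ip=192.168.1.1"

# Watch the individual resources come up one by one...
heat resource-list oso-stack

# ...then confirm the stack as a whole reached CREATE_COMPLETE.
heat stack-list

# Finally, pull the stack details (outputs include the broker address);
# `nova list` shows the same instances from Nova's side.
heat stack-show oso-stack
```

This is the whole user-facing workflow: one create command, then a couple of read-only commands to watch progress and fetch the results.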
Then I would capture the IP address of the OpenShift broker and use that to log into this web interface that you see right here. As part of that deployment, we also set up a username and password for the administrator, so now I have the IP address of my broker and I have the username and password to be able to log in. Again, this example shows the web interface; I could just as easily use my client tools and interact with the broker from the command line or from an IDE as well. So now I'm logged into the broker and I'm setting up my namespace and adding my public keys, some of the basic, fundamental things we need to do to start interacting with the broker and creating applications. In this example, I'm going to use a CakePHP application, and that source code is actually sitting on GitHub. So all I have to do is say: hey, OpenShift, the code sits here at this GitHub URL, go ahead and create the application. It gives me some information, username, password, so on and so forth, and, let me go back here, then it says: hey, application created successfully. Again, it gave me some of that feedback, so now I know the URL of my application. I open up a new tab and, boom, I'm at the CakePHP home screen. Now I'm able to start working on and customizing my application as needed. So again, the workflow is pretty simple. In an OpenStack environment, as long as it runs Heat, and most OpenStack cloud providers support Heat, I'm just pulling down the Heat template and passing in the parameters for the template so that I can customize my deployment as needed. Then Heat does all the hard work for you. Log in to the broker, set up some of your basic information, like the namespace and the SSH key you're going to use, and then start deploying applications. And then the last piece of this is just to go through the different cartridges, right?
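From the client side, the post-deployment steps described above look roughly like this with the rhc tools. The domain, key, and repository names are illustrative assumptions, not values from the demo:

```shell
# Point the rhc client tools at the freshly deployed broker
# (hostname is a placeholder for the broker's floating IP / DNS name).
rhc setup --server broker.example.com

# Create a namespace (domain) and register an SSH public key:
rhc domain create mydomain
rhc sshkey add mykey ~/.ssh/id_rsa.pub

# Create a PHP app, pulling the source straight from GitHub
# (the quickstart URL is illustrative):
rhc app create cake php-5.3 --from-code=https://github.com/openshift/cakephp-example
```

After the app create, rhc prints the application URL and credentials, which is the feedback the demo shows before opening the CakePHP home screen in a new tab.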
I mean, this is the power of OpenShift: we don't have to build common applications from scratch, right? Same with Heat, with the Heat templates: we don't have to build from scratch. There's stuff out there already. Use it. If you need to add to it, then add to it. Same with the cartridges. If there's a cartridge out there that you need to add to, that's one of the great things about open source, right? Even as I look, let me jump to the Heat templates. I haven't looked at the Heat templates in a little while because I've been working on other projects, but I've got a big smile on my face, because I came back and was like, wow, this has changed quite a bit over the last few months, because Antoine has come back here and said, oh, you know, let's add to the Heat template, right? So under the OpenStack heat-templates, I went to the path for OpenShift Origin. We can go either Fedora or CentOS, and then there are different types of templates, from HOT, which is the Heat Orchestration Template format, that's the native format, to the AWS template format. And there are quite a few different Heat templates out there for OpenShift under Fedora.

Daneyon, can I ask a quick question here too? These Heat templates that are in the Origin one, are they HA templates, or do they have to be reconfigured to be HA?

So these are not HA. I believe if we go to the Heat templates under the... So I see the OpenShift Origin templates here. It was a different repo where the OpenShift Enterprise...

Yeah, so those ones ended up under OpenShift on github.com, OpenShift, and then I think heat-templates in there, or under extras. And they got moved similarly recently.

Oh, there they are. Yeah, it looks like we have two different repos. If you want to work on the Origin templates, they're hosted out of the OpenStack GitHub repo, and if you want to use the Enterprise templates, they're out of the OpenShift repo.
And these OpenShift Enterprise templates, the last that I saw, did have HA options here; I believe that's where the HA is. You can fork them, bring them into your own repos, and tweak them as you see fit. But this is a really good starting place if you're trying to learn Heat, too. Part of the reason I got into making sure that we had good Heat templates is because I wanted to learn the Heat templating syntax. So if you're interested in Heat, this is a great way to learn it, and it's also a great way to deploy OpenShift on OpenStack.

Yeah, and what I've done now is jump to the OpenShift Origin Puppet module repo. This is where I did quite a bit of work and, as Diane mentioned, did the original work for high availability. I bring this up because what the Heat templates actually do is leverage these Puppet modules, and we'll go through that in a little more detail. The nice thing about that is: if you're in an OpenStack environment, you can use Heat to orchestrate this complex deployment, or you can say, well, I just want to leverage these Puppet modules to automate the deployment of my multi-node, high availability OpenShift environment. So you really have both options. Now, granted, with Heat I showed you it's a single command, right? You're pulling down the templates, or pointing to where the template is, and just specifying some parameters to customize that orchestrated deployment. With Puppet there's a little more to it, but not a whole lot. Again, it gives you the power of saying: hey, I want to deploy HA or non-HA; here's the number of nodes that I want. There are a lot of ways you can customize a high availability deployment, because when the high availability deployment was developed, we were looking at each of the services that make up the system and making sure that they can withstand failure and so forth. So there's a lot of different tuning that you can do.
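The Puppet-only route can be sketched roughly like this. The module name and the class parameters are assumptions based on the OpenShift Origin Puppet module described above; check the module's README for its actual interface:

```shell
# Sketch only: assumes the OpenShift Origin Puppet module is installed
# from the Forge and exposes a 'roles' parameter -- verify both against
# the module's documentation before use.
puppet module install openshift-openshift_origin

puppet apply -e "class { 'openshift_origin':
  roles => ['broker', 'named', 'activemq', 'datastore'],
}"
```

The same class, applied on a node host with a node role, is what stands up the node side; the Heat templates simply wrap this kind of invocation in a bootstrap script.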
But if you don't want to tune it, you can just leverage the default options of either HA or non-HA. That's why I bring this up. I believe the latest OpenShift guides walk you through the HA options as well, and, going back, I want to say six to eight months ago, I put together an HA guide under the Cisco documentation on how to deploy both options: if you want to deploy just using Puppet, or if you want to deploy using Heat. You can simply Google "Cisco OpenShift HA" and, without a doubt, you'll see those two options for documentation.

So, Daneyon, we've got one question here that I'm just going to ask, from Joseph Collins. He says: I understand that this might be a difficult question to answer since V3 is still in development, but from your perspective, do you see more of a Nova integration with Docker using Atomic, instead of putting OpenShift into a VM instance? And I think it might be a good time to bring up Kubernetes, what we're doing there, and how that kind of changes the game a little bit.

Right. So there's a lot going on with Docker and Kubernetes. I'm actually part of a project called Kolla that's containerizing OpenStack services using Docker and Kubernetes. There's a new project within OpenStack that started, called Magnum, which is containers as a service. Right now I would see it being a similar workflow, where we would go ahead and deploy Nova instances that make up our OpenShift environment. But that could change in three or six months with some of these new projects, specifically Magnum. So if you haven't looked at Magnum, I would highly encourage you to go to launchpad.net/magnum and take a look at the project. It's hosted at StackForge, so you can go to github.com/stackforge and search for Magnum there and you'll see it as well. And there's just a flurry of development going on there.
Pardee, who's one of my peers, is focused on Magnum; again, I'm focused on Kolla. So, you know, it's tough to tell. I mean, if you're going to deploy V3 today, then, again, I would say it would be pretty similar: it's just the standard libvirt compute driver for Nova, and you deploy V3 as virtual machines. As you mentioned, the nova-docker driver is also in development; it's out of tree, so it's also hosted at StackForge. So you could potentially deploy it as containers. I think if you were going to go down that road, I would probably lean more towards Magnum, which is going to be a containers-as-a-service service, and then leverage the containers that make up the V3 deployment and deploy those specific components as containers. It's just hard to tell, because we've got V3, which is still on the cutting edge, and you've got these different projects I referenced within OpenStack that are on the cutting edge. So it really depends on how those projects develop, along with what your requirements are. I mean, if you're saying, hey, I need something that's solid and reliable today, then I would say use the tried and true: the standard libvirt Nova driver, deploying as virtual machines. But if you want to be part of alpha code, on that bleeding edge, then look at things like Magnum, breaking V3 apart into its different containers and deploying those using Magnum as containers instead of virtual machines.

Yeah, I know that when we were at the OpenStack Summit in Paris a few months back, we were trying to Dockerize each of the components of OpenShift and deploy that on OpenStack. I think we have a little video of that attempt; I'll try to post the URL to the Commons group. People can take a look at that, and we walk through it a little bit.
But as Daneyon says, it really depends on whether you're going to go OpenShift Enterprise and use the tried and true methods, or whether you really want to play with the alpha versions, which we do actually encourage: test it out, give us your feedback, and contribute to the Origin V3 efforts. So both camps are good things to test out. As I said, the Heat templates are really good if you're starting to learn Heat; they're very clear and well documented. If you're looking to get deeper into Kubernetes and some of the other things, playing with V3 and testing that out is also an awesome thing to do.

Yeah, as Diane mentioned, I worked pretty closely with some of the lead developers for V3 over the last month or so in preparation for the OpenStack Paris Design Summit. The developers working on V3 are looking for feedback like the feedback that I gave them. They're looking for the community to test out the different deployment scenarios and so forth, so that we make V3 better than what it is even today.

Perfect. I'm looking to see if there are any other questions in chat; not at the moment.

Okay, yeah, so I was thinking, I don't know if we still have time, but let's take a couple of minutes and just scan through the Heat template so folks can understand some of the core concepts of the template. Is that okay?

That would be great, actually.

Okay, so again, we've got two different collections of templates. If you want to work on Origin, they're hosted out of the OpenStack heat-templates repo. Or if you want to work on Enterprise, let me go back here, they're in the OpenShift repo, in the OpenShift Heat templates. So let's just take a look at Origin. I'm at the OpenStack heat-templates repo. Go to OpenShift Origin, let's go to Fedora 19, and we have different template types.
Let's go to the HOT template; again, HOT is the Heat Orchestration Template format that's native to Heat. And then we've got different types of templates: some are for scaling your orchestrated Heat environment based on different triggers, or we can just look at a simple OpenShift template. It's in YAML format. Again, you would most likely just clone the heat-templates repo, and then from the command line, the heat command line, or you could even do it from Horizon, specify the path to this template. The other big part of telling Heat what to do is supplying the parameters, and here are all the parameters. You see that each parameter gives you a name, description, type, and default. So you go through here and say, you know, I like all the defaults, or most of the defaults, which makes it even easier on you. But more than likely you're going to want to come in here and modify some of these parameters to your deployment needs. You'll see there are tons of parameters here, and, you know, don't get overwhelmed by the number of parameters, because most of these you won't even need to touch. Just understand the concept of the parameters: that's what we pass into Heat. Then I talked about and showed you the resources. When I did that heat resource-list and we saw the eight or ten different resources, Heat basically chunks through this stuff; we're telling it, hey, here are the resources we want you to orchestrate. The first one is of type OS::Neutron::SecurityGroup. So what we're telling Heat to do is go talk to Neutron and create a security group for my OpenShift deployment, and here are all the rules within that group, so that we're locking down the OpenShift deployment. Let's see what else. Now we're telling Heat, hey, create a key pair. Now start setting up the Neutron network for me.
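The two sections just described, parameters and a security group resource, look roughly like this in a HOT template. This is an abbreviated sketch, not the actual OpenShift template's content; names and port numbers are illustrative:

```yaml
# Illustrative fragment -- parameter and rule names are placeholders.
parameters:
  key_name:
    type: string
    description: Name of an existing key pair for SSH access

resources:
  openshift_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Lock the OpenShift deployment down to required ports
      rules:
        - protocol: tcp        # SSH
          port_range_min: 22
          port_range_max: 22
        - protocol: tcp        # HTTPS to the broker/console
          port_range_min: 443
          port_range_max: 443
```

Each rule is one line item Heat hands to Neutron, which is exactly the "go talk to Neutron and create a security group" step described above.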
Heat creates the ports, the subnets, the router, all the networking functionality that we need to provide network connectivity to my OpenShift nodes and broker and so forth. And then here's a big piece of this particular resource set, which is the OS::Heat::SoftwareConfig. What we're doing is, you see we've got a template here, and really all it is is a bash script where we're saying, hey, go ahead and run this, and here are all the different Puppet parameters that the bash script uses, here's the Puppet module we want you to use, here are all the different Puppet modules to install. That's where I go back and say: you really can automate your deployment in the different ways that you'd like. You don't have to use Heat; the Heat template is leveraging the Puppet module, so there's that consistency, whether you're using Heat or just automating your deployment using the Puppet module hosted in the OpenShift repo. And then we're saying: Heat, stand up your broker instance, which is basically just a Nova server: what image you want to use, the SSH key, what networks to attach it to. And it just goes on and on; we go through a similar pattern for the node: now that the broker is up and functioning, let's start standing up nodes. And then there's basically an output, so that when the Heat stack is created, we can do a heat stack-show oso-stack and actually get some feedback, so we know, again, what floating IP is assigned to the instances, and now I can actually bring up a browser and log into my broker if I'd like to. So that's just a quick rundown of the Heat template.
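Those pieces, the software config wrapping a Puppet run, the broker server, and the output, fit together roughly like this. This is a sketch of the pattern, not the actual template; the resource names, parameters, and Puppet class interface are assumptions, and broker_port and broker_floating_ip are assumed to be defined elsewhere in the template:

```yaml
# Illustrative fragment only -- names and the Puppet interface are assumed.
resources:
  broker_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Install and apply the OpenShift Origin Puppet module;
        # the real template passes many more parameters through here.
        puppet module install openshift-openshift_origin
        puppet apply -e "class { 'openshift_origin': roles => ['broker'] }"

  broker_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: broker_image }
      flavor: { get_param: broker_flavor }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: broker_port }   # defined elsewhere

outputs:
  broker_ip:
    description: Floating IP assigned to the broker
    value: { get_attr: [broker_floating_ip, floating_ip_address] }
```

The output is what surfaces in heat stack-show, which is how you get the broker address to point your browser or client tools at.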
So we're taking an OpenShift environment that normally you would deploy manually, and simply providing some parameters to customize the deployment, letting Heat do all the work for us, so that we don't have to go in and tell Nova to start up this number of instances, tell Nova to create these key pairs, or tell Neutron to create all these network pieces. Heat's doing all that hard work for us.

Do you have a diagram of the final deployed topology, or can you describe what you end up with? One of the folks on the call is asking.

I don't have a diagram in front of me, but let me go back to the video here. I don't think I did a nova list in this example. And this Heat template here, this one, is basically kind of a kickstart environment: it's going to stand up a single broker and a single node, and from that point forward you can do one of multiple things to add nodes to the environment. You can use the Puppet module to add additional nodes, or you can basically copy this template, modify it, and call it openshift-node.yaml. Instead of what Heat is doing here, creating the broker and the node, you'd have a separate Heat template just for adding a node. Right, so this template here is really how you kickstart your OpenShift environment; from that point forward, manually add nodes, use the Puppet module to automate adding nodes, or go back here and say, let me just copy this template over and modify it into a very simple template focused on just adding a node to the environment. So, yeah, you've got a few different options, but from a topology standpoint, it would be pretty simple.
You'd have a Neutron network, it could be the 10.x network, that the broker, the node, and any additional nodes attach to. And then, as you saw, we're setting up a floating IP as well. So we would then map that private Neutron network, which consists of the Neutron router and the Neutron virtual ports that the broker and the nodes attach to, to the floating IP. We'd say: hey, Neutron router, go ahead and NAT these public addresses from your public network to these private addresses, to the broker, so that we now expose the broker to the outside world. As the gentleman in chat pointed out, Horizon does have the ability to show that visually, but this is actually a movie that we made a few months back, so I'm pretty sure this deployment doesn't exist in physical form anymore.

Someone in chat is now saying that they have a project to do a Heat deployment on a French public cloud based on OpenStack. So they're going to be doing this soon; hopefully we can get your feedback on that and learn some more from you. We'd love to have you on, showing off what you're doing, that would be great fun. What I'm interested in doing out of this session is setting up a mailing list for people who are particularly interested in learning more about OpenShift on OpenStack. So I will set that up and send it to you who are participating today, and others can sign up for that mailing list, where you can share some of your insights here. And, as we have talked about a little bit, V3 is moving in directions that are a little bit more Kubernetes and maybe Nova, but that is still a little ways out. So the OpenShift Enterprise and OpenShift Origin Heat templates really are your best bet for testing and deploying OpenShift on OpenStack these days. So I was hoping we would have Ken Owens join us.
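The network topology just described, a private network behind a router, with a floating IP NATed to the broker, corresponds to a set of Neutron resources roughly like this. Again a sketch, not the actual template; the CIDR and resource names are placeholders, and broker_port and public_net_id are assumed to be defined elsewhere:

```yaml
# Illustrative fragment -- names and addresses are placeholders.
resources:
  openshift_network:
    type: OS::Neutron::Net

  openshift_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: openshift_network }
      cidr: 10.0.0.0/24          # the "10 dot" private network

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: public_net_id }   # assumed parameter

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: openshift_subnet }

  broker_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: { get_param: public_net_id }
      port_id: { get_resource: broker_port }    # defined elsewhere
```

The FloatingIP resource is the NAT step: it binds a public address from the external network to the broker's private port, exposing the broker to the outside world.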
It looks like he's not going to make the call today to give us a little bit of insight into Cisco's roadmap; he was due up a few minutes back. I think what we've covered today is quite a bit in terms of Heat and Puppet and the work that Daneyon has done on OpenShift and contributing back to OpenShift. It's been an amazing year, really getting the contributions and getting Cisco on board. Cisco's done a lot of work, not just on OpenShift, but on OpenStack as well, contributing to Kolla and Magnum and a number of the Google open source projects. So we really look forward to working more with everybody on the Cisco team, and thank you very much for having taken the time today to do this. Really, a big thank you to Daneyon for all of his efforts, and to Ken and Keith and all of the other folks at Cisco that are working on making OpenShift part of the Cisco cloud. We're looking forward to learning a lot more from you all. And if there are no other questions, you might let Daneyon get back to contributing to all of those wonderful open source projects. So thank you very much, Daneyon. Do you have anything else that you want to say?

No, that's about it. I appreciate the time, and, yeah, I think just in general: jump into the code, whether it's OpenShift or Heat, and start working with it and contributing, because that's what this is all about, right?

Absolutely. And I think the interesting thing from our perspective is how much cross-community collaboration there really is between the OpenStack, OpenShift, Kubernetes, and Puppet communities. This makes for some really interesting dialogue between all the projects and among the different customers and contributors. So it's been very interesting. We hope to bring more of these conversations here to the Commons briefings, and there will be another one next week on Java workflows that Aaron is putting together for us.
So if you're around, please join us for that. Thanks everybody and we'll talk to you all again soon.