Yeah, I'll go ahead and get started — it's about 2 p.m. Charles Crouch was going to present with us today, but a family member had a health issue, so he's not going to be able to make it; it'll just be us. I'm Steven Dake, and this is Daniel Hansen — want to say a little bit about yourself? Yeah, sure. Daniel Hansen, software engineer with Cisco; I've been working with OpenStack since Diablo and have been focused on containers for just about the last year. Yeah, and I originated the Heat project in OpenStack about four years ago. Now I work on containers pretty much full-time — I work on Kolla and Magnum, and I'm the PTL of Kolla. That's what I'm interested in and focusing on, and I'm really excited to talk to you about what we've done with Kolla — that's what this talk is about.

So there are no more real integrated releases in OpenStack because of the big tent model. I don't know how familiar everybody is with the big tent, but basically it allows everybody to kind of do their own thing in OpenStack and compete with different projects that do similar things, which is an interesting concept. What that means, in my opinion, is that there will be more projects going into OpenStack over time. The technical committee has oversight over what projects enter the OpenStack git namespace, but I think they're going to be pretty loose about what they allow, as long as there's an active community maintaining the software. If we look at the last release, Kilo, there were four new projects added to the OpenStack namespace, and there were 12 projects prior to that. If you do the math, that's 30-40% growth, and I think we're going to see that every cycle. That's hard for distros and deployers to deal with, because it's a lot of churn, a lot of new code, a lot of stuff to integrate and test. It makes deployment difficult, because you have to manage those deployments, and managing them means you need some kind of Puppet or Ansible or Chef system — that's how pretty much everybody does things today. As you add these new big-tent services, what do you do? How do you manage them? Does your distro even provide them? So managing an OpenStack deployment becomes more challenging as time progresses because of the big tent.

This part's really important, and I'm going to get into a lot of detail about it later in the presentation. Kolla is the project we use to implement containers in OpenStack. We convert everything into an atomic unit that does one thing and does it well, and that includes the environment. By converting it into an atomic unit, it can be easily managed and deployed. That's what we really do in Kolla: we heal the deployment pain that everybody has. We don't actually do deployment yet — other projects do deployment; we just do container content at the moment — but we might tackle that in the future.

So I said Kolla does simplified deployment — well, how? We have a set of key/value pairs: that's what you deploy. You deploy a container with key/value pairs. Maybe you've got an API service and a conductor service, and those are the two services you have to deploy for the new service you want to add to your system — all you have to do is deploy them with key/value pairs. Puppet and Chef have key/value pairs just like Docker does with environment variables, but the thing is, Puppet or Chef goes out and changes a whole bunch of stuff on the system and configures everything, and that's not really ideal.
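To make that key/value model concrete, here's a minimal sketch. The file and variable names here are illustrative, not Kolla's exact schema:

```shell
# Sketch of key/value configuration via an env file (names are illustrative).
cat > openstack.env <<'EOF'
KEYSTONE_ADMIN_TOKEN=changeme
RABBITMQ_SERVICE_HOST=10.0.0.10
NOVA_API_SERVICE_HOST=10.0.0.10
EOF

# A container would consume the whole configuration in one shot, e.g.:
#   docker run --env-file openstack.env some-nova-api-image
# Nothing on the host is modified; the env file is the entire interface.
grep -c '=' openstack.env
```

Compare that with a Puppet or Chef run, which rewrites files all over the host.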
Whereas in Kolla, we don't have that problem. We simplify the deployment because we deploy it as one unit — one thing instead of a bunch of things scattered all over the place. Think about that for a moment, and multiply it by the 12 services in the Kilo release, plus the four new services added in Kilo, and multiply that out by the growth rate of OpenStack because of the big tent. So we simplify deployment — and if you don't buy my argument, I can't convince you; apologies for that.

Kolla also simplifies ongoing ops. Operations are difficult because you have to manage these key/value pairs and keep them up to date for whatever version of OpenStack you're running. We tackle that problem as well, by staying on top of our container content. Other projects do this well too — I think the Puppet modules in OpenStack do this well, which is really positive. It's not unique to Kolla, but it's something we do well.

Now, when you go from developing your software to deploying it: if you look at how things are done today, typically somebody deploys a whole bunch of software on a system and then has to manage all of it on that system. We deploy each little atomic unit — a container per microservice. We put OpenStack into microservices: the APIs in one container, and the engine or conductor or whatever it is in another, for each service. We've got about 40 microservices implemented — maybe 30, something like that. By doing that, you can break those microservices apart, with different teams managing each part individually. This is a key thing with microservices — it's the number one reason you want them — because you can split 30 people all working on something into separate pieces, and it no longer matters that this piece has to integrate exactly with that one. If you look at the Puppet model, you get a set of deployment scripts for Puppet and it all has to work together. With microservices you can deploy different versions — from source, from RPMs, or from Debian packages — and you get that whole model.

One of the really cool things about containers is that they're a really reliable base — they just rock, and we're going to show that in our demo; I feel comfortable about that. Can you move on to the next slide? Okay, so we leverage the power of containers. How do we do this? We use the container best practices that the Docker community has defined. What are these best practices? I'll give you some examples. We use data volumes instead of bind mounts. If you know anything about Docker, you know it can mount data from the host operating system, but it can also mount a container that contains data. So all of our persistent data — the database, the Glance information, the Nova compute VMs.
All of those are stored in data containers. Those data containers can be backed up and restored independently of the rest of the system. A data container is not a running process: you launch it once, it creates storage on the system, and that storage is permanent unless you delete the container. Now, if you came along and deleted the container, you would lose your persistent data — just as if you deleted your database — so there's not really any more risk of damaging your system, and you can still back up and restore.

Why do you want this? Because it gives you immutability: containers should never change, except for the environment passed into a container and how that environment configures it. Immutability is this big computer-science nerdy word which basically means we don't change anything — we don't change the packages; we change the configuration a little bit, but it's consistent. The cool thing about immutability is what it does to an imperative system, which OpenStack deployment is: you've got a set of instructions, there are conditionals, they make decisions, and you get a different output tree. With Kolla and containers, if you have an immutable system, deployment becomes declarative: you get one of two outputs — either the container creates or it does not. Those are the only two results. That's a very critical feature, and it's what you get by using Docker best practices.

What else is there? I think another one is the microservices approach, where we're running a single service, for the most part, in each container — that would be another best practice. And I know there was one more: idempotency. That means a container can start and restart without changing the system or the implementation. We want idempotency in the containers, and we have that today; it's very important for us. So those three properties of a container — those are best practices, and the way we get them is through data volumes. We don't use bind mounting unless we're absolutely required to.

Some of the things we do with Kolla, some people wouldn't like. For example, we use this flag called --net=host, which means we turn off the network namespace isolation of the host. But we're running OpenStack on bare metal — that's our goal: OpenStack on bare metal without any other services, OpenStack by itself — so that's okay for us. Sometimes we also turn off the PID namespace, for libvirt for example, so VMs can be reconnected to when you upgrade. That's another thing we do that's kind of weird, that most people wouldn't like; we had to add support for this to Docker Compose, which is one of our orchestration tools — but it's really cool, it's a nice feature.

Finally, container orchestration: our orchestration today is a shell script. Not ideal — really not ideal. On Monday we had a session at the Ansible collaboration day, and we talked through the different ideas, and I think the community has kind of agreed that Ansible is a way to deploy and containers are a way to do that. What we haven't sorted out is exactly what the best practices are for that to happen, so there'll be more in the future — I think at the next design summit we'll see exactly what this container orchestration looks like, but we're just getting started. We basically have a shell script, and we've got an all-in-one Ansible setup.
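The data-volume and host-namespace practices above can be sketched with plain docker commands. This is a sketch, not Kolla's actual tooling — it needs a running Docker daemon, and the image and container names are made up:

```shell
# A data container is created once and never runs; it just owns the volume.
docker create -v /var/lib/mysql --name mariadb_data some-mariadb-image true

# The service container mounts the volume; recreating the service container
# leaves the data intact.
docker run -d --volumes-from mariadb_data --name mariadb some-mariadb-image

# The volume can be backed up independently of the running service:
docker run --rm --volumes-from mariadb_data -v "$PWD":/backup busybox \
    tar czf /backup/mariadb.tgz /var/lib/mysql

# Host namespaces, as described for the neutron and libvirt containers:
# --net=host disables network namespace isolation, and --pid=host lets a
# restarted libvirt container reconnect to its running VMs.
docker run -d --net=host --pid=host --name nova_libvirt some-libvirt-image
```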
That's about it. Okay, this is the architecture diagram, and I've got to step down so I can talk a little bit about it. Infrastructure engineering — these are the people writing the software. They submit their code to Gerrit and it goes into a CI system, where it's reviewed. Once it's reviewed and approved, the CD system feeds the output of the build — our OpenStack containers based on Kolla — into a registry. So it goes into a local registry running in the environment. It also launches Ansible — this is kind of a future thing, what we think we want to do: launch Ansible, which contacts Docker and Compose and basically launches our system multi-node with HA. That's our basic architecture. Now, you see some blue and green differentiation: the green is stuff we do — we're maybe going to do some Ansible scripts, I'm not really sure; we definitely do the Compose stuff now, and we do the OpenStack service containers. All the blue stuff is open source that everybody can leverage.

Okay, time check: 2:13. Okay, deployment flexibility. This quote about DevStack is great: "I hate DevStack because it destroys my machine." If you're an OpenStack developer, you've learned to accept this. With Kolla you get deployment flexibility where you don't have to have your whole system destroyed. You have to install Compose and you have to install Docker — those two things — and it doesn't install a million dependencies. Containers package the environment together with the software, and this is a key thing: they're an atomic unit that can be upgraded and downgraded.

Next slide: performance. Okay, so Charles Crouch did these numbers; I haven't verified them myself — that's why it says unofficial. I think benchmarks suck, because everybody games the system with benchmarks, so take these with a grain of salt. But I think what they show is that Kolla starts up faster. This is how long it takes to get to a working environment: Kolla takes nine minutes, DevStack takes 14 minutes, Packstack takes 44 minutes. So, you know, it's faster. Next slide.

Okay, so Docker is really green — it's young; 1.0 was released in June of last year. I really like Docker, I think it's cool technology. It's not really a new idea, but they've composed other people's ideas in a way that makes it powerful. The first version of Docker I would recommend using is Docker 1.6 — every other version of Docker has been broken in some form or another. In fact, we recommend using Docker 1.6 with Kolla because Docker 1.5 will lock up. Docker uses threads, which is kind of an anti-pattern, at least in my mind, but it is what it is. Docker is young, but the community rocks. I worked with the community for about six weeks on Kolla upstream, talked to them day in and day out, and they're really responsive — the type of community you want in an open source project. So Docker's green and young, but they're on the ball.

Kolla — we're way younger than Docker; we're about six months old, so there's not much to say other than that we're missing services and need to do more work to finish the job. Mostly cross-platform: we run on Ubuntu or RHEL — well, not RHEL, CentOS — and our containers are based on CentOS. The problem is that when we ran CentOS containers on Ubuntu, there were kernel bugs, with PAM and with the auditing system, so we had to hack around them. The kernel is not totally consistent between distros — it's a different kernel version, and even though it's mostly the same upstream, it's a little different — so not all the containers work exactly the same. That said, Kolla does work on either Ubuntu or Fedora, and it does work on CentOS; I assume it would work on RHEL. Different deployment models mean new bugs — we're doing something new, so we're going to create new bugs. That's pretty straightforward; next slide.

So, not ready for production: we've got no high availability. We do have plans in the community to resolve that, with a blueprint that Daniel has worked on. Operationally immature: we have no way to deploy Kolla operationally, and that's a problem for us — a problem because it blocks future development. We need to be able to deploy on multiple nodes to be able to do HA, and to test that HA works correctly; we need those things in order to continue our development. Really hard to audit: how do you tell what's going on in 30 containers? I think this is a solvable problem, but we haven't solved it yet. We don't really have a configuration management tool — we have key/value pairs, which is fine; it works for us. I think it's pretty solid, I think it's a good model.
It's pretty simple. The downside is that there's a limited set of configuration options — it's not like you can go set every configuration option to whatever you like. I think this only really matters right now for the neutron container; the other containers probably work fine with a somewhat opinionated deployment model.

Okay, this is Daniel's part, the live demonstration. This is such a great diagram — Daniel drew it and texted it to me, and this is how I should set up my home network to get neutron to work. This is it right here, and it works, so you can copy it at home. Daniel, I'll let you describe it a bit and then you can get to the demo.

Yeah, so in the diagram, really just focus on where you see the Kolla host. In the demo, what we focus on right now is just getting Kolla working on a single host — this is Steve's home lab environment. If you look at the Kolla host, we have two interfaces: one interface is used for management, and the other interface is used by neutron to connect to the external or public network. It doesn't have an IP address — that's where your floating IP addresses come from, where neutron bridges between the tenant network and the public network. Let's actually jump into a system.

Yeah, I was going to say — how about I make the lettering a little bit bigger. Is that okay? Can everyone see? I think bigger would be better. Okay, that's good. Let's see here. All right, so here's big iron — this is the single system you saw in the diagram, and this is the base operating system we're going to deploy all of our containers on top of. Just a quick recap: with traditional OpenStack deployments, you've got different options — deploying on bare metal, where you start installing the different OpenStack packages and configuring the files and so forth, or maybe installing in a virtual machine. That's where Kolla comes in: we're trying to change the game here and actually leverage containers — install and run these OpenStack services in containers. This is a Fedora 21 system, but long-term, where we want to focus is more on the micro-OS types of operating systems: CoreOS, Atomic, and so forth.

Let's see here — we've got no containers running. So basically, on this operating system we installed Docker and docker-compose, and we're ready to start using it. Okay, let's take a look at a couple of things. This genenv is just a wrapper script — docker-compose is one of the tools we're using; it's a basic composition tool, not surprising given the name, and it allows us to define very simply what groupings of containers we want this system to run. It's a nice abstraction layer, and then we've built a tool that wraps around docker-compose so that, for development purposes, we can simply start all the Kolla containers, stop all the Kolla containers, kill the Kolla containers — and what it — sorry, the genenv tool actually creates all of our environment variables. Steve talked about how really the only interface you have as a user is a bunch of environment variables that feed into the containers. So when you start a container using docker-compose, or using this wrapper utility, it reads an openstack.env file — that's the entire list of your environment variables.

Okay, so instead of generating them, I've already generated them. Let's just take a quick look at the output of the openstack.env file — here are all the environment variables, with the settings the tool created. Then we have another tool, our kolla tool, and this is the tool that actually wraps docker-compose. If you look at its source code, you'll see it's basically just wrapping docker-compose so that we can start, stop, and do whatever we want with all the Kolla containers that make up the environment. So let's go ahead and kick that off, and it's going to start creating all of our containers. We'll give that a second, and all the containers will be created. The next step will be to actually configure the OpenStack environment, because although all the OpenStack services are up and running within these containers, nothing's really configured as such — there are no Glance images, there's no neutron networking, and so on. So the next thing we do — again, we built a simple utility that goes in and sets up a basic configuration, for development purposes.
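What such a compose wrapper does can be sketched in a few lines of shell. Here docker-compose is replaced by a stub function so the control flow runs without a Docker daemon, and the compose file names are invented for illustration:

```shell
# Stand-in for the real docker-compose binary so this sketch runs anywhere.
docker_compose() { echo "docker-compose $*"; }

# Hypothetical per-service compose files.
SERVICES="compose/rabbitmq.yml compose/keystone.yml compose/nova-api.yml"

# kolla start|stop|kill -- apply one action across every service's compose file.
kolla() {
  action=$1
  for f in $SERVICES; do
    case "$action" in
      start) docker_compose -f "$f" up -d ;;
      stop)  docker_compose -f "$f" stop ;;
      kill)  docker_compose -f "$f" kill ;;
    esac
  done
}

kolla start   # prints one docker-compose invocation per service
```

The real tool adds environment handling and error checking; the point is just that "start the cloud" reduces to looping one command over a set of compose files.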
So let me source my credential file and I could do a keystone and point list Glance image list got no no glance images Nova key pair list neutron extension list So we see that we're able to talk to a neutron server Let's do an agent list and make sure that our neutron agents are up and functioning and one thing just to point out too is right now The the plug-in or the agent that we're using for L2 is is a Linux bridge agent I believe there's a work underway for supporting open V switch as well But we see all of our agents are alive as well so so far so good, right, so let's go ahead and Run the init run once tool again this tool Does some basic things downloads a glance image starts configuring our neutron network subnet router attaches that router to the gateway modifies some Nova quotas because we start actually spawning and in some of the demo scripts that we have quite a few Nova instances, so we need to modify the quotas and Give it a second. It's done. All right, so now we could actually do a neutron net list and see That we've got a network or a neutron router list So we've got a router right, so let's go into our demo heat and take a look at what's here and We've got a launch script and basically what it's doing is simply just Creating a heat a heat stack and the heat stack is made up of a Resource group and a resource group is a special type of resource within heat that produces X number of Identical resources that are defined within a particular resource So we called the resource stake and we said a count of 10 of these resources Well, let's see what this stake is all about right so stake is going ahead and creating Nova instance neutron port and a Neutron floating IP so very simple right let's spawn 10 of these Nova instances and then attach them to a neutron network and and also give that instance a floating IP So let's go ahead and launch it and it will take a few minutes to actually start spawning the the instances We can do a novelist and see if any 
instances are starting to spawn You see that they're in a build state So it's gonna take a few minutes for for them to build out when they do build out What I'll do is is actually get into one of the instances So you could see that it's not a bunch of black magic that stuff's actually working now One thing I'd like to point out if you could show us to the docker ps. It's real quick These are the open-sack services running in containers. It's not bare metal with puppet or chef or anything like that These are the containers that are providing all the services that Daniel is demonstrating. Yeah, and it's cool from a development standpoint That I can build test and then just blow away the containers Rebuild test. I mean this this demo right here. I've probably rebuilt literally 30 times in the last few days Just trying to prepare for For today and so, you know, that's pretty awesome to not have to mess with vagrant You know not have to actually deploy on bare metal and then you know rebuild my system and and so that From a development standpoint is pretty sweet. Let's see. They're still in a building state. So it may take another couple minutes here But we see that some of the instances are actually starting to get their IPs and speaking of this so Let's see if you can elaborate on it But there's there's a bug that we ran into with with Nova where Nova lists Nova show when you spawn Tons of of instances using heat actually doesn't display the networking information. 
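From the description above, the heat template driving this part of the demo would look roughly like the following. This is a hedged reconstruction, not the verbatim demo file; the nested steak.yaml is assumed to define the server, port, and floating IP:

```yaml
heat_template_version: 2013-05-23

resources:
  steak:
    type: OS::Heat::ResourceGroup
    properties:
      count: 10                 # ten identical copies of the nested resource
      resource_def:
        type: steak.yaml        # nested template defining OS::Nova::Server,
                                # OS::Neutron::Port, OS::Neutron::FloatingIP
```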
Yes. Let's see here — take nova-compute, for example. I can do a sudo docker logs and then the name of the container — or you can specify the container's ID — and it spits out all the logs. Another thing you can do is actually jump into the container and tail the nova-compute log under /var/log and see the logs there as well. Let's see if any instances have become active. Yeah, we do have some active instances, and here's one with an IP, so it should be pretty easy: I can ping the public IP, SSH into it, and even curl Google. So great — now we've got everything up and running; I've got an operational OpenStack environment. Pretty awesome.

But one of the things that's very attractive about Docker and the whole container approach is what Steve talked about: the immutable server, in the sense that we've got an immutable service for each of the different OpenStack services. Now, if I want to upgrade a service, I can go into my container, update a package, make a change — whatever it is — rebuild that image, and give it a different tag: say, instead of icehouse it's now kilo. Then I can very simply take down one container and bring up the new container with the new tag. If things don't go well with the upgrade, I shut the broken new one down and just bring the old one back up. So that's upgrades — how easy it can be to upgrade OpenStack services. And because these services are split into atomic units, I can upgrade them individually, or in groups of services. It's very interesting from an operational standpoint. Let's actually take a quick look.
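The upgrade-and-rollback flow just described can be sketched as a short sequence of Docker commands. This assumes a running daemon, and the image and container names are illustrative, not Kolla's actual ones:

```shell
# Rebuild the modified image under a new tag; the old tag stays available.
docker build -t my-nova-libvirt:kilo docker/nova-libvirt/

# Swap the running container over to the new image.
docker stop nova_libvirt && docker rm nova_libvirt
docker run -d --net=host --pid=host --name nova_libvirt my-nova-libvirt:kilo

# Verify via the logs, then roll back if anything misbehaves:
docker logs nova_libvirt
# docker stop nova_libvirt && docker rm nova_libvirt
# docker run -d --net=host --pid=host --name nova_libvirt my-nova-libvirt:icehouse
```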
So if we go into the docker directory, under nova-compute we have nova-libvirt, and here's basically the start script for nova-libvirt. I'm going to do a pretend upgrade and just add an echo: "this is an upgrade." And then — he just pasted it about a hundred times, I think — yeah, just so it's easier to see when we look at the logs again. In reality, that would be me going in and changing a package, and I would probably tag it with "test" or some kind of unique tag, so that I still have my old nova-libvirt image and a new one, and I can start playing around, testing back and forth, and making sure nothing's broken. So now what I'm going to do is simply take down the nova-compute, nova-libvirt, and nova-compute-data containers. Steve mentioned earlier the concept of using data containers instead of bind mounts to the host — here's one example of us doing that. Let me actually do it. Okay, so we're recreating the containers, and in doing so, the recreate goes and says: let me get this new image you just built, with all the "this is an upgrade" echoes. What just happened here is an error with Docker 1.6, the --pid=host option, and the LVM (devicemapper) storage driver. This has been fixed in master, and you can avoid it by using the overlayfs driver. The error — it's hard to see — says the devicemapper driver failed. That's okay, because we can just repeat the operation and it'll go through. But, you know, Docker 1.7, I think, will be where Kolla and Docker really come together. So, just a quick recap.
I just showed you how easy it is to get an OpenStack environment up and running on top of containers, configure that OpenStack environment, walk through spawning a bunch of VMs using heat, and — the last piece right here — do an upgrade. So if you need to upgrade libvirt, you saw how easy it is to modify something in the image, rebuild it, and then have one or more of the running containers use that new image. What was that last command you just ran? docker logs. docker logs, right? Okay. Good — any questions? I can't really see because I'm blinded by the lights, so if you just want to step up to the mic and ask away, that would be great.

I have a question regarding the tagging. In this demo you're showing that a container can be tagged so you can roll back and roll forward a particular service. What if I want to identify my OpenStack cluster as having a specific set of services with a specific set of multiple tags? How do I tag all the containers together as one single thing?

That's a really good question, and I think the answer is we haven't solved that. You're talking about version management of an entire cloud, right? I don't think anybody's really solved that — even the Puppet guys. I don't think anybody has solved it very well. Maybe some of the distros have solved it in a decent way, because they only allow you to deploy from their latest version, but we don't solve it here. Yeah, and I think with the microservice approach it's almost the reverse of that: instead of this one OpenStack cloud, it's really just a collection of services, and if I want to upgrade one service or another, or back out of an upgrade because it fails, we're just using the tags associated with those images, right?

Yeah, the main concern is compatibility between two services — like, we upgrade one of the services but it's not working or not compatible with the other one. Once we find the compatible set, how do we retain that set information?

Yeah, I would just see that as a separate tool: as soon as you say, okay, this is the environment I want, it could go out and look at all the commits or the tags of your containers and images and so forth, and create a snapshot of the environment from there. But that's not part of the project.

I've got a question — I'm not sure if you already covered it. How are you restarting the services if the container dies?

If the container dies, we use --restart=always, the Docker feature.

Okay, so you simply restart always. Have you tried using something like upstart or systemd?

No, we haven't tried that. I think restart-always works pretty well; the problem with it is that if Docker fails, then you need to restart Docker, right?

Okay. I have a quick question: what is the future of this project? Do you anticipate this being a standalone solution, or is it going to be integrated with TripleO? It started out under the TripleO umbrella, so I'm just curious what your vision of this project is — where do you see it?

That's a great question. Will it be integrated with TripleO? I think the answer is yes, but I can't commit to that because I don't involve myself in the TripleO community and I don't know much about it. I think it's very likely. Will we expand beyond our current kind of objectives?
Single-node container content? Probably — because to actually finish the job we need to be able to do HA, and to do HA we need to do multi-node. I don't want to wait on TripleO to deploy our containers before we have HA; I want to have HA ready for TripleO prior to that.

Yeah, and to that point, just a couple of things on the roadmap. Completing the service implementation: right now we're lacking some services — for example Cinder, which I think is going to be completed soon — but generally there are services we don't have that we'd like, and with the new big-tent model there will be more and more services we need to keep up with. On the plus side, these container images are really easy to create: spend a little time looking at the code and at what the containers look like, and it's very simple to create containers for new services. Steve touched on high availability — I put together a spec, there's some feedback I need to review, and hopefully we can move forward with HA. Installing from source: right now we're installing from RDO packages, but we understand some operators don't want to wait around for packages and would rather build their own based on source. Ansible playbooks: right now we have one Ansible playbook that does a single all-in-one node, pretty much just for test and dev purposes, but we could potentially expand that for multi-node purposes.

But again, Kolla simplifies the new big tent. With all these services coming into OpenStack, it's much easier to create images for each of them with Docker than to go about it in other ways. And just to get straight to the point: we could really use your feedback, or you getting involved in the project. I'm not sure if anyone's picked up on the whole container messaging going around the summit, but this is a project that's getting a lot of interest — I would say our community has, what, doubled in the last six weeks. So we'd love to have more people join the project, and there's no shortage of work — you could do some good work on the project. We have to conclude there because we're out of time. Thank you.