Okay, can everyone hear me? All right, I think it's time, so we're going to start. First of all, thank you all for showing up. My name is Manuel, and we have Sean, Kalanjee, and Dan here. We're part of IBM, and we work in open cloud technologies, so this is one of those perfect opportunities to come and talk to the community. Today we want to talk about how we use Docker to improve our OpenStack HA.

I want to start off by clarifying something. This isn't about how you can deploy Docker instances on OpenStack. This is about how you can use Docker to containerize your OpenStack services themselves: how you would use Docker to dockerize Nova, for example, or services like that.

This is a progression we've been talking through between the last summit and this one. In Atlanta we talked about how we originally arrived at the OpenStack HA architecture that we use to deploy and manage a Cloud Foundry environment. This is a production environment, and if you're familiar with Cloud Foundry, it has a lot of very specific requirements, so at the last summit we discussed how we used that architecture to give us the availability and scalability that environment required.

When we were looking at everything going on with OpenStack HA, we had a couple of challenges and questions. The first: there are a lot of possible configurations. If you look at OpenStack high availability, there are options everywhere. You could use DRBD or HADR, you could use database replication, lots of different options, and it basically came down to two big questions: do we want active/active configurations or active/standby configurations? The other big issue was installing and configuring. Using scripts simplifies it, and using automation tools makes it better, but at the end of the day somebody or something has to keep track of configurations, of ports, of services, and so on. And the last big thing: as you start outgrowing your environment, how hard is it to grow, and how hard is it to grow intelligently, so that you're actually distributing load to the right place at the right time?

So we came down to this architecture.
This is a very simplified view, but the point is that we had a load balancer at the top with a virtual IP managed by keepalived, and traffic traveling through that virtual IP would then go to our load balancer, HAProxy. This is a standard configuration; you can see it in the reference architecture for OpenStack. Behind HAProxy we decided to create three types of server nodes. The first was a cloud controller, where we had Horizon, Keystone, and the Nova server processes. Then we had a data node, which had our Galera-based MySQL database and a RabbitMQ cluster component, and we had as many of those nodes as we needed. Finally we had a storage server, which had Glance and Cinder and access to the attached storage.

This worked really well. It allowed us to scale, and it performed very well. But there were still a couple of challenges. First, we had simplified the architecture a lot: we selected Chef as our scripting environment and we were able to define our architecture, but there were still many more configuration options available. Installation, even with the automated tools, was better but still very complex, and we still had some issues left, which we'll get into in a bit. The scaling, we felt, could be better; it wasn't granular enough. The other big family of problems was automation and visibility. When we first deployed the environment, it worked really well, but as we started growing it and adding new nodes, deployment got complicated, and adding patching on top of that really complicates things. Specifically, when configuration files have to change, things get a little interesting. And then there's monitoring the environment: there are lots of monitoring tools available, including some built into OpenStack, but we felt we could do better.

We have people on our team, like Phil sitting right there, who work a lot with Docker, and since we're the open cloud team, they mentioned Docker. So we thought, well, maybe Docker can help.
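Before getting into Docker itself: to make the HA front end described above concrete, here is a minimal sketch of what the keepalived and HAProxy pieces might look like. The VIP, hostnames, ports, and the Keystone example are illustrative, not taken from the talk.

```bash
# keepalived manages the virtual IP that sits in front of HAProxy.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance openstack_vip {
    state MASTER              # BACKUP on the standby load balancer
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100         # the VIP that all API traffic targets
    }
}
EOF

# HAProxy behind the VIP spreads requests across the controller nodes.
# (global and defaults sections omitted for brevity)
cat > /etc/haproxy/haproxy.cfg <<'EOF'
listen keystone_api
    bind 192.168.1.100:5000
    balance roundrobin
    server controller1 192.168.1.11:5000 check
    server controller2 192.168.1.12:5000 check
EOF
```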
For those of you who aren't familiar with Docker, the high-level introduction is that Docker is a technology that allows applications and all their related dependencies to be packaged into individual containers. When you run these containers on a host, they run completely isolated, in a user-space process that's independent from the other containers, and where the efficiency comes in is that they're all leveraging the same Linux kernel.

You can see the progression in the picture. In the past, you would have an application run natively on the host OS, and it would leverage the libraries installed on the system; these are shared libraries, and lots of applications use the same ones. More recently, with virtualization, you add a hypervisor, and on top of the virtualized hardware you have a complete operating system plus all the libraries and applications. With containers, since you're leveraging the host's Linux kernel, you're creating isolated packages that have the application, all the libraries required by that application, and a very minimal operating system: really just the necessary requirements.

This gives us a ton of benefits. The big ones are service isolation: you don't have to worry about installing conflicting versions of the same thing, so you can have different versions of applications running, all completely isolated. Because things are isolated, there's actually a lot of security inherent in this, with lots of functionality built in; we sat through a talk on container security yesterday. An interesting thing that comes up for DevOps is the concept of version control and portability. Much like virtual machine images, you can save different states, track your progress, and move back through deployments. And Docker images and containers are completely portable: something based on Ubuntu can run on Ubuntu, on Red Hat, or on SUSE, and vice versa; anything built on Red Hat can run on Ubuntu. Because these containers are so portable, they're also very repeatable: once you define a Docker configuration, you can deploy it and be guaranteed to get the same thing over and over. Because you're not actually installing or booting an operating system, deployment is very fast; it takes a couple of seconds to get a Docker container up and running. And finally, because you're so close to the bare metal, it's very lightweight: you don't have all the extra overhead of the virtual machine and the hypervisor.
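As a quick illustration of that version-control idea, here is a minimal sketch; the image and container names are made up.

```bash
docker pull ubuntu:14.04                    # start from an upstream base image
docker run -i -t --name build1 ubuntu:14.04 /bin/bash
#   ...inside the container: install and configure packages, then exit...
docker commit build1 myteam/keystone:v1     # snapshot the container state as a new image
docker history myteam/keystone:v1           # walk back through the saved layers
docker diff build1                          # see exactly what changed versus the base
```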
Okay, so the big advantages we found with Docker, with regard to the OpenStack services, were these. One, it allowed us to scale a lot faster. Since we moved away from the concept of a monolithic server node to running each individual service on its own, we found we could start up as many services as we wanted, and much faster. Two, it was much more dense: our utilization was much better because we were running as many Docker instances as the hardware could support, and if the hardware started getting stressed, we could simply stop an instance and run it somewhere else. And because we can now move things around, and all the traffic is routed through HAProxy, it's very flexible: if we need to scale up a service we add a new instance, and if we need to move load from one server to another, it's very easy. This really makes it faster for us to respond to all the requirements we're getting from the business.

Okay, a very simple before and after. With bare metal, the approach we talked about in Atlanta, we used Chef cookbooks. It was taking us a couple of days to go through the cookbooks, customize them, and set them up for what we needed; a simple deployment would then take about 15 minutes, and if we wanted to scale out and add more nodes, we'd kick it off again and it would take about seven minutes to get a new node up and running. And again, it was inefficient because the unit was a bare-metal node, a whole server. Moving to Docker, we almost completely got rid of Chef, which removed a bunch of cookbooks, and Docker provides a lot of the management capability, so we don't have to worry about installation and configuration; that's all really part of Docker. We wrote a couple of custom scripts to do our load balancing and our movement, and honestly, creating those scripts took us a couple of hours for the first deployment. Now when we push to deploy, it takes about five minutes to get the entire environment up and running, and as we need new service instances, they can be up in a couple of seconds. Again, the big reason is that our unit changed from a server to an actual Docker container.

Our architecture really remained the same, in the sense that we still have the virtual IP address managed by keepalived and HAProxy doing the load balancing. But traffic is now much more granular: it goes directly to the Keystone cluster, or the Nova cluster, or the RabbitMQ cluster, not to a server, and these containers can reside anywhere. So we now have basically two types of nodes: servers and the actual compute nodes. On the servers we can run anything, and we really don't care where; we can move things around, put Horizon on every one, or put a couple here if we have extra capacity. That gives us great flexibility in everything we're doing. So now I'll hand it over to Dan to talk about Docker.
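To make that more granular routing concrete, here is a minimal sketch of per-service HAProxy sections pointing at container endpoints instead of whole servers. Hostnames, the VIP, and the port numbers are illustrative.

```bash
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
listen nova_api_cluster
    bind 192.168.1.100:8774
    balance roundrobin
    server nova_api_1 dockerhost1:49340 check
    server nova_api_2 dockerhost2:49341 check

listen keystone_cluster
    bind 192.168.1.100:5000
    balance roundrobin
    server keystone_1 dockerhost1:49342 check
    server keystone_2 dockerhost3:49343 check
EOF
service haproxy reload
```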
Okay, so Manuel described the benefits we were looking to get out of Docker and some of the advantages it would have. So then we started to look: those are great benefits, but how are they actually implemented, and what sort of implications do they have?

Docker is great. It builds on capabilities that are already there in the Linux operating system and puts a nice wrapper around them so you can use them very efficiently. It has an abstraction, commonly LXC, to actually run the containers; the individual containers' CPU, I/O, and disk can be managed with cgroups; and namespaces handle the process isolation. Because containers reuse a lot of what's already in the kernel, they eliminate a lot of redundancy. So they simulate a virtual machine, but they're basically a virtual machine on steroids. They have a faster life cycle: you can start them up quickly, take them down, copy them, delete them. And they have better resource utilization: a running container has very little overhead, and the images you save out are very small, on the order of megabytes versus gigabytes for virtual machines. They also have some nice developer-oriented features, so you can iterate on creating containers, which is a lot harder to do with virtual machines. You're building layers: your team can create a certified base image, or you can take one from upstream, apply just the OpenStack packages on top, and save that out. They're highly portable, as Manuel said; they run on Ubuntu, Red Hat, or anything with a modern kernel. And because of that they're also high performance, since there are no hypervisor dependencies or configurations to worry about.

When we're working with Docker, we're basically working with three primitives, and these shaped our workflow for iterating on the images we need. First, containers: there's a CLI for Docker where you can create them, start them, and work with them. You can look into a running container to see its configuration and port mappings, and you can look into the logs; you have standard out and standard error. Containers are based on images, and they can be saved back out as images, so you can iterate over that and get a history showing what you've done. That's where your configuration resides: either in the build script, which is the Dockerfile, or in the image itself, which is pushed into a registry. You can start from an upstream base Ubuntu image, and as you add your layers, your OpenStack layers for example, you can push those into an internal registry and use that to spin up new containers.
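Here is a quick tour of those three primitives from the CLI, as a sketch; the image names and the registry address are illustrative.

```bash
docker run -d --name nova1 myteam/nova-api   # container: start one from an image
docker ps                                    # running containers and their port mappings
docker logs nova1                            # stdout/stderr from the service
docker inspect nova1                         # configuration, ports, volumes

docker commit nova1 myteam/nova-api:tested   # image: save the container state back out

docker tag myteam/nova-api:tested registry.internal:5000/nova-api:tested
docker push registry.internal:5000/nova-api:tested   # registry: share it with other hosts
```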
Just to look at the overall Docker management layout: the Docker daemon runs on top of a bare-metal host, in our case, and it provides this isolation of separate containers; you get much more density on a host with containers. You interact with them using the Docker client, which is behind all the tools you'll see for our testing and our building. As we become happy with the images we're building, we push them into our own private registry so we can create more instances of them, clone them, and build out our HA architecture. The ports are mapped out through the host operating system, and that's how HAProxy finds them.

Okay, so Docker makes containers very easy to use, but you also have a lot of control. There's the network port mapping I already mentioned, but you can also give your instances configurations at runtime, which is very important for HA configurations, for example to find the location of the HAProxy that needs to be updated. You also have control over quite a bit of the Linux networking configuration, and you can limit, as I mentioned, the memory, the CPU, and the I/O. You can mount any data you have on the host operating system directly into your container, to pull in configuration files or write out logs, for example. If you need access to a driver (there was a talk yesterday about isolating some of the security configuration for containers), there are configurations to allow that too. And as the containers are running, you can apply a policy for what happens if one goes down: should it come back, or do you just replace it with a brand-new one?

Okay, so those are the basic primitives of working with Docker. In order to actually create our OpenStack components, what you basically do, and you can do this with a standard Docker setup, is install Docker on the host and do a docker run, which pulls down an Ubuntu image and opens up a shell. There you can test your configuration: install your packages and try them out. Once you're happy with that, you take the commands you ran and declare them in what's called a Dockerfile, and expose the ports; in the example here, I'm installing an SSH server. You also tell Docker which command this thing should start when it runs. At the end, once you've created your image, built it, and tagged it, you can run it and map the external port to the one built into the container.
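A minimal reconstruction of that flow; the slide itself isn't reproduced here, so the package name and port mapping are the obvious choices rather than exact quotes.

```bash
mkdir sshd-image && cd sshd-image
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF

docker build -t myteam/sshd .            # build and tag the image
docker run -d -p 2222:22 myteam/sshd     # map external port 2222 to the container's port 22
```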
That's the foundation of how we started installing the OpenStack packages on top of Docker, and my colleague Sean is going to describe in detail how that's done for our HA architecture.

Thanks, Dan. So today I'm going to be talking to you about our journey: how we used Docker to containerize our OpenStack services. It started off as a journey to see whether or not this could even be done. We thought it would be really straightforward to throw things in and everything would be all happy and glory, but as you know, most of the time you have a vision, and when you actually execute, you run into some challenges.

A lot of our experience started with just trying to understand how Docker interacts with various processes. We had to look into some of the idiosyncrasies that Docker comes with, things like the fact that Docker is really meant to run a single process, so when you have application services that need to interact with each other, things can get a little challenging. There's also the Docker networking functionality: as Dan mentioned, services expose ports, so we have this port-to-port communication challenge. But we learned a lot, and I'll go over some of the lessons at the end of my section.

So how do you get started actually containerizing the services? It's fairly straightforward, a matter of three simple steps: we build an image, we start a container instance, and then we update our load balancer. If you keep in mind the HA topology that Manuel went over, we have a load balancer registered for all of the services, so whenever a new instance comes up, we update the load balancer so that load can flow to the new instance. Then we repeat these steps for every single service we want to containerize.

Next I'll go through an example of how we actually did this. Here we have an example Dockerfile for nova-api. It's kind of hard to read, but just key in on the blue instruction set. The Dockerfile is basically your build script, your installation guide or workflow. You start with a base image; here we're starting with an Ubuntu Trusty image. We update the base, and then we install the nova-api package. Once that's done, the Dockerfile tells Docker during the build that we want to configure Nova, so we inject a file with the ADD instruction: a nova.conf file that we've already preconfigured. Then we expose the ports that nova-api needs, here 8774 and 8775, and at the end we tell it which command to run when the container is started; here it's the Python nova-api command itself. One of the things we found is that Docker doesn't run init processes, so you have to find and run the actual process that init would have triggered.

Once you have the Dockerfile created, you run the build, and the build creates the image: it runs through the Dockerfile you saw on the previous slide, pulls down the base image, executes your command list, and at the end you get a built image stored in the local registry of the host. Once the image is stored, we can start a container instance, and that's done with docker run. We run these in daemon mode with -d, tell Docker to expose the ports with -P, and say which image to start; here, nova-api.
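Here is a sketch of that whole flow; the package name and the command path are assumptions based on standard Ubuntu packaging, not exact slide contents.

```bash
cat > Dockerfile <<'EOF'
FROM ubuntu:trusty
RUN apt-get update && apt-get install -y nova-api
# inject the preconfigured file
ADD nova.conf /etc/nova/nova.conf
EXPOSE 8774 8775
# run the service process itself, not an init script
CMD ["/usr/bin/nova-api"]
EOF

docker build -t myteam/nova-api .      # builds the image into the host's local store
docker run -d -P myteam/nova-api       # -d: daemon mode, -P: publish the exposed ports
docker ps                              # shows 8774/8775 mapped to random host ports
```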
For example, here we have a list of all the Docker containers running those OpenStack services. For nova-api, we can see that it's exposing ports 8774 and 8775 to the local host, on ports 49340 and 49300.

Once we've built and tested all of these, it's all running nicely on a single node, and now we want to share it across other hosts. As Dan mentioned, Docker has a private registry we can use, sort of like an image catalog. What we do is take the images stored on one host and, following the current Docker convention, re-tag the local images to specify the remote repository, so Docker knows where to pull the image from. Once we re-tag them, we push them up to the registry, and they're stored there for any other host to pull down. If you don't specify the registry, Docker assumes you mean the Docker Hub, so that's one thing we had to set up on the side. That really enables us to scale: it lets us leverage any additional hosts we want to use to spin up new OpenStack services. Once the shared registry is fully populated, we can pull an image down and start it, which makes it really easy and quick to scale the process out. We're going to see a demo of this later; Kalanjee is going to take us through some of the customizations we've done and run through a live demo of actually scaling a service.

So, some of the lessons learned. One: as Dan mentioned, when containers get started, ports are randomly generated by Docker. We tackled this problem two ways. First we started off just fixing the ports, so that we knew and controlled which host ports mapped to which container ports; that way we knew which ports to update in our load balancers. And as Manuel mentioned, we additionally wrote a script that extracts those randomly generated ports so we can use them in our automation to update HAProxy.

Secondly, some services require multiple processes, as I mentioned earlier in the presentation. We have services like Horizon where we want to run memcached in the same container instance and not as a separate container. We found that leveraging Supervisord, which is basically a process manager, helps us run multiple processes in a single container. So what happens is that we tell Docker to run Supervisord as the container's process, and our Supervisord config spins up the Apache instance and spins up memcached.
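A minimal sketch of that Supervisord setup; the program commands and paths are illustrative.

```bash
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true                 ; stay in the foreground so Docker sees one process

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

[program:memcached]
command=/usr/bin/memcached -u memcache
EOF

# In the Dockerfile, Supervisord then becomes the single command Docker runs:
#   ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
#   CMD ["/usr/bin/supervisord"]
```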
If you don't have access to logs, so some of the challenges were when when we Started it off building these container images building the the processes that run We wanted to figure out why things weren't working certain way or compare them with other instances that were working and The way we typically do that is you compare either log files or you look at other processes or services that may be affecting that service that you're trying to get running so what we came up with is a way to consolidate these logs and Here's what we actually did so for the run command for each of the containers we took The run command and for every log file We pass in the parameter of the host name and whether that translates to when the container starts is the container ID This helps you uniquely identify each log for each service instance and When we start the container we specify a minus V which is a volume that will map to a Directory on the host file system, which we've also extended and shared as an NFS shared file system Which then maps to the directory or log directory in the container so that allows us to pull together all of the open stack service logs on the host in a common place and For example, we can see here. We have Nova The Nova log directory and can see that there are multiple API logs with the container ID attached to them so we can easily debug These services as they're running A One of the other challenges at the time When we run these dock containers in demon processes Docker versions lesson 1.3 doesn't let you interact with the container once it's running in a demon mode So this is one of the reasons why we had to go this route for now Next I'm going to hand it over to Kalanjee. He's gonna show you some of the extension points and how we've tried to make the management of these containers a little easier and Hopefully you get through a live demo All right. Good afternoon everyone. My name is Kalanjee van Koeh When we first started experimenting with Docker We realized that we needed a way to make our open stack services running in Docker easier to manage So we started looking at various applications We knew that we needed an application that had a front end UI could manage multiple Docker hosts and Was easy to integrate with our existing code So some of the applications that fit this criteria were chef cloudify shipyard and panamax So ultimately we settled on shipyard The reason for that was that shipyard was written in Python So we knew that they'll be easy to integrate with our open stack code We also saw that shipyard was written and based off the Django web framework So we were pretty familiar with Django So we knew that we would easily be able to customize the shipyard application to do what we needed We also saw that shipyard had a very active community. So we knew that we could count on that community for support Okay, so now a brief overview for how the shipyard UI actually works So after the shipyard application is actually deployed We needed to go to every Docker host that we wanted to manage using the UI and deploy was called a shipyard agent So shipyard agents essentially act as a Communication endpoint for the UI. 
So every time we hit a new page in the UI Every time we hit a new page in the UI The relevant information is returned from those shipyard agents So here is our main page, so this is our shipyard UI So what you're seeing now is our images page So this returns a catalog of all the images that we currently have deployed on our registered Docker hosts So we have quite a few images here So we decided to extend the shipyard UI so that we could organize the images of a year So to do this we actually created what we call applications So application is essentially a logical grouping of our open stack service images So here we have one application for each major open stack component So as you see here, we have a glance application which maps to our glance API and glance registry We have a sender application which maps our sender API sender scheduler and sender volume and etc We also went one level up and created what we call zones So you can think of a zone as essentially an open stack region in this case The shipyard zone is a collection of shipyard applications Okay, so now I'll go on to our containers view So this view shows all of the containers that are currently deployed on our Docker hosts as well as relevant As well as relevant information such as each containers ports base image and initial command So at first glance this container's view is pretty intimidating just because we have so many containers deployed So that's why we created our zone and application groupings So if we click on zone then we can filter down to just look at applications in our open stack region And then we can filter down even further using our application filter So for example, if we just want to look at our glass images or glass containers, then we can do so here So what I really want to emphasize to everyone is that we can utilize the shipyard UI to quickly and easily scale out our OpenStack containers So let's take the scenario where one of our yeah, let's take the scenario where some of our Docker containers are being overwhelmed. Let's use glass API as an example So before I scale it out, I'll just reiterate what Manuel was saying about our HA topology So every time we scale out a open stack instance We need to register that instance in point with our load balancers And the reason for this is so that our load balancers can't can nowhere to distribute as workload So behind the scenes we've actually written a script that will be called every time We scale out one of our open stack containers and this group will go and look at the container figure out Which port is listing on and update the HA property configuration with the relevant information After the HA proxy config is updated then we go on and reload the HA proxy so it gets that new information So yeah So this is our HA proxy stat statistics page. So this is where we see all of our open stack images So I'm sorry all of our open stack containers So let's take the scenario where our glass API is being overwhelmed So in that case we would want to Scale out and create more glass API So all we would need to do is basically go back to First of all look and see that we only have two glass API is currently running So we want to have three deployed. 
So this is our HAProxy statistics page; this is where we see all of our OpenStack containers. Let's take the scenario where our glance-api is being overwhelmed. In that case, we would want to scale out and create more glance-api instances. First of all, we can see that we only have two glance-api instances currently running, and we want three deployed. So all we need to do is go back to our UI, look at our glance-api containers, and open this actions menu. What this actions menu allows us to do is clone, and cloning essentially creates a copy of that running container. So I'm going to go ahead and clone this container. The container has been successfully cloned, so if we go back to our HAProxy stats page... it's way too big, okay, all right. This is before we've refreshed, so we see only two glance-api instances running. If I do a refresh, I now see that we have three glance-api instances running, and they're all registered with our HAProxy configuration. So that's pretty much the end of the demo, and I'll hand it over to Manuel.

Let me see if we can get this to work. The Docker part was easy; it's the presenting that's hard. While we're doing this, I'll give you all a piece of advice: when you go down the stairwells, use the buddy system, because we got locked into one and had to call Kalanjee to save us.

The summary here is that Docker really fit in well with our architecture, so we didn't have to re-architect the environment. What we had to do was really just reimagine what the architecture was. Using Docker gave us the ability to scale much faster than before, it allowed us to better utilize our hardware and get greater density out of the hardware we already had, and it gave us much greater flexibility: if we need to bring down a server, we just move the services over to another machine. The other big lesson is that encapsulating OpenStack services is a very straightforward exercise with Docker. It makes it easy to test iteratively: once you get the process down for creating the container, it's very easy to declare it in the Dockerfile, and once all that is created, it's very easy to run and scale.

We did find a couple of challenges, and maybe we'll talk about them in Vancouver, but the issue is really the orchestration of Docker. Unfortunately for us, we needed to add some extra code to Shipyard to make it do what we wanted. The good news is that this is a very fast-moving area. In fact, the code we changed in Shipyard three months ago can't even be submitted back to the community, because when we started making our changes, Shipyard moved from Python to Go. The community is just moving really, really fast, and that's something that's really good; like all of OpenStack, we're moving fast. Shipyard worked best for us, but walking around we've seen all the other tools that are available, and it's all really cool and really interesting.

Okay, I've got a minute, so: these are the IBM sessions. Today this is the last one; tomorrow we're going to have five. Sean has the nine o'clock one with Phil, and that's probably the best one. Tomorrow there's also going to be an IBM-sponsored session; Dan and I have one there, and that one's also going to be the best one. And that's it, guys. If you have any questions, please find us; we'll be around. We're out of time. Thank you.