Hello, everyone, welcome to our talk. Today we're talking about deploying OpenStack using Docker in production. We work for Time Warner Cable. My name's Eric Peterson, and this is Clayton O'Neill. We're principal engineers at Time Warner Cable. What we're going to be talking about today is having your OpenStack services live inside a Docker container. This is not a talk about how to serve Docker to end users; we're using Docker as a tool to make our jobs a little bit easier in running our OpenStack deployment.

So, an overview of what we're going to talk about today. First, as I said, we view ourselves as operators, and we're going to share some grievances and talk about the pain of operating OpenStack and the pain points that started us on this journey. We're going to talk about possible solutions. Today is going to be very Docker-centric, but there are some alternatives that you may be using today, and some things that we tried that worked well or didn't work well. We're going to cover why Docker worked so well for us, what it really brought to the table, and what problems it solved uniquely for us. We'll also touch on where Docker didn't work; there were a couple of things that we stubbed our toes on and had some trouble with, but overall it was very positive. We'll go into a little more detail about Docker at Time Warner Cable: our particular techniques, things we're doing that we feel are unique to Time Warner Cable. And then we'll wrap up with some lessons learned.

Just a little background on Time Warner Cable and OpenStack: we first put Docker into production in July of 2015. Our first service in Docker was Designate; we had been running Designate in a Python virtual environment before then. Since then, we've added Heat, Nova, and Keystone. So it's worked out pretty well for us, and we've been able to make some progress with it. Also worth noting is that our Nova uses Ceph and SolidFire backends, which can add a little bit of trickiness. We're working on Neutron right now. We've got some issues there: we use OVS, and restarting OVS can mean customer-impacting outages, things like that. Neutron is a little bit more complex as well; we'll touch on that a little more later. And we've actually got some upstream contributions for different fixes in Neutron that we'd like to see land. We're also looking at Glance and Cinder later this year; we think those are probably going to be pretty straightforward and not too much trouble. We're using Docker 1.10 and Docker Registry v2.

So how did we end up here? We're an Ubuntu shop. We use the UCA (Ubuntu Cloud Archive) packages provided by Canonical, and we've done a little bit of our own custom packaging and development as well. But we found that, especially when you have to go through an upgrade, it's a pretty painful experience. Our upgrades typically turned into these big-bang events: OK, now we're ready for Mitaka, so we have to figure out how to schedule a massive control plane outage and upgrade all the things and coordinate all the things and orchestrate all the things. It's been pretty painful to do a big-bang upgrade.
Something else that's probably common to a lot of people after they spend some time with OpenStack is that you want to be able to carry local patches. And you get to a point where certain components of OpenStack that you run might be on a slightly different version than other things. Some things are very stable and you're OK with them, or maybe you're a little bit scared to upgrade; with other things you're a little more brave and ready to take on the latest release. With that, we found that smaller upgrades, more often, is a pattern that has benefited us and produced less pain for our end users.

So why not packages? We built packages for Keystone, which worked pretty well, and we were able to do a little bit of custom code for that; we've got our own little version of LDAP authentication that we use. That worked pretty well, but you get to a point where your packaging really starts to have problems. Because now, if I want to update Keystone and I go in and try to build a new package for it, it might update 27 Oslo packages that conflict with something else on the machine that's stable. The packaging thing really becomes quite a headache after a while; the packaging workflows were just pretty painful for us. We were also limited by when things would show up in the UCA, or when they wouldn't, and the timeliness of that. And in a couple of cases, we were using Python components that didn't even have a package built for them yet, so there wasn't a Debian package or anything like that. So there was a variety of situations where we found that packages really had a lot of pain points, and I think you'll hear that story pretty commonly.

With that, why not Python virtual environments? As I touched on earlier, Designate, originally, we deployed with Python virtual environments. We still deploy Horizon using Python virtual environments, and it works pretty well. You're able to isolate and keep different versions of the libraries together on the same machine and not step on each other, for the most part. But that had its own taxes that we had to pay as well. You had to mirror and keep track of all these Python eggs and things like that, and that became kind of painful. It was a little slow to set up our Python virtual environments, and just working with them felt slow to us. The other thing is that even with Python virtual environments, you've still got outside system library dependencies, or system configuration files that you're working with. So not everything is going to be isolated within your Python virtual environment: you've still got outside libraries that maybe you're going to compile against, and you've got configuration files in /etc/nova, whatever. It's not all going to be wrapped up and self-contained within your Python virtual environment.

So why Docker? Docker, it's pretty much the greatest thing ever. It's super fantastic. Everybody else is doing it. It's going to be good times. We're only halfway kidding with this. It actually is pretty beneficial to have so many people working on Docker and to see a lot of energy around it. It's a very active development community: you see a lot of patches, and a lot of things get fixed.
A lot of that's similar to OpenStack. You see a lot of people active, fixing things, making things better, and it's a good thing to be in that kind of environment and watch those things. The other thing I'd touch on is that while Docker might be this new hotness thing, and maybe it's a little quirky, well, a lot of us run OpenStack. That's kind of similar, maybe.

But aside from that, why Docker? One of the big things is reproducible builds, and we'll touch on that more when we get to the build tooling we've instituted to make it so you can have a reproducible Docker image. It's easy to distribute a Docker image, at least with the setup that we have. And it's got everything self-contained within that image: your system libraries, your configuration files, everything all wrapped up in one nice little unit that you can send out to hundreds of machines. The other thing is that it's easy to install, and you can have several images of the same thing on the machine at the same time. So on our compute hosts, we could have a Docker image for Nova version XYZ and another for version XYZ-prime sitting there on the machine, ready to go, primed. All we have to do is start one image or the other, and it's pretty fast. If you're going to do packages or some of these other approaches, you have to stop a service, install a bunch of stuff, maybe do some compiling, all this other stuff. So Docker makes it very quick; you can be much more nimble.

So why not Docker? What are some big problems, or concerns I guess you should say, with Docker? One is that restarting Docker restarts all your containers, and that's an aspect we're watching to see if a fix is going to be delivered for it. There's some intermittent bugginess with Docker, and we'll talk about some specific cases that we've encountered. It is a little bit flaky, but it's an active development community: you do see changes, you do see action going on there, so overall it's been pretty good. The other thing is that complex services are hard to fit into Docker. As I touched on earlier, we're still working on Neutron. Neutron's got a lot of network namespace things, just a lot of complex interactions with the system that make it a little difficult to fit into a Docker container. But we're nearly there. The other point is that if you're going to use Docker, you're going to have to have your own tooling and processes: how do you build your images? How do you distribute them? How do you version them? How do you know that you've got a reproducible thing that you're going to run with? That's something you're going to have to figure out.

With that, I'm going to pass it off to Clayton here, and he's going to talk about the specific aspects of Docker at Time Warner Cable. So, Eric's talked a little bit about why and why not you might want to use Docker, and I want to talk about how we actually use Docker at Time Warner Cable. Specifically, I want to start with how we actually build the images that we use to deploy services. We start off by building those images from an internal Ubuntu mirror, and that's a pretty small base image; it's in the 60-70 MB range.
On top of that, we build another image that all of our services depend on, and we call that OpenStackDev. That's a pretty fat image: it includes all the shared libraries and all the CLI tools that things may need, so iptables and things along those lines. From there, we take that image and we build an image per service. So we have an image for Nova, one for Keystone, Heat, et cetera.

When we build those images, one thing we're very careful about is exactly what we're putting in there, specifically around Python library dependencies. The way that we've managed that is that we end up with two Python requirements.txt files. The first one is where we start. It's very high level; it says something like: we want to use Nova from the stable Liberty branch, but we also want to put the MySQL driver and the memcache client inside of that image. It also allows us to include custom things; for example, we have some middleware that we've written to assist with monitoring that we also put inside of those images. We then take that and build a Python virtual environment inside of a Docker container, based off of this OpenStackDev image I just mentioned. When we do that, we're using the upper-constraints file that the upstream project makes available to us. What that file does is say: if you're going to install this Python library, you should use this version, because that's what we've tested. Once we have that built inside of the container, we go ahead and run pip freeze. What that gives us is a list of every library the service depends on and exactly what version it was built with. And that means that whenever we go to do a build, that's the requirements.txt file we actually build the image from. Both of those files get committed to Git, along with the Dockerfile and any other associated tooling we need to build the images or get the service running inside the image.

One thing I want to note here is that, for us, it's really important that we have reproducible builds, and the real driver for that has a lot to do with being able to update these base images. For example, if there's a new glibc bug or a new OpenSSL bug and we have to rebuild these images, because OpenSSL or glibc or whatever is inside of that image, we want to make sure that we're only updating that one piece. We don't want to accidentally upgrade Nova, or accidentally upgrade some library that Nova uses. So we're very careful to pin things as much as possible whenever we're building these images.
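As a rough sketch, the two-file workflow just described might look something like this; the file names, package choices, and registry paths here are illustrative rather than the exact tooling:

```
# Hand-maintained, high-level input (illustrative name and contents):
$ cat requirements-src.txt
-e git+https://git.openstack.org/openstack/nova@stable/liberty#egg=nova
PyMySQL
python-memcached

# Inside a build container based on the OpenStackDev image, resolve
# everything against upstream's tested pins, then freeze the result:
$ pip install -c upper-constraints.txt -r requirements-src.txt
$ pip freeze > requirements.txt   # exact versions; real builds use this file

# Both files get committed to Git alongside the Dockerfile.
```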
Another decision we had to make pretty early in the process is how we want to version these images. This seems like kind of a boring thing, but it ends up being pretty important, and it's hard to change later. What we were looking for is that whenever we build these images, we need to tag them with some sort of version, and we wanted that tag to do a couple of things for us. We wanted to be able to tell exactly what version of the OpenStack service we're deploying. We also wanted to know what version of the tooling we used to build the image: the Dockerfile, the requirements.txt it was built from. We wanted it to be automatically generated; we didn't want to depend on people remembering to update it, because we just know that's not going to happen sometimes. And we also wanted it to be unique. We didn't want to end up in a situation where we have two versions of a Nova container, and one's the good one and one's the bad one, and nobody knows which is which.

So this is an example of what we came up with, and it looks kind of gross, but every part of it has meaning. The first part is the tag that we're deploying for Heat in production today, and it breaks down into a couple of pieces. It's based on the output of git describe for the commit that we're deploying for Heat. So you see there that 5.0.1 is the version we're deploying, and the 9 in there means that we're nine commits ahead of 5.0.1, so we're actually a little newer than 5.0.1. And the last part, this g044, et cetera, is the short version of the exact commit that we're deploying into our production environment. The beginning of that is a little more human-readable; the last part gives us all the information we need to find exactly what we're deploying. The second half of the tag is pretty similar: the 16 is the number of commits we've had in the tooling we use for building our Heat images, and the last part is the actual commit of that tooling. So from the tag of an image, we can reverse engineer exactly how it was built and what it was built from.

One other thing I want to mention is that whenever we do our deploys to production, or even in our dev environments, we pin the versions of everything. You'll see a lot of times, when you're reading Docker materials, that there's a common convention of using latest as the tag. That makes a lot of sense when you're doing quick tests and things along those lines, but for production, it makes things kind of complicated.
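For concreteness, here is a sketch of how a tag like that could be composed. The two-repository split matches the description above, but the exact commands and the hashes are assumptions:

```
# In the Heat source checkout being deployed:
$ git describe --tags
5.0.1-9-g044abcd          # tag 5.0.1, nine commits ahead, at commit 044abcd

# In the image-build tooling repo (no tags, so count commits instead):
$ echo "$(git rev-list --count HEAD)-g$(git rev-parse --short HEAD)"
16-g1a2b3c4

# Concatenated into the image tag that gets pushed:
registry.example.com/heat:5.0.1-9-g044abcd-16-g1a2b3c4
```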
As Eric mentioned before, we're using Docker Registry v2. We have basic auth turned on and we're using TLS. One thing that's a little unusual is that we're using the file backend for local storage. The way we get images into that registry: whenever a commit goes into Git, Jenkins picks up all of the tooling, those requirements.txt files, and the Dockerfile, and builds an image from them. Assuming the image build is successful and it passes some tests, we go ahead and push into what we call our master Docker registry. The only thing that can write to that registry is Jenkins, so we don't have to worry about how Bob built some image. Once that push to the master registry is done, we have some other jobs that pick up and mirror the new contents over into two different sites, and we do that via rsync; that's one of the reasons we're using the local file storage. Those mirrors provide read-only access to the images, and the reason we're doing it in two sites is so that if one site is offline or has some sort of problem, we still have a backup and we can fail over to it.

One thing that's maybe a little controversial about this is that it was really important for us not to have any dependency on our production environment actually working in order to pull images. We spent a lot of time talking about this. It originally seemed very attractive to just back the Docker registry with Swift, and where we ended up was a conversation about how much we want to dogfood our own environment. One of the scenarios that came up is: if today we deploy Keystone using Docker, and we deploy a new build and it breaks Keystone, and we need to do another deploy, are we still going to be able to pull those images out of Swift? Is the Docker registry going to be able to authenticate to Swift? I'm sure we could solve that problem whenever it happened, but it would be a gross and stressful situation.

So, we built all these awesome images, and the question at that point is: how do we actually get these things running in production? There are a lot of different answers to that question. We've always been a Puppet shop, and we're big fans of the Puppet OpenStack modules; if you look at the user survey, a lot of other people are too. However, those modules only support deploying services with packages today. So when we first went down this path, we were looking at Designate, and we took that module and said, OK, let's see what we have to do to get this working with something that's not packages. We got something that worked, something we used in production. And because we don't like to fork these modules and try to maintain them ourselves, we made an effort to get that pushed upstream, and we got some negative feedback. The complaints we got were that it was probably not suitable for everybody, that it wasn't very generic. And that feedback was 100% correct: it wasn't. So we went back and started thinking about how we could make this more generic, more suitable to live in the upstream Puppet OpenStack modules. We came up with the idea of, instead of integrating Docker support directly into these modules, adding hooks into them for software installation, configuration management, and service management. On top of that, we have a module that we've written called os_docker that uses those hooks to add Docker support to the stock Puppet OpenStack modules. It handles all of the things you would normally expect packages to do that you don't get with a native Docker deploy: things like templated config files and init scripts, and we have CLI wrappers in place so that you can run nova-manage and it does what you want, those sorts of things.

The os_docker module is available on our GitHub org, if anybody's interested in taking a look at it. We've tried to make it relatively unopinionated. It may not necessarily be a drop-in for anyone, but it might be a good example if you're interested in doing something similar. That module today supports Keystone, Nova, Designate, and Heat, and as Eric mentioned, we're going to be doing more services in the near future. One last note on that: this uses the stock Puppet OpenStack modules, so there's not some special Time Warner Cable fork. You could take the support that's in the Puppet OpenStack modules and do your own thing with a similar approach.
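To make the CLI-wrapper idea concrete, here is a minimal sketch of the kind of wrapper such a module might drop on the host. This is an illustration, not os_docker's actual implementation; the container name and the docker exec approach are assumptions:

```
#!/bin/sh
# /usr/local/bin/nova-manage -- hypothetical host-side wrapper.
# Runs the real nova-manage inside the running Nova container, so
# operators can keep using the familiar command unchanged.
exec docker exec nova-api nova-manage "$@"
```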
So, a couple of problems that we've had; well, we've had a number of problems we've had to work through, and I'm going to talk about two of them today. One of them is pretty basic. It's not OpenStack specific; it's a common problem I think a lot of people have with Docker. Docker recommends, and nearly requires, that you use TLS whenever you're setting up a Docker registry. And when we first deployed Nova Compute to our hypervisors, it failed miserably. I had kind of a sinking feeling at that point, like, I don't know how robust this registry really is. I was a little afraid to go look, but I was surprised at what I found. It turns out that the nginx instance we were doing SSL offload on was chewing up an entire CPU core, and that was because we had accidentally configured it to only use one core. So in that situation we were just running out of CPU on the SSL offload. The quick fix was easy: we changed it to use one worker per CPU, did another deploy, and almost everything succeeded. We came back to it a little later, did some more tuning, and bumped up the number of CPUs on that box. At this point, for our production deploys, we have a Docker registry with eight virtual CPUs, and we can do about 40 pulls at a time of 500-600 MB images and not really have any issues. So as I mentioned, this is pretty basic how-do-I-Docker sort of stuff, but it is something we learned the hard way.

This next one is pretty OpenStack specific and probably a little more interesting. We don't use Docker networking in production, and we think most people that deploy OpenStack would probably want to avoid it. Partially this is because of performance; there's a little bit of overhead in Docker networking. But also, for some services like Nova and Neutron, it's just very, very difficult to get things working with Docker networking, and there's not much benefit to doing so. One thing to be aware of is that even if you're not using Docker networking, Docker sets up all of the plumbing necessary so that you can use it as soon as you decide you want to. If you've run Docker very much, you've probably noticed that if you run ifconfig, you'll see this docker0 interface with a large private network range on it. Docker's pretty smart about this: it looks at what IP addresses you already have in use on your machine, picks a range that's not in use, and assigns it to docker0 for use by containers. So we kind of ignored this; we didn't anticipate any problems with it. We then went to deploy Docker to our hypervisors, as some prep work for the Nova Compute deploy I was talking about just now. We figured this was harmless: what problems could it possibly cause? We're just installing something that's going to do nothing. But it turns out we had a customer who was using the network range 172.17.0.0/16 for their private internal network, which happened to be the same network that Docker decided to use for its internal networking. And we learned, after looking into this, that Docker installs a NAT rule that takes all traffic from the network range it allocates for itself and rewrites it to the hypervisor's public IP address. What that meant is that all of the traffic sourced from those VMs was being rewritten to come from the hypervisor and was exiting the bond interfaces on that hypervisor instead of going down the VXLAN tunnels. That broke all network traffic for that customer. It was a big impact for them, and a little bit of a black eye for us. But the fix was easy: we've since turned off Docker networking on all of our control and compute hosts in our production environments.
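On Ubuntu Trusty with Docker of that vintage, turning that networking plumbing off could look roughly like this. The flags are real Docker daemon options; treat the exact configuration as a sketch rather than their known setup:

```
# /etc/default/docker -- sketch: don't create the docker0 bridge and
# don't install any iptables/NAT rules for containers.
DOCKER_OPTS="--bridge=none --iptables=false --ip-masq=false"

# Then: sudo service docker restart
# (keeping in mind the earlier caveat that restarting the daemon
#  restarts the containers too)
```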
As we mentioned before, upgrades were a big driver for us in moving to Docker, so I want to talk real quickly about how these upgrades work out in practice. For the most part, it's not that different than packages. There are two key differences we've found with doing upgrades with Docker. One is that you have the ability, as Eric mentioned, to pre-stage images. So if you're going to do an upgrade from Kilo to Liberty, or Liberty to Mitaka, you can have both of those images sitting on the box at the same time, so that when you go to do your upgrade, it runs a little faster. It's much faster to just restart services using a different image than it is to install a whole bunch of packages. But the other piece, which is really what we were going for, is that you have the ability to upgrade just one service on a server without worrying about conflicting dependencies. For us, this is important because, for example, it's very likely that next week or the week after we're going to upgrade Keystone to Mitaka, and we're not going to upgrade Neutron to Mitaka; we're not quite that brave yet. I think a lot of people have these sorts of concerns, and this makes moving forward on these things a lot easier. So overall, this isn't really exciting, but we kind of consider that to be a feature. There were a lot of times in the past, doing package-based upgrades, when we were up in the middle of the night upgrading every service on a box, and if it went wrong, we had rollback plans, but we weren't happy about them.
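A sketch of what that pre-staged upgrade flow can look like on one box; the registry name and tags are illustrative, and in practice the restart is driven by the Puppet-generated init scripts and orchestration tooling rather than typed by hand:

```
# Days ahead of the window: pull the new release next to the current one.
$ docker pull registry.example.com/keystone:8.1.0-2-gaaa1111-20-gbbb2222  # Liberty, running
$ docker pull registry.example.com/keystone:9.0.0-1-gccc3333-21-gddd4444  # Mitaka, staged

# During the window: repoint the service at the new tag and restart it.
# No packages change; nothing else on the box is touched.
$ service keystone restart

# Rollback is the same operation with the old tag, which is still on disk.
```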
Those of you that are familiar with the Kolla project may be wondering why in the world we didn't use what those guys did. For those of you that aren't familiar with it, Kolla is an OpenStack big tent project for building and deploying Docker images. We've looked at Kolla a couple of times, and we've been really impressed by how quickly they're moving; it's a very quickly developing project. There are a couple of reasons that we're not using Kolla. The first one is that we looked at it when we first started looking at using Docker, and at that time they didn't support building images from source. That was a key problem for us, because we wanted to be able to carry patches. So we looked at it again a couple of months later, after we'd gone a little further down the road, and at that point that had been resolved. They didn't, however, at that point support third-party plugins. For example, for Designate we have a third-party plugin that we use for creating DNS entries automatically, and as I mentioned before, we have some middleware that we use for monitoring and things along those lines. We talked to them at the Tokyo summit; they had an ops feedback session, and this was a requirement they were aware of but hadn't quite figured out the solution for. I think it's something they were looking to tackle in the Mitaka cycle, but I'm not sure if it has been resolved. One thing I do want to say is that Kolla has been a really great resource for us, both being able to go look at the source code and talking to their team. They know that we're not using Kolla at this point, and they've helped us anyway; they've just been a great resource. If you're wondering how to get an OpenStack service inside of Docker, they're basically the experts, so I just want to thank them for all the help we've gotten from them. And if you're looking at deploying OpenStack with Docker, Kolla would probably be my first stop for, hey, is this the thing that fits my needs? Because it's the most mature project at this point that I know of.

And I think that's all we've got for now, and we have some time for questions.

Do you guys do anything with HA, and did you get any of those services into Docker? If you mean HA for the OpenStack services themselves, we kind of already had that in place. The architecture we had didn't actually change very much when we went from packages to Docker containers, and that was a goal: because there was enough operational overhead already in making this change, we wanted to keep as many things the same as possible in the short term, and we expect those things will change over time. [The question was: do we have HAProxy in Docker?] We've not; we've focused entirely on OpenStack services, and a lot of that is because the upgrades there are painful and we want to be able to carry patches. There are a couple of other miscellaneous tools that we have put in Docker, but for production, the OpenStack services themselves are our sole focus.

Did you primarily move away from self-packaging to distro-managed packaging? Are you still making your own packages for the pre-Docker-image stage? We're not really making packages anymore. The packaging thing is kind of over for us, I think, at least for the foreseeable future; I think we deleted a bunch of that stuff.

Where do you run the uWSGI server, inside the container or outside? Inside the container. Inside.

Do you have any roadmap or plans to move to upstream Kolla now that that stuff is there, or are you still going to blaze your own trail? It's something we're going to keep an eye on. We like what we've come up with; it works pretty well for us and it's not very hard to operate, but on the other hand, if we don't have to maintain it, that's a plus. So we're going to continue to weigh: does it do what we need? Does it satisfy all of our requirements? And then, what's the cost of switching? We're not necessarily looking forward to switching to Kolla, but it's something we're certainly keeping as an open option; if it meets our requirements and it turned out not to be very much work, I don't see any reason why we wouldn't. Would you consider it on a service-by-service basis? More likely than not, yes.

How do you clean up the Docker registry? We've ignored that problem so far. Yeah, we have a lot of disk space on that server, and the containers honestly don't take up that much space. But if you come up with some solution, please share it with us. Part of the problem, I think, is that the registry itself is a little immature at this point, as is the tooling for doing that; you can find some stuff for it. Part of it is also that you have to have a really good idea of what you're actually running, and that's a thing we can find out, but it's very labor-intensive right now, and we don't have good tooling around it. At some point, I suspect we'd develop some tooling for determining, OK, these are all of the images that we're running in all of our environments, so we don't have to do that by hand, and then we would prune most of the stuff that we're not actually using. Yeah, and we might keep an N-1 version of things for rollback or fallback.
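As a starting point for that kind of cleanup tooling, the Registry v2 HTTP API can at least enumerate what's stored. The two endpoints below are part of the real v2 API; the host, credentials, and output are illustrative:

```
# List repositories in the registry:
$ curl -s -u user:pass https://registry.example.com/v2/_catalog
{"repositories":["designate","heat","keystone","nova"]}

# List the tags for one repository:
$ curl -s -u user:pass https://registry.example.com/v2/nova/tags/list
{"name":"nova","tags":["5.0.1-9-g044abcd-16-g1a2b3c4","5.0.1-8-g5e6f7a8-16-g1a2b3c4"]}

# The hard part is cross-referencing those tags against what is actually
# deployed; actual deletion is digest-based and needs a garbage-collect
# pass on the registry afterwards.
```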
How do you handle the startup ordering and orchestration? I mean, do you use systemd or Upstart or something like that? We're using the Puppet Docker module for setting up the startup scripts and things along those lines, and since we're on Ubuntu Trusty, that just builds SysV-style init scripts. So we're using the standard ways of handling it there. It's not great, but that goes back to my point earlier: we were trying to introduce as little change as possible and get the benefits of containers in the short term. If we were doing it entirely from scratch today, would we do it that way? Probably not, but what we have was kind of the easiest path. Next question.

What do you guys do for config files, and how do you manage those? Puppet drives most of the configuration files. As part of using the Puppet OpenStack modules, they've got support for nearly all the OpenStack configuration options, usually. Longer term, we've talked about it; we'd probably like to put Consul in place and start using templates at that point to substitute in environment-specific things like endpoints. We don't feel like that work is particularly difficult, but what we have works, so it's not on the short list of things to do yet.

A follow-up to that question: so with Puppet, you template files on the host and then you bind mount them into your Docker container? Or do you run Puppet inside the container? No; when the Docker container starts up, it has access to those local files, and Puppet is still running outside, on the host. OK, so you bind mount /etc, whatever? Yep. The two things that we bind mount for pretty much all services are /etc/<service name>, as a read-only mount, and /var/log/<service name>. Again, that's not necessarily the absolute best way of doing it, but it meant that somebody on the team who doesn't know a whole lot about Docker is able to go in and find the config files still where they're expected to be, and the log files too; and the same thing with service startup and shutdown, all the commands are still familiar.
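Putting that answer together, a container start under this convention might look roughly like the following; the service, registry name, and tag are illustrative, and --net=host reflects the host networking discussed earlier:

```
# Puppet writes /etc/nova/* on the host; the container reads it read-only,
# and the logs land on the host where the usual tooling can find them.
$ docker run -d --name nova-api --net=host \
    -v /etc/nova:/etc/nova:ro \
    -v /var/log/nova:/var/log/nova \
    registry.example.com/nova:5.0.1-9-g044abcd-16-g1a2b3c4
```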
Which version of Docker do you use? We're using 1.10 now. We were looking forward to 1.11, because we were under the impression that it was going to solve the problem where upgrading or restarting the engine restarts the containers. It sounds like there was some miscommunication; 1.11 actually just lays the groundwork for that. It's probably going to come in 1.12 or 1.13. OK, thank you.

Besides the control plane, did you do the hypervisors as well? And did you do any performance testing to see the result of your work? The funny thing about the performance testing is that we actually saw performance get better in some of our services. It's not because of Docker; we don't have any reason to believe it has anything to do with Docker. It's just that we moved to newer stable releases, so we got bug fixes and things along those lines. Because we're running host networking, we haven't really seen any performance issues at all on any of these things. We do performance monitoring, and in most cases there was no noticeable change. As far as the hypervisors, we are doing Nova Compute. That was a little complex in places because of the iSCSI aspect of doing SolidFire storage: originally, when we deployed, we were only using Ceph as the backend for volumes, and then we had to come back and add iSCSI. It took a couple of iterations to figure out exactly how to make that work. The next thing we do will probably be the Neutron Open vSwitch agent, which also runs on all of the hypervisors.

So you use Puppet; what kind of tools do you use to orchestrate the updates? Ansible. Oh, OK, so you use Ansible to orchestrate Puppet. Yep.

What do you use for logging? Do you have something specific? You mean aggregating all our logs? Yeah, how do you get the logs out of the OpenStack services? We use Splunk to aggregate all our logs. The logging for us didn't change when we put these services inside of Docker; we actually still have them log to /var/log/whatever, and then we have Splunk pick those logs up so that we have a more universal view. Since you're using 1.10, do you have any reason not to switch from mounting /var/log to using the syslog Docker log driver? Because right now in Docker, you can log to stdout from the container, and then Docker uses syslog to put it on the host. Oh, we haven't looked at it, to be honest with you. The logging's really just not been much of an issue for us. OK, thank you.

Anything else? Hi, what do you do for the database? Is that Dockerized as well? It's not. Really, we're just doing OpenStack services at this point. We've discussed whether or not we'll put things like MySQL or RabbitMQ inside of containers. At this point, it's not clear exactly what the benefits would be unless we were doing some other orchestration to help with placement or scaling of those workloads, which we don't have in place today.

Hey, thanks. How did you get iSCSI working for SolidFire in the container? Yeah, that was horrible. To be honest with you, I'd have to go look at how we did it in the os_docker stuff. For the most part, the complex piece is actually getting the devices in /dev to be presented correctly inside of the container, and there's a fix in 1.10 that is required to make that work the way you would expect it to. Thanks. For a number of these sorts of things, like for Cinder and for Nova, if you go look in Kolla, you'll see that there are bind mounts for /dev into the container, because fundamentally, if Nova is going to do iSCSI mounts, it basically has to have root access on the box. If you drop an email to clayton.oneill@twcable.com, he'll give you the full answer.

Nice talk. Thanks, Dave. Anything else? OK. OK, thank you all for coming. Thanks, everybody. Thank you.