Hello everyone. Good, the mic's working. So I just need to quickly reset this. I want to quickly show you something that I think is pretty cool: I have my presentation, and Chrome itself, inside of a Docker container. So I actually run my presentation by running docker-compose up, I click a few things, and it comes up on my other screen, of course. I think that's pretty cool: Docker is versatile enough that you can even pass graphics through it. A lot of people don't know that. It's probably not a great use case for Docker, but it is kind of fun. So, during the keynote yesterday morning the question was asked, how many people is this your first Summit, and probably about half the group put their hands up. That means many of you have probably very recently had a discussion that looks a little something like this. Not only has Lumbergh here told you to go ahead and implement a private cloud, he also recently read in an in-flight magazine about this thing called Docker, and now he wants you to Dockerize everything, right? So what I'm up here to do: I've been messing around with Docker and OpenStack together for quite some time, and this is kind of a brain dump of stuff I know, stuff I've done, some tools I've found useful, that kind of thing. I've tried to put it together in a somewhat useful agenda and we'll see how we go. We're going to talk a little bit about what Docker is; you probably all have a pretty good idea, so I won't go too in-depth. Then I'm going to talk about running Docker on OpenStack.
I'm going to talk about running OpenStack on Docker. Then we're going to go through some tools and ideas to help you operationalize Docker and run apps inside of Docker that maybe aren't the most Docker-friendly apps. That's something we all kind of have to do, because we're not all dealing with these great, friendly twelve-factor apps; we all have a lot of legacy things and in-house applications we have to try and run, and there's no reason we can't put them in Docker. Then I'm going to share some opinions on how to help your org get to a good place where it can effectively use OpenStack and Docker. But I should probably tell you who I am. I'm Paul Czarkowski, @pczarkowski on Twitter. I still think of myself as a sysadmin; a lot of people try and call me rude names like a DevOp or a cloud engineer. I work at Blue Box on our private-cloud-as-a-service product. I'm a fairly early adopter of Docker: I ran it in production at 0.3, which was a year and a half ago or longer, for a personal app, and I had a lot of fun with that. I presented at the first DockerCon in San Francisco last year. I'm a contributor to nova-docker and to Solum, both inside the OpenStack umbrella, and I've also got a few projects around the Docker ecosystem, one called dockenstack and one called factorish, and I'll talk through those as we go. I also helped build and run the ContainerDays concept: we ran our first one in Austin, we've got another one coming up in Boston, and we actually have a mini one going on today in the other hall across the road, which is pretty cool. So, Docker at its heart is process isolation, right? I like the term.
I didn't coin it, but I really love the term: it's chroot on steroids. It uses Linux kernel tooling, cgroups, namespaces, etc., to do that process isolation. It creates shareable, immutable artifacts, which are the Docker images, and you share those via the Docker Hub or a private registry. That's actually the thing that has really pushed Docker adoption: the shareable artifacts, the kind of marketplace around them, as well as the fact that Docker made it very easy to use LXC, whereas before Docker it was quite difficult. Also, you can have as much or as little of an OS inside the images as you want. This is the typical VM-versus-Docker diagram; I ripped it straight from the docker.com website. I don't want to go through it in detail, but I want to quickly touch on a few things. Docker is containers, it's not virtualization, but you can make it act like virtualization: you can make Docker act like a hypervisor and you can make Docker images and containers act like VMs. But it isn't secure and isolated the way a VM is; it's sort of halfway between a chroot and a VM. So Docker, and containers in general, are not a security feature. They're a way to do process isolation, and that's a pretty important distinction. Any OS capable of running Docker is capable of running your app if it's inside of a Docker container, so regardless of whether the host is Red Hat or Debian or Ubuntu, it can run it, because the container shares the kernel and the rest is pretty much inside the container. Docker has a layered filesystem, which is really cool because it means your containers can share sections of data, which means that even if they're really large images, you can have very small deltas, so they can still be downloaded and run very quickly. And you get really fast startup times, less than a second.
It's down to something like 50 milliseconds for really simple ones. Obviously if you've got a weird app that takes 10 seconds to initialize, then you have that plus 10 seconds. Size-wise, I've seen anything from a couple of meg to a couple of gig, and the whole spectrum is fine; there's nothing wrong with having a really large or a really small container. On the Docker ecosystem, let me quickly level-set on a few of the different tools out there. From Docker themselves there's boot2docker: if you're running OS X or Windows, you probably want boot2docker to run Docker. It's basically VirtualBox plus a really lightweight Linux VM; when you run it, it starts up the VM and sets your environment so your Docker client is talking to it, and you get a native-like experience with Docker on your Mac or Windows box. The Docker Registry is the artifact repo for your Docker images: you can push images to it, you can pull them from it, and you can back it with object storage, which is really great because we have Swift, so we can back it with Swift, and then the registry itself has no real persistent data. That's super useful for us. Docker Compose is a way to start up Docker containers based on a YAML file, kind of like Heat but with a lot fewer features. It's really good for quickly standing up multi-container development environments: we used to use vagrant up and it would take five to ten minutes, and now it's a second or two, so that's a very large improvement. Docker Machine is a way to spin up machines capable of running Docker externally, say on an OpenStack cloud.
So you point Docker Machine at your OpenStack endpoint, you give it your credentials and so on, and it will spin up a VM, install Docker on it, and then, I think, create an SSH tunnel between the two, so you can talk to the Docker API on the remote machine without exposing it to the entire world, which means fewer security concerns. Again, super useful. Docker Swarm is fairly new-ish; it does scheduling of Docker containers across multiple machines that are running Docker. And libnetwork is brand new: they just acquired SocketPlane and have them rewriting the network stack, and they're bringing in SDN support via Open vSwitch somewhere off in the future. In the community we have a bunch of stuff; I don't want to go through it all, there's tons. We have the lightweight OSes designed to run containers, CoreOS, etc. There's Kubernetes, out of Google, for multi-machine process scheduling. You've got Mesos, which is kind of the same thing but slightly different, and it supports Docker and a bunch of other ways of running processes. We've got etcd and fleet, which I use quite a lot: if you stand up multiple CoreOS nodes, they cluster together, and you can tell fleet to spin up a Docker container and it will choose where to spin it up. If that host happens to die, it will spin it up on one of the other hosts, so if you've got good stateless apps, it's a pretty cheap way to get some pretty good HA and scheduling. Drone is a CI tool based on Docker images, so it's super fast to spin up, run tests against your code, and then destroy it all again; it's kind of like a private Travis CI. And then you've got the different PaaSes that are built on top of Docker: Deis, Flynn, Rancher, the new one from Cloud Foundry, Lattice, etc. So, we're going to talk about running Docker on OpenStack, and
we're going to focus a little bit on the OpenStack-centric tooling, right? So nova-docker is the obvious one. It's got kind of an interesting history; you're probably all familiar with it, so I'm not going to read the slide out. I know several companies that are actually using it in production today, so it is definitely production-ready if you have the appropriate workload for it. It treats Docker like a hypervisor and it treats your containers like VMs, so you get a bunch of benefits from Nova as far as scheduling and a common interface: you're spinning up containers the same way you're spinning up VMs. But that comes at the cost of some of the really cool Docker things, like runtime config, volume mounts, and so on, though there are some folks working on improving that. It does keep Docker's super fast start time, but remember, now it's running in a distributed system. You have to schedule it, and the Nova scheduler can take five to ten seconds, and then you actually have to download the image and run it, which, if it's not already cached on that machine, could take several minutes for a large image. So those things are still a concern, but once the image is there, you've got that sub-second startup. Your images are stored in Glance, so again you've got a unified way to view artifacts and images, and it means you don't have to run a Docker registry if you want to use the nova-docker driver. It also uses Neutron, so we get a real IP address, and we get security groups and some of those nice things we really like about Neutron. And this is what it looks like to run it; you can see it looks the same as doing a VM, right?
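The slide showed roughly this workflow. Treat it as a sketch: the image name and instance name are just examples, but the pull, save, and boot steps are the documented nova-docker flow.

```shell
# Pull the image locally, then export it into Glance so Nova can use it.
# container-format=docker is what marks this as a Docker image for the
# nova-docker driver; the Glance image name must match the Docker tag.
docker pull ubuntu:trusty
docker save ubuntu:trusty | glance image-create \
    --name ubuntu:trusty \
    --container-format docker \
    --disk-format raw \
    --is-public true

# Booting it is then just a normal Nova boot against that image
nova boot --image "ubuntu:trusty" --flavor m1.small my-container
```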
The nova boot command is exactly the same; the only difference is we do a docker pull, and then a docker save that we pipe through to a glance image-create. There's also a Heat plugin for Docker. Again, there's some history there I won't go through. It adds a Docker resource type to Heat so that you can treat containers as Heat resources. I haven't used it a lot, aside from making sure that it actually works and does what it says it does. A couple of things to be aware of: it does require you to do manual placement, so you're specifying which containers run on which VMs inside the Heat template, and Heat has to have access to the Docker API. On a private cloud that's probably okay; on a public cloud that's a little bit scary. This is what a Heat template looks like for an existing host. I pulled it straight from Scott Lowe's blog, so I've attributed it there, and he's got a bunch of really good posts on this, so if this is what you're interested in, go check out his blog.
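A stripped-down template of that shape might look something like this. The endpoint address and image are placeholders, so double-check the property names against the plugin's documentation before relying on them:

```yaml
heat_template_version: 2013-05-23

description: Run a Docker container on an existing host (sketch)

resources:
  web_container:
    # Resource type added by the Docker plugin for Heat
    type: DockerInc::Docker::Container
    properties:
      # Heat talks directly to this Docker API endpoint, which is why
      # exposing that API matters so much on a public cloud
      docker_endpoint: 'tcp://192.168.1.10:2375'
      image: nginx
      name: web
```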
There's a long URL there, but blog Scott low dog The other thing which is actually interesting is when you combine heat and Nova docker is you get to use You get to use Docker containers Via Nova via heat so you kind of get all the benefits of heat and all the benefits of Nova and all the benefits of Well, some of the benefits of docker and so that's actually quite interesting to me And it's actually how how Solom works for those of you who aren't familiar with Solom It is somewhere between a Paz and an app like application life cycle manager It is it kind of an open-stack native way to provide your users with a an application centric experience It uses docker is the application packaging mechanism and the artifact to ship around and also their execution runtime And it uses heat to deploy that either by an over docker or to a core OS Box and then to docker and that that's if you want to do it on a multi-tenant environment You don't want you don't want to have tenants sharing necessarily sharing the same physical host with containers So that gives you that isolation of VMs as as Magnum Mature's I would imagine Solom will move across to use Magnum for that instead of Core OS directly And the way it works is you tell someone about your application You tell it where on github your application lives and it creates a a githook up with github and then any time you do a commit to master It fires a message to Solom and based on a set of rules you have it may run some unit tests it may then create a build and put that build into Into glance or into a docker registry and then go and deploy that via heat as well as deploying Any resources you ask for like a database via trove or a load balancer And Again, I know of at least one place using it in production So if you have the right use case for it, it is fairly narrow use case right now, but it is getting better It's totally doable to run in you know for running development stuff. 
Maybe you don't want to run it in production just yet. There's also Magnum, and Murano. I was going to talk more about them, but they were covered really well in the keynotes and in other sessions, so I didn't want to rehash things we've all already heard. I do think Magnum is something we want to watch really carefully over the next six to twelve months; I think it's solving the problem in a really good way, maybe even better than what nova-docker is doing in a lot of ways. I also wanted to quickly talk about a PaaS I've used a fair bit called Deis. I'm calling it out specifically because I've got good experience with it, not because I'm saying you should be running it. It runs really easily on OpenStack, and I helped write the docs for that. It also runs really well on bare metal and other cloud platforms. It runs on top of CoreOS, and it uses fleet and etcd and all of those things, so it's kind of a twelve-factor microservices system in itself. They're eating their own dog food there: they're a PaaS, but they're also building it as microservices and twelve-factor. It gives you a couple of very familiar user interfaces. One is very similar to Heroku: you create your application, and you do a git push deis to the repo you're pushing to, which pushes it to a git repo on Deis, and that kicks off a build and then runs your application. Very similar to Heroku. It also has a way to push Docker images directly to it: you do a deis push with your Docker image, and it pushes it up to Deis instead of up to the Docker Hub, and it will then run it.
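To make those two flows concrete, they look roughly like this. The app and image names are made up, and the exact subcommands vary between Deis versions, so treat this as a sketch:

```shell
# Heroku-style workflow: create an app, then push source to the
# Deis git remote, which triggers a build and a deploy
deis create myapp
git push deis master

# Or hand Deis an already-built Docker image instead of source code
deis pull myorg/myapp:latest
```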
So you've got two very familiar ways of running containers on top of it. There are a ton of other PaaSes coming out and building up around the Docker ecosystem; Deis is just the one I've used the most, and I feel it's the most mature of what's available right now. And of course there are a bunch of other ways to do all this, right? We can curl-bash the thing, we can run Docker Machine, we can run Mesos, Kubernetes, etc. I have limited time, so I picked the ones that are more OpenStack-centric, the things that are interesting to me, and the things I've used a fair bit. The one thing is that Docker and the Docker image are the unifying piece amongst all of these, so even if you use one for six months and something better comes along, it should be relatively straightforward to swap it out and use something else. So, OpenStack on Docker. About a year or a year and a half ago, maybe before Atlanta, I created dockenstack, which was basically an experiment to ask, can I run DevStack inside of Docker? It turned out I could, and it wasn't all that difficult. It also turned out to be quite interesting from a CI perspective: after you've built it once, you can build it again very quickly, which is really useful for running tests. So Eric Windisch at Docker actually took the project over, with the intention of using it for the CI for nova-docker itself.
I'm not sure exactly how far along he is with that, whether he's done it or not, but it is kind of cool. The first time you run it, it takes about as long as a regular DevStack install, but any subsequent installs are super fast because of the way Docker does its caching. It supports the nova-docker driver and libvirt LXC right now, so it does this crazy container-in-container thing. It is possible to use KVM by passing in some sockets and privileged mode and so on, but I haven't really messed with that. Next we have Kolla. It's a collection of tooling for running OpenStack on Docker. I've looked through the repo but haven't actually used it, so I can't give you too much insight into it. I think it's a really interesting concept. I'm not sure how practical it is to run all of OpenStack in Docker containers, especially when you get to your stateful services like RabbitMQ and MySQL, but the APIs and the schedulers and things like that are perfect to run inside Docker containers, as long as you're able to configure them to say which database to talk to, which Rabbit to talk to, and so on. If you want to run nova-compute, or the Neutron networking pieces, you do have to do some fiddling with privileges, use host networking for Neutron, and some other things to pass it all through. But it is actually possible to run all of OpenStack in Docker containers, including the actual Nova talking to libvirt to KVM, etc. It uses the OS packages.
I think it uses the CentOS packages. I would actually rather see it use git, because I think most of us running OpenStack in any significant way have some patches or something we're changing, and we end up building our own packages from git, so it would be good to get that same workflow into our Docker containers. There's a packaging tool called giftwrap, which I use a fair bit. It was written by one of my colleagues at Blue Box, and a bunch of people have contributed to it. It builds Debian packages and RPM packages for the various OSes, but it can also build Docker images. You run it against a manifest file, which is a YAML file where you describe which OpenStack projects you want, which git revisions to grab, and where the GitHub location is, so either the public repos at a git revision, or a private repo if you're running your own forks or patches. It will go through, build them, and spit out images. I wrote a tool called giftwrap-wrapper, which is basically a Docker factory for that: you run one command and you get a CentOS 6 image and an Ubuntu 12.04 image, with the packages and Docker images for each. It's basically a cross-compiler for Linux distributions, and you can actually, because you can have... it doesn't matter, I've forgotten. Yeah. Totally lost my train of thought.
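To give a flavor of what that manifest describes, here is an illustrative sketch. The key names here are from memory and may well differ from the real schema, so check the giftwrap README before using any of this:

```yaml
# Illustrative giftwrap-style manifest: build selected OpenStack
# projects from specific git revisions. Key names are a sketch,
# not the actual documented schema.
settings:
  build_type: docker
  version: 2015.1
projects:
  - name: nova
    gitref: stable/kilo
    giturl: https://github.com/openstack/nova
  - name: neutron
    gitref: stable/kilo
    giturl: https://github.com/openstack/neutron
```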
Sorry about that. So, we've used giftwrap, or we've done something else, we've used Kolla, to create our OpenStack images. We now have nova-api ready to run in a container, but we still have to configure it: we have to get at nova.conf, and we have to tell it where our Rabbit is, what libvirt driver to use, all that sort of stuff. So there are a few ways we can write configs into a Docker container. The first one: if we have a fairly static system where nothing changes much, we just write the configs directly into the image. In the Dockerfile we just ADD them directly, and it's pretty simple. If you're already running Chef or Ansible or something like that to do your configuration management for OpenStack, which I really hope you are, because OpenStack is not something you want to run without config management, then you can just keep those configs where they are and bind-mount them into the Docker container, and you're basically done. And then there's the final way, which I think is the really interesting way to do it.
It's to template the config files, and then use environment variables at runtime to fill them in, or even better, to use service discovery. I use a tool called confd for that; it's super useful for that sort of stuff. You can even use an inline sed to change stuff at runtime when you start up your container. Now we're going to flip to some operational stuff. There's a bunch of golden rules the Docker community has kind of adopted and will yell loudly at you about, and they're not really rules, even though the community gets mad about them. These people don't know what your use cases are; they don't know what problems you're trying to solve, and so a lot of this stuff just doesn't apply to you. So don't feel like you have to do things "the Docker way"; do what makes sense to you, to actually make something that's useful and good for you. This is what they want you to have, right? They want you to have this unicorn app. It's probably written in Go, it's fully twelve-factor, and it's a couple of megs in size. For the rest of us it's Python, or God forbid PHP, with a ton of system dependencies and config files; it's trying to write to log files, and it's many multiple processes. If it's PHP you probably need Apache plus PHP-FPM, and your container ends up being a gig or more. It's amazing how a small Python app can give you a gig or more of OS and system dependencies, even inside of Docker. And this is an example Dockerfile for that kind of application. It's actually a Dockerfile for PHP, so it's doing nginx, and it's doing HHVM, which is the PHP-FPM-like thing that came out of Facebook. It's also got runit for doing process management, and it's got etcd and confd
It's doing get CD and conf d For config matter for doing and writing out our templated config files and using service discovery if we want and then it's telling Or what do I command to run and Then once that's in and we've done a build We can then use docker compose and this is what a docker compose looks like so we'll get three three running containers here One running engine X one running HHVM and one running my sequel and because of the docker links They'll all talk to each other and we have a dev environment in less than a second with like our tiered application And that's super useful for development work There's a tool set I wrote I talked about I mentioned a little bit earlier called factor ish and it's some stuff I wrote to try and figure out how to run more legacy style apps inside of a container and Sort of the the brain wave that I had that made me do it was if we if we put stuff inside of a container And we want it to be 12 factor what's inside the container doesn't have to be 12 factor It's the container itself. It has to be right So you can do all sorts of crazy stuff in the container as long as from the outside it follows those 12 the 12 12 factor rules. I have demo apps for it from anything from a really simple Python app to a full Elk stack and also a My sequel Galera cluster that will auto discover itself using at CD and service discovery and set itself up across multiple nodes Kind of cool, and I'm not suggesting you should run persistent data in docker containers just yet. It was just a an exercise But in doing that I learned a bunch of stuff Raring multi-processes. It's fine. There's no problem with it For stuff like my PHP. 
it's kind of mandatory. The tooling can also help you run things like Apache, which isn't all that easy and friendly to run in the foreground in a Docker container the way that you should. My preference for init systems inside containers is supervisord or runit; both are pretty good, and both are pretty lightweight and easy to configure. When we're talking about logging inside of a container, it's really key to never write a log file inside the container unless you absolutely have to. That's because the moment you start writing log files into containers, if that container is going to stick around for a long time, now you need to do log management, right? You have the same problem we have on VMs and everything else, but with a lot less tooling to actually do the work. So what we do instead is always log to standard out and standard error; that pushes through to the Docker log subsystem, and then you can consume those logs with other tools. This is an example of an nginx config where we're telling it to log to standard out and standard error. /dev/stdout and /dev/stderr are basically devices that talk to the Linux kernel and loop back, so when nginx writes a log to that "file", it actually comes back out on nginx's standard out. By doing that, setting daemon mode off, and setting a user to run as, nginx now runs in the foreground and all of your logs, access logs and everything, come out on standard out. Now it's really, really good to run in a Docker container: you get all the benefits of running in Docker, and you get the logs coming out to the Docker log subsystem. And that brings us to doing configs inside of our containers.
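The nginx logging setup just described boils down to something like this. It's a minimal sketch rather than the full config from the slide:

```nginx
# Run in the foreground so nginx stays the container's main process
daemon off;
user www-data;

# Send logs to the container's stdout/stderr instead of files,
# so the Docker log subsystem picks them up
error_log /dev/stderr warn;

http {
    access_log /dev/stdout;
    # ... the rest of a normal nginx config goes here ...
}
```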
I mentioned confd before; to me this is the best way to do this right now. It's designed with service discovery in mind, etcd, Consul, etc., but it also supports taking values from environment variables you pass into Docker via the -e flag. It's a templating engine written in Go, so you get all the benefits of the Go template language: loops, etc. You have a couple of files involved. This is the metadata file for your template: you tell it where the template is coming from, you tell it where you want to write it to, you've got some other attributes, and you tell it which keys to listen to. It will subscribe to those keys, and any time those keys change it will rewrite your template file, and then it can do checks and reloads and so on based on whatever your app needs. This is the template itself; you can see the double curly braces there. That's getting the values that we've got in environment variables. Now, environment variables can't have slashes in them, but etcd and Consul use slashes and look like a directory structure, so there's a little bit of a disconnect. What confd does is capitalize all the text and replace the slashes with underscores, and then we can consume it in a script like this. This is a boot script that I run whenever I start the container. The first thing it does is go and collect those environment variables, and if they don't exist, it sets somewhat sane defaults, which is what those squiggly bits on the right are, and then it runs confd in one-time mode. That writes out the config files and exits with a zero or a one depending on whether it passed or failed. Then it runs nginx, using exec so that nginx takes on the PID of the bash script that's running it: it becomes PID 1 of the container. And then it also does that wait at the end, and that basically
tells bash to wait until all of its child processes have exited before it exits itself, and that just helps reduce the chances of zombie processes showing up. Once all of that is done, we just do a docker run command and all of our templating and all of that stuff is handled, which is pretty useful. If we then want to use etcd or Consul, we simply flip confd to not run in one-time mode. It then runs as a daemon, so you want to run it under runit or supervisord or something, or just put an ampersand at the end and run it in the background, and now it will go and get those values any time they change in etcd or Consul or whatever you're using for service discovery. So you've got service discovery really cheap, and you were able to get there iteratively: you start with environment variables, you get a little better, you move to service discovery. Flipping to outside of the containers: things are a little bit different out in Docker land, but there is some tooling now to help smooth those differences out. Things like config management, and by that I don't mean templating configs inside; I mean managing the Docker containers themselves from the outside. Logging, monitoring, and stuff like that. So, very quickly: config management. All of the major config management tools have a decent Docker story now, and this is a really good way to bridge between the way we do things right now and doing things in a more Docker-centric way.
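A boot script along the lines just described might look like this. It's a sketch: the variable names are made up, and the confd and nginx lines are commented out since they depend on those tools being in the image.

```shell
#!/bin/bash
# Sketch of a container boot script: collect environment variables with
# sane defaults, render configs once, then exec the service so it
# becomes PID 1. All names here are illustrative.

# The "squiggly bits": fall back to a default if the variable is unset
export DB_HOST="${DB_HOST:-localhost}"
export DB_PORT="${DB_PORT:-3306}"

# An etcd/consul key like /myapp/db/host maps to the environment
# variable confd looks for: drop the leading slash, uppercase it,
# and turn slashes into underscores
key="/myapp/db/host"
var_name=$(echo "${key#/}" | tr 'a-z/' 'A-Z_')
echo "$var_name"    # prints MYAPP_DB_HOST

# Render templates in one-time mode, failing the container on error,
# then exec nginx so it takes over PID 1 (commented out in this sketch):
# confd -onetime -backend env || exit 1
# exec nginx -g 'daemon off;'
```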
So what we basically do is use config management to install Docker, unless we already have it in our pre-baked images, and then we use config management to build, run, pull, etc., our images and containers. In effect, we're treating Docker as a packaging format plus a service start and stop tool, which is useful in itself, and you haven't had to adopt the entire Docker ecosystem to start doing useful things. There's an interesting tool called ctop; it's fairly new, and it gives you a top-like interface into the containers running on a host. You run it and you see not just Docker containers; there's also LXC and the systemd containers, whatever they're calling those now. From a monitoring perspective, ctop is cool, but we actually want to monitor properly: how much CPU is being used, how much memory, across the entire system, but also per container. Now, Docker used to use LXC; it now uses libcontainer to access cgroups and namespaces, and those already put metrics into your system, but in awkward places under /sys/fs/cgroup/blah/blah/blah. They're kind of hard to find, and you've got to hunt them down and work out what belongs to which container based on IDs, which is not particularly fun. If you do feel like having that little adventure, you can use collectd or Sensu or something like that and build checks to go and look in those places; there are probably some community checks out there by now, so maybe someone's already done that work for you. As of Docker 1.5,
it actually has some fairly basic metrics that it can expose via the API, so you can also go and interrogate those from collectd or Sensu or something like that. But what I use is cAdvisor, the container advisor. It was written at Google for Kubernetes, but it works really well for any kind of Docker-based system. It has a web UI that looks a little something like this. It's per host: when you're looking at that web UI, you're only looking at that one host, and it has a REST API that you can grab metrics from. It can also send data up to InfluxDB and Prometheus; they probably need to bring in support for Graphite and some other backends. But by pushing out to InfluxDB, we can put Grafana in front of InfluxDB, and now we have all of our metrics going to one place and we can start building dashboards for all of the hosts we have. So we just run cAdvisor on every single host we run containers on, and bam, we're getting all the metrics for all of those containers: their CPU, memory, disk usage, all of that. This is how you run it. We have to bind-mount in a bunch of volumes, most of them read-only apart from /var/run, and we're publishing a port so that we can actually access it. If you're running this out in the cloud or something, you might want to only expose it to localhost and then do some sort of tunnel to access it, so you're not exposing your metrics to the entire world. And then on to logging, which is a similar need, much like monitoring. Before, we were working out how to log everything to standard out; now we're doing that.
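Going back a step, the cAdvisor run command from the slide is roughly this. The mounts and port are the commonly documented ones, but double-check them against the cAdvisor README:

```shell
# Run cAdvisor on each host; it reads cgroup and Docker state through
# these bind mounts (all read-only except /var/run)
docker run -d \
  --name cadvisor \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  -p 8080:8080 \
  google/cadvisor
```

On a public-facing host you would publish `-p 127.0.0.1:8080:8080` instead and tunnel in, as mentioned above, so the metrics are not exposed to the world.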
Now that we're doing that, we use Logspout, by Jeff Lindsay at Glider Labs, and it saves us a ton of work. What it does is bind to the Docker socket and watch the Docker log subsystem, and it ships those logs off via syslog to wherever you want. So this example, which is actually from a Chef cookbook, uses the Docker cookbook's lightweight resources to run a container, and you can see again I'm bind mounting in the Docker socket and giving it that command, syslog:// and so on, which in this case is sending the logs to Papertrail. Logspout has its entrypoint hard-coded, so it already knows to run the Logspout binary itself. And then I actually push most of my stuff out to an ELK stack, so now I've just got Elasticsearch/Logstash/Kibana to view all my logs, just like I do with all of my other infrastructure. So we're kind of equalizing: we're getting all of our logs for Docker the exact same way we're getting all of our logs for the rest of our systems, and any tool you have that can listen on syslog can now ingest these logs, so Splunk or whatever else. Then you have container management. With VMs we kind of had the problem of VM sprawl; that's magnified again with containers, right, because we can fit so much more into a single host. So instead of pets versus cattle, now we have cattle versus ants. A management tool that can help us visualize all of this stuff and figure out how many hosts are running Docker, what they're doing, how many containers are running, and how many images we have, suddenly starts to become very useful. And this is an example of one; it's called StackEngine. They're in beta right now and they're looking for customers, but I don't want to sell stuff for them.
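Going back to the Logspout setup for a moment: outside of Chef, the equivalent plain docker run is roughly the following. The `gliderlabs/logspout` image name is the usual one, but the syslog endpoint here is a placeholder, so substitute your own Papertrail (or other) host and port:

```shell
# Ship every container's stdout/stderr to a remote syslog endpoint.
# Logspout only needs the Docker socket; its entrypoint is baked in,
# so the syslog URL is the only argument.
docker run -d \
  --name=logspout \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logs.example.com:514   # placeholder; use your endpoint
```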
So I'll stop there. And then finally I want to go into some recommendations on things that I think we can do to help our organizations prepare to run Docker and to run OpenStack, depending on our maturity level. I don't know where all your organizations are at, so I'm kind of walking all the way through, and of course my first recommendation is: get some help. Obviously I made this slide at a higher resolution than the screen supports. What I mean by this is you want to concentrate your efforts on your own core competencies, the business differentiators and the stuff that makes money for your business. Anything else, you can go and get help with; don't try and do it all yourself, otherwise you're going to lose focus on the things that actually bring money and value into your organization. And I work at Blue Box, so of course I would tell you that you should use our private cloud as a service product. Don't change things that aren't causing pain. That should be a fairly obvious one. If you've got CI tooling that works really great, don't try and dockerize it; go and find something that isn't working so great and say, maybe Docker can help me with this. You've probably got a lot of legacy applications. I used to work at a very large organization that had like 15 years of legacy applications, and nobody knew what the hell any of them did or how they ran. Don't try and dockerize any of that; just put a fence around it. If you need to do something with it to save money, go and pay someone to put it on VMware or whatever, so you don't have to worry about it and you can focus on moving forwards. And then as you're looking at things, you really want to think about what you want to be doing in the next few years, but you also want to ask: what can I do now?
What's the low-hanging fruit? What things can I start moving in that direction? You don't want to build a big long plan that's so huge you can never quite execute on it; find the little chunks you can start working on right away. So obviously you want to build, or get help to build, OpenStack. I have an opinion that we should start small, starting with the basic building-block services, and then once we've got those figured out, go chase the more interesting ones, the weirder ones; there are so many now. And don't try to unify this with your legacy stuff. Keep that separation, so that you don't waste time trying to do the same thing across both; the legacy stuff isn't cloudy applications, so don't necessarily try to put it in OpenStack. Build up a good internal DevOps practice if you don't have one already. Ask for help, and by that I don't mean go and buy DevOps, because you can't really buy DevOps, though people will try to sell it to you. Go out and find your local DevOps community, get involved with them, find the people in your org that are already doing DevOps-y kinds of things, and nurture them. And also, obviously, you're going to be running OpenStack now, so start figuring out how to build cloudy apps, right? Start trying to figure out how to do the cattle thing, how to build resilient MySQL, Postgres, Elasticsearch. All those things are stateful, and it will probably be a long time before you want to even think about doing anything container-based with them.
I can't really stress enough how important DevOps is. If you don't have a good DevOps practice, you shouldn't even really be talking about cloud and Docker and stuff. You really need to get a good handle on DevOps and on doing things like config management and CI/CD. And going forward, for a very long time, you're going to need config management, because you need to manage your stateful applications, you need to manage your legacy applications, and you need to manage your persistent data in MySQL, Postgres, etc. So you want to avoid that conversation that's going on right now, Docker versus config management; both sides of that argument are straw men, and the real thing is you need both to build a good solid platform. Speaking of platforms: pick one and start playing with it right now. Don't think you have to instantly go to production; play with a few of them, get your developers playing with them as well as your operators, and see what they like and what they don't like. If I had to pick one right now, it would be Deis, and that's because it has a really good user experience, both from the developer's point of view, having that kind of private Heroku feel, and also from the operations point of view. It has a really good split of responsibilities: your operators run Deis, they run OpenStack, and they run your data persistence, so your DBAs and such run your MySQL or Postgres or whatever, and your devs just simply push applications into Deis. It's very much like Heroku, which is really, really cool. And that is kind of it. I wanted to end a little bit early for questions and stuff like that, so any questions, raise your hand and I'll see if I can answer them.
No questions? Yes? Yes, it is here right now, and it is in a private GitHub right now. Sometime next week I will get it onto the Docker registry or something, or give instructions on how to build it yourself, and get it put up wherever all the slides and stuff are being summarized for the event. Or follow me on Twitter and you'll find it there. So the question is about reconciling the config, having a base standard so you don't have all these crazy different containers all over the place. I kind of feel, and this is where it's really good to get your ops people involved, that you should build out a good base image. It doesn't matter if it's a fat image, because you only have it once on every system thanks to the layered filesystem. So take an Ubuntu 14.04 or Debian Jessie or CentOS or whatever your preferred OS is, put in a bunch of system tools that you think might be interesting, make that your base image, and then have all of your other images start from that. That also helps get your security people involved and so on, so everyone feels a little bit better about being involved in getting them built. Also, for the images that you build that you want to push to production: have them built by a CI system or something that's blessed. Don't just build them on a laptop and ship them off somewhere, because then no one has any insight into what's actually in there. All right, thanks everyone.