Okay, hi everybody. My name is Mike McGrath, and I'm here to talk to you today about containers, Atomic, and best practices. There have been a few Atomic talks already today, and I'm hoping this won't be a lot of repeat. I'll emphasize some of the important pieces for those that aren't that familiar, and I'll also try to emphasize the pieces that are kind of complicated and tricky to understand.

So, my background at Red Hat: I actually started as a volunteer in the Fedora community. I was the Fedora infrastructure lead back when dinosaurs still roamed the earth. Then Red Hat hired me to do that full time, because they figured that was probably a good idea. From there I went on to OpenShift, so I left Fedora to go to OpenShift, and then I left OpenShift to come work on Atomic, and now I'm here to talk to you about OpenShift and Fedora. You can't really escape your past, as it turns out.

The topics today: we'll go over the Atomic universe, meaning containers and all that stuff and kind of Red Hat's view of it. We've got a section on just Atomic and containers, to make sure we're all speaking the same language and understand what this stuff is. And then we're going to go over some deployment models and best practices. These are more high level, but they help you better understand what you can do with containers when you're actually using them in a production environment. Before I go on, just a quick show of hands: who here has ever tried Docker or PCP?

So let's look at the Atomic universe. We've got two communities right now, and we're looking to better merge them or align them. The beginning of the Atomic universe is actually OpenShift, v1 and v2, and v3, which shipped very recently, includes Docker and Kubernetes and all sorts of stuff. At the same time all of that was going on, we had this Atomic project going on in the background; I've got a timeline on that. I typically look at OpenShift as combining a lot of these components together for a full DevOps view of things, whereas Atomic includes several pieces, not just the containers; there are a lot of little tools that we'll go into. And a big part of the Atomic project is, of course, the atomic host.

Today Project Atomic produces three different varieties of host; four is actually kind of our target. One of them, obviously, is out in Fedora. It's produced with the Fedora team, and a lot of people work on that. We're working on two different releases in the CentOS world. One of them is exactly what you'd expect from CentOS: some sort of downstream rebuild of RHEL Atomic Host. We're also looking at one that moves a little more quickly, something a little more stable than Fedora but a little faster than RHEL Atomic Host. We're trying to get that to you because I think that's a compelling thing to be doing; Fedora tends to be a bit more leading edge than some people would like. And then obviously we've got RHEL Atomic Host. That's the thing that puts food on our table.
And so it is a fully supported product at this point.

On the OpenShift side, that tends to be our PaaS, and it's kind of the reason there's a lot of confusion, at least internally at Red Hat, about these things: there's not really an apples-to-apples comparison between what Project Atomic is doing and what OpenShift is doing. Sure, there are containers in both, and both use Docker and Kubernetes, but they are very different. OpenShift is Red Hat's full-on PaaS, and the OpenShift folks produce three different varieties right now. One is OpenShift Online, which you can just go to at www.openshift.com. If you have an email address, even some throwaway address you invent, you just log in, you get some apps, you can build them; it's a great little service. We have OpenShift Enterprise, which is more of a traditional Red Hat business model, where a customer can come in, download the bits, install them on their systems, and run them, nice and easy. And then there's the upstream community project, which is OpenShift Origin. That mostly exists out on GitHub, but it has all the pieces you'd expect if you wanted to build this on your own on something like Fedora or CentOS.

At Summit we announced a new product called Atomic Enterprise Platform, and it kind of sits in between these two pieces. The way I like to describe it: atomic hosts, I think most people get. It's a container-based operating system, and I'll talk more about the details of that, but it's got Docker, Kubernetes, and a few other things you need to get going with a deployment. Atomic Enterprise Platform takes those things and automates a lot of the pieces. It allows for a larger deployment of atomic hosts across several different systems, and it has all the stuff you need to tie things together with a whole bunch of automated pieces. The thing that goes further with OpenShift is that it adds the development features. So if you have developers that are going to be pushing to prod, or if you want to do a full DevOps model, we recommend you go straight to OpenShift; it's got all the build tools, integrated CI, all this other stuff. Over the next several slides I'm going to dig into each of these more closely.

So, atomic hosts. As I mentioned, all this stuff is very new, so it's understandable why there would be some confusion about it. We launched the project back in April of 2014, which is really not that long ago in the grand scheme of things. By the following December it had been put into Fedora 21, and the following March, just a few months ago, was when we GA'd the product. On a Red Hat timeline, that is lightning fast, though going through it, it felt very chaotic at the time. The big thing about an atomic host that's very different from a standard RHEL host is that it's got this core piece of technology called OSTree. And this is Colin, the guy who invented OSTree. Well, I thought he was supposed to be here. Has anybody seen Colin?
That's fine. So, a comment on OSTree: I asked Colin for a quote, and he was gracious enough to give me this one: "OSTree was born to help implement a continuous delivery model for operating systems. One can be a lot more confident about updating systems if one knows that a reliable rollback system is always available." And this is really the core feature of atomic hosts.

Just a quick show of hands: who here has installed, booted, or tried an atomic host? Okay, almost everybody, but not everybody, so I'll go over some of the differences here. This slide really just outlines it. You boot your atomic host, you log into it, and if you type yum update you're going to get "command not found." Yum is not even on there, and it's not because we replaced it with something new; it's just not on it. It's largely a read-only filesystem. You cannot add packages to it very easily, and a lot of things are just locked in. The whole point of the atomic host is that you're supposed to be running things in containers, so containers are how you'll bring down new features. And, as the team will be quick to point out, a lot of the advanced features you might need from an administrative point of view would be done in SPCs, super-privileged containers. But even with all that, you're still not going to be logging in and making a lot of changes to the underlying filesystem.

However, if you do want to do an upgrade or downgrade, you use atomic host upgrade, which will pull down everything you need and get the system ready, but then it'll sit there: you'll still be on the old version until you reboot into the new version. An atomic host rollback will then let you roll back to the previous version; again, a reboot is required. This is a drastically different way of managing a system, but if you look at the cattle-versus-pets argument, it's a really great cattle story. You can just boot up these new systems, bring them up and down, and you know what state they're in, because they're all identical minus the actual configuration pieces.
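To make that upgrade-and-rollback flow concrete, here's roughly what it looks like at the command line. This is just a sketch; the exact status output varies by release.

```sh
# Show the deployed trees; the currently booted one is marked.
atomic host status

# Stage the new version. Nothing changes until the next boot.
atomic host upgrade
systemctl reboot

# If the new version misbehaves, flip back to the previous tree.
# Again, a reboot is required for it to take effect.
atomic host rollback
systemctl reboot
```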
So, Atomic Enterprise Platform. When you want to install this and get several nodes going, from a usability point of view it has an oc client tool and a Cockpit tool, and both of those go through the API to contact an atomic master. The atomic master is a Kubernetes host. These typically run on their own, so it's got Kubernetes and etcd and whatever else you'd need to have in there, along with SkyDNS for service discovery, and it also has a few extra API pieces that we use for deployment and some of these other items. So typically you use Cockpit or the oc tools to contact your atomic master and tell it what to do. At that point the atomic master goes out to your farm of atomic nodes, and this would be the bulk of your environment if you're running anything significant: lots of atomic nodes running containers all over the place. On these atomic nodes we actually ship a few containers already that help integrate this environment.

So in an Atomic Enterprise Platform environment we have an HTTP router, which is based on HAProxy. We have an integrated Docker registry; obviously, if you have a large environment you don't want to constantly go out to a Red Hat registry or the Docker registry to download things, you may want a local mirror, and you can think of that integrated Docker registry as your local mirror. And then we have a special deployment config that allows you to do specialized deployment artifacts. A lot of this stuff is actually going to merge back into Kubernetes; we try to be good citizens, so that will all end up in Kubernetes, and you won't need these specialized containers for it anymore.

OpenShift adds a bit to that. There's an integrated UI with OpenShift that adds an application view of what you're trying to do, and it also adds some additional API calls to the master. The two big differences are that OpenShift has continuous integration built in, running inside the platform itself, and there's also a source-to-image builder. So if you're a developer and you have source code of some kind that you want built and tested, OpenShift has all the tools you need to do that out of the box, whereas Atomic Enterprise Platform kind of assumes someone else is doing that already: either your developers have created an image and sent it to you, or you're going to be building it somehow outside the system. Also, a lot of people already have CI, so they may not want to move to OpenShift unless theirs is becoming too complicated to maintain.

So before I move on: any questions, just generally, about what OpenShift targets, which is the DevOps stuff; what Atomic Enterprise Platform targets, which is container orchestration for traditional operators; or what an atomic host does? No questions? Okay.

So let's move on to Atomic and containers. This is some 101 stuff to make sure we're talking about the same things here; it should go pretty quick. When we containerize something, we typically take something like an application, throw it into a Docker container, and then run it, and it can be moved around this way or that. It's very simple. We typically talk about this in terms of microservices, meaning it's not generally a good practice to cram a whole large stack in there: your HTTP server and PHP, and you're also going to throw MySQL in there, and memcached. You don't tend to want a large monolithic container, because they don't scale very well and they're kind of unruly to work with. The idea with containers is that you can break things out into small pieces. There are a lot of good best practices around that; I think most people are familiar with them.

Then you go into an actual container build, which takes what that previous slide was doing and does it in some sort of reproducible fashion. You do that with a Dockerfile, and this is just a sample Dockerfile. If you have any experience building kickstart files or anything like that, building a Dockerfile uses that same part of your brain. One thing some people get confused about is that you can actually boot an image, make some changes to it, and run docker commit. I generally recommend against that: you lose the reproducibility. The analogy, for those of us that are packagers, would be to imagine if you could actually log into the build server while it's building, make some changes, and log out. You'd give Dennis a heart attack. So generally you just want to use the Dockerfile to build these images. I'll just point out what this one does: the base pulls a basic Fedora image, it runs yum and installs httpd and Ruby, I've added some local files, and I'm going to expose port 80. Nice and easy.
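A minimal sketch of a Dockerfile along those lines; the file paths and the CMD line are my assumptions rather than anything from the slide:

```dockerfile
# Start from the stock Fedora base image.
FROM fedora

# Pull in the web server and Ruby with yum.
RUN yum -y install httpd ruby && yum clean all

# Add some local files into the image (placeholder paths).
ADD index.html /var/www/html/index.html

# The container will serve HTTP on port 80.
EXPOSE 80

# Assumed entry point: run httpd in the foreground.
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```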
Then I'm going to build the image, which builds from Fedora, pulls the pieces down, and builds what it needs to, and I'm going to tag the image. So that's the basic workflow: I tag it and specify what registry it's going to go to, and that will throw it into an actual registry.

Which brings me to what a registry actually is. If you haven't looked at it, I think most of us have probably looked at docker.io; it's just a registry, and there are several of them. Red Hat has its own certified registry. You're also welcome to run your own private registry, which I suspect most people will be doing in any sort of enterprise environment. You'll take these images, hopefully certified images from Red Hat, but you can also use the docker.io images and try those as well. The Red Hat certified images are at least groomed and curated, and I think Fedora is going through a similar process for the Fedora images. The real trick is that with the upstream Docker images there's not really any certification process. They've got some official images and things, but it's pretty wild west out there, which is great: you can upload whatever images you want up there. But at the same time, that means you've got to be careful when you're consuming from that upstream place.

And so, once the image is in the registry, you can run docker run on any one of your atomic hosts, and it will pull the image down from your registry and go from there.
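Pulling those steps together, the whole loop looks something like this. It's a sketch: the image name and the registry hostname are placeholders.

```sh
# Build the image from the Dockerfile in the current directory.
docker build -t myapp .

# Tag it for the registry it should land in.
docker tag myapp registry.example.com/myapp:v1

# Push it up to that registry.
docker push registry.example.com/myapp:v1

# Then, on any atomic host, pull it down and run it.
docker run -d -p 80:80 registry.example.com/myapp:v1
```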
Now, reviewing this process: I've described how to build a Docker container, I've described how to upload it to a registry, and we've seen that docker run runs it. The next problem is what happens if this is highly successful and you're running tons of Docker containers. That's where Kubernetes comes in.

Kubernetes is an orchestration framework that allows you to describe an application and all of its micro-components, and to orchestrate several applications, each of which could have several different containers. You just tell it what you want, and it'll do its best to make sure the environment looks like that. I've got an example in here, but Kubernetes has some built-in routing, it has easy ways to expose these images, it has ways to auto-recover; it's got pretty much all the things you're going to need to run lots of containers in an individual environment. Kubernetes, if you're not familiar with it, was built by Google. Google has their level-seven wizards building Kubernetes, in Go, and it's under rapid development. Out of all the components we're going to talk about today, Kubernetes is the newest and, in my opinion, the most exciting part of this. It's going through rapid changes, but it's all very good stuff, and it's been very exciting to work with.

Just a quick example of what you can expect from Kubernetes after you've described your application to it: it has this whole health-check system. In Kubernetes, containers run in what are called pods. A pod could have several containers, but conceptually, if you're not familiar with it, just think of a pod as a container; that's fine. So if one of your pods goes bad, Kubernetes will notice, try to kill the pod, and bring it up in a new location. This is exactly what you would expect from a monitoring system; it's just built into Kubernetes. It's not like you have to have some separate monitoring system checking all your containers and trying to decide if they're up or not. Kubernetes is doing that for you.

And the nice thing is that when you set all that up, you've actually told Kubernetes, through a series of JSON files, what your application, or several applications, are supposed to look like. This is a snippet from a sample service that fronts the WordPress HTTP pods: you can see it's got the names in there, and it tells you what ports are going to be used. It's a very basic format that's really easy to control and manage with something like git. It's a really great little interface.
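I can't reproduce the exact slide, but a service definition of that shape looks roughly like this in the v1 JSON style; the names and labels are placeholders.

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "wordpress-http" },
  "spec": {
    "selector": { "app": "wordpress" },
    "ports": [
      { "name": "http", "protocol": "TCP", "port": 80, "targetPort": 80 }
    ]
  }
}
```

Kubernetes keeps the environment matching whatever definitions like this say, which is one reason they version-control so nicely.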
The next step, before I get too much further: I want to really nail home the link between the container workflow and the atomic workflow, how much they have in common, but also compare those workflows to the traditional yum workflow. If you look at an update or deployment scenario, which is what we're going to go through next, the basic idea is: how do you get from version one to version two?

So let's look at these deployment scenarios. Like I said, version one to version two, very simple. First, a traditional yum update scenario. You've got your set of servers, often some server farm in the middle of nowhere, and everything's running fine, and a developer comes up and says, "Hey, we've got some new code to deploy." With a yum update in a traditional environment, and you may have orchestration doing this, but somewhere in here, what's going to happen is you're going to run the update on your systems, maybe all at the same time, maybe one at a time, but it's going to look like this: you've got some server in transition, you run yum update, and boom, it's on version two, and then you keep going until the rest of your servers are on version two, and you're ready to go.

Now, this is the way I've done things for over a decade, and I think most people are familiar with this workflow. But the problem is what I call the traditional failure, and many a Christmas Eve has been ruined by these sorts of failures. The real issue is: what happens if you're in the middle of a deployment like this, and on one of the systems the yum update, or however your code is being deployed, fails? What do you do? Your options are not very good. Rolling back RPMs? I mean, the tooling kind of exists, but I've always considered that a pretty shady solution. Or what if you have to get a developer on the horn to say, "Hey, we're doing this update, and for whatever reason this one host still has old libraries on it and it's very confused; we don't know what to do, and now we're getting bad database calls," and all this other stuff? You're really in a bad situation if something goes wrong with the traditional update cycle.

So let's take a look at red-black deployments. These are also basically called blue-green; I think there may be some argument about the difference between the two, but I think they're about the same. I first started learning about red-black deployments from Netflix. This is how they do things, and it's very fancy. Let's say you've got your load balancer up and everything's running on version one, and your developer shows up and says, "I've got this great new code to deploy; please deploy it for me." This deployment works a lot differently, in that when you start bringing up version two, you bring up an identical environment on the side running version two, until it's entirely up and running. During this entire time, everything on the production side, your deployments and your load balancers, is still serving version one; your users are completely unaware that version two is out there anywhere. This allows you to have your QA go in and test, your developers test, and you can go through a whole lot of scenarios to make sure version two is ready to go. When everybody gives a thumbs-up, you flip the switch; now your users are using version two, you shut down version one, and you're good to go. This is a really great way of making sure the environment is going to function as well as it can, but there are still some issues.

One of them, for example: I've seen this done with DNS, or even with a really lazy or poorly managed load balancer. So let's say you do try to do this with DNS. You've got version one and version two up, you've done all your testing on version two, and you say, "Hey, we're ready to go." Hopefully someone the previous day has set the TTL down to 60 seconds, and for a little bit you'll serve both of them at the same time, and once that 60 seconds is up, the theory is that version one will no longer be used, because DNS is amazing. Except it's not; DNS is terrible. If you've ever been in a scenario where this was done, you've definitely seen that even after that 60 seconds, version one will continue to get served, sometimes for hours afterwards. One of the big reasons is that the way clients choose to implement DNS and name resolution is not as standard as I think we would all like. Some of them don't bother looking at the TTL; they don't look at the DNS entry again very often. So if you have some sort of mobile app contacting your application, it may very well not realize that the change has happened.
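If you do go the DNS route, it's worth checking what TTL resolvers are actually handing out before you rely on it. A quick sketch; the record name is a placeholder.

```sh
# The second field in each answer line is the TTL, in seconds, as seen
# by this resolver. Confirm it was really lowered before the flip.
dig +noall +answer app.example.com A

# Watch it count down to verify caches are honoring it.
watch -n 10 'dig +noall +answer app.example.com A'
```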
So this is just one of the pitfalls of a red-black deployment, but there are pros and cons. Some of the pros: this will catch production issues before the users ever hit them, which is always nice. There's no rollback required, really, because if version two is having any issues you just shut the thing down; version one was untouched the entire time. You're not going to be in an unknown state, with half your servers on version one, half on version two, and maybe a few that died halfway through in some completely unknown state.

There are a lot of cons, though. It's expensive, right? At some point in time you're going to have to double your infrastructure. You can do this during an off-time if you want, but you pretty much still have to plan for peak time, because if there's some sort of zero-day or emergency deployment while you're using the bulk of your environment, you either have to take some of it down and kick users off, or take the whole thing down and do the update that way, and that can take a long time. Monitoring is also very tricky in this scenario. Depending on how you're doing monitoring, you may have temporarily brought up double the number of hosts, with all their hostnames and things, and when you bring them down you have to remove them; depending on your monitoring solution, adding or removing hosts can be tricky, especially the removing part. There's the router flip: when you're on version one and you need to move to version two, making sure that actually functions right can be tricky depending on the router you're using, especially if you want some sort of bleed-over where customers aren't getting cut off mid-transaction. Also, depending on what sort of storage you're using: if you don't have a shared storage filesystem and you're using local cache or local storage on those systems, that can be very tricky too, especially if these are entirely different environments or entirely different hosts. And finding scale issues can be tricky. Your QA can do the best job they can, but if you bring the new version up and there's a slow algorithm of some kind, or you've forgotten some sort of index, it's going to be painful to track down, because it's suddenly getting the full load of the environment, and your options are basically that you might be able to flip back for a little bit.

So let's take a look at rolling deployments. This is what I have typically done in the past, at least before clouds. Rolling deployments are basically: you have your load balancer up and everything is serving fine, and your pesky developer comes in and says, "Hey, I have a new version." So you release the new version by taking one of the systems out of your load balancer and updating it, and now it's on version two, and you can move on to the next version-one system. You can even start doing more at a time, in some sort of ever-increasing number; if you have a huge environment of several thousand hosts, you go up to a hundred at a time, or two hundred at a time, until eventually everything is on version two.
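A rough sketch of one rolling pass. The lb-disable and lb-enable helpers are hypothetical stand-ins for whatever drain and enable mechanism your load balancer actually has.

```sh
# One rolling pass over the farm: drain, update, re-enable, repeat.
# lb-disable and lb-enable are hypothetical, not real commands.
for host in web01 web02 web03 web04; do
    lb-disable "$host"                  # stop sending new traffic to it
    ssh "$host" 'yum -y update myapp'   # or roll out a new container image
    lb-enable "$host"                   # put it back into rotation
done
```

The piece deliberately missing here is a real health check before re-enabling each host, which I'll come back to in a minute.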
Now, if you go back and look, this won't always work with a lot of applications, because at this point in my example you're certainly in an environment where some users are getting version two and some users are getting version one. For many types of updates that's totally fine, especially cosmetic updates and things like that. So there are certainly several good things about doing a rolling deployment, and there are a lot of cons as well.

Let's take a look at the pros. This is the cheapest way to do deployments: you don't need double your capacity at any point in time. You can also typically find performance issues early, because all these systems are getting a production-level load; if any of those early systems start falling over, you can halt the deployment and take a look at what's going on instead of continuing to move on. This is a pretty common workflow; I would bet most of us have tried something like this in the past. It works really well with local storage: if you have storage on the actual system, or some sort of shared environment, you can basically reuse the hosts that are there. And it's really great for gigantic environments where you need to do just a few at a time; obviously, I wouldn't expect Google to have a complete copy of their environment at any point in time.

There are still a lot of cons, though. I mentioned the unknown-state concerns. You've got situations where some users may be getting code from multiple versions of the environment. You can use sticky sessions for that if you want, but it's a consideration you need to look at. If you get partway through the deployment and decide it's not working out, then obviously you need to do a rollback of some kind, so you're going to have to go back, find the hosts that have been updated, and roll them back to version one. This also requires some sort of integrated health check on each of the individual hosts, which is always a good idea anyway, but you're going to want to make sure the health check works very well. For example, if this is a JBoss application, you can't just check that the TCP connection is open, because JBoss takes a little longer to actually deploy the application. You have to have some sort of health check that goes in and makes sure the node actually responds with 200s instead of errors. There are a lot of ways to do it, but it's something you've got to be careful with.

So: pros and cons for different applications, but two perfectly valid deployment scenarios.
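Here's a sketch of the kind of health check I mean for the JBoss case: poll for a real 200 rather than just an open port. The URL, port, and timeout are placeholders.

```sh
# Wait for the application itself to answer, not just the TCP port.
# curl -f treats HTTP 4xx/5xx responses as failures.
deadline=$(( $(date +%s) + 300 ))
until curl -fsS -o /dev/null "http://$host:8080/myapp/health"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "$host never became healthy; halting the rollout" >&2
        exit 1
    fi
    sleep 5
done
```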
So, the hard one: data migrations. I would say the data migration story has always been the hardest part for me, especially with OpenShift Online, so I can give you an example. When our team ran OpenShift Online, we would do new deployments; I think by the time I left we were doing about one and a half deployments a week, which is not the seven billion times Netflix deploys every day, but it's not bad for an enterprise company. The real trick with data migration is that if you have any sort of update that requires a schema change or a data migration or anything like that, there's a critical point in the upgrade where, once you start doing that migration, unless you have some sort of fallback plan, which is often very expensive to write if you're going to do some sort of de-migration, any errors you find mean you kind of just have to power through, and it's rough.

My recommendation, for those of you looking at this sort of scenario in your environment, is to decouple data-migration updates from regular updates. A lot of enterprises do updates once every three months, or every three years, whatever it actually is. If you have a data migration in there, I recommend decoupling that part of the migration and doing it separately. There are a lot of reasons for that. One is that data migrations can take a long time. Another is that it's easier to know when something goes wrong: if you've got 25 patches you're going to deploy and one of them ends up causing some sort of performance issue, which one was it, right? You typically want to break these up so you can find things like that. To help illustrate this, go back to the rolling-update scenario: if you have some of your systems on version two and some on version one and you have to shut down, well, which version of the schema is everything expecting? It can be kind of a pain to go through and figure that out.

So, I've got a few more things here, and then I want to open it up for questions; we'll see if there are a lot. We covered the Atomic and OpenShift ecosystem. I know this is still very confusing, and I hope I have eliminated some of that. OpenShift and Atomic are very coupled to each other because of the common container background, and I think OpenShift is now more considered part of the Project Atomic container ecosystem, so that's always good. We've covered a little bit about Docker, containers, and Kubernetes. I wanted to put a demo together, but I know better. There are also plenty of options out there.
There are a lot of videos out there, and hopefully those of us in this room have the wherewithal to go try this sort of thing. And we covered at least two deployment scenarios, red-black and rolling. There are several others you should go out and see, and compare them with what your applications look like to see which ones work for you.

But I do have four challenges for you, if you're just getting your feet wet and you're not sure where to start or what to do next. Challenge number one: I would encourage all of you to go back and find some critical component of your existing infrastructure, some small microservice, but something that's in production, something that's out there, and containerize that thing. Containerize it and try to get it out to production in that environment. You can leave both up if you want, behind a load balancer, with some of the traffic going to the containers. That's a good way to get your feet wet and get other people familiar with what's going on. It's also a very non-intimidating way to learn how your monitoring is going to work, or in most cases not work, with containers, and to find some of those policy-driven items that are going to need to change as you move to a containerized environment.

Challenge number two: once you have that small piece deployed, try to get enough deployed that you have an actual application on an atomic host. You can run Docker on any sort of Red Hat operating system, atomic host or otherwise, but try an atomic host this way. Then, once you have enough containers to warrant a full application, go try Kubernetes. You can actually get a supported version of Kubernetes with Atomic, of course, on RHEL, but if you're feeling up to it you can always try a community version and go that way. And once you have Kubernetes deployed, take the next step and look at Atomic Enterprise Platform or OpenShift.

Remember, if you're trying to figure out which one is right for you: if you have developers that are going to be pushing code to prod, or developers that you want to add CI and integrated builds and all the other stuff to, give OpenShift a try; that may be the first place you want to go look anyway, just in case. But if you have integrated CI already elsewhere in your environment, and you have a very strong line between your developers and your production people, or perhaps you're in the finance industry and it would be illegal to free your developers to push to prod, take a look at Atomic Enterprise Platform and see if it does what you need.

And with that, I'll stop and see if there are any questions from anyone. Yes? OpenShift Online? Okay, so OpenShift Online is still on v2, and we're working hard to get v3 out. If you're wondering why: in the past we had done OpenShift Online first, and that was where everything went. We flipped it for v3 because we had a lot of customers and people that wanted to get Docker environments out as fast as they could. And believe it or not, the hard parts of v3 for Online, after Enterprise, are things like integrating with our auth, like the single sign-on we have, and the big one is billing: we have a full-on credit-card billing provider, and that takes a lot longer to get ready. We didn't want to block the enterprise release on that, but I would expect that probably in the coming months we'll have something out.
There will be a beta that you can go out and try. The OpenShift Online team does have atomic enterprise hosts out there running in production, but they're not supporting the actual OpenShift install on them; they're more on things like the reverse proxy servers and some of those other dependency pieces. Yes?

[Question inaudible.]

Yeah, I think so. There's not a definitive guide or anything; if you're up to writing an O'Reilly book or a white paper, I'm sure they would be amenable to that. I think the issue we run into is that a lot of new or greenfield deployments are being done in Docker; it's much easier to start from scratch than it is to retrofit. Having said that, if you're just getting started and you're serious about using Docker containers, and there are actually a lot of really good reasons to move to Docker besides trying to utilize microservices, a monolithic container is not going to kill you. This has come up several times from customers in the education field, where you have a centralized IT department and several other ancillary groups that are kind of handing applications over. It's a lot easier to just consume that via a container if you can convince them to put it in one, even if it isn't a model microservice; it's a lot easier to pass a container around than a virtual machine, especially since sometimes you won't even have access into the virtual machine. But in terms of best practices, there is a little bit more you can do if you go look at pods. I touched on this briefly, but you are going to be in scenarios where, for example, you may want some sort of application server that needs to be co-located with, or at least have very fast access to, a memcached server; maybe you have twenty of each. With Kubernetes and pods, you can have multiple individual containers that exist on the same system, in that same pod. So there really are some exotic application scenarios you can look at.

[Question inaudible.]

Yeah, so here's a really good example of that. What's the difference between two really big bare-metal servers running a quote-unquote microservice, versus a hundred tiny ones spread across them? In terms of preventing the clusterfuck, I think a big part of it is properly controlling the application definitions at the Kubernetes layer; at a minimum, you have a history and you have some sort of control point there. Now, in terms of running lots of these little pods: the trick is, this isn't going to solve problems that other environments would have. If you've got a lot of bit rot in your virtual machine environment, for example, and you don't make any policy changes, you may end up in the same situation with containers, because containers don't solve those problems. But if you as an operator, or even the developers, if the developers own these application definitions, you should be okay with something like Kubernetes.
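Circling back to the pod co-location point for a second: a two-container pod of that shape would look roughly like this in the v1 JSON format. All the names and images are placeholders.

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": { "name": "app-with-cache", "labels": { "app": "myapp" } },
  "spec": {
    "containers": [
      {
        "name": "app-server",
        "image": "registry.example.com/myapp:v1",
        "ports": [ { "containerPort": 8080 } ]
      },
      {
        "name": "cache",
        "image": "memcached",
        "ports": [ { "containerPort": 11211 } ]
      }
    ]
  }
}
```

Both containers share the pod's network namespace, so the application server reaches memcached on localhost:11211 with no extra wiring.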
You can always destroy the application. It's not like these pods are going to get left over, or you're going to end up with orphaned virtual machines, so I think the tools themselves will help alleviate some of that, at least. If you delete your pods, it'll make sure it gets rid of all of them. Kubernetes does tend to be smart, but like I said, it's also the newest piece of this, so don't go expecting the world yet; it is at least supportable, though, and Red Hat has put its full GSS support behind it.

Yeah, so I'll give you an example. Zabbix was a big monitoring tool that we used in OpenShift Online. Typically, let's say you want to check and make sure that HTTP is running: you've got a ping to check, you've got two ports that you want to make sure are running, you want to make sure the SSL certificate is not expired, and you want to make sure it's responding within a certain amount of time. On a traditional virtual machine, that's all very simple; you just run a single command and you're good to go. Let's say, though, that you're on a Docker system or an atomic host. You don't even have HTTP on that host; you may not even have the tools to do a curl against it. They're just tools that might be missing, so at that point you have to go into the Docker container to get that information, which is kind of like running everything in [inaudible]. Okay, so I can conceptually understand that; now I've got the sort of Docker wrapper command that I need to monitor things. Well, depending on how you do this, if there's an API it reports back to: inside that Docker container, it doesn't necessarily know what its hostname is, so it's going to report back to the monitoring system with what it thinks its Docker container hostname is, and the monitoring system is not going to understand what that is at all. It's this kind of slew of very practical problems that you can work through, but if you don't know they're coming, that's going to add a lot of time to your first deployment while you learn all that stuff.

Yeah, and I think it's not going to be surprising, but if you go in treating Docker containers as though they are virtual machines, you're going to get tripped up. I actually like containers quite a bit. I've worked in environments in the past where I had virtual machines that were getting very old, with kernel vulnerabilities and everything else on them, but I couldn't log in to see what the heck was going on, and it was just a big mess. Whereas with containers, as root on the system we have full access to all the PIDs and things; I can get into your container at any time.
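As a sketch of what that "Docker wrapper command" style of check can look like: the container name and URL are placeholders, and this assumes curl exists inside the container.

```sh
# Run the check inside the container's own namespace, since the host
# may not carry the tools (or see the service) itself.
docker exec mycontainer curl -fsS http://localhost:80/ > /dev/null \
    && echo "mycontainer: httpd OK" \
    || echo "mycontainer: httpd FAILING"

# Root on the host can also inspect any container's processes directly.
docker top mycontainer
```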
It's it's actually From the operation side, you know Some it's less now, but the operators tend to be Nervous about changes like this, but this is a big a big benefit especially in large environments where you've got application code coming from all you can always go in S-trace and see what's in the container you can check out versions of things Any other questions, yes So, okay Okay, so I'm going to ignore the kernel part of that just because I Don't I think that I you may see them if there's demand for them But I think that the best practice is that if you have I think the best practices It's going to be if you have the application created properly that the loss of a host isn't matter very much And so you so the reboot is is an intentional part of that It's sort of a clean slate wipe whereas, you know with something like puppet or regular updates You've got these machines that could be you know The actual operating system may have caught from rail six to seven at some points got all this cruft left over Atomic and both you know if you're using the Docker file scenarios Both of those scenarios basically bring up a clean slate every time. I think that's that's a big feature It's a forcing function to do that. So in terms of actually not needing to reboot an atomic host I'm not aware of anyone looking for that Is that you're in a known state you're either in state one or state two The you're not somewhere in between that if you do online updates you're stuck It's simply the guarantees that And I think one one thing I didn't point out that I meant to In my notes, but then if you go back through all these scenarios where I did the rolling deployments and the Red and black deployments, I didn't actually ever really specify whether or not this was an atomic host or a docker container That was very intentional because this when you're using an atomic host or a docker container You can apply that same policy different commands obviously But the policy can be the same Which is a really powerful unit of containers and operating systems and it's Something I think that a lot of operators will like just because of what John just said You get that you get that good known state both at the operating system layer and at the container layer Okay, any other questions? Yep That's a good question I have had several I would call fireside chats with people that have large container deployments Obviously Google has a whole lot of container deployments. So if you find some Google employee talk to them about it in terms of good Case study actually, you know what I don't know that this is the case But if you go to open shift online, I know that our product managers are real big on getting User success stories for banks and banks. I would bet they're gonna have something there I don't know how technical it would be, but that's a good place to start Were these and this micron Betsy and all them right this is what they do in LXC or I Take a look at what's the right code as a craft Image garbage collection and other stuff Well All right, any others before I hold it out, I'll be around all week. So feel free to I love talking about this stuff academically and Pragmatically, so feel free to run me down calling an idiot or ask me a question. Thanks everybody