Okay, who had the flash drive for the talks? There was a gentleman with long hair here. No, no, I have the flash drive. I'd like to return it to the person who was asking for my slides. We'll just set it up here. Okay, I suppose I'll have to be here. We'll put it in the baggie. All right. I did that. No problem. Those slides will probably make about as much sense as IKEA instructions without the actual presentation, but I'm trying to get started. Is it time to start, or shall we wait another minute for people? Let's try. All right, good afternoon. Can everybody hear me all right? Generally, if I ask that question and people say no, it's probably because they have hearing problems. All right, my name is Joe Brockmeier, and we are going to be doing kind of a team effort here. We want to give an update on Project Atomic. I like to annoy people by making them raise their hands; for many of you, this may be the only exercise you're going to get, except for, you know, beer. So, how many of you actually know what Project Atomic is? All right. How many feel comfortable that if you were cornered in an elevator, you could explain it? Fewer. Okay. How many feel really weird about being cornered in an elevator? Okay. I work for Red Hat, big surprise in this group, right? I work on the Open Source and Standards team, and for the last couple of years I've worked on Project Atomic. I also manage the community team, and I participate in Fedora and CentOS a little bit. I have several co-presenters, and what we are going to do is sort of a tag-team thing. I'm going to give a high-level overview of Project Atomic, talk about some of the components that go into it and a little bit of the background, and then I'm going to hand it off to Tomas. He is going to give a deep dive into Docker, which I think many of you will be interested in, because it's easy to lose track of the new features that are slipping into Docker.
So he's going to talk about some of the later releases of Docker and the things that have come in there. Then I'm going to introduce Brian Exelbierd, who is going to give us a look into Nulecule, Atomic App, and the Atomic Developer Bundle. Then I'm going to introduce Stef Walter, who is going to give an overview of Cockpit. How many folks have heard of Cockpit? How many people have actually used it? Still a couple of people. Okay. It is some nifty, nifty stuff, and if you haven't used it, by the end of his presentation you're going to want to. And finally, I am going to hand it over to Josh Berkus, who is our new community lead for Project Atomic. I'm very excited that he's here, as are we all. He is going to talk a little bit about his background, his view of Project Atomic and the container ecosystem, and where he wants to go with things. All right. So all that is what I just said, in written form. Before we get started, if you can actually get on the Wi-Fi and get a decent signal, I'd like to encourage people to remind the rest of the world what they are missing. So I would encourage you all, through this presentation and the rest of the weekend, to tweet about DevConf.cz, if in fact you do have a Twitter account. If you don't, create one just so you can do this. Also, if you would like to keep up with Project Atomic, please follow @projectatomic. I would like to see at least 60 new followers by the end of this talk, assuming you're not already following Project Atomic. The conference Twitter is @devconf_cz, and the hashtag is #devconfcz. And while I'm doing all this promotion of the social medias, I do want to very quickly give a shout-out to the organizers of DevConf.cz. They have done an amazing job, as they continue to do every year.
And so if you see one of the organizers walking around, please be sure to thank them for all the hard work they put in. Also, several Atomic talks have already happened, but I want to point out some of the other ones that are coming up. Right after this, Tomas Kral is going to do a talk on how you can use Nulecule and Atomic App. Tomorrow, Stef is going to do a much deeper dive into Cockpit and talk about it in more detail. We are also going to have a talk about using Fedora Atomic for the Internet of Things, not as an Internet of Things device, but as a server to talk to Internet of Things devices. Also, an interesting talk from Yang, "Atomic with and without Atomic," which is about doing development and using the atomic command on non-Atomic hosts. Okay? I see somebody taking a picture; I'm going to hold that slide until he's done. All right. And another one. More massively Atomic talks: "Atomic Developer Bundle: Containerized Development Made Easy," tomorrow at 11:30. And "A Great Beard's Worst Nightmare," I take exception to this title, but "A Great Beard's Worst Nightmare: How Docker Containers Are Redefining the Linux OS." That is 9 o'clock Saturday, so those of you who are willing to be awake at 9 o'clock Saturday can be entertained by Daniel Riek. Okay? After all of that, do we all remember why we're here? We're going to talk about what Project Atomic is. I don't need to explain containers to anybody in the room, right? Anybody? Speak to me after class. Okay. So, Project Atomic 101. It is the upstream community for developing the tools and patterns for building Atomic hosts, and generally the entire ecosystem around Atomic. That has expanded immensely since we launched Atomic in 2014. When Red Hat came out with Atomic in April 2014, it seems so long ago now, we were still trying to figure out exactly what we were going to do with Atomic hosts.
We had a lot of ideas, but some of those ideas have already been chucked. For example, we were going to use geard out of the OpenShift ecosystem. Instead, a couple of months later, Kubernetes came out, and we went, that looks good and everybody's going to go in that direction; let's work with Google. And that's been immensely successful. A lot of things have changed over the last couple of years. The Atomic Developer Bundle was not a thing then; Nulecule and Atomic App were not a thing yet. So we have done a lot of work in the last two years or so. But basically, this is where we all come together to make this work happen. One thing that is important to remind people about when they talk about Atomic is that it is not a new Linux distribution. It is, on purpose, built out of components that we use for our other operating systems. Fedora Atomic Host comes out of Fedora, CentOS Atomic Host comes out of CentOS, and RHEL Atomic Host comes out of RHEL. The importance there is that we have a lot of folks using these things in the wild, CentOS, Fedora and RHEL, and we want to make sure that Atomic fits into their ecosystem or their data center very easily, so that they don't have to learn a bunch of new things on top of the new things they already have to learn with containers and Kubernetes. We want to make the transition easier. They get trusted software that's already been tested on other hosts. So that is very important. So why Atomic? We can already run containers on Fedora, CentOS or whatever. The main thing is that we want to provide immutable infrastructure, where people can run containers and not worry about the operating system below that layer. You should only care that Docker, Kubernetes and some of the other features, SELinux, systemd, are there to give you what you need to run containers successfully, but you shouldn't really care what version or anything else is going on under the hood.
You should just worry that your containers will work. That means not installing things on the host and treating your... I'm assuming everybody in the room knows the pets versus cattle metaphor? Okay. So we don't want people to treat the Atomic host as a pet; we want them to think of it as cattle. Some people object to the pets versus cattle metaphor, so in the U.S. I use the Scotch versus PBR metaphor. If you have a bottle of really good Scotch, you care if somebody drops that bottle. If you have a PBR, you don't care. So if you don't like pets versus cattle, just replace that with Scotch and PBR. So what does Project Atomic include, or what does an Atomic host include? You have a question? [Audience: What about CoreOS?] Yes, I'm aware of CoreOS. I won't do a comparison; they're both open source, more or less, so you can do your own comparison if you like. To be perfectly honest, I don't spend a lot of time watching what CoreOS is doing. [Audience: I thought you could sell Atomic host instead of CoreOS and say that CoreOS is missing some functionality.] That's generally not my style. I want to talk about what's good with Atomic, not so much bash anybody else. I'm sure that if you find somebody who does sales, they'll help you out with that; I'm on the open source side. I actually like the CoreOS guys, or at least the ones that I know, because we work together on things. That's what you do in open source. We use a lot of components that come out of CoreOS; they use components that we work on. We both fight the good fight trying to get things into Docker. So, no, I'm not going to spend a lot of time whacking them on the head. [Audience: You just said that you use some of the same implementation on Atomic and on CoreOS?] We use some of the same components, sure.
We have used some stuff that they have come up with. For me, that's enough. Good. Okay. So, some of the stuff that we have in Atomic. We've got rpm-ostree, which is sort of like git for operating systems; I'll talk about that very briefly. Colin gave a good talk on that earlier today, I believe. Sadly, my TARDIS is broken, so I can't send you back to that talk, but I believe talks are being recorded, and if so, you'll be able to watch it. There's the atomic command, /usr/bin/atomic, which is sort of the brainchild of Mr. Walsh back there, but by now a cast of hundreds or thousands, or at least 10 people, have put patches into it. There's Nulecule and Atomic App, and I'll let Brian describe those. Under Project Atomic on GitHub, we also have a repo for container best practices. We're trying to pull together documentation on what's the best thing to do when I am creating a container, because this is still a green field for a lot of people. What is the best way to put together a container? Should I put everything in one container? Should I have a container for every service? Where do I put a volume store? We want to answer those questions. There is the Docker work that we do upstream. Under Project Atomic, you'll find a Docker repository that includes some of the things we're trying to get upstream into Docker, which they may not have accepted yet. It also includes the things they have accepted, but you may find some patches and interesting work going on there that hasn't made it into Docker yet. There's Kubernetes work upstream; we work very closely with those folks, especially on the OpenShift side, getting things into Kubernetes upstream. The Atomic Developer Bundle is hosted on GitHub under Project Atomic. I'm probably missing a few things. One question people ask a lot is which one they should use. Should I use RHEL, Fedora, or CentOS? It depends on the pace you want to move at and your tolerance for risk.
If you would like support, that makes the question very easy: you want to go with RHEL, and you want to use your RHEL entitlements on Atomic. If you would like to track what RHEL is doing but you do not have RHEL entitlements, maybe you're just using it at home and don't want to pay for RHEL, or maybe your company doesn't subscribe to RHEL, then you want to follow CentOS. They have a regular rebuild of the RHEL Atomic host. And then Fedora is where we do development and try not to break things, but we try to move fast. We have a two-week release that comes out of Fedora, and I'll talk about that in a little bit. So what does an Atomic host actually provide? It is a streamlined host. We try not to provide anything you do not need to run containers on the host. We are trying to enforce the idea that if you want to run something, do it in a container, even if it means system tools: figure out a way to do super-privileged containers, put them on the host to do your debugging or whatever you need, and then pull them back off. We want the image to be as small as possible, and you will probably see a lot of work this year on trying to slim down that image. There's rpm-ostree, which provides the basis for shipping the image that you use; /usr/bin/atomic; Docker; and Kubernetes. I put an asterisk there because we are working on moving Kubernetes into a container, so you would not actually have Kubernetes shipped on the Atomic host; you would get a container with Kubernetes in it. But I don't think we have finished that work just yet, so if I'm not mistaken, if you pull down the Fedora Atomic host image today, you will still get Kubernetes installed on it. So I'm going to talk very briefly about /usr/bin/atomic. This is supposed to be a coherent entry point to the system, so we want to make it easy to manage the system using atomic, and to fill in some of the gaps in container implementations.
It also implements atomic run by looking at the label on a container. So if the person who created that container gave it a label with the information needed to run it, you don't have to tell somebody, well, to run this container you need this long string. You just do atomic run foo, and atomic will look at that container and say, okay, I need to mount these volumes, I need to give it this privilege, whatever, and start it up. The atomic host command can be used to do rpm-ostree updates. There are some new things in /usr/bin/atomic since this time last year. You can actually move a container from one sort of storage to another, which can be very useful. Atomic scan, I put an asterisk by that because it's in atomic but it isn't entirely working yet. But when it is, it's a way to use OpenSCAP to look at a container and see how many CVEs it may have against it, which is very useful. So you can actually look at a container and ask, okay, do I need to update this image to close a security hole? Atomic top lets you see containers and the processes that are running in them. And atomic diff will allow you to compare two containers or images. So for example, if you have two containers that started out the same, but somebody has added a couple of files to one image, you can see the difference between that and the stock image. And I see this gentleman raising his hand. Sure. [Audience: Does it migrate the volumes of a container as well?] I believe so. Dan, do you want to answer that? [Dan Walsh: Yeah, atomic migrate will allow you to, say you're on device mapper and you want to move to overlay...] [Audience: But all of that is on the same host?] Yeah, this is not CRIU or anything like that. CRIU might be needed; eventually we could do stuff there. Right now the main focus was storage. Did I see somebody else raising their hand? Or did that answer the same question?
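The label mechanism behind atomic run can be sketched like this. This is a minimal illustration, not the exact demo from the talk: the image name and label contents are hypothetical, and it assumes a host with Docker and the atomic CLI installed:

```shell
# Hypothetical Dockerfile for an image that tells `atomic run` how it
# wants to be started; atomic reads the RUN label at run time and
# substitutes $IMAGE and $NAME before executing it.
cat > Dockerfile <<'EOF'
FROM fedora
LABEL RUN="docker run -d --privileged -v /var/log:/var/log --name \$NAME \$IMAGE"
CMD ["sleep", "infinity"]
EOF

docker build -t my-labeled-app .

# Instead of remembering the long docker invocation, a user just runs:
atomic run my-labeled-app
# atomic executes the command stored in the RUN label, mounting the
# volumes and granting the privileges that label specifies.
```

The point of the design is that the image author, who knows what volumes and privileges the container needs, encodes that knowledge once, and consumers need only the image name.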
Okay. All right. And that's it for that slide. If you want to look into development there, or you have a patch or a suggestion, it's at projectatomic/atomic on GitHub. I want to talk about rpm-ostree very quickly. How many people have played with rpm-ostree at all? Not very many. Okay. It's really interesting. Basically, it gives you a read-only file system except for /var and /etc, where /home and so forth are mapped under /var. And /etc gets a three-way merge when you do an update, so if you change the configuration of something, getting newer stuff on the host is not going to just overwrite it. All the data, containers and so forth, are preserved on an update. But the nice thing about rpm-ostree is that it allows you to switch between references. So, for example, you can have an Atomic host or a development system on, say, Fedora 23, and you can have another ref tracking Fedora 23 testing, and you can switch back and forth between those on the same host, with the same containers and so forth. It makes that very easy. You can also roll back: if you have done an update and something breaks, you can actually roll back to a previous known-good version. In fact, outside the container world, rpm-ostree came from OSTree, which Colin Walters originally wrote for GNOME testing. I don't know how many of you have ever tried to compile GNOME from scratch, but it's not a lot of fun. He was working on a project, GNOME Continuous, that would allow developers to just follow a tree to do GNOME development, so they could switch back and forth between a stable system and a development system much more easily. Because we all know it's kind of a pain in the butt to maintain two or three systems to do your development; it's much nicer if you can just put it all on one laptop, right?
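The ref-switching and rollback workflow just described looks roughly like this on an Atomic host. This is a sketch: the remote and ref names are illustrative, and the exact refs depend on what your OSTree remote publishes:

```shell
# Show the currently booted tree and any other deployments on disk
rpm-ostree status

# Pull and apply the latest commit on the ref this host tracks;
# the update is staged as a new deployment, the running system is untouched
atomic host upgrade

# Switch the host to a different ref, e.g. a testing tree
# (remote and ref names here are examples, not canonical values)
rpm-ostree rebase fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host

# If the new tree misbehaves, return to the previous known-good deployment
rpm-ostree rollback
systemctl reboot   # the rollback takes effect on the next boot
```

Because deployments are whole trees rather than package-by-package changes, switching refs and rolling back are both atomic operations, which is where the project name comes from.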
Also, the Fedora Workstation folks are actually looking at an Atomic workstation right now, although I don't think they'll actually call it that in the end. That would use rpm-ostree for the base file system and then a different form of container for desktop apps. So the atomic update model makes more sense for an immutable system, but it still preserves the tooling. Almost inevitably when I do a talk about Atomic, there's somebody who wants to come talk to me about NixOS or some other packaging format, as if all Red Hat needs to do is adopt this completely different packaging format and everything will be great. Except our customers would kill us. The people who use all these systems and have legacy things would show up with pitchforks and torches, and they would be very, very unhappy. So this preserves the tooling we have built around RPM; it preserves all the work we've done there, but makes it better. Okay. Any questions on that? Some of the new things recently in rpm-ostree: we now have rpm-ostree deploy, which makes it easier to move between specific updates rather than just saying give me the latest. Colin is working on static deltas, making updates over the wire smaller, and also on package layering, because one of the problems with rpm-ostree is that it is an immutable system, so you don't have the option of installing a package. For the Atomic host server model, that's actually fine; that's kind of what we want. We don't really want people installing things on the host. But there are other reasons why people might want to install RPMs. There's the use case of installing RPMs to get hardware drivers out to people, where we may ship RHEL something-point-something and somebody still needs to install InfiniBand drivers or something like that.
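The two rpm-ostree features just mentioned, deploy and package layering, can be sketched as follows. This assumes a build of rpm-ostree recent enough to carry both features (early builds spelled the layering command pkg-add; newer ones accept install); the version string and package name are purely illustrative:

```shell
# Pin this host to a specific tree version instead of "whatever is latest";
# useful for keeping a fleet on one known-good snapshot
rpm-ostree deploy 23.42        # version identifier is an example

# Package layering: install an extra RPM on top of the immutable base,
# e.g. a hardware driver that the base tree does not carry
rpm-ostree install infiniband-diags

# Layered packages and pinned deployments both take effect on reboot,
# as a new deployment alongside the old one
systemctl reboot
```

Layered packages are reapplied on top of each subsequent tree update, so the host stays image-based while still accommodating the driver use case described above.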
I want to talk very briefly about how this gets built. It's interesting to work on Atomic because it's not straightforward; we don't just push a bunch of stuff into the sausage grinder and collect the sausage at the end. We work with a bunch of different groups to get the sausage. It's not the best metaphor; I'll work on that. But we have to do a lot of work with different groups, because we committed to not creating a fourth distribution in the Red Hat family. We talked about that; we decided what we should do is work through Fedora and work through CentOS to get these things out. The Fedora Cloud Working Group originally started well before Atomic was conceived or talked about, but over the last couple of years Atomic has become its focus. We think that's kind of where the future is going, so the Cloud Working Group is really focused on Atomic as the main thing we want to push out. Since last year we've added Vagrant images, which developers were begging us for. The releng folks, the testing folks, everybody has come together, and Adam's back there celebrating something else. Oh, okay. Yes, okay. Fair enough. But the releng folks and everybody have been very helpful in getting this stuff out, because another interesting thing about Atomic was that Koji and other tools were not originally built to push out rpm-ostree images or trees. They were built to push out ISO images, and then USB images, and then AMIs. So we have really made Dennis and some other folks work very hard to get things through, and they have risen to the occasion, and I'm very happy about all the folks who have come together on that. So we are now pushing out two-week releases. We did one or two before Christmas, and we have been hitting the deadlines consistently; I think we had one day of delay, but we've basically been hitting the two-week deadlines.
The idea is that every two weeks we take the latest known-good stuff and push it out the door. There's always the possibility that we'll get close to a two-week release and something in the chain will be broken, but luckily so far nothing has broken so badly that we haven't been able to get stuff out the door, and this gentleman has also been a great help there. So yes, more testing. I would love, and would be excited beyond belief, if everybody in this room would do Atomic testing, or testing anywhere in Fedora. Pick your favorite package, pick your favorite edition of Fedora, and participate. Download some of the testing images. Give feedback. Go to Bodhi, test the latest RPMs, and give +1s if they're not broken, because the world needs much, much more testing so that Fedora can be as awesome as humanly possible. You can't read all the blue text here, but this is cloud@lists.fedoraproject.org, so you can sign up there. In general terms of mailing list traffic, that list is a little bit chatty, but it's not an avalanche of email; it's not as bad as fedora-devel or LKML or any of those lists. You can also come find us in #fedora-cloud on IRC, on Freenode, if you would like to help with testing or have questions or whatever. And just for fun, I thought I would put in a diagram of our two-week process, which is a little bit involved. You can look at it in more detail later when you get the slides off of, you know, SlideShare or the DevConf website or wherever we put them in the end, but basically the idea is that we build, test, and present, and we also had some good work from the Fedora web team. Yeah? Sorry, a little louder. [Audience: On Sunday, Ralph and myself are going to dive into how that actually works.] Okay, excellent. What's the name of your talk, and when? Filled with confidence we are now. Well, Ralph named it. I'm trying to find the schedule.
"State of Fedora Infrastructure," 10:40. Here we go. And that will be a very interesting talk, and I say that seriously: the stuff that the infrastructure group and releng do to take something from source and walk it all the way out the door to a finished RPM or a finished image requires an amazing amount of work and coordination, so you should definitely go to his talk. All right, I want to talk very briefly about the CentOS Atomic SIG. Again, we're not creating a new distribution; we're working upstream, or sidestream, or however you want to describe it. It's actually a very weird situation with CentOS right now, because we can't really say they're upstream or downstream. What happens is that we do some of the work in Fedora and some of the work in RHEL, that eventually winds its way into a RHEL release, then the source gets released, and then things come back into the other projects. It's a very weird circle of source. Anyway, right now we are pushing out monthly images. I believe a new image based on RHEL 7.2-ish came out on Monday or Tuesday of this week, and you can find it at cloud.centos.org. If anybody can't find the image, just shout at me on Twitter, I'm @jzb, and I will find it for you and send you a link. We do AMIs; we do ISO images, so you can do bare-metal installs; qcow2s for those of you who would like to run this with either libvirt or on OpenStack; and we do Vagrant boxes there as well. New chair: I started the SIG and worked with Jason Brooks and some other folks, but in the intervening time Jason Brooks has really stepped up. He's been spending a lot of time working with KB to make sure these builds happen, because CentOS Atomic Host is a little bit different from the standard images, so there's usually some patching to Anaconda and other things that need to happen for it to make it through the CentOS build process. So he worked with KB to walk those patches through,
test everything, and get the images built. So if you are using CentOS Atomic Host, give Jason Brooks a big hand. We do weekly meetings, Thursdays at, I think, 1600 UTC. I need to get back to announcing those on a regular basis, modulo traveling and holidays; we didn't do any meetings over the holiday shutdown, and we haven't really been doing meetings these last two weeks because of DevConf. We are planning to do four-week releases out of CentOS, which will actually be a little bit offset from what they're releasing in RHEL; we want to get to the point where we're doing some testing of things a little closer to RHEL in CentOS. All right, that concludes my section, almost exactly at 30 minutes. I wanted to leave enough time for everybody else to do their demos and so forth. So now it is time for a Docker deep dive, and I'd like to introduce Tomas. Oh, and you have a question? All right, you have 57 seconds. [A brief exchange about cables and inputs while Tomas connects his laptop over HDMI; there's a stylus, and styluses make everything better. Just a moment, folks, sorry for the delay.] So, hello everyone, I'm Tomas Tomecek, and I'd like to share some cool new features in Docker Engine. The reason I'm doing this, as Joe already said, is that it's really easy to lose track of what's new in Docker, because it releases almost every three months and they put a lot of new features in
there. For example, I lost track badly, and when I prepared this presentation I discovered new features, and I'm really excited about them. Okay, let's start. First, I'm going to demo on Docker 1.10, which was actually released today, but I'm doing the demo on the RC release because I haven't tried the whole demo with the final release and I don't want it to break, so I hope it will work. And remember, the whole demo is open source; you can pull it up right now and go over it yourself. I'll show you where it is: it's on my GitHub, TomasTomecek, the 2016 Atomic workshop repo. If you run into some issues, just open an issue, or even open a pull request. Okay, let's start. I will cover Docker 1.8, Docker 1.9 and Docker 1.10, and I will show only the most impactful features. I won't go over the whole changelog and show you that there is a new option for this command and that, because that's boring. I'll use the GitHub repo for the slides, and I will show everything on the terminal. So let's move the terminal onto the screen. Is it readable? [Audience: Maybe. Is it possible to make it a white background?] Uh-huh, okay, let's try with the colors. The thing is that I have all my shell tools optimized for Solarized, so if I use a different scheme it won't be usable, sorry. Okay, so let's go to the slides and start. Here's some pre-demo stuff for me, to actually run some containers in the background so we can see some more interesting output. Let's start with Docker 1.8. The thing is that there were not many features visible in the front end in Docker 1.8, because they were working mostly on the back end: they wrote tooling for plugins, and they were also rewriting the back end. To me, the most impactful changes were docker cp and docker daemon. So let's start with cp. What this command does is allow you to copy files between your file system and
containers, and vice versa. You couldn't do this before; the only way was to actually mount the container somewhere and copy the files yourself, and if your Docker engine was on a different host, you had to actually SSH there to do it. So let's see how it works. I will create a new container; the container will be named banana. Yeah, it's after Minions, actually. Uh-huh, there is already a banana, so docker rm... yeah, I was actually trying the demo earlier. Okay, so right now it's running in the background, and we can try to copy some files to it. Let's do docker cp. I'm in my GitHub repo, so I can copy, for example, LICENSE. It's completing the container name, so: copy LICENSE to banana, into the root folder. It's done. Let's see: docker attach banana. So we are in our container, and if I do ls on the root file system, we can see that LICENSE is actually in there, which is really nice. Now I can copy it back. I will split my terminal, so we are back in the GitHub repo, and I can do rm LICENSE; when I do git status, it says that I really removed the LICENSE. Then I can do docker cp from banana, copy LICENSE, and copy it here, and if I do git status again, we can see that LICENSE is really back. You can also copy whole folders. The cp command doesn't have any options; it's just a plain copy. So that's it; let's move on. The other thing is that Docker changed the way the Docker daemon, or the Docker engine, is run. Previously it used to run with -d, and the option was available on the docker command itself. Now they created a new command for that, and the reason this is interesting is that all the options for the daemon have moved to the daemon command. So if you used to go to man docker to get information about the daemon options, they are no longer
there. I can show you that. Okay, I will close this one and this one. If I do man docker right now, it's actually showing the man page for the daemon. Interesting. Okay, if I do docker help, it tells me all the commands and just the options for docker as a whole, and if I do docker daemon --help, it tells me all the options for the daemon command. This is a big change, because I've seen numerous issues like, hey, you removed all the daemon options, are you insane? And no, they were just moved. Okay, so that's Docker 1.8. That one was released about half a year ago, and the most recent release until yesterday was Docker 1.9, so let's go over that one. The first really big thing there is the new command called network. You are now able to manage your networks. Before that, there was just one default bridge network, which Docker created, and you couldn't do much about it. Now you can actually create your own networks, and you can write your own network drivers to have multi-host networking and that kind of stuff. With Docker there are just two drivers right now: there is the bridge driver, which is the way Docker worked before, and there is an overlay driver, which allows multi-host networking. But I'm not going to demo that one, because it's complicated to set up, it would take a lot of time, and in the end it would probably not work. So let's play with bridge networking. Okay, first let's see what networks are there: docker network ls, and I can see all my networks. You can see there is the host network, which is the network you use if you don't want a network namespace, so it won't create the whole TCP/IP stack and the container will actually run on your host's network. Then there is the none network, if you don't want networking for your containers. There is the
default bridge network, which Docker uses when you run containers normally. And this one, kiwi, is a network I created earlier. Okay, so I'm going to create a new network, and it will be named fruits, using the bridge driver. Let's look at it: `docker network inspect fruits`. Yeah, it's there. You can see it has its own subnet, it uses the bridge driver, and it has a unique ID you can use whenever you need to identify it, plus, of course, the name I gave it; I think the name is optional.

Okay, let's create some containers and add them to the network. I have a networking image prepared; I'll show you that it really is there, the whole build is cached. It's the image for my networking demo, and the Dockerfile is really simple: the only thing it does is run an httpd server on port 8000, and it also installs some networking tools, so we can do ping and ip and that kind of stuff. Okay, it's built, let's run it. Yeah, when I was preparing for the demo I forgot to remove the old container, sorry about that. And now it's running: we're serving httpd on port 8000. I'll split my terminal and check that it works: `docker exec -ti orange`, so I'm inside the container, and I'll try to contact the httpd server. I do `curl --head`, because we don't want all the HTML stuff, against port 8000, and you can see it's working. Okay, really great, really easy. But it's not on the fruits network yet, so let's connect it: this command takes the container and puts it into the fruits network, so it's reachable within that network. Yeah, it's orange. Someone in the audience: "Okay, I'm kind of lost, you didn't start pomelo."
"Excuse me, I don't think you started pomelo." Pomelo doesn't exist yet; this one is orange. "Yeah, so you need to go back and start the pomelo container." Right, okay. So I'll start another container and try to reach the orange container from the new one, within the network. Alright, the new one is already on a network. Now, if I try to access the previous container from here, it won't work, because that one is on the fruits network and this one is not. But first I need to figure out the IP address, so let's split these terminals: `docker network inspect fruits`, and there is the orange container running there, under this IP address. Okay, we have the IP, so let's try `curl --head` against it... and obviously it doesn't work, because they're not on the same network. So let's move the pomelo container to the fruits network. I'll split my terminal and do that; I'll just copy the command from up here. Okay, the container is in there. I'll close this terminal and try again... hmm, it's trying HTTPS, I guess, so if I force HTTP... there, it works, you can see it's connecting. So this way you can run your services or your development setups on their own networks, and you don't need to worry about someone accessing your containers from places you don't want.

Okay, so that's networking. Any questions? "How come the zeros work, like a loopback?" This one? Yeah, that's binding to all interfaces, so if any of them works, you're in; you're correct, it's a dirty trick. One more thing on this: there are also multiple network drivers beyond bridge and overlay, and the community is already creating more of them, so if you need more sophisticated networking, it's available.
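The whole networking exchange above boils down to a few commands; here is a session sketch (demo-httpd stands in for Tomas's networking image, which is not named in the talk):

```shell
# create a user-defined bridge network named fruits
$ docker network create -d bridge fruits

# start a container on the default bridge...
$ docker run -d --name orange demo-httpd

# ...and attach it to fruits while it is running
$ docker network connect fruits orange

# or join the network directly at start time
$ docker run -d --name pomelo --net=fruits demo-httpd

# inspect lists the member containers and their IP addresses
$ docker network inspect fruits
```

Only containers on the same network can reach each other, which is the isolation property demonstrated with the failed curl.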
Okay, let's move on to volumes. Previously there was no volume management: you just declared a volume in the Dockerfile, or you mounted something into the container, and it sort of worked, but you couldn't manage it, you didn't know where it was, that kind of stuff. Now there is proper volume management: you can see what volumes exist, where they're located, you can remove them, create new ones, attach them to containers, and so on. This is very neat. So let me close this one, and this one too, and clear the terminal. `docker volume ls`: yeah, I already have plenty of volumes, because I play with this stuff a lot. These were probably generated by Docker, and this one was created by me. Okay, let's create a volume named mango. It's there. `docker volume inspect mango`: yeah, it's in there, and the data lives at this path, so I can go over there on the host and look. I can run a container, `fedora bash`, and mount the mango volume inside: `-v mango:/mango`, mounting it at /mango. Now when I do `ll /mango` I can see it works, and I can even put some files in there, obviously; some file is in there. And if I split my terminal and look for the file on my host system... I can't access it directly, of course, because that would be really insecure; yes, it needs sudo. And the file is in there.

What I kind of miss in the volume command is that you can't see what's inside a volume through the API. If I close this and do inspect on the mango volume again, it doesn't say anything about the directory itself: what files are there, how much space they take, that kind of stuff. And I can't access the contents either; I'd need to create a container or something like that. I guess this will be improved in the future.
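The volume workflow Tomas just walked through, as a session sketch (assumes a running daemon and a local fedora image; `--name` is the Docker 1.9-era syntax for naming a volume):

```shell
# create a named volume and look at it
$ docker volume create --name mango
$ docker volume inspect mango        # shows the driver and the Mountpoint on the host

# mount it into a container at /mango
$ docker run -it -v mango:/mango fedora bash

# list and clean up
$ docker volume ls
$ docker volume rm mango
```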
"Is there any support in the Dockerfile to create these volumes?" Yes, you can use the VOLUME instruction, and it will create a volume; you'll then see it in the list, but I believe you can't name it, so it will just have the default ugly name. Also, when creating volumes you can specify a volume driver, so you can have the volume on Gluster or something like that; that's configurable too, and there are plenty of volume plugins, so the directory doesn't have to be stored on your host, it can live somewhere else.

Okay, so, build arguments. Build arguments are a really neat feature, because they let you parametrize your build: you can change some arguments when building an image, and it affects the build. For example, and this is what Docker suggests in their documentation, you can specify a proxy, which is then available to your build, and for another build you can pass a different proxy. The issue with it, for me, is that the content of the argument ends up in the final image: it's visible in `docker history`. So if you pass in, I don't know, passwords or something like that via the arguments, they leak into the final image, which is not really secure. There's actually an open issue about exactly that, "I specify my proxy with a username and password and it can be seen in the final image and I don't want that", but the issue was closed. Let me show you. I'll close the bottom terminal and run it. In my Dockerfile there's a new instruction called ARG, and I've named my argument fruit. That's the Dockerfile, and during the build I tell Docker to set the variable fruit to watermelon, and it does exactly that: for the rest of the build, after that instruction, there's an environment variable named fruit available, and its content is watermelon. Ah, it's using the cache.
So let's do it without the cache, so you can actually see the output. Okay, now it's registering the argument... and now you can see that the content of the variable was actually set. So this is a really neat way to pass arguments into your build process. That's build args.

Next, concurrent image pulls. Before, if you pulled an image in one terminal and started the same pull in another terminal, it used to say the image was already being pulled, or it would even crash or something like that. They completely rewrote that part: now, when you pull an image in one terminal and do the same in another, the second one just connects to the daemon, and Docker streams the progress to that terminal as well. So I can do `docker pull projectatomic/atomicapp` in this terminal, and if I do the same over here, it won't start a new pull, it will just stream the progress. And if I cancel the later one, the pull keeps going. I think this is a very nice improvement, and this part was rewritten again in 1.10. Okay, I'll kill it. "You said it was reverted?" Rewritten, sorry, rewritten.
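The build-argument mechanics from the demo can be sketched as a Dockerfile plus a build command (fruit and watermelon are the names used in the talk; the default value is illustrative):

```dockerfile
FROM fedora
# declare a build-time argument, optionally with a default value
ARG fruit=apple
# after the ARG line, $fruit is available to the rest of the build
RUN echo "building with fruit=$fruit"
```

Built with `docker build --build-arg fruit=watermelon .`, the RUN step sees the overridden value; note Tomas's caveat that the value remains visible afterwards in `docker history`.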
Yeah, so the next thing is a new instruction in the Dockerfile called STOPSIGNAL. It's basically what it says: you can tell Docker which signal to use to stop your container. By default, when you do `docker stop`, it sends SIGTERM, and if your application hasn't exited after ten seconds, it sends SIGKILL, which wipes it out completely. With this instruction, you can tell Docker how to do it differently. I think I'm running out of time, so I'll skip the demo, it's really trivial, if you don't mind. You can also set the stop signal when you actually launch the container; it's not on `docker stop`, it's on `docker run`, actually: there you can specify what signal should stop your container.

Okay, `docker stats`. This is actually a really neat thing: you can see statistics for your containers. I'm running several containers, and you can see the resources they're using. This is done via two API calls: one lists all the processes within your container, the other sends you data about resource usage and that kind of stuff. So if you want to build an application on top of it, you can do that easily.

And now we're at 1.10, which, as I said, was released today. For me, I think it was one of the biggest releases, because they added an almost impossible amount of code. They rewrote the whole backend, which means the content of /var/lib/docker changed completely; it's a whole new layout. Which means that when you upgrade to 1.10, it can take many minutes to even start the Docker engine, because it needs to reindex all the files and build the new layout. So it might be a good idea, before upgrading to 1.10, to migrate the content of /var/lib/docker first and only then start the daemon; that's their preferred solution. My preferred solution is to just remove /var/lib/docker and start over, or move it aside, so you skip all that.
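The STOPSIGNAL instruction Tomas skipped is a one-liner; a minimal sketch (the signal choice is illustrative):

```dockerfile
FROM fedora
# have `docker stop` send SIGUSR1 instead of the default SIGTERM
STOPSIGNAL SIGUSR1
CMD ["sleep", "infinity"]
```

The same thing can be set per container at launch with `docker run --stop-signal=SIGUSR1 ...`, as he mentions.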
So the first thing in 1.10 is that they added seccomp support: with 1.10, containers run with a default seccomp profile. Seccomp is a technology for disabling syscalls in your container. For example, you could deny read() for all your containers, which you obviously don't want to do, but you can disable syscalls that tend to have vulnerabilities, or deny some of your containers access to more privileged syscalls. Okay, so I've prepared a simple demo. I have created a new policy, and I'm going to deny three syscalls in my container: getcwd, chown, and chmod. So let's do that. You enable it with the seccomp security option and a path to the profile, and Docker also has its own default profile, where it disables some syscalls; you can see it at this link. Okay, let's start the container, and you can see it's already somewhat confused that it can't do getcwd and that kind of stuff. For example, if I try `pwd`... okay, sorry... it even says the operation is not permitted, just as the policy I was showing you said. There are multiple ways to handle a process that makes a denied syscall: I set it to deny with an error, and there's also kill, which actually kills the process. And just for fun, I created another policy, which I call the hardcore policy, and it really does deny read(): if I try to run that container, it won't even start, because it can't read anything. I can show you the policy; yeah, so you can disable read(), and then you can't even start the container. That was just for fun.

Okay, we're almost finishing up. The next thing is that in 1.10 user namespaces escaped from experimental and are now stable, so you can run your local daemon with user namespaces enabled. User namespaces were the last namespaces that had not yet been implemented in Docker.
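A deny-list profile along the lines of the one just demoed might look like this; a sketch in the 1.10-era profile format (ERRNO makes the listed syscalls fail with "operation not permitted", while KILL would terminate the process instead):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "name": "getcwd", "action": "SCMP_ACT_ERRNO" },
    { "name": "chown",  "action": "SCMP_ACT_ERRNO" },
    { "name": "chmod",  "action": "SCMP_ACT_ERRNO" }
  ]
}
```

It would be loaded per container with something like `docker run --security-opt seccomp:profile.json ...`.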
Now they're in, and what they do is map the users used inside the container to different users on the host. So you can have root inside the container, the container thinks it's root, so it can install packages and do privileged operations, but on the host it's just a different, unprivileged user. You set the mapping in /etc/subuid: for example, here Docker will create a new user called dockremap, which gets this UID range on the host; inside the container it will be root, so root inside the container acts as this host UID. It's a little messy. On Fedora these files are not there by default, so you need to create them yourself; that's what these two error messages are about, Docker couldn't access them, so I had to create them. The way you use user namespaces with Docker is to run the daemon with this option. Okay, so I have a root shell here, and here's my super long command line for running Docker; I've added the user-namespace remap option, and now I restart Docker. Yeah, I guess it's trying to kill all my containers... and okay, it's working. So now Docker is running with user namespaces, and let's try it. I'll do something like this: I mount the host filesystem inside the container, available under /host. If I do `ll /host`, yeah, it's really there, and you can see it's owned by a different UID; that's my root from the host. And if I try to see the content of /host/etc/shadow, which holds my passwords, it's permission denied, because I'm a different user. Without user namespaces this would actually have worked, and I could have seen my shadow file.

Okay, we're almost finishing up, just two quick items. In 1.10 you can actually reload the configuration of your Docker daemon without restarting it: you just send SIGHUP to the daemon. I'm not going to demo it, because the person who wrote the patch created a very nice GIF for it.
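As for the mapping files Tomas had to create on Fedora: /etc/subuid (and /etc/subgid, with the same shape) is one line per user. A sketch using the default dockremap user:

```
# user : first subordinate ID on the host : count of IDs
dockremap:100000:65536
```

With that in place, root inside a container is UID 100000 on the host; the daemon is started with something like `docker daemon --userns-remap=default`.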
In the GIF, he's editing the config here, and here is the Docker daemon running; now he's about to send the SIGHUP to the daemon, and over here you can see that it just reloaded, and he didn't have to restart it. Unfortunately it won't reload the whole configuration, just some of the options; you can read more, it's written up in here. And the final thing is that Dan Walsh got his tmpfs patch accepted, so you can create mount points within the container that are tmpfs, and you can finally do fully read-only containers, as he was saying during his talk. You can read more about this in this blog post. So that's all from me. If you have any questions I can answer them, and if you don't, I'll pass the word on. Thank you.

Thank you, Tomas. While I play the projector game: I'm Brian Exelbierd. I also have the privilege of working on Project Atomic, and I'm putting it that way because it annoys me. I work mostly on the ADB, but I'm not going to start out by talking about that. Magic, magic. Okay. I want to briefly introduce the ideas of Nulecule, Atomic App, and the Atomic Developer Bundle. I say briefly because there are talks on all of these things, done by fantastic people, one of whom is me, and you'll see all of the talks I'm setting up. Nulecule and Atomic App, specifically, are going to be covered in this very room in the very next session, in a workshop format, so I would strongly, strongly encourage you to go to Tomas's workshop; I'll be hanging out in the back making noise, because that's what I do. Very briefly: Nulecule is a specification for describing multi-container applications. All it is is a specification; there is no magic here, just a whole lot of words in a spec, there are lots of fun things in it, and you can extend it. Atomic App, on the other hand, is an implementation of that specification, and so it is one of the ways you can think about using it.
It is currently the only implementation of the specification, but there can be more, and I challenge all of you to make more. So I'll show you rather than tell you about it, and also I don't want to steal all of Tomas's thunder. Let's see here... you all can't see my screen, that's not good... ah, there's the screen, and can I see my screen? No. Okay, we'll worry about that later. And no, I'm not running ash, I'm running bash.

So, very briefly, let's say you have a single-container application. Let's make this easy: you just want to run, say, centos/httpd, because you need a web server and you really like the Apache test page. We could all run `docker run` with lots of fun stuff, `-d -p` blah blah blah. That's pretty easy, that's pretty basic, we're not doing that talk. But you want to try it out on Kubernetes. Kubernetes is an orchestrator; it's going to be awesome, you're going to be able to run httpd on Kubernetes and tell all your friends. Kubernetes has this concept of pods, which I'll talk about very briefly in just a second, but first I just want to prove there's nothing up my sleeve: I have no pods, I have no replication controllers, and I have no services. Well, I have one service, because it's Kubernetes, but ignore that service, it is not here. So let's figure out how to do this. Theoretically I can remember how: I give it a name, we'll call it bex-httpd, I use the image centos/httpd, and it's got port 80. Okay, I have a pod. I can prove this to you by hitting up-arrow a whole bunch of times: I have a pod. What I have done is create a pod, which is a group of one or more containers that hang together in Kubernetes, in this case with only one friend, so there's only one container, and it's running on a Kubernetes cluster. It actually even is running. I used to have more minions than I have now, but they all went on vacation this morning and I don't know where they went, so we only have one.
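The pod creation Brian ran can be sketched with the kubectl of that era (flags are illustrative of the demo, not a transcript of his exact command line):

```shell
# create a pod running centos/httpd, exposing port 80
$ kubectl run bex-httpd --image=centos/httpd --port=80

# verify: the pod is there, and so is a replication controller
$ kubectl get pods
$ kubectl get rc
```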
It also got a replication controller, which is going to make sure that pod stays running: I want one copy, and I get that stuff for free. Small problem: I can't actually get to this web page. It's running somewhere in the flannel overlay, and I can't get there from here. So I need a service. Now, that one command was pretty easy; I could figure out how to do it in Ansible, and I know nothing about Ansible. But a service, which is basically something hanging on a routable IP that knows how to route back to this pod, is needed in order to reach it. So I went and ran this kubectl thing, and there's no service option. Then I discovered that you have to build a piece of JSON; here, I'll clear the screen and put it at the top. You have to define it in JSON. This defines a service called bex-httpd, because I'm not creative, which maps a public port 8765 to the web server's port 80. 8765 is also not creative, because I copied and pasted this from kubernetes.io.

And that's the point: the preferred way of delivering this information, if you all want it, is to copy and paste it. We didn't really like that idea. First we said, well, we could just put the files in a zip file, but then 1980 called and said it wanted its software distribution methodology back. Then we said, okay, we can take all this config data and wrap it up in an RPM, and then Debian called and said "we'd really like to run your stuff, but we don't use RPM", and that wasn't very cool. And so we said: we have to come up with a way to do this that makes some sense, and we containerized it. What do I mean by that? We came up with this Nulecule specification and Atomic App, and I'm going to fetch an Atomic App... actually, first I'm going to show you. Awesome. So that's what it took for me to make one container in one pod with a service, and I could go bang on the service, but it's not terribly interesting. Oh, I didn't actually run the service, but we're short on time, so: I ran the service.
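A service definition along the lines of the JSON Brian pasted might look like this (a sketch; the `run: bex-httpd` selector assumes the label that `kubectl run` applies to its pods):

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "bex-httpd" },
  "spec": {
    "selector": { "run": "bex-httpd" },
    "ports": [ { "port": 8765, "targetPort": 80 } ]
  }
}
```

The service hangs on a routable address and forwards its public port 8765 to port 80 on whichever pods match the selector.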
Now I'm going to pull one that I've already cached; it's very difficult to type like this. The CentOS 7 Atomic App, projectatomic... sorry, I can't spell. Because debug output is awesome, and that's currently the default, what I have done with one single command is launch... lots of error messages, because the network here is not having a fun time with life. But the short version is, I have three very broken pods. Normally they would not be broken, but everybody's on the network; I could pull a Steve Jobs and tell you that if you don't all turn off your devices, I'm not going to show you the iPad. Anywho, the long and short of it is, I was able to do that, magically. How did that work?

So, with Atomic App, I'll just fetch one so you can see the whole magic of it. This time I run it in fetch mode, and, more debug output, but the important part is here, I can't hit it with the mouse: that's the directory where all the files landed, by default /var/lib/atomicapp. You can put them anywhere; I'm going to stick with the defaults, so /var/lib/atomicapp. And here, I'll show you a secret: the one that I ran earlier is also there. Let's take a look at either of them, because it doesn't really matter which one; we'll use this one, because it's easier to type. Clear, `find .`. So this is what an Atomic App actually is. You're getting a README and a LICENSE; those are fantastic, we know what those are. You're getting a Dockerfile, because we're actually distributing the Nulecule information, and the Atomic App executable for the Nulecule, in a container, since we already have lots of fun tools to pass containers around the world, stick them in registries, validate them, build them, and all kinds of good stuff. The Dockerfile is actually very simple: literally, we grab a container that has Atomic App in it, we drop the metadata in, and we're done. So what is all this fancypants metadata? You'll learn about the fancypants metadata in detail in the very next session, in this very room.
But the short version is: you get a Nulecule file, which describes your application. This one is GitLab. When it runs, it's a directed graph; here it's telling you that you have to have a Redis, you have to have a PostgreSQL, and you have to have the magical GitLab container down here. You can do cool stuff like parameters: here's GitLab's database information, which it knows it needs in order to reach PostgreSQL, what port it should be on. Notice there's this really wonky rule saying the ports are only allowed to be in a certain range, mostly to prove that you can have a rule saying the ports can only be in a certain range; the idea is configuration information. I want to draw your attention to one thing up here, though. This is similar to the concepts behind Docker Compose, in that it allows you to orchestrate, but, and you've set me up perfectly, it allows you to orchestrate on multiple different orchestrators with a single set of metadata. Here we've got orchestrator information for Kubernetes and for plain Docker. In this case that's a replication controller and a service file, which are needed to launch the PostgreSQL database, because that's the component we're at the bottom of right now, and we have the same thing for Docker. So we've packaged up how you run it, but separated out some of the data about what you're trying to accomplish, so that you can have a single process. The power of that is twofold, but I think this might show the more interesting half: `docker ps`. So I've got lots of things... wow, nobody can read that, because there's stuff, there are like six of them. So I can actually run one more of these real fast, but instead of running it on Kubernetes, I'm going to force the provider to be Docker. Lots of debug information, and now I have more of these things, and the reason is that with one command-line option, I switched from one orchestrator to the other.
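A Nulecule file shaped like the GitLab one Brian described might look roughly like this; a hedged sketch, not the actual file, with illustrative component names, paths, and registry references:

```yaml
specversion: 0.0.2
id: gitlab-app
metadata:
  name: GitLab
graph:
  - name: postgresql
    params:
      - name: db_port
        default: 5432
    artifacts:
      # per-orchestrator deployment artifacts for the same component
      kubernetes:
        - file://artifacts/kubernetes/postgresql-rc.yaml
        - file://artifacts/kubernetes/postgresql-service.yaml
      docker:
        - file://artifacts/docker/postgresql-run
  - name: redis
    # a component can also be pulled in as another containerized Nulecule
    source: docker://someregistry/redis-atomicapp
```

The key idea from the talk is visible in the `artifacts` map: one set of application metadata, with orchestrator-specific files per provider.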
the other so it gives you a little bit more power in that arena short version and then I'll move back to the last two slides that I have or three slides is there's this answers comp that you can provide and this is all the data that you need to make it lab work in one single place so if you just want to modify the data and then pick an orchestrator it's very easy the expanded version of all of this information in tomash's talk um don't think this is going to work but let's see if it will give me the slides back so we had nulikil, we had atomic app we're running out of time so I talked fast yes to download the software to download the software sounds like when we were shooting in attachment now why is the implementation done this way and not by adding atomic app on the atomic post so that you would skip the first step for downloading binary so the I was not part of that decision but my understanding of part of the rationale behind this decision is two fold one, the atomic host is supposed to be very minimal so we're trying to minimize the number of things that we prepackage into it hence the trying to get kubernetes for example to be in a container two, you're always going to have to download something because you've got to get the metadata at a minimum and we're using a container to deliver that and then three we are choosing to use the atomic CLI as the entry point for all of these components and so in this way the fact that atomic app is not on your hummus the fact that you're not ever directly invoking it really doesn't matter to you because atomic CLI took care of invoking the atomic app to implement the nulecule's requests the other nice thing is that you could drop in an alternative implementation of nulecule at the top of that dockerfile and not have to worry about whether your implementation was installed across all of the various distributions and systems I'd be happy to talk more about it afterwards but I also want to leave some time for the folks who 
Okay, very quickly: there is a library of these things. Please take a look at them, please study them, please send us PRs. The Atomic Developer Bundle will be covered by Navid Shaikh tomorrow; it's an easy-to-use container development environment. Very briefly: it's cross-platform, it gives you a lot of capability with multiple orchestrators, and everything comes set up. We have three user cases we talk about, command-line Carla, IDE Igor, and my-environment Mike, and I'd like to introduce you to those people, with Navid, tomorrow. I'm done, thank you very much.

Stef, do you want to just direct people to your talk, because we are down to 10 minutes, I'm sorry, or if you want to power through... "Is that on? Part of it is usability of Atomic Host, making it discoverable and usable, and tomorrow we'll go into some of the information we would have covered here: how Cockpit integrates with Atomic Host, how it's been making OSTree better, how it's been figuring out how to containerize things, because everyone's trying to containerize stuff, but actually doing it runs into a lot of troublesome issues. And also CI testing: every day, before merging, thousands of times, it tests Atomic hosts with the patches people are sending. So if you're interested in that kind of stuff, we'll talk tomorrow, or there's a hackfest in this area on Sunday, early, and we can go into more detail there." Thanks, and I'm sorry we did not have as much time. Is that okay? Is it just me?
"It is just you." Ten minutes. Howdy, I'm Josh Berkus. I've just started with the Atomic team at Red Hat, as of about two weeks ago, and I'm going to talk to you a little bit about where we're going with all of this. Like I said, I'm just Josh Berkus, and this is kind of a visionary statement, more of a personal vision. I joined Red Hat to work on Project Atomic because I already use containers for a lot of things, I'm involved with Docker, and I've actually wanted a chance to work on container infrastructure for some time. But most people actually know me for something else entirely: for PostgreSQL. How many people here use PostgreSQL? Two of them? So, to be doubly clear, I've been on the PostgreSQL core team for quite a while, since 2003. I started out as a database developer with Microsoft SQL Server and Sybase; I've been doing databases for 19 years, I've been involved in a number of database startups, and forks of PostgreSQL, and other things, and I've spent a lot of time doing consulting and building database-based applications in Silicon Valley. So how does a person like that get to Project Atomic?
Well, one of the interesting things that happened to me a few years ago, actually eight or nine years ago now, was that somebody introduced me to this concept called DevOps. And when they finally explained what DevOps was, and what a DevOps person was supposed to do, I said: oh, so you mean more or less what I've been doing for the last ten years. Because as a database administrator and database developer, for the longest time it was our job to make sure applications went from development to production successfully. Now there are much larger infrastructures for that, but as a database guy I still end up getting involved a lot in DevOps. And as somebody involved in a lot of DevOps, I was very interested in containers, and not just recently: I would actually say my own container story starts in 1990. In 1990 I helped pay for my degree in fine art, which is actually what my university degree is in, by working in the university computer lab, on the large university VMS mainframe, which included a lot of administration of time-shares, of who got what resources, because obviously everybody wanted computer time. Conceptually, this was more or less the same thing we're dealing with today with containers, in proto form. So when FreeBSD added jail support, and FreeBSD was one of the biggest platforms for Postgres in the early days, because they both came out of the university at Berkeley, we had support for running Postgres in FreeBSD jails almost immediately. As a matter of fact, I was involved with what could arguably be called the first cloud web host, a company called Hub.org, which used FreeBSD jails to do multi-tenant hosting, something that nobody else was really doing then. This continued until I went to work for Sun Microsystems in 2006, where I was part of the Solaris department and did stuff with Solaris zones, including making Postgres work well in Solaris systems.
That was one of the things I was really disappointed to lose when Sun got bought by Oracle and OpenSolaris ceased to be something anybody really cared about, because there were a lot of really nice features in Solaris zones. So when I was introduced to Docker, at version 0.5, I got really excited: hey, now I can have my zones back, but this time on Linux. Overall, what I'm saying is that this is not recent history: this concept of containerization, of encapsulation, is kind of fundamental to Unix and Linux in the first place, and the reason it seems to work so well is that it was always meant to be there. Now, one of the other questions somebody might ask is: hey, if we had FreeBSD jails in 2000, and Solaris zones starting in 2005, and that sort of thing, why are those not all over the place now? Why did those not create the whole ecosystem of things that Docker has? And one of the reasons is that those were basically ops-only systems. FreeBSD jails and Solaris zones provided a lot of things for system administrators and database administrators, a lot of tools to make life easier for the ops staff, but they provided almost nothing for developers in the way of tools or advantages. The result was that you had really good system management platforms that were not used by developers to build anything. They were there, they were great, nobody was building anything on them, and so they didn't take off the way Docker has. Now, when Docker came out, the Docker team decided to do something interesting, which was, instead of emphasizing ops, to emphasize development entirely. The entire pitch is for developers: make life easier for developers, build easier development pipelines, develop on your Mac, and all these other things; and ops, you know, someone else will take care of that, that will be just somebody else's problem. The result is that we've ended up with a toolchain in Docker,
if you're looking at like Docker 1.0 where it's great for developers but ops people have a lot of problems with it in terms of actually deploying this for example I was at DockerCon2 recently in San Francisco and huge keynote 1500 people in the room one of the Docker VP's like how many people are using Docker and 90% of the hands go up or in production and 80% of the hands go down and the reason for that is that the dedicated ops people or even people who are not ops people but are just looking at operationalizing things suddenly run into a whole bunch of problems and they start saying maybe this isn't ready for us yet you know maybe we need to wait until this is a little bit more fully baked so what we really need here is a little bit more balance between dev and ops and that's actually what I'm excited about in joining the project with the comic team give me an example where we actually need some more balance here this is your sort of Docker thing right here is our continuous integration vision centered around Docker develop test deploy cycle and that sort of thing all around Docker containers problem is for anybody who's actually done continuous integration in production it's a little bit more complicated than that because you don't just deploy you're deploying and then going back to development but then you also have to maintain the old version because you've got a bunch of people who are on the old version and can easily be upgraded and then you have upstream changes that come in either during the development phase and need to be tested or come in and have to be passed against the maintenance version and oh don't forget that we have to actually secure all of this also and then you've got those bug fixes that you find during maintenance or during development that have to be pushed upstream more to the development version of the application so we really need something a little bit more a little bit more sophisticated set of tools a little bit more than just core 
Docker in order to help you with this whole set of things the whole real sort of application and deployment production lifestyle so because I'm a career ops guy or a career de facto DevOps guy I'm going to be looking a lot more and what we're going to be looking at a lot more in the project atomic use tools and that sort of thing is adding a lot more sort of ops intelligence to what's already a great tool for devs you know so deploying reliably, maintaining versions in libraries managing large infrastructures ensuring availability securing containers and images persisting storage which includes databases because I am still a database guy at least I'm going to be looking at that the but you've already heard from a lot of people if you've gone to some of the other atomic presentations at this time about some of the tools that we already have to do a lot of things like you know orchestration Kubernetes nuclear to help deploy reliably, consistently our PMOS tree were to make that atomic and reversible we haven't forgotten about the development people either a lot of what we're doing for developers is more on the OpenShift side of things with the integration to OpenShift the idea that the promise of Docker is hey develop it on your laptop deployed into production to the same container that hasn't been as much of a reality certainly as I would like we can make it a reality with the project at his home and a lot of the tools that we have around so that's my view, thank you we are pretty much out of time but I would encourage folks if you have not already picked up on these to check out these links and follow up the project atomic after the talk again I'm really sorry we didn't get to have more time for Steph I feel terrible that we did not get to this fit his stuff in here and we do not really have time for questions but all the people that you saw speaking here today will be wandering around the halls so thanks very much folks sorry nobody's sitting yeah well you can you 
can... oh, we're going to leave them for the workshop too? Oh, okay, it's all up to you. Yeah, I'll leave you for the workshop, because I just... nobody to give out... Okay, where are the stickers? Oh, thank you, I will have it put on my laptop. We should have distributed scar for the people who asked questions. All right, I think I see one of the guys there. All right, I feel for the scar people. Thank you so much, thanks for the questions. I'm really sorry about Steph; I'm going to upload it to our pool. I think I tried to include too much in the talk. Okay. Yeah. And I did copy it to somebody who put it in the bag. Okay, perfect. No words to say. Thank you. Perfect. Thank you. I'm not sure whose bag that is... the other guy who was speaking.
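The "develop it on your laptop, deploy into production that same container" promise mentioned in the talk comes down to shipping a single image definition. As a minimal, hypothetical sketch (the base image, package, and application file here are illustrative assumptions, not taken from the talk):

```dockerfile
# Hypothetical example: the image built on a developer laptop is the
# exact artifact deployed to production, so dev and ops run the same bits.
FROM fedora:23

# Install the runtime the application needs (illustrative choice)
RUN dnf install -y python3 && dnf clean all

# Copy the application in and declare how the container starts
COPY app.py /srv/app.py
CMD ["python3", "/srv/app.py"]
```

Built once with `docker build -t myapp .`, the same tagged image can then be run unchanged on a laptop or a production host with `docker run myapp`.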