Okay, so my name is Adrian Otto, and I've come to talk to you about Docker. You are all like lottery winners, apparently, because on sched.org, where we have this session, there are I think 66 seats in the room or something like that, and there were something like 524 RSVPs. So we had a small panic attack this morning: what are we going to do? There's going to be a flash mob at the door for this Docker workshop. So to all of you: thanks for reading your email and coming early. There are prerequisites to participating; I'll get to those in just a sec.

This is what we're going to cover. There are a total of two lessons, two labs, and a lecture that talks about the overall toolbox landscape. There is also a third lesson, so if you're a rock star and you're just going way faster than the rest of us, there's more stuff in the slides that you can go ahead and try. But I'm not going to aim to get that stuff done, because I have a whole lot to say and not a whole lot of time. This is only a 90-minute session and I've got at least two hours of content here. So lesson three is extra credit.

Prerequisites: in order for you to participate in the lab portion of this, you're going to need a working OpenStack cloud account. I don't care whose cloud it is, but once you create an instance, you must be able to get to it remotely from here. So if it's not assigning public IP addresses, or automatically making elastic IPs that front-end it, so that the address in the metadata of the response from the nova create command is one you can reach from here, then none of the lab stuff is going to work, okay? If you have a Rackspace cloud account, that will work, and the examples I show will assume you're using the Rackspace cloud. If you're not using the Rackspace cloud, everywhere I have a -d rackspace, you're going to use -d openstack, okay?
First question — I doubt you'll get it done in time. So if you don't have a working cloud account, my suggestion is that we swap someone in to take your spot, and you can take standing room and stay for the lecture part, and use the slides to do self-study on the lab part. But you probably won't be very entertained when we're doing the labs and you can't participate, okay? Another question here — I'm sorry, I didn't hear it. You need your laptop to be able to reach the address of the server using the IP address attribute that is in the response from the Nova service, okay? If that happens for you, then great. But if you've got an OpenStack cloud and it's creating instances you can't reach from here on a public address, that could be problematic, unless you have some way to work around that, okay?

You're going to need the docker client binary on your laptop. If you don't already have it, I'll put a link up for where to get it. You also need the docker-machine client binary, which I'll also put a link up for. If you don't have either of those things, again, you can't do the lab. So first, download the slides. This is where the slides are hosted; if you're going to move faster or slower than the group, this is how you're going to continue to go through the lesson. In the docker-machine GitHub repo you're going to find a releases tab — it's near the top, in the middle. Go to the releases tab and there are binary releases. You need the binary that runs on whatever kind of machine you brought today. All that running docker or docker-machine requires, at minimum, is the binary. It's built in Go, so the binary itself is statically linked — you don't need anything else. All you need is the binary, and to chmod +x that file, and that will allow you to do everything we're doing today.
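The install steps just described might look like this from a terminal — the URL, version number, and platform suffix here are placeholders, so check the releases tab for the binary matching your OS:

```shell
# Download a docker-machine release binary (illustrative URL/version --
# pick the current release and your platform from the GitHub releases tab):
#   curl -L -o docker-machine \
#     https://github.com/docker/machine/releases/download/vX.Y.Z/docker-machine_linux-amd64

# The binary is statically linked Go, so "installation" is just:
chmod +x docker-machine
./docker-machine --version   # sanity check that it runs
```

No package manager or other dependencies are involved; the same applies to the docker client binary.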
You don't need to download Boot2Docker. If you don't know how to install docker, you can install Boot2Docker and it will install the client, but you don't have to use Boot2Docker to do this lab. So those are the prerequisites. Questions on the prerequisites?

All right, so I'll let you all get those things downloading. I'm going to talk for a solid 15 minutes, so if you don't have this right now, don't panic — that's fine. By the time I'm done talking about tools, that's when you're going to need this stuff to be working. Okay. This is my colleague Simon Jakesch, and Andrew Melton is here, and Thomas is also here. All of us are here to help you. So if you get stuck — we're only going to be able to reach people on the perimeter, given how tight we're packed in here. So if you're really having trouble, ask to switch with someone who's not having trouble who is accessible from the outside, and one of us will try to get you through whatever you're stuck on, right? We'll just do our best there. So if you're struggling, or you know you're going to have trouble, come to the edges; and if you're a pro and you're just here for additional information, maybe go more toward the center, okay?

All right, so let's talk about the container itself. Container technology has been around a long time. It's been in the Linux kernel more than six years; the original cgroup code was contributed by Google, and they've been using it in production at scale for quite some time. Containers are mature technology. But when we say the word container, that means something different to a lot of people, and I'm going to give you all a definition — my personal definition — of what a container is in today's world. It used to be that when we said container, what we meant was a cgroup, and then we had other things, like LXC.
We would call that a container. So now it's some namespaces plus some cgroups, and we call that a container. And then Docker came along, and Docker added this concept of the Docker image, which I'll talk about a little bit more. All three of those things together is what makes a Docker container. So a cgroup, plus a namespace, plus an image — all three of those things together is what I call a container.

So a cgroup is a feature of the Linux kernel. It's a way for the kernel to group processes so that you can control how much of the system's resources they can consume — things like how much CPU they consume, how much memory they can allocate, how much disk I/O or network I/O they can do. And these can be arranged in a hierarchy, so you can have a cgroup that contains another cgroup.

Then we have namespaces. Namespaces — again, another Linux kernel feature — give you a restricted view of the system. By default, when you log into a Linux system, you see what's called the root namespace, and the root namespace has everything attached to it already. So let's take the network namespace as an example. When you log into a machine, you're in the root namespace. If you run ifconfig to get a listing of all the addresses on that box, you're going to see the entire list: you're going to see VIFs, you're going to see bonding interfaces, you're going to see eth0, eth1 — you're going to see all that stuff. And if you want to restrict that view so that a given container only has access to maybe one of those, then when that namespace is cloned and restricted, it may have only that one thing in it. So when you do an ifconfig within a container that is running in a namespace in that way, you're going to see that limited view. And this applies to all sorts of things the kernel does, right? So mounts — what filesystems you have access to, like a chroot. So when you use the chroot tool,
what it's actually doing is calling clone with CLONE_NEWNS, and that's the most basic of the namespace features. There's one for UTS — this is what you see as a result of running uname, so what the hostname is when you ask for the hostname. IPC deals with interprocess communication, so things like semaphores. It would be really sucky if you had multiple containers on the same box, and you created a semaphore, and somebody in another container could access your semaphore. That would be really bad. So we've got namespaces for those, too. We've also got a PID namespace, for the process ID list: when you enter a container, the first process is process ID one. Instead of init or systemd being process ID one, it's whatever process you created inside the container. If you've created a PID namespace, you're going to get a process ID of one, and each new process you create increments up from there. We talked about the network one, and there's also a user namespace. Now, the user namespace, as of the time I made these slides, was not supported in Docker yet. But the idea here is that you can have a user ID of whatever inside your container, and that can map to a user ID of something else outside the container. So you can do things like have user ID zero inside the container — so you appear to be root — but that actually maps to a non-root user on the host system. And these can also be nested.

So let's talk about Docker images. A Docker image is definitely not the same thing as a Glance image. It's not the same thing as a virtual hard drive, or a virtual machine image. It is essentially a tar file with some additional metadata in it. And these things have a hierarchy, meaning that an image at the very root of the hierarchy is called a base image. You can actually create your own from scratch if you want — like an empty one — and put your own stuff in it. That's one way to do containers. But another way to do it is, when you define your container, when you're going to
build the container image, you say this image is FROM another image that has a name. So I might have a named base image called ubuntu; ubuntu:latest might be one that's available in my container registry. And so if I create a new container that is FROM that, I am now creating this hierarchy, and my container has only the bits that are unique to my application, or whatever modifications are in my container — nothing relating to what's in Ubuntu. So container images end up being very, very small by comparison to virtual machine images.

The Docker registry is just like a git repository. How many of you use git every single day in your work? Okay, 90% of the room. So you all know the semantics of a git repo: you pull or you clone, right, in order to get your code out of the repo. If you make a change and you want to save it, you do a commit. When you're done with the commit, you do a push in order to put it back up into the registry. All those same commands work with Docker. So if you understand git, the Docker registry is just like a git repo, but the things you're putting in it are these binary container images instead. So all of you totally understand this already.

So let's review: a container is three things. It's the cgroup, plus the namespaces, plus the Docker image, and that's what gives you the container.

Now let's talk about the Dockerfile. The Dockerfile is not the same thing as a Docker image. A Dockerfile is basically like a makefile. In the case of a makefile, you're starting with C source code, and this makefile says how it gets compiled — basically a script — and when you're done, you get a binary output, right? With a Dockerfile, you have some directory full of stuff, and you've got this Dockerfile — basically the script — and you run docker build against that Dockerfile and you produce a container image. That's the binary output. So essentially it's the same thing as a makefile, except the things that go in are a directory full of stuff and some commands that
might run in the container, and what you get out the back is the container image. And we'll actually see this today — we'll actually be making container images.

So this is what a Dockerfile looks like. This one says: I am extending FROM centos, version 6. The FROM line is required unless you are a base image. So in the general case, when you're making Docker images — especially when you're a novice — you're always going to have a FROM line. The MAINTAINER line is just a way to label this Dockerfile to say who's taking care of it; it's totally optional. A RUN command says: while I am building the image, in the context of the container — so as a process inside the container — run this command. So in this case, I start with CentOS and I install SSH in that container. I EXPOSE port 22, which means when I run this container, I can map port 22 of the container to another port on the host. I'm ADDing a file called start.sh, which comes from the current directory where I'm running docker build, and it puts it into the container at the location /start.sh. And when this container runs, it's going to run the thing that's on the CMD line. So there's a difference between the RUN line and the CMD line: RUN happens only during the container build process — once you have a container image, that never runs again — and CMD runs every single time you start the container. So this just takes my start script and runs it.

Now let's say I did that, and I saved my container image as something called adrian-server-with-ssh. Okay, so I started with this one, and I saved a Docker container called adrian-server-with-ssh. Now I'm making another Dockerfile that is going to extend that — so this is the grandchild image in this case. And so I say: all right, I've already got the SSH server here.
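The Dockerfile just walked through — the CentOS-plus-SSH one — might look roughly like this; the maintainer address and exact package name are placeholders:

```dockerfile
# Sketch of the Dockerfile described above; names are illustrative.
FROM centos:6
MAINTAINER Adrian Otto <adrian@example.com>

# RUN executes only while the image is being built, inside the container
RUN yum install -y openssh-server

# Declare the port the container listens on (mapped at run time with -p)
EXPOSE 22

# Copy start.sh from the build context into the image at /start.sh
ADD start.sh /start.sh

# CMD runs every time a container is started from this image
CMD ["/start.sh"]
```

Running docker build in the directory containing this file and start.sh is what produces the image.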
I'm going to install the Apache server, and instead of exposing just 22, I'm going to expose 22 and 80. And again, I'm going to put another start script in, so it overwrites the first start script with the one I put in the second time. This is a way so that your base operating system is just taken care of by whatever upstream you picked; your stack for running whatever your app is, is handled in this middle tier; and then your app that runs on that stack is the grandchild. So what you actually ship, if you do this grandchild approach, is going to be really, really small, because it's just going to be the files related to your app. It won't be the whole app stack, and it won't be the operating system. That stuff will just be there already, and you're going to use it again and again for all the different apps you deploy that use a common FROM line.

Docker containers have a lifecycle, kind of like we do. Where you're conceived, right — we have a build. When you create a Docker image from a Dockerfile, that's the birth, or the conception, of a container. You can bring it to life by doing run, which is a combination of two things: run is a create and a start. So you could individually do a container create, which says: make the place in the kernel where these things are going to be started, but don't actually start any processes yet. Just make the namespaces and the cgroup, and use the Docker image to set up the filesystem for it. And then you can start it at a later time. Then there's reproduction. Just like you can with source code, right, you can commit, and once you've committed, you could push it back up to the registry, go to another host and run it — or you could just run it again as a modified image. I would argue that it's generally not a good idea. I wouldn't recommend using commit.
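The grandchild Dockerfile described above, which extends the SSH image with Apache, might look like this — the parent image name follows the earlier example, and the package name is an assumption:

```dockerfile
# Child image extending the SSH server image; names are illustrative.
FROM adrian-server-with-ssh

RUN yum install -y httpd

# Expose both the inherited SSH port and the new HTTP port
EXPOSE 22 80

# Overwrites the parent's /start.sh with one that starts sshd and httpd
ADD start.sh /start.sh
CMD ["/start.sh"]
```

Because the parent layers are shared, the bits unique to this image are just the Apache packages and the new start script.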
It's better if all of your containers just come from Dockerfiles, rather than from some manually driven process where you did something one-off, not from an automated formula. It's just not a best practice — but you can, if you need to, okay? There's sleep: you can kill — you can either stop the processes by using a docker stop command, or you can actually kill off all the processes in the container — and then start them up again later, which is much, much faster than creating a whole new container. Then there's wake: you can start ones that are stopped. There's death: you can do an rm on a container. So if you have a container and you've killed the processes inside of it, you can do an rm, which says: take away the namespaces, take away the cgroup, and take away the filesystem — just discard everything. And then there's extinction, right? Extinction is: I don't care about the image that I used to create this container to begin with — get rid of that too. So if you've gone all the way through kill, and rm, and rmi, you are back to a nothing state again.

All right. Now, as of the time I wrote these slides, which was maybe six months ago, this was version 1.4. These are the different commands that the docker CLI can do; I bolded the ones that you're probably going to use a lot. Well, we're going to actually do a hands-on here, so I'm not going to explain what every one does, but if you just run docker with no command-line argument, you're going to get this list.

So, in the Docker world, containers all share the same kernel, but you don't necessarily need to run the same OS environment — and when I say OS, I mean the user-space library environment, the system library environment. You can run different ones in different containers on the same host, because Linux kernels are all compatible, right? The Linux distro is only dependent on the kernel; it's not dependent on anything else running next to it. So you can run a single kernel with — here,
I've got an Ubuntu environment — I'll demo this for you later, too — an Ubuntu environment sitting next to a CentOS environment sitting next to a Debian environment, all on the same host, on the same kernel.

If you want to run, say, a web server — this is a web server that's going to be listening on the HTTP and HTTPS ports — use docker run. -d is for running as a background daemon. -p is a port mapping; the thing that comes first is the host side. This is 80:80, so this says: on the host, port number 80, run what's happening in the container on port 80. But I might say, oh, on the host I really want it to be 8000, in which case this would be 8000:80. And then another -p: run port 443 from the container on port 443 of the host. And run the container from the image httpd:latest. I don't have a command at the end here, so that means it's going to run whatever is specified on the CMD line of the Dockerfile that was used to build this container. So whatever the default command is, is what's going to run.

You're also able to inject environment variables at the time that you create the container, which will be available to all the processes that start in that container. That's the -e here. So I've got docker run; I've named it db, so I can refer to this container later as db; I've mapped port 3306 from the inside to the outside; and I've said -e, the MySQL root password is this. And I've also said -v. -v is a volume mount, also referred to as a bind mount, where the filesystem belonging to the host is made available to what's running inside the container. By default, containers have their own filesystem that's layered on top of the host filesystem. If you don't want that, and you want the things you read and write to actually happen down at the host, this is how you do it. So I might say everything under /var/lib/html — or say /var/run, or yeah, /var/lib/html — I want to be bind-mounted to the host. Or I'm going to run my MySQL container, so /var/lib/mysql
So var lib my sql I want to be mapped to some volume that I have mounted on the host And that way i'm not actually putting files into the container file system. I'm putting them on to a host file system so Let's use it Actually, hold on this is toolbox overview. So I haven't talked about the different tools in the docker Ecosystem we'll do that first and then we'll then we'll actually attach it. So docker hub This is where The base images all come from This is a service hosted by docker Docker ink. So I didn't explain so docker Docker is an open source project and it's also a company The company used to be called doc cloud and it's now called docker ink So docker ink hosts a registry for docker the open source project to use as the upstream for where your base images come from They also sell private repositories meaning You need a username and password login in order to access the the images And this is a way the docker makes docker ink can make money And if you want to change or view what's in there, you need to be properly authenticated either using their Credentials or from a github credentials. That's SSI with github as well Docker also has something called a trusted build system Uh, this essentially takes code that you put on github And looks for a docker file in the register in the in the repo So if your source repo has a docker file in it, it will every time you do a commit It's going to get a trigger it will check out that code and build A container for the docker file So you can always have the current container that goes with whatever code is in a code repo if you're using this system core os is A micro os meaning it's got just enough to run docker ssh xcd and systemd That's it So it's like a little tiny mini operating system. 
that's good for nothing but starting up containers, essentially, okay? CoreOS has another piece of software called fleet that you use to make a cluster of CoreOS nodes, which you can then distribute work onto. And this is suitable if you're running microservices. You can set the hosts to automatically update themselves, so that you're always running the most current version of the code. If you decide to use that option, it can be tricky if you're running pets — pets are hosts with names that you care about if they die — rather than running cattle. If you're running cattle, and a single node reboots because it's restarting to add new code, you just don't care. So if you can afford that deployment model, this can be really handy, because it can save a lot of ops effort in keeping your hosts up to date.

Okay: Weave and Flannel. Both of these tools are what we refer to as overlay networks. They both use either UDP tunnels or VXLAN in order to make essentially a VPN-like connection between the containers themselves. Weave is basically a shell script wrapper around a whole bunch of Linux networking commands that make it convenient to set up a peer-to-peer network. One of the cool things that Weave does: if you have, say, location A, location B, and location C, and the link between A and B goes down and you still want to reach B, traffic can alternately route and go through the other way. So it's a little bit more than just tunnels between hosts — there's actually some auto-discovery about where the nodes are, and a little bit of routing in there. Flannel is another one; it's a functional equivalent to Weave. It's a little bit easier to get going in terms of the install — well, I don't know, they're both pretty easy to use.
They're both pretty easy to get set up. But Flannel is the one from the CoreOS community, and Weave is another one that's been around maybe a little longer. And I have some examples using Weave later in the course. This slide is an expression of what you can do with Flannel — you can see there's both routing and tunneling going on here.

Then there's Kubernetes. I talked about this a little bit earlier today, in the morning keynote. Kubernetes is a project that Google started, taking the lessons learned from many years of operating containers at scale. Kubernetes is the way that Google wants to share how this should be done with the outside world — they're taking their opinionated view and making it available as an open source project. They went around at the time of launching this to get community participation, and they've got a surprisingly active community contributing to that project.

It is essentially an orchestration system that uses a concept called a pod. A pod is a grouping of containers that are expected to run together on the same host. And if you're using a microservices design, some of your services need to interact with each other at high speed, or may even need to share resources that are on the same host — like a logger. You really want to stream the log stream directly to the logger right there next to the app where it's being created, and maybe the logger's job is to send that off over syslog, or to a network service that you use to aggregate logs, that kind of thing. That's a perfect example. You might have a queue service: maybe your application is extremely sensitive to latency, and you don't want any latency between the app and the queue service.
So having them on the same host allows your app to perform better. That's another reason why you would group them in the same pod — that sort of thing. Most of the things that people want to do with containers can be accomplished by Kubernetes, but it is a declarative system, which means you describe the intended end state in the form of a YAML file, and you give that to the Kubernetes service through an API call, and it goes off and gets it done. You don't care about the details of how it gets done; you just want it to happen. But if you do care about the process, you need to use something else that is more imperative in nature, where you can actually say: yeah, hey, in step three, we're going to do this different thing instead. Kubernetes isn't going to give you that level of customization in what happens in the orchestration process.

The way that it works is that the nodes where the containers are going to run are called minions, and they report up to this thing called the master. So there's essentially a centralized control that holds the cluster state — that's the master. That's the thing that's going to accept your requests to make containers show up on hosts, and then the containers are actually going to run on the minions. So it's kind of like a queue and workers: the minions are like the workers, and the master is like the queue.

Okay. So let's talk about all the ways that Docker works with OpenStack. This morning I mentioned nova-docker.
This is a virt driver for Nova, so that when you ask Nova for an instance, instead of getting a virtual machine or bare metal instance, you're actually going to get a container out of Nova that's going to be running on the compute host. There's also a Heat resource, so that you can create essentially a Nova instance that has containers on top of it, and you can represent them in the Heat DSL — in the HOT format, which is the format of the template that we feed into the Heat service. You can actually have containers that depend on other cloud resources, or that depend on Nova instances, so you can do things in orchestration so that containers show up in the process of running orchestration. The trouble is that the Heat resource as a standalone thing isn't as useful as something like Kubernetes, because there's no scheduler. There's no container management logic, there's no placement logic, there's nothing like that. It's just: create a container on a specified instance.

Then there's Magnum. Magnum is for cloud operators who want to offer containers as a service, and it's integrated with Keystone, so that the same authentication credentials you use to create cloud resources — Nova instances, Cinder volumes, storage volumes — you also use to create containers.

All right, so let's get to the hands-on stuff. So, anybody not have the slides? Everybody have slides, right? Yes? Okay. Everybody have the docker client and the docker-machine binary? Yes? Okay, anybody not have that stuff and need help getting it going? Okay. If you don't have this stuff and you need to get it going, one of our guys will help you get that done.
Okay, Simon can probably help you through that, or Andrew can help you through that as well.

All right, so in preparation for running these tools, you need to set some environment variables in your shell. The first thing you need is to specify what region your resources are going to be created in. What this value is depends on your cloud. If you're using the Rackspace cloud, I recommend you use IAD — that's going to have the most available capacity for all of us. You need to specify your username and your API key. If you're using a Rackspace cloud account and you don't know where to get your API key, raise your hand. Okay, great. So everybody's got their environment variables set? Yes? Yes, yes, yes, yes. Okay.

So the first thing we're going to do is create a Docker host using a docker-machine command. In fact, I'll do it with you. Okay, we'll use: docker-machine create -d rackspace machine1. Are you all able to see that? It's probably huge, yeah. So docker-machine is a tool for creating a host that runs a Docker daemon. And what we're going to do, instead of running Docker on all of our local laptops — because if we did that, and we started doing FROM ubuntu or FROM centos, I'd have 66 people downloading the operating system and nothing would get done. Okay, so please don't do that. Instead, what we're going to do is use docker-machine to create a Nova instance in your cloud, and then we're going to run Docker on that.
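Putting the setup together, the shell session might look like this — the environment variable names follow the docker-machine Rackspace driver of that era, so treat them as an assumption and check the driver's documentation; the values are placeholders:

```shell
# Region, username, and API key for the rackspace driver
# (the openstack driver reads its own set of OS_* variables).
export OS_REGION_NAME=IAD
export OS_USERNAME=myuser
export OS_API_KEY=0123456789abcdef

# Create a cloud instance and install the Docker daemon on it.
# With a different cloud, swap the driver: -d openstack
docker-machine create -d rackspace machine1
```

The create call is what provisions the Nova instance, SSHes in, and installs the daemon.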
So when we pull down images from the Docker registry, it's going to pull over the data center links instead of pulling over our Wi-Fi link here, and hopefully that keeps us all going. All right — I'll make no apologies for the quality or bandwidth of the network.

These are the things docker-machine can do. It uses create in order to make a new one. Like I mentioned before, in my examples, if I've got the word rackspace and you're not using a Rackspace account, you just put the word openstack there instead. There are a bunch of different drivers — there's one for just about every kind of virtualization there is. There's one for VirtualBox, there's one for OpenStack, there's one for Rackspace cloud, there's one for Google cloud, there's one for VMware — both vSphere and vCloud Air — there's one for SoftLayer, Digital Ocean — you get the idea. All the different kinds of places you might be able to start a machine in the cloud, docker-machine knows how to do that. And it will install the latest version of the Docker daemon on the host that you create. And then, from that point, you're going to use your local docker client on your laptops to communicate with that remote machine — over an SSH tunnel initially, and then over a TLS connection between your local client and the daemon running in the cloud. So let's do it: machine1. So let me explain what this is doing.
This should take about a minute and a half on Rackspace — plus or minus a few minutes, depending on the speed of your cloud. What it does first is say, to the Nova API that you've set up your environment variables to interact with: give me a Nova server, a Nova instance. Once it gets the IP of that instance and the status goes to ready, it will then make an SSH connection using the key pair — when it creates the Nova server, it creates it with a key pair argument, so it injects an SSH key into the host. It uses that SSH key to connect to the host, where it runs a sequence of commands to get the Docker daemon installed. Then it creates a TLS key pair. The private key goes on your laptop; the public key goes on the remote server. And so there's a trust setup, and your interactions that are going over the API to the Docker daemon are all tunneled through TLS, so that only your commands are going to be accepted by the API — some random guy on the internet isn't going to be able to just start processes on your machine.

[Audience question about the Docker client.] It doesn't use CoreOS by default — we use Ubuntu by default. It does work on CoreOS, it does work on Ubuntu; it should work on most of the upstream OSes. All it really does is make a connection to the Docker web server, download the client, and install the package based on the auto-detected OS type. So it should work on other OSes, but I know it works on Ubuntu. So if you're deciding what image to use, use the image ID that goes with Ubuntu.

All right, so mine finished. Anyone else finished? Okay, I've got maybe 10% of the room done, so we'll hang out for a little while before we move past this point.

[Question.] Yes, you can use Digital Ocean — you can use any of the drivers.
It really doesn't matter. I titled this talk "Using Docker with OpenStack," but for the purposes of this session, if you've got credentials on any cloud, all of this should work equally well — DigitalOcean, SoftLayer, EC2, pretty much anything. Yes, you can do that too, but then when you go to actually run the Docker container, it's going to download the image, so your start time is going to be dependent on our local network here if you do it that way. Maybe it'll be okay. Yes, there's a VirtualBox driver too; a lot of us are running that, and it works the same way as this. I would just do -d virtualbox and it would put it on the local boot2docker. Exactly — or KVM. It actually runs Ubuntu on the machine; it just has a default value for the Glance ID in the client. When you're using the OpenStack driver, you can actually specify what Glance ID it uses. Well, it'll pick a UUID value — if you're using the Rackspace driver, it's going to pick a UUID value that we already have in our public Glance catalog, so it'll be available to everyone. All right, so how many are up with at least one machine so far? More hands — one more. Okay: because it injects the SSH key as a root key. So my second one finished. Take notice of this little command it gives you when the docker-machine create finishes: it gives you this little eval command, and what that does is it sets the environment variables. Let's just run it like this — hold on — see, those three environment variables.
That's what it sets. This is telling the client that it's going to be using TLS to communicate with the remote server, where to find the remote server, and where my local certificate is that it generated for me. If I do a docker-machine ls, it's going to look at my list of servers — well, there's some latency — all right, and you see there are two machines that I've created here, machine1 and machine2, and machine2 is the active machine. So if I run docker commands right now, they're not going to run on my local box; they're going to run on machine2. If I run docker ps, I should see no containers running. docker ps -a is a list of all containers ever created and not deleted; again, I should get an empty list. Now docker run: -i means interactive, -t means I want a TTY, and you can put them together as -it. Then comes what image you want to run — and this part does not matter; you can pick Debian or Ubuntu or BusyBox here, I don't care, whatever Docker image tag you want — and then what command you want to run, if you don't want whatever the default was. So this has made an API connection to the remote server and told Docker to run bash in a CentOS container. Right now it's pulling down the CentOS image, and now I'm on the remote box.
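A minimal sketch of that sequence — the variable names are the ones docker-machine of this vintage emits, and machine2 is the machine created in the demo:

```shell
# point the local docker client at the remote daemon over TLS
eval "$(docker-machine env machine2)"
env | grep DOCKER   # DOCKER_TLS_VERIFY, DOCKER_HOST, DOCKER_CERT_PATH

docker-machine ls   # lists machine1 and machine2, marking the active one
docker ps           # running containers on machine2 (none yet)
docker ps -a        # every container ever created and not deleted
docker run -it centos:6 /bin/bash   # pull the image, start an interactive shell
```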
So if I run uname — it's like IRC in 1993 — you'll see here I'm running on an Ubuntu kernel. And if I exit this container and run uname again, you see this is my Mac. The second time I run that container, you'll notice it comes back a bit quicker, because the host already has the CentOS image on it; it didn't have to download it the second time. Now let's say this time, instead of running CentOS 6, I want to run CentOS 7. It will download only the bits that are different between the CentOS 6 and CentOS 7 images, since the CentOS 7 image is a derivative of the CentOS 6 image. So it says it's downloading only 77 megabytes here, instead of the few hundred megabytes I would expect CentOS 7 to be. Let's say I only wanted to run Ubuntu here, or Debian — let's go back to the lab. All the containers I've run so far have been interactive, so when I exit them, the containers are stopping, but they are not being removed. So at the end of this Debian one being created, if I do a docker ps — so this is a Debian box, same kernel again, the same Ubuntu kernel, but now I'm running a Debian distro on top of it — docker ps shows me the list of running containers: there are none. docker ps -a shows me all the containers I've created: I should get a list of the four I created. Let's say I want to run this same one again — this Debian, or say I like this CentOS 6 one again. I can say docker start on that one, and then I can do a docker attach. I did it twice in a row to show you: the first time, to show that it would download the CentOS 6 image, and the second time, running the same thing again, to show that it doesn't do that download the second time. So you see here I'm reattached to that same bash; I'm back in CentOS 6 again. You get the idea: you can start up containers that have been stopped.
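The shared-layer behavior described above can be sketched like this (container IDs will differ on your machine):

```shell
docker run -it centos:6 /bin/bash   # first run: downloads the whole image
# inside: uname -a shows the host's Ubuntu kernel; exit when done
docker run -it centos:6 /bin/bash   # second run: image cached, starts fast
docker run -it centos:7 /bin/bash   # pulls only the layers that differ

docker ps -a                        # stopped containers are still listed
docker start <container-id>         # restart a stopped container
docker attach <container-id>        # reattach to its bash
```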
Now let's say I don't want to use this container anymore and I want it to go away. I would do docker rm, and you can actually remove more than one at a time — you just list them all out. Now I'm telling the remote API: go ahead and delete all those containers. So when I run a docker ps -a against it again, I get an empty list. That's how you clean up after yourself. One thing I didn't show you: docker-machine has an ip subcommand you can use to get the address. You can also connect to the machine just like with Vagrant: where you'd do a vagrant ssh to get into the box, docker-machine has an ssh subcommand too — it's the functional equivalent. If I make machine1 the active machine instead of machine2, that'll log me into machine1 instead. You get the idea: by toggling active, you decide which of them you're going to talk to at a given time. Okay. Early on, when Docker was new — let me just talk to this slide for a little bit — in order to get into a container, you had to either know that you could run nsenter to create a process in the container, or do what a lot of people were doing: actually running sshd inside their containers and using an SSH client to ssh into the container. Don't do that; it just creates a whole bunch more sshd servers that don't need to be there. You're already running an SSH service on the host. You can create a shell any time you want, in any of your containers, from the host, by using something called docker exec. So: docker exec, -i for interactive, -t for "I want a TTY" — sometimes the things you run don't need a TTY, which is why these are separate flags — then the name of the container (this slide should say machine2), and then the command you want to run. In this case, what I really want is — let's set active to machine2, and go back and create my container again. Okay, let's just do that
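The cleanup and docker-machine commands just described, as a sketch — IDs and machine names are placeholders, and the active subcommand shown here is the syntax of the early docker-machine client being demoed (later clients switch machines via the env command instead):

```shell
docker rm <id1> <id2> <id3>    # remove several stopped containers in one go
docker ps -a                   # back to an empty list

docker-machine ip machine2     # print the machine's public address
docker-machine ssh machine1    # shell on the host, like `vagrant ssh`
docker-machine active machine1 # make machine1 the default target
```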
in the background. If I say -d, instead of running in the foreground it's going to run in the background, and instead of running bash I'm going to run sleep for 30 days. Okay, so now I've got a container that's running, and docker ps shows me just that one. I can say docker exec -i -t and this container ID — by the way, you don't need to use the full hash of the container, you only need enough of it to be unique; in this case I could probably call it 0a, though that's probably not enough — and then the command I want to run. Yes, you can use the auto-assigned name: in every docker command, where you identify the container, you can use either the ID or the name, and you can specify the name when you start it. So if you say I want to run — I shouldn't have removed that, hold on — I was showing you that you can name a container. So you don't have to run an SSH daemon in the container; you can create whatever process you want, in whatever container you want, just by using exec. There's another way, a tool called nsenter, which gives you more control over the kind of namespace that you enter. This is useful if you want to share only some of the namespaces but not all of them — say you want to create a process that shares the network namespace but not the PID namespace. You get the idea: you can do creative groupings of new processes across your different containers, and control which namespaces your new processes share or don't. All right — moving a Docker image to another host requires that you have a repository you can write to. Using Docker Hub, you can: if you don't already have a Docker Hub account, you can create one there.
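A sketch of the detached-container-plus-exec pattern above — the name web and the ID prefix are illustrative:

```shell
docker run -d centos:6 sleep 30d   # -d: run in the background, just sleeping
docker ps                          # shows the one running container

# any unique prefix of the container ID is enough:
docker exec -it 0a1b /bin/bash

# or name the container when you start it and use the name instead:
docker run -d --name web centos:6 sleep 30d
docker exec -it web /bin/bash
```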
It takes a minute to log in. Once you've got an account on Docker Hub, you run this command, docker login, and it authenticates you against your account, so that when you do a commit, you can then do a push. You can tag an image with a name and push it into the private repository, so that you can pull it down on another machine. So with our two machines here: you can make a change in one of the containers, do a commit, do a push, and then — after a docker login on the second host — run a container from the resulting repository name, and now you're running the same container with the same state that you committed on host one. It's a way of doing migration, essentially, using an export and import process. A cold migration — well, I don't know, you might call it warm, but I would call it really an export/import, not a migration. Yeah — docker save and docker export do similar things (export works on a container, save on an image), and they give you the contents as a tar stream, which you can save into a tar file. Then, without a registry at all, you could just copy the tar file somewhere else and start it back up — or yes, you could put it into another registry. That's right, export flattens all the layers. It's generally better to always start from a Dockerfile and build, rather than modifying an image and expecting to be able to move that around; but you can import images from tars. All right, in the interest of time I'm not going to demonstrate this, but I'll leave it in your capable hands. The key here is just the docker login: you do the docker login on each host where you want to be able to push or pull from your private repo.
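The commit/push/pull "migration" flow, sketched with a hypothetical repository name (youruser/myapp):

```shell
# on host one
docker login
docker commit <container-id> youruser/myapp:snap1   # freeze the container's state
docker push youruser/myapp:snap1

# on host two, after its own docker login
docker run -it youruser/myapp:snap1 /bin/bash

# registry-free alternative: move a flattened tar by hand
docker export <container-id> > myapp.tar
```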
Okay. Question: yes — active is how you change, by default, which of the Docker machines you're going to be interacting with when you don't specify the machine name, for example when you run docker-machine ip or docker-machine ssh. If I run docker-machine ssh machine1, I get machine1; if I run docker-machine ssh machine2, I get machine2; if I just run docker-machine ssh, I get whatever was marked as active. It actually does switch when you go from one to another — I tried it. Actually, it depends on the version of the client you're using, but the best practice there is to either specify everything explicitly or to set active every time you want to change; that's going to work across any client regardless of version. Yeah, time's up. The argument "one" in this case — see right here — I thought this was a bug in my slide, that it should say machine1. No, hold on, I take it back: this says commit the container named "one", because earlier on we created containers named one and two. So it's not a bug, it's actually right: docker commit the container "one" — write the state of that container to the path of the private repository, with a tag on the end. So, Dockerfiles. This is what I showed you earlier: FROM, MAINTAINER, RUN, EXPOSE, ADD, and CMD. Remember, RUN only happens when you build the image, and CMD only happens when you run the container. FROM is either in the format name or name:tag; if you don't specify a :tag, it uses :latest by default. The MAINTAINER line is optional. The RUN command runs at the time that you build. The EXPOSE command —
okay, so there's this flag for docker run called -P, capital P. When you use -P, it will automatically take what was on the EXPOSE line and map it to a random port on the host, so you get a random host port that maps to whatever you exposed. If you're using a system like Kubernetes, with a services capability that's going to connect you up, it doesn't want to take care of static mapping; it creates the mapping on the fly, and whatever port Docker assigns, it connects the service up to that port. So if you're doing this in an orchestration system, capital -P makes a whole lot of sense. If you're creating these things explicitly, lowercase -p — specifying the actual mapping of what port you want externally to what you want internally — makes more sense. It just depends on what you're trying to do. ADD is for adding files: you put the source file first (that's the local one in the build directory), and then the destination, where you want it to land inside the container. So this is how Dockerfiles are built: you make a new directory, go into it, put a file in there called Dockerfile, and write your Dockerfile language — the FROM, MAINTAINER, any ADDs or RUNs you want, your CMD line, and your EXPOSE line. You save that, and then you run docker build -t with the name of the image you want to create and any tag you want to attach to it. Now, you want to see how to copy — how much time do I have here?
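Putting the Dockerfile instructions and the two port-mapping styles together, a minimal sketch — the image name and start script are illustrative, not the speaker's exact files:

```shell
mkdir demo && cd demo
cat > Dockerfile <<'EOF'
FROM centos:6
MAINTAINER Your Name
RUN yum -y install httpd     # runs once, at build time
ADD start.sh /start.sh       # source path is relative to the build directory
EXPOSE 80
CMD ["/start.sh"]            # runs when a container starts
EOF

docker build -t myhttpd:demo .
docker run -d -P myhttpd:demo          # -P: random host port mapped to EXPOSEd 80
docker run -d -p 8080:80 myhttpd:demo  # -p: explicit host:container mapping
```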
All right, we've got a little less than half an hour left, so let's do a bit of docker cp. With docker cp you specify the name or ID of the container, then a colon and the path of the thing you want to copy, and then the path of the place where you want to put it. So now my Mac has that file on it. This is handy if, say, I'm in the process of experimenting with some containers and I've made a configuration — I've done this recently with a Galera cluster, where I had the configuration of one Galera node and I wanted to put the exact same config on another node. I would use docker cp to copy the file out, to be sure I had exactly the same version of the file in both of the environments I was testing with. But you can also do the reverse of this, using a bind mount, -v. Let's say I want to mount this /tmp directory to /adrian inside the container. Okay — see, I brought my experts with me to keep me straight. So, on machine1 — I think I was working on machine1 — I mount the host's /tmp to the container's /adrian. I'm going to get into the host and show you how this works. Okay, so I'm on machine1, and I'm in a container on machine1; let's put these side by side. I'm in /adrian in the container on the right, and in /tmp on machine1 on the left. See how that works? The host and the container are in sync in terms of their file systems, because I'm not using the container's file system — I'm using the host's. So that's how you get things from the host in: I put something in /tmp, and then I can copy it. This hi.txt came from the host — that's where I created it — and I can just cp it, and now it's in the container. You get the idea. docker cp is only for slurping stuff out of the container;
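The copy-out and bind-mount demo, sketched — the mount paths follow the talk, while the Galera config path is an illustrative assumption:

```shell
# slurp a file out of a running container onto the laptop
docker cp <container-id>:/etc/my.cnf ./my.cnf

# bind-mount the host's /tmp at /adrian inside a new container
docker run -it -v /tmp:/adrian centos:6 /bin/bash
# inside the container, /adrian IS the host's /tmp: a file created on
# either side is immediately visible on the other
```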
it's not for putting things back in. I don't know why, but that's the way it works. Okay, so we talked about docker commit. I mentioned before that doing a commit and creating new images is something you can do, and I think that's fine if you're just playing around with stuff. It's not fine in a production environment, where you want your environment to be repeatable. In that case, the best practice is to create your images with a docker build from a Dockerfile — not by fooling around with an image like I've been doing in these demonstrations, getting it to a golden state, and then doing a commit and pushing it up. That's generally a bad idea. If you have a Dockerfile, you have a way to create the image the same way every time, repeatably, and that's what I would recommend. And if it's for an app, I would put the Dockerfile right in the code repo with the source code. So I made a git repository that has a demo Dockerfile in it, like the ones I've been showing you. You can clone this repo — github.com/adrianotto/dockerfile_demo — and it has a couple of things in it. You can clone it wherever you want, wherever you want to do your docker build; I would do it on my laptop, but if you're having trouble with the network, you might actually want to do it remotely — the network's been pretty laggy. I'm going to show you what's in it. There are four files in here. I've got a Dockerfile. It says FROM centos; the MAINTAINER is my name — you can put whatever you want on the MAINTAINER line.
It doesn't have to be an email address. I do a yum update, and then a yum install of Apache. Then I ADD the start script that I created, I EXPOSE port 80, and I make my CMD run my start script. Why the start script? Because a container has no init process. If you use init.d scripts, they will not be started when the container starts; the only things that start when your container starts are the things you explicitly start. So the way I use Docker is I always have a script like this, and I start my stuff from it. If I want to start services, I just use "service whatever start" in my start script, and at the end I sleep indefinitely. That way the container doesn't go away when I'm done starting my services; it stays around, but the thing that's actually running is doing nothing. And that way I can do a docker attach, or a docker exec, to get a shell in this container later if I want to, and I'll be able to go in there and start more processes or do whatever I want. Exactly — you don't have to do it this way; there are different ways to do this. Another way is to have your app put something onto standard out and not exit: you run your app in the foreground, not with a daemonize flag, so that it holds standard out open, and then you can get that output from docker logs. You can also — depending on whether you run it privileged or not — actually run systemd inside a privileged container. Okay, great question, Andrew.
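The start-script pattern described above might look like this — a sketch, assuming Apache was installed in the image and that the container's sleep supports "infinity" (otherwise loop over a long sleep):

```shell
#!/bin/bash
# start.sh: a container has no init process, so start services explicitly...
service httpd start
# ...then block forever so the container stays up; you can still
# `docker exec` a shell into it later
sleep infinity
```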
He asked: when I do a docker kill, what happens to all the processes I've created, if I have more than one process? The container is in a cgroup, and when you signal a cgroup, everything in that entire hierarchy gets the same signal. So if I send a SIGTERM to a Docker container, all of the sub-processes in that PID tree also get SIGTERM. It's a way to kill off all of your processes at once, and it's actually really convenient: sometimes what I want when I stop a service is for the entire grouping of processes to end at the same time. In that case I might actually put them all in the same container together, rather than having a different container for every single process — it's easier to deal with them all at once. So, what am I teaching you? I've lost track. Oh, we were doing Dockerfiles. I also put a script in here called build.sh. Now, there is a tool called docker-compose that allows you to do this in a more graceful way; this workshop was actually prepared before docker-compose existed, so my apologies — it would have been cooler if I'd had time to update this, but I've been too busy. But you don't have to use Compose; you can use whatever the hell you want. So this is an example of a super simple shell script that just does a docker build. I set a variable here: I get the date, and I use the date as a tag. So when I build, I'm actually building httpd:<date>, I tag that against latest, and then I check: did my build and tag commands work?
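In terms of commands, the signal behavior looks like this — the container name web is illustrative, and note that plain docker kill sends SIGKILL unless you choose the signal:

```shell
docker stop web           # SIGTERM to every process in the container's cgroup,
                          # then SIGKILL after a grace period
docker kill -s TERM web   # or deliver a specific signal yourself
```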
And if they did, then I kill off any other container that has the same name, and I attach — which I expect to fail, because you cannot attach to a container that has been killed. But if it takes a while for your processes to exit, there's a window where the container has received the signal but the processes aren't gone yet. What attach does is block until all of those processes are actually gone. So docker attach is my way of waiting, before I start the new one, for the old container to actually be gone. If I didn't do this and just did docker kill then docker run, sometimes the new run won't actually work, because there's another container still there — its processes have not stopped yet. Make sense? Okay, and then I run a new one: I give it a name, map port 80 on the host to 80 in the container, run it in the background, and run it from the image I just built. And if that didn't work, I print a nice little error message. So let's try it. This is what a build looks like. Every stage of a build creates a new container: every single one of those lines in your Dockerfile — unless it's something like the MAINTAINER line — is going to do something, and that something does not happen on your host. It happens inside a container that is created for the build process, a container intended to be around only temporarily, until the end of your build step, where we commit the container to an image. That's where the image actually comes from: from the container that was running at the time the build was running. So this is doing the yum update. Question — louder? Where do the images actually live at the time you're building? The images live locally, in the /var/lib/docker directory, using whatever storage driver you have chosen. So your upstream Linux provider is going to determine what the storage back end is. I happen to be using CentOS,
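A sketch of a build.sh along the lines described — the names are illustrative, not the speaker's exact script, and a docker rm is added so the reused container name doesn't collide:

```shell
#!/bin/bash
# build.sh: rebuild the image, then replace the running container
TAG=$(date +%Y%m%d%H%M%S)

if docker build -t httpd:"$TAG" . && docker tag httpd:"$TAG" httpd:latest; then
    docker kill web 2>/dev/null     # signal the old container, if any
    docker attach web 2>/dev/null   # blocks until its processes are really gone
    docker rm web 2>/dev/null       # free the name for reuse
    docker run -d --name web -p 80:80 httpd:latest \
        || echo "failed to start the new container" >&2
else
    echo "build failed" >&2
fi
```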
so it would be the devicemapper back end: you'd have a virtual file that holds metadata and another virtual file that holds the actual data. Essentially, it's like having a local Docker database of what's going on — kind of like when you use git, right? You get a local copy of the source; in the same way, you get a local copy of the images. Remember I told you: if you understand git, you understand repos — that's what's going on here. There's no requirement that you use the Docker registry to store your stuff. You can just build your images wherever you want to use them. I actually have production deployments of Docker that don't use a registry for any of my images; I only use the registry for the upstream stuff. In some cases I have production deployments where I started images from scratch — there's literally an image called scratch, and it is empty; there's nothing in it. So you can build your own Linux distro inside the scratch image, commit it, and that's your new base image, and then you can create your own Dockerfiles against that. So you can have an entire environment that depends not at all on anything on the internet. Okay, so there's the container that I made, and if I do the same thing again, it's going to rebuild. Now, here's an interesting thing.
Here's an interesting thing The docker build process created those containers in the build process Those containers are cached So if I run the same build command again The things that are the same will use the cached result instead of running again So if I've changed my Source code Of course, it's going to be different when I create the context when you run docker build It basically like tars up everything that's in your directory And then feeds that into the docker service as a as a stream And it looks at the checksum of that of that data that you're checking in and determines if it's different And if it is then it actually does what the docker file is asking it to if nothing's different It just uses whatever it's done before and doesn't repeat itself So that's why Docker builds can be much faster when you repeat them Why is this taking a long time? What's it doing? Oh, it's talking over our slow network I know why because I told you guys to do docker build and you know what docker build is doing It's flooding our network with activity It's going to push the context up So depending on what I told them to put in the context Well, it's this is only just a couple of text files, but still it's whatever Question here If you have a device driver, can you pass it to the container as a gpu resource for example? You can give access to device files Depending on what you want to do with that device You might not have the rights Because when you create a container unless you create a privilege container, it drops the capsis admin capabilities So you can't do things like mounting file systems So the most common reason people want to do that is they've got like a file system on some device Yes, you can get access to the device, but you can't actually mount it from inside the container after that So you would need to use what's called privileged mode. 
So you create a privileged container — I think the flag is --privileged or something, when you start the container — in which case it won't drop the capabilities when it starts. It's a less secure way of running the container, but it allows you to do stuff like this. So yes, you could. Question there — five minutes, okay. The question is: if I used Kilo today, how long would I need to wait for this to be just built into OpenStack? Well, everything I'm showing you today is possible with all OpenStack clouds. In terms of treating containers as native, from the Magnum perspective: Magnum is released; you can download it and use it today. If you're using it for experimental stuff, it's perfectly appropriate. I wouldn't recommend it for mission-critical workloads yet; come back around the Liberty release time frame to see where we are, and decide whether you want to put serious stuff on it. Okay, sorry about that. All right, so: writing your own Dockerfile. I showed you the repo, I showed you the build script, and I explained that running things a second time uses the cached result. So if your builds are, say, 90% similar from one build to the next, put the thing that changes the most at the end of the Dockerfile, and you'll speed up your build process a whole lot. Especially with stuff like yum update — in my example I have a yum update command, and that's not going to change five minutes from now when I rebuild my app, right? So it's great to put that at the top, and to put the "add my source directory to somewhere in the container" at the bottom. That way only the stuff that's actually changing gets rebuilt, and my builds are much, much quicker than the very first time. Okay. We got through all our material, and I've still got three minutes left. Awesome.
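The cache-ordering tip can be sketched as a Dockerfile laid out for fast rebuilds — the paths here are illustrative:

```shell
cat > Dockerfile <<'EOF'
FROM centos:6
RUN yum -y update            # stable step: cached across rebuilds
RUN yum -y install httpd     # stable step: cached across rebuilds
ADD src/ /var/www/html/      # changes on every edit: keep it near the end
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF
```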
So we're done with the instruction half — excuse me, the lab half. I'll take the rest of the time for questions, or we can just wrap up and you can come tackle me at the front. Thanks, everyone, for coming out.