Okay, so welcome, everyone. Let me introduce myself and my colleague: I'm Joseph and this is Peter. We work for Red Hat, and we are going to lead this workshop today. It's about Docker, and it should prepare you for the whole container track that is upcoming at this conference. First I will tell you what this is going to be about, and then what it's not going to be about. We're going to start by just setting up the environment so we can do the steps of the workshop itself, and while you are setting up your computers we will give some introduction to container technology, its background, and a little bit of its history. Then we will speak about Docker: what Docker is, how it works, what it does. This will be a hands-on workshop, so you will be trying all the examples on your own. What this presentation is not about is why you should use Docker, because there will be a whole container track and you can pick presentations there that are more specific; this is only an introduction to what it is. We are also not going to talk about the content of the container, because all of the other talks actually talk about the contents. So, a note on how to participate: as I said, you will need the Docker program on your machine. I hope I have helped everyone; if anyone still needs help, just raise your hand and I will come over. Yes, it's better to run it natively on Linux; if you don't have that chance, we'll give you virtual machines on the spot. Okay, so I will hand over to Peter and he will say something about containers. One little housekeeping note: if you have a question, just raise your hand and fire it; we are here to help you, so don't hesitate, just shoot, and of course enjoy yourselves. Okay, so, no issues? As Joseph said, if there are any problems, raise your hand and one of us, probably the one who is not speaking, will come to you
and help you. Okay. So, while you are setting these things up, I will ask you a question, and try to answer it in one sentence or a few words: what would you say if I asked you what a container is? Anybody? No? Did anyone use Docker before, trying some simple stuff or anything? Just put your hands up. Okay, all right, so I'll answer this question for you. Just to make it understandable in a virtualized world, let's say that actually there is no such thing as a container (but don't leave yet, please): there are constrained applications, and by application I mean one or more running processes. Why do I say this? Compare containers to virtual machines. When we want to run an application in a virtual machine, we usually create the virtual machine, install it, set it up somehow, log into it, start the application, and then start using it. We can stop the application in the virtual machine, but the virtual machine itself will stay and continue to exist. A container, on the other hand, cannot really exist without the application running inside it: once we stop the application, the container ceases; we cannot have an empty running container. This is important to understand, because it shows the difference between virtual machines and containers. When we talk about containers, and most of this conference is about them, we actually talk about multiple Linux kernel features configured together for one process, a set of processes, or an application. Again: multiple kernel features configured together, and not all of them depend on each other. What I want to say is that each of those kernel features can be configured in an
independent way, so we can have various levels of isolation inside the container. We can configure a container so it has access to the host networking, but we can also configure containers so they don't, and so on. Now about Docker: Docker is actually a platform for running, shipping, and building containers. In this workshop we will look into running and shipping, but we won't talk about building containers, because we don't have enough time for that. When you install Docker, you start the Docker daemon on your host, which is responsible for managing containers on that host. As I said, the technology used for containers is kernel features, so it is true that containers are actually pretty old. The reason why Docker is now kind of a hype is that Docker provided a whole platform: not only does Docker allow you to run containers, it also allows you to build and ship them. The other containerization platforms, like systemd-nspawn or, I think, OpenVZ and so on, usually only allow you to run the containers; they don't let you easily create an image and ship it from one computer to another. Docker solved this and made it pretty easy, and that's why so many people are using it today. Okay, any questions on the theoretical intro? Is there something you don't understand? Okay, I'll give the word now to Joseph, who will talk about Docker images. All right, so what is cool about Docker is actually the packaging of the application and its dependencies into one single unit. We call it an image. It is immutable, and you can ship it to any computer which is capable of running Docker, where it can be instantiated; when we instantiate the image, we call that a container. An image is usually composed of the application itself and its dependencies, but it can contain what we call a minimal operating system, though it
actually is not an operating system; in the container there is never a kernel, it uses the kernel of the host. So when you see an image called fedora or ubuntu, it actually is not the operating system; it's only the binaries that make it easy to work with. Let's say, in the context of Fedora, a very good binary is the dnf installer, so you can download other packages and grow your container. If you don't really need these tools, you can package only your application and the dependencies which you need into the image. So you can have a one-gigabyte image for, let's say, Java on Ubuntu, and you can very well have a one-megabyte image which is one statically compiled application. That's the big difference between these two. When you see an image called fedora, it is not a virtual machine, it is not an operating system; it's only called fedora because it contains the applications and tools you are used to using on Fedora. For distributing Docker images there are repositories, and there is actually one repository managed by the company Docker. When you identify an image, the source repository is part of the fully qualified name. In the context of Red Hat, our repository runs at registry.access.redhat.com; that's the repository. If you don't specify the repository, Docker defaults to docker.io, which is the Docker repository, available at hub.docker.com. In the Red Hat registry you will of course find our products, which are tested. The other part of the fully qualified name is the author of the image; in this context we use jboss, but it could be some team or another person. Then comes the name of the image itself, in this case an application server built for the OpenShift environment, and the last part is the tag, which is a version of the image. That way you can easily build several versions, and what is cool about Docker and the way we use it is that we
can not only upgrade from one version to another very easily, but we can also downgrade. Downgrading has usually been very difficult, and with applications shipped as images it's quite easy. Okay, let's do a stress test of our internet connection: for this workshop we will need some images, so let's download them now. There's a command for downloading, docker pull, and really the vocabulary is similar to git: you can do docker pull, docker push, docker commit, and they do what you expect them to do. It's very similar to git, so it's actually quite easy to start with Docker. docker pull is the command which downloads images from a registry; you specify the registry, the author, the image, and the version, as I showed on the previous slide. So let's have a look at it. It's just the command docker pull, and then you specify who the author is, what the image is, where it is published, and what the tag is. You see we put docker.io here, but if you don't specify any repository it always defaults to docker.io, so you could just do docker pull fedora and it would by default download fedora from docker.io at the latest version. What was the second part? Oh, that is the author. We didn't put the author in here, so is there also a default author? Well, there are special kinds of images, official ones, which don't have an author: if you download fedora, for example, and find it on Docker Hub, it says it's an official image, so it has no author. Like wordpress, centos, mariadb, mysql? Yeah, there are a couple of these official images; for example, the centos image was built by people from the CentOS community. It's kind of like the verified star on Twitter: if you use Twitter, you know that celebrities and famous people have the star. Barack Obama has the star because it's certified as his official account, not someone else's.
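To make the naming scheme concrete, here is a sketch; the exact image names and tags are illustrative, not taken from the slides:

```shell
# Fully qualified name: <registry>/<author>/<name>:<tag>
docker pull docker.io/library/fedora:latest

# Registry and tag have defaults (docker.io and :latest),
# so this shorter form is equivalent:
docker pull fedora

# Pulling from a different registry, e.g. Red Hat's:
docker pull registry.access.redhat.com/rhel7
```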
Official images work the same way. Okay, and now, how is the internet working, can you download it? Yeah, it's slow. Once you have it downloaded, you can list the images; I hope we'll get through some material before our time ends. The command for listing images is docker images. Okay, anyone who has downloaded the image yet? Okay, a question: when you download them, where are they stored on your computer? You mean the path? I think that's /var/lib/docker, but there will be multiple layers stored there under hashes; you won't actually see a directory named fedora. Those of you using the VirtualBox machines should already have this image present, so when you issue the command docker pull fedora it should say it's already located on the computer. There is also a command which you should not issue right now, docker rmi, which removes an image; please don't do it, because you would have to wait another 10 minutes to download it again. A couple of questions: when I run docker pull, it says it downloads and updates the Docker images; what does 'update' mean? If I have an application running from the image, should it not be running? You can issue docker pull even while containers are running. A container doesn't use the image for writing: it creates an overlay layer and saves changes only to that separate layer. So you can download an image, create one or more applications from it, then update the image and create new applications from the updated image, but the old applications will keep using the older version of the image. So do I have multiple versions of the image, and when I shut down the running application, can I run it from the old image? Yes, you have an older version of the image.
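The listing and removal commands just mentioned, as a short session sketch (the output columns are abbreviated and illustrative):

```shell
docker pull fedora    # downloads the image, or refreshes it if present
docker images         # lists locally stored images
# REPOSITORY   TAG      IMAGE ID   CREATED   VIRTUAL SIZE
# fedora       latest   <hash>     ...       ...
docker rmi fedora     # removes the local copy -- please don't run this now
```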
Actually, the image is layered: one image contains multiple layers of data, and the image itself is like a description of those layers. It reports a virtual size, which is the size when the image is extracted; when you are pulling images you are downloading tar files, and I think the tar files may be smaller than the virtual size. When you download an image, you download the whole image; it's not that you would get just a part of it and later need some other part. But images are layered: when you build an image, you build one or several new layers, so the whole image is actually a composition of, say, five or thirteen layers. What is a layer, and where does it come from? I think what you want to ask is how images are built. When you create a Docker image you use a Dockerfile, and a Dockerfile consists of a set of commands executed from top to bottom; for every command there is a layer. So when I have a command like dnf install httpd, it installs httpd and commits a layer; the next command adds my configuration files and commits a layer again. Exactly: when you just want to change the configuration and create a new image, the downloading part isn't done again; only the new layer with the changed configuration is added. Why was it designed this way? One reason is optimization: if you have two images, both built on CentOS, one for httpd and a second for MariaDB, and you pull both, they have the same base, so you don't download the base twice; it is shared. And you always build a new image: when there's a new version, you rebuild the image and publish it to the registry. Ideally, the state of the application should be persisted separately.
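A minimal sketch of the Dockerfile idea described above, where each instruction becomes one committed layer; the package and the config path are just examples:

```dockerfile
FROM fedora                  # base image layers, shared between images
RUN dnf install -y httpd     # one command, one new layer
ADD my-httpd.conf /etc/httpd/conf/httpd.conf   # another layer on top
```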
You should run a new container and erase the old one. I'm sure at this conference there will be a lot of talks about Kubernetes and OpenShift, and those are the orchestrators that do this automatically: if you need to scale up, they will start several new containers for you. I can show you the command line. When we are talking about the layers of the image: when I list my images, what we can see here is a centos image, and it has about 200 megabytes. Each record aggregates all of the layers of an image, but I can add the -a switch, and now I can see all the layers. You see the tag is <none> for most of them, because the tag corresponds to the top layer of the image, and as Peter said, it's an optimization: several layers can be shared, and new ones are built only where there's a difference. No, Vagrant is for managing virtual machines; it is a kind of orchestrator above KVM or VirtualBox, but containers are not virtual machines, they are isolated processes. Vagrant is just an orchestration tool sitting above the virtualization tools, though I think it already has a driver for Docker, so you write the Vagrant configuration and it translates the commands to Docker, or to VirtualBox, KVM and so on; it's just a tool at a different abstraction level. Okay, so how is the downloading going? Still downloading. A question: when you have a Docker image for Fedora and also a Docker image for CentOS on the same host, it's the libraries and the application layer which define Fedora and define CentOS, so does that mean I have the things from two distributions on one host, or are they shared? I don't think Fedora and
CentOS are shared; they are not. It's similar to virtual machine images: you can download multiple virtual machine images side by side. The idea is the same. When we talk about the fedora Docker image, it's a super minimal file system with just enough programs and libraries to install other software, so the dnf command works. So what can I use that for? Well, since the dnf command is already available, you can very easily create your applications on top of it. When you create a new Docker image, the first command is usually FROM something, and that means the base image: you can create a new image from the fedora base image, install httpd there with dnf install httpd, and add your custom configuration. It's like a template Docker image we can then start from. Yes, usually there is bash, and we use it for experimenting, trying stuff and so on; for production you install just the software you need and ship that. So we dockerize the application. But since a container, as we said, is a set of kernel features, if I have for example httpd, is httpd using libraries of the host operating system? No: all libraries for httpd must be contained in the image. So the application within the Docker container uses nothing from the host? Only the kernel; all the dynamically linked libraries must be in the image. So if I use Fedora as a base, it's perfectly usable on CentOS? Yes, exactly. If you know chroot, this is like chroot on steroids. Okay, and how configurable is Docker?
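Before that, to make the dnf point from a moment ago concrete, a quick sketch; the package is just an example, and for real use this belongs in an image build, not a live container:

```shell
# Start an interactive bash in the minimal Fedora file system
docker run -it --rm fedora bash

# Inside the container, dnf works as usual:
#   dnf install -y httpd
```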
Can I allow, for example, the httpd to see other libraries within the host operating system? Usually no. You can allow it to see the host networking, for example; otherwise it is isolated. Well, actually you can: there are concepts like privileged containers and super privileged containers. I don't want to talk much about them, and I guess you will hear about them at the conference, but you can give basically any permission to the container. The reason we package on top of Fedora and always say it's based on Fedora is that Fedora has a pretty good system for installing applications with dependency resolving and so on. You are perfectly able to compile httpd yourself, add all the necessary libraries, put it in the image, and package and ship it, but that's much harder for you and not as effective. Yeah, exactly: Docker is written in Go, so it's statically linked and has everything in the binary; the only dependency is the kernel. Containers as we now know them are a Linux kernel feature; Windows doesn't support the mechanism, but there have been announcements that this year it should be available in Windows Server 2016, so the kernel there will have to provide the same functions. I don't know exactly how Microsoft is going to do that, but they say they will be able to this year. We have Cockpit, done by Red Hat, which is a web application for managing a server; it can manage Docker containers and Kubernetes, so you can just click around. Oh, I know, sorry, now I understand: there is an initiative, I think it's called xdg-app or something like that.
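On the isolation question from above, the switches that were mentioned look roughly like this; use them with care:

```shell
# Share the host's network namespace instead of an isolated one
docker run -it --net=host fedora bash

# Privileged container: lifts most of the isolation and gives
# the container broad access to the host's devices
docker run -it --privileged fedora bash
```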
They are trying to run desktop applications as containers, but it's much harder, as there are many more dependencies on the system and so on; I think some applications are already running that way, and I think they want to move all the applications to containers for portability, easier delivery and so on. Okay, so let's continue with the presentation. I will show you some basic commands, and I hope that by now you have all of your images downloaded, so it will be easy to keep up, because these are very basic commands. So: we now have the image, or we should have it, and we want to instantiate it, to create a container from that image. There is a command for it called docker create, and you give it the name of the image and some parameters for what to do with it. In this case we'd like to instantiate the fedora image, and the process to run inside would be bash. Since we need to interact with bash, there is a switch for that. And, very important, a good practice is to always give your container a name: if you don't, Docker will generate a hex id. Every container is identified by a hex id internally, and as a human-readable way of naming containers we can give them aliases, so it's very good practice to always use the --name switch. To interact with the running container there is the switch -i, and we also want to be connected to a terminal, so we allocate a pseudo-terminal with the -t switch; you can write them together as -it. So it's quite a nice command: docker create -it --name, then the name of the image and the process you want to run inside. So if I try it: okay, this is the internal hex id of the new layer of the image. Has anyone downloaded the image yet? A couple, okay, perfect. So when we issue the command, you see the id
of the container. Let's have a look at it: there is the command docker ps, which lists containers, but the one we just created is not running, so bare docker ps won't show it. There is a switch, -a as in all, and with it you can see the container we just created even though it is not running. So we will proceed to actually starting the new container. The command is docker start, and it is very similar to create: you specify the container you want to start, and you still want to interact with it, because we created the container to run an interactive bash, so don't forget the -i switch. Because the terminal has already been allocated, you only attach to it; you don't need to allocate a new one. And that's it: if you issue this command, the container is running, you are interacting with it through your command line, and the process running in the container is bash, so you can use all the binaries located in the fedora image. Still downloading? A question: is there any problem with multiple versions of the same image running, for example latest in one container and a previous version in another? That is possible, yes. And if you update the image, the existing container doesn't get updated automatically; you need to destroy the container and create a new one. That's why you should keep the logic and the data separate: then you can create new containers with new application versions while reusing the same data. The question was similar to one we heard previously, so we'll do a little drawing. We don't want to go too deep into the technicals, because this is an introduction, but let's have a very short look. When you download an image, it is immutable.
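The create/list/start sequence from above, put together; the name myfedora is just an example:

```shell
docker create -it --name myfedora fedora bash   # prints the container's hex id
docker ps        # running containers only: myfedora is not listed yet
docker ps -a     # -a shows all containers, including created/stopped ones
docker start -i myfedora   # start and attach; the terminal is already allocated
```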
You always have to create a new image for any change, and when you run a container, you create a new layer on top of all the layers which compose the image; so yes, you can run several versions. Okay, we have started a new container, and to remove it, to save the memory, there is a basic remove command, docker rm: you give it the name of the container, and in case the container is still running, you can force the removal with the -f switch, as in usual commands. When you remove a container, what is actually removed is the top layer which was created for it. Okay, so: we pulled the image, we created a container, and then we started it. You can do this in one step: there is a command, docker run, which is actually create and start together, and it has exactly the same switches. As with docker create, you specify the image you want to instantiate and the process to run inside, and of course you give it some nice name so you don't have to refer to it by the hex number. We are using bash, so we want to be able to interact with it. A switch we haven't shown yet is --rm: once you exit the container, it automatically removes the top layer. Okay, just let us check how many of you are still downloading the images. Okay. So, as in the previous example, you have a running container, and I will pass the word to Peter. Okay, so when you have the image downloaded, we can play with it a little bit and look around the content of the container and of the host. Run the fedora image as described here with docker run, so we have the interactive shell, and specify the --rm switch, which is useful when you are just playing around: when you stop the container, Docker will automatically delete it. Note that docker run consists of two commands.
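The removal and the one-step run just described, sketched with the same example name:

```shell
docker rm myfedora        # remove a stopped container (its top layer)
docker rm -f myfedora     # -f forces removal even while it is running

# create + start in one command; --rm removes the top layer on exit
docker run -it --rm --name myfedora fedora bash
```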
Internally, it will create and start the container together. If you earlier created a container with docker create under the same name, you first need to delete the old one with the docker rm command. Now, about installing packages inside a started container: as I said, installing and updating packages should always be done during the image build. When you want to update, you should update the image, create a new image, push it to the registry, and on the node where you run containers you just do docker pull and create a new container from the new version. That's the ideal world; it should be done this way. Okay, so you see I don't have any running container, but I do have my fedora container, so I will remove that one. I create a new one with the command I showed, with bash inside. So I created a container and connected to it, but what I actually did was just start an isolated bash process; that's it. Now I want to install some packages. I'm doing this for exercise purposes; I'm not going to use this image for production. For production you would create a new image with the updates, with the new versions of the packages. So let's say I want a new image based on this one: ideally I should create a new image, something like fedora-docker-1.1, install all the packages, and publish it to the registry. When you are creating images, as I said, you have a Dockerfile containing the commands, and Docker does the same thing internally: it creates a container, runs the dnf install command inside it, commits the changes, and creates a new
layer. So yes, you can do it with this command, but you should do it in the Dockerfile, so it's automatic and repeatable. If you are running Docker containers on only one host, sure, you don't need to push anywhere, but again, good practice is to have a registry running in your company or network, with nodes that only build images and nodes that run containers: on the build nodes you build the images and push them to the registry, and all the other nodes download them from there. I think you can even download the layers with curl. The registry itself is also distributed as a Docker container, without a graphical interface, but there are also services with a graphical interface where you can search. It's actually part of OpenShift too: when you install OpenShift, the registry is there. Is the Dockerfile a text file? Yes, it's a text file. There are commands specific to the Dockerfile: if you want to run some batch command, you use RUN and then the command, and there are some meta commands like name, version, author and so on. So, you said you usually have one host where you build the image and another host where you run it: you create the Dockerfile, build the image on some particular build host, and then the image exists and is not built again; it can be taken as is, shipped to another host, and the application inside it runs immediately. Yeah, exactly, that's it. Here is a super easy Dockerfile; I'll just go through it. I specify FROM, the base image from which I want to create my new image. This can be fedora or centos, as we said, or it can be, as here, apache2, meaning the original image already has Apache installed and I will just add some other
applications, libraries and so on. Adding the applications is just dnf install and so on; a bare update here is considered bad practice, so it shouldn't be there. You install the packages you require and clear the package cache, which is good practice, and at the end, here it is: with CMD we specify the command which should be run by default when the image is started. For fedora there is no default command; it's a general image, so there's nothing to run. But with a specialized image like this one, where I want Apache to run, I specify the command, and when using docker run I just do docker run with the apache2 image and it will start the service inside. Running update in the Dockerfile is also bad practice, so how do you get the latest sources? You update the base image. And the updated base images are always available on Docker Hub; when you publish images there, there's something called automated builds, which means you can set up a watch on your base image, and when the base image changes, your image is automatically rebuilt on the new base. The original base image, like fedora, isn't made with dnf install, because there's nothing there yet; they create an archive with the files, a minimal file system copied in, something like that, I don't know exactly. Yes, we will get to that. If I want my application on a particular version, is it sufficient not to build on the latest version but to specify a precise one? For example, Fedora has versions 22, 23 and so on, so you can specify FROM fedora:23 and it will stay with the latest build of Fedora 23. You don't do this in the Dockerfile; well, there are a couple of considerations.
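Pulling the Dockerfile walkthrough together, a sketch of such a file; the package names, paths and the tag are examples, not the slide's exact content:

```dockerfile
# Pin the base to a release rather than relying on :latest
FROM fedora:23

# Install what you need and clear the package cache in the same layer
RUN dnf install -y httpd && dnf clean all

# Custom configuration
ADD my-httpd.conf /etc/httpd/conf/httpd.conf

# Default process to run when the image is started
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```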
When you are running your application in containers on multiple hosts, there needs to be an overlay network, and for that you need an orchestrator like Kubernetes or OpenShift, which takes care of the networking and IP addresses. Getting information from one container to another is usually done with environment variables: when you link two containers together, one container can expose or inject environment variables into the other, containing the IP address, the port and all the required information, so nothing is hardcoded; in your configuration you just use the variable. All the containers have different IP addresses, and those IP addresses live at the operating system level: the Docker daemon creates a virtual bridge, and every container gets a virtual networking device connected to that bridge. By default the Docker daemon creates a 172.17 network, and all the containers get IP addresses from it. External access? By default, yes, the container has access to the internet. But do you always have to use NAT? You can usually expose a port from the container on the host; this is transparently routed to the container, because with some applications you cannot really use NAT. If I have Apache and I want to allow the world to access port 80, I can expose port 80 of the container on the host, so traffic goes directly in. All right, so we have the required packages installed here; we will be using these commands, but I will be running them. As I said, a Docker container uses a couple of kernel features, and one of those is called namespacing, meaning that for various kernel resources we can create a private namespace for it.
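Looping back to the port exposure described a moment ago, a sketch; the image and container names are illustrative, not from the workshop:

```shell
# Publish container port 80 on host port 80 (transparent routing)
docker run -d --name web -p 80:80 my-apache2-image

# Linking injects the address of one container into another
# as environment variables (DB_PORT_..._TCP_ADDR and friends)
docker run -d --name db my-mariadb-image
docker run -d --name app --link db:db my-app-image
```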
case we have the process, or PID, namespace: when we use the ps command in the container, we see that there's a bash with PID 1, but when we do ps on the host... let me show you. So I have one more process running inside the container, and here I can see it. Docker did not create the container the way a virtual machine is created; it started the application inside the container with the Linux kernel features set up, I created a new process inside, and... this demo is a bit broken, but all I really wanted to show is that when you do ps on the host, you see exactly the same process, just with a different PID. So inside the container, bash thinks it has PID 1, but on the host it's actually different. As opposed to virtual machines, where when you start a VM you usually see just one process per virtual core of the VM, here you see exactly the same processes as in the container.

Another thing, as I mentioned, is the network namespace: Docker will create a virtual network interface on the docker bridge and assign a private IP address, and the container doesn't have access to the host networking by default. Another feature is, for example, the hostname: the container has a different hostname than the host. And so on.

To the question about resources: when I now exit the container, it will be automatically removed, because I specified the --rm switch. And for example, when I specify the --memory switch and say that this container gets a maximum of 256 megabytes of memory, it won't get more. For this, Docker is using cgroups, the control groups kernel feature, where you can limit
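The PID-namespace demo above, reconstructed as commands; illustrative only, it needs a running Docker daemon, and the container name `demo` is made up:

```shell
# In one terminal: start an interactive container; --rm removes it on exit.
docker run -it --rm --name demo fedora bash
# Inside the container, bash sees itself as PID 1:
#   ps -ef

# In another terminal, on the host, the very same process is visible,
# but under its real (different) host PID:
docker top demo
```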
memory, CPU, disk I/O and so on. Yes, you can also specify a CPU weight; that one is kind of complicated, because you define the total share of processor time Docker can use, and then you specify the portion of that processor time each container can consume.

Question: when we exit the container, are all the packages we installed before gone? Yes, they are gone, because I specified the --rm switch. If I don't, and I exit the bash process, the bash process stops, so the container stops, but it stays on the host, stopped, with my modifications. With --rm, everything is gone.

Question: does the container, or the process in the container, know about the memory limit? It doesn't know; it's as if you really had only 256 megabytes of installed memory. It then depends on the application how it handles memory: you need to have failing mallocs handled, right? And there is no swapping inside. Question: say I have ten Docker containers running and I want to limit all of them so they don't consume memory intended for the other containers, but without killing the one application -- maybe slow it down, by swapping or whatever, but keep it working. Because if I limit it like this and mallocs start failing, I need that kind of application to be ready for it; hopefully it is, but I need it to keep working. Well, if an application crashes when there's not enough memory, that's a bug in the application, right? Because, as I said, it's cgroups: when you are running a system with systemd, every process you run can have its resources restricted with cgroups in the systemd unit file, so
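The cgroup limits being discussed map onto docker run switches like these; a sketch that needs a running Docker daemon:

```shell
# Cap the container at 256 MB of RAM; beyond that, mallocs fail or the
# OOM killer steps in, so the application must be able to cope with it:
docker run -it --rm --memory 256m fedora bash

# CPU is a relative weight, not an absolute cap: with the default weight
# being 1024, this container gets roughly a half share under contention:
docker run -it --rm --cpu-shares 512 fedora bash
```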
you can always set it up so that Apache will only ever get a maximum of one gigabyte of RAM, never more. If the application can't handle that, it's a bug in the application; it must handle it. It's as if my notebook physically had only one gigabyte of RAM.

Question: doesn't this rule out a whole class of applications? Well, it might limit the old-world kind of applications, but new, cloud-style applications should work like this: they should consist of plenty of small workers, so they can scale horizontally easily. So it's not actually a problem. If you have some, I don't know, prehistoric Oracle database which can run only as one process and you want to scale it, you need to add more RAM -- so that won't work well in this containerized world. But when your application is plenty of small nodes, it's perfectly fine.

Question: can the container see files from the host? Yes, you can bind mount a volume, so you can make some directories from the host accessible in the container; because by default, there's just the extracted Fedora image we downloaded, and it has nothing in common with the host filesystem.

Question: how do you handle persistent storage? That's a big question, actually; you can do it in plenty of ways. Usually you mount external storage on the host where you're running the containers, and with systems like LVM or btrfs you create snapshots, or you create thin volumes from that storage for the containers, and bind mount them from the host. Docker can also do something similar to virtual machines: you can configure it so that each new container gets a new LVM thin volume.

OK, let's move on a little, as we don't have much time left. You can try this example yourself: install for example stress, a small application for exhausting the resources of your
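Bind mounting a host directory, as just described; the paths here are illustrative, and the commands need a running Docker daemon:

```shell
# Make /srv/www from the host visible inside the container at nginx's
# document root; nothing is copied -- changes made on the host show up
# in the container instantly:
docker run -d --name web -v /srv/www:/usr/share/nginx/html nginx
```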
computer. You can configure it to consume one gigabyte of memory, and on the host you can watch with the systemd-cgtop command, which displays the actual usage of CPU, memory and disk per cgroup. An exercise for home, if you want.

OK, let's move to the more practical part of Docker. Usually, when you want to run an Apache service in a container, you don't want it occupying your terminal; to run the application in the background, simply add the -d switch. Here we have the nginx image, which is a simple web server, and when we start it we don't specify a command, because the image itself already has a command specified. When you create this container and run docker ps, you see that the command specified in the image is exactly that one.

It's good practice to run processes in containers in the foreground, so that when you use a command like docker logs, all the output and logs from the container are easily accessible: you don't need to look for log files inside the container, everything goes to standard output, so it's easy to work with. So docker logs means you will see everything the process wrote out, without attaching a terminal.

Other things you can see in the docker ps output are the ports. A Dockerfile, or a Docker image, can declare exposed ports which the daemon inside is supposed to listen on, but if you don't explicitly publish those ports when creating the container, they won't be reachable. Here I started docker run just with -d, a name and an image; docker ps says that ports are exposed, but when I try to connect to port 80 on localhost, it's blocked.

Question: is there any way to modify a running container? I don't know how much you can modify a running container; you usually configure the container when
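The foreground-logging workflow above, as commands; a sketch that needs a running Docker daemon, with a made-up container name:

```shell
# Run nginx in the background; the image already defines the command:
docker run -d --name web nginx

# The process logs to stdout/stderr, so its output is available without
# entering the container or hunting for log files inside it:
docker logs web

# docker ps shows the image's command and which ports the image exposes:
docker ps
```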
starting it, right? The run command has plenty of options for resources, ports and so on, so you specify exactly what you want, and it's configured during the creation of the container. Once it's running, I'm not sure what is still configurable.

Question: can you run multiple services on the same port? No, you can't. What's done in this example is, as I showed you, that the container has a local IP address and the ports are listening on that address, so from the local host I can type the container's private IP address, colon, port number, and it will work, because the bridge is on my host. But to expose ports to the world, so a user on the internet can connect to the container, I need to publish the port on the host, on all interfaces.

There are two switches for that. One is capital -P, which publishes all exposed ports of the container on random high ports of the host; so when I connect to localhost:32769, I get the output from port 80 inside the container. This also shows up in the docker ps output. The other is lowercase -p, where I say that I want exactly port 80 from the container to be available on the host as port 80. So again, here we can see that port 80 is exposed, and now it is reachable. This is how networking is usually configured. Any questions?

Question: can I connect to a running container? For example, I started the nginx process running in the foreground, and later I want to see the configuration or some files inside the container. Yes; I'll skip a few slides, because we are running out of time, and come back to that.

You can link containers together. In the previous example I created a MariaDB container; we don't expose any ports, so it's
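The two publishing switches compared; the port numbers match the talk, the container names are made up, and a running Docker daemon is assumed:

```shell
# Capital -P: publish every exposed port on a random high host port
# (e.g. container port 80 might land on host port 32769 -- check docker ps):
docker run -d --name web1 -P nginx
docker ps

# Lowercase -p: publish container port 80 as exactly host port 80,
# on all interfaces, so the outside world can reach it:
docker run -d --name web2 -p 80:80 nginx
```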
only local, and I want to add a WordPress container using that MariaDB. So I use the --link option, which enables the two containers to communicate on the private interface, and it also injects environment variables. Here is the name of the MariaDB container, and I say I want to link it under the alias mysql; then in my WordPress container there will be variables carrying the MySQL IP address and port, so in WordPress I can use those variables to connect to the MariaDB container.

To the question about docker attach: I'm not sure if attach does exactly what he's asking; I think it only connects to the output. But if you want to start a new process inside the container, in the same group as the container's processes, like a bash, you can use exec. You want the bash process available in the container? Yes, it will start bash in the same cgroups as WordPress, attach it to the same network namespace and so on; consequently those processes will see each other. This is mostly for debugging. Any questions?

OK, we have five minutes left. The material is on GitHub, under josef karrashek / docker run around; you can get there from the schedule page.

Also, in a similar way as you can expose ports, you can bind mount a volume from the host into the container, as I said earlier. It is actually a bind mount, so the content is not copied into the container; a bind mount link is created, and when you change the content of the bind-mounted volume on the host, it is instantly changed in the container too. So if you want to serve content from the host in an nginx container, you bind mount the volume and it will serve the content from the host. When there is a new version
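The MariaDB/WordPress linking and the debugging exec, sketched out; the image names are the official Hub ones, the password and container names are made up, and a running Docker daemon is assumed:

```shell
# Database container; no ports published, so it is only reachable on the bridge:
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mariadb

# Link it into the WordPress container under the alias "mysql"; the link
# injects environment variables (address, port) that the app can read:
docker run -d --name blog --link db:mysql -p 80:80 wordpress

# For debugging, start an extra bash in the same namespaces and cgroups:
docker exec -it blog bash
```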
of nginx or anything, you download the updated image, kill the old nginx container and create a new one with the same configuration; the data stays and the software is updated. It's similar for a database: the database should have a bind-mounted data volume with the database files, so again, when there's a new version of MariaDB, you stop the old container, download the new one, start it with the old configuration and continue working.

OK, so we are going to wrap up. If you have any further questions, we have about ten minutes for them, and you can find us during the conference -- ask us anytime. There's some swag for good questions; I saw that you were active.
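The update pattern described above -- data on a volume, software replaced -- might look like this; the names, paths and password are illustrative, and a running Docker daemon is assumed:

```shell
# Keep the database files on a bind-mounted host directory:
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
    -v /srv/mysql:/var/lib/mysql mariadb

# To update MariaDB: stop and remove the old container, pull the new
# image, and start a fresh container with the same configuration --
# the data in /srv/mysql survives:
docker stop db && docker rm db
docker pull mariadb
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
    -v /srv/mysql:/var/lib/mysql mariadb
```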