Okay, I'd like to ask you to take your seats; we will be starting in a minute. I just have a few things to remind you of. Today you can sign up and vote for the lightning talks; the table is right outside of this room. Also, if you feel like tweeting or writing a blog post, please do, and use the hashtag #devconfcz or #definefuture. When you are coming through the rooms, please close the doors gently, don't slam them, and respect the "full" sign. That's all from my side. Now please warmly welcome Honza Horák.

Let's see whether the mic works. How are you doing in the back? Great. Good morning. It's great to see quite a lot of people, maybe a little fewer than at the previous talk, but still great for yet another talk about containers. I'm still surprised you aren't fed up with containers yet.

So what will this talk be about? You have already heard how to use Docker for testing applications, how to build Docker images, what to choose, what the history of containers is; all of that you already know. I think some of you might have already tried to write your own Dockerfiles, and maybe in a few weeks you will run them on some platform as a service like OpenShift. That is what this talk is about: helping you with that particular task.

So, I'm Honza Horák, I work at Red Hat, and the reason I'm talking about this topic is that this was exactly our task during the last year. We had a couple of RPM packages, actually a set of Software Collections, which included quite a nice set of development tools in fairly recent versions. We wanted to put this nice set of packages into containers, and, more importantly, these containers were supposed to run nicely in OpenShift, or generally in a platform as a service; it could even be something different. On the other hand, they were also supposed to work nicely on an ordinary machine. And it turned out to be kind of challenging. So this talk is a set of things we learned along the way, and I would like to share them with you, because it's not necessary to fail the same way we did, right? If you remember some of the tips, I'm sure you will fail less than we did.

The talk will cover several topics. First, some general stuff. Then we will look at how to create a database container. Then we will look at how to create a Python container, as an example of an application runtime container that is used for building your applications. Then I will give a few tips on how to build containers based on Software Collections, because it's not always that easy, and we will also look at which nice containers are already out there.

This talk won't cover the basics: how to run containers, what containers are about, the technical details underneath. That was probably already covered, or will be covered, by other talks, so you will find that information there. And this is also not about OpenShift itself; I won't show you any examples of running the containers in OpenShift, but you can see such examples in other talks as well.

So, I said this is not about basics, but some very, very basic things about containers will still be mentioned. This is how I imagine containers when I hear the word "containers"; I'm not sure whether you imagine something similar.
And I only recently realized that there actually are some similarities between Linux containers and these kinds of containers: when we want to be ecological, we put the right stuff into the right container, and we put only one kind of stuff into each container. In other words, we use these containers as single-purpose things, which is how microservices are supposed to be designed. So the first thing we should care about is what the content inside is, and also that we design the applications in containers as microservices. Of course, if we run the same application in a virtual machine, the usual way, it can be a microservice too. But what is really nice about containers is the performance, because we don't have that extra layer of another kernel and a hypervisor running, so it's much, much more efficient. On the other hand, as you have surely heard several times already, there is a security risk: since we don't have that extra layer, it is entirely possible to affect the host system or other containers if someone finds an issue in the kernel. I won't go into details, I'm not the right person for that, but this is really necessary to keep in mind if you are developing containers.

This is one of the tweets from yesterday, and it proves that it really matters what is inside. If there is only one thing you remember from this talk, please remember that the content inside the container really matters. And this is one of the first tips I can give you: look for content that you really trust. When you are putting something into a container, even if you know what it is, it should be something you really trust. If we are speaking about RPMs, you should be sure that these RPMs come from a reliable source and that they are signed; you should check it.

How can we build a container? The easiest way, shown on the slide, is to pull the image, start the container, run the commands you want inside it, and then run docker commit, which creates a Docker image. Well, yeah, it works, but don't do this. If you want to create Docker images, or container images in other formats, use recipes and reproducible builds, because this example isn't reproducible at all. In the Docker world, this is what Dockerfiles are for. So this example does the same as the previous one: it creates a file with a nice greeting, and we build it using docker build. Nice.
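To make that concrete, here is a rough sketch of the kind of minimal Dockerfile the slide describes; the file path and greeting text are made up for illustration, not taken from the slides:

    FROM rhel7
    # bake a file with a greeting into the image
    RUN echo "Hello DevConf" > /opt/greeting.txt
    # print the greeting when the container is run
    CMD ["cat", "/opt/greeting.txt"]

Built with something like "docker build -t greeting .", this gives the same result as the docker commit approach, but anyone can rebuild it from the recipe.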
So let's move on to some more interesting stuff. PostgreSQL is used here as an example of a database that we want to put into a container, but you can imagine any other database, because it would work pretty much the same way. When we want to create the Docker image and deliver it to someone, we start with a Dockerfile again: we start from some nice base image, which in this case might be rhel7, and we install the RPM. Quite easy so far. And again, docker build. We see that yum is running, there are layers created for every command, and the build is successful. The problem with this example, though, is that yum creates a cache inside the container, and we don't want to distribute these caches to customers or users, right?

So we need to take care of the container size in this sense as well; in general we try to make the containers as small as possible. In this case, what we can do is use some options for the yum command and run yum clean all at the end. A question for you: why is this yum clean all appended with &&, and not run as a separate RUN command? Anybody? Right, every RUN command creates another layer, so if we did it in a separate RUN command, it wouldn't make the image smaller; it would actually be a bit bigger. So this is the right way to do it.

Another thing: when we build this Dockerfile again, we will see slightly different output, because Docker uses a caching mechanism by default. The yum command that already passed before will be reused, because Docker thinks: the command is the same, so the output will be the same. But it doesn't know that a security update for PostgreSQL came out yesterday, right? So what could happen is that you want to update your container, you build the same Dockerfile again after the security update of PostgreSQL is out, but the container will still contain the old bits. This is because of the cache. The easy fix, which Tomáš was speaking about yesterday, is using --no-cache=true; that is one of the options he mentioned, and there are more ways to avoid caching. So please be aware of this and keep it in mind. And of course there are services for it; you can use tools like the OpenShift Build Service. I hope it was mentioned in Tomáš's presentation, I'm not sure now, so you may have to find out more about this service yourselves. It's an implementation that should help you build Docker images in a correct way.

Okay, do you think that's all? Are we ready? We have PostgreSQL in a container, right, so we should be almost done. Well, we are not, because there are simply a couple more things that we need to do. One thing is related to security, so here is an attempt at a joke, in the form of a question: does anybody know what is small, green, and very, very dangerous? It's a frog with root access. And why am I saying this here? It's related to the earlier slide where I was speaking about the possibility of breaking through the kernel to the host system or to other containers. If you are root in the container, it's still not very safe. Well, there are user namespaces in Docker since yesterday, but it's still better not to be root inside the container. And this is quite an easy fix: you change the user inside the Dockerfile. PostgreSQL, like most databases, is already prepared for running as a non-root user, so it works pretty well. There are also some other environment variables set in the Dockerfile; this is just a tip, something to keep in mind. And yeah, all of that makes the frog not so dangerous.
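Putting the pieces from this part together, a hedged sketch of what such a Dockerfile might look like at this stage; the package name, environment variable and paths are illustrative rather than copied from the slides:

    FROM rhel7
    # install the server and clean the yum cache in the same layer,
    # so the cache never ends up in any layer of the image
    RUN yum install -y postgresql-server && yum clean all
    # document important defaults through environment variables
    ENV PGDATA=/var/lib/pgsql/data
    # the usual PostgreSQL port, so the image behaves like a microservice
    EXPOSE 5432
    # drop root privileges inside the container
    USER postgres

The only thing still missing is telling Docker what should actually run by default, which is what comes next.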
About those environment variables: they were a preparation for the next two lines. We don't want to have just the PostgreSQL bits inside the image; we want PostgreSQL to be a microservice. Once a user downloads the image and runs the container, they want PostgreSQL to actually be running, right? So we want to make this happen inside the container, inside the Dockerfile, and that is what the CMD instruction is for. It runs a script which is not part of the PostgreSQL RPM; we need to add it into the container ourselves. It's an arbitrary script, and I will show you how it looks right after another tip: design the containers as microservices, as I said.

The script looks like this. A database needs to have its data directory prepared; PostgreSQL needs the directory properly initialized to run correctly, and that is done by the initdb command. So when we run the container, initdb prepares the data directory. The script also writes some configuration, for example to listen on all addresses. And finally there is exec postgres. Why exec? It's a good practice we found along the way: in order to pass signals properly to the process, it's better than forking from bash. So another tip for you: once you are done preparing the container process, use exec instead of a plain command, which would fork.

Okay, and we are ready to build again, and we can run it. The -p option, which I think you already saw in some other presentation, maps a port inside the container to a port on the host. We also give the container a name that is shorter than the long hash. When we want to connect to the running PostgreSQL container, we need to know its IP address, which we can find like this. And we see that PostgreSQL is asking us for a password. So we are almost there, except that we don't know what the password is, right? Or does anybody know? Nobody can know, because there is no default password, and that is by design. Please don't use default passwords, because users are lazy; they won't change them.

We can do something like this instead. We can configure pg_hba.conf, which controls access to the database, and this part is again part of the start script. It starts PostgreSQL locally inside the container, the database itself, but without it being reachable from outside; it changes the password for the admin user, which is called postgres in PostgreSQL, based on an environment variable; and it stops the process in order to start it properly again. Now we are able to set the password, because we know what the password is. And I see right now that there is a small mistake here: we should have the -e option to specify the environment variables. It is there in some further examples; it's just missing here. But you see that it works.

Okay, how do we configure the database further? We already saw one option, configuring the default password for the admin user. We can do more, for example setting the maximum number of connections to the database, and it's done in quite a similar way: we use an environment variable and write its value into the configuration file. Then we can run a command like this, setting a lot of environment variables; you see what I meant by the admin password environment variable before, and you see we can also pass variables that specify which database to create initially and what password to use for a regular user, called guestbook here. This is actually the way we configure the containers for OpenShift, because if you did something different, like bind-mounting a configuration file into the container, it wouldn't scale well in Kubernetes. That was one thing we really discussed a lot with the OpenShift guys, and this is the approach we chose in the end: for Kubernetes, it's much better to configure the services using environment variables than by bind-mounting configuration files.
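A minimal sketch of what such a start script could look like, assuming an environment variable named POSTGRESQL_ADMIN_PASSWORD and PGDATA pointing at the data directory; the real images use their own variable names and do a lot more checking:

    #!/bin/bash
    set -e

    # initialize the data directory only on the very first run
    if [ ! -f "$PGDATA/PG_VERSION" ]; then
        initdb -D "$PGDATA"
        # listen on all addresses and allow password-based remote access
        echo "listen_addresses = '*'" >> "$PGDATA/postgresql.conf"
        echo "host all all 0.0.0.0/0 md5" >> "$PGDATA/pg_hba.conf"

        # start the server locally, set the admin password, stop it again
        pg_ctl -D "$PGDATA" -w start
        psql -c "ALTER USER postgres WITH PASSWORD '$POSTGRESQL_ADMIN_PASSWORD';"
        pg_ctl -D "$PGDATA" -m fast -w stop
    fi

    # replace the shell with the server, so signals reach it directly
    exec postgres -D "$PGDATA"

Running the result would then look roughly like "docker run -d --name mypostgres -p 5432:5432 -e POSTGRESQL_ADMIN_PASSWORD=secret mypostgres-image", with the -e option that was missing on the slide.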
And since we don't want users to have a different experience inside and outside of OpenShift, we also decided to support only the common options, which may seem a bit odd. So the obvious question is: what if I need to change some other option? The answer right now is: if you want to do something special with a container that is prepared for a general use case, you should create your own thin layer on top of it. Docker, as you know, works in layers, so you use the good, but only general, image as a base and create a thin layer on top of it.

One thing you already saw, but which I didn't talk about, is bind-mounting the data directory of the database, because you don't want to lose the data once the container stops or breaks. It can simply happen that the container crashes, and you want the data to still be somewhere. So this is what we should do with the data of the database. The point here is that when users bind-mount, they need to know the path inside the container where their directory should be mounted. So you should consider using the paths that are common for PostgreSQL; on a regular system it's usually /var/lib/pgsql/data, so why not use the same path inside the container? And that brings me to another tip that doesn't have an example: as I was saying about the thin layers, a thin layer is actually extending the container, so users should have an easy way to do that. Just think about what it means to extend your container. That's it for the PostgreSQL, or generally the database, container.

Let's look at... yeah, a question. Right, so the question is about the ownership of the files inside and outside the container, and how to make that work. That is the responsibility of the user who is running the container, so that the data files have the proper owner. It's also connected to SELinux: the files need to have the proper SELinux labels as well, it's not only about ownership. And since until yesterday there were no user namespaces, we need to keep the UIDs the same inside and outside the container. So this is a responsibility of whoever runs the container.

Okay, the Python container is an example of what we call build containers. What I mean by a build container is a container that we use to build other applications; we can also use the term builder image. So let's try again from scratch, creating the Dockerfile from the base rhel7 image and installing Python together with python-pip. Okay, I should probably have asked before I showed the fix, but hopefully some of you managed to spot the issue: there was no yum clean all at the end. So this is how we do it correctly; you already know why. And again, running it, we see that the build succeeded. But we also see, and it can easily be overlooked, that yum had a problem with python-pip: it's not available on RHEL 7. So it's kind of weird that yum can't install a package but the build is still successful. Please be aware of that and don't trust yum too much; look at the log and see what is actually happening.
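For reference, a hedged reconstruction of roughly what that builder Dockerfile looks like at this point; the exact package list is illustrative:

    FROM rhel7
    # install the Python runtime and pip, cleaning the cache in the same layer
    RUN yum install -y python python-pip && yum clean all

    # caveat: on a plain RHEL 7 base, python-pip is not available, yet
    # "yum install -y" still exits successfully as long as at least one of
    # the packages installs, so the docker build passes; read the build log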
Or we can use a trick that we started to use in our images: we check that the packages really got installed, using rpm -V. That is also a way to go.

Okay, so the Python runtime by itself is not a microservice; we can't do much with just Python in the container, right? So we have Python in the container, and we could say we are almost ready: we ship this to a user and say, okay, this is it, use it. And the user will ask, okay, how do I get my application inside? This may be one of the options: we now use our previously created image as the base image, we add some installation script, we run that installation script, and we set the default command to run the application itself. Let's imagine that this script installs some big guestbook application, the application will be located here, and we set this as the default command. Yeah, it works. This is the script, actually, and after we build it, it works; there really isn't anything wrong with it. What is wrong is that every user would have to do something like this, and while it's not that complicated, we can make it better.

So let's try to make the user a bit more efficient, and for this purpose there is a tool called source-to-image. It was developed by the OpenShift people specifically for this particular case: to get the source into the image and produce, in the end, another layered image with your application already inside. That is the definition; I hope you managed to read it, but let's rather take a look at the example. You can install the package called source-to-image, which includes the s2i binary, and with this one command we do pretty much the same as before in, how many lines was it, maybe ten or fifteen, I don't know. What it does is: it downloads, or just copies, the application from this path (it can also be a git repository), it uses this Python 3.4 image as a base, and it creates another image called guestbook.

Now, how does it work inside? Because that is probably what you are wondering about right now. In the base image itself we need some support for source-to-image, and that support basically means having just two scripts. We use bash scripts; it can probably be whatever language you want, but bash is totally fine here. The first script, assemble, is used during the creation of the layered image with the application, and the second script, run, is used as the main command, the default command to run the container.

Let's see what the simplest example might look like. This is the assemble script; it just copies the application, which at this point is mounted at /tmp/src. And we also make one, I would say, clever decision: if there is a requirements.txt, which in the Python world usually means a list of PyPI packages, we install those requirements. The run script can also be much, much simpler than this, but in this case I want to show you how we can be a bit clever: when a Django application is installed, we can run it ourselves, so the user doesn't have to specify it manually, because Django applications are all run in a similar way, so we can do it for them. And in practice, in the end, we can support more frameworks. So this is another tip I would like to give you: focus on the most common frameworks and try to support them. And as I was speaking about microservices, this is also something we should care about in this kind of image: exposing some well-known ports, for example 8080 for Django, which is quite a usual port. And that is exactly because we want to create microservices.
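A very rough sketch of what such a pair of s2i scripts could look like; the real images do more (error handling, permissions), and the Django detection below is just the simple heuristic described above. The assemble script might be:

    #!/bin/bash
    # assemble: runs while the layered application image is being built
    cp -r /tmp/src/. ./
    # if the application ships a requirements.txt, install the PyPI packages
    if [ -f requirements.txt ]; then
        pip install -r requirements.txt
    fi

and the run script:

    #!/bin/bash
    # run: the default command of the resulting application image
    if [ -f manage.py ]; then
        # looks like a Django project, so start it on the exposed port
        exec python manage.py runserver 0.0.0.0:8080
    else
        # fall back to some conventional entry script of the application
        exec python app.py
    fi

The build itself then becomes a one-liner along the lines of "s2i build ./guestbook-src my-python-builder guestbook", and "docker run guestbook" starts the result; the image and directory names here are made up.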
One special requirement from OpenShift was that the containers should run as an arbitrary user. So, for example, I should be able to run the container as user 5006 and it should work. It was not that easy to do, but it is possible, of course, and this is how we do it: we need to change the ownership of some files to a specific UID, and, as you can see here, we set the group ID to zero, because that is what Docker uses if you specify a UID without a group ID. This is how you can manage to run the container with any user ID, and it works. Of course, you won't be able to be root inside, but that's not what you want anyway; you can already work as a non-root user at this point. Another example shows how source-to-image works in practice with GitHub, as I already mentioned.

And now a little test for you: what really matters in the container world? Right, the content, thanks.

Okay, now we know how to build containers, and as I said, we need good content for them, so where do we get the nice bits? I already mentioned it at the beginning: Software Collections already deliver nice, tested content, and they even include pip for RHEL 7 and RHEL 6. So why not use Software Collections? Installing the RPMs is quite easy; Software Collections just use somewhat odd names. As you can see, the packages have a prefix, but that doesn't change anything in the Dockerfile. What is special about Software Collections is that when you want to use, for example, the Python binary in a particular version, you need this trick, scl enable, to change the environment. That is what you would do in a non-container world. In the container world, we are able to do something like this: we can hide the fact that the container includes a software collection, so you won't be able to tell which containers include software collections and which don't. That is quite nice, because what people complain about with Software Collections is exactly this odd scl enable step that has to be done, and the odd package names. So don't be afraid to combine these two technologies; it works quite nicely. We get content that is tested once, and we can also have multiple versions of a particular package, or image, in containers. Usually you don't need that, but sometimes you might: in the Python example, you still need Python 2.7 inside the container in order to use yum, so when you want Python 3.5 as well, Software Collections can be quite useful.

So how do we make a software collection enabled by default? This was not an easy task, and it doesn't work in 100% of cases, only in 99%, which is still quite good. This is the example of how it runs, and here is how it is done: we play a bit with the bash environment variables, and we can then unset them again. This is mostly just for reference; I won't talk about it much because we are almost out of time, but if anybody wants to do the same thing, changing the environment using a command, this is how it can be done. The entrypoint we use is very, very simple, but we need it for the collections. In general, though, don't overuse ENTRYPOINT; as I see in various containers on Docker Hub, people use ENTRYPOINT a lot, and it's not good, because it can break things for the user. Well, there are reasons not to use it.
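For reference, a hedged sketch of the kind of trick this relies on, roughly the way the Software Collections images do it; the paths, the collection name and the scl_enable file name are illustrative here:

    # in the Dockerfile: make every shell source the enablement file
    ENV BASH_ENV=/opt/app-root/etc/scl_enable \
        ENV=/opt/app-root/etc/scl_enable \
        PROMPT_COMMAND=". /opt/app-root/etc/scl_enable"

The scl_enable file itself then enables the collection and unsets those variables again so they are not evaluated repeatedly:

    # /opt/app-root/etc/scl_enable
    unset BASH_ENV PROMPT_COMMAND ENV
    source scl_source enable rh-python35

And the very simple entrypoint mentioned above can be as small as a bash script that just does exec "$@", so that any command passed to the container goes through bash and picks up the collection from BASH_ENV.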
So what containers do we have out there right now? We have containers for CentOS on Docker Hub, and containers for RHEL, Atomic, or OpenShift in the Red Hat registry. We focused on making these containers look the same; for example, the databases use similar options, and as I said, we support only specific use cases. This is a set of containers based on Software Collections in the Red Hat registry, as you see. This is again a similar set of images, but available for everyone on Docker Hub, and this is a set of images based on CentOS. So you can try them right now, if the internet is working here, or in the afternoon. And what I would really recommend: even if you want to build your containers on top of some other container, use containers from a reliable provider like Red Hat or CentOS. You will probably find it a bit difficult to find those reliable containers on Docker Hub, so you need to search a bit. And if you want to push some Dockerfiles or images to Docker Hub yourself, think about what the name says about the container, because the name is the only thing users see at first.

Okay, this part is not very important and I don't have much time for it, so just a few things about how to name the containers; not very interesting. What I call the API of the container is what users see from the outside, for example the paths. I already mentioned it: pick the paths correctly, use the paths that users expect to work with from outside the container. Also use some metadata for the containers; for example, in OpenShift you can use this metadata to make the containers easy to find. So take note of that. And don't forget about security. At this point I just recommend these links, or both of them: the coloring book by Dan Walsh and Máirín Duffy. It's really fun, you can really color it, and there is a lot to learn in it.

Okay, about complex applications: I realized that this is what Václav Pavlín was talking about yesterday, so I will just skip it and tell you that if you weren't here for his presentation, you should probably download the recording. Yeah, it was the one with the electricity analogy, just to refresh your memory. And that brings me to the end. Since we are out of time: I'm here during the whole of today and tomorrow, so talk to me about everything you want to know about containers, building containers for OpenShift, or Software Collections. I'm around, and you can see my email here. Thank you.