Have any of you seen Chris's talk about containers earlier today? There's a lot of overlap between our two talks, and there's a talk about static code analysis happening right now, so if you're interested in that instead, by all means go ahead and get something out of that session. Does anybody here use Docker in their day-to-day work? I'm partly asking so everyone can head out and I can go for a nap or something. Anyone? I just wanted to be honest with you: there is a lot of overlap with Chris's talk. Unfortunately, this talk wasn't originally part of the schedule, so that happens sometimes. I'd rather you at least be aware of that, and if there's something else you want to attend, by all means, do so. Thank you to those who stayed. I'm not sure if I should thank you, because that ruins my nap, but: containers. Containers have been around for a few years now, and by now all of you have probably at least heard of Docker. If you're here, I'm assuming you don't use it daily; that's where I'm starting from. So everybody says containers are wonderful and will solve all your problems, and yet I didn't use them until very recently. And why is that? Well, for me it felt a little too complex. There's this learning curve I didn't really want to deal with, and I just didn't see the benefits. I'm sure I'd heard that it fixes the whole "works on my machine" syndrome and that type of thing, but I work alone most of the time, so what's the point, right? Until I found some actual real-world scenarios and saw some really practical examples, it didn't make any sense to me. So that's what I want to show you today: a few very practical examples of how you can use containers to make your life as a software developer a little bit easier.
So that's what I'm going to go through: some practical examples of how to use Docker in those cases. Most of this will be done on the command line. This talk is Containers for Software Developers, and since we developers spend our time in the terminal, we might as well start from there. So hi, I'm Joel. You might recognize me up there; that's an old picture, so don't worry. I work as a developer advocate for Red Hat OpenShift, which is a Kubernetes distribution made by Red Hat. If you want to talk about it, I'd be more than happy to. I'm based in Ottawa, Canada; you heard that this morning anyway. And I love Twitter, so find me there: joel, two underscores, lord. Oh, I have stickers. I forgot to take them out, but I'll put them out here somewhere. You might have seen them on Twitter if you follow PHP UK: password hygiene stickers. Come grab some on your way out. Let's talk about containers. So why should you use containers? The main reason people will give you is usually that it's the same environment as your production environment. Because you're using a container, everything is packaged as one unit, so you basically have the exact same environment from one setup to the next. That makes it a lot easier to deploy things. But containers can also be very useful for lots of other things. Onboarding, for example, is a great use case. When a new member joins your team, you'll spend their first few days installing a LAMP stack on their machine, and then they won't be running the same version of PHP as you do, because they just installed it and you installed yours two years ago, and you end up with all those differences between your environments. Containers can be very useful for that: if you have a containerized environment, new team members can just do a docker pull and they're ready to go; they can actually start contributing on day one. It's also very, very useful for open-source contributions.
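As a tiny sketch of that onboarding idea (the image name here is purely hypothetical; it would be whatever image your team publishes):

```shell
# New teammate, day one: fetch the team's pre-built dev environment
# and run it -- no LAMP-stack installation required.
docker pull myteam/dev-environment
docker run -d -p 8080:80 myteam/dev-environment
```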
I strongly believe in open source, and, well, I work at Red Hat, so containers can be very valuable there. You've probably experienced this at some point in your career: you're using an open-source library, there's a tiny little thing that bugs you, so you want to go into the source code and change it. So you do a git clone. Now you have the source code, but you don't know where to start. You don't have anything set up. You need to set up a database to connect to and all that stuff, and it just takes forever; it's a pain just to make that tiny little change you wanted to make. But if you containerize your environment, people who want to contribute to your project can do a docker pull and have the whole environment set up for them. They can try that little change, see if it works, and then push everything back. It makes it very, very easy for people to contribute to your project. Joind.in is one of those open-source projects, the one you're using to rate the speakers today: you can just download everything, run it in a containerized environment, you're ready to go very easily, and you can submit PRs right there. Containers are also very useful for testing, especially if you have a team of QA people doing manual testing. In that case, there's no more fighting over whether this works on my machine or not. You can actually ship everything; they run the exact same thing, the exact same environment, the exact same setup as you do, so there's no more arguing. Almost. But containers won't solve every problem you might have. They won't really help you with networking or DNS routing; there's a little bit of tweaking you'll need to do there. There are some tools that can help with that, but really, if you want to go down that road, what you need is to look into Kubernetes. Same thing with scaling.
Containers by themselves won't help with scaling; that would be the job of Kubernetes, for example. By the way, there's also a tutorial about Kubernetes happening right now. But Kubernetes will be there to help you with scaling. So what is a container, exactly? A container is a standard unit of software that packages up code and all of its dependencies, so the application runs quickly and reliably from one computing environment to another. It really takes everything you need to run your software, packages all of that up, and that's the unit you'll be shipping. It's a lightweight, standalone, executable package of software that includes everything needed to run an application: your code, runtimes, system tools, system libraries, and all the settings you need to run that application. That's why, if you ship everything to your QA team, they'll be running the exact same code, runtime, system tools, system libraries, and settings as you are. It's also a disposable unit: once your container is done doing whatever it was doing, it just destroys itself, and that's part of the raison d'être of a container. It starts, does something, and then gracefully dies, destroying everything it created. It will destroy all the extra files it created, and it will also destroy any data you've entered into a database. That can be very useful for testing, automated testing for example: you can inject a bunch of junk into your database and it doesn't matter, because everything gets destroyed, and next time you run your tests you start with a fresh new database again. But it's also something you need to keep in mind: if you actually want to keep some of that data, you have to make sure you map a volume so that the data is stored somewhere outside of the container. Just like a VM, right?
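That volume-mapping point can be sketched in two commands (volume name and image are just examples; `/var/lib/mysql` is where the official mysql image keeps its data):

```shell
# Create a named volume, then mount it where MySQL stores its data.
# Data written there now outlives the container itself.
docker volume create mydata
docker run -d --rm -e MYSQL_ROOT_PASSWORD=secret \
  -v mydata:/var/lib/mysql \
  mysql:5.7
```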
It works the same way, it's containerized, it works in its own bubble. Nope, not the same. I won't go into too much detail about how a container actually works, mainly because it was explained this morning; there was a great talk at 10 a.m. if you want to learn more about that. But basically, the architecture with VMs looks like this: you have your infrastructure, and you have an operating system running on that infrastructure. So you have an operating system running on your machine, macOS on mine for example, and then you have a hypervisor. I'm not quite sure what a hypervisor is, but the name is so cool that I had to put it up there. It's something to do with sharing resources: it shares the different resources on your machine, the memory and so on, with all the VMs running on that machine. Now, each VM also has its own operating system, is only aware of what's going on inside that VM, and is only aware of the resources allocated to that specific VM. So you can have multiple VMs running on a single machine, on a single operating system, but you can already see there's a lot of overhead, and if you need to start a VM, it'll take a few seconds to boot that extra operating system. Containers are a little different, in the sense that they run directly on, or as part of, your operating system if you're running Linux; otherwise you use a tool like Docker that acts as a daemon. So you have your infrastructure, your operating system, and then this tiny Docker layer, and it just spins up new processes running your application.
When you're running everything directly on Linux, you don't even need that layer; you can use a tool like Podman that allocates the resources directly to your processes, and that makes it much, much faster than a VM. I've been talking about containers, so where does Docker fit in? Docker is just one of various tools to run containers. It's the one most developers are familiar with; if you're actually using Linux, you might have heard of Podman, but Docker is just one tool to run containers. Containers should all follow the OCI, the Open Container Initiative standard, so they should all use the same type of structure. In theory, you can use any tool to run those containers; Podman is the alternative on Linux. So that's containers in a nutshell. I have a few links at the end, including a great resource that explains in detail how they work, if you're interested in the internals. Let's look at how to use them in practice. This is a PHP conference; I do a lot more JavaScript nowadays, but I did a lot of PHP up until a few years ago. One of my colleagues knows that, and he once came to me and said, hey, I'm running into this issue with my PHP code and I can't figure it out, can you take a look at it? And I didn't want to install anything, because, like I said, it's been a few years since I've done PHP, but I still have nightmares about installing MAMP on my laptop. So really, I didn't want to go through that hassle. But then I remembered: oh yeah, I've heard about containers. That was one of the first times I actually used a container to help with an actual problem that I had. So here we are inside a terminal. It's the first time I'm running this demo on this machine, so I'm not sure it's actually going to work, but let's see.
So what did I need in that case? Well, normally I'd have to start an Apache server: install Apache and PHP and all that stuff. But I'd heard that you can just use docker run, run it in detached mode, give it an image like php:7.1-apache, and that's it, my server is started on localhost. Okay, so it doesn't quite work: connection refused on port 80. That's because you usually cannot run something on port 80 on your machine; you may already have a server up and running there. I'll show it in more detail in a few minutes so you actually believe me when you see that it's a real server, but that's all you need. You just do a docker run and you've got a server running. It's the first command you'll want to use, and you just give it the name of an image. I've used php:7.1-apache in this case; there's a bunch of different images that are already pre-built for you. Take a look at hub.docker.com, everything is there. And look for the official images, that's very important; you'll see them very clearly marked as official. Typically they have the name of the image, then a colon and a version number, and sometimes a dash with a variant. So you can have plain php, you can have the PHP CLI variant, and I think there's a PHP variant for use with Nginx, or the one that runs with Apache. That's why I've used this one: I wanted PHP 7.1, and I wanted the whole Apache server included with it. Now, I've shown you that it didn't work on port 80, so I need to map this to a different port on my machine. I'll use the -p flag here, and I'll tell Docker to map port 8080 on my machine to port 80 inside the container. Apache runs on port 80, and I'll be able to access that port 80 through a tunnel on port 8080 on my machine. So now we have an Apache server. It runs, but it doesn't serve any files, right?
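The minimal version of that command (image tag as published on Docker Hub) would look something like this:

```shell
# Start Apache + PHP 7.1; -p maps host port 8080 to port 80 in the container,
# since port 80 on the host is usually taken or privileged.
docker run -p 8080:80 php:7.1-apache
```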
So what we'll do is mount a volume. We can use the -v option, which mounts a volume: we tell it to take the current working directory on my machine, the one from which I've started this command, and map it to /var/www/html inside the container. It will effectively overwrite that directory inside the container with whatever I have in this folder. So now I have this big command, and I can run an Apache server that is pre-configured with PHP, serves the files from my current directory, and maps all of this to port 8080 so I can actually use it. There are a few other useful flags you might want to use with docker run. -d lets you run a container in detached mode; if you don't run it detached, you'll see all the logs printed in real time. --rm is a useful flag just to make sure everything is removed once the container is stopped, or else Docker starts leaving a bunch of junk around. That's one thing you have to be aware of when using Docker: you will at some point run out of disk space. That's never happened to me before, of course. It leaves a bunch of cached images and things on your machine, so you have to be careful about that. Using --rm at least ensures the container is removed once it's stopped. You can also use --name, which is very useful if you're scripting, so that you can give your container a name and refer to that container later.
So let's start my container. I now have this command, but we've seen each part one at a time, so it shouldn't be too scary anymore. We've got our docker run; we run it in detached mode; we make sure the container is removed once it's done executing; we give it a name, myphp; we map a folder on our machine; then we map a port; and the image we actually want to run is php:7.1-apache. As I said: detached mode, removed afterwards, and now I can curl localhost:8080 and I get the file I'm serving. That's all I need, really. Granted, it's a long command with a lot of things to remember by heart, but there are plenty of cheat sheets out there to help you. That's all you need to set up an Apache server, so it's a lot simpler than going through the whole install process. And now everything is mapped to my local file system, so if I go and edit a file, you see the extra bit I've just added right there; it's really mapped to my local folder. I can work inside my local file system, make changes to my files, and test everything locally, so I don't have to install the whole stack or, even worse, push it to a server to see if it actually works. This was one of my first use cases, and I thought it was very interesting; it definitely made my life a lot easier, and that's when I started digging a little more into Docker to see how else I could use it. You can also run PHP command-line tools or commands. We'll use the same kind of thing here: a docker run, mapping a local folder on our machine into the /app folder of our container. Our container is Linux-based, so you'll have to keep that in mind when you give commands to your containers.
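The assembled command from the demo looks roughly like this (the container name and host port are just what I used here):

```shell
# Detached, auto-removed, named, serving the current directory over port 8080
docker run -d --rm --name myphp \
  -v "$(pwd)":/var/www/html \
  -p 8080:80 \
  php:7.1-apache

# Then hit the server from the host
curl http://localhost:8080
```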
So I'm using /app here, and I'm going to use php:7.1, not the Apache variant this time, because I actually just want to run a command. I'll run php /app/cli.php, and it will just output whatever result I get from that file. You can also specify a working directory, which would be /app in this case, though I've specified the full path to cli.php anyway. That last part is really the command you want to execute inside your container after it starts. So let's take a look at this: a simple script that just counts the number of files and outputs the result. I like to use PHP for CLI stuff every now and then, just to script different things. I hate bash. I shouldn't say that. I just don't like bash; I don't like the structure. So PHP is actually very useful if you want to do command-line stuff. That's one example. Now, here's something interesting. One thing I learned while I was here: I went to Derick's talk yesterday, and that's a picture I took during the talk. It's a little small here, you can barely see it, but one of the features added in PHP 7.4 is that it gives you a warning when you're relying on the confusing precedence between the concatenation and addition operators. Say I'm a bit tired and I write a script that assigns three to the variable $a and seven to the variable $b, and then I echo "Sum: " concatenated with $a, which is three, and then plus $b. Anyone who was at Derick's talk knows what the output of this will be. The output will be seven, and that's because the concatenation happens first: it concatenates "Sum: " and three, then it does the plus, which converts that string into a number. It doesn't find a number, because the string starts with an S, so it converts it to zero, and then it's zero plus seven.
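That CLI invocation, as a one-liner (the script name is the one from the demo):

```shell
# Run a PHP script in a throwaway container; no Apache variant needed.
# $(pwd) is mounted at /app so the container can see the local file.
docker run --rm -v "$(pwd)":/app php:7.1 php /app/cli.php
```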
I got kind of a weird output there, so they added a new warning: if you mix those operators in the same expression, you'll get a warning now. I wanted to see if that was true, because I didn't trust Derick. So one thing I did was run this script, and you'll see right here at the top that I ran it with PHP 7.1: it gives me a warning about a non-numeric value, and I get the output seven. But what I wanted to see was: what do I get if I run it with PHP 7.4.3? And in this case, I get the deprecation warning saying this behavior will change starting in PHP 8. That's a very interesting use case for Docker containers as well, because now I can test the same code against various versions of PHP. That was also very eye-opening for me. Now I can run different scripts and actually test them. So if you need to migrate something from PHP 5 to PHP 7 and you want to see how badly you'll break everything, it's just a matter of using a different version of PHP inside your container. That tells you how long it's been since I've done PHP. Does anyone still migrate from 5 to 7? I do mostly JavaScript nowadays, so here's how you'd run a Node Express server. Very similar: you do a docker run, you map a folder from your file system, you map the ports, you specify which version or image of Node you want to use, and then you specify the command to run; in this case you actually need to run node and start your server. So we've seen how that can help with the "works on my machine" syndrome. If I had a new person join the team today and they downloaded the newest version of PHP, because 7.4.3 was released yesterday, they would get that warning message that I would not have gotten running 7.1. That's where it becomes really useful.
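Testing across versions is then just a matter of changing the tag, and the Node pattern is the same shape (script names, port, and Node entry file here are assumptions, not the exact demo files):

```shell
# Same script, two PHP versions: only the image tag changes
docker run --rm -v "$(pwd)":/app php:7.1 php /app/precedence.php
docker run --rm -v "$(pwd)":/app php:7.4.3 php /app/precedence.php

# A Node Express server, same pattern: mount the code, map the port,
# pick the Node image, and give the command to run
docker run --rm -v "$(pwd)":/app -w /app -p 3000:3000 node:10 node server.js
```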
You make sure your application is always packaged and always shipped with the same version and the same environment, no matter who's using it. Now, here was one of my problems. I was able to run that PHP application from my friend, but I was getting this weird error, something about a file not found. My friend said, well, do you have the file C:\temp\log? And I said, oh no, why? That's where environment variables will help you. When you start using containers, you'll have to look into environment variables, and you'll want to isolate everything that is specific to an environment into environment variables. They're really meant to store values that are specific to the environment: things like file names, ports, base URLs. In his case, he was developing on a Windows machine, so he needed to specify C: and so on, and because I'm using a Unix-based system, I had a different base folder. So you put those in environment variables, and then it's just a matter of setting the variable and people can run your container. Very useful for file names and ports: you're most likely not going to use the same port in development that you use in production. Chances are you'll use 8080 in development but port 80 on your production servers. Base URLs too, like the APIs you're going to hit: you might want to store those in an environment variable so you can change which APIs, or which base path, you use. In this case, I'll run my server again, but for the API I'm accessing, I want the one running on my machine; I don't want to hit the production API, because I'm working on something locally. So I can specify an environment variable, BASE_URL, and just give it the URL.
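As a sketch (BASE_URL is the variable from the talk; the URL value is an assumption):

```shell
# -e sets an environment variable inside the container, so the app
# hits a local API instead of the production one
docker run -d --rm --name myphp \
  -e BASE_URL=http://localhost:3000 \
  -v "$(pwd)":/var/www/html \
  -p 8080:80 \
  php:7.1-apache
```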
In PHP, to access those environment variables, it's just a matter of using $_ENV, and you have access to that BASE_URL that was passed on the command line. In Node.js it's very similar: process.env gives you access to the environment variables, as you might expect. You can see here that the first docker run had no -e argument, so it only output "no name provided"; my script was looking for an environment variable, couldn't find anything, and output "no name provided". But the second time I ran it, it had -e and NAME=Joel, and then the output of my PHP script was "Hello Joel". So you can really pass information into your containers using environment variables. When you start using different containers and starting from base images, you might have to pass environment variables to those images as well. If you want to run MySQL, you'll need to specify a bunch of environment variables so that MySQL knows how to configure itself. You'll want to give it the root password; you probably won't want to use root, so you'll specify a default user and a default password as well. You specify all of those as environment variables, then your ports, and then the image once again: mysql, with a version in this case. So I'm going to run a MySQL server. And here's another interesting thing about running a MySQL database: you can actually specify an initial set of data to inject into your database. A lot of base images have something similar to this: some sort of entry point, often called docker-entrypoint or some flavor of that, and whatever you put in there will be executed by the image as it starts.
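Put together, that looks something like this (the MYSQL_* variable names and the /docker-entrypoint-initdb.d folder are the ones documented by the official mysql image; the passwords and local folder are placeholders):

```shell
# Configure MySQL entirely through environment variables,
# and seed it: any .sql or .sh file in /docker-entrypoint-initdb.d
# is executed when the container first starts.
docker run -d --rm --name mydb \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_USER=appuser \
  -e MYSQL_PASSWORD=apppass \
  -e MYSQL_DATABASE=presentation \
  -v "$(pwd)"/init:/docker-entrypoint-initdb.d \
  -p 3306:3306 \
  mysql:5.7

# Then get an interactive shell inside the running container
docker exec -it mydb /bin/bash
```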
In the case of MySQL, it looks into the /docker-entrypoint-initdb.d folder: it applies any SQL files that are in there, and it executes any script files, .sh files, in that folder. Now I've started a MySQL server; once again, no installation necessary, just this very simple command. That's the Docker command I've executed, and that's it, the MySQL server is started. And here's the fun thing we can do: we can now run commands inside our container. I can do an exec, and I'll run it in interactive mode, and by doing so I'm actually getting a shell inside my Docker container. So right now I'm inside the container that runs MySQL, so I can run mysql and I'm inside my MySQL database. From here I can do different things, like SHOW DATABASES, and there's nothing there; it's just a plain vanilla install of MySQL at this point. But really, that's all you need to start a MySQL server. I can quit, then exit, and now I'm back on my own machine. Now, I've already mentioned the initialization data, so I'll stop this one, mount the initialization database folder, and start my MySQL server again, and now you can see that I have this new database, presentation, and inside presentation we have an initial dataset that was injected into my MySQL database. Once again, it was very, very fast, it started everything, no configuration necessary; the only thing I needed to do was say, this is the initial dataset you're going to use, please apply it as part of your setup. There's more to MySQL than just the CLI, and with all those commands there's a lot to remember. I've shown you a lot of commands, and even I don't remember them all; that's why I have those script files I've been typing from. There are a bunch of cheat sheets; I'll share some on Twitter afterwards. But one thing you might also want to do is use a Dockerfile. A Dockerfile will let you use
pretty much the same options as you would on the command line. There's a standardized set of commands, and it lets you build your own images for sharing. You'll be able to create base images, if you want, that have all your specific environment variables and all the specific setup you need for your use case. But you'll also be able to do things like copy all of your code inside the container, and then share that full container, containing both your source code and all the necessary tools, with your QA team, for example. So it builds a stack that can be shared. A Dockerfile for a PHP project will look a little bit like this. You start from a base image; you will always, always start from a base image. We'll use php:7.1-apache in this case. You specify which port is used inside the container, so you expose that port; EXPOSE in this case is already specified in the php:7.1 image, so it's not strictly necessary, but still. And then you copy stuff from your local file system into the container. So instead of mounting a volume like I did in the previous example, in this case I'm actually copying all the source code into the container; if I change anything on my file system, I won't see it in my container. For Node.js it's very similar: you start from node, you expose the port you want to use, you copy all of your files, you specify a working directory in this case, and then you can run your npm install and do an npm start. The reason I wanted to show the Node.js one is that the order in which you put the commands actually has an impact. Docker will create a first image layer based on node:10 in this case, and that creates a cache: it creates one layer, gives it a hash, and it will always try to reuse that same cached layer whenever it can. The next thing it will do is
expose port 3000, and that works from the cache too, and then it copies all of your source code. But as soon as you copy your source code, chances are it has changed since the last time you built, or else there's no point in building a Docker image at all. So at that point it creates a new cache layer, and all the subsequent layers will have a different hash, so you won't be able to reuse them from the cache. What you could do is change the order a little bit: start with your FROM node, expose the port, then copy your package.json, which is similar to a composer.json file, and specify the working directory. So far, up until here, unless your dependencies have changed, and chances are you don't change them very often, it's all the same. Running your npm install installs all the dependencies specified in package.json, so that layer can also still be cached. Up until there it's all using the cache, and it's much, much quicker; then you copy your source files, and that's the layer that changes, so those are the only two operations that actually have to run as part of your docker build. To build your Docker image, which is basically a way to compile your own images, you use docker build and the path to your Dockerfile. By default it looks for a file named Dockerfile, literally "Dockerfile", inside the current folder; if it's named something else, you'll have to specify where it is. So docker build . for the current folder. You'll probably also want to give it a tag so you'll be able to find those images later: you give it the name of your image and then a version number. It's very important to give it a version number, or else it will just tag it latest and you won't easily be able to find that image, so it's nice to actually tag them. Just a quick warning: your containers might be running as root, and not all systems will let you execute containers that run as root, so if you're using some Kubernetes distribution
it'll give you errors. You might want to change the user and do a few other things; there's an article at the end that explains that, but it's just good to know: if you see error messages about being a root user, that's something that can be avoided. So docker build lets you create a fully custom image, and those images can be shared on registries, ready to share with the team or to deploy somewhere. To share those images, you use public or private registries, which are pretty much the equivalent of GitHub for images. Docker Hub is one of them; you can use it in the same way you would GitHub, with some repositories private and some public. You can use Quay, an open-source registry that you can install on your own infrastructure, and pretty much all the major cloud providers have their own registries that you can use. If everything you do is on Google Cloud, then it makes sense to use Google Cloud's registry, just for connectivity reasons. To publish your image, you just do a docker push and give it the name of your image. You'll want to specify the name of the registry; with Docker it's docker.io by default, so if you're pushing to Docker Hub, there's no need to specify it. You'll need to specify your username and then the name of your image. So if I want to push my PHP image, I just do docker push docker.io/joellord/myphpimage. If you want to download an image, say to look at an image one of your colleagues published, you can just do a docker pull, pretty much the same operation: it downloads the image, and then you'll be able to run it. There are a few other interesting commands. docker ps will list all the running containers; I used it a little earlier just to see the name of my container. docker ps -a will list all the containers, even the ones not running right now, so that's where you'll see all the
junk that you've installed and that you'll want to remove at some point. docker stop will stop a running container; you can give it either the identifier it was assigned or the name you specified. docker rm will actually remove that container, and if you need to rename an image for some reason, you can use docker tag.

It's all nice in theory, but one of the things I found when I started using containers is that they just never did what I wanted. So there are a few things you can use to help you debug and figure out what's going on. docker logs is a very useful command: it shows you what is going on inside your container, so everything the container writes to standard out, stdout, actually ends up there. So if I do docker logs my-php, and actually, let's see, I still have a PHP container running, I can see the server logs in real time; you'll see there's a new entry that just appeared right here. So you can watch the logs live, which might help if you have a permission issue or some weird errors you can't figure out; add -f to follow them in real time.

You can also execute commands inside your container. You could do things like running ls to see what is inside the /var/www/html folder inside the container named my-php. Keep in mind that the command you execute, in this case /bin/ls, runs inside a Linux container, so you can't use dir as you would on Windows; you have to use the Linux equivalent. And you can also log into your container, as I showed you in the MySQL example: just use an interactive command, in this case with /bin/bash, and it will put you inside the container, and from there you can actually poke around the system and see what's going on. That can be very useful for debugging
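The debugging commands just described could be run like this; the container name my-php follows the talk's example:

```shell
# Stream the container's stdout/stderr, following new entries in
# real time (-f)
docker logs -f my-php

# Run a single command inside the running container; Linux paths
# and binaries apply, not Windows ones
docker exec my-php /bin/ls /var/www/html

# Open an interactive shell inside the container to poke around
# (-i keeps stdin open, -t allocates a terminal)
docker exec -it my-php /bin/bash
```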
purposes as well. You can also copy files to and from your container; docker cp will help you with that. To copy from your container into your local file system, just specify the name of the container and then the path to the file. So with docker cp from my-php, you can download the Apache configuration file, for example, if you're trying to figure out why Apache isn't working as you expect. And you can copy from your local file system into the container by reversing the arguments; so maybe you want to tweak your app or some configuration file a little bit, then copy it back inside your container and see how it behaves now. docker logs, docker exec, and docker cp are all tools that can be very useful when you have to debug or when something isn't quite working. As you do this more and more it'll get easier, and you'll see that you probably won't need them as much, but at the beginning those were tools that I really needed.

Now if you have multiple containers that you want to run, like the example I mentioned earlier with a whole PHP application plus a MySQL database, then you'll probably want to look into Docker Compose. Docker Compose is a tool that lets you run and start a bunch of different containers, and it takes care of all the networking for them so that they can actually talk to each other. It uses YAML files to describe all the containers: you specify the version of the Compose file format, version 3 in this case, and then the various services that will run. In this example I'm starting a database from the image mysql:5.7; I'm not doing anything specific here, though I should put all my environment variables here as part of my service. Then I have my web service as well, which is php:7.1-apache, and I specify that it depends on db. From there, the web service will be able to reach the database as db, because there's an internal DNS that maps the name db to the MySQL container here.
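A minimal Compose file matching that description might look like this; the service names and images follow the talk's example, and the environment variable is an illustrative placeholder (a real setup needs proper MySQL credentials):

```yaml
version: "3"
services:
  db:
    image: mysql:5.7
    # A real setup would configure credentials here
    environment:
      MYSQL_ROOT_PASSWORD: secret
  web:
    image: php:7.1-apache
    depends_on:
      - db
    # Within the Compose network, the database is reachable at
    # the hostname "db"
```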
Once your YAML file is all set up, docker-compose up will just spin up all of your containers and you're ready to go.

So, should you use containers? Well, for your day-to-day development, definitely; I think it makes a lot of sense. It brings more uniformity, or consistency, inside your team: everybody's running the same versions, that type of thing. But it can also be very useful if you need to test some code on different versions of PHP, for example. For testing it makes a lot of sense too, because now you can package everything into a container and ship that to your QA team, and in theory there shouldn't be any more arguments about whether something happens on a given machine or not. And for open source projects: yes, yes, yes, please do. If you have an open source project and you're having trouble getting contributors to actually help with it, maybe the barrier to entry is a little bit too high, so that's definitely something you should look into; try to dockerize that application. If you need any help with an open source project that you want to dockerize, I'd be more than happy to help you out; just reach out on Twitter.

So with this, that's all I had. There is more information available; pretty much all the links that I've shown are at easyurl2/containers, so you'll find the information there. And that's it, so thank you very much for attending.