after this place, so maybe a bit too much information in your heads, or too much beer, depending on how you look at it. But I hope you can pay attention to this presentation. As you know, we're going to be talking about Docker a little bit. We're not going to go deep into detail, but I'm happy to afterwards if you want to reach out to me. We're going to be talking about how we integrate Docker in our local environment. I did talk with some of you already, and it's been great meeting many of you over the weekend. For those of you who still don't know me, I'm Frank, I work for Amazee Labs, and those are my email and my profile, so if you reach out I'm happy to answer any questions. What are we going to be talking about today? A quick intro, then we're going to be really focusing on the Docker setup — that's going to be the meaty part of the presentation — and we're going to do a real-life example. I was going to do a live demo, but I was worried it might fail, so I decided to record it just in case. And then we close up the session with some pros and cons and some questions. Okay, so the very first thing: some people were asking me before, do we need to know Docker for your presentation? The answer is no, okay? If you know Docker, you will understand this presentation way better, but if you don't know Docker, this can be a very good entry point for you, so you can maybe start playing with it and get some good things out of this presentation.
A quick note on environments. I put this same slide in one of my other presentations, but I want to go through it again because it's something I consider important. Depending on the size of the project, the size of the company, and the skill set of the people working on the team, you might be working in one of these situations. I know there are people who work directly in production: they just SSH into the server, open vim and start hacking code. I think any of us has done it at some point in our lives, and felt bad about it, but I'm going to discard that one, okay? It's not really what we should be doing. A more typical situation, depending on the above factors, is having a development environment and a production one — maybe we're talking smaller projects, smaller companies, we don't have many resources, so we just make do with that. If we want to improve that kind of workflow: what happens when we have multiple developers all working on development? We're going to be overriding each other, data is going to be overwritten, those kinds of things. So it's a better approach to move to a local environment: on my local machine I can go wild, I can delete data, I can try out new things, and if I break something, it won't affect anybody else's work. Having local, development, and then production is the more common scenario. But then again, coming back to the beginning — depending on the project, depending on the company — we can also have multiple other environments: it could be staging, it could even be an environment per branch. If you can afford it, you can just create a PR out of a branch and trigger a whole new setup, a new server. So these are the setups I'm going to be considering.
We're going to be mostly focusing on the local one, but because we're talking Docker, we can go wild with environments, because we can spin up as many as we want, and it's going to be very easy. So which options do we have for local? This has been changing a lot over the years, and I think it has been changing for good. Many years ago, something like MAMP was the only option. Who has not used it? Probably nobody, okay? That means we've all used it at some point: whether on Windows, Linux or Mac, we've used one version of it — XAMPP, WAMP, MAMP. The idea is simple: put something that resembles a server onto our local machine, and we've all done it. It did the job for quite some time, but obviously it had a lot of limitations. We needed to have some knowledge: once you wanted to have more than one project, you needed to know what a vhost is. That means knowing a bit about Apache, or about Nginx — in this case Apache, because that's what ships with it. So you needed to be a bit technical, and maybe some people would struggle with that. But anyway, as I said, those tools did the job, they were good, and they probably set up the foundation for many of the following tools. A tool that came a few years later is Vagrant. Has everybody used it? Most of us, up to a point, okay? And it was an improvement: it gives us a virtual machine, which is not our local machine, and that means that if we get a good Vagrant box, we can get a similar setup to the production server. What happens when we have multiple projects? Then, again, we need to go into the Apache configuration; it's our own Linux machine, so we need to do a bit of sysadmin work. You can do it, there's loads of information out there, but again, maybe some people would struggle. And there's also the resources of the machine: it's a whole virtual machine, so it consumes a lot of resources.
And another thing: what if I combine a front-end project and a back-end project? What if one of my projects uses a React front end with a Drupal back end, and another is fully Drupal? The configuration for those vhosts is really different. What if I need Nginx for one and Apache for another? That's when things start to get a bit tricky. We could also go and set up cloud servers and work over SFTP; I think we've all done that at some point as well. There are some other cool options too, like Valet. That's a tool created for Laravel, but it works pretty well with pretty much any PHP thing you throw at it. Has somebody used Valet, or at least heard of it? You can just Google it; most probably you haven't heard of it because it's from the Laravel world, but it's really cool: you can easily set up multiple vhosts with minimal configuration, which is good. So yeah, you can give it a go, it's still out there. We can even use the PHP built-in server: if we have a cloud database, we can always use the PHP server and have the project up and running. That's probably the quickest, cheapest option, but obviously it doesn't allow for much configuration. And many others. And then we have Docker, which is probably the reason why most of you are here today. I don't wanna say Docker is the new kid on the block, because it's been here for quite some time already, but it's definitely gaining in popularity. What is Docker? Basically, at the end of the day, it is a tool that allows me to develop locally. That's what all the other tools we've been mentioning do, and that's what we're using Docker for here. You can do many, many more things with Docker, because the technology allows it, and we're gonna be talking about a few of them as well, but the end goal, remember, is to set up our local environment.
There are really many options; I'm just gonna name a few. There's amazee.io — I know that one because they are the hosting provider that hosts us. Then we have Docker4Drupal, which is a fully open source solution that anybody can use. And then we've got Lagoon, which is again done by Amazee, but is fully open source as well. They're growing in popularity, and they are creating more and more documentation, which is really good, and the setup is very, very similar to Docker4Drupal. And we can always use custom Docker images: if you know a bit more about Docker, you can create your own images. Docker Hub is basically like a central library holding loads of images. Say on the prod server we have Ubuntu 16.04: there is an Ubuntu 16.04 Docker image that we can just download, and it will be an exact copy of what the prod server has. What if we want PHP 7.1? There is a Docker image for that which I can just download, and it's exactly the same package the prod server is using. And what about MariaDB or MySQL? There's an image for that too. The cool thing about Docker is that it works in layers: you choose the layers you want, and Docker will connect all of them for you. Obviously it needs a bit of configuration — we'll talk about that later — but it joins all those pieces together. And what if I want Redis, what if I want Blackfire? There are images for everything, and you can put them all together and make them talk to each other very, very easily. And with that, one of the greatest things is that you can get an identical setup to production, easy as that: your local server is a 100% copy of what production is. And I'm just making a quick note here: Docker in production. Have you ever heard of that, using Docker in production? It's kind of becoming a thing, okay? People say it's risky, and people are brave and they try — the ones who think it's not so risky, so why not. We'll come back to that at the very end of the presentation, but I just wanted to put it out there as food for thought. A little bit about my day-to-day, so that you understand what my context is, and probably the main reason why I wanted to give this talk. I work in maintenance and support. At Amazee we have three big teams that work on the big, big projects: six months, nine months, a year of development, those kinds of things. And those projects need maintenance, and all the projects that go live and need maintenance or extensions or whatever, they come to my team. I work in the global maintenance team, and basically what we do is inherit projects from the build teams and try to keep them up and running as best we can, okay? So where does Docker fit into that? One of the greatest things is quick setup and fast switching, and that is a big, big, big thing for me. On any normal day I can easily work on three to six projects, and I'm not exaggerating: on any given day in my daily work I can be switching between three to six different projects, doing different things for them, deploying to different environments and all that. And on any given week our team works with up to 15 to 20 projects. That is a lot of context switching, okay? Many developers in our other teams work with one project: you set it up, you work on it, you focus, you lock in, and you're good. But in our case, because of our context, we need to do a lot of context switching. Can you imagine setting all that up with MAMP, or with Vagrant, or one of those previous tools we talked about?
It would be a huge nightmare, because, as I mentioned before, those projects don't have similar setups. Some of them use Apache, some of them use Nginx, some use Node for rendering the content, et cetera, et cetera. So it's a lot of setup that would be needed, on top of a lot of task switching. In our team, some of us are kind of all-around developers — whatever you throw at me, I will try to do it — but not everybody in my team is the same. We have front-end people, we have back-end people, we have junior people as well. What happens with our junior people? Are we going to tell them to configure all of that by hand? No way — they would just run away from the room. Okay. So we need to make it easy for them, which is important. When we are talking about maintenance, about support, one of the key things is replicating what's happening. I'm pretty sure many of us have had someone report a problem and say, this is happening. Okay, I'll try to reproduce it locally — it works. Let's try it on dev — it works. Let's try staging — it works. There's just no way I can replicate it. So one of the key things for us is to be able to have an exact setup: pretty much the same context, the same files, the closest thing we can get to the prod server without hacking around on, or going into, the prod server, okay? That is really important as well. And another key thing, coming back to those three to six projects, or up to 20 on any given week, is computer resources. I've got a really nice Mac, but it still has limited memory, limited storage, limited everything. So I really need to pay attention to those things, and one of the cool things about Docker is that, resource-wise, it's probably the least consuming of all the other options we talked about before. And then the last thing — I think it's last, but probably not least.
Yeah, last, good: a common setup. Even though we're working in maintenance, our team is nearly 10 people, and we all need to have a common understanding, a common ground, so that we are all able to work within the same constraints and the same kind of setup. That's where Docker helps us. Okay, so that's our context, but I'm pretty sure that one way or another you can adapt what I'm going to say to your individual context. It might be that you're working on two projects and thinking: why do I need to install MAMP on my machine? Let's have a look at that. So I'm pretty sure you can still take something out of this, even if your context is different, which it probably will be. Before going into how I set up any given project, a few useful aliases that we use. If some of you have been using Docker already, you know you end up typing a lot of stuff — whether it's docker-compose or docker run or whatever, you're typing long commands. So we're just trying to make our life easier, and trying to abstract away a little bit of the complexity for the people who understand Docker a bit less. So we have a dsql command. What is that dsql command? In this case it's not even Docker related, it's Drush: it runs drush sql-sync, argument, default. Does somebody know what drush sql-sync does? Go on. It syncs the database — yeah, exactly. That command syncs databases across environments: the first argument is where I want the database from, the second argument is where I want the database to go. We abstracted it because, what if a junior developer puts the arguments the other way around, default then prod? That would push the local database to prod. That's kind of risky. So we want to abstract some complexity and say: just use dsql prod, and that means you want the prod database; dsql dev, and that means you want the dev database.
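As a reference, here is a minimal sketch of what wrappers like these could look like in a shell profile. The exact names (dsql, dfiles, dssh, dup, ...) and the @prod/@dev site-alias keys are my assumptions, not the literal file from the slides:

```shell
# Sketch of the wrapper commands described in the talk. Names and the
# @prod/@dev Drush site-alias keys are assumptions; adapt to your setup.

# dsql <env>: copy the <env> database down to local, never the reverse.
dsql() { drush sql-sync "@$1" @self; }

# dfiles <env>: pull the files directory down from <env> with rsync.
dfiles() { drush rsync "@$1:%files" "@self:%files"; }

# Docker shortcuts:
alias dup='docker-compose up -d'            # start the containers
alias dstop='docker-compose stop'           # stop them (state kept)
alias ddown='docker-compose down'           # remove them completely
alias dps='docker ps'                       # list running containers
alias dssh='docker-compose exec app bash'   # "SSH" into the app container
```

The point of the wrappers is exactly what's described above: dsql prod can only ever pull prod down to local, so nobody can push a local database to prod by swapping arguments.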
Very similar for dfiles, okay? We sometimes want to pull the files down from the prod server, or from dev, to local. So we run drush rsync, which basically just syncs files using the rsync command. And we also use other aliases like dssh. We're talking Docker, and to be honest, there is no such thing as SSHing into a Docker container, but there is a thing which is executing a shell in the container that runs the app. And that is the closest thing to what we do when we SSH into the dev server, or when we go into the prod server. So we use this dssh just for connecting to our local instance, to simulate that you are SSHing into a server — and if you want to do it as root, you can have that too. Then dup runs docker-compose up -d, which brings the app up, okay? We're just trying to be brief. And again, think of a junior person: if we just say "run dup" and something happens, fine; but if we say "run docker-compose up", they might go: I don't know Docker, what do I do now? So we're trying to abstract that away from people, but at the end of the day, even for people who understand Docker, it's easier and quicker. dstop to stop the container, ddown to completely remove it, dps when I just want to see all the containers I've got running at any given time. And as I said, the slides are already available online, so you can check all of that. I know this is kind of small, but I put in there all the aliases that I use, basically wrapping docker-compose down, docker-compose up, et cetera. I'm gonna be referring to these aliases during the presentation; that's why I wanted to introduce them briefly. What else do we need to keep on working with Docker? This is not strictly Docker, but because we're talking Drupal, I think it's important to set this up, okay? We're gonna need to set up this file, aliases.drushrc.php — I put the Drupal.org link in the slide. And basically this is just an excerpt of that file.
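An aliases.drushrc.php excerpt along these lines might look as follows — the hostnames, users and paths here are placeholders, not the real project values:

```php
<?php
// Sketch of a Drush site aliases file (aliases.drushrc.php).
// Hostnames, users and paths are placeholder assumptions.
$aliases['local'] = [
  'uri'  => 'http://myproject.docker.localhost',
  'root' => '/var/www/html/web',
];
$aliases['dev'] = [
  'uri'         => 'https://dev.example.com',
  'root'        => '/var/www/dev/web',
  'remote-host' => 'dev.example.com',
  'remote-user' => 'deploy',
];
$aliases['prod'] = [
  'uri'         => 'https://www.example.com',
  'root'        => '/var/www/prod/web',
  'remote-host' => 'www.example.com',
  'remote-user' => 'deploy',
];
```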
You don't need to read it all, but basically you can see that it's an array, and each entry has a key and some SSH settings. So you've got aliases for local, dev and prod, and then the prod URL, the remote user, the remote host, all those sorts of things; you can set all kinds of options. As I said, the link is in there, and the slide is just a small excerpt of the file. It is important to configure that file — it will make your life easier later on. Remember that the end goal of this presentation is to have a full Docker workflow set up that eases and speeds up the whole switching of environments, setting things up, and all that. The other file is settings.php. We all know about that one, but how is that file important here? Again, it's probably too small to read on the slide, but I'm gonna tell you what's in there: basically, that array is the database configuration. Are you putting your database credentials in settings.php? Are you committing settings.php? Yes, we are — because in this case we're using server environment variables to hold those credentials. There's a server environment variable that has the database username, and one that contains the database password. So the only thing we need to do in the Docker container is make sure the server has those variables set. Easy as that. And in that sense, we can fully commit the settings file. Later on we're gonna go into a slightly more complex settings file setup, but what you see here is the minimum working piece that we need to have it all working, and it's just using PHP's getenv() function to read the server variables. Cool. And what else? We need just one more file, drushrc.php, and the only reason we need that is so that Drush plays nicely with many of the commands. It's a bit bigger.
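A sketch of what that drushrc.php boils down to — the hostname is a placeholder assumption:

```php
<?php
// Sketch of drushrc.php: tell Drush the site URL so commands like
// `drush status` and `drush uli` print a usable local URL.
// The hostname is a placeholder assumption.
$options['uri'] = 'http://myproject.docker.localhost';
```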
You probably can't read that, because the slides were made at a different resolution — I'm sorry about that. I'm not expecting you to read it, to be honest. The slides are available, and I included these as snippets because I think they could be useful later on, okay, if you wanna check them. Basically, the only thing that file has is $options['uri'], which holds the URL of the site. That's just useful because when we run, for example, drush status without it, it will say that the URL of my site is http://default; with it, it will print the full local URL, which is useful. Anyway, I knew I was dumping a lot of code here, and I wasn't expecting you guys to read it all. I want you to understand that we're setting up these three files, and they will make our life easier down the line, okay? So I apologize for that. Okay, so we've got the aliases set up, we've got those files — let's get to the main thing. The first thing I'm gonna do is dup. What does that do? It brings the container up: we just start the Docker container. Then I do dssh, so now I am inside the container. I'm assuming we already have the source code, okay? We've cloned the repo, checked out the right branch, et cetera. So we've got the code, we've started the container, and we've "SSHed" into my local server. The first thing I'm gonna do is sync in the database. Every time we set up a local project — whether it's MAMP, Vagrant, or a virtual machine — that's one of the first steps: we need to sync in the database. And for that, if you remember, we had a command... and here it is. (One second, this is not the right one — is it playing up? Okay, hold on.)
Cool, so I'm running this. This is just a Drush command, drush sa, which stands for site-aliases. If you remember, we were setting up an aliases file before, with a few entries in an array; this lists pretty much that. It's telling me the aliases that are defined: in this case we've got local, which is usually the default, and dev, and prod, okay? So if I want to sync the database — if I want an exact 100% copy of the prod database — I just run dsql prod. It will connect to the prod server, run that sql-sync command, and bring the database down to my local copy. End of it. That's everything I need to do to have an exact copy of the prod database in my local, which I think is easier than going into some database admin UI and running an export that could be 20 megs, could be 300 megs, could be more. The other possibility would be to use the native Drush command directly, but again, we're trying to stay away from that, just for safety reasons. Now, people who have been working with Docker might ask: what about persistence of the database? As soon as you stop the container, the database is gone. If you've been using Docker, you've probably already experienced that. If you don't do anything about it, that's probably one of the downsides of Docker. It's really good, it's really fast, it consumes very few resources — precisely because as soon as you stop a container, it destroys everything in it, okay? So: I've pulled down a whole database, I've stopped the container, and now my database is gone. That's not saving me any time if I need to sync the database every single time. That's where the concept of volumes in Docker comes in (sorry about the overlapping arrows). Again, I'm not gonna go too much into detail, but adding something like this into the docker-compose file — which we're gonna be talking about later on — will sort the issue.
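"Something like this" on the slide boils down to a volume mount on the database service. A hedged sketch — the image, paths, and service name are assumptions:

```yaml
services:
  mariadb:
    image: mariadb
    ports:
      # Expose the port so a UI like MySQL Workbench can connect too.
      - "3306:3306"
    volumes:
      # Mount a host folder over the data directory, so the database
      # survives stopping or removing the container.
      - ./docker/mysql:/var/lib/mysql
```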
That will make sure that if you stop the container, the database is still there. You stop the project, you go on holiday for two, three weeks, you come back, you bring the project up, and the database is still there. And as you see, it's not really such a huge deal: we are just mounting a folder inside the Docker container. You don't even need to know exactly what's going on — you add that, and it works; it won't destroy the database. We're just trying to be practical here. We can even expose the ports, in case we want to use tools like MySQL Workbench or any sort of UI to connect easily to the database. Yeah. And by the way, Docker works pretty much like Git, in the sense that you download something, you can make changes to it, and then you can commit, you can push, you can have your local repos, you can have remote repos. That's not exactly what we're doing here, but it is possible: you can make changes to an image and then just commit and save those changes, and they will be there next time you use that image. Say, for example, you just download the base OS image you want, and then you configure the whole MariaDB, PHP or whatever yourself, because you're just more comfortable with that. You can do it, save all those changes, and reuse the image, right? In our case, because we're working with publicly available images, we're not doing much of that, okay? This is mainly focused towards the Drupal developer, but if you want to go deeper, you can. Cool. So the next thing: I already have the database, and I know that database is not gonna go away any time. The next thing is syncing files. So we could do something as easy, again, as dfiles prod, and that will bring all the files down from prod.
In one of my presentations yesterday I mentioned: what if I have 10 gigs of files? Then don't run that command, because it means downloading 10 gigs of files, and you don't want to sit through that. Instead you can use the Stage File Proxy module. That way you don't need to download any files at all: if the site references a file, it will be downloaded then and there, on demand, okay? So you don't need to download all the files up front. Yep, that's it. And if you want to use Stage File Proxy, you can set it up in one line of configuration, so you don't need to keep enabling and disabling the module all the time: you can just have it enabled, and with one line of code you either use it or you don't. Okay, we've got the database, we've got the files — what else do we need? Maybe my project is using Solr, okay? So far we've covered the basics of any Drupal site: code, database, and files. In 90% of the cases, if the project is small or medium, you don't need to do absolutely anything else. But what if the configuration of the project starts to get a little more complex? What if the project is using Apache Solr, or Elasticsearch, or any other service? In this case it's, again, very easy to do with Docker. And again, you don't need to read what you're seeing on the slide — I apologize for that; it's the screen resolution and those sorts of things. What we are doing there is using a keyword in the docker-compose file (we'll talk about that file at the end) called services, which is basically: which things are we downloading? Which things are we grabbing?
In here it says solr, and underneath it says image: solr. So if you go to Docker Hub and you type solr, there is an image, and that is the one we are downloading, okay? We're not creating our own thing; we're using something that is already done and pre-built for us. We're just exposing the port so that the containers can talk to each other — so that the Drupal container can talk to the Solr container. And then we're mounting a volume. Remember that a volume just makes sure that something on my local machine is shared with the Docker container. In this case we're mounting a local folder, docker/solr-whatever, onto another folder on the server, which is precisely where Solr keeps its configuration; if it's Elasticsearch, it'll be another set of files. How does that translate? Basically, we just commit the Solr configuration to version control, and that way we make sure it's always there. It doesn't matter whether we're setting up local or staging: the configuration is there, it gets mounted, and that's the configuration Solr picks up. Then you re-index, and that's it. I remember — I think it was at DrupalCon Vienna — one of the people talking to me about this said: yeah, we want to do this kind of thing, but when it comes to Solr and all that, we cannot do it, because we're using this and this setup. But look: obviously you need to do some work, and you may need somebody to explain a little bit about that Docker bit, but you can see it's three, four lines of configuration, and then your Docker instance is up and running, pre-configured, and you just need to re-index it, and that's it. And for that you can just run a Drush command, so you don't even need to go to the UI.
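The Solr entry on the slide amounts to something like this — the image tag, port, and volume paths are assumptions, so check the image's own documentation for the exact config location:

```yaml
services:
  solr:
    image: solr
    ports:
      - "8983:8983"   # so the Drupal container can reach Solr
    volumes:
      # The committed Solr config, mounted to where Solr reads it:
      - ./docker/solr/conf:/opt/solr/server/solr/drupal/conf
```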
What about other integrations? For email, a service that we usually use is MailHog. We don't want actual emails going out from local, so that service captures every single email and just shows it in a nice UI. What about Redis for caching? Here I'm actually showing the configuration we need to add to the docker-compose file. As you see, for mail it's just two lines of code; for Redis, we're just exposing a few environment variables and connecting to the official Redis image. So it's three, four lines to add a whole new service. If we had to go through the actual setup on our local machine, it could be a nightmare, but doing it this way is very simple. Varnish, Memcache, Node, Blackfire, you name it: at the moment there are Docker images for pretty much everything, they're pretty helpful, and you can see the setup is not usually complex. When you go and visit those images on Docker Hub, they even explain what you need to do in order to integrate them into your project. And then another thing that could be really useful: if you remember, we talked before about settings. We commit settings.php to version control. Most people don't do it because it's got credentials, but we store the credentials in server environment variables, okay? So, again, you don't need to read the slide; I'm gonna tell you what it says. Our settings file looks really small, really tiny, and basically it says: if settings.all.php exists, include it. That's the first block, okay? The second block gets the environment from the server: in production it will say production, in development it will say development, locally it will say local. And then, if the file settings.development.php exists, we include it; if the file settings.production.php exists, we include it; otherwise, we don't.
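Put together, the committed settings.php described here might be sketched as follows. The DRUPAL_ENV variable name and the exact file layout are assumptions based on the description, not the literal file:

```php
<?php
// Sketch of the committed settings.php include chain described above.
// The DRUPAL_ENV variable name is an assumption.

// 1. Shared settings (committed): database, base URL, Redis, Solr...
//    all driven by server environment variables, e.g.:
//    $databases['default']['default']['username'] = getenv('DB_USER');
if (file_exists(__DIR__ . '/settings.all.php')) {
  include __DIR__ . '/settings.all.php';
}

// 2. Per-environment overrides (committed).
$env = getenv('DRUPAL_ENV'); // 'production', 'development', 'local', ...
if ($env && file_exists(__DIR__ . '/settings.' . $env . '.php')) {
  include __DIR__ . '/settings.' . $env . '.php';
}

// 3. Local, uncommitted overrides for anything sensitive.
if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}
```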
And last but not least, we still want to have the possibility of not committing something to version control: if settings.local.php exists, we include it, and that file is not in version control. If there is any sensitive information, we usually just dump it in settings.local.php — but we try to stay away from it, okay? What do those files look like? In settings.development, again, you cannot read it, but basically we're masking the Analytics code so that we don't send data to Google Analytics, and we're doing a few other things that are just specific to the development environment, okay? We won't have that in the production file. And settings.all — remember, we are committing that to version control — is basically all the environment variables that we have defined on the server, dumped there, just because it's easy. That's pretty much where we configure most of the services: database, base URL, Redis, Solr, all those sorts of things. They usually go there, because using environment variables works for us. It might be that you use something else, and it might be good, it might be better, so I'm more than happy to hear about those other options. Cool. Okay, so a little bit about Drupal 8. One of the differences with Drupal 8 is that we have CMI, configuration management, in the game for config export and import. And this is for real the actual workflow that we follow in our projects. As I said before, we work with 15 to 20 projects; easily 60, 70% of them are Drupal 7, the others are Drupal 8. And this is what we have when we are onboarding a ticket — these are the instructions we actually follow. Okay: we clone the repo, we go into the folder, and because it's D7, we fetch the Drupal contrib modules and all that. And then we dup, we dssh, and we're inside the container.
So, in this case, I'm just using Stage File Proxy. Then we sync the database from prod; that's probably the operation that takes the longest, depending on the database size and your connection. Because we use Stage File Proxy, we just create the files folder manually and chmod 777 it for local use, okay? Then we enable the module and run one command to log in as admin. Usually that process doesn't take long, and in most cases you don't do this every single time; you only do it when you need to debug something from prod that cannot be reproduced otherwise. If we take the database sync out of the equation, we're talking about a one-minute, two-minute setup, from having nothing running on my machine to having a full URL available that I can develop against. So, that's very quick. If we're syncing the database, and in fact we'll see this later in the video that I recorded, it's going to be around two minutes or something like that, which is not such a huge deal. I mean, if you know that you're going to be working on that project, you just run the command. And in fact, something we are working on now: all of these are commands, so nothing is stopping you from putting them in a bash script and running that. We're kind of working on that, so we just run one script and it sets everything up from scratch. What would it look like on a Drupal 8 project? It's pretty much the same thing: we clone the repo, we go into the folder, docker-compose up, SSH in, but in Drupal 8, as we know, we have the wonderful Composer. So, instead of bringing in modules the D7 way, we just run composer install. And then again, if I need to sync the full database, I would run that; if I don't need to, I can just run a configuration import and that will bring in the changes.
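Putting the steps just described together, a hypothetical onboarding script for the Drupal 8 case could look like this. The repo URL, the `php` service name, and the Drush aliases are assumptions; the commands themselves are standard Composer/Drush ones:

```shell
#!/bin/bash
# Hypothetical D8 onboarding sketch; repo URL, service name and @prod alias
# are assumptions, not the speaker's exact setup.
git clone git@example.com:project.git && cd project
docker-compose up -d              # bring up the containers from docker-compose.yml
docker-compose exec php bash -c '
  composer install &&             # fetch core and contrib instead of committing them
  drush config-import -y &&       # or: drush sql-sync @prod @self for a full DB sync
  drush cache-rebuild &&
  drush user-login                # one-time admin login link
'
```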
And then again, I could sync the files from prod or I could use Stage File Proxy, clear caches, drush uli, and I'm logged in and away we go. As you see, I do this five, six times per day; I don't need to go through the whole list of commands. For me it's usually docker-compose up, SSH in, run those three commands, and then I'm logged in as admin and I can start debugging. Resources-wise, I've had multiple instances running at the same time and I've never had issues. I know I've got a powerful machine, but compared to Vagrant, compared to MAMP, compared to all those options, the amount of resources consumed is minimal. As I said, we'll see a video in a couple of slides. And then, okay, I could have told you this in the very first slide, but I wanted to somehow grab your attention and keep you listening to me. I told you at the very beginning that you didn't need to know Docker, but I've been talking about Docker, Drush, all these things, so some people might say it's been quite technical. Hopefully most of you have been following me, but if some of you think that it was very complex: all you need, obviously, is to have Docker installed, and to have that one file in your repo, the docker-compose.yml file. In this case it's Amazee.io; we have a custom image. I'm going to mention Lagoon as well, and Docker4Drupal. I could keep on mentioning more and more, okay, because there are many providers, but I think these are good enough to get you going. So this is literally one of the images that we have for our Amazee.io setup. In this case it's not very Docker-y, because they actually built it at the beginning, when they were still learning Docker, and basically they baked everything into one very big image, if you know what I mean, okay?
So they install everything, save that image, and that is the image that we are downloading and pulling, okay, and it works. But you can still plug in services: you can still plug in Redis, you can still plug in MailHog, you can still plug in loads of other useful things. And if you look at that file, it's 15 lines of code. I have no idea how to write that file from scratch, okay, but maybe somebody on your team does, or, again, if you're going down this route, it's open source: somebody has done it for you already, somebody has written step-by-step instructions on how to do it, so you don't really need to worry about it too much. If you are interested, you can read about it, and you can ask around, ask the DevOps team. In our case we're lucky that we have a DevOps team that does all that for us, so we don't need to worry, but it's not really complex in that sense. I put the link in there, so all the documentation is available, and it covers a lot of really good and useful things, so I encourage you to go through it. Lagoon, that's completely open source, okay. With Amazee.io, if you eventually want to use their hosting, you need to pay for their services, but these other options are completely open source, okay. Lagoon, this is starting to look more Docker-like, like the things I've been talking about previously.
We have the services keyword; we have an image for the CLI, just for connecting to the console; we have an image for Nginx; we have an image for whatever that one is, I don't know; we have an image for PHP; we have an image for Redis; we have an image for Solr; we have an image for Varnish; we have loads of images. Hey, I'm not going to write all that myself: somebody has already written it for me, I just need to bring it up, and it works. That's the cool thing about this: you write it once at the beginning and then you just don't need to worry about it, okay, unless you want to change something, in which case you can just edit this file, and that's about it. As I said, this is public, open source, so you can just go through it; they are even using this in production as well, obviously on selected projects, but they are using it in production, which is kind of cool. And then there's Docker4Drupal. That's how I started with Docker and Drupal, and it's a very similar setup to what we have in Lagoon: we have services like MariaDB, PHP, Nginx, MailHog, it's got Redis available, we can see all of those in there. You can follow the link to the actual site and see the file, and again, you put it in the root of your project, you run docker-compose up, and you're up and running with a copy of your site. Obviously, if you're using one of these predefined files it might not be an exact copy of your server, but it won't be very, very different. For example, in this one you can configure the PHP version: they've got commented lines you can switch to change the MariaDB version or the PHP version, which can be pretty useful. And literally that's all you need. Let me see how much time I have, okay, two minutes. If you remember, that was the D7 one; I just want to go through what's next. To be honest, I'm going to skip the video; I'm more than happy to, and in fact I did kind of show it to some
people before this talk, I was showing them how I set all that up, and it's literally running those commands; you can see that in two minutes I have the site up and running. That's a screen recording from my own machine from a few days ago. So I'm going to go to the pros and cons, and I'm going to start with the cons. Initial configuration: obviously we've been talking about that, but I think that if we go back to the alternatives, to MAMP, to Vagrant, all of those need configuration as well, so it's not that we're really avoiding anything, we're just doing it in a different way. And the good thing is that there's a lot of open source out there, there's a lot of help available, and once you've done it, it's really easy. Some people might say volatility, but I don't think that's an issue, because now we have volumes, we can have redundancy, we have many other options. I mean, if you don't want to risk data, you can always put it on an RDS instance in Amazon, and that's it, really, so we've got options; I don't really see it as a problem. And then learning: why do I need to learn Docker? I can tell you, we've got juniors, we've got loads of people who haven't got a clue about Docker, yet they use it on a daily basis, and they don't need to know it. So I don't really think there's a learning problem here, at least for the purpose of this talk, which is Docker as a local environment, okay? If you want to learn Docker, then there is a learning curve, but if you just want to work with Docker, you don't need to know Docker, okay?
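To make the file we've been talking about concrete, here is an illustrative docker-compose.yml in the Docker4Drupal style. The image names are real public images, but the tags, ports, and environment values are assumptions, not our exact setup:

```yaml
# Illustrative docker-compose.yml sketch, Docker4Drupal-style.
# Tags, ports and credentials are assumptions for the example.
services:
  mariadb:
    image: wodby/mariadb          # swap the tag to change the MariaDB version
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
  php:
    image: wodby/drupal-php       # pick the PHP version via the image tag
    volumes:
      - ./:/var/www/html
  nginx:
    image: wodby/nginx
    depends_on:
      - php
    ports:
      - "8000:80"
  mailhog:
    image: mailhog/mailhog        # captures all outgoing mail, web UI included
  redis:
    image: redis:alpine           # the official Redis image: one line for a cache
```

Each extra service really is just a few lines like the `mailhog` and `redis` entries above, which is the point the talk keeps coming back to.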
And then finally, the pros. Again, I cannot emphasize this enough: you can get an exact copy of production, it's really fast, it doesn't consume many resources, you can have multiple containers running at the same time without it being a problem, it's portable, and it's open source. And with that, I'm giving you a link to a talk about using Docker in production. That was it, really. I really hope that you could make the most of this talk. I know I didn't go too deep into Docker itself, but to be honest I can't really, because I don't know it in that much depth; I know my way around it, but I'm not an expert Docker developer, we have DevOps for that. So I really hope that you can make the most of the slides, which are available online, and I encourage you to read them, go to the code, go to the links, and investigate the topics from there. Thank you all, I'll leave it there. Tom? Yeah, exactly. So I've got the Docker infrastructure running, so green lights, and if you remember there was this command, which is just an alias, which shows what's running; at the moment I have these, and these are just services, okay? So this is a project called Venture, I just showed you that file, and in here we have that URL, okay? That URL is key, because it's going to be talking to that dnsmasq service, and that will make it available; it's like hooking it up into the hosts file of your machine. Thank you very much for that, yeah, oh good, thank you. So yeah, Venture, you can see that it's not running, if I were to go to... The docker-compose file, is that the Amazee.io one? Yeah, in this case it is. If we just go through it, I mean, I just removed all the comments in there, but you can see there are very few lines of code, and the main image that we are pulling from is an Amazee.io-specific one, which is using PHP 5.6 and Solr, so they built it and saved it with that setup.
As I said, if we visit venture.docker and it's not available, that means that the container is not running.
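The check described here can be illustrated like this, assuming the `*.docker` wildcard is resolved by the local dnsmasq container as in the demo; the exact alias behind the "green lights" view is an assumption:

```shell
# List running containers, roughly what the speaker's alias shows.
docker ps --format '{{.Names}}\t{{.Status}}'

# Hit the project URL: served if the project's containers are up,
# connection refused or not resolved otherwise.
curl -I http://venture.docker
```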