Welcome, everyone. This session is about what is better than VMs. I'm going to talk about a tool that we're building called Docksal. If you were present at the previous session, there was a great presentation by Michael about what containers are and how they work. That was a good primer for the audience that stayed for this part of the session, so I can skip some of my slides or go through them really quickly. My name is Leonid Makarov. I'm a chief architect at FFW, and we will be talking about what is better than VMs. I think you already know the answer, but let's dive into this. So first, what is the whole point? Why do we want to use containers instead of VMs in a local development environment? Because project onboarding is difficult. It takes an enormous and unreasonable amount of time, and we, as a big digital agency, know that really well. We have distributed teams using different tools and different platforms, and it's very difficult to get everyone on the same page, using the same tools and achieving the same results. So you start with having no documentation at all. Then you realize that's not going to work, because people spend weeks trying to figure out how to start working on the project, how to get the code base; and then from getting the code base to having a running site, you can really spend days. Even if you have instructions, they tend to be outdated and incomplete. Then you start trying to automate. You use a virtual machine, you use a Vagrant box, but that takes quite a while to provision. Over time those provisioning scripts start to fail as well, because maybe something failed to download, maybe something just got outdated. So it's pretty heavy for day-to-day use in development. And developers try to fix things on their own, but they are not sysadmins. They patch it here and there, get it back up and running, and everyone ends up with their own snowflake setup.
And this is the direct path to "it works on my local." How many of you have had an "it works on my local" case? Exactly, pretty much everyone, right? I see one of you has a broken hand, sorry about that, you weren't able to raise it. So that's the exact situation: onboarding is difficult because of all of these issues, and it tends to be difficult to automate it and keep that automation alive. And this is the reality; this is exactly what we as a big company went through, all of those stages, over time. Fortunately, today this is what the process looks like: our developers can clone a project repo, switch into the directory they just cloned, and run an init command. That brings the entire stack up for them. And that stack is not just some default; it's the specific stack for that particular project they're working on, with whatever components and versions are necessary for that project. Then it also provisions the site itself, say, pulls a database from somewhere and runs a specific set of commands. So in the end, this is everything I, as a new developer being onboarded onto the project, have to do to get a site up and running and be productive right away. And this takes me about 10 minutes, let's say. And yes, we are using Docker and containers for this. Now, because I know some of you were not present in the previous session, I will quickly restate what containers are and what they are not. Containers are not virtual machines. This is a very important thing to understand; they are fundamentally different. Virtual machines, or hypervisors, are used to virtualize hardware. That's pretty good, because you can run any guest operating system on any host. That's extremely powerful, but it comes at a cost: performance and other challenges. Containers, on the other hand, operate at a higher level: the operating system virtualization level.
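The workflow described above can be sketched in a few shell commands. The repository URL and project name are placeholders, and the git and fin calls are commented out since they require network access and an installed Docksal:

```shell
# Hypothetical onboarding flow; repo URL and project name are placeholders.
# git clone https://github.com/example/myproject.git
mkdir -p myproject                 # stands in here for the cloned repository
# cd myproject
# fin init                         # bring up the stack and provision the site

# Docksal identifies a project by its .docksal/ configuration directory:
mkdir -p myproject/.docksal
ls -a myproject
```

On a real project, `fin init` runs whatever provisioning script the project ships, so the whole onboarding really is those three commands.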
A container, the Linux guest, uses the Linux kernel of the host machine, so you don't have to boot a whole operating system; it's already there, and it's reused. Essentially, containers are really just processes: isolated, caged processes on the same running host machine. So again, a quick overview. On the left side we have the traditional hypervisor virtual machine infrastructure, where you have a full operating system started and running for each VM, with all the overhead of that, and on top of that you have your libraries and applications. On the right side, sorry, not the left, you can see there is no hypervisor in place. You have the Docker engine, or whatever container engine it would be; in our case it is the Docker engine. And you have the necessary libraries and binaries. That's really the difference between Linux distributions: just the file system is different, the kernel is pretty much the same. On top of that you have your applications, the containers. So with virtual machines on the left, what we're all used to: you have a physical host, you launch several virtual machines, they reserve resources, and not all of those resources are used. You end up with space being reserved but not used. With containers, it is used more efficiently. The physical resources are freed up, because containers only use what they need, and all the space that is left can be used for more containers. So the whole infrastructure and setup can be much more dense: you can launch a lot more stuff, more efficiently, on the same physical host. Another cool idea and approach that you have probably heard about: the microservices approach. What does it mean? VMs are monolithic: huge, a couple of gigabytes in size, preprovisioned, prepackaged with everything you would want in a single operating system image. Well, try changing something: you start rebuilding it, and maybe go have coffee one, two, three, five more times.
So let's break that apart. We switch to containers, and every piece, every component, every service, we break into a tiny service, a microservice. Now our Apache is its own container, our database is its own container, our PHP is its own container, and whatever else you want, Varnish, Memcached, Solr, there is a container for that. Similar to modules in Drupal: there is probably already a container image for it, or you can always build your own. This is very similar to how you plug and play Drupal modules; you can plug and play containers and container images into your stack composition. You are no longer married to this huge VM, which eventually becomes your pet, because it's all you have, and you take care of it until it dies and then you go grab another pet. With containers, you just plug and play whatever you need, at that point in time, for that particular project. So, to recap: containers are super efficient, because they are not VMs, they are processes. They are super flexible, because of the microservices approach: you grab what you need, you plug it in, it works. They are consistent: every container starts from a pre-built image, which you pull from a registry, say Docker Hub or a private registry, and because of that the results are always the same. It's the same image, the same container started from that image, the same result anywhere. And portability, anywhere: I use it locally, it works this way; I use it in continuous integration, it works the same exact way; I use it in production, it works the same exact way. So, taking all those benefits and applying them to the problem we stated originally, how can we make the onboarding process more efficient, how can we optimize the way we do local development, how can we make it more consistent? We developed a tool at FFW called Docksal.
We've been working on this for about two years so far. We're using it in our projects, we built it for our development teams, and we open sourced it; you're welcome to try it and let us know what you think. So, setting up. This is what it would look like when I start as a brand new developer, let's say, and I'm going to use a project that has a Docksal configuration in place. I install it first, and the installation process is very seamless. There is a one-line installer script which is cross-platform: you run it on Linux, Mac, and Windows. For Windows you do have to install a Linux shell first, so that's one manual step you have to perform; everything else is automated by the installer. On Linux it's native: no VM, super efficient, all of the benefits of containers. For Mac and Windows you still have to virtualize. Containers are native to Linux, although Michael just told me the great news that now you can run Linux containers on Windows, which was not possible before; maybe I missed something from DockerCon, which happened a week ago. But it used to be that you could only run Linux containers on Linux and Windows containers on Windows. We're not that interested in Windows containers on Windows, because Windows was not made for web development. We're interested in, oh, vice versa: we're interested in running Linux containers on Windows. And for both Mac and Windows we have to virtualize. But in this case it's a really, really thin VM layer. It's a very small virtual machine, about 40 megabytes in size, and all it can do is run containers, nothing else. It's very thin, and because of that very secure as well, which we probably don't care about that much in the local environment, but it does matter when you go up into continuous integration and production. So once we have the Docksal environment installed. Oh, I forgot to mention: we do use VirtualBox for Mac and Windows. This is something that works really well, and it's cross-platform.
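For reference, the one-line installer mentioned here looks roughly like the command below. The URL is the one the Docksal docs have used, but treat it as an assumption and check the current documentation; it is commented out since it downloads and installs software:

```shell
# One-line Docksal install (illustrative; verify against the Docksal docs):
#   curl -fsSL https://get.docksal.io | bash
#
# On Linux this sets up native Docker. On Mac and Windows it also installs
# VirtualBox and provisions the small (~40 MB) VM that runs the containers.
installer='curl -fsSL https://get.docksal.io | bash'
echo "$installer"
```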
There are new solutions that Docker is working on, called Docker for Mac and Docker for Windows. When those get production ready, we will start using them as well. Right now they are very experimental and there are still performance issues, so we're sticking with VirtualBox for virtualization on Mac and Windows. So, once we have the Docksal environment provisioned and working, this is that second step: I clone the repo, and now I run the init script. What really is this init command? Well, it's a custom command which is specific to your project, and every project has its own init script. This is just a very simple example of what it could look like: once you pull your code and the stack is up and running, you want to provision your Drupal site. You can either run a site install, or, most likely, you will grab a database from somewhere and then run some kind of update scripts, revert features, things like that. Again, this is just an example; in reality the init command in that script can be as sophisticated as you want it to be. I'm actually going to demo right now what it looks like. Okay, you can see that. So what I'm going to do is exactly that: I'm going to git clone a repo from GitHub. In this example, it's a sample Drupal 8 repo from the Docksal organization on GitHub. Imagine this is a real project that you would clone and start working on. So you clone it, you switch into the directory of the project, and you run fin up. I'm sorry, not fin up, that was a mistake; it was supposed to be fin init. Part of fin init is actually fin up, but also a lot more, in order to provision the actual Drupal site. Okay, let's try it once again. So, the init command being a custom command: if I run fin just like that, you can see there are a bunch of things fin can do, and then, at the project level, I have this init command in the repo.
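As a concrete sketch, an init command is just a script committed under `.docksal/commands/` in the repo. The provisioning steps below (Composer, a Drush site install, the optional database import) are illustrative assumptions, not the contents of any specific project:

```shell
# Create an illustrative .docksal/commands/init script for a project.
mkdir -p .docksal/commands
cat > .docksal/commands/init <<'EOF'
#!/usr/bin/env bash
set -e
fin up                            # start this project's stack
fin exec composer install         # install PHP dependencies in the cli container
fin exec drush site-install -y    # fresh install...
# ...or, more likely, pull a real database and run updates:
# fin db import backup.sql
# fin exec drush updb -y
EOF
chmod +x .docksal/commands/init
ls -l .docksal/commands/init
```

Once this is committed, every developer (and CI) gets the same `fin init`.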
So fin detects the commands; if I had more commands, it would show them here as well. I see there is an init command for this project, and I start it. So, what happens here: as part of the initialization process for the project, the settings for my local setup are copied. There is a template in the repo; again, you create this once, and then it's reused every time by every developer, and we also use the same exact process in continuous integration. So, settings initialized, my stack is starting up; MySQL starts up. By default here we have only three containers, PHP, MySQL, and Apache: the cli, db, and web containers. And when the stack has started up, the Drupal site is installed with Drush. This takes about 30 seconds right now. Once it's installed, it spits out the URL for me. So I didn't have to configure anything. I didn't have to have PHP, Apache, MySQL, or any of that stuff installed on my local machine. I didn't have to configure my virtual host. I just initialized the project. I paste the URL; it's really tiny on the screen, but you get the idea. I've got the project right there, and all I had to do was clone the repo and run the init command. Getting back to our presentation. So, once I have my stack up and running, there are many ways to work with it. There is the cli container, one of the three containers that we have in the stack. In the cli container we have PHP running, and also pretty much every tool you would ever need to work with a Drupal site. Out of the box we support other CMSs as well; I'm not going to name them at DrupalCon. You can run any command inside the container with fin exec, whatever it is: say you want to run composer install, or Drush, or Drupal Console, or, I don't know, Gulp, for instance. Or you can just enter the cli container and run those tools directly. So again, switching over to the demo: since we already have the stack running here, I can do fin exec.
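In command form, the two ways of working with the cli container look like this. The fin commands are shown as comments since they need a running Docksal stack, and the tool list is illustrative:

```shell
# Run a single command inside the cli container:
#   fin exec composer install
#   fin exec drush status
#   fin exec gulp build
#
# Or open an interactive shell inside the container, where drush, composer,
# and the rest of the toolchain are pre-installed at fixed versions:
#   fin bash

# Nothing has to be installed on the host itself:
tools_location="cli container"
echo "drush, composer, etc. live in the ${tools_location}, not on the host"
```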
Let's do this: list the files. You can see that's the project I cloned; there is a docroot folder where Drupal is. So I can go ahead and switch into the docroot folder, and then I can do the same with fin exec ls -la, and this will run the same exact command, but now inside the container. I don't know if you noticed: there is no difference, and that's the point. Because if I now go inside the cli container with the fin bash command, I'm basically inside a terminal where all of my tools are already installed, and they are the same exact versions and the same exact set of tools for anyone running this. So I have my Drush here, I have my Composer here, and everything else. For comparison, locally I don't have any of those tools installed. They are not there; I don't have to install them. My stack is provisioned in containers; nothing is polluting my local machine. I don't have to deal with version collisions or with switching versions. It's all in there, all per project; every project can customize it the way they want, and everything is consistent within the team. Zero configuration: now here's a great example of how we try to make it very simple, on the other side of the spectrum compared to running the init script, where we scripted everything and didn't have to configure it at run time. Actually, we had a lot of configuration there, but that's done one time. I can have instant environments, and those will be full LAMP stack environments, with zero configuration, with Docksal. I will be getting the default stack composition for LAMP: obviously Linux, Apache, MySQL, PHP. All I have to do, switching to our demo, going a couple of levels up: I'm going to create a folder, call it hello-baltimore, and switch into the folder. There's nothing there. I create a .docksal directory, so there's that directory, and this is all Docksal needs to know that this is the project root for some project.
Now we run fin up. This is not the init command, because I don't have an init command here yet; fin up just starts the stack itself. You can see it's starting containers for me: the default LAMP stack with PHP, MySQL, and Apache, those cli, db, and web containers. And I can open my browser window here. Well, that's the previous state; now I reload it. It doesn't find anything, oh, because there is no docroot yet, really. I should create something: create a docroot folder, and maybe output phpinfo() into docroot/index.php, just to have something up there. Now reload: this is my complete LAMP stack, with zero effort. I didn't have to do anything for that, because Docksal comes with a pre-configured stack which you don't have to touch to start using. You can extend it going forward; you can override some defaults; you can go completely custom if you want to. But this is ground zero: very easy to get instant LAMP stacks up and running. So, as I said, you can go very simple or very custom. We have a bunch of services available in the Docksal library, as well as any image you can find on Docker Hub. Thanks to the microservices architecture, you just grab a service, you plug it into the configuration file, and you use it. In this case, I want to add Memcached to my configuration. All I have to do is define a service and define the image. This image is freely available on Docker Hub; I could also build my own and have it either private or public. And maybe there is some configuration for this service, say I want to set the memory limit for Memcached. And that's it: I add it to my configuration file. It's like downloading a module: the service is available, you plug it in, run the update command, and it spins up the service. If the image does not exist on my local computer yet, it's going to be pulled from Docker Hub, so I do need an internet connection for that.
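The service definition just described would look roughly like the fragment below in the project's `.docksal/docksal.yml`. The image tag and the memory flag are illustrative assumptions; check Docker Hub and the Docksal docs for current values:

```shell
# Append an illustrative Memcached service to the project's stack config.
mkdir -p .docksal
cat >> .docksal/docksal.yml <<'EOF'
services:
  memcached:
    image: memcached:alpine     # any public (or private) image works
    command: ["-m", "128"]      # e.g. cap memory at 128 MB
EOF
# Apply the change (pulls the image on first use, then reuses the local copy):
#   fin up
cat .docksal/docksal.yml
```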
But once it's there, or once you have it in your continuous integration environment, it doesn't have to download anything. It will reuse that image and, in a second, spin up that service. Some other things: there are a lot more, but these are kind of cool, so I wanted to talk about them. The first one is fin share. Let's go back to our Drupal 8 site. Imagine I'm working on a project and now I want to show it to a client or a teammate, and we're not in the same office. I could obviously use GoToMeeting or Skype or something else. Or, if I want to give them the ability to try it themselves, I can launch fin share. What this does is launch a service called ngrok, and ngrok creates a tunnel: it links this URL here, from the internet, to my local environment. So without any effort, I just exposed my local development environment on the internet. Thank you. Another cool thing, if you're familiar with Drush and aliases in Drush, and you're working with multiple projects. This is one misconception I see quite often: if you are working with multiple projects, you still only need a single virtual machine. And that's on Mac and Windows only; on Linux there is no virtualization. You only need the one virtual machine that Docksal provisions for you, and nothing else, and it's very transparent: you don't even have to think about it. So, running multiple projects: if I check my project list here, I can see I have three projects, hello-baltimore, drupal8, and my client project. They are all using the same virtual machine, and I don't have to spin up new VMs like I would with Vagrant. And they can all have different stacks: this client project is using PHP 5.6 and our Drupal 8 project is using PHP 7. I can have completely different stack compositions running within the same VM, and they are still isolated.
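The sharing and multi-project features boil down to a couple of fin invocations. They are shown as comments because they need a Docksal install; the project names echo the demo, and the exact spelling of the listing command is an assumption worth checking with `fin help`:

```shell
# Expose the local site on the internet through an ngrok tunnel:
#   fin share            # prints a public URL that forwards to this project
#
# List all local projects (they share one VM on Mac/Windows, none on Linux):
#   fin project list
#
# Target a project from anywhere using its name as an alias:
#   fin @drupal8 stop    # stop that project's containers to free memory
projects="hello-baltimore drupal8 client-project"   # names from the demo
echo "$projects"
```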
And usually you have to be in the project directory to run a command, but if I don't want to switch, I can also use an alias: say, check the status of the containers, and maybe just stop them to conserve resources and free up some memory. So aliases are a cool thing when you start working with multiple projects. Then we have a set of commands to manage databases: connecting to the database, dumping it, or importing a backup back. You can also connect to a database with any tool you want: there is a random port assigned to the database container, and you can access MySQL with any tool from your local machine. Custom commands: the init command is an example of that. A use case we quite often have is running PHP CodeSniffer: you just create a command, and it's basically just a shell script, though you can use any other interpreter; you want Node, use Node; you want Gulp, go for it. You script whatever you want there and then you commit that code. The custom command lives in the project configuration, so everyone gets it and everyone can start using it. And another feature I wanted to mention, a more advanced use case but a very nice one for occasions like DrupalCon, trainings, and different conferences, is the offline installation mode. We actually introduced this feature specifically for DrupalCon. You use the same one-line installer, but with a package that is pre-downloaded onto a USB drive: you run the installer in the directory where everything is downloaded, and the installer is smart enough to pick up the files, VirtualBox, the Docker tools, all of the necessary Docker images, from that drive, and you don't download anything from the internet. You still download maybe 200 kilobytes for the installer itself, but then it picks everything up from the USB drive and installs it. So you don't have to run provisioning that pulls gigabytes of data from the internet.
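A code-sniffer command like the one mentioned is, again, just a script committed under `.docksal/commands/`. The command name, the phpcs flags, and the path below are illustrative assumptions:

```shell
# Create an illustrative custom command, available to the team as "fin phpcs".
mkdir -p .docksal/commands
cat > .docksal/commands/phpcs <<'EOF'
#!/usr/bin/env bash
## Run PHP CodeSniffer inside the cli container (flags and path are examples).
fin exec phpcs --standard=Drupal --extensions=php,module,inc docroot/modules/custom
EOF
chmod +x .docksal/commands/phpcs
ls -l .docksal/commands/phpcs
```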
You don't have to wait for it; it just works offline as well. So, a quick recap. For a successful way of onboarding people and managing your development environments, you have to standardize: without standards you will never escape "it works on my local." Switch from VMs to containers, for all the reasons already discussed: no virtualization on Linux, very fast, easy to swap components, microservices architecture. And automate, automate, automate. This is probably the most important one: everything before it was just tooling, but this idea, that everything has to be automated, is what will bring you success. You can apply it locally, then take it to continuous integration, because exactly the same workflow and exactly the same tools with containers work in your continuous integration as well, and you get the same results regardless of where you use this setup. Find more information on our website, the documentation, the GitHub repo, and we have a community support chat on Gitter. I know we're just about on time to finish, so I'll take a couple of questions, but there will be a booth in half an hour, right after the break, in room 311. You're all welcome to come join me and we'll have a full hour to discuss this and have a conversation.

Thanks for the presentation. A quick question about the thin layer of virtualization if you're on macOS or Windows: do you have any automation for that, or does the VirtualBox piece have to be set up manually? The installer automates all of that. Once you run the installer, it installs VirtualBox and provisions the VM; it's all taken care of for you.

I've been using Docksal for a month or two now and it's great. But I still wonder: what does the name mean, and what does fin mean? Why Docksal? So, Docker is that fish-shaped thing, a whale, whatever. When we were picking the name, we had this idea of a Docker fin, the thing that steers it, and that became fin.
Then we combined the pieces and ended up with Docksal. Something like that.

When you're managing multiple projects on your local, does Docksal automatically handle conflicts, like with ports? I'm sorry, can you speak up? When you're managing multiple projects locally, does Docksal automatically handle conflicts between the different containers, like the Apache ports? There are no conflicts; they are all isolated. The only conflict you can run into is when you name the folders the same, or you use the same virtual host. And then, yes, it handles it: it will warn you that there is already a project using the same virtual host and will offer to stop that project or resolve the conflict. But in terms of the stack itself, it's isolated. You will never run into "I have this PHP version in one project and another PHP version in another project." No, it's isolated; containers are very isolated. So, no conflicts from the perspective of the actual tools. Thanks.

I couldn't tell from what I was seeing: are you going directly to the ports, or is it going through a proxy of some sort? With Docksal we do have a virtual host proxy service, and this is what allows us to run multiple projects, multiple stacks, while still using the same port 80. So yes, there is an nginx that routes the requests. That's great, thanks.

Hi, thanks for the presentation. Your demo showed setting up from a clean slate, starting a new project. Can this be used to pull an existing project, which will then also pull a copy of the production database? Yes, of course. You can integrate Docksal into any existing project very easily. The minimal setup is basically the .docksal folder, and from there, read the documentation on how you can extend and customize it. Thanks.

Hi, could you speak a little bit about how you use Docksal in production, you know, with certain hosting companies like AWS and such?
How we use Docksal in production, like with AWS? We're not using Docksal in production. What I said was that you can use the same container setup and ideas in production; so far it's for local development. We do use it for continuous integration. We have our own equivalent of what Pantheon calls multidev, the same exact thing, and it's very easy to go from local to CI. It's just one step, because containers are portable, so we can run the same thing on a Linux server in the cloud. Cool, thanks.

Both with VMs and with things like Docker Compose, I've had a lot of trouble with file sharing on Linux and Mac... sorry, on Windows and Mac machines. Are you doing anything to mitigate the I/O latency? On Mac and Windows we still share the same challenges as virtual machines. You can obviously keep your sources inside the container and get native performance, but that's not the base workflow for developers. So on Mac we use NFS, and we fine-tune it; the overhead is about 10% on that latency, which is pretty reasonable, and I don't notice it on my Mac. On Windows we use Samba, Windows file sharing, which is slower than NFS, but that's also what Docker uses with Docker for Windows. There is still no better solution for Windows, unfortunately. The better approach would be to put your sources inside the container, but then you deal with other challenges: how do you edit them, how do you use your IDE to connect to them, how do you debug, and such. Well, Xdebug, actually. Since you mentioned it: we do have documentation for nearly zero-configuration Xdebug. It's not quite zero, but it's documented: how to click those two buttons in PhpStorm and in your Chrome to get it working. There are a lot of tools that we integrate into this thing. Thank you. You're welcome.

The docksal.yml that looked a lot like a Docker Compose file: was that just exactly that?
The only difference is that we started with just pure Docker Compose, but then, since we wanted to provide default stacks and let you override them easily, there are a number of YAML files, which are in fact just Docker Compose YAML files, that we stitch together. In the end it is a complete Docker Compose file; we don't extend it with anything beyond that. It's pure Docker Compose.

So in your CI environment, you install Docksal as well to interpret this stuff, or do you somehow export it? Yes, Docksal has to be installed in CI as well. So you can't just use the compiled Docker Compose file instead in CI? You could eventually compile it into a single Docker Compose file and use it like that, but for us there is no reason to. Eventually, when you go into production, yes, it does make sense to compile the Docker Compose YAML, but that's not the use case we're targeting right now with Docksal. Cool, thank you.

Okay, last question. Yeah, sorry. One thing I noticed with Vagrant occasionally is that I run into weird instances where NFS file permissions create havoc, and your PHP can't write to a file. Say TCPDF, you're creating files, right? I know what you're thinking about. Yeah, so have you had those issues? So we do a smart move here: when the cli container starts, we tell it the user ID of the host user, and it just inherits it; we switch IDs. The user inside the cli container is the same user as you have on your host, so files created by that user, by PHP, by whatever tools, get the same ownership, the same user and group. So there are no conflicts in terms of permissions. Cool, thank you.

Okay. Thank you, everyone. This was my first DrupalCon presentation, so feedback is very valuable. Please rate this session, and come to the booth in half an hour, room 311, if you have more questions. Thank you.