I'm going to talk about using containers for building and testing, covering Docker, Kubernetes, and Mesos. That's my Twitter handle. If you want to tweet about it, only good things please; bad things you can tweet to that guy over there. A little bit about me: I work at CloudBees, in what is called the Private SaaS Edition team, where we basically run Jenkins at scale with everything in Docker containers. I contribute to the Jenkins Mesos plugin and I'm the author of the Kubernetes plugin, so I'll try to tell you when I'm biased about something, but that's where my experience comes from. I'm also one of the maintainers of the official Docker images for Jenkins and for Maven, a longtime Maven contributor, a member of the Apache Software Foundation, and I help with any other open-source software that I use. And I'm a Google Cloud Platform expert, which comes from the Kubernetes side of things, whatever that title means. Okay, so who is using Docker? Raise your hands. In one way or another. Okay. Who is using Docker in production? Okay. More than usual; the pace of Docker adoption has been through the roof. I love this tweet: the solution, Docker; the problem, you tell me. That's what a lot of people are doing with Docker, basically using it for anything. But for building, testing, deployment, and that sort of thing, it's actually a pretty good solution, and it helps a lot with multiple architectures, operating system and package versions, tool versions, and combinations of all of those. But it's not trivial. This is actually how they used to ship containers, from boats into the harbor somewhere in the Caribbean, so using containers is not trivial. There was a post recently, I think by Fowler, saying you must be this tall to write microservices, about all the things you need in place to be able to use them. It's not just "let's switch completely to microservices and Docker containers and everything is going to be fine", right? So one of the things you're going to need when you use Docker containers at a decent scale, I mean pretty soon after you do the hello world, is a cluster scheduling system: something that manages a cluster of hosts, servers running Docker, or maybe other container runtimes now or in the future. And, especially for what we built at CloudBees, it has to run in public cloud, private cloud, and bare metal, which was our case, whatever our customers prefer, be HA and fault tolerant, of course, and have Docker support. There are three alternatives once you decide to go with Docker on more than one host: Apache Mesos, Docker Swarm, and Kubernetes. These are the three big cluster schedulers out there. So what is Mesos? Mesos is what they call a distributed systems kernel, which is a way of saying you can run a lot of things on top of it: Mesos abstracts the operating system and gives you primitives to deal with multiple hosts from the application layer. You can run Hadoop, Spark, Kafka, and all these other big frameworks; there's a lot of big data work going on on top of Mesos. It started before 2011, so it's the first of the three.
And it can run any sort of task, not only Docker containers but also plain binaries, and now rkt containers and appc images, the container image format from what's called the App Container spec. And then what you run: Mesos basically abstracts all this infrastructure for you, and you run frameworks on top of it; those are the things that actually do something. Some of the projects we saw, like Hadoop, have their own frameworks. Then you have Mesosphere Marathon for long-running tasks, long-running services: if you want a service that is always running and it dies for whatever reason, Marathon will restart it for you, or if a host dies, Marathon will notice and run it on another host. Then there's Apache Aurora, which does something similar. Both of them are being used, and Mesos is used at Twitter, Airbnb, eBay, Apple, you name it; there are a lot of big companies behind it, and it has had a lot of traction over the years. There's another framework, Chronos, which is like a distributed cron-like system. And I'll talk later about the Jenkins framework that runs on Mesos. Docker Swarm is something built by Docker Inc, the company behind Docker. The first version of Docker Swarm used the same Docker API, so it would let you point your Docker client at a Swarm endpoint, and Swarm would run whatever you asked for across the cluster. You wouldn't need to modify the tooling you had; everything would run with the same command line, the same options, the same Docker client. But I guess they realized that had some limitations, and in Docker 1.12 they came up with the new Docker Swarm, Swarm mode, included by default in the Docker daemon, so you don't need to install anything else. I guess they play on being able to include these features in the Docker daemon so everybody gets them for free, and that way they have a foot in the door for you to use it. With this new Swarm mode they created a new, better API with a new object called the service, which basically defines how a Docker container runs across multiple hosts. Same reasoning as in Mesos: if it dies or a host dies, the cluster will notice and restart it, on another host if you configure it so. The big difference from the previous Swarm is that existing tooling needs to change, because this is a new API, a new model to deal with containers in the cluster. The last of the three is Kubernetes. It came from Google, based on what they were running on their internal systems, and it can run on your local machine, on virtual machines, or in the cloud. Of course, Google makes sure the best place to run it is Google Cloud: they offer a service called Google Container Engine, GKE, where you basically say "I want a new Kubernetes cluster" and it creates it for you. But you can install it anywhere. There's a nice provider page, StackPoint, where you can create clusters on different cloud providers, there's commercial software on top like CoreOS Tectonic, you can run it in Azure, and you can even run it on your local machine with Minikube, a VM that comes with a single-node Kubernetes install, which is great for testing and playing with the APIs.
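If you want to try that locally, here is a minimal sketch, assuming you already have Minikube and kubectl installed:

```sh
# start a local single-node Kubernetes cluster inside a VM
minikube start

# check that the node is up and the API server is reachable
kubectl get nodes
kubectl cluster-info
```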
So, when we were building this scaling Jenkins goal that we have... who's using Jenkins here in the room? Okay, I should ask who is not using Jenkins. And who is using Mesos? Anybody? One, two people? Docker Swarm? Two more? And Kubernetes? Four or five? Okay. All right, so if you are using Jenkins, you know how Jenkins works. There are two options we saw to scale it: either more build agents (slaves) per master, or more masters. If you go with more build agents, there are plenty of plugins you can use to create new agents, like the old Amazon EC2 one to create virtual machines, or Azure machines, or any cloud provider, and when you have a lot of jobs they get created dynamically and automatically. I'll talk about the ones that work with Docker containers. The problem is that the master is still a single point of failure, and if your master dies you have a problem. Nowadays you have resumable pipelines; if you were in this room before, there were talks about pipelines and how they can reconnect to a master after it gets restarted so the job continues running. So if you restart the master, your jobs keep running and don't get killed. But you still have the problem of multiple configurations or plugin versions, and a restart of the master basically means downtime. And there's a limit, although it can be pretty high, on how many build agents you can attach to one Jenkins master. The other option is having more masters, with the benefit that multiple organizations or departments can each have their own master. It's basically a federation, a sharding of your builds: multiple masters with their agents across different organizations. The problems you get are single sign-on (how do they all connect to each other, how do you log in to all of them the same way) and how you configure all of them from a centralized place. At CloudBees we have the Jenkins Operations Center, and the Private SaaS Edition team, where I work now, basically does the best of both worlds: it lets you have multiple masters running in Docker containers, all configured from a single place with the Operations Center. All these masters are created as Docker containers, so you can spin up new masters whenever you want, and they all get configured to use the same cloud; we're using Mesos right now, so all the masters share this pool, a cluster of hosts running Docker. Another great quote: to err is human; to propagate an error to all servers in an automatic way, that's DevOps. When you automate a lot of things, there's a chance that what you're automating is going to break. I have a different version of this that conveys the same message: if you haven't automatically destroyed something by mistake, you are not automating enough. This has happened to me several times, at least a couple. Nothing really bad like those guys from this week. But yeah, if you're not breaking something, you're not trying hard enough, right? That's my idea, so I always try to automate things.
Sometimes you screw it up, but as long as it's not too bad, it's okay. Okay, so how can you run Jenkins in Docker? We have several Docker images available. There's the official Docker image, which is built by Docker themselves; we provide the Dockerfile and all the new releases, and it has the latest LTS, well, all the LTS versions. If you just do docker pull jenkins or docker run jenkins, this is what you get, the latest LTS. Then the Jenkins community has the jenkinsci organization on Docker Hub, and jenkinsci/jenkins has the weekly builds, and we'll possibly have more builds there than just the weeklies. This is an automated build that publishes every new weekly release continuously; it's built by the Jenkins community and pushed to Docker Hub. So it's the same thing, just with the weekly bits; the other one is LTS. Then, if you're going to run slaves in Docker, the one you need to be aware of is jenkinsci/jnlp-slave. This is an image that has just the remoting bits: it's based on the Java Docker image and has the Jenkins slave jar. When you start it, basically docker run jenkinsci/jnlp-slave passing the master URL, the secret, and the slave name, it connects to the master, and that's it, you have a new slave running in Jenkins. Obviously you probably won't need to do this yourself, because there are plugins that will do it for you, and I'll show you later. The other interesting part about this image is that there are two versions: one is based on the official OpenJDK image, which is Debian, but there's also an Alpine image that is really small, I think like 40 or 50 megabytes, a lot smaller than the Debian-based one. So if you wanted to manually create 100 slaves running in Docker, you could just run docker run that many times pointing at your Jenkins master, and you would have them. So, for cluster scheduling and Jenkins, what do you want? What do we want and when do we want it? You want isolated build agents and jobs: you don't want one job messing with the workspace of another job, and the same for build agents, you don't want one job using a build agent and another job conflicting with it. We wanted to use Docker so agents can start in seconds. And we also want to be able to drop capabilities, as you can in the container world: not run as root but as a different user, maybe have no access to the network, or to one thing or another. Now I'm going to go through the different features the cluster orchestrators have and tell you which ones have what. Feature number one: container groups. In the Jenkins example, imagine you have a Jenkins agent container, a Maven container, and then a Firefox container, or a Chrome container, or a Safari container. You would have what is typically called a pod of containers, and you can have five containers running for one job, if your cluster scheduler supports grouping containers; otherwise you have to build one container image that has all the tools you need. This is experimental in Mesos 1.1.0, so you need a pretty recent version. Docker Swarm supports grouping through Docker Compose, and you can also force the execution of all the containers in the group on the same host; there's a sketch of the idea right after this.
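A minimal sketch of that grouping idea with Docker Compose; the images are real public ones, but the wiring and any same-host constraints are left out and depend on your Swarm setup:

```yaml
# docker-compose.yml: one group with a Jenkins JNLP agent and a browser container
version: "2"
services:
  agent:
    image: jenkinsci/jnlp-slave          # needs the master URL, secret and agent name as arguments
  firefox:
    image: selenium/standalone-firefox   # Selenium browser the job can reach by service name
```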
And Kubernetes supports the concept of pods natively, and it guarantees that all of them run on the same host and can refer to the other containers using localhost. The idea comes mainly from Kubernetes, which was the first one implementing it, and that's the power of it: being able to use multiple containers for just one job. Imagine you want to do a Maven build plus something else, or Selenium tests: if you have to create your own image, you have extra work to do to bundle all those tools. This way you just reuse the images already available on Docker Hub; you don't have to write any new Docker image at all. Memory limits. The scheduler needs to give you a way to limit how much memory jobs can use and prevent containers from going over those limits. Imagine you have all these resources in the cluster and different jobs competing for them; maybe you have a build that is going wrong and using more memory or more CPU, and you don't want that to happen. All of them support memory limits: in Mesos it's actually required, in Swarm it's optional, and in Kubernetes there are defaults for the ones you don't set. In Kubernetes you can even use namespaces, so you can isolate containers into namespaces and set group limits at the namespace level: not just per container, but saying that whatever number of containers you run, together they must not go over this limit. These memory constraints translate to the docker --memory parameter. So I have some questions for you now. I'm sorry, I know it's late and you are all tired, but I'm going to make you work a little bit. What do you think happens when a container goes over its memory quota? You have a build that runs a JVM, as in my sample, and you set a memory limit for the container. What would happen? Any takers? A segfault? Okay. Any other options? An out-of-memory exception in Java? Okay. Anybody else? The container gets killed? Okay, let me show you. I have this Maven application, a Maven build, and in the tests I'm just allocating memory; the normal Java thing happens, garbage collection, and it's using this memory without limits. The container has no limits, it keeps using memory, the JVM keeps doing its garbage collection thing, and this would run forever, so I'm going to kill it. In this one I'm going to set --memory 220m, so basically I'm limiting how much memory the container has. 220 megabytes is a random number; it also depends on where you run this. What you see, let me put it here at the top, is that it does the same thing until it reaches a point where something happens, and you get nothing: it just stops running. So what happened? The only way you can know is by inspecting the container. When you do a docker inspect, there's an interesting line that catches your attention if you know where you're looking, otherwise you have a long JSON to read, and it says OOMKilled: true. This is telling you the kernel killed your container because it went over the memory limit that was set for it.
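A minimal sketch of that demo, with an illustrative container name and the official Maven image; your project and limits will differ:

```sh
# run the tests with a 220 MB memory limit on the container
docker run --name build-demo -m 220m -v "$PWD":/src -w /src maven:3 mvn test

# when it silently stops, ask Docker what happened
docker inspect -f '{{.State.OOMKilled}}' build-demo   # prints "true" if the kernel OOM-killed it
```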
So whoever said that last one wins. Yeah, people, especially people coming from the Java world, expect to get an out-of-memory exception and things like that. Now, the problem with Java is that when you run it in a container environment, Java is not aware of the limits of the container. Until Java 9, with some patch that was merged last week that supposedly makes it cgroups-aware; so until you can properly use Java 9, months or years from now, this is what's going to happen. You're running Java in a container, and Java sees the host memory. Because I'm running Docker on my machine, the host memory is the two gigs of the virtual machine where Docker runs. And typically, in ninety-something percent of cases, depending on certain rules, the JVM takes one fourth of the total host memory as maximum heap size. That's what you see here: the max memory is 444, the same number as at the beginning when I was not setting any limits. So the JVM is not aware of the limit; that's what happened. How can we fix this? Think about it: maybe you're running this in a cluster, so you have multiple hosts, you're running Jenkins jobs in containers, and they just disappear, get killed, and you don't know what happened. One thing we can do is very specific to whatever you are running. For Maven you can pass JVM options as MAVEN_OPTS; for Ant I think it's ANT_OPTS. You have to know what you're doing and pass this parameter to the JVM yourself. I'm saying -Xmx is 210 megabytes, because I know I'm giving the container 220 megabytes in total, so let's make sure Java is aware of how much memory is available. What happens here is a little different, in the sense that the max memory Java sees is 187. It keeps under the limit: it's going to do more garbage collection, but it's never going to get the container killed by the kernel. Now, I was cheating a bit here, because by default, when you run tests with Maven, Maven will fork a new JVM to run the tests, and I was cheating because I told it not to fork. There's an option in the POM file, in the Surefire plugin exactly, where you can say whether to fork or not, and I told Maven not to fork, so all of this was running in one JVM. Now, if I run it in the default mode, even with the same parameters, 220 megabytes of memory limit and -Xmx 210, something is going to happen. Guess what? This is calling Maven, and Maven creates a new JVM for Surefire, and that JVM runs the tests. It can take a little longer, maybe, but the new JVM is seeing 444, because the new JVM is not aware of the -Xmx that I passed to Maven. And what I get is "Failed to execute goal... The forked VM terminated without properly saying goodbye. VM crash or System.exit called." The forked JVM is not aware of the -Xmx memory limit because I set it in an environment variable that is only for Maven. Now, how can we fix this? Well, one option is, in the POM file again, you can configure the Surefire plugin to pass JVM arguments or environment variables to the forked VM, something like the sketch below.
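A minimal sketch of that per-plugin fix in the POM; the 210 MB value is just the number from this demo:

```xml
<!-- pom.xml: cap the heap of the JVM that Surefire forks to run the tests -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-Xmx210m</argLine>
  </configuration>
</plugin>
```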
So you could go in there and set -Xmx to whatever, but you would keep doing this over and over and over again. There's a slightly better option, which is a somewhat obscure environment variable called _JAVA_OPTIONS. This works in OpenJDK and some other JVMs at least, and what it means is that any new JVM that gets started will use these parameters. So whenever I start Maven, it's going to use -Xmx210m, and when Maven starts Surefire, that JVM is going to use -Xmx210m too. This is going to solve a lot of problems. I'm going to just kill it; this would continue working. It solves the problem unless you're running several JVMs and together they use up the whole -Xmx you set; then you have to play with how much you give to each of them. But if you're running one or two, this will be honored by all of them. And you've got to be aware of what's happening when you run out of memory. Well, okay. Oops, what did I do? All right, I didn't know that key combination. Okay, I talked about that. Then there are CPU limits. It's similar to the memory limits: you can say how many CPUs for Mesos, Swarm, and Kubernetes, and this translates to CPU shares. And what do you think happens when a container goes over its CPU shares, over the CPU limit that you set? Well, nothing, really. What CPU shares mean in Docker (the name makes it a little clearer) is what share of a CPU you can get. It's basically a weight, and how much CPU a container gets depends on how many containers you are running. If you say CPU shares is one and you run one container, it gets 100% of the CPU. If you run two containers and both have one share, they each get 50% of the CPU. If you run ten, they only get 10% of the CPU each. So it's just a weight across all the containers that you run; it's all relative. The other important thing to handle on a cluster is storage and how you distribute it. Mesos has Docker volume support in versions 1.0 and later, and Swarm also has the Docker volume plugins, so you can use whatever plugins you use with plain Docker. And Kubernetes has had the concept of persistent volumes from the very beginning. All of them pretty much do the typical things, like EBS volumes in AWS, NFS, and GlusterFS, which I think is supported in all of them; it's just a matter of how you use it. Also, some considerations: these schedulers allow you to run as a different user, not just root, but you have to be aware that the container user ID is not the host user ID. We get a lot of questions about the Jenkins Docker image because the Jenkins master runs as the jenkins user, which is always UID 1000 inside the container, and if you run it on an Ubuntu host, user 1000 is the ubuntu user. So if you are mounting host volumes into the container, which is typically a bad idea, because you have to deal with all these things and it's not great for scheduling across a cluster, or if you're using NFS, then the UIDs of the users have to match: how the container is trying to access the data, and what the permissions of the data itself are. So, NFS and users.
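A minimal sketch of that UID concern with the official Jenkins image; the host path is hypothetical:

```sh
# the official image runs as the "jenkins" user, UID 1000 inside the container,
# so a bind-mounted host directory must be writable by UID 1000, whoever that is on the host
sudo mkdir -p /srv/jenkins_home && sudo chown -R 1000:1000 /srv/jenkins_home
docker run -d -p 8080:8080 -v /srv/jenkins_home:/var/jenkins_home jenkins
```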
For networking, in the Jenkins case you need to open the HTTP port, the JNLP port for connecting agents, and Jenkins also has a sort of SSH server built in that you could open if you wanted to; I'm not going to go into details. And there's support that allows you to get one IP per container in these clusters. In Mesos it's more recent: you can run with Calico or with Weave, and the same in Kubernetes and Swarm; Swarm by default uses the Docker overlay network. With all these options it's just a matter of setup: in Kubernetes it's pretty straightforward if you run it on Google Container Engine because it gives you everything, while with Mesos or Swarm you may have to do a lot more setup and configuration to make virtual networking work. And lastly, I'm going to talk about the Jenkins plugins that are available to take advantage of running in containers. There are several Docker plugins; I think there are at least two for dynamic agents running on Docker. Basically, whenever you have a job, they will spin up a new Docker container and run the job in that container. There's no support yet for Docker Swarm mode, because it uses a new API that is not yet supported. The agent image needs to include Java, and it will download the slave jar from the master, so it needs a connection to the master to download it. Then you have multiple plugins for different tasks; this is how it is today. There's the Docker Build and Publish plugin to build Docker images, and there's the Docker Hub notification plugin to trigger jobs when an upstream image is updated, things like that. And there's great pipeline support. I'm not going to go through the configuration, but I'll show you what a Docker pipeline looks like. You can use docker.withRegistry if you want to use your private Docker registry, you can do docker.image with the name of the image and then .pull to download it from Docker Hub, and you can build Docker images with docker.build. The interesting bit is probably image.inside: whatever shell commands you put there are run inside the Docker container itself. There's also a pretty recent plugin called the Docker Slaves plugin (there are a lot of similar names here) that allows you to use any Docker image for the containers without needing Java in them. It basically makes it a lot easier to reuse images, allows you to define the slave in the pipeline, and you can have side containers. This is the Jenkins Docker Slaves plugin, not to be confused with any of the other ten Docker plugins that are out there. So you can do something with Maven using dockerNode, the name of the image, the Maven image, and then sh with whatever you want to run inside the Docker image. The Mesos plugin also gives you dynamic Jenkins agents, both Docker containers and isolated processes, so any random program that you want to run on Mesos. The image has to have Java, because that's how it runs the slave jar to connect back to the Jenkins master. And you can run Docker commands, but that's basically outside of Mesos; I don't think I explained that here. You can use Docker pipelines with some tricks: you need the Docker client installed inside the Docker image and you share the Docker socket, something like the sketch below.
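A minimal sketch of that socket-sharing trick; the agent image name is hypothetical, it just needs the docker CLI installed inside:

```sh
# "Docker outside of Docker": the agent container drives the host's Docker daemon
# by mounting the Docker socket, and keeps the workspace at the same path inside and outside
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/jenkins/workspace:/home/jenkins/workspace \
  my-agent-with-docker-cli
```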
That's the typical way to run Docker side by side with Docker: a container running against your host Docker daemon, plus you need to mount the workspace on the host at the same path as in the container that is running. With this, do I have an example? Yes. With this, in a node running on Mesos, I can run a Golang image against the host and then do a go build with no problems, reusing that Golang image. The only caveat is that this runs outside of Mesos; it's just running in the host Docker daemon, so Mesos does not know anything about it, how much memory it's using, what ports it's using, or anything like that. You're basically running outside of the scheduler. And then the Jenkins Kubernetes plugin: same thing, you can have dynamic Jenkins agents, and they run as pods, so a group of containers. You can have multiple containers; just one of them has to be the JNLP one, the one that runs the Jenkins slave to connect to the master, and if you don't define it, the plugin will create it by default. It has pipeline support both for defining what the images in these pods are and for executing things inside these pods. And in the next version, which I hope to release soon, it also has persistent workspaces, so all your agents can mount the workspace from NFS or EBS or whatever; this just uses what Kubernetes provides. One of the typical problems when you run things on Docker is that you don't have the previous builds, I mean, you start from zero every time you do a build. With this, you could have a volume with your workspace, on NFS or a shared mount or anything that is supported in Kubernetes, and then you wouldn't need to start from scratch every time. So this is what the pipeline looks like: I'm saying this is a pod template, I have a Maven container, I have a Golang container, and in this pod what I'm saying is check out some git code, then inside the Maven container run a Maven build, and then inside the Golang container run a go build. So I can reuse the images from Docker Hub, I don't have to create my own custom image with both Maven and Go, and I can run both things in these two containers with just one agent. Yeah. And just to recap, these plugins give you dynamic agent creation. They all use JNLP as the protocol to connect to the master; in some environments you can use stunnel to connect to the master, depending on how you run this. I guess we don't have time to go into more detail. And they use the Cloud API, which is not ideal for container workloads right now, because it was designed in Jenkins for things like Amazon images and instances, so it may take a little bit longer to start the containers. But there's a Jenkins One-Shot Executor plugin that we hope to include in at least the Kubernetes plugin, and possibly the Docker plugin too, and this is basically optimized for containers. The old Cloud API assumes that when you start an instance it takes longer, so it keeps the instances around and doesn't start a lot of them at the same time, because there's a cost associated. But this one-shot executor is going to just create the container, run your thing, and then kill the container at the end. So that's me. Do you have any questions? Yes. So, I had a question, first, about the Jenkins slave in a container, an image that we can just pull and have a slave.
How do you synchronize dependencies across the containers? Then you went on and said that, in fact, in the slave you can run another container, like an inception, a container in a container. So how would you then, in that container, synchronize dependencies? Let's say I need Gradle, right? Or I need a certain version of Gradle. Yeah. Okay, so how do we manage versions, and do we run a container inside a container? Okay. So maybe I didn't explain it well: we are not running a container inside a container. What a pod is, is that you can start multiple containers, but they are not one inside another. No, no, no, I mean when Jenkins is doing the job, when Jenkins is building my image. Oh, when Jenkins is building an image. Yes. Right now there's no good way to run Docker inside Docker, so the recommendation is always to run Docker side by side. What you do is you have a container running your slave or whatever; this container has to have the Docker client installed, and you mount the Docker socket inside the container, so this container can run Docker commands against the Docker daemon on the host. When this container tells the Docker daemon "docker run something", it's the host that runs it. So you are basically talking to the host, and the host is creating another container, and they're all side by side. And then how do I keep track of all the dependencies? Let's say that I deploy five slaves, right, and I have loads of builds in a pipeline going through them one by one. How do I make sure that all of my slaves have the right version? How do I make sure all my slaves have the right version? Okay. The way it is done in all these plugins is that the slaves are short-lived: ideally they run just one job and die. With the Cloud API it may not be exactly like that all the time; they may stick around for a little bit, depending on some parameters you can adjust. But basically you are saying, I want this job to run in Maven 3.3.9, and whenever that job runs, it will download the Maven 3.3.9 image, run your job, and die. And you have another job that says, I need to run this in Maven 3.1; it will download the Maven 3.1 image, run that, and die. This is the beauty of it: you can have all sorts of combinations using all these Docker Hub images. For example, the Maven image has variants for Java 7, 8, and 9, three different images, so you could run some builds on Java 7, some builds on Java 8, or maybe the same build on 7 in parallel with 8, and they are all in different containers. Yes? If you had to choose only one between Mesos, Kubernetes, and Swarm, what would you choose and why? Okay, if I were to choose one, I would choose Kubernetes, but just because I'm biased, as I said before. It's going to depend: if your company, if your operations people, already have something running, then it's more likely that you're going to choose that one. Mesos has the advantage of being able to run any process, so it's interesting for more high-performance things, and there's a lot of scientific work running on Mesos for that reason, that you can run things on bare metal. Docker Swarm has the advantage that it comes by default with Docker, but it doesn't have the same support behind it.
And Kubernetes has a lot of open-source community behind it, multiple companies: Google, Red Hat, CoreOS, all these people building on top of Kubernetes. And if you are running on Google Cloud, then it's a no-brainer, they already give you that for free. Okay, so I'm getting both. Thank you.