So, hello everyone, I am Giuseppe Maxia, also known as the Data Charmer. I have a blog and a Twitter account, and I am quite active on both. If you have comments on this talk, you can send me a tweet and I will try to respond. I have something to say that should be clear: I don't work for Oracle. I used to, but that was a long time ago. So everything I say here is my opinion, nothing that Oracle wants me to say or doesn't want me to say. The same goes for my company. My company does a lot of things and is even involved in container stuff, but what I'm presenting here has nothing to do with what my company does or what my company thinks I should do. So every mistake here is going to be my own. We are going to cover a lot of ground, so I will try to describe everything I know and everything you would like to know about operations with MySQL in Docker. Let's start from the beginning, by talking about what we can use to run our computations. If you want to run a service, the easiest thing you can do is use a standalone bare-metal server. You just install everything there and it works. It's simple, clean, very fast. What is the problem? The problem is that it doesn't scale. If your server is not enough anymore, you need to buy another one, and another one, and then try to make them work together. What else can you do? Servers today are very powerful, and hardware has developed much faster than software. So chances are that a single server is enough to run not one service but two or three. So you can try to sandbox the services inside the server. You say: I'm going to run this particular service in one portion of the server and another service in another portion, and you try to make the services behave in such a way that they don't interfere with each other. This may work sometimes, but you need to enforce the rules yourself. You basically have to do the work of the operating system for your services and tell them how to behave.
So this is a solution that may work sometimes, but it's not optimal. What can you do instead? You can use virtual machines. What is a virtual machine? Something that looks and feels like a single server, except that it is created by software. How does it work? You have the machine, and on top of the machine there is something new called a hypervisor. The hypervisor sits between the operating system and the virtual machine, and it simulates single machines. So your virtual machine looks and feels like a single server. What is the beauty of this? The beauty is that all your applications work as if they were using a single server. No modifications, you don't need to worry about anything. The hypervisor makes sure that your virtual machines are isolated so they don't interfere with each other. They are secure, they are safe, and your applications work just as if they were using a single machine. What is the problem here? It's speed: the virtual machine is slower than a bare-metal server. And then we have the new thing, containers; in this particular case, containers using Docker. What is it? It's something that looks like a virtual machine, but not really. You have something between the operating system and the container called the Docker engine. It's like a traffic light: it tells your container, you have to behave this way and that way, and you have to share or not share information with other containers. The difference between virtual machines and containers is that containers use the same kernel as the operating system. So they are extremely fast, let's say two orders of magnitude faster than virtual machines. And the way it works, the container does not have everything that you find in a virtual machine. The container has only the bare minimum that is needed for the service inside the container to work.
So you will find that there is a thin layer over the kernel, plus the libraries that are needed for that service, and the service itself. The container is much smaller than a regular virtual machine, and this is one of the reasons why containers are faster. So what is a container? It's a virtualization system, but it's not a virtual machine, and it works very close to the operating system. For this reason it can only run the same operating system as the host. It's theoretically less secure than virtual machines, but it's extremely fast to deploy. So let's introduce the concept of mutable and immutable architecture. This is important to understand how we play with containers as opposed to virtual machines. Let's start with virtual machines. What do we do? We have something like Puppet or Chef that will automate operations for us. We deploy an operating system. On top of the operating system we deploy the hypervisor; it's a lengthy operation. Then we deploy the guest operating system, the empty virtual machine. Then we install libraries, and then we install the service. It's a series of steps that the mutable architecture goes through to arrive at the final virtual machine with all the services that you need. What happens if you need to do an update? You deploy more libraries or replace existing libraries, and deploy more services or replace existing services. Again, it's a multi-step procedure. Instead, let's talk about immutable architecture, the way containers work. We start with some central orchestration like Puppet or Chef. We install the operating system, the thing that you put on bare metal. Then you deploy the full container with everything inside. It's something that works in less than a second. The difference between working with virtual machines and working with containers is that when a container fails, or we think that it may fail, we just remove it and replace it with another one.
We don't do what we do with virtual machines, where we deploy new libraries or change libraries or try to modify the machine. We just get rid of the container and replace it. It's a very powerful concept that is much faster than working with virtual machines, but it requires us to adapt to the new system and do operations in a different way. Be aware of this when we start talking about how to use MySQL with all this. If you have an immutable architecture, it means that you throw the container away, and if the container contains the database, this is bad. This is the focal point of using containers. Remember that in immutable architecture you just get rid of containers and replace them, so you need a way of preserving the database and not throwing it away, because otherwise the purpose of using a database would be nullified. One thing that is not the focus of this presentation, but that I want to talk about, is that many people confuse containers with microservices. What is a microservice? It's something that doesn't have an operating system, doesn't have a shell, doesn't have extra features; it has only the things that are needed to run one particular service. The defining property of microservices is that you can put them together very easily. Containers are not microservices, even though sometimes the two things can coincide. How should microservices work? Microservices work like Lego. You have several pieces that you put together to produce a whole that is more than the sum of the single microservices. So you have, let's say, a component with a Python application, another component with a Java application, and another component with a database, and you put them together inside an infrastructure, and they have a way of communicating with each other. This is not how containers work. Containers work in a different way. Instead of having every component inside the same unit, every component here has its own container.
So the philosophy of containers is that they play very well with applications that are designed to work across a network. For example, you can work very well between an application server and a web server; they are designed to communicate over a network, so they work very well together. And somehow databases and web apps also work well in containers. But the difference is that these are not microservices. They are separate containers that can work together, but they were not designed as a single unit. One thing to recall about virtual machines: they boot quite slowly. Where replacing a virtual machine or creating a new one takes minutes, a container can deploy in less than a second, usually in a very small fraction of a second. The same comparison holds for runtime. A virtual machine runs on virtualized hardware, so it's always slower than a real machine. Containers instead are as fast as the host, as fast as the operating system allows, because they share the same kernel. What do you need to do to use containers, in this case Docker? The examples mentioned in this presentation are online, and they use Docker 17.03. Be aware that until a few weeks ago this 17.03 was called 1.13. It's the same thing, but they changed the numbering; it's almost like MySQL jumping from 5.7 to 8.0. They just thought it looked more mature. Before you start, whichever operating system you are using, you need to install Docker. On Linux you use apt-get or yum, and on Mac or Windows you use Docker for Mac or Docker for Windows. One thing that is very important: if you use Docker on Linux directly, you are using the real thing; the container is as fast as the operating system can make it. On Mac and on Windows you are using a very thin virtual machine that runs Docker.
It is integrated with the Mac or Windows operating system, but it's not exactly sharing the same kernel as the host. On Mac or Windows the containers, even though they are very fast, are not as fast as they are on a native Linux server. The containers discussed here are essentially Linux containers (LXC). The technology has existed for many years, but it was not easy to use. You can build containers using the raw Linux virtualization facilities, but it is quite difficult to achieve the same results as Docker does. The difference between Docker and other Linux virtualization systems is that Docker is the Nespresso of containers. With Nespresso, you take a capsule, you put it inside, and you get a beautiful coffee. Otherwise, you have another coffee machine that is huge, beautiful, recommended by many companies, but in order to make a good coffee you need to know how much to press the coffee, how to load it, how to clean the machine. You can do the same thing with both, but doing it with the older technology requires more effort. Docker has found a way of creating capsules of software that you just put inside a Linux machine and, boom, they work. How does it work? Let's go inside. First, you need to download the image. What is an image? An image is all the software that is needed to create a container, only it is not running. So we call "image" the container that is ready to be deployed, and "container" the image that has been started. In order to run a container, you first need to pull it from the central repository. So we say docker pull mysql/mysql-server. Depending on how fast your network is, this takes between 20 seconds and 20 minutes. To give you an example, in my hotel here I downloaded this in something like two minutes, while at home it takes 20 seconds. It just depends on the speed of the network. But the image is not very large; compared to a virtual machine, it's a fraction of the size.
This mysql/mysql-server image is about 400 megabytes. So I believe this is the official MySQL Docker image, is that correct? Say that again, please. Is docker pull mysql/mysql-server the official MySQL Docker image? I'm getting to that now. What you see on the screen is one example of what you have after you have pulled a couple of images. You see mysql-server, a tag that says latest version, the image ID that is the unique identifier, when it was created, and the size, which in this case is 369 megabytes. So the question that I heard before is: is this the official MySQL image? It is and it is not. If you go to Docker Hub, the place where you find these things, you see that there is one image that is called simply mysql, not mysql/mysql-server, just mysql, and it says "official". What does this "official" mean? It means that it has been created by the Docker team; they maintain it themselves. Instead, the one that I used here, mysql/mysql-server, is produced by the MySQL team at Oracle. You will not find this one labeled official, but for me it is the official one, because it comes from the team that creates the database itself. So it is a bit confusing: the Docker team calls theirs official because they made it, but the official one, depending on what you want to do, is the one created by the same people who create the database itself. A couple of key points. Every container contains an executable layer of a full operating system, meaning that if you enter inside the container, you can run operations as if it were a regular Linux operating system. Containers are always Linux. So here on my Mac I run a container, I enter inside it, and I have Linux. The container is not a full virtual machine.
So if you expect to find everything you want, you will find only the things that were added to make the service that gives the container its name work. In the case of MySQL you will find only the things that are needed for the database to work. One important thing is that, the way the installation works, you need to set a password on the command line. Oracle has made a lot of changes to the default security of MySQL; some of these changes are good, and some could be reviewed. What Oracle has done is make it possible to generate a random password when you deploy the server. In this case that would be useless, because containers need to work automatically; if you generate a random password, then you have to do some operations manually to use it. So you can pass the password using a file, which is recommended, or you can pass it using an environment variable. Not great, but this is the way everybody is doing operations; it is something that could be improved. Despite what most people say, containers are isolated. You cannot access services inside a container from outside the container unless you configure the container to share information; we'll see more about that. You can say: expose this part of the container, like a port or an IP, to the outside. The important thing for us to remember is that data storage needs to be taken into account. As we have seen with immutable architecture, if we replace a container, we lose its contents. So we need to use something called volumes to preserve the data. And another important thing: we don't modify containers once we have deployed them. The configuration is done by composing the container when we deploy it. When we run the container, we say: run this container using this file as part of your operating system, and it's going to replace a file inside your tree.
It's something that you need to see in an example, otherwise you don't understand how it works. But the general concept is that you don't modify things; you prepare the things that you want to put inside the container, and then you tell the container to run using this file instead of, or in addition to, its default files. Let's see a couple of examples to understand what we are doing. If you want to deploy a single container, we use the command docker run plus options. For example, this is the minimum you can do to deploy a container: docker run -d, where -d means daemon, run as a daemon, so it becomes a permanent service, and the name of the image, mysql-server. However, this command will fail. The container will be deployed, but it will fail immediately; it will not work. If we look at the logs of the container, the log says that the MySQL root password was not set. This is what we need to do; remember, it was one of the key points: we need to set a password when we run the container. Let's try once more with more information. Here we do two different things. We use a name, and then we say -e, where -e stands for environment, and we set MYSQL_ROOT_PASSWORD equal to our password. This is very insecure, and unfortunately it is the way the Docker team designed it, and the MySQL team copied it instead of introducing something better. This will work, so the MySQL server will start and will be usable. When we want to configure the container, we can use the volume option and the environment option. Let's talk about volumes. There are three cases. Case one: we create a volume, and we say that container one has a volume named data. Data did not exist in the image, and it's empty. Then we can run container two and say: use the data volume from container one. Why do we do this? This way we have centralized data that we can use for exchanging information between containers.
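The failing command and the working one can be sketched roughly as follows; the container name "mybox" and the password "secret" are placeholders, not taken from the talk's slides:

```shell
# Minimal single-container deployment, wrapped in a function so the
# commands can be read without a Docker daemon running. The first run
# (without MYSQL_ROOT_PASSWORD) would fail; this one sets the variable.
start_single_node() {
  docker run -d --name mybox \
    -e MYSQL_ROOT_PASSWORD=secret \
    mysql/mysql-server
  # Inspect the server log, e.g. to see why a container failed to start:
  docker logs mybox
}
```

Calling `start_single_node` assumes a working Docker installation and network access to pull the image.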
Case two: the data is already inside the image; we create the container, and then we say: use the data from the other container. This way the second container can take software or data that was deployed inside the first one. The third type, which is the most common one, is when a volume inside the container is mapped to a folder in the host operating system. This is what we do to run databases. Since we delete containers when they become unresponsive, we use a volume to put the data in. The data is not inside the container; it is inside a safe portion of the operating system, a different partition, any place that you think is safe. This way the container running the database looks like it is using data inside the container, but that path inside the container points to a folder outside. Let's see an example of a customized container. Now we want to start a container with a binary log and a server ID. One thing that we can do is just add the options to the command line. After everything we have done before to run a simple container, we just say --log-bin and --server-id=100. If you have only one thing to change, this is enough. But there is a better way of doing this, especially if you have a larger deployment of containers. You can create your own template for my.cnf. In your minimal.cnf you put only what you want to replace, not everything you would put in the regular /etc/my.cnf. In this case we have log-bin, server-id and user. So what do we do? We use the volume option, -v, to map minimal.cnf in such a way that it becomes /etc/my.cnf inside the container. And this is what will happen: the container will have an /etc/my.cnf with the same contents that you see here in minimal.cnf. And we can check that we have done what we wanted. We use docker exec -ti, which means terminal, interactive.
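The custom my.cnf plus external data directory described above can be sketched like this; file and folder names are illustrative, not the exact ones from the slides:

```shell
# Sketch: a container with a small config template mounted over
# /etc/my.cnf and the data directory mapped to a host folder.
start_configured_node() {
  # Only the options we want to override go into the template:
  cat > minimal.cnf <<'EOF'
[mysqld]
user=mysql
log-bin=mysql-bin
server-id=100
EOF
  mkdir -p "$PWD/mysql-single"
  docker run -d --name mybox \
    -e MYSQL_ROOT_PASSWORD=secret \
    -v "$PWD/minimal.cnf":/etc/my.cnf \
    -v "$PWD/mysql-single":/var/lib/mysql \
    mysql/mysql-server
}
```

With this mapping, deleting the container leaves the database files intact in `$PWD/mysql-single` on the host.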
We pass the name of the container and the command that we want to run, in this case bash. Then we run mysql and we check the server ID, and the server ID is the one that was set using the file that we passed as a volume. You can also run a container with a dedicated user. You can say: run this container with a MySQL database named personnel, with a given user and password. What is this? In addition to the root password, you create a user on the fly and a database on the fly. This can be used for anything you want. When you want to connect to that database, instead of using the root user, you just use that dedicated user with the personnel database. Okay. Let's see a single-container deployment just to make sure that we understand what we are dealing with. Here you see we have the same commands that we have seen in the slide. Now we are going to execute them. We are saying that the host folder docker-mysql-single will become /var/lib/mysql, and you see that docker-mysql-single does not exist yet. Let's see what happens when we run this command. You see the folder has been created, and here is the data directory on my host, meaning my external data; but this is the data that is being used by the container. So, docker exec -ti mybox... If we do... you see, it looks like the data is inside. What can we do here? Just to make sure... we touch a file. So we have a hello file inside the container, and the hello file is also in the external place. Then we go inside, we say mysql -psecret, and we are inside MySQL. This is a simple usage of the container. We have deployed a single container; now we are going to remove it. I have a script for this: remove_containers will call docker stop and docker rm with the container name. Let's go back to the main point: security. We have seen that we are passing the password using -e.
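The on-the-fly user and database can be sketched with the environment variables the image understands; the user and password names here are illustrative (the demo used a database called personnel):

```shell
# Sketch: create an application database and user at container startup.
# The mysql/mysql-server image reads these variables on first run.
# User/password values are placeholders, not the demo's real ones.
start_node_with_user() {
  docker run -d --name mybox \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=personnel \
    -e MYSQL_USER=persuser \
    -e MYSQL_PASSWORD=perspassword \
    mysql/mysql-server
}
```

Applications then connect as `persuser` to the `personnel` schema instead of using root.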
We say it in the clear, in the open, and everybody can see the password. So it's not good. There is another way: we can create a random password, or we can pass the password in a file. This is the approach suggested by the MySQL team. Instead of passing the password, we say MYSQL_RANDOM_ROOT_PASSWORD=yes. What happens then is that in the logs you will find one line that says "generated root password", and this is the password. The next time you connect, you have to use this password, and eventually, depending on how the server is configured, you also have to change it. This is good, or let's say safe, when you are using the container manually. It's not good at all when you want to use containers automatically. So what can you do? You can use another feature that I requested and the MySQL team implemented. Instead of putting the password itself in MYSQL_ROOT_PASSWORD, you can put a file name. In this case, we are not passing a password in the open; we are just passing the name of a file. What are we doing here? First we create a random password. Not the most secure, but anyway, it's one way of doing it: echo $RANDOM, pipe it through sha256sum, and take the first 16 characters. It's a random set of characters, and we put this password in pwd.txt. Then we do two things. First we use a volume to send this password file to a path inside the container's filesystem. Second, we set MYSQL_ROOT_PASSWORD to that file name. The file is something that we created on the host operating system and sent to the container, so we are not sending anything in the clear. Then we start the container, and we can use this command to run MySQL without seeing the password at all: -p followed by a cat of the file. So this is more secure, and it has the additional advantage that you can use it for automated deployments.
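The steps just described can be sketched as follows; the password generation mirrors the slide (and, as noted, is not cryptographically strong), while the mount path inside the container is an assumption:

```shell
# Generate a 16-character random password and store it in a file.
# (echo $RANDOM | sha256sum is the talk's quick-and-dirty recipe.)
echo $RANDOM | sha256sum | cut -c1-16 > pwd.txt
PASS=$(cat pwd.txt)

start_node_with_password_file() {
  # Mount the file into the container, and point MYSQL_ROOT_PASSWORD
  # at the file path instead of passing the password itself:
  docker run -d --name mybox \
    -v "$PWD/pwd.txt":/root/pwd.txt \
    -e MYSQL_ROOT_PASSWORD=/root/pwd.txt \
    mysql/mysql-server
  # Later, connect without ever typing the password (cat runs on the host):
  docker exec -ti mybox mysql -p"$(cat pwd.txt)"
}
```

The same pwd.txt can be shared by several containers, or each container can get its own file for automated deployments.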
Because you can use the same file for several containers if you want, or you can use a dedicated file for each container, and then pass that information to the applications that will use the containers. An even better approach is to use the same system, but with a dedicated my.cnf that you also deploy to the container. So we generate the password, and then we have a my-safe.cnf that contains a "change me" placeholder. We need to change that, and how do we change it? We do the change automatically using sed, and we create the final my-safe.cnf. This my-safe.cnf will become, inside the container, /etc/my-safe.cnf. Then you can run mysql without entering the container itself, using --defaults-extra-file=/etc/my-safe.cnf. This is even better for automated deployments. Any questions so far? This was the easy part. Would it help if, instead of the configuration file which contains the password in plain text, we utilized the login path? To utilize the what? The login path. mysql_config_editor can create a login path that obfuscates the password, and then you can pass it to MySQL automatically. It would help, but it's not supported by the image. This is why I said that this could be improved: at the moment the only mechanisms we have are either to pass the password itself or to pass a file. So I think there is room for improvement from the MySQL team to provide a safer option, so that you can pass a password in a secure way and in a way that is easy to automate. Let's talk about networking and replication. Networking is important especially for replication, because in replication the containers need to communicate with each other. So we need to understand how networking works. In Docker you have several networks. You have the default network, which isolates the container, so the container will only communicate with other containers created on the same network. Or you can create a dedicated network, and then you can call the containers by name instead of using IPs.
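The my.cnf approach can be sketched like this; the template name, the "change_me" placeholder and the section layout are illustrative, loosely following the talk:

```shell
# Sketch: bake the generated password into a client config file,
# so clients use --defaults-extra-file instead of -p on the command line.
PASS=$(echo $RANDOM | sha256sum | cut -c1-16)
cat > my-safe.cnf.template <<'EOF'
[client]
user=root
password=change_me
EOF
# Replace the placeholder automatically:
sed "s/change_me/$PASS/" my-safe.cnf.template > my-safe.cnf

connect_with_extra_file() {
  # Assumes the file was mounted at deployment time with:
  #   -v "$PWD/my-safe.cnf":/etc/my-safe.cnf
  docker exec -ti mybox mysql --defaults-extra-file=/etc/my-safe.cnf
}
```

This keeps the password out of the process list and out of the shell history, which matters for automated deployments.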
Let's see a couple of examples. When you inspect a container, there is a lot of information; it's a huge JSON object. Among this information there is a field called IPAddress, the address that has been assigned to that container, and it depends on the network the container was created with. So if you want to access this server from the outside, you need to know that IP address. How does it work? If you use the default network in Docker, you can communicate between containers only if you know the IP address; there is no other way. If you have two containers and you want to call a service in the other container, you need to find its IP address. Another thing you can do is export a port to the host operating system. For container one, if you have exported the port, you can call the service at 127.0.0.1 plus that port. For example, you can export port 3306 from container one to port 5000 and port 3306 from container two to port 8000, and these ports will be available in the local operating system. This is doable, but not very efficient. Another thing you can do: if you are using a dedicated network, then you can call the container by name, which is much better for humans. You don't need to find the container IP; you just call the container by name. This is more robust, because depending on how you deploy the container, the IP address might be different; you don't have to guess it, you just use the name. With this information in mind, how do we run replication? We create a dedicated network, deploy the master container using that network and also the slave using that network, wait until the servers are up, and then we create the replication user on the master and run CHANGE MASTER TO on the slaves. This is also a place where things could be improved by the MySQL team, because there is no mechanism to make this operation easy.
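The dedicated network and the port mapping described above can be sketched as follows; the network name, container names and host ports are illustrative:

```shell
# Sketch: a user-defined network so containers resolve each other by
# name, plus host port mappings for access from outside Docker.
setup_network_and_nodes() {
  docker network create my-rep-net
  docker run -d --name node1 --network my-rep-net \
    -e MYSQL_ROOT_PASSWORD=secret \
    -p 5000:3306 mysql/mysql-server
  docker run -d --name node2 --network my-rep-net \
    -e MYSQL_ROOT_PASSWORD=secret \
    -p 8000:3306 mysql/mysql-server
  # Inside my-rep-net, node2 reaches the other server simply as "node1";
  # from the host, the servers are at 127.0.0.1:5000 and 127.0.0.1:8000.
}
```

Name-based addressing is what makes a scripted CHANGE MASTER TO practical, since container IPs can change between deployments.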
There is a set of examples on GitHub: you go to GitHub, look for datacharmer, and there is a mysql-replication-samples repository, and inside there is a Docker replication directory. There you find all the scripts that you need to run replication and to test replication with Docker. There is also a test for group replication using the same principles. So what do we do? We deploy a master. To deploy a master, we create a template my.cnf where we leave the server ID as a placeholder, and everything else is standard options, nothing that should change. We put GTID enabled, so this is ready to run with GTID. What do we do? We replace the server ID and send the result to a temporary file that we will use as the my.cnf for the master. And we use several volumes: one volume for the my.cnf, one volume to put the credentials in root's home so we can use this container without a password later on, and another volume for the data itself. So we create one folder and point that folder to /var/lib/mysql. Notice that we are using the replication network, and we use the same network for the slaves. We do the same things for the slave; the only difference is that it has a different server ID. After that, we need to check that the server is ready. What do we do? We send a query to the server, and if the query returns the result that we expect, it means the server is ready. Otherwise the server has not finished its startup, so we retry a number of times. Then, remember, we have the temporary credentials file that we put in root's home; with that, we can enter the server without using a password. Next step, we create the replication user, and then we run CHANGE MASTER TO on the slaves. Let's see a demo using the files that I mentioned a minute ago; right now there are no containers running. This is the file that does what I have shown you in the slide. It does a lot more checks, but in the end it does the main things: the volumes and the setup of the servers.
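Two of the building blocks just described, the server-id templating and the readiness check, can be sketched like this; the placeholder string, file names and retry counts are illustrative, not copied from the repository:

```shell
# Sketch: fill the server-id placeholder of a template my.cnf, and poll a
# node until the server answers a trivial query.
cat > template.cnf <<'EOF'
[mysqld]
user=mysql
log-bin=mysql-bin
server-id=SERVER_ID
gtid_mode=ON
enforce-gtid-consistency=ON
EOF

make_node_cnf() {   # usage: make_node_cnf <server-id> <output-file>
  sed "s/SERVER_ID/$1/" template.cnf > "$2"
}

wait_for_mysql() {  # usage: wait_for_mysql <container> [tries]
  local container=$1 tries=${2:-30} i
  for i in $(seq 1 "$tries"); do
    # Credentials are assumed to be available without a password,
    # e.g. via the my.cnf placed in root's home as described above.
    if docker exec "$container" mysql -e 'SELECT 1' >/dev/null 2>&1; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# One config per node, differing only in server-id:
make_node_cnf 100 master.cnf
make_node_cnf 101 slave1.cnf
```

Only after `wait_for_mysql` succeeds does it make sense to create the replication user and issue CHANGE MASTER TO.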
Then we check that the server is ready, and finally we call the replication command. Three nodes have been deployed; now we sleep for 10 seconds and then try to check if the node is ready. You see it was not started yet; now it has started, we set up replication, and we have inserted something in the master. We wait a few seconds, then we check the slaves, and the slaves have received the data. Using this script you can deploy three nodes, set up replication, and then check the results. This is working. docker ps tells you that we have three nodes, mysql-node 1, node 2 and node 3, and they are up and running. The script also created some additional files that we can use to access the nodes. For example, this is the master, so let's create a schema db1; now we go to node 2, show schemas, and you see that db1 exists, so it means the system works. We remove the nodes and continue with the presentation. Orchestrators: I mentioned before that containers on their own are not very useful, so you need tools to run several containers at once. There are orchestrators that do this. The problem with existing orchestrators is that they play very well with almost everything except databases. Why? Because, as I told you, databases require handling of volumes. There is one exception to this: Uber. Uber is using containers with MySQL and a specific orchestrator. But this is not a run-of-the-mill deployment of MySQL; it's not something that you want to do at home, as it is a very peculiar way of using MySQL. They use MySQL in a write-only way: instead of writing regular data, they only write JSON blobs. The other thing is that they don't use an already existing orchestrator; they wrote one that fits their needs. This is proof that it can be done; it doesn't mean that you want to do the same thing at home. So the lesson for me is that orchestrators still need to evolve so that they can be used with databases a bit better.
Another important thing to understand is that you will find a lot of services that offer you Docker with orchestrators in the cloud. For example, there is a Google service that offers Kubernetes. It works in the cloud, and you have to understand that this makes using Docker not as convenient as when you do it on your own Linux machine. I'm going to show you the reason. If you are using Docker in the cloud, you have the bare metal at the first level, then a hypervisor at the second level, then a virtual machine at the third level, and finally the container. The performance benefits that you get with containers are lost. They are still fast, they are still convenient to use, but it's not the thing that you want. So why do you find this available in the market? Just because they can. This is in the interest of Google and everybody else that is offering it; it is not in your interest. There are a couple of companies that are trying to create a system that is based on containers and uses containers natively. Oracle is among them, but my understanding is that it's still early and it's not coming out very soon. Anyway, there are a few companies offering container-based clouds; they are not very famous yet. So be aware that the services offered right now might not be exactly what is best for you. Let's talk about group replication. Yes? I have a question on the previous slide, about Docker in the cloud. I understand that from a performance perspective it might be a waste. One of the advantages you mentioned is how easy it is to deploy a container. What about the security aspect of having isolation between the processes that run inside the cloud? It will be the same as you have on a standalone machine, because the security that you want is between containers. Exactly. So it's good-enough security; it is not as good as what you have with virtual machines, which is very good.
It means that you cannot leak data between containers, but two containers might affect each other's performance, because there is not as much isolation as you have with virtual machines. The level of security, though, is the same that you have on a standalone machine.

So, group replication. We have seen a small introduction to group replication; this is the new thing that was released as GA by Oracle in October 2016. What is the main difference? You will find out in a presentation devoted to group replication this afternoon. Suffice it to say that it's a bit different from regular replication, at least in containers, because it has a couple of quirks. For example, you cannot use host names to refer to other nodes; you need to use IPs. This is a limitation of at least the first version of group replication that I found to be problematic. So we pull the latest MySQL server, 5.7.17, because we need it to run group replication, and then we do a couple of operations; don't worry about all the details here, everything is taken care of by a script. The scripts are all in GitHub, the same place that I told you before; this is the link that takes you directly there. So what do you do? You create a network, you customize the my.cnf, deploy the nodes, start group replication, and finally you check the status. Let's go directly to the demo so we can see how it works. First we create a network with a specific IP address base, so we know exactly which IP addresses we are going to use. This is in order to simplify the templates that we use for deployment. The point is that group replication requires you to set the group replication seeds, which, unfortunately, as far as I know, only work with specific IP addresses. So we have this template, and the shell script will replace the base IP with the IP created for the network and make a different one for each node.
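The template substitution just described can be sketched in a few lines. The placeholder token, file names, and base IP below are made up for illustration; the `group_replication` option names are the real ones:

```shell
# Sketch of filling a my.cnf template with the base IP of the docker
# network, as the talk's shell script does. _BASE_IP_ is an invented
# placeholder; the loose-group_replication_* option names are real.
BASE_IP="172.19.0"   # base chosen when creating the docker network

cat > gr_template.cnf <<'EOF'
[mysqld]
loose-group_replication_local_address = "_BASE_IP_.2:6606"
loose-group_replication_group_seeds   = "_BASE_IP_.2:6606,_BASE_IP_.3:6606,_BASE_IP_.4:6606"
EOF

# Replace the placeholder with the real base IP for this deployment
sed "s/_BASE_IP_/$BASE_IP/g" gr_template.cnf > node1.cnf
```

Each node would get its own copy with its own local address, while the seeds list stays the same across nodes.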
The deployment itself is similar to what we have done for simple replication: we use a volume for the data directory, a volume for the data that we are going to import, and a volume for the my.cnf. Let's see. We have deployed three nodes; let me show you one of the my.cnf files that was created. Using the information that we had in the script, we have these group seeds that now carry the specific IP addresses. This is a complication that should not be there, and I'm sure that the MySQL team will eventually make things easier, but this is the way it works right now. Once we deploy, we need to configure group replication; actually, we have to start it. So: GR start. OK, one minute has not passed yet; once we get to one minute after the deployment, we start. We have told every node to start group replication, we sleep 10 seconds, and now we see the status of every member: two are online... now the third one... and the third one is online.

How is this deployed? This is deployed the way that Oracle recommends: not all three nodes are masters; only one of them is a master and the others are slaves. There is another way of deploying this in which everybody is a master, but for this test we are going to use the recommended way. Now that we have deployed, we can test that replication runs. That was too fast. Basically, we check that we don't have a test schema, then on node 1 we create the test schema and a table in it, and then we check again on all the nodes: they all see the test database, and the table t1 exists everywhere. So using group replication we have created a table on one node and it has been replicated instantly to all the nodes. Now we remove everything and go back to the presentation. You can deploy several things in containers, but you have to work with what is available.
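To make the three-volume deployment concrete, here is a hypothetical sketch of what launching one node could look like. All host paths, container names, the network name, and the image tag are my own illustrations, not the talk's actual script, so treat this as a shape rather than a recipe (it also requires Docker to run):

```shell
# Hypothetical sketch of deploying one group-replication node with the
# three volumes mentioned above: data directory, data to import, my.cnf.
# Names, paths, and the image tag are illustrative only.
docker run -d --name mysqlgr1 --net group1 \
    -v /opt/gr/data/node1:/var/lib/mysql \
    -v /opt/gr/import:/docker-entrypoint-initdb.d \
    -v /opt/gr/conf/node1.cnf:/etc/my.cnf \
    -e MYSQL_ROOT_PASSWORD=secret \
    mysql/mysql-server:5.7.17
```

Repeating this per node, with a per-node my.cnf generated from the template, is what the script automates.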
The official MySQL image, meaning the one created by the Docker team, uses Debian; the MySQL image from Oracle uses Oracle Linux; the Percona Server image uses CentOS. And you can only run MySQL 5.5, 5.6, and 5.7, and lately also 8.0. If you want to run 5.0 or 5.1, or if you want to run MySQL 5.7 on CentOS, you can't; you just have to use the containers that are offered. However, there is another project on GitHub that you will find under my username: a collection of reduced MySQL images from 5.0 to 8.0, and customized images for Ubuntu, CentOS, and Debian. What does it mean? It means that if you want to run MySQL 5.6 on Ubuntu, you can: you just deploy two containers and mix the minimal MySQL 5.6 image with the Ubuntu one.

Let's go back to the volumes that we have seen before. Case number 2 for volumes was when the data exists in the image and is used by another container. How do I do this? I take a very small image: there is a Linux operating system called BusyBox that is something like 5 megabytes, so it's very thin, and I use this image only for the purpose of transporting the data, in this case the MySQL binaries. So I have this small container that contains, let's say, MySQL 5.1, and I just create a volume with it, and then I deploy a real container that uses the first container's data, just to have the MySQL binaries in the path. It works. It's a bit complicated; really, sometimes I surprise myself with my ideas, but it works. The point is that you have several things to choose from, à la carte MySQL, and you can say: I want this version of MySQL running on Debian, and boom, you can do that. It works this way: you first create a volume container holding the MySQL binaries, so you say docker create with datacharmer mysql-minimal 5.7, and then you deploy the real container, saying --volumes-from mybin, --name whatever, using one of the operating system images that I mentioned before: my-ubuntu, my-centos,
my-debian. These provide the operating system and all the libraries needed to run MySQL from version 5.0 to version 8.0. This is what you will find on Docker Hub for this alternative setup: mysql-minimal 8.0, 5.7, 5.6, 5.5, 5.1, and 5.0, plus three small operating system containers with the libraries optimized to run MySQL. I recently made a couple of additional builds to run MySQL with MySQL Sandbox. Who knows about MySQL Sandbox? One, okay. MySQL Sandbox is a tool that allows you to run several versions of MySQL on the same server. If you want to run it on your Linux machine or Mac, you need to deploy the software and get the MySQL binaries manually, which might be a bit cumbersome. Using these containers, instead, you can have everything in one place. So datacharmer mysql-sb-full contains all the binaries, that is, all the versions of MySQL in one place, plus MySQL Sandbox, and you can just run MySQL Sandbox to create every version of MySQL in one place. It costs you a bit, because it's 900 megabytes, but consider that these are squeezed MySQL binaries: for example, the original MySQL 8.0 is a 1.2 gigabyte download that expands to 3.5 gigabytes on your disk, while here MySQL 8.0 is basically 158 megabytes.

So let's see the last demo. We want to use datacharmer mysql-sb-full: docker run --name mybox... it takes a moment because it's big. Now I'm connected as a regular user, not root; the name of this user is msandbox. MySQL is not installed here, you see, but I have the binaries for all versions of MySQL, and I have MySQL Sandbox. I say make_sandbox 5.0.96, and then I have a sandbox for MySQL 5.0.96. Welcome to archaeology: MySQL 5.0.96, which was dismissed a long time ago but is still available. Or we can do something new: make_sandbox for MySQL 8.0. This takes a lot longer because it's bigger... come on... and welcome to the future: MySQL 8.0 at your disposal. With this you can do a lot of things, like running things in replication. MySQL
Sandbox would require a tutorial of its own; just so you know, you can do a lot of things with it. We already have two MySQL servers here, but in addition to that we can do a replication sandbox... which version of 5.5 do we have? 5.5.52. And now we can install replication, one master and two slaves, inside the container, and you can see that replication is working. Then we exit from here and, very crudely, remove it quickly. I think we can skip this. I'll just mention that you can make your own Docker images: the ones that I have used, the customized MySQL with MySQL Sandbox, are something that you can build with a very simple procedure, and again all the examples are on GitHub, so you can see them.

Let me go over what I want you to remember after this tutorial. Containers are a promising technology; actually, they are wonderful for testing, and we use them every day because they are much more convenient than virtual machines, especially if you have to run things in several versions of MySQL. The current trend is aiming at microservices, but we are not there yet. The most common offerings that you will find for containers are in the cloud, so be aware that they are easy to use but not exactly what you would like to have; it's best to have your own Linux machine and try containers there. The biggest problem we have now is that MySQL, and other databases too, are not well integrated with container technologies, so there is still a lot of work to do. MySQL on Docker is wonderful for development and testing: everything can live inside your small laptop. But if you want to use it in production, you need extra care. It could really be wonderful in production, but you need to have all the tools and to test everything before you do it. I have seen more horror stories than good stories about MySQL on Docker in production, so be careful.

Questions? I know that it's lunchtime and you want to know when we can eat. Question from the audience: this is the most comprehensive coverage of
Docker with applications that I've seen so far, and I've attended a lot of Docker presentations. One question that a lot of us will be asking: is your slide deck available somewhere? My what? Slide deck. Yes, I just put it in Dropbox, and there is a Dropbox folder for all the presentations at the conference... I received an email... they never distributed a slide deck. OK, I will post a link to the presentation on my Twitter account in a few minutes. And you said your demos are on your GitHub, is that correct? Well, you don't have to get them all at once, only the things that you need. You will get the presentation from Twitter; let me go back to the point where I talked about this: Twitter, @datacharmer, start following, and I will put the slides there soon. More questions? Are you excited? Terrified? Question from the audience: we are excited, especially about Docker, but what I get, and this is my perception as well, is what we already answered in the last slide: Docker is not ready for production, but it is for testing? For testing, I highly recommend it. I know many people in the MySQL team who use Docker on a daily basis for testing, just because it's very convenient. But the specific use is for testing. Question from the audience: for example, as you mentioned, there is a Docker image of about 5 MB available, so you can create your shared storage with the binaries for MySQL, and then run different Docker versions of MySQL for testing purposes; do you have a centralized repository for the MySQL binaries? No, I use Docker Hub, which is free, so I just put everything there. Question from the audience: if we have shared storage, can we use any shared storage with Docker instead of the local machine?
You can use your own machine, and Docker also offers several plugins to create volumes in different places: there are plugins for Google Cloud, for Microsoft's cloud, for a lot of things, I think, though I'm not sure. Just look for Docker volume plugins and you will see a long list of things you can use instead of your own machine to deploy volumes. I don't know if they work well with databases, so I'm not sure. Question from the audience: there is a concept in Oracle RAC, the Oracle product, where we have shared storage and multiple instances of the database running and fetching data from centralized storage. Can we do the same kind of thing with MySQL instances running in multiple Docker containers? Don't. It will fail. Maybe it won't fail immediately, but it will corrupt data. You can share data with MySQL, but it's not out of the box. There are solutions, for example DRBD, and this is the only one you can use to do something similar, but it only works with two nodes and it's not exactly what you want. I have seen places where they use a shared volume with several MySQL servers connected to it, but only one at a time should be active; otherwise you are corrupting data. Question from the audience: I have one more question. Regarding Uber, you mentioned something about them orchestrating their databases, JSON-based, write-only DBs? I call them write-only DBs because, instead of using MySQL in a traditional way, they are just writing JSON blobs. Question from the audience: how are they maintaining this, what is the orchestrator they are using? They built it themselves. There is an article; if you look for "Docker MySQL Uber" I think you will find it easily. It's a very interesting article, and very scary at the same time. Question from the audience: do they do a good job with the JSON-based approach? I read the article; it doesn't mean I am convinced. I only know that it works; it's not something that I want to do on my own. Question from the audience: they have split the writes and sharded the databases, so is it easy to alter
any big table? The biggest pain when doing anything in MySQL is altering a big table: either you need downtime, or you do it on a slave and promote it to master, and these kinds of things are terrible. I don't know the details, so I won't comment more on that. Any more questions? Any questions? I'm not sure when we'll next see each other face to face; maybe at UCP in Singapore. So, last chance. Thank you very much, and have a good meal.