This presentation is about building a scalable and flexible MySQL HA environment with Ansible, MHA and ProxySQL. ProxySQL is the new player; we like it a lot, and we'll see how it works together with the other tools. My name is Miklós Szél, I'm from Hungary, and I have a pretty long history with Linux and MySQL. I started as a backend developer, then moved to the operations side. I worked for Walt Disney, and after that I joined PalominoDB, where I consulted for some pretty big companies. I've been a freelance consultant since then, currently working for Edmodo. Here you can find my contact details in case you have questions.

So I built this thing called DAMP. That's a damp name, but still. Keep in mind this is a testing environment, so it's not the typical use case for Docker, but I wanted it to be as portable as possible and I didn't want to install anything on the test machine: you only need Docker and GNU Bash to run it, nothing more. It's built on Docker, Ansible, MHA and ProxySQL, it's licensed under GPLv3, and the important part is that the Ansible playbook is essentially an excerpt from our production setup, where we use it with MHA. So this is enterprise-grade code: you can take it, implement it in your own environment, and forget about the Docker part, of course.

Okay, let's move forward. As I already mentioned, I chose Docker because I've been using it for a while to test Ansible playbooks, and because it's portable. Ansible is my favorite choice when it comes to config management. It's simple and agentless: it just SSHes in and executes whatever commands are necessary, and if the change is already in place, it won't touch the server. It has pretty good templating, and, importantly for us, it has modules for MySQL and ProxySQL; thanks to Ben Mildren for the latter. MHA is an old but proven and stable open-source HA utility written in Perl, and it's easy to configure.
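As a rough sketch of what those Ansible modules look like in practice (the hostnames, hostgroup IDs and credentials below are made-up examples, not the ones from the playbook):

```yaml
# Hypothetical tasks using the MySQL and ProxySQL Ansible modules.
# All values here are illustrative examples.
- name: Register the master in the writer hostgroup of ProxySQL
  proxysql_backend_servers:
    login_user: admin
    login_password: admin
    login_port: 6032
    hostname: 10.0.0.2
    hostgroup_id: 1
    state: present
    load_to_runtime: true
    save_to_disk: true

- name: Ensure the application user exists on the MySQL backend
  mysql_user:
    name: app1
    password: app1
    host: "%"
    priv: "*.*:SELECT,INSERT,UPDATE,DELETE"
    state: present
```

Both modules are idempotent, which is what makes rerunning the playbook safe: if the server or user is already there, the task reports "ok" and nothing is touched.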
It supports automated, non-interactive and interactive master failover, but it doesn't deal with moving VIPs or any other aspect of the failover: it only changes the topology to the desired one. And ProxySQL (thanks to Colin for the great presentation earlier): it's a query routing, rewriting and killing tool based on rules and patterns. It's also a load balancer, so you can split reads and writes. It can be used as a query cache, it supports connection pooling, you can prioritize queries based on their importance, and it supports many more things. What it does not do is execute replication topology changes, which is exactly what MHA does, so the two make a really good pair.

This is how ProxySQL looks in a nutshell. There's the application server; ProxySQL has a configuration interface that can be accessed via the standard MySQL client, and it stores the configuration items in tables. By default you log in to the admin interface on port 6032, and the default port for the application traffic is 6033. Based on the routing rules and other settings, it routes your queries to the desired cluster. Instead of master and slave, it uses the concept of host groups: for example, the master down there is in host group three, and the slaves are in host group four. Once the topology changes, it automatically detects the change based on the read_only flag and moves the servers from one host group to the other. I'm going to show you how that works soon.

So this is what I built. There's a web server, and everything runs in Docker; again, not the typical use case for Docker, but it works fine. We have two clusters here, for example one with three slaves and one with two slaves, and there is the admin host. On the admin host, Ansible runs locally to configure ProxySQL and MHA based on the inventory file generated during cluster creation. I'm going to show this; it's not rocket science, but the automation works pretty well.
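The read_only-based movement is driven by ProxySQL's replication hostgroups table. A minimal sketch on the admin interface, assuming the writer/reader hostgroup pair 3 and 4 from the diagram (the comment text is just an example):

```sql
-- On the ProxySQL admin interface (port 6032):
-- pair hostgroup 3 (writer) with hostgroup 4 (reader). The monitor then
-- moves servers between the two based on their read_only flag.
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
VALUES (3, 4, 'example cluster');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

After this, a failover that flips read_only on the old master is enough for ProxySQL to reroute writes; no application change is needed.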
And the whole point is that you can rerun the playbook once you change the topology or add another cluster, and it will regenerate every relevant configuration item in ProxySQL and in MHA, so failover, query routing and query rewriting keep working. That was basically the presentation part; let's see the demo.

I'm going to create a cluster. Can you see the... let me increase it a little bit. Can you see it from the back? Yes? Okay. How is it now? Thanks. Okay, so let's create a cluster. It created a cluster called Zaphod with three machines, one master and two slaves. (Yes, I'm going to use Hitchhiker's Guide to the Galaxy references.) We wait for the Docker containers and the MySQL instances inside them to become available. Okay, now the application is set up. Let's create another cluster called Arthur. This one consists of one master and one slave, and the little script will also set up the replication.

So let's see. We have a hosts file here: the standard Ansible inventory file is generated. Once the second cluster is ready, there are inventory entries for the clusters Zaphod and Arthur. We are going to use host group one for Zaphod and host group three for Arthur this time.

Okay, let's start running the playbook. It uses two files as the source of truth: one is the inventory file, and the other is group_vars/all. There you can change the ports and IPs ProxySQL listens on, the default application user and password (the credentials it will set up in MySQL, along with the privileges for that MySQL user), and ProxySQL's global variables. Running the playbook applies these changes; it should be done in 60 to 90 seconds, depending on the speed of the internet connection.
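The generated inventory could be shaped roughly like this (group names follow the demo; the IPs and variable names are assumptions, since the real file is produced by the cluster-creation script):

```ini
# Hypothetical shape of the generated Ansible inventory.
[zaphod]
172.17.0.2 mysql_role=master hostgroup=1
172.17.0.3 mysql_role=slave  hostgroup=1
172.17.0.4 mysql_role=slave  hostgroup=1

[arthur]
172.17.0.5 mysql_role=master hostgroup=3
172.17.0.6 mysql_role=slave  hostgroup=3
```

Because the inventory is the source of truth, adding a cluster is just adding a group like this and rerunning the playbook.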
I'm running this on remote machines, and it installs the dependencies for MHA; that's the most time-consuming part. Once it's ready... I wrote a little script that makes it easier to demonstrate the use cases for ProxySQL and MHA. It also runs some sysbench; sysbench is the last part of this playbook. Once it's done, I'm going to call this little guy, the ProxySQL menu, and the list of features is pretty long. So let's see the servers configured in ProxySQL. Everything is done, so we have a fully set-up environment that can be used by an application: there are MySQL servers and clusters behind it, and ProxySQL is aware of everything. As you can see, there is host group one, which was for Zaphod, and the two slaves are in host group two, which is the host group for readers. Let me show you how this looks in ProxySQL: the writer host group for Zaphod is one, two is its reader host group, and it's three and four for Arthur.

It also generated the users; that's menu item three. The users show up twice because there is one entry for the frontend and one for the backend, but the point is that ProxySQL is set up to use credentials based on the host group: app1/app1 will be used for host group one, app3/app3 for host group three, and so on.

So let's see a failover with MHA. By the way, since the terminal is pretty big, every command we execute here is printed above the menu, so if you go through this menu and start playing with the tool, you can see the actual commands, figure the rest out, and customize it however you want. Menu item 20 is the interactive failover for MySQL. Yes, we want to run this. We will change the topology from the current master to the host ending in .3 as the new master. Yes, we want this, please. And the switch to the new master completed successfully. Let's check the servers.
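A sketch of the kind of MHA switchover command the menu wraps (the config path and new master address are examples, not the ones in the demo):

```shell
# Hypothetical interactive master switchover with MHA.
# --master_state=alive means an online switchover (old master is healthy);
# --orig_master_is_new_slave reattaches the old master as a slave.
masterha_master_switch --conf=/etc/mha/zaphod.cnf \
    --master_state=alive \
    --new_master_host=172.17.0.3 \
    --orig_master_is_new_slave
```

The non-interactive variant used later in the demo is the same command with confirmation prompts disabled.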
And we can see that host group one now contains the IP ending in .3, and the rest are in host group two. ProxySQL detected the change and moved the servers to the proper host groups. That's how easy it is to do a failover with MHA and ProxySQL: you don't have to do anything, and it's completely transparent from the application's perspective, which is a great thing.

Okay, now let's see some statistics. Oh, unfortunately... mm-hmm. Okay, the query digest and these host group statistics are still empty. Okay, let's run a benchmark. To run sysbench we have to prepare the database first, so I'm going to run the sysbench prepare, which is basically this command; it fills up the sbtest1 table. Once it's done, let's run a short sysbench read-only test; it's menu item 16. It started, no errors, and the TPS is pretty okay. So now we have generated some traffic and I can show you the statistics (it looks better if you decrease the size of your terminal). You can see that all the reads and writes went to host group one, because that's the writer host group, and by default the app1 user is routed to the master in host group one. You can also get statistics per host group: the total time spent on a given host group and the number of queries that ran there.

Now let's change the game a little. Let's run the read/write split, which is basically a simple command running REPLACE INTO. You can see that we add an active rule saying that every query matching this regex, so every SELECT, should go to the second host group. If you query the query rules table, this is what's inside the runtime configuration of ProxySQL. Now, if we rerun the test and check the statistics, we'll see that a lot of queries were rerouted to host group two. It's a little bit long for this presentation... yeah, it's done.
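A hedged version of what that REPLACE INTO can look like on the admin interface (the rule_id and regex are examples; the demo's exact rule may differ):

```sql
-- Basic read/write split: route every query whose digest starts with
-- SELECT to reader hostgroup 2; everything else keeps going to the
-- default (writer) hostgroup of the user.
REPLACE INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (10, 1, '^SELECT', 2, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

The LOAD ... TO RUNTIME step is what makes the rule live immediately; SAVE ... TO DISK persists it across ProxySQL restarts.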
So you can see from these lines that the servers in host group two also received reads. You can check the statistics about the host groups; let's see them over here as well. And there are also statistics per query digest: you can see these SELECTs ran on host group one during the previous run, but later on all the SELECTs were executed on the second host group. So that's how easy it is to set up a really basic read/write split in ProxySQL.

Okay, how much time do I have? Five minutes? That's nice. Even better. So let's have two consoles here, and we are going to see what happens if we are reading through ProxySQL while a failover happens in the meanwhile. In one of the windows I'm going to run a long sysbench, 60 seconds long, and in the other window I'm going to execute a non-interactive failover; it's the same thing, it just doesn't stop to ask you to confirm that you want to change the topology. No worries. Okay, so we switched the master successfully in this window, and in the other window you can see the point where the failover happened: way under... around one second. There were some reconnects, but ProxySQL can resend the SELECT queries if the underlying database dies, which is really cool. Basically, this is what happens when you experience a failover while reading through ProxySQL. And, well, I also added some minor things, like creating the world database.

Since we have some minutes left, let me show you how to add another cluster to this setup. Let's stick to the naming scheme and add a cluster with two machines called Trillian. And it's done. Let's check the hosts file: here you can see a third cluster has appeared, with host group 5, so the user will be app5/app5. The credentials on the MySQL servers themselves are configured here, so don't mix the two.
These are the credentials on the MySQL servers, and ProxySQL makes them transparent, so you can use different kinds of credentials on each side. So, now that we have this new inventory file, we can just rerun the playbook, which is basically this command, and wait about 60 seconds. We don't have to reinstall everything: if ProxySQL is already installed, Ansible won't install it again, and it will skip most of the tasks, executing only the parts it reports as changed. Adding the new servers is exactly the change we want Ansible to make. Yeah, we are now generating the config file. That's it. And if you check the ProxySQL menu, the second item is show servers: the new server has indeed appeared, its user has appeared, it has its own host group, and everything else is unchanged. There is also a log section in this little script that can show you the most important log files: the read_only log, the ping log and the connect log. Here you can see how ProxySQL maps the servers in order to move them to the proper host group. Basically, this is what I wanted to show you. Questions?

Yes: ProxySQL by default also adds the writer to the readers, because the master is a source of reads too. And you can set up weights for the servers, so if the master outweighs the slaves, queries are more likely to be sent there. That is the behavior you want when you need your data back immediately, with no chance of hitting replication lag. By the way, ProxySQL checks the replication lag, and you can set up a threshold so it won't send reads to a slave whose lag is higher than that threshold. Sorry, by the way, Krzysztof asked why the writer is in the reader host group as well; that was the question. So, by default the ProxySQL monitor also puts the writer instance into the reader host group. Anybody else? Thank you.
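The weight and replication-lag settings mentioned in the Q&A could be expressed like this on the admin interface (hostnames, weights and the 10-second threshold are example values):

```sql
-- Prefer the master for reads by giving it a higher weight in the
-- reader hostgroup, and stop routing reads to any slave in that
-- hostgroup whose replication lag exceeds 10 seconds.
UPDATE mysql_servers SET weight = 1000
WHERE hostgroup_id = 2 AND hostname = '172.17.0.2';
UPDATE mysql_servers SET max_replication_lag = 10
WHERE hostgroup_id = 2;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

With max_replication_lag set, ProxySQL's monitor temporarily shuns a lagging slave and brings it back automatically once it catches up.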