Hi, everyone. My name is Alexey Kopytov, and I will be talking about sysbench, more specifically about some new, recently developed features that you can use to do some pretty cool things with sysbench. I've got some introductory slides here, but I'm curious: how many people here have never used sysbench and don't know what it is? Please raise your hand. Okay, there are some people. Thank you. So the next few slides are for you.

Sysbench started as an internal project in the High Performance Group at MySQL AB. The very first version was written by Peter Zaitsev, and I took over the development as soon as I joined the team. It contained an SQL-level benchmark, which we called OLTP for lack of a better name, but there were also a number of micro-benchmarks: a file-system-level benchmark, memory, CPU, and scheduler benchmarks. I will not be covering them in this talk. The tool proved to be very useful in identifying performance problems, troubleshooting customer issues, or just providing repeatable test cases for other people to investigate further. We had some external users and some external feature requests, but most of the feature requests were internal, and most of them came from Peter. Like: can you add a new option to do the same workload, but with prepared statements? Or a new option to do the same workload with stored procedures? Or can you add a new option to use a slightly different table schema? And so on and so forth. It soon became obvious that if things kept going this way, sysbench would become unmaintainable pretty soon, simply because it is impossible to have a single benchmark application covering all possible workloads and benchmark scenarios that people can come up with. In fact, the sysbench code was already barely maintainable by 2006 or so. And the solution to this problem that I came up with was kind of obvious: well, let's make it scriptable, right?
Let users define their workloads with a high-level API in a high-level language, and let sysbench do all the heavy lifting: managing threads, collecting and aggregating statistics, generating random numbers, providing the database abstraction layer, because sysbench actually supports multiple database backends. And these things are not as easy as they may sound, if you want them to be fast and scalable. So, as a result, the OLTP benchmarks were rewritten as Lua scripts in sysbench 0.5.

Why did I choose Lua? Lua is incredibly fast for a dynamic language; some people call it the speed queen of dynamic languages. It's also designed to be embedded into C and C++ applications: there's a very simple, straightforward protocol that can be used to call Lua functions from C code and vice versa. It's simple and elegant, but powerful at the same time: you can do pretty complex things with just a few very simple underlying concepts. If you don't know Lua, I encourage you to learn it, even if you're not planning to use sysbench; it's a beautiful language. If you need a quick start, I can recommend the link titled "Lua in 15 Minutes", which explains all the basic concepts in a very condensed form.

So, how did this Lua scripting thing work in sysbench? There were a number of predefined hooks, called by sysbench from C code, that you could implement in your Lua scripts. And there was an API for SQL queries, random numbers and string generation, written in C, that you could use in your Lua scripts to call back into sysbench. Here's an example. In this example, we define three hooks. The first one is prepare. This hook is called when you invoke sysbench with the prepare command; in terms of database benchmarks, in the prepare hook you will usually want to generate the dataset for your benchmark: create tables and populate them with data. The next hook is called event.
It's the most important one, and it's the only mandatory hook that sysbench expects you to implement. It basically defines your workload: it tells sysbench what it should be executing in a loop in each thread. And the next hook is cleanup, which is what the name says: it cleans up after a benchmark and returns the database to its original state.

And here's an example. If you call sysbench with the prepare command, then sysbench will execute the prepare hook. If you call it with some command-line options and the run command, then sysbench will create the specified number of threads, and each thread will be executing the event hook in a loop. Simple, right?

Even though that API was very simple and quite limited, I know that it worked well for a wide range of use cases. I know that it was used by many individual users and companies to benchmark MySQL, compare patches, compare versions, compare different branches, and also for internal QA, where people actually created their own Lua scripts to simulate their workload rather than using the standard OLTP benchmarks.

I stopped active development after moving to the MySQL development department and later to Percona, because I was busy with other tasks that did not involve using sysbench. Starting from about 2012, I started getting reports that sysbench itself had become a scalability bottleneck when benchmarking on some high-end hardware, like machines with hundreds of cores, that kind of stuff. Finding time to look into those issues was difficult, and on top of that, I didn't even have access to the hardware to look into them. So even though I was aware of those issues, they stayed unresolved for quite some time. Things changed last year, when I started working again on projects that required using sysbench on some pretty powerful hardware. And that's when I hit all those limitations myself: scalability limitations, but also some limitations in functionality.
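To make the hook mechanism described above concrete, here is a minimal sketch of a legacy-style (0.5-era) script. It only runs inside sysbench, which provides the built-in `db_connect`/`db_query`/`sb_rand` helpers; the table name and row count are made up for this example.

```lua
-- Sketch of a legacy-style sysbench 0.5 benchmark script.
-- db_connect/db_query and sb_rand are part of the old built-in API;
-- the table name "t" and the row count are made up for this example.

function prepare()
   db_connect()
   db_query("CREATE TABLE t (id INT PRIMARY KEY, v INT)")
   for i = 1, 10000 do
      db_query(string.format("INSERT INTO t VALUES (%d, %d)",
                             i, sb_rand(1, 10000)))
   end
end

-- The only mandatory hook: executed in a loop by every worker thread
function event(thread_id)
   db_query("SELECT v FROM t WHERE id = " .. sb_rand(1, 10000))
end

-- Return the database to its original state
function cleanup()
   db_connect()
   db_query("DROP TABLE t")
end
```

With the 0.5 syntax you would then run something like `sysbench --test=script.lua prepare` followed by `sysbench --test=script.lua run`.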
After analyzing them, I realized that a major refactoring effort was required to address all those issues, and I understood it would be a major project. I announced the start of that project in my blog about a year ago, but I'm not very good at blogging, so I have failed to report progress on that project as originally planned. However, I'd like to make an announcement. People usually make announcements at conferences, and I've been working hard recently to make it happen. So without further ado, the sysbench management team is happy to announce for immediate availability the release of sysbench 1.0. Thank you.

It is, in fact, the first sysbench release since 0.4.12, which was released in, I don't even remember, probably 2006. It also closes issue number one, "Release sysbench", because, yes, that was the first issue reported when I migrated sysbench to GitHub. The reason is that sysbench 0.5 has never been released in a formal way. It has always existed as just a source code branch, which worked for me and apparently for other people, but distribution packagers were unhappy about it. They needed a formal release with a tag and a changelog to package it and make it available in distributions. That's why most Linux distributions offered a very old, outdated version of sysbench. But now I think it will be just a matter of time for them to pick up the new release.

So the new release contains a bunch of performance improvements and some shiny new features, and we'll go over them later. But let's start with the performance improvements. The first question I had to ask myself when I started working on optimizing sysbench was how to measure the benchmark overhead; in other words, how to benchmark a benchmark utility. There may be many ways to do it. Some time ago I implemented an option called mysql-dry-run.
What it does is basically tell sysbench to do all the work required to generate queries, but not actually send them to the server: exclude the server, exclude the client library, just let sysbench have all the CPU resources it wants, and see how many transactions per second or queries per second it can generate. So this chart shows improvements in single-thread performance. As you can see, sysbench 0.4 generates about 2 million point-select transactions per second in this dry-run mode. Sysbench 0.5, because of the Lua overhead, generates about half that TPS. And sysbench 1.0 generates more than 6 million dry-run transactions per second.

Some changes in 1.0 that made these optimizations possible include migrating from plain, interpreted Lua to LuaJIT, a tracing just-in-time compiler for Lua. There are lots of low-level details that needed handling to integrate LuaJIT properly and use it efficiently, but I don't have time to go over them right now. Basically, LuaJIT provides faster Lua code execution and faster calls from Lua code to C, via the mechanism called the foreign function interface (FFI) provided by LuaJIT. Sysbench 1.0 also has some optimizations in the main event generation loop. It has a faster random number generator: it now uses the generator called xoroshiro128+, which is incredibly fast but also has good statistical properties; it's the fastest random number generator passing the BigCrush test suite with no systematic failures. As a result, sysbench 1.0 is more than three times faster than the old 0.4, which was written purely in C, and more than six times faster than sysbench 0.5.

What about scalability? This chart shows the scalability improvements, and they are even more dramatic than the single-thread improvements. The red line, which is sysbench 0.4, basically has negative scalability.
The best throughput is achieved with a single thread, and then adding more threads just makes things worse. Sysbench 0.5 is a little better, but it still reaches its maximum at just four threads. And the blue line, which is sysbench 1.0: as Dimitri Kravtchuk says, no comments.

Some changes in 1.0 that made these improvements possible include the inclusion of the Concurrency Kit library, which is a C99 library implementing atomics for various CPU architectures, and also some lock-free and wait-free data structures, and I used some of them in the sysbench code. I also contributed some patches to Concurrency Kit to improve atomics support on the ARM64 architecture. There are also no mutexes at all, at least on critical code paths. And there are no shared counters: all statistics are now calculated per thread and only aggregated when you need them, basically either for intermediate reports during the benchmark execution or for the cumulative report at the end of the benchmark.

There are also a number of changes in the command-line syntax; it has been simplified to improve usability. In sysbench 0.5 there was this weird syntax where you had to specify the path to your Lua script with the --test option; it was the only required option. With sysbench 1.0 you don't have to use it: you can just specify the path without any options. Or you can now even do this: you can use the standard Linux and Unix mechanism of hash-bangs. Just add a sysbench hash-bang as the first line of your script, make it executable, and then you don't even have to type "sysbench" before your script name. Which is a minor thing, of course, but I think it makes sysbench feel a lot more like a standard Unix or Linux utility.

Yes, command-line options. One serious problem with sysbench 0.5 was that if your Lua script required some command-line options, there was no way to declare the supported options.
Instead, sysbench just converted all command-line options into global Lua variables and made them available to Lua scripts. That was a nice hack, but there was one significant downside: if you made a typo on your command line, there was no way to validate it, because sysbench didn't know which command-line options were supported by the script. You also had to handle default values manually in your scripts, like in this example.

In sysbench 1.0, scripts can declare their options, so sysbench can validate them. Like in this example: you basically define a table in your Lua script containing the names of the options, their descriptions, and their default values. Now, if you make a typo when passing command-line arguments to your script, sysbench will actually fail with an error complaining about the option being invalid. There's now also the help command, obviously, because sysbench can now print descriptions of the available options. And of course, all bundled Lua scripts declare their options and respond to the help command.

It's now also possible to use the C library, thanks to the migration to LuaJIT. Plain Lua tries to be as portable as possible, and that's why its standard library is quite limited, which limits the number of things you can do with it. For example, if you wanted to introduce delays in your scripts, you had to resort to hacks like this: calling the sleep executable from the operating system and passing the number of seconds. Some operating systems support a fractional number of seconds, some do not. That was ugly. LuaJIT's mechanism called the foreign function interface allows you to call arbitrary functions from the C library. This is an example showing how to use the usleep() call from the C library.
You just need to declare it; LuaJIT knows what arguments are accepted by the function and the type of the return value, and then you can just call that C library function directly from your Lua code.

The SQL API has also been redesigned. It now has an object-oriented look and feel. For example, to create a connection, you first need to create a driver object, and the type of driver being used depends on a command-line argument again. Then you can use the driver object to create a connection, or connections, and then you can use the connection to execute queries. Here's the simplest example showing the new API. And by the way, it also uses the LuaJIT FFI under the hood for better performance.

It's now possible to create multiple connections per thread with the new API. With the old legacy SQL API, sysbench pre-created connections automatically, and it was only possible to have a single connection per thread. In some cases that was a problem; I think it was a feature request from René: he wanted to test ProxySQL with some crazy number of connections, like a million connections, and creating a separate thread per connection for that kind of test was basically impossible. So now it's possible to create multiple connections and use them in the same thread.

Result sets. This has been a long-requested feature. Sysbench 0.5 discarded all result sets automatically; there was no way to read and process results returned by queries. And that actually worked very well, because usually in benchmarks you don't need the results, you just need to execute queries as fast as possible. But actually processing results is required by some complex benchmark scenarios, for example LinkBench. Some people have tried to implement LinkBench, and that was the major obstacle to doing it. So now you can read results and process them in your Lua scripts.
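Pulling the pieces above together, here is a sketch of what a sysbench 1.0 script might look like, combining a hash-bang, declared command-line options, the FFI-based usleep() call, and the new object-oriented SQL API. It runs inside sysbench (which provides the global `sysbench` table); the option names and the table name are made up for this example.

```lua
#!/usr/bin/env sysbench
-- Sketch of a sysbench 1.0 script: declared options, FFI usleep(),
-- and the new object-oriented SQL API. Option names and the table
-- name "t" are invented for this example.

local ffi = require("ffi")
ffi.cdef("int usleep(unsigned int usec);")  -- declare the libc prototype once

-- Declared options: validated by sysbench, printed by the 'help'
-- command, and exposed as sysbench.opt.<name>
sysbench.cmdline.options = {
   table_size = {"Number of rows in the test table", 10000},
   delay_us   = {"Artificial per-query delay, microseconds", 0}
}

function thread_init()
   drv = sysbench.sql.driver()  -- chosen via --db-driver on the command line
   con = drv:connect()          -- each thread may open as many connections as it wants
end

-- The workload: one point select per event, with an optional delay
function event()
   local id = sysbench.rand.default(1, sysbench.opt.table_size)
   con:query("SELECT v FROM t WHERE id = " .. id)
   if sysbench.opt.delay_us > 0 then
      ffi.C.usleep(sysbench.opt.delay_us)  -- call libc directly, no shelling out
   end
end

function thread_done()
   con:disconnect()
end
```

After `chmod +x` you could invoke it directly, e.g. `./script.lua --table-size=100000 run`, with no `sysbench` prefix and no `--test` option.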
Histograms. Sysbench has always had an internal data structure called histograms, used to provide percentile statistics for latency, but it had never been exposed to users. Sysbench 1.0 now has an option called histogram. If you specify it, you get these nice dtrace-style latency histograms, which provide much better visibility into the latency distribution than just percentile values. And you get it for free: because sysbench maintains this structure anyway to provide percentile statistics, there's no extra runtime overhead to print these kinds of things.

Error hooks. One problem with previous sysbench versions was that sometimes you need special handling for certain SQL errors. For example, if you are benchmarking a cluster, a Galera cluster or a group replication cluster, you need special handling for situations when one of the cluster nodes goes out of sync. Instead of terminating the benchmark, you may want to wait for that node to become available again, or you may want to re-route queries to another node. There was a half-baked solution in sysbench 0.5 called mysql-ignore-errors, but it didn't work in some cases. So there's now a much more flexible way to handle errors in sysbench 1.0. Here's an example: basically, you can define your own error handler hooks and decide whether a specific error is fatal or ignorable, and you can also provide special actions for certain kinds of errors.

You can now also define custom commands. In previous sysbench versions, there was a predefined set of commands: basically prepare, run, cleanup, and help. In the new version, you can define your own commands. For example, in my personal benchmark scripts, I often wanted to force sysbench to load a specific table, or a specific set of tables, into the database cache before starting the benchmark. Previously, I had to do that in my own scripts.
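The two mechanisms just described, error hooks and custom commands, might look roughly like this in a 1.0 script. This is a sketch that runs inside sysbench; the hook name, the `sql_errno` field, the `PARALLEL_COMMAND` flag and the table name reflect my reading of the 1.0 sources, so verify them against the bundled oltp_*.lua scripts before relying on them.

```lua
-- Sketch of an error hook plus a custom command (sysbench 1.0 API,
-- names from my reading of the sources -- verify before use).

-- Decide per SQL error whether the benchmark should ignore it
-- (and restart the current event) or treat it as fatal.
function sysbench.hooks.sql_error_ignorable(err)
   if err.sql_errno == 1213 then -- ER_LOCK_DEADLOCK: common under load, just retry
      return true
   end
   return false -- anything else terminates the benchmark
end

-- A custom 'prewarm' command: pull the test table into the cache.
-- The table name "t" is made up for this example.
local function cmd_prewarm()
   local drv = sysbench.sql.driver()
   local con = drv:connect()
   con:query("SELECT AVG(id) FROM t") -- forces a scan through the buffer pool
end

-- Register the command; the PARALLEL_COMMAND flag asks sysbench to
-- execute it in as many threads as requested with --threads.
sysbench.cmdline.commands = {
   prewarm = {cmd_prewarm, sysbench.cmdline.PARALLEL_COMMAND}
}
```

You would then invoke it like any built-in command, e.g. `./script.lua prewarm`.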
Well, not anymore, because the bundled OLTP scripts now have a custom command called prewarm that does exactly that.

Parallel commands. Another long-requested feature: some people wanted to execute commands in a multi-threaded context. The most useful example is the prepare command, when you want to load the original dataset in parallel. In previous sysbench versions, all commands except run were executed in a single-threaded context. Now you can declare your commands as supporting multi-threaded execution; in fact, the bundled OLTP scripts are capable of loading the original dataset in multiple threads.

Another long-requested feature is that you can define your own report hooks, which are invoked by sysbench whenever it wants to print some statistics. This is required to print statistics in some machine-readable format instead of the default human-readable one. Here's an example showing CSV reports. Basically, this hook receives a structure containing all the statistics accumulated and aggregated by sysbench, the same ones it uses internally to print statistics. So you can replace the default human-readable format with CSV or JSON, like in this example, using that data structure. But of course, it's not limited to just changing the report format. You can do a lot of interesting things, like storing the results in a time-series database like Prometheus or Graphite, or getting custom performance metrics from the operating system or from MySQL itself. In this example, a little artificial, I'm querying Performance Schema for the top wait event and printing the average latency for that event along with the standard metrics provided by sysbench.

What about old scripts? I know that the old sysbench version has been around for a very long time, and some people have accumulated a lot of scripts to do their internal QA.
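Going back to the report hooks for a moment: a CSV hook along the lines described above might be sketched like this. It runs inside sysbench; the field names on `stat` are assumptions from my reading of the 1.0 API (sysbench also ships ready-made report helpers), so check them against the documentation for your version.

```lua
-- Sketch of a report hook replacing the default human-readable
-- intermediate reports with CSV output. Field names on 'stat' are
-- assumptions from my reading of the 1.0 API -- verify before use.

function sysbench.hooks.report_intermediate(stat)
   print(string.format("%.0f,%u,%.2f,%u",
                       stat.time_interval,               -- seconds in this interval
                       stat.threads,                     -- active worker threads
                       stat.events / stat.time_interval, -- transactions per second
                       stat.errors))                     -- errors in this interval
end
```

The same idea extends to JSON output, or to shipping the numbers to a time-series database instead of printing them.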
I didn't want to break all those old legacy-API scripts in an instant with the new version, so I went the extra mile to make sure that old scripts using the legacy API still work, and there are some regression tests to verify that.

I did have to drop support for certain database drivers, mainly the Oracle database, Drizzle, and AttachSQL. I don't know if anybody here cares about those, but the code is still there; I'm not sure it even compiles. If you care about any of these things, patches are welcome. Windows support has also been dropped, for a number of reasons, and after long consideration, because, believe it or not, the most frequently asked question about sysbench is: where can I download a Windows binary? I don't know why, but supporting Windows makes moving forward very hard.

So, the future. So far, this is the only documentation for all the new features, but I'm serious about changing that, because I don't want sysbench 1.0 to repeat the fate of previous versions, where a lot of useful functionality was available but people didn't know about it, so they were shy to use it. I'd also like to create packages for sysbench. Packaging is hard, especially if you are a one-man project and have to package for multiple distributions, but I'm going to look at some public cloud services providing packaging infrastructure for open-source projects. Implementing LinkBench in sysbench Lua is also possible, and I'm not the first one to come up with this idea. A MongoDB driver is also on my to-do list, as well as a MySQL X protocol driver, which is the new kid on the block; I think it's only a matter of time before people start requesting that functionality.

So, to sum up, sysbench 1.0 is probably the most significant milestone so far in sysbench development history. I hope it will be as useful for you as it already is for me. Just in case: here is the link to the GitHub project, and these slides can be downloaded from my website.
Thank you for listening. Does anybody have any questions? No questions? Thank you.

So, I'm Daniël van Eeden, and I'm here to give the talk about gh-ost. Many of you might have expected Shlomi Noach here, but unfortunately he couldn't make it, so you're stuck with me. Shlomi is the main author of gh-ost, but I and some of my colleagues have also worked on some of the gh-ost source code with Shlomi, and we've discussed ideas about gh-ost. I'm working for Booking.com. What we do is sell hotel rooms, but we also run many, many MySQL servers, and we do many schema migrations, like every day.

So, maybe to start off with a question: who of you is using a schema migration tool, other than just running ALTER TABLE? Okay, I would have expected a few more people raising their hands. Who of you is using Percona's online schema change? Who of you is using another tool? Only a few hands.

So, gh-ost is the online schema migration tool from GitHub. To start off, there are a few limitations, things which are currently not working. Of course, these might change in the future, because there's ongoing work; for some of these things there is a pull request, but it's not merged yet. Foreign keys don't work currently; for us it's not too much of an issue, but for other people it might be. Triggers don't really work. It requires row-based replication for now, and it requires a full RBR image, which should not be too much of an issue for most people. Generated columns in 5.7 are not yet supported, and the same goes for the similar feature in MariaDB. Multi-source replication is not supported.
Well, it probably works somewhat, but it's not really tested, so probably don't use gh-ost with multi-source. And if you use an active-active multi-master setup, then gh-ost is not the tool you want to use; it might be possible to add that later.

So, gh-ost has much more code and is much more involved with what's going on than most of the other tools, like pt-online-schema-change. There is more room for error there, because it does a lot more than just monitoring stuff, basically. There's a GitHub page where you can see all the open issues and all the limitations, because, well, things might change, and have changed a bit already.

Schema migration is a well-known problem, because if you just run an ALTER TABLE, it might lock things on your master, and later on, when it replicates, it might block your replication from continuing, which causes your slave to lag behind. Then Percona made the online schema change tool, and there were a few other similar tools to do schema changes, sometimes with a multi-master setup and then switching replication. There were many tricks, and pt-online-schema-change worked really well for most people for a long time. So it is a tool which has served everyone well, but there are also other options. Facebook also made a tool; I think it's written in PHP, which limits its use a bit, so I don't really think many people are using it outside of Facebook, but they do it in a different way, which is interesting to see.

Of course, these slides were originally made by Shlomi, so there might be some GitHub-related stuff in there about internal operations at GitHub; I will skip over most of it. I might add some information about how we do schema migrations at Booking.com and how we are using gh-ost there.

So, this is basically how pt-online-schema-change works. You create a new table.
You alter the new table, and then, to get all the data changes applied to your new table, you put three triggers in place: a trigger for inserts, one for deletes and one for updates. So all the things happening on the original table will also happen on your new table. And then, of course, it also needs to copy rows from the old table to the new table, because not all rows might be touched while the online schema change is running.

This is what Facebook is doing with the Facebook online schema change tool. All the inserts, deletes and updates go to a changelog table; it's like a three-step approach, because the changes are later applied to the new table from there, which does help a bit, but it also has its limitations.

So, one of the things gh-ost does is that it's not using triggers anymore. There have been a few issues with using triggers for online schema changes. The first one is that triggers, like stored procedures, are interpreted, not compiled, so each trigger execution adds a lot of overhead on the master. And of course there are a lot of issues with locks, because there are locks on the old table and on the new table, and also at the eventual cut-over there are some locks, and the more active your server gets, the higher the risk of these locks actually blocking your online schema change, and also blocking inserts and updates on the original table. Because with triggers, anything which happens on the table also has to happen on the new table; if anything is blocking the new table, then the insert cannot happen on the new table, which means that your application gets an error.

The other issue is that it's not possible to suspend the triggers. Say you're running your online schema change, and suddenly there's a big peak in traffic and you just want to pause the online schema change for a bit. Well, that's not possible.
You can stop the copying of rows from the old table to the new table, but the triggers still have to be in place, and they still add overhead. So that's a bit of an issue, and I don't think I've ever seen anyone running multiple online schema changes in parallel with pt-online-schema-change in production and actually being happy with it; I also wouldn't recommend doing that.

And there's the issue of testing. With pt-online-schema-change, it's not really easy to test whether a code change in the tool does the right thing, because, well, it has to put those triggers in place and remove them at the right moment. If you're testing your code and, for example, you remove the target table before you've actually cleaned up your triggers, then all the triggers will fail, which will cause a production issue, because you cannot insert into the original table anymore. So it's difficult to properly test all the online schema change tooling.

So, of course, gh-ost does everything in a different way. It reads the binary logs, which have to be in row-based format, parses the information in the binary log, and applies it to the new table on the master if it's running in production mode. And of course, it also still has to copy rows from the old table to the new table. So basically, this is how it looks: inserts, deletes and updates on the original table go into the binary log; they are read by gh-ost, which writes them to the new table. And if anything goes wrong with reading from the binary log and inserting into the new table, the inserts on the original table will still go on, so it's much more reliable in that way. Does everyone understand how this is working? Any questions? Feel free to stop me if you have any questions later on.

So, of course, it's best to use a replica, or a slave, however you want to call it.
If you're reading from a replica, there's even less load on the master, and the only thing the master actually sees is normal inserts into a table, which is, well, normal operation; nothing special going on on the master. And we can read the binary logs from whatever is providing the binary logs, so that really helps. Gh-ost controls the whole data flow, because gh-ost is reading the data from the binary log and applying it; with pt-online-schema-change the triggers are doing the work, but here we actually read the data and write the data again. And because we are actually touching all the data, we can do some interesting things with that. So again: writing to the master, reading the binary log from a replica, and then eventually we switch the two tables.

There's also an interesting method for how the cut-over is done with gh-ost. It's quite difficult to lock two tables and rename them atomically, but there is a trick gh-ost uses which ensures that when you're switching those two tables, there is always a table there; there's no gap in which no table exists in that place. So all the writes which go on will still find a table and work normally.

It's best to use a replica, but it's also possible to connect to a master. And the third option is to test on a replica, which is a really unique feature of gh-ost, because it allows you to run gh-ost only on a slave, see what it does, see what the end result is, compare the old table and the new table, and see if there are any differences. At Booking.com, with the first gh-ost version we tried, we were actually testing on a slave, and we always test on a slave before we run in production.
So we were testing on a slave, and we noticed that some of the timestamp columns were off, by like one hour, sometimes two hours. Well, one hour or two hours: that's daylight saving time, or not daylight saving time, in Amsterdam. So that was obviously a time zone issue. Gh-ost is used at GitHub, and gh-ost is used at Booking.com, but of course the environments are different: we might use different time zones, different schema layouts, different server versions. So it's really good to do some testing, because your environment might be different. This really gave us the option to test, see what was happening to the data, work on the code, and fix the bug; the bug has been fixed for quite some time already. Then, when we were confident that the data was correct after running the migration, we could just run it in production. We now often run gh-ost in production at Booking.com, and I know that at GitHub they are running multiple gh-ost migrations a day.

So let me just, yeah, this is the testing. Of course you can do some more testing, because you can automate a really simple no-op migration and test every day whether your data still looks the same before and after an online schema change. Because if you're adding a new column, like a JSON column, well, you want to know whether it still works, or whether you need to work on the code.

There's a Unix socket, and gh-ost allows you to connect on that Unix socket, ask about the status, change the throttling, change which slaves are being checked for throttling, and change more parameters. Usually, when you're running gh-ost, it will add a delay of less than one second, because it's using a heartbeat mechanism, so it's not really adding any replication delay, which is really good. It's also possible to delay the eventual cut-over: you start your gh-ost online schema change, and then, when it's ready for the cut-over, you make sure that you're in the office, that you've had your coffee, and then you start the cut-over, and you can monitor it and do something if something goes
wrong at booking we never saw any issue with the cutover but you might want to be in the office anyways because a client might do something strange or if you're adding an extra column the client might start to do something differently of course Ghost allows you to run a number of different hooks so at GitHub they're using chat ops so it's possible to integrate Ghost with things like chat systems to give you a message when it's ready for cutover for example so one of the last things I want to mention is that now the row copy is done with just a insert into a select from but eventually there are ideas about decoupling that then you can actually read from a table and write to another table and then there are other things you can do eventually like a live table migration so you're reading from the binary logs you're reading from the original rows from the server you're writing it to a completely unrelated server which allows you to a live migration of a table which is pretty nifty there are some other things Slomi's working on and Slomi's colleagues and we are also working on one of those things is a resurrection to actually restart an online schema change if it was stopped and of course it's open source and of course it's on github but that's well I guess you guessed it already some example well I think that's it so please have a look at Ghost give it a test so it's really easy to just test on slave see what it does no production impact whatsoever and if you're confident you can run it in production thank you any questions? 
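As a concrete illustration of the test run, the postponed cut-over, and the Unix socket interaction described above, here is a rough sketch of what a session might look like. The host, database, table names, and file paths are all made up for the example, and flags may vary between gh-ost versions, so treat this as an outline rather than a recipe:

```shell
# Hypothetical test migration on a replica (all names and paths invented).
# --test-on-replica runs the whole migration against a slave with no
# production impact; --postpone-cut-over-flag-file delays the final swap
# until we remove the flag file ourselves.
gh-ost \
  --host=replica.example.com \
  --database=mydb \
  --table=mytable \
  --alter="ENGINE=InnoDB" \
  --test-on-replica \
  --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
  --serve-socket-file=/tmp/ghost.mydb.mytable.sock \
  --execute

# While it runs, talk to it over the Unix socket:
echo status       | nc -U /tmp/ghost.mydb.mytable.sock   # progress report
echo throttle     | nc -U /tmp/ghost.mydb.mytable.sock   # pause the row copy
echo no-throttle  | nc -U /tmp/ghost.mydb.mytable.sock   # resume it
echo "chunk-size=500" | nc -U /tmp/ghost.mydb.mytable.sock  # tune on the fly

# Once you are in the office and have had your coffee, allow the cut-over:
rm /tmp/ghost.postpone.flag
```

The point of the flag file is exactly what the talk describes: the migration can sit fully caught up, waiting, until a human decides the cut-over moment.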
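The hooks mentioned above are external executables that gh-ost invokes at well-known points, passing migration details through `GH_OST_*` environment variables. Below is a minimal sketch of the notification logic for the "ready for cut-over" moment; in real use you would save something like the function body as an executable file in the directory given to `--hooks-path`, named after the hook point, and have it post to your chat system instead of printing:

```shell
# Minimal sketch of a "ready for cut-over" notification, as a gh-ost hook
# might implement it. The GH_OST_* variables are normally exported by
# gh-ost itself; here we set them by hand to demonstrate the output.
notify_ready_for_cutover() {
    echo "gh-ost: ${GH_OST_DATABASE_NAME}.${GH_OST_TABLE_NAME} is ready for cut-over"
}

# Demo values standing in for what gh-ost would export:
GH_OST_DATABASE_NAME=mydb
GH_OST_TABLE_NAME=mytable
notify_ready_for_cutover
```

This is the ChatOps pattern from the talk: the tool announces state changes, and humans act on them from chat.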
I think that should be possible, not only for Ghost but probably also for pt-online-schema-change. So, the question is about rolling back: after the cut-over it's really difficult to do a rollback. Both Ghost and pt-online-schema-change will just keep the old table around if you use the right options; Ghost does that by default, pt-online-schema-change needs an option for it. But for Ghost it might be possible to just reverse the operation, inserting into the old table, so that would for sure be possible, I think, with both of those tools, but someone has to do it.

Yes, so all the details about how that works are in the documentation on GitHub. Basically, what happens is that there's a RENAME TABLE, and Ghost blocks it by holding a lock, and when it's done applying all the binary logs it will remove the phantom table and do some other tricks to actually make sure that it's atomic. So the first thing you do is a LOCK TABLES, and then you have two tables under a lock, and then you want to do the RENAME TABLE once everything is right. But another thread is still applying changes, so there are two threads which have to coordinate work. So once you have the lock in one thread, you run the RENAME TABLE in another thread, and the RENAME TABLE will have a higher priority than already-running inserts and updates. But all the information about how exactly that works is on GitHub.

So the question was: does it support GTID? Yes, we are running with GTID in a lot of places. I would have to think about where GTID really matters there. Probably the most important thing when running with GTID is that you want the GTIDs to actually match in the end. But you're inserting on the master, and every insert is just a regular insert, because those are the changes you're applying from the binary log, and you're doing an INSERT INTO ... SELECT * FROM, which will also just generate normal GTIDs. So I don't really think we had to do many special things to allow GTID to work properly.
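The two-thread cut-over coordination described above can be sketched as the SQL the two sessions issue. This is an illustration from memory, with invented table names, not gh-ost's exact statements; the authoritative description is in the gh-ost documentation:

```sql
-- Session A (the lock holder): create a sentinel table, then lock both
-- the original table and the sentinel, which stalls application traffic.
CREATE TABLE _mytable_del (id int);
LOCK TABLES mytable WRITE, _mytable_del WRITE;

-- Session B (the cut-over thread): this RENAME blocks on A's lock, but is
-- queued ahead of ordinary INSERTs/UPDATEs because RENAME takes priority.
RENAME TABLE mytable TO _mytable_del, _mytable_gho TO mytable;

-- Session A, once the remaining binlog events have been applied:
DROP TABLE _mytable_del;   -- frees the RENAME's target name
UNLOCK TABLES;             -- B's queued RENAME now completes atomically
```

The trick is that if anything goes wrong before the unlock, B's RENAME simply fails against the still-existing sentinel table, leaving the original table in place.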
Thank you.