Let's talk about a pretty controversial topic. I'm going to give you ten reasons why you should prefer PostgreSQL to MySQL. I'm sure a lot of people here are using MySQL, so I hope you'll like this talk.

Let me give you a brief introduction of myself before I start. My name is Anand. I'm an independent software consultant and trainer. I'm not a DBA or a database specialist; I build web applications, and it's hard to think of a web application without a database. So over time I've worked with both MySQL and PostgreSQL, with fairly large databases, and I've seen issues with both of them. I started liking PostgreSQL for its simplicity and its advanced features, and I'm going to share some of the insights I've picked up along the way.

So, MySQL or PostgreSQL? (PostgreSQL is also just called Postgres.) These are the two popular open source databases available. MySQL's tagline is that it's the most popular open source database, and PostgreSQL's is that it's the most advanced. So which one do you want to use: the most popular one or the most advanced one?

Before we get into the details, let's set up the comparison properly so we're comparing like with like. MySQL is really two different databases, because it has two different storage engines, MyISAM and InnoDB, and then there's PostgreSQL. MyISAM is a very simple storage engine. It doesn't support transactions, it doesn't support foreign key constraints, and for every write it locks the entire table. Because the model is so simple, it tends to be quite fast for simple operations: if you have a read-heavy load and no complicated queries, MyISAM is usually faster. But since it locks the whole table for writes, it won't work well under heavy write load. InnoDB is the more ACID-compliant engine: it supports transactions and foreign key constraints, and it has MVCC, multiversion concurrency control, which means multiple writers can update at the same time because it locks only a row. That's the bigger picture of how these things stand.

Comparing raw performance numbers is usually misleading, so I'm not going to benchmark one particular use case. Instead, let's try to understand the databases at a deeper level, see what each one has to offer, and what kind of issues we may face in production.

Who uses MySQL? A lot of people. Facebook uses MySQL, and so do many other big companies. There are plenty of people smarter than me using MySQL, so it's not a dumb tool; it has its own use cases. The same is true of Postgres: a lot of big companies use Postgres. But it's misleading to pick a database just because of the names on its user list. These are big companies with a lot of resources; they can take almost any database, put people on it, and scale it. What we should really ask is: given the database as it is, with a small team of whatever size we actually have, can we use it? What does it offer, and what quirks does it have? Just name-dropping companies doesn't really help at all.

I'm going to start with a fun thing about MySQL. First, let's ask why we use a database at all. Why can't we just use flat files? We don't use flat files because we don't want to worry about all the integrity issues ourselves.
We just want to hand data to a database and ask it to save it. If the call comes back, we know the data is saved and we don't have to worry about it: concurrent writes, whatever else is going on, I send an insert, it returns, my data is safe. That's the confidence a database gives us. And apparently MySQL doesn't always give you that. MySQL sometimes lies to you.

Let's look at a small example. I've created a table called cake with a name column of type varchar(3), and I've inserted 'pancake' into it. What should happen? It's a database: you give it anything, it saves it, and when you ask for it, it gives it back. Let's see. If you run select * from cake, you only get 'pan'. Where's the cake? What happened to my cake? MySQL ate your cake. If you noticed in the previous slide, there was one warning, saying that 'pancake' is too long for the column, so it ate the 'cake' and only kept the 'pan'. That's not something I expect from a database. Do you? A database is something where I give it data and expect it to save it as it is. And this isn't the only such behavior in MySQL.

Let's see what happens if we try the same thing with Postgres. It says: sorry, I can't handle this; it's an error. So with Postgres, if it comes back saying the write succeeded, the data is safe, and here, where we're putting a longer value into varchar(3), it raises an error. That's something I can live with, because I immediately know it wasn't accepted and I can go and find out why. With MySQL, if you don't know these kinds of things are happening, it's going to be difficult, because you'll only realize it much later, and by then your data is already lost.

Have you ever seen something like this? It's pretty common to define a password column as varchar(8). Have you ever signed up on a website with a long password, and then when you try to log in it won't let you in, but if you type just the first eight characters of the password, it works? Have you ever faced that situation? It actually happened to me once, with a bank's forex card website. That's when I realized those folks were using MySQL with varchar(8). So that's something to be very careful about with MySQL: it silently truncates the data if it's longer than the column.

There are also data conversion issues you should be aware of. Say I create a table with an integer column and insert a string that isn't a number at all. What do you expect? It's an integer column, but I've put garbage into it. MySQL says: okay, inserted. When you go back and look, there's a zero in there. What? Again, you're supposed to look at the warnings to see what happened. Try the same thing with Postgres and it says: sorry, you can't do that. And there's more: if you insert an invalid date, MySQL stores 0000-00-00. There are more quirks like this, but I'm not going to go through all of them; I'm just pointing out a few.

There's actually a nice parallel here. If you try the same thing in PHP, take a string that isn't a number and cast it to an integer, you get 0. No wonder the PHP folks love MySQL.
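Roughly, the sequence described above looks like this. The cake table is the one from the talk, the counters table is my own placeholder for the integer example, and the exact error messages can vary slightly between versions:

```sql
-- MySQL with the default (non-strict) settings: silent truncation and coercion
CREATE TABLE cake (name VARCHAR(3));
INSERT INTO cake VALUES ('pancake');           -- "Query OK", but with a truncation warning
SELECT * FROM cake;                            -- returns 'pan': the cake is gone

CREATE TABLE counters (n INT);
INSERT INTO counters VALUES ('a bad number');  -- "Query OK", again with only a warning
SELECT * FROM counters;                        -- returns 0

-- PostgreSQL refuses both outright:
--   INSERT INTO cake VALUES ('pancake');
--     ERROR:  value too long for type character varying(3)
--   INSERT INTO counters VALUES ('a bad number');
--     ERROR:  invalid input syntax for integer: "a bad number"
```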
And I tried the same thing with Python: it gives an error. I'm a Python guy, and I feel at home with Postgres; if you like the Python style of doing things, you should probably try Postgres. So that was the fun part. To be fair, this behavior can be fixed in MySQL: there's a configuration setting, strict SQL mode, that turns these silent warnings into errors. It's a small thing, but if you don't know about it, it can lead to data loss, which makes it pretty dangerous. (See the snippet below.)

Now let's look at deeper things: how these databases store data on disk, and what implications that has for database maintenance. MySQL has the two engines, MyISAM and InnoDB. MyISAM creates a directory for every database, and for each table it keeps two files: one for the table data and one for all of that table's indexes. InnoDB, by default, keeps all the databases in a single file: if you have 10 databases, all of them go into that one file. There is an option to split it into a file per table, but the single file is the default. Postgres is much more fine-grained: it has one directory per database, and one or more files for each table and each index.

This is actually very important, because it affects what happens when you add an index or add a new column. Also, since Postgres has a separate file for each table and index, if you have 10 indexes but are only using two of them, only those two get loaded into the operating system's buffer cache; the others are never touched, so you don't pay a penalty for having many indexes lying around. With MySQL, all of that is sitting in the same file, so the indexes are interleaved together, and when you load a disk block you may be pulling in pieces of several indexes while using only a small fraction of it. In that sense, Postgres's file layout is very nicely designed.

Now look at database maintenance, because the disk layout has a lot of implications there. How do you create an index in MySQL? With MyISAM, when you create an index, it locks the entire table for writes, and it has to rebuild the whole index file, because there's a single index file per table. During this process you can't write anything: the table is locked for writes, so if you have write traffic, it will max out the number of connections you have and the database becomes unusable. Not only that: when you create an index, it doesn't just rebuild the index file, it also makes a copy of the table data file, so you need roughly 50% of your disk space free. And for whatever reason, it rebuilds all the existing indexes from scratch as well, so if you have 10 indexes it takes 10x as long, and with 20 indexes, 20x. I don't know why, but that's how it works.

I used to work at the Internet Archive, and because creating an index locks the whole table, people there were very scared of touching the database.
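As a rough illustration of the two points above: the exact sql_mode flags you want depend on your MySQL version, and the relation names and file paths here are placeholders, not anything from the slides:

```sql
-- MySQL: turn the silent truncation/coercion warnings into hard errors
SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';

-- PostgreSQL: every table and index is its own file on disk; you can ask
-- where a relation lives, relative to the data directory
SELECT pg_relation_filepath('cake');           -- e.g. base/16384/16438
SELECT pg_relation_filepath('cake_name_idx');  -- a (hypothetical) index gets its own file
```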
They were using MySQL, and the schema had grown badly over time, as these things do. There was a big table with 20 or 30 columns and a lot of indexes. Even when they knew a query was running slow, they couldn't do anything about it; they had to find workarounds, like running things as cron jobs at night when the load was low. I looked at it and thought: let me volunteer to fix this. I said I'd add the new columns and the new indexes. What happened was this: since adding anything rewrites the whole table, I first added the new columns, which rewrote the whole thing. I had planned for one hour of downtime, but it took three hours to add the columns and rebuild all the indexes. Then I had to add the new indexes, so it did the whole thing again. The site was down for seven hours, and it was one of the high-traffic sites on the internet, down for seven hours because of MySQL.

InnoDB used to behave the same way; I think it has improved in recent versions, but I haven't checked the newest ones.

In Postgres, the plain CREATE INDEX also locks the table for writes. But there's another variant, CREATE INDEX CONCURRENTLY, which doesn't hold the lock for the entire duration; it holds it only in short bursts. It's slower than a plain CREATE INDEX, but the nice thing is that you can build an index without stopping your operations. That's a really nice thing. And since each index sits in its own file, it doesn't affect anything else.

The same goes for DROP INDEX. In MyISAM, all of a table's indexes sit in a single file, so to drop one index it rebuilds the entire file, which takes about as long as creating an index. So what do people do? They just let the unused index lie around. With Postgres it's instantaneous, because the index is just a separate file: you remove that file and mark it as deleted. That's nice.

The same story applies to adding columns: in MySQL you get the same table rewrite, while in Postgres adding a new column is almost instantaneous, as long as you're not asking it to backfill a default value into the existing rows. This ability to iterate is really nice: you can measure database performance, create an index, see whether it helps, and if it doesn't, drop it and try something else. You just can't work that way with MySQL.

Now let's look at the connection model of MySQL and Postgres. MySQL uses a thread for each connection. The nice thing about threads is that they're cheap to create: you can spin up a thread very quickly. The downsides are that it's harder to scale on multi-core systems, partly because of the threads themselves and partly because MySQL's implementation needs more locks, and that threads are harder to monitor than processes. Postgres, on the other hand, uses a separate process per connection, which gives better concurrency and complete isolation between connections.
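A minimal sketch of the maintenance operations described above; the table, index, and column names here are placeholders, not the Internet Archive schema:

```sql
-- Build an index without blocking writes: slower than a plain CREATE INDEX,
-- but the table stays usable the whole time
CREATE INDEX CONCURRENTLY items_created_idx ON items (created_at);

-- Dropping an index is effectively instant: the index is its own file,
-- which just gets unlinked
DROP INDEX items_created_idx;

-- Adding a column is a quick catalog change, as long as Postgres doesn't
-- have to rewrite the table to backfill a default into existing rows
ALTER TABLE items ADD COLUMN notes text;
```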
If one connection is misbehaving, you can just kill it and nothing happens to the rest of the system. The other important thing is that it plays very nicely with the Unix tools: you can use ps, top, or kill a process. The downside is that creating a new connection has a lot of overhead, so typically you use a connection pool on the client side, or server-side connection pooling.

Let me show you how this looks in top. These are all the processes run by Postgres: there's one process writing data out to disk, there's the WAL writer, and so on. And if you look there, there's a COPY command running; that's probably a backup or something, and you can see the PID of that connection. Now suppose I want to build a new index and that backup is going to get in my way. I can simply kill that process: kill plus the PID, and the backup is aborted. Or I could do kill -STOP, which is like pressing Ctrl-Z: pause the backup, build my index, and then let it continue. There are even firefighting situations where the site is slow, the number of database connections is creeping up and it's about to tip over; you look at top and see that one query is taking too much CPU, and it was started by a colleague who isn't in the office right now. Just pause that query, send them a mail saying "continue it when you're back", and keep monitoring the database. These kinds of things are possible not because I know Postgres internals, but because Postgres plays nicely with the existing Unix tools.

Now let's look at query planning. I'm going to take this query. I have a database of names: for each name, the year, and the number of people given that name in that year. I want to see how this query performs in MySQL and in Postgres, and to understand how it's executed. That's what the EXPLAIN command is for: you write EXPLAIN followed by the query, and it tells you how the query will be executed.

In MySQL, the output is one long row; I've broken it into two parts to fit the slide, and it's not very clear. It says it couldn't find any key to use, so it's going to scan all the rows; it shows the number of rows it expects to scan, it applies the WHERE condition as a filter, and then it uses a filesort for the ordering. How do you improve this? Create an index on the total column, and if you run EXPLAIN again, it now says it's using the names_total index, and the number of rows it touches is just 10. That's better, but the output is still not very clear.

Let's try the same thing with Postgres, first without the index. It says it's going to do a sequential scan on names, and it shows the cost involved. The cost is an internal metric, roughly "this many operations are needed": it will sequentially scan the names table, filter by year equal to this value, then sort on the total column, apply the limit, and the total cost comes out to some number. That cost is not milliseconds; it's an internal number, but there's a correlation between it and the actual time taken. Now let's create an index and see what happens.
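In rough SQL, the experiment just described could look like this; the column names and index name follow the talk's description, while the specific year value is just an example:

```sql
-- Without an index: EXPLAIN shows a sequential scan, a sort, and a limit
EXPLAIN
SELECT name, total FROM names
 WHERE year = 2013
 ORDER BY total DESC
 LIMIT 10;

-- Add an index on the total column and ask again; EXPLAIN ANALYZE actually
-- runs the query and reports real row counts and timings alongside the cost
CREATE INDEX names_total ON names (total);

EXPLAIN ANALYZE
SELECT name, total FROM names
 WHERE year = 2013
 ORDER BY total DESC
 LIMIT 10;
```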
Now it says it's doing an index scan, an index scan backward in fact, because we're looking for the top names in that year, so it walks the index from the end, and then filters by the year. You can also use EXPLAIN ANALYZE, which actually runs the query and tells you how it performed: how many rows were removed by the filter, and how long each step actually took. The actual time doesn't translate directly from the cost, but it's roughly proportional. This is nice, because you can see exactly how a query is being executed, check whether it's using the right index or not, remove an index, and tweak the query.

To try something a bit more complex, I added a GROUP BY, and now the plan is fairly sophisticated: it does a bitmap index scan. What's a bitmap scan? It builds a bitmap with one bit per row, initially all zeros, sets the bit for every row where the condition is true, then fetches all the rows whose bit is set, aggregates them, and sorts by the grouping key. Postgres can handle these fairly advanced plans, and I think it clearly has the upper hand in executing complex queries. So that's query planning.

Now let's look at how replication works in MySQL and Postgres. MySQL has a variety of replication modes: you can replicate by statement, by row, or a mix of the two; there are different ways of writing and replaying the binary log; and there's something called global transaction IDs. Each of these has limitations and doesn't replicate everything faithfully. For example, with statement-based replication, some statements are non-deterministic: you can write an UPDATE with LIMIT 2, which updates only the first two rows it finds. But which two rows? If you don't specify an order, it can update any two rows, and when the same statement is replayed on a slave, it can update two different rows (see the small example below). I find the options MySQL offers pretty confusing, and row-based replication has its own limitations.

Postgres, by comparison, is simple and straightforward. It maintains a WAL, the write-ahead log: whenever a disk block is modified, the change is written to the write-ahead log, and that log is what gets replayed. This works at the disk-block level, below the SQL level: it takes the block changes and applies them to the disk on the slave. There are two modes, synchronous and asynchronous. In synchronous mode, the master waits until the transaction has been confirmed on the slave; in asynchronous mode there's a small delay. And there are two ways of shipping the log: log shipping, where you take the WAL files and send them to the slave to be replayed, or streaming, where the changes are streamed over the network, and a slave can in turn replicate to other slaves. That's how replication works in Postgres.

Now let's look at data recovery in Postgres. For MySQL I haven't shown anything here, because I don't think it has comparable facilities. Postgres, as I said, has the write-ahead log. Once you take a base backup of the database, an rsync of the whole Postgres data directory, you can then take the WAL files and apply them on top of it.
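The non-deterministic statement mentioned a moment ago is simply MySQL's UPDATE ... LIMIT form; using the names table from the earlier example, the shape of the problem is:

```sql
-- Valid MySQL, but ambiguous: without an ORDER BY, *which* two rows get
-- updated is undefined, so a slave replaying this statement under
-- statement-based replication can pick different rows than the master did
UPDATE names SET total = 0 LIMIT 2;
```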
With that base backup, here's what you can do. Say you've deleted some rows by mistake and you want to replay the database up to a couple of minutes before that. You copy in the WAL files and tell Postgres to replay up to a given timestamp, and it recovers to exactly that point and stops there. You effectively have a time machine for your database: you can go back one day, two days, or to "one day and five minutes ago", any timestamp you specify, and it stops right there. That's a really nice feature: if you've crashed your database or done something nasty, you can just roll back to the moment before.

That's the comparison. Let me quickly show you some of the interesting features of Postgres.

There are partial indexes: you can say "index only the rows where this condition is true". For example, suppose a query with WHERE email LIKE '%spam.com' is taking 10 seconds, you want to make it fast, and you don't have time to change the code. Just create an index with WHERE email LIKE '%spam.com' on it, and Postgres will use that index. So you can put WHERE conditions into the index definition itself. There are also functional indexes: you can use a function or expression in the index, so if a query applies a function in its WHERE clause or GROUP BY, you can create an index on that expression and the query will use it.

And with the whole document-database buzz: Postgres has a JSON data type. You can create a column of type JSON, store a JSON blob in the table, and even query inside it, for example "give me all the books where the author is such-and-such". There are both JSON and JSONB; JSONB is a binary format and provides a few more features. JSON was added in 9.2, I think, and JSONB in 9.4. They largely do the same thing, with JSONB having more capabilities. So if you want a document store or semi-structured data, you can just use Postgres; you don't have to go out and adopt something like MongoDB. You can keep everything in the same relational database and still get the flexibility.

And this is another beautiful thing: there's an extension called pg_stat_statements. It keeps track of every query and how much time it took, so you can find out which queries are taking the most time. Sort by total time divided by the number of calls, and it tells you: this query took this much time and was called three times, so this is the one I need to optimize, and the next one is this. Those are the bottlenecks in my database; now I can go back, create indexes, and optimize them. (There's a small sketch of these features below.)

So that's what I have to say, and that's the summary: I think Postgres is better than MySQL in data consistency, as I showed at the beginning, and in query planning, stability, database maintenance, and data recovery. I'm sure there are cases where MySQL runs faster and is the right tool, but given the advanced features, I think it's worth trying Postgres for your next project. Do you agree? Credits for the Postgres logo image that I've used. Thanks, and I'm open for questions.
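Sketches of the features just mentioned; the table names, columns, and values here are illustrative rather than the ones on the slides, and pg_stat_statements has to be preloaded in postgresql.conf before the extension can be created:

```sql
-- Partial index: only rows matching the condition are indexed, so the
-- slow "spam" query can be answered from a much smaller structure
CREATE INDEX users_spam_email_idx ON users (email)
 WHERE email LIKE '%spam.com';

-- Functional (expression) index: index the result of a function that the
-- query itself uses in its WHERE clause
CREATE INDEX users_lower_email_idx ON users (lower(email));

-- JSONB: semi-structured documents inside an ordinary table
CREATE TABLE books (id serial PRIMARY KEY, doc jsonb);
INSERT INTO books (doc) VALUES ('{"title": "Tom Sawyer", "author": "Mark Twain"}');
SELECT doc FROM books WHERE doc->>'author' = 'Mark Twain';

-- pg_stat_statements: which queries are eating the most time?
CREATE EXTENSION pg_stat_statements;
SELECT query, calls, total_time        -- the column is total_exec_time in newer versions
  FROM pg_stat_statements
 ORDER BY total_time / calls DESC
 LIMIT 5;
```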
One of the things that people pick MySQL for is phpMyAdmin: it lets you see your database in a web page, so you don't need a machine with a client installed to look at the database and work with it. Is there something like that for Postgres? Sorry? phpMyAdmin; it lets you browse the database and do all the database operations in the browser instead of a client. I think you're talking about phpMyAdmin. Yes, phpMyAdmin. So there's something called phpPgAdmin as well, and you can do the same kind of things with it.

I have one more question, specifically on Postgres. Right now we do a dump and load for every newer version of Postgres. We're trying to avoid that, and we noticed that this replication option became available recently, in 9.3. Is there a way to do this in older versions of Postgres? Streaming replication? What are you trying to do? A DB upgrade, basically: to upgrade, you have to do a dump and load. Which version of Postgres are you using? 9.2 is what we have. The releases within a series are format compatible, so 9.2 to 9.3 should be able to replicate; I think you can do streaming replication from 9.2 to 9.3. The documentation actually says that it's not supported; I don't know for what reason. Okay, I'm not sure; I've not tried that. But what you could try is this: there are two ways of doing replication. One is streaming replication, which copies the changes over the wire. The second is log-shipping replication: you take the WAL files, put them in a directory, and periodically rsync them to the slave machine. That's something you can try. There's the archive command: when a WAL segment is finished, Postgres calls the archive command, and the archive command decides how it wants to copy the file. You can use that to send those files to the slave, and the slave can replay them. We actually learned that streaming is not supported in 9.2. No, streaming replication is supported from 9.0. The 8.x releases don't have streaming replication, but from 9.0 onwards they do.

We have one question: does PostgreSQL support a default failover mechanism, or any options for high availability? Could you please repeat the question? Does PostgreSQL support a failover mechanism by default? Yes. Streaming replication is what you use for high availability. There are two modes, synchronous and asynchronous. If you use synchronous replication, then for every update that happens on the master, it makes sure the slave has also received the copy. Okay, but how do we set up high availability? If the master goes down, we have to bring up the slave manually, right? There are some tools to automate that, but I have never worked with them. I think people use something like pgpool, which sits in front of the database and decides which database server to connect to.

Is it possible to set up replication synchronously between the master and slave one, and asynchronously between slave one and slave two? Yes, I already showed that; it's called cascading replication. You replicate from the master to a slave, and that slave can replicate on to other slaves.
We felt that when we do synchronous replication from one master to a slave, it was a bit slower. The master was still waiting for the slave to finish, but when we checked on the slave, the query hadn't actually completed. Okay, I'm not sure about that specific case, but yes, synchronous replication will be slower, because the slave has to acknowledge back that it has completed its operation.

Which versions of MySQL did you take into account while comparing the features? MyISAM I tested with the latest version that comes with Ubuntu 15.04, which I think is 5.6. And InnoDB? I've not run these particular tests on it; some of the experiments were done on 5.5 or so. And you didn't test the Percona binaries or the MariaDB binaries? No. A second question, on the Postgres side: what about sharding? Are there sharding features supported by default, or do you have to build a layer in front? I'm not sure; do you mean sharding across multiple tables, or across multiple databases? Multiple hosts, each holding a chunk of the data. I don't think so. Postgres has a standard: anything that goes into Postgres core is very well tested and is going to be a rock-solid feature, so the more experimental things come as external tools. For example, replication was done by third-party tools for a very long time, and it took a while to get into Postgres itself. There are tools that do all these things as external applications; they sit in front of the database and take care of it. Comparable to MySQL Fabric? Well, I've never worked with MySQL Fabric.

Hello, can you hear me? Yes. We have been using PostgreSQL for a long time, but we don't know what the clustering solution is for PostgreSQL, to get a master-master kind of setup similar to MySQL clustering. I'm not sure; I don't think Postgres has master-to-master replication. You're talking about master-master replication; what I'm asking about is a cluster, the way MySQL provides cluster software itself, like MariaDB Cluster. So I'm asking what the equivalent is. I'm not sure what MySQL Cluster does beyond replication. A cluster can give you good HA and load balancing as well: say the cluster has nodes A and B, you can push one statement to A and the next statement to B. I don't think Postgres supports something like that out of the box. There are commercial extensions to Postgres that might support something like that, but I'm not sure. Okay, thank you. It's a big topic for us in terms of scalability, because we don't want to scale up, we want to scale out.

I have a couple of questions on the comparisons you made between MySQL and Postgres. I'm not sure which versions of MySQL you compared, but MySQL has fast index creation that doesn't take a lock on InnoDB. Doesn't take a lock on...? If you are using the InnoDB engine. That's what I mentioned: the version that shipped with the distribution I was using had this issue, but the newer versions have probably fixed it; I've not tried them. I think at least two years ago MySQL already had fast index creation once you started using InnoDB engine 1.1. That's one part.
The other part is that MySQL 5.6 has GTID-based replication, so you can use GTIDs. And for the people who were asking earlier about high availability and a true multi-master setup: MySQL has Galera replication, which is true multi-master with synchronous replication. Okay, let me add one thing. With Postgres, when I see it working, I'm pretty confident that it really is working. With MySQL there are a lot of quirks; for example, with replication, the replica can go out of sync and you wouldn't even know. Those kinds of things happen with MySQL.

But how can you be sure of that even with Postgres? By the way, Postgres is a good standalone database; I agree it has features such that, as a standalone database, it really outperforms MySQL. But when it comes to reliability and availability with scaling, I feel Postgres really falls short on replication, because it's still in catch-up mode, and it doesn't come with proxy mechanisms, like MariaDB MaxScale and similar SQL proxies, where you can do sharding and load balancing through the proxy and rebalance reads and writes across the cluster. I think for Postgres, too, there are third-party tools that provide that. That's not part of core Postgres; I agree. But the tools being built on top of MySQL are built more around the MySQL core, and I think that matters, because the proxy has to understand the replication technology as well: whether you're using synchronous or asynchronous replication, and where a statement should deterministically go. That's why I feel people use MySQL more than Postgres when you really get into multi-datacenter deployments; people do face problems there, right? Well, I won't be able to answer the multi-datacenter question, because I've never worked at such a large scale. But Skype, for example, uses Postgres, and they're trying to scale to something like a billion operations per second, so they're scaling out like that. I'm sure it's possible with Postgres as well, but that's not a question I can answer from my own experience.

The last thing: you said Postgres keeps replication consistent, right? How do you actually make sure of that? With MySQL there were problems, obviously, so the Percona folks built checks and tools that compare the data row by row between the master and the slave, and that's how you guarantee the slave has the same data as the master. How do you guarantee the same thing in Postgres? Let me give a brief history of the design philosophies of Postgres and MySQL. MySQL started out to be a fast database server, and Postgres started out to be a standards-compliant, complete database server. So the approach of Postgres has been that only standard, stable things go in; with MySQL, the concern was not meeting standards but making something fast and practical that more people can use.
So MySQL has come a long way toward meeting the standards, and Postgres has come a long way toward becoming faster. Excuse me, let me just cut in: we have a database BoF at 3, so you can take all these questions there. Can I finish this question? Yeah. Okay, sorry.

So, with Postgres, if something goes into Postgres, you can be fairly confident it's not going to fail. If there's an error during replication, it will actually stop there; it won't continue replicating past it, it stops replication at that point. With MySQL it will keep going, and you have to use those external tools to figure out that something is wrong. Postgres maintains a checksum for every WAL record; it checks that, and if it gets a WAL record that doesn't match, it will stop there. And no, Postgres doesn't do this at the row level; it works at the block level. Shall we take the rest of the question offline? Yeah, I think so.