Hi. So, yeah, Peter Eisentraut. I've been a Postgres contributor for many, many years; I work for 2ndQuadrant doing Postgres development and support. And I'm here to talk about the new logical replication feature that we've been putting into Postgres 10. So for those of you who are not at all up to date with the Postgres versions: Postgres 10 is not out yet, so this is a preview. (Is the microphone working? No? Then I'll just try to be louder.)

Okay, so yeah, Postgres 10 is not out yet. We'll have more on that this afternoon at the panel about what Postgres 10 is doing, so this is all a preview. One of the reasons I'm doing this is obviously just to tell you what we're doing, but also to encourage people to download it and test it out before we release it. So what I'm going to do is first a demo, and then, if we have time, which we hopefully will, I'll go back to the slides and do some theory.

Let me make my screen bigger. So, as I was saying, this is Postgres 10 that I updated yesterday, so this is all in flight. Is this big enough to see reasonably? I'm going to do some SQL, that kind of thing. Yeah, okay. So to start, I run initdb with a data directory, normal stuff, and start Postgres. Because I'm going to replicate from one instance to another instance, I'm going to put two instances on the same host, so I'm going to give them two different port numbers; in reality you can do this any way you want. And in this case I have to set wal_level = logical; I'll explain later why. This is a little bit at the bottom of the screen; I'll try, but it'll scroll up, so you'll see that. All right, can you see that? So Postgres is running now.
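As an aside, the wal_level change the demo makes in postgresql.conf can also be made from SQL; this is just a sketch of an equivalent, assuming a superuser connection to the would-be publisher:

```sql
-- Logical replication requires wal_level = logical on the publishing side.
-- (A server restart is needed before the new value takes effect.)
ALTER SYSTEM SET wal_level = 'logical';
SHOW wal_level;
```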
One side thing to point out: in Postgres 10 we finally changed the log_line_prefix default to show something useful, so what you see here is all the default. You can see the timestamp and PID, which is quite nice, and you see some new interesting information about what addresses it's listening on. These are just other improvements in Postgres 10 that are kind of nice; we're trying to have more useful defaults.

So this is going to be our master, or primary, however you want to call it; that's running now. Now I'm going to do another one, data directory two, and start a second Postgres on that second data directory. We give it a different port number, and in this case I didn't have to add any extra options for wal_level or anything like that. So we have two instances running now.

I'm going to connect to the first one, create a database, and insert some stuff — my favorite test data here, always very easy to type. Cool, so now we have some data, and now we want to replicate it. So here comes a new thing: we're going to create something called a publication. There's a new command for that, CREATE PUBLICATION — it even has tab completion. Just give it a name; we always call them mypub because we haven't thought of any better names yet. And FOR TABLE test1. So this is a new command. What a publication does is basically just group together tables that you want to replicate; in other systems you might call this a replication set or a table set or something like that. So this doesn't really do anything interesting; it just creates a grouping of some tables. And you can do SELECT * FROM pg_publication — that's a system catalog.
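Put together, the publisher-side demo steps look roughly like this; the table definition and test data are stand-ins for whatever was typed on screen:

```sql
-- On the publisher (database "test" in this demo)
CREATE TABLE test1 (id int PRIMARY KEY, data text);
INSERT INTO test1 VALUES (1, 'one'), (2, 'two');

-- Group the table into a publication; this only registers catalog entries
CREATE PUBLICATION mypub FOR TABLE test1;

-- Publications are visible in the system catalog
SELECT * FROM pg_publication;
```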
So it's there, you know. There's also a backslash command to show these, and because all the good letters were already taken, it's \dRp — lowercase p was taken and lowercase r was taken, so capital R is for replication. Hopefully you don't have to type that too often. So the publication has a name, it has an owner, and then some attributes that we'll come back to. Again, this didn't really do anything; it just registers some stuff in the catalog.

The interesting thing happens on the receiving end. There, again, we have to create a database to put stuff in. Now, one thing this does not do yet is replicate any kind of schema. The schema — meaning the tables and all that stuff — is not replicated, so you have to create that yourself. That's something that could be addressed in the future, but right now you have to do it yourself, so I put that in here. In reality you might use pg_dump with --schema-only to copy the entire schema.

So now we have a primary with some data, we've created a publication, and we've created the table that is supposed to receive that data. Now, in order to actually start this whole thing up, we're going to create the opposite of a publication, which is called a subscription. This is a new command as well: CREATE SUBSCRIPTION, some name, and then you tell it where you actually want to connect to. This is a connection string, so in this case our endpoint is the other host running locally on this port, and the database name is test, and then you could add other things like the username. But this is just the default. So we tell it where to connect to, and then we also have to tell it what it should pull — that's the publication.
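The subscriber side of the demo, as a sketch, using the port numbers and names assumed above:

```sql
-- On the subscriber: the schema is not replicated, so create the
-- matching table yourself (or restore it with pg_dump --schema-only)
CREATE TABLE test1 (id int PRIMARY KEY, data text);

-- Connect out to the publisher and start pulling from the publication
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=localhost port=5432 dbname=test'
    PUBLICATION mypub;
```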
So we just add the publication name here, mypub, and that should be it. Now it told you a couple of things it did. It created a replication slot on the publisher — if you don't know what that is, it's not that important right now — and it synchronized table states, so it claims it has pulled the initial data down. We can check here. All right, there you go. Just to verify that this actually works, we can insert new stuff in here — what's next, three? There you go. Delete stuff — and it's gone. So that's replication.

So that's basically the gist of it. There are some attributes and nuances and things that I'll explain, but these are actually all the pieces. There are publications and subscriptions, you tie them together, that's it.

If you want to monitor this at some point, it's actually very similar to physical replication; you have some of the same things, so you can use pg_stat_replication. It's the same pg_stat_replication that you know, because it's the same mechanism on the sending side. It's all the same mechanisms: instead of sending physical bytes over, it decodes logical information and sends that over, but as far as the sender is concerned it's very similar. So you can use the exact same stuff — you have the same sort of LSN tracking of how far it's gone. One other thing that's new in Postgres 10, actually as of only a couple of days ago, is this stuff down here, the lag tracking, so you can track the lag as a time value instead of just these abstract numbers. So you can use all the same stuff, and you can look at replication slots in the same way; it's the same thing again. Yeah, the subscription connected and created this slot. Those are all very good and very detailed questions; we will maybe come back to those later. Okay.
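A monitoring sketch; these are the same views used for physical replication, including the time-valued lag columns that are new in Postgres 10:

```sql
-- On the publisher: one row per connected subscriber (walsender)
SELECT application_name, state, sent_lsn, replay_lsn,
       write_lag, flush_lag, replay_lag   -- time-based lag, new in Postgres 10
FROM pg_stat_replication;

-- Replication slots can be inspected the same way as before
SELECT slot_name, plugin, active FROM pg_replication_slots;
```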
Yes — essentially, on the receiving end there's something new: pg_stat_subscription. If you were doing physical replication you might look at pg_stat_wal_receiver in this case, but this is new, and it shows you what the subscription is doing. Not super interesting unless you throw a monitoring system on it that really tracks all these numbers; then you can compare these numbers here with that one and that one. So if you want to do detailed debugging, all the information is here. All right.

Any questions on the demo before I go back to the theory? Yes, please. — No, only one way. Well, you can hook it up in two directions, but then it's just going to keep on rolling, you know. You can filter and do manual things that way, but in principle it's just one way. — Absolutely, and you could wire those together with partitioning, perhaps, and then interesting things will happen. Yes.

— The question is what data types you can replicate. The data types are communicated as strings, essentially. So if you have a data type called hstore here and it sends data, it expects something compatible on the receiving end. Yeah, okay.

So that's the demo; let's look at some slides. Here's a sketch of what these objects mean. Everything here is per database, so if you have an instance with multiple databases, every one of them is completely separate. You could even set this demo up having one database replicate into another database on the same instance. That wouldn't be a particularly interesting example, but you could do that too, right?
So it's just per database. Inside the database you can have one or many publications, and they can contain some tables, all tables, or no tables. And a subscription can pull from one or many publications. The tables are matched by name. The publication knows which tables it has — so if you drop a table that's part of a publication, it will complain and you have to use CASCADE and that kind of thing — but the subscription doesn't really know which tables it contains until things actually start coming in over the wire. The subscription just takes the table name — just like it does with data type names — and matches by name. So if we had done this demo and the target table wasn't there yet, it would just give you an error until you put the table there.

Question over there. — The question was: since we're not publishing or replicating DDL yet, are you allowed to change tables? Yes, you can change tables, and there are actually some cases that are handled. If the target table has a different column order, that's handled, because columns are mapped by name. If you have more columns on the receiving end, that also works, and things like that. If you add a column on the publishing end, say, and that data starts shipping, the receiving end is going to complain that it can't find that one column, but as soon as you add it, it will keep going. So it's pretty robust. It's not going to break dramatically the way older trigger-based systems perhaps did, if that's your experience. Yes, please. — If you have an additive change, you want to add it to the replica first. Yeah, that might be a good general guideline.
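Following that guideline, an additive schema change could be rolled out like this (the column name is made up; remember no DDL is replicated for you):

```sql
-- Step 1: on the subscriber first, so incoming rows always find the column
ALTER TABLE test1 ADD COLUMN note text;

-- Step 2: then on the publisher; rows shipped from here on include "note"
ALTER TABLE test1 ADD COLUMN note text;
```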
Yeah, so, the syntax of the commands, just to give you some of the variants. You can have multiple tables in a publication. There's a special way you can create a publication that contains all tables, including tables you create in the future; that's just a special case if you want to replicate everything.

— If the transaction fails? Transaction integrity is fully there. Yeah, well, that's how this works: everything goes over the write-ahead log, and the write-ahead log is decoded and put back together on the other end, roughly speaking. — Correct. The question is whether the write-ahead log can get recycled before all this stuff is shipped out. On the sending end this is almost exactly the same as what you have now with physical replication: you have the replication slots that preserve the WAL until it is acknowledged on the receiving end. It's almost exactly the same as physical replication.

So, publication FOR ALL TABLES. And then there are some options when you create a publication: you can choose whether you want to publish inserts, updates, or deletes. You can use that for special use cases. Perhaps you don't want to publish deletes if you do some kind of archiving replication — you just copy all the inserts and maybe the updates, but any deletes you just skip. Or maybe you just want to log all the updates; I don't know if that really makes sense, but you can do it. The skipped actions just don't show up on the receiving end; it's as if they didn't happen, right?
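Sketches of the variants just described; the publication names are made up:

```sql
-- Replicate every table, including tables created in the future
CREATE PUBLICATION everything FOR ALL TABLES;

-- Publish only some actions, e.g. an insert-only archiving feed
-- (deletes on the publisher simply never show up on the subscriber)
CREATE PUBLICATION archive_feed FOR TABLE test1
    WITH (publish = 'insert');
```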
So you can alter publications: you can add tables and remove tables on the fly, that's totally fine. The first form adds some tables, the second completely replaces the set of tables, and then there's one to just remove tables. And, this is obviously very shocking, you can also remove publications, even multiple of them at once. Again, a publication is just a logical grouping, so this just removes some catalog entries, essentially.

Question in the back — a little louder? — Multiple subscriptions for the same publication? Yes, they're independent. Every subscription has a different replication slot, so they're tracked completely separately; that works. So you can combine them: it's basically an N-to-N mapping of publications and subscriptions.

Okay, so that question goes much deeper into what I was going to say later. If you have physical replication from a primary to a standby, and you have logical replication that is looking at the current primary using the host information, and then you fail over — how do you manage that? There are a couple of problems with that. One: you can change the connection information; that's basically the next slide, the ALTER command. You can of course also use DNS CNAMEs or virtual IP addresses to have that move around automatically, and you need to make sure your ARP caches and all that stuff are handled — that's the same mechanism you would use for any other HA solution. The other thing is that what you're describing basically doesn't actually work yet, because replication slots are not replicated, so at that point the thing is broken. There's a patch on the hackers list called failover slots that should fix that. I don't know where that is right now — do you know? — That's still somewhere in the future, right?
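The ALTER and DROP forms mentioned here, sketched with hypothetical table names:

```sql
ALTER PUBLICATION mypub ADD TABLE test2;         -- add to the set
ALTER PUBLICATION mypub SET TABLE test1, test3;  -- replace the whole set
ALTER PUBLICATION mypub DROP TABLE test3;        -- remove from the set
DROP PUBLICATION mypub;                          -- only removes catalog entries
```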
Yeah, okay. So basically that kind of thing doesn't work yet, but it's definitely something we want to fix at some point. We'd like it if that worked, but there are some problems with it.

There are a bunch of options on subscriptions which have to do with the following: as soon as you create a subscription, as we saw in the demo, it connects out and reserves stuff on the sending end. There are some options you can fiddle with to avoid that. One reason is that if you take a backup with pg_dump and then restore it somewhere else, you don't necessarily want to start replicating right away, depending on what you're doing — maybe you're setting up just a copy for development or something like that. So you can fiddle with these options so that CREATE SUBSCRIPTION creates the subscription but leaves it disabled, or doesn't connect, or doesn't do the initial copy of the data. So you can have a sort of detached copy. It really depends on the use case; that's why subscriptions have a bunch of options when you create them.

And you can ALTER SUBSCRIPTION ENABLE and DISABLE — that basically just stops your replication and restarts it — or change the publications it works with. The last one, REFRESH, is for when you add tables to a publication: you have to run this command so the subscription goes out, fetches the new table set, and knows to initialize its internal state.

And then you can drop a subscription. Here's the thing: dropping a subscription by default connects out to remove the replication slot. But if, let's say, the primary is somehow gone, that won't work, and the drop will fail. So you can give it the option of just not trying that and not dropping the slot. If you do that and leave the slot behind, then bad things will happen, because stuff is going to accumulate. So this needs to be
managed carefully, like everything, because you allocate a resource and say: here, I want to replicate this stuff, hold on to it for me until I get it. But if you never come back, it piles up. So that's something you have to handle locally; see how you want to monitor that. The same thing applies to physical replication right now.

So, here are some configuration settings that are of interest for setting this up. The first one I showed in the demo: you have to set wal_level to logical. That just means enough information is put into the WAL to enable logical decoding. Then the next two settings, max_wal_senders and max_replication_slots: if you were using physical replication you might have seen those, because you have to set them, but in Postgres 10 we changed the defaults to be non-zero, so you can use it out of the box now. I think they're set to 10 by default — that's why they're in parentheses. If you want to do a lot, you might have to raise them, but the out-of-the-box defaults are now a little friendlier, so you don't have to worry too much about those. And if you want to replicate to a non-local host, you have to set listen_addresses so you can actually connect; that's nothing new. To configure access control, you just use normal pg_hba.conf stuff. And there's also a plug for a new feature in Postgres 10, SCRAM authentication.
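The subscription options and the slot-handling caveat discussed above, as a sketch:

```sql
-- Create a subscription without starting replication or copying data yet,
-- e.g. on a development copy restored from pg_dump
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=localhost port=5432 dbname=test'
    PUBLICATION mypub
    WITH (enabled = false, copy_data = false);

ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;  -- pick up newly added tables

-- If the publisher is gone, dissociate the slot so DROP doesn't connect out;
-- remember to clean up the orphaned slot on the publisher eventually,
-- or its retained WAL will pile up
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
DROP SUBSCRIPTION mysub;
```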
That's also new, but otherwise the pg_hba.conf stuff is pretty much the same as before. So basically you have to make sure you can connect to the host, and you have to set wal_level to logical; the other two matter only if you need more resources than the default configuration allows.

On the receiving end, as I showed in the demo, you don't have to set anything by default, but here are some settings that could be of interest. Again max_replication_slots — and this is totally weird, because you don't actually use any replication slots on the receiving end, but the max_replication_slots setting also sets the number of replication origin tracking slots, which for some historical reason are tracked in the same way. That's something we should probably fix, because it's super confusing, but you basically have to set it to a value that's high enough.

The other ones are basically how many worker processes you can allocate for various scenarios. Those worker processes are the general background workers that also serve parallel query, for example, and anything like that. I think the default for that is maybe eight, so if you have a lot of parallel query going on and a lot of this, you have to raise it, or there are not going to be enough processes available. Then you can fine-tune how many workers you want specifically for logical replication — there's a parallel setting for parallel query — so you can split up the buckets of how many processes you want to allocate to what. And the last one is how many workers you want specifically for the initial syncing,
— that is, the initial copying of the existing data. That can be parallelized by table, so if you have a lot of tables, it will by default use two workers per subscription, but depending on whether you want to get done quicker, or more gently — slower? yeah, "more gently" was my term — you can tune that too. But at least to get started, they're set to something reasonable.

Now, something we're still working on is exactly how to do the privileges and permissions around all this. The way it works now: there is a privilege to be able to create a publication. A publication is not something very dangerous, because, as I've mentioned several times, it just allocates catalog objects. To add a table to a publication you need to be the owner of the table, so that's pretty okay. To be able to connect and replicate you have to have the REPLICATION attribute on your user — that's the same thing as we have so far — plus the pg_hba.conf stuff I showed. To create a subscription, right now you have to be superuser, and that's just because we haven't thought of a better way to do it. That's maybe something we'll fix before the final release, because the only reason a subscription is special is that it creates a connection to the outside, and maybe you want to restrict that somehow; by itself it doesn't really have any superpowers in its own database. But somehow you want a way to not allow everyone to make external connections, so we may have to add a user attribute or something that controls that. So that's the general idea of what we're putting together.

As I showed, it uses mostly the same views, and in Postgres 10 most of this stuff will also show up in pg_stat_activity. That's a somewhat new thing: all the
That's a new thing all the somewhat new thing that all the Background workers will also show up in PG stat activity. Is that committed yet Robert? That all the stuff. Yeah, so that's in there already I want to actually see that Okay, so we have the auto vacuum launcher here a logical application launcher. There it is. See that's cool So you can also use PG stat activity and you see all these logical application background workers running around and you can see if They're waiting on anything like maybe IO way or anything like that. That also shows up there So there is a There's two kinds of background workers running on the subscription side one is the launcher that always Starts right away more or less and sort of monitors. If you have any subscriptions And if you have any then for each subscription you get a worker, which is this one here How does it know what from the PG subscription system catalog? Yes All right, so that's the nice thing essentially more or less for your replication Monitoring you can use almost exactly the same facilities that you have now for physical application for the same reason synchronize Replication is possible and just works. 
There's nothing special that had to be done to make that work. On the sending end you can say synchronous_commit = remote_write or remote_apply, all these things, and you have all the facilities that determine who is the synchronous standby and who are the other ones, and all these quorum things they're adding now — that all works just fine.

And cascading is possible, which ties in with the earlier question — not in the sense that there's global awareness of cascading, but you can hook these one-to-one connections together and have another one hanging off that in any way you want, or going around in a circle, and as long as you filter things, that should work. So you can do that, but it's not like, say, Slony, if you have used that, where there's global awareness of all the nodes — where, if you have nodes one, two, and three in a cascading setup, node one knows about node three even though they're not talking directly. It's not like that; it's just one-to-one connections. That could be something to improve in the future.

So why are we even doing this? Perhaps we should have started with that. Two big use cases, essentially. One is any kind of partial replication: if you just want to replicate one database, or one table, some tables, part of a table, certain actions on a table — that's what this is all about. The actual use cases for that are any kind of data aggregation, any kind of data warehouse or analytics kind of thing, archiving, anything like that.
It's just a toolkit to do interesting things. The other big use case is upgrades from major version to major version, which we want to facilitate with this. That's not really going to come into play until we release Postgres 11, so that you can use this to go from 10 to 11. But we hope, we aspire, that in the future, if you have multiple hosts available, or facilities to do that, you just point one at the other, it replicates over, you switch over, and you're good to go. So that's another alternative: pg_upgrade is great for in-place upgrades, but if you don't want to do that — you don't have the disk space or anything like that — you just replicate to another host. And you can do the whole synchronous replication type of thing, so you can do it completely lossless, with instant failover.

Question there. — The question is whether it's backward compatible with pglogical. pglogical, which he's asking about, is a similar product, similar to this but as an extension, and it for the most part actually completely inspired this feature. If you want to replicate from something that is pre-Postgres 10, my recommendation would be to use pglogical — that goes all the way back to 9.4 — and you can keep using pglogical with 10 too, and I presume we're going to keep updating it for quite a while. Well, pglogical is really my talk tomorrow, if you want to go into that, but there are some pros and cons to both of them; eventually they should kind of merge.

So those are the big use cases, or anything else you can think of where physical replication is somehow not working for you. Yes, please. — How do you achieve failover with this? The question was: you want to fail over from A to B,
which gets promoted to master, and then you want to fail back later. It doesn't really work that way, in the sense that there is no real master and standby in this case, because any node can do anything. There are also no restrictions on writing: if you have a subscription writing into a table, you can still write into that table yourself, do anything you want. It's not like Slony, which puts triggers on the tables and prevents writes. Maybe you want to do that yourself, but it doesn't do that for you. So if you fail over and then want to fail back, you have to set up the subscriptions and publications yourself in a different way. And this — this goes to a slide a couple down — this is not going to replace your physical replication for your basic standby setup, because physical replication is just nice and simple for that use case of "I just have a complete copy here, and it's always there." Whereas here you can set up a lot of different things in a lot of different ways, and there's no real designation of who is the leader and who is not. You could implement that, but you'd have to do a lot of it yourself.

Questions? Yes. — Can logical replication be applied between schemas on the local host? Yeah, I don't see why not. — Any thoughts on allowing logical replication to transform expressions? Well, I think that would be a plausible feature down the road; there's no reason why it couldn't be done. This is all built on the logical decoding feature that was added in Postgres 9.4. This is just one consumer of that, and the way that API works is you just have a bunch of hooks, and you can consume data, and then you can do anything you want with that data.
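That logical decoding API can be poked at directly from SQL with the test_decoding sample plugin that ships with Postgres; logical replication is just one consumer of the same machinery. A sketch:

```sql
-- Create a decoding slot using the test_decoding example output plugin
SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');

INSERT INTO test1 VALUES (42, 'hello');

-- Read the decoded changes as text, then clean up the slot
SELECT data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);
SELECT pg_drop_replication_slot('demo_slot');
```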
So this implementation is just a very simple consumer of that: it takes the data that is published and writes it back into the table. But you could conceivably hook in anything you want there to rewrite that data any way you want, so that could be a feature down the road. Questions in the back. — Is the plugin for logical decoding separate? It is compiled as a plugin; I don't know if you can do much with it on its own.

— Yes: if writing to a subscribed table fails, because of a uniqueness constraint and you're doing some local writing also? Yes, good question. If writing to a table on the subscribing end fails for any reason, such as those kinds of user-space reasons, it will basically keep retrying, and you will see errors in the monitoring views or in the logs telling you why it failed. You have two options at that point. One is that you somehow fix the situation: resolve the unique constraint violation, put the missing table in place, add the missing columns, anything of that sort. The alternative is that you can skip the record it was trying to apply; there's an explanation in the documentation of how to do that. There's a very specific internal function you call to sort of pop a record off the queue, to discard it. So you can do those kinds of things. Yeah, it will stop replication for that subscription, because that stuff all comes out of the replication slot. What you're talking about is pglogical — pglogical has more facilities for conflict resolution and filtering, which might come to this at some point.

A lot of questions now; okay, let's make sure we get all this in. We can take a couple more questions; there's not much left here. Yes, there. — Do triggers work on a subscription? Yes, yes, no problem. — No, you're right:
you can only replicate what's on the master now, for the initial copy. After that it works through the replication slot: as soon as the replication slot is created, everything from that point is preserved until you consume it.

— Maybe over there? — You cannot replicate catalog tables. Nice try. — Last question maybe for now: do we maintain the order of the tables in the subscription? Well, roughly speaking, the data comes to the subscriber in the same order as it was published. That's not exactly correct, because there's some reordering to make it look correct, but the insertions should work under all those constraints — that works. Yes, there's basically a lot of internal work to make that happen.

Okay, maybe the last question for now: does the subscriber apply single-threaded? There is, as I showed, one worker for each subscription, so it is single-threaded per subscription.

All right, let's move on. So here's stuff that's currently not implemented. I mentioned the schema stuff. One kind of big thing is that it doesn't replicate sequences right now, and that's just because the logical decoding API doesn't support that, and no workarounds have been implemented yet.
That's just, you know, something someone has to sit down and code, and it's definitely something we want to do soon. This is not a problem if you just replicate: if you have, say, a serial column, and it inserts numbers into a table, the table replicates just fine, including all those numbers. So if you just want to make a copy for archiving or analysis, that works fine — you have the numbers there. But if you want to fail over to a copy, all the sequences are still going to be at their start value, so you'd have to use pg_dump or something like that to move those over. So it's not great yet, but, you know, it's very similar if you're familiar with Londiste and those kinds of things; they also have to play tricks with sequences like that.

We also don't replicate TRUNCATE, for a similar reason: it's not exposed in the logical decoding API. Again, if you used Slony in the olden days, that was also a problem in the beginning: if you just truncate locally, it's just not going to replicate.

And we currently only support replicating from a base table to a base table. What does that mean? You can't replicate from a table into a view, or from a view into a table; you can't replicate from a table into the root of a partitioned table — you can only replicate from one partition to the matching partition. That could also be fixed; someone just has to go in and implement all those different cases.

So here are some other sort of ugly things, perhaps, or things to be concerned about a little bit.
Here are some other somewhat ugly things, or things to be a little concerned about. Again, as I said, this is not going to replace physical replication for a one-to-one copy, because there are just a lot more moving parts you have to worry about. You can probably fiddle it all together, but it's not going to be as easy.

There's a thing called the replica identity, which you have to think about. To summarize, it basically means you have to have a primary key on the table, up to some other conditions, but generally you need something like that. Otherwise you cannot replicate any updates or deletes, because the subscriber won't know which rows to key those on. You can insert and insert and insert, because all that does is append rows, but if you want to replicate updates, you have to have either a primary key or something you declare in its place.

Another thing that is general to logical decoding: if you have a long-running transaction, it's not going to start replicating until the transaction commits. That's unlike physical replication, which just moves the bits and bytes around and lets them be interpreted on the receiving end, whether they were committed or not. Here, because we have to wait for the transaction to finish before we know in which order to send things, long-running transactions will basically create the appearance of lag.

And as I already mentioned, we're not quite clear yet on how best to package how pg_dump should interact with subscriptions by default. Should it just dump them, so that when you restore them replication starts up right away? Should it dump them in a disabled state? There are also issues with, for example, the connection string that we put in the command: it might contain a password. Maybe that's not a good thing to do, but it's possible and in some cases necessary, and that's secret information, so not everyone should be able to read it out as part of pg_dump. So we need to deal with that somehow.
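To make updates and deletes replicable on a table that has no primary key, you can declare a replica identity explicitly. A sketch under the assumption of a hypothetical table `test2` with a unique, non-null column `ref` (all names here are invented for illustration):

```sql
-- Without a primary key, UPDATE/DELETE on a published table will error.
-- Option 1: nominate a unique, NOT NULL index as the identity.
CREATE UNIQUE INDEX test2_uniq ON test2 (ref);
ALTER TABLE test2 ALTER COLUMN ref SET NOT NULL;
ALTER TABLE test2 REPLICA IDENTITY USING INDEX test2_uniq;

-- Option 2 (last resort): log the entire old row for every change.
ALTER TABLE test2 REPLICA IDENTITY FULL;
```

`REPLICA IDENTITY FULL` works without any key but writes the whole old row into the WAL for every update or delete, so a proper key is usually the better choice.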
There will probably have to be a bunch of different options so you can pick among all these different behaviors.

So, finishing up: this is for Postgres 10, and feature freeze is presumably tomorrow, so anything listed here is not going to get done for Postgres 10, but certainly a lot of it is reasonable to attack, at least initially, for Postgres 11. The multimaster stuff is probably further out, something we want to do at some point. Better integration with physical replication is the point I already made about failover slots: right now, if you have a logical replication setup hooked up and you do a physical replication failover, the whole thing is broken and you have to start again. There are proposals for that out there; we kind of know the issues, but there's just a lot of work and coding and thinking to do.

Some credits. I'm just here as the spokesperson; I didn't actually do most of the work. The main author for Postgres 10 is Petr Jelínek, who is currently at home in Europe. A lot of the reviewing, and a lot of work on logical decoding in earlier releases, came from Andres Freund, who is here if you want to find him. A lot of reviewing and testing in this release came from Erik Rijkers; we don't know who he is, but he's on the mailing list and did a lot of good work, and he sends email through a Tor relay, so I don't know what that is about. And Craig Ringer has also contributed a lot to the general area of logical replication in Postgres. So those are some of the people; if you see their names somewhere, you can thank them. Many others have contributed as well.
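One way to sidestep the dump-and-restore question today is to create subscriptions in a disabled state and enable them explicitly once you're ready. A sketch with placeholder names and a placeholder connection string (note the connection string is exactly the kind of sensitive value discussed above):

```sql
-- Create the subscription but don't start replicating yet.
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=primary.example.com port=5432 dbname=mydb'
    PUBLICATION my_pub
    WITH (enabled = false);

-- Later, when the environment is ready:
ALTER SUBSCRIPTION my_sub ENABLE;
```

Creating subscriptions disabled also keeps a restored dump from immediately connecting out to whatever host happens to be in the stored connection string.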
So at this point it's committed. A lot of the initial work came from people affiliated with 2ndQuadrant, but right now it's community property, and many other people have already pitched in, so it seems to have some good momentum.

An advertisement for tomorrow at 9 a.m., I believe actually in this room: think of it as a sort of follow-up to this talk, if you want. 2ndQuadrant, as a conference sponsor, gets one talk slot that we can do whatever we want with, so we're just going to keep talking about logical replication. I'll contrast the different solutions that people have mentioned here, pglogical and BDR, which is bi-directional replication, and this new thing, and cover how they fit together, how they came about, and what the different trade-offs are. If you have questions about that, I invite you back here tomorrow at 9. Otherwise, we have maybe two more minutes for questions specifically about this.

Okay. Yeah? No, I don't think so. Let's do just these two more questions, then. Yes, you can do COPY on a published table, and you can COPY into a published table, and then it will replicate. Yeah, COPY is fine. And the last question, maybe? Yes, it will hang on the master, yeah.

So you can also find me outside at the 2ndQuadrant booth throughout the day, or just catch me tomorrow here. Thank you very much.
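The COPY answer above can be sketched briefly: a bulk load into a published table is decoded as ordinary row inserts, so it replicates like any other inserts. The column names and file path here are placeholders, not from the demo:

```sql
-- On the publisher: COPY into a published table is decoded as
-- individual inserts, so the loaded rows flow to subscribers.
COPY test1 (id, data) FROM '/tmp/test1.csv' WITH (FORMAT csv);
```

Keep in mind the long-transaction caveat from earlier: the subscriber won't see any of the loaded rows until the COPY's transaction commits.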