Hi. I'm terribly sorry to get between you and some beer, so I'll try to keep this short. Quite often when I give these talks, I talk about the future and all the cool features coming in later versions of Postgres. I've decided to do something else today, because we've got a couple of product announcements that we're happy to make at this conference. There will be announcement emails going out as well, but I wanted to explain briefly what those announcements are and how they fit in with the other things we do. Everything you see here is fully open source and, where appropriate, is in the process of being submitted to Postgres core.

Let's go straight on to the first one: pgloader. I'm sure some of you have seen Dimitri's presentations about pgloader, so you should be aware that we're now at version 3.2. Dimitri has a lot of experience at making the load utility very flexible, and this release is the point where he has finally cracked parallel data loading, which is why the slide says it's up to 10 times faster. He originally wrote it in Python, and problems with the global interpreter lock meant we couldn't get the parallelism to work properly. As of now, pgloader is an extremely fast load utility, and it has so many different formats and options that I'm not even going to try to list them for you; it's an extremely flexible tool. What Dimitri has been working on in the background is that pgloader also does completely automated conversion of MySQL and SQLite databases. You literally give it a login to the MySQL system and say "convert it", and it will connect to Postgres, connect to MySQL, copy across all of the DDL, including intelligent transforms, and then load all of the data across as well.
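As a sketch of what that single-step conversion looks like (the hosts, users and database names here are hypothetical; see the pgloader documentation for the full syntax):

```shell
# pgloader reads the MySQL schema, creates the equivalent Postgres DDL
# (applying type transforms), then copies all of the data across.
pgloader mysql://appuser:secret@mysql-host/inventory \
         postgresql://postgres@pg-host/inventory

# SQLite conversion works the same way, reading from a file:
pgloader ./app.db postgresql://postgres@pg-host/app
```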
So we're talking about a single command line to convert a whole database, which means this is not just a data load tool; it's a migration tool, and that's particularly important. When you look at the number of databases in modern enterprises, if the average database were going to take a month of manual labour to convert, you can see that we would just never get adoption rates up. With this approach, rather than taking a month, the migration takes as long as it takes to type the commands. Given that 90% of databases are around a gigabyte in size, this is firmly aimed at converting the vast majority of databases, even if it doesn't convert the big, complex ones.

The next thing to talk about is Barman, the backup and recovery manager for Postgres. Version 1.4 was released last month and supports 9.4. It fully supports multiple servers with multiple backups and configurable retention policies, and of course it does full point-in-time recovery when and if you need it, so you can easily test what's going on. This version adds file-level incremental backup, which is very important for larger installs. We've also got features to take the backup from standby nodes, as well as full compression. Gabriele is giving a talk tomorrow at noon with the full details on Barman, so this is just me announcing it alongside the other announcements, not stealing his thunder. So: tomorrow at noon.

And the next one is repmgr. repmgr version 3 is now tagged and ready for announcement. It's version 3 because we completely redesigned the earlier versions to take into account the later capabilities of Postgres 9.4. There is a version of repmgr, 2.0.2, which supports 9.4, but that version doesn't support all of the features; repmgr 3.0 supports 9.3 onwards.
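Returning to Barman for a moment, the day-to-day workflow is just a handful of commands. This is a sketch: the server name "main", the backup ID and the paths are hypothetical, and the exact flags are in the Barman documentation.

```shell
# Take a base backup of the server defined as "main" in barman.conf
barman backup main

# See what's available, then restore a chosen backup to a target
# directory, recovering to a specific point in time
barman list-backup main
barman recover --target-time "2015-01-30 23:00:00" \
       main 20150129T010203 /var/lib/postgresql/9.4/main

# File-level incremental backup (new in 1.4) is enabled per server
# in barman.conf with:  reuse_backup = link
```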
For those releases we have full support for things like cascading replication and replication slots, and you have a choice of whether to use pg_basebackup or rsync for cloning, depending on how your systems are configured. This version also provides fully automated failover, including a witness server, so it allows you to manage quite complex installs; it's specifically designed to manage more than two nodes in that configuration. That's released today; it's just not announced on the public mailing lists yet.

The next thing to talk about is, of course, the BDR project. Some of you have already heard details about it, and there's a full talk by Andres tomorrow afternoon at 3 o'clock going into the details. In brief, it's the next-generation replication project. We've been going on BDR for three years at this point, so a lot of the features in 9.3 and 9.4 have actually come from the BDR project already. It's fully open source, and we've designed it as a submission to core PostgreSQL: every single aspect of the project is being submitted to core. That doesn't mean it's automatically accepted by core, so there's been a lot of redesign work and internal reworking to get it into an acceptable form. The emphasis we've placed on this project throughout its development is that we're not only aiming to make it a submission to core PostgreSQL, so that in the future these features will be available to everyone; the point is that it's working code now. So for the people who need these features, we've got versions you can make use of right now. BDR comes in two separate variants.
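To make the repmgr workflow just described concrete, here is a sketch; the hostnames, paths and flags are illustrative rather than exact, so check the repmgr 3.0 documentation before use.

```shell
# Clone a new standby from the master (using pg_basebackup or rsync,
# depending on configuration), then register it with the cluster
repmgr -h master.example.com -U repmgr -d repmgr \
       -D /var/lib/postgresql/9.4/data -f /etc/repmgr.conf standby clone
repmgr -f /etc/repmgr.conf standby register

# Start the daemon that monitors the cluster and, together with the
# witness server, performs automatic failover
repmgrd -f /etc/repmgr.conf --daemonize
```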
The first variant is something we're calling unidirectional replication, or UDR. This variant is PostgreSQL 9.4 plus an extension: you take stock PostgreSQL, plug the extension into it, and it uses the logical decoding feature we have in 9.4. When you put those two things together, you get that holy grail we've been talking about for some time: logical replication. Now, on 9.4 that is data only; we are not replicating DDL or sequences yet, but work on both of those is in full swing to make it into 9.5. I'm not going to prejudge that situation, but if all goes to plan we'll get all three of those things working in 9.5. The plug-in itself we're hoping will be accepted by core in 9.6, but of course that is still up for discussion as well.

So what does UDR provide? The first feature is zero-downtime upgrade. That's not very useful yet, because it allows you to upgrade from 9.4 to a later release, and obviously there isn't a later release; but it works, and we like to plan ahead. The second feature, which a lot of people have been looking forward to, is something we call selective replication: the idea that you can specify particular tables that you would like to replicate, rather than the whole database. We've taken the concept from Slony known as replication sets and implemented exactly that concept in UDR, so you can specify particular tables to be moved across to other nodes. There's a great future in that way of doing things: we've also already implemented the ability to select particular types of operation that you wish to replicate, for example replicating inserts but not deletes, which allows you to feed a data warehouse with changes from your operational system. UDR is an extension that works now, as part of the 0.9 release of the BDR project.
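As a sketch of how selective replication is driven: the function name below follows the BDR 0.9 documentation as I recall it, and the table, set and database names are hypothetical.

```shell
# Assign the orders table to a replication set named "dwh"; only tables
# in sets that a downstream node subscribes to are replicated to it.
psql -d upstream_db <<'SQL'
SELECT bdr.table_set_replication_sets('public.orders', ARRAY['dwh']);
SQL
```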
The second variant we support in BDR is what we refer to as full BDR, or just BDR, where we have a modified version of the Postgres server working with the extension. Obviously "modified version of the Postgres server" sounds like a fork, but that is not going to happen. What we have is code that's fully open source, all submitted to core, and a lot of it is actually already in 9.5, although there are still many things to go. The full plan is to allow very large clusters: up to 48 nodes is what we've talked about in terms of support, but we have already tested higher numbers of nodes working together. The design we've gone for is a fully interconnected mesh of nodes, and we are replicating everything: we're replicating the data changes, we're replicating the DDL, and, while we're not directly replicating sequences, sequences work in a global manner so that the IDs allocated from a sequence never cause duplicate entries on multiple nodes. So this is full multi-master, and we are hoping it will eventually get into Postgres core, but please be aware that could be 9.5, 9.6, or even 9.7, if there is going to be one. In terms of making it into core Postgres, we could be looking at as many as four years from now before it's a done deal. Because we understand that baking good software takes time, the idea is that we're releasing it for your use now, rather than sitting and talking about it for another four years. So overall, by the time all of it gets in, this could be a project of up to seven years in duration, bearing in mind that we've already done three years on it.
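For a flavour of how a BDR mesh is assembled: the function names here are from the BDR 0.9 release, but the node names, database and DSNs are hypothetical, and the server must already be configured for BDR (e.g. `shared_preload_libraries = 'bdr'` and `wal_level = 'logical'`).

```shell
# On the first node: create the BDR group
psql -d appdb <<'SQL'
CREATE EXTENSION IF NOT EXISTS bdr;
SELECT bdr.bdr_group_create(
    local_node_name   := 'node1',
    node_external_dsn := 'host=node1 dbname=appdb');
SQL

# On each additional node: join the group; the fully interconnected
# mesh between the nodes is then built automatically
psql -d appdb <<'SQL'
CREATE EXTENSION IF NOT EXISTS bdr;
SELECT bdr.bdr_group_join(
    local_node_name   := 'node2',
    node_external_dsn := 'host=node2 dbname=appdb',
    join_using_dsn    := 'host=node1 dbname=appdb');
SQL
```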
Let me just mention one last thing: as of now we've got four people working pretty much full time on the code and writing the manuals, and we've got people working full time on QA. Because there are a number of variants, we've got quite a lot of testing and quite a lot of feature development to do, but all of this is also available on multiple platforms in binary form: we've got Debian installers and RPMs, and it works on FreeBSD. A lot of attention is being paid to the capability to run on multiple platforms, so it's not just that the source is available; this is available in a practical, usable form.

While I've been highlighting these additional products, or additional tools that you can use with Postgres, I also wanted to say that we spend a lot of time and attention on features for core Postgres as well. These aren't things we've released as separate products; they're things we're submitting directly into core, and some of them go in quite quietly. A couple of my favourites are in there, such as locked-row identification. Previously, when you got lock waits in your application, the log used to just say "there is a lock", and you'd think, "well, that's nice; now what do I do?" We spent some time looking into the detail of that and worked out how to pick out which particular row had been locked, so now, if you get that message in the log, you can actually do something in your application to improve things. So there are some big features there, like the lock scalability work, but there are also some very small features where we're paying attention to the level of detail that you need as application developers in order to use Postgres. All of this work is fully open source, and I'm explaining it to you now both to highlight the features of the new releases and to explain that, if you do work with 2ndQuadrant, you can see exactly what we're doing with the money: we're putting it back into
releasing open source code for everybody to use. So: beer time.
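A footnote on the locked-row identification mentioned above, as a sketch: with lock-wait logging turned on, the server log points at the specific tuple rather than just reporting a lock. The log wording below is approximate, and the database and table names are hypothetical.

```shell
# Enable lock-wait logging (PostgreSQL 9.4 syntax)
psql -d appdb -c "ALTER SYSTEM SET log_lock_waits = on"
psql -d appdb -c "SELECT pg_reload_conf()"

# A blocked UPDATE then logs something along these lines, identifying
# the exact row it is waiting on:
#   LOG:      process 4321 still waiting for ShareLock on transaction 1234
#   CONTEXT:  while updating tuple (0,2) in relation "accounts"
```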