Okay, I'm going to plunge right in, because we have so much stuff to talk about. Specifically: Django 1.7 introduced native migrations, and Django 1.8 introduced django.contrib.postgres. There's so much cool stuff in here and so little time, so let's plunge forward. The first thing to say is that most of what I'm going to talk about in django.contrib.postgres was done by Marc Tamlyn, funded, believe it or not, through a crowdfunding campaign, I think it was Kickstarter, and he deserves endless praise for these wonderful new features.

First I'm going to talk about migrations, and everyone's going to say: wait, migrations aren't Postgres-specific, why are you talking about migrations? The reason is that some of this machinery is an enabler for everything else, and the Django 1.7 migrations are just amazing. If you haven't used them, go to your laptop right now, I won't be offended, and move to using migrations. And thanks, Andrew, for those.

A quick overview for anyone who hasn't used the 1.7 migrations, or who has been treating them as a black box: migrations are built around the notion of operations. Each operation in a migration moves the database backward or forward along its schema migration timeline; it adds a field, adds a table that corresponds to a model, things like that. There are lots of individual operations, but two are especially interesting from our point of view: RunSQL and CreateExtension. These operations ship with the core Django framework now, and you can use them in your own migrations.

RunSQL does what you can probably guess: it applies raw SQL directly to the database. And now everyone's saying, oh god, it's one of those database guys talking about raw SQL again. But work with me here; it's very, very handy. Specifically, it's useful for things you can't yet do directly through the models, like creating indexes for some of the new types we're going to talk about.

There's also a CreateExtension operation, which runs a CREATE EXTENSION command; you can see a pattern forming in this naming convention. The extension mechanism in Postgres, introduced around 9.1, is a little bit like pip for Postgres: it lets you add packaged extensions to your database. Previously, you had to drop a .so file here and run some SQL there and do all this stuff, and it was just horrible. In particular, we're going to talk about the extension hstore, which has to be added to your database before use; it doesn't come with the core Postgres installation. A lot of this will make more sense later; at the moment, I'm just teasing you. One thing to note about hstore is that having it adds a query every time you connect to the database, but we'll talk about that too. That's all you need to know about the new extension facility.
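To make that concrete, here's a minimal sketch of a hand-written migration using RunSQL, with hypothetical app and index names (we'll see CreateExtension in action later):

    from django.db import migrations


    class Migration(migrations.Migration):

        dependencies = [
            ('myapp', '0001_initial'),  # hypothetical previous migration
        ]

        operations = [
            # Raw SQL going forward, plus the matching reverse statement
            # so the migration can also be unapplied.
            migrations.RunSQL(
                'CREATE INDEX myapp_item_lower_name ON myapp_item (lower(name));',
                'DROP INDEX myapp_item_lower_name;',
            ),
        ]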
Let's talk about 1.8. Django 1.8 adds some new field types: array fields, range fields, and an hstore field. 1.9 is going to add some more cool stuff, but this is what we get in 1.8.

So, ArrayField. In Postgres, since pretty much the dawn of time, arrays have been first-class types: you can have a single column that is an array. Now, old-school database people's heads explode at this point, because this is denormalization and therefore horrible. Forget those people; they're old and in the way. This is a really, really cool feature. ArrayField is now a field you can import from django.contrib.postgres, and it lets you use Postgres arrays directly. Before, there were tons of field implementations you could find on Django Snippets and the like that would let you do this, but now it works fresh out of the box.

These map to Python lists. (I said "array" because I'm old and in the way; what I mean, of course, is a list.) One thing to remember is that Postgres arrays are a homogeneous type, unlike Python lists. You pick a type, ints, strings, something, and that's what you get in that Postgres array. Python will, of course, let you put anything you want in a list, but don't do that if this is what you're trying to model. The other thing is that Postgres multi-dimensional arrays are rectangular. Remember Fortran? Just like that; isn't it great that we're honoring our ancestors this way? They're traditional multi-dimensional arrays: you say, I'm going to create a three-dimensional array with dimensions five by seven by ten, and the whole thing is populated for you. Individual entries can be NULL if you need to represent a hole in the array, but it's not a list-of-lists structure where each inner list can be any length it wants.

So once you've got one, as they say of the dog chasing the car: now that you've caught it, what are you going to do with it? You have array field queries. The basic one is contains, which matches if the array field on the left contains all of the entries of the list on the right. So ['a', 'b', 'c'] contains ['a', 'b'], but it does not contain ['a', 'd']. Order is not important in a contains query; the elements can appear in any order. You also have contained_by, which is effectively the reverse: it matches if the list on the right contains all the entries of the field on the left; order isn't important here either. Okay, that's easy. Then you have overlap, which is the "any" version: it matches if the two sides share at least one element. Those all return a boolean. You also have the __len predicate, which returns the length of the field on the left as an integer. This is approximate syntax, you wouldn't write exactly this in Python, but you get the idea: if the field is storing a two-element list and you apply len to it, you get 2. Something to remember: unless you've created an expression index, which we'll talk about in a bit, a len query is going to do a full table scan. It will read every single row in that table, calculate the length, and filter. So you probably don't want to do that on a big table.

You can also apply transformations, such as indexing into the array. For example, you can filter on everything whose first element, __0, is the string 'a'. If there's no entry zero, say the array is of length zero, it doesn't match; it simply returns false, with no error. If you put five million there and there are no five-million-element arrays, you'll just get back nothing. One catch: you can't specify that index programmatically, because you've built the zero into the name of the keyword argument; you can get around that with string building and **kwargs, of course. You can also slice the array, which is pretty cool: you can ask for anything whose entries zero through one equal a particular list, or, probably more useful, whose entries zero through two contain 'a'.
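Here's what this looks like in practice; a sketch with a hypothetical Post model and tags field, assuming Django 1.8:

    from django.contrib.postgres.fields import ArrayField
    from django.db import models


    class Post(models.Model):
        # A homogeneous array: every element is a string.
        tags = ArrayField(models.CharField(max_length=32), default=list)


    # Contains both 'a' and 'b', in any order:
    Post.objects.filter(tags__contains=['a', 'b'])
    # The field's entries are a subset of the given list:
    Post.objects.filter(tags__contained_by=['a', 'b', 'c'])
    # Shares at least one element with the given list:
    Post.objects.filter(tags__overlap=['a', 'd'])
    # Exactly two elements; a full table scan without an expression index:
    Post.objects.filter(tags__len=2)
    # First element is 'a'; rows with shorter arrays just don't match:
    Post.objects.filter(tags__0='a')
    # The first two elements contain 'a':
    Post.objects.filter(tags__0_2__contains=['a'])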
You can also index array fields. You can just say db_index=True and you're done, right? Wrong, sorry. That creates a b-tree index on the array, which is pretty useless for an array. This is one of the downsides right now: if you want indexing on these fields, and you almost certainly do, you have to do some special magic. So let's talk about that. But first, let's talk about how Postgres does indexing.

Postgres supports several different kinds of index, which most people who are just using the Django ORM never see, because you only ever get one kind out of it: a b-tree index. B-tree indexes are great. They're nearly the optimal solution to a particular problem: they're very fast, they're compact, and they rely on total ordering. You can walk a b-tree index up one side and down the other, and use it to accelerate queries like greater-than and less-than-or-equal. B-tree indexes are great, but they're not perfect. Specifically, a b-tree index requires a totally ordered type, like integers: any two integers can be compared with greater-than, less-than, or equals. Strings are totally ordered. Floats are totally ordered. Points, arrays, hstores, and things like that are not totally ordered. What does it mean for one array to be greater than or less than another? Well, you can make something up; you can say we'll compare the elements in order, and all that. But most people don't use arrays that way; that's not a very interesting way of using an array. What you do do with arrays are things like inclusion, like the contains operation: find me all the rows whose array contains this particular element, no matter where it appears in the array.

So Postgres has you covered. It has two other types of indexes: GIN and GiST. For those studying along at home, GIN stands for Generalized Inverted Index, and GiST stands for Generalized Search Tree. So now you know. GIN indexes are generally used for types that contain keys and values; arrays, in that case, just contain values, no keys, and hstore and JSON are good examples of key-value types. GiST indexes are generally used for types that partition a mathematical space, so a point, a range, or a rectangle is the kind of thing that would be indexed using GiST. If you want, grab me in the hall and I can explain the details; for the moment, just roll with it. And the good part is that once they're created, they just work. You don't have to do anything magic to them once you have them: they're updated, maintained, managed, dumped, and restored; all the good stuff happens to them in Postgres.

For array fields, what we want is a GIN index. The nice part is that these accelerate contains, contained_by, and overlap; Postgres will use the index as appropriate to make those queries go faster. It doesn't help len or slice operations, though, so be aware of that. What this looks like at the SQL level is CREATE INDEX ... ON app_model USING gin (field); that USING gin is the part that says you want a GIN-type index. One thing to note: GIN indexes can be large, especially if there's a lot of data in the underlying table, and they're not free to update. So don't run around creating them just because; create them if you're actually going to be doing the operations they accelerate. In particular, if it's a small table, don't create a GIN index, unless it's a pedagogical exercise for yourself.
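In 1.8 there's no way to ask for a GIN index at the model level, so you drop it into a RunSQL migration; a sketch, reusing the hypothetical myapp_post table from above:

    from django.db import migrations


    class Migration(migrations.Migration):

        dependencies = [
            ('myapp', '0002_post_tags'),  # hypothetical previous migration
        ]

        operations = [
            migrations.RunSQL(
                'CREATE INDEX myapp_post_tags_gin ON myapp_post USING gin (tags);',
                'DROP INDEX myapp_post_tags_gin;',
            ),
        ]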
Now, let's say you want to index len. A common query is: get me all the arrays whose length is greater than seven. I'm having a hard time coming up with an example of why you'd want to do this, but the world is wide, so people do. What you can do is create an expression index. array_length is Postgres' array-length function; along with the field, you have to tell it which dimension you want the length of. Remember, multi-dimensional arrays? Dimension numbers are one-based. Why yes, Postgres was designed in the early nineties, why do you ask? You can also index slice operations the same way, so if you're going to be doing constant queries on whether this slice equals that value, you can do those. The double parentheses indicate that this is an expression index: it's actually going to calculate the expression for each row in the table and stuff the result into an index for you. Postgres is smart enough that when you run a query and it sees a matching expression, it uses the index instead of evaluating the expression again. So that's pretty cool. Also, Postgres array subscripts are one-based as well. What can I say? It seemed like the thing to do at the time.
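Again via RunSQL; a sketch of both kinds of expression index, still assuming the hypothetical myapp_post table:

    from django.db import migrations


    class Migration(migrations.Migration):

        dependencies = [
            ('myapp', '0003_tags_gin'),  # hypothetical previous migration
        ]

        operations = [
            # Indexes the length of the first (and only) dimension,
            # so tags__len queries can use the index.
            migrations.RunSQL(
                'CREATE INDEX myapp_post_tags_len'
                ' ON myapp_post ((array_length(tags, 1)));',
                'DROP INDEX myapp_post_tags_len;',
            ),
            # Indexes a slice, for constant equality queries on it; note
            # Postgres subscripts are one-based, so tags[1:2] is what
            # Django exposes as tags__0_2.
            migrations.RunSQL(
                'CREATE INDEX myapp_post_tags_head'
                ' ON myapp_post ((tags[1:2]));',
                'DROP INDEX myapp_post_tags_head;',
            ),
        ]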
Okay, so now we have all this machinery for array fields, but why would you ever want to use one? The first reason is that the underlying data really is an array. A very common situation is recording raw sensor input, where the sensor hands you: okay, at this timestamp I recorded 23 samples, here they are. And you want to just dump those into a field. You could normalize it into two tables, one for the base sensor reading and one for each individual sample, but wow, that's a lot of overhead. So just stuff it in an array field.

One of the best uses for an array is as a replacement for a many-to-many table. Take the classic social networking problem: you have things, you have people, and people like things. In your basic relational model, the way you build this is an intermediate many-to-many table between people and things, and every time someone likes a thing, you insert a row into it. Then you have one billion people and 12 billion things, and that's, you know, an 85-petabyte many-to-many table. Huh. Okay, probably not. What you can do instead is store, on each person, an array of everything they like, and on each thing, an array of everyone who likes it. The entries in an array can be key values, so these can be integers that refer to rows on the other side; it's much more compact and efficient to query, and then you index each field. I'm not sure I would build Facebook this way, but if you have a smaller system, this could work very well. You can also use an array for denormalizing the results of an expensive query: for example, you're caching the results of a query you'd otherwise compute from the many-to-many table, which you keep around for some other reason, and the array is an optimization. Denormalizations like this generally need to be approached with caution, I won't say never do them, because you do have to worry about maintaining them and making sure the data stays up to date, but they can be very useful.

Okay, and now we have hstore fields. Who here has used hstore in Postgres? Okay. hstore is a semi-built-in hash store data type. It's like a Python dict that can only take strings as keys and strings as values: it's a single level, and it has to be a string on each side. Before the JSON types arrived in Postgres, this was the only way of storing unstructured, associative data like this. It's not super powerful, and you'll notice the "semi" in semi-built-in; we'll talk about what that means.

So first of all, how do you get hstore to work? Because if you log into your average Postgres database and try to create an hstore column, you know, CREATE TABLE blah (x hstore), it throws an error saying it has no idea what the hstore type is. It's not actually built into Postgres; it has to be installed in a particular database before you can use it. It's not part of core. The good part is that django.contrib.postgres comes with an HStoreExtension operation that will install it for you: you create a custom, empty migration, add HStoreExtension to its operations, apply it, and you're done. So that's good.

There is one weirdness, which only really matters if you're really pushing the performance of your database and for some reason aren't using connection pooling, which you should be. Every time you connect to the database, psycopg2, which is what basically everybody uses from Python, has to query for the object identifier of the hstore type, because it can be different in every database. So this adds one query to each connection. It's usually not a big deal, especially since Django already has this thing about asking the database all sorts of questions on every connection, like what its time zone is. So, one more query among friends. But you should be aware that's what's going on; if you see this query fly by, now you know what it is.
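The migration is about as small as they come; a sketch, with a hypothetical app name:

    from django.contrib.postgres.operations import HStoreExtension
    from django.db import migrations


    class Migration(migrations.Migration):

        dependencies = [
            ('myapp', '0001_initial'),  # hypothetical previous migration
        ]

        operations = [
            # Runs CREATE EXTENSION hstore on this database.
            HStoreExtension(),
        ]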
hstore values are represented in Python as a dict. The keys and values must be strings; not integers, not lists, not anything else. They're translated to and from the database encoding, so if your database is in something besides UTF-8, they'll be converted to the right encoding on the Python side. Please say your database is in UTF-8, because it's horrible if it's not. Really, don't do that; it makes DBAs cry. But in the off chance it's not UTF-8, the translation is done for you.

HStoreField supports contains and contained_by; both the key and the value have to match there. You have has_key, which matches rows whose field contains a particular key, and has_keys, which takes a list of keys that must all be present. So that's pretty useful. Then there's keys, which matches against the list of keys in the field, so you can say: give me all the rows whose field's keys contain either 'a' or 'b'. And values does the same for the values of the hstore field, so you can query on values as well. hstore fields support GIN indexes, with the exact same CREATE INDEX syntax as arrays, and the index accelerates contains, has_key, and has_keys, but not contained_by. Sorry, that's just the way the GIN index for hstore is built.

So why would you use an hstore field? It's great for storing very rare attributes. Andrew actually touched on this in his great talk earlier about database anti-patterns. You write a CMS, or an inventory control system, or something like that, and you ship it to a bunch of customers, and you want each customer to be able to add attributes to items. For example, in an inventory control system they might want to add ISBN for items that are books, or color or size for things that come in colors and sizes; but not every item is going to have those, and you don't know, when you ship your product, which attributes they'll want. You could create individual fields in the database, but that's first of all kind of hard to do dynamically in Django, and second of all, that way madness lies from a migrations and implementation point of view. What you can do instead is create a single hstore field and use it as the place to store all of these ad-hoc attributes. My general rule of thumb is: if a field is going to be null 95% of the time, consider an hstore field instead. One thing to remember in this use, though, is that null fields take essentially zero space in PostgreSQL; they don't actually take real room on disk, so you're not paying storage for null columns that are rarely used. Still, if an attribute is populated so very rarely that it's not worth creating a whole field for it, this is another solution, as it is for the user-defined attributes we just talked about.

That being said, if you're doing greenfield development right now, you probably want to use JSON instead, especially once 1.9 comes out and we have first-class JSON support in Django. But there's no JSON field in 1.8, so use hstore if you need something right away, or if you're plugging into an existing database that already uses hstore.
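A sketch of an HStoreField model and those lookups, with a hypothetical Item model:

    from django.contrib.postgres.fields import HStoreField
    from django.db import models


    class Item(models.Model):
        name = models.CharField(max_length=100)
        # String-to-string only; one level deep.
        attrs = HStoreField(default=dict)


    # Key and value must both match:
    Item.objects.filter(attrs__contains={'color': 'red'})
    # The field's pairs are a subset of the given dict:
    Item.objects.filter(attrs__contained_by={'color': 'red', 'size': 'L'})
    # Has a particular key, whatever its value:
    Item.objects.filter(attrs__has_key='isbn')
    # Has all of these keys:
    Item.objects.filter(attrs__has_keys=['color', 'size'])
    # The field's keys overlap the given list:
    Item.objects.filter(attrs__keys__overlap=['color', 'isbn'])
    # Likewise for values:
    Item.objects.filter(attrs__values__contains=['red'])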
Now, in increasing order of coolness, we come to range fields. PostgreSQL has native range types. A range spans a range of some scalar type. For example, [1,8] includes every integer from 1 through 8; that's an inclusive range, so it includes both endpoints. You can also write bounds as exclusive: [1,8) goes from 1 up through 7 but doesn't include 8. So far so good. Notice that this is the PostgreSQL input syntax; obviously you can't write that in Python and have it be syntactically legal. That would be awfully cool, wouldn't it? There are languages that will let you do that. Anyway, the default is this half-open form, closed on the lower bound and open on the upper. If you omit a bound, it means all values above or below the other bound. Some types, particularly dates and timestamps, also have a special infinity value; I guess there's a special end-of-days value for dates, I don't know. So if you see one of those coming out of a query, that's what it is.

psycopg2, the database adapter that there's pretty much no reason not to use, and that you'd have to work hard not to use with Django, includes a Python Range base type that handles all the various boundary cases and the infinity special cases, and that's what the range fields in Django are built on top of. Out of the box, 1.8 supports an integer range and a big-integer range, so 32-bit and 64-bit integers, a float range, a datetime range, and a date range. I am pleased to say that the datetime range is built on timestamp with time zone in Postgres, because if you're using timestamp without time zone, you're probably making a very bad mistake. So everybody go and make sure you're using timestamptz, not timestamp.

For the operators: contains, contained_by, and overlap work about the way you'd expect when comparing two ranges. There's also fully_lt and fully_gt, which are true if both the lower and upper bounds of the field are less than, or greater than, the comparison value; that is, the whole range lies entirely to one side of it. And adjacent_to is true if two ranges exactly bump up against each other: no space between them, no values between them. This is a place where the parenthesis, the open bound, is useful, because with a continuous type there's no way of doing this with closed bounds. There's no way of writing two closed float ranges that exactly bump up against each other, because there's always another float you can shove in between. That's why open bounds are important. There's also not_lt, which matches if the field contains no points less than the comparison value, and not_gt, which works the other way around.

Range fields use GiST indexes. You could probably have guessed the syntax, but here it is: CREATE INDEX ... ON app_model USING gist (field). Note that for now you have to drop this into a RunSQL migration; there's no way of saying it at the Django model level. That's okay; that's what RunSQL is for. All the comparison operators we just described are accelerated by having a GiST index on the range.
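Putting that together; a sketch with a hypothetical Event model, using psycopg2's range types for the query values:

    from datetime import datetime, timezone

    from django.contrib.postgres.fields import DateTimeRangeField
    from django.db import models
    from psycopg2.extras import DateTimeTZRange


    class Event(models.Model):
        name = models.CharField(max_length=100)
        # Maps to tstzrange: a range of timestamps with time zone.
        period = DateTimeRangeField()


    june_week = DateTimeTZRange(datetime(2015, 6, 1, tzinfo=timezone.utc),
                                datetime(2015, 6, 8, tzinfo=timezone.utc))
    # Events overlapping that week:
    Event.objects.filter(period__overlap=june_week)
    # Events lying entirely before it:
    Event.objects.filter(period__fully_lt=june_week)
    # And in a RunSQL migration, the GiST index:
    #   CREATE INDEX myapp_event_period_gist
    #       ON myapp_event USING gist (period);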
So why would you use this? Well, here's a problem. Let's say you're running a hotel, and your rule is: don't allow two bookings for the same room, where the dates overlap, to be inserted into the database. How do you solve this? There's actually no way of solving it with traditional unique constraints, because nothing involved is necessarily unique. You could have room 102 for Monday through Friday, and room 102 for Tuesday through Saturday; those two rows, taken together, are not duplicates, but it's still not valid to have both in the database at the same time: you've double-booked the room. Scratch, scratch, scratch; so how do we solve this problem?

Postgres to the rescue: we have a facility in Postgres called an exclusion constraint, a relatively new feature, introduced in 9.0, which will keep this situation from arising. It's a generalization of the idea of a unique constraint. Let's stop for a moment and think about what unique means. You can say: unique means no two values in this column can be the same when you insert them. Okay, that's fine. But you can also say: don't allow any two rows whose values pass the equality operator. Now we've said the same thing, but in a slightly different way: don't allow two things where this particular operator, equality, matches, to be in the database at the same time. So what if the operator isn't equality? What if it's some other operator? We could say: don't allow two things in the database at the same time where the overlaps operator matches them. Hmm, that's interesting. And then we say: okay, let's let you AND conditions together. These two fields can't pass this operator, AND these two fields can't pass that operator, and so on. So we've generalized the idea of unique to let you use any operators, in combination with AND. I think an example would be very important here.

But there's a catch. You have to have a single index covering this whole shebang, and since range types require GiST indexing, that index has to be a GiST index. The problem is, in this example we're using room, and room is just an integer or a string, however the hotel wants to represent it. That's a scalar value, and scalar values don't have GiST indexing. Uh oh, we just blew it: I was going to show you this great example of how to solve the problem in Postgres, and now I can't. Well, of course you can, because there's a module called btree_gist, which lets you create GiST indexes on most simple scalar types. It's a Postgres extension, part of contrib; it has to be installed in the database, but it ships with Postgres, which means you can use the CreateExtension migration operation to get it into your database.

So let's talk about how we'd actually use this. You import the models, import the range field, and here's a Booking with a room and a date range; okay, room would probably be a foreign key to a room table, but you get it. That's easy. Then we look at what the table holds: there's the integer primary key Django created for us, the room, and the range, and we're all set. So far so good. Now we create the btree_gist extension and add the exclusion constraint. Notice what we're doing here: we're saying, add a constraint where the room compares with equality AND the dates compare with &&, which is the Postgres spelling of the overlaps operator; when you run an overlap query in Django, this double ampersand is what you get.

Okay, now let's try adding some bookings. Add a booking; save; that worked. Add another and save: fine, because it's the same room but an entirely different range. Another: so far so good, because it's the same time range but a different room. Okay, room 102 again, do do do, save... notice, this range overlaps that one; bang, it didn't let me do it. Pretty cool. And notice that this means the constraint is being enforced at the database level. Sure, you could write Python code that runs the query and raises an exception if it finds an overlapping range, but the nice part about doing it in the database is that if you're doing bulk imports, or you have other applications hitting the same data, the database itself enforces it.
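Roughly reconstructing the slides, under my own hypothetical names (the model and the migration would live in separate files, of course):

    from django.contrib.postgres.fields import DateRangeField
    from django.db import models


    class Booking(models.Model):
        room = models.IntegerField()   # a ForeignKey in real life
        dates = DateRangeField()


    # In a migration:
    from django.contrib.postgres.operations import CreateExtension
    from django.db import migrations


    class Migration(migrations.Migration):

        dependencies = [
            ('hotel', '0001_initial'),  # hypothetical previous migration
        ]

        operations = [
            CreateExtension('btree_gist'),
            migrations.RunSQL(
                'ALTER TABLE hotel_booking ADD CONSTRAINT no_double_booking'
                ' EXCLUDE USING gist (room WITH =, dates WITH &&);',
                'ALTER TABLE hotel_booking DROP CONSTRAINT no_double_booking;',
            ),
        ]


    # Overlapping dates for the same room now raise IntegrityError:
    #   from datetime import date
    #   Booking.objects.create(room=102, dates=(date(2015, 6, 1), date(2015, 6, 5)))
    #   Booking.objects.create(room=103, dates=(date(2015, 6, 1), date(2015, 6, 5)))  # fine
    #   Booking.objects.create(room=102, dates=(date(2015, 6, 3), date(2015, 6, 8)))  # bang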
So: you should use range fields. Range fields represent ranges. I hope you get a lot of value out of this slide; you probably figured that one out. They're more natural, and give you better operations, than the usual high/low pair of columns, and you get more database integrity and more interesting operators.

Now, coming soon, we have JSON fields. They're not in 1.8, but I'm almost sure they've landed for 1.9. These are fields that support arbitrary JSON structures. There's still a little bit of a work in progress here, both on the Django side and on the Postgres side. Postgres has two JSON types, json and jsonb; sorry about that. json stores the raw text of the JSON blob, whitespace and all. It is literally the text; it is in fact a wrapper around Postgres' text type. jsonb is a compact, indexable representation; it's similar to BSON, but a lot better. So why use json instead of jsonb? It's faster to insert, since it doesn't have to process the data; it just shoves the raw text into the database. And json allows two highly dubious features that people use in JSON, which are duplicate object keys at the same level and stable object key order. There's nothing in the JSON spec that says these features will be available, but people still use them. Bad people. They should feel bad. json is also okay if you're just logging JSON, like a log table that's accumulating API calls or something like that, where you don't really need to process the stuff, you just want it somewhere. For pretty much every other application, you want jsonb: it can be indexed in useful ways, unlike json. The forthcoming JSON field in Django uses jsonb as its underlying representation, so just roll with it.

jsonb has GIN indexing; for json, the only kind of index available is a b-tree index that treats the value like a string, which is pretty much useless. With jsonb you get containment-style operators, and the query has to be against the top level of the object for the index to be useful: if you have a nested JSON structure, you can't use the index to query for a key that's way down in the structure; it has to be at the top level. That's being worked on. You can query nested objects, but only along paths rooted at the top level.

So why would you use JSON? In general: you're logging JSON data. You want audit tables that work across multiple schemas; this is a very common problem in relational databases, where you want a single audit table that handles all the other tables, rather than one audit table for every other table, which is the way madness lies. It's a nice way of pickling Python objects so that other tools can read them; the standard libraries that pickle Python objects into JSON put a bit too much cruft in for my taste, but they work. And it covers the things you used to use hstore for, like user-defined attributes and rare fields.
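As a teaser, and assuming the 1.9 API lands in the shape currently proposed (hypothetical; check the release notes when they're out), it should look something like this:

    from django.contrib.postgres.fields import JSONField
    from django.db import models


    class ApiLog(models.Model):
        # Stored as jsonb, so containment queries can use a GIN index.
        payload = JSONField()


    # Top-level containment, the kind of query jsonb GIN indexes accelerate:
    ApiLog.objects.filter(payload__contains={'status': 'error'})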
The new fields also come with admin widgets, which are really very cool. The hstore and JSON widgets are really only good for debugging, because you basically get the raw text spat back at you; but then, why are you using the admin for this? And there's the unaccent filter, which I'm out of time for, so just read about it in the documentation. Thank you! Questions?

Q: If Postgres arrays are one-indexed, but the example you gave in a queryset filter was zero-indexed, is Django converting that?

A: Yeah, it's always Python-style indexing within Django, because otherwise you would probably go insane. It does the zero-to-one conversion for you.

Q: Okay, cool, thank you. Quick question about expression indexes: are expression indexes b-tree indexes, or can they...

A: You can have b-tree, GIN, and GiST expression indexes, yes. It's a little bit of an advanced class, but if the output of the operation, like a slice, is a GIN- or GiST-indexable type, then by all means you can have a GIN or GiST expression index, and Postgres does the right thing. The various geo extensions do this a lot for cool stuff.

Q: I'm used to having to be a superuser to create extensions. Is there any hope of not needing that requirement when I'm putting them in migrations?

A: Nope. Sadly, because you could really seriously screw up your database with a bad CREATE EXTENSION, superuser access is going to be required. Which is really annoying if you're, for example, on RDS, where you don't have superuser. In that case you don't use a CreateExtension operation; you write many angry letters to Amazon asking for that extension in the next release. Sorry.

Q: One more question, of course: do you have any tips on migrating from MySQL to Postgres?

A: Not that I can deliver in a slot this small. I will say that the database layer is usually only about 25 percent of the nightmare; the rest of the nightmare is at the application level. If you've been really ruthless about your database agnosticism in Django, you can probably get away without too much hassle, and there are tools out there that will do the dump-and-load conversion for you. But a surprising number of MySQL applications rely on things like the return-primary-key-on-query-for-NULL thing that MySQL does, and things like that. So most of the time, you spend most of the time looking for MySQL-isms in the code, actually, rather than in the database.

Q: Hi, are there any performance improvements moving from a many-to-many table to an array?

A: Potentially huge, because when you think about how the many-to-many query works, it has to potentially suck up a huge ton of rows and go through these enormous indexes; indexes are fast, but they're not infinitely fast. There are upper bounds, I mean, a billion-row index is not going to be any more fun than a billion-row table, but potentially this could be a huge win, especially if you're using a GIN index. The way a GIN index works, very quickly, is that if, for example, you have a key of 4, the GIN index records everything that has a 4 in it and can go pull those rows, potentially much faster than running through a many-to-many table. And it's smaller.

Q: Thanks. You mentioned that the new fields will only work with jsonb. What do you recommend if you want to use the plain json type, for those sort of dirty...

A: Evaluate your life choices. Unfortunately, the bare json type does not have a lot of that good stuff in it. If you absolutely must, you can write expression indexes that extract fields out of the json value and index them as b-tree entries. At some point, though, you're probably better off using jsonb, unless you absolutely must have those two misfeatures.
Q: Sure. Well, if querying isn't necessarily a goal and you just want to get access to it, could you subclass the field and make it at least accessible, sort of do your own...

A: In theory, you could create it and then alter the column type back to json, and probably everything would work just fine at that point. I haven't tried it, but it would certainly be worth a go. The other thing, of course, is if you want that small-but-not-zero performance advantage on inserts that json has over jsonb. Absolutely, yeah.

Q: Your booking example is really nifty. I'm wondering a couple of other things: would it be possible to create multiple conditions using the btree_gist extension, and if so, how would you recommend handling...

A: You could create more than one predicate on the... not quite sure of the question, sorry.

Q: So let's say that in addition to the date ranges, you also didn't want certain types of rooms to be booked on...

A: Yeah, I mean, the conditions can be arbitrary things. The only reason I had to use the btree_gist hack in that case is that the whole set has to be one GiST index, and all the columns that appear in the exclusion constraint have to be indexable by the same index type; that's why I had to force the room, the string or integer, into it. You can write exclusion constraints as any combination of predicates ANDed together.

Q: Okay, so effectively there's no reason you couldn't have a situation where multiple kinds of integrity errors would...

A: You'll really only hit one. You can have 12 different exclusion constraints on the same table; there's absolutely no limit to how many you can have, except that Postgres of course has to check them all on every insert, which could be a little bit annoying. But you'll only get one error out; it won't tell you, oh, by the way, these 12 different exclusion constraints all failed. It'll hit one, stop the insert, and you'll get that one.

Q: And when it comes to giving feedback to a user when there are certain kinds of integrity errors, how would you recommend handling that?

A: That is a little more complicated. In that case I would actually do both: query the database in advance so you can get the right kind of error back to the user, but still have the constraint, in case you have other applications that don't run through the same UI and try to insert data, or you bulk-load from someplace else using the COPY command, or something like that. Because inevitably, what will happen is, take the room example: your hotel's running fine, and then hotels.com comes in and says, we'd love to send you reservations, but we have this API, and, you know, that kind of thing. I'm sure room reservation systems are a tiny bit more complex than that example, but you get the idea.

Q: So do you have any insight into what might be coming in 1.9 and beyond with regards to Postgres?

A: Well, at this point 1.9 is pretty well locked down, so you can just read the release notes. I'm not a contributor to that stuff directly, so that would be the best source.

Q: I wasn't certain of your involvement.

A: Okay, yeah.

Q: Thanks for the great talk. How do you feel about the funding model of the contributions to Django for the Postgres stuff, and are there any other things
that you'd like to see funded in a similar way?

A: That's probably a longer, bigger question than I can answer right here. It is interesting to me; I have very strong opinions that companies that use open source should give a little more back than they do. There are certainly features I would love to see, like being able to specify different foreign-key cascading behaviors at the model level, for example, if you want something other than delete, being able to specify that and have it implemented by the database rather than by Django. But yeah, getting money for these kinds of extensions is an issue. Kickstarter is great, but it does tend to be a popularity contest, and it locks out a lot of developers who are brilliant programmers but don't have the high profile, and that's a shame, I think. So: tell your employer to write big checks is my answer. Great, and if you need anything done with your Postgres database, give us a call. Thank you!