Hello everybody, welcome back to the PostgreSQL Experts track; apparently I'm right after Josh the CEO. So we're going to talk about PostgreSQL and JSON: what's the state of the art in 2015? That's me; I'm a consultant with PostgreSQL Experts. Thebuild.com is my personal blog, and to answer the most often asked question: the slides will be there once I have a chance to upload them. That's the business blog, my Twitter account, and my email address.

So, for the one or two people in the room who don't actually know what JSON is: it's an abbreviation for JavaScript Object Notation. It's a text format for serializing nested data structures, and it's based on JavaScript's declaration syntax, hence the name. In 200 years no one will remember what JavaScript is, but we'll probably still be using JSON. It was originally designed so that you could just drop it into JavaScript's eval function and have it pop out as a parsed structure. Please don't do this. We went through all this trouble to get rid of SQL injection attacks, and now we have JSON injection attacks. Oh well.

The JSON primitive types: strings, which are always Unicode. That's good, because character encoding was the worst idea computing ever had. De facto, JSON is always UTF-8 in flight, and UTF-8 is as close to a perfect solution to the character encoding problem as we'll ever have. Everybody who has experience with character encoding says "but UTF-8 is horrible," and you're right, but it's the best solution we'll probably ever come up with. Numbers, which are either integers or floats. Booleans, true or false. And the singleton value null. That's it; that's all you get in JSON.

You can then build things up into arrays, and into hashes or dictionaries or whatever you call them; the JSON spec calls them objects, which was, like, what were they thinking? They use brace notation (those are supposed to be curly braces; blame Keynote). The keys have to be strings, the values can be anything, and you build those up into more complex types. There's no type declaration mechanism in JSON, and there's no way of validating anything except writing code that says "this works" or "this doesn't." This is why "object" is kind of unfortunate terminology, because it has connotations that don't apply in this case; it's just a dictionary, if you're a Python person. Everything is delegated to the application in terms of correctness: you get this data structure, and it's up to you to decide whether it makes sense.

The good part is that it's super simple. To generate and parse it, you are spoiled for choice in any modern language for how many ways you can handle this thing. The spec is really easy to read: it's five pages with big pictures, which is refreshing. Has anyone here ever used ASN.1? Am I the only person that old? Okay, one other person is old enough to remember ASN.1. JSON seems really nice compared to that, or even compared to XML. And it's a de facto standard: pretty much everybody uses it, and only once in a while will somebody pop up with XML and I'll have to remember XML. (That format is still used, but applications that do that are wrong.) The bad part is that there are no higher-level standards. What is the standard for a date-time in JSON? There isn't one. Yeah, I don't know, you figure it out; it's a thing.
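To make that type inventory concrete, here is one document that uses everything JSON gives you, written as a Postgres literal since that's the syntax we'll use for the rest of the talk (the values are purely illustrative):

    -- Strings (always Unicode), numbers, booleans, null, arrays, objects.
    -- Note the timestamp is just a string: JSON has no date-time type.
    SELECT '{"name":       "Alice",
             "age":        42,
             "active":     true,
             "nickname":   null,
             "tags":       ["admin", "ops"],
             "created_at": "2015-03-28T10:00:00Z"}'::json;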
And now we have JSON injection attacks, so just don't use eval.

What this talk is about is that PostgreSQL has JSON. It's a core type; it's not a contrib module, it's not an extension, it's nothing extra. You boot up Postgres and it's there. It was introduced in 9.2, we enhanced it in 9.3, and in 9.4 it got really very cool. We liked it so much that there are actually two JSON types in Postgres: json and jsonb. I will try to always write the Postgres types in lowercase and the abstract concept of JSON in uppercase in this talk; I will probably get it wrong at some point.

The json type is a pure text representation. What it really is, is a wrapper around the text type in Postgres, with validation. jsonb is a binary type: it's parsed on the way in and reconstructed on the way out. You can cast one to the other.

The json type stores the actual JSON text, whitespace and all, the way it came in, directly into the database. What you get out is what you put in. It's checked for correctness (you can't put syntactically invalid JSON into a json column), but it's not otherwise processed. So why would you use it? Well: you're storing the JSON and never processing it, and all you need is a place to log it; you're logging API requests or something like that. That's reasonable. Or there are two JSON features you need to support. One is order-preserved object fields. This is not in the spec and it's not recommended, but there are applications that expect to be able to reconstruct the exact order in which the key/value pairs in an object came in. The other is more than one key of the same name at the same level. If your application does this, it's broken and you should fix it, but there are enough broken applications out there that we couldn't just get rid of this ability. Or, for some reason (compliance, regulatory, or you're just feeling paranoid), you need the exact JSON you put in: whitespace included, all the key/value pairs in the same order, whatever. Or you're on 9.3 or earlier, because jsonb was introduced in 9.4; on 9.2 and 9.3 you're stuck with json.

Otherwise, you pretty much want to use jsonb. It's parsed and encoded on the way in, into a relatively compact parsed format. It's not compressed in exactly the way we think of deflate compressing things, but it is a more compact format, because the whitespace is squeezed out and so on. There's lots more operator and function support in Postgres for jsonb than there is for json, and you can build indexes on it, which is really super important.

They're just types in Postgres. They're fully transactional. You can have multiple json and jsonb fields in a single table. They use the TOAST mechanism, so they can be up to one gigabyte in size, if you're feeling adventurous. They can be nullable. They're just fields; there's nothing magic about them. It's just like int, it's just like text.
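As a minimal sketch of what that looks like in practice (the table and column names here are hypothetical, not from the slides):

    CREATE TABLE api_log (
        id      bigserial PRIMARY KEY,
        payload jsonb      -- or json, if you need exact text / key order
    );

    INSERT INTO api_log (payload)
        VALUES ('{"method": "GET", "path": "/users", "status": 200}');

    -- The two types cast back and forth:
    SELECT payload::json FROM api_log;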
json and jsonb both support some really basic operators: arrow (->), double arrow (->>), pound arrow (#>), and pound double arrow (#>>). These get an array element or an object field, as JSON or as text. The first mistake everybody makes when writing this is putting in a single arrow, expecting to get a text value back; instead out comes a JSON object and you go, huh? What? What you want is the double arrow: that gets the array element or object field (or, with #>>, the value at a path) and casts it to text. Both json and jsonb support these.

But then jsonb adds a lot of stuff, like containment (@>), and containment going the other way (<@). One thing that can be a little surprising is that containment is not a recursive operation. It doesn't search down a deeply nested structure; it only works at the top level of whatever you point it at. It does not apply to individual keys, and it doesn't apply to nested elements. What I mean by that: for example, here you're saying "does this contain that," and the answer is yes; see, there's an "a": 1 on both sides. Does the array [1, 2, 3] contain [1, 3]? Yes, because both 1 and 3 are there. Okay, that makes sense. Or does this object, whose key "a" points to an object with two keys "b" and "c" in it, contain the one that just has "b"? The answer is yes. Okay, those are reasonable; so far so good. But there are some unexpected results. For example, as far as the containment operator is concerned, this document doesn't contain {"b": 7}, because "b" isn't at the top level. Similarly, it doesn't work on a single bare key: asking whether it contains "a" alone is false, because containment is looking for the same JSON structure. There are other operators that achieve those results, but containment isn't it.

Then we have question mark (?), question mark with or-bar (?|), and question mark with ampersand (?&): does the key on the right-hand side appear in the left-hand side, or any of the keys, or all of the keys. The first one works on a single key; the other two work on arrays, which is pretty neat, except they use the Postgres array type, not the JSON array type. That's an easy way to get a weird syntax error. Examples: is the key "a" in this? It sure is. Are both of these keys in there? Yep, they are. Are any of these keys, "a" or "q," in an object that has "a" and "b"? Yep. But is "b" in there, when it's nested? No, it's not, because again, this only works at the top level. Now let's cast an array to jsonb: we have a JSON array here, [1, 100]... oh wait, no, these operators only work on keys, and keys can't be integers in JSON. Okay, that's a shame. And if we cast the key list itself to jsonb, that doesn't work either, because the right-hand side has to be a Postgres array, not a JSON array. There are functions to convert back and forth between the two, so you're not stuck, but it's surprising the first time you run into it.
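A few of these, written out as runnable expressions (the values are just illustrations):

    -- -> returns json(b); ->> returns text; #>> extracts along a path
    SELECT '{"a": {"b": 7}}'::jsonb ->  'a';        -- {"b": 7}  (jsonb)
    SELECT '{"a": {"b": 7}}'::jsonb ->> 'a';        -- {"b": 7}  (text)
    SELECT '{"a": {"b": 7}}'::jsonb #>> '{a,b}';    -- 7         (text)

    -- Containment is top-level only:
    SELECT '{"a": 1, "b": 2}'::jsonb @> '{"a": 1}'; -- true
    SELECT '[1, 2, 3]'::jsonb @> '[1, 3]';          -- true
    SELECT '{"a": {"b": 7}}'::jsonb @> '{"b": 7}';  -- false: "b" is nested

    -- Existence operators take text or a Postgres text[], not a JSON array:
    SELECT '{"a": 1, "b": 2}'::jsonb ?  'a';              -- true
    SELECT '{"a": 1, "b": 2}'::jsonb ?| ARRAY['a', 'q'];  -- true
    SELECT '{"a": 1, "b": 2}'::jsonb ?& ARRAY['a', 'b'];  -- true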
There are lots and lots and lots of JSON functions. We are not going to grind through them all, because we'd be here until Saturday. But you can create JSON from records, you can create it from arrays, all sorts of stuff; and you can expand JSON into arrays, records, row sets, all kinds of things. Many of these have both json and jsonb versions, so you sometimes have to check the docs to see which versions exist. Unfortunately, there are some things that are json-only, like the creation functions, which don't have jsonb versions. That's a little weird, but there it is.

Here, just as examples, are two of the cooler ones. There's row_to_json, which accepts an arbitrary row and returns a json (not jsonb) object. For types that aren't core JSON types, it uses the output function of the Postgres type: you get whatever you'd get if you cast that type to a string. It handles array and composite types correctly, turning them into proper JSON. So here's an example: we have a table rel, we say row_to_json on all the fields, order by id, limit 3, and out come three rows of JSON output rather than three rows of relational output. That's pretty neat. And tags is an array field in this schema, and sure enough, we get the arrays back as proper JSON arrays. Neat.

One of the things you can do with this, for serious applications, is use it in a trigger to append to an audit table regardless of schema. This is a constant problem people run into: you want to create an auditing system that logs every change to a set of tables, but how do you handle the fact that they don't have the same schema? Do you create separate audit tables for each one? That's not so great, because then you have to write separate triggers, all that kind of stuff. This is nice because you can unify it into a single JSON blob and just log that. And that means you only have to write one trigger, because you don't care about the row type that comes in; the function will take care of that for you.
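A minimal sketch of that one-trigger-fits-all idea (the audit table, function, and trigger names are hypothetical; rel is the example table from the slides):

    CREATE TABLE audit_log (
        logged_at  timestamptz NOT NULL DEFAULT now(),
        table_name text        NOT NULL,
        row_data   json        NOT NULL
    );

    CREATE FUNCTION audit_row() RETURNS trigger AS $$
    BEGIN
        -- row_to_json doesn't care what NEW's row type is,
        -- so the same function works for any audited table.
        INSERT INTO audit_log (table_name, row_data)
            VALUES (TG_TABLE_NAME, row_to_json(NEW));
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER rel_audit
        AFTER INSERT OR UPDATE ON rel
        FOR EACH ROW EXECUTE PROCEDURE audit_row();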
Another example: jsonb_each_text. It takes a JSON object and returns a row set of key/value pairs, as if you did a SELECT * from a table, and it returns each of them as text. You can write the world's most expensive entity-attribute-value query this way. You wouldn't want to, but: we take that row_to_json from before, put it inside, and, using this rather exotic query, turn it back into relational data, except flattened into individual key/value rows. Wild. I'm sure you can think of all sorts of applications for this, right? Yeah, not really. Seriously, though, one of the nice things is that you can expand jsonb into things you can join on. That's the main use of it: you can write a subquery that you can use for a join. So if you have a jsonb field and you want to join on one of the fields inside of it, but you don't want to extract that into a relational field and maintain it there, you can use this as a subquery. That's nice, and it's often more efficient to do it this way than to write extraction operators.

The other nice thing about jsonb is that you can index it. You can index the textual json type, which indexes it just the way it would a text type, which is basically useless. I mean, how often do you compare against the exact JSON text? Not very often. You can do an expression index on extracted values, and we'll actually see some timing tests for that later. But that means knowing in advance which fields or elements you're going to query on, and if you do know that, it's probably better to make that data relational in Postgres anyway, because relational data is still much faster than JSON data.

jsonb has GIN indexing; GIN stands for Generalized INverted index. The default index type accelerates the containment and existence operators. Again, queries must be against the top level of the object. You can query nested objects, but they have to be on paths rooted at the top level: you start at the top and work your way down. The index doesn't answer "way deep, buried somewhere inside this object, is there the word green?" That's not something we index right now.

But there's a second operator class, jsonb_path_ops, which is optional; you have to specify it when you create the index. It only supports containment. However, it's much faster on paths than the default one. So it's what you want if all you're going to do is containment, perhaps with nested searches down a path. For example: does this document contain, at the top level, a key named tags whose array has the element "qui"? The default opclass will find everything that has tags, has the key, has this value, and then do a recheck on the path structure. It works, but it's a little slow. The jsonb_path_ops variant goes directly to that entry, so it's faster. If you just need containment, that will be faster; if you need the other supported operators, you need the default, which has the name jsonb_ops but is what you get if you don't specify anything.
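The two index flavors, written out (assuming a hypothetical jsonb column named doc on the rel table):

    -- Default opclass: supports @>, ?, ?| and ?&
    CREATE INDEX rel_doc_gin  ON rel USING gin (doc);

    -- jsonb_path_ops: supports only @>, but is smaller and faster for it
    CREATE INDEX rel_doc_path ON rel USING gin (doc jsonb_path_ops);

    -- The kind of rooted-path containment query either one can serve:
    SELECT * FROM rel WHERE doc @> '{"tags": ["qui"]}';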
But let's find out, because we have some performance test results. This is my usual caveat when I'm doing performance testing: there's an infinite universe of what you do with your database versus what I did with my test database, so always build and test using data that simulates your real application. I hate for people to emerge from talks like this saying "Postgres is better at blah," because it depends on what you're really doing. So don't take these as applicable to every situation, and of course these are relative, not absolute, results.

For this test I had a four-column schema: an id, which is a bigint and the primary key; first and last name, which are text; and tags, which is an array of short text tags. Each entry had one of two extremely common tags, one per record, so every row had either one or the other, and then a diminishing number of rare tags. That way we could see how things perform when searching for a very common entry as opposed to a very rare one. I was using an i2.2xlarge instance (one of the reasons our AWS bill was large last month) running Ubuntu 14.04. These were actually done on 9.4.0; I ran them before 9.4.1 came out. I did basic tuning for the instance size, but nothing exotic as far as setting up Postgres goes. Ten million records, generated at random, in a variety of schemas. One is a pure relational schema, so each of those columns was its own column. One is a hybrid: the names are relational, but the variable tag set is jsonb. Then there are json and jsonb schemas, where everything except the primary key is a single json or jsonb column. I ran 100 iterations of each test and threw away the top and bottom 10, to knock out outliers or Amazon weirdness on the instance. I only timed execution, not the time to return results to the client, in part because the test harness was written in Python, and creating Python objects from these kinds of results is relatively expensive; that could seriously distort the results.

So the first test: load 10 million records using COPY, without rebuilding the index, on each of these variations. And, ta-da. Lower is better in this case. Relational is fastest, followed by json (interesting), followed by the hybrid, followed by jsonb. Relational beats everything, which is no surprise; the relational data path in Postgres is very highly optimized. jsonb is slower to load than json because it has to process the data as it comes in: not just check it for syntax, but actually parse it apart and turn it into a new format. This is expected. But everything's pretty much the same order of magnitude; they're apart, but not hugely apart.

The next test is a sequential scan for a single last name, without an index. For relational and hybrid we use the relational field, and the double arrow for json and jsonb. I also tried the containment operator for jsonb, since you can almost always rewrite a double-arrow query as a containment query. Well, okay, this is why we use jsonb: relational and hybrid are the same, no surprise; jsonb is a tiny bit slower; and then there's json, which is a huge outlier. That's the cost of having to parse and process each document as the row comes out. So json is dramatically slower to actually use than jsonb, and relational is about twice as fast as jsonb. The two jsonb variants are roughly the same speed; that's in the noise, well within the margin of error.

Okay, btree index lookup. We created a traditional btree index directly on the last name for relational and hybrid, and an expression index for json and jsonb. I also tried GIN indexing on the jsonb field, using containment. All pretty much the same. Relational and hybrid are pretty much the same, with hybrid a tiny bit slower, probably because of the row size. jsonb, interestingly enough, is a little bit faster: not a lot faster, but just outside the margin of error. json is the slowest, no surprise, and jsonb with GIN is about the same speed. So they're all pretty much comparable; jsonb is actually faster than anything else here, including relational, and json is somewhat slower due to the extraction overhead. Using an index like this is always the fastest way to look up a highly selective field like a name. And using the GIN index is very comparable to the btree; what's nice about the GIN index is that I didn't have to index a particular field. I indexed the entire jsonb column, and I can query on any of the contained fields in the JSON document. That's a big improvement over 9.3.
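The index shapes used in that test, sketched out (column names are hypothetical):

    -- Plain btree on a relational column:
    CREATE INDEX rel_last_name_idx ON rel (last_name);

    -- Expression index on a field extracted from jsonb
    -- (note the doubled parentheses around the expression):
    CREATE INDEX rel_doc_last_name_idx ON rel ((doc ->> 'last_name'));

    -- The query has to use the same expression to hit the index:
    SELECT * FROM rel WHERE doc ->> 'last_name' = 'Smith';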
Common tag lookup by sequential scan: every record had a male or female tag, 50/50, and we scan looking for all of one of them, using the containment operator for the tag array and the containment operator for jsonb. I also tried a more traditional relational approach with a join table: a secondary table of tags that we join against. Okay: searching a traditional Postgres array field relationally is the best, followed by jsonb, and the join table kind of didn't pan out in this case. The join table is a huge loss in this scenario because the field is very unselective; I'm returning five million rows either way, which means I'm doing a relatively big join in the middle. jsonb is slower than relational, but it's in the same general range; these are not orders of magnitude apart.

But let's try it with a rare tag, one that only 0.075% of records have, so it's very selective. I used containment for jsonb, and I also tried the join table. In both cases I had indexed the join table on the tag, but in the sequential-scan case it didn't use it. Okay: the join table really wins in this case, because the tag is very selective; it can very quickly isolate which rows have this rare tag. jsonb is significantly slower than relational, and the relational join wins over both of them. jsonb remains slower, but comparable.

Rare tag lookup by index: create a GIN index on the relational array field and use the containment operator; use containment for jsonb; and also try the join table. Okay: relational, then jsonb, then relational-with-join. Relational is fastest in this situation, but jsonb performs comparably. And if you're storing rare tags, or attributes of any kind, and you don't need full JSON, consider using an array field instead of a JSON field, because the extraction will be faster in Postgres.
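The three layouts being compared, as queries (a sketch; the join table, column names, and the rare tag value are hypothetical):

    -- Native array column, array containment:
    SELECT count(*) FROM rel WHERE tags @> ARRAY['xyzzy'];

    -- Tags inside a jsonb document, jsonb containment:
    SELECT count(*) FROM rel WHERE doc @> '{"tags": ["xyzzy"]}';

    -- Separate join table, one row per (row, tag) pair:
    SELECT count(*)
    FROM rel r
    JOIN rel_tags t ON t.rel_id = r.id
    WHERE t.tag = 'xyzzy';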
GIN indexes and selectivity. Who's familiar with the term selectivity as it relates to query planning and optimization? Okay, I should explain this. The top-of-the-head view of selectivity is: how many rows am I going to get back, versus how many rows am I going to have to look at? For example, if you query a 10-million-row table and you're only going to get back three rows, that's a very selective query, and generally an index will be better. If you query a 10-million-row table and 5 million rows are coming back, that's a very unselective query, and usually Postgres will do a sequential scan. That's the easy version of what selectivity is; it's one of the most important characteristics in how Postgres decides what kind of plan to use. The problem is that, as of 9.4, GIN indexes on jsonb fields have hardwired selectivity estimates. The planner just says, I don't actually remember the number, but I think 10% of the rows are going to come back. It doesn't consult the actual underlying data. The result is that it will almost always use the index, even if the selectivity is very low, even if it's going to retrieve a lot of rows. This can result in some very bad performance in low-selectivity cases, so just be aware of it. The moral of the story is: don't create GIN indexes for queries that are going to be highly unselective, because it will use the index and you'll be in bad shape. This is an area that definitely needs some coding attention inside Postgres.

Just out of curiosity, since we've been waving our hands and saying these indexes exist: how long did they take to create? The btree index on the last name, the GIN index on the relational array, and the jsonb_ops and jsonb_path_ops GIN indexes on jsonb. Interestingly enough, and this is very surprising, especially to people who have been using GIN for other things in Postgres, the btree took the longest to build. The GIN array index was very fast, the path_ops one was very fast, and the jsonb_ops one was slower, but still quite a bit faster than the btree. As of 9.4 the GIN build time is really fast, and the path_ops build time in particular is very fast. GIN indexing on arrays is also fast. That's a big change from 9.3 and earlier.

And how big is all of this stuff? The answer: roughly the same size. jsonb is the largest, interestingly enough; json and relational-plus-json are roughly the same size; and, interestingly, the hybrid is the smallest, even compared to pure relational. The reason is that the original test data didn't have a lot of gratuitous whitespace in it, so jsonb actually had to add space when it added its internal structure. For documents that have a lot of internal whitespace, those numbers may very well be different.

Lastly, how big are the indexes? The primary key index, the GIN indexes, and so on. For anyone familiar with previous implementations of GIN: the new GIN compression feature makes a huge difference here. The GIN indexes are tiny compared to any other index size; indexes on just the tags are very compact, and the path_ops indexes are, as expected, somewhat smaller than the jsonb_ops indexes.

So now that we all know that, what do we know? That was a lot of data dumped on you, so let's talk about some conclusions. Here's my one-slide oversimplification; everybody just go and do this. For the basic set of attributes, use relational data. If something's a common attribute (you can define "common" for your own application, but let's say 10% of the rows will have it), make it a relational column. Remember, null is free: there's no space cost to a null column in Postgres. Use either array fields or jsonb for extended attributes, the things that are not common in your rows. Use file system storage for really big stuff, because really big stuff, in the megabyte range, is not nearly as efficiently stored in Postgres as it is on the file system. And always use jsonb if you can; there's no compelling reason to use json unless you're in one of the special cases mentioned earlier.
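One way to write that one-slide oversimplification down as a schema (names are hypothetical):

    CREATE TABLE customer (
        id         bigserial PRIMARY KEY,
        first_name text,        -- common attributes: plain columns
        last_name  text,
        tags       text[],      -- tag-style data: an array field
        extras     jsonb        -- rare, variable attributes: jsonb
    );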
Okay. So, obviously, the gorilla in the room (I can tell by the laughter who remembers Blazing Saddles) is MongoDB. MongoDB recently had a very major rev: version 3 is out now. I did previous tests against version 2, and I didn't rerun them for the last iteration of these slides, because 3 was about to come out and I felt it was unfair to compare the absolute most recent shiny version of Postgres against a relatively old version of MongoDB. So let's compare against Mongo 3.0.1. It's new, and it has a faster storage engine. I ran this on different data sets, each with a million records, all of them basically pure JSON documents. One test used a document with four fields per JSON document; the other had 200 fields per document, which I consider kind of a destruction test for JSON.

Load time: this is how long it took to load the data set with four fields. I did it two different ways. These are all based on INSERT, not COPY; in Postgres, COPY would have mopped the floor with any of these. This one was inserts done in a single transaction: BEGIN, one million INSERTs, COMMIT. And yes, to be fair, I timed the COMMIT at the end. Postgres was fastest, followed by Mongo, which doesn't really have an exact parallel to these two transaction patterns. And this is with a BEGIN, INSERT, COMMIT for each one of those million rows, which was, unsurprisingly, a lot slower, because now you're paying for a million separate commits. And here it is with the 200-field case: everything got slower, and the gap between Postgres and Mongo did close, fair is fair, but Postgres is still faster in the load case.
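The two load patterns, sketched (table name hypothetical; COPY, as noted, would beat both):

    -- One transaction around the whole load:
    BEGIN;
    INSERT INTO docs (doc) VALUES ('{"f1": "aaa", "f2": "bbb"}');
    -- ...repeated a million times...
    COMMIT;

    -- versus autocommit, one transaction per row: every INSERT
    -- then pays for its own commit, so this is much slower.
    INSERT INTO docs (doc) VALUES ('{"f1": "aaa", "f2": "bbb"}');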
So let's query for a single field, single value. These data sets are all generated random data, so the data is quite highly distributed. I picked a single field, I think number three in the four-field case and number 12 in the 200-field case, and I had a bunch of different indexing setups: no index, and an index on that particular field in Mongo. So again, this is the four-field case, each document having four relatively short fields. Here's Postgres with no index, and Mongo with no index: we're still faster, though it's pretty close. This bar was way up here in Mongo 2. With 200 fields, fair is fair: Mongo is actually quite a bit faster than Postgres. I think the reason is that Mongo now does a lot more compression of keys, in the sense that it doesn't store the same field names over and over and over again in the JSON document. Again, this is without an index, so this is sequentially scanning all one million records. For the next one, I created a functional index in Postgres on the field I'm searching, and a field index on the same field in Mongo. In this case they both run really fast; I mean, we're talking fractions of a millisecond here. But Postgres is notably faster, in both the four-field and the 200-field case. So our index lookups on a btree-style index (these are both btree indexes) are quite a bit faster.

[Audience question: were there indexes during the insert tests?] No, none. There were no indexes at insert time on either one, so there was no index-update overhead in either Postgres or Mongo; the indexes were created afterwards. It would be kind of unfair to create indexes on one and not the other and then time them.

So we're still faster on a btree query, but Mongo was not doing too badly, I have to say. Now we created a GIN index on the whole JSON document. This is with the default GIN index, this is with jsonb_path_ops, and this is with a Mongo field index. The default GIN index is a lot slower; path_ops is actually faster than the Mongo field index. And note that the Mongo index is a btree on one particular field, while ours is a GIN index on the entire document; we didn't have to specify in advance which field we're indexing. So we're actually doing pretty well there. The general-purpose GIN index, the one that's not path_ops, not so great. This does of course mean some functionality limitations, since with path_ops we can only use the containment operator. On the 200-field case, the Mongo performance is about the same there, and the default GIN index is really slow compared to the path_ops index.
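To spell out that functionality trade-off (names hypothetical):

    -- A jsonb_path_ops GIN index can answer containment queries:
    SELECT * FROM docs WHERE doc @> '{"field12": "abc"}';

    -- ...but not key-existence queries; those need the default jsonb_ops:
    SELECT * FROM docs WHERE doc ? 'field12';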
Now, lest we forget, here's the same thing compared against relational data. Yeah, lest we forget how much faster relational data is: for people who can't see in the back, there's this tiny little blue bar here that's about four pixels high, compared to Mongo, compared to Postgres-with-JSON. So: Mongo 3 is much improved. They really did it; it's not just hype. I don't know about the seven-to-ten-times performance increase they're quoting, but it is faster, and it performs really well at extracting a single field from a large JSON body. The extraction operation has obviously been very much improved, which is where Mongo really fell down last time: where it had to pull a field out of the middle of a relatively large document. Indexing a single field performs comparably; Postgres is faster, but let's be fair, we're talking 0.2 milliseconds versus 0.3 milliseconds, not 0.2 milliseconds versus 10 milliseconds. Mongo doesn't really have an equivalent of Postgres's GIN. Now, I know people who are familiar with Mongo are raising their hands and saying, "but the full-text index." The problem is that's not equivalent to GIN, because it goes through a stemmer; it's much more equivalent to Postgres's full-text search indexes. It's a very different animal from a GIN index. It's designed for full-text searching; it's not really designed for things like tags, where the data is only interesting to computers and isn't natural language. So, game on: now we know what we need to improve.

Mongo does really well, and it does do better than Postgres, for documents with a large number of JSON fields that you're scanning sequentially. How many people build applications where that's their central performance bottleneck? Yeah, thought so. In most other cases, jsonb does perform better, even against Mongo 3. And against relational performance there's no comparison: if you have a relational column versus a JSON field, Postgres is going to run a gazillion times faster (gazillion is a technical term).

So here we are in 2015. We still have world-class JSON support. We are not faster than Mongo in every single way anymore, but we are still really, really fast, and we are particularly fast if you do what you should be doing anyway: don't use Postgres as a pure JSON store; use JSON for what it's good at, and use relational data for what it's good at. If you do that, you'll end up with a really, really fast application. By the way, if you haven't already, check out ToroDB. It's a server that runs on top of Postgres that speaks the MongoDB wire protocol but fans the JSON out into relational columns. It's not yet a production release, but it's a really neat project; check it out. And that's my talk. Questions? I think we have plenty of time.

[Audience question about hstore.] Less than there used to be. hstore, in the good old days, was the only thing you got as far as this kind of key/value store goes; the "h" stands for hash. It was a key/value store, and it's in contrib; it's still in contrib. The downsides were that it wasn't hierarchical, it couldn't handle arrays, it was only key/value, it was a Postgres-specific kind of thing, and it was in contrib, which meant you had to install it, which was kind of annoying. Now there's hstore2, which actually uses the same underlying binary format as jsonb but has hstore-style syntax for compatibility with previous versions of hstore. hstore was the best way of handling this up until we had jsonb; now I would suggest using jsonb, because everyone gets JSON, and the performance is pretty much the same. Basically, hstore2 and jsonb use the same underlying data format; they're just different syntaxes for writing the data structures.

[Audience question about set-style operations on jsonb, like subtracting one document from another.] I don't know if you can do that with it. I think there's something, but I'm not a hundred percent sure; it's not as easy as just doing a minus, which is kind of a shame. There should be an intersect operation or something like that. It would be easy to write, because you could build it on the row-set creation stuff; that would be fun, though it probably wouldn't perform that well. Right now you unfortunately have to pull it apart and put it back together, and that is a shame; that is a limitation on jsonb. I don't know if it's going away in 9.5, but I would hope it goes away promptly. It's not hard to write, but I don't think it's in core currently.
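A 9.4-era sketch of that pull-it-apart-and-put-it-back-together workaround, here deleting the key "b" by re-aggregating everything else:

    SELECT json_object_agg(key, value)::jsonb
    FROM jsonb_each('{"a": 1, "b": 2, "c": 3}'::jsonb)
    WHERE key <> 'b';
    -- result: {"a": 1, "c": 3}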
[Audience question about Riak.] I'm not familiar with Riak, but it would be roughly comparable.

[Audience question about storing each top-level key as its own JSON column.] Well, first, I wouldn't do it that way; that would be kind of an oddball way of doing the schema. It could potentially be faster, because the JSON documents themselves will be smaller; you're effectively peeling off the first layer of JSON structure by making it relational. That being said, you're losing a huge optimization possibility there, which is to just make them relational columns right at the top. If I understood your question correctly, which I may not have... okay, maybe I misunderstood. What I would do, if you peel off the top level, is this: instead of saying "a" is a simple JSON column, I would say "a" is an int and "b" is whatever it is. That's what I mean by making the top level pure relational columns, because you save a lot of storage; an int is not a variable-length type, and all that kind of thing. Will it be faster if you make each one a JSON field? You're effectively losing the top layer, yes, but you're also picking up some overhead, because each value now needs to keep its length around, since they're variable-length. It's not so much that you're paying for storage; it's that every byte you have to pick up and rattle around is a performance hit. All I can say is, sure, it might be faster, but use Postgres for what it's good at, which is relational data; it's really, really fast on relational data. Other questions? Yes, sir.

[Audience question.] I wouldn't be surprised if there are surprises there; I would be surprised if it's impossible. Well, is it because... okay, you know, that's a shame. Again, there's some obvious functionality that Postgres could definitely grow in this regard. It would be nice; I don't know if there's a reason for that or not, truth be told. Okay, thank you very much.
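The before-and-after of that suggestion, sketched (hypothetical names):

    -- Instead of one JSON column per top-level key...
    CREATE TABLE events_split (
        a jsonb,      -- variable-length, so each value carries a length header
        b jsonb
    );

    -- ...peel the top level into real columns where you can:
    CREATE TABLE events_rel (
        a int,        -- fixed-width, no per-value overhead
        b jsonb       -- only the genuinely variable part stays JSON
    );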