I want to welcome you to a deep dive into PostgreSQL indexing. Today we're going to talk about indexing in Postgres — when to add an index, and when not to. My name is Lindsay Hooper. I'm one of the conference organizers, and I'll be your moderator for this webinar. I'm here with Ibrar Ahmed, senior software architect at Percona, who has vast experience in software design and development. Prior to joining Percona, Ibrar worked at EnterpriseDB, and he has contributed to the PostgreSQL community as well as other open source communities. He's a seasoned speaker who has given more than 15 Postgres talks in the last year alone, all across the world, at conferences including Postgres Conference EU, Postgres Conference Asia, Postgres Conference New York, and Percona Live. He has also authored multiple books on PostgreSQL. So welcome, Ibrar. Take it away.

Hi, everybody. My name is Ibrar Ahmed, and I'm currently working at Percona as a senior database architect. Prior to joining Percona, I worked at EnterpriseDB. I've been using and working with PostgreSQL for 16 years, I have 21 years of software development experience in total, and I've written two books on PostgreSQL. Today's topic is a deep dive into PostgreSQL indexes: how to create them, and what information PostgreSQL keeps about them internally. Let's start.

Before we get to actual PostgreSQL indexes, we have to think about what an index really is. For clarity, I have an example here: a book. When I say "heap," think of the body of the book — I'm relating the heap to the book. When you read the book from the first chapter onward, that's the heap for us. And when you go to the end of the book, you find its index. That index contains pointers into the book.
When you want to look up a word, you go to the index, find the word, and see a page number. That page number is the pointer into the heap: you turn to that page and find the word there. So in database terms, when I say heap, it's actually the table, and the index is like those last pages of the book.

Some key terms. Whenever I say "tuple," it's a row — tuple and row are synonyms. Rows are stored in physical files on disk; each table consists of one or more physical files. (A single table can span multiple files, but for now, think of a table's tuples as stored in their own file on disk.) Here I have created a table: CREATE TABLE foo with a column id and a second column name. The second term is relfilenode. pg_class is a catalog table, and relfilenode is a column of pg_class that contains the file name for each table. So with WHERE relname LIKE 'foo', we are selecting the file name of table foo. And what is that file name? 16384 — a physical file on disk, which was just created. We can go to PGDATA, the data directory of PostgreSQL. Under the data directory you can see the database's directory, and inside it the table's file — the file we just saw is linked to table foo. But its size is 0. Why? Because we just created the table and haven't inserted any data yet. So what we've learned is that a table's tuples are stored in their own separate file on disk. Another point: the heap has no order. If you store rows 1, 2, 3, 4, 5 and then select the data, it won't necessarily come back as 1, 2, 3.
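The catalog lookup described above can be sketched as follows (the table name foo matches the talk; the relfilenode value will differ on your system — 16384 is just what the speaker happened to get):

```sql
-- Create the example table from the talk
CREATE TABLE foo (
    id   integer,
    name text
);

-- Ask the catalog which on-disk file backs the table
SELECT relfilenode FROM pg_class WHERE relname = 'foo';

-- Convenience function that resolves the file's path relative to PGDATA
SELECT pg_relation_filepath('foo');
```

The second query saves you from assembling the path under the data directory by hand.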
Initially the data may be stored in order, but that's not guaranteed, because you can delete rows, reinsert rows, and update rows. There is no ordering of the data on disk, so you cannot rely on getting rows back in any particular order — it's unordered data.

So, tables, heaps, and selecting data. When you want to read the whole table — SELECT * FROM foo — a sequential scan of the table makes sense. Why? Because you want to read the whole table: first row, second row, third row, fourth row, fifth row, all the way to the end. You need each and every row, so a sequential scan is the right choice. But when you want SELECT name WHERE id = 5432, does a sequential scan really make sense? Back to the book example. When you want to read a book from the first page to the last, you do that sequentially, and that makes sense: first chapter, second chapter, third chapter, all the way to the last chapter. But if I ask you to read about one specific topic, what do you do? Does it make sense to read the whole book to find that topic? No, it doesn't. You go to the index and look up the word I asked about. Likewise, it doesn't make sense to sequentially scan a table when you don't want to read the whole table — you want some specific information.

So how do you actually select the data? Another example here: CREATE TABLE foo with an id and a name, with two rows inserted — (1, 'Alex') and (2, 'Bob'). Postgres has some hidden columns, and one of them is ctid. So what I've done is SELECT ctid, * FROM foo.
So I'm selecting one hidden column, ctid, plus all the columns of foo — that is, id and name. What is ctid? It consists of two values: the first is the block (page) number, and the second is the tuple's position within that page — formally an offset, but for this example I'll just call it the tuple number, to avoid confusion. So for Alex, ctid is (0,1): block 0, tuple 1. For Bob, with id 2, it's (0,2): block 0, tuple 2. When we scan the table, we go to that page and fetch that tuple — this is how we read data from the disk. (Maybe you can see the colors on the slide; I changed them, but it's quite small.)

So we're done with the heap and the concept of how data is fetched. Now we can start studying indexes, because as I've explained, it's a huge task to read an entire book when you only need one specific piece of information. When you need specific information, you need pointers, and indexes are entry pointers into the table: you go to the index, find the value, get the pointer, then go back to the table and extract the tuples from there. There are some exceptions, where you don't go to the heap at all — I'll cover that at the end of this presentation. The sole reason to have an index is performance. That's the advantage: performance.
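The ctid lookup just described can be reproduced directly (repeating the table setup so the snippet is self-contained):

```sql
CREATE TABLE foo (id integer, name text);
INSERT INTO foo VALUES (1, 'Alex'), (2, 'Bob');

-- ctid is a hidden system column: (block number, tuple number)
SELECT ctid, * FROM foo;
--  ctid  | id | name
-- -------+----+------
--  (0,1) |  1 | Alex
--  (0,2) |  2 | Bob
```

Both rows land on block 0 of a freshly created table; after deletes, updates, and vacuums, the ctids of surviving rows can change, which is why ctid is an internal locator rather than a stable row identifier.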
An index is stored separately from the table's main storage. This is a drawback, because it requires extra space. If your table is one GB, maybe your index is 100 MB — so you need the one GB of table space plus an extra 100 MB. It's a real cost: if you create unnecessary indexes on your tables, you're eating your disk space, so you have to be careful to conserve it. But the advantage is performance: you get the data out quickly.

Here's an example with EXPLAIN — maybe you already know it; it prints the query plan. EXPLAIN SELECT name FROM bar WHERE id = 5432, and you can see a sequential scan on bar. Then: CREATE INDEX bar_idx ON bar (id). That's the syntax for creating an index: CREATE INDEX, then bar_idx, the name of the index; ON bar, where bar is the table; and id, the column name. Now we EXPLAIN the same query again — EXPLAIN SELECT name FROM bar WHERE id = 5432 — and you can see an index scan is used, a bitmap index scan in fact. Forget about what "bitmap" means for now; just see that bar_idx is used where before we had a sequential scan. And the advantage, as I told you, is performance: the cost of the query is reduced. Don't be confused — cost is not milliseconds or seconds; it's an arbitrary value, but the higher it is, the slower the query. We got roughly a 40% improvement just by creating one index. Why not more? Because the table wasn't big enough and the query wasn't complex. It's a simple example, and even there you get a 40% improvement.
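A minimal end-to-end version of this before/after experiment might look like the following; the talk doesn't show how bar was populated, so the generate_series load is my assumption:

```sql
CREATE TABLE bar (id integer, name text);

-- Populate with enough rows that the planner has a real choice to make
INSERT INTO bar
SELECT g, 'name_' || g FROM generate_series(1, 1000000) AS g;
ANALYZE bar;

-- Without an index: Seq Scan on bar
EXPLAIN SELECT name FROM bar WHERE id = 5432;

CREATE INDEX bar_idx ON bar (id);

-- With the index: an index (or bitmap index) scan using bar_idx
EXPLAIN SELECT name FROM bar WHERE id = 5432;
```

Comparing the two cost estimates in the EXPLAIN output shows the improvement the speaker describes.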
So now we've established that indexes are necessary — they give us performance. Let's start creating indexes. CREATE INDEX idx_btree ON bar. I already told you that a table is stored in its own physical file on disk; an index likewise gets its own file or files. We can run the same query as before — SELECT relfilenode FROM pg_class WHERE relname LIKE 'idx_btree' — using the index name instead of the table name. I get relfilenode 16425, and if we go to the PGDATA directory, the file is there. This time there's already data in it, because we created the index on a table that already had a lot of rows. So: every table has its own physical file, and every index has its own physical file where all the index data is stored.

So far we've created an index on a single column — CREATE INDEX bar_idx ON bar (id), where id is a column of table bar. That's an index based on a single column; simple, and we've already seen it, with the drawback that it needs separate disk space. One more thing: when you create an index, it locks the table, because Postgres needs to build the index from it. Postgres provides a way to create an index concurrently, without locking the table: CREATE INDEX CONCURRENTLY idx_btree ON bar USING btree (...). You just add the word CONCURRENTLY; otherwise everything is the same. The problem is performance — it takes more time to complete. The earlier build, which locks the table, took 12 seconds; the concurrent build, which doesn't lock the table, took 23 seconds — almost double the time.
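The two build modes just compared can be written as a sketch (index and table names follow the talk):

```sql
-- Plain build: fastest, but blocks writes to bar while it runs.
CREATE INDEX idx_btree ON bar USING btree (id);

-- Concurrent build: writes to bar continue during the build, but it
-- takes considerably longer and cannot run inside a transaction block.
DROP INDEX idx_btree;
CREATE INDEX CONCURRENTLY idx_btree ON bar USING btree (id);
```

One operational note: if a concurrent build fails partway, it can leave behind an INVALID index that must be dropped and rebuilt.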
The advantage is that it doesn't lock the table; the disadvantage is that it takes more time to build the index.

Next, expression indexes. We've learned that we can create an index on a single column, or on multiple columns. But in some cases our queries are not based directly on columns — they're based on an expression involving a column. We could create an index on the column used in that expression, but then the index applies to the column only, not to the expression, and Postgres still has to filter. If we determine that a particular expression is used often in our queries — and we run that query often — there's a better way: we create the index on the expression itself, using the whole expression instead of only the column name. Here we are selecting rows WHERE lower(name) LIKE some text. The important part is lower(name): lower is a function, name is a column, and together they make an expression. So instead of creating the index on name, we create it on lower(name) — the whole expression. Now when we run the same query, you can see the index condition is exactly that expression and the value, which makes it easy for the executor to use the index.

Second part: you might think you can only use simple functions in the expression, but here's a more complex example. EXPLAIN SELECT * FROM bar WHERE dt + interval '2 days' ... — dt is a date column, and dt plus an interval of two days is a more complex expression; it's not just a function call. So I create an index on that whole expression.
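A sketch of the expression index described above (I use a plain equality predicate here; matching a LIKE pattern against an expression index involves extra considerations such as operator classes, which the talk doesn't cover):

```sql
-- Index the expression itself, not just the column
CREATE INDEX idx_lower_name ON bar (lower(name));

-- The WHERE clause matches the indexed expression exactly,
-- so the planner can use idx_lower_name:
EXPLAIN SELECT * FROM bar WHERE lower(name) = 'alex';
```

If the query instead wrote WHERE name = 'alex', the expression index would not apply — the predicate has to match the indexed expression.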
I could create an index on the dt column alone and then apply the two-day interval in the query, but instead I'm indexing the expression itself. In that case Postgres stores the evaluated expression in the index, so whenever I use this condition, it's easy for Postgres to extract the data. Previously we used a simple expression; here a complex one. You can do this, but be careful: study your queries. When you use a particular expression consistently and run that query often, an index on that expression gives much better performance than an index on the plain column or columns.

Now, what is a partial index? I told you in the initial slides that when you create an index, the advantage is performance. What is the disadvantage? Disk space. If you create a lot of indexes, you eat your disk. And there's another aspect: if you have a huge table and you create an index on it, the index will also be huge, because it has to store the pointers and values — and if the index spans multiple columns, it has to store all of that too. But what if I have a very large table and a very large index, yet when I study my workload, my queries only ever hit the first 1000 rows — or, say, 10% of the table? Is it necessary to build a complete index over the whole table? On the left side here you can see a query with WHERE id < 1000, and a plain index created on id. When I check the size of that index, it's 214 MB. But my analysis says that 99% of the time, my queries only ask for id < 1000.
So why did I index the ids greater than 1000 at all? Do I really have to build the full index? No — there's a way to create a partial index. In that case Postgres builds the index only where id < 1000. On the right side you can clearly see I created the index: CREATE INDEX idx_part ON bar (id) WHERE id < 1000. Its size is only 240 KB, and both queries run in almost the same time — I'm hitting an index on the left side, and I'm hitting an index on the right side. The index works perfectly; the only difference is the size, and I get a much smaller index. The disadvantage, which you may already have figured out, is that if I run a query for id greater than 1000, this index cannot be used, because I only indexed id < 1000, not id > 1000. So what happens? For id > 1000, a sequential scan — or some other index that covers id > 1000 — is used instead. And that answers the timing question too: that's what happens when we query id >= 1000 and the index doesn't cover it.

Okay, so what kinds of indexes do we have in Postgres? The index type is called the index access method, and B-tree is the main one in the PostgreSQL field. The supported operators are less than, less than or equal, equal, greater than or equal, and greater than — for those you can use a B-tree index. When you create an index without a USING clause, Postgres automatically creates a B-tree; it's the default. So CREATE INDEX idx_btree ON foo USING btree (name) is the same as CREATE INDEX idx_btree ON foo (name) — USING btree is optional.
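The partial-index comparison above, as a self-contained sketch (index names are mine; the 214 MB / 240 KB figures are from the speaker's slides and will vary with table size):

```sql
-- Full index over every id (large: ~214 MB in the talk)
CREATE INDEX idx_full ON bar (id);

-- Partial index covering only the rows the workload actually touches
-- (~240 KB in the talk)
CREATE INDEX idx_part ON bar (id) WHERE id < 1000;

-- The predicate id = 542 implies id < 1000, so idx_part is usable:
EXPLAIN SELECT name FROM bar WHERE id = 542;

-- id = 5432 falls outside the partial predicate, so idx_part cannot help:
EXPLAIN SELECT name FROM bar WHERE id = 5432;

-- Compare on-disk sizes
SELECT pg_size_pretty(pg_relation_size('idx_full')),
       pg_size_pretty(pg_relation_size('idx_part'));
```

The planner can only use a partial index when it can prove the query's WHERE clause implies the index's WHERE clause.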
Postgres creates a B-tree automatically, but when you want some other kind of index — which we'll study shortly — you have to use USING. Now: EXPLAIN ANALYZE SELECT * FROM foo WHERE name = 'some text'. Here you can see an index scan using idx_btree on foo, with an index condition. So what is a B-tree? It's a balanced tree. If you want to read up on the data structure itself, I've provided a Wikipedia link that explains how it's accessed and everything.

Here's the B-tree example — I think we've seen this picture before. CREATE INDEX on foo (name), with rows (1, 'Alex') at ctid (0,1) and (2, 'Bob') at ctid (0,2). The B-tree gives us a search path: we follow it, reach the pointer, and read the ctid — for example, that Bob is stored at block 0, tuple 2. With that ctid we go to the heap and fetch the row. Remember, the actual table stores its data unordered on disk, but the index stores it as a B-tree, so we use the B-tree search algorithm to find the value, get the position of that value, and then go to the heap and extract the data — with one exception, which we'll study later. So, to recap: when we create a B-tree index, the data is stored in B-tree form; when we search for Alex, we walk the B-tree using its search algorithm, find the value, get the pointer, go to the heap, and extract the tuple.

Next, the hash index. What is a hash index? A hash index stores a hash computed from each value being indexed.
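A sketch of the hash index contrast about to be discussed (table and index names follow the talk's style):

```sql
-- USING hash is required; without it, Postgres builds a B-tree.
CREATE INDEX idx_hash ON bar USING hash (name);

-- Equality lookups can use the hash index:
EXPLAIN SELECT * FROM bar WHERE name = 'Alex';

-- Range predicates cannot — hash values carry no ordering,
-- so this query will not use idx_hash:
EXPLAIN SELECT * FROM bar WHERE name > 'Alex';
```

This is the practical consequence of hashing: equality only, in exchange for a compact equality lookup.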
So while a B-tree stores the data in tree form, an index created with hash stores a hash of each value. And as we know, a hash only works for equality. We cannot do a greater-than comparison on hashes, because a hash is effectively a random-looking value: we cannot predict that one hash is greater than another. If you have the names foo and bar and you compute a hash for each, the results are, if not truly random, at least unordered values — so you can't apply greater-than or less-than to them; you can only test them for equality. So a hash index is used for the equality operator only. For the B-tree we wrote USING btree; here we write USING hash, and this part is compulsory, because if you don't write USING hash, Postgres creates a B-tree index. So to explicitly create a hash index, you explicitly specify USING hash. In the query plan you can then see idx_hash, the hash index, being used. (If you look at where I describe my table bar on the slide, there's a typo there — sorry; idx_hash is the hash index, and I'll correct it.)

Postgres has another index type: BRIN, the block range index. You might ask: isn't it enough to have a hash index and a B-tree index? Most of the time they fulfill your requirements, but there's room for improvement in one case: when a column's values correlate with the physical location of the rows in your table. Let me explain. Suppose I'm storing information for January 2020, then February 2020, then March, April, May, June, July, and so on — and I never delete that information, I only keep inserting. Then the column has a correlation with the physical location in the table. Why? Because the data is stored sequentially.
Setting other concerns aside, this is sequential data. If you're storing data in physical order like that, Postgres can calculate that a given physical region of the table — say, from here through December — contains a given range of values. The same goes for serial information: serial numbers 1, 2, 3, 4, 5, 6, with no gaps and no breaks, never jumping back to insert serial number 99 after 100. The serial numbers keep expanding, so the column correlates with its physical position on disk. If you know your data is like that, you can optimize your space and everything else with a BRIN index. How do you use one? CREATE INDEX ... ON the table USING brin, just as you did with USING hash. And what does a BRIN index store? A B-tree stores ctids and values; a hash index stores hash values; a BRIN index stores, per block range, just the page numbers plus the minimum and maximum value of the column in that range. That's it: you know this page range contains values between that minimum and that maximum. With just a few integers, bigints, or dates, you get the whole picture of those pages.

Here's an example. On the left side, a sequential scan: 7397 milliseconds — about seven seconds. On the right, I used a BRIN index. Have you noticed that only 4.2 milliseconds were spent on that query? Why? Because I used a BRIN index, and my column has a physical correlation with the disk: dt is appended date by date, one by one, and I never go back to earlier dates. So Postgres knows that a given page range contains dates from this date to that date.
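The BRIN setup just described might look like this (the dt column and date range are taken from the talk's scenario):

```sql
-- BRIN works well when dt correlates with physical row order,
-- e.g. rows are only ever appended in date order and never backfilled.
CREATE INDEX idx_brin ON bar USING brin (dt);

-- The planner consults the per-range min/max summaries and skips
-- whole block ranges that cannot contain matching dates:
EXPLAIN ANALYZE
SELECT * FROM bar
WHERE dt BETWEEN '2020-03-01' AND '2020-03-31';
```

If rows were later updated or inserted out of date order, the min/max ranges would widen, fewer block ranges could be skipped, and the BRIN advantage would erode — which is exactly the caveat the speaker gives next.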
So it doesn't need to visit each and every tuple to check the values; it already knows that this page range contains this date to that date — it has the min/max summary for all the values in between, so it doesn't read them all. That's why it's so quick. But the problem is: if you break that physical correlation, it won't work. So make sure — check carefully — that your data really is physically correlated with the disk; then you can get really good performance here.

Besides the speed, look at the sizes. I created three indexes on the same column — B-tree, hash, and BRIN. The B-tree took about 21 MB, the hash about 48 MB, and the BRIN only about 48 KB. So it's really fast, and it takes very little space.

The next index type is GIN, the generalized inverted index. We've already studied three index types; this one handles the case where we need to index inside a composite value. I'll give you an example with a jsonb document. SELECT DISTINCT name, ... FROM bar LIMIT 5 — and I want to select rows where, inside the document, name equals 'Alex'. Is there a way to create an index on that? With B-tree, hash, or BRIN, the only thing I can do is index the document column as a whole; there's no way to index the individual values inside the composite value. That's where the GIN index comes in: CREATE INDEX idx_gin ON bar USING gin (name). When I run a query asking whether the document contains name: 'Alex', the query plan without the index is a sequential scan on bar, and you can see the time is about one second.
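A self-contained sketch of a GIN index over a jsonb document column (the column name doc is my assumption — the talk doesn't show the table definition):

```sql
CREATE TABLE bar (id integer, doc jsonb);

-- GIN indexes the keys and values *inside* each document
CREATE INDEX idx_gin ON bar USING gin (doc);

-- The jsonb containment operator @> is GIN-indexable:
EXPLAIN SELECT * FROM bar WHERE doc @> '{"name": "Alex"}';
```

A plain B-tree on doc could only compare whole documents; the GIN index is what lets the containment predicate avoid a sequential scan.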
On the right side, you can see the index idx_gin being used, and the time is about 475 milliseconds — almost half the time of the sequential scan. So for jsonb documents and other composite values, you can use a GIN index.

One more index type: GiST, the generalized search tree. It's actually a framework — a tree-structured access method, an indexing framework — used to index complex data types. What are complex data types? Finding a point within a box, for example; no other index can do that. It's also used for full-text search and for things like integer arrays.

So we've covered B-tree, hash, BRIN, GIN, and GiST. B-tree is used for most queries and most data types — it covers almost everything. But in some conditions you want hash, which is used for equality operators; BRIN, for faster scans and low disk usage when you have sequential data correlated with the disk; GIN, for documents and composite values; and GiST, for complex types and full-text search.

Now, the exception I mentioned earlier, about reading information from the index. Suppose I ask you to count occurrences of the word Alex using a book's index. There are two ways to do that. One way: go to the end of the book, find Alex in the index, get the page pointers, then go back into the book, read those pages, and count the word. The other, faster way: when the index itself already tells you everything you need — how many pages contain the word Alex — you count directly from the index and give me the answer without ever going back into the book. This is called an index-only scan.
It means that when everything your query references — the columns in your WHERE clause and in your target list — is part of the index, the query will not touch the heap at all. It reads the values directly from the index and gives them to you, instead of following the heap pointers and reading the heap. It's a performance feature: a sequential scan over the heap is slow, an index scan is faster, and an index-only scan is faster still — but the one condition is that all the columns must be in the index.

Here's the query. My table bar has three columns — id, name, and dt — and I have an index on (id, name). In the first query I select three columns: id, name, and dt. The index only covers id and name, so when I query the table it uses a plain index scan: Index Scan using idx. In the second query I select only id and name, WHERE id falls between two values. id is in the index, and name is in the index — every referenced column is present — so Postgres reads the information directly from the index and gives it to you: an index-only scan. When every column you touch is in the index, you get all the information from the index. I think that's clear.

Now, I wrote some queries, because people were asking for them — queries to detect duplicate indexes and things like that. Here they are.
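The index-only-scan contrast above, as a sketch (column names follow the talk; the BETWEEN bounds are my own placeholders):

```sql
CREATE INDEX idx_id_name ON bar (id, name);

-- Index-only scans consult the visibility map, which VACUUM maintains
VACUUM bar;

-- dt is not in the index, so the heap must be visited: plain Index Scan
EXPLAIN SELECT id, name, dt FROM bar WHERE id BETWEEN 100 AND 200;

-- Every referenced column is in the index: Index Only Scan
EXPLAIN SELECT id, name FROM bar WHERE id BETWEEN 100 AND 200;
```

The VACUUM step matters in practice: on a heavily modified table with a stale visibility map, Postgres may still have to check the heap for tuple visibility even during an "index only" scan.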
If you run this query, you can see the index key — the column name — and at the end the relation name bar, two index entries, and the index access method, btree. So you have two B-tree indexes on the same index key, the same column, just with different names. Postgres lets you do that: you can create an index with a different name using the same column and the same definition. What's the use of that? None — you're wasting space. Two indexes on the same column of the same table, with different names. This query gives you that information: this table has two indexes on the same column. You can also write your own queries to detect where one column is covered by multiple indexes. Here I'm only detecting exact duplicates — a separate index on the same single column. If that column is instead part of another, larger index, that can be fine; sometimes you want that, sometimes not. But a second single-column index with the same key on the same table under a different name is completely unnecessary.

Here's the next query. Some people asked how to know which operators can be used with which index access method. In this query, just fill in the access-method name — write btree or hash — and it will list the operator names, like the equality, greater-than, and less-than operators supported by that index type. For GIN, since it has a smaller list, I just pasted the information directly here.

Index statistics: pg_stat_statements and pg_stat_user_indexes hold useful information, and you can query or combine them to learn about your indexes. For example, unused indexes. Here I'm querying pg_stat_user_indexes. After some time — four days, five days, a month — I check my statistics and see that an index I created a month ago has never been hit by any of my queries.
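Two representative health-check queries of the kind described above (these are my own minimal versions, not the speaker's exact slides):

```sql
-- Exact-duplicate indexes: same table, same key columns, different names
SELECT indrelid::regclass              AS table_name,
       array_agg(indexrelid::regclass) AS duplicate_indexes
FROM pg_index
GROUP BY indrelid, indkey
HAVING count(*) > 1;

-- Indexes never scanned since statistics were last reset
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;
```

For the second query, remember that idx_scan counts since the last statistics reset — check how long the stats have been accumulating before concluding an index is truly unused.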
And it's taking gigabytes of space. Why is that index, unused for months and months, still there wasting my disk? There can be two problems here. One: the index isn't being hit because of a problem in the query. Two: you created an index that was never needed. Either your queries are wrong, or your index creation was wrong, or the index is simply unused. So fix your problem: if idx_scan is zero, the index has never been scanned — drop it, or fix your queries to use it, or fix the index itself. Otherwise there's no point keeping it.

I think we've covered a lot in the time we had — that's 53 minutes already — so I have a few minutes for questions.

Hi, yes. We've gotten a ton of questions, and I have to applaud everybody who was online, because people have been going out of their way to answer questions in the moment — great job, y'all. I've pulled a few. The first question: how do I know when to use the default B-tree and when to use hash?

I was quite clear about that. When you want to query with greater-than, less-than, or their equal variants — say, id > 10 — you cannot use a hash index, because hash won't work for greater-than and less-than; you have to use a B-tree. If you only use equality operators — name equals this, id equals that — then you can use a hash index. But a B-tree can be used anywhere, even for equality operators; hash is for equality only.

Could you build both a hash index and a B-tree index, using a B-tree as the underlying data structure? Actually those are two separate things: when you create an index, you create either a B-tree index or a hash index — with hash, you are hashing the value.
If you want to hash the value and then put a B-tree on top of it, as far as I know that is not possible. Right now you have two separate options: you can create a B-tree index, or you can create a hash index; there is no combined option that I am aware of.

What is a generalized inverted index? As I said, you use a generalized inverted (GIN) index when you want to index composite values. What is a composite value? A value that contains multiple values. It is not a simple value like a name column, where the name can be Alex, Bob, and so on; it is something like a document. If you look at this document, it is not simple, it is a composite value: one column contains name = Alex and phone = 333222, and the phone field itself contains several values. For composite values like this, you create a GIN index.

Great. Why would PostgreSQL allow two indexes on the same column? Is there a use case? I don't know; I would ask the same question. That is a good question, actually. I will ask somebody who is involved in that part of the code why it is allowed. But at least in this version you can do it: the same index on the same column, just under a different name.

Great. Two more questions here. How do you easily find which index is corrupted? A few ways, as I said. First, use your own judgment: create the index properly, according to your data. Second, look at your statistics: how many times has the index been hit? And third, manually check the performance of your index.
You can create a different index and compare, and see whether you are actually getting better performance from the index. All three points are important: creating an index that matches your data is what gives you the benefit; testing that the index actually delivers that performance matters too; and third, if after some time you see that your indexes are not being hit, there may be a problem with the query or with the indexes themselves. I showed that earlier: if you look here, you have an index whose scan count is zero, or only four or six, so you hit the index very rarely and most of the time you do not. You can also write more queries of this kind, for example to find cases where you are always doing a heap scan and never hitting the index. These queries are just examples; you can write your own. I have a blog post on the Percona blog where I have written up the nine most-used queries for detecting problems with your indexes, nine or ten fairly complex queries. Run them and they will give you information about your indexes. Just go to my blog post and you can see them.

Okay, and our last question: will new values get indexed in a partial index if they meet the condition, or will only existing values get indexed? Values that satisfy the WHERE condition will be indexed, and values that do not satisfy it will not be. That is the answer, and it applies to new values too: a row added later that meets the condition will be added to the index.

Okay, well, those are all the questions I've pulled. Neal is currently asking where he can find the blog mentioned by Ibrar. I searched for that as well, and here it is. Okay, great. Oh, thank you for putting that in the chat.
So with that, Ibrar, I want to thank you so much for taking the time with us today. To all of our attendees and participants: thank you for spending your mornings, evenings, or afternoons with us. I hope everyone stays safe and healthy, and I hope to see you at the next webinar. So cheers, thanks everyone. Thank you, everybody. Bye.