If anyone from the audience wants to add their own stories and learnings, we'd be happy to hear them. I have a funny one; why don't you go ahead and share yours after that.

So mine was a B2B app, which means you're sitting in Bangalore while most of the customers are on the east coast or west coast of the US. It was completely normal that whenever you added a new feature, you needed a database migration; we were using Python and Django at the time, so that meant Django migrations. The idea was to first run a migration that adds the new column to the table with all null values. The application doesn't yet know the new column exists, so your APIs keep working smoothly and nothing in the product breaks. That was the assumption everybody was working under. One time we had to run a migration at night India time, when there were no customers around. Unfortunately, some programmer at a customer's organization had discovered our API, figured out an API key, and set up a scheduled job, around 2am I think, to upload some details. As a result, while we were running the migration, the migration script suddenly got stuck because of a lot of table locks.
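That incident can be reproduced in miniature. The sketch below is a hypothetical stand-in using Python's sqlite3 module; the real setup was Django on Postgres, where you would typically reach for the `lock_timeout` setting and `pg_terminate_backend` instead. One connection plays the customer's scheduled upload holding a long write transaction; the other plays the migration adding a nullable column, which times out instead of hanging, and succeeds once the blocking transaction goes away.

```python
import os
import sqlite3
import tempfile

# Hypothetical miniature of the incident: sqlite3 stands in for the real
# Django-on-Postgres setup purely to show the locking behaviour.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

app = sqlite3.connect(path)
app.execute("CREATE TABLE uploads (id INTEGER PRIMARY KEY, payload BLOB)")
app.commit()

# The customer's scheduled job: a long-running write transaction.
app.execute("BEGIN IMMEDIATE")
app.execute("INSERT INTO uploads (payload) VALUES (x'00')")

# The migration: add the new column as nullable so existing code keeps
# working -- but give up after 0.2s instead of hanging behind the lock
# (the role Postgres's lock_timeout setting would play).
migration = sqlite3.connect(path, timeout=0.2)
blocked = False
try:
    migration.execute("ALTER TABLE uploads ADD COLUMN checksum TEXT")
except sqlite3.OperationalError:
    blocked = True  # "database is locked": the upload holds the lock

print("migration blocked:", blocked)

# The eventual fix in the story: kill the blocking connection, then rerun.
app.rollback()
migration.execute("ALTER TABLE uploads ADD COLUMN checksum TEXT")
cols = [row[1] for row in migration.execute("PRAGMA table_info(uploads)")]
print(cols)  # ['id', 'payload', 'checksum']

migration.close()
app.close()
os.remove(path)
```

With a lock timeout in place, the migration fails fast with a clear error instead of silently queueing behind a 15-minute upload and taking the rest of the write traffic down with it.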
We were trying to figure out where these table locks were coming from, and we found one particular client connection hanging. We couldn't abort the transaction because we had no rollback plan; for us a migration would normally go through in two to three minutes, so we had never had to worry. But this person was uploading a lot of media files, it went on for 10 to 15 minutes, and the DB was stuck. It turned into a kind of outage: new updates were not going through and a lot of 500 errors were happening. We had to kill the connection and reset the migration, and that is how we fixed it. Not something to be proud of, but a lesson we learned the hard way. Swanand, do you want to go?

Okay, cool. The reason this story is very funny is that I caused it, and the reason I learned a bunch of things from it is that we had been smart enough to design the database in a way that it did not do lasting damage. So there are two lessons in this story. For context: at my previous company we had this concept of seller scores. The app was used by real estate agents, who would take various actions on the contacts they had uploaded. You can think of it as a Tinder for real estate agents: they would swipe left or right on their contacts based on a score, which was the likelihood of that contact buying a house in the next year. We had routine deployments and would constantly refine the algorithm that generated the score; we had data scientists working around the clock improving our model. Then at some point, I think in July, we realized that something in the last algorithm deploy was not quite right, and the scores were wrong.
All the scores created after a particular date were incorrect, so a lot of realtors were getting incorrect matches or irrelevant results. There was a lot of last-minute panic debugging before we settled on a plan, but once we understood the problem we figured: this is generated data, let's just wipe it and regenerate the scores. It seemed pretty straightforward: run a batch job to delete all the scores from September, then rerun the ETL job to regenerate them; in the meantime the data science folks would fix the algorithm, and within a couple of hours everything should be okay. The ETL pipeline was batch-based and took about an hour or two to run, and things were looking okay. So I wrote up the query and ran it, and suddenly we realized every single score in the database had vanished, and we were getting ten times more complaints. I couldn't figure out what was going on. We had double-checked everything and the query seemed okay, but we couldn't figure out why the data was missing. Again, this was nighttime in India, around 2am for me, while for my pair, my manager on the other side, it was daytime, and both of us had double-checked and couldn't figure out what the problem was. It turned out I had been copying queries written in Vim and pasting them into a Rails console, or a psql console, and executing them. The query read something like DELETE FROM seller_scores WHERE ..., but only the DELETE FROM seller_scores part had been copied; the WHERE clause never made it. And without a WHERE clause, a DELETE query just deletes everything. When we realized it, it turned out that
in the copied text there was a newline character, so the WHERE part was treated as a separate line. The console waits for a newline character to execute your statement, so the moment it saw the newline it executed the query, and we ended up wiping the table. So the lesson learned was this: never copy-paste; always have stored procedures or predefined tasks to change data. And the reason we did no lasting damage, and did not have to resort to backups, is that the schema was very nicely designed: all the data we lost, or supposedly lost, was completely generated data that we could recreate; there was no transactional data in it. Sometimes you make mistakes, but if your schema design, your data design, is robust, you can limit the damage of your mistakes. For example, transactional data has to be handled one way, irrecoverable data another way, and generated data yet another way. So that's it, that's the lesson.

Thanks, Swanand. Anyone else? Nablu or Prathamesh, do you have a SQL war story?

Not really a war story, but we have a lot of time series data, and we have to manage the age of the data very diligently, otherwise it just keeps growing. So I don't have a war story as such right now, just that you have to be very diligent about your queries and indexes, so that the model or schema you've designed actually serves all the use cases.

So how often do you delete the data, or do you just archive it?

We have a rolling mechanism: any data beyond the last 28 days gets dropped automatically. We use TimescaleDB for our time series database
extension, and anything beyond 28 days just gets rolled over and automatically deleted. We also use separate tables for archived data and fresh data. That strategy has helped us serve the most recent data, and then the last seven days', 14 days', and last month's data. Queries over a month's data are a bit slower than queries over fresh data, but in that case users also expect the data to come a bit more slowly, because it is a month's or two weeks' worth, so that is manageable. With TimescaleDB, because of the compression algorithm it has, managing the queries is very easy. Another advantage is that it works alongside any normal PostgreSQL table; you don't have to make everything a hypertable or adopt the general TimescaleDB nomenclature, and you can actually join a normal PostgreSQL table with a TimescaleDB hypertable, so that works out pretty well. The disadvantage is that a hypertable, once set in stone, cannot really be changed; if you want to change the schema, you have to drop the table and recreate it. But we have done that two or three times, so it is no longer a problem; we basically drop the table and recreate it from a backup-and-restore mechanism. The main advantage is that you can just write queries in raw SQL, pure SQL; there is no different syntax or language to learn, it is just PostgreSQL syntax, and you can make it work with any framework or library you use for web development.

Thanks, Prathamesh. Anyone else want to share their stories?

Yeah, I have one. This is less of a war story; I don't really know how to categorize it, it just didn't go well for us. What happened
is that by the time I joined my organization, a migration from MongoDB to Postgres was already underway, and why the migration was started and how it would go had not really been planned. What we decided was to run some of the regions on MongoDB and some on Postgres, and we ended up with our app running on both MongoDB and Postgres at the same time. At one point we realized we just couldn't keep doing this, so we decided to carry out the whole migration. The migration did not go smoothly, and we lost a customer. After that, everything was put on hold; we were considering rolling everything back to MongoDB, and we stopped all future development. We then spent about two months working through the migration, and after those two months it went through completely and we moved entirely to Postgres. So my war story would be: please plan out your stuff, or you'll end up caught in our situation.
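Swanand's lesson earlier, never paste ad-hoc DML, use predefined tasks instead, can be sketched as a small guard. Everything below is hypothetical illustration, not code from the talk: the helper name and table layout are made up, and a real codebase would prefer parameterized, predefined tasks or stored procedures over the deliberately crude string inspection shown here.

```python
import sqlite3

def guarded_delete(conn, sql):
    # Crude, hypothetical safeguard: refuse to run a DELETE that carries
    # no WHERE clause, so a truncated paste cannot wipe a whole table.
    stmt = sql.strip().rstrip(";").lower()
    if stmt.startswith("delete") and " where " not in stmt:
        raise ValueError("refusing DELETE without a WHERE clause")
    return conn.execute(sql)

# Hypothetical stand-in for the seller scores table from the story.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seller_scores (contact_id INTEGER, score REAL)")
conn.executemany("INSERT INTO seller_scores VALUES (?, ?)",
                 [(1, 0.9), (2, 0.4), (3, 0.7)])

# The truncated paste from the story gets rejected...
try:
    guarded_delete(conn, "DELETE FROM seller_scores")
except ValueError as err:
    print(err)  # refusing DELETE without a WHERE clause

# ...while a properly scoped delete still goes through.
guarded_delete(conn, "DELETE FROM seller_scores WHERE score < 0.5")
print(conn.execute("SELECT COUNT(*) FROM seller_scores").fetchone()[0])  # 2
```

The check is no substitute for the talk's real advice (predefined tasks and a schema that separates generated from transactional data), but it turns the silent wipe into a loud failure.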
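Prathamesh's 28-day rolling retention can also be sketched. In TimescaleDB itself this is typically a retention policy on a hypertable, something like `SELECT add_retention_policy('metrics', INTERVAL '28 days');` (assuming TimescaleDB 2.x); the sqlite3 snippet below only mimics the effect of the sweep, with hypothetical table and column names.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical table; sqlite3 stands in for the real TimescaleDB
# hypertable purely to show the effect of a 28-day retention sweep.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts TEXT, value REAL)")

now = datetime(2024, 6, 1)
rows = [((now - timedelta(days=d)).isoformat(), float(d))
        for d in (1, 10, 27, 29, 40)]
conn.executemany("INSERT INTO metrics VALUES (?, ?)", rows)

# Drop anything older than 28 days; ISO-8601 strings compare correctly.
cutoff = (now - timedelta(days=28)).isoformat()
conn.execute("DELETE FROM metrics WHERE ts < ?", (cutoff,))
print(conn.execute("SELECT COUNT(*) FROM metrics").fetchone()[0])  # 3
```

The real policy runs automatically in the background; the point of the sketch is only that retention is a scheduled delete keyed on the time column.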