I think it was a good thing that he presented something related to real-time solutions. So hi, I'm Hussain. I'm from Myntra; I work in the search team. I've been working on these auto-suggest improvements for the last one year. The idea behind it was that, during the search session itself, we wanted to make sure that the suggestions are ranked based on what kind of intent you have shown in that session. So the requirement was to get the events in real time, process them, and update the context, or intent, of that user in real time, so that whenever the next session event happens, we are able to adjust our results based on that.

Just going by the scale of the problem: there are a lot of events involved in a session. We took events from search to the list page to the cart, all of the events together, and tried to map them onto a single map. The second thing was that we wanted to try a lot of sophisticated models, but instead we went for something very basic, based on the definition of a context. A context is something which grows over time and switches when you're done with it. So we wanted to create a vector which grows with time and, when the time comes, switches over. We went ahead with a very simple exponential decay: we found the similarity between events using cosine similarity, grew the vector as long as the similarity was good, and as soon as we saw a lower similarity, we would decay it. It sounds very simple, but it eventually paid off. Our CTRs for autosuggest went up by between 2 and 5%, and we were able to showcase more relevant content to the user. And that was a good thing in itself.
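The grow-then-decay context vector described above can be sketched roughly as follows. This is a minimal illustration, not Myntra's actual implementation: the parameter names (`decay`, `threshold`) and the use of plain Python lists for event embeddings are assumptions for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SessionContext:
    """Running intent vector for one search session.

    `decay` (hypothetical name) is the exponential decay factor applied
    when a new event no longer matches the context; `threshold` is the
    cosine-similarity cutoff below which we treat it as a context switch.
    """

    def __init__(self, dim, decay=0.5, threshold=0.3):
        self.vec = [0.0] * dim
        self.decay = decay
        self.threshold = threshold

    def update(self, event_vec):
        if not any(self.vec):
            # First event of the session seeds the context.
            self.vec = list(event_vec)
            return
        sim = cosine(self.vec, event_vec)
        if sim >= self.threshold:
            # Similar event: the context "grows" by accumulating it.
            self.vec = [c + e for c, e in zip(self.vec, event_vec)]
        else:
            # Dissimilar event: decay the old context so the new
            # intent quickly dominates (the "switch").
            self.vec = [self.decay * c + e for c, e in zip(self.vec, event_vec)]
```

Each incoming session event (search, list-page view, add-to-cart) would be embedded into a vector and fed to `update`; the resulting `vec` is what the autosuggest ranker would score candidates against.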
So the next takeaway for us is that we want to try out LSTMs and see whether, in real time, we can take that much data and predict a similar thing better. And I think that's it. I'd welcome feedback from people if there is any better solution.

Excellent. So, simple solutions to a complex problem. A round of applause for him. Questions for him? We have a few minutes before we close.

How do you check for typos? Is that captured in this kind of autosuggestion, or is there a separate model, like the Levenshtein distances people use to correct spellings?

Yeah, so that's an interesting one. In real time we don't do the spell check. What we end up doing is that during search itself we do a lot of spell check. We take that as feedback and put it into autosuggest, to make sure that the next time the user searches for something, the spell check has already happened for it. So during autosuggest we do not do the spell check, but we take the feedback from search, and the spell check that happens there, and use it. So it stays in sync with that.

So the corrected query becomes another item that you are finding similarities for, if I understand that right?

Right. It might, eventually.

Okay, more questions? Oh, yeah. Go ahead, Sushant.

Nice talk. The main thing is that your user base is an Indian user base, so how do you handle the multi-language aspect? Some people write in Hindi or other languages, or "Delhi" versus "New Delhi". How do you handle that?

Again, I'll go back to the fact that our autosuggest is powered by search. As we are also translating the search queries in some ways, that becomes an input to the autosuggest itself.
So we have a little bit of translation built into our Myntra search, where we continuously take words in different languages and map them to their meaning in English, or whatever the target is. Ideally that goes as feedback to autosuggest. As of now, we don't have a lot of vernacular data; I think we all need to do better at that.

More questions? No? Okay.
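As an aside, the edit-distance measure the audience question referred to, Levenshtein distance, is the textbook dynamic-programming algorithm sketched below. This is a generic illustration of the technique named in the question, not anything from Myntra's pipeline.

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string `a` into string `b`, computed
    with the classic two-row dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                # delete ca
                cur[j - 1] + 1,             # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = cur
    return prev[-1]
```

A spell-correction layer would typically compare a misspelled query against a dictionary of known queries and pick candidates within a small edit-distance budget.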