Glad to see you all sitting here at this hour on a Friday, and it's a privilege to get the opportunity to conclude this great event. Today I'm going to cover how we find the "Dil Ki Deal" at Snapdeal for you. About me: I'm Gagandeep Juneja, with around seven years of experience in software development. I work as a lead software engineer at Snapdeal, and I'm also a PMC member of Apache Blur (incubating). This is how our agenda for the session looks: an introduction, a brief discussion of collaborative filtering, how we capture data, our solution, technical challenges, and what's next. And I'll definitely try not to let the bell ring. So what are recommendations? For an e-commerce portal like Snapdeal, recommendations are about predicting new products or items a user would like to buy or see on our website. What problems are we solving by doing this? One is a personalized user experience: if you come to our website with something in mind and see it directly in the recommendation section on the homepage itself, how great would your experience be? And obviously, from the company's standpoint, increasing the conversion rate. We've already had a talk on collaborative filtering, but does anybody not have an idea of what collaborative filtering is? Okay, one or two hands, so we can cover it quickly. Collaborative filtering is the idea where we try to predict items for a user based on other users who, we somehow infer, have interests similar to that user's. In the diagram, which I hope you are able to see, we have three users, user one, user two and user three, and four products: grapes, strawberries, watermelon and oranges.
In our database we have the history that user one has purchased grapes, watermelon and oranges, and user two has purchased grapes and watermelon. From this history we can infer that grapes go very well with watermelon, and if a third user comes to our website and we know that user has already purchased a watermelon, we can easily recommend grapes to him based on the historical data we have. From a mathematics point of view, we just need to create one matrix, an item-to-item matrix, which represents how many times item one and item two have been sold or bought together. You can see that the cell for item one and item three has a score of two, so they have been sold together twice. And if we want to generate a recommendation for user three, we just need to take a dot product with user three's purchase history, and in the green cell you can see that we can recommend item one to the user. This is a very brief, basic idea of how collaborative filtering works; there are a lot more details to it, but this is the core. Further in the session we are going to discuss a lot about data: how we are generating recommendations, why we want to do that, and what new things we are trying with the data. For that, let us first take one example, a snapshot of a user's browsing history, so that we can refer to it for our upcoming algorithms. This is the snapshot of a user's history with Snapdeal.
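Going back to the fruit example, the item-to-item matrix and the dot product against a user's purchase history can be sketched in code. This is a toy illustration of the technique as described, not Snapdeal's implementation; the function and item names are mine.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_matrix(purchase_histories):
    """Count how often each pair of items was bought by the same user."""
    counts = defaultdict(int)
    for items in purchase_histories:
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1  # keep the matrix symmetric
    return counts

def recommend(counts, user_items, candidates):
    """Score each candidate by its co-occurrence with the user's items
    (the dot product of the matrix row with the purchase vector)."""
    scores = {}
    for c in candidates:
        if c in user_items:
            continue  # don't recommend what the user already bought
        scores[c] = sum(counts.get((c, u), 0) for u in user_items)
    return sorted(scores, key=scores.get, reverse=True)

histories = [
    ["grape", "watermelon", "orange"],  # user 1
    ["grape", "watermelon"],            # user 2
]
counts = cooccurrence_matrix(histories)
# User 3 has bought a watermelon; grapes co-occur with it twice,
# so grapes come out on top.
print(recommend(counts, {"watermelon"}, {"grape", "strawberry", "orange"}))
```

The co-occurrence count plays the role of the cell score in the slide's matrix; real systems normalize these counts, but the raw version is enough to show the mechanics.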
Let me summarize this data for you. The user visited bottles nine days ago, mosquito nets 11 days ago (a couple of times), mosquito mats 12 days ago, purchased an AC from Snapdeal 15 days ago, T-shirts 18 days ago, visited some ACs 20 and 21 days ago, and a few months back, around 210 days ago, he was looking at phones on Snapdeal. This is the kind of raw data we have. Summarizing: the user is currently interested in bottles and mosquito nets, maybe because of the summer or rainy season these days; he has already purchased a T-shirt and an AC from Snapdeal; and some months back he was looking for mobiles. Now let's run the plain collaborative filtering we have already discussed on this data and see what recommendations come out. The recommendations are: some ACs (the user has already purchased an AC), T-shirts, then bottles, mosquito nets, more bottles, and some irrelevant stuff. Do you think this is the right thing to show the user in the recommendation section? If he has already purchased an AC from us, there is no point showing him ACs, because there is very little chance he'll purchase an AC again. T-shirts, yes: if he has already purchased a T-shirt, we can show him more T-shirts. The bottles and mosquito nets he was recently interested in are coming a little lower in the list. And some irrelevant stuff might come in because of the behavior of other users captured through collaborative filtering, even though this user might not be interested in it. So what are the problems we faced with this plain vanilla collaborative filtering approach? One is that old purchases, as we have already seen with the AC and T-shirts, are dominating the recommendations we generate, at the expense of the user's recent interests.
One important point I'd like to mention here, which is going to be the basis of the solution we developed: user interest decays with time. If a user was interested in an iPhone, or some other phone, seven months ago, there is no point recommending him similar products today, right? If we revisit the collaborative filtering technique, it simply works on a feedback-capturing mechanism. This is well proven at Netflix or Amazon; Netflix has explicit feedback, a user's rating for a movie. But for something like a bottle, we can't ask every user who visits a product to rate it or rate their interest in it. So we have a challenge in capturing explicit feedback. To make collaborative filtering work, we capture implicit feedback that is static for all users. We have multiple activity types on our website: a user visit, a user adding a product to the wishlist or cart, or a user purchasing a product from us. For all these activities we give a static score: one for a view, two for a wishlist, three for an add-to-cart, and four for a purchase. Based on that we can generate recommendations, but this feedback mechanism is not capturing time, which is the key here, because user interest decays with time: a user might have been interested in an iPhone 5 seven months ago, but today he might be interested in an iPhone 6. The second problem is unrelated-category products: as we have already seen, chappals and other products the user might not be interested in are coming up in the recommendations. And third, there is no point generating recommendations from every single activity: a user might have ten activities in a particular category or subcategory on our website.
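The static implicit-feedback scores above, and the time decay the talk builds toward, can be sketched together. The score values are the ones from the talk; the exponential form and the 30-day half-life are illustrative assumptions, not Snapdeal's actual function.

```python
import math

# Static implicit-feedback scores from the talk:
# view=1, wishlist=2, add-to-cart=3, purchase=4.
ACTIVITY_SCORE = {"view": 1, "wishlist": 2, "cart": 3, "purchase": 4}

def decayed_score(activity, days_ago, half_life_days=30.0):
    """Static activity score weighted by exponential interest decay.
    The 30-day half-life is an assumed value for illustration."""
    return ACTIVITY_SCORE[activity] * math.exp(
        -math.log(2) * days_ago / half_life_days
    )

# A 210-day-old phone view now scores far below a 9-day-old bottle view,
# even though both carry the same static score of 1.
old_phone_view = decayed_score("view", 210)
recent_bottle_view = decayed_score("view", 9)
```

With purely static scores the two views would tie; the decay term is what lets recent interest dominate old interest.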
If a user has many activities in a particular category, we don't want to show him products related to every one of them; we just want to show him a few products in that category. This is again a challenge while working with collaborative filtering. We can overcome it, and let's see how we are doing it. First, going further into this discussion, let us see how we are capturing data at Snapdeal. We have a platform, Beauty Line, a unified data platform where all kinds of events from the website, mobile apps and so on come into a single place. For us it is a single point of contact to get all the data. Another major challenge we faced: from history, we have seen that users tend to log in only at the end of the workflow, while checking out. So the activities that happened in between, like visits, are captured against cookie IDs only, not email IDs, because the user was not logged in to our website. Migrating those activities from cookie to email is a challenge in itself, because the same browser might be used by two users, a single user might use two browsers or multiple machines, and we may need to map mobile and web data to a single email ID. For that we have an intelligent system in place which, based on the user's buying and visiting patterns, maps these activities to email IDs. This is how we generate data for our recommendation process. Now, our recommendation solution: we have a lot of data, visit data, wishlist data, cart data and purchase data, but as I've already discussed, we don't want to generate recommendations for each and every activity the user did on our website. For that we have written one component called the seed generator, which picks the best item in each category for us.
How it calculates the best item, we will discuss in a coming slide, but getting the best seeds in every category or subcategory is done by this component. Then we have two further components: the exploiter and the explorer. The name "exploiter" sounds a bit negative, but what we try to do in the exploiter is simply generate recommendations based on the user activity we have. Here we try to expand the seeds: we show products similar to those seed items. We are not doing any machine learning or anything fancy here; we are just exploiting what the user did on our website. In the explorer part, if a user has been inactive on our website for many days, there is no point showing him similar products again, because if he was interested in an iPhone 5 seven months ago, there is no point showing him something similar today. Instead, in the explorer we try to capture the user's interests and, on the user's behalf, explore products; we will see how in a coming slide. The merger is a component where we merge all the results coming out of the exploiter and the explorer. We are not ordering products with a ranking mechanism; we just use a global score mechanism, so that each product takes its own place in the recommendation feed. That is taken care of by the merger, and we will see it in detail in a coming slide. We also do some redundancy removal in the merger, and out of that we generate the final recommendations. Some key terms we are going to use very often in the coming slides: first, categories and subcategories. You must all be aware that our entire product catalog is very big, around 12 million products at Snapdeal.
We have divided that catalog into categories, subcategories and more granular levels, but for simplicity's sake, think of it as two levels: categories, and subcategories under a category. Second, based on the historical data we have, we divided subcategories into two types. One is where a single purchase happens, like the AC example: from historical data we know that people generally buy a single AC in a particular season. So based on that knowledge from our data we divided our categories into two kinds, single purchase and multiple purchase. Multiple purchase is where repeated purchases happen, like T-shirts: a user can buy multiple T-shirts. Next, research time: from historical data, we found out how much time a user generally takes to make a decision before buying a product. For ACs, we found that it's around 21 days. So we know that for up to 21 days the user is going to research in that particular category, and we want to boost that category a bit; that is where we use research time. Then expiry time: every product has its expiry. Based on the repurchase behavior in our historical data, we compute a term called expiry time, so that we can use it to remind the user: okay, you can buy this again now. The first component we have is the seed generator, where we try to pick the best item in each subcategory. The main idea behind picking the best item is, again, the user's dynamic interest. User interest changes with time, so we use this fact and try to find the best item with an exponential decay function: what kind of item would the user be recently interested in? It also uses multiple ingredients I've already discussed: research time, recency, which is also key there, and subcategory affinity.
Let's say in a particular subcategory some item comes out as the best seed. But across multiple subcategories, items might have the same score, or we may want to boost some item. For that we use a term called subcategory affinity: with a formula, we calculate the user's interest in that particular subcategory, and based on that interest we score the seeds accordingly. Again, we use exponential decay with time, because we know that interest decays. From the raw data we've already seen, the user was recently interested in bottles and mosquito nets, he made two purchases, an AC and a T-shirt, and he was interested in phones seven months ago. Based on that, these are the seeds we have picked, and for them we are going to generate recommendations. You can see the mosquito net the user visited 11 days ago and the bottle he visited nine days ago, but the bottle comes lower in the list than the mosquito net because of the user's subcategory affinity: the user tends to buy a lot in that particular category, so we want it ranked, or scored, higher than the bottle. The purchased AC belongs to a single-purchase category, but we will see how we utilize that space. T-shirts, again, allow multiple purchases, and then there is the phone he visited. These are in the order of their scores. The next component is the exploiter. Here we just try to exploit the user input we have. For us, the user inputs are the seed items generated by the seed generator, and we expand them. How do we find products similar to those seeds? The first technique we use is collaborative filtering.
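The seed-generator behavior described above, recency decay plus a subcategory-affinity boost, picking one best item per subcategory, can be sketched as follows. The decay constant, the affinity values and the research-window boost factor are all illustrative assumptions; only the overall shape (decayed recency times affinity) comes from the talk.

```python
import math

def seed_score(days_ago, affinity, in_research_window=False):
    """Exponentially decayed recency score, boosted by subcategory affinity
    and by an active research window. All constants are assumed values."""
    score = math.exp(-days_ago / 30.0) * affinity
    if in_research_window:
        score *= 1.5  # boost while the user is still researching the category
    return score

def pick_seeds(activities):
    """activities: list of (item, subcategory, days_ago, affinity).
    Return the best-scoring seed item per subcategory."""
    best = {}
    for item, subcat, days_ago, affinity in activities:
        s = seed_score(days_ago, affinity)
        if subcat not in best or s > best[subcat][1]:
            best[subcat] = (item, s)
    return {subcat: item for subcat, (item, _) in best.items()}

# The bottle was visited more recently (9 vs 11 days), but a higher
# subcategory affinity pushes the mosquito net above it, as on the slide.
seeds = pick_seeds([
    ("net-1", "mosquito-nets", 11, 0.9),
    ("net-2", "mosquito-nets", 12, 0.9),
    ("bottle-1", "bottles", 9, 0.5),
])
```

This reproduces the ordering from the example feed: higher affinity can outweigh a couple of days of recency.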
To find similar items, one technique we use is the item-to-item similarity from collaborative filtering. Another is content- and image-based similar products: based on brand, price, description and so on, we find similar products by content. We use image-based similarity only to a certain degree, because in some categories image-based similarity doesn't make much sense. The third is cross-selling: based on market basket analysis, we find which items go well together, the frequently-bought-together products, and we use that to cross-sell against the purchases the user has already made on our website. Candidate recommendations come out of this component. We also have a rule engine in place. As we have already discussed, two purchases happened: an AC, which belongs to a single-purchase category, and T-shirts, where multiple purchases can happen. For the single-purchase category, we utilize the space for cross-selling: for an AC, we are very interested in showing the user a stabilizer, which goes very well with an AC. For the multiple-purchase category, we utilize the space for both similar products and the FBT products that go well with that product. Next, the global weighted score. We have around 50 million users in our database, probably more, I'm not up to date with the numbers, but the context here is that only 1 to 2% of our entire user base comes to our website on a given day. We don't want to regenerate the feed for every user who has been inactive for a long period, so we want to run our components, the exploiter and the explorer, at different frequencies.
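The rule engine just described, which decides how to fill the recommendation slot created by a purchase, reduces to a small decision rule. The function name and return labels are mine; the single-purchase/multiple-purchase split and the stabilizer/T-shirt examples are from the talk.

```python
def slots_for_purchase(subcategory_type):
    """Rule-engine sketch: which recommendation sources fill the slot
    created by a purchase, based on the subcategory's purchase type."""
    if subcategory_type == "single_purchase":
        # e.g. an AC: don't show more ACs, cross-sell a stabilizer instead
        return ["cross_sell"]
    if subcategory_type == "multiple_purchase":
        # e.g. T-shirts: show both similar products and FBT items
        return ["similar_products", "frequently_bought_together"]
    raise ValueError(f"unknown subcategory type: {subcategory_type}")
```

In practice the category type itself comes from historical purchase data, as described earlier in the talk.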
To support running components at different frequencies, we have a global weighted score in place, so that we don't have to re-rank each and every product every time: we just give each product a score, and based on that score the product takes its place automatically in the entire recommendation feed for that user. This is the exploiter function we are using: a curve of score against activity age, where the score decays as the activity gets older, because we want the recent products to come up in the list. Then, system-generated recommendations: we have another component in place that we can use if we want to forcefully recommend something based on our business strategy or other factors. Initially we are using it for predicting user repurchases: if a user has already purchased some item and the system notifies us that the item's expiry has been reached, we can ask the user to buy it from us again. Again, the idea is the right product to the right customer at the right time. Now, the explorer. If a user has been inactive on our website for a long time, the data we have for that user is kind of stale, right? If he was interested in something seven or eight months ago, there is no point showing him similar products again. But we don't want to lose the information we have about that user. So here we just compute the category affinity of that user, so that based on that affinity we can recommend him the products trending in those categories today. If the user comes back after seven months and our data says he was interested in an iPhone 5 seven months ago, today we can recommend him an iPhone 6. The same applies if the user has stopped doing activity in a particular category or subcategory with us.
If the user is not doing any activity in the phones category today, we still want to recommend him products in that category, but obviously with a lower score because of the recency of his other activities. Exploiter versus explorer: since we have the global weighted score mechanism in place, the explorer-versus-exploiter balance takes care of itself. If the user has not been active in a particular category for a long period, the explorer takes the lead over the exploiter in that category, so products recommended by the explorer come above products recommended by the exploiter. We simply calculate the user's affinity with a formula that includes interest decay with time, and we discover the top trending products in each category with a sliding-window algorithm based on the product launch date, visit dates, and the number of times a product has been visited or purchased; these change very frequently, multiple times a day. Based on that, we try to update the explorer part of the feed multiple times a day. This is, again, a decay function: if the activity is older, the score decays with time.
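The sliding-window trending discovery described above can be sketched with a small class. The bucket count, the idea of one bucket per refresh interval, and the event weights are illustrative assumptions; the talk only says trending is computed over a sliding window and refreshed several times a day.

```python
from collections import Counter, deque

class TrendingWindow:
    """Sliding-window trending counter, refreshed several times a day.
    Window length and event weights are assumed values for illustration."""

    def __init__(self, max_buckets=8):
        # One Counter per refresh interval; old buckets fall off the left.
        self.buckets = deque(maxlen=max_buckets)

    def add_bucket(self, events):
        """events: list of (product, weight) pairs for the latest interval,
        e.g. weighting purchases more heavily than visits."""
        c = Counter()
        for product, weight in events:
            c[product] += weight
        self.buckets.append(c)

    def top(self, n=3):
        """Top-n products over the current window."""
        total = Counter()
        for c in self.buckets:
            total.update(c)
        return [p for p, _ in total.most_common(n)]

# With a 2-bucket window, last season's trending phone drops out
# once newer buckets arrive: the iPhone 6 replaces the iPhone 5.
w = TrendingWindow(max_buckets=2)
w.add_bucket([("iphone5", 5)])
w.add_bucket([("iphone6", 3)])
w.add_bucket([("iphone6", 4)])  # oldest bucket (iphone5) falls out
```

Because stale buckets expire, a product must keep earning activity to stay trending, which is also the lever used later against the "trending bubble" raised in the Q&A.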
Since we have generated a lot of recommendations from our different components, it's time to merge them. The exploiter generates some candidate recommendations and the explorer generates some candidate recommendations, but they all carry their own global score, so we don't need to do any ordering here; we just need to club them together. What we also do in the merger is remove redundant products: if the same product is coming from two seeds, maybe because of FBT or anything else, we find overly similar products and remove them. We also handle sold and inactive products: we take a snapshot of our inventory system very frequently, and we remove already-sold products from the feeds. Based on that, we generate the final recommendations. This is how the exploiter-versus-explorer graph works in the merger: you can see that up to 50 days, for that particular instance, the exploiter is taking the lead (blue is the exploiter), but after 50 days the explorer takes the lead, so products coming from the explorer will have a higher score than the exploiter's products after 50 days. Based on this algorithm, let's see the feed we propose. This is our feed. We know the user was interested a lot in mosquito nets, so we show some mosquito nets at the top of the list. The user purchased an AC just a few days back and is still within the research period, so just after the mosquito nets we show him a stabilizer. Then come the bottles; jeans, because the user has purchased a T-shirt from us and jeans go well with T-shirts; and an iPhone 6, because it is trending today, in place of the iPhone 5 that was trending seven months ago. This is the feed we generate. Now, what are the technical challenges we faced while building this solution?
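The merge-and-dedupe step described above reduces to a simple operation once every candidate carries a global score: union the candidate lists, keep the highest-scoring copy of any duplicate, and sort by score. The product names and scores below are illustrative.

```python
def merge_feeds(*candidate_lists):
    """Merge exploiter/explorer candidates by global weighted score,
    dropping duplicates and keeping the highest-scoring copy.
    candidate_lists: iterables of (product_id, global_score) pairs."""
    best = {}
    for candidates in candidate_lists:
        for product, score in candidates:
            if product not in best or score > best[product]:
                best[product] = score
    # No per-feed ranking pass is needed: the global score alone
    # places each product in the merged feed.
    return [p for p, _ in sorted(best.items(), key=lambda kv: -kv[1])]

exploiter = [("mosquito-net", 0.9), ("stabilizer", 0.7), ("jeans", 0.6)]
explorer = [("iphone6", 0.5), ("jeans", 0.4)]  # duplicate jeans, lower score
feed = merge_feeds(exploiter, explorer)
```

Filtering out sold or inactive products would be one more pass over `best` against an inventory snapshot, omitted here for brevity.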
Again, the big challenge was data, because the algorithms, collaborative filtering, content and image similarity and so on, take a lot of time; they are essentially O(n²) problems. And we want to update our feed frequently during the day. So writing the entire solution as a single job was not a good thing to do. Components like image-based similarity, content-based similarity and collaborative filtering we run daily, because those things are not changing much. But the exploiter and explorer parts of the feed we want to run multiple times a day, because if a user purchased or visited some item at 11 a.m., we want to show him similar or explorer products by maybe 1 p.m., since we are not real-time yet. Another challenge: the core idea here is that everything decays with time, right? If somebody did an activity today, on 17 July, tomorrow it's one day old and the day after it's two days old. So calculating the decay fresh every day was an issue. What we did is take a reference date, say 31 July, and calculate the decay against 31 July; until 31 July, for the whole month, the decay stays the same. Then on 31 July we run a full feed over all the raw data and recalculate the user decay functions, rather than running incrementally. This incremental behavior gave us a lot of challenges. Next, optimization and handling data sparsity: as I have told you, we have a big 12-million-product catalog.
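The reference-date trick described above can be sketched as follows. The key property is that decay computed against any fixed date preserves the relative ordering of activities, so scores stay comparable all month without daily recomputation. The dates, half-life, and function names are illustrative assumptions.

```python
import datetime
import math

# Fixed month-end reference date (illustrative); decay is computed
# against this date instead of against "today".
REFERENCE_DATE = datetime.date(2015, 7, 31)

def decay_vs_reference(activity_date, half_life_days=30.0):
    """Decay computed once against the fixed reference date. Until the
    reference date passes, these values need no daily recomputation,
    and the relative ordering of activities is unchanged."""
    age_days = (REFERENCE_DATE - activity_date).days
    return math.exp(-math.log(2) * age_days / half_life_days)
```

On the reference date itself a full recomputation runs over all the raw data, and a new reference date is chosen for the next month.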
All of those 12 million products are not sold on a daily basis, so while generating recommendations, data sparsity was a very big problem. We solved it by doing a lot of things, like grouping; those were the technical solutions we found. Then, tweaking the algorithms to increase relevancy. The decay functions we have seen are not that simple in practice, because we need caps: for an item the user is visiting today, we don't want the score to go beyond, say, 0.7, and we also want to restrict the lower end to some limit so that we don't run into underflow or similar problems. That needs a lot of tweaking. We also wanted behavior like this: if a user purchased an AC yesterday and visits the same product again three days later, we don't want to count it as a fresh visit, because the user might just be showing it to a friend or colleague, or might not have received the delivery yet. Here we wanted to treat it as part of the purchase, not as a revisit, and we did this by designing our decay functions so that, for a particular period, the purchase takes the lead. We have a lot of components in place, so another challenge was orchestrating multiple flows: the explorer runs six to eight times a day, the exploiter runs four times a day, and some others, like system-generated recommendations, run once a day. We need to orchestrate all those flows, generate warning messages if data is not available, stop the process, and so on. We have handled that with Oozie. So what is the new stuff we are going to work on? First, again, the motive is a personalized touch for users. We already have multiple components written, like a user classifier and a product classifier.
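Going back to the algorithm tweaks described above, the score capping and the revisit-after-purchase handling can be sketched together. The 0.7 upper cap is the figure from the talk; the lower cap, the decay constant, and the 7-day grace period are assumed values for illustration.

```python
import math

def capped_decay(days_ago, upper_cap=0.7, lower_cap=0.01):
    """Exponential decay clamped to [lower_cap, upper_cap]: a same-day
    visit must not exceed the upper cap, and very old activity must not
    underflow toward zero. The 30-day constant is an assumption."""
    return min(upper_cap, max(lower_cap, math.exp(-days_ago / 30.0)))

def effective_activity(activity, days_since_purchase, grace_days=7):
    """Treat a revisit shortly after a purchase as part of the purchase
    (the user may be showing it to a friend, or awaiting delivery),
    not as fresh interest. grace_days is an assumed value."""
    if (activity == "view" and days_since_purchase is not None
            and days_since_purchase <= grace_days):
        return "purchase"
    return activity
```

Within the grace period the purchase "takes the lead" over the view, which is the effect the talk describes designing the decay functions to achieve.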
With the user classifier and the product classifier, we have classified users, and classified products, on similar sets of attributes. What we want to do is map users to products based on those attributes. For example, if we somehow find that a user generally purchases when a deal is going on, and we also find some product that sells well only in deals, then during a deal or sale period we try to show that user those products. We are working on that. We are also gathering social data of users, and we are trying to utilize social signals to recommend products their friends are buying, visiting or doing some activity on. We also want a feedback-capturing mechanism in place, so that we can learn how people are responding to our feed and optimize our solution. And the other thing we want to do is real-time recommendations: we may still collect data in batches, but we want to build a model on it and show users real-time predictions, maybe similar products, in real time. So that's it from my side; if you have any questions, you can ask.

Hello, Gagandeep. You mentioned there is a research time window, considered as 21 days for ACs, assuming the user would research for that period of time. Now, given that your product catalog is in the millions, how do you identify the time window to narrow down for a given product item, or a product category for that matter, to decide the research time window? This cannot be fully automated as a process, so how do you identify this?

For us, there are multiple factors. We have category teams that guide us, because they have market studies in place. We don't have data for all 12 million products.
So we have category teams that provide us inputs, and we have a component in place that helps us validate the recommendations they make.

Do you mean it's manually driven, based on human insights?

It is first given by them, based on the market research they have, but we have a system in place to validate whether each recommendation is right or not, and we update accordingly. Okay, thanks.

Hi, this is Vijay. Do you collect the information on the server side for every request, or through JavaScript? Can you come again? Do you collect all the information on the server itself, or through JavaScript after the page loads? The reason I am asking is: will you be able to figure out all the robot accesses and remove them from the collected data?

If I understand your question correctly, you're asking how we serve these recommendations? No, what do you do with the junk data, maybe visits by robots, et cetera. Okay. Again, in the system I've discussed, Beauty Line, we have a mechanism in place to remove the scrapers, the junk data and the scraper cookies from the raw data. But that is your own built solution, right? Yeah.

And are you also capturing events on page load, like, within a single visit, the user moving the mouse over a few things?

For that we have some components, but getting data daily from those components is not feasible today; we are working on getting it in real time, but that is not in place today. But you're collecting it and using it in some cases? Sorry? You're collecting the user click behavior and mouse behavior and using it in some cases. Okay, thanks. Hi, I have a question.
You mentioned that you have users from different channels, and you use mapping from cookies to email IDs to identify users across channels. But as you said, you have about 50 million users, and I believe the number of actions a user does on the site would be very small. By using user actions on the website, I feel the accuracy would be very low, because compared to 50 million users the number of actions is very small.

We are not mapping it to the number of actions.

You said you look at the sequence of actions a user does on the site, and based on that you identify the user uniquely when mapping a cookie to an email.

No, actually the thing is: let's say for a particular period a user is not logged in, and we are getting some events from that cookie. Our task is to map the events coming through that cookie to an email ID. We have a lot of components for getting this email ID. From the logs we create a time series, and we find when the user last logged in and when the last logout happened; the events in between are not mapped to any user. Based on learning the users' behavior patterns, we try to map those cookie events to the various users who were coming through those browsers.

So even if I log out, will you be able to identify that it's me?

Let me give you an example. User A logged in, did some activities, and logged out. Then some activities happened from that cookie; we don't know whether user B or user A did them, which is fine. After that, user B logs in from that same cookie and logs out later. The events between user A's logout and user B's login are not mapped to any user at that point.
So based on the user's searching pattern, visiting pattern, and buying pattern, we try to map those events to the same user, and based on that we decide whether they belong to user A or user B. So you're not able to figure out, let me come again: user A is logged in, does some activities, then logs out, and then maybe he or someone else comes to the same computer and does some browsing. Will you attribute this post-logout browsing to the user who had logged in? We try to divide those events between user A and user B, because user A might have done those activities after logging out, right? Okay, okay.

My question is about the trending catalog. You mentioned collaborative filtering; my question is about how you show trending products and make sure that a feedback bubble doesn't happen. For example, there are a few hits for a particular product and you show it as a trending item, and then the hits keep on increasing precisely because it is shown as a trending item. So how does the trending catalog work? You are asking whether some product continues to be top trending in a particular category? No, basically I'm asking about the algorithm with which you show products as trending, and how you make sure the bubble doesn't happen. For example, after reaching a particular point, maybe a product was not actually liked by people but was viewed many times, and it should not keep being shown as a trending item, right? So this is the well-known "Britney Spears problem", and a lot of papers are there that you can read. The way we handle it is we try to identify these items: within that particular category, we look at how the other items are behaving. If a product has been on top for, let's say, a couple of days or a couple of weeks, we adjust it accordingly.
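One common way to keep a long-running top item from dominating a trending list, in the spirit of the answer above, is to score items by recency-weighted hits rather than raw cumulative hits, so an item must keep earning fresh interest to stay on top. This is my illustration of the general technique, not Snapdeal's actual algorithm; the half-life value is an assumption.

```python
import math

HALF_LIFE_HOURS = 24.0  # assumed decay half-life; tune per category

def trending_score(hit_timestamps, now_ts):
    """Exponentially decayed hit count: a hit from HALF_LIFE_HOURS ago
    contributes half as much as a hit right now, so stale popularity
    fades and other items in the category can surface."""
    lam = math.log(2) / (HALF_LIFE_HOURS * 3600.0)
    return sum(math.exp(-lam * (now_ts - t)) for t in hit_timestamps)
```

With this scoring, a product viewed many times last week but ignored this week drops out of the trending list on its own, which is one way to damp the self-reinforcing bubble the questioner describes.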
Because for us, other items in that particular category that are gaining user interest are more important than a product that has been trending for the last week or month. Okay. Yeah, I'm referring to the bias. Maybe, if you are not done, we can discuss offline. Okay, yeah.

Okay, Gagandeep, thanks for the insights. I have a question that is not actually related to recommendation systems, but of late we came to know that Myntra has shifted its entire business to its app. Any insights from Snapdeal? I'm not the right person to answer that question. Okay, thank you.

Hi, Gagandeep. My name is Sunil, I'm a product manager, and thank you for the informative session. I just want to know how you continuously measure the performance of your recommendation engine. Performance in what terms? Whether it's working at all. For that, we have an A/B system in place and we run multiple algorithms: at a time, we run two algorithms, and if we are rolling out new changes to an algorithm, we try to find how users are responding to it. And, as I've already mentioned, we have a feedback mechanism on our wish list. If a user comes and clicks the second product in our recommendation feed, for us that is an opportunity to analyze why the user did not click the first item. In fact, based on the historical data we have, what we have seen is that users most often click the second item in the list. We are utilizing this kind of user behavior in our rule engine and related components, which we can configure automatically.

Hi, this is Amol.
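The observation that users most often click the second item in the list can be quantified by computing click-through rate per recommendation position from impression and click logs. A toy sketch, with assumed log shapes rather than any real schema:

```python
from collections import Counter

def ctr_by_position(impressions, clicks):
    """impressions: list of (request_id, position) pairs that were shown;
    clicks: list of (request_id, position) pairs that were clicked.
    Returns {position: clicks / impressions} for each shown position."""
    shown = Counter(pos for _, pos in impressions)
    clicked = Counter(pos for _, pos in clicks)
    return {pos: clicked[pos] / shown[pos] for pos in shown}
```

Feeding a day of logs through something like this is enough to surface position bias, which can then drive rules such as re-ranking or A/B comparisons between algorithms.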
Actually, my question is this: suppose a purchase has actually happened from your recommendations, say the first-ranked item is bought, and it's a single purchase. How soon will it be removed from the existing recommendation results? Right now we are not real time, but we do run this component four or five times a day. So let's say the user purchased at 11 AM; the first run after that, say at 1 PM, is going to remove that product and replace it with similar products. Okay, so currently what is the turnaround time? If I bought the first-ranked item from the recommendations, how soon will it disappear from the existing recommendations? As I said, we are running it five times a day, so you can divide 24 hours by that and see. And no, we don't recompute everything: we have an incremental system in place, and we run it only for active users and update those.

We have time for one more question. For implicit feedback, like visiting, adding to the basket, or purchasing, you said you mark them as one, two, and three. Are those arbitrary, discrete numbers? That was an example we used, but based on how these scoring functions are behaving, we tweak the numbers, because they work as weights for us. If we want to make the decay stronger, we tweak them accordingly. So you have a confidence score for each type of event; basically, a purchase is given more weight than a visit? That's what we want, because if you think from the perspective of the user, we are capturing feedback, right? If a user has purchased something, he was more interested in it than in a product he only visited. That is the way we are capturing it now.
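The implicit-feedback weighting described in that answer can be sketched as a weighted user-item preference table. The 1/2/3 values are the example weights from the talk, explicitly described as tunable, and the function shape is my illustration:

```python
# Example weights from the talk; the speaker notes these are tuned over time.
EVENT_WEIGHT = {"visit": 1.0, "add_to_cart": 2.0, "purchase": 3.0}

def build_preferences(events):
    """events: list of (user_id, item_id, event_type) tuples.
    Returns {(user, item): summed implicit-feedback weight}, so stronger
    signals (purchase) dominate weaker ones (visit) for the same item."""
    prefs = {}
    for user, item, etype in events:
        prefs[(user, item)] = prefs.get((user, item), 0.0) + EVENT_WEIGHT[etype]
    return prefs
```

A matrix built this way is exactly the kind of input the item-to-item co-occurrence and dot-product scoring from earlier in the talk can consume, with purchases pulling recommendations harder than mere visits.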
So this is static, nothing related to a learned confidence level. Okay, thank you, Gagan. Thank you. And with that, we come to the end of the 2015 edition of Fifth Elephant. We hope you loved the conference and that you come back again. Thank you.