I'll keep this brief and to the point; I'd probably prefer to have more questions, because it's difficult to talk about implementation details here. What we can do is share our experience so far and the kind of success we have seen with the implementation of Julia. It's been quite a long path; we have been at it for about one and a half years now. It was not instant gratification, but it was very encouraging. Once we got to a workable level of implementation, we took it to the next step: it goes into production in about two more weeks, fingers crossed, with a lot of dependencies on it.

Just to give you the context, the complexity: there are millions of queries, and the complexity of each query acts as a multiplication factor on that number. It covers all B2B and B2C products and services, so it's a very long tail. There are multitudes and clusters of different rule sets, because one rule set will not fit everything, and it spans a large part of India, which also means that in terms of location, availability, and distribution, it's a massively distributed system with fragmentation and clusters. So it's a beast we have built up, and how to scale it has always been the issue. The project which is going live now is a proper subset of it, one that carries the same sort of complexity as the whole, but it is currently not in production. There is a learning process whose digest then feeds the production process: the learning part, which used to be an end-of-day job taking a lot of time at night, now learns the whole night and hands a digest to the production system in the morning.

This is what we call the pricing engine. Effectively, it tells you how much to charge a business, what the demand is, and how to prioritize it. One part is obviously the money, which is important for any business. The other part is the SLA. If I know which are my hot queries and expect a lot of traffic on them, and I want to minimize and keep my expected delivery time within tolerable levels, I would rather push my top 10 percent of queries onto 50 percent of my best machines, and the remaining 90 percent can take the remaining 50 percent. All of these things happen, and that's how we do it: it matches demand and supply. It's an econometric model. And it is a long tail: there are about 236K categories, of which roughly 100,000 are the focal categories we have covered. Then there are all the locations from which you make some money or the other; India has over 46,000 pin codes, and our pricing is at the pin-code level. So it's a complex thing. And we use velocity, meaning the current trend, because if we don't, we will surely just keep recommending the most popular item every time; you have to give weight and flexibility to what is currently trending, and that is what we mean by velocity. It sits on top of our time series of historical values, and finally, of course, you have to produce a price point for every combination. So if you look at that, it's essentially an N x N x P problem where each of the dimensions runs into lakhs and thousands. I will use lakhs; we love that unit in India.
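To make that velocity idea concrete, here is a minimal Julia sketch. It is not the production model; the struct, the field names, and the weighting formula are all hypothetical, just to show how a current-trend signal can nudge a historical price point for one category x pin-code cell.

```julia
# Toy sketch of velocity-weighted pricing for one category x pincode cell.
# Names and formula are illustrative, not the actual engine.

struct Cell
    base_price::Float64        # long-run price point for this category x pincode
    history::Vector{Float64}   # historical daily demand (time series)
    recent::Vector{Float64}    # last few days of demand
end

# velocity > 1 means demand is trending above its historical mean
velocity(c::Cell) = (sum(c.recent) / length(c.recent)) /
                    (sum(c.history) / length(c.history))

# Nudge the price toward the current trend; alpha caps how far the
# live trend can pull the price away from the historical point.
function price(c::Cell; alpha = 0.3)
    v = velocity(c)
    return c.base_price * (1 + alpha * (v - 1))
end

cell = Cell(100.0, [10.0, 12, 9, 11, 10], [15.0, 16, 14])
println(price(cell))   # trending demand lifts the price above 100
```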
Pricing packages also add to the complexity, and velocity feeds into this factor too. If I've had success selling at a price, it means my pricing is right, so I don't have to stay at the entry-level premium. That's how you keep looking at it: how much you sell tells you how the demand is shifting and what chance a price change stands; similarly, if you're not able to sell over a particular period, you cut the price, and all of this is happening automatically.

And this is the data it crunches every night, the data the computer is learning from: roughly about three gigabytes per city on average. For the large cities like Bombay and Delhi it will be over five; for smaller cities like Pune and Kolkata it will probably be around one and a half. And this is what it has to give. It has to give subscription pricing for local merchants. It has to handle inventory; as I just explained, the amount of booking done drives the inventory and availability computation. And it has to be usable during negotiation: on the fly, a salesperson draws on this learning, does the best fit, and quotes the person sitting across the table, and it is being used by people who have no idea of the technology. On top of that, it is an illogical system; it's just rule based. There are no laws, there is no logic. It's business experience embedded in rules, doing some sort of data-level filtration.

What are the challenges? The challenge is obviously search. Where do you start? There's a whole cold-start problem. You have to have relevant search results, which is nice to say, but defining "relevant" has nothing to do with logic either; it is perception based. Relevance is purely a rule set. There's no logic to why a guy in Chennai would kill me if I offered him a salon 10 kilometers away, while a guy in Mumbai would love me for the same thing at the same distance. These are just local learnings; it just happened. You can't program for this. You learn from it and apply it to your pricing. So it's completely based on mathematical models, and by mathematical models I mean a Bayesian network: given he's here, what are the most likely paths? What are the different ways? You learn this over time and continuously keep on learning.

Now, the performance aspect. In retrospect, the rules were built over time; they evolved, so they carry the complexity of going from A to B to C to D all the way to Z, and now we can crunch that. The realization that we need to move towards mathematical models came after seven years of experimentation and some sedimentation of the rules. And this is the time it was taking: the current PHP code, with a lot of C modules, which nobody talks about here, used to take about 14 minutes per run. That's not bad, given the kind of data and all that we're doing. But here is the great part: we now do the same in four seconds, once we got our structures and tooling right. That's the kind of crunching we're able to do. What that does is free us to experiment realistically and apply more rules, maybe taking it back up to 14 minutes but with perhaps 40,000 times the computational complexity if we need to. And this we are still implementing. It is actually not very easy; it sounds like "what is there in that?", but the complexity and variability make it hard.
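As a toy illustration of that Bayesian "given he's here, what are the likely paths" idea, here is a short Julia sketch. The counting scheme and the names are mine, not the production model: it simply learns transition counts between nodes (categories or query states) and reads off conditional probabilities for the next step.

```julia
# Minimal sketch of a learned transition model. Data is hypothetical.
const counts = Dict{Tuple{String,String},Int}()

# Learn from an observed path, one transition at a time.
function observe!(path::Vector{String})
    for i in 1:length(path)-1
        key = (path[i], path[i+1])
        counts[key] = get(counts, key, 0) + 1
    end
end

# P(next | here): normalize the outgoing counts from `here`.
function next_probs(here::String)
    out = [(b, n) for ((a, b), n) in counts if a == here]
    total = sum(n for (_, n) in out; init = 0)
    return Dict(b => n / total for (b, n) in out)
end

observe!(["salon", "spa", "salon", "gym"])
observe!(["salon", "spa"])
println(next_probs("salon"))  # Dict("spa" => 0.667, "gym" => 0.333)
```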
There is also arbitration to handle: multiple parties can be after the same inventory in a pin code, and you have to distribute it so that two people aren't sold the same slot. I don't want to make it sound easy and just say "oh, you have to solve this problem"; fairness and arbitration have to be there. The second thing, obviously, is that we are going to a recommendation engine: a kind of hodgepodge between graph building and cluster analysis, with some sort of peripheral node coming in as a conjugate of the graph and the list. For the ready engine we will go with HBase for this; we are still deciding on that, and we may replace it with some sort of graph database. We are looking at that part, and this is where we are still implementing, along with how we solve the whole search structure.

Effectively, what happens is that at 5 o'clock in the morning, my robots start firing queries as users, looking at past results, and keep warming the system. By 8 o'clock things are cached, based on trends and the past few days' analysis, and we are ready to meet the demand of the day without really computing. So the cold-start answer is essentially that after our simulation run, about 85 percent of queries don't even hit the database; they are just served from the cache. That's the learning that we have.

This is a rough architecture; very simple, and we have made it an independent entity. For us the database was always more of a data sink. The search today actually happens on our own C data structures. We keep a copy of everything in the DB, but there is a digest process that converts this data into C structures, on which the search happens, and the same data is synthesized into Julia structures for the logic to work on.

And this is just one sample of the kind of benchmark. We looked at 286 hot categories across the pin codes in Bombay. Bombay has about two lakh categories; these 286 are the top ones, roughly 0.1 percent of them, across about 206 pin codes, and this is the kind of row count we are processing. That, we are saying, is the maximum number of categories a model user would take; remember that in B2B a particular user's categories can run into the thousands, with manufacturers and vendors and thousands of types across those categories, so this is just a model user. And this is the kind of compute you are doing for the budget calculation for a single merchant, given everything he is taking, all the way through. So, it is a little unfair to put it this way, but roughly 60 percent of this calculation is being done through Julia, with the rest still on the existing back end. That obviously has to be rationalized; decisions that made a lot of sense at the time probably don't make sense now. All of those things are happening, and this is the benchmark we are running; these are the graphs, the red and the blue, and obviously there is about a 550 percent improvement on that part.
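The 5 a.m. warming run is simple to picture. Here is a hedged Julia sketch of the idea, with run_query standing in for the real search path and the query list invented for the example; the point is only the shape of it: replay yesterday's hot queries before real traffic arrives, so the daytime serve path becomes mostly cache hits.

```julia
# Sketch of the morning cache-warming robot. Everything here is illustrative.
const cache = Dict{String,Any}()

run_query(q) = (sleep(0.01); "results for $q")   # stand-in for the expensive search

# The 5 a.m. robot: fire the hot queries so results are cached before 8 a.m.
function warm!(hot_queries::Vector{String})
    for q in hot_queries
        haskey(cache, q) || (cache[q] = run_query(q))
    end
end

# During the day: serve from cache when possible, compute only on a miss.
function serve(q::String)
    get!(cache, q) do
        run_query(q)          # cache miss: hits the backend
    end
end

warm!(["salon andheri", "gym bandra", "spa powai"])   # simulation run
serve("salon andheri")   # served from cache, no backend hit
```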
This is what is interesting, and again this performance is a little better. And that's my time. Thank you. Right, so, questions please. Yes sir.

"You said that you spent almost a year or so doing this stuff. How much of the gain actually came from just making room in your organization for experimenting with a different system, and how much came from Julia as a project?"

Very good question. We were on C and PHP, and the kind of maturity we have built as an organization around the business rules, obviously Julia has no contribution to that part. But we were looking for something better and faster for the compute. One of the problems with C is that the coding is very unforgiving, so the kind of skill set I need to write an optimal program often becomes a constraining factor; hence a lot of the business logic that runs on the fly was getting done in PHP, which adds to the problem. So all three of these had to be dealt with together, and resourcing is part of that process. We looked at Lua, which was talked about in the previous session, and we looked at Go, Go with Lua embedded, where you create instances of Lua from inside Go and effectively run Lua within Go's threads. We used Sphinx, I don't know if you have heard of it; it's a good text search engine where you can do some interesting things with compute on its C index, so we tried that because of the C index, and we tried Python too, because for us P is not just PHP, it's Python as well. What really took the cake was that Julia was very intuitive to program, and my business users, who are actually data scientists and mathematicians who understand the maths and economics of it, were able to work on the programs directly, in a form fit for human use. Otherwise programming used to become a spectator sport: the subject-matter expert from the business would sit there telling the programmer, no, no, I don't want that, I want this. We have taken that out, and that's the biggest advantage of a language like Julia from the business side: you can read the code the way you think. Does that answer your question? Thank you. Any more? Do we have time for one more question? There's one. Oh, yeah.

"When you look at the graph, one just has to ask: what were you doing for 7 years?"

I'm not prepared to answer that. But see, we got the business to a state where we could deploy Julia, so those years have their merit. Of course, with time you have to move and accept the fact that this is abnormally faster, and that gives me scope to do more things. So I will take Julia to that level, probably with 40,000 times more computation thrown at it to get started. Alright, thanks guys.