First of all, thanks to everybody for coming to this talk, and to HasGeek for having me up here. With that said, my name is Aakash Goel, and as of today I work at Instamojo.com, where most of my work revolves around enabling payments across devices and internet connections, mostly for Tier 2 and Tier 3 Indian cities. This is how I can be found on the internet. I happen to love the web and the web development that comes along with it, mostly owing to the amazing community the web has. I also work on Cheapass, a side project, whenever I have enough time and inspiration. In its current form, it's a very simple-to-use price drop alert service: it accepts a product URL and your email ID, and as and when it figures out that a price drop has happened, it shoots you an alert through email or a push notification. The service was launched a little over a couple of years back, and ever since then I've been single-handedly owning the different roles that come along with maintaining a product: product management, support, social media management, sales and marketing, and whatnot. It's been a pretty fun ride. I'd love to give you a brief idea of the scale at which Cheapass is currently operating, backed up with some numbers: it's being used by about 5,000 active users as of today, tracking more than 30,000 products in total. It has sent over 5,000,000 price drop alerts and helped people save about 10,000,000 in total. Now, you don't have to take my word for it. Here's a sample report from Amazon covering the period the service has been running. You can see about 1,000 products have been bought by people since the service started — and this is just on Amazon. The total value of the goods purchased was about 25 lakh rupees.
And Amazon paid me a tiny commission for helping them make more sales. Ever since the service started, it has garnered a lot of interest across a lot of popular platforms. It made it to Product Hunt and Hacker News — it stayed at the number one spot on Hacker News for more than five hours. It made it to React Rocks and a bunch of other publications which mentioned Cheapass as one of the best price tracking services for Indian websites. It has also been appreciated a lot on Twitter: a lot of folks with large followings have recommended Cheapass to their followers, and what this has done is bring in a lot of organic users to the product — a lot of people saying a lot of good things about it, people with genuine reach, a lot of genuine users coming in because of that. Somebody was pretty happy with a cheap buy they made through Cheapass. Before I go into the core of the talk, I'd like to take a brief moment to thank all those people who work behind the scenes writing the free, open source software that Cheapass currently uses: Node.js, Redis, MongoDB, React, Redux, Grunt, jQuery, and a bunch of other technologies powering the Cheapass servers. In this long journey I've learned a lot of things — a lot, to be honest. So let me cut to the chase and present the five key learnings I had while building this. A lot of backend folks, or Node.js folks, must be aware of this thing called a dynamic require. What it basically is, is that you put a require statement inside a function and call that function as and when you need it — sort of lazy-loading a module on demand. Compare that to statically requiring a module, which puts it in memory when the Node process boots up.
So look at this harmless little function, whose entire task is to send an email. Depending on a condition, it will either require the SES module over here, or fall back to the regular way of requiring things. Now raise your hands if you think something can go wrong with this code at scale. How many of you think something could go wrong? Okay, no one, really. All right. So what if I give you a little hint and say that if I call this function 1,025 times in a synchronous for loop, something could go terribly wrong? What really happens is that for every single dynamic require you do, a Unix machine opens that file, and every Unix machine has an upper limit on the number of files that can be simultaneously open in a process. How I discovered this was when I was sending a mass email to folks using Cheapass about something new I had built. This code worked pretty well while the number of users was less than a thousand, and when the number of users went beyond a thousand, my code just stopped working one day. It was a pretty interesting find. All I had to do, honestly, was pull these dynamic requires up as static requires, and the code scaled pretty well. So dynamic requires are a pretty terrible idea at scale — if you're not really sure what a require does, be very careful with it. Next, databases. As a front-end engineer, I had hardly ever touched databases and backends, and when I was starting Cheapass, the concept of good schema design was pretty alien to me, to be very honest. So this is what I came up with initially. If you look at this schema, I am using MongoDB, a NoSQL solution.
What I've done here is put one schema within another schema, available as an array inside it. That inner schema is nothing but the product's price history. So every single time Cheapass goes out to figure out what the current price is, I need to bring this array into memory, add a new item to it, and write the whole document back. At scale, what I figured was that arrays as a document field value turn out to be a pretty bad idea. Why? Because insertions, updates, aggregations — whatever you're trying to do with arrays doesn't really scale. MongoDB gives you enough power to do things with documents within a collection, but when it comes to processing things within arrays stored inside documents, it doesn't help you that much. So all I really had to do was pull all these items out of the array into their own collection, as their own documents. And then MongoDB assisted me immensely when it came to running a bunch of operations over these price values — figuring out whether an item is a good buy at this price or not. A lot of maths had to be done, and the fix was creating multiple documents in their own collection. And hey — it's okay to have bad schema designs to begin with. They're going to teach you a lot of things, and you will be able to refactor your schema as you go. So don't overthink whether a schema is good or not; just go with the flow and keep working on your side project. Let's continue the same database example further. Say I want to process all those price points stored in the database — in whatever format; it doesn't really matter whether they're stored as documents in a collection or as items within an array.
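The refactor described above can be sketched with plain objects. The field names (`prices`, `productId`, `value`, `at`) are illustrative, not the actual Cheapass schema:

```javascript
// Before: each product document embeds its full price history as an
// array. Appending a price point means reading the array into memory
// and writing the whole document back, and the embedded items can't
// be aggregated the way top-level documents can.
const productBefore = {
  _id: 'p1',
  url: 'https://example.com/item',
  prices: [
    { value: 49999, at: '2015-05-01' },
    { value: 47999, at: '2015-05-02' },
  ],
};

// After: each price point becomes its own document in a separate
// collection, keyed back to the product. Inserts are plain appends,
// and the collection can be queried and aggregated directly.
function flattenPrices(product) {
  return product.prices.map((p) => ({
    productId: product._id,
    value: p.value,
    at: p.at,
  }));
}

const priceDocs = flattenPrices(productBefore);
// priceDocs is now shaped for something like db.prices.insertMany(priceDocs)
```

From here on, a new price observation is a single-document insert into the prices collection instead of a read-modify-write of the whole product.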
So this is the Cheapass homepage, which shows every single item folks are tracking on the Cheapass website. It has some logic to it: it's sorted by the hottest products, it shows whether or not a product is a good buy at its current price, and a lot of factors come in. So a lot of processing has to be done on the backend to figure that out. This is some initial code I wrote to process this data on the backend. Forget about the clutter and focus mostly on the find, the forEach, and the findOne, for instance, over here. What find actually does is copy all the relevant results into your memory. What I mean by that is: whatever query you supply to find will go to the database, pull out whatever the relevant results are, and put them into your Node process's memory. And when that happens for Cheapass, it's every single product that exists on the platform. So a .find is not a very helpful operation when you're trying to do something like this. I talked to a bunch of folks at my workplace, and they recommended: why don't you use an API which the database exposes, instead of relying on pulling everything into memory and then processing it? So what I did was use this thing called aggregate, which works directly on a collection. You can pass in a query, which I have done — I've asked it to group and compute a minimum price. And believe it or not, it brought the processing time down from four minutes to about eight seconds. And this was a year back — I tweeted this on the 16th of May, 2015. It helped me scale at that point in time, and it's still powering the products you see on the Cheapass homepage.
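The idea above — pushing the grouping into the database instead of doing it in Node memory — can be sketched like this. The pipeline and field names are my assumptions, not the actual Cheapass query, and `groupMin` is just a pure-JS reference for what the `$group` stage computes server-side:

```javascript
// An aggregation pipeline like this runs inside MongoDB, so only the
// grouped results cross the wire — as opposed to .find() copying every
// matching document into the Node process first.
const pipeline = [
  { $group: { _id: '$productId', minPrice: { $min: '$value' } } },
];
// You'd hand this to the driver as: db.collection('prices').aggregate(pipeline)

// Pure-JS reference implementation of what that $group/$min computes,
// useful for sanity-checking the pipeline's output:
function groupMin(docs) {
  const out = {};
  for (const d of docs) {
    out[d.productId] =
      out[d.productId] === undefined ? d.value : Math.min(out[d.productId], d.value);
  }
  return out;
}
```

The speed-up comes from moving the work to where the data lives: the database scans its own storage and returns one small document per group.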
So it was a pretty interesting find which, as a front-end engineer, you would probably never encounter. Pulling data into memory to process it is a pretty terrible idea at scale — you have to be really careful about what your find statement is bringing into memory. Next: we tend to spawn a lot of processes on the backend. Say, for instance, we want to generate a PDF or a screenshot of a page. How would you do that? No matter what backend you're using, you will have to create a new process which does that particular job for you and stores the result at some location for your primary parent process to read from. What I was using was something called capturejs. capturejs is a wrapper over PhantomJS which helps you take screenshots of a page, and you can even pass a selector to it when you spin up the process. So what happened one fine day is that Amazon changed their homepage. And because of that, capturejs hit an existing bug in its code: if you pass a selector and it's not found on the page, it simply does not talk back to the Node process which spawned it. For the simplicity of this discussion, let's say there's some sort of inter-process communication happening between your Node.js and PhantomJS processes, and this communication just got disconnected — it never happened. The code I had written back then simply relied on the fact that Amazon's DOM would always look the same, so my code spun up a lot of such processes, and as soon as I would start the server, memory would be flooded with capturejs and PhantomJS processes. So never rely on systems you didn't build — that's my key takeaway from this.
If you are spinning up a new process, make sure you clean up after you're done; otherwise, make sure you put a good end to it. Never leave it hanging out there. This brings me to my next point: never rely on systems that you didn't build and don't have ownership over. Cheapass relies on a lot of third parties being up and running — for instance, Flipkart should always be running, Amazon should always be running, and whatnot. Here are my top three learnings out of that. One: cookies. A backend can track whether or not you are sending a cookie along with your request, and if you're making a server-to-server request, your request client does not send a cookie by default. So what if a backend server at Amazon decided one fine day that a client not sending a cookie must be a bot — block the request and throw a bot page at it? That's exactly what happened one fine day. However, what they actually do is send back a cookie in the response to the first request you make. So all you really do is grab that cookie from the first response, make the next request with that cookie, and look like a genuine user. This is also called a cookie jar. Two: any pattern you have while making requests to a third-party server is a giveaway. If you make 30,000 requests at 12 p.m., 6 p.m., 9 p.m., and 12 a.m., I'm pretty sure there are smart folks at Amazon who can figure out that it's a bot server making those requests. That has happened several times to Cheapass. I'm not really sure if it's a legit way of doing it, but what I did was rotate the IPs, use random proxies and user agents, and make sure I avoided the honeypot while I was crawling.
Three: Content Security Policy, a very interesting, fairly new concept on the web. Flipkart recently implemented this: they ensured that only scripts from whitelisted origins can run on their page. So if you happen to rely on bookmarklets or extensions, where you need to push some JavaScript onto the parent page for your code to work, you're pretty much screwed — there is no way around it. Thankfully, Flipkart reverted the change very soon. So those were the five technical learnings. Well, as I mentioned, I was also responsible for growing the product — making sure people came to Cheapass every day and used it. So here are a few growth hacks, as I'd like to call them. One: genuinely offer privacy. People claim a lot about privacy; companies say they're not selling your data and whatnot, but I don't know — I never rely on them. So what happens at Cheapass is that everybody gets a unique dashboard link; nobody's dashboard link is the same. Everybody gets a unique price tracking link per product: if you and I are tracking the same product, we will have two different URLs for the same price drop we've been tracking. There are no passwords to crack — it's OTP-only login. If you, as an independent developer, are trying to maintain a project, why would you want the hassle of securing passwords in that project? Two: trade spam for profit. If you're trying to grow a product, at some point you will need to send out emails telling people about what new, cool things you are doing with the product. But I honestly do not appreciate people sending me random emails. So how do you go about this problem, where you really want people to come back to your website, but you also do not want to spam them?
So what I did was capture live deals from Amazon, Flipkart, et cetera, in real time, and send them along in the emails I was sending anyway. This is a real price drop alert email that went out: I captured, through that same capturejs process, a screenshot of a live deal that was on Amazon at that point in time, and shipped it in the email. The industry standard for email open rates is about 4-5%; people are really happy if it's 8 to 10%. Cheapass emails have had about a 40% open rate, consistently, and that's pretty good. Three: tiny incentives go a really long way in India. What I did very early on to grow the product was partner with FreeCharge. I bought a few thousand rupees worth of coupons, and I asked people to post about whatever they were buying through Cheapass on social media — basically rave about us on social media. And they actually did: people posting pics of whatever they'd been buying — iPhones, dresses — all for the sake of some 25 or 30 or 50 rupees of coupons. People do that. Four: creating more clients is really easy. We as developers often tend to miss out on how the consumer wants to consume our product. We build a website, or we build an app, and we don't really think about who our genuine users are and how they're actually consuming the product. So what I did was create several different clients for people to consume Cheapass the way they want: there's a website, a bookmarklet, an Android app, an iOS app, and a Twitter bot which tweets out real-time price drop alerts — several mainstream clients which people like to use on a daily basis. There was a one-time effort in building the iOS app: I collaborated with a designer, got the design done, and coded the React Native app myself.
And once this was shipped, the Android app sort of wrote itself. I used 100% of the design of the previous application I had built, and the React Native components could be reused up to 80% — I had to rewrite very little code to power the Android app. For instance, here are the iOS and Android screenshots. They look pretty much similar, and you see native components on both screenshots. That's the beauty of React Native. Here's Cheapass Alerts, the bot I was talking about: it has sent about one lakh tweets already, and some 230 people follow it. To end the talk, I'd like to give you some gyan from whatever I've learned over these last couple of years. There's no perfect product — never rush after a perfect product; you will never really finish it. It's okay to have outdated libraries: the technologies I showed you initially are all outdated on production right now. I'm still using Node.js 0.10.32 and MongoDB version 2.32, while the latest is Node 6.0 right now — more than that, probably. It's okay to have outdated libraries as long as your code scales. It's okay to not build everything you can dream of in the first go — shape your product the way people want it. It's great to get that feedback from consumers, because the revenue I showed from Amazon wouldn't have happened if somebody hadn't asked me to build support for Amazon. The product itself has been evolving for over two years now. The iOS and Android apps were basically read-only to begin with; only when somebody asked me, "Why don't you build a mechanism wherein I can add a new product from the Android app?" did I say, cool, let's build it. And I did. So build something and ship it — you will never really finish that side project you've been working on.
So if you really want something of yours done and out there, just build and ship it, because there is no perfect time to do things. Just do it. That's it. Thank you. Any questions? Am I audible? Yeah, sure.

Question: You mentioned that you had some problems with arrays in MongoDB. Can you please explain in a bit more detail what the problems were? I have very minimal experience with Mongo, and except for array upserts, I was able to do most of my operations.

Right. So what I want to tell you about arrays is that every single operation you try to do on an array requires you to pull that array into memory, insert an item, and write it back. And if you're trying to do a lot of processing on those array items, MongoDB does not give you the aggregation-level APIs for them which it gives you for documents within a collection. So only if you put those array items into their own documents within a collection can you consume, say, the aggregate API which MongoDB exposes. That's where the four-minutes-to-eight-seconds sort of improvement in processing I told you about came from. Yeah. Okay. Thank you.