Hi, my name is Rakesh. I'm rakesh314 on Twitter, so you can follow me there. I'm not from Bangalore, I'm actually from Bombay, and I'm here on sort of a holiday. You guys have got some awesome beer pubs; the last 15 days have been one blur. But I came here with the intention of solving a problem that I've been facing in my start-up. I run this company called Errorception, and Debugify is my competition — no, but it's all good, it's awesome. We need competition in this growing space.

So anyway, I run Errorception and I'm a cheapskate. I don't like to spend money on infrastructure and stuff like that, so I'd rather optimize my app to milk every ounce of CPU power from it. So there's this issue of scaling, which is a big problem — but it's only a problem if you are big. Most people don't have a scaling problem, and that's cool: don't bother about scaling if you don't have a scaling problem. Don't go there. But once you start hitting scaling issues, the first solution is obviously to throw money at it — get faster hardware, stuff like that. Don't touch your code, it's too complex. And then lastly, when you're as cheap as I am, you start thinking: no, I don't want to throw more money at the problem, I'm going to try to fix my code now. That's where I was at.

The problem I particularly wanted to solve was breaking up my application into pieces. One of the ideas of scaling — at least one of the ideas — is that you don't build large apps at all. You break your app into small pieces and deploy each of those pieces independently. I talked at length about this at JSFoo in Bangalore, at the last event, but I hadn't actually done anything about it yet — I had a patched-up solution that was working.
So the scenario is: you break the app into pieces, and then you figure out how to get the pieces to talk to each other. The benefit of doing this is that you can now take an individual piece and scale just that one piece, because that's where your performance bottleneck will be — generally it's not your entire stack that has the bottleneck. You take one thing and scale it: make it run across multiple CPUs, multiple processes, even multiple computers across the network if you want. And you only need to deal with small pieces now, not your entire app. There's one more benefit: when you deploy, you're not deploying the entire app, you're deploying just one piece. So everything doesn't go down for a deployment — just one little thing goes down.

With this in mind, I was thinking of coming up with a solution, and my idea in coming to Bangalore was to sit down, geek out, and maybe do something about this problem. Turns out I ended up getting drunk too much, and, yeah, not too much got done. So, warning: everything I'm going to show you right now was written on hangovers. Just warning you.

Based on this idea, I created a library called Cumin — spelled C-U-M-I-N, as in a minimal queue, "queue-min", get the pun? This is its first push to GitHub, so it should be up now, hopefully. Anyway, I'll show you what Cumin does — I'll just quickly show you stuff from inside my examples folder. This is just a fake simulation of an app that's doing something.
Essentially, what I'm doing is calling cumin.enqueue, which takes a message and dumps it into a queue so that somebody else can pick it up later and process it. I went through a lot of cycles to make sure there are really only two methods on the library, so it's really, really simple to use. There's no reason why you shouldn't use it — unless, of course, you're not dealing with scale problems, in which case you shouldn't. So there's one method called enqueue: you call cumin.enqueue, pass it a key — whatever queue you want to enqueue under — and the data you want to pass. And similarly there's a listener, listen.js, which calls cumin.listen, and you get messages one after another. A very simple Node paradigm.

This solves the problem to a large extent. I don't have benchmark numbers, but it can easily handle a couple of thousand messages every second. I'll probably show you something right now — I created a companion project as well, called Cumin Monitor, which I guess I'll also push to GitHub now, and it'll probably land on npm as well. Cumin Monitor is a little tool that lets you monitor what's happening as your messages pass across the network. To give you an example, I can start Cumin Monitor here — with Socket.IO and all that, so I get geek cred — and it's a small, simple dashboard of what's happening in your queues. Meanwhile, over here, I'll start enqueuing items into the queue, and you can see the queue getting populated with messages. There are messages now sitting in this queue. And in a different shell — as you can see, what this means is that messages are going in, but there's nobody processing them yet.
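The two-method surface described above can be sketched with a toy in-memory stand-in. This is not Cumin's real implementation — the real library is backed by Redis, so producer and consumer can be separate processes on separate machines — but it shows the same enqueue/listen decoupling, including a listener picking up messages that were queued before it came up:

```javascript
// Toy in-memory stand-in for Cumin's two-method API (illustration only;
// the real library persists messages in Redis across processes).
const queues = {};   // queue name -> pending messages
const handlers = {}; // queue name -> registered handler

function enqueue(queueName, data) {
  (queues[queueName] = queues[queueName] || []).push(data);
  drain(queueName);
}

function listen(queueName, handler) {
  handlers[queueName] = handler;
  drain(queueName); // deliver anything queued before the listener came up
}

function drain(queueName) {
  const handler = handlers[queueName];
  const pending = queues[queueName] || [];
  while (handler && pending.length) {
    handler(pending.shift()); // messages arrive one after another
  }
}

// Producer side: dump a message into the queue and move on.
enqueue("emails", { to: "user@example.com", subject: "hi" });

// Consumer side: comes up later and picks up where things left off.
const received = [];
listen("emails", msg => received.push(msg));
```

The queue name ("emails") and message shape here are made up for illustration; the point is just that the producer never waits for, or even knows about, the consumer.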
The app that's responsible for processing these messages hasn't come up yet. But anybody using the app right now from the front end will see that the app is working, because it has handled your request and done something with it. The processing isn't finished — it's pipelined — but the user doesn't need to know that. Your website stays up; things might be down in between, and that's okay. Also, everything that's supposed to happen is now stored in this queue, so whenever the rest of your app comes up, it can pick up from where it left off. So I'll just start the listener, which now picks up and starts flushing out the queue as it goes. You can pump data in as you want, and then pump data back out. Both of these are obviously independent processes — two separate processes that might be running on different machines, it doesn't matter. It just keeps chugging along to make sure your work gets done.

One last thing I want to talk about is the problem of deployment and shutting down your apps. When you kill a Node app, there might be things in the event loop that are still being processed, not completely done yet, and by killing the app you might lose work you'd accepted. So one of the things I've built into this is graceful shutdown: when I hit Ctrl-C, it actually checks what's pending, makes sure it's all cleared out, and only then shuts down. So, yeah, it does graceful shutdown internally. Obviously, now that I've killed the listening app, the queue is building up again.

So anyway, that's all I've got. Thanks. Any questions?

[Audience] Where is the data stored? — It's using Redis as its back end.

[Audience] Any plans of offering this as a hosted service? — As a hosted service? I don't know. It's a simple npm install. — [Audience] But a hosted service is widely needed.
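The graceful-shutdown behaviour described above boils down to one idea: track how many messages are in flight, and refuse to exit until that count drains to zero. A rough sketch of that pattern, assuming nothing about Cumin's actual internals (`handleMessage`, `requestShutdown`, and the callback shape are all made up for illustration; in a real app `requestShutdown` would be wired to `process.on("SIGINT", ...)` and `finishShutdown` would call `process.exit`):

```javascript
// Sketch of graceful shutdown: only exit once in-flight work has drained.
let inFlight = 0;
let shuttingDown = false;
let exited = false; // stand-in for having called process.exit(0)

function handleMessage(msg, work) {
  inFlight++;
  work(msg, () => {                 // `work` calls back when fully processed
    inFlight--;
    if (shuttingDown && inFlight === 0) finishShutdown();
  });
}

function requestShutdown() {        // wire this to process.on("SIGINT", ...)
  shuttingDown = true;
  if (inFlight === 0) finishShutdown();
}

function finishShutdown() {
  exited = true;                    // everything pending is cleared out
}

// Simulate: one slow message is mid-flight when Ctrl-C arrives.
let done;
handleMessage("job-1", (msg, cb) => { done = cb; }); // work not finished yet
requestShutdown();                  // inFlight is 1, so we keep running
const stillRunning = !exited;       // true: shutdown is deferred
done();                             // message finishes; now we actually exit
```

The design choice here mirrors what the talk describes: Ctrl-C doesn't kill the process immediately, it only flips a flag, and the exit happens from the completion path of the last pending message.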
[Audience] Because you take care of the uptime and availability — that's what I mean. — All right, fair enough. But this is actually designed for very high throughput. The data I have to deal with is very high volume; if it had to go over a network it would be painful. SQS and things like that do queues as a hosted service, so it's probably a good idea — but my particular use case was internal infrastructure, and I wanted it fast. I wanted to be able to hammer lots of messages at it.

[Audience] Can you have multiple listeners? — Oh, sure, no problem. And you can run them in a cluster — yeah, absolutely.

[Audience] Aren't there other queue managers out there? — There are tons. It's almost like how people create CMSs in PHP; everybody seems to be creating queue apps. In fact, one of the things I was inspired by is Kue — K-U-E — created by TJ, which was the closest to what I wanted. But it also wasn't designed for high-throughput scenarios, which is what I wanted here.

[Audience] Why aren't you using an existing one? — Honestly, because I'm dumb and I haven't actually looked at that. Secondly, reinventing the wheel looked like an interesting idea. Thirdly, that one looks like a hairball to me — like some Java guy came and decided we need all these factories and all of that. It just looks like a hairball, so I don't want to do that.

Okay, one last question, maybe... [inaudible question] No. So, thank you very much.

[Organizer] So guys, this is the end of the meetup — well, not quite finished. I'd like to thank Swissnex for hosting the event, and also HasGeek. Kiran, Sena, thank you guys for shooting the talks. Jitendra also, thanks for advertising on the Facebook groups — he did a lot of the work too. And now we finish with some snacks offered by Swissnex, in that direction. And yeah, I think it was awesome, right?
So if you have feedback, don't hesitate to tell us. If you have a question, take the mic, please.

[Audience] When is the next meetup? — The next meetup will be, I guess, in two months; we'll announce it on the Facebook group. And we'll continue to use this format: flash talks, big talk, flash talks. So next time, please just tweet at me if you're interested in talking about something. Thanks, guys.