Today there was a guy, I think on Twitter, who had written a basic library some time back, and he rewrote the same thing using Node today, twice as fast. But it's not the JavaScript part that is the great thing about Node; it's the whole asynchronous programming model that is its strength.

So what do we mean by async? In any programming language, any program you write is going to have a sequence of statements, and execution happens in that sequence. You have a statement, and once that statement completes, execution goes to the second statement, and so on. From assembly language up to any high-level language, whatever you write follows the same paradigm. And there are other event-based systems apart from Node.js: Python has one, Ruby has a framework that does the same thing, and for C there is libevent, which gives you an event-based model.

So the major difference between an event-based model and the normal programming paradigm is this. In the normal model you have lines, say a print statement and then a statement that reads input from the user. Everything runs in sequence: first you print, then you get the input. Assuming it's a command-line program, it waits for the user to type something on the console. When control reaches that statement, it waits. It waits for the I/O operation to happen, and user input is one kind of I/O. There are others: maybe you query a database, maybe you read from a file. There are many ways to do input and output, and for all of them that particular statement is going to wait, and the statement after it does not get executed.
So that's the traditional programming paradigm, which is what most languages follow. In the event-based model, if you do any operation, especially an input or output operation, the runtime does not wait for it to finish before moving to the next step. Instead you have something called a callback. Say we have the same program, but now the input statement takes a callback, and execution just continues. Assume that particular function takes input from the user. As soon as control reaches the statement, it registers the event, sets up the callback function, and then control immediately goes to the next statement and keeps executing. Meanwhile we're waiting for the user input, or whatever the operation is doing: maybe waiting on a file, or waiting for a database to return. Once that happens, the event gets triggered and the callback gets called. Here I'm giving it an anonymous function; it could just as well be a named function. That gets invoked, and that's when this piece of code gets executed. So if there are a lot of other statements after it, those also get executed while the I/O is pending, as control flows past. What we don't have is the waiting: nothing blocks at that statement. That's the basic paradigm shift in Node.js.

Now we'll come to the second part of "Node.js": the JavaScript part. Most of you, I assume, have done some kind of JavaScript. What happens in JavaScript? The language itself is used through the event model: you have an event listener, which is a function.
So the JavaScript language itself has capabilities for doing event-based operations, and that was one of the reasons this whole async concept was implemented in JavaScript. I read this somewhere, and I'm not sure how authentic the information is, but when the person behind Node.js wanted to build something like this, he started off with an event loop in C, and at some point he felt it was very difficult to express callbacks that way. I think he later considered Ruby, and there were some issues with that too. Then he looked at JavaScript, and since the language itself supported callbacks as a built-in feature, it became a perfect match.

The author, Ryan Dahl, had tried doing this a couple of years earlier as well, but it wasn't working out: JavaScript the language was fine, but it didn't have a fast implementation. Then Chrome came out, the V8 engine came with it, and JavaScript became insanely fast. That's why he built on it; otherwise this wouldn't have happened. It wouldn't have been Node.js, it would have been Node.lua or something. It happened because of V8.

So that's the other part. This is the async concept, this is the JavaScript part, and now, how is it currently implemented? When Chrome became an open-source project, it had this JavaScript engine called V8. In a rough sense you can compare it to a compiler that can take any JavaScript program, compile it, and run it. Compiler, or probably an interpreter; "compiler" may not be the exact word, since it does just-in-time compilation. I'm not describing V8 precisely here, just giving an analogy: JavaScript is to V8 roughly what Java is to the JVM. It's quite similar to how things work in the Java world. So think of it as an interpreter plus compiler.
So when you give a JavaScript program to V8, it interprets it, JIT-compiles it, and runs it. V8 is exceptionally fast, and one more important thing about it is that it's cleanly separated: if you look at the Chrome code base, the V8 engine is separate from the rest of the Chrome code, the part that displays the windows and so on. So what Node did was take the V8 code base and wrap it around the libev event library in C. Essentially, you write your code in JavaScript and it gets run by this V8 engine, not in a browser context but as a separate process. That's how your Node code runs. So that's Node: the async part, the JavaScript part, and the implementation.

A question: how does this work end to end? The callback function may take some time, right? Say it's a file-upload handler or something like that. The upload gets handed to a callback which takes care of it, but finally, at the end, we need to communicate back to the browser. We're talking about a web server here: you upload a file from a website, it goes to Node, Node processes it, and the callback has to return a message to the browser. How do you handle that?

I understood your question. The thing is, Node here is not just a web server; that's what I want to clarify first. But the question remains: how does the response get stitched back to the right client? Say there are two browsers open, and from one page I'm uploading a file; the result has to go back to whichever browser made the request. Let me come to that. Before that: Node.js is not just a web server. You can write a web server on Node.js.
If you take a typical web programming combination, you have an Apache server plus PHP as the scripting language, or maybe Python with some framework. With Node.js you actually write the entire stack: even the web server runs on top of Node.js. You're writing the web server itself, not just the application program on top of it.

So, to come to your point. When you implement a web server, every time a request comes in, that request itself fires a callback. Take the same example: a client request comes in, a callback gets assigned to it, and the server is immediately free to accept requests again; control moves on. The callback body executes, finishes, and gives a response back to the client. The main advantage is that the actual process, instead of blocking on one client, can still serve other clients. There is queueing happening, and eventually every client gets its callback. That's the major advantage when you implement a server this way, and indeed there are a lot of server implementations on Node, even XMPP servers.

So is your question: when a callback fires, how do you retain the earlier information, how do you relate it back to the right client? That's the question, right? Have you programmed in Python? Do you understand closures? What happens is, when a request comes, before you send it off for processing you can extract a client ID from the session or wherever, and that variable will still be available when the callback is called, because it's bound in a closure. So you can just say: this ID is available, hand this information back to this ID.
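A sketch of that closure point, with `simulateDbQuery` as a hypothetical stand-in for any async I/O call (database, file, network): the client ID captured before the I/O starts is still visible when the callback fires.

```javascript
// Why a callback still "knows" which client it belongs to: the ID is
// bound in a closure. simulateDbQuery is an illustrative name, not an API.
const results = {};

function simulateDbQuery(callback) {
  setImmediate(() => callback(null, { rows: 3 })); // pretend the DB answered
}

function handleRequest(clientId) {
  // clientId is captured by the closure below...
  simulateDbQuery((err, result) => {
    if (err) throw err;
    // ...so when the I/O completes, we still know whose result this is.
    results[clientId] = result.rows;
  });
  // control returns here immediately; the server can accept the next client
}

handleRequest('client-42');
handleRequest('client-99');
```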
So the information gets bound in a closure. That's the whole point. Even here, this callback will have access to the surrounding variables; there's going to be a closure around it.

But here's a follow-up: we're assuming the callback gets called within the same request, right? If it fires during another request, then obviously you don't get any context when it happens. No, the context is in the closure; that's the point. I understand that, but we're assuming the event fired within the same HTTP request, because if it comes in a different HTTP request, the whole stack has been cleared by then.

Basically, you can keep something like a static map somewhere that says: for this client ID, this is the output. When the processing is done, you put the output in that map; when the client refreshes the page, you get the same client ID from the session and send it back. You can use sessions, or there's another way: you can map the client's session ID to a live connection. There are server-sent events now, and WebSockets and all, so you can actually push an event to the client saying the work is done, pick it up now. You don't have to wait for the next refresh; you can map a WebSocket to a client ID.

That's pretty cool, and that works if you're doing WebSockets. But think about traditional web servers: in a traditional web server, apart from sessions, if there are multiple requests from the same client over a period of time, those requests are technically unrelated. They're related only by carrying some kind of ID, either a session cookie or some ID in the URL. And how does it happen in a traditional web server, which is either multi-process or multi-threaded?
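The "static map" idea above can be sketched like this; every name here is illustrative, not a real API:

```javascript
// Park finished output keyed by client ID so a later request
// (say, a page refresh) can collect it.
const pendingOutput = new Map();

function jobFinished(clientId, output) {
  pendingOutput.set(clientId, output);        // processing done: park the result
}

function onNextRequest(clientId) {
  const output = pendingOutput.get(clientId); // refresh arrives: collect it
  pendingOutput.delete(clientId);             // hand it out once
  return output;
}

jobFinished('session-7', 'upload processed');
const out = onNextRequest('session-7');
```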
Have you used Apache? Yeah. Have you used nginx? Yeah. Then you've probably seen the benchmark graphs: Apache versus nginx, requests versus response times and so on. What happens is, when you use Apache, every request spawns a new process. It depends on the setup you're using, but the whole idea is to spawn something, and that asks the OS for memory allocations and all of that. With nginx, there's essentially one big fat pipe, a queue. When a request comes, it just says: OK, this is the request, this is the response; take the two handles, keep them together, and delegate the work to somebody else. When the work is done, the worker comes back, picks up the request and the response, does all the manipulation, writes the response, and is done with it. So the whole job of the main process is just: take the request and the response, and put them in the fat pipe.

That's exactly what happens in Node.js too. You get the request and response, you fire the callback, and you're done. Essentially you're doing the same thing: putting the request and response in a fat pipe, and when the callback runs it says, OK, now I have the request and response, let me manipulate them. It's not like Apache, where you spawn a process, wait till the whole thing is done, then kill the thread and spawn a new one. That doesn't happen here. So there's no cost of spawning processes, no extra threads, no additional VMs, nothing. That's why it's fast. That's the basic programming shift.

So Node is not the first async framework out there; nginx has been doing this for quite some time, for the same reasons. The benefit Node brought is that since you code in JS, and a lot of people already code JS for the browser, they can now also code for the server. That's the best thing: one language across the platform. That's one of the biggest benefits. Actually, nginx and Node are a bit different here.
Because with nginx you're basically spawning things, while here what you have is a single thread? No: with nginx you define how many workers you have, and the main process takes a request and response, puts it in the fat pipe, and the workers pick it up and operate on the request and response. It doesn't spawn any additional workers beyond that; you specify up front how many workers there are. You can, for example, put nginx in front of Node. The point is that on the back end the blocking part is not happening. Exactly, and it doesn't happen in nginx either. Say you're serving a static file: it has to read the file from the file system, but that's not blocking. It uses the event library itself, and when the request comes it puts it in the queue and says: hey OS, take up this file-reading part; whenever you're done reading, give me back the file and tell me which request and response objects it belongs to. And there's a callback that says: OK, respond to this request, here's the file I got from the file system, just write it back.

So nginx works quite similarly. The only difference is that nginx is written entirely in C, so you can't really do a lot with it; you can't hack around with it much. Node, on the other hand, is much easier to hack around with. You can start writing TCP servers, you can build big applications; the range of things you can do is quite wide in Node.js. That's the whole point.

Also, just to clarify: Node.js as such is not a server. Node.js as such is not a server; it's just a runtime. So right now he's comparing nginx with a web server written in Node.js, a web server written using the event loop. So what is Node.js? It's a runtime environment built on V8. V8 is a VM, and Node.js just adds extra APIs, I/O APIs, which V8 doesn't have. V8 is pure JS. V8 is like the JVM, and Node.js would be like J2EE: the extra APIs on top.
You'd probably call it a framework? No, it's a runtime, just like J2EE is extra APIs on top of the JVM. There's confusion about Node.js, especially when people talk about its advantages, because they generally compare it with other servers, so they start thinking it's a web server. But as such, Node.js is not a web server. Node.js is a platform on which you can write web servers. And you do that because the benefit of using Node.js only shows up when you're doing I/O. That's the biggest thing. If you're not writing an I/O-based application, it's just another platform, just another tool, nothing beneficial about it. The whole benefit is in async I/O. If your application doesn't need I/O, it's just another tool; you'd be better off writing it in anything else you're familiar with.

So one of the biggest disadvantages is that you cannot have concurrency in your own code. The whole thing runs in one single process. You cannot spawn threads (there's work going on in that area), so you cannot have multi-threaded applications. As for high-scale websites, people have a very vague definition of what high scalability is. You don't need to spawn threads to be scalable; the event model scales amazingly. You don't need threads at all. Threads make sense when you want to do parallel computing; then they make sense.

But doesn't something like Facebook work more with threads? Actually, the OS spawns the threads; the OS is responsible for managing all that. From your application's point of view there is no thread. As Ryan puts it: in Node.js, everything runs in a separate thread except your code. Everything is multi-threaded except your code; your code is one single thread.

So when you do something parallel, some processing in Node.js, say I make callback-style calls to four different time-consuming jobs, how does the whole thing work? Well, if you're hitting an HTTP API, that's I/O, that's fine. But let's say I'm processing an image in Node.js itself.
Then you should spawn a new process and do it there. That's the thing: you should never write blocking code, otherwise you're defeating the whole purpose. But what if I make an async call with a callback? Then you still have to write the function so it really is async. You can read the binary data, run loops over it and process it, but then it will be blocking. Or you can create a separate process, or use a module that creates a separate process for you, and let it do the work.

So what if I write a function that takes an image path and a callback, and the callback returns the processed image? It depends on the implementation of that function. If the function does its work outside the VM, say you're doing video processing with FFmpeg, you can spawn a separate process and tell FFmpeg: here's the input file, generate the output file and let me know. You can do that, and in that case, since you've spawned a process entirely outside, it will be async and you won't be blocking. But if you start doing everything FFmpeg does in your own code, it will be blocking. Even though it's behind a callback? Yes, not always async: you can write blocking code behind a callback too.

So can blocking work be written as a callback? Everything can be written as a callback. If you write the function in your own code, it will be synchronous: you call function A, it executes, and it returns a value to function B. If function A lives outside somewhere, say in a native module, it might be async. But if you write it inline in your own JavaScript, it will be synchronous: control just jumps to that function and starts executing, because it's part of your code. If you want it to be separate, you spawn a separate process. Can you write the blocking part in the same file? You can, but then it runs on your one thread.
The moment you write it inside the same application, it runs synchronously by default. You make it a separate application and then call it: you can spawn a child process, give it input, and get output back from it. Is the spawning done by Node, or do you have to use something else? You can call the OS methods yourself, and you can ask Node to create the process for you too.

There's also something called V8 isolates, in the V8 JavaScript engine. Right now it's like one huge VM per process, that's it. In V8 3.1 or something, I think, they've come up with many VMs in a single process. After that you won't have to create a separate process for this; probably by Node 0.6 you'll have a single process with multiple VMs. It'll be much faster, because the cost of spawning a process won't be there: the VMs are already there, you just start using them. It's like the benefit of nginx, which already has its workers running. And it's not even multi-process; it's a single process, and you can pass messages around, like an actor model: immutable messages, message passing, that kind of thing. So you'll be able to do all that, probably in some more time: write blocking code in your application itself and push it off to another VM. Right now, if you want blocking code, you have to extract it out into something separate, either a native binding or your own separate script, and call it as a child process.

But you can have a callback inside a callback, right, and write blocking code inside it? The whole idea of async is not just callbacks. The whole idea of async is that the function you're calling should not be consuming your current thread; the work should happen elsewhere, and control should return to you immediately.
Just because there's a callback doesn't mean it's async. Yes, exactly.

One more thing to add to this: if you want to poll external web servers, parse pages, all those things, Node.js makes that easy. There are a couple of libraries, like jsdom, where scraping data using jQuery selectors is a simple thing. Scraping data with jQuery in the browser is already nice; when the same thing can be done on the server, it's awesome.

Now, a case where Node.js doesn't help. Have you ever seen a web counter, the "so many visitors" kind? Say you're running a very high-traffic website; this counter has to be safe against concurrent increments, it has to be locked, right? If it's at 20 and two people come simultaneously, how would you manage the concurrency problem? So yes, the concurrency problem is there: you cannot write concurrent applications in Node.js. You can attempt it, but you probably won't manage it. Although, if you're writing a very simple counter in one process, it will work; the problem starts when you spawn, say, four instances and share that variable across them. The problem is that we cannot write multi-threaded applications, which could be concurrent; that's why real concurrency is still not there. People have tried: there's a fork of Node that has it, Yahoo has something for concurrency, but it's highly debatable whether it works. We cannot write highly concurrent applications; that's the bottom line. So it's not good for writing enterprise, business-oriented applications with transactions and so on. Unless there's a good reason for Node.js to be used, probably don't use Node.js. It's only the right tool if you have a heavily I/O-based application.
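A sketch of the counter point: inside one Node process, all JS callbacks run on a single thread, so a plain increment needs no lock, since no other callback can run mid-statement. Sharing the counter across multiple processes is where the problem starts.

```javascript
// A hit counter in a single Node process needs no locking.
let hits = 0;

function onRequest() {
  hits += 1; // no other callback can interleave with this statement
}

// simulate 1000 "simultaneous" requests arriving as queued events
for (let i = 0; i < 1000; i++) {
  setImmediate(onRequest);
}
```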
So most websites will probably be fine with whatever they're on. If you're writing PHP code, continue with it. The benefit you can get from Node is for things like: you upload a file and it has to be processed; instead of spawning multiple processes in a row, you can create a message queue. It's great for creating message queues, for anything event-driven. Exactly: it has one purpose and it's great at that. So if you're writing an application, you could essentially put Node.js in front of a Python app; I think there's a gateway interface for Node, so you can use it as the front end instead of nginx. That works.

In case no one knows it yet, the JSFoo website is in Node.js. Yeah, completely in Node.js. There's no particularly good reason for it to be in Node.js, but it is, completely. And after three weeks, it actually crashed yesterday for the first time. It's running as a single process, no issues, nothing. What was the traffic like? Not that great, but it crashed because it persists the IRC logs to a remote server; there was some network glitch and it failed, and I hadn't handled that error. Surely it's getting more than a thousand hits? Probably not; you can look at the stats, it's not that big. We haven't actually publicized the website so far; the people who have seen it are those who accidentally wandered in and found the site. So I guess once we fix up the last few things, we'll start. Do go and have a look at it.

Okay, so I guess we're done.