We're testing the AV system. The actual talks will begin in about 20 minutes, and I will have proper announcements in about 10 minutes. But I wanted to remind you that if you want to do something in the unconference today, make sure that you go downstairs and write your activity on the whiteboard. We have a workshop going on, another VR workshop, different from yesterday's, I believe. George, who gave the talk yesterday, is also giving a workshop from 10:30 to noon as part of the unconference. I've already been approached by lots of people asking, ooh, how do I schedule something? The way to schedule something is to go downstairs to the whiteboard and write it on. You can start a discussion, share your ideas, or sit and code with friends or strangers, whatever you want to do; that space is for you. Oh, sorry, I thought you were on. So good to see you all back here today. It is not so good to hear that. Turn my gain down a little. Thanks. We'll be starting our first talk in just a few minutes, but of course, there are morning announcements. Today at 10:30, which is 10 minutes after this talk starts, George, who gave the MIDI talk yesterday, is doing a workshop in the unconference area from 10:30 to noon. George also lost a universal adapter yesterday. It's black, and he can plug his American plugs into it. If you happen to see it, could you take it to the help desk so that he can get it back? We also have a VR workshop this afternoon, a different one than yesterday's. I think it's at three; I'll double-check and let you know. That also is in the workshop area downstairs in the banquet hall. We have flash talks today at 4:30, with slots for four people. So only four of you need to be brave today. Yesterday we had slots for six and there were absolutely no takers, which was very disappointing. So I hope we'll have four people come up and get their names on this piece of paper so that they can give flash talks at 4:30.
If you are sitting next to a pile of feedback forms, you are super extra lucky today, because you get to fill them all in. Actually, I'd like you to pass them along your row to the people who don't have them. It's just like school, right? This is our last day to harass you about feedback forms. I think some of you turned them in yesterday, and I think we got some through our email thing too. We can't make the conference better unless you tell us what to change, so please do tell us how we're doing. The coffee vendor was late this morning, so I'm a little bit tired, and I apologize if you can't hear me, because I'm just really sleepy. We're going to start our talks. You guys ready? All right. The first talk of today is the dark arc, oh, see, I need that coffee: The Dark Art of Webpack Bundle Tuning, which also sounds like it's a music talk. Vijay is a principal architect at Infosys. He's an open source evangelist and developer advocate within and outside of Infosys. He's been working on front-end development for over four years now and especially likes webpack. It's becoming more popular these days, but people keep having trouble getting it to work. Vijay is going to simplify this for you. Thank you. Good morning, folks. I hope all of you slept nicely. Am I audible back there? While all of you are still settling in, let me quickly introduce myself. My name is Vijay Tharab. I am from Pune. I have worked for Infosys for the last 14 years, now as a principal architect. My Twitter handle is here, and if you want to follow along with these slides, or run ahead, you can open the slides at this link in your browser, I mean, on your mobile, and follow along in case anything is not clearly visible from here. I will be talking about webpack bundle tuning. I hope all of you use webpack. Quick show of hands: who does not use webpack?
Okay, I still see at least 10, 15 people, 10, 15% of people, not using webpack. All right, nice. So I would like to take a minute to explain. I recently heard a quote which said that every application eventually becomes large enough that you should not load the whole application at the start. You only need to load a very small part of your application at the start, and for that, you really need to split your application into smaller bundles and then deliver one bundle at a time. That is where tools like webpack come into play. Webpack is just one of the players; there are other players in the market as well, but webpack is one of the popular ones. And why is webpack popular? Why do developers love webpack? One reason is that it has a versatile loading system, meaning it can load not only JavaScript but also images, style sheets, fonts, PDF files, you name it. Things that you cannot even imagine getting loaded and bundled get loaded and bundled. That is one of the beautiful things webpack has. Secondly, as I was saying, we need to do bundling, and that is where webpack shines: it allows you to create as many bundles as you want, and you have full control over what goes where. The next part that comes up in every JavaScript application delivery is: I have a new version of my application and I want to deploy it, but clients already have an old copy cached, so now I have to do cache busting somehow. Webpack gives you that for free; just one line of configuration and cache busting is set up for you. Fourthly, it has a buzzword that a lot of people have been talking about: tree shaking. Say you are using only one or two functions from, say, D3.js or lodash, or in the erstwhile days, even jQuery, right? I only wanted to use jQuery for $.ajax, but I'm getting the entire jQuery in. You should not be doing that.
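The one-line cache busting mentioned above can be sketched roughly like this; the entry and output paths are assumptions for illustration, not the speaker's actual config:

```javascript
// webpack.config.js (sketch): a content-based hash in the file name means
// a changed bundle gets a new name, so stale browser caches never serve
// old code. Webpack 2/3 used [chunkhash]; webpack 4+ also offers the
// finer-grained [contenthash].
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[chunkhash].js',
  },
};
```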
And you should not have to do anything manual about it. Webpack helps you by doing tree shaking, so that it will shake out anything you don't need. It's literally like taking a tree of connected nodes and shaking it: whatever you don't use falls away. That's tree shaking, a beautiful feature that webpack offers. And lastly, webpack has a very big ecosystem of plugins. There are probably about 200, 300 plugins right now, and writing one yourself is not very difficult. It used to be, but they have improved the documentation a lot in the last three, four months, and it has become quite manageable for anybody to write a plugin now. So you can go try it out yourself. These are some of the key reasons why people like webpack. Those of you who raised your hands, I would definitely urge you to take one more look at webpack; it has been awesome. It has improved a lot in the last year and it's really, really useful nowadays. But as I said, it has so many features, so many configurable options, and so many plugins. Does that make it a smart AI system? It does not. It is still a dumb system. That means if we do not configure it right, it is going to create undesirable effects. One of the key undesirable effects I came across with webpack is very, very big, bloated bundles. The very thing I wanted webpack for, creating small application bundles so that my application's delivery to browsers is fast, it actually did exactly the reverse. Yes, it split my entire application code into bundles, but all of them were really, really big, because I did not tell it the right things. But I have since learned, and I'm going to teach you how to fix that particular problem and make webpack understand what you want. So I'll start off with a very quick summary of my production experience.
If you look at this, it is the output of one of the plugins for webpack. It shows that I had six bundles (you can ignore the service worker), and the total size of all the bundles put together was 5.73 MB when I had not tuned anything at all. But when I tuned it, the number of bundles remained the same, yet the total size of all the bundles became 2.6 MB. That means I saved around 50% overall. Since I am already showing you a split version, the initial bundle size was not so much of a problem for me, but if you do not configure it, by default everything will get bundled into a single chunk, a single bundle, and everything will get downloaded at the start, which is not a very optimal scenario. Plus, out of that 2.6 MB I could probably save another 500 KB or so, because I still have the AOT compiler in it. This is my real, actual production bundle experience that I'm showing. So how did I go about getting that 50% reduction in bundle sizes? The most important tool you need for this activity is built into webpack: the --json option of the webpack CLI. If you pass --json to the webpack command line, it will emit a JSON file which tells you everything about how webpack did what it did. But obviously that file is not very readable by humans, so you need to process it, and there are three tools for that. Webpack has its own analyzer website; there is another tool called webpack-visualizer; and then there is one more tool called webpack-bundle-analyzer. There are two tools with that same name, which is why I've written "by th0r". If you search on Google along with the keyword th0r, you will find it. Or you can simply use the three links at the bottom.
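Concretely, generating the stats file and feeding it to an analyzer looks roughly like this; the plugin wiring is a sketch, not the speaker's exact setup:

```javascript
// Generate webpack's build statistics from a shell:
//   webpack --json > stats.json
// stats.json can then be dropped into webpack's online analyzer or
// webpack-visualizer. The third tool, webpack-bundle-analyzer, can also
// be wired in as a plugin so a report opens after every build:
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...entry, output, loaders as usual...
  plugins: [
    new BundleAnalyzerPlugin(), // serves an interactive treemap of all bundles
  ],
};
```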
I made shortened links for you: tiny.cc webpack-analyzer one, two and three. Those are the three tools, and I'm going to show you one of them in more detail; the rest I will not show. Each tool has some advantages and some disadvantages. Webpack's own analyzer is very good at identifying duplicates, which is probably the most important thing I've used it for, because I saw all my bundles were bloated. I thought, okay, I don't have that much code, so why are my bundles bloated? And then I went about identifying the duplicates. It gave me the duplicates built in, automatically; I didn't have to search for them. It directly showed: okay, this particular module seems to appear in bundles one, two and three. Are you sure you want that, or do you want to take it out into a common bundle? That kind of help it offers. Using that help, I quickly tuned my bundling and got rid of a lot of duplicates. Another key reason I like this particular analyzer is that it tells you why a certain file is included in your bundle. This used to happen to me a lot with lodash: I never asked for sortBy, so why are you including sortBy in my bundle? I don't want that file, it's seven KB extra. But sortBy is used by some other function from lodash that I did request. So it tells you the entire graph of who depends on what and how the whole tree is built up. That's a very good feature webpack's analyzer gives you, and it looks something like this. This is the stats.json I was talking about; I already have it generated, and this is how it looks.
This is how it shows the modules, chunks, and assets. Under assets you can see every JS file and which chunk it is part of, et cetera. And in the hints area, right now there are no duplications, but if there were any, hints is where you would see them shown. And this is the other tool, the visualizer, that I am showing here. The visualizer is also very good because it presents everything in a visual format, which is something I really love. You just drop the stats.json file over here, and this is the kind of sunburst chart it will create. If you hover on it, it will tell you that in my application, 99% of the code is coming from node_modules; only 1% is my application. That's all. That doesn't seem very good, and it also tells me that out of that 99%, 11% is Vue and 19% is D3, and all that kind of stuff. We will go through this in a little more detail, but visually I find it very good: I can easily spot who the biggest culprit is. That is one of the things I like about it. Lastly, webpack-bundle-analyzer almost combines both of the tools above, and it's really good. We will be using that tool for the rest of the session, so I'll show you as we go along. Okay, enough talk. I'm going to show you a demo now. I created a small demo to showcase, purposefully, all the bad things I can do, and then I'm going to fix them, right? So here is the small demo. There is Bootstrap on the home page, a usage of bootstrap.js. Then there is a tab for Moment.js. Moment.js, as I hope most of you know, is a date formatting, parsing, and manipulation library, so I use that on one of the tabs. I use an OpenLayers map; I just wanted to use some map library, and I used to know OpenLayers, so I used that.
I use lodash on another tab, and then I use D3, where I just draw a small square using d3.select and fill it in, right? The application is using Vue.js for MVC right now, but it need not be Vue.js; I just wanted to use something, so I used Vue.js. Bootstrap CSS for styling, Moment.js, lodash, OpenLayers, D3, I covered all of that. This is the application I built in order to do this particular exercise. Then I ran webpack --json on it and created these reports, the same ones you saw earlier. And this is the actual third tool, webpack-bundle-analyzer by th0r, which creates a bundle output like this. Now, your application code is out there somewhere: the blue guy over here, that is your application code. And everything else you see on the screen is all the libraries that I brought in. So many libraries for such a small application, right? But thankfully this tool makes it very easy to say clearly who the culprit is. Boss, ol.js is just way too much. You really cannot have ol.js being this big; what are you even using it for? You have to find an alternative. This d3.js is also really big. Then there's a huge block of locales from Moment.js, for whatever reason. Then lodash.js is also fairly big. Vue ESM, you can't really do much about it, but if you can do something about it, that would be nice as well, right? So that's how we can identify our culprits. And the most important culprit is that I only have one vendor.js file right now; that is this whole block, and then there is this small block here, which is my application's JS. Only two JS files. That is probably the worst culprit I have right now. So I'm going to fix that culprit first, that I have only a single vendor file.
I'm going to split it into multiple bundles. And before I do that, I'm going to show you the outcome of having this single bundle. I'm going to do a hard refresh of the application I showed you earlier. You can see here that I have an app CSS, a vendor.js and an app.js; only app.js and vendor.js. And you can see that vendor.js was 324 KB, which is fairly high compared to app.js, which is 3.2 KB. So it's like a 99% to 1% split of the whole thing, right? And if I go across to the Moment.js or OpenLayers tabs, nothing gets downloaded (all these PNG files are the tiles for OpenLayers, so ignore those); nothing from my application is getting downloaded now. So even though I never went to the lodash or D3 pages, all the logic and all the libraries required to show those pages were already downloaded on the first page, which is absolutely not a good end-user experience when you are talking about flaky mobile connections, et cetera. There are a lot of things that could be done better, right? So that's what we will go about now, and I will walk you through how we do that. First we are going to fix the bundles. We are going to do lazy loading, meaning I'm going to split them and not load them at the start; I'm going to load them only when I need them. This is my landing .vue file, the place where all the other bundles were being loaded earlier. This is the original, untuned code. As you can see, on top I have static imports of all the other components I have written in Vue.js. I have a component called Moment, not to be confused with the Moment.js library; that is my page which shows some text using Moment.js. And this is my lodash page.
So these are all my components, which use all these libraries internally. I had statically imported all these components into my application, I was doing the routing here, and I was giving references to these components. What I'm going to change now is a very small change, not too much, and that's why my title says that small tweaks can earn big. As you will see in this slide and the next four, five slides, I just tuned small, small items, but the rewards are very big, okay? First of all, as you see, I removed all the static imports above, I also removed the static references to the components, and I used what is known as webpack's dynamic import feature. By dynamic import, I mean that instead of the static import statement, I call import with parentheses and give the component that should get loaded later. Now, yesterday somebody was showing require.ensure. That is exactly the same as this; the only thing is that require.ensure is the webpack 1.x version of the same construct. In webpack 2 and 3 now, you use this import() construct instead of require.ensure, okay? What I'm essentially saying here is: whenever I need the Moment component, import it at that point in time. Right now, don't import anything. This is just a promise that I will need the Moment component, the lodash component, et cetera. Okay, so that is all the code change you need to bring that 324 KB vendor.js down, split into four, five bundles, with the first bundle coming down to about 40 KB. That's all you need, and I'll show you how it changes. Secondly, I'm going to optimize the OpenLayers component I had written. This is my OpenLayers component: I was getting a reference to the variable ol from the OpenLayers library, I was using it to create a tile layer and a view, and then I was creating a map out of those. A very simple usage of OpenLayers.
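A minimal sketch of the routing change just described, assuming vue-router; the component paths are made up for illustration:

```javascript
// router.js (sketch). Before, every page sat in the initial bundle:
//   import MomentPage from './components/MomentPage.vue';
// After, webpack's dynamic import() turns each route into its own chunk,
// fetched only when the route is first visited:
const routes = [
  { path: '/moment',     component: () => import('./components/MomentPage.vue') },
  { path: '/lodash',     component: () => import('./components/LodashPage.vue') },
  { path: '/d3',         component: () => import('./components/D3Page.vue') },
  { path: '/openlayers', component: () => import('./components/MapPage.vue') },
];
```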
Now, this OpenLayers library is not ES2015-compatible. There was a lot of hue and cry about its file size on their GitHub page, so the OpenLayers folks rewrote their library, and they have now come up with a new package called ol, at version 4.0, okay? The slightly older code that I have, I wrote a couple of months ago; the new package is definitely out of beta now. So what I'm going to do is change the application to use ol instead of openlayers, the one below. As you can see, I removed the "from 'openlayers'" line, and now I'm importing the specific things I want from the ol package. This is what will enable tree shaking. Instead of saying, okay, I want to use something from OpenLayers, so give me the entire one big openlayers.js, I said: no, I only want the tile layer, I only need the map and view and whatnot, right? Only give me that. I don't want anything else; OpenLayers, your hundreds of features are very good, you just keep them with you, I don't need them, right? By doing this, it is going to do tree shaking, and because of the tree shaking, your bundle is going to shrink, and we will see how big or how small it becomes. And in your code, the only change is that you no longer write ol-dot-whatever, because you have imported the specific objects directly at the top. So there was hardly any change in your code in order to use the new library. In the last six, eight months I have seen a lot of libraries go through a similar transformation, and another transformation I'm going to show you is for D3. Again, if you remember, I was only using d3.select. You can see here on top that I did "import * as d3 from 'd3'", and then I was using d3.select and drawing a rectangle, that's all, right? But just to use d3.select, I got the whole library earlier.
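The before-and-after for the OpenLayers component might look like the sketch below; the module paths follow the ol 4.x layout and may differ in later versions of the package:

```javascript
// Before: the whole library lands in the bundle.
//   import ol from 'openlayers';
//   const map = new ol.Map({ ... });

// After: named imports from the ES2015 'ol' package, so webpack can
// tree-shake everything the map below does not use.
import Map from 'ol/map';
import View from 'ol/view';
import TileLayer from 'ol/layer/tile';
import OSM from 'ol/source/osm';

const map = new Map({
  target: 'map',
  layers: [new TileLayer({ source: new OSM() })],
  view: new View({ center: [0, 0], zoom: 2 }),
});
```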
In the case of D3, a similar change happened: I went for a different version of D3. From version 4.1, I think, they support ES2015-specific modules, so you can import only whatever you need. I mean, you can still import all of d3, but just to show you that you don't even need to do that, you can say: okay, I am not interested in anything else from D3, just give me d3-selection. Then it will only import d3-selection, right? So that is what I did. The change I made here was that I removed the old line, which was importing the whole D3 library, and I imported just the select function from d3-selection. And then just the references change. Very small, small changes, not too much. I'm not re-architecting my modules, and I am emphasizing this for any folks who are not using bundling and code splitting: it is not a very hard exercise, and it is also something you can do incrementally. You don't have to set out saying, okay, for one sprint I'm not going to do any feature work, I'm only going to do this. No, you don't have to. While you are pushing out your code changes for your functionality, you can also push out these things, yeah? Next in line is lodash. Now lodash, unfortunately, and this is a half-truth, does not have an ES2015-based library yet. That is, if you go and get lodash.js, it is CommonJS-based, and a CommonJS-based library means you cannot tree-shake it, okay? I said it's a half-truth because there is a lodash-es library now, which actually is ES2015-compatible, but I have seen very few people using it. If you use that, great, then you can go for tree shaking. But if you don't, and for some reason you cannot, because you depend on some library which uses lodash, and there are thousands of libraries which depend on lodash, right?
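The D3 change just described, sketched; the element id is an assumption:

```javascript
// Before: the entire D3 bundle, just for one function.
//   import * as d3 from 'd3';
//   d3.select('#box')...

// After: only the select function from the standalone d3-selection module,
// which webpack can then trim down to what is actually used.
import { select } from 'd3-selection';

select('#box')          // assumed to be an <svg> element on the page
  .append('rect')       // the small square the speaker draws
  .attr('width', 50)
  .attr('height', 50)
  .attr('fill', 'green');
```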
So then you don't have the option of using lodash-es, right? In that case, what are you going to do? There are no ES2015 modules in your hands to do tree shaking. Then you have to depend on something more than what we have covered so far, and take help from plugins. I talked about those 200, 300 plugins, right? I'm going to make use of one of them here. So here I have only one image. This was my original code: I was importing underscore from lodash, and I was using _.pick, one function, just as an example. The change I made was to say: okay, I'm going to use "import pick from 'lodash/pick'", very similar to what I did in the case of D3 and OpenLayers. But if I do only this much and then check whether tree shaking has happened, it has not, because lodash is not an ES2015 library, right? That's why I have to use this particular plugin. Now, even though it's a webpack plugin, it is written and maintained by the lodash team itself, so you can depend on it. It's not some developer in a corner writing the plugin; it is the lodash team's own plugin, called lodash-webpack-plugin. You just need to reference it, that's all; you don't have to do any configuration or anything, it will figure things out. So there are two things you need to do to get the tree shaking going for lodash: one, you do this kind of per-method import and use it; and two, you reference this particular plugin in your webpack configuration. And that's it. Then you have lodash tree-shaken as well, or at least the things you don't need will not be part of your bundle. All right, next in line is Moment.js, and on top I have just reproduced the image I showed you earlier. With Moment.js there are two parts. One is the Moment.js library itself. Again, Moment.js has a similar problem to lodash, in that it is not an ES2015 library.
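The two lodash steps just described can be sketched as follows; everything outside the two package names is illustrative:

```javascript
// Step 1, in application code: per-method import instead of the whole library.
import pick from 'lodash/pick';
const subset = pick({ a: 1, b: 2, c: 3 }, ['a', 'c']); // { a: 1, c: 3 }

// Step 2, in webpack.config.js: the lodash team's own plugin strips the
// internal machinery that the imported methods do not need.
const LodashModuleReplacementPlugin = require('lodash-webpack-plugin');

module.exports = {
  // ...
  plugins: [new LodashModuleReplacementPlugin()],
};
```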
Moment.js is a CommonJS-format library, so you cannot tree-shake it, and unfortunately, at this point in time, I do not know of any plugin that will do similar work for Moment, so that part I'm not covering. But I also realized, and a whole bunch of you will probably find the same if you go back and look at your modules while using Moment, that you are bundling all the locale files, such a big number of locale files, in your bundles. Now, I'm not doing anything for Russian; my client base is not Russian, Armenian, or Italian, none of it, but I still have all those files from Moment.js in my bundle. So how do I fix this? I particularly wanted to cover Moment.js because so many people use it, and so many people bundle all of this without realizing it or knowing what to do about it, right? The fix is very simple. You just need to add a reference to a plugin called ContextReplacementPlugin, which is webpack's own plugin, and you tell it: boss, whenever you come across the path moment/locale, I am only interested in these languages: English, Hindi and Marathi. My client base only uses these three languages, so I only want those; I don't care about any other language. Please keep those and remove all the others. Once you give this instruction to webpack, it will drop everything else. And that's why my talk is named Tuning; we are tuning. The default behavior, if you don't add this line, is that webpack will automatically bring in everything. So we need to tune it here and there; we have to nudge it in the right direction, and that's what I'm doing right now, one item at a time: nudging it to fix its default behavior. And lastly, I also wanted to touch upon the CSS usage I had. I had purposefully added the bootstrap.css file in my first application code, and as part of the optimization, I removed that.
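The locale culling just described is a one-liner in the config; a sketch:

```javascript
// webpack.config.js (sketch): whenever webpack bundles the dynamic
// moment/locale directory, keep only Hindi (hi) and Marathi (mr);
// English is built into Moment itself, so it needs no locale file.
const webpack = require('webpack');

module.exports = {
  // ...
  plugins: [
    new webpack.ContextReplacementPlugin(/moment[\/\\]locale$/, /hi|mr/),
  ],
};
```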
And I changed over to the SCSS version of Bootstrap, that is, the Sass version, right? I added a custom SCSS file, and the custom SCSS looks something like this. I said: boss, I'm only interested in these modules from Bootstrap's CSS, only scaffolding, type, grid and buttons, and I don't want anything else, okay? Just for reference, this is how the big file looks, the whole thing with all the pieces I did not include. Obviously, this is going to bring some optimization and some benefit, and I will show you how much benefit I got out of it. So here is a report; this screenshot is from webpack's own analyzer. This is the application before tuning, the one we have seen already. You can see there were app, vendor and manifest bundles. Manifest is a default bundle that webpack creates; you can ignore it, it's zero bytes anyway. So really there were only two bundles, vendor and app. Vendor was two MB. Yeah, that is before gzip; when I serve the application, gzip is on, so it comes down to the gzipped number. And app was seven KB. But you can see that they all had the initial flag here, meaning when you hit the application, all of them are going to get downloaded, right? And this is the report afterwards. I have the app and vendor as before, and then I have chunks zero, one, two, and three. I haven't given them any names, so they are just referred to by their IDs, zero, one, two, three, which are additional chunks. But the initial flag now is only on these ones, and you can see the two MB vendor has become 236 KB. So it is almost 90% smaller for my home page load, right? Even if you combine all of them, your total, okay, sorry, that total I don't have here. It doesn't matter anyway; I'm going to show you one more report where the comparison is given.
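The custom SCSS file mentioned a moment ago looks roughly like this; the partial names follow Bootstrap 3's Sass source layout and are assumptions:

```scss
// custom.scss (sketch): import only the Bootstrap pieces the app uses.
// Variables and mixins are needed by the other partials.
@import "bootstrap/variables";
@import "bootstrap/mixins";
@import "bootstrap/scaffolding";
@import "bootstrap/type";
@import "bootstrap/grid";
@import "bootstrap/buttons";
```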
Yeah, so the most important thing is that the first load is going to be much faster, because earlier you were downloading two MB of JavaScript, and now you're only downloading 241 KB of JavaScript, right? Let's quickly take a look at how it looks in the application. So here is the app; I'm going to do a hard refresh to show you. This is the tuned application. Even now I have only vendor and app, but the sizes are small: 29 KB now. And this was the old one, just for your reference one more time: 324 KB. These are gzipped sizes; the ones I have on the slides are without gzipping, because that's how webpack reports them, so I just kept them as they are, right? So a much smaller delivery in total: only 41 KB transferred now, versus 349 KB transferred earlier. And now the magic is here. When I click on the Moment.js tab, that's when the bundle for Moment.js gets downloaded. When I click on lodash, that's when the lodash bundle comes; when I click on D3, then D3; and the same when I click on OpenLayers. That is the lazy loading I was talking about, right? And this is how the webpack-bundle-analyzer output will now look. Your application is somewhat bigger now; I'm just trying to find where it is. Okay, I'm still not able to spot it quickly. But you have a different outcome now. You can see that for OpenLayers, I did not bring in one single ol.js; rather, I brought in only the things I needed, lots of small, small modules. You see there are many more than I requested; I requested only four or five things, but obviously they had internal dependencies within ol. Webpack figured it all out and brought in whatever was required, the minimum that was possible, and nothing more than that. On the Moment.js side, you can see there is only one moment.js; the locale part is so small that it is not even visible here in the corner.
So that is the type of improvement you can visually see in your own application. This is just a snapshot for you to compare. On the first page, as I said, we had 350. Okay, these numbers are a little bit off from what I'm showing you now. So I had an 89% saving overall. Just because of the OpenLayers tree shaking, I saved about 41%. Because I was only using d3.select, and in real life I know we would use more than that, you may not get 88% as I am showing here, but just to show you: in this application I was able to get an 88% saving for D3 alone. Lodash, 86%; Vue, 31%; Moment.js, 80%; and on Bootstrap CSS, I got almost 57% improvement. So overall, a whole bunch of improvements that I could bring in by doing all the techniques I showed you. Just to quickly recap what I showed you today: I showed you that it is very important to do lazy loading using the bootstraps, sorry, webpack's import function. This import with the parentheses is not available elsewhere; it is available only because I'm using webpack. Then we should make use of tree shaking as much as possible. That is not possible out of the box all the time, so we need to look out for each library's version that is ES2015-compliant, so that you can get the help of tree shaking for your application. Then, wherever that is not possible, like the case of lodash, like the case of Moment.js, try to find plugins for webpack which will help you do that particular specialized culling: I don't want all the locales, so I want to cull them out, something like that. And if you're using Bootstrap, which quite a few of you might be, then also make use of the Sass-based version of Bootstrap, so that you include only whatever you need and the rest can be dropped. So thank you, and I wish you all the best on your tuning journey.
So if you liked this particular session, please tweet and have a webpack hashtag in it, so that all the other webpack users can also benefit from it. You can find me on GitHub, Twitter, Medium, everywhere. Thank you. Oh, by the way, the code is here. If you have some questions, you can ask them now. There. In your first slide, I saw the two images. And in the second image, there was a file called app-common which was taking up 1.2 MB. So I just wanted to know where that file came from. Oh, that is my own bundle that I separated out in my application, right? Yeah. So, right. So here, the app-common was a bundle that I created myself. So webpack, as I said, webpack gives you full flexibility as to how you want your bundles to be created. By default, it will create bundles in a certain fashion. Okay, so most of the time the default is creating a vendor bundle, okay, which will include everything from node_modules. But in my application's case, what I realized was that I don't want everything from node_modules to be included in one single vendor bundle. Rather, I will create something that is absolutely required for my home page dashboard to come up, which included a charting library and some other libraries like that, right? So those parts I picked out and created separately, and then the polyfill was separate as well. The polyfill is there on the left side also. Okay. Oh, sorry. Just curious about the app-common. Oh, I'm sorry. App-common is something that I don't load at the start. I only load whatever I need as part of main, and the moment my home page is loaded, in the background I load app-common, essentially. So basically, I don't need 1.2 MB of load just to render my page. That is the reason why I took out everything that I don't need and put it in a separate bundle. Sorry. Yeah, I'm Pradeep, nice talk by the way. Thank you.
So my question was with respect to a legacy application which uses RequireJS. Yes. There is already the UglifyJS compiler with which we do all the optimization stuff. Will webpack suit it? I tried many times, but I failed. I mean, there are a lot of plugins in webpack, but some or the other throws an error saying that webpack cannot handle it, and not many forums were there. Have you tried this scenario? Yes, I have extensively worked with RequireJS, so I can give you more details, probably offline, but there is r.js, which is a sister library along with require, which I used for my purpose. So yeah, the straight answer about webpack: can I go the webpack way? No, it is not that easy. But r.js pretty much does everything, though it needs more configuration, more manual effort than webpack. So, but yeah, using r.js, I achieved most of these things in my old application. Not the ES2015 tree shaking, that is not possible, but bundling, splitting, lazy loading, all that is possible using the require plus r.js combination. Maybe I can give you more details offline. Hello, thanks for the talk. That's it. How do you handle responsive design? Like if you're creating an application, say a product, which is for both desktop and mobile, is it a better strategy to have a separate code base and bundle it separately, or to use media queries and have a single code base and then do some kind of splitting using webpack? How can webpack help in that scenario? Okay. So it depends, frankly. The quick and the thorough answers both depend on your context, but if you really want to split the code for mobile and for desktop, it is possible, and you can have multiple entry points, for example, given in webpack, so that webpack can create totally separate bundles, and using those entry points you can have totally different code rendered for different things. Right, yeah, I'm assuming that you have a separate code base.
That's what you said, right? Right, yeah. So yeah, absolutely. Sometimes it is just enough to use media queries and get going, but sometimes you are bringing in way too many things for desktop which you do not need, and the client really, really needs super fast, you know, screen painting for mobile; then probably it is the right idea to create a different code base. You know, or even in the same code base, you can have blocks, and then using specialized webpack blocks, webpack will only pick those parts for you, but whatever I just said is very risky from a testing perspective. Question here. Hi, so suppose I have a requirement of including a JS file which I can't put inside the vendor bundle, so I just want to use a CDN to import this file in my index.html page. Now, how is it going to impact my application? Am I abusing webpack? So what happens in the case of webpack is that internally, as I said, there are a whole bunch of plugins, and this Vue.js application used something called HtmlWebpackPlugin to create an index.html for you. What it does is, essentially, see, webpack is going to create these bundles for you, and I also said that it has cache busting built into it, right? So that means every time you re-run webpack, it is going to create a new name for your JS file. So obviously you cannot go and change it by hand: okay, webpack has run now, let me update my index.html, commit and push. No, you cannot do that, right? So that's why it creates the index.html for you, by which what I mean is, it simply adds references for all these new JS files at the bottom of your body tag, okay? So coming to your problem: your CDN script, as long as it is sitting above these, since it is from the CDN, it will get loaded first and then these files. So from the ordering perspective, it will be already resolved for you. It will be resolved for you. We have a question in the balcony. Hello? Yeah.
So we usually have most of the application on desktop web, so we have different screens which have around 50 JS files to 200 JS files for one screen, and we have circular dependencies also; like from one screen to another screen, we need to have all the live objects all the time. So what do you suggest: do we need to have all the JS downloaded together in one go, or should we go for click-and-then-download on demand? Like, what would be the better way? Right. I'll describe it from my experience perspective. So in the Angular world, I had similar problems, in Angular 4 for example, where I had two modules and both needed some common information. So what I did was I created a common module and kept the code outside in a separate bundle, so that both the modules would just refer to that bundle. I ensure that that bundle is getting loaded before. That way my bundle size is optimal, but yes, at the start, the first bundle and the common bundle will both get downloaded. But the second bundle will not contain the code from the common bundle, that's all. Does that help? Question there. Hi. Thank you for the talk. In your example, you had spoken about lodash. And you also mentioned that lodash might be used by other components as well. But you had used only pick from it. So the other components which are using lodash may need more than that. Yes, I agree. Does it take care of that? Oh yeah, that is taken care of by webpack for you. So in the first slide, when I said that, what webpack does is it will parse your entire JS graph. So let's say that you are using, I don't know, D3. Okay, I know that D3 probably does not use lodash, but let's say D3 depends on lodash and uses lodash. So the moment it sees that you are importing D3, it will also go and see what all D3 needs, or what the D3 code that you're referencing needs. And then it will parse all that and it will pull it in. And it will see that you have that lodash-webpack-plugin.
So it knows now, okay, you yourself use _.pick, but let's say the other guys use, you know, let's say sortBy or map or whatever from lodash. So then it will take all those and it will put them in the respective appropriate bundles. And everything else that is not referenced will get dropped. Pretty much every library these days is at least CommonJS, and webpack is able to parse and understand that part of the story, you know, as far as the graph creation in its mind is concerned. It can create the whole parse graph. We have had a question on this side for a long time. Is it possible to use some plugins to do that tree shaking stuff? So as I was showing, tree shaking primarily depends upon your modules, whichever ones you want to get tree-shaken, being written in ES2015, ES2015 first. So given that you have those, you don't have to do anything specialized, as long as you're using webpack 2. Webpack 2 will take care of tree shaking for you. If the modules that you are concerned about are in ES2015 format and you are referencing them in that fashion, no extra plugin is required. This is about the lazy loading feature that you spoke about. So I'm just curious to know, how does it work internally? So the moment it captures the first reference, is that the moment it loads that particular JS, or how does it work? As I said, whenever it comes across this import call, it is going to create a split point, they call it a split point. So it creates a split point, meaning from that point onwards, it is going to parse whatever was the parameter that was given to the import, because it knows that you want to load it sometime. So the parsing is one activity for webpack which it is going to continue to do, but it is not going to include that particular code content in your main module where it found an import with the parentheses. It's going to create that part as a separate file.
So it's going to parse at the time of building? It will be parsing at the time of building; it is going to parse everything and create all the bundles for you separately. Okay, okay. And then it is up to you how you want to load them. So I used whatever was possible in Vue.js; Vue.js uses that function. Maybe we can take this offline. I mean, because for Angular, there is a different way to do that. Yes, please, we would also recommend taking this offline; we're running out of time. I'm sure you must have a lot of questions, since webpack is an interesting technology, but we'll have to cut this. Maybe what I will do is I will just register for an ask-me-anything session in the unconference area during the day, and then whoever wants to discuss webpack with me can just stop by. Right, that's a great way to use the unconference area. Thank you, Vijay. The next three talks are not about JavaScript. They're about alternative languages to JavaScript. So if you are a JS purist, you should probably go down to the banquet hall now. If you're not, you might want to stick around for the next talk. It's called TypeScript All the Things. Prashant is going to talk about TypeScript. He is a full stack software developer at Equal Experts. He's been writing JavaScript for years, but it was love at first sight when he first saw TypeScript. He's going to walk us through all the rich features of TypeScript and how to use it effectively. I'm an engineer at Equal Experts, and as you can tell, I'm quite fascinated with TypeScript. So in case you haven't heard of it already, don't worry, because I am deeply aware this is 2017 and I'm not here to make your JavaScript fatigue any worse. In fact, on the contrary, I want to show you how using a type system like TypeScript can actually relieve you of said fatigue, which means that you can go back to doing what you love most: writing amazing code.
So I've been working on and off with TypeScript for over one and a half years now, and in all this time I've seen it grow tremendously, whether it be the incremental advances in its type system that just keeps on getting better and better every month, or the rich ecosystem around it that also keeps getting better and better, or the amazing developer adoption and the engagement on GitHub. In fact, Stack Overflow had this to say about it as recently as last week. So they've had to omit TypeScript from their recent language rankings, because its tremendous growth has kind of started to skew the numbers for the other languages out there. Isn't that crazy? But the point I really want to make over here is that there's something very interesting and important going on in this space. And whether or not you end up choosing TypeScript for yourself, you should definitely be taking a very serious look at the state of type systems in JavaScript today. It just happens that TypeScript is the oldest, and therefore it has the most mature ecosystem, and it also has the best-in-class tooling, all of which I'll be showing you later on today. So it should probably be the first obvious choice that you take a look at. Now for a quick intro: it is two things rolled into one. TypeScript is a static type checker for JavaScript, and it's also a transpiler. More formally, the TypeScript language itself is a statically typed superset of JavaScript that compiles back down to JavaScript. It's an open source project by Microsoft, and it comes with its own compiler, which, incidentally and interestingly enough, is itself written in TypeScript. So the TypeScript compiler compiles itself. So you know that JavaScript is a dynamic and weakly typed language. It has an implicit runtime type system, but it's not really exposed to us as developers to take advantage of.
So basically, because you don't have a strong type system in JavaScript, there's nothing to prevent you from writing illegal programs. So you run into all the well-known bugs that are so easy to make in JavaScript. What TypeScript does is introduce an explicit design-time, or compile-time, type system for JavaScript that not only prevents you from making those mistakes but also comes with all these other benefits, all of which I'll be showing you later in the examples. It's also a transpiler. So you can compile all your ES6-plus code all the way back down to ES3. In this respect, it's very similar to Babel, except that it's much simpler to set up, because there's just one configuration file that you need to think about to opt in and out of newer language features. So you don't even need to install any additional plugins once you have the TypeScript package installed in your project. So before I go further, I wanted to quickly address some of the questions that people who are just getting introduced to TypeScript keep asking me. Firstly, TypeScript is not something that you're forced to use with Angular, because it works just as well with React or Vue or, for that matter, any other third-party library or framework. Like I said in the previous slide, you're not really forced to choose between ES6-plus features and TypeScript, because all those things are first-class citizens inside of TypeScript, available to you. And just like all the newer language features coming up in ECMAScript get stripped back down to older versions of JavaScript, even the type annotations that you write in TypeScript get stripped away, or rather erased, from the JavaScript code that is generated. Lastly, you can use TypeScript in any environment where you use JavaScript today. So of course the web, but also Node, or even mobile or native. It just works wherever JavaScript works.
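The "one configuration file" he mentions is `tsconfig.json`. A minimal sketch of its shape, shown as a TypeScript object literal so the structure is visible; the option values here are illustrative, not the talk's exact settings:

```typescript
// The shape of a minimal tsconfig.json (as an object literal):
// the single place to opt in and out of language features.
const tsconfig = {
  compilerOptions: {
    target: "es5",          // transpile ES6+ down for older browsers
    module: "commonjs",
    strict: false,          // the strict mode toggled later in the talk
    lib: ["es2017", "dom"], // opt in to newer standard-library features
  },
  include: ["src/**/*"],
};
```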
Also, I just wanted to leave this in here to quickly let you know the things that TypeScript is not, or the things that it does not intend to do. So it is not an academic language. It straddles a very fine balance between productivity and type safety. All it really intends to do is add a very strong, very advanced type system on top of JavaScript, and then it gets out of your way. Which means that if you're a professional JavaScript developer today, you can continue to build upon those skills, you can start taking advantage of TypeScript's language features, and you can be productive within a day or two at most. It's very simple and straightforward to get started with. To get started, you just need to install it like you would any other node module. And all these editors have first-class support for TypeScript, but Visual Studio Code, which is also written in TypeScript, probably has the best support, so you should probably be checking that out first. Once you have the TypeScript compiler in your project, you can initialize a project with this command, and when you do that, it spits out a tsconfig.json file, which is the one single source of truth where you can configure everything about your project, including opting in and out of ECMAScript features. And you can have multiple copies of this. So in case you want finer-grained control over how you structure your project, you can nest these files inside it, and you can change your settings depending on what features you want to make available to different parts of your project. I guess that is it. I was kind of low on slides, and I have more things to show you in code. So let's begin with some examples, and I have tons of them to show you here. Beginning with the simplest one. Is this better? So this is an add function, or so it seems, because it takes in two arguments, A and B, and applies the plus operator to them.
But the fact is that depending on the input types of A and B, the output of this function will vary. So if A and B are both numbers, the output will be a number, yes, but if either of them is a string, for example, it actually becomes a concatenation function and the result will be a string, right? Let's see how we can express that fact in TypeScript. All I need to do is rename the JS file to .ts, and when I do that, let me also give these things type annotations. So let me say that A and B are numbers. That's how you specify that A and B are numbers in TypeScript, and when you try to use this add function, it very helpfully suggests to you that not only does this function take two arguments, but both of them have to be numbers, and it's going to be satisfied only when you give it two numbers. Also, because it now knows that the result of this operation is a number, it has automatically inferred the return type of this function, so only the methods applicable to numbers are available to you. So you could do this. If you try to pass it something that is not a number, like this, it's going to scream at you, and it says it out explicitly: "2" is not assignable to number, because it's a string. By the way, in case you did not know, if you prefix a string in JavaScript with a plus or minus, it actually tries to coerce it to a number. Go figure. But TypeScript knows that. Similarly, I have a function called concat, which is different from the first one only in terms of the second argument here. It says the second argument, B, is actually a string. So that makes this function return a string, and TypeScript is aware of that, so all the methods available on strings are available to me here as well. Remember I said it's easy to switch over or opt in to new language features? So over here, I'm adding ES2017 to it, which came out some months ago, I guess.
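The add and concat examples being described can be reconstructed roughly as follows (a sketch of the demo code, not a copy of it):

```typescript
// The typed version of the demo's add function: both arguments must be
// numbers, so the return type is inferred as number.
function add(a: number, b: number): number {
  return a + b;
}

// Changing only the second argument's type turns it into concatenation,
// and TypeScript infers the string return type accordingly.
function concat(a: number, b: string): string {
  return a + b; // number + string coerces to string in JavaScript
}

// add(2, "3");            // compile error: "3" is not assignable to number
const sum = add(2, 3);     // a number, so only number methods apply
const joined = concat(4, "2"); // a string, so string methods apply
```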
And when I do that and reload my project, from this point onwards, the two methods introduced in ES2017 are now available to me, like this. It's that easy. All right, let's move on to something a little more advanced. Is it visible? All right, so I want to talk to you about interfaces. Interfaces in TypeScript let you describe the shape of your objects. They provide abstractions which act like a glue around which you can model some very complex type-safe data structures, and they also help unlock some very powerful design patterns. So let's see: here we have an object called blog post. It has a certain shape and structure to it. It has an ID property, a title, an author, and comments, which is an array. See, the author itself is an object with a first and last name, and comments is an array with author and body. You can note that the author object here and the author object in the blog post are the same; they're probably coming from the same data source. So I can express these relationships in TypeScript using interfaces that look like this. I have an interface called Post which has the id, title, author and comments properties. The author refers to an interface called Author, which has the first and last name, and comments is an array of PostComment, which is again an interface which has the author and body properties, and author of course refers to the same interface down there. Makes sense? So I'm able to assert the fact that this object here actually conforms to the Post interface. So it has all the properties from Post, except that it doesn't quite, because I can see an error here, and the reason for that is that, like I said, the comments object is supposed to have both author and body properties, and it is missing one over here. So the moment I enter this back in the comment, it is valid, and from this point onwards, my code here is type-safe. There are absolutely no bugs here in this piece of code. This is very type-safe.
So which means that if I change this body from a string to something else that is not a string, like a Boolean, it's going to complain again. All right, and what this does is it also enables some very powerful refactoring features inside of TypeScript. From this point onwards (by the way, these interfaces could be spread across hundreds or thousands of files, it doesn't matter), if you wanted to rename your fName thing to, more formally, firstName, all the places where that first name is referred to get automatically refactored to the new name. That's amazing. Now I'll show you some methods that operate on these objects. So let's say I have a function called getFullName which takes in an author, and here I can annotate this argument with the complex type that is Author. And when I do that, let's print this thing, and when I do that as well, I get autocomplete suggestions for the properties available on the author, like this and this. Let me show you another example, a more complex one. Let's say I have a function called getFirstCommenter which can take in a post object and return the name of the first commenter on the post. So when I start writing that, I can inspect and see the things available to me on the complex object that is Post. I can dig down deeper into comments, because I'm looking for the first commenter, right? And it is an array, because it shows all the methods available on arrays. I need the first commenter, so I'll get the first element of that array, and now I have the properties that are available on my comment, which are author and body. I can dig further down and go as deep as I want, but I'll just stop here, because you'll notice that just up above I'm using the getFullName function, which does the same thing I want to do at this point, so I can just call getFullName, and this works again.
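The blog-post example being walked through can be reconstructed roughly like this (field and function names follow the structure described, but are not the talk's exact code):

```typescript
// Interfaces describing the shape of the blog post data.
interface Author {
  firstName: string;
  lastName: string;
}

interface PostComment {
  author: Author; // same Author interface as the post's own author
  body: string;
}

interface Post {
  id: number;
  title: string;
  author: Author;
  comments: PostComment[];
}

// The helpers from the demo, typed against the interfaces above.
function getFullName(author: Author): string {
  return `${author.firstName} ${author.lastName}`;
}

function getFirstCommenter(post: Post): string {
  // post.comments is a PostComment[], so [0].author is an Author
  return getFullName(post.comments[0].author);
}

const blogPost: Post = {
  id: 1,
  title: "Webpack bundle tuning",
  author: { firstName: "Ada", lastName: "Lovelace" },
  comments: [
    { author: { firstName: "Grace", lastName: "Hopper" }, body: "Nice!" },
  ],
};
```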
The reason for that is that the author here and the author that the getFullName function requires are the same, so the conditions are met and the code compiles. I'll actually run it to show you that it actually does work. I'll pass it the blog post object that is up there. Run it. Is this visible? You see the first element, Grace Hopper's name, printed out there, because that was the first commenter over here. So next I want to show you some more features from the powerful type system available to us, and over here I have an interface called Player. It has first name, last name, scores and birthday properties, and down there we have Tendulkar, who's a player. So he also has the same properties, and these are his scores from the 1998 Sharjah Cup against Australia and New Zealand. Anybody old enough to remember that? So below that we have some other functions defined on the player object. We have getFullName as before, getNetScore, which actually sums up all the scores of the player and prints them out, and age on the match day, which is like the final day of the match of the series, actually. It just prints out the player's age on that day. And down here is the stats function that brings everything together to print out the series stats. Let's see what this prints, first of all. So here we have Sachin Tendulkar, his full name. He turned 25 the same day, and he had a score of 435 throughout the tournament. He single-handedly won us the match and the tournament, by the way. So this is fine. This is great. Let's move on to Ganguly here, because we also have Ganguly's stats. Here we go. Up front we see that there's something wrong here, and TypeScript tells me what the mistake is: because I have asserted in my post, sorry, in my Player interface, that birthdays are supposed to be dates, but over here, in Ganguly's case, the birthday is actually a string. So of course we could change it here.
We could fix it, or even better still, we can take advantage of what's called union types. So this birthday now means that my birthdays can be of type Date as well as string. So that satisfies the compiler for now, and things look good, except that down here I was assuming that birthdays are only dates, so I could assume the getTime method on the Date object to be present, but that's no longer the case, because birthdays can be both dates and strings now. So I need to refactor this a bit and introduce another abstraction. So I have written this function differently: instead of passing the player, I'm passing it the Date object explicitly, and I need to introduce another abstraction in the middle called, I think, getAge, yeah. What it does is it actually checks if the player's birthday is a string type. If it is, it converts it to a Date using the Date constructor; otherwise it passes it on transparently. We also need to fix this thing down here now, because we have replaced the abstraction with the getAge function, and let me replace the stats function with this one over here. Let's see if it works, and it does. So we now also have Ganguly's stats printed out there, awesome. But we also want to see how Kumble fared in that tournament, which is like this. Again there's a problem over there; let's see what it is. And here the problem is bigger, because the statistician probably forgot to record his birthday at all. We don't have a birthday field for Kumble. So over here I want to introduce another feature of the TypeScript type system. It's called an optional property. So I've made the birthday property itself optional, which means that you can have a string birthday, or a Date birthday, or you may not have a birthday at all. Right, things apparently look good. So let's go ahead and print Kumble's stats as well and see what we get. Well, no. Boom.
We get a crash, and the reason for that is that over here, the getTime function that I spoke about doesn't exist at all, because the birthday doesn't exist at all in Kumble's case. So it's going to crash. And I would have expected a strong type system, a good type system, to be able to catch these kinds of errors. That's why I need to opt in to the strict mode of TypeScript, which I had turned off, set to false, by default, for you to be able to appreciate this more. I just reload my project, and the moment I do that, it starts to show me more errors, beginning with this one itself now. All right, so beginning with this one, where now, because the age-on-match-day function expects a Date, and player.birthday can be a Date as well as not available at all, I need to refactor this as well, which I do this way. So I'm now explicitly checking if it is an instance of Date, and if it is, I pass it along, and if it is not, I should probably just return something like "not available" or something, yeah? So that fixes one part of the bug, but there's yet another, and this one is slightly deeper. Over here, Kumble did not get to bat in the last two matches, unfortunately. So we don't have any scores to record for him, but because now we are in the strict typing mode, and we have asserted that scores can only be numbers, like an array of numbers, we need to assure the type system that we are operating on the right kind of data. And for that I want to introduce yet another instance of union types. So over here, I'm saying that the scores can be an array of both numbers and nulls. So here we are. So this gets fixed, but of course you'd expect that other things in the code will start to fail, because over here, the reducer expects the scores to be available as numbers, but now they can be nulls as well, and it doesn't know what to reduce upon. So there is a simple way to fix that.
Let me first give it an initial value of zero. So we begin our reduce function with a zero value, and we also assert that, because we have initialized it as zero, the aggregate, the value that keeps getting summed up, is a number as well. And now we have to deal with the score, which can be null as well as a proper value. So we just need to deal with it here: in case it is a legal value, that is, not null, we keep it as it is, but if it is null, we use zero. Makes sense? And now if you print this out, everything should just work. So you see, at this point in time, my code is bug-free. I'm very sure of the fact that I have no mistakes in this piece of code. All right, I want to show you something else now, something a little more advanced, but I'll just briefly touch upon it: generics. Anybody used them before? In more strongly typed languages, yeah. So, generics are a very powerful programming construct, and they help you write reusable abstractions. I'll show you an example to make it more concrete. So here we have an ES6 Map. And in this one, I'm setting a first key-value pair. The key is a number, the value is a string. All right, but there's nothing in JavaScript to prevent me from doing just the opposite: I might as well set a string key and a numeric value too, right? Let's see how we can fix that and guarantee some more type safety in the kind of arguments that we can pass to it, so that it's easier to deal with, right? Otherwise you'll have to constantly check the kind of data you're working on, and you have to make additional checks on it all the time. So we can make the Map generic by saying that the keys can only be numbers and the values can only be strings. And the moment I do that, the second line starts to fail. Right? Prashant, can you increase the font size a little? Oh, sorry. Some of the people at the back can't see. All right, sure.
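The full Player refactoring walked through above (the union-typed, optional birthday and the null-safe score summing under strict mode) condenses to roughly this sketch; the helper names and the scores are illustrative, not the talk's exact code:

```typescript
// A union-typed, optional birthday plus nullable scores, handled
// the way the talk describes under strict mode.
interface Player {
  firstName: string;
  lastName: string;
  birthday?: Date | string;  // optional, and a union of two types
  scores: (number | null)[]; // null for matches the player didn't bat in
}

// Normalize the union before any Date method is used on it.
function getBirthDate(p: Player): Date | undefined {
  if (typeof p.birthday === "string") return new Date(p.birthday);
  return p.birthday; // a Date or undefined passes through transparently
}

function getNetScore(p: Player): number {
  // A 0 initial value keeps the accumulator a number; null scores
  // are mapped to 0 before adding.
  return p.scores.reduce(
    (total: number, s) => total + (s === null ? 0 : s),
    0
  );
}

const kumble: Player = {
  firstName: "Anil",
  lastName: "Kumble",
  scores: [11, null, null], // illustrative, not the real scorecard
};
```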
Is this better? I don't want to increase it too much, because then I'll have to keep scrolling, but is it visible now? Cool. Similarly, we have the ES6 Set, which looks like this, and we can make it generic as well, so I can say that the set only accepts numbers. Yeah, fine. set.add with numbers, right? Everything good. But if I try to add anything else that's not a number, I can't. It's very easy to write generics in your own code, so I think I have, yeah. I have provided an implementation of generics. I've taken a subset of the API available in the Set interface: I've added the add, size and delete functions, and I've made it generic. This is how you do it. You templatize it with something within these angle brackets, and from that point onwards, whenever you instantiate it, you'll have to give it a concrete type for the kind of object that you're working with. I'll show this to you with a more concrete example, because I've written a test case here. So I have, wow, something's failing. All right, doesn't matter. I think I need to install the dependencies anyway, but yeah. So I have an interface called Employee, and it has a name and ID property. And so when I instantiate my simple set, I explicitly say that this set can only have instances of the Employee interface, or Employee objects. So from this point onwards, all the operations that I do will be type-safe, and they'll be restricted to the Employee type. All right. I could show you that it works, but I guess you don't care. All right, so I think I have by now laid enough ground to be able to show you how you can do React in TypeScript. Sounds interesting? So of course, you'll need to import the React module, but there's a problem here. We are in TypeScript land, and React was written in JavaScript. So what we need is a bridge between TypeScript and JavaScript, and that bridge is called a type definition file.
A type definition file is a collection of interfaces that describes the public API of the module that you're working with. So in the case of React, what I'm looking for is a way to tell my TypeScript compiler: this is what the React API looks like. Let's see how we can do that. There's a huge community effort around this called DefinitelyTyped. It's a repository that has over 3,500 type definitions, created and maintained by the community, and you need to plug one of these in if you want to use a JavaScript module inside of your TypeScript projects. This is how you install them: when you bring React into your project, you'll also have to install the @types/react package. This automatically pulls the type definition file for React, as defined on DefinitelyTyped, into your project. There are some projects which are of course already written in TypeScript; for them, you don't need this, because they're already first-class citizens. But for React, you need to do this explicitly. And when you do that, and when you click on this one, you're actually taken to the type definition file of React. You can see it's a huge file, tons of interfaces and methods defined on them. And in most cases, if it is well maintained, you also get a good amount of documentation around the interfaces and the methods and properties available to you inside of React. So this is also going to give you complete IntelliSense whenever you try to use the React API inside of your own project. So now that we have that in place, take a moment to see what's going on here. This is how we declare a React component. And the more astute of you will see that we're using something akin to the generics we saw in the previous section. And you'd be right.
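To make "a collection of interfaces describing a public API" concrete, here is a toy declaration in the spirit of what you'd find on DefinitelyTyped. The module name "greeter" and its API are invented for illustration; the real react type definition is thousands of lines of interfaces like these:

```typescript
// A toy ambient module declaration (what a .d.ts file contains).
// Everything here is type-only: it tells the compiler what the
// JavaScript module exports, without containing any runtime code.
declare module "greeter" {
  export interface GreeterOptions {
    /** Name to greet; doc comments like this surface in editor tooltips. */
    name: string;
    /** Optional flag: properties marked with ? may be omitted. */
    loud?: boolean;
  }

  export function greet(options: GreeterOptions): string;
}
```

Because declarations carry no runtime code, installing `@types/react` changes nothing about what ships to the browser; it only teaches the compiler and your editor the shape of the library.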
So what we have here is that we've defined the interfaces for the component's props and state, here and here, which we pass as generics to the React.Component declaration. And when we do that, you can begin to imagine the possibilities that unlocks. When I try to access this.props from here, I get type-safe IntelliSense on the props that I have defined this component to have. I can't set it to any other props that are not already defined by the contract. Similarly for state: this.state has something over here, and this is what I get to see. As well, I have this other component that I'm composing from my main component. It's called ListItem and has its own props over here: two properties, value and highlighted. Highlighted is a Boolean. So at the call site, when you're trying to use this component, when you're composing it, you also get IntelliSense for the props available on that component, like this. And here, highlighted is a Boolean, so it is satisfied only if I pass it a Boolean; if I try to pass it anything that is not, it's going to complain. Also, value is not an optional property, so if I omit passing it to ListItem, it's going to complain again. And let me show you something even cooler. Here we are destructuring on this.props, right? So when you inspect this, you get IntelliSense for all the properties available to you, and as you start using them, the list starts to shrink smaller and smaller. Of course, you can use the spread operator to store all the rest in some other variable called other. And when you do that, you get to see only the properties that have not already been extracted. I don't know about you, but the first time I saw this, my mind was blown. But there's one more thing: if you're a fan of inline styles in React, you're in for a treat.
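The props-and-state-as-generics idea can be sketched without pulling in React itself. The `Component` class below is a stripped-down stand-in for `React.Component` (real code would import it from "react"), and the interface names are invented:

```typescript
// A stand-in for React.Component, just to show how passing Props and
// State as generic parameters makes this.props / this.state type safe.
class Component<P, S> {
  state!: S;
  constructor(public props: P) {}
  setState(next: S): void {
    this.state = next;
  }
}

interface ListProps {
  title: string;
  values: string[];
}

interface ListState {
  highlighted: boolean;
}

class List extends Component<ListProps, ListState> {
  describe(): string {
    // this.props is known to have exactly title and values;
    // accessing e.g. this.props.colour would be a compile-time error.
    const { title, values } = this.props;
    return `${title}: ${values.join(", ")}`;
  }
}

const list = new List({ title: "Talks", values: ["TypeScript", "Reason"] });
```

Passing `List` anything other than a `ListProps` object, or omitting the required `values` property, fails at compile time, which is exactly the call-site checking described above.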
I just want to show you something and leave it there. See what these are: if I add a style here, I even get IntelliSense on the style properties and the values they accept. But let's just leave it at that. Okay. So that was React. Next, I want to show you some more advanced features of the TypeScript type system and the language itself that can help you design better software. First up is something called mixins. One of the cardinal rules of good software design is that you should prefer composition over inheritance, right? Everybody knows that. Mixins let you do just that. They're kind of a higher-order function that can dynamically modify the behavior of an object or class they're applied to. So here I have an interface called Animal with eat and move methods defined on it. Here we have a class Feline, the cat family, that implements Animal, so it also has to provide its own version of the eat and move methods, right? Similarly, we have a Bovine class, the cattle family, which also implements the Animal interface, and therefore it also has to provide its own implementation of the eat and move methods. In fact, I want to take a moment here to show you the tooling around this. Let's say I have another class called Canine, the dog family. When I do that, TypeScript can go ahead and give me default implementations of the eat and move methods automatically. And when I try to use this, like this, not only are the two methods available to me, but if I want to define another one, like this, which does not already exist, it can even go ahead and add it back to my class. One more thing: for instance, if, for any weird reason, I want to use the merge function from lodash and it's not already imported up here, it can even go ahead and... no, I think I'm missing lodash right now. All right, so I think this is a good example of how live coding works.
I install lodash as well as @types/lodash, because I also want the type definitions to come along with it. Let me see if I have it now; I'll just reload my project for a minute and see if anything works. Yeah. Now I see an option to import the lodash module, and all I need to do is this, and I can start working with it. One more feature that I want to show you, which is great for quick refactorings, is this: if you have a bunch of lines that you want to quickly pull out into a method of their own, you can select them and extract them into a private function. And from there you can, of course, rename it or refactor it to your heart's content. All right, so back to mixins. Here I have some stuff going on; let me just remove this piece of code. There's some ceremony here, which I'd recommend you just ignore for now, but mixins take advantage of a lesser-known feature in ES6 called class expressions, which looks like this. I'll suggest you ignore that too and just focus on the functions defined here. One is the outer function called Mooer, which has the moo function defined inside of it. Similarly, we have Roarer, which has roar; Meower, which has meow; and Hunter, which has move, okay? So let's see how we can make use of this. Over here, I'm mixing the Mooer mixin into my Bovine class, which we defined up there. And when you instantiate that, remember that we only had the eat and move methods available, but because I'm mixing in the Mooer, I also have the moo function on it. You see what we did there? Without even touching the original Bovine class, we've added new functionality, a new behavior, to our class. Similarly, we're mixing the Meower into the Feline family. So we're making the cats go meow.
And when you do that, we again get an additional method on it, courtesy of the Meower mixin, which gives us the meow function as well. And this is even more interesting, because these are mixins and they compose linearly. You can have any number of them applied on top of each other, and the outermost will have precedence: in case it defines a property or a method that already exists in any of the inner ones, it will override that behavior. So here we have the Feline, which can roar, so I'm kind of trying to make a lion, I guess. But I'm also mixing in the Hunter, and Hunter comes with its own version of the move function, right? So let's see what happens. Feline has the move function: it's daintily walking, right? We add this roar function to it so it can roar. And now the Hunter function, I think it's not visible over here, yeah: the lion doesn't just walk, it chases, right? And when I run this piece of code, I see that those changes have actually applied. What? I guess I left something in there that I shouldn't have. Fine, forget it; just trust me that it works. Too much live coding. All right, so I want to come to my last example, which is also the most complex, but it tries to cover everything that we've covered so far, just to wrap it up. And it's an exercise in design. First I want to show you this class, which is the basic PlayerController. What it does is define some CRUD operations on the session storage. So here we have the create, read and delete operations. And we have two utility methods called serialize and deserialize, which read and write data to and from the session storage: they parse the JSON data coming in and out of it. So this thing works. It works pretty well, but it's terribly designed, for many reasons. For instance, we have here a hard dependency on session storage, right?
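The mixin mechanics described above can be sketched like this. The animal names follow the talk; the `Constructor` helper type and the method bodies are a plausible reconstruction of the pattern, not the code on screen:

```typescript
// A mixin is a function that takes a class and returns a new class
// extending it, using an ES6 class expression.
type Constructor<T = {}> = new (...args: any[]) => T;

class Feline {
  eat(): string { return "eats"; }
  move(): string { return "daintily walking"; }
}

// Each mixin adds (or overrides) behavior without touching the base class.
const Roarer = <T extends Constructor>(Base: T) =>
  class extends Base {
    roar(): string { return "roar!"; }
  };

const Hunter = <T extends Constructor>(Base: T) =>
  class extends Base {
    // Overrides move() on whatever it wraps.
    move(): string { return "chases"; }
  };

// Mixins compose linearly; Hunter is outermost, so its move() wins.
const Lion = Hunter(Roarer(Feline));
const lion = new Lion();
```

`lion` ends up with `eat` from Feline, `roar` from Roarer, and the overridden `move` from Hunter, which is the precedence rule described above.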
So in case you are interested in writing isomorphic apps, this is going to bomb inside of Node, because Node doesn't have session storage. And even in case you want to make it reusable, for instance if you wanted to switch to using local storage, or you want to have that facility available to you tomorrow, this is absolutely not reusable, and you'd have to rewrite the whole thing again. Thirdly, we have these utility methods here. They do their job, but the controller need not really be aware of how I'm massaging my data. It just needs to have the data available to it, and it should move on from there, right? So let's see how we can improve on this design using everything that we've learned so far. And the linchpin that makes it all possible is interfaces. In case I did not say it before, I'm going to say it now: interfaces are probably the biggest reason you should be using TypeScript. So we have a few interfaces defined here: the Controller interface with the familiar CRUD operations on it, a Serializable interface with a serialize method, Deserializable with a deserialize method, and StorageProvider with save, get, update and delete methods on it. So let's see how we can use all this to write a better version of the PlayerController. This PlayerController seems slightly longer, but it's so lightweight because it doesn't depend on anything concrete at all; it doesn't have a hard dependency. We are saying that the PlayerController implements Controller, Serializable and Deserializable. That means it has to have the create, read, update and delete methods defined on it, as well as the serialize and deserialize methods coming from the other two interfaces. Over here, we're also doing dependency injection: in the constructor itself, I'm asserting that to construct this controller, I need a concrete implementation of a StorageProvider.
I don't care how it comes. All I need is to depend on the interface, not on a concrete implementation at all. That means that I can, from this point onwards, assume that because I've been given something that is a StorageProvider, I can call methods on it, like the save, get and delete methods, right? And it becomes very flexible. Similarly, over here you'll see that I have left the serialize and deserialize methods unimplemented, because I don't really care right now, but I'll show you how we can make them available when we are actually composing the whole program together. All right, so let me show you an example of how we can implement a storage provider. This is an implementation of the StorageProvider interface, so it has to have the save, get, update and delete methods. And over here, I'm not ashamed to have a hard dependency on session storage, because this is a very specific implementation of a storage provider, right? All right, and similarly, I have yet another implementation of the storage provider, which is a MemoryStorageProvider. So in case you don't have access to session storage, you can fall back to this one. And over here, I'm replacing the session storage with an actual in-memory Map, all right? It has a very similar interface, and it has to, because it implements StorageProvider; from the outside it looks just the same, but inside it is dealing entirely with Maps instead of the session storage, okay? Does it seem like it's getting somewhere? One more important point is the mixins. So you remember that we had left the serialize and deserialize methods in our new improved PlayerController unimplemented? Let's see how this works. I have two mixins, the Serializer and the Deserializer, which have concrete implementations wrapping the JSON.stringify and JSON.parse methods. Let me just put them all together. This is where we do that.
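A condensed sketch of the design being described: the controller depends only on the StorageProvider interface and gets a concrete provider injected through its constructor. The names follow the talk, but the bodies are simplified placeholders, not the speaker's actual code:

```typescript
// The controller depends on this contract, never on a concrete storage.
interface StorageProvider {
  save(key: string, value: string): void;
  get(key: string): string | undefined;
  delete(key: string): void;
}

// A concrete provider backed by an in-memory Map, usable even where
// sessionStorage doesn't exist (e.g. inside Node).
class MemoryStorageProvider implements StorageProvider {
  private store = new Map<string, string>();
  save(key: string, value: string): void { this.store.set(key, value); }
  get(key: string): string | undefined { return this.store.get(key); }
  delete(key: string): void { this.store.delete(key); }
}

class PlayerController {
  // Dependency injection: any StorageProvider implementation will do.
  constructor(private storage: StorageProvider) {}

  create(id: string, name: string): void {
    this.storage.save(id, JSON.stringify({ name }));
  }

  read(id: string): { name: string } | undefined {
    const raw = this.storage.get(id);
    return raw ? JSON.parse(raw) : undefined;
  }

  remove(id: string): void {
    this.storage.delete(id);
  }
}

const controller = new PlayerController(new MemoryStorageProvider());
controller.create("p1", "Asha");
```

Swapping in a sessionStorage-backed or localStorage-backed provider requires no change to the controller at all, which is the whole point of depending on the interface.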
So over here, I take the base PlayerController, I mix the Serializer and Deserializer mixins into it, and I get something called a SerializedPlayerController, right? So at this point, even though I did not implement any serialize or deserialize methods, we have them available at runtime through these mixins. Okay, the last piece of the puzzle is where it all comes alive. I have a test here, right here, where we are initializing all this, and you remember that we promised we'd pass in an instance of a StorageProvider when we construct the PlayerController, right? That's what we are doing here. This method tests if session storage is available. If it is, it immediately returns the SessionStorageProvider, the concrete implementation; if it is not present, it'll return an instance of the MemoryStorageProvider. Doesn't it look cool? Let me show you how this actually works. So now, talking of isomorphic apps, this is as isomorphic as it gets. Let me show you how this works inside of Node first. I'm trying to run the tests. All right, so when I run the tests, we come here and we are in the middle of a debugger breakpoint. What do you expect the PlayerController to be provided as a storage provider? Remember we have the session storage and the memory storage. What do you expect over here? Memory, and that's what we have. We have an instance of the MemoryStorageProvider injected into it, which, as you'd expect, already has one object added to it, so it has a size of one. The very next step over here is something that tries to delete that entry. So if I go ahead and step over the next line, I would expect this to be gone, and that's what happens. All right, so this is how it works in Node. Let's see how the same code, without changing a single line, will work in the browser. So all the tests ran; they passed.
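The provider-selection step can be sketched like this: pick the session-backed provider when the environment has sessionStorage, otherwise fall back to the in-memory one. The interface and classes here are simplified stand-ins, and the `globalThis` lookup is an assumption added so the sketch also typechecks and runs outside a browser:

```typescript
interface StorageProvider {
  save(key: string, value: string): void;
  get(key: string): string | undefined;
}

class MemoryStorageProvider implements StorageProvider {
  private store = new Map<string, string>();
  save(k: string, v: string): void { this.store.set(k, v); }
  get(k: string): string | undefined { return this.store.get(k); }
}

// Looked up via globalThis so this file compiles without DOM typings.
const webStorage = (globalThis as any).sessionStorage;

class SessionStorageProvider implements StorageProvider {
  save(k: string, v: string): void { webStorage.setItem(k, v); }
  get(k: string): string | undefined { return webStorage.getItem(k) ?? undefined; }
}

// Isomorphic: in the browser this returns the session-backed provider,
// in an environment without sessionStorage, the memory-backed one.
function getStorageProvider(): StorageProvider {
  return webStorage ? new SessionStorageProvider() : new MemoryStorageProvider();
}

const provider = getStorageProvider();
```

Either way, callers only ever see a `StorageProvider`, so the same test code runs unchanged in Node and in the browser.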
Okay, let's go to the same point, the same line, in the breakpoint. We have the same... how can I get rid of this? I've zoomed in too much. All right, so here we are, the exact same TS file. Let me add a breakpoint over here and refresh this. All right, so now we are in browser land, and what do you expect the player controller to be injected with? Session storage, right? Let's see what we have: it's a SessionStorageProvider. All right, in fact, we can even go to the Application tab and see that we indeed have an object created over here, and the moment I go to the next line, we'd expect it to disappear as well. And it does. Isn't that cool? All right, so I guess that was a very breezy introduction to a lot of TypeScript in a very short time, but I hope I've shown you enough to convince you of the enormous value that you get out of using TypeScript. The larger your code base, the larger your teams, the more complex your projects, the more value you can derive out of using TypeScript. So I guess what else can I say? Let's get in touch about other things. Thank you. Before we start with the questions, there's an announcement. Somebody left their Royal Enfield Bullet key in the parking lot. It's with the security guards. The bike number is KA51ER1581. If it's yours, please go take it from the security guard. And now we'll move to questions. Hi. Hello. Hi. So thanks for the nice presentation, but I had a lot of deja vu after seeing it, because I did a lot of work in GWT back in the day. And this seems like GWT in many ways, because it had static type checking, it had tree shaking, it had generics and all that. And it was basically taking Java code and compiling it to JavaScript. So it is good, but you know where GWT ended up. To begin to answer your question: GWT is for Java developers who don't know how to write JavaScript.
This is for JavaScript developers who want to up their game and start writing better JavaScript. But just one thing: you know, if there is ES2016, ES6, why would I invest in TypeScript? Yeah, like I showed on one of the slides earlier, you don't really have to make that choice, because ES2016, '17, '18 and everything that's going to come after is going to be first class inside TypeScript itself. They intend to be fully compliant with the TC39 recommendations. So you are pretty much in safe hands; don't worry about that. The gentleman over there. Hi, here. Great demo; it was very engaging and informative. I have one question. This was a very small application that you showed of how to use TypeScript on the React side. Angular we all know. But say I have to use it in my actual real-world project. Have you used it with React? And if so, what problems have you seen using TypeScript with React? Yeah, so TypeScript, in the more recent versions, has started to ship with a JS compatibility mode, which means that you can incrementally migrate your existing code bases or legacy code bases to TypeScript. So you don't really need to go all out at first. All the newer files that you write from this point onwards can be TypeScript. And you can also add a directive at the top, called @ts-check, in your JavaScript files, which can start to give you a good amount of IntelliSense even inside your JavaScript files. But is there anything that will work with regular React but not with TypeScript? No, there's nothing like that. It is fully compliant with JavaScript as we know it today and tomorrow, and also with JSX. Sorry, JSX, of course. Yeah, I showed you JSX in full glory. Thank you so much. It sounds like you're going to have a lot of people coming up to talk to you later. All right, cool. Thank you.
So it's break time. This is the flash talk sheet. Notice how empty it is. There's a man over there who has a flash talk, so we're going to have one. Don't let him be the only flash talk this afternoon. Please come see me. I will be down where the coffee is, because I've gotten word from my colleagues that the coffee is set up now. So if you want to find me for a flash talk, please find me down in the coffee area. And I will remind you: please turn off your phones or at least turn them to mute. I heard a lot of notifications, and I'm really glad you're popular, but it's kind of irritating to everyone around you. That's it for announcements. We will be back at 12:10 for the next of our talks on JavaScript alternatives. Shiva Kumar, please report here. Welcome back. Hope you had a great morning tea, and yeah. So yesterday we heard a lot of complaints around screen visibility. We would like to get some feedback: is the screen visible today? Were you able to look at the slides properly? Was there enough color distinction, font size? Any feedback? Yes? No? Better? Cool, moving on. Please remember to put your phones on silent. It's slightly disrespectful to the speaker if your phone rings in between, or you start getting notification messages, whatever. Let's put them on silent, and if you need to take a call, go outside the conference hall. The next talk is by Nihar Venugopal on Reason. Reason is a new wrapper on a venerable old functional language, OCaml, with fantastic type inference and a blazing fast compiler. It has trivial interoperability with JS code, and yeah, Reason is worth a look, guys. Nihar is the CTO at insider.in, so he would definitely know what he's talking about. Is everyone back after chai, pani, coffee? Everyone's feeling up for this? Cool. So my name is Nihar. Thanks, Akshay, for the intro. We're going to talk a little bit about something that's a little bit on the bleeding edge, but at the same time is also really, really old: and it's Reason.
What are my objectives with this talk? Are my objectives to convince all of you to write functional code? Are the objectives of my talk to make sure that all of you leave here as converts to the cult of the Reason programming language? Actually, no. As you might have guessed from the title of my talk, I actually want to convert all of you into Hoobastank fans. How many of you heard "and the reason is you" back in college? My stated objective is really simple: I want to get that song stuck in your head before this talk is over. So let's go through what Reason is. Reason is not a new programming language. That's a distinction I want to make upfront, very, very clearly. Reason happens to be a new wrapper, a new syntax, a new sort of toolchain around this slightly academic programming language called OCaml. How many of you have heard of OCaml? It's been around for a really, really long time, but no one ever used it for front-end development before. There have been a couple of recent developments that changed that; let's see what those are. So first of all, there was a very old programming language called Caml, which a bunch of people sitting in France decided to work on. Then it got changed into this language called Caml Light, which had a bytecode interpreter; it had a VM, pretty straightforward stuff. Then it changed to Caml Special Light, which also had a compiler that could spit out native code, and not just bytecode for the VM, and we all know compiling to native code was much faster. But then something very interesting happened: along came this language built on top of all of this called OCaml. OCaml actually stands for Objective Caml, and one of the thoughts behind it was: what if we take this functional language, Caml, and allow people to write some object-oriented code as well? Be a little bit more pragmatic; let's not be these priests of functional purity, right?
So along came OCaml, which lets you dig into the imperative parts if you want to, while still giving you all of the benefits of a statically typed, strongly typed functional programming language. And then, 20 years later, there was this project open-sourced by Bloomberg called BuckleScript. BuckleScript is very interesting because it enabled this 25-year-old programming language, which could do all sorts of interesting things, to actually emit JavaScript. Pretty neat, right? And then in 2017, Facebook open-sourced Reason. And what Reason essentially is, is: take the OCaml language, which we can now have spit out JavaScript, but hang on, this 25-year-old functional programming language's syntax is kind of off, you know, for a bunch of us who come from a web development environment. We like the way our JavaScript is written. We like our brackets. We like seeing things in a certain way, right? So why don't we take this old functional programming language and make the syntax a little easier for JS devs to understand? Because that's always been the difficult part, right? We can bikeshed indefinitely over "this language has referential transparency", "this language has X", but at the end of the day, if it looks like garbage, I'm not going to write in it. That's the truth. That's what it all boils down to: does it look pretty enough for me? Does it look readable enough for me? So that's what Reason does, very simply. OCaml can spit out bytecode or it can spit out native code. We replace the back end of the OCaml compiler with BuckleScript, which takes the OCaml syntax, constructs an abstract syntax tree out of it, and instead of spitting out bytecode or native code, we make it generate JavaScript. And then we update the syntax. So we've taken this old way of doing things and got a brand new way, where you've got a functional programming language giving you JavaScript.
But this is the how; let's get into the why. So Reason and BuckleScript are sort of evil twin sisters working in tandem. While you can use Reason with other OCaml back ends as well, it's by far best used with BuckleScript, because it's just amazing. So, quick recap: Reason is a new syntax for a very old programming language, OCaml. You can use both the entire JavaScript ecosystem as well as the OCaml ecosystem, right? You're talking about a 25-year-old programming language and the plethora of libraries that have been developed for it over many, many decades, which you can use in your front-end code or your server-side code today. And you're talking about the entire npm ecosystem. Bring those two together, get the advantages of a functional language, and you get Reason. Who remembers the chorus of the song? "And the reason is you." This is pretty much what it is, right? I told you I'm going to get it stuck in your head. So essentially that is what it is, right? "I found a reason for me," and blah, blah, blah. And what is the reason? Right? First of all, the reason is type safety. The reason you want to use a functional programming language, or strongly typed programming languages in general, is that you want type safety. But Vijay did an excellent talk on TypeScript, and I'm pretty sure he sold you on types and how important they are for your front-end development. Types aren't meant to make writing your initial code easy; as far as we know, types are meant to make your code easy to refactor. That's always been the case. One of the biggest reasons why we haven't jumped into writing types so much is because it's so verbose writing them out. But Reason solves that in a very intuitive way, because it has a world-class type inference system. You actually need to write very few types with Reason, and we're going to see how that works. The reason is performance.
One of the advantages you get from having the compiler itself written in OCaml, the compiler that converts Reason into JavaScript, is that it's actually compiled to native code. So you have a native binary compiling your functional language into your JavaScript code, which is blazingly fast. In fact, sometimes when I run it on, say, a few thousand lines of code, you're looking at sub-second compile times, and this is not even in watch mode. And for those of us who are used to, you know, Babel and Webpack, and staring at our computer screens or going to get a coffee while our build is running, this is a godsend. One of the other aspects of performance we'll also discuss is the fact that, because of the deep static analysis the compiler can do, it actually ends up emitting very optimized JavaScript. It's also pragmatic. Remember what I said: OCaml is Objective Caml. So if you as a JavaScript developer don't want to drink the Kool-Aid of "I need to write functional code anywhere and everywhere", but you actually want to say, you know what, I may not feel great and I may not sleep well at night, but I do want to write some hacky imperative code, go ahead and do that. The compiler's got your back. It's also phenomenal tooling. We'll see some examples of that; one of the advantages of Reason is that it brings together several different tools from the OCaml ecosystem for you to get a great type system, great editor integration and stuff like that. It's also interop. For any compile-to-JavaScript language, one of the most important things has to be: wait, I already have an existing project, so how do I use this new goodness with my existing code? And one of the ways you end up being able to do that is because Reason gives you a way to interop with your existing JavaScript code. And all of this put together ends up giving you a really good developer experience.
One of the biggest points that I want you to take away from this entire talk is that if you use Reason, you will be more productive, while at the same time not sacrificing any of the developer experience that you're accustomed to. So let's go through the type system in a bit more detail. I won't go into as much detail as the TypeScript talk, because there's a lot more to talk about here as well. Firstly, it's sound. What do you mean by a sound type system? It simply means that the compiler, when it's checking your types, is not just trying its best. If the compiler says that your program compiled successfully, all the type checks have passed. It's covered everything. It's sound; it is not going to make a mistake. It's inferred, which means that a lot of common patterns that you'd see in other languages with types, where you explicitly have to write a lot of annotations, you don't need in Reason. The compiler is almost like a pair programmer with you, and it automatically infers a lot of very interesting things. There's 100% coverage, so unlike Flow or TypeScript, where coverage can be partial, everything is typed; that's the term that we use in Reason, which is strongly typed. So you define your record type over here: I'm saying the type talk has a title and a length. Now, usually what you would assume is that if you had to write any functions or any code that deal with this type, everywhere in your code you would have to say: okay, this thing that I'm dealing with, it's of type talk, right? You don't need to do that in Reason. You just write it like you would any other JS object. It automatically looks at the shape of your object and figures out that the type it corresponds to is talk. That's it. You don't have to annotate it at all. And this is beautiful. This is the way it should be, right?
Obviously, there are a few edge cases with this where, if you have types which are very similar, you still need to annotate. But in my experience, for the vast majority of code that you're writing, the compiler just gets out of the way. And let's say you write a function. Here I have a really simple function called trimTalkLength. Now look at the syntax. How many of you think this looks very similar to JavaScript? We've just used "fun" instead of "function", right? That's about it. And I'm using a spread operator over here, and there's no return statement, because in a functional language everything is an expression: the last line is your return value, right? And look at the type signature of this function. It's automatically inferred, without me annotating a single thing, that this takes in a value of type talk, modifies it slightly and returns it. So you get all the goodness of types that we saw in the earlier talk, and we generally know types are good for us, without the verbosity of writing it all out, which is the holy grail. Let's look at something interesting that is born out of the type system and other interesting things in the language: pattern matching. How much of our JavaScript code ends up being switches and if-elses? Pretty much most of it, right? Any conditional branching logic that we need to do ends up being that. How can we have the compiler help us with that? So this is a pretty simple recursive factorial function; this is how most of us learned it. It's a really bad way to write it, because you'll get a stack overflow, but still, this is how you'd do a recursive function. Here I've used a slightly different syntax, just to illustrate that there are two ways to write functions in Reason. So here, this is the name of the function, and this is the value that we're passing; it's a different notation for writing it. You can also write it in the more JavaScript-like way.
If you feel more comfortable with that, again, it's completely up to you. And here I switch on the value, right? I say if it's zero, return one; otherwise return n times factorial of n minus one. Pretty straightforward; this is how you do a simple factorial function. How does the compiler help us with this? What if I forget the case where it's n, multiplying n into factorial of n minus one? I get a warning here. Okay, fair enough; maybe the compiler can figure out that there's something wrong with the switch. And remember, here we're switching on the value of the thing we're passing into the function. But here's what's really cool when you run this: the warning the compiler gives you is not just that there's something wrong with your program. This is the part I want to highlight: the compiler actually tells you that you forgot to handle a possible value here, for example 1. It's actually giving you a value that you can input into your function which you haven't covered in your code. This is literally the equivalent of someone sitting next to you saying, bro, you forgot this case. This is what compilers are meant to do, but somewhere along the line they became these monstrosities that only serve very obscure error messages, right? The first time I saw this it blew my mind; I was like, I don't need anything fancy, I just need a really good compiler, right? So pattern matching is cool. You know what's cooler? Pattern matching with types. What if we can take these types and do interesting things with pattern matching on them? Here's an example from a thing I've been working on. It's a little verbose, but bear with me; it's a decent amount of code, and we'll go through it very quickly. First, I have a simple phone number, which says that I can have three kinds of phone numbers. There can be no phone number.
There's a phone number stored in the international E.164 format, which is just a string: +91 something something something. Or, in some legacy system of ours, we actually stored the country code and the phone number separately. All of us have been in this situation: we have backend systems with different formats, then those formats get updated, and in the frontend code we have to change a bunch of things and write a huge amount of if-elses to handle all of it. So what do we do here? We define a type, right? This is known as a variant, and we say that our phone number can be None, or this, or this. The two sort of inputs you see to a variant are like constructors: we're saying that E164 takes in a string and does something with it, and LegacyPhoneNumber takes an int and a float and does something with those. So here I'm just going to define the three: the new phone number just takes in a string, the legacy phone number takes an int and a float (because the number part is going to be pretty big), and the no-phone-number case is just None; this person simply doesn't have a phone number. Now let's say you need to write a function to format this phone number and display it very nicely to the user in a standardized way. This is how I'd write something like this.
We switch on the value, or really on the type, and we say: when he doesn't have a phone number, we send him a really nice message. When he has the legacy one, if you notice, we're able to take the int and the float that we pass to the legacy constructor and rename them as prefix and number, so we can have nice-looking names there as well. string_of_int is a very simple function to convert an int to a string, and likewise for the float, and this is how we end up formatting the whole thing. And if it's E164, it's already a string, so we just display it as is. We log it, and this is what the output looks like. Pretty cool, right? We've just taken a bunch of our if-else logic and written it a bit more concisely. That's nice, and not too different from JavaScript. How can the compiler help us? Imagine I remove a case: I don't handle the case where there is no phone number. The compiler will literally tell you: you forgot to handle a possible value, the one where there is no phone number. That's it. Now I know that I need to handle that case as well.
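For readers following along in TypeScript, the variant-plus-pattern-matching idea can be approximated with a discriminated union and an exhaustive switch (a hedged sketch with hypothetical field names; Reason's variants and exhaustiveness warnings are the native version of this):

```typescript
// Hypothetical TS analogue of the Reason variant from the talk.
type PhoneNumber =
  | { kind: "none" }
  | { kind: "e164"; value: string }
  | { kind: "legacy"; countryCode: number; number: number };

function formatPhoneNumber(p: PhoneNumber): string {
  switch (p.kind) {
    case "none":
      return "No phone number on record";
    case "e164":
      return p.value; // already a "+91..." string
    case "legacy":
      return `+${p.countryCode} ${p.number}`;
    default: {
      // Exhaustiveness check: if a new variant is added to PhoneNumber
      // and not handled above, this assignment fails to compile.
      const unreachable: never = p;
      return unreachable;
    }
  }
}

console.log(formatPhoneNumber({ kind: "legacy", countryCode: 91, number: 9876543210 }));
// → "+91 9876543210"
```

The `never` trick also gives a weaker form of the refactoring safety described next: add a variant to the union and every non-exhaustive switch becomes a compile error.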
Why is this useful? Also for refactoring. Imagine tomorrow your backend team comes to you and says: we have decided on our own custom in-house phone number format, because we know better than the rest of the world. We all work with backends like that, right? And they come up with a brand spanking new format unlike anything anyone has ever seen. Now you need to handle that case everywhere in your code. How do you do that? You just have to update your initial type definition for phone number. The compiler will tell you every single place, whether that's formatPhoneNumber, whether that's addPhoneNumber, whether that's take-the-phone-number-and-display-it-in-emoji, whatever functions you've written; the compiler knows all those places, and it will tell you that you haven't handled the new format. Now let's get into performance really quickly. One of the obvious things I talked about is how fast the compiler is. Also, because it's able to do a lot of very complex static analysis, it does dead code elimination, and more importantly, it pulls in very, very little runtime. Most compile-to-JS languages, to preserve the semantics of the original language, pull in a huge amount of runtime JavaScript to keep the environment the same. But one of the reasons Reason was invented, the way I see it, is the fact that OCaml and JavaScript in a lot of cases have a one-to-one similarity, whether that's switches, objects, arrays, or let bindings, so in a lot of places you can translate things over very simply without needing to pull in any of the runtime. Let's take a look at how that looks. Here's our factorial function again, similar to the one we saw earlier. When I compile this, what does the output look like? And this is the output. But hang on a second: I just sold you on how there's minimal runtime, but why is the code so big? Does anyone know? JavaScript just
doesn't have integers, right? So when Reason compiles that code, it wraps it with sort of integer-multiplication helpers and so on to preserve the semantics of integer code; it ensures that safety. But what if you were to rewrite our factorial function using floats, because JavaScript only has floats? So I've done a bunch of float operations over here; you just have to annotate your multiplication and subtraction and so on with a dot. This is the compiled output. If you remove the top comment and the bottom comment, I can guarantee none of you would realize this was spat out by a compiler. This is exactly the code any of us would write, right? And this scales to thousands of lines of code, not just a simple factorial example. So one of the biggest selling points of Reason, and one of the coolest things about it, is that you don't have to treat the compiled JavaScript as some sort of build artifact. Write your code in Reason, let it spit out JavaScript, check the JavaScript into your source folder, and have the other people on your team who don't know functional languages use that JavaScript, because they can look at the file and understand what it is without needing to know Reason. So this makes it very easy in large teams to start adopting it incrementally. I mean, the first time you see this you're like, what the hell is going on, right? It's also very pragmatic, in that it gives you a lot of escape hatches to interop with JavaScript and to write imperative code. I'm going to skip over some of this stuff, because it's a little dense and also because we're short on time, but there's one very important thing I want to talk about: how do you do interop? Pretty straightforward: if you want to use Reason from your existing JavaScript, just compile the Reason to JavaScript and use it. If you want to use existing JavaScript in your Reason code, you basically have to summon the old gods, perform an arcane ritual, and
sort of pray for the best. But essentially there's a way to do it, using something called a foreign function interface. One quick note about Reason: it was invented, or put together, by Jordan Walke, who is also the inventor of React. Originally, React was written in a language called SML, but they very quickly realized React wouldn't be adopted very widely if it was written in an obscure ML language, so it was converted so you could write it in JavaScript. But now we have the tooling, with BuckleScript and so on, to actually use React the way its authors originally intended, which is using Reason ML. It's the same team that built both. So if you're getting into React development, definitely check out ReasonReact. It has a beautiful API, which you can quickly see here, where everything is pure functions, and that's something you can dive into. This is FFI, which is how you deal with existing JavaScript. It looks super esoteric and complicated, but what I want to point out to you is the JS that's generated. Reason doesn't understand what document is or what getElementById is, but you can write an interface that compiles to this JavaScript, which again looks completely handwritten. Pretty cool. Now, the thing I'm most excited about for Reason is not the fact that I can write type-safe pattern matching and do all these cool things today. The reason I'm most excited about it is the future. What happens next? So currently, this is how Reason works, right?
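The FFI idea, a typed interface over JavaScript the compiler doesn't otherwise understand, has a loose TypeScript cousin worth sketching: confine the untyped call to one place behind a typed facade (hypothetical helper; real Reason bindings use `external` declarations, not this pattern):

```typescript
// Hypothetical untyped JavaScript helper, standing in for code that
// arrives from the JS side without type information.
const untypedTrim: any = (s: any) => String(s).trim();

// A typed facade: callers see a safe signature, and the unchecked
// call is isolated here, the moral equivalent of an external binding.
function trim(s: string): string {
  return untypedTrim(s);
}

console.log(trim("  hello  ")); // → "hello"
```

The design point is the same in both languages: the unsafe boundary exists, but it is narrow, explicit, and auditable.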
We compile to JavaScript, which we can run in a browser or in Node. But remember, at the very beginning of my talk I said that OCaml has a brilliant compile-to-native pipeline as well, and the BuckleScript compiler itself is also written in native code. So what if we could take the Reason compiler, or the OCaml compiler, and have it generate a binary for the server side? Imagine writing isomorphic web applications where you're shipping client-side JavaScript, but your server is an absolutely, completely optimized tiny binary. With OCaml code compiled to native, I have seen servers that start up in six milliseconds. Imagine the kind of performance we can get out of that, and imagine a future where the same compiler is giving you JavaScript for the front end and a binary for your server. You write type-safe code which compiles to JavaScript if you feel like it, or compiles to all the possible places it can be deployed, in binary. Basically, what I'm trying to tell you is that we have reinvented Java after all these years: write once, run anywhere, except not in bytecode but in binary. There are still a few things left for wasm to be ready, like garbage collection and tail-call optimization, but we're very, very close to this. And we as JavaScript developers now have access to writing code in a functional language where the types are inferred for you, where the syntax isn't that different and feels familiar to us, and to actually deploying native code. You don't have to feel like the Rust guys or the C++ guys are better anymore, or doing some sort of sorcery out of reach. We can do this, and that's why I'm most excited for Reason. That was my talk. A couple of quick shoutouts to a few friends from Twitter for helping me with feedback on this, and thanks to the organizers and all of you for listening. I hope I've got the song stuck in your head, and I hope
I've given you a reason to try out Reason. Thank you so much. We have time for just one question. The question: so you write Reason, and it gets compiled to JavaScript; let's say you get a runtime error. How do I trace that error back into my Reason code? How good is that? There's a couple of things there. Because your code is strongly typed, if it's entirely Reason code you're not going to get a runtime error; that's just not going to happen, because you've covered every single possible case, as long as you model your types correctly. And if you say that this value can also be absent, you will never get "undefined is not a function"; you will forget that phrase even existed. But if you do end up getting a runtime error because you're mixing your strongly typed Reason code with some of your untyped JavaScript code, then because of the nature of how Reason compiles to JavaScript, and the fact that it doesn't spit out obscure code, in my practical experience it's very easy to go back and do a one-to-one mapping, because I know exactly where it came from: the output looks very similar, the variable names are preserved, the file names are exactly preserved, and it even tries to preserve the same data structures you were using. The compiled code also has extensive comments on the equivalent names of things in Reason, so it becomes pretty easy to figure out where it came from. Thank you. Any other questions? Happy to meet you guys outside; come fight me on this. Thank you. We announced in the morning that we are going to be looking at three alternative languages which are coming up in place of JavaScript. We learned
about TypeScript; then we had Reason, which is based on OCaml; and the third language is PureScript. We have Vimal with us, who's the CEO of Juspay; they build and maintain large-scale payment apps. He will talk a bit more about the framework they've built and then dive into the rationale for choosing PureScript as their language. A quick intro about myself: I love the front end, and business logic came after I got introduced to it and to Clojure. I've been programming in Clojure for five to seven years, and my real appreciation of the form of code, and my introduction to functional programming, happened through Clojure. So this talk is about some of the things we have learned over the last five years at Juspay. The biggest problem in a company like ours is managing the ever-increasing complexity of a lot of vertical solutions. We power payments for most of the top large enterprise companies, and that needs a lot of vertical solutions. We call ourselves a micro-app platform: we are a small app inside many other apps, like Flipkart, Amazon, MakeMyTrip, BookMyShow, etc., and all of these need customization and the ability to manage and control complexity. We also develop apps like BHIM, the UPI app, which many of you might be aware of. So how can a small company, almost running like a bootstrapped company, actually manage such large, complex apps with a very small and young team? Half of our company is composed of freshers and interns. This is the problem I wanted to attack head on: how do I take a hundred interns and give them the kind of experience I built over ten years, in one year? This was a very, very exciting problem I wanted to address. And being a Clojure programmer, my immediate approach was pushing for Clojure, right? You get into arguments like: Clojure is the art of programming, and once you get it, you are going to become an amazing
programmer. And then there's the whole conflict of Clojure versus Haskell; my team wanted Haskell. How did we end up in Haskell, and in fact PureScript? That is going to be the story of my talk, focused on the kind of apps we have been building and the learnings. So let's start with why apps are hard. This is something we have figured out very experientially: most of the models in the industry are not suitable for transactional apps. Of the apps you use, 80% or 90% are transactional. Take Flipkart: after you have done your search, the rest of it is your conversion funnel. Take a payments app: it's a conversion funnel. Take even a travel app: for the most part, the entire product team is thinking about conversions. The event-based model is not suitable for modeling the conversion funnel, or for modeling the user experience of a flow or of business logic. And not just that: even if you make a DSL in a language like JavaScript, mash it up through a JSON-like structure, and create a workflow (there are lots of workflow systems which have tried to model business logic), they are not composable. The whole point of taking your business logic as reusable building blocks and putting them together is not at all solved in the industry. The second thing, and I think all of you can relate to this: since I have had the experience of the early days of systems engineering, and how deep it was compared to how the JavaScript ecosystem works, it feels a lot to me like pop culture. Everything is "getting started": how do I get that excitement today, and next week it's going to be something else. There is no focus on going deep, investing time in learning the fundamentals, and really looking at the ever-increasing complexity we are getting while decreasing
the complication. Complexity is good; the world moves towards complexity. The thing is, complication is growing multiple times faster than the complexity itself, and that is what we need to control. There is no theoretical approach, except, say, Haskell: if you can get into Haskell you can really get there, but there is no path towards getting into Haskell. This is something we wanted to solve. The third thing, related to my previous point, is that there is a lot of fragmentation. What we see is that most things, front end, back end, persistence, services, integration, analytics, infrastructure, can be unified with very few primitives. Mathematicians are the ones who think like this: look at lots of things around the world and say, all these are nothing but numbers; all these are nothing but changes, this is nothing but calculus, we'll model everything. Such an approach is not there in the industry, so we wanted to bring it in and make it practical. So yes, these problems are there, and we took on a mission. Me being a technology person and also the business and product guy, I always see all of them as the same thing. Why can't we unify? Why can't product managers, designers, and technology people work together, with all of them able to understand the code? If we make that happen, we can really make it 10x better. So how did we end up solving it? We focused on user experience, or UX, as the primary abstraction we wanted to extract. There are lots of ways you can abstract things out, but user experience is something that developers relate to, product managers relate to, designers relate to. So how do we take something as designer-friendly as user experience and merge it with the math of Haskell and make it practical? And product managers are extremely interested in conversion funnels and business flows; we wanted to model those as goal trees, which I
will get to. And we wanted to get into PureScript, a dialect of Haskell which is based a lot on category theory, and make it practical. And as I said, how do we unify and figure out the building blocks? There are only three building blocks; I'm going to get into the details of this. So what is meant by UX as conversational flows? Imagine any app. An app is nothing but somebody sitting in the middle between a system and a user, just having a conversation. Imagine an app in which you can just say: okay, welcome the user, tell the user this, ask the user for these things, thank the user, delight the user, cross-sell this to the user, notify the user. Sounds very familiar, right? Anybody, even a non-tech person, will be able to understand it. So this is the primary abstraction we wanted to anchor on for app development: user experience becoming the vocabulary for our framework. The other side of the equation is the language to talk to the system. While talking to the system, we say things like: okay, detect whether there is a SIM present in the phone. So there are verbs and nouns, which again relate to a normal person, which we can turn into a DSL and make very, very intuitive. The next step, even after you create a DSL, a domain-specific language, is: how do we make these into modules and make them composable? It's easy to come up with a flowchart-like approach, or a way in which you write a long essay, but it goes out of control; your program becomes big narratives which are not reusable. That's where the problems that functional programming really addresses come into the picture. For us to make the UX DSL and the system DSL read as a narrative, and at the same time turn them into Lego blocks composed as a hierarchy, we need to solve state management, and we need to solve control flow, like how you do
asynchronous computing, versus error handling with try/catch, versus retries and so on, making them all very intuitive, modular, I would say annotations that you can put in your DSL, so that anybody can understand them. That's where functional programming shines, and we were able to create a flow monad. How many of you have heard about monads? Right, that's the scary M word. What we have really done here is create a custom monad which models the flow structure and state, and also how you mash the computations together. It's based on a continuation monad; you can email me about it, and I want to give a separate lecture on it sometime. And if you look at this slide, take an app like BHIM: for onboarding, it's very easy to think, okay, onboarding is composed of maybe an introduction, then mobile verification, and creating a virtual payment address. Understandable, right? And introduction is composed of, say, three welcome screens and a couple of help screens. Mobile verification is composed of two types of verification: one is by sending an OTP and verifying, but if the SMS is not reaching some users, there is another way, by sending one SMS from the phone. So these are two goals, combined as this-or-this. The first set of goals, introduction, verification, and create VPA, run in sequence; inside verify-mobile there is a goal which is verify-mobile-with-OTP or verify-mobile-by-sending-SMS. And again, verify-mobile-with-OTP is composed of a UI goal, an API goal, and a module which polls and waits for the OTP, etc.
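The goal tree on the slide can be sketched as nested data, with sequential children and alternative ("this or this") children. This is a hypothetical TypeScript sketch of the idea, not Juspay's actual framework:

```typescript
// Hypothetical goal-tree sketch: a goal is a leaf action, a sequence
// of sub-goals (all must succeed), or alternatives (first success wins).
type Goal =
  | { kind: "leaf"; name: string; run: () => boolean }
  | { kind: "seq"; name: string; children: Goal[] }
  | { kind: "alt"; name: string; children: Goal[] };

function runGoal(g: Goal): boolean {
  switch (g.kind) {
    case "leaf":
      return g.run();
    case "seq":
      return g.children.every(runGoal); // in order, stop on first failure
    case "alt":
      return g.children.some(runGoal); // in order, stop on first success
  }
}

const onboarding: Goal = {
  kind: "seq",
  name: "onboarding",
  children: [
    { kind: "leaf", name: "introduction", run: () => true },
    {
      kind: "alt",
      name: "verifyMobile",
      children: [
        { kind: "leaf", name: "verifyWithOtp", run: () => false }, // OTP fails...
        { kind: "leaf", name: "verifyBySendingSms", run: () => true }, // ...fallback succeeds
      ],
    },
    { kind: "leaf", name: "createVpa", run: () => true },
  ],
};

console.log(runGoal(onboarding)); // → true
```

Because the tree is plain data, subtrees like `verifyMobile` can be lifted out and reused in another app, which is the composability point being made here.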
Right, so if you look at this, anybody, our product managers, even people not introduced to this way of programming, understand it. So how can we make something which is as intuitive as this and as composable? In fact, the interesting part is that I can take parts of this and reuse them between different apps. The UI is completely abstracted out of this, and lots of different solutions we've built in our company use the same code. That's the beauty; an event-based programming model would not allow us to reuse components which are spread out like spaghetti. This is extremely tight: I can take my whole UPI onboarding goal, with everything tightly composed, take it out, and put it back in. That's the beauty of it. And things like retries are handled very hierarchically: a retry within mobile verification will happen inside that goal itself, and if it can't, it goes to the parent, which has its own strategy for retry. We have a formal way of doing all this while still being very, very practical and user friendly. This is another way of looking at the same goal tree, reading like a narrative: welcome, then ask, and ask is itself a goal tree, a function composed of a couple of other things, and the subflow is again another sub-function being called. When you look at the slide, I think it's pretty intuitive, and the code looks exactly like that. This is what we were able to achieve. So the story behind how we got here: as a product guy, a year before, I was really looking for this, and I was starting to write Clojure macros and going to my team saying, hey guys, I want something like this, learn Clojure, I want the shape of the code to be like this. And we ended up in Haskell, which I'll come to in a few slides.
The interesting thing is that this embeds things like monads, alternative operators, and types. In fact, types and functional programming have been the theme today, and I am again going to be a strong advocate of the same philosophy as the previous talks. The beauty is that it gives you something like this, and it's fully typed: PureScript is fully typed, and if you make a small mistake, it will catch it. So that's about PureScript; in fact, most of the talks before this have covered many of the good parts. The one thing I want to cover that is maybe unique here is the model of programming: we chose something called continuation-passing style. This is again something that Scheme and Lisp have had; Haskell also has it, by the way. A quick intuition: a continuation is nothing but taking computation blocks and having the ability to control the control flow. It's like having a function that returns a promise, but the way you resolve that promise looks like a synchronous call rather than an asynchronous one. If you really look at this code, I am doing a showUI, so a UI screen is shown, and the return value of that is what the user entered, which is piped into my API call, which is piped into another UI, and so on. And if I show you a little more code here, this code has something a little more complex, where I show a UI, get its return value as a user choice, and I am able to look at all the different types of user choice, handle each of them, and fork the flow into the lower parts of the goal tree. Cool.
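The continuation-passing shape described above (show a UI, pipe its result into an API call, pipe that into the next step) can be sketched like this, with the UI and API steps mocked. This is a hypothetical illustration of CPS, not the actual flow monad:

```typescript
// Hypothetical CPS sketch: a Flow<T> is a computation that delivers
// its result to a continuation instead of returning it directly.
type Flow<T> = (next: (value: T) => void) => void;

// "Bind": run one flow, feed its result into the next step.
function then<A, B>(flow: Flow<A>, step: (a: A) => Flow<B>): Flow<B> {
  return (next) => flow((a) => step(a)(next));
}

// Mocked steps standing in for the showUI / API calls from the talk.
const showUi: Flow<string> = (next) => next("9876543210"); // user enters a number
const callApi = (phone: string): Flow<string> => (next) => next(`verified:${phone}`);

let result = "";
then(showUi, callApi)((r) => { result = r; });
console.log(result); // → "verified:9876543210"
```

With a bit of syntax (do-notation in PureScript, async/await in JS), chained `then` calls read like straight-line code even though each step is asynchronous underneath, which is the intuition being described.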
So we looked at our DSL and why functional programming. The third thing is that in app development itself, using all this theory and these tools, we still need to unify; we need to think creatively about what the commonalities are. In fact, the inspiration for that came from category theory, the level at which mathematicians are thinking. If we bring that kind of thought, that culture, we start thinking: why not find the commonality between everything and unify it? What we ended up doing is saying that an app is nothing but three kinds of building blocks. One is obvious, and the React world has done it: the UI structure as a hierarchy, clearly broken into a bunch of components we can make and reuse. The thing we really innovated on is the goal tree, which I talked about. The third building block is data as types, in fact algebraic data types, and all the great things you've heard about; PureScript also has them. So what we have achieved here is something like a directory structure in your app which holds all your types: all your interfaces, all your data models, everything as algebraic types. Don't repeat, don't copy between types; your entire back end and front end are unified and captured in the same set of types, because we control our front end and back end and are able to do that. Similarly with the goal trees: the back end is nothing but an extension of the goal tree.
So why do we need a concept called a back end? In our company there is no back end; you just have flows, or goal trees. The code which runs in the front end has effects with which it can manage the UI, etc.; the code which runs on the server can maybe manage your Redis, your database, etc. It's just a small difference in effects; it's just an extension of the DSL, of the code that I showed. In fact, you can call one function there which will run in the back end, and your concept of API, back end, front end, everything is gone. So why PureScript? This is in fact a big debate, and we still sometimes debate it. I really like Clojure, but given that we are a team of about a hundred people who are almost all freshers, Clojure, or Lisp, needs a lot of experience from other fields, in my opinion. Maybe great musicians would be great Clojure programmers; they would have understood the world in many other ways. Clojure is like a canvas on which you can make whatever you want; it's like clay. Haskell, rather, has a math approach. Maybe initially it is a bit of a path of thorns, but you can cross a certain level. What I was looking for, maybe learned from my Bach music, is: is there a path for programming itself? Haskell has a path; it's like, this is how you learn calculus in school, this is how you learn your tables. With Clojure I wasn't able to find a path; Clojure was like art school, and this is the math school. So we were able to put it into practice and make it into a math school. Architecture-wise, considering we are a very transactional system and not something like a game, the Elm architecture didn't suit us; we tried it initially. But we are actually exploring, for parts of our app, encoding something called pure FRP, like reactive-banana;
you can check out that library in Haskell, which uses a type of FRP that is much more pure; the creator of FRP is the one who advocates it. So we are also going to embed that FRP in our goal tree. Okay, so how does this give 10x? We get reliability for free, from freshers. It's amazing; you have to see it to believe it. We are able to manage so many projects with almost no middle management. "Vimal, I'm sorry, we're going to have to cut you short, in respect of time." Sure. Well, the unification aspect of functional programming has really, really given us the ability to go 10x; you can in fact ask me questions on this later. We have apps in production with the potential to reach more than 100 million customers; five or six apps are in production. And we are running a school of functional programming to extend our learning to the masses, and next we are focusing on visual programming on top of the platform. We're not taking questions for this talk, but Vimal will be available in the speaker area outside, in the banquet area, if you have questions for him. Coming up next is Vivek; he is going to talk about scheduling background tasks in JavaScript. JavaScript is single-threaded, and your functions often need to run as soon as possible, but at the same time you don't want them to get in the user's way. What's the solution? Scheduling. Vivek is a front-end developer at Housing.com, and he's going to tell you how to do that. Hello, I work with Housing.com; I'm available on the web with this Twitter handle. How fast is your website?
We have been hearing this for a long time, right, and you will keep hearing it from a lot of people: how fast is your website? We've heard about TTFB (time to first byte), then caching, first paint, meaningful paint — and if your Google PageSpeed score is above 90 or 95, you believe all of these add up to a high-performance web app. I'm pretty sure that's the stage you'd expect if I started talking about performance today. So today we'll see a different way of looking at performance.

Before we move on, let's look at some myths and misconceptions about performance. The first misconception is "my app loads in X.X seconds". You'll often hear people say "my app loads in two seconds, three seconds". But when exactly is "the load"? How do you capture your page load — when the window load event fires, when the document complete event fires, or when your user feels that the page is ready? We'll see that in a while. The other misconception is "my page loads fast, therefore my users are happy". How do you know your users are happy? Unfortunately, the tools and techniques we have today are more concerned with the first-time page load than with the user's entire journey through your app. WebPageTest, Lighthouse — these are all focused on first-time page load. The reality is that a bad user experience can happen at any time, and you should know about it. Users associate performance with their entire journey, not just the first load. Lots of things happen after page load: the user will click, swipe, tap a button, keep scrolling — and if your app doesn't respond, that's a bad user experience. Unfortunately, bad user experiences stick with your users more than anything else.

As we said, "performance" and "fast" are vague words; there has to be a metric to measure them. I created an example to mimic the real world: we load a Google map on a page, with a button which opens a modal. Let's see what happens — a lag of almost four seconds. That is extreme, but anything beyond a hundred milliseconds is a lag for a user. The page loaded in around two seconds, and in the background you can see the markers we started placing on the map. This kind of problem can be caught by time to interactivity, tracked via WebPageTest or some other tool. But what if it happens after your page load? Say a user on an e-commerce site is scrolling an infinite list of items; they reach the end of the list, you fetch the next set of data, and at the moment the data comes back and you append it to the DOM, the user is busy clicking a carousel or opening a preview item. The user will feel the lag if you exceed the 50 ms budget.

So these are the problems: load is an experience and cannot be measured by a single metric; interactivity is crucial and often overlooked; and the entire experience matters to your user, not just a fast page load. What we'll see in this talk: the example of a "fast" page load we just saw; the causes of bad interactivity; and how to measure them — on real users. We might test in a lab, in-house, on a throttled 3G or 2G network, and get some numbers, but how many of us actually know how many users are seeing a page load of more than 4, 5, or 6 seconds? We have no data for that, right? So we'll put something into our production code which gives us the actual time to interactivity and the long tasks occurring throughout the user's entire journey. Then, once we've identified them, we'll see tools and techniques to boost this performance. And the fifth part is scheduling tasks by priority.

So what are the main causes of bad interactivity and bad experiences? We all know this: long-running tasks on your main thread. Shorter tasks are always better. Coming back to our example: in the timeline view, we can see our click event was waiting because the main thread was busy placing all the markers on the map. What are the culprits for these kinds of experiences? Lots of things happen during a user journey. You might be queuing analytics data while the user browses your website, and at some point you try to send it all in one go — and it may happen that the user is trying to perform some action at exactly that time. A typical example is scroll behaviour: most websites have jank on scroll, and it's because of exactly this — the user is scrolling while you are busy updating some other part of the DOM, fetching other data, or lazy-loading things, all of which block your main thread.

So how do you measure this on real users? We can measure with WebPageTest or Lighthouse, but how do we measure real users? We'll put a time-to-interactivity measurement into our code — and this time to interactivity will not be based on just window load or document complete. There are other aspects: with the rise of PWAs and server-side-rendered apps, you ship your HTML to the user faster, but shipping HTML faster doesn't mean your app is performant. You ship the App Shell faster, and then you're busy fetching other resources; you might be firing four requests in parallel to get other JavaScript to load other parts of your
page, and the user tries to click on a search box or something at the same moment some data comes back and you update the DOM. You don't have control over all of this — in JavaScript everything runs on the main thread, and you just wait for functions to finish executing.

So we'll measure input latency for critical events — add to cart, checkout, payment paths, whatever we consider critical for our app — and track anything that goes beyond 50 milliseconds. And there's a cool piece coming up in Chrome, still experimental, called PerformanceObserver, which tells you which long tasks executed in your app, throughout the journey. You initialize it once when your page loads; it's asynchronous, so you don't have to worry about the performance of the measurement itself — otherwise you might end up writing performance-tracking code which itself hampers your performance.

Let's start with time to interactivity on real users. You put this piece of code in your application — there's a polyfill for it, because time to interactivity cannot be computed from just window load or document complete. If you notice, the function itself is called getFirstConsistentlyInteractive. Why "first consistently interactive"? Take the PWA example: you ship the App Shell, then you're busy fetching other resources; the user sees the search box sooner, clicks it — and nothing happens. Shipping the page faster is of no use if your main thread is busy. So getFirstConsistentlyInteractive checks different parameters: it internally uses PerformanceObserver, checks which long tasks were running on the main thread and which resources were being fetched, and based on that it gives you the TTI value. Our job is to track the numbers: we put this in production code, and for any user seeing a time to interactivity beyond, say, three or four seconds, we report it to our analytics server — we'll come back to why. You can read the tti-polyfill source code to see exactly which parameters it checks.

Then there's input latency. In our example we saw four seconds of lag: you clicked, and it took four seconds to open the modal, which is huge. You can measure it with a very simple calculation: current time minus the event's timestamp gives you the lag. If it goes beyond 100 milliseconds, you should be worried — again, report it to your analytics server. PerformanceObserver has different entry types; here we observe just "longtask", so anything on the main thread taking more than 50 milliseconds, PerformanceObserver will start complaining about. Again, you'll pick some threshold — say, 100 milliseconds is a problem for me — and push those to the analytics server. There are other types too, such as resources which take a long time to load; you can check the API for those.

Now we've seen how to identify these performance problems on real users; how do we fix them? There are lots of browser APIs we can utilize. First, requestIdleCallback combined with a scheduler — we created a scheduler for our tasks instead of just relying on the JavaScript main thread; we'll see it in a moment. Then requestAnimationFrame — we all know to use requestAnimationFrame for visual updates. Then web workers, which have been around for a long time but are hardly used, just because they don't have DOM access; we tend to ignore them, but they are really powerful.
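Before moving on to the fixes, here is a minimal sketch of the measurement layer just described. The names are illustrative: `reportToAnalytics` is a placeholder for whatever beaconing you use, and the TTI call assumes Google's tti-polyfill is loaded as a global; only the `overBudget` helper is plain logic.

```javascript
// Placeholder reporter -- in production this might be navigator.sendBeacon().
function reportToAnalytics(metric, value) {
  console.log('report', metric, value);
}

// Pure helper: is this measurement over our budget?
function overBudget(ms, budgetMs) {
  return ms > budgetMs;
}

if (typeof window !== 'undefined') {
  // 1. TTI on real users via the tti-polyfill (assumed loaded as a global).
  if (window.ttiPolyfill) {
    window.ttiPolyfill.getFirstConsistentlyInteractive().then((tti) => {
      if (overBudget(tti, 3000)) reportToAnalytics('tti', tti);
    });
  }

  // 2. Input latency for a critical event: current time minus the event's timestamp.
  document.addEventListener('click', (e) => {
    const lag = performance.now() - e.timeStamp;
    if (overBudget(lag, 100)) reportToAnalytics('input-latency', lag);
  });

  // 3. Long tasks (> 50 ms on the main thread) via PerformanceObserver.
  if ('PerformanceObserver' in window) {
    const po = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        reportToAnalytics('long-task', entry.duration);
      }
    });
    po.observe({ entryTypes: ['longtask'] });
  }
}
```

The observer callback is asynchronous, so, as the speaker notes, the measurement itself stays off the hot path.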
And the new thing is SharedArrayBuffer. Until now, web workers could not share data: whatever you pass goes through a messaging mechanism — you post a message to the worker. With SharedArrayBuffer, you can share data across multiple workers.

So, the scheduler pool. Before I jump into what it is, let's see why we created it. In the pre-scheduler world, blue is the main thread and you have task 1, task 2, task 3 to execute — the usual way of running your JavaScript: you call one function after another and it goes onto the main thread. Now say we want to push a new task. We have no control over it; we just say "run this function". So task 4 goes onto the main thread and has to wait for tasks 2 and 3 to finish. What if I want task 4 to run before tasks 2 and 3? With the scheduler, we created a fast lane for it. The fast lane is a different queue altogether, into which you put the new task and mark it as priority — so we call it a priority queue. We say "this function needs to execute first", we put it on priority, it gets executed first, and then 1, 2, and 3.

Our scheduler pool's interface is pretty simple: it exposes an addJob function. You pass whatever you want to run and specify whether it's on priority or not. You can have as many queues as you want, but for now we maintain just two: a priority queue and a normal queue. If your job is high priority we make sure it executes before your other tasks, and we use requestIdleCallback for this. This is what the code looks like: addJob adds a function, and the core of the scheduler runs inside a requestIdleCallback, checking the deadline. You get a deadline in requestIdleCallback which tells you how long you may execute; we keep checking the time remaining — we need at least five milliseconds to bother starting something — and if there's time, we pull priority jobs first; if the priority queue is empty, we pull a normal job and run it for you.

The job is generic — it's your function call; what you do inside is up to you, a network call, some loop, whatever. But notice that we pass the deadline into the job, and the deadline tells you how long you can run. Say you get 10 milliseconds and you have 500 items to put on a map. Obviously that's impossible in 10 milliseconds, but you can keep checking the deadline — that while loop — and push as many markers onto the map as the time allows. Say I push 30 markers out of 500 and I'm left with 470. I then call "add to incomplete jobs" — the remainder goes back onto the queue — and next time the browser gets free, the incomplete job gets called again.

So let's re-run the earlier example of bad perceived performance, now with the scheduler. Before, there was a 4-second lag; now there is no lag at all, and the markers still get placed. We'll reload the page — keep an eye on this part of the console. Notice these numbers: there's no lag at all. What are these numbers? They are batches. If somebody asks you to put 500 or 1,000 markers on a map, you might come up with batching — "I can't do it in one go, I'll batch it". But with ordinary batching you pick a fixed number, say batches of 50, and the problem with fixed batches is that you don't know how long the browser is free. With the scheduler, the scheduler knows how long the browser is free and keeps telling you: "you've got this much time, place as many markers as you can". So sometimes I place 10, sometimes 32, sometimes 17 — my batch sizes adapt, and that is why the user feels no lag at all. Now the timeline view: earlier we saw the main thread hugely blocked and the click event waiting; here the work is divided into smaller tasks, smaller tasks finish faster, and the click event gets executed in between.

You can find the scheduler pool's code here; we use requestIdleCallback underneath to maintain the pooling. How many of you use requestIdleCallback in production? requestIdleCallback simply gives you time when the browser is free, just after a frame, so you execute tasks while the browser is idle. React Fiber's mechanism is based on requestIdleCallback — the new reconciliation in the browser uses it for its smaller updates. It's very powerful, and you can build entirely different patterns on it, just as we built our scheduler. requestAnimationFrame we all know: you get a slot before layout and paint happen, so for any visual update — say, animating scroll via JavaScript — you use requestAnimationFrame. And web workers — they've been there a long time; how many of you use them extensively, apart from WebGL? Pratik Bhatnagar did mention the use of web workers in his PWA talk yesterday.
What do you use them for? — "So we have a universal React application, and we use web workers for our PWA. We use them for a bunch of things, mostly for testing stuff and also for basic offline functionality." Yeah.

So I created one simple example. The reason people don't use web workers is that you get no DOM access in them — you can process data, make network calls, read a file, and that's about it, which is why workers end up hardly used. But web workers are really powerful: your code executes in a completely different thread, like multi-threading in other languages — it's actual multi-threading. Let's look at an example of uploading six images. First, the normal flow in the timeline: FileReader reads the files, and you can see the functions executing on the main thread. And a lot happens here — you don't just read the local file; you read it, create an MD5 of it, push it to AWS, and on the response send it on to your server. Doing all of that on the main thread for six images is definitely going to take time, and the user has to wait. With the worker flow, the main thread is completely free — the user can do whatever they want in that time — while the worker thread is busy reading those images, creating the MD5s, and uploading them across the network.

The problem with existing web workers — again, why they're under-used — is that they work on a simple message mechanism: you cannot share data across workers, you pass serialized messages, and if you change something on the main thread it will not be reflected in the worker. Those are the limitations. But workers are getting smarter: with SharedArrayBuffer — something new coming up — you can actually share data between the main thread and your web workers. Languages like Java have a multi-threading mechanism where you share data, and to update shared data you acquire a lock — because if two threads both read the value 5 and both write 6, only one update should happen at a time; that's why you lock. The same mechanism is coming with SharedArrayBuffer.

I'll show you an example. You create a worker the normal way, and a SharedArrayBuffer with enough length for 10 integers. You cannot use the SharedArrayBuffer directly; you wrap it in a typed array — an Int32Array — place your numbers in that array, and post it to the worker: "here it is, do some work on it". Meanwhile, on the main thread, after, say, five seconds, we modify the shared buffer. The worker file is a separate file: on message, we again wrap the buffer in a typed array — you cannot work on the buffer directly — and we just console.log what we received the first time and then, later, when the main thread updates it, the value again. Ideally we should see the updated value: initially the worker receives the zero value the main thread passed, then the main thread sets a new value and it shows up in the worker thread. This is an example of where JavaScript is heading. It also comes with Atomics, which is what I mentioned: you acquire a lock before updating values, and so on.
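The SharedArrayBuffer walkthrough above can be sketched roughly like this. The worker wiring is shown only in comments, since it needs a real worker file; the buffer and Atomics parts run as written.

```javascript
// Allocate shared memory for 10 32-bit integers and wrap it in a typed
// array -- you can't read or write a SharedArrayBuffer directly.
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 10);
const shared = new Int32Array(sab);

// Write an initial value. Atomics gives race-free access to the shared memory.
Atomics.store(shared, 0, 5);

// In the browser you would hand the same memory to a worker:
//   const worker = new Worker('worker.js');
//   worker.postMessage(sab);
// and inside worker.js:
//   onmessage = (e) => {
//     const view = new Int32Array(e.data);   // same memory, not a copy
//     console.log(Atomics.load(view, 0));    // sees the main thread's value
//   };

// Later, the main thread updates the slot; the worker's view sees the
// change immediately, because both sides share one buffer.
Atomics.store(shared, 0, 6);
```

Unlike postMessage's structured clone, nothing is copied here; `Atomics.load`/`Atomics.store` keep concurrent reads and writes consistent across threads.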
So this is the whole motive of the talk: whatever happens, don't get in the user's way. Which brings us to the takeaways. First, keep track of your long-running tasks using PerformanceObserver, and of time to interactivity using the TTI polyfill — not just window load — and track input latency for your critical events. Second, reduce long-running tasks: break them into smaller chunks and run them by priority. We saw just one example, the map, where simply splitting the work into smaller chunks boosted performance — break up tasks, with 50 ms as your upper bound — and then use priority pooling: schedulers, requestIdleCallback, requestAnimationFrame, web workers. And the last part is the most important, the whole reason for this talk: boosting performance only matters if your performance metrics correlate with your business metrics. Ask these questions: do users who experience high latency or lag have a higher drop-off? Based on that data, you can decide which performance fixes are high priority. That's what I'll leave you with — I wish you happy users. Thank you, and keep tweeting at the Housing engineering handle about the cool stuff we do. Do we have questions?

Yeah, in the balcony — please put the mic closer to your mouth. "Am I audible? As far as I understand the logic of the priority scheduling — is it part of the event loop itself?" Yeah, I got your question. It's not part of the event loop; it sits before things are put onto your main thread. You have two different queues, and the scheduler utilizes requestIdleCallback: requestIdleCallback tells you "this is free time", in that free time the scheduler gets activated, and it's holding two queues for you.
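A stripped-down sketch of a two-queue scheduler like the one described. The names (`SchedulerPool`, `addJob`) mirror the talk but the implementation is my own guess; the requestIdleCallback wiring falls back to setTimeout so the sketch also runs outside the browser.

```javascript
const MIN_SLICE_MS = 5; // don't start a job with less than ~5 ms of idle time

// requestIdleCallback only exists in browsers; shim it so the sketch runs anywhere.
const ric = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(() => cb({ timeRemaining: () => 50 }), 0);

class SchedulerPool {
  constructor() {
    this.priorityQueue = [];
    this.normalQueue = [];
  }

  // Jobs receive the idle deadline, so a job can batch its own work
  // (e.g. place markers while deadline.timeRemaining() allows) and
  // re-enqueue whatever is left over via another addJob call.
  addJob(job, { priority = false } = {}) {
    (priority ? this.priorityQueue : this.normalQueue).push(job);
    this.schedule();
  }

  // The fast lane: priority jobs always drain before normal ones.
  nextJob() {
    return this.priorityQueue.shift() || this.normalQueue.shift();
  }

  // Run jobs only while the idle period still has budget left.
  runJobs(deadline) {
    while (deadline.timeRemaining() > MIN_SLICE_MS) {
      const job = this.nextJob();
      if (!job) return;        // both queues empty
      job(deadline);
    }
    this.schedule();           // out of time; resume on the next idle period
  }

  schedule() {
    if (this.priorityQueue.length || this.normalQueue.length) {
      ric((deadline) => this.runJobs(deadline));
    }
  }
}
```

Usage would look like `pool.addJob(placeMarkers, { priority: true })` — the job jumps the fast lane and still respects the idle deadline.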
One is a priority queue and one is a normal queue; the priority queue's jobs get pushed onto the main thread first. That's how it works — you cannot get a separate thread for this; for that you'd need a worker. For now we restrict our scheduler to this one specific job, but yes, you could extend it into a library that also pushes things to a worker.

"How do you do exception handling in this? I'm not seeing exception handling in your code base — if my task throws an exception, it will actually break the scheduler, right?" Yeah, that's a good point. I honestly never thought about it, but yes, we can wrap the job in a try/catch and keep pushing the other tasks in the queue. Okay, we have a question there.

"Hi. As you said, web workers do not have access to the DOM. There are certain libraries — very good search libraries I came across — which I feel would be better spawned in a web worker, so they search the text in the background and hand the result to the main thread. But they access the navigator object. Is there a way to work around this, so the web worker ignores the DOM part, focuses on the processing, and gives me the search results?" No, you can't get DOM access in a web worker, because the DOM is a tree, and passing the DOM tree to workers would allow them to mutate that state; the moment multiple threads start mutating it, the browser can no longer manage the DOM tree — and the core thing a browser owns is the DOM. The workaround is the way we did it in the image-upload example I showed you.
So we capture the images first and pass the blob URLs to the worker; it's then the worker's job to read each file with FileReader, upload it to the server, get the response, and post a message back to the main thread saying the job is done. So it's more a question of how you structure things — you can't get DOM access, for sure. More questions? In the balcony again.

"Can you hear me? JavaScript was originally meant to be single-threaded; web workers were quite a recent addition to that single-threaded nature. My question is: in most cases of interactivity, you can actually offload a lot of computation to the server — it need not happen in a web worker or a parallel thread for many of the scheduling use cases you talked about. So when do you take this trade-off? What are your thoughts on pushing this computation to the server rather than doing all this work with parallel tasks and threads — and can you give examples of applications where you would prefer a worker and scheduling over using APIs to do most of the work, pushing only the specifically interactive elements to the client?"
The problem is really producer and consumer: it appears when your producer is faster than your consumer. Say the server is the producer and your browser — your JavaScript — is the consumer. Even if we fetch those 50 markers in one network call, you can't place 50 markers in a single go, as we saw; and at that same moment something else happens — the user tries to click. We have a question there.

"If you've seen Google Maps, the markers you see are only in the focused area — they won't show all the markers at the same time. It's partly a UX decision, perhaps, but you can see that Google doesn't prefer that sort of technique." Yeah — Google also loads its markers asynchronously, but if you're looping over the markers you still have no control over the main thread; the main thread definitely gets blocked while you're busy looping, even if you're handing each marker to a library to display.

"Hi Vivek, good talk. My question: where you use that while loop to decide which function executes next — could we use a generator there instead? That loop looks like polling, if I'm not wrong. We could pause the execution, hand off to a different function, and resume when time remains." Yeah, we can think about that. "Is there any benefit your approach has over that?" No — this just worked for us: as long as we had free time, we executed tasks and moved on. More questions?

"Hi. If for some reason my main thread gets blocked, will my worker still run in parallel? And once the main thread is back to normal execution, will it receive the data produced in the meantime?" Yeah — it's an event listener you have, right?
When the worker calls postMessage, your event listener gets pushed onto the main thread's event queue, and whenever the main thread gets free, that listener is called. "One more question. Is there a way to create a master thread? I may have multiple threads getting data from a couple of API calls, and in one master thread I want to collect all the data and then pass it to the main thread." That's along the same lines as what we discussed: the scheduler itself holds two different queues, and its job is to decide what to push onto the main thread — the scheduler pool would be able to achieve that. This side.

"Am I audible now? My question: I used a web worker for uploading a file and it worked properly — the main thread stayed open for other tasks, everything was fine. The problem I faced is: once the upload operation is done, there are listeners we can use, but what is the best way to notify the user that the long-running task is over? I could not find a better way from the web worker." You can keep posting — keep notifying your main thread of progress, and then tell the user: 30% done, 60% done, 90% done, done. And once you're done, you can terminate that worker.

That's the JavaScript-alternatives wrap for this morning; the presenters will be downstairs in the back. If you checked in this morning, take your badge for the discounts on the cool things that you see here, and stop by the help desk this afternoon. We have two workshops going on tomorrow: one on React Native, and the other on building pipelines. If either of those interests you, come see a staff member and we'll show you how to register before they're oversubscribed. We will have flash talks this afternoon, and for those of you who come back, there's a special treat: I'm going to sing to you about it.
Feedback forms from this morning: you can give them to me now, or drop them in one of the round bins stationed around the place. The round bins are not trash — unless you really want to trash the conference. Anyway, it's lunchtime. Questions? Okay — or you can take it down to the round table, which starts now.

There are a lot of JavaScript frameworks, and everybody's got an opinion about which one is best, so we have three to talk about this afternoon — the framework wars. Each is a short, simple 10-minute talk, and afterwards there will be a round-table discussion downstairs in the banquet hall. The first one is Vue.js, among the most-starred projects on GitHub.

"Yeah, guys, we will be talking about Vue.js. I'm Rahul Kajan, from the Vue.js side. So, how do you pick a framework? This applies to everything you choose. First, it should do justice to your team: it should be simple enough, with a shallow enough learning curve, that everyone can pick it up — not every member of a team is the brightest mind in the world, so it should be something that can be self-learnt. With Vue.js, the one buzzword is that it's simple: you can get started with a single line of code; it's very natural. The second thing you need is resources and tooling. You may find something simple, but then you want to build some feature X and cannot find the documentation — so whatever you choose should have good documentation, and I'd present Vue.js as having best-in-class documentation. Next, it should be performant enough for your task. There's a third-party benchmark — the URL is here — so let me explain "slowdown". Assume vanilla JS is the fastest: the same application written in vanilla is faster than in any of these frameworks; each framework adds some slowdown, so the framework application will be a little slower.
So the lower the number, the faster the framework. Vue.js sits at about 1.04; React is there, Angular is there — and I'm not picking on React at all; React is roughly at par with Vue.js.

Next, you'd choose by features. The first is declarative rendering. We've come a long way from the jQuery world, where you'd write hooks into the DOM, listen for events, and manipulate the DOM by hand. You want something which renders your single source of truth into the DOM: you maintain an object in the script, and that is converted into your UI. So what Vue did is pick HTML — templates — as the default API: with plain HTML and a few DSL constructs like if and for loops, you get the power of declarative rendering in templates. Sometimes you need more power — something that isn't possible in the template world — so Vue also lets you write JavaScript render functions, a declarative way of rendering via a plain function; and for people coming from the React world, it supports JSX.

Next is composition. When we build applications, we build them from smaller chunks that are reused across the application, so components should be composable: any component can be used inside any other component, with no fixed restrictions of that kind. You write the template just like HTML, and one component can include other components inside it. The web components spec has something called content distribution: a component can distribute content into various holes inside its child. So, like this, a component allows its parent to put something in the header, something in the main area, something in the footer — a slot is a placeholder which receives content from its parent.
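A hedged sketch of the slot pattern just described, using Vue 2-era template syntax; the component name `layout-card` and its contents are made up for illustration.

```vue
<!-- child: layout-card.vue -- declares the holes ("slots") a parent can fill -->
<template>
  <div class="card">
    <header><slot name="header"></slot></header>
    <main><slot></slot></main>          <!-- default slot -->
    <footer><slot name="footer"></slot></footer>
  </div>
</template>

<!-- parent usage: content is distributed into the named holes -->
<!--
<layout-card>
  <template slot="header">Profile</template>
  <p>Body text lands in the default slot.</p>
  <template slot="footer">Last updated today</template>
</layout-card>
-->
```

The parent supplies the content; the child only decides where it goes — exactly the content-distribution idea borrowed from the web components spec.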
So we have something called scoped slots. What are these? They allow you to take data from your child and render it in the parent — I call it content replacement: we are optionally replacing content inside a child. When we have multiple components, the components in an application need to communicate with each other, so we have attributes and event listeners; a similar spec gives us props. Props are similar to attributes: they are passed down to children, which use them as state. When something changes and a child wants to notify its parent, it fires an event, and the parent can listen to those events — in this example we pass down a message prop which is a string, listen to the keyup event, and call a submit method, all declaratively. And CSS. CSS has been around for a long time, and now we have made it horrible: there are tons of CSS-in-JS solutions, each with its own DSL, trying to mimic in JavaScript what you could easily do in CSS. Vue adopted the classic technology, with tooling around it, and allows you to write the CSS you already know. Beyond this, there is something called scoped: when you write a component and you don't want its style to leak out — the style is specific to this component — you write scoped on the style tag, and the style won't leak outside this component, ever. There's no restriction on the choice of styling language either: you can choose CSS or any preprocessor you like. It gets rid of append-only stylesheets; you write small, optimized styles for small components, and they never leak. Next, to build a large application you need solutions like client-side routing, and Vue has a first-party solution for that. You also need to maintain centralized application state, and Vue's solution is similar to the flux design.
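That flux-style loop — components dispatch actions, actions commit mutations, mutations change state, state changes trigger re-renders — can be sketched in plain JavaScript. This is a toy sketch of the pattern, not the real Vuex API:

```javascript
// Minimal flux-style store: mutations are the only thing allowed to
// change state; actions do arbitrary work and commit mutations;
// subscribers (components) are notified after every mutation.
function createStore({ state, mutations, actions }) {
  const subscribers = [];
  const store = {
    state,
    commit(type, payload) {
      mutations[type](state, payload);
      subscribers.forEach(fn => fn(state)); // re-render hook
    },
    dispatch(type, payload) {
      return actions[type]({ commit: store.commit }, payload);
    },
    subscribe(fn) { subscribers.push(fn); }
  };
  return store;
}

const store = createStore({
  state: { count: 0 },
  mutations: { increment(st, n) { st.count += n; } },
  actions: { incrementAsync({ commit }, n) { commit("increment", n); } }
});
store.dispatch("incrementAsync", 2);
```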
There are a few constructs added to this: actions and mutations. Actions are where your asynchronous (or whatever) computation or tasks happen; actions commit mutations; mutations are the changes which should be applied to state; and when mutations are applied to state, your components re-render. It's quite a bit simpler than flux, I'd say. The next thing is optimization effort. You've built a large application and it's taking too much time, so you dig into it: write shouldComponentUpdate, write some conditions — but somewhere, someone else will come along and change something, and now your component is not updating even though the data is changing. Vue takes care of this for you: instead of depending on props, it tracks dependencies across all the state you've been using, so if state changes it knows exactly which part of which component changed, and it re-renders only that part. That's how, on the slowdown chart, Vue is among the fastest. In emerging markets you have slow network connections, so you want SSR, and Vue offers best-in-class SSR functionality: you don't have to write extra code — just build your components and they are SSR-ready. Developer experience — this is the most important one. Developer experience was the highest priority when Vue was created. This is how we used to build applications: some templates, some scripts, some styles, lying in different directories. You want to change something in the template, then you go to the style file and add some style there; there are too many files to look into to find anything. Vue has a different solution: extract the component into one single file — single-file components — with your style, your template, everything in one file. Editor support: Vue has good plugins for all the editors, so you get syntax completion and everything in VS Code, WebStorm, Sublime. And then the tooling around it: you want to build something, so there's a CLI which can
scaffold your project and build single components for you. [MC] All right, Rahul has thrown down the gauntlet in the framework wars. No, you don't get one more minute, because we have the next competitor: Sapna is coming up to talk about Meteor as another framework. She's a software developer at NodeXperts, and she's going to tell you why Meteor is the framework to choose. And don't worry if you didn't get the full story on Vue.js — there's the off-the-record roundtable afterwards. Thank you. [Sapna] Hello everyone, am I audible to all? I am Sapna, and I'm here to give you an overview of Meteor. Before starting with that, I'd love to give you a brief about me. I am a software developer at NodeXperts; at NodeXperts we are a group of JavaScript developers who love to do cool stuff, and we are also an official partner of Meteor. Apart from that, in my spare time I am a speaker at the Meteor Noida and JS Fusion meetup groups. Recently I worked on one of our client products in Meteor — an SEO tool which is completely a real-time application. It went live a month back and currently has around 10k users on a daily basis. In this talk I'm going to cover what Meteor is, how it works, and why we should or shouldn't use Meteor. The first thing that must be coming to your mind: what is this Meteor? Meteor is a full-stack platform capable of developing modern web and mobile applications. By full-stack I mean that the only thing required for developing your app is Meteor. Meteor comprises a few concepts: reactivity, isomorphic code, and hybrid apps. Hybrid app means that we can build the app for all kinds of devices using Meteor. The next thing is reactivity. I read somewhere that reactivity means real-time response to data, but I didn't get it completely, so I relate it to this example: a chain of instant reactions to every action. So we can
relate reactivity with that. What reactivity actually means is that whenever there is a change in any data source, the clients connected to — I will say here — the Meteor server get automatically updated without any refresh. Let's get more understanding of it through this diagram. Here a client requests some sort of data — we call it subscribing; it subscribes to, let's say, post data — and the server pushes that data to the client. But how does it work? There is always a listener that listens to the subscribed data and returns the updated data every time updates are made. Reactive-style programming is writing code to render values whenever they become available, unlike the conventional approach where we write code to get the data and then render it — that approach pulls the data from the server for rendering. Remember that Meteor pushes the data from the server instead of pulling it. Let's get more understanding through an example. Here we have A and B, and A + B = C. On the first call, when A is 4 and B is 6, we get the result C = 10. If at any point A changes to 3 and B changes to 3, we will still get C = 10 unless and until we refresh the page or call that function again. But in the reactive programming approach, whenever an update is made, all the dependencies also get updated. Meteor does this using Tracker — I will not go deep into that. Before moving ahead with more on reactivity, please make a note here that Meteor is a completely isomorphic JavaScript framework, which means the only thing required for developing your app is knowledge of JavaScript — the server side also uses JavaScript, so we don't need to use anything else. JavaScript is everywhere; that was just an example. Now, what makes Meteor the Meteor? It basically has a couple of core concepts — or I will
say, core concepts — which make it a completely reactive platform. The first is DDP. Quick question here: how many of you have heard about Redis, or other databases which use a publish-subscribe system? Great. Meteor also uses the same concept, but instead of applying it only to the database, it applies it to the entire app. Cool, right? DDP is a client-server protocol for querying and updating a server-side database, along with synchronizing those updates to the client. It uses a publish-subscribe messaging system for this: a client subscribes to some data from the server, and the server pushes that data to the client. We can say that it binds our client and server together. Let's understand it through an example: a user subscribes to post data on the server, and whenever another post is added, the server pushes that data to the client. There is always a WebSocket connecting client and server — this is provided by DDP. The next thing is minimongo. Actually, I just forgot to mention that Meteor works best with MongoDB — it works with others too, but the features Meteor provides, it provides only with MongoDB. Minimongo is a complete re-implementation of the MongoDB API on the client, so it can keep a copy of MongoDB in your browser cache. You can take a look at this example: here we have the minimongo cache, and it is always connected through DDP to the server, so whenever there are changes, MongoDB pushes the update to minimongo, and through Tracker we can track the update and re-render it on the client. So this is the complete architecture — we can use Angular, Vue, or React with Meteor for the front end. It has a client-side cache in minimongo; whenever an update is made, it pushes that update to
this cache, and the cache re-renders it to the view. So now: why should you use Meteor, or where can you use it — the advantages of Meteor. Whenever you need reactive data in your application — any change made on the server should instantly be pushed to the client — you can use Meteor. Along with that, if you have a requirement for mobile and web development from the same set of code, you can use Meteor. It uses a single language, which means you do not need to learn any other language like PHP, Python, etc. The next thing is that it is scalable. By scalable I mean that Meteor comes with its own hosting environment, Galaxy; it is meant specifically for Meteor, and it provides scalability in that way — though a lot more can be done on that path. The last thing is that it has a large community, and there are a lot of open-source packages which can help you develop applications very quickly. So we can say that Meteor can do a lot of things, right? Just like Rajni sir. Now you all know what Meteor is, but that does not mean we can use Meteor for every kind of application — it has some dark secrets which I am going to reveal to you. Meteor actually ships all the templates, CSS, and JS to the client, so it takes a few extra seconds to load the first time. When our application is not required to be real-time, we do not need to use Meteor, as there is always a CPU and memory cost for the synchronization between client and server. And when there is only a requirement for an API, we should not use Meteor — we can simply go with a Node server. So that was a very short description of Meteor from me. Now you all know what Meteor is and how it works. People will say that it is not very scalable, but I don't think that needs to be the case — for scalability we can use
microservice architecture, but that's a topic for another day. I hope this session was helpful to you, and that some day you can use Meteor in your applications. Thank you. [MC] So, gauntlet thrown, Sapna has attacked — and what happens next in the framework wars? We're going to hear from Vinci, whom we saw on stage yesterday, about Angular, and that will conclude our big framework war fight; then we can take it downstairs. [Vinci] One of the questions asked a lot between JavaScript developers, especially at conferences like these, is: what framework should I choose? And I think many times the standard, politically correct answer that you get is, "Oh, it depends on the kind of project that you have." And this is the thing running in our minds: what the hell does choosing the right framework for the right kind of project really mean? I'm sure we have all done this before when choosing a framework: Google for framework benchmarks, land on a bunch of blogs, look at a particular blog post, count the number of greens — hey, the framework with the most greens wins, right? You've done that. We've also done a couple of other things. Obviously you want to worry about things like: how is the developer experience? How can a team be productive — what is the cognitive overload that comes with it? What kind of tooling support does it have? How can I scale my apps? And at the end of the day, how is the support from the core team and the community? These are the points that I want to touch upon, in terms of how Angular fits into all of this. Before we get into it, a quick clarification of the difference between the terms AngularJS and Angular. What the core team and the community have agreed upon is: whenever you use the term AngularJS, it always refers to the Angular 1.x branch — the older version of Angular — and the logo for that is a little different.
It's the A with the white border around it. When you say Angular, it always means Angular 2 and above — Angular 2.x, or 4.x, or 5.x, which is going to be released very soon, and so on and so forth. The whole idea is that from Angular 2 onwards, it's just Angular — just like the way you say React, or you say Meteor, or Vue; you don't say "I'm using React 15" or "I'm going to be using React 16," you just say React. It's the same thing with Angular: it's just Angular, all the way up from here. When we go to restaurants, I think there are two types of people. There are people who prefer the à la carte style, who like to choose their soups, starters, desserts, and main courses; and there are people like me who prefer the thali — you know, I let the experts take the decision. You tell me what is good for me; I'll have that. I still get to choose whether I take something or not, but at the end of the day I get a wholesome meal and my stomach is full. If you're my kind of person, and you want to build an application which is complete and full, Angular is one of the good choices, and the reason is that you get everything out of the box. Everything, from component design to server-side rendering, whether you want to build for mobile or for desktop — everything is available, and you can obviously still pick and choose. For example, if you don't want the Material Design components, take them out and put in another component library; it still works. So you get everything out of the box, and you still have choices. And I think one of the advantages of taking advice from the experts — of trusting the experts — is that you tend to be ahead of the time.
We saw a lot of talk today about TypeScript and how great a type system is. Angular has been using a type system right from day one, when Angular 2 was launched, and at that point of time they faced the brunt of the entire JavaScript community saying, "You guys are doing something really ridiculous — why would you go for a type system?" But Angular decided to use a type system from day one; now everybody understands it, and it's starting to be adopted across different frameworks. The same thing is true of decorators. As Angular developers we have been using decorators from day one, and the spec for decorators is now being pushed through TC39, starting around the beginning of this year. So Angular has always been ahead of the curve — the same goes for observables, for ahead-of-time compilation; it's all been there to help us stay ahead. And if you want to talk about developer productivity, this is probably the best toolchain you can use if you're working with Angular: TypeScript, the Angular CLI, Visual Studio Code (strongly recommended), and Angular Essentials — a very nice extension by John Papa that gives you a lot of goodies when you're writing Angular code. If you use all of that, you kind of get into a Zen mode when you're writing Angular. There are additional IDEs out there, like Angular IDE; I've not tried it all that much, but check it out. As front-end developers, we have all been used to debugging at runtime: you write code, refresh your browser, go figure out what's breaking, look at the console, and stuff like that.
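Under the hood, a decorator is just a function that receives the thing it annotates and can attach metadata to it. A plain-JavaScript sketch of the idea — Angular's real @Component decorator does much more than this:

```javascript
// Decorator sketch: a factory returning a function that receives a
// class and attaches metadata to it (what @Component(...) desugars to).
function Component(meta) {
  return target => {
    target.meta = meta; // attach the metadata for the framework to read
    return target;
  };
}

// Equivalent of:  @Component({ selector: "app-hello" }) class ... {}
const HelloComponent = Component({ selector: "app-hello" })(
  class {
    render() { return "<h1>Hello</h1>"; }
  }
);
```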
We've always been used to debugging at runtime, which is not a great thing. Things have been a bit better lately: a lot of debugging now happens during build time — a lot of linting happens during build time — and that's great, but ideally you should be catching bugs while you're writing code. If you're using this toolchain, you're able to catch a lot of your bugs as you write: things like typos, whitespace issues, an imported module whose path is not right, or a module name not matching the import — all of that gets caught as red underlines if you're using Visual Studio Code and the extensions. That helps you be a lot more productive while writing code. And oh — whichever framework you're using, please try Prettier: an awesome tool for formatting your code that makes life so much easier for all of us. A couple of other extensions that I really like — this one is currently one of my favorites — it's called Import Cost, and what it does is, every time you import a module into your application, it tells you inline what payload is getting added. Very, very sweet — have a look at it. As you're importing a module, you know whether it's the right thing to do or not, and even whether the way you're importing it is right: are you using the entire thing, or just the part of it that you need?
So that's a very nice tool — definitely check it out. And Import Cost is not limited to Visual Studio Code or Angular; you can actually use it anywhere, and I think the extension is available across other IDEs as well, so do have a look at it. The second one I want to talk about is snippets. The Angular TypeScript snippets give you a whole bunch of ready-to-use snippets for pretty much every piece of code you want to write with Angular. You want to create a component, a module, or a service — you just type a-hyphen, hit tab, get the whole syntax generated, and add your code. Obviously the amount of time you're going to spend writing code with snippets is very low, which is great, but even better than that is that the code you're writing now conforms to the Angular style guide. You're following the best practices that the Angular core team is advocating, and the code is very consistent across your entire team: if the entire team uses the same snippets, everyone's code is consistent, and you can jump from one person's code to somebody else's. So a lot of productivity benefits come in when you start using snippets. Upgrading Angular is very easy nowadays: it's just about updating your npm modules — ideally nothing more than that — and you've moved on to the next version of Angular. Some things can break, so a good thing is that Angular has also published a very nice document called the Angular Update Guide: you select your current version of Angular in a drop-down, select the version you want to migrate to, and it gives you a checklist of the things you should look at before, during, and after upgrading. You can see the amount of effort the team is putting into making our developer lives a lot
better. That's one thing. And lastly, the developer experience is not just about the technicalities and the technical aspects — it's also about the community. The Angular community is known to be a very welcoming community. I'm not saying that other communities are not, but a lot of effort goes in from the Angular team to make sure the community is welcoming to everybody who comes into the ecosystem. Even Angular itself is very open to partnering with others: if you think about it, they scrapped their own language and rewrote Angular in TypeScript, which is actually made by a competing company. So Angular has been very open to adopting from outside. And — I'm not sure about other frameworks — but Angular has a code of conduct: if you go to the Angular repo, there is a readme with a code of conduct which says that if you're working in the Angular community in open source — across GitHub, across Gitter, across any of the meetups — there's a code of conduct you should be following. So we are very particular about how people behave and stay in the community. And lastly — but not least — Angular is built in the open. I think there's a difference between building in the open and merely open-sourcing your software. By building in the open, I mean that the public — the community — has as much visibility into the project as the core team members have. With Angular, every single design decision, technical spec, and design pattern is published in the open, so you can go and read it. Not only that, even the weekly meeting notes are published, so you can go and see — that's the meeting notes from September 11th — what was discussed and what the roadmap is going ahead. Everything is publicly open, and there are no secrets when it comes to working with Angular. And finally, last but not the least: it's not bad in performance at all. Angular's performance is comparable with the rest of
the other frameworks. This is a comparison of Angular, Vue.js, and React — with the community-supported version of Angular's setup — and if you look at it, all of them get around a 91 Lighthouse score, so it's not at all bad when it comes to performance. That's about it from me. Do give Angular a chance — maybe you'll fall in love with it. [MC] I want to invite all three of our competitors back up on stage, because I want to hear who won. I'm not really qualified to decide — I'd look at pretty slides and go, "Well, I like that one" — but we had our three frameworks in the framework wars: Vue.js, Meteor, Angular. People who think Vue.js did well, please stand up — there's a reason why you're going to be standing up. A smattering. How about Meteor? A smaller smattering. Looks like we might have a definitive winner here. They're going downstairs to the hall to do an off-the-record roundtable discussion, so if you want to talk to them about frameworks, they'll be down there. The rest of you get to stand up again, because it's stretching time. Stand up for a second — we're going to do feedback stretching, because this afternoon the only thing on my mind is feedback. Our feedback scale goes from one to five, so we're going to count down from five: counting to five, then four, then three, then two, then one — first with our arms, then with a different body part, our feet — and then we're going to jump before I let you go off to break, so you'll be really super energized. Keep those numbers in mind when you're filling in your feedback form. We have two more talks and our flash talks after your
beverage break. We are about to start our next talk, which is about scaling Node.js, but before that: the framework wars OTR is going on in the banquet hall below, so there's still time left if you want to attend that. This session is from Abhinav, who works at Flipkart. Abhinav has been working with Node for over three years; he leads the mobile and desktop web UI teams, where they have been using Node.js-based servers for a while now, and they have ample learnings from battling huge traffic spikes that he would now like to share with you. [Abhinav] Check, check — you guys can hear me? Cool, awesome. All right, pretty much everything I was going to say as an introductory slide has already been mentioned. My name is Abhinav Rastogi; I lead the mobile and desktop web teams at Flipkart. We have been working with Node for quite a long time — I presented our Node learnings when we started using Node, at JSFoo 2014, and here I'm back again with more learnings from the last three years. So let's dive right into it. This talk will mostly be about what goes wrong and what you can do to fix it, but not necessarily a lot of the how — I will touch upon it, but not too much, since there are already tons of tutorials online on how to do all of these things. And that's my Twitter handle — reach out to me any time for more discussions around this, or anything JavaScript. So, since we're talking about scaling Node: what is scalability? This is the sort-of-official definition on Wikipedia: scalability is the capability of a system to handle a growing amount of work, or its potential to be enlarged to accommodate that growth. This is somewhat incomplete, I would say: it also includes the fact that you need to handle that growth efficiently, without losing performance or degrading in any manner. Scalability is typically classified along three axes: you have
horizontal scaling on the x-axis, which essentially means you add more machines to your cluster when you want to handle more traffic. Then there's vertical scaling on the y-axis: adding more resources to the same machine — increasing the amount of RAM, CPU, network, etc. on your machine. The third one is — it's sort of debatable — mostly about data partitioning, things like MapReduce, to distribute your load better: you break your data into pieces, break your work into pieces, and run it in parallel. But it can also be taken to include optimizing your code — things you can do at the application layer or system layer to generally improve performance, figure out the bottlenecks, and fix them, so that you don't need to add more machines or resources. Let's quickly look at horizontal scaling first. Horizontal scaling, like I mentioned, is adding more machines, and it definitely improves reliability and fault tolerance, in the sense that if one machine goes down, the other machines can handle the load; and it adds high availability to your systems — the 99.9% uptime, or whatever they say. This is what it looks like: you have a user — I don't know why he's on a horse — and the user talks to a machine which is, let's say, running a Node process, which talks to an API layer. The request goes from the user through the operating system to Node, on to the API, and the response comes back. And the way you scale horizontally is that you add more machines, and you add a load balancer in front of them. Your user sends the request to the load balancer, and the load balancer, based on some algorithm — maybe a least-connections algorithm, where whichever machine has the least number of connections gets the request, maybe round robin, maybe some other
fancy algorithm — distributes the load, and everything essentially works the same way. Again, as I said, you get reliability, high availability, all of those things: when a machine goes down, you're still up and running. The drawback here is that horizontal scaling is usually quite easy but very costly. It's easy to add more machines — if you have stateless machines, you just spin up a new machine or VM, run your code on it, and your load balancer takes care of the rest — but it's also costly: you have to add a lot of hardware. It becomes more complex, but more efficient, if you're able to do it dynamically — think Heroku, or any other cloud provider like AWS or Google Cloud Platform — more complex, but also more efficient, because you pay only for what you use. That's pretty much it for horizontal scaling; we could go deeper into it, but I want to focus on the other two axes. So let's talk about vertical scaling, which is going to be the focus of this talk. Essentially it means adding more resources to your machine: on the same machine, you increase the amount of CPU, RAM, etc. And I also want to mix in the z-axis here, where I mentioned optimizing your code. How that works is this: if you're running a Node process — Node, by its implementation, is single threaded. What that means is that if you're running a CPU-bound load — if your application is CPU-bound — your Node process is going to peg one core. If you have 20 cores on your machine and you run a process which takes up a lot of CPU, one core out of those 20 will go to 100% and the others will remain at zero. That's how the operating system handles it; that's how Node handles it. Essentially, from the outside, you will see your server doing all it can, and you know it's
bottlenecked — you can't scale up any further: either your response time goes up, or your request throughput doesn't go up, things like that. An obvious solution is to add more Node processes: on the same machine, you simply add more Node processes, and you have to add some mystery layer which sort of looks like a load balancer — that's roughly what it does, but now it resides inside your machine. Your user talks to that layer, and it talks to Node; that layer distributes the requests. Now there are two ways to do this — this is a bit of the how part. You can either run the server on multiple ports: you have a server.js which takes a port number from your environment or something, and then you boot up 20 Node processes on a 20-core machine, for example, where every process gets a different port — 80, 81, 82, 83, whatever. The other option is to fork child processes: you have one parent process, and you fork multiple child processes all listening on the same port, with the parent process distributing requests. Let's look at how to do that quickly. There is a module for this called cluster — it ships as a core module with Node.js, so there's no npm install needed — and it is essentially a multi-core server manager for Node.js. This is how it works: you import it, and you immediately get a flag which says whether the current process is the master or a worker. The first process you launch from the command line becomes the master, and you fork any number of processes; it creates a bunch of worker processes, and for those worker processes the flag is false.
So you require your actual code here, and those 20 slave processes, or n, whatever that number is, will be your worker processes, and you have to manage routing, log management, and monitoring, all of that, yourself on top of this. That's the entry-level way to do it.

To make this easier, there's a tool built on top of it called PM2, and again it's easy to install: npm install pm2, with the -g flag of course. PM2 claims to be an advanced production process manager for Node.js. Internally it uses cluster again, but it provides a very nice, clean wrapper around it, with features like log aggregation, graceful reloading, auto-restarts if your process crashes, and a bunch of monitoring tools. Starting a PM2 process is as simple as this: you create a JSON file where you give your app name, your app path, NODE_ENV equal to production (very important; one of the biggest, easiest things you can do to improve your server performance), and instances, the number of processes you want to run. Giving 0 means take the CPU count and run that many processes; you can also give anything like 1, 2, 10, 20, 100 and it spawns that many. Exec mode is essentially whether you want cluster mode or fork mode; you can look up in the documentation what that does. It can also merge logs and format them in custom formats. PM2 is super useful, and apart from easy startup and handling scripts, it provides really great monitoring tools. This is the built-in monitoring tool: it gives you, in real time, memory and CPU usage per process, whether each process is online or offline, output and error logs, a lot of interesting metadata, and the capability to track custom metrics in real time for that machine, aggregated across all your processes. It works really well, and we use it quite heavily at Flipkart.

So this is all well and good: you have a bunch of, you
know, Node processes running, and your code, which you haven't touched until now, just runs on top of all this. What this means is that since you're using multiple Node processes and your code is running in multiple copies, you should now be able to consume 100% of some one resource: either your process is very memory heavy and you consume 100% of memory, or you consume 100% CPU, or 100% network, or 100% disk. Those are the four resources I'm going to focus on: network, memory, disk, and CPU.

Before we get into those details, I want to talk a bit about the optimization cycle, because when you're using Node there's something else you need to do apart from just writing the code. Before I get into it, a quick show of hands: how many people are using Node right now, in production or personal projects? Cool, awesome, a fair few, that's good. This should be useful if you're running it in production, serving real user traffic. This is what I think the optimization cycle needs to be, especially for production systems. You run load tests; by load test I mean generating artificial load on your system, simulating real users by hitting your server, making artificial API calls, creating any stress that you can. Start from the client layer and stress every system all the way to the back end, the most obscure services in your system; you have to stress test everything. Then find the breaking points, find the bottlenecks; that's the step I'm going to focus on a lot: what can break, and what those bottlenecks can be. Once you have those bottlenecks, you need to fix them, which is not always easy; you will run into a lot of very interesting issues at the OS level, which I'm going to talk about. It was very
interesting for us to have these learnings when we tried scaling for our annual sales. So you fix it, and then you do it again and again and again. You keep going, because every time you fix one bottleneck you find a new one at some other level. Maybe one machine was doing, say, 10 QPS, very low, but okay, it was doing 10 QPS; you find a bottleneck, fix it, and now it's doing 20. You're happy, wow, you doubled your QPS, but now there's some other bottleneck which would let you go up to 40 if you fixed it, things like that. Though you don't always just double your QPS; you go in small increments. It's always about finding the little things.

Cool, so these are the four resources I'll focus on, and for each of them I've categorized the bottlenecks as either application layer or system layer. Application layer is code that you write: bottlenecks or bugs in your code. System layer is the limits and quirks of the system you're using; you could be running Node on any operating system, CentOS, Debian, Red Hat, Windows, Ubuntu, whatever. We'll talk about both.

Cool, so let's start with network. Within network, the first thing I want to talk about is bandwidth. You would usually think: on a server, why should bandwidth be a concern? Servers usually have pretty good data center connectivity, good connections, good bandwidth. But when you look at the entire system end to end, the slide I showed you earlier with the load balancer, machines, Node, and the API layer, your request and response stream goes through multiple channels, through many different network hops, even after coming into your intranet or your cloud provider, and each of those links may have a different bandwidth. What happens is that if you're serving HTML pages, rendering on the server using any technology, Node or not, it can choke certain networks, because HTML
as text grows really fast. The moment you add automated rendering, using some template, or React, or anything like that, you loop over something and suddenly you have a ton of HTML. Let's take a rough example: if you have a 1000 kb page, which frankly is not too much in this day and age, and you're doing 100 QPS per machine, and you have 100 machines in your cluster, you're doing 10 Gbps of output: 10 gigabits of data per second that your Node cluster is pushing out. This is an issue we actually ran into at Flipkart: when we were serving HTML, rendering entire pages on the server, at some point the load balancer itself started choking. Obviously the real numbers were quite different from this; I've just tried to use round numbers, but yes, it happened. The solution is pretty straightforward, but you have to monitor all the connections; it may not be your Node server that chokes, it might be some other service down the line, whoever is handling this traffic.

So the solution is simple: you start compressing that HTML. You gzip the outgoing HTML, and plain text usually gets great compression ratios with gzip. There are two simple ways to do this that I can talk about. One is to use the compression middleware with Express, if you're using that very common server layer for Node, and it's this simple: you import compression and you say app.use(compression()), and your output is automatically gzipped. But the problem here is that this has a very high CPU cost, and since Node, again, is single threaded, you end up with your main thread blocked for a long time after the server has actually finished processing the request and generated the response. Now your main thread is blocked compressing it, so it cannot take any more requests even though it's not actually rendering anything, which
is frankly a waste of resources. The way we solved this was to move to a co-hosted nginx; you can use any reverse proxy in front of your Node processes that supports compression. What this does is let Node just hand off the HTML to another process living on the same box, so it doesn't hit the network layer; on the same box it hands over the HTML stream and essentially asks nginx to gzip it. nginx uses low-level APIs and is super efficient at compressing; it hardly consumes any CPU at these scales, and, speaking specifically about compression, it's highly efficient and effective. You can also use nginx for multiple other purposes, like proxy caching, or caching static files, say a service worker or a JS file, so it has multiple uses. This is what it looks like: we added the PM2 layer that was a question mark earlier, and now you put an nginx layer in front of it, and the compression happens there. Your Node renders the HTML, it goes through PM2, and nginx compresses it, so by the time it leaves this machine and hits the network layer, it's already compressed. If you assume text usually gets around 10:1 compression, that 10 Gbps essentially becomes 1 Gbps, and it scales linearly from there. Cool. So that's at your system layer, essentially.

The next thing is network profiling. You need to profile all the resources you're using, and I'll talk about each of them one by one. By network profiling I mean you need to monitor all the network resources you have, and that's not just bandwidth; you have to consider resources like your ports and sockets, all the files you have open, things like that. There's a limit to how many upstream and downstream connections a machine is allowed to make. The commands you can use on a Unix-based machine to look at that data are things like netstat, which
gives you network statistics for the TCP connections you have open, and lsof, which gives you a list of all the open files on your system. When you put load on your Node machines, you'll see that you run into a bottleneck at some point where your machine just stops accepting connections; it may just say connection timed out, or some layer above the Node machine will say service unavailable, things like that. Oh, and I almost forgot about watch: watch is a simple command which runs any command you give it and keeps re-running it every second or every two seconds, showing you the output, so you can use it to watch the output of another command.

So, going back to ports and sockets: why is this a concern? Why does it get blocked? The reason is that everything is a file in Unix. That's a very popular adage that everyone repeats, and it's largely true: pretty much everything you do on a Unix-based operating system is represented as a stream of data, and a stream of data is identified by a file descriptor, a unique ID generated for every stream. By stream I mean it could be your stdout, stdin, your network streams, your ports, sockets, ephemeral ports, everything; everything gets a file descriptor. And your operating system comes with a limit on the number of file descriptors it's allowed to create, for security purposes, to prevent DoS-style attacks. Since there's a limit on everything, how do you find out the limits on your current system? You use something like ulimit. ulimit is the user limits command on Unix machines; it essentially tells you that, for this user, these are the limits in place. Its output looks something like this; this is a very truncated output, it gives you a lot of other numbers. But to focus: if you look at the maximum number of
open files, in this sample it's 1024. That's really low, and since, as I said, this applies not only to files but to all streams, a limit like this will also cause issues with the number of ports and sockets you can open. Apart from that, there are also things like the maximum memory size any process is allowed to consume. For Node, that's another interesting tidbit: Node processes by default are allowed about 1 GB of memory, and if one Node process consumes more than that, it's usually killed, and you get a kill signal in Node. So that's a limit you can set here. Then there are limits on other interesting things; file sizes also have limits.

Another interesting anecdote I can go into is something called core file size. Let me explain what that is. An interesting issue we ran into was that we were using some C bindings, low-level libraries called from JavaScript that are built from C, and those libraries run in operating-system land, in C land. If there's a problem in that code, if it crashes for any reason, the way you usually debug it is with core dumps. A core dump is essentially a dump of the entire stack, a trace that the operating system creates when your low-level process crashes. What we saw happening was that no core dump was being generated for us, so it was getting very hard to debug. The reason was that the core file size was 0, which is the default for most production operating systems, and when it's 0 the operating system doesn't generate a core dump. So while you're debugging on production systems, you need to increase this to a decent number that allows a core dump to happen, and then remember to disable it, because otherwise
crashes will just fill your disk with core dump files. So that's an interesting issue we ran into and were able to figure out, and it really helped us a lot.

So, the next thing: okay, now you know the limits, and you can only increase these numbers up to a certain extent. What do you do next? Those were your system-level issues: the system allows you to open a certain number of ports and sockets, you increase it, but you're still hitting the limit. The next step is to optimize; now you look at your application layer. What can you optimize in your code? One obvious thing to optimize, for the number of connections you're creating, is the Keep-Alive header. A Keep-Alive header on a connection essentially looks like this: Connection: keep-alive, with a timeout. This says: for 200 seconds, keep this connection alive and reuse it for any new requests, so you don't pay the overhead of creating new connections, and you create far fewer of them.

The next one is connection pooling. With connection pooling you say: this is a pool of, say, 30, 100, 1000 connections on my machine, and when a Node process wants to make a request, it picks a connection from this pool if one is free and uses it; once done, it releases the connection back into the pool. If nothing is free, the request, the actual client request, has to wait, and that's usually better than letting your system go to 100% usage and then fail. You're essentially putting an artificial cap on it: this is the number of connections I'm allowing. So it lets you reuse sockets on your side. To do this in Node, you pass a custom HTTP agent to fetch. The built-in HTTP module provides this thing called an agent, fetch takes an agent as a parameter in its config, and what that does is it essentially
allows you to set up a keep-alive connection again. The difference between this keep-alive and the one I showed earlier is that this one is for the fetch calls from Node, essentially the outgoing calls from Node, while the earlier one was in the response headers of the response Node sends to the client. So on one side you're solving for the incoming connections from the client, and on the other side for the outgoing connections from Node to your APIs; together this solves both.

Now, the next thing, a term I mentioned earlier, is ephemeral ports. Ephemeral ports are essentially short-lived endpoints created by the operating system when a program requests an available user port. Ephemeral means temporary, and such a port is typically assigned a number between 1024 and 65535, which gives you about 64,000 numbers to work with. Since the operating system is itself using some ports and sockets for various things, including sockets within the system, like UDP sockets between processes, you end up with, let's say, 40 to 50,000 ports that your user processes can actually use.

Before I get into why that's a problem, let me talk about TCP connection states. A TCP connection goes through multiple states; you must have read about this in college, in your networks subject: a connection goes through the whole SYN-ACK phase, where you send a request, you get a SYN packet, you get an acknowledgement packet, and so on and so forth, and it ends with the FIN packet. So a TCP connection goes through many states, and the main ones you need to look at are ESTABLISHED, when a connection is established, and then, when either the client or the server closes it, the connection goes into a CLOSE_WAIT or a TIME_WAIT state. Now, these are all natural; that's the life
cycle of a TCP connection. Now, the interesting thing here is that the default timeout for a connection in the TIME_WAIT state is two minutes, that's 120 seconds. Say you have around 40 to 50,000 ephemeral ports available: with a timeout of two minutes, you will run out of ports on your machine if you get about 400 queries per second on one machine. Depending on how heavy a load your machine is lifting, 400 QPS is a very small number for Node; it can easily do thousands of QPS if you're not doing very heavy CPU-bound work. So it's very easy to run into this limit, and we saw it happen. You need to think about this: you can tune the system limits to raise these numbers, and then you need to look at connection pooling and reusing connections to avoid hitting them. Cool.

So the next resource I want to talk about is disk. I know this is getting very low-level into the operating system, but these are real problems we ran into, and I wanted to share them. For disk there are a lot of obvious things: use logrotate, don't let your logs fill up the machine, keep deleting old logs, compress them, move them out. Try to avoid disk I/O in the hot path. By hot path I mean while a request is being processed: inside something like app.use, where you get a request, don't access the disk. If you need to read from disk in your Node code, do it once at server startup, but don't do it for every request; it's very costly, and even async disk I/O is still a cost in your request path. And make sure logging is not sync.
That's an interesting issue we ran into: we were using a certain logging library on the server, and we realized after a long time that it was actually logging synchronously. For every request, every error, everything we wanted to log, every metric, it blocked the main thread, synchronously wrote to some port or to a file, and then came back. Just removing that synchronous logging helped us a lot. So make sure you understand what logging module you're using. That's about disk; it's pretty straightforward.

Next, let's talk about memory. Memory is essentially the RAM you have on your machine, and again, for memory you also need to profile. Memory profiling involves a bunch of things, but the simple idea is that your memory usage should be a flat line for a given QPS. It should not be going up; going down is fine, but at a consistent request rate, memory usage going up is a sign of a leak. So how do you find that? There are many different tools, and one of the best is Chrome DevTools. You can connect Chrome DevTools to Node very easily; there are very good articles by Paul Irish and a bunch of others on how to do that. What you can do then is use the timeline and enable memory profiling. This is a screenshot from an old Chrome version; it looks a bit different now, I think the top tab is called Performance, and you can take memory snapshots there.
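The "flat line at constant QPS" check can also be approximated in-process with process.memoryUsage(), which is handy on servers where attaching DevTools isn't convenient; a sketch (how often to sample, and where to ship the numbers, would be deployment choices):

```javascript
// Sketch: log heap usage so a dashboard or grep can spot an upward trend.
// A steadily climbing heapUsed at constant traffic is the leak signal
// described above; confirm it with DevTools heap snapshots.
function sampleHeap() {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  console.log(
    `rss=${mb(rss)}MB heapTotal=${mb(heapTotal)}MB heapUsed=${mb(heapUsed)}MB`
  );
  return heapUsed;
}

const used = sampleHeap();
// In a real server you might run this on a timer, e.g.
// setInterval(sampleHeap, 10000), and feed it to your metrics system.
```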
Essentially, what you get is a timeline, while your application runs, of the JS heap size, the number of DOM nodes you're creating, the event listeners, things like that. What you need to watch for, specifically for memory, is that this blue line should go flat or down. The jumps going up are when you create new objects, and the drops are garbage collection: when a variable goes out of scope, Node, or your client browser, garbage-collects it and frees that memory. And remember that garbage collection is costly; it's not free. The more objects you create, even if they keep getting freed, your memory usage may look consistent but your CPU usage will be high because of all that GC. You can use this view to see the trend and identify any jank that's happening. You'll be able to see skipped frames whenever GC happens, for example, because GC is a CPU-bound load; these red marks are signs of skipped frames.

The next thing you can do to profile memory is use snapshots. You can take heap snapshots, allocation timelines, and allocation profiles in Chrome DevTools. I won't go into the details, but a heap snapshot is essentially a memory distribution of your in-memory JavaScript objects; it literally takes a snapshot and gives you an entire tree of what your JS heap looks like. The way you use it is to take a heap snapshot at two different times, not very far apart: maybe once before you scroll the page and once after, something like that, and then you compare. Chrome DevTools comes built in with a diff tool which shows you the difference between two snapshots.
So essentially you can find out: look, this object has been created this many times, and if that number is very large, it usually means the object is getting created again and again, or it's causing a lot of GC by being created and destroyed repeatedly.

Another interesting anecdote here, an actual issue we ran into: say you have code like this, where we're trying to implement a timeout for an API call. If the API call doesn't return in, say, one second, we want to reject, to fail that request; we would rather serve an error page to the user than block all the systems for 10 seconds. It's a very simple homegrown technique: you create a new promise which rejects after one second, and then you make the actual fetch race against that promise. If the actual call completes within one second, the Promise.race resolves; if the timeout fires first, the race gets rejected and you return the error response. Works pretty well; consumes infinite memory. The problem is that when you do a setTimeout inside a promise like this, it doesn't get garbage collected, especially if this whole code lives in a singleton. That's a very interesting thing that happened, and the easy solution is to store the timeout handle in a variable outside and clear it once the race settles, in either case. That was hitting us really hard, causing a memory leak, which this fix resolved. Cool, so let's move on to CPU.
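Before moving on, the timeout-leak fix described above, keeping the timer handle and clearing it whichever promise wins, can be sketched like this (the function names are mine, not the speaker's):

```javascript
// Leaky version (as described): a bare
//   new Promise((_, reject) => setTimeout(() => reject(...), ms))
// raced against the fetch. The timer, and everything its closure holds,
// stays alive for the full duration even after the fetch wins the race.

// Fixed version: store the handle and clear it once the race settles.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timed out')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch: fail an upstream call that takes longer than a second.
// withTimeout(fetch('http://api.internal/items'), 1000)
//   .then(handleResponse)
//   .catch(serveErrorPage);
```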
So, CPU. It's mostly application land; there's very little system-level optimization you can do, because the operating system is already super optimized for managing CPU load and scheduling tasks, and the cluster module I talked about itself uses the OS's scheduling to decide which Node process runs when, and it does that pretty well. So for CPU, the first obvious thing: make sure you're running with NODE_ENV=production. Especially if you're using React on the server, a lot of rendering libraries change their behavior between development mode and production mode, and they almost always use NODE_ENV as the flag to check. In production mode they disable a lot of logging and error checking, which speeds them up a lot.

So again we go back to profiling: how do you do CPU profiling? It's pretty straightforward with Node; you literally just run node --prof app.js. How CPU profiling works, at a high level, is that the profiler takes snapshots of which code, in terms of memory addresses, which function call, is running on the CPU at that instant, and it does that hundreds of times a second. Say you let the profile run for 10 seconds; the output will show you something like this: for all the snapshots it took, which it calls ticks, this category of code, or this language of code, was running for this many ticks out of the total. In this example, 97% of the time was taken up by some C++ code running on the CPU. So let's look at the code; this is an example from Node's own docs.
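Before digging into that example, the profiling commands themselves can be sketched as below; app.js and its synthetic workload are placeholders, not the talk's actual server:

```shell
# Create a tiny CPU-bound script to profile (placeholder workload).
cat > app.js <<'EOF'
let s = 0;
for (let i = 0; i < 5e6; i++) s += Math.sqrt(i);
console.log(s);
EOF

# Run with the V8 sampling profiler; this drops an isolate-*-v8.log
# tick file in the current directory.
node --prof app.js

# Post-process the tick log into the human-readable summary the talk
# shows: ticks attributed to JavaScript, C++, GC, and so on.
node --prof-process isolate-*-v8.log > profile.txt
head -40 profile.txt
```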
So this is from nodejs.org, and I've shortened the code to fit on the slides. Essentially you have a crypto call (the library used in the example is now deprecated, but anyway): you get a request, it generates a hash for the password and compares it to a hash you get from the server. Ideally you should not be doing this here; it should always be done on the server side, and even then I'd say offload it to an API rather than doing it in the Node layer. But if you do this kind of thing and you get that profiler output, you can dig in further. If you want to break down what C++ was doing, the same output gives you, below that, a breakdown: JavaScript was doing this, these functions were running, C++ was doing this, and you see that of the total C++ ticks, 51% went into this one call. And this is how you find out: oh, in my code it's this one line, this crypto hash call, which is taking up almost 97% of my CPU. If I replace it with a better library, or remove it by offloading it to another service, this code essentially becomes free; JavaScript is doing almost nothing in it.
So that's how you profile CPU, and it's a rudimentary way of doing it, but it works well. There's a better way which also works on OS X; it has historically been hard to make these tools work on OS X because of restrictions, and if you're doing this on your dev machines, I see a lot of MacBooks here (on Windows, I don't know how you'd profile Node). 0x is a library which gives you one-line profiling for Node, and it's as simple as this: you install 0x and you use 0x instead of node to run your code. The output it gives you is a flame graph, which looks like this. It looks very fancy, but it's essentially the same thing you saw earlier, now interactive and visual, so it's easier to comprehend. On the x-axis you have the amount of time spent on the CPU, and on the y-axis you have the call stack. You can see node here was running for the entire time, obviously, because we're profiling node, so the node process ran for 100% of the time; within node there was an HTTP parser which took this much time, and something else which took this much time, and these call other functions which go up the chain like this. That becomes your call stack, and it tells you which file, and which line in that file, is taking time. At a very high level, the way you read a flame graph is that a flat top, a plateau, is bad: there was a function which took a lot of CPU time but didn't call any other functions, so that one function is blocking your CPU and main thread. Something like a narrow spire is good: just a chain of function calls where the last one takes no time, each calling one function and returning immediately. That's how a flame chart works, simple as that.

And we found some very interesting things. We use React to render on the server, and using this technique we found that React's renderToString was taking close to 60% of our CPU time, and there's not much you can do to improve that, maybe templatize or memoize things, but it does take a lot of CPU. The second-highest thing was JSON.parse and stringify: serialization takes a huge amount of time, almost 15 to 20% of CPU time was getting spent in that. So keep those things in mind; these tools will help you find what's causing those problems. And you need to have real-time monitoring in your systems; just doing load tests is not enough, you need to look at your real production systems too. This is the takeaway from today's talk: load, profile, fix, and repeat. I can't stress this enough: you have to run artificial load tests on your machines; you cannot rely on just assuming the code will run fine, or on running a small load test on one dev machine. So that's all. You can find the slides on this link, tiny dot cc slash scaling Node.js, and that's my Twitter handle. I'm open for questions. Do we have questions? I don't see any right now. Can you write it down? Thanks. All right, thank you.

Very unlikely that they're here, so here we are, almost ready for flash talks. We would like to have every flash talk presenter down here for preparations; if you are not yet down here, please come down right away. I have a few announcements. Tomorrow at HasGeek there's a WebVR workshop in the morning, and I'm happy to say that the instructor for the workshop is downstairs in the round table area to take questions about WebVR or about the workshop, so if you are interested, he will be there until five. I'd also like to announce the Dream11 prize winners. They've been running a hackathon and there are two winners whose GitHub IDs I have. If this is your GitHub ID and you're in the room, please go down and see the sponsors. So, the NCK, are you in the audience? Come on down. And the other winner is Varenya. Thank you very much.
All right, I think we're ready to start with our first flash talk. Talks are strictly limited to five minutes, which includes getting your projector online. So here we go: our first flash talk is Parshot, talking about Microsoft Teams; he's from Microsoft.

I'm in front of a bunch of JavaScript faithful. Maybe some of you are wondering: Microsoft, are they still even alive? What are they doing here at a JS conference? We are very much alive, and in fact the slide claims we love JS; it's on my t-shirt as well, so we must really love JS, and that's why we're here. I have been at Microsoft for 17 years, straight out of college, and been part of various parts of Microsoft; for the last five years I was part of Bing, shipping ML algos. So my knowledge of JS is two months old; before two months, I didn't know the J of JS. So why am I here? A couple of things: some perspectives on coming to JS development after such a long time, and to talk about Microsoft today. Historically, Microsoft has had a love-hate relationship with developers. We did lots of things that made developers like us, but we also did things that developers had a lot of angst about: if you were developing for the web, you'd do all kinds of stuff, and then you'd come to IE and it wouldn't work; all kinds of compatibility issues and so on. But Microsoft in those days had a certain attitude. We came up with Silverlight and said, yes, this is the way to do web development, use Silverlight, and only after some time realized: hey, maybe we need to pay attention to what's going on out there. So Microsoft these days is very different, especially in the last four or five years since Satya came on board. It's been a vast change: a lot more embracing of the open source community, a lot more open to partners, customers, and developers. And we've done a lot of things inside the company to support open source; just a few things I've called out here.
Most notably we have TypeScript; Anders, who is the creator of C#, also created TypeScript. VS Code: a lot of you use VS Code. A bunch of RxJS, and so on, right? So a lot of things have changed. We've also gone ahead with some of the things here, like CNTK; if you're paying attention to what's happening in the ML space, CNTK is sort of the equivalent of TensorFlow, right? And it's enormously popular with the community. So a lot of good things are happening. We're hiring a lot of good people as well: Sean Larkin from webpack is on the EdgeHTML team now, a bunch of good names. This slide doesn't call out Anders, who was the creator of TypeScript, right? So good things are happening. Microsoft as a whole, and being inside the company for this long, I can see that change, right? We are hiring the right set of people to change the thinking from within. So that's one part of what I want to tell you.

The second thing is something called Microsoft Teams. This is an application that my team is working on right here in Bangalore, and if you've used Slack you would probably identify with what Teams is. Has anybody heard of Teams here? A few people, right? Hopefully by next year it will be a lot more, right? The thing with Teams is that it is built on the web stack. Teams is part of Office, and it's built on the web stack, and that's huge, right? It's serious business being part of Office; we can't screw around over there, right? The other parts of Office have been out there for years and years: Word, PowerPoint, Excel, and they're at a certain level of maturity. With Teams, the execs gave us full freedom, saying go out, choose the right set of technology for yourselves, and we chose to build it on the web stack. So it's a huge bet. It's a huge bet because you have 100 million customers who will use it, because it ships to so many people out there, right? So Teams is built on Angular and TypeScript, right? Brief architecture slide: essentially, on the desktop,
we have Electron and TypeScript. All right, two minutes? All right, okay: talk to me offline about the lots of challenges we had, and continue to have. We are here in Bangalore; it's a fun place to be. That's all I can say. Thank you.

Hello, can you hear me? So this is not essentially a very technical talk, but it will be something that I'm sure a lot of you will relate to. This is about the life of a JavaScript developer. Just to give you context: all of you must be thinking that this is how our life is, right? We always keep partying and life is good for all of us, but mostly it is like this for all of us, you know, especially with the changing frameworks. I chose this topic because the theme of today is the wars of the frameworks, and in these wars we are just developers caught in between. So just to give you a quick one: when I started coding in the late nineties, at that time the user didn't matter. We used to create a website, and if the user entered a wrong password, or one in the wrong format, there was no validation done on the client side; it would go to the server, hit the server, the server said it's the wrong format, and then the user changed it. We never cared about the user so much. From there we introduced JavaScript as a validation tool, but soon we realized that the user wanted more; the user was getting greedy. Then Microsoft came up with Silverlight, at the time of Flash, and we thought, okay, how about bringing in animation and gradients and things like that, and we created good user experiences. And the moment we were doing it, the browser standards people said, you know, why are you using extensions? We'll give you a stable HTML5. And then we had to relearn the whole thing. Similarly, on those lines, a lot of libraries came out, like Knockout and all, and jQuery was a good library; then Angular said, okay, don't worry about libraries,
I'll give you a framework with which you can develop the entire thing. And by the time we started learning Angular, they said, okay, we are done with Angular 1, we'll do Angular 2 in a very different way, so stop doing whatever you're doing. At that time we were actually going to start a project, and my boss asked me, okay, what do we start with? I said, Angular 1 we cannot do. What about Backbone? I had no real idea about Backbone; I had tried it once in a while, we could try that out. But what about Web Components? That looks new. And we tried out Web Components, and it failed miserably in IE; it took 10 seconds. So we said, okay, that's not happening, and I was kicked out of my job; I was fired for that. Then I became a freelancer, and I started learning everything; I said, okay, I have to know everything. In my new job, when I was hired again, they said, okay, now you have to tell us what framework we should use. I said, what about React? They said, wonderful. And then the moment I joined, they said, okay, do you have anything else in mind? I said, okay, wait a minute, this time I want to take my time. I went to the Himalayas for one month, I did all the meditation, I came back, and I said, okay, React looks like the best choice. But then my boss said, have you heard about Vue? I said, okay, what is that?
Okay, so he said, I'll do one thing: I'll buy you the JSFoo tickets, you just go there, spend a week there, come back and tell me your opinion. So here I am for that. This is basically the kind of spectrum that we as UI developers have to deal with. There are so many technologies on the library side: jQuery, Knockout, Backbone, AngularJS, React, Vue; MVC, MVVM; plus even portals, I think React is coming up with that now. And the standards used to be ES5, ES6, ES7, what not. And the best practices: React just came and turned the entire best practices around; they said that separation of concerns is not a concern, you know, go the JSX way. And then immutability: I was totally immovable when I first thought about immutability, what exactly is there? And then lazy loading, lazy evaluation, lazy everything. And not just that, the devices also: first we started with mainframe, then desktop, laptop, mobile, watch, glass, now even TV, car and all, everything. So what did we do exactly? What about networks? Dial-up, 2G, 3G, 4G, Wi-Fi. And, you know, teams: earlier I was so proud to be in the UI team; now they said, no, no, it's not like that, we have to do feature-based teams, so you are now part of the shopping cart team, you have to drive the front end and the back end, you have to do it on the client side, and what about server rendering? And then the user also changed. You know, when I was in college in '99 we used to stand at the railway station in a queue for hours to buy one ticket; today the user cannot wait for 300 milliseconds. And then all these UX factors, you know: the look and feel, performance, interactivity, first paint, meaningful paint, first fold, second fold, so many folds. And then readability: your code has to read like poetry; how do I create poetry? Writability, maintainability, things like that. And then there are so many alternatives: Angular, React, Vue, what do I choose?
So I always say that, you know, if you love your developers, choose Angular; if you love your users, for performance, use React; if you love both of them, go for Vue. And then Web Components comes along and says, I will throw all of you apart, because I am the one who is going to replace you all, with browser support. So how do you save yourself? In just a couple of seconds you are going to find out. (Can I take like one more minute? All right, okay.) So the first thing that you should keep in mind: over the past few decades, what I have seen is that the one thing that doesn't change is the declarative way of thinking. For example, when my wife texts me today, you know, get potatoes on your way back, she didn't tell me, take an auto, go to the food mall, buy potatoes, give cash, then take another auto and get back home; that would have been the imperative way. The way to go forward is actually the declarative way, where you just tell your framework or your library or whatever that this is what I want to do, and the library should be able to do it for you. And the way to go forward is autonomous components: as you see, most of our frameworks, most of our libraries, they are all going the way of components. And this is not a new concept: in object-oriented programming you were already thinking about breaking down your bigger problem into smaller components that can interact with each other, and if you keep up your skills at these levels, they will never, you know, let you down. Okay, so this is what you think you are: when there is a wedding going on, you are actually the priest. The man wants to get married to the girl, and that is what's important, but you think that you are the more important thing. Your user wants to, for example, buy a ticket for a movie, and you, being a BookMyShow in between, you think that you are important,
but you are not important; the movie is important. So you are actually like this priest: the main thing is happening, and you are just enabling it, and if you are slow, this marriage is going to get delayed. That's why you have to make sure that you are really fast. So this is my last slide: what exactly you have to do is always keep the user in mind, because you are just a logical interface between the user and his work, and you have to connect them as soon as possible. Embrace alternatives (I said embrace, not embarrass): don't fight about, you know, this is good, this is bad; embrace whatever alternatives you have and find out the good things, and then find ways to interact better with the user. Just one small example: we had a problem of a bad mic, right, in this room. The mic is just a medium between me and you, and our talk is more important; the mic is just a medium. So if you are creating that mic, don't be that bad mic; you know, don't ask the user to hold it like this, create a good mic. Thank you.

Just for that last slide: I hate our mics. Our next flash talk up is Nikhil; he is going to talk about being a digital nomad, in Digital Nomad 101.

Can you guys hear me fine? Good evening, my name is Nikhil, and I am going to talk to you about being a digital nomad. Just going to start with the definition; hopefully all of you know what it is. I work as a digital nomad: I don't have an office, I work from wherever I am. Like, I was working earlier here at the conference; I work from cafes, and when I travel I stay at Airbnbs, etc. So basically a digital nomad is someone who does not have an office and works purely remotely, and I am going to skip all of that. A lot of my friends have asked me why I do this, and the answer is the largest slide here: you can basically work from wherever you want, whatever location, whatever time for the most part, and more importantly, whatever clothes you
want to wear. In Bangalore especially, something that is paramount is that you don't have to commute in the traffic. And pay and taxes: it's a debatable thing, but if you are good you can get good pay, and if you have a good CA you can pay less in taxes. Stress is not only traffic related; a lot of these things sort of add up. For example, when I take 3 hours to get from Marathahalli to Sarjapur, that adds to the stress, which makes me not able to put everything into my work, so that is again a big advantage. Health: the excellent quality of air in Bangalore, and having to settle for food that might not be to your taste; these are smaller things, but they all add up. In the end you can cook at home or pick a restaurant of your choice when you are out. With a cold, you wouldn't want to go to the office and spread the germs; instead you can sit at home and work, so it doesn't really hamper your work, as long as it is not like chicken pox or something; you essentially end up not missing that much work. More time with family and friends: you can stay at home, or in your home town as long as the Wi-Fi is good, right? Or even better, travel and explore new countries, wherever you can go from time to time, whatever you can afford, and meet new people. Online, you work a lot of the time with people from around the world; that in turn improves your knowledge of cultural nuances, and just improves communication and your personality in general. And there is a lot more; not going into any more details. And because I said so much good stuff, I should also get to the bad stuff. When you are working remotely from your house there is no structure; you are the only person controlling you, hopefully. There is a big difference between working in an office, where you have your superiors or your colleagues and there is this pressure to perform, and working from home. So you need to force yourself to sit down for this many hours without getting disturbed, which
takes me to the next point: the lack of a social life. If you are an extrovert, you want to meet people and talk to people, and that doesn't happen at home; so if you are that kind of a person, it might not be for you. Distractions at home: there is always some distraction or the other, and that is something you need to handle and negotiate. And dependency on technology: it's a big problem in India especially, because the network is spotty in places and the Wi-Fi is not always that good. And obviously job security: at some of the places I worked before, if somebody says, okay, we don't need this function anymore, you are out of the job, so that is always a risk. For employers, if you are using remote employees, then you have a bigger talent pool, not just limited to Bangalore or Hyderabad; you can get anybody from around the world. You don't need to pay for office infrastructure, which is a big expense, and operation costs; and obviously productivity: some people prefer to work from home, others from the office. I am just going to skip over this because I have 15 seconds. How do I sign up? First, hone your skills, technology and communication skills; update your profiles on Stack Overflow, LinkedIn, etc. And there are a bunch of websites that do solely remote work: AngelList, Toptal, Crossover. And that's it. Questions: I will be available somewhere around here, and on social media if you want to know more. Thank you.

That was a perfectly timed talk; digital nomads know how to get it done on deadline. Our last flash talk is... I can't even read my own handwriting, and I have to put my glasses on, so he gets an extra 20 seconds: Laxia, who is going to talk about compound components in React.

Hey guys, hope you are having an awesome time here. First of all, I would like everyone to give a round of applause to the HasGeek community, because, come on guys, a whole week of conferences, absolutely amazing. Awesome. So my name is Laxia, but I like to go by Lucky, because not everyone can guess how to pronounce my
name right. This talk is actually about compound components, but every time I talk to people about it, all they can understand is... so that is what I am going to call it. You can find me on Twitter at double underscore Laxia, which is kind of weird, because I like to think I am kind of hard to get, but every time I say that, most people say: you can't play hard to get if you are already hard to want. So my talk is going to be a little abstract. I am going to use React to do it, but this applies to every other component-based framework. Assume a designer comes to you and says, hey, you know, I want to build an accordion, and this is what an accordion does, right? The data structure that you get is generally something like this, where you have a title and a content; this is the first version of the data structure you get. So if you build an accordion, the rudimentary way to do this would be to just say Accordion, Sections, and pass in all your data; but this is tightly coupled to title and content. If you go into the accordion, it is just a simple implementation; I just want to show that it is tightly coupled to section, title and content, and then based on your state you see which index is open, and you add another modifier class to open it. Yeah. So let's say a few weeks later the API structure changes, right? Now it's no longer of the form title and content; everything breaks, yeah, because it's emoji, name and info. You have to write this weird, ugly transformation in your parent component, and that doesn't work, because now it's sitting in your code where it doesn't belong; it belongs in the accordion component. Most people don't understand what's even happening here, right? Your accordion component at this point is kind of opaque, because no one can actually use it; it's not flexible. A different way to approach this is what is called compound
components, right? A compound component is basically a set of components that compose together to give you a feature. So if you look at this, the title and the content: if you use React to write components like this, you have a list, and map is generally what you want to use, right, because that's how you want to write your declarative code. You have more flexibility here, because you have a Title and you have a Content; you can pass in whatever data you want, you're not constrained by the shape of the data you're passing. So this works with your old data API structure, and if you have a new structure it still works, right? Because it's an accordion list, an accordion section, a title; the data you pass can be your own, it doesn't matter, it's super flexible this way. If you now look at it: assume you have both kinds of data; you can take them and start rendering. I'll show you what it looks like: if you map over this and render both things, it works. Perfect, you have a different API structure. Now here's where the real power of compound components comes in, right? All the functionality is baked into each individual component, but they compose together to form the feature, right? So let's say the designer comes in and says, hey, the accordions are all opening downwards, I want them to open upwards. You can actually take this Content and just put it on top, save that, refresh; yeah, now your accordion opens to the top, without you having to go back and change any code manually, right? I have some more time, so I'm just going to show you one more interesting thing. Imagine tomorrow your product manager comes in and he's like, hey, we have an accordion, and I want to put my ad right in the center, right? You want to get something like this working, but you can't do that with a closed API structure. Imagine trying to do that here: what are you going to do? You can't pass in something like showAd, right;
you can't say something like that. Here, you can actually render two sections and then just render whatever h1 tag you want, or whatever component you want, because that's what this component is trying to do: it's allowing you the flexibility to render whatever you want, wherever you want. Yeah, you can find the code at bit.ly cc accordion; that's compound components. Thank you guys.

All right, that was a very, very fast set of flash talks, but they were all pretty good, and wow, that one was really funny. We are down to the last talk of the last day of what, for me and the rest of the HasGeek staff, has been five days of conference; for you it's been maybe three, maybe two, possibly only one. Our last talk is Modular Services in a Node.js Monolith.

Hey, hi everyone. Are you going to introduce me? You go right ahead and introduce yourself, you don't need me. No, no, no, I need you. You don't need me; Neval can introduce himself, he's awesome. Okay, hi everyone. People at the back, hello; people at the front, hello. So I was stretching myself, because all my blood accumulates right here, and I need it here right now. So my talk is about writing modular cloud applications. I'm also going to talk about microservices and monolith architectures and compare the trade-offs; that is going to be the first part. The second part of the talk is going to be about implementing these modular apps in Node.js, and I'm also going to discuss the library RGJS, which has certain benefits. So yeah, hi, I'm Neval. I'm a full stack startup penguin; I use penguin because ninjas are old school, penguins are better engineers. And I have done a lot of what-if startups. My first what-if startup was about daily arts: I used to do art exhibitions along staircases, so the what-if here is: what if you do art exhibitions along staircases, so people don't need to take the lifts, they can walk up, and there can be some interesting stories told along staircases? So
my second what-if startup is: what if my chess board is split to half its size and I can play chess on half of the board? This is what I'm doing these days, and you can play this game after the talk. If there are any comments, criticisms, feedback, or if you want to add me to your will because you like my startups... right. So, there are various decomposition techniques that we use in software development: we have microservices, modules and shared libraries. Microservices and modules hold our business logic, and shared libraries are something more abstract. Let's define them. Microservices are 100-odd lines of code, they are separated by the network, and each of these services has its own DB. Modules are a few files in size; they compose together to form a monolith, and they have a central database. So these are the two functional techniques that we will be discussing, and these techniques are used across a range of organizations. I have put three categories of organizations here. One is startups: startups are pre-product-market-fit companies, and they are experimenting with the space they are in. Then there are mid-size companies: these are companies that have found some traction, but they don't know how big their market is. And then we have unicorns: unicorns are consumer internet companies with, you know, huge traction. So microservices architectures come from unicorns, and they were solving some interesting challenges for them. I am quickly going through these slides so we can spend more time at the end. So, microservices come from unicorns; they were solving interesting challenges for them. Microservices are easier to scale: they are individual small services, and they can be horizontally scaled; every service does one thing. They improve team productivity: each service is owned by a particular team, and they have a lot of independence in what sort of database they will use, what sort of technologies they
will use, and that helps them work; they don't need a yes from anyone. Because of these reasons, microservices became very popular. And when they became popular, a lot of tools came along for us to use microservices, and these tools make it very easy to implement microservices even if we are not unicorns; I mean, they are very easy to do now. There are lots of talks that we can watch and, you know, build these microservices. Another important advantage of using microservices is that they enforce good design: each of these services has its own isolation, so there is less coupling between the services, and that gives us a very good architecture. So these are some good reasons for building microservices, and now we are going to look at some reasons for not building microservices. This is one of my older slides, and it is a true representation of the organizations that exist out there. 95% of these companies are startups, which means that, as I said, they are still experimenting with their space, and their team sizes are like 5 to 20 people. Then one out of 20 of these companies becomes a mid-sized company, and their team sizes would be like 5 to 50 people. And one out of 10,000 of these companies will become a unicorn, which has, you know, hundreds and thousands of engineers. So these companies all have very different sizes, and obviously they have very different needs. Now let's look at the same points from the point of view of the first two kinds of companies, and see how well microservices fit them, or whether some other architecture fits them better. So there are some good reasons, if you are a startup, for not building microservices. Scaling is not essentially your biggest problem; your biggest problem is that you are more likely to pivot. This is the normal course that a startup takes: they start by figuring out a problem, and then they realize that, okay, nobody
wants that, and they try to do something else, which means there is a lot of code refactoring. With microservices it is very difficult to refactor your code, because everything has its own boundary; with monoliths all your code is centralized, so it is very easy to change your code. With microservices, each service has its own DB, and that also creates a problem if you have to move one thing from one DB to another; with monoliths you have a central DB, and migrations are much easier. So if you are a startup, that is one very important reason for you to build monoliths first. The other reason for not building microservices: the reason I gave was that they improve team productivity, but that is only true for bigger teams. A quick story: earlier we used to have SOA architectures, and the move from SOA to microservice architectures was largely driven by agility; it was not just about scalability. Unicorns formed independent, cross-functional teams which had a lot of independence, and they could, you know, experiment at a massive scale to move user behavior from the real world to the online world; independent teams were able to experiment much better. But again, as a startup or a mid-size company, you do not really have such big teams, so that is another reason for you to possibly not use microservices. The third reason we come to is cost. When we move all our complexity to infrastructure, we need somebody to manage that infrastructure, and I have seen a lot of small startups where the CTO has to wear a lot of different hats; it will end up that this hat ends up with the CTO as well. It is better that you focus on the business problem rather than trying to work on a problem which you do not have. And the last reason for not building microservices: they enforce good design, yes, they do, but as a startup especially,
you are still exploring your problem space, and to find that good design you need to know a lot about the domain. When you are learning on the job, it is easier to work on a monolith. This is something from Martin Fowler's blog: Martin Fowler says that he has seen a lot of companies that try to go microservices-first get burned by dragon fire, but he has seen a lot of other people who build a monolith first, then split the monolith into microservices, and reach their end goals. I think this is funny. So, this is from Martin Fowler, and that finally brings us to our score for microservices versus monoliths. If you are a startup or a mid-size company, then you probably care more about these other points: you want to be faster to market, you want code which is easier to refactor, and you want low cost. So: monoliths 3, microservices 0. Now suppose you leave this talk at this point and start building monoliths. What happens is that a few months down the line you have a bigger team and a lot of customers asking for more features, and at that time all those teams are working on the same code base and they break each other's features. When they break each other's features, because clear ownership is not defined, what happens is that there is a loss of trust; they will say, I will not use your code, you will not use my code, and there is no reusability. So a few months down the line: shoot, we should have gone with microservices, why did we even start with monoliths? That is the dilemma you are left with, a few months down the line, if you manage to survive. So this is the trade-off, and there is a solution to that trade-off. The solution is building modular monoliths. So, if
you remember Martin Fowler's slide, he said: build a monolith that you can break. That is a modular monolith, and it has the same advantages as microservices: it is highly agile, and each of its modules is very composable. So the final score is here. I will just give you a simple example of modular apps, because we are probably new to them. The International Space Station is a great example of a modular app: each of its modules was not just built by different teams, but by different teams in different companies, in different countries, and assembled together in space. If we can build something that modular, in such a distributed fashion, we can surely do the same with our software; this is where we should get the confidence. And yes, we can do this in our software, if our modules follow these three rules: strong encapsulation, well-defined interfaces, and explicit dependencies. I will get to the theory of this in the second part, but I will give some more visual examples first, so that we can remember this presentation. How do I compare a monolith, a microservice and a modular app? I use a Rubik's cube example; a lot of people use a spaghetti example. This is a jumbled Rubik's cube; it is like a messy monolith. If I ask you, how many colors are there on this Rubik's cube, it is difficult to answer. If I ask you, can you count all the red squares, it is again difficult to answer. On the other hand, in a microservice architecture it is much easier to answer: all the different cubes are separated, and it is easy to spot them, identify them and count them. A modular monolith, on the other hand, compares much better with a microservice architecture. These are the final two, most important slides: if you want to go back to your companies and talk about why they should be building modular apps, you can just use these two slides. With a messy monolith it is easy to start
crunching code, and you will be building features from your first week; you can start shipping features. But pretty soon, because there is not much thought given to architecture, your code will start increasing in complexity, and adding new features gets exponentially more and more challenging. On the other hand, with microservices your initial costs are high, because you are trying to solve the technical problems first rather than the business problems, but your complexity increases very linearly. This curve is the cost trade-off, and modular apps come in very well: they are faster to market, but at the same time they have a gentler gradient, and much further into the future, when you hit those scaling challenges, you can start breaking your modular app into microservices. So that brings us to the end of the first part of the talk; now we are in the second part. It is 30 minutes in, so it is countdown time. The second part is about implementing modular architectures, implementing them in Node.js, and there is also a quick demo where we will take a modular app and break it into microservices. Okay, so I have been saying modules a lot; what is the definition of a module? Modules are self-contained, interconnected units of business logic. They are self-contained, which means that they do one thing and they do it well; interconnected units of business logic means that they reuse each other and define very clearly how to reuse each other. With this, they also follow the three rules I mentioned earlier: strong encapsulation, well-defined interfaces, explicit dependencies. Now I can talk about them a bit. Our demo is of a booking app, where the user books a ticket for a venue, and this particular module is the booking module. It has a lot of implementation; all these files implement the booking module. And what strong encapsulation means is
that no other module should be able to access any internal implementation. Suppose some other module wants to use cache.js: it should not be able to. If another module wants to use this module, it should only be through the interfaces that we have explicitly defined: you can reserve a ticket, you can hold a ticket in cache and then make a payment and confirm it, and you can book a ticket directly in addition to that. When we have these well-defined interfaces, the other advantage is that we can maintain them across versions, and there is more trust between teams. The other thing on this diagram is the explicit dependencies. The booking module uses some outer modules: there is a cache module and a DB module, and it also defines which particular DB it uses. This is very important, because one of the places where coupling gets introduced in our monoliths is the DB layer, not just the business layer. We need to define these external dependencies too, because we want our modules to be reusable across applications, and when we move a module from application 1 to application 2, we need to take its dependent modules with it. If we follow these three rules, we will have a good module. Modular design, though, is about breaking our application into different modules, and when we are designing a modular app we can optimize for different characteristics: readability, reusability, maintainability, and agility. I will discuss each of these through examples now. Interestingly, readability and reusability are two opposing forces: if we want our app to be readable we will have bigger modules, but if we want more reusability we will have smaller modules.
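As a rough illustration of those three rules, here is a hypothetical sketch of what the booking module's shape could look like. This is not the demo app's actual code; all names are illustrative.

```javascript
// Hypothetical sketch of a booking module (illustrative names only).
// Internal helpers stay private; other modules can only go through
// the explicitly exported interface.

// --- internal implementation: not visible to other modules ---
const heldTickets = new Map(); // stands in for the cache (cache.js)

function holdInCache(ticketId, userId) {
  heldTickets.set(ticketId, userId);
}

// --- well-defined interface: the only entry points other modules see ---
function reserveTicket(ticketId, userId) {
  holdInCache(ticketId, userId);
  return { ticketId, userId, status: 'reserved' };
}

function confirmTicket(ticketId, userId) {
  if (heldTickets.get(ticketId) !== userId) {
    throw new Error('ticket was not reserved by this user');
  }
  heldTickets.delete(ticketId);
  return { ticketId, userId, status: 'booked' };
}

module.exports = { reserveTicket, confirmTicket };
```

Because only `reserveTicket` and `confirmTicket` are exported, no other module can reach into the cache directly, which is the strong encapsulation point.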
Sometimes these characteristics are opposing, so we have to find a good tradeoff. Again, this is our demo app, and there are a lot of modules, categorized into three directories: domain, database, and interface. Domain contains all our features, our business logic modules, and a quick look at it tells us what this app does and what our code flow can be. Database, a MongoDB in this case, has a lot of different collections, and each collection is treated as a module of its own; that way we can inject them into domain and know what sort of coupling we are introducing at the database level. The third directory is interface: the various ways of accessing our business logic, over different protocols. We can book a ticket through the web, through SMS, or through a messenger chat, and in all these cases we use the same business logic module, which makes for a better architecture. Domain is about readability: when we are building an application, these are the features we need to implement, and each feature maps onto a module, so just as our specs are readable, our directories can be very readable too. Breaking modules out by domain optimizes for readability. Maintainability means taking care of future needs: if there is some module, or some logic, that might change in the future, we should pull that particular thing into a separate module, because it is easy to plug modules in and out and try the different variations we want to try. Here, each collection is made into a module because, later on, one of our requirements is that we will want to break out these
modules into different microservices, and when we do that, we will want each of those microservices to have its own DB. That is why making each collection into a module of its own improves our maintainability. The last one is interface, and this is about reusability: we can access our business logic through different protocols, from different devices, and reuse the same module, which increases reusability. I hope that clears up three of these optimizations: readability, maintainability, and reusability. The last one is agility. How do modular architectures help us write more agile code? Say our application is successful and now spans different geographies; then we are going to have different teams for different countries. For example, we will have a payments workflow for India, a payments workflow for the US, and a payments workflow for the EU. When that happens, different teams will be working on each of these modules, so we break up our business logic into different modules so that different teams can work on them in parallel. Thus we optimize for agility, or teamwork. So, great: this sort of directory structure, this sort of architecture, can help us create a modular app which compares very well against both a monolith and a microservice app. But, going back to the rules I discussed earlier, if you break any of them, such as strong encapsulation, you can still end up with a messy monolith. We need to enforce these rules, strong encapsulation, well-defined interfaces, and explicit dependencies, in our code base, because if we do not, we will end up with a messy monolith, and there are two approaches to do that.
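The regional payments split mentioned a moment ago could be sketched roughly like this. This is a hypothetical illustration, not code from the talk's demo; all names are made up.

```javascript
// Hypothetical sketch: region-specific payment workflows as separate
// modules, injected into the booking flow. Each regional team owns
// its own module and can evolve it independently.

const paymentsIndia = { charge: (amount) => ({ region: 'IN', amount, method: 'UPI' }) };
const paymentsUS = { charge: (amount) => ({ region: 'US', amount, method: 'card' }) };
const paymentsEU = { charge: (amount) => ({ region: 'EU', amount, method: 'SEPA' }) };

// The booking flow depends only on the shared `charge` interface,
// not on any one team's implementation.
const paymentModules = { IN: paymentsIndia, US: paymentsUS, EU: paymentsEU };

function payForBooking(region, amount) {
  return paymentModules[region].charge(amount);
}
```

Because each team's code sits behind the same interface, the teams can work in parallel, which is the agility point above.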
One of them is through code, and the other is through configuration. When we implement modularity through code, we have the same directory structure in both cases, but each of those directories has an index.js file, and that index.js file takes care of linking its module with other modules and creating a modular app. If instead we use the configuration approach, we use a library, arch.js, as one more layer of abstraction, and arch.js takes care of linking the different modules into an app. Next I want to compare the tradeoffs between these two approaches, but wait, let me ask: are there any questions so far? OK, we will continue. So, in the code approach, each module has a main file, index.js, and this file is responsible for linking the module with other modules and creating the app. All these index.js files follow a similar pattern; I am again using the booking module as the example. First we import some configuration; next we consume other modules, for example the ticket database; then we initialize our inner implementation classes with these; and lastly we export what other modules can use from us. This is exactly like npm modules, but this approach has a small disadvantage: the directories are hard-coded in here, the modules as `../modules` and the config as `../../config`. So if we move this module to some other app, or change the directory structure of one of the modules, we have to change it everywhere it is consumed. And the other disadvantage is with these internal implementations.
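A hypothetical shape of such an index.js is below. The hard-coded require paths are shown in comments and replaced by stand-ins so the sketch is self-contained; all names are illustrative, not the demo app's actual code.

```javascript
// Hypothetical index.js of a booking module under the code approach.
// In a real app the commented require() lines would pull in sibling
// modules via hard-coded relative paths (the drawback described above).

// const config = require('../../config');            // 1. import configuration
// const ticketDb = require('../modules/ticket-db');  // 2. consume other modules
const config = { holdSeconds: 300 };                  // stand-in for the sketch
const ticketDb = { find: (id) => ({ id, seat: 'A1' }) }; // stand-in for the sketch

// 3. initialize the inner implementation with those dependencies
function createBookingService(db, cfg) {
  return {
    book(ticketId) {
      const ticket = db.find(ticketId);
      return { ...ticket, holdSeconds: cfg.holdSeconds, status: 'booked' };
    },
  };
}

const booking = createBookingService(ticketDb, config);

// 4. export only what other modules may use from us
module.exports = { book: booking.book };
```

The four numbered steps are the pattern from the talk: import configuration, consume other modules, initialize the implementation, export the interface.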
They are all synchronous, so if some module needs async initialization, say we have to wait for it to start before initializing the next module, that is not possible in this approach. The configuration approach, on the other hand, involves some new syntax that we have to learn. So this is the second approach, implementing modules using configuration: each module consumes some other modules, and each module provides services for other modules to consume, and we define this under a plugin tag. Each module has a package.json with a plugin tag, and that has two sub-tags, consumes and provides. arch.js reads them, and when it calls the implementation file, in this case v1.js, it calls the constructor function there and passes the modules it consumes in imports and the configuration in config. So basically both these approaches, and I will quickly get into how this happens, help us implement the three properties in our code bases, but there are slight trade-offs. With the index.js approach, our directories are hard-coded and the initialization is synchronous. With arch.js, we use service names rather than directories to inject what the modules need from each other, which is that additional abstraction, and we can also have async initialization, but there is a slight learning curve; that is the trade-off. I also wanted to pitch arch.js in one line: arch.js improves project readability by moving the higher-level dependencies of business logic out of the implementation and into configuration files, and here is what that means.
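To make the constructor-with-imports idea concrete, here is a hypothetical sketch of the configuration approach. The package.json shape and all names are illustrative, based only on the consumes/provides description in the talk, not on arch.js's actual source.

```javascript
// Hypothetical sketch. The booking module's package.json would declare
// something like:
//   "plugin": { "consumes": ["ticketDb", "cache"], "provides": ["booking"] }
// and the loader calls the implementation file's constructor with the
// consumed services and the config.

// v1.js: the implementation file's constructor
function bookingModule(imports, config) {
  const { ticketDb, cache } = imports;
  return {
    // the "booking" service this module provides to others
    booking: {
      reserve(ticketId, userId) {
        const ticket = ticketDb.find(ticketId);
        cache.hold(ticket.id, userId);
        return { ...ticket, userId, ttl: config.holdSeconds };
      },
    },
  };
}

// a tiny stand-in for what the loader does: resolve the consumed
// services by name, then call the constructor with imports and config
const held = new Map();
const services = bookingModule(
  {
    ticketDb: { find: (id) => ({ id }) },
    cache: { hold: (ticketId, userId) => held.set(ticketId, userId) },
  },
  { holdSeconds: 300 }
);
```

Note that the module never requires a directory path: it only names the services it consumes, which is the extra abstraction being traded for the learning curve.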
If a new user sees my app, they can go through one or two config files and get a good idea of the project structure and the code flows, because all those higher-level dependencies have been moved out of the implementation logic. Now, before the demo of arch.js: are any laptops open right now? If so, give me one minute; I would like you to download this. How can I go back to the first slide? If you can download this demo, bit.ly/arch.js-demo, while I complete this talk, then you can take a look at a demo app and we can do a quick demo. So again, arch.js moves these higher-level dependencies out of the implementation files and into configuration files, and that improves readability. This library is derived from a library called architect.js, which is used by C9 in production; arch.js is a simplification of it, and therefore production-tested to some extent. It took me just 10 days to create this library. I am going to skip this slide. Finally, arch.js works in two steps. In step one, it reads all the modules that my app is going to need, goes into their package.json files, finds out which services are exported by those modules, and puts them in a central registry. In step two, it creates the dependency tree, passes on these services, and initializes our app in the right sequence. And it finally helps us implement the three rules of strong encapsulation, well-defined interfaces, and explicit dependencies.
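Those two steps can be sketched in miniature as follows. This is only an illustration of the idea described above, not arch.js's actual implementation, and it does no cycle detection; the module shapes are simplified.

```javascript
// Minimal sketch of the two steps: build a registry of which module
// provides which service, then initialize modules in dependency order.

function initInOrder(modules) {
  // step 1: registry mapping service name -> module that provides it
  const registry = new Map();
  for (const m of modules) {
    for (const s of m.provides) registry.set(s, m);
  }

  // step 2: walk the dependency tree, initializing each module only
  // after everything it consumes is ready
  const ready = new Map();   // service name -> initialized service
  const started = new Set();

  function start(mod) {
    if (started.has(mod.name)) return;
    started.add(mod.name);
    for (const dep of mod.consumes) start(registry.get(dep)); // deps first
    const imports = Object.fromEntries(mod.consumes.map((d) => [d, ready.get(d)]));
    const provided = mod.setup(imports);
    for (const s of mod.provides) ready.set(s, provided[s]);
  }

  modules.forEach(start);
  return ready;
}

// usage: "booking" is listed first, but "db" is initialized first,
// because booking consumes the ticketDb service that db provides
const bootOrder = [];
const booted = initInOrder([
  {
    name: 'booking', consumes: ['ticketDb'], provides: ['booking'],
    setup: (imports) => {
      bootOrder.push('booking');
      return { booking: { db: imports.ticketDb } };
    },
  },
  {
    name: 'db', consumes: [], provides: ['ticketDb'],
    setup: () => {
      bootOrder.push('db');
      return { ticketDb: { kind: 'mongo' } };
    },
  },
]);
```

The declaration order of the modules does not matter; the consumes lists alone determine the initialization sequence.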
The encapsulation is stronger because there are no imports between modules: the only imports between modules happen through those constructs in the package.json file, consumes and provides. It lets us implement well-defined interfaces, where each of these services is a contract between teams that has to be maintained across versions, and it explicitly defines the dependencies on other modules, which means we could easily build tooling that gives a bird's-eye view of the project. So this is the demo app. I cannot do a live demo because I do not have my machine, but I took some screenshots to quickly show how we can break a monolith written with arch.js into microservices. If you take a look at the demo app, there are three really important files: the .js file in the root, which lists all the modules my app will have; the booking module, which we are going to break out into a different service; and the interfaces, which is where our requests come in from. I have some screenshots of the demo. In the first step we run MongoDB; in the second step we run the Redis server. MongoDB is the database we are using, and the Redis server is used for holding the tickets and then booking them. The third step is the node app, node app.js, followed by monolith, and this runs the monolith. Let me go over the code a little, because we still have 10 minutes; I will do it quickly, in two or three minutes. So this is the demo app, and this is the root .js file I wanted to show you. It has various modules; this is the booking module, the module one, the one we use when we are running the monolith.
There are some config options that are required by these modules and passed to them. And this is the monolith: it uses an express app, it uses the booking module, and it uses some common modules, which are basically our database. When we run this, there are two logs from the booking service, check and block seats, so when we run our test cases we can see that a seat is blocked by these two logs. The second part is where I break the monolith. To break it, in that root .js file I created a main app, and I used exports booking.client and exports booking.server instead of exports booking.module. There is hardly any difference between them: one has the role client and the other has the role server; the rest is the same. So now I have two apps, a booking service and a main app. The demo goes like this: first we start MongoDB, then we start the Redis server, then we start the booking service with app.js followed by booking-service, and then we start the main app with app.js and the main app. When we run our test cases, the booking logs, confirm seats and check and block seats, come in here, in the service. That is how easy it is to break a modular app into microservices using this architecture. There are some things not covered in this talk. When we are writing monoliths, we can introduce coupling not only in our business logic but also in our database layer; that is one thing not covered here. There are also some more advanced features of this library that are not covered. The last thing I want to say is: please try this demo. It is a very small library, less than 1000 lines of code, and it is looking for
people who want to use it, play with it, and own it, because it is a side project, a fun project for me. So go over the code, and if you find it interesting, please feel free to fork it, add to it, et cetera. That is all, thank you. We have questions. Question: the expectation was that we were supposed to deploy two, three, or more client versions of a front-end app, for example with different branding, but the API, I mean the data, had to be shared between all of them, so we had to start from that monolithic part. In that case, what would you suggest? Answer: I would suggest that if you are serving different clients, the agility point comes into the picture. Say different teams are serving different clients: then you break up the part of the logic that each team needs into different modules, and each team owns its part. There was a slide back there about agility, where different teams handled payments for the US, India, and some other country; you could apply the same idea to different clients. You could have three versions of some strategy module that gets injected into some mother module, something like that. More questions? I guess not. Thanks, Navel. That brings us to the very end of the conference, and I would like to take a moment to ask all the volunteers