and she has been a developer for years now, and she is one of the developers for Jira, so if you are using it, please welcome her.

Thanks for the intro. So, I know the topic doesn't match what he announced; that's nobody's fault but mine, but it is pretty much the same thing: I'm going to be talking about performance. Welcome, everyone. After lunch I guess you're all full, so I hope this doesn't bore you too much.

Today I'm going to be speaking about my work at Atlassian over the past year. We have been trying to set up all our teams to be very sensitive to front-end performance, and my team was one of the first to adopt this, so that's what my talk is about: how to get speed as a feature in your front-end teams, and what cadence you have to follow as a front-end developer.

The goal of this talk is basically: what does it take to build a culture of performance sensitivity in your front-end team? You all know how important performance is; there has been plenty of research and multiple talks on this, and I think around 2018-2019 Google's whole focus was on performance. But it was not always that way for me. For the first half of my career I did not care about performance at all. Only on joining Flipkart did Abhinav Rastogi tell me, "No jank. There should be no jank in the website," and that was the extent of my knowledge until then. But over the past year I have delved deep into what it takes to debug performance issues when they come up: what you do, how you monitor, and what you monitor for. So in this talk I'll cover what I learned during that time and what we have implemented.

A quick show of hands: who here works on JavaScript? On the front end? What is the ratio? Mostly front-end JS, okay. So, in JS at least, we all know performance is important. Let me start with this research by Tammy Edwards, done as part of her research company.
What they did was slow down Tesco's website by 500 milliseconds. The interesting thing to watch out for was that people did not only think it was slow; they also thought other things about the website that were related to the features being developed. They thought the features were childish, that they were complicated or hard to navigate. This is an important thing to keep in mind: users mistake slowness for a badly built, badly featured website. So the performance problem on your website might be bigger than what you know.

The product I work on is called JIRA, and it has grown up to, like, a hundred. You might have to move between teams a lot, and we work with legacy pages as well, the older pages that are served using JSP, and the performance should remain the same whether it's at a bigger scale or a smaller scale.

So one thing that I learned working on this product is that front-end performance is hard. It is so hard to make something slow work faster. And worse than that, something I learned is that front-end performance is fragile. It takes a whole army to build a fast website, and it takes just one person's mistake to slow it down for a day: an accidental loop, or somebody not debouncing an event handler. If you don't have checks and balances in place, these things can slip by unnoticed and gather into a big mountain that you cannot debug later.

Quite a lot of people here probably know the Chrome Profiler, but I still think this might be useful, and maybe it can be a point of discussion later in my talk. So I'll move to the demo now. The Chrome Profiler is one of the tabs that Chrome provides as part of its developer tools: you have the tab that shows your DOM elements, one for your source code, the console, and so on, and then there is the Performance tab. This Performance tab allows you to do two kinds of recordings, and I use both of them extensively. One of them I use when I want to interact with my website and see what kind of JS stack and what flame charts I get within that interaction.
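A quick aside on the "somebody not debouncing" mistake mentioned above, since it is such a common way to slow a page down. Here is a minimal sketch of a leading-edge, timestamp-based debounce; the helper name, the 100 ms wait, and the injectable clock are illustrative, not from the talk:

```javascript
// Leading-edge debounce based on timestamps: the first call runs
// immediately, and further calls within `wait` ms are ignored.
// `now` is injectable so the behaviour is easy to test.
function debounceLeading(fn, wait, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= wait) {
      last = t;
      return fn(...args);
    }
  };
}

// Hypothetical usage: avoid running an expensive handler on every event.
let calls = 0;
const onScroll = debounceLeading(() => { calls += 1; }, 100);
onScroll(); onScroll(); onScroll(); // only the first call runs
```

A trailing-edge debounce (built on setTimeout) is the more common variant for things like search inputs; the timestamp version above is simply easier to reason about and test.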
The other kind I use to reload the page and see page-load performance. So let's reload the Google website. As you saw, it knows when the page load has finished and stops profiling immediately. If you want to have a look at what it recorded: this part might be the most interesting for us, but even more helpful, I feel, are the screenshots that it provides for every frame here. You can see that there was no activity until here, then there was a load, and then the page comes back here. So this page took from here to here to load something, and this for me is the highlight of the Chrome Profiler: I'm always looking for the place where the stuff I am interested in loads, and I select that part of the timeline. Once you make the selection, everything below it zooms into that selected part, and you get more interesting data.

I have mainly used the Network track, the flame charts, and the Call Tree here; all the performance debugging that has been useful to me, where I was able to get some outcome, came from these three views. The Network track is quite interesting in the sense that it shows you the exact point in time at which your scripts loaded. For Google, at this point you can see that the logo started loading here, and then you can investigate what happened before this if it is too slow for you.

But I think the most highlighted feature of the Chrome Profiler is still its flame chart. It has helped me numerous times; I might be able to show you an example too. What the Chrome Profiler shows is exactly what JavaScript ran, and this can correspond to the JavaScript that you have in your code base. If you are running unminified code, it's very easy to map what's happening where, and the longer the flame chart is, the slower your call is: there is something in there that is holding things up.
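When digging through flame charts, it also helps to add your own marks with the User Timing API (performance.mark and performance.measure): they show up in the Timings track of the Performance panel, lined up against the flame chart, so you can attribute a slow span to a named piece of work. A small sketch, where the mark names are made up for illustration (the same API exists in Node via the global performance object):

```javascript
// Marks and measures appear in the "Timings" track of the Chrome
// Performance panel, aligned with the flame chart.
performance.mark('issue-list:start');

// ...the work you want to attribute, e.g. rendering a list...
let total = 0;
for (let i = 0; i < 1e5; i++) total += i;

performance.mark('issue-list:end');
performance.measure('issue-list:render', 'issue-list:start', 'issue-list:end');

const [entry] = performance.getEntriesByName('issue-list:render');
console.log(`render took ${entry.duration.toFixed(2)} ms`);
```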
The way the flame chart works is this: this is a JavaScript call, and for every function that it calls you get a step in the flame chart, so a deep flame chart means the call tree was very long. When I see a deep flame chart, I go to the Call Tree view and try to find the most expensive call. The Call Tree shows you how much time each call took and what percentage of the parent call it was, and many a time I have been able to find expensive calls with it and debug my performance problems. One other thing that I find useful is the markers we have for our functions; I use them extensively to debug.

Moving on. Once DOM content loaded has fired, I pushed whatever I wanted to load from there to here by lowering its priority, since that work automatically gets deprioritized, and so I had a bunch of stuff happening and then some time to interact. Think of the page like a painting: at some point it shows enough to be useful, and that I would consider the first meaningful paint. Maybe it's different for you: maybe you don't need the tree there, maybe you don't need the sun, maybe you just need the mountains. In the next step you download and parse the JS, and then your site becomes interactive; that, for us, is the time to interactive. These are the two most important metrics to watch when you want to work towards good performance.

But there are some prerequisites; without them, at least for me, the software development lifecycle wouldn't be set up for performance. Whenever I work on performance, changes first go to staging, and we use a service called LaunchDarkly, which allows us to roll out each feature gradually and try to make it better. Even when the goal is not "I'll improve the performance", we at least have a goal saying you will not regress the performance. And we discuss performance budgets with the designer.
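LaunchDarkly handles the gradual rollout for the team; purely as an illustration of the idea, deterministic percentage bucketing can be sketched like this (the hash and both function names are hypothetical, not LaunchDarkly's API):

```javascript
// Toy sketch of a deterministic percentage rollout: hash the user id
// to a stable bucket in [0, 100) and include users whose bucket falls
// below the rollout percentage. A real flag service does much more.
function bucketOf(userId) {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return h % 100;
}

function inRollout(userId, percent) {
  return bucketOf(userId) < percent;
}
```

Because the same user always lands in the same bucket, a 1% cohort stays stable while you collect real-user metrics from it.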
This is the hardest part: the designer wants a feature, right? You have to decide whether it is really critical, whether this feature needs to be there up front for the user, and how much it will impact performance. You need to have these thoughts at least in the planning phase.

During development we have a cadence of doing a QA demo at the end of every PR, and the QA demo's criteria of done include profiling. So as part of the criteria of done, I will sit with another developer, we will open the Chrome Profiler, look for anomalies, make various interactions, record a profile, and see if there is something we can catch. That has helped us: things like the continuous re-rendering that I showed you have been caught in QA demos themselves.

Like I said, we also use synthetic monitoring: throughout development we are releasing to staging, running tests against it, and watching our time to interactive; at the very least we aim for it not to increase. As we release new features we add new measurements, and we keep updating our speed index targets accordingly.

Pre-release is a stage we can go through because we have feature flags: we release a feature to 1% of our users, which is still quite a huge number, and we do real user monitoring with them, trying to find any anomalies during that time. This is where we spend a large amount of time; we find something here and go back into development, and that's why I say real user monitoring is the winner here. Once we are sure that we are not having any regressions and things like that, we move to the release stage, and I think that's an important step.

Please welcome Ateesha.

Thanks. My part is mostly about webpack and code splitting, and I will be focusing only on the code parsing part, since the previous talk covered the other topics very beautifully.
For the browser, JavaScript is the number one candidate for performance optimization. Even if your JavaScript is just 100 kb bigger than your competitor's, you need to fix it: you need some mechanism to load it differently, or to not load that feature at all, because JavaScript is the number one thing the browser spends time parsing, and people are shipping a lot of unused code, as we will see. These should be the goals: around 200 kb of JavaScript, and 90% code coverage. They seem very unrealistic. Has anyone here looked at their code coverage? Is anyone doing 90%?

Has anyone used version 4 of webpack, which was released recently? With it you probably don't need anything extra to split the code. The concept of code splitting comes from GWT; webpack uses the same fundamentals from GWT and builds on top of them. To showcase this with the screenshot of the functionalities I am serving: webpack takes this as its starting point, and that is my router. If you are using React, it would be similar to your React Router, or maybe a JSON router. webpack starts looking into this and figures out that, let's say, my listing page depends on these children, so it only needs to parse those children when I go to the listing page, and similarly for the other pages.

Across two modules, what webpack does is try to find the chunks, or the import statements, that are common to both, and if those common chunks are smaller than a given threshold, which depends on your configuration, they are not split out separately.

You can also set budgets. Say today my bundle size is 1400 kb; I am not targeting 200 kb straight away. I will say that in two months' time I will reduce it by 200 kb, and I put a hook in my PRs, so if someone submits a PR that increases the bundle, it is their responsibility to keep it within that 1200 kb. Once I achieve 1200 kb, then I will figure out what optimizations I can do to go from 1200 to 800.
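One way to wire up that "hook in my PRs" is webpack's built-in performance hints, which fail the build (and therefore the PR) when an emitted asset crosses the budget. A sketch of the relevant config, where the numbers mirror the interim budget from the talk and the splitChunks threshold is illustrative:

```javascript
// webpack.config.js (sketch; entry path and numbers are illustrative).
// hints: 'error' makes the build fail when an asset exceeds the budget,
// so a PR that grows the bundle past the limit cannot merge quietly.
const config = {
  entry: './src/index.js',
  performance: {
    hints: 'error',
    maxAssetSize: 1200 * 1024,      // current interim budget: 1200 kb
    maxEntrypointSize: 1200 * 1024, // tighten towards 200 kb over time
  },
  optimization: {
    splitChunks: {
      chunks: 'all',
      minSize: 20 * 1024, // common chunks below this size are not split out
    },
  },
};

module.exports = config;
```

As the budget is met, you lower the numbers and repeat, which matches the 1400 to 1200 to 800 kb staircase described above.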
This is something you can do, and there are various tools that claim to support it, but to be very honest, I have looked into all the tools and none of them really gives you this code splitting; they formulate the name in some different way and say they are supporting splitting. Thanks a lot; we have 9 minutes left. Again, none of the tools is 100% there; you have to figure things out yourself.

I will give you an example. We were using, and still are using, moment.js; many people would be using moment.js, and it pulls things in on its own, which you can see when you look at the output.

In React Router I am defining the routes, and you write dynamic imports there. I used a JSON router instead: what I did was put all my routes in the Redux store itself, as a simple JSON mapping, and then there is a loader available, a Redux JSON route loader, which you can point webpack at. Initially you can do that with the router and then figure out what all the dependencies in between are. Yeah, thank you.

Do you plan to do it? I will call him.
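The JSON-routes-plus-loader idea above can be sketched with dynamic import() calls, which webpack treats as split points; each route's code then ships in its own chunk. The file paths and the helper name here are hypothetical:

```javascript
// A simple JSON-style route map whose values are dynamic import()
// loaders. webpack turns each import() into a separate chunk, so a
// route's code is only fetched when the user navigates to it.
const routes = {
  '/listing': () => import('./pages/listing.js'),
  '/detail':  () => import('./pages/detail.js'),
};

// Memoize a loader so navigating back to a route reuses the
// already-fetched chunk instead of importing it again.
function lazyOnce(loader) {
  let promise = null;
  return () => (promise ??= loader());
}
```

A router would look up the path in the map, call the (memoized) loader, and render the resolved module once the chunk arrives.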