Good afternoon, everyone. I am Ankit Rastogi, I'm a software consultant working with Xebia, and I'm located in Delhi. The talk I'm going to deliver today is Homebrewing RUM. As you can see, it sounds like it's about brewing rum, so I'd like to apologize to all of you for setting the wrong expectations: it is not going to be about how you can make your Old Monk or Havana Club or Bacardi. It is going to be about real user monitoring.

So the first question that comes up from this slide is: what is real user monitoring? Does anyone have an idea? The Myntra folks might know. Yes, exactly, but from a performance perspective. As my friend said, it's about measuring performance from the end user's perspective and analyzing it further to make key business decisions. What generally happens? We develop code, we unit test it, we do acceptance testing. All of this is done from the developer's and QA's perspective. But we never learn how performance is perceived by the end user. So this talk is about collecting metrics so that we can measure how a particular user actually experiences our application's performance.

This is a sample dashboard taken from New Relic, a RUM provider. In it we can see a lot of things, like page load time, errors, and page views, segregated by browser type. Much like Google Analytics, but for performance.

So the first question that comes to mind is: why should I implement this, what are the advantages? It is a known fact that the performance experienced by the end user drives your business. It helps improve retention rates as well as conversion rates. For the business people it's a well-known fact that there is money in the metrics; business takes decisions on the basis of metrics. So it's an important indicator: the performance experienced by end users can drive your business. Then expectations: end users have a lot of expectations from your application. They want a really good user experience, and that's what we want to deliver. Performance is also a key differentiator; it differentiates you from your competitors. After this slide there is nothing more to say from a business perspective; the rest is technical.

So how can we do real user monitoring? It's a very simple architecture, not even an architecture really, just a block diagram. It is divided into two parts, a front end part and a back end part. The job of the front end part is to collect metrics, and the job of the back end part is to visualize and analyze them. On the front end side it is all JavaScript, organized as a plug-in architecture with a core module. The function of this core module is to collect the metrics provided by the plug-ins and send them to the server for processing.

So what all can we track? We can track errors, page load performance, Ajax calls, and we can even have custom metrics. Let's see one by one how we can track these things.

First, how can we measure page load performance? This is a typical flow diagram of how a user navigates from one page to another. In step one, we make a request, either by clicking a link, submitting a form, or some other method. Then the request reaches the server, which is step two. The app server processes it and returns the response, which is step three. The response travels back and reaches the browser, which is step four. Once the browser receives the HTML, it begins to process it, which is step five. And step six is page rendering.

So what key metrics can we derive from this? If we want to know how much network lag is present, not in our application but in the network, we can determine it from the time intervals between steps two and one and between steps four and three; that gives me the network lag. How much time my server takes to process the request can be found by subtracting step two from step three. Similarly, DOM processing time, which is how long the browser takes to parse and process the HTML, can be found by subtracting step four from step five. And the total time taken can also be calculated the same way.

So how can we actually measure this? There are typically two ways. One of them is cookies. For network time: when we navigate from one page to another, there is an unload event; just before that unload event happens, I store the current time in a cookie. Then, for the first byte: by putting a small custom script that records the current time in a variable right at the top of the head of the next page, I can determine when the first byte reached the end user. By subtracting these two times, I can tell how much network time it took. Similarly, DOM processing time can be calculated from the DOMContentLoaded event and the first-byte time, and page rendering time from the DOMContentLoaded event and the fully loaded (load) event. After gathering all these metrics, we send them to a server for further processing. A rough sketch of this follows below.

The alternative method is the Navigation Timing API (window.performance.timing). It is a newer spec; modern browsers support it, and even IE supports it. With this we get a much more detailed picture of the timings: how much time the application spends on the app cache, DNS resolution, the TCP connection, the request and response, DOM processing, and the unload event. So it gives me highly precise timings.
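To make the cookie-based approach concrete, here is a minimal sketch. The cookie name rum_nav_start and the sendToCollector() reporter are hypothetical placeholders for whatever the real RUM plug-in uses; note that the first listener runs on the page being left, and the rest runs inline on the next page.

```js
// Minimal sketch of the cookie/event-based timing described above.
// 'rum_nav_start' and sendToCollector() are hypothetical placeholders.

// On the page being left: just before unload, stash the time in a cookie
// so it survives the navigation.
window.addEventListener('beforeunload', function () {
  document.cookie = 'rum_nav_start=' + Date.now() + '; path=/';
});

// On the next page, inline near the top of <head>: note when the first byte arrived.
var firstByte = Date.now();

// Note when the browser has finished parsing the HTML.
var domReady;
document.addEventListener('DOMContentLoaded', function () {
  domReady = Date.now();
});

// Once the page is fully loaded, derive the intervals and beacon them to the server.
window.addEventListener('load', function () {
  var match = document.cookie.match(/(?:^|; )rum_nav_start=(\d+)/);
  var navStart = match && parseInt(match[1], 10);

  sendToCollector({
    networkTime:   navStart ? firstByte - navStart : null, // unload -> first byte
    domProcessing: domReady - firstByte,                    // first byte -> DOMContentLoaded
    rendering:     Date.now() - domReady                    // DOMContentLoaded -> fully loaded
  });
});
```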
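And a sketch of the same kind of metrics derived from the Navigation Timing API, on browsers that support it; sendToCollector() is again just a placeholder.

```js
// Sketch of page-load metrics taken from the Navigation Timing API instead,
// where window.performance.timing is available.
window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd has been filled in.
  setTimeout(function () {
    var t = window.performance && window.performance.timing;
    if (!t) { return; } // old browser: fall back to the cookie approach

    sendToCollector({
      dns:             t.domainLookupEnd - t.domainLookupStart,
      tcp:             t.connectEnd - t.connectStart,
      timeToFirstByte: t.responseStart - t.requestStart,           // request sent -> first byte
      download:        t.responseEnd - t.responseStart,            // response body download
      domProcessing:   t.domContentLoadedEventStart - t.responseEnd,
      rendering:       t.loadEventStart - t.domContentLoadedEventEnd,
      total:           t.loadEventEnd - t.navigationStart
    });
  }, 0);
});
```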
So after conquering page load performance, let's quickly move on to Ajax call performance. Before continuing, let me show you a very well-known piece of code for making an Ajax call: three simple steps, create an XMLHttpRequest object, then call its open and send methods. So if, in some way, I can intercept the open and send methods, I can calculate the time between opening the connection and fully downloading the content. We used the monkey patching technique for this, which I will discuss shortly.

Before that, there is the readyState value. As an Ajax call proceeds, the readyState value changes from 0 to 4: unsent, opened, headers received, loading, and done. Here too we can determine the time to first byte and the download time as the end user experiences them. The time between when the send method is called and when the first byte is received, meaning the readyState value turns to 3 for the first time, tells me the time to first byte. And the download time is the time between the readyState first becoming 3 and it becoming 4. If you guys don't like maths, okay, just bear with me.

So how can we do this? There is a technique, or pattern, called monkey patching. We can monkey patch XMLHttpRequest like this: we save the original XMLHttpRequest in a variable, define our own wrapper over it, and patch it. As you can see, we have patched the open method and added two listeners on it, a readystatechange listener and a loadend listener, and then we call the original open method. Similarly, we have done the same for the send method. A sketch of this is shown below.

We have just seen that there is a Navigation Timing API for page load. Similarly, for resources there is also an API, getEntriesByType. We call window.performance.getEntriesByType('resource'), and from it we get the same kind of key metrics for each resource: app cache time, DNS lookup, TCP timing, request and response.
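Here is a rough sketch of the XMLHttpRequest monkey patching described above, timing the call from the end user's side; sendToCollector() is a hypothetical reporter, and the real plug-in may be shaped differently.

```js
// Rough sketch of monkey patching XMLHttpRequest to time Ajax calls.
(function () {
  var originalOpen = XMLHttpRequest.prototype.open;
  var originalSend = XMLHttpRequest.prototype.send;

  XMLHttpRequest.prototype.open = function (method, url) {
    var xhr = this;
    var rum = xhr._rum = { method: method, url: url };

    // The two listeners attached while patching open.
    xhr.addEventListener('readystatechange', function () {
      // readyState 3 (LOADING): the first bytes of the response body have arrived.
      if (xhr.readyState === 3 && !rum.firstByte) {
        rum.firstByte = Date.now();
      }
    });
    xhr.addEventListener('loadend', function () {
      var end = Date.now();
      sendToCollector({
        url: rum.url,
        method: rum.method,
        timeToFirstByte: (rum.firstByte || end) - rum.sendTime,
        downloadTime: end - (rum.firstByte || end)
      });
    });

    return originalOpen.apply(this, arguments);
  };

  XMLHttpRequest.prototype.send = function () {
    if (this._rum) { this._rum.sendTime = Date.now(); }
    return originalSend.apply(this, arguments);
  };
})();
```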
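And a small sketch of reading per-resource timings through getEntriesByType('resource'); note that cross-origin resources report zeros for most of these fields unless the server allows timing access.

```js
// Sketch of per-resource timings via the Resource Timing API.
function collectResourceTimings() {
  if (!window.performance || !window.performance.getEntriesByType) { return []; }

  return window.performance.getEntriesByType('resource').map(function (entry) {
    return {
      name:     entry.name,                                      // the resource URL
      dns:      entry.domainLookupEnd - entry.domainLookupStart,
      tcp:      entry.connectEnd - entry.connectStart,
      request:  entry.responseStart - entry.requestStart,        // request sent -> first byte
      response: entry.responseEnd - entry.responseStart,         // content download
      total:    entry.duration
    };
  });
}
```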
Now let's move on to how we can do error handling. Usually we attach our error handling function to window.onerror. But it lacks a few things. It gives us the error message, line number, and file name; however, it doesn't tell me the stack trace or the character (column) number, and it doesn't even work for cross-origin scripts.

So how can we overcome this? We have the try/catch block. In a try/catch block, as we all know, we can handle exceptions or errors, and it gives me everything I need for a developer to reproduce the error on the server side: key information like the stack trace and the character number, and it even works for cross-origin scripts. So I can make a wrapper function and wrap any function in a try/catch block so that I can catch the error and report it to the server. This is a simple example of the decorator pattern, and we can use it in this way; there is a small sketch of such a wrapper below.

Now, there are some challenges in error tracking. The first challenge is that different browsers provide different stack trace information. The second is that the steps to reproduce the error are still missing: even if I get the stack trace, I am unable to determine what steps my end user took, so I can't replicate the issue. The third is that I need to manually wrap each function in a try/catch block; consider the case where I have a 50K or 100K LOC codebase, I can't manually wrap every function in a try/catch. Let's see one by one how we can overcome these.

Regarding stack trace normalization, there are third-party libraries such as stacktrace.js and TraceKit. They are well-tested libraries, and what they do is normalize the stack: one browser has one way of exposing the stack and another browser has a different way, and these libraries normalize it and provide a single point where I can get all the stack information.

For the steps to reproduce: whenever a user interacts with anything in my application, I log it in an array, simple as that. Say I keep the last 10 events in that array; whenever I encounter an error, I attach those 10 events to the error report so that the issue can be replicated. For example, I just attach a change listener to an input tag, so if anything is typed, it gets added to that array, and if there is an error later, it can be traced back. A sketch of this also follows below.

As for the need to manually wrap each function, we can use instrumentation, either at script-serving time or at build time, which I will talk about next.
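A minimal sketch of the try/catch wrapper (the decorator) mentioned above; reportError(), the element, and the application code are hypothetical, and recentEvents is the breadcrumb array sketched in the next snippet.

```js
// Minimal sketch of the try/catch wrapper (decorator pattern).
function wrap(fn) {
  return function () {
    try {
      return fn.apply(this, arguments);
    } catch (err) {
      reportError({
        message: err.message,
        stack: err.stack,            // normalize with stacktrace.js / TraceKit in practice
        breadcrumbs: recentEvents    // the last few user interactions
      });
      throw err; // don't swallow the error
    }
  };
}

// Usage: wrap any callback that might throw.
document.getElementById('buy-button') // hypothetical element
  .addEventListener('click', wrap(function () {
    checkout();                       // hypothetical application code
  }));
```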
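And a sketch of the breadcrumb array itself, keeping the last 10 user interactions so they can be attached to an error report; which events you listen to is up to the application.

```js
// Sketch of the breadcrumb array: keep the last 10 user interactions so a
// developer can retrace the user's steps when an error is reported.
var recentEvents = [];

function logBreadcrumb(event) {
  recentEvents.push({
    type:   event.type,
    target: event.target && (event.target.id || event.target.tagName),
    value:  event.target && event.target.value, // e.g. what was typed into an input
    time:   Date.now()
  });
  if (recentEvents.length > 10) { recentEvents.shift(); } // keep only the last 10
}

// The change listener from the input-tag example, plus clicks.
document.addEventListener('change', logBreadcrumb, true);
document.addEventListener('click', logBreadcrumb, true);
```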
So now I'm going to talk about instrumenting JavaScript; I will cover it quickly. This is the basic flow of how JavaScript actually executes: I have source code, I have a parser, and the parser generates a syntax tree according to the grammar of JavaScript. Then there is a VM and runtime environment, and the VM produces the output. Now, what is an abstract syntax tree? I hope some of you know about compiler design; there is this concept of a syntax tree. What we do is represent our source code in the form of a tree so that we can easily manipulate it.

In our data structures courses we have all learned how to traverse trees and so on. So take a simple statement, var a = 1. What are the key components of this statement? We have a keyword, we have an identifier, we have a number, and we have an equals sign. So can I make a tree out of it? This tree is a language-independent representation, and it can also be used to transform code, from C to JavaScript or JavaScript to C, and this is how uglification, minification, and other transformations happen; they use these parsing and AST concepts. So I have made a syntax tree: I have a variable declaration node representing the assignment, then an identifier node for a, and then a literal constant 1.

So how can I dynamically wrap all the functions in my code automatically using this instrumentation? I take a parser; the JavaScript code goes through it and it produces an AST, which is nothing but the tree representation of the source code. Then I have an instrumenter. Say I want to wrap my JavaScript inside a wrapper: I generate the AST for that wrapper, traverse the tree to get to the right point, and attach my intended AST to the tree. That gives me the transformed AST, and from the transformed AST I generate the transformed code. I know it sounds somewhat geekish; I have a sample demo for the same, and a rough sketch of it appears at the end.

(Moderator: Ankit, I think we are already about five minutes over. Could we just move on?) Just let me conclude. (All right, please do.)

Okay, so there are a number of libraries available to do this. We have esprima, estraverse, and escodegen, then there is the falafel library, and burrito, all Node.js based, and all the minifiers, code coverage, and instrumentation tools are built on these.

Then the last stage is analysis and visualization. Generally we can use any tool for this, but I prefer Elasticsearch, Logstash, and Kibana. Kibana is the visualization UI, and Elasticsearch does the searching; you can think of it as a search engine.

So what are the other use cases of real user monitoring? One is when you want to refactor a large-scale codebase: you want to know the hot areas you should step into first. How can you identify the hot areas? Through instrumentation you can inject code that identifies them. And again, when something goes wrong in production, how can you automatically roll back? You can say: I have set a threshold of 10, and if I encounter 10 errors, I will automatically roll back to the previous version in production. So automatic rollback as well, and you are essentially using the power of crowdsourcing for testing.
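Coming back to the instrumentation demo mentioned earlier, which there wasn't time to show: here is a rough sketch of what it might look like using the falafel library named above, wrapping every function body in a try/catch at build time. reportError() is the same hypothetical reporter as before, and the actual demo may have looked different.

```js
// Build-time instrumentation sketch using falafel: parse the source to an AST,
// rewrite every function body so it is wrapped in try/catch, and regenerate code.
var falafel = require('falafel');

function instrument(source) {
  return falafel(source, function (node) {
    var isFunction = node.type === 'FunctionDeclaration' ||
                     node.type === 'FunctionExpression';
    if (isFunction && node.body.type === 'BlockStatement') {
      // Replace the original body with the same body wrapped in try/catch.
      node.body.update(
        '{ try ' + node.body.source() +
        ' catch (err) { reportError(err); throw err; } }'
      );
    }
  }).toString();
}

// Usage in a build script (file names are hypothetical):
// var fs = require('fs');
// fs.writeFileSync('app.instrumented.js',
//                  instrument(fs.readFileSync('app.js', 'utf8')));
```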