Hello everyone. Over the past two days we have had a lot of different talks related to testing and automation, mobile automation and desktop. My talk is almost the last one of this two-day event, and it is quite similar to automation but a little different from all of those talks. I am not talking about how to test a particular environment; I am mostly talking about how to monitor an application continuously. You most likely have server-side monitoring in place in your organization in some form. Can you do a similar kind of thing on the client side, from the end user's point of view, as well? That is the idea I will build this presentation around.

My name is Bhupesh. I am a lead software engineer at Adobe, Noida, and this is my first time at a Selenium conference. So let's begin with the first question: can anyone tell me what production monitoring, or even just monitoring, means to you, for any environment? When you say a site is down, how do you measure that? What are your parameters? When I say production monitoring, it is not just about the front end or the back end; it is monitoring the whole application together. That means you are monitoring your application continuously, 24/7. If at any time anything goes wrong from the UI point of view, or from the back-end point of view your memory consumption goes high or a processor issue comes up, everything should be recorded and an alert should trigger at the same time. That is the production monitoring requirement.

Going further, I have two bullet points on why we need production monitoring. In a startup or a new company we say: we have 100% coverage, we have automation, we are doing unit tests, everything is there. But in reality it is not true in that way.
In reality, you have to release features day by day, and the automation is not that stable. You have to make sure that after you ship your code, it stays up and running as well. As a tester, developer, or any software professional, your responsibility is not over the moment you deploy the code; your real responsibility also starts after that. So early detection of any production issue is one key factor for production monitoring.

The other point is identifying non-deployment-time surprises. Maybe your application is quite stable, all your automation is intact, your monitoring is perfect. But suppose you are working in an aggregator environment: you are the front page of a shopping site which interacts with n different microservices. You don't have your own database; you depend on those microservices. What happens if one of them goes down and their monitoring is not intact on their side? Ultimately, your service is hampered. In these situations it should be your responsibility as well to take measures. It can happen at a non-deployment time: maybe someone else is deploying their build while your build is perfectly stable. To mitigate these situations too, you need to identify issues at non-deployment times as well.

Then, what should be the main criteria of application monitoring for you? One is server-side monitoring. Can you tell me what the other one could be? Server-side monitoring is something I feel most companies are already doing, using tools like New Relic, AppDynamics, and similar. But are you doing anything other than server monitoring? Yes: client-side.
So, are you using any particular tool for that client-side monitoring? Fiddler can't really monitor your application: Fiddler can track the data transferring over your network, but it can't continuously poll your application, and it can't give you proper continuous reporting. If not, then my talk is for exactly that area. I am presenting a tool I created which continuously polls your application from the front-end point of view, pushes that data to a server, and gives you a dashboard where you can always visualize the data and take action.

Let me take an example. One day I opened my Facebook page and saw a pop-up: "Sorry, something went wrong. Please try again." I closed the pop-up and reopened the page. Has anyone else faced this particular pop-up on Facebook? If yes, do you know the reason behind it? As end users, we don't even need to know the reason. Facebook should be up and running, always. I am not their tester or developer; it is Facebook's responsibility to keep it up.

Then, when I went to the network tab, I saw the error: a 400, and you can see one of their network-tab API calls failing. Because of that, I was getting this failure. What I am trying to tell you is that the front-end UI may show some issue, but if you go to the network tab you get much more information about these errors and exceptional conditions. So we have to test our front-end application from the point of view of the API transfers, the JavaScript binding, and the CSS binding as well.

Now, the objectives of client-side monitoring. The main objectives are, first, identifying current deployment issues: you are deploying something and you find some issues.
Those issues should be detected at an early stage. Take the example of a zone-based company, say one operating only in India: its actual end users use the product during the daytime, so you most probably deploy your code in the evening. If you have a mechanism to identify these UI issues early, at the time you deploy your code, you still have the whole night to fix them. At least the next day, when your end users come, they will not be affected. So client-side monitoring helps you identify current deployment issues.

Second, measuring the page load time. For client-side monitoring it is very critical to measure how much time a particular page takes for its DOM to load and display. Measuring it only while you happen to be testing is not enough; you have to poll your application continuously, because the load on your application changes over time. When you test, production may not be under much load and you mark everything green. For example, your Google page opens in one second, but under peak load it takes three, four, five seconds. At that point you have to decide: is this a failed condition for your application or not? Is it breaching the threshold for that particular web page to open?

The third point this tool focuses on is the uptime of your application on the front end. As an example, say I run this tool against a particular web page five times.
If out of those five runs the page does not render correctly two times, with some network error or similar, you can calculate the percentage failure, which is a kind of uptime for your web page from the end user's point of view.

This is the high-level structure and architecture of the tool. You can see two dotted boxes at the bottom: the left one is the new structure and the right one is the old one. I will start with the old one, which uses BrowserMob Proxy with Selenium. If you have automation in place, you just add BrowserMob Proxy to your code setup, and it traverses your network-tab data with your existing Selenium scripts, across different browsers. That data is captured in the form of a HAR file, and the HAR file goes to the SyncUp client library, just below that dotted line in the upper box. The SyncUp library is a library I created inside this tool which you can club with your Selenium code. It pushes this data using the SyncUp REST APIs: there are REST APIs exposed to post the data into the database, and from there the data goes into MongoDB.

The second part is the SyncUp user interface. It is a dashboard where you can visualize the real-time data representing the load time, the failure conditions, and the performance in your particular browser. In the new representation, the bottom-left section, I am not using BrowserMob Proxy; I eliminated it from the structure. I am just using the performance log, which I get from Selenium directly.
Then the same SyncUp library parses this performance log, and it goes through the same cycle, to the SyncUp REST API and the database, and the real-time dashboard shows the data.

So, let's have a quick demo. It's a plain Selenium script. From here to here is simple Selenium code: in the desired capabilities I am enabling the performance log, and then I create the WebDriver instance. After that comes the customized function I created in the SyncUp library. This SyncUpUtil has two main functions: one is startTracking and the other is endTracking. Whatever you put in between, whether you are navigating to a URL or clicking a button, if you want HAR or performance data for that particular button click or URL navigation, you just put startTracking above the action and endTracking at the end. Nothing else, no waits, nothing. Everything else, such as how long that particular web page took to load and what the error conditions were, is recorded by endTracking.

In endTracking you will see three parameters. The first is the app ID, the second is the module, and the third is the coordinator. App ID: your organization has a number of applications onboarded, and you want performance data for each of them. The first task is to give a name to onboard whichever application you want to traverse. Here, Stack Overflow is the application whose performance I want to measure, so that is what I wrote in the app ID.
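The two-call tracking flow just described might be sketched roughly like this. The method names startTracking and endTracking and the three parameters come from the demo, but the internals here are my assumptions; the real SyncUp library (not yet published) also collects the performance log and POSTs the result to its REST API.

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the two-method tracking API described in the talk.
// Method names follow the demo's SyncUpUtil; the body is an assumption.
public class SyncUpTracker {
    private long startNanos;

    // Put this just before the action to measure (driver.get(url), a click, ...).
    public void startTracking() {
        startNanos = System.nanoTime();
    }

    // Put this just after the action. Tags the measurement with the onboarded
    // application id, the module (page/tab), and the coordinator (workflow name).
    public Map<String, Object> endTracking(String appId, String module, String coordinator) {
        long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
        Map<String, Object> record = new HashMap<>();
        record.put("appId", appId);             // e.g. "StackOverflow"
        record.put("module", module);           // e.g. "users", "questions", "jobs"
        record.put("coordinator", coordinator); // shows up in the UI drop-down
        record.put("totalTimeMs", elapsedMs);
        return record;
    }
}
```

In the demo, the Selenium navigation sits between the two calls, and no explicit waits are needed because the end call does the measuring.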
All the traversing happening inside it will be clubbed under that app ID, and in the UI you will see Stack Overflow as an application, with all its data inside. Next is the module. If I go a little further, you can see three lines: three times I am navigating in the URL section, and each time the application is the same, but the module changes. In the first URL navigation you want the data for the users tab; in the next one, data for the questions section; and in the last one, the network data for the jobs section. That is why these are all different modules. The third parameter is the coordinator. The coordinator is a name placeholder by which you can identify the workflow, that is, which workflow you are creating this particular execution for. The coordinator goes into a UI drop-down from which you can see all the data being captured.

For the demo, I am just running this code: users, then questions. It navigates the different panels, and here in the log, in the request and response section, I am printing whatever data is captured through the network tab, just to show what information we capture. You can see events like Network.requestWillBeSent; this is the first call when you traverse to a web page. All of this works on the DevTools protocol, so this JSON follows the Chrome DevTools (ChromeDriver) JSON format.

And here is the actual dashboard where you visualize this data. The application was Stack Overflow, and you can see the user, "test user 2 @ relgmail.com".
In this particular execution I used "test user 2 @ relgmail.com" as the coordinator; that is why you are seeing this data here. This panel is the live panel. The script I ran had only 2 iterations, but ultimately your goal should be to run it in an infinite loop, continuously, on a CI machine or some other machine. You can see here "load time: every 120 seconds": every 120 seconds this page reloads, pulls the data from the database, and this UI gets updated. If I change the interval, it starts polling every 1 second; I will change it back so it does not hamper our flow.

Here you can see all 3 URLs. I showed you that we navigated to these 3 URLs, and right now all 3 are in the passed condition; that is why you see this image in this section. I clicked here, and the toggle changed to failed URLs: there is no failed URL, so it shows 0 URLs. Here the data captured through the performance log is shown. This is the URL; I hit it for 2 iterations, and on average it took about 1.6 seconds. The last execution happened at this time, and there is a more-details link. This is the complete data of your network tab, where you can see the response codes and the files you would also see in the network tab: a lot of JS files, and XHR calls as well. All the data transferring from your client to the server, and the data coming back from the server to the client, goes over XHR; it has exposed methods for transferring data from your client to the server, and the tool follows the same thing.
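Each entry the library parses is one DevTools-protocol JSON message of the kind shown earlier, such as Network.requestWillBeSent or Network.responseReceived. A minimal sketch of pulling fields out of one entry follows; it uses regexes purely for illustration (a real implementation would use a JSON library such as Gson or Jackson), and the sample message shape follows the standard Chrome performance-log format.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of parsing one Chrome performance-log entry, e.g.
// {"message":{"method":"Network.responseReceived","params":{"response":{"status":204}}}}
public class PerfLogEntry {
    private static final Pattern METHOD = Pattern.compile("\"method\"\\s*:\\s*\"([^\"]+)\"");
    private static final Pattern STATUS = Pattern.compile("\"status\"\\s*:\\s*(\\d+)");

    // DevTools event name, e.g. Network.requestWillBeSent
    public static String eventMethod(String entryJson) {
        Matcher m = METHOD.matcher(entryJson);
        return m.find() ? m.group(1) : null;
    }

    // HTTP status for a Network.responseReceived entry, or -1 if absent
    public static int status(String entryJson) {
        Matcher m = STATUS.matcher(entryJson);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }
}
```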
When you are monitoring your client application, the application can be blocked by a JavaScript binding failure, a CSS load failure, or an XHR failure: you are posting data but some server is down and the post does not go through properly, or you are not getting proper data because of a server-side issue, or your JavaScript binding is broken because of a new deployment. All of these things you can visualize here. Right now everything is green, which is why you are not seeing any failure; otherwise, if any of the statuses had failed, you would see it marked as a failure.

This one shows the last two iterations: the first took almost two seconds and the second almost one second. And this one is the topmost network-tab API status for them: it was 204 in this call and in the earlier one as well. If it were in the 400 or 500 range, you would see that right here. In this section, if you click on any of these blue buttons, it draws a graph. Right now it is iterating over only two data points, so the graph does not plot very smoothly; if you run it continuously, 24/7, you can see the real pattern of the graph. But you can still see it: at this particular time it took 2.02 seconds, with that load time, and the next iteration happened at 1.8 seconds. At the bottom you see the status codes: the first one's topmost status code was in the 3xx range, and the next one's was 2xx. If you toggle between them, you can see the data changing as well.

Now, I have also recorded a failure condition, and I will show you what a failure looks like. One day I had to run this tool against the GM site as well.
GM.com had some broken links. If something is broken, you will not see the image that was showing earlier; you will directly see the failure conditions, right here, in red. The average time and everything else is the same, but when you look at the network tab you can see there is a 404 for this particular SVG file, and in the next one a particular XHR binding was failing. You can also see there are 3 URLs in total, but only two are visible here; when you toggle, you can see all the failures as well. If you have any number of different URLs but only a few are failing, by default you see only those; if you want to see all the others too, you can go further in that manner.

Now, to calculate the uptime of the application. Uptime means, as I said: you are navigating to a particular URL, and because of some binding failure or XHR failure your network tab gives you error conditions; the uptime is calculated on that basis. By default, the error status-code range here is 399 to 599. You can change it anytime; if your organization needs it in the range 499 to 599, you can set the status-code range accordingly. Then the number of days: you may need a summarized report for a complete week, so you can change the date range. Say you need a three-day report starting from the 29th, and for each date you get 24-hour data. I ran it on the 29th, which is why you see counts in the 29th date range but none for the 30th. Out of these six runs, one was a failure; that is where the 16 percent figure comes from.
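The uptime rule just described can be sketched as a small pure function: each iteration's top status code counts as a failure when it falls inside the configurable error range (399 to 599 by default, adjustable to, say, 499 to 599). The class and method names here are hypothetical; the talk does not show the actual implementation.

```java
import java.util.List;

// Sketch of the uptime calculation described above: an iteration fails when
// its top status code falls inside the configurable error range.
public class UptimeCalculator {
    public static double uptimePercent(List<Integer> topStatusCodes, int errLow, int errHigh) {
        if (topStatusCodes.isEmpty()) return 100.0;
        long failures = topStatusCodes.stream()
                .filter(c -> c >= errLow && c <= errHigh)
                .count();
        return 100.0 * (topStatusCodes.size() - failures) / topStatusCodes.size();
    }
}
```

With six iterations and one 404, the pass rate is five of six, matching the roughly 16 percent failure figure from the demo; narrowing the range to 499 to 599 makes the same 404 stop counting as a failure.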
And here in the home panel both links passed, but the failures were on that one: on the recent link, one failed, one passed, and the others were all failures. In that way you can visualize the uptime of your application as well. Now you have a failure and you want to see what those failures actually were. You can go directly to the historical panel from here; it searches that particular date range only, and you can see that these particular URLs failed, all in the 404 range, and that this particular binding failed.

So these three components, the live data, the historical data, and uptime, are the complete package in the current solution. It is still not fully finished; there are a lot of things I am adding to it, but it is at a production-ready stage, because we are already using this tool while the enhancements are in progress. That is also why I need your feedback.

Just to summarize the technology part: for automation I am using Selenium WebDriver, and for the older binding I am using BMP, BrowserMob Proxy. For the Chrome browser and the new approach I am using the performance log, which is why plain Selenium code is enough. For the dashboard I am using Java Spring and AngularJS, with MongoDB for database management. For deployment I am using Tomcat for hosting and Docker to containerize it. Right now I have created only Java bindings on the client side, but I have also exposed REST APIs, so whatever language you are in, you can push your data into this tool as well. If you look at that call, the endpoint is api/network-data.
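As a sketch of what the body of such a POST might look like: the field names below (module, url, totalTime, code, user, moreDetails) are inferred from the cURL call described in the talk, and since the API is not yet published they should be treated as assumptions, not the real contract.

```java
// Hypothetical sketch of the POST body for the network-data endpoint.
// Field names are taken from the talk's description of the cURL call and
// are assumptions; the real SyncUp API is not published yet.
public class SyncUpPayload {
    public static String build(String module, String url, long totalTimeMs,
                               int code, String coordinator, String moreDetails) {
        return "{"
                + "\"module\":\"" + module + "\","
                + "\"url\":\"" + url + "\","
                + "\"totalTime\":" + totalTimeMs + ","
                + "\"code\":" + code + ","
                + "\"user\":\"" + coordinator + "\","      // the coordinator running the flow
                + "\"moreDetails\":\"" + moreDetails + "\"" // raw network-tab data goes here
                + "}";
    }
}
```

Because the endpoint is plain REST, any language can assemble an equivalent body and push data, which is exactly the point the talk makes about non-Java clients.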
Here, in the module field you pass whatever module you have. The -d section is the body, where you put the data, your HAR. Whatever URL you are navigating to goes in the URL field; the time it took goes in the totalTime field; then the code status and message. The user is actually the coordinator who is running that flow, so you place the coordinator in that field. And in the bottom section there is a moreDetails field: that is the place where you put your network-tab data.

So, the capabilities this tool has. First, onboarding multiple applications: as I said, you can see in the dashboard there is Stack Overflow and three or four other applications, and if you want to onboard more applications, you can do that as well. One dashboard can summarize each and every application your organization has. Second, 24/7 polling of your performance data from the client point of view, pushed to the server for visualization, along with the real-time client-side status of your browser; as I said, the dashboard gives you real-time data. Then, if you are searching for a historical event which failed, one way is to navigate through the uptime section, but you can also go directly to the historical tab and get the historical data there. Next is the uptime calculator. Right now it is quite straightforward, just the percentage of pass and fail, though I am adding a more complex mechanism to it; uptime calculates the pass and failure conditions from your network-tab data. And finally, graphical representation of the URLs' performance.
You can visualize the URL load times and related metrics directly on the screen. Also, to onboard your application on this tool you don't have to make any code change in your actual source code; it runs from outside. So without any code change, you can run it using your existing Selenium code.

More things I am adding: right now, alerts can only be created through the client-side library. I am moving them into the dashboard, so that, as with tools like New Relic or CloudWatch, you have a dashboard where you can onboard alert conditions; I am replicating the same thing here. Another is browser-side analytics testing. Because I am able to capture network-tab data, I also have all the analytics data transferring from your browser to the server, so the tool can match it against the baseline data you are supposed to send as analytics. I am enhancing the tool for this as well. And the next one is 24/7 support for desktop applications. I work at Adobe, and we have a lot of desktop tools where we have to do longevity tests for three or four days at a stretch. At any time there may be a performance issue: your system memory spikes, or the processor spikes. So I am enhancing the tool to capture these metrics for desktop applications as well.

That is all about the tool. Thank you. Any questions? Any feedback?

Hi. It was a wonderful walkthrough, actually. I need to understand: say I am using a C# .NET application, a standalone application on Windows.
And we do have internal URL calls as part of AngularJS pages and so on, running on top of the .NET application. Can we capture those kinds of URLs with this?

My question back is: are those URLs visible in your network tab? Yes. Then you can capture them. It will be captured in this UI and you can proceed. This uses the developer-tools protocol which Chrome already has, and it allows you to get all these statistics out, so obviously you can capture that data from there.

One more thing: on the browser side, if you are running this in a grid kind of setup, how will the report section be handled? So, you are asking about running on a grid. I coded it in Java only right now; I can show you, and it is not grid code yet. But whatever driver I instantiate, I made it a ThreadLocal, which is a thread-safe kind of thing, so from a threading point of view you can do it. Otherwise, each and every session through the grid is an individual call, so when you traverse your application, each individual call goes to this particular dashboard. It handles that isolation because the isolation actually happens in your client-side code; the dashboard is not doing any isolation. If you are able to run grid executions in parallel successfully on your side, then you can push the data here as well.

What he meant is metrics aggregation. Yes, good; these are exactly the kinds of feedback I need. But right now, what I can tell you we can do: because this user field is the same thing we are sending as the coordinator...
...if, instead of just the user, I send something like "Chrome/" plus that value, I can differentiate different data sets in my execution cycle, and they will be visualized differently here. Suppose the coordinator is not just "test user data" but "Chrome, test user data": you can go directly to that URL and it gives you the ChromeDriver data only, and if the run was done with Firefox, it gives you the Firefox data only. You can isolate it in that manner. And I am also taking what you are suggesting as an enhancement.

More questions? Hi. How is it different from WebPageTest?

WebPageTest. Does it continuously poll your application? Can you host it on your own premises? I haven't used it. If you have a staging environment, a pre-prod, can you do the same kind of testing with tools like that? Is it possible? If my test or staging environment is accessible outside the network, then yes, we can do it. If not... I will take the example of Adobe only: our staging environment is as critical as our production environment. It is not exposed to everyone outside, but it is that critical for us, so in these kinds of scenarios we have to keep it up and running always. Can these kinds of tools run in those environments?

I guess WebPageTest also has an option to host it on a particular server, on your side. Okay, maybe. I just wanted to know the difference between the two; it looks to me like a replica with some reporting differences.

So, the main purpose is to reuse what exists: you don't have to migrate your existing Selenium code. The same Selenium code you can use with this framework, and the dashboard is a separate part.
And the tool you are talking about is specifically for the web. What I am targeting is to add other features to the tool: for example, as I said, the analytics checks, and I am adding desktop performance. That is the actual goal. Maybe this is possible with the tool you mention, but the main purpose of this tool for me is to add those other things as well.

It also has options to verify your application across different continents, different geographies. If you want to test how your production environment behaves in, say, the US or Germany, you can specify that. Correct, different locations. This code is a client-side automation script, so what you can do is have different EC2 instances dedicated to different zones. Ultimately it is not a paid tool like New Relic, where you pay money and they host a particular solution for you; it is a solution you can host on-premises. But yes, what you were describing is possible.

Hey, it was a good demo. How are you pulling the network data from the browser to the dashboard? Is it communicating directly with MongoDB? How are you writing to the database?

Let me come back to the start again. If you see this slide: these browsers are running using Selenium, and the performance log is getting all the data from these browsers through Selenium. It has a lot of raw data as well, so I parse this performance data using the SyncUp library, which is the client-side library. I have shown you two methods; those two methods are inside this library only, and there are no other methods.
Those two methods are sufficient for this mechanism. The upper section is the dashboard section. The client puts the data into MongoDB with the help of that cURL call I showed you, and then there is the separate dashboard part, which continuously pulls the data for you to visualize.

Hello. How are you calculating the response time in a single-page application? Yes. So, there are two parts: one is BrowserMob Proxy and the other is the performance log. With BrowserMob Proxy it is quite easy, because it creates a HAR which already has that data, the performance load times; you just have to sum them and print the result. In the case of the performance log, it has events, and the time format the DevTools protocol uses is not a plain timestamp; it uses a particular system time format. The first method I showed you, startTracking (I will go to the code again, this one), does nothing but call System.nanoTime and store it. The actual work happens inside endTracking: it gets all the data from the network tab, where the page-load events appear, so I take the page-load event time from the network-tab data and check how much time passed from the start until that page-load event.

But since it's a single-page application, we mostly have Ajax calls, right? How are you identifying that all the Ajax calls have finished? Well, I made this project for a single-page application in the first place, an Angular-based application.
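As an aside on the timing mechanism just described: DevTools network events carry monotonic timestamps expressed in seconds rather than epoch milliseconds, so one way to get the page load time is the gap between the first request event and the Page.loadEventFired event. This is a sketch of that arithmetic only, under the assumption that both timestamps come from the same monotonic clock; it is not the tool's actual code.

```java
// Sketch: DevTools timestamps are monotonic seconds, so the page load time
// is the gap between the first Network.requestWillBeSent and the
// Page.loadEventFired event, converted to milliseconds.
public class LoadTime {
    public static long loadTimeMs(double firstRequestTs, double loadEventFiredTs) {
        return Math.round((loadEventFiredTs - firstRequestTs) * 1000.0);
    }
}
```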
It has been running for a while: I created it almost two years ago. In between, when I moved to Adobe, I spent a year and a half on back-end services, and then restarted the project a few months ago. So for the past two years this tool has been running against a single-page application, and there is no issue with single-page applications.

Okay. Will it introduce any kind of slowness, since we have a plugin that captures performance data? And the performance data you capture, it can be quite large, right? You mean slowness because of these tool bindings? Yes. When I used BrowserMob Proxy, slowness could be introduced, because the data passes through a proxy. But in the new implementation I am just using Selenium's own mechanism, so the only slowness is whatever Selenium itself adds to my code. Okay, thanks. Thank you.

I think we'll wrap up, and Bhupesh will be here helping us. If anyone has any question, or any feedback as well, I'm here. You can give me your feedback and questions; I'm ready.

Bhupesh, one last question: will this tool be open source, so we can start using it? Before coming here I was planning to publish it as a Git repo, but I got some feedback for enhancements, so I delayed it by at least a few months to incorporate that feedback. And here as well I will get a lot of feedback, which I will incorporate.

All right. Thank you. Can we give him a big round of applause for creating such an awesome tool, which also means that you will open-source it and we will use it in some months. Thank you. Before the video is released, I plan to release the tool. Thank you.