Welcome, everyone, to Continuous Monitoring Combined with Continuous Testing, by Yusuf and Bargav. Good afternoon, everyone. I am Mohamed Yusuf from Conspect Digital, working as a lead SDET. My co-presenter Bargav is sitting next to me; he is also an SDET. We are going to present our experience with continuous monitoring combined with continuous testing: what we have found in the projects and applications we work on. Here is a quick walkthrough of what I will cover; later Bargav will take the stage. First we will look at our applications and releases in the context of DevOps: how many applications we have and how frequently we release them. Then, of those applications, how many involve live streaming and therefore require continuous monitoring, how we release them after these monitoring activities, and how frequently the monitoring needs to run. We will also look at the conditions that trigger visual anomalies, the differences between continuous testing and continuous monitoring and the technicalities of each, and the solution we have built internally. It is just a beta version, not a full-blown product, but it serves the purpose for us, and we will see the details. Finally, we will see a demo app. Let me quickly get into the topics. Applications and releases: in total, Bargav and I are talking about 24 applications in this forum, with weekly releases across two verticals. One of them is oil and gas, but the primary focus over the last few months has been maritime digitalization, so I have set the other aside. Eight of the 24 applications deal with live-stream data, and we need six hours of continuous monitoring of various widgets that stream time-series sensor data.
This is the context in which the continuous-monitoring problem first arose, and these are the types of anomalies that explain why we need it. Certain issues only show up when we stream sensor data continuously: if you stream for just 10 minutes, you may see few issues or none at all, but over long runs these are the common ones. First, plotting hangs: the plot may run fine for three or four hours, then suddenly show no data for 15 minutes or an hour, even though the back end has the data. What is happening in the front end, and in the interface between the back end and the front end? Second, white patches: intermittently, say every 10 seconds, there is no data, and then plotting resumes. When this pattern forms, how do we get the background details? Without monitoring, we have no idea. Third, plotting flicker: the plot runs fine for hours, then suddenly starts flickering; in our experience this is usually a front-end rather than a back-end cause. Fourth, single-track anomalies: a single track is one line, usually one sensor's data, and it can show all of these effects, white patches, flicker, and hangs. Fifth, multi-track anomalies: multiple sensors are configured and their data streamed continuously, and the same effects appear across tracks, which is more complex than the single-track case. And there is the no-plotting case I already mentioned. So these visual anomalies usually have front-end root causes, but there are cases where the back end triggers them too; both sides can be responsible.
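To make the anomaly types above concrete, here is a minimal pure-Python sketch of how two of them could be detected from a stream of periodic screenshots. Frames are simplified to 2D grids of pixel values (0 = background, 1 = plotted data); the function names and thresholds are illustrative, not the actual tool's API.

```python
# Illustrative anomaly detectors over simplified frame snapshots.
# A frame is a 2D list; 0 means background/white, 1 means plotted data.

def is_white_patch(frame, region):
    """True if the given plotting region contains no plotted pixels at all."""
    top, left, bottom, right = region
    return all(frame[r][c] == 0
               for r in range(top, bottom)
               for c in range(left, right))

def detect_plotting_hang(frames, max_identical=3):
    """Flag a hang when `max_identical` or more consecutive snapshots are
    pixel-identical even though data should still be streaming in."""
    identical = 0
    for prev, cur in zip(frames, frames[1:]):
        identical = identical + 1 if prev == cur else 0
        if identical >= max_identical:
            return True
    return False

# A live plot should change between snapshots; a frozen one should not.
moving = [[[0, i % 2]] for i in range(6)]   # alternating frames
frozen = [[[0, 1]]] * 6                     # identical frames
print(detect_plotting_hang(moving))  # False
print(detect_plotting_hang(frozen))  # True
```

In the real tool the frames would come from the screen-grabbing module and the comparison would tolerate pixel noise, but the control flow is the same idea.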
For now, we detect only the front-end side, by monitoring for these visual anomalies. How do we come to see these errors? Typically by switching screens. For example, I am on page A, I start the streaming, and it runs fine. I switch to a different screen, which should be allowed, but when I come back I see no plotting, or one of the anomalies from the previous slide. So we need to verify there is no anomaly when we switch. Next, within the canvas where the plotting occurs, we zoom the tracks or lines in and out, and the anomalies start happening. Then there is double-clicking or right-clicking within the plotting region. Next, doing anything else on the same page, not navigating away, just some other activity on the page, has also caused anomalies. And finally, starting the plotting and then changing the selection of vessel or sensors. We deal with maritime digitalization, so vessels and sensors are predominant here; when we change from one vessel to another and come back, on the same page, these anomalies occur. These are the typical triggers that production users hit when they run the application and try to monitor things continuously, or when they report issues with functional behavior: switching, zooming, double-clicking, and so on. We have captured these as the root causes. Now, a quick overview of testing versus monitoring. We often use the words interchangeably, but in our case at least we found them really different. In testing, a single test might take 10, 15, maybe 30 minutes.
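The trigger actions above (switch and return, zoom, double-click, change vessel) can be organized as a small monitoring scenario that interleaves each action with an anomaly check. This is a hypothetical sketch of that loop, with stubbed actions and checks; none of these names come from the actual tool.

```python
# Hypothetical driver: perform each trigger action, then check for an
# anomaly, recording (cycle, action, anomaly) for every finding.

def run_monitoring_scenario(actions, check_anomaly, cycles=2):
    """`actions` maps an action name to a zero-argument callable;
    `check_anomaly` returns an anomaly description or None."""
    findings = []
    for cycle in range(cycles):
        for name, action in actions.items():
            action()                      # e.g. switch page, zoom, double-click
            anomaly = check_anomaly()
            if anomaly:
                findings.append((cycle, name, anomaly))
    return findings

# Stubbed usage: only zooming leaves a flag behind, which the check
# then reports as a white patch.
state = {"zoomed": False}
actions = {
    "switch_page_and_back": lambda: None,
    "zoom_in_out": lambda: state.update(zoomed=True),
    "double_click_plot": lambda: None,
}
check = lambda: "white patch" if state.pop("zoomed", False) else None
print(run_monitoring_scenario(actions, check, cycles=1))
# [(0, 'zoom_in_out', 'white patch')]
```

The point of the structure is that each finding carries the action that preceded it, which matches how the talk describes gathering root-cause context.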
But if a single test runs for six or eight hours, you would not call it a test; it is better described as monitoring. In testing we validate certain conditions, and we expect the application to be in a known starting state and, for the sake of validation, a known ending state. In monitoring, it can begin in any state and finish in any state, and during that period we should be able to detect any of the anomalies I listed before. In testing we see predictable behavior: a single-shot plotting of data. In one shot I have 10,000 data points; I say plot, and all 10,000 points are plotted. This is functional in nature, not continuous. Once plotted, nothing changes; the page is static and I can perform the validation. It is a true-or-false validation, either matching or not matching, using the simple image-comparison techniques we have had for decades. Monitoring of live plotting, on the other hand, involves more dynamics, and we face a wide range of unpredictable behavior; the previous slides gave some good examples. It is monitoring, not testing, because, as I said, we do not begin in a known state, whereas in testing, especially automation testing, we do; here that is not possible. And it is continuous in nature, which testing is not. Continuous monitoring, not continuous testing, is the best way to detect these anomalies. Even if I run the same test 10, 20, or 100 times, I may not find the anomaly, but if I perform continuous monitoring just twice, I may be able to encounter certain issues. This is important: the longer the monitoring, the higher the probability of detecting anomalies.
If I run for one hour I may not find anything, but if I run for six hours I will almost certainly find at least one; the numbers may not be high, but there are definitely some glitches. So here we cannot rely on a binary pass/fail evaluation. We have to decide based on the percentage of anomalies we encountered. Say that in six hours of monitoring I encounter an anomaly only once or twice: I might still pass the test, and it might be acceptable to promote the build to the next stage, be it staging or production. With testing, by contrast, we always look into the details: the test failed, so now do the root-cause analysis of why it failed. Here we do not need to do that; it is decided case by case, via mechanisms implemented using some fuzzy logic. So we need a high-speed, high-precision monitoring system in place. That is why we built an internal solution. I would not claim it is really high-speed; it is just the first version, but it has been really good for us so far. The solution is Image Vision, and it has three modules. Image Grab captures the screen. Image Matcher does the image comparison. The third, ActionEyes, has two aspects. One is visual automation: performing the tasks that conventional code-based automation tools like Playwright, WebdriverIO, or Selenium do, the typical automation activities emulating user behavior on the screen. But it is visual automation: we pass it images, the target image or the element image. If I want to type "agile india" into the Google search page, what is my locator? With visual automation, the image of the search box itself is my locator. There is no XPath, no CSS selector, as we use with Selenium or other code-based tools. Bargav will demonstrate that.
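The percentage-based verdict described earlier, pass the build if the anomaly rate over the whole monitoring window stays below a threshold rather than failing on any single glitch, can be sketched in a few lines. The threshold and check cadence here are illustrative assumptions, not the team's production settings.

```python
# Sketch of a rate-based monitoring verdict instead of binary pass/fail.

def monitoring_verdict(total_checks, anomalies, max_anomaly_rate=0.02):
    """Return (verdict, rate): PASS while the anomaly rate over the
    monitoring window stays at or below `max_anomaly_rate`."""
    rate = anomalies / total_checks if total_checks else 0.0
    verdict = "PASS" if rate <= max_anomaly_rate else "FAIL"
    return verdict, rate

# Six hours at one check every 10 seconds is about 2160 checks.
print(monitoring_verdict(2160, 2))    # two anomalies: still promotable
print(monitoring_verdict(2160, 200))  # sustained anomalies: fail
```

A real implementation could weight anomalies by type or duration (a one-hour hang matters more than a one-frame flicker), which is presumably where the fuzzy logic mentioned in the talk comes in.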
The other aspect is the one I have explained so far: the live-stream plotting monitor, which this tool also has. Those are the two aspects within the ActionEyes module. What is the structure, the very high-level architecture? These are the components. There is a front end, in TypeScript, with which we write the tests. The back end is in Python; the machine-learning algorithms, computer vision, and so on live there. The front end could be written in any language, C# or TypeScript, whatever; just as software development has an SDK, here we have a test development kit. Then there is the interfacing part of the back end: configurations, JSON providers, dependency injection, and so on. And there are a few APIs that get the job done for us, whatever it is: image grabbing, comparison, live plotting. This is the very high-level architecture block diagram. You may write tests in any language, but so far we support only Protractor; we are migrating to Playwright as well and adding the package for that. It is language-neutral: we could build the front end in Java or C# too, and the back end would still remain in Python. These are the different modules: the API stack and the engine. The engine does the job for hours on end; if I monitor for six or eight hours, it takes care of everything. I do not need to write any code except calling the API with the configuration I need; that is all I do in the front end. This next diagram is the block diagram for image comparison; the previous one was Image Grab. As you see here, we use multiple algorithms: pHash (perceptual hashing), SSIM (the structural similarity index), and a combination of two different machine-learning algorithms, used together for image comparison. That is one module. The last one is ActionEyes.
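pHash and SSIM are established image-comparison algorithms; to illustrate the general idea without those libraries, here is a tiny pure-Python average hash (a simpler cousin of pHash). Two images whose hashes differ in only a few bits count as the same despite small pixel mismatches, which is exactly what strict pixel-by-pixel diffing gets wrong.

```python
# Minimal average-hash illustration: robust to small pixel noise.

def average_hash(gray):
    """Bit list: 1 where a pixel is above the image's mean brightness."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

base  = [[10, 200], [12, 198]]   # baseline 2x2 grayscale patch
noisy = [[11, 201], [12, 197]]   # same content, slight pixel noise
other = [[200, 10], [198, 12]]   # genuinely different content
print(hamming(average_hash(base), average_hash(noisy)))  # 0 -> match
print(hamming(average_hash(base), average_hash(other)))  # 4 -> mismatch
```

A production pipeline would resize images to a fixed grid first and use the DCT-based pHash or SSIM for better discrimination, but the pass/fail decision by distance threshold works the same way.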
ActionEyes has the two aspects, visual automation and the live-stream monitor; that is what these blocks depict, along with the engine and driver. Everything else remains the same across the modules; only the engine part differs. I want to quickly talk about the image-comparison module before we get into the demo. What really made us build this tool in the first place was image comparison, not live plotting. We were dealing with 400 to 500 images, and it was not feasible to build a solution that traditionally relies on tools doing a pixel-by-pixel comparison. So we had to use computational algorithms instead. It helped us eliminate failures caused purely by pixel-level mismatches; about 80% of those false failures went away. These slides explain that; let me go through quickly. This one I already mentioned, and this one too. All right, now we get into the demo. These are the anomalies: whatever I mark and define as an anomaly is in these images. Now we run the test. The tests you see here are written in TypeScript with Jasmine. This is a sample application we built, because our real application is too heavy to fit the demo; it is an extraction from our application, and you will see the plotting happen here as well. The red line is what I do not want, but it is present, so my test should fail, and the report is generated like this. Wherever red lines are present, those checks failed, because I defined the red line as my anomaly. Whatever I define as an anomaly, I take a snapshot of it and keep it in my baseline; when I run the solution, whatever the baseline defines under the negative set is treated as an anomaly. It is a negative-condition check: I do not want them to be present, but they are, so it fails.
There is an anomaly present, so you see the net result is false. In this case only the red line is present; on whichever screens it appeared, those failed. For demo purposes it took some 10 seconds, and those images are there. Now the green line: here I expect a particular line to be present, and if it is not, I should fail. That is a positive-condition check. These are my baselines; they should be present once my application starts plotting. Now green is present, and I no longer care about red, because I am checking only for the presence of green. My test should now pass. If green were absent it would fail the positive condition, but I am not checking the negative condition here: if green is present, I pass. Across the whole transition I expect it to be present at least once; if it is not present even once, I fail the test. So this is the report. Under the passed conditions you have everything, because all the image snapshots clearly contain the green lines; that is why they are under passed, and the failed folder is empty. That is the quick demo of what we can do. This is the result that is generated; let me show it again. You see true, true, true. If it had failed, it would show which image failed at what point in time: if I run for six hours, only that particular point in time would fail, not everything, as we saw with the red case. So it gives the context of when the failure occurred, and I can gather the system information and make a call about those anomalies. That is the demo from me. Bargav, can you quickly show the visual automation? Sure. I hope everybody now understands the problem we are solving, so here is a quick overview of the applications we deal with.
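The two baseline checks demonstrated above reduce to a simple pair of rules: a baseline marked negative (the red line) must match in no snapshot, while a baseline marked positive (the green line) must match in at least one. This sketch uses stand-in match predicates instead of real template matching; the function names are illustrative.

```python
# Negative vs. positive baseline checks over a run's snapshots.

def negative_check(snapshots, anomaly_match):
    """Fail (False) if the anomaly baseline matched in ANY snapshot."""
    return not any(anomaly_match(s) for s in snapshots)

def positive_check(snapshots, expected_match):
    """Pass (True) if the expected baseline matched at least once."""
    return any(expected_match(s) for s in snapshots)

# Snapshots modeled as strings naming what is visible in each frame.
snaps = ["green", "green+red", "green"]
print(negative_check(snaps, lambda s: "red" in s))    # False: red appeared once
print(positive_check(snaps, lambda s: "green" in s))  # True: green appeared
```

Because each snapshot is evaluated independently, a failure report can point at exactly the frames (and therefore the timestamps) where the negative baseline matched, which is the "only that point in time fails" behavior described above.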
You might have seen stock exchanges: most of those applications are full of graphs, and we deal with live streaming of data. Say, for example, a line is added here: data comes from the vessels, we receive the live stream, and while streaming I need to analyze whether it is plotting the right data. That is the problem we are trying to solve with test automation, and it is what Yusuf just explained. But to achieve this, we need an automation solution capable of running our tests. Protractor can only bring you so far: it cannot do these validations, read all this data, run the algorithms, or show you which checks pass and which fail. So we tightly integrated our solution with Protractor, and right now we are doing the same with Playwright, so we can do all of this alongside Protractor. I will show you a small video using Google, since we all know it, to show how we can automate any website with the same Image Vision tool that is tightly connected to Protractor. It is not just for graphs; we can use it for regular test automation as well. It can do all kinds of actions, typing, right-click, left-click, scroll up, scroll down, everything, but there is no locator: everything is based on the images themselves. It does the mouse move, the right-click, then types, then presses Enter. For scrolling, it picks an image and scrolls to it, then picks another image and scrolls up to that. That is how it is designed. So we use Protractor, but we do not use XPaths; we use images.
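The "image itself is the locator" idea can be shown with a toy pure-Python version: find where a small template grid occurs inside a screen grid and return the point an automated click would target. Real tools use fuzzy template matching on pixel data; this exact-match version, with invented names, only illustrates the locator-free approach.

```python
# Toy image-based locating: exact subgrid search in a 2D "screen".

def locate(screen, template):
    """Return (row, col) of the template's top-left match, or None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return r, c
    return None

screen = [
    [0, 0, 0, 0],
    [0, 7, 8, 0],
    [0, 9, 6, 0],
]
button = [[7, 8], [9, 6]]        # the element's image is the locator
print(locate(screen, button))    # (1, 1)
print(locate(screen, [[5, 5]]))  # None: element not on screen
```

Once the coordinates are found, clicking or typing is delegated to the driver, which is why the approach needs no XPath or CSS selector at all.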
Image-based automation for the complex problems we have accumulated over the years. The main benefit is that it reduces maintenance: locators keep changing frequently, and that costs a lot, but images, the look and feel, the visual aspects, do not change that often. Even when they do, it is just a matter of grabbing a small snapshot and adding it to the image repo. That is how it brings value. Moreover, for live streaming we also need visual interaction, performing a click or double-click within an image, which the traditional, conventional tools cannot do, so we really need this kind of tool. Those are the benefits: image comparison comes in handy, image interaction comes in handy, and live-plotting monitoring comes in handy. That is how we could save some effort. We definitely need to add a lot more features, of course, but so far it has been good for us. That is what we wanted to tell you. We are open for questions; if you have any, please do ask. Thank you from Yusuf and Bargav.