Okay, so I'm Arun Ghatuk, and I'm currently working as a software engineer at FusionCharts. At FusionCharts we build charts, gauges and maps, which we license to around 26,000 customers and roughly 700,000 developers worldwide. Many of you here might know FusionCharts as a Flash-based charting company. About six years ago we completely transitioned to JavaScript-based charting, and since then we have been building charts in JavaScript and HTML5 only.

Last year we got a customer requirement to deal with a large amount of data. Dealing with large amounts of data is a pretty general use case these days, because we all have to deal with it at one point or another. So the question is: can we visualize this amount of data in our browsers? Can we explore it, or at least make some sense of it, right in the browser? The answer is yes, but the options are pretty restricted, because the rendering capabilities of browsers are quite limited in the context of this much data.

How do we tackle these limitations? There are two approaches: one, we can handle the data; two, we can handle the browsers. First I will speak about handling the data. As we know, browsers are not comfortable handling a large volume of data. So can we scale the entire data set down? If we can, we reduce the number of elements — specifically, the number of visual elements — when we plot them. If the requirements around the data permit it, we can go for data crunching. I'll give a brief overview of some common data-crunching techniques here rather than discussing them in detail.

First, data aggregation. Many of us are aware of what data aggregation is: we still treat the entire data set at one go, and the information itself is not changed; what changes is the level of depth at which we view it. For example, instead of the daily sales of an e-commerce website, we look at the monthly sales, or go a step further still. So in aggregation we change the depth of detail at which we view the data — that's one way to reduce it. Second is filtering: we apply different kinds of filters and conditions, which reduce the large data set to smaller subsets. Third is sampling: picking a small data set from a large population such that it is representative of the entire population. Data analysts and statisticians love sampling and have been using it for a long time. All of these optimizations work only if the requirements around your data permit them; a quick sketch of aggregation and sampling follows.
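To make these crunching techniques concrete, here is a minimal JavaScript sketch — illustrative only; the data shape and function names are my assumptions, not FusionCharts code — of aggregating daily sales into monthly totals and of simple random sampling:

```js
// Illustrative data shape: [{ date: '2016-03-14', sales: 1200 }, ...]

// Aggregation: roll daily records up to monthly totals, reducing the
// number of visual elements without changing the information itself.
function aggregateMonthly(dailySales) {
  const totals = {};
  for (const { date, sales } of dailySales) {
    const month = date.slice(0, 7);                    // '2016-03'
    totals[month] = (totals[month] || 0) + sales;
  }
  return Object.entries(totals).map(([month, sales]) => ({ month, sales }));
}

// Sampling: pick a small random subset (with replacement, for brevity)
// to stand in for the whole population.
function sample(data, n) {
  const picked = [];
  for (let i = 0; i < n; i++) {
    picked.push(data[Math.floor(Math.random() * data.length)]);
  }
  return picked;
}
```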
But what happens when the requirements around the data don't permit crunching? Let us analyze that part. We had large data, and when we talk about large data we see a couple of patterns; I'll mention the two most common ones. First, we see lots of clusters: if you plot large data, you see clusters, you see gaps, and you also see distributions — some data tightly distributed, some loosely distributed. Second, we see a lot of patterns; when the patterns repeat, we see trends; and there are exceptions — either exceptions to the patterns, or exceptional points, which we call outliers in the data.

So that was the character of large data. But we also lose some of the meaning if we crunch the data. What is the benefit of treating the high-volume data as it is? The chances are pretty high that you can answer much more specific questions: you can ask your data a lot of questions, with a high probability of getting very specific answers, and you have the freedom to go atomic — from any level you can drill down to any minor detail. That is possible only when we keep the entire large data set. We will revisit all of these in the live visualizations coming later.

Now let us look at the other side of the coin — the other perspective. That was how we handle the data; let us see how we can handle the browsers. What if the browser could render the entire high volume of data — and to what extent can it actually handle it? We will see a live demo of this. Here we have data on the weights and prices of a large set of diamonds, and we want to see how the price of a diamond varies with its weight. Let's check out the live version. Is it visible? You see it is quite faded. There are approximately a million data points here, drawn with low opacity, so we can't judge much yet — just some dense regions of the plot. This is what a million data points look like rendered in a browser.

Before I dive into the technical details of how we can render this kind of high-volume data in our browsers, let us see whether the problems we were speaking about earlier actually get solved. We will explore this visualization using one of the common data-exploration guidelines: Shneiderman's mantra. It says: first give an overview of the whole visualization, then let the user zoom and filter, and then provide details on demand. Let's see how this visualization holds up. First, you see there are a lot of patterns here — some spikes, and an overall shape. And if you look, this part of the screen is much more densely clustered than that part. All we need to do is zoom into a portion of it, and we get a clear view of the clusters that looked like a smudge before.
On a detailed look you see a distinct white line — a gap — over there. That has a special meaning: it is the gap pattern we were speaking about, a discrete gap inside the visualization. It can have different interpretations, but this is how zooming helped us analyze the clusters and gaps we were referring to. What about distributions? Some parts are very tightly distributed, and if you go over there, some are very loosely distributed — we can see different types of distributions here. We also get a lot of trends: this is one of them, and at some points you can see the trend actually breaks — the width here is smaller than there. So there are also exceptions to the patterns. Apart from that, we have outliers, points so far away that you only spot them once you zoom in and in. When you see an outlier, the natural inclination is: what is that point? Why has it been plotted so far away? What is so special about it? That is where details on demand — attaching a tooltip or similar interactions — help us.

Let us see how filtering, which Shneiderman's mantra also calls for, helps us. Take that gap we were speaking about — a discrete kind of gap. It turns out this is not uniform behavior: if you filter down to the individual data sets (these are actually several different data sets), you see that for some of them the gap is not there at all. So filtering gives us inferences and hints that were not obvious in the initial view. To summarize Shneiderman's mantra: we got a glimpse of how visualizing the entire large data set yields more insights and more inferences.

Now, do you think that rendering something as large as a million data points was straightforward? Definitely not. SVG was never an option for this, because if every plot — every circle — gets its own DOM element, that will simply not happen: it would be far too heavy for the browser. So we switched to canvas rendering and tried to render as much as a million data points.

What was the experience with canvas? Canvas was pretty fast when we tried rendering around 1,000, 2,000, 5,000, 10,000, even 30,000 or 40,000 points. Something really peculiar happened when we tried around 50,000: we didn't get any script error, we didn't get any error at all — we got a blank visual. That was pretty strange: what was happening? To understand it, we need to understand how canvas works. We give the canvas a sequence of instructions — move to this point, draw a circle; move to that point, draw a circle — and we keep feeding it this sequence. Throughout this process the canvas keeps remembering, memorizing — in technical terms, saving it in its context.
What happens when we feed this sequence of instructions for around 50,000 points is that the canvas loses its context and fails to render anything, and we end up with a blank screen. That is how over-tasking the canvas cost us the context. So we saw that we could not render much more than 50,000 points in one go. Then what? How did we stretch the limit?

Here is how we thought about it — it's nothing but batch processing. You take the entire data set, divide it into a number of smaller chunks, and then visualize it chunk by chunk. Say I have to render 10 circles. Instead of giving all 10 commands at once, I give them in sets of two, and after every set I ask the browser to render. So what is happening is progressive rendering, and it gives us two advantages. First, the browser no longer shows a blank visual, as it did before. Second, the first visual appears in the least possible time. That matters, because with a large data set the rendering takes time no matter what, and you cannot keep your end user waiting until the entire rendering is complete. So what do you do? You give him something to see, and while he is looking at that, the next few iterations render the rest. Having said that, it might not be entirely scientific to show a smaller chunk of data in place of the whole; but since the data is large, even that smaller chunk has patterns and properties resembling those of the original.

So was that all there was to rendering? Chunking was the only optimization the rendering part needed — but was rendering everything? I guess not. We had been speaking about clusters, and when I simply rendered the data it gave such a faded view that I couldn't see anything. We needed something to help us analyze it better: we needed zooming. How can we technically build in this zooming feature? This was the cluster we were speaking about, and we were not getting enough detail out of it; we zoomed into this part and got something very interesting out of the clusters and gaps. Technically, zooming was pretty easy, because all it means is multiplying every coordinate of your plots by some scale factor so that the plot grows bigger. If you have a 100 × 100 pixel area and you zoom into a 10 × 10 pixel window of it, all you need to do is multiply the coordinates by 10, and the region grows 10 times bigger. That's all zooming is. A sketch of the chunked rendering, with the zoom transform applied per point, follows.
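As a rough illustration of the batching idea, here is a minimal sketch of progressive canvas rendering with the zoom transform applied per point; the chunk size, point radius and names are assumptions for illustration, not the actual FusionCharts implementation:

```js
// A minimal sketch of progressive (chunked) canvas rendering with the
// zoom transform applied per point. Chunk size, point radius and all
// names here are illustrative, not the actual FusionCharts code.
function renderProgressively(ctx, points, scale, offsetX, offsetY) {
  const CHUNK = 5000;                       // points drawn per frame
  let index = 0;

  function drawChunk() {
    const end = Math.min(index + CHUNK, points.length);
    ctx.beginPath();
    for (; index < end; index++) {
      // Zooming: multiply every coordinate by the scale factor,
      // then shift by the pan offset.
      const x = points[index].x * scale - offsetX;
      const y = points[index].y * scale - offsetY;
      ctx.moveTo(x + 1, y);                 // avoid joining arcs
      ctx.arc(x, y, 1, 0, 2 * Math.PI);
    }
    ctx.fill();
    // Yield so the browser paints the partial plot, then continue.
    if (index < points.length) requestAnimationFrame(drawChunk);
  }
  drawChunk();
}
```

Because the loop yields after every chunk, the first few thousand points are painted almost immediately while the rest keep filling in behind the user's exploration.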
What about panning — and why do we need to pan at all? Say you zoomed into a small region and found something interesting. Naturally, you now want to see whether that kind of pattern, that kind of gap, occurs somewhere adjacent. So the natural inclination after zooming is to pan across. How can we pan, or scroll, the data? This represents the currently visible window, and after zooming, this is the effective canvas area — the zoomed canvas area, so to speak.

There were two possible approaches to panning. One: translate a pre-rendered canvas. You zoom 10 times, I redraw the entire canvas at a 10-times-scaled version, and then on every mouse move during panning all I have to do is translate the whole canvas. That is the smoothest operation we can have for the panning interaction, but it has a severe drawback. With a large data set you may want to drill down to a single plot, which is like zooming 100 or 1,000 times; making the effective canvas area 1,000 times the size of the device screen is simply impossible, and the browser fails to do it. So increasing the total effective area was not going to work if we planned to pan this way.

What was the next option? Render the plot on demand: you zoom into a portion, and only that portion of the canvas gets rendered. That was pretty good for zooming. But when we tried it for panning, the coordinates changed on every mouse move, so we had to re-render the plots on every mouse move. That was really expensive and made for a pretty bad UI. Since that was not working, we moved to the solution: the nine-grid algorithm.

The nine-grid algorithm is pretty interesting. We never draw the full canvas. This is the visible area: we first draw this grid, and then we draw the eight adjacent grids, one by one, in the background. The user never sees all nine grids at a time; he concentrates on the center grid, and while he is busy looking at it, the eight adjacent grids have already been pre-rendered. When he starts panning, say to the right, the center of the visible window shifts to the next grid, and only the three new grids on the incoming edge need to be rendered. This worked pretty well. Why nine? At most, the user can pan from the leftmost point of a grid to its rightmost point — that is, he cannot shift by more than one grid at a time — so one ring of pre-rendered grids around the visible one is always enough. This incremental rendering solved the mouse-lag problem even when zoomed to the nth level. That's pretty cool. A sketch of the bookkeeping follows.
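One plausible way to sketch the nine-grid bookkeeping, assuming each grid is rendered to an offscreen canvas by a hypothetical renderTile callback:

```js
// A minimal sketch of the nine-grid idea: keep the visible tile plus
// its eight neighbours pre-rendered, and when the user pans by one
// tile, render only the tiles that are not already cached.
const tiles = new Map();                    // "col,row" -> offscreen canvas

function tileKey(col, row) { return col + ',' + row; }

function ensureTile(col, row, renderTile) {
  const key = tileKey(col, row);
  if (!tiles.has(key)) tiles.set(key, renderTile(col, row));
  return tiles.get(key);
}

// Call whenever the visible tile (centerCol, centerRow) changes.
function updateGrid(centerCol, centerRow, renderTile) {
  for (let dc = -1; dc <= 1; dc++) {
    for (let dr = -1; dr <= 1; dr++) {
      // Cached tiles are reused; after a one-tile horizontal or
      // vertical pan, only the three tiles on the incoming edge
      // actually get rendered.
      ensureTile(centerCol + dc, centerRow + dr, renderTile);
    }
  }
}
```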
Now let's recollect Shneiderman's mantra once again. First we gave the overview of the visualization; then we zoomed and filtered and showed there are interesting patterns here. Next, once we zoom and filter, things get interesting: these plots were so small in the initial view that we didn't even think of asking for details, but once we zoomed in and found an outlier, we got curious — what is there? We want to fetch some details about it. So how can we do that?

Attaching a tooltip is not a big deal in itself — but how do we do it on canvas? As I said, we had been using canvas. So how do I get a tooltip, or any interaction, over a canvas? In canvas we don't keep track of the points we draw. In SVG we know where each plot is drawn, so we can attach browser events to it and display the tooltip. So we used a cool approach here: simulating browser events algorithmically. It lets you add interactions even on a canvas element. You render a transparent SVG circle, and on every mouse move you check algorithmically whether any data point lies under the hovered position. If we can do this tracking on every mouse move, accurately enough and fast enough, then the transparent circle — initially parked in a corner of the screen — is immediately moved onto that data point. It now sits under your cursor; since it is transparent, the user is not aware of it, and you can attach any click event, any mouseover event, any kind of interaction right on it. It's like hacking interaction events onto a flat canvas element. This was a cool approach — the wiring looks roughly like the sketch below.
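A minimal sketch of the event-simulation hack, assuming a transparent SVG circle overlaid on the canvas; the element IDs, nearestPoint() (the k-d tree lookup described next) and showTooltip() are assumed names, and the pointer-events plumbing that keeps the canvas receiving mouse moves is omitted:

```js
const canvas = document.querySelector('#plot-canvas');
const hitCircle = document.querySelector('#hit-circle'); // transparent SVG <circle>

canvas.addEventListener('mousemove', (e) => {
  const rect = canvas.getBoundingClientRect();
  const p = nearestPoint(e.clientX - rect.left, e.clientY - rect.top);
  if (p) {
    // Teleport the invisible circle onto the underlying data point;
    // the user never sees it, but browser events now target it.
    hitCircle.setAttribute('cx', p.x);
    hitCircle.setAttribute('cy', p.y);
    hitCircle.setAttribute('data-label', p.label);
  }
});

// Any ordinary DOM handler now works on top of the flat canvas.
hitCircle.addEventListener('mouseover', (e) => {
  showTooltip(e.target.getAttribute('data-label'));
});
```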
That was fine, but how do we track so fast and so accurately? On every mouse move we need to search among a million data points for the one under the cursor. For that we chose to implement a k-d tree — an algorithm for finding the nearest neighbor. A k-d tree is a space-partitioning data structure in which we divide the entire space along k dimensions. Here we speak only about two dimensions, because that is our point of interest: we care about x and y, so those are our dimensions. I will mainly show how to build a k-d tree and how to search it effectively, using a set of sample points to illustrate both.

The steps of the algorithm: pick a dimension — x or y, any one; find the median point along it; split the data into two halves at that point; then recurse on each half with the alternate dimension. Let's see what that means. First we take the dimension y — y increases upward and x increases to the right, the normal coordinate-axis convention. Finding the median of the y coordinates of this set of points, point one turns out to be the median: it becomes the root of the tree, and the entire space is divided into two partitions. For the next level, the convention is: whichever coordinate is less goes on the left leg, whichever is greater goes on the right leg. So this part of the space goes on the right, and this lower part goes on the left. Similarly, on the next level we find the median along the x direction, since previously we chose y. Iterating this recursively, we are simply partitioning the entire data with alternating splits — each time choosing the alternate dimension, finding the median, and dividing in two. And this is the corresponding k-d tree formed from the sample data.

Now that the k-d tree is built, how do we find the nearest neighbor? Say this, in green, is my query point; I have to find the nearest point to it. In the first comparison I check whether the query point's y coordinate is less than or greater than that of point one. It is less, so with just one comparison we discard that entire half of the tree. On the next iteration we check whether the query point's x coordinate is less than or greater than that of point two; it is greater, so we discard the left part of that subtree. Next we find it is closer to nine than to — what is that point, three? five? — yes, closer to nine. So with just four comparisons we have found the nearest point. That's the beauty of the k-d tree.

The number four has a special meaning here. I showed this tree with 10 elements, but a tree of depth four can accommodate around 16 elements; so even with 16 elements, four comparisons find the nearest point. Do the same calculation for a million points, and just about 20 comparisons find the nearest neighbor. A compact sketch follows.
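Here is a compact, generic 2-D k-d tree sketch built the way described — sort along the alternating axis to find the median — plus a standard nearest-neighbor search; this is an illustration, not the FusionCharts source:

```js
// Build by sorting along the alternating axis to find the median;
// search by descending and discarding half the candidates per comparison.
function buildKdTree(points, depth = 0) {
  if (points.length === 0) return null;
  const axis = depth % 2 === 0 ? 'y' : 'x';   // the talk splits on y first
  const sorted = [...points].sort((a, b) => a[axis] - b[axis]);
  const mid = Math.floor(sorted.length / 2);
  return {
    point: sorted[mid],
    axis,
    left: buildKdTree(sorted.slice(0, mid), depth + 1),
    right: buildKdTree(sorted.slice(mid + 1), depth + 1),
  };
}

function nearest(node, query, best = { point: null, dist: Infinity }) {
  if (!node) return best;
  const d = (node.point.x - query.x) ** 2 + (node.point.y - query.y) ** 2;
  if (d < best.dist) best = { point: node.point, dist: d };
  const diff = query[node.axis] - node.point[node.axis];
  const [near, far] = diff < 0 ? [node.left, node.right]
                               : [node.right, node.left];
  best = nearest(near, query, best);
  // Only cross the splitting line if a closer point could lie beyond it.
  if (diff * diff < best.dist) best = nearest(far, query, best);
  return best;
}
```

Usage would be something like `nearest(buildKdTree(points), { x: 120, y: 80 }).point`. The descent discards half the remaining candidates per comparison, which is where the roughly 20 comparisons for a million points come from; the guarded backtracking adds a few more only when the query sits close to a splitting line.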
This was working pretty fine and made things pretty good. But the build time for the k-d tree was pretty high. Why? Because at every level, as we saw while building the tree, we have to find the median, and we found the median by sorting: first sort, then pick the middle point. Sorting was the heaviest operation here, and doing it for every depth of the tree took time. So we decided to make a little tweak to the k-d tree, which we internally named the modified k-d tree. Let us see how it works.

In this structure, where previously every node had two legs, every node now has seven elements: four subtrees and three pivot points. The smaller nodes represent subtrees and the larger middle ones the pivots. How to divide them? The idea of the division stays the same, but instead of one median you sort the array and pick the three pivot points that divide the entire space into four partitions. So on the first level of building the tree we divide the entire space into four partitions, and on the next level, along the alternate axis, we divide each of those into four again. Because we didn't have that many elements here, you can see only two elements at the next level of the tree; and although every node has seven slots, I haven't drawn all the legs, because that would be far too clumsy. That was the building part.

If we recall the k-d tree structure for these same elements, how is the modified tree different? The first observation: it has only two levels, whereas the k-d tree had four. The depth of the tree has decreased — and since sorting was the main time-consuming step, one sort per level over fewer levels gave us an advantage in tree-building time. So the modified tree can be built much faster than the k-d tree.

What about searching? Same example: this is the query point, and I want its nearest neighbor. We take the pivot points and check whether the query coordinate is less than or greater than each pivot. In this case, with a single comparison — the point is less than the first pivot — we discard the entire rest of the node: eight elements gone with one comparison, leaving two elements, which the next step resolves. So with just two comparisons we found the neighbor we were looking for. But there is a catch. In the k-d tree every node had only two legs; here we simply got lucky that one comparison placed the point in the first subtree. If the point of interest lies in the rightmost subtree, we have to compare against all the pivots of the node — that is the worst case. While searching the k-d tree we made one comparison per level; here we may need several per level in the worst case. In a nutshell, the modified k-d tree builds in much less time than the k-d tree, but searches take longer in the worst case. So we had to take a trade-off: it might seem that building keeps getting cheaper as you increase the breadth of the tree, but we needed to balance building time against searching time. A rough sketch of the partition step follows.
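The talk did not show the modified k-d tree in code, so this partition step is a loose reconstruction under stated assumptions: one sort per level, three pivots, four children, giving a tree about half as deep (log₄ n levels instead of log₂ n) and therefore about half as many sorting passes:

```js
// A rough, assumed reconstruction of the "modified k-d tree" partition
// step: sort once per level, pick three pivots, recurse into four
// children along the alternate axis. Details are assumptions.
function buildWideNode(points, depth = 0) {
  const axis = depth % 2 === 0 ? 'y' : 'x';
  if (points.length <= 3) return { axis, pivots: points, children: [] };
  const sorted = [...points].sort((a, b) => a[axis] - b[axis]);
  const q = Math.floor(sorted.length / 4);
  const pivots = [sorted[q], sorted[2 * q], sorted[3 * q]];
  const children = [
    sorted.slice(0, q),                 // below pivot 0
    sorted.slice(q + 1, 2 * q),         // between pivots 0 and 1
    sorted.slice(2 * q + 1, 3 * q),     // between pivots 1 and 2
    sorted.slice(3 * q + 1),            // above pivot 2
  ].map((part) => buildWideNode(part, depth + 1));
  return { axis, pivots, children };
}
```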
These are some of the references I used in this presentation. You can learn about the visualization techniques in the book Now You See It by Stephen Few. The data used is available at this link. I have provided a fiddle version of the prototype we were explaining, and on my profile we have uploaded some minimal code covering the major roadblocks we were speaking about — the major challenges and how we got past them. All you need to do is go there and try it out. It doesn't have the exact finish of the FusionCharts zoom scatter chart, but it will definitely help you understand the major roadblocks and how to optimize around them. So that's it from me. I'll be looking for feedback and questions. Any questions?

Audience: For this whole process — the rendering, and then the tree-based nearest-neighbor identification — can you tell us what volume of data you tested, and how much time it took to do all this processing and render in a browser?

Speaker: Okay. For a million data points, the first visual comes in around 1.5 seconds, and in the background it keeps rendering the rest. So we can engage the user with the first visual at around 1 to 1.5 seconds.

Audience: Okay. Now, you are saying the canvas approach is more performant than SVG, right? But when we zoom on a canvas, doesn't the clarity degrade compared to SVG? Are you limiting the zooming, or how does that work?

Speaker: I didn't get you — if you're zooming in?

Audience: If you use canvas and apply zoom on it, the object loses clarity as you zoom; it's not that sharp.

Speaker: No, no — it's not a pixelated view; it's not an image we scale. We are actually re-rendering the plot on the same canvas element when zooming. Let me illustrate: say you zoom into this part. In zoom mode we re-render the entire screen. It may look dim because of the opacity, but there is no pixelation — we re-render each time, which keeps the image intact.

Audience: And at what point do you apply the tooltips? When the page loads there are millions of data points, so on mouseover we may not know which point we are over. Does the algorithm you explained apply from the first level?

Speaker: Yes, it applies from the first level. The flow is like this: first we render the data; then, in the background, while the user is looking at the overview, we build the tree. For those few milliseconds while the tree is being built, if the user interacts he might not get a tooltip; but within the next few milliseconds, once the tree is complete, the tooltip algorithm just works for every interaction.

Audience: (inaudible question about the maximum data volume)

Speaker: Well, it can be anything. Using the standard FusionCharts model, I have tested it and it worked pretty well for 3 million data points. There are some other features we have to ship, so we cannot optimize everything inside the FusionCharts model; but if you sacrifice some of those features, this logic alone is capable of rendering around — I have tested it with — 8 million data points.
Speaker: With 8 million data points — I couldn't even generate more than 9 million dynamically — it still renders pretty well. And you can drill down to a single point out of those 8 million and get a tooltip. That's the beauty of this whole approach: it is so good to actually visualize large data. That's it.

Audience: Hello. My question was whether all the charts are drawn in canvas, or only the few charts where these kinds of performance issues exist.

Speaker: All right, an interesting thing happens here. With every legend click I'm just toggling visibility. We thought about having one canvas per data set so this becomes easy: on every legend click you just toggle the visibility of that canvas. Apart from that, for the grid we were discussing you can have nine canvases for the nine grids — that's perfectly fine — or you can do it all in a single one as well.

Audience: Okay. My question is: say there is a pie chart, a bar chart, a line chart. Does the same rule of using canvas apply for those charts, or is it based on the number of data points?

Speaker: It has nothing to do with specific charts; this is generic logic. You can apply it to any type of chart or any type of rendering.

Audience: So the benchmark you mentioned, around 40,000 data points, applies to all types of charts, right?

Speaker: Yes. Canvas is more like painting an image: the more you paint on a canvas, the more time it takes. It has nothing to do with a specific chart type or a specific type of visualization.

Audience: For this visualization, do you use D3 or any other library?

Speaker: No, we have been using this with the FusionCharts zoom scatter model — we built this chart as a zoom scatter, an XY chart capable of handling even a million data points. D3 is an open-source library, a great library, so you can certainly try the same things in D3. But D3 is more fundamental — more of a drawing library — whereas here you get the finished product directly. As we have a licensed product we cannot discuss our entire source code, but with FusionCharts you get the finished product, so you can concentrate on other things and get it done easily. Still, everything I said is pretty generic; it is not tied to one tool, and you can definitely try it with D3. It will just lack the final polish, and you will have to do a lot of these things yourself — that's what we do on our part.

Audience: Hello. My question was about how you test these charts. You actually sell these charts, so you have to make sure that when you change the code, the charts still work perfectly. Do you do testing of the charts?

Speaker: Sorry, I couldn't follow.

Audience: Say you release the next version of your product with some improvements to the charts. How do you test them?

Speaker: We have our automation systems running. You mean feature testing? Yes — all the features. We have automated testing that runs on our servers.
Speaker: Every day, whatever code we commit gets tested, and the next day the QA team gets a report: these features have broken, or this has improved. Even for the performance side — the time taken to render, since this is a highly performance-intensive chart — we keep track of whether we have degraded performance in the course of adding extra features.

Audience: Do you use any testing frameworks?

Speaker: We have our internal testing framework; we use it mainly for image comparison. We compare the images across two releases.

Audience: Do you recommend any similar open-source libraries for testing?

Speaker: You can try anything like Karma, or any unit-testing framework; even Mocha works well.

Audience: Okay, thank you.

Moderator: Please meet the speaker outside — we're really running out of time. Okay, thank you.