So today I'll be talking about synthetic monitoring: what it is, why you need it, and how to do it, the basics of getting it implemented at your organization. So who am I? I'm Suraj. I've been working on Grafana Cloud for the past year or so, building the synthetic monitoring product. Before that I was at an early-stage startup, where I built a bunch of core things, from databases to ETL pipelines and whatnot, including monitoring. So let's define synthetic monitoring. We all have monitoring on our systems, right? We monitor what our servers are doing, what our databases are doing, how much CPU we have, how much is being used, what other resources our services have. And we alert on that. But most organizations don't really know how their applications are doing from their users' point of view. Are users in certain locations seeing degraded performance? Your internal monitoring will say everything is fine, things are running as expected; you won't know these things unless you monitor from your users' point of view. That's synthetic monitoring: you test externally visible behavior as your users would see it. And if you have users all over the globe, you need to monitor from all over the globe. Synthetic monitoring is part of the broader black-box monitoring category, so sometimes it's also referred to as black-box monitoring, but in this talk we'll just call it synthetic monitoring. So a quick question for everyone: how many folks here have used synthetic monitoring or are familiar with it? Feel free to unmute yourself and let me know if you use synthetic monitoring, and if yes, which product you use.
I heard in the last talk that Cisco uses Blackbox Exporter. We mostly use... it used to be Runscope; we are migrating towards New Relic. I see, nice. Any other folks here who already use synthetic monitoring? So I guess it's not as common as other monitoring practices, right? So why do we need synthetic monitoring? As we said, we want to find out how our application is doing from our users' point of view. To truly find that out, we need to pretend we are users and test our applications the way our users use them. And if our users are all over the globe, we test from all over the globe. I want to share a personal story: I made a change that broke production for four-plus hours, and we didn't even notice that production was broken for four-plus hours. The change was that I added a route in a load balancer to redirect traffic to our new website. A fairly small change; nobody thought it would impact production. I still don't understand exactly what went wrong, I think it was something I did in Google's load balancer UI, but because of that change, actual production traffic also ended up going to the new website. We didn't notice until late that evening. At 11 p.m. we had a scheduled maintenance. We started the maintenance and found out that we couldn't really use our production application: it was down and returning errors. And yet everything looked fine from the inside; servers and databases were all good. What actually happened was that our production traffic was being routed to Netlify, because that's where we hosted our website. I looked through the response headers, saw an X-Netlify header, and that's when I realized things were messed up. And then we fixed it.
Even if we hadn't had that maintenance window, things would have gone on longer and customers would have been impacted. But yeah, that's how I took down production for four hours without knowing production was down. We also had instances where our monitoring itself went down during an outage, so nobody was alerting us; customers told us instead of our monitoring systems. That's not a good state to be in. Also, suppose you have regional deployments, with certain configuration applying to certain regions. Say your team is sitting in one region, deploying globally, and things are broken only in some other region. It would be really hard to know, because from your region everything looks fine; you can use your application, but customers in that region see it broken or see degraded performance. There was one such incident with the folks at k6.io. Daniel is an SRE at k6.io, and k6 recently joined Grafana Labs, so I was talking to him about k6 and synthetic monitoring, and here is what he said: they misconfigured CloudFront and ended up with a regional caching issue. Nothing major, things were just slow in one region, and they wouldn't have found it otherwise, because folks in other regions were seeing everything as fine. But because of synthetic monitoring, they saw that response times and latencies were up in that region, and then they went in and fixed it. So it's not just me who breaks things; it's almost all of us. Another question I want to answer is how to do synthetic monitoring. We said we want to monitor from our users' point of view.
That means we have to leave our internal private network, monitor from the actual internet, and travel through the internet just like our users would, going through different networks before reaching our production services. Even if, say, your infrastructure is on Google and you also use Google to set up your monitoring, you would still be going through Google's network unless you explicitly avoid it. Google has its own optimized routes; cloud providers do their own networking magic to make things fast and try not to leave their own networks. So it's a bit tricky to actually go through the internet if you don't get out of your cloud provider. That's another important thing for synthetic monitoring: you want to make sure you're coming through the regular internet, the way your users do. The next question is exactly how to do it. There are two options. One is you do it yourself. Blackbox Exporter is a popular project in the Prometheus ecosystem; if you have Prometheus, you can configure and deploy Blackbox Exporter at the locations you want to monitor your applications from. You also have to be extra careful about the networking magic the big cloud providers do, so you may have to deploy it on a cloud provider you don't otherwise use. There is one more project from Google, I think it's called Cloudprober; I have never used it, but it's meant for this kind of black-box / synthetic monitoring. And then there are always cron jobs: you can write cron jobs to ping your websites and monitor them, but that's not really scalable or easy to manage.
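(To give a sense of scale for the DIY route: the cron-job version of a check is only a few lines. This is a minimal sketch, not any product's implementation; `probe_http` and `is_healthy` are made-up names, and the URL and thresholds are arbitrary examples.)

```python
# Minimal cron-style synthetic HTTP check: measure status code and
# wall-clock latency, the two signals you'd alert on first.
import time
import urllib.error
import urllib.request

def probe_http(url: str, timeout: float = 5.0) -> dict:
    """Run one HTTP probe and return status, latency, and success."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code
    except Exception:
        status = None  # DNS failure, timeout, connection refused, ...
    latency = time.monotonic() - start
    return {"status": status, "latency_seconds": latency,
            "success": status is not None and 200 <= status < 400}

def is_healthy(result: dict, max_latency: float = 2.0) -> bool:
    """Decide pass/fail the way a check threshold would."""
    return result["success"] and result["latency_seconds"] <= max_latency

if __name__ == "__main__":
    # Hypothetical target; in a cron job you'd log or push this result.
    print(probe_http("https://example.com"))
```

Of course, this is exactly the "not really scalable" part: you'd still need to run it from many locations, collect the results somewhere, and alert on them.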
There is a trade-off: doing it yourself has pros and downsides. When you do things yourself, you have full control over what data to collect, where to store it, how to format it, and so on; you can do whatever you want. You don't have to start from scratch, either; there are existing open source tools. Blackbox Exporter is there, and I'm pretty sure there are others depending on how and what you want to monitor. The downside is that now you need to monitor and maintain this service yourself. And say you have assets like marketing websites; people who are not engineering-savvy have to go through engineering to get their services monitored, to get their checks up and running, and to modify those checks. That can be seen as a pro or a downside, because now there is one more hop and engineering has to be involved. And again, the meta-monitoring question comes in: you use synthetic monitoring to monitor your overall application, but who monitors your synthetic monitoring? As I said, it's turtles all the way down. The other solution is to use a managed service. It's easy to use, with no engineering overhead. Most managed services have a nice UI where people can just put in a URL or endpoint and it starts being monitored. They have global locations you can select to monitor your applications from, and they take care of the rest. You don't have to worry about operating it, then monitoring it, then meta-monitoring it. But there are downsides there too: they might not have the locations you want.
Say you want to monitor your application from a location where they don't offer a probe; there is no way to gather data from that location. Also, most managed services don't mix well with your existing white-box monitoring. You have all of your internal data, but the managed service has its own UI and its own way to look at data, so people have to learn how to use that service and how to get data out of it. It's not easy to mix with your existing data. If you're using Grafana you can set things up to pull that data in alongside your other data, but again, it's work. Also, say you have internal applications you want to monitor, things like VPNs that are critical for your organization but live inside your network; you don't want to monitor them from outside. Almost every organization has internal applications that are not public, per se. The only way to reach those applications is to be inside your network, and not every managed service offers that. So that's managed services for you. Now, instead of going over all the managed services and all the open source solutions, I'll just talk about what we have at Grafana Cloud, because that's what I've been building for the past year or so. Grafana Cloud Synthetic Monitoring looks something like this. It lives inside your Grafana instance as a plugin, and you can monitor things from all over the world. But people who know Grafana might say: wait, wasn't there something called worldPing? Didn't you guys have one more service that did exactly the same thing?
And you would be correct: there was such a service, but it hit end-of-life last April. Right now it's in read-only mode and will be taken offline on August 1st. worldPing was the very first service from Grafana Labs, back when Grafana Labs was called Raintank. It was also a synthetic monitoring product, but it had a bunch of things we wanted to improve upon, and the ecosystem had decided to move on to Prometheus, while worldPing used Graphite as its data format. It also had locations that would run your checks all over the world, gather metrics, and store them in Graphite format. There were a bunch of downsides to that. Prometheus is easy to extend; you can throw in labels and do a bunch of things with it, and it's very powerful. With Graphite we felt we could make the product better, but couldn't. So we decided to deprecate worldPing and move forward with a Prometheus-based world. Another major requested feature: say you have a check running from a remote location, and only that location is failing. Your checks would generate logs, but worldPing had no feature to show those logs to you. So customers would have to create a VM or use a VPN to browse their services from that location and debug it that way. That was suboptimal and painful; we don't want that. So logs were another major missing feature of worldPing. This is the timeline: worldPing came out in 2015, and in 2016 it moved inside Grafana as an app plugin.
Today we have moved on from worldPing, shut it down, and developed synthetic monitoring, which is the same kind of product but new and better in various ways. If you want to know more about what is different from worldPing and what's new in synthetic monitoring, my colleague Teddy wrote a blog post about that; it's linked in this slide, and you can read about the exact product differences there. So why a new product? The world is moving to Prometheus; almost everybody is adopting it, and all major vendors support Prometheus ingestion and querying. We wanted to move with the world and have Prometheus-based metrics for your synthetic monitoring data. Also, the logs I mentioned: we don't want our customers to have to connect through a VPN and try to reproduce a problem that our probe is seeing. Sometimes they couldn't reproduce it at all, because, well, the internet. The only way to know why certain things happened is to collect logs and show you: okay, this is why it happened. So the new synthetic monitoring product has logs. And because it is built on top of Blackbox Exporter, you get all the configuration flexibility that Blackbox Exporter offers; we expose all the configuration options. Blackbox Exporter exports Prometheus metrics, so our synthetic monitoring product has Prometheus metrics too, and it integrates well with our Grafana Cloud offerings: metrics are stored as Prometheus metrics in your Grafana Cloud hosted metrics instance, and logs are stored in your hosted Loki instance.
If you want to mix and match with the other data you have in your hosted metrics and hosted logs instances, that works out of the box without any pain; you don't have to feed data in or move it around, it's already there. Also, it is inside Grafana: you don't have to use another tool or teach people another tool, it's all Grafana and Prometheus. And it integrates well with cloud alerting. Grafana has a cloud alerting product, a hosted Alertmanager offering, so you can use the same alerting infrastructure you already have to manage your alerts, your routes, and your rotations for your synthetic monitoring too. You don't have to go to another product, add people to it, and configure email alerts. And because it is Alertmanager, you get everything Alertmanager has to offer, including all the places you can get alerted: PagerDuty, email, Slack, Telegram, whatnot. Another bit we have in the new product is private probes. Private probes are probes that you can run in locations where we don't have probes, or inside your internal networks if you want to probe your internal applications. You give a private probe a token and bring it up; it connects to our API, fetches all the work it needs to do, starts executing it, collects metrics and logs from the checks it runs, and pushes them right back to our cloud. You just need to run a process, that's it. If you want to run a Docker container, we have that too, and I'm assuming almost every organization can throw a Docker container somewhere these days. If not, we also have other ways to install it. So, I want to do a live demo now. All hail the demo gods. Yeah, there we go.
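(The private probe's lifecycle described above, fetch assigned work, execute it, push results back, can be sketched roughly as below. All names and payload shapes here are hypothetical illustrations; the real agent is open source at github.com/grafana/synthetic-monitoring-agent.)

```python
# Sketch of a probe agent's main loop: pull checks, run them, push
# results. The cloud API is abstracted away as injected callables.
import time

def run_check(check: dict) -> dict:
    # Dispatch one check by type. Only "noop" is implemented in this
    # sketch; real types (http, dns, tcp, ping) would plug in here.
    if check["type"] == "noop":
        return {"id": check["id"], "success": True}
    raise NotImplementedError(check["type"])

def probe_loop(fetch_checks, push_results, iterations=1, interval=0.0):
    # fetch_checks() returns the probe's currently assigned checks;
    # push_results(results) ships collected results back to the cloud.
    for _ in range(iterations):
        results = [run_check(c) for c in fetch_checks()]
        push_results(results)
        time.sleep(interval)
```

The point of the injected callables is that the probe only ever makes outbound connections, which is why it can sit inside a private network without any inbound firewall holes.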
I have synthetic monitoring installed in my personal Grafana Cloud instance; you can see the icon there. Let's go to the home dashboard. I created a bunch of checks for this demo, and this is the last three hours. It supports DNS, HTTP, TCP, and ping checks. I don't have any ping checks, which is why it's only showing three of them. And it shows all the regions where the checks run. Here is an instance of my DNS check: I'm doing a DNS check on my website, and it says reachability 99.8%, with latency graphs below. These are my HTTP check and my TCP check. Now let's go through the UI and create a check. This is the view where you see your created checks. If you want a compact view, you can see how many locations each check runs from, its frequency, and how many active series it will generate. We don't bill on your check executions; we bill on the data that you generate and store. There is no separate billing for synthetic monitoring, so you don't have to worry about how often you're checking. If you generate more data, you'll be billed more; generate less data, and you'll be billed less. It's all part of your Grafana Cloud hosted metrics and logs billing, so there's no extra service to pay for and no additional billing rules to learn. There is also this new visualization view if you want to look at your checks at a glance: I can see these two checks are failing, and these are successful. Okay, so let's go ahead and create a check. These are the four check types I can create. I have to define a job name; whatever you write is sent as the job label in your Prometheus metrics. Let's monitor Google, because why not? I can just say google.com, and here are the query parameters if you want to send any to your service.
Then I pick where I want to monitor from. All of these are hosted probes, and this one is my private probe; I'll get to private probes later and show you how to set one up yourself. I'm running it on my laptop right now, and it's executing checks and sending data. I'll select all of them, run the check every 60 seconds, and time out after three seconds. Then there is this toggle. By default, we don't publish the full set of Blackbox Exporter metrics, because if you run tons of checks and you are not using those extended metrics, you would end up paying more. They are useful, but not everyone needs them. So if you only want basic metrics, leave this unchecked; if you care about everything Blackbox Exporter has to offer, check it and you'll get all the metrics. It's just there so that if you don't need all the metrics, you don't pay a higher bill. You can select the HTTP method, the request body, headers, the normal stuff, and TLS config; you can give us a certificate. If your service is behind authentication, you can put a bearer token here, or a username and password for basic auth. You can do validation too: accepted HTTP versions and status codes, and regular expression matches on your headers and body. All of this already exists in Blackbox Exporter; this UI just makes it easy and intuitive to configure, so people don't have to write Blackbox Exporter configuration by hand. You can also add labels to your checks, and they will show up in your metrics and logs. Say you add a team label, team=google; you can then use it however you want.
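(The body-matching validation mentioned above is modeled on Blackbox Exporter's `fail_if_body_matches_regexp` / `fail_if_body_not_matches_regexp` options. Here is a small illustrative sketch of that idea; `validate_body` and its parameter names are made up for this example.)

```python
# Body validation in the Blackbox Exporter style: the check fails if a
# forbidden pattern appears, or if a required pattern is missing.
import re

def validate_body(body: str,
                  fail_if_matches=(), fail_if_not_matches=()) -> bool:
    for pattern in fail_if_matches:
        if re.search(pattern, body):
            return False  # forbidden content found
    for pattern in fail_if_not_matches:
        if not re.search(pattern, body):
            return False  # required content missing
    return True
```

This is handy for catching "soft" failures: a page that returns HTTP 200 but renders an error message would pass a status-code check yet fail a body match.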
We also have IP version selection, say you only want to monitor over IPv6, or maybe you don't care either way. And we have built-in alerts: you can use these to configure default alerts based on latency and reachability, and those alerts will show up in Grafana Cloud Alerting. Let's go ahead and save this check. Okay, it's not showing anything yet because it hasn't started running; it'll take a moment to start generating data. These are all the probes we run for you; if one is marked public, that means the probe is hosted by us and you can run your checks there. There is also the private probe that I'm running on my laptop. It says online; if I go ahead and kill it, it will show up as offline. There, it's offline; let's bring it online again. We have the synthetic monitoring agent on our GitHub, so the code I'm running as a private probe is open. You can go and look at it: github.com/grafana/synthetic-monitoring-agent. It's not some random code you wouldn't want to run on your network. It should show up online again... okay, it's online. The success rate here is the fraction of checks that succeeded on this probe. And then there are the alerts. If you know how to configure Prometheus Alertmanager and want to write your own alerts, you can leave this alone. But say you don't want to mess around with the synthetic monitoring metrics we generate and just want basic checks, like: if my service falls below 95% or 90% for five minutes, alert me. Then you can just save the alert here, and it will generate an Alertmanager rule in your Grafana Cloud alerting service. The same goes for your other checks.
If you remember, in check creation there was an alert sensitivity feature; we add a low, medium, or high label to your checks, and if you configure that, you can target those checks here. These are the three default alerts we add for you. Let me go ahead and show those alerts in the cloud alerting product. This is the alerting UI for Grafana Cloud alerting; think of it as a hosted Alertmanager, because it is Alertmanager. This is the Alertmanager configuration: you have silences, you have notifications, and you have your rules. There is one rule that I created, "website is slow": if my website takes more than five seconds for five minutes, alert and say my website is slow. And these are the default alerting rules that are part of synthetic monitoring. We also create a recording rule for you, so you don't have to type the same query again and again. We import five default dashboards as part of synthetic monitoring. We already looked at the summary dashboard, but there is a dedicated dashboard for each check type: DNS, ping, HTTP. They are normal dashboards; you can see them alongside all your other dashboards, put them in a folder, and so on. Let's look at the DNS dashboard and see what's happening with my DNS queries. Looking at DNS, it shows Bangalore at 3.57 percent, with uptime 100 percent and reachability 99.83 percent overall. Uptime means at least one probe is able to reach your service and confirm that it is up. Reachability is the combined success rate across all probes: if one probe can't reach you or is getting errors, reachability starts decreasing.
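(The uptime-versus-reachability distinction above is easy to pin down in code. This is an illustrative sketch of those two definitions, not the product's actual queries; the function names are made up.)

```python
# Uptime vs. reachability, computed from per-probe results. Each inner
# list is one check interval: the success/failure of every probe
# location during that interval.
def uptime(intervals: list) -> float:
    """Fraction of intervals where at least one probe succeeded."""
    up = sum(1 for probes in intervals if any(probes))
    return up / len(intervals)

def reachability(intervals: list) -> float:
    """Fraction of all individual probe executions that succeeded."""
    total = sum(len(probes) for probes in intervals)
    ok = sum(sum(probes) for probes in intervals)
    return ok / total
```

So a service can show 100% uptime (some probe always got through) while reachability sits below 100% (one location keeps failing), which is exactly the "broken in one region" signal.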
That means you're not really reachable in certain locations. You can see records and so on, and here are the logs. There are some errors coming from the Bangalore probe, and you can see what happened: it says "error while sending a DNS query: i/o timeout". Really helpful when you're debugging something. It says check failed, duration five seconds. So if I go back and look at the check: I configured a five-second timeout, meaning if it takes more than five seconds, I consider the check failed. I can go increase or decrease that based on my threshold. Now let's look at the HTTP dashboard for our HTTP services. I'm monitoring a bunch of websites; let's look at the Google check we just added. It's up, but not fully reachable. It could be because of our threshold, but you can debug why it's failing: you have all the logs from your checks, and these are normal Loki logs. You can go ahead and explore them and run whatever LogQL you want. Say I want to see all the errors; I'm not a LogQL expert, so forgive the bad LogQL, but there it is: "cannot assign requested address". I can do "show context" and see the surrounding lines: the TCP connect failing, the HTTP request being made, the target being resolved. So there is some issue, and you have the logs to debug it. It also shows the SSL expiry, so you can alert on SSL certificate expiry as well; I'm assuming almost every monitoring service offers that, but I wanted to show that it's here too. And in the response, you get a breakdown of where the time goes.
If you look at this one, the majority of the time is going into resolve, and then the check actually fails, which is why the other phases are zero. Let's look at my website instead and see what's going on. You can see that my home network is not as quick as the other probes, so the site is slow from my probe's point of view, and fast when we look at the other probes, like Bangalore. This is what I mean when I say the network is not uniform; you have to monitor from all over the world and go through the actual internet. You can also just browse your checks like this, see the probes, see what version they're running, and so on. Anyway, I guess that's all for the demo. I wanted to tease what's next for synthetic monitoring. We are in the process of planning and building a traceroute feature, to trace how, say, the Atlanta probe reaches your service. You'll be able to run traceroutes from our probes, look at the data, visualize it, and see how traffic travels over the internet. That's particularly useful for folks who want to do network monitoring. And smokeping: the current ping feature only sends one ping packet, which is not great when you want to monitor response latency. Smokeping sends a burst of pings, so you can look at the distribution and see how latency behaves compared to a single packet. No promises on when these will be out, but hopefully soon. Okay. Now say you think this looks neat: how do you get it? You want to monitor your home lab or a personal project. Grafana Cloud has a free tier, no credit card or anything required.
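(The smokeping idea, a burst instead of a single packet, boils down to summarizing a latency distribution plus packet loss. Here is an illustrative sketch of that summarization step; the actual sending of pings is left out, and `summarize_burst` is a made-up name.)

```python
# Smokeping-style burst summary: given per-packet round-trip times in
# milliseconds (None = lost packet), report loss and the latency spread.
import statistics

def summarize_burst(latencies_ms: list) -> dict:
    received = [l for l in latencies_ms if l is not None]
    loss = 1 - len(received) / len(latencies_ms)
    if not received:
        return {"loss": loss, "min": None, "median": None, "max": None}
    return {"loss": loss,
            "min": min(received),
            "median": statistics.median(received),
            "max": max(received)}
```

A single packet would give you one number; the burst gives you loss plus min/median/max, which is what lets you spot jitter and intermittent drops.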
You can just sign up and use all the Grafana Cloud features, including synthetic monitoring. You get metrics, logs, and alerting, and it's more than enough to monitor side projects, home labs, and personal projects. Even if you just want to play around with alerting and observability tools and learn, you can use our free tier and play around with Grafana and the Prometheus ecosystem. Graphite is also there, but I'm assuming nobody wants to learn Graphite these days. That's all from my side; thank you, everyone. These slides will be available on the Hasgeek platform, I'm assuming, and you can also download them from my website. I'm on the internet: you can reach me on Twitter or write an email; my address is on my website. I'll stop sharing and we'll go ahead with the questions. Yeah, of course the last slide kind of got to me: is this feature coming to the open source offering anytime? The probe is open source already, so is there a way to set up my personal open source probes across multiple clouds? Say I'm hosted on AWS; I could run probes on Azure, DigitalOcean, GCP, wherever, but the control plane would still live with Grafana Cloud, and I don't have this feature on my self-hosted platform. Is that anywhere in the product pipeline? So, in the last couple of months we were busy deprecating worldPing and migrating those users off, so we focused only on the cloud as the data storage, because that's what we needed from a product point of view. As of now self-hosted storage is not on the roadmap, because we have a free tier; when people say they want to play around with it, we just say the free tier is more than enough. It allows 15K active series, plus alert rules and logs.
And in synthetic monitoring, when you look at a check, we tell you how many active series it will generate, so you can monitor hundreds of endpoints with the free tier. There is also one thing you can do: say you have your local Grafana and you want synthetic monitoring in it. You can configure the cloud as the storage destination, install the synthetic monitoring plugin on your local Grafana, and provision those cloud data sources in your local Grafana, and synthetic monitoring will work there. You do go through the cloud, yes; it collects data and sends it to the cloud, but the UI works on your side. Still seems roundabout, but I'm pretty sure, with Grafana's track record, in the near future we'll see this integrated with the open source version; fingers crossed from my side on that. Other than that, if I want to do this at my org right now, and we have a very distributed customer base, then I'd have to go the Blackbox Exporter route and set up custom probes, right? Yeah, as of now, if you want to store the data and the logs on your own, then I think Blackbox Exporter is the way. But I would say that even if you have your own internal Cortex and other services, you can still use the free tier just for this: install it in your local Grafana, provision those cloud data sources, and mix and match with your in-house data and just check it out. I don't have any other questions; the whole thing looks simple enough. I have used Blackbox Exporter once, back at one of the orgs I worked at, but that was just for a simple ping test, not a distributed test across everything. We were monitoring our external infra from one location, not from all possible locations.
But let's wait for anyone else from the audience to pitch in with questions, because this is definitely a super interesting topic, personally, for me. So, anyone else with questions on this?

Yes, Suraj, this is Ashok here. Very interesting. We have used blackbox exporter extensively, for everything from SSH checks to ping and everything else. That's our easiest way of finding out, okay, something is up on the availability front. So one question I had was: when we do it the cloud way, what are your thoughts on the latencies? Because the data has to go to the cloud and come back before I can measure anything.

No, no. The cloud is only where you store your data, right? The checks are executed all over the world.

No, I agree. What I'm trying to say is, since I'm storing the data in the cloud, what I see might not actually be the latest. I might not get real-time data.

Oh, okay. So I think the latency should be fine. We have, I think, two regions as of now in Grafana Cloud; I'm not sure if you can choose your region in the free tier. And I think it will get better over time. So I would say give it a go: install it on your Grafana, play around with it and see what the latency looks like. It shouldn't be a problem, from what I think.

Yeah, in our space, enterprise, pushing anything to the cloud is forbidden. So yeah, we'll try another route.

I see. And it's also not just about enterprise. I think most orgs, even if they are not enterprise, are hosting their own Grafana, and it's a pretty standard pipeline, right? Grafana Cloud only becomes important at a certain scale, or at a very early or a very late scale, where managing a self-hosted cluster is getting to be too much overhead. At that point you can rely on the cloud.
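(As a rough sketch of the blackbox exporter setup Ashok describes: the exporter config defines probe modules, and Prometheus rewrites each target into a `/probe` request. Targets, the exporter address and the SSH banner pattern below are illustrative:)

```yaml
# blackbox.yml -- probe modules (sketch)
modules:
  http_2xx:
    prober: http
    timeout: 5s
  icmp:
    prober: icmp
  ssh_banner:
    prober: tcp
    tcp:
      query_response:
        - expect: "^SSH-2.0-"

# prometheus.yml snippet -- standard blackbox relabeling pattern
scrape_configs:
  - job_name: blackbox-http
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ["https://example.com"]   # placeholder target
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115  # where the exporter runs
```

Running one exporter per location and federating or remote-writing the results to a central Prometheus gives the multi-location view discussed above.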
But in the mid tier, or, as Ashok says, in enterprise scenarios, people have their own Grafana setups already. So there has to be a good way to integrate this pipeline with the Grafana dashboards people already have. As you correctly said, we don't want to see the data in two or three different dashboards, right? The same would happen if we had our own Grafana dashboards and a Grafana Cloud dashboard just for the synthetics side of things. So some integration would definitely be super awesome and nice, but yeah, let's see how that works.

So if folks are fine with keeping the synthetics data in the cloud, then you can install the app on your local Grafana and configure it: give it the data sources, give it credentials to connect to the cloud. Then you don't have to go to two Grafana instances to see this data; it will all be in a single instance, but the data itself lives in the cloud. With some latency, of course; there's an extra hop, so you pay for it with some latency. But I think from a historical point of view that would still be relevant enough. Maybe not for a real-time view, but we can still analyze historical data from certain locations if too many customer complaints are coming from a certain region, right? We can figure that out at that point.

Yeah. Anyone else here who does this with any other tool set? Or we can start the panel discussion, I guess. We still have some time, at least until the scheduled end of the session. Okay, 12:15, so two minutes till the live stream ends. You can still ask questions, or we can take them all to the scheduled panel discussion that we always do, where we go off the record and can talk about everything under the sun. So until then: are there people here who are already using other tool sets for synthetic monitoring?
Telegraf. Telegraf has an HTTP and a TCP plugin, I guess, with which you can do synthetic monitoring. We have used Telegraf a lot. So I think a lot of the self-hosted setups would basically deploy ping-test monitors across multiple locations and then stream the data to a central monitoring location, or the meta-monitoring location, something like that. Right? Yeah.

I'd like to add something. Okay, yeah. So I started consulting just a month back, and the company I consult for right now is using Pingdom, basically for ping tests and all of that. Recently they faced an issue where the tests were just timing out, not even able to reach the target. So we're exploring what other options are available for them, and DataDog has a synthetic monitoring offering, and they've already been using DataDog. So what do you think about DataDog's synthetic monitoring, do you have any views on that? We have not been using Grafana Cloud, so it would be difficult to suggest a completely different product when they've already been using DataDog. It seems like a good option, more or less.

And another question of mine: if a product is region based, for example just within India, you won't have that many probe locations to check from. Data centers are mostly in Mumbai or one or two other places, so you get just one or two probes in a specific country. So if you wanted to do more of a regional check, like whether customers from Delhi can connect or customers from Chennai can connect, how would you solve that kind of problem? With something like DataDog, or do we have to build something in-house?

You're still limited by where data centers are hosted. For our tier-two and tier-three cities and those customers, there are no data centers out of which you can run your blackbox tests, right? So, not really sure.
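(The HTTP and TCP plugins mentioned here are Telegraf's `http_response` and `net_response` inputs. A minimal sketch; the URLs, address and match string are placeholders:)

```toml
# HTTP synthetic check: reports status code and response time per interval
[[inputs.http_response]]
  urls = ["https://example.com/health"]   # placeholder target
  method = "GET"
  response_timeout = "5s"
  follow_redirects = true
  response_string_match = "ok"            # flag the check failed unless the body matches

# TCP reachability check: the "can I even connect" probe
[[inputs.net_response]]
  protocol = "tcp"
  address = "example.com:443"             # placeholder target
  timeout = "3s"
```

Deploy one Telegraf with this config per location, point the outputs at a central store, and you get the distributed view by tagging each agent with its location.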
You can leverage a couple of mid-tier bare metal companies that have data centers in other cities, but beyond that, would anyone else pitch in on this? So I would say using bare metal offerings and running private probes on them would be a way to go. And if you really want to check from the user's point of view, I would say ship a NUC to every employee of your company and let them run a probe.

Yeah, I mean, yeah. If you do have people in those cities, ask them to run a Raspberry Pi and... Yeah, that seems like a good option. I'm going to the extreme, but yeah: give your customers a discount to run a probe from their end.

Oh yeah. Or just build it into the app itself. This is what I think a lot of telecom companies do, right? If they're putting up a tower on top of your building or something, you get a flat discount or some payment out of that. So this is already a business practice in certain domains.

Yeah, I was on a project before where we used DataDog for all of that, but we didn't have a website and stuff, so we didn't run into that many of these problems. So yeah, this is relatively new to me. But DataDog also won't do India region-based splits, right? It will still do maybe Delhi, Chennai, Mumbai, Bangalore. There's no data center in Bangalore, so you're limited to Delhi and Mumbai, and maybe Chennai. There are a couple of places; Hyderabad has a couple, I think. But you're limited by where DataDog has a center, right? They don't offer much fine-grained choice; I believe you only get India-level granularity, not state-level granularity, right? Right, right.

Also, I was looking at Cloudflare Workers, but they don't allow Golang, so I can't run the probe there. Otherwise I would have deployed it on Cloudflare Workers and suggested that as a solution. But yeah. Yeah, that's a good option.
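(For the Raspberry Pi / bare-metal idea with Grafana's stack, the open source `synthetic-monitoring-agent` mentioned earlier can run as a private probe. A rough deployment sketch; the flag names are as I recall from the agent's docs and the token is a placeholder you generate in the synthetic monitoring app, so verify against `--help` before relying on this:)

```shell
# Run a private probe on your own hardware (sketch; verify flags with --help).
# SM_API_TOKEN is a placeholder for the probe token from Grafana Cloud.
docker run -d --name sm-private-probe \
  grafana/synthetic-monitoring-agent \
  --api-server-address=synthetic-monitoring-grpc.grafana.net:443 \
  --api-token="$SM_API_TOKEN"
```

Checks assigned to that probe in the cloud UI then execute from wherever the box sits, which is how the "probe from a tier-two city" case could work.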
Maybe you can write something of your own in a language they support and deploy that. Or just write a shell script. Yeah. Back in the old Zabbix days, all probes were shell scripts. There was no fancy language; you used grep and sed and awk to filter out the data point and send it. Right, right. Anyway. At one point in time, every piece of monitoring software was a shell script or an extension.
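(In that grep/sed/awk spirit, a minimal sketch of such a shell probe. The function names and metric names are made up for illustration:)

```shell
#!/bin/sh
# Old-school shell probe: hit a URL with curl, pull out the status code and
# total time, and reformat them as one-metric-per-line output.

# Turn curl's "status time" line into metric lines.
format_metrics() {
  awk '{ printf "probe_http_status %s\nprobe_duration_seconds %s\n", $1, $2 }'
}

# Run one probe against a target URL.
probe() {
  curl -s -o /dev/null -w '%{http_code} %{time_total}\n' "$1" | format_metrics
}
```

Run `probe https://example.com` from cron on each location and ship the lines to your collector; that is essentially what those Zabbix-era checks did.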