All right, thanks, man. So just a little bit of background about myself. I'm Greg. I was working at TradeGecko for a while as a software engineer, and that's where I was first introduced to Ruby and Ruby on Rails. Since then, Ruby has become my favorite language. I'm not very active in the group, but I do enjoy Ruby on Rails quite a lot. I think it's a very easy language to learn, and when I do side projects, I normally gravitate towards Rails, simply because I'm familiar with the language and it's a very good platform to build on. After leaving TradeGecko as a software engineer, I started as a sales engineer, and I'm now working at Datadog. Datadog is basically a monitoring and analytics platform that gives you insight into how your applications are performing.

So I understand, Kangshun, you're at Shopify, is that right? Yes. So Shopify is a user of ours, and basically Shopify uses us to get observability over all their different systems. I think you guys run a Rails monolith application and a couple of different satellite applications as well, is that right? Yeah. Yeah. So they're actually a very big customer of ours, and they leverage Datadog very heavily to get observability over their systems. I'm not sure if this is something you deal with on an everyday basis, but working at Datadog, I've come to realize how important observability is.

So there are a couple of things I want to talk about today, just five points, very quick. When we talk about observability, it's very important that all your different tools are on one platform. At TradeGecko, we had a separate logging solution, an ELK stack; we were using New Relic for APM; and we went to Heroku to understand how all our dynos were performing, and so on. So whenever we had an outage or a problem, we had to go to all these different tools to try and figure out what was going wrong. With a platform like Datadog, that's something you can figure out very quickly, because all of that data is available to you on a single platform.

I think a really good way to show this is through screenboards. These are dashboards that bring critical metrics and data into one view, so you don't have to switch between different platforms. In the New York office for Datadog, we have a really big TV, I think it's a 65-inch plasma or something, and it shows the overall performance of the Datadog environments. It's a really good way to help your teams and the people around you understand how your environment is functioning. So over here we've got an e-commerce website that we set up, and this is showing front-end SLOs. Eric, I know you're a product manager, and there are often custom business metrics you'll want to track when you have new features rolling out, things you might report to your CEO or CRO. These are things you can do as well: you send custom business metrics to Datadog. And as we scroll down the rest of the page, there's tons of other data: APM, infrastructure, logs, serverless.
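To make the custom business metrics piece concrete, this is roughly what sending one from a Ruby app might look like. It's a minimal sketch assuming the dogstatsd-ruby client and a Datadog agent listening locally; the metric names and tags are made up for illustration:

```ruby
# Gemfile: gem 'dogstatsd-ruby'
require 'datadog/statsd'

# The Datadog agent listens for StatsD metrics on UDP port 8125 by default.
statsd = Datadog::Statsd.new('localhost', 8125)

# Count a business event, tagged so it can be sliced on a screenboard later.
statsd.increment('checkout.completed', tags: ['feature:new_checkout', 'plan:pro'])

# Track a business value, e.g. order size in dollars.
statsd.distribution('checkout.order_value', 129.90, tags: ['plan:pro'])

# Recent client versions buffer metrics in the background; flush before exiting.
statsd.flush(sync: true)
```

Anything sent this way can be dropped straight onto a dashboard widget and sliced by its tags.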
So these screenboards are really easy to build out. They're built using widgets: you basically just drag and drop them and choose whatever visualization you want. Wait, hang on, I think that's bugging out a little bit. What's happening here? Yeah, just choose the visualization that you want for the data, and that's it. Super simple to build out, all through the UI. You don't need a designer or someone who knows some custom scripting language.

Of course, to get such a nice, colorful, rich dashboard, we need to get the data into Datadog to begin with, and a really easy way to do that is through integrations. The primary way you get set up with Datadog is through the agent. The agent is basically a lightweight service that runs on each of your machines. If you're using AWS, for example, it's very easy to set up: just SSH in and copy and paste one line. That gets the Datadog agent running on your machine, and it collects data and metrics into Datadog. From there we can show you really useful information. For example, if you've got tons of hosts running (for the demo environment we've got a couple of thousand EC2 instances), you can visualize what the infrastructure looks like very, very easily. So if there were an outage and you wanted to understand where it was, you can do that very easily with a platform like Datadog.

Currently we're looking at our infrastructure by availability zone, so the different data centers we might have in our environment. If we click into one of these AZs, you'll see it's made up of a bunch of different hexagons, and if we click in over here, you'll see each host comes with a whole bunch of tags. Tags are basically key-value pairs, and they allow us to organize and consolidate information within Datadog. I'll show you an example of how you might use that. Instead of looking at our infrastructure by availability zone, we can cut it by, let's say, cloud provider. So if we were in a multi-cloud environment, with stuff on AWS, GCP, and Azure, as well as on-prem hosts, I can instantly see what the performance is like on all these different cloud providers. And then I can cut this even further: if I put availability zone back, within AWS I can now see the different AZs, and for my on-prem stuff I can see all the hosts across my data centers. Something might be wrong here: there's a lot of red, so something weird is going on with those hosts.
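As a side note on where those tags come from: host-level tags are usually set in the agent's configuration, but you can also tag everything a Rails app emits from the app itself. A minimal sketch with the ddtrace gem might look like this (the exact API differs between ddtrace versions; this is roughly the 1.x style, and the service name and tag values are made up):

```ruby
# Gemfile: gem 'ddtrace'  (e.g. in config/initializers/datadog.rb)
require 'ddtrace'

Datadog.configure do |c|
  # Unified service tagging: env/service/version flow through traces,
  # logs, and metrics so you can pivot between them later.
  c.env     = 'production'
  c.service = 'web-store'   # hypothetical service name
  c.version = '1.4.2'

  # Arbitrary key:value tags for slicing in the host map, e.g. by team.
  c.tags = { 'team' => 'storefront' }

  # Auto-instrument Rails (controllers, ActiveRecord, caching, and so on).
  c.tracing.instrument :rails
end
```

That env/service/version trio is what makes the cross-data-type pivoting shown later possible.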
But if I want to dig into this a little deeper, I can also go in at the code level. A really handy feature that we have is the service map. I think most Ruby people will be familiar with microservice architectures: as you start to scale up a product, you end up with all sorts of different services that talk to each other, and it can be difficult to understand what that looks like. With the service map, we analyze all the traces going on in your environment, figure out all the dependencies between services, and map that out for you. So this is what that looks like, essentially.

For me, if I were a new engineer joining a company, I'd need to understand what the environment looks like, and this is a super handy feature to have. I can immediately see all my web applications, what I have for databases, what's on cache, serverless stuff, even external APIs. It's super easy to understand what my infrastructure looks like. If I come in here and go into the web store service... I'll give you a scenario: maybe I'm the engineer on call, I'm on PagerDuty, and an alert is telling me, hey, the web store service is not doing so great, 19% error rate, 1,300 milliseconds latency. I can just come to this page and see all the other components affected, all these other microservices getting fed errors downstream, and understand immediately what's happening. If I needed to escalate to other teams, I can see, okay, these parts are involved, I might want to escalate to those teams as well.

One thing about Datadog is that we try to be that one-stop shop for observability: we handle your infrastructure, we handle your logs, we handle your metrics. And because you're sending all the data to us, it's very easy to do troubleshooting and diagnosis when something breaks. By clicking into the web store, it's very easy to pivot between these different data types. If I wanted to see what was happening with the underlying infrastructure, I can click here; I can go in and see all the logs streaming out from this web store service; and if I click here, this takes me to the APM portion, which I think most of you would find the most use from.

So if I just tab over to traces: within most applications, we want to understand what performance is like and what's happening in the backend. With application performance monitoring, we can understand what's happening at a very deep level in the code. Over here, I've got top-level APM metrics for a specific application, the web store service in particular: the total number of requests we're getting for the service, the total number of errors, and the latency on all those requests. We can see a whole bunch of traces here as well; these are all the requests going on in the environment. I'm just going to scope this down to 30 minutes, because this is live and it can get a bit noisy. On the left here, we've got a facet filter, so it's very easy to filter the different kinds of traces I'm getting in Datadog. For example, if I just want to look at error traces, I can click on that. And maybe there's a specific error I'm interested in, like "payment service unavailable", so I can filter on that here. Now I can see all the traces coming out of the web store that are related to this specific error, and I didn't have to write some special query language. Now, if I click into one of these: if some of you have done a bit of front-end development, you'll be familiar with Chrome developer tools and that waterfall view when you inspect something.
This is very similar to that, in that it shows everything happening in the backend as a distributed trace. Each of these blocks is a span, and a span basically represents a function: the Datadog tracer wraps each of our functions, so we can understand what performance looks like for each function called across the different services. From here, I can see the request comes from the browser as a POST request, goes into Rack as a Rack request, then into Rails and the Action Controller, and it's calling this specific custom controller that I've set up. And I can get lots of really rich data from that. Over here on the right, you can see execution time for all these calls. If you were trying to optimize performance, this is a very good place to start looking: I can see if some service was hogging the execution time, taking a bit longer to process the request. And as I scroll down (hang on, let me just bring this up real quick), you can see the request distributed across the different services: it hits my Mongo database, then goes back into the Rails controller, and so on.

If I scroll up here, I can see a small stack trace of where exactly the error is happening. Of course, it's always useful to have all the different data types when we're doing root cause analysis; I want to be able to go into my logs or check my infrastructure to see if the error correlated with anything else. We can't just assume stack traces will tell us exactly where the error is. So as I tab over, you can see all those different data types. You can see what's happening in the infrastructure and get metrics about the host; maybe the error we were concerned about is related to maxing out on memory at the time that trace happened. That's really easy to check just from Datadog. And then we can go into the logs as well and see all the logs streaming out from the different services as they get sent to Datadog. As I scroll down, we can see exactly where the error is happening: "payment rejected due to number of transactions". That tells us we're probably getting rate limited; we're hitting an external endpoint too much, and we probably want to introspect our own code and optimize the number of calls we make to that endpoint so we don't hit the rate limit as often.

Any questions around what I've shown you so far? Is this something you do with your existing monitoring solutions? I guess Kangshun's already using Datadog, so I guess this is something you guys do. Yeah, we use it quite a bit. Like you said, the observability part is very important to us, to make sure the site isn't going to go down. Yeah, and having this ability to do all that correlation in one place is really valuable.
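Coming back to spans for a second: the Rack, Rails, and database blocks in that waterfall come from the tracer's auto-instrumentation, but you can also wrap your own hot paths so they show up as their own blocks. A rough sketch, assuming the ddtrace gem's 1.x-style API, where the operation name, tags, and PaymentGateway client are all hypothetical:

```ruby
require 'ddtrace'

def charge(order)
  # Wrap a hot path in its own span so it appears in the trace waterfall
  # next to the auto-instrumented Rack/Rails/DB spans.
  Datadog::Tracing.trace('payments.charge', resource: 'external-gateway') do |span|
    span.set_tag('order.id', order.id)
    response = PaymentGateway.charge(order) # hypothetical gateway client
    span.set_tag('gateway.status', response.status)
    response
  end
end
```

If the block raises, the tracer marks the span as errored and records the exception, which is broadly how that little stack trace ends up attached to the trace in the UI.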
Personally, when I was at TradeGecko, we used SolarWinds for logging, New Relic for APM, and something else for metrics, I think it was Periscope or something. Whenever there was an outage or a problem, something went down or we received a customer ticket, we had to go to all these different platforms to try and figure out what exactly the problem was. But with a platform like Datadog, and I've used it for my side projects as well, it's very easy to figure out exactly what's going wrong, because you can move through all these different data types very, very easily.

The next thing I wanted to talk about... sorry, does someone have a question? Yeah, hi, this is Himali here, and thanks for the sharing so far, I think it's useful. I just had two quick questions around this. Do we have any alerting mechanism, like can we set an alert so that if this error comes up, it sends out an email or integrates with a chat app? Yeah, we definitely do. I'll be going over that in the second part of the demo, which I'll try to rush to because we're running a little short on time. But what was your second question? The second question: this seems almost similar to Splunk, right? Splunk is more focused on logging, is that right? Yes, more of a logging mechanism, and we didn't find the alerting there, so I was just trying to compare the two, how much better it is and so on. So help with some comparison.

Yeah, sure, so let's go into logs real quick. Logging is very similar to Splunk, except that with Splunk you need to write fairly complex query syntax, is that right? Yes. Is that how you guys do it? Yeah. Okay, so querying in Datadog is super simple. If I just want logs for the web store service we were looking at before, I can type something like "service:web-store", and now I'm looking at all the different logs coming from the web store service. If I just want error logs, I can use the facet filter over here. Super simple: now I'm looking at only error logs from the web store. Now, if I expand this out to the last one day, you'll see we've got 10 million error logs. That's impossible to comb through; no one's going to make sense of it. A very powerful feature we have in Datadog is patterns. I think Splunk has something very similar, but basically, instead of you looking at those 10 million lines, we analyze all of your logs and cluster them into common patterns of events. Then you can focus on the logs that occur most often, and you don't have to pay attention to every single error log you're seeing. This is a good way to prioritize what to optimize in that web store service.

Now, from here you can do a bunch of different things. For example, that rate limit error we were seeing: we can build it out into a graph showing how often those error logs are appearing. And from there we can do a whole bunch of different things: we can export this, and we can build out new metrics to show on a dashboard. So we can take this to our business team. Say we're paying for a certain plan on some external API.
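One quick practical note on why those facet filters work without a query language: they come essentially for free when the application emits structured logs. A minimal sketch in plain Ruby (stdlib only; the service name is made up, and in a real Rails app you would more likely reach for something like Lograge):

```ruby
require 'json'
require 'logger'
require 'time'

logger = Logger.new($stdout)

# Emit one JSON object per line. The agent tails the stream, and top-level
# keys ("status", "service", ...) become searchable attributes in Datadog.
logger.formatter = proc do |severity, time, _progname, msg|
  JSON.generate(
    timestamp: time.utc.iso8601,
    status:    severity.downcase, # Datadog reads the log level from "status"
    service:   'web-store',       # hypothetical service name
    message:   msg
  ) + "\n"
end

logger.error('payment rejected: rate limit exceeded on /charge')
```

Each top-level key then shows up as a filterable attribute in the Logs Explorer, which is exactly the facet filtering from the demo.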
Back to that rate limit graph: we can point at it and say this plan is not enough, because we're hitting the rate limit too often, so we need a higher rate limit or a better plan. You can also build this out into a monitor, for example if you want to get alerts for what's happening. You can see the trend for how often we're seeing this error, and we can set a warning at 20,000 occurrences and then say at 30,000, kick off an alert telling us this is a bit more serious. I'll show you what that looks like in a second. But yeah, logging in Datadog is very, very powerful.

A key differentiator between us and Splunk, though: when we talk about logging as a solution, we need to think about retention and sampling. If you retain your logs, meaning you keep them in the Logs Explorer, you can search them in real time. But the longer you keep them for, say 15 or 30 days, the more expensive it becomes to keep them searchable, and after a while those logs expire and you cannot search them anymore. The next thing is sampling. It's very expensive to send all your logs to your logging solution, because some logs are more valuable to you than others. Error logs are very important, but debug logs are not so important all the time; yet if you send all those logs, you pay the same rate for every one of them. So most people choose to drop those debug logs, but then I'm sure all of you have been in a situation where you're working on a ticket, you need to go into the logs, and you realize some of the logs you need aren't there anymore. They've disappeared. So where do I go to find them? I can't, so I have to tell the customer, sorry, that's it, I can't do anything. Yeah, I see Kangshun nodding a bit; this seems like a common problem at Shopify: "where did my order go?", and those logs have gone missing.

So Datadog has a very clever solution to this: Logging without Limits, which basically decouples ingestion from indexing. What that means is that you send all of your logs to Datadog, and from there we do a whole bunch of things with them. The first thing is they get sent to a live stream, but the important thing to focus on is that you can set different filters for those logs. For example, maybe for your core monolith application it's very important to keep all the error logs, so you set a 30-day retention just for those, and you pay for them only as long as you want them. Say you have a satellite application that's not so important and you only want to keep the debug logs for it: you can set a different index for those with only three-day retention, which is very, very cheap. You pay only according to how much you value those logs. The second thing that happens is that all the logs you don't index, all those other info-status logs you're not keeping in hot storage, get sent to an archive. That's a bucket on S3, for example, or GCS. This is your cold storage, and only when you want those logs do you pay to index them on demand.
This is called rehydration: you index the logs on demand, they go back into Datadog, and then you can do everything else with them: build out alerts, browse them through the UI, and so on. That's a very key differentiator between us and Splunk. Our logs are a lot more usable, very friendly to search and parse, to build alerts and monitors from, to show on dashboards, and at the same time you keep those logs very cost-effectively. You don't have to pay a whole ton of money just to keep them. That's a very convincing argument when you're trying to talk to your manager or your CTO about why you should spend more on a logging solution: it normally becomes very expensive when you want to collect more logs, but with Datadog you can keep those logs around and only pay for them when you deem them necessary.

Right, I think it makes sense, because it was costing us a lot, so we decided, okay, we will not depend on the older logs; we used to archive them and only pull them up if we needed them. But I think getting things on demand is the right approach. Got it, thank you so much for answering. No problem.

So I'll just show you what that looks like really briefly. You have your live indexes; these are all the logs you're actively collecting, so if you only want to collect error logs for your satellite application, that's something you can configure with the retention over here. But you also have your historical indexes. I'll show you what that looks like: you go to configuration, then rehydrate from archives, and you basically just choose a historical view. You choose the time period of logs that have expired but that you now want to be able to search and query, say the last seven days, and you can define a query for the specific application those logs were coming from. And you choose a retention again: maybe you're doing a short investigation, so three days, or a bit longer, say seven days. Again, you only pay for them for as long as you want them. And if we go back to the Logs Explorer over here, under that index view you now have your specific historical indexes, so you can just choose one of these and start searching and querying those logs in real time as well. Make sense? Got it, yeah. Thank you so much. No problem.

The next thing I want to show you... actually, how are we doing for time? How much time do I have left? I haven't been tracking. Maybe 10 minutes. 10 minutes? Okay. Let's go over timeboards really quickly. Again, working as a developer at TradeGecko, a lot of the time I had to work across different teams, with the infrastructure team or the core team, and we didn't all know what data we were looking at. So a very powerful capability we have at Datadog is the ability to work across different teams, identify trends, and reduce the time it takes to figure out what's actually breaking.
I think a good way to show that is through timeboards. These are very similar to the dashboards I showed you before, with all sorts of data coming in from various parts of the platform: web store performance, what's happening on our database, and so on. But if you look closely, all those widgets are time-synchronized, so you can configure the time window in which we're looking at this data. For example, if I wanted to look at this spike over here in top users with failed logins, I can click here and send a snapshot to a colleague, say John, and tell him, please take a look at this graph. What happens is he gets a link to this exact URL, with the timeframe in it, so when he comes to this page, he's looking at the data in the exact same context I'm looking at it in.

Another very powerful feature is metric correlation. For example, if we're looking at trace duration by AZ and we see a spike for one specific AZ, I can click here and do "find correlated metrics". What this does is look through all the other metrics I'm sending to and collecting with Datadog: say on my EC2 instances we ran out of memory, or my database maxed out connections at some point. It takes the spike in this specific metric, looks through all the other metrics I'm collecting, and identifies common trends for me. So now I don't have to go to different platforms to find that correlation; across all the different types of metrics I'm collecting in Datadog, I can do that high-level analysis already. This is a very powerful feature when it comes to identifying where my bugs are coming from, or during an outage, as an initial investigation.

Something else we have is the ability to overlay events on top of these timeboards. For this demo environment, we use Jenkins for code deployment, and if you're using GitLab or something else for deployment, you can do something similar. So I can say "sources:jenkins", and this shows me an overlay of events that happened. For example, if we go back to trace duration by AZ and I scroll down here, you can see this red line, and that indicates some event that happened in the stream over here. Now I can say, maybe someone deployed a new version of the code base, which resulted in a spike in trace duration by AZ. Having this ability to do metric and event correlation is, again, extremely powerful: if we're trying to figure out what went wrong and who changed what, we don't have to point fingers anymore; we can see exactly what happened, or at least derive some sort of hypothesis from it.

Now, the second question I was asked was what kind of alerting capabilities Datadog has, and for that we have monitors. Datadog has a whole bunch of different monitor types.
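To make that concrete before going through the types: under the hood, a monitor is basically a query over your data plus warning and alert thresholds and a notification message. You'd normally create one in the UI, but a rough sketch of doing it from Ruby, assuming the older dogapi gem (the metric name, thresholds, and notification handle here are all made up), might look like this:

```ruby
# Gemfile: gem 'dogapi'
require 'dogapi'

dog = Dogapi::Client.new(ENV['DD_API_KEY'], ENV['DD_APP_KEY'])

# Alert when average Rack request latency for the web-store service is
# above 300ms over the last 5 minutes; start warning at 200ms.
dog.monitor(
  'metric alert',
  'avg(last_5m):avg:trace.rack.request.duration{service:web-store} > 0.3',
  name:    'High latency on web-store',
  message: 'Latency is above 300ms. Check the web-store timeboard and runbook. @slack-ops',
  options: { thresholds: { warning: 0.2, critical: 0.3 } }
)
```

The @-handle in the message is what routes the notification to email, Slack, PagerDuty, and so on, which also answers the earlier question about chat integrations.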
Now, as a developer myself, I've been in that position where I ship a new feature, subscribe to alerts, and then my inbox fills up with a whole bunch of alerts I don't care about: some small error that happened and probably disappeared a while later. It doesn't matter to me, so I stop paying attention to those alerts. And that's not a good thing, because you get alert fatigue: when something critical happens, you're not paying attention anymore. So with Datadog, we give you a whole bunch of different alert types, so you can make sure the alerts you receive are specific and actionable, and you only pay attention to the alerts that matter most to you.

You have your standard host and metric monitors: if one of your machines goes down, or if you want to watch CPU utilization, that's something you can do. But Datadog also has a whole bunch of other monitors that rely heavily on machine learning. Anomaly monitors are a handy example: these look for anomalies in trends. Say you have a website that gets a standard level of traffic in the daytime and less traffic at night, so less latency at night. You want to make sure you catch a latency spike in the middle of the night, but a static threshold low enough to catch it would be hit constantly in the daytime. So here's a monitor I set up before: an anomaly monitor watching Rack request duration, which is basically latency. You can see this gray band; this is what we deem to be normal latency throughout the day. But if we see a spike in the middle of the night, we can be alerted on it.

And as part of good DevOps culture, we should always have runbooks. Say you're a new engineer, you've just joined the company, you don't really understand the system yet, but you're somehow on call anyway. I've been in that situation. It's super handy to have a runbook in the alert, so when you receive it on call, it tells you exactly what you need to do to resolve the issue: go restart the server, go look at this timeboard, escalate to this team. A very powerful thing to have, because you know exactly what to do, even if you're completely new to the company.

So essentially that's what I wanted to show you around monitors, logging, and infrastructure, and what you can do with the whole Datadog platform. There were a bunch of slides I prepared which I sort of skipped over in the interest of time, but the last part I wanted to do is a quick Q&A. Before I go there, did anyone have any questions about what I've shown, or want to dig further into some part of the Datadog platform?

Can I ask? Like you mentioned, it's agent-based, right? So after you deploy, for example, a Rails application on Heroku, you just install the Datadog agent on that Heroku instance and then you get all of this out of the box, even the logging and everything? There's a little bit of configuration that you need to do.
It's basically just uncommenting lines and flipping a false to true in the config file, but yeah, most of this is out of the box. You'll start seeing traces coming into Datadog, logs coming into Datadog, and so on. Okay. Yeah. One of the big focuses for us is ease of deployment: a lot of our customers don't have to hire systems integrators or consultants; all of this works right out of the box, very easy to do. And we also have very comprehensive documentation. If you go to the docs site and search for Ruby, for example, there's very, very good documentation, comparable to that payment company whose name is escaping me right now. But it's very comprehensive: for whatever data you want to collect, or for setting up the agent, you can just come to this page. Very easy to do.

I think I might be eating into Kangshun's presentation time, so I'll just quickly go over to the Q&A. This is a good chance, if anyone likes T-shirts, to win one. There are going to be two questions, and we'll keep this a little bit interactive: be the first to answer and I'll send you a Datadog T-shirt. First question: name two types of monitors in Datadog. We went through these pretty briefly, but if you remember the screen, or if you're fast, you can go to the documentation, search for Datadog monitors, and figure out two types of monitors that we have. Anyone want to answer? Anybody remember? Can you give a hint? I went over it on that page very briefly, but since it's a little bit quiet, I'm just going to go to this page. Someone just shout out two of the different types of monitors we have. Yeah? This one is anomaly, right? And the other, threshold? Yeah, threshold and anomaly. So anomaly monitors are one type. Anyone want to volunteer a second? Okay, you can go for it. Oh, it's right there: host and metric. All right. Do you mind just shooting a message in the chat real quick so I can remember your name, and DM me your email, and I'll contact you for shipping information for the shirt.

The next question, sorry, we're running out of time, sorry, Kangshun. The next question was: how does Datadog differentiate itself from other logging solutions? There was a very specific term I used in the presentation. I'm going to give it five seconds before I move over to that little slide I have. Since we have no takers... "I think on-demand requests for the archived logs?" Very close, but there's a very specific term that we use, on this slide over here, which you should be able to see right here. If you go ahead and say it, I'll be sure to reward you with a very cool and trendy purple Datadog shirt. "Is it Logging without Limits?" There we go. Thank you very much. And yeah, so that was basically it.
Logging without Limits: a very, very innovative solution which differentiates us from the rest of the logging solutions out there in the market. So that takes me to the end of today's presentation. I hope you learned a couple of things about Datadog and what we can do. If you compare it with whatever logging or monitoring solution you're currently using, you might see some things Datadog does a little bit better, or maybe there's something you struggle with in your daily work, when you have to resolve a ticket, where we can do a little bit better for you. If anyone is interested in talking more about Datadog and what we can do, I'd be very happy to talk to you, so just send me a message on Telegram or something like that; I'll be sure to post my contact info after this. Yeah, that was it. Did anyone have any other questions? Otherwise, I'm just going to hand the time back to Kangshun, which I have very ungraciously stolen from him. No problem. Anybody have any questions? Okay, I guess not.