Okay, great to be here, thank you very much. So I'm going to talk a little bit about M3, and about the community around it, which I think is pretty cool, and of course there's a demo to be done at the end as well. I've been working on this for a while now, and it's been a really fantastic effort. We're also trying to bring a lot of standardization to the way people define metrics, things like histograms and how they bucket them; hopefully it should become very easy to define standardized dashboards, and people should be able to share the same configuration in a common format. But the first thing we wanted to do is be able to actually share the metrics themselves. So anyway, that's a little bit about the motivation. I also want to talk about where M3 is headed next. I'm Australian, so I usually talk pretty fast; I hope you can keep up. One of the main things I want to get into is high cardinality: I think one of the real challenges in operating metrics at scale is that people hear that term, and it comes up pretty often, but they can't always picture what it means. So I want to give you an example of what high-cardinality use cases really look like, very quickly. Say you have a single metric, like requests served, and you basically want to track the occurrence of the different statuses of those requests over time.
So when you think about the dimensionality of this metric, there are a few different dimensions we'd like to track: say, the route, the status code, the region, the client version. And when you multiply these dimensions together, that's the resulting number of unique time series we need to track. If we look at some numbers here: you can roll these metrics up to make viewing them fast. You can see there are a few metrics on the left here that have a region tag, a priority tag and a status tag, and a lot of values for each of these across the route tag as well. So you might want rollups stripping off the region tag, stripping off the client tag, and then summing the results of those together. That makes viewing these really high-cardinality metrics, say across regions or across client versions, quick. However, you still have to store the raw data, which is going to be highly dimensional. The example we have here is basically 500 routes, by status code, by region, with, say, 20 client versions, and that's 250,000 unique time series. That's expensive, but not too bad. If you have a whole lot more client versions, or any other dimensions on it, it gets to millions pretty quickly. And you could say people maybe just don't need some of these tags, and that's fair, but there are definitely going to be some tags that matter. To go into a concrete example of why at least some of these are important: say we're monitoring a backend that runs out of two regions.
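As a quick sanity check on the arithmetic above, multiplying the dimension sizes shows how fast the series count grows. The per-dimension counts here are my illustrative assumptions chosen to reach the 250,000 total from the talk; only the ~20 client versions and the total are from the transcript:

```python
# Rough cardinality math for a single request metric.
# Per-dimension counts are illustrative assumptions, not figures
# from the talk (except the ~20 client versions and the total).
routes = 500
status_codes = 5
regions = 5
client_versions = 20

series = routes * status_codes * regions * client_versions
print(series)  # 250000 unique time series for one metric
```

Add one more small dimension, say ten host groups, and you are at 2.5 million series, which is the "gets to millions pretty quickly" point.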
Say US East and US West. You've basically got a status code metric at the edge, but you're not monitoring MySQL or Redis directly, or whatever other subsystem sits behind it. So you've got two regions, US East and US West, you're monitoring status codes at the edge, and you're not really monitoring all the subsystems the application uses. Now say Redis fails in a single region. You can see here that we've basically got two clients that don't necessarily talk to the same resources. Client v1.3 is only talking to MySQL: it hits the application and gets its data back. Client v2, however, goes down a different path that hits both MySQL and Redis, say it's a fancy chat feature that uses Redis. So if Redis fails, something you're not specifically tracking, you want to work out from the very edge of your network where the errors are happening. If you don't have the region tag, you won't know where to start looking. And if you don't have the version tag, you won't know that you should be tracing the code path for the v2 client, not the v1.3 client. This is why these tags can be really, really important. Now, Prometheus is an absolutely fantastic system, and I think everyone should get started on Prometheus; there's no reason to jump straight into a more complex, hardcore monitoring system. It's only when you start to hit the limits of running multiple Prometheus instances that you face choices like: how do I shard my Prometheuses, or do I look to remote storage? And the design of Prometheus remote storage is basically there to solve this problem.
Say you shard with a simple hash of the metric and try to solve it yourself: it actually starts to look difficult, because if you've got some shards that are very full, you might topple them over. You basically have to stand up a new one and either rebalance onto it, or point entirely new traffic at it. So let me give a bit of the background on why it was even important for us to work in this space. We have a lot of metrics. We learned things the hard way: we started with untagged metrics, then we began tagging metrics, and we now have about 100 million pre-aggregated metrics per second coming in. That translates into, I think, 30 gigabytes per second of network traffic from our applications. After that we basically downsample those metrics into 10-second tiles and 1-minute tiles that store a value for each of those resolutions, as well as 10-minute tiles and 1-hour tiles, which are stored for months and years. Most of the recent data is stored at 10-second resolution for the last few days, then at 1-minute for the last 30 to 60 days. Post-aggregation, that means we're storing on the order of 30 million metrics per second in our index. And on the query side, a large portion of the traffic, something like 20 to 50 percent, powers dashboards, and we have on the order of 150,000 of those viewed every month. So now I want to look at why it might be interesting to see how M3 plays with Prometheus. As I mentioned, Prometheus is an absolutely fantastic solution that you should definitely start using if you're just beginning monitoring, and if you never really exceed a single node, it's a very good way to set up monitoring with very little effort.
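The tiling described above can be sketched as bucketing raw points into fixed-width tiles and keeping one aggregated value per tile. This is my own toy sketch, not M3's code, and it averages each tile for simplicity where real aggregation supports many functions:

```python
from collections import defaultdict

def downsample(points, tile_seconds):
    """Bucket (timestamp, value) points into fixed tiles, averaging each.
    A toy sketch of downsampling; M3's real aggregation is far richer."""
    tiles = defaultdict(list)
    for ts, value in points:
        tiles[ts - ts % tile_seconds].append(value)
    return {start: sum(vs) / len(vs) for start, vs in sorted(tiles.items())}

raw = [(0, 1.0), (4, 3.0), (11, 5.0), (19, 7.0), (61, 9.0)]
print(downsample(raw, 10))  # {0: 2.0, 10: 6.0, 60: 9.0}
```

Running the same points through a 60-second tile width yields two tiles instead of three, which is exactly the trade the longer-retention resolutions make: fewer values stored, less precision.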
But what do you do if you want something more unified that extends beyond a single Prometheus? Well, we have open sourced M3, and we shared the solution partly because we would hate to have to re-evaluate and rebuild it all again the next time we go to work at another company. It's basically pretty nice and easy to get started with. We try to make it both scale out and scale up in terms of how you run it. We run a bunch of microservices around it, but you can also run a lot of these components in embedded mode. M3DB itself is a time series database with an inverted index. It even embeds the etcd binary in the same binary when you run the database cluster, and that's actually used for cluster membership, shard routing, metric rule propagation, stuff like that. So, the high level architecture, and I want to go a little bit deeper today: the storage engine is similar in spirit to an LSM design, but tuned for cost efficiency, so compactions are kept very, very low, basically no rewriting of data all the time, because we know that constantly rewriting would kill the disk. We really try to make sure the disk is not constantly rewriting itself, so you're not being burned by compactions, and memory is reserved for the hot, recent data. We also hold the most recent data compressed in memory as time series in the active block, rather than pulling a whole bunch of raw data back off the disk. So now I want to get a little bit into how we look up these time series on disk.
So when a query comes in, once we resolve the matching series, for any one of those series we go block by block back through time: minus two hours, minus four hours, minus six, all the way back, one block at a time. When we're on a single node and we're querying a time window, say between two and four hours ago, we basically first check a bloom filter, which can tell us cheaply whether the series could exist in that block before we do a disk lookup; only then do we search the disk for the given metric in the given block. This is good because a very sparse time series won't appear in a lot of those blocks at all. We configure the bloom filter to give about a one percent false positive rate. Basically, this data structure is a bitset. The API is: you can give it a byte array, basically a string, and say "I want to add this to the set". You configure m and k, where m is the number of bits and k is the number of hashes, and it hashes that string k ways and sets bits in the appropriate places across the bitset. Then later you can ask the bloom filter, and it can tell you definitively "this string does not exist here", or "it probably exists here". I'm not going to cover it in much more detail, but it's a probabilistic set structure, and as I said, we configure it for about one percent false positives. So now, how do we actually find that metric inside the block once the bloom filter says yes? We go to the summary file and do a binary search for the nearest index entry. Imagine you're looking for the metric "cat": we do a binary search to get a starting point.
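The bloom filter described above can be sketched in a few lines. This is a toy version with assumed parameters, not M3's implementation; the real one sizes m and k to hit roughly that one percent false positive rate:

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: m bits, k hashes. Never a false negative;
    the false positive rate depends on m, k, and items added."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions by salting the hash with the round index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # All k bits set -> "probably here"; any bit clear -> definitely not.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("cpu.utilization")
print(bf.might_contain("cpu.utilization"))  # True
```

The useful property for the block lookup is the one-sided error: a "no" answer lets M3 skip the disk entirely, and a rare false "yes" only costs a wasted read.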
The binary search lands on the entry just before "cat", say "bat", and the one after, say "dog", so "cat", if it exists, is between those. We take the offset we found and literally scan forward from there; the file is laid out so pages are read in order as they're needed, which means we touch about one or two pages for any entry we're looking up. So in this example we scan until we hit "cat", and now, knowing the exact data offset, we go to the data file, seek to that offset, and verify a checksum before we trust what we find. Now let's talk a little bit about the inverted index. The inverted index uses FST segments, much the same structure Lucene uses for its index: it's a search-engine-style structure that maps a term to a value. Every time we match a label value, we can pull up the roaring bitmap for it, decode it by fetching its containers from memory, and then intersect a lot of these postings lists together. You can see in this example it's a little bit like a trie, a compressed trie: we put in the words, and here two of them both essentially map to the same value, three, so we compress the shared prefixes and suffixes together, and three in this case points to a roaring bitmap of the matching series, which is much more compact than the alternatives.
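The summary-file lookup works like bisecting into a sorted, sampled dictionary. Here is a minimal sketch with my own invented entry names and offsets, not M3's actual file format:

```python
import bisect

# A toy "summary file": every Nth index entry, sorted by series ID,
# each pointing at an offset into the full index file. Names and
# offsets are made up for illustration.
summary = [("apple", 0), ("bat", 4096), ("dog", 8192), ("zebra", 12288)]
keys = [k for k, _ in summary]

def nearest_offset(series_id):
    """Binary search for the last summary entry <= series_id;
    scanning the index forward from that offset finds the series."""
    i = bisect.bisect_right(keys, series_id) - 1
    return summary[max(i, 0)][1]

print(nearest_offset("cat"))  # 4096: start scanning from the "bat" entry
```

Because the summary only samples the index, the scan from the returned offset is short and sequential, which is why the talk says a lookup touches only a page or two.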
So for every term combination, say service equals foo, we basically store one of these postings lists. And if you have a query that says service equals foo, endpoint matches some pattern, version equals 3, then to resolve that query we need to intersect the postings for each of those. You can imagine that if you have a regex selector, we first have to expand it against the FST to find what concrete label values it actually matches, and that's exactly what the FST makes cheap. Once we've found the concrete label values, we intersect the roaring bitmaps associated with all of them, and that gives us the set of series to read. It's great to have all of this local to the database node: in a setup where you have pieces like Elasticsearch alongside a separate store, you basically have a whole bunch of round trips to resolve the names outside of your database, and you have to do cross-system coordination, which is painful; here the index lives inside the database itself. Roaring bitmaps are essentially just a more efficient bitset. They represent the 32-bit integer space by splitting it into chunks of 2 to the 16, and for every 2-to-the-16 chunk they use a different container depending on the cardinality of the integers in that range. For a really dense chunk it'll use a bitmap container, which costs about eight kilobytes of memory; for a sparse one it'll use a sorted array of 16-bit values; and there's a run-length container for contiguous runs of values that actually exist.
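The container choice can be sketched as a simple rule on each 2^16 chunk. The 4,096 cutoff is the one real roaring implementations use (an array of more than 4,096 16-bit values would exceed the 8 KB a bitmap costs), while the run-container heuristic here is a simplification of the real one:

```python
def choose_container(values):
    """Pick a roaring-style container for one 2^16 chunk of sorted,
    distinct 16-bit values. Simplified sketch of the real heuristics."""
    # Count maximal runs of consecutive integers.
    runs = 1 + sum(1 for a, b in zip(values, values[1:]) if b != a + 1)
    if runs * 4 < min(len(values) * 2, 8192):   # run container: 2 shorts/run
        return "run"
    if len(values) <= 4096:                     # array: 2 bytes per value
        return "array"
    return "bitmap"                             # dense: fixed 8 KB bitset

print(choose_container(list(range(60000))))        # "run": one long run
print(choose_container([1, 5, 900, 40000]))        # "array": sparse
print(choose_container(list(range(0, 65536, 2))))  # "bitmap": dense, no runs
```

The run container is what makes the "all series in one contiguous ID range" case nearly free: a single run compresses to two 16-bit values no matter how long it is.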
That's great for things like series IDs that all live in one contiguous range, where a single run covers the lot. I'm not going to jump into all the details, but roaring bitmaps are quite powerful and also quite easy to use: they basically give you union and intersection as convenience operators. There are a few roles you can run. You can run everything in a single process, so you're not running a separate compactor, you're not running query nodes next to everything, and you're not running a whole bunch of other roles, which is the simplest way to start; but of course you can split them out separately when you need to. Now, what do you use for Graphite today? If you're using Graphite, you might have something that looks like this. You can actually use M3 as a Graphite backend now, and basically point Carbon at M3, which is pretty cool, because a lot of people still have pretty old systems emitting Graphite metrics. I was also going to talk about the aggregation work, but we don't have enough time for that, so should I run the demo instead?
Yes? Great. If anyone would like to hold my microphone, that would be really helpful. So let me explain what we have here; it's all in the repo if you want to look at it later. First of all, the configs you have are for the DB nodes, the aggregator, Prometheus, and so on. The gist here, you can see, is that we've configured Graphite with a sum aggregation policy at ten seconds, and anything that doesn't match that is aggregated by the default aggregation policy; and for Prometheus we're going to just store everything unaggregated. So when we start this up, we first create a topology placement: you can see here we're going to place across three DB nodes, and we're going to use this many shards, since this is a sharded system. Then we're going to create an unaggregated namespace, which is for the Prometheus data, and an aggregated namespace for the Graphite data, and so on. Then we point Graphite at it, and Prometheus, which is now also running against this, and then we're going to use
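The namespace setup described in the demo looks roughly like this in M3 configuration terms. Treat this as an illustrative sketch of the shape only, with assumed names and retentions, not the exact config file shown on screen:

```yaml
# Illustrative sketch: two namespaces as described in the demo,
# one unaggregated for raw Prometheus samples, one aggregated
# for Graphite data summed into 10-second tiles. Names, retentions
# and resolutions here are assumptions, not the demo's values.
namespaces:
  - namespace: default
    type: unaggregated
    retention: 48h
  - namespace: agg_10s
    type: aggregated
    retention: 720h
    resolution: 10s
```

The split mirrors the talk's storage story: the unaggregated namespace holds short-retention raw data, while aggregated namespaces trade resolution for much longer retention.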