Thank you. So hi everyone, I'm glad to see you at this talk today. I hope you're all having a great time; I just learned a bunch of stuff that I plan to implement. So I'm going to talk about some project updates today. But before that, a quick introduction: I'm Saswata Mukherjee, a software engineer at Red Hat, particularly working on monitoring platforms based around Thanos. I'm a Thanos maintainer and was a GSoC mentee, and I also help maintain other projects like Observatorium and mdox. You can find me as saswatamcode pretty much anywhere.

So it makes sense to talk about what the Thanos community was up to over the last year, especially here at ThanosCon, and talk about the impressive features that we've built with the community, the improvements that we have made, and how you can make the most out of all of these. I'll highlight some of the most important ones, as I have about ten minutes, but feel free to explore the changelog and commit history to get a comprehensive view of the work that was put in.

So starting with the Thanos Querier: this has been the focus of a lot of work this past year. The first thing is that the distributed engine has now been released. This is a major chunk of work that was undertaken by Filip, who is one of our maintainers, and it involves creating a gRPC Query API, creating a distributed engine interface in our PromQL engine, and then tying it all together in Thanos. Normally a querier would need to pull in all the data from stores before actually evaluating a query. But with distributed execution, a query can be transformed into subqueries that can be delegated to leaf or child queriers in a sort of distributed architecture, like we saw. So a query is transformed, and that can drastically reduce the number of series that each single querier has to process, thereby making your PromQL query that much easier to process. The query is actually rewritten by this
distributed execution optimizer in our PromQL engine and its associated interfaces, and this feature can now be enabled in Thanos using the --query.mode flag. The distributed execution mode is ideal for setups with independent queriers that are federated by a central querier, where you need to query long-range data over multiple different sources. So please try it out and let us know if you see any particular issues.

Next, we introduced an SQL-like analysis and query explanation feature, by working with some of our mentees actually, on the Thanos PromQL engine. What this allows you to do is visualize how the query engine is actually processing your query, and with what operators, so that you can choose to optimize your queries as you wish. By clicking the Explain button, you get the operator tree of your particular query. You can then check the Analyze checkbox, hit Execute, and you'd get back the same tree, but this time it would be a bit different: it would be decorated with the amount of time it took for each of those operators to actually do their work. This just gives you a deeper insight into where your query is spending most of its time. We hope that these can be used in a flow like this, where you sort of optimize with the query explanation until you're satisfied, then execute and get the analysis, and maybe optimize even further if that's even possible.

We also have a few dedicated updates for the Thanos PromQL engine in particular.
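As a rough sketch of what the distributed mode described above might look like (this example is not from the talk; the endpoint addresses are placeholders), a central querier federating two leaf queriers could be started like this:

```shell
# Hypothetical central Thanos Querier federating two leaf queriers.
# Distributed mode requires the Thanos PromQL engine, so both flags
# are set; the leaf addresses below are made up for illustration.
thanos query \
  --query.promql-engine=thanos \
  --query.mode=distributed \
  --endpoint=leaf-querier-eu.example.com:10901 \
  --endpoint=leaf-querier-us.example.com:10901
```

The central querier then rewrites an incoming query into subqueries that each leaf evaluates locally, so mostly pre-aggregated results, rather than raw series, travel to the central querier.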
It now supports more than 95% of PromQL expressions, so it is already a very nice and viable replacement for the Prometheus engine in the context of Thanos. We have also added an HTTP parameter for dynamically switching between the Prometheus and Thanos engines on the fly in the Thanos Querier, and this just makes it easy to query with both and get results on the fly, compare speed, and sometimes even compare correctness, and so on. We've also added a checkbox that allows you to get the span ID, or the tracing ID, of a particular PromQL query run, so that you can find the trace for your query in a tracing tool of your choice.

Lastly, we also introduced the notion of query tenancy natively in Thanos. So you can fire PromQL queries at either the Query Frontend or the Querier with a sort of tenant header, and then get back data for only the tenant you are looking for. It's important to know that Thanos doesn't really support authentication or authorization features, as that is kind of out of scope, but you can pair it with other OSS tools to get authorized tenancy on the Thanos Querier. And as we are closely knit with the Prometheus community, we actually leveraged prom-label-proxy and imported it into Thanos to enable this feature. I don't want to spoil it any more, since my colleagues will actually be delivering a talk about this shortly, right here at ThanosCon.

Now moving on to another component: Receive. We introduced a feature for availability-zone-aware replication. This feature allows you to configure your hashring with nodes annotated with the AZs or the regions that they belong to, and this gets factored into our Ketama hashring algorithm. We ensure that remote write requests get forwarded and replicated evenly across particular zones, and that should ultimately help your ingestion availability, or even disaster recovery scenarios. This is still an alpha feature, so there might be bugs.
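As a sketch of the AZ-aware Receive setup described above (the addresses and zone names here are made up for illustration), the hashring file lets each endpoint declare the availability zone it lives in, and the Ketama algorithm then spreads replicas across zones:

```shell
# Hypothetical hashring file: each Receive endpoint annotated with its AZ.
cat > hashrings.json <<'EOF'
[
  {
    "endpoints": [
      { "address": "receive-0.example.com:10901", "az": "zone-a" },
      { "address": "receive-1.example.com:10901", "az": "zone-b" },
      { "address": "receive-2.example.com:10901", "az": "zone-c" }
    ]
  }
]
EOF

# AZ-awareness applies to the ketama hashring algorithm; with a
# replication factor of 3, each series lands in all three zones.
thanos receive \
  --receive.hashrings-file=hashrings.json \
  --receive.hashrings-algorithm=ketama \
  --receive.replication-factor=3
```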
So feel free to try it at your own risk and report back to us. We also added support for tenant-specific external labels in Receive hashrings, along with a mentee of ours. With this, you can add external labels to hashrings for particular tenants, which would then add those external labels to all of the aforementioned tenants' TSDBs. This is really useful when trying to partition and select tenant blocks from Receive arbitrarily, such as querying tenants that share the same attribute in your context, and so on.

And finally, we also introduced a feature for clock-skew protection. There are user reports, particularly from companies around telco and so on, where due to some of their nodes getting clock-skewed, they get samples that are far too into the future, and this disrupts ingestion and often causes gaps in data, as samples with correct timestamps can no longer be ingested because you already have a sample from the future. So you can set a flag with a particular time window as a threshold, which ensures that you don't end up ingesting future-timestamped samples beyond this threshold; you can then still ingest your normal metrics and get rid of the faulty ones.

Finally, moving on to the Store Gateway, we have a new feature to enable selective index caching. In the Store Gateway right now we cache three things: we cache series, we cache postings, and now we also cache expanded postings. What you can do is use the enabled_items configuration option to choose to only cache certain items in the index cache, to optimize for situations better suited to your needs. So let's say you have some sort of memory-constrained environment: you can choose to only cache expanded postings, maybe, instead of everything. You can also set up different types of caches to serve different items. And maybe, as some future work:
We'd have tiered caching, so that you can add multiple caches to the Store Gateway. We also added an option to allow specifying time-to-lives, or TTLs, for items within remote caches like Redis and Memcached. So things like postings, series, and expanded postings can be retained for longer, to benefit repetitive or longer-range queries.

There is a lot more that we worked on with the community to make Thanos the best it can be. A ton of improvements and features went into the latest releases, many of which were from first-time contributors, mentees, and people who are looking to learn and get involved. So definitely feel free to try out all these features, see what works best for you, and explore. And we want your feedback and ideas about how you would like the future of Thanos to look. Do you have some ideas, feature requests, or even cool integrations that you would like to see happen upstream? Then just grab any of us at this conference, or visit the project kiosk (I think it's P18 or something) and talk to us about it. We are all yours, and we'd love to note these down and maybe work on them with you. Thank you.

Okay, thank you everyone. Thank you. So many features, yeah, from the community.