Hi, my name is Willet. I'm a software engineer at Aiven, and this talk was originally supposed to be given by our CEO, Oskari Saarenmaa, but unfortunately he couldn't make it, so you're stuck with me. So, let's get to it.

Zstandard. It's a lossless data compression algorithm developed by Yann Collet at Facebook. It's based on LZ77. It has a really cool logo. We think it's a pretty important piece of technology. And just a quick shout-out to Yann Collet: he's a data compression expert working at Facebook, he's been an author on many, many cool projects, and he has a very cool blog. Go read it. He has made a really big difference in the industry, so thank you very much.

So, compression algorithms. What are they? How do we classify them? Well, there are the lossy ones, where you lose some data as you compress. Which kind of sort of works out for things like video, images, and audio, as long as you don't go too crazy with it, like with that cat picture. But it doesn't work that well for, say, compressing backups. There you really want to make sure that every single bit is restored afterwards. So for that, we need a lossless compression algorithm. And zlib used to be the standard thing to use for a long time. Then there are some more specialized things that are either faster but don't compress that well, or compress really well but are super slow to use. Zstandard is pretty much the best of both worlds: it offers really fast compression, even faster decompression, and it has a really great compression ratio.

So if we look at a few examples: on the right-hand side, if you take the git tree of systemd and compress it, you can see that Zstandard does pretty well. It compressed slightly better than gzip, but at much closer to LZ4 speed.
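The speed-versus-ratio trade-off described above is easy to see for yourself. A minimal sketch, using Python's standard-library zlib as a stand-in (zstd bindings are not in the stdlib; in practice you would use the `zstandard` package or the `zstd` CLI), with purely illustrative sample data, not the benchmark from the talk:

```python
# Illustrates the trade-off between compression speed and ratio by
# timing zlib at its fastest, default, and strongest levels.
import time
import zlib

# Repetitive sample data, loosely mimicking a source tree.
data = b"def handle_request(conn):\n    return conn.recv(4096)\n" * 20_000

for level in (1, 6, 9):  # fast ... default ... best ratio
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: ratio {ratio:.1f}x in {elapsed * 1000:.2f} ms")

# Lossless means the round trip must restore every single bit.
assert zlib.decompress(compressed) == data
```

Zstandard's appeal is that it escapes this trade-off to a large degree: its fast levels approach LZ4 speed while still beating gzip's ratio.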
And on the left-hand side, if you look at the null PG WAL segments: it might not make too much sense intuitively to compress nulls, but when we run tens of thousands of databases that have very infrequent writes, we end up archiving a lot of WAL segments that contain very little data. As you can see, gzip is just too slow to use, and LZ4 turns even nulls into pretty big objects, so Zstandard was clearly the winner here.

And if we look at some of the data from our production system, taken this morning: it's PostgreSQL base backups side by side, or the mean value of those, running with Snappy, which is what we used before, and then Zstandard, which is what we're using now. The results are pretty great. And it's been adopted by quite a lot of projects over the years, so other people have also noticed that it's kind of cool.

Just a few gotchas when you start using it: don't do anything stupid, like trying to decompress too much data into too small a buffer, or introducing a new algorithm to legacy clients that don't know how to deal with it.

Yeah, anyway, a bit about us. We're Aiven. We run open-source data technologies in public clouds. We're using Zstandard for transport and storage compression in various places. We're just setting up an office here in Berlin. We're hiring, obviously. And I have a very limited supply of Aiven-branded socks, so if your feet feel cold, come and talk to me. Yeah, so thank you.
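To make the first gotcha above concrete: always cap how much output you are willing to materialize when decompressing untrusted input, since a tiny compressed blob can expand enormously. A minimal sketch, again using stdlib zlib as a stand-in for zstd (libzstd's streaming API, `ZSTD_decompressStream`, gives you the same control via a bounded output buffer):

```python
# Guarding against the "decompress too much data into too small a
# buffer" gotcha by bounding the decompressed output size.
import zlib

LIMIT = 1024  # maximum bytes we are willing to materialize

# A small input that expands to far more than LIMIT bytes.
bomb = zlib.compress(b"\x00" * 1_000_000)

d = zlib.decompressobj()
out = d.decompress(bomb, LIMIT)  # returns at most LIMIT bytes
if d.unconsumed_tail:            # leftover input: output hit the cap
    print(f"input would expand past {LIMIT} bytes; refusing to continue")
```

The same principle applies to the second gotcha: negotiate or version the algorithm choice so that legacy clients never receive a Zstandard stream they cannot decode.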