So, why is Filecoin so hard to use, and why is it so hard to scale things like this? I was thinking more about the scaling aspect than the UX aspect. After mulling it over for a while and going through a whole string of ideas, I came up with this special, fancy block store that in theory should both scale and provide really nice UX.

Let me quickly recap how storage in IPFS and Filecoin works. In IPFS, when you add a file, you chunk it into fixed-size pieces, by default 256 KiB. Then you put those chunks into what we call dag-pb nodes, which are just glorified protobufs that can link to other nodes using CIDs. When we construct a node, we put its hash into a multihash, which is just a fancy way to express hashes made with different algorithms. Then we put the multihash into a CID, which is a way to express a link to data with a certain encoding. That makes it possible to know how to interpret the data when further traversing the DAG, the directed acyclic graph. By default, all the links point to dag-pb nodes, so a file is just a nice balanced tree of dag-pb nodes. For directories, we get another dag-pb object per directory, and if a directory is really large you get sharding, but that doesn't really matter here. We then put those objects into what we call a blockstore, which is just a glorified KV store mapping CIDs to actual block data. We can also put those DAGs into what we call CAR files, which are just a way to store an IPLD graph, a DAG, on disk in a single file. That's important because Filecoin uses CAR files a lot.

On the Filecoin side, you have the Filecoin chain, and it's run by storage providers. Each storage provider is essentially just storing a set of sectors, and a sector is, on mainnet, either a 32 or 64 GiB block of data.
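To make the multihash-inside-a-CID layering concrete, here is a minimal sketch in Go that builds a binary CIDv1 by hand from a chunk of data, using only the standard library instead of the real go-multihash/go-cid packages. The byte values (0x12/0x20 for a sha2-256 multihash, 0x55 for the raw codec, 0x70 for dag-pb) are the registered multicodec codes; everything else here is illustrative.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// buildCIDv1 hashes the block data with sha2-256, wraps the digest
// in a multihash (code 0x12, length 0x20), and wraps that in a
// binary CIDv1: version byte, codec byte, multihash.
// 0x55 = raw, 0x70 = dag-pb.
func buildCIDv1(codec byte, data []byte) []byte {
	digest := sha256.Sum256(data)
	multihash := append([]byte{0x12, 0x20}, digest[:]...)
	return append([]byte{0x01, codec}, multihash...)
}

func main() {
	chunk := []byte("hello ipfs")   // in practice, a 256 KiB chunk
	cid := buildCIDv1(0x55, chunk)  // raw-codec CID for a leaf chunk
	fmt.Printf("%d-byte CID, prefix % x\n", len(cid), cid[:4])
}
```

The CID prefix is what tells a traverser how to interpret the block: a dag-pb codec byte means "decode me and follow my links", while raw means "I'm just bytes", which becomes important later.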
As a client, you can make what we call deals with storage providers. Deals are power-of-two-sized pieces of data stored within a sector, and they can be smaller than the sector. They can't be too small, because that gets very expensive; you probably have to make your deals at least four gigabytes in size for them to make economic sense. And they can't be too big, because you can't really split deals across sectors yet. So you kind of need to worry about sizing, which is really annoying. If your file is just the right size, that's actually fine: just create a CAR file from it and make a deal with some storage providers. But if your files are too small, you need to gather a set of files and put them into one CAR file that isn't too big, and then you need to worry about the sizing and so on. Similarly, if your file is too big, you either need to split it up before creating the IPFS DAG, or split the IPLD DAG after you've created it, which isn't easy or cheap to do either.

So yeah, aggregating data is kind of hard. You can't easily tell what size some DAG will have just by looking at its root node; you need to traverse the whole DAG. Then there are caveats like multiple DAGs you're aggregating sharing blocks, which makes the sizing really annoying because you don't really know beforehand which DAGs you're going to be aggregating. You may be dealing with graphs that are just a lot of small blocks and very expensive to traverse, or really deep, so you need to really think about how you structure your code so that it actually works. Splitting data isn't easy either: you have many, many blocks and similar problems as with aggregation, you just happen to have more data. And yeah, just going through really large graphs is hard, painful, and slow.
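To show why sizing is fiddly, here is a small sketch of the piece-size math. Filecoin pieces are power-of-two sized, and Fr32 padding expands the payload by a factor of 128/127 before it goes into a sector, so the usable payload is always a bit less than the piece size. This is a simplified model of that constraint, not the exact lotus code.

```go
package main

import "fmt"

// paddedPieceSize returns the smallest power-of-two piece size that
// can hold `payload` bytes after Fr32 padding, which turns every
// 127 bytes of payload into 128 bytes on disk.
func paddedPieceSize(payload uint64) uint64 {
	fr32 := (payload*128 + 126) / 127 // ceil(payload * 128 / 127)
	size := uint64(128)               // minimum padded piece size
	for size < fr32 {
		size *= 2
	}
	return size
}

func main() {
	for _, n := range []uint64{1000, 4 << 30, 32 << 30} {
		fmt.Printf("payload %d -> piece %d\n", n, paddedPieceSize(n))
	}
}
```

Note what happens at the top end: a full 32 GiB payload no longer fits in a 32 GiB piece after padding, so you would need the 64 GiB size, which is exactly the kind of edge you have to keep in mind when aggregating.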
And my problem with the current way of doing things is that when you create a CAR file, you're probably doing that from a blockstore, and a blockstore is usually just a KV store. Each time you create a CAR file, you're doing tens of thousands to millions of reads per CAR file, depending on how big your IPLD blocks are on average. That is very expensive, especially if you want to make multiple replicas of your files and do that a lot to scale things up. Doing what might be millions of reads per second is not easy.

So I was thinking: can we solve all of those problems at once? That sounds hard, but what if there was a way so that we didn't have to deal with splitting data, didn't have to deal with aggregating it, didn't have to deal with blockstore load when building those CAR files for deals, and didn't even have to worry about keeping the DAGs traversable, yet could still somehow retrieve the data after deals are made? That's what I was aiming for with this work.

As it turns out, all the indexing layers really only care about multihashes, not CIDs. And there is one very, very special IPLD codec called raw: it's just raw bytes, and raw blocks can't have links. So what if we just pretend that all the blocks we're storing are raw, and build some very light DAGs on top of those raw blocks that aren't really raw? Like on this slide. That makes it possible for the other parts of the Filecoin deal-storage machinery, like deal indexing and so on, to keep working. That was the core idea behind RIBS.

Eventually, this drive led to this architecture. The core part is a top-level index plus groups. Each group is just a bunch of IPLD blocks that are put into its own blockstore. There is a set of groups that are currently being written to.
And then there are just a bunch of groups that are lying around, full, and being put on Filecoin. Groups can also be offloaded fully, so we're not storing them locally; they're just stored with some storage providers. Each group is deal-sized, which puts it somewhere up to a couple of million blocks. So it's small: it's kind of cheap to keep indexed locally. But it's also big enough that the higher-level indexes are very easy to manage, and very easy to scale. There are a lot of weird-looking decisions in this design, but the aim is that it should scale in a pretty much linear way.

So I started this Kubo node. It's a normal Kubo, but it prints two weird lines: it gives me a wallet address and another web interface. If I go to that interface, it shows me some stats about the groups and the space used. So I can try to just use this Kubo. Let's say I want to add some Arch Linux mirror to it. It's doing some things; starting takes a second or two. And the speed is mostly around something like this. It could probably be somewhat faster, because some indexes are not very optimized, or not optimized at all currently. But it creates some groups, builds a virtual overlay CAR file on top of them, and then does its best to make Filecoin deals with it. Which is really nice, because I just typed in two commands and sent one FIL to some address, and it just starts putting data on Filecoin. I think that's kind of cool.

So essentially what's happening here is: when I do `ipfs add`, I have a special Kubo node running a plugin that injects this RIBS blockstore instead of the default Kubo blockstore. All the writes, and also all the reads, are directed to the RIBS blockstore. I can, for example, just stat the content with `ipfs object stat` and see some gets happen.
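The index-plus-groups layout described above can be sketched in a few lines of Go. All names and types here are made up for illustration; the real project keeps persistent per-group blockstores and rotates groups at deal size, which this toy version only hints at.

```go
package main

import "fmt"

type groupID int

// ribs models the architecture: a top-level index maps multihashes
// to groups, and each group holds its own small block map.
type ribs struct {
	index  map[string]groupID            // multihash -> owning group
	groups map[groupID]map[string][]byte // group -> (multihash -> block)
	open   groupID                       // group currently being written to
}

func newRibs() *ribs {
	return &ribs{
		index:  map[string]groupID{},
		groups: map[groupID]map[string][]byte{0: {}},
	}
}

// Put writes a block into the currently open group and records it in
// the top-level index. A real implementation would seal the open
// group and start a new one once it reaches deal size.
func (r *ribs) Put(mh string, block []byte) {
	r.groups[r.open][mh] = block
	r.index[mh] = r.open
}

// Get consults the cheap top-level index first, then reads from
// whichever group owns the block; an offloaded group would instead
// trigger a retrieval from a storage provider.
func (r *ribs) Get(mh string) ([]byte, bool) {
	g, ok := r.index[mh]
	if !ok {
		return nil, false
	}
	b, ok := r.groups[g][mh]
	return b, ok
}

func main() {
	r := newRibs()
	r.Put("mh-1", []byte("block data"))
	b, ok := r.Get("mh-1")
	fmt.Println(ok, string(b))
}
```

Because each group is bounded in size, turning a sealed group into a CAR file is a single sequential pass over that group rather than millions of random KV reads, which is where the scaling win comes from.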
At the end of the day, it's just a blockstore, really. But it happens to store data in a manner that's really efficient for making Filecoin deals. It also happens to make Filecoin deals. And on paper it should scale really well; that part I didn't test, and I'm pretty sure it needs a lot of work to actually scale, but there's some potential. So yeah, that's my weekend project, I guess.