I just wanted to welcome everyone to this month's Mother of All Demo Days meeting. Today we have three awesome demos from the IPFS Stewards, Lotus, and drand. If you didn't know what Mother of All Demo Days is: once every month the Starfleet teams get together to share progress on their projects in the format of a demo, hence Mother of All Demo Days. Let's get started with Jorropo from the IPFS Stewards.

So I'm going to present Rapide, which is a new experimental download client, and the goal is to optimize for speed. The main problem right now is that we use Bitswap, the Go Bitswap implementation, which works, but it's not very fast. I have quite fast internet and it's downloading at 3 MB per second. Just so you know, the file I'll be testing with is dist.ipfs.io, because it's a pretty big folder, 50 GB, and lots and lots of people download IPFS itself from it. So it should download very fast, because there are many peers in the network that have the file. And yes, with Bitswap it's downloading very slowly.

So I'll show you a demo of the new program. It's not downloading very fast right now, only around 75 MB per second, because my new client is using gateways as the backend and the gateways are a bit overloaded right now. That means the speed right now, which is already quite a bit faster than Go Bitswap, is actually limited by the gateways I'm using.

The main way Rapide achieves this speed is by parallelizing the transfers. Instead of downloading blocks one by one, or 32 by 32, Rapide divides the graph into multiple parts, forming a tree that follows the links between the various blocks; every node in that tree is a block of the graph you want to download. Then an algorithm partitions those nodes and hands the parts out as tasks to various peers on the network. I won't go into detail about how the algorithm works; if you want to know more, I gave a presentation about it to the Move the Bytes working group, and you can find the link right here.

Let's talk about performance, which is very good, I would say. The expected throughput is basically the sum of the throughput of all the peers you have; except in edge cases where you have more peers than downloadable blocks in the graph, you should expect roughly the sum of all their throughputs. That's one of the main reasons I can reach 2.5 gigabits per second, which is the limit of my fiber: it downloads from many, many peers in parallel. For the time to first byte: at the start we don't have any peers assigned work yet, we only know one block. So what Rapide does is download that same first block from everyone, because it has nothing better to do. The time to first byte is therefore the minimum across all the downloaders, which is still quite good. Nothing incredible, but quite good if you have multiple gateways of varying quality. For CPU efficiency, the usage is around 100 to 500 nanoseconds per block, and that is only counting Rapide itself, not the protocol underneath, which is extremely good. In theory you should be able to push terabytes per second; obviously you don't do that in practice. The point is that Rapide's own bookkeeping is so fast that all you care about optimizing is the actual underlying data transfer. The memory usage depends on the graph: the bigger the graph, the more memory it uses, because we need to keep the tree. The network efficiency is also rather good: the wider the graph is, the better, because we mostly avoid the cases where two peers end up downloading the same block.
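To make the time-to-first-byte trick concrete, here is a minimal sketch, not Rapide's actual code, of racing the same block from several gateways and keeping whichever answer arrives first; the gateway URLs and CID are placeholders, and the `?format=raw` block endpoint is just one way an IPFS HTTP gateway can hand back a single raw block.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchBlock downloads a single raw block for the given CID from one gateway.
func fetchBlock(ctx context.Context, gateway, cid string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, gateway+"/ipfs/"+cid+"?format=raw", nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("%s: unexpected status %s", gateway, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// raceBlock asks every gateway for the same block and returns the first
// successful answer, so the time to first byte is the minimum over all peers.
func raceBlock(ctx context.Context, gateways []string, cid string) ([]byte, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // abort the losers once a winner arrives

	type result struct {
		data []byte
		err  error
	}
	results := make(chan result, len(gateways))
	for _, gw := range gateways {
		go func(gw string) {
			data, err := fetchBlock(ctx, gw, cid)
			results <- result{data, err}
		}(gw)
	}

	var lastErr error
	for range gateways {
		r := <-results
		if r.err == nil {
			return r.data, nil
		}
		lastErr = r.err
	}
	return nil, lastErr
}

func main() {
	// Placeholder gateways and CID, purely for illustration.
	gateways := []string{"https://ipfs.io", "https://dweb.link"}
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	block, err := raceBlock(ctx, gateways, "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi")
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %d bytes\n", len(block))
}
```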
Right now, Rapide only works with gateways using CAR responses, and the goal is to make it support more protocols. I haven't worked on this yet, but basically we need slightly more complex logic that keeps track of the outstanding work, because we have this notion of a metric, which is how many peers are attempting to download some part of the graph. I will work on this in the near future. Graphsync would be very easy to add, because we can reuse the same logic as for the CAR gateways; Graphsync can return a CAR just like a gateway, so it's a pretty close protocol. They are both server-driven: I send one request and the server sends me a bunch of blocks without me having to keep asking for more, so I can start a request somewhere and then stop it when I'm unhappy. The main reason it's not shipping in Kubo right now is that it lacks critical features. Content routing: currently I just have a hard-coded list of gateways; we'll need to use the DHT and IPNI. Then Bitswap support, which is very important for Kubo, because almost all the content we download today comes in over Bitswap. And a small tweak to the algorithm: right now the algorithm assumes that every peer has all the content, which is true for a gateway, because even if the gateway doesn't have a block it's going to fetch it for you. So I need something that is able to remember that a given peer doesn't have a given block and route that work elsewhere. And that's all, thanks so much.

Up next, we have Magik from Lotus. So, why is Filecoin so hard to use, and also why is it so hard to scale things on top of it? I was thinking more about the scaling aspect than the UX aspect. I was thinking about it for a while, had a whole string of ideas, and eventually came up with this special fancy blockstore that should both scale and also provide really nice UX. So let me quickly recap how storage in the IPFS and Filecoin universe works. In IPFS, when you add a file, you chunk it into same-size pieces, usually 256 kilobytes by default. Then you put those chunks into what we call dag-pb nodes, which are just glorified protobufs that can link to other protobufs using CIDs. When we construct the protobuf, we put its hash into a multihash, which is just a fancy way to express hashes computed with different algorithms, and then we put the multihash into a CID, which is the way to express a link to data with a certain encoding. That makes it possible to know how to interpret the data and further traverse the DAG, that specific graph. By default, all the links point to dag-pb nodes, so a file is just a nicely balanced tree of dag-pb nodes. For directories, we just get another dag-pb object per directory, unless the directory is really large, in which case you get HAMTs, but that doesn't really matter here. Then we put those objects into what we call a blockstore, which is just a glorified key-value store mapping CIDs to actual block data. We can also put those DAGs into what we call CAR files, which are just a way to store an IPLD graph, a DAG, on disk in a single file. That's important, because Filecoin uses those CAR files a lot.
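As a small aside on that chunk-to-CID step, here is a sketch using the go-cid and go-multihash libraries (my own example, not from the talk): hash a chunk into a multihash, then wrap the multihash in a CIDv1 carrying the raw codec. Building the dag-pb nodes that link chunks together is left out.

```go
package main

import (
	"fmt"

	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

// chunkToCID turns one chunk of a file into a CID:
// hash the bytes into a multihash, then wrap the multihash in a CIDv1
// that records the codec of the data (raw bytes here).
func chunkToCID(chunk []byte) (cid.Cid, error) {
	// SHA2-256 multihash of the chunk; -1 means "default digest length".
	h, err := mh.Sum(chunk, mh.SHA2_256, -1)
	if err != nil {
		return cid.Undef, err
	}
	// cid.Raw marks the block as raw bytes with no links.
	return cid.NewCidV1(cid.Raw, h), nil
}

func main() {
	chunk := []byte("pretend this is a 256 KiB chunk of some file")
	c, err := chunkToCID(chunk)
	if err != nil {
		panic(err)
	}
	fmt.Println(c) // a CIDv1 with the raw codec
}
```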
So, storage in Filecoin: you have the Filecoin chain, and it's run by storage providers. Each storage provider is essentially maintaining a set of sectors, and a sector is, on mainnet, either a 32 or 64 gigabyte block of data. As a client you can then make what we call deals with miners, and deals are power-of-two-sized pieces of data that are stored within a sector. They can be smaller than the sector, but they cannot be too small, because that gets very expensive; you pretty much have to make your deals at least around four gigabytes in size for them to make economic sense. And they cannot be too big, because we cannot really split deals across sectors yet. So you need to worry about sizing, which is really annoying. If your files happen to be just the right size, that's actually fine: just create a CAR file from your file and make a deal with some storage providers. But if your files are too small, you need to gather a set of files and put them into one CAR file that is not too big, and then you might need to worry about the sizing and so on. Similarly, if your file is too big, you need to either split the file up before creating the IPFS DAG, or split the IPFS/IPLD DAG after you've created it, which is also not really easy to do.

So yeah, aggregating data is kind of hard. You cannot easily tell what size some DAG will have just by looking at its root node; you need to traverse the whole DAG. Then there are some caveats: multiple different DAGs that you are aggregating can share blocks, and that makes the sizing really annoying, because you don't know beforehand which DAGs you're going to be aggregating. You may also be dealing with graphs that are just a lot of small blocks and are very expensive to traverse, like really deep ones, so you need to really think about how you structure the code in a way that actually works. Splitting data is also not easy: you have many, many blocks and run into similar problems as with aggregation, you just happen to have more data. And yeah, just going through really large graphs is hard, painful, and slow.

Another problem with the current way of doing things is that when you create a CAR file, you are probably doing it from a blockstore. Now, a blockstore is usually just a KV store, and each time you create a CAR file you're doing something like tens of thousands to millions of reads per CAR file, depending on how big your IPLD blocks are on average. That is very expensive, especially if you want to make multiple replicas of your files and do that a lot to scale up. Doing possibly millions of reads per second is not easy.
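To put a rough number on that read amplification, here is a back-of-the-envelope sketch with my own illustrative block sizes, not figures from the talk: a 32 GiB deal-sized CAR built from 256 KiB blocks already means over a hundred thousand blockstore reads, and smaller blocks push that into the millions.

```go
package main

import "fmt"

func main() {
	const sectorSize = 32 << 30 // 32 GiB sector (Filecoin mainnet also allows 64 GiB)

	// Average IPLD block sizes to compare; purely illustrative.
	for _, blockSize := range []int64{256 << 10, 64 << 10, 4 << 10} {
		reads := sectorSize / blockSize // roughly one blockstore read per block written to the CAR
		fmt.Printf("avg block %6d KiB -> ~%d reads per deal-sized CAR\n", blockSize>>10, reads)
	}
	// 256 KiB blocks: ~131 thousand reads; 4 KiB blocks: ~8.4 million reads.
}
```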
I was thinking: can we solve all of those problems at once? It seemed maybe hard, but what if there were a way so that we didn't have to deal with splitting the data, didn't have to deal with aggregating it, didn't have to deal with blockstore load when building those CAR files for deals, and didn't even have to worry about the DAGs being traversable, and still somehow be able to retrieve the data after deals are made, at least over Bitswap? As it turns out, all the indexes, and really all the layers, only care about multihashes, not CIDs. And there is one very, very special IPLD codec called raw, which is just raw bytes; raw blocks can't have links.

So what if we just pretend that all the blocks we are storing are raw, and build some very, very light DAG on top of those raw blocks that are not really raw? That light DAG just makes it possible for the other parts of the Filecoin deal storage machinery to work, like deal indexing and so on. That was the core idea behind RIBS, and eventually it arrived at this architecture. The core part is a top-level index plus groups: each group is just a bunch of IPLD blocks that are put into this blockstore. There is a set of groups that are currently being written to, and then there is a bunch of groups that are lying around full and being put into Filecoin deals. Groups can also be fully offloaded, so we're not storing them locally at all; they're just stored with some storage providers. Each group is deal-sized, somewhere from a couple of files up to a couple of million blocks, but it's small enough that it's cheap to index locally, and it's also big enough that the higher-level indexes stay very easy to manage. Groups are also very easy to scale. There are a lot of weird-looking decisions in this system, but the aim is that it should be fully scalable in pretty much every way.

So I started this Kubo node, and it's a normal Kubo node, but it prints two weird lines: it gives me a wallet address and it gives me another web interface. If I go to this interface, it shows me some stats, like groups and the amount of space used. So I can try to just use this Kubo node. Let's say I want to add some Arch Linux mirror to it. It's doing some things and starting to write; it takes a second or two. The speed is mostly limited by something like my disk speed, and it could probably be somewhat faster, because some indexes are not very optimized, or not optimized at all, currently. It creates some groups and builds the virtual overlay DAG on top of them, and once a group is complete, it starts making Filecoin deals with it. This is really nice, because I really just typed in two commands and sent some FIL to an address, and it just streams into Filecoin. I think it's kind of cool. So essentially what's happening is that when I do ipfs add, I have a special Kubo node running a plugin that injects this RIBS blockstore instead of the default Kubo blockstore, so all the writes and also all the reads are redirected to the RIBS blockstore. I can still pin content, list objects, and so on, and you can see the reads and writes happen as usual. It really is just a blockstore, but it happens to store data in a manner that's really efficient for making Filecoin deals, and it also happens to make Filecoin deals. And, on paper, it should scale really well; that part I didn't test, and I'm pretty sure it needs a lot of work to actually scale, but there's some potential. So yeah, that's RIBS.
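Here is a deliberately simplified sketch of that grouping idea, with made-up types rather than the real RIBS or Kubo interfaces: writes land in the currently open group, a top-level index remembers which group holds which multihash, and a group that reaches deal size gets sealed so it can go into a Filecoin deal.

```go
package main

import (
	"errors"
	"fmt"
)

// groupID identifies one deal-sized bundle of blocks. All types here are
// hypothetical stand-ins, not the real RIBS data structures.
type groupID int

type group struct {
	id     groupID
	blocks map[string][]byte // multihash (as string key) -> raw block bytes
	size   int64
	sealed bool // sealed groups are ready to be put into Filecoin deals
}

type ribsLikeStore struct {
	targetGroupSize int64
	topLevel        map[string]groupID // top-level index: multihash -> group
	groups          map[groupID]*group
	open            *group // group currently being written to
}

func newStore(targetGroupSize int64) *ribsLikeStore {
	s := &ribsLikeStore{
		targetGroupSize: targetGroupSize,
		topLevel:        map[string]groupID{},
		groups:          map[groupID]*group{},
	}
	s.rotate()
	return s
}

// rotate seals the open group (if any) and starts a new one.
func (s *ribsLikeStore) rotate() {
	if s.open != nil {
		s.open.sealed = true // from here it would be aggregated into a deal
	}
	g := &group{id: groupID(len(s.groups)), blocks: map[string][]byte{}}
	s.groups[g.id] = g
	s.open = g
}

// Put writes a block into the open group and records it in the top-level index.
func (s *ribsLikeStore) Put(mh string, data []byte) {
	s.open.blocks[mh] = data
	s.open.size += int64(len(data))
	s.topLevel[mh] = s.open.id
	if s.open.size >= s.targetGroupSize {
		s.rotate()
	}
}

// Get looks up the owning group in the top-level index, then reads the block.
func (s *ribsLikeStore) Get(mh string) ([]byte, error) {
	gid, ok := s.topLevel[mh]
	if !ok {
		return nil, errors.New("not found")
	}
	return s.groups[gid].blocks[mh], nil
}

func main() {
	s := newStore(1 << 20) // tiny 1 MiB "deal size" just for the demo
	s.Put("mh-1", make([]byte, 600_000))
	s.Put("mh-2", make([]byte, 600_000)) // pushes the first group past its target size
	b, _ := s.Get("mh-1")
	fmt.Println(len(b), "bytes, groups:", len(s.groups))
}
```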
Awesome, thank you so much. We can move on to Yolan. So I'm going to present the new drand v1.5 features. These are not on mainnet yet; for now they are being tested. Maybe a quick reminder first: drand stands for distributed randomness, and it is open source software we've been developing, just like a lot of other open source software we develop. drand is used by the League of Entropy to run a free public randomness service, so that anybody can query publicly verifiable randomness from the League of Entropy network running drand. Just like you have DNS servers, NTP servers, certificate transparency logs and so on, you have drand, which can provide random beacons. The nice thing about drand is that it's fully decentralized: you only need a threshold of nodes to be working as intended for it to work properly and for the randomness to be safe and unpredictable, as well as bias-resistant. Another very nice thing is that it's verifiable. You can take the drand beacons, and there is a very easy way of verifying them, by just checking a BLS signature against a given public key for the League of Entropy. If that signature verifies, you can be sure the beacon is valid and has been properly generated by the League of Entropy, with a threshold of nodes collaborating to produce it. So those are a few nice properties. As I said, it's open source, written in Go, and it uses a lot of fancy cryptography under the hood, such as verifiable secret sharing and distributed key generation.

A pretty important thing here is that it's based on BLS signatures, which stands for Boneh-Lynn-Shacham signatures. More precisely, these BLS signatures are instantiated on the BLS12-381 elliptic curve, which is a pairing-friendly curve. That is quite important for what comes next, because BLS12-381 is an elliptic curve where you have two groups and a pairing operation from these two groups, G1 and G2, onto a target group GT. An important thing about these groups is that G1 is a regular group over a 381-bit field, but G2 is a bit bigger: it lives over an extension field of degree two, so its points have two coordinates of 381 bits each. GT is much bigger still: it sits over a degree-twelve extension field, so roughly twelve coordinates. But that is not too important, because we never store GT values; we always store G1 or G2 values.

Currently drand works by having its public key on G1, which means the public key for the group is 48 bytes, and signatures for each beacon are on G2, so each signature is 96 bytes. But there is actually no good reason for it to be like that; it just means we have pretty big signatures and small public keys. Usually people do that because they have a lot of transactions being signed by many public keys and they want to include them all in a block: you can aggregate all the signatures into a single signature pretty easily with BLS, but you cannot do the same with the public keys, because you need to know which address corresponds to which transaction to do the verification. So what people usually do is have short public keys and big signatures, because at the end of the day they will aggregate all the signatures into a single one. But that is not how drand works: each beacon has its own signature, and we never aggregate signatures in drand. So for drand it would make more sense to have one big public key and then a lot of small signatures, one for each beacon. And this is exactly what we've done.

So here is the anatomy of a drand beacon as it was previously. The signature, on G2, is encoded as a compressed point in hexadecimal, so it takes 192 characters, which is pretty big. It even sometimes stored the previous signature, which is also 192 characters. Then we also have the randomness value, which can be derived from the signature. So in theory we could just give the round number and the signature, and that should be enough for anybody to verify a drand beacon and use it to produce the randomness they need. That was how it looked previously.
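To make that anatomy concrete, here is a standard-library-only sketch of how the fields relate in the chained scheme as I understand it: the message the network BLS-signs for a round hashes the previous signature together with the round number, and the published randomness is just a hash of the signature. The actual BLS verification against the group public key needs a pairing library and is left out.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

// chainedMessage is the value the network BLS-signs for a given round in the
// chained scheme: H(previousSignature || round), with the round encoded as
// 8 big-endian bytes. (The unchained scheme drops the previous signature.)
func chainedMessage(prevSig []byte, round uint64) []byte {
	h := sha256.New()
	h.Write(prevSig)
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], round)
	h.Write(buf[:])
	return h.Sum(nil)
}

// randomness is derived from the signature, which is why a beacon only
// really needs the round number and the signature.
func randomness(sig []byte) []byte {
	r := sha256.Sum256(sig)
	return r[:]
}

func main() {
	// Fake 96-byte "G2 signatures" just to show the shapes involved.
	prevSig := make([]byte, 96)
	sig := make([]byte, 96)

	msg := chainedMessage(prevSig, 2_000_000)
	fmt.Println("message to sign:", hex.EncodeToString(msg))
	fmt.Println("randomness     :", hex.EncodeToString(randomness(sig)))
	// A real verifier would now check the BLS signature over msg against the
	// League of Entropy group public key using a BLS12-381 pairing library.
}
```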
Now, with our new scheme using G1 for signatures, everything is much more compact, because the signatures only take 48 bytes, which is only 96 characters when hex-encoded. That is much nicer for our HTTP relays, because it takes less bandwidth to transmit to the clients, and it's also much nicer to store. It means we could save at least 37 percent of space and bandwidth just by switching to the new scheme, with G1 for signatures and G2 for public keys.

But we didn't just stop there, because using swapped groups also has other implications that will be very interesting to many people: verifying a BLS signature on G2 is quite expensive. As you can see here, the gas cost, estimated as if it were running on Ethereum, of verifying a signature on G2 would be roughly 226,000 gas. With the swapped groups, since G1 signatures are much smaller, they are faster to verify on Ethereum, and on any blockchain that supports BLS, actually. Now it would only cost roughly 156,000 gas. These are estimates, because Ethereum hasn't shipped native BLS support yet, so take them with a grain of salt.

We also didn't stop there, because we noticed that if we were going to launch a new network for drand to use this new signature scheme anyway, we could do more things at the same time. One piece of feedback we had from drand users was that 30 seconds is long. The Filecoin block time is exactly 30 seconds, and that's fine for most Filecoin users, but people running applications that need verifiable randomness more frequently were a bit constrained by the 30-second frequency of the drand network. If you are, I don't know, a casino, you might want to be able to run a new draw every three or five seconds. So we're planning on increasing the frequency of the new main network to every three or five seconds. That increased frequency means we would need to store maybe ten times more beacons. So we looked at how we were storing beacons, and we realized it was not very efficient, because we were storing them just like we were serving them on the HTTP relays: in hexadecimal encoding, which is twice the size of plain binary encoding. We were also including the previous signature in each beacon, but that previous signature is already stored, since it's just the previous beacon's signature; you could just query the previous beacon and you would get it. So we were storing every signature twice, and that is not great, because it's storage we don't need to waste. And everything was being stored in a BoltDB file. BoltDB is a plain key-value database written in Go where everything is stored in a single file, and one thing we noticed is that BoltDB doesn't scale super well and gets pretty slow when you have a big file. If you have a database weighing a few gigabytes, you are talking about hundreds of milliseconds or even seconds to perform one get or put operation, which is pretty slow, and the bigger the database, the slower it gets.
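As a back-of-the-envelope check on those numbers, with my own illustrative field sizes: a beacon payload of a 32-byte randomness value plus a signature shrinks from 32 + 96 to 32 + 48 bytes when the signature moves to G1, roughly 37 percent, and dropping the duplicated, hex-encoded previous signature on disk saves considerably more on top of that.

```go
package main

import "fmt"

func main() {
	const (
		randomnessLen = 32 // sha256 output
		sigG2         = 96 // compressed G2 point
		sigG1         = 48 // compressed G1 point
	)

	// Beacon payload served over HTTP: randomness + signature.
	oldPayload := randomnessLen + sigG2
	newPayload := randomnessLen + sigG1
	fmt.Printf("payload: %d -> %d bytes (%.1f%% smaller)\n",
		oldPayload, newPayload, 100*float64(oldPayload-newPayload)/float64(oldPayload))

	// On-disk record before the rework: a hex-encoded signature plus the
	// duplicated, hex-encoded previous signature. After: one binary G1 signature.
	oldStored := 2 * (2 * sigG2)
	newStored := sigG1
	fmt.Printf("stored per beacon: %d -> %d bytes\n", oldStored, newStored)
	// Real records also keep the round, the randomness, and database overhead,
	// so the overall shrink observed in practice is smaller than this ratio.
}
```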
We also had people asking us if they could store the beacons in a MySQL or PostgreSQL database. So we decided to revamp the whole storage backend for drand. We optimized the existing BoltDB backend: we now use the binary representation for the signatures, so that's as compact as possible, and we've tweaked our BoltDB settings to use a fill percent of 100 percent. BoltDB uses a B+ tree implementation, and since drand only appends new data at the end, we never need to insert data in the middle of the database, so we can fill the pages as compactly as we want. That has given us a very nice performance increase: storing or getting data from our BoltDB backend is now almost 100 times faster, which is great. We're talking about tens of milliseconds now instead of hundreds of milliseconds or even seconds. It also allowed us to significantly shrink the existing database, by a factor of almost five, going under a gigabyte for all the data drand has been storing for the past two and a half years.

And since we were working on storage anyway, we thought we could add a few new backends, and that's what we did. We added a PostgreSQL backend, which allows a node to connect to a PostgreSQL database and store the beacons there, so the node doesn't need to care about backups and data integrity; the database administrator takes care of that. We also added an in-memory backend, because at the end of the day we want as many partners as possible to participate in the League of Entropy, and we don't want partners to leave the League of Entropy because it's taking too much disk or whatever. The most important thing is to have a lot of people participating in the threshold network, to increase the trust we can put in it. The idea behind drand's threshold network is that you trust that there is never a threshold amount of malicious nodes, and that the current members are not colluding. So the more members we have, and if you can see a couple of names you know in there, the more trust people will put in the system, and that is a win for us. The in-memory backend lets you skip storing beacons to disk entirely; you would just keep maybe the last one or two thousand beacons, which are anyway the ones people are most interested in using and the ones that get queried from the relays most often.
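For reference, here is a minimal sketch of that fill-percent tweak using the go.etcd.io/bbolt library; the bucket name and key layout are made up, not drand's actual schema. For an append-only workload, raising Bucket.FillPercent from the default 0.5 to 1.0 lets the B+ tree pack pages fully instead of leaving room for future inserts.

```go
package main

import (
	"encoding/binary"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("beacons.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Store a beacon's signature keyed by its round number (8-byte big-endian),
	// so inserts always land at the end of the key space.
	storeBeacon := func(round uint64, sig []byte) error {
		return db.Update(func(tx *bolt.Tx) error {
			b, err := tx.CreateBucketIfNotExists([]byte("beacons"))
			if err != nil {
				return err
			}
			// Append-only workload: pack B+ tree pages completely
			// instead of the default 50% split point.
			b.FillPercent = 1.0

			var key [8]byte
			binary.BigEndian.PutUint64(key[:], round)
			return b.Put(key[:], sig)
		})
	}

	if err := storeBeacon(1, make([]byte, 48)); err != nil { // 48-byte G1 signature
		log.Fatal(err)
	}
}
```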
And that's it; that's what we are launching in drand v1.5. The League of Entropy is actually going to be launching a new main network using the new cryptographic scheme as well as the new storage backends on the 1st of March. This is very cool, because it will enable us to do timelock encryption on drand mainnet, which is a feature we're very excited about here, and one we are hoping to bring to the FVM and to many more users very soon. Finally, you can check the blog if you want to learn more: we will be publishing blog posts about the new cryptographic scheme, the storage backends, timelock encryption and so on in the upcoming weeks.

If you want a quick demo, I can show you the testnet deployment. As I said, we launched this on testnet. We have the HTTP relay here, and you can see we have three chains. If I go to the old chain, we can see its scheme is pedersen-bls-chained. If we look at the beacons, say the latest beacon, we have a relatively short randomness value encoded as hex, and then a pretty big signature as well as a pretty big previous signature. If we look at the new network, it's using the unchained BLS-on-G1 scheme instead, so it has a much bigger public key than the old network if we compare them. But if we go to the latest beacon, you can see we're no longer providing the previous signature, because this is an unchained scheme, so we don't need the previous signature to verify the beacons, and the signature itself is also much smaller. And that's it, I guess, for my demo. Thank you for watching. If you are in Tokyo next month for Real World Crypto, a quick shout-out: we are organizing a Randomness Summit. So if you attend Real World Crypto, don't hesitate to check out the Randomness Summit link; it's a free event, and it will be in the same venue as Real World Crypto. And yeah, I'm looking forward to your questions, or to seeing you there.

Thanks, Yolan. Loved all the new updates. But yeah, thanks everyone for attending Mother of All Demo Days, and thank you to all of our presenters, appreciate it. If you're interested in demoing, our next demo day will be March 16th. Have a great rest of your day, and if you have any questions, you know where to find all the presenters. All right, thanks guys.