Let me welcome everybody to this month's Mother of All Demo Days meeting. I'm excited to say that we're stacked today, with six demos from the ConsensusLab, CryptoNet, ProbeLab, Lotus, and FVM teams. Łukasz, you can go first.

Yeah, this is a data explorer. What it makes possible is that you can go and see the miners that are present in the market actor. It shows you which ones it can connect to and which ones it cannot. When you click on a miner, it shows you the list of deals, in a not-so-pretty way for now, but you can look at the deals. You can click on a deal and it makes a retrieval, and it's quick, and it shows a summary. So this is a directory. Let's go into this deal. If it's a directory, you can look at what's in it, and here there's one subdirectory. So let's go see what is stored on Filecoin. So it's an image of some sort, a photo, and here a different image. Every single click is a retrieval, and it works. Also, those images are loaded via raw links straight from Graphsync. With some images you may be able to see that they load from top to bottom — that is streaming from Graphsync straight into the browser. I'm not sure if that comes through on Zoom, but it has fairly low latency. Okay. So, other things this can do: this is just a UnixFS directory view, but you might sometimes want to see the IPLD view of it. This is just dumping the internal go-ipld-prime node representation, and here it's just a dag-pb node. And we have more interesting nodes than dag-pb. Take this example: it's some dag-cbor linking to a directory. So let's see. This is some CBOR NFT horse, apparently. I've got an image and some metadata, and we can look at an NFT horse. It's a horse. So yeah, some IPLD data. Other things it can do...
So the thing with streaming data: when you have a larger file, like this 256 meg thing, when you start downloading it, it starts very small, but you can see that as it's getting blocks from Graphsync it goes straight into the viewer. And it's actually decently fast, especially given the internet I'm on right now. Other cool things: sometimes when IPLD data, especially CBOR, is stored on Filecoin, it's stored as raw blocks, so it appears as a file, and because it's a raw block it's just some random hex. When you open that in the IPLD view, it's just random hex, which doesn't make sense. But when the explorer detects CBOR, you can view it as CBOR. And as it turns out, in this case it is the AMT of sectors in miner state on Filecoin. So someone stored Filecoin state on Filecoin, which is neat, and you can go through it. And when you see a CID — if that block was stored as a raw block in some Filecoin deal, the links in that block will not necessarily be in the same deal. So if I try to open a link from a raw block within the same deal, that will not work. But we now have global network indexers, and those are nice. So I can click on Find, and I can find that CID in a different deal, and I can go there and continue my adventure exploring the IPLD data I was looking at. So yeah, that's basically all this can do. Oh, you can also look at the same data on IPFS, so it doesn't need to be on Filecoin. From this Find page, the top links go to IPFS. They just have a tendency not to work, because right now, for whatever reason, the web3.storage references from the indexers are not connectable — maybe some outdated provider entries, I'm not sure yet exactly what's going on there — or the Filecoin miners are not providing Bitswap retrieval, so IPFS cannot fetch from them. But some of the time it works. And yeah, this data explorer is very much a work in progress.
It's just a branch in Lotus — there's the PR, 9382 — and it does have some instructions on how to run it. You just need a Lotus node or a Lotus lite node, which is very easy to set up and very light, and you just run those steps. And then you can explore the network. That's it.

Wow, that worked. Thank you so much for presenting your demo. And we're going to move on next to Will's recording of the Uptime Checker.

This is Will from ConsensusLab. Today I'll be showing you our Uptime Checker. In short, this is a liveness registry that's deployed on the FVM. So the first question is: what is the Uptime Checker, and what does it actually do? To answer that, imagine we have a bunch of nodes running in a decentralized network — for now, let's take the case of Saturn, a globally distributed CDN, and these are the CDN nodes running in the network. Imagine a user wants to fetch some information from the nodes. You just ping one of them. But what if one of the nodes is down? How does a user know whether a node is up or down? And, for example, what's the fastest node in the network near me? This is exactly what the Uptime Checker is for. It tells users which nodes are active or alive, what the latency of a particular node is, and it keeps those metrics up to date. These are the core questions the Uptime Checker is trying to answer. In this system, we have what we call member nodes — nodes running a certain common application or protocol across the network — and we have what we call checkers, which constantly go through the list of members, pinging them periodically to see whether they are up or down and to measure the network latency of these nodes.
Once you gather the ping information, you can report, for example, liveness, show the last check time to indicate recency, and show the latency of that node in the network. And finally, the checkers also cross-check each other. That means if one of the checkers is down, that checker can be removed from the list of checkers, so we know all the checkers are actually alive. So checkers check members and also check themselves. In terms of system architecture, we have the Uptime Actor. This actor is implemented in Rust, compiled to WASM, and deployed on the FVM. It handles the registry — the CRUD of checkers and members. It also tracks the reported checkers, that is, the checkers that are reported to be down. Then we have the member nodes, the nodes participating in a certain protocol, and we use libp2p to do the pings. And finally, we have the checkers. They expose endpoints that others can query to get the member node information, and they also cross-check each other. As you see here, once a checker is reported down, the other checkers start reporting it to the Uptime Actor, and if a quorum of two-thirds of the checkers report a particular checker to be down, that checker is automatically removed from the Uptime Actor. That's a high-level overview of the system architecture. For the demo, the architecture is a simplified version where we have four checkers and two nodes. They are all Lotus-based, and we only have one miner, which isn't drawn, for the sake of simplicity. The two member nodes form a local network, and we also have one Uptime Actor deployed within this local network.
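The two-thirds removal rule described above can be sketched in a few lines. This is a hypothetical JavaScript model of the logic, not the actual Rust actor code; the names and data shapes are assumptions:

```javascript
// Hypothetical sketch of the Uptime Actor's two-thirds removal rule.
// `checkers` is the set of registered checker actor IDs;
// `reports` maps a suspect checker ID to the set of checker IDs
// that have reported it down.
function resolveReports(checkers, reports) {
  const quorum = Math.ceil((2 * checkers.size) / 3);
  for (const [suspect, reporters] of reports) {
    // Only count reports from currently registered checkers.
    const votes = [...reporters].filter((id) => checkers.has(id)).length;
    if (votes >= quorum) {
      checkers.delete(suspect); // automatically removed from the registry
      reports.delete(suspect);
    }
  }
  return checkers;
}
```

With four checkers, as in the demo, the quorum is ceil(8/3) = 3, so a checker is removed once the other three all report it down.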
Now I'll show you the demo. Let's see our setup. To save time, I've already set up the nodes and the checkers; to see the whole end-to-end setup, we refer you to the previous video. For this one, we have nodes zero and one already running, and we also have four checkers. From here, you can see node zero is actually running both the miner and the node itself, and node one is just running the node and is connected to node zero. What's interesting is to check the bunch of checkers. Currently we're at checker zero, and here it is constantly logging the list of checkers currently registered with the actor — these are the actor IDs of the registered checkers. 1001 refers to checker zero, the next ID to checker one, and so on and so forth. Later, what I'll show you is basically checking the nodes, and we should be able to see the pinging and the responses. It's probably also interesting to show you the commands we used to spin things up. These are the commands we use to spin up the checkers. For example, this one is index zero, which basically tells it which key to use. This is the actor address, and this parameter is the checker port — this is where the libp2p port is. We also have a node info port: if you query this port, you'll be able to get the uptime info of the nodes. It's localhost because we're running everything in the same local network. So you see here the key is the actor ID, and it refers to this multiaddress — this is actually node zero. And is_online is true. Let's just focus on is_online and ignore the rest for now.
So this is saying: okay, this node is up. At the same time, we also have the second node, and it's also up. Let's just query another checker — it should give us the same results. So yeah: node zero's status is up, and this one is also up. If you query checkers three and four, they should give you the same result. So this is the happy path, where everything's running. Now let's kill one of the nodes. Okay, that's just a Ctrl-C, and this node is killed. Let's see the log. From the log, you can see the checker is constantly trying to ping the multiaddress of the registered second node, basically node-1, and it just keeps returning an error. So in this case, if we query again — yeah, you can see the status is actually false. It's saying this node, this address, is not up. And if we check another checker, it's also telling us it's down. Now what's probably more interesting is to try to kill off one of the checkers, because they're constantly cross-checking each other. Let's kill one. Okay, this one is actor ID 102. From here, let's check the logs. You see in the log it's saying actor ID 102 is down — you just cannot connect to it, both from the node and from another checker. After a while, the rest of them all know: checker-2 is being reported down. And after a while, when the message is resolved and executed, you can see in the response that comes back that the list of checkers registered in the actor is actually reduced by one, because 102 is down.
So that means the cross-checking is actually working. With this, I'll conclude my demo, and because of time I'll just show the repos — if you're interested, feel free to check out our repos and the other demo videos. Thank you.

All right, we'll move on to Irene.

I'm Irene from CryptoNet, and I'm going to talk about two projects. The first one is the web3.storage bounty contracts. So what is it? We all know web3.storage — this amazing tool that allows you to store files on IPFS and Filecoin in a very easy way. And the way you access this service is you just drop your file to them via the website interface. So our goal is to make this tool accessible from any EVM-compatible blockchain, for example Ethereum or others — something where, from a blockchain, you store directly to Filecoin and IPFS via web3.storage. This is a nice addition to the storage features that users can play with, and for us at CryptoNet it's a very nice way to test the general idea of bounty contracts. Bounty contracts are protocols where users can place bounties for a service: store the file that is linked to this hash, and any storage dealer, as we like to call them, can activate, provide the service, and then claim the bounty. We do this specifically with web3.storage as the dealer. And how we do it is simple. Basically, we have designed a storage smart contract that has three simple functions. The first one is create proposal, which is used by clients that want to store a file. Basically, this allows them to create a proposal: I'm going to pay this bounty to anyone that will take this file and store it on Filecoin and IPFS.
Accept proposal is for the dealer — in this case, specifically the web3.storage dealer — which checks if the file is available, if it can access the file for which the proposal was made. And if it can access it, and all the other parameters are fine — the proposal can contain other parameters, like for example the payment for the bounty, or the duration, and other storage features — it will accept the deal. So now we say that there is an active deal between the client and the web3.storage dealer, and this activates the web3.storage service: storing with the Filecoin providers and on IPFS, doing the usual service it is already providing. And with the deal active and the file stored, we have a last function that we call claim bounty. This is the one the dealer can call to get the bounty: it needs some proof that the storage was successful, and once that is done, it can claim the bounty. Actually, right now we are not using this in the test version that we have, because the bounty payment is set to zero, so it's not really needed. Here you can see that the smart contract is deployed on the Goerli testnet, and we have an app for clients that you can test if you scan the code. There is a small video here, and I'll show you how it goes. You basically just drop your file and sign the transaction — which is made by create proposal — with MetaMask, for example. When the proposal is online, you get a notification, and when the web3.storage dealer accepts, you get another notification. And at the end, you just have this nice link where you can retrieve the file. Here we are actually still waiting for the confirmation from web3.storage.
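The three-function lifecycle just described — create proposal, accept proposal, claim bounty — can be sketched as a simple state machine. This is an illustrative JavaScript model under assumed names and fields, not the actual Solidity contract:

```javascript
// Illustrative model of the bounty-contract lifecycle.
// States: Proposed -> Active -> Claimed.
class BountyContract {
  constructor() {
    this.proposals = new Map(); // cid -> { client, bounty, duration, state }
  }
  // Client places a bounty for storing the file behind `cid`.
  createProposal(cid, client, bounty, duration) {
    this.proposals.set(cid, { client, bounty, duration, state: "Proposed" });
  }
  // Dealer (e.g. web3.storage) accepts after checking it can fetch the file.
  acceptProposal(cid, canAccessFile) {
    const p = this.proposals.get(cid);
    if (p && p.state === "Proposed" && canAccessFile) p.state = "Active";
    return p ? p.state : undefined;
  }
  // Dealer claims the bounty with some proof that storage succeeded.
  claimBounty(cid, proofOk) {
    const p = this.proposals.get(cid);
    if (p && p.state === "Active" && proofOk) {
      p.state = "Claimed";
      return p.bounty; // paid out to the dealer
    }
    return 0;
  }
}
```

Note the one-way state progression: a bounty can only be claimed once, and only after the dealer has accepted.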
This confirmation usually takes a few seconds, because web3.storage is downloading the file, activating the storage deals with the Filecoin providers, and uploading to the IPFS nodes, so it usually takes some time. Then you can click on the link, and the link shows you the file. Once the confirmation arrives, you can also go back to the homepage of the app, where all your deals are listed with the details — when you made the deal, for how long it's active — and this link where you can still check the file. Okay, I think this is starting again, so I'll stop this one and go to the next project, which is the retrieval pinning protocol. In this case, what we wanted to do is focus on how we can guarantee retrieval for files that are stored on a traditional storage network — and in particular, of course, our use case is Filecoin. We designed this retrieval pinning protocol where, first of all, we have a fixed set of referees. In the implementation there are only five referees, but you can have any number, and we need to trust only three of them. It's the usual honest-majority assumption: you don't need to trust all of them. Around this, we designed a protocol where the client and the storage provider make a deal specifically for retrievability. The client proposes, in a similar way as you've seen in the other project, a deal with some specific parameters: not just the hash of the data, but also the duration for which the retrieval feature has to be guaranteed, the payment for the service, and, very important, the collateral. The collateral is some tokens that the provider puts down, locked in the contract, if it agrees to provide this service.
If the provider fails to provide the retrieval service, the client can appeal to the referees we saw before. The referees' role is to try again to retrieve the file: they contact the provider and check whether it was just a mistake or the provider is really not providing the service. If the retrieval works this second time, thanks to the help of the referees, we are all happy, and the provider gets back its collateral and the payment for the service. If something goes wrong, the collateral, which is locked in the smart contract vault, is burned, and the provider loses it completely. Again, we have the app live. The smart contract is on the Ethereum testnet, the Goerli one, and we have an app that you can test. If you log in to the app as a client, this is what you see: an interface where, again, you can start creating these retrievability deals. You can very easily drop the file for which you want to make the retrieval deal. You can choose the providers — in this case, unlike the other project, where this wasn't a step because web3.storage was the only provider, here you can choose which one. We have web3.storage and ad hoc providers that we maintain. And there are more parameters here: whether you want to have collateral or not (of course, without collateral it's less secure), and how long you want this deal for — one week, one month. These are all parameters you can choose. When you go and create a deal, it will ask you to connect your MetaMask and sign the transaction. The transaction goes online, the provider sees this request, and uses accept deal proposal to sign that it is accepting the retrievability deal. From that moment, we have an active deal.
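The appeal-and-slashing outcome described above can be modeled in a few lines. This is a hypothetical JavaScript sketch of the settlement logic, not the deployed contract; the honest-majority threshold of 3 out of 5 referees is taken from the talk, the field names are assumptions:

```javascript
// Hypothetical settlement of a retrievability-deal appeal.
// `deal` holds the locked collateral and the agreed payment;
// `refereeVotes` is an array of booleans, true = that referee
// managed to retrieve the file on the retry.
function settleAppeal(deal, refereeVotes) {
  const successes = refereeVotes.filter(Boolean).length;
  const majority = Math.floor(refereeVotes.length / 2) + 1; // 3 of 5
  if (successes >= majority) {
    // Retrieval worked on retry: provider keeps collateral and earns payment.
    return { provider: deal.collateral + deal.payment, burned: 0 };
  }
  // Provider really failed: the locked collateral is burned.
  return { provider: 0, burned: deal.collateral };
}
```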
When the deal is active, if at any point you as a client have a problem with the retrieval service, you can complain. You can go here and request an appeal, as you see here. Requesting an appeal activates the referee network, which will contact the provider again and, as we said before, retry the retrieval. And you can check everything: the status of the retrievability deal — for example, whether there is an active appeal — and what is happening while the referee network tries to recover the file for you; you can check it all in the app. On the other side, for the provider, the interface is a command line. We have all the code in our GitHub repo; you can download it and actually test and try it — there is a readme doc there. We would really love to have feedback from the provider side as well, so if you're interested and think it's nice, go there and play with it. Here, what you can see is an example of what the provider sees on the command line when accepting a proposal. In general, I have to say this is basically automated. The provider signs up to our protocol, and while signing up can also choose some default values for the deals it wants to accept: a minimum payment, a maximum collateral, a max duration, a max file size. These are parameters the provider can set from the command line; of course we have default values, but any provider can choose. What happens is that, almost automatically, the parameters in the deal are compared with these, and if everything matches, the proposal is accepted; otherwise, it isn't. One more thing about providers: this is not yet fully deployed, but the theory for it is ready — we want to provide a reputation score.
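The provider's automatic accept check reduces to comparing the proposal against the sign-up defaults. A minimal sketch, with hypothetical field names matching the four defaults Irene lists:

```javascript
// Hypothetical auto-accept check run by the provider's CLI.
// `defaults` are the provider's sign-up preferences; `deal` is the proposal.
function autoAccept(defaults, deal) {
  return (
    deal.payment >= defaults.minPayment &&       // pays enough
    deal.collateral <= defaults.maxCollateral && // doesn't lock too much
    deal.duration <= defaults.maxDuration &&     // not too long a commitment
    deal.fileSize <= defaults.maxFileSize        // file not too large
  );
}
```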
So we are doing all this not just to put a crypto-economic incentive on providing retrieval — we think it would also be very nice to add a reputation incentive. We want a way to say which providers are doing well and which are not, at least with respect to retrievability. We designed a reputation score that has two components. The first one has the goal of incentivizing providers that are willing to put up a high volume of collateral — the more collateral you accept, the more confident you are that you can provide a good service. The second component instead gives a higher score to providers that are taking many deals, meaning they can provide retrieval well for large files, or for many files. That's it. Please go and test — we are really looking for feedback from both the client and provider sides. Thank you.

Awesome. Thanks so much, Irene. Up next we have Dennis, whenever you're ready.

Hi. We at ProbeLab had a look at the Hydra boosters in the last couple of days, and in this quick demo I want to present some of the results we got from our measurements. For everyone who's not that familiar with them: Hydra boosters attempt to cover the whole hash space in the DHT, so that every time you provide something to the network, you hit one Hydra booster, and so that anyone who tries to retrieve that CID will also hit a Hydra booster and get the content — or rather, the provider record — much faster than without the Hydras. As I said, the Hydras attempt to cover the whole hash space, so at first we wanted to verify this proposition. The first thing we looked at is whether there's actually a uniform distribution of Hydra heads, and in this graph — it should be a straight line — we can see that's the case.
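The talk doesn't give the actual formula, but a two-component score of the kind described could look like this. Everything here — the weights, the log scaling, the combination — is an illustrative assumption, not CryptoNet's design:

```javascript
// Illustrative two-component reputation score (hypothetical formula).
// Component 1 rewards the total collateral a provider is willing to lock;
// component 2 rewards the number of deals the provider has taken on.
function reputationScore(totalCollateral, dealCount, w1 = 0.5, w2 = 0.5) {
  // Log-scale both components so neither dominates outright.
  const collateralTerm = Math.log1p(totalCollateral);
  const volumeTerm = Math.log1p(dealCount);
  return w1 * collateralTerm + w2 * volumeTerm;
}
```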
The other thing is: is a Hydra head actually among the 20 closest peers for every peer we can find in the network? For that, we took full network crawls from Nebula, put all of those peer IDs in a binary trie, calculated for each peer ID in the network the 20 closest peers, and checked whether a Hydra head is actually among those 20 closest peers. The results show that it's largely the case. In this particular example, we had around 16,200 peers in the DHT, and for 15,700 of them there was actually a Hydra head close by, which makes up more than 97% coverage of the whole hash space. So this gives the Hydras an excellent vantage into the network. Just a reminder: a provider record consists of the CID, a TTL, and the provider's multihash. The Hydra boosters also hold peer records in memory. What we can do now is take all the provider records that the Hydras know of, correlate the providers with their multiaddresses, and in turn get geolocations from the IP addresses — so we can actually tell where in the world the CIDs reside. Since I'm short on time, I think I'll skip the architecture. Some general information: the Hydra boosters know of around one billion unique CIDs each day. On the x-axis here are the days of the last week, and what we can see is that if we take the set intersection between two days, only around 500 million CIDs actually intersect — which means that each day around 50% of all CIDs churn and leave the network. If we assume a CID covers around 256 kilobytes worth of data, this means every day roughly 120 terabytes leave the network, but also join the network again. The CID churn graph is just another representation of exactly that. And what we can also do is check who the top providers are.
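The coverage check boils down to Kademlia's XOR distance metric. A minimal sketch, with peer IDs modeled as hex strings rather than real multihashes (ProbeLab's actual tooling uses a binary trie; the exhaustive sort here is just for illustration):

```javascript
// For a target peer ID, find the k XOR-closest peers in the crawl
// and test whether any of them is a Hydra head.
function hydraInClosest(targetHex, allPeersHex, hydraSet, k = 20) {
  const target = BigInt("0x" + targetHex);
  const closest = allPeersHex
    .filter((p) => p !== targetHex)
    .map((p) => ({ p, d: BigInt("0x" + p) ^ target })) // XOR distance
    .sort((a, b) => (a.d < b.d ? -1 : a.d > b.d ? 1 : 0))
    .slice(0, k)
    .map((e) => e.p);
  return closest.some((p) => hydraSet.has(p));
}
```

Running this for every peer ID in the crawl and counting the hits yields the coverage percentage Dennis reports (15,700 of ~16,200, above 97%).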
Here we can see which peer IDs actually provide how many CIDs. If we take a look at the top provider here — this is just one peer in the network, and this one peer provides around 13% of all CIDs of the whole network. And it goes down from there: the next one is around 9%, then 7%, and so on. What we wanted to do next is find out who those peers are. I thought these are maybe gateways or large pinning services. So we developed a tiny tool called Antares, which you can see here. It's just a tool that sits there as a libp2p host, provides content to the network, and then requests that content through a gateway or through a pinning service, and tracks which peer ID actually requested the content. I forgot to say: this content is random, so no one else should know about it. So if others request that content, we can track which peer IDs belong to which services. Well, I'm running out of time, otherwise I would have shown you that — but it turns out that none of the top providers correspond to these large pinning services; I checked with Infura and with Pinata. And they're not gateways either. But I'm leaving it running; maybe I will discover some of them. Maybe just one last thing: as I said, we can correlate CIDs — or provider records — with peer records, and in turn with geolocation, so we get this country distribution that I told you about. We see that more than 50% of all CIDs can be associated with the U.S., then the Netherlands and France. So these are also quite interesting results, in my opinion. We are also looking at the dependence on Hydras for content retrievals and content publications: right now we are running experiments where we exclude Hydras from content retrievals and publications and check how the performance differs. These will be the next steps.
Thank you so much, Dennis. Next we have, from the FVM team, Zak.

Hey, everyone. My name is Zak. I'm a developer advocate, with the FVM team specifically. Today we're going to be deploying an actor — or, if you're from the Ethereum ecosystem, a smart contract — to the Wallaby testnet, where we have the FEVM active. So we have the Filecoin virtual machine, the FVM, and we have the FEVM, the Filecoin-Ethereum virtual machine, which is essentially the EVM virtualized on top of the FVM. Now, why do we want to do that instead of just deploying actors straight to the FVM? Well, the EVM is widely adopted across many different blockchains, and there's a ton of robust tooling around it, including the tool we're going to be using today, Hardhat. So this allows us to take advantage of all that tooling and lets existing Web3 developers come over easily. And excuse my cough — I think the allergies are hitting me. Okay, let me just show you real quick. Just an introduction to Hardhat: it's essentially a development environment that allows devs, easily from their computer, to write smart contracts in Solidity, test those smart contracts, deploy them to a chain, and automate it — we can create tasks to automate our interactions with those smart contracts. Overall, just a very useful tool. There are a couple of other tools: Brownie, if you like to program in Python — Hardhat specifically is in JavaScript; Truffle is another JavaScript one that a lot of people know, from ConsenSys; and we have Foundry, which is kind of the newcomer but seems to be gaining steam. I'll switch over to my VS Code. On the left here, you can see what a Hardhat project looks like. Just a quick overview: we have our contracts directory, where we write any of our smart contracts in Solidity, and deploy — to use Hardhat to deploy to a chain, you need to write a specific deploy script to tell it which contracts you want to point to and where you want to deploy them.
Here we have a simple JavaScript deploy script; we'll go over it in a bit. Deployments just gives you some metadata on where your already-deployed contracts live. node_modules — if you're familiar with npm, pretty self-explanatory. And scripts and tasks: these are where we can write things we want to automate. Tasks are a little more built into Hardhat — we can type the command hardhat, put in whatever task we want, and it'll automatically run. Scripts are for anything else that might not work super well with Hardhat. Right now, we do not have any. We just got this demo done yesterday, and the FEVM is still a work in progress — you can find the release schedules for that online. So for now, we'll be interacting with our contract using curl, contacting the RPC directly. The other important file I want to point out is hardhat.config.js. This is where we can customize Hardhat and tell it what we want and where we want to point it. You'll see here we're pointing it to the Wallaby testnet, where we have the FEVM deployed, and we've defined the RPC URL for Wallaby and a private key that we're going to be working with. Of course, I always have to say this — I know people here probably already know: never show your private key. Keep it hidden in an environment variable somewhere, and make sure you don't check it into git, just to be safe. I always like to put that disclaimer in. So we have it all pointed to Wallaby already. If we come here and look at the deploy script, it has some requirements — some built-in Hardhat requirements and a couple of Filecoin things, so it understands Filecoin language. Essentially, it's just going to call this RPC and send a post request to say: hey, we have this contract, let's send over the bytecode and deploy it.
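A minimal hardhat.config.js of the kind Zak describes might look like this. The plugin, Solidity version, and network settings are assumptions for illustration, not the demo's actual file; only the overall shape (a network entry with an RPC URL and an account key from the environment) is standard Hardhat:

```javascript
// Hypothetical hardhat.config.js pointing at the Wallaby testnet.
// The RPC URL and private key come from environment variables —
// never commit the key to git.
require("@nomicfoundation/hardhat-toolbox");

module.exports = {
  solidity: "0.8.17",
  defaultNetwork: "wallaby",
  networks: {
    wallaby: {
      url: process.env.WALLABY_RPC_URL,    // Wallaby JSON-RPC endpoint
      accounts: [process.env.PRIVATE_KEY], // deployer key, kept out of source
    },
  },
};
```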
So what I'm going to do is go ahead and deploy that. I'm going to type npx hardhat deploy, and this is going to take a little bit of time while the deployment interacts with Wallaby and Wallaby confirms it. While we wait, I'll look at the Solidity code and explain it real quick. Let me make this a little bigger — there we go. We have what our Ethereum address would be if we were acting on the EVM, but underneath that, since this is on the FVM, we have the Filecoin addresses associated with it. In particular, we're going to be looking at this f0 address later. So it's deploying; let's look at SimpleCoin.sol. Essentially, this is just a Solidity contract — a very basic version of what an ERC-20 token might be, very dumbed down for demo purposes. You'll see we just have a simple mapping here for balances, a transfer event that gets emitted, and a constructor that assigns us 10,000 simple coins when we deploy it. Then we have a function to send coins and two other functions to check balances at addresses. We're going to be using the get-balance-in-eth function in a bit to see what our balance is. Okay, so it is deployed, and we have an address right here — this is going to be important in a second; we're going to need it. So yeah, we're going to go ahead and interact with it. I'll go to my terminal here — this should give you a better view. You'll see where I was already kind of messing with this earlier, and we're going to grab this curl script here. First, we're just testing to make sure it worked. You'll see it came back with nothing here, and that's because we need to point it at our contract where it's deployed, and we need to tell it our account, that f0 account. So if we come here, in the to field is where we put our deployed contract address — so, 432. And in our data, the from is coming from our address, our deployer address.
And in the data field here, you want to make sure this is also the deployer address — which it is. All this other data is the function selector, which is essentially taking part of the hash of the function signature; this is all EVM standard, right? So now if we send it — awesome. In the result, you'll see this 2710, and that's just hex: if you convert it to decimal, that's 10,000. So we've deployed the contract, the constructor went through, and it showed us that we have 10,000 simple coin in our account. This is a very simple demo for now, but the FEVM is really coming to life — that's what's so exciting about this — and a lot more features will be coming online. I hope to put this starter kit up and make it available for everyone to mess around with, hopefully with some more tasks and such on there, so you won't have to use curl to interact with the JSON-RPC directly. So yeah, I think that puts us right at time. Thanks so much.

Perfect. Thanks so much, Zak. Thank you everyone for attending the Mother of All Demo Days, and thank you to all of our presenters. If anyone's interested in presenting, the next demo day will be November 1, and I will get the recording posted as soon as possible. Thanks, everyone.
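The hex-to-decimal step at the end is easy to verify. A one-line sketch of decoding an eth_call result — the 32-byte padded return value below is illustrative, but the 0x2710 → 10,000 conversion is exactly what Zak reads off the screen:

```javascript
// Decode an eth_call result: an ABI-encoded uint256 returned as a hex string.
// JavaScript's BigInt parses "0x..." hex directly, and 0x2710 is 10000.
function decodeUint256(hexResult) {
  return BigInt(hexResult);
}

// Example (hypothetical) return value, padded to 32 bytes:
const result =
  "0x0000000000000000000000000000000000000000000000000000000000002710";
// decodeUint256(result) === 10000n
```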