Just wanted to greet everybody and welcome you to this month's Mother of All Demo Days meeting. For those who are new here or haven't attended in a while, once a month the Starfleet teams get together to share their project progress in the format of a demo, hence the Mother of All Demo Days. This is also an opportunity for those in the community to share their own projects. First off, we have an appearance from the PL Network: Cryptosat. Amir, are you ready to present?

Hi, everyone. Thanks for having me. I'm Amir, a product manager at Cryptosat. What we do is build trust infrastructure for Web3, and we do that by launching satellites that are able to run cryptographic computations. Before we get into that, we're very proud to share the launches of our two satellites. The first one was in May of last year, almost exactly one year ago, and the second launch happened more recently, so now we have two satellites in space that are able to do really cool things for Web3 applications. That's a bit of a milestone for us. One recent thing we're pretty proud of is that we participated in the Ethereum KZG Ceremony this April: we generated our contribution on our satellite and then sent it back to the ceremony's sequencer. A little bit about the architecture: these are CubeSats, very small boxes that could fit on your desk. After we launch them into space, we provide a pretty standard REST API through our AWS infrastructure, which is responsible for communicating with the satellites and making sure your requests get responded to. With that in mind, I think we're ready to jump into the demo. This is the CubeSat simulator; it shows us where our satellites are in relation to Earth. These are low Earth orbit satellites, so their position relative to the ground changes over time.
They have to be in range of a ground station for us to be able to communicate with them. That's obviously something a lot of users think about before adopting this in production environments, and it's also something we're always improving by launching new satellites. Our next launch is scheduled for this year, and with every launch we also improve our ability to communicate with the satellites.

Back in the Cryptosat console: I think the first thing people are interested in is how you communicate with the satellite. The way to know that you're communicating with the satellite and not with anything else is the Cryptosat public keys. This is a key that anything provided by the satellite will be signed with. I can also present it in a better way: this is a public key, and if we want to verify — and we will in the next section — this is how we verify that what was provided to us was actually signed by the satellite. I should have mentioned before that this is a key that was generated in space; it was never on Earth. Once the satellite gets launched, we run a key generation and publish only the public key, so the private key was never on Earth and never will be. Here's a pretty simple example: if you ask the satellite for a timestamp off its own clock, it will return one, and you can also verify that the timestamp you received was, in fact, created and signed by the satellite. The next use case, which has recently seen some interest, is randomness. You've probably heard of drand; the idea here is similar — you can request a public random number and then verify that this random number was indeed generated by the satellite. There's also a different flow, where it will be a private random number. Essentially, how this would work is you would create a client.
You would create a key pair on your computer, give the satellite the public key, and the satellite would encrypt the random number with your public key, so only you have access to that random number. It's a pretty long flow, so for the sake of time I'll skip through it. This next one is a demonstration of how the satellite can sign messages on behalf of users; it's pretty similar to what we've tried before. A nice use case here is timelock encryption. If you have ever thought about a sealed-bid auction, for example, this is where it can be interesting: the satellite creates a key pair and publishes the public key, everyone can encrypt their bids before the deadline, and once the time expires the private key gets released and the results of the auction can be revealed. We also have a Discord, where we do a little bit more debugging with people who are trying to use it but need some help, for example; that's a great place to go. Obviously we're on Twitter and LinkedIn too, so I'll share all the info right after the demo.

Awesome. Looking forward to seeing all that information. Moving on, we have Ivan with Bedrock and IPNI.

I want to tell a super quick story about the recent enhancements that have been made to IPNI, which are a major improvement for the user privacy story. Let's recap what IPNI is. IPNI stands for InterPlanetary Network Indexer. It can be used for finding content on the IPFS and Filecoin networks, and it's used in a bunch of projects that I will go through in a minute. Basically, the main purpose of IPNI is: you give me a content identifier, I tell you where the content behind that identifier can be found. Before going through the recent improvements, let me quickly recap how IPNI works. It can be explained in four simple steps. The first step is that we have a bunch of Filecoin and IPFS nodes connected to IPNI; whenever they have new data, they post announcements. That's step number one.
Whenever new data appears on the IPFS or Filecoin nodes, they post announcements via a libp2p pubsub topic. IPNI is continuously listening for those announcements; whenever it picks one up, it reaches out to that node — directly to the storage provider — and fetches all the recent updates from it. The unit of update is what we call an advertisement. Basically, an advertisement is a structure that contains a list of content identifiers; by announcing advertisements, you tell IPNI "I have these CIDs available at my node." IPNI fetches it, indexes it, and that's it. Then, after the content is indexed, a user can use IPNI to look up the data. When a user sends a lookup request with a content identifier to IPNI, IPNI returns — hopefully — a list of providers the data can be fetched from. The user then reaches out to those providers separately and downloads the data. Why is IPNI awesome? It's awesome because it has two properties. It's an open protocol, so anyone can run an IPNI instance and anyone can participate in it. At the same time, it can be run as a centralized service, and running it as a centralized service provides some advantages: specifically, it can significantly help reduce time to first byte in content retrieval generally. One can find content on the IPFS and Filecoin networks much, much quicker. Now, what problems did we tackle with the recent privacy upgrade? There are two issues with the way IPNI is used right now. Let's start with number one. If there is a man in the middle observing user-to-IPNI traffic, they can see in the open what content identifier the user is after, they can spy on the lookup responses sent back to the user, and they can reach out to the same storage providers and download the same data. By doing that, they can spy on what data the user is after. The second attack vector is a rogue IPNI deployment itself.
If an IPNI deployment is malicious, it can spy on the requests it receives and likewise see what data the user is after. This is obviously not good. The way we tackled it is with a recent upgrade to our privacy story, which is called double hashing. What is double hashing? In simple words, without going into the details: instead of looking up a raw content identifier in IPNI in the open, the user now queries using a hash of the content identifier, and in response IPNI provides records that are encrypted with the original value of the content identifier. What this means is that in order to make sense of such communication, someone spying on the user needs to know the original content identifier — but the content identifier never gets revealed during such a communication round. That enables a much better privacy story for our users. Double hashing is already running in prod. We run a big IPNI deployment called cid.contact, and I'm going to show you right now what the encrypted responses look like. The one I have open here is the regular response: if I send a request to cid.contact, as you can see in my browser, it returns a bunch of records describing where the content behind that content identifier can be found. It's essentially a list of providers with their peer identities and their addresses, so I can just go through this list, establish connections, and download the data. If I instead do an encrypted lookup, then rather than returning data in the open, it returns obfuscated data that I cannot make any sense of without knowing the original content identifier. So if I don't know it in the first place, I cannot decrypt the provider records and I cannot spy on others' communications. The next step for us is to finish the rollout on cid.contact: currently we cover about 80% of the lookup requests, and this number is growing. We also need to update the existing clients.
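Conceptually, the double-hashed lookup works something like the following toy Python sketch. This is illustrative only: the real IPNI scheme operates on multihashes and uses a proper key-derivation and AEAD cipher rather than this SHA-256 XOR stand-in, and all names and values here are made up.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. A stand-in for the real
    # cipher used by IPNI's double-hashing scheme, for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# --- Indexer side: store records under hash(CID), encrypted with the CID ---
def publish(index: dict, cid: bytes, provider_record: bytes) -> None:
    lookup_key = hashlib.sha256(cid).digest()   # the "second hash" clients query by
    index[lookup_key] = xor_cipher(provider_record, cid)

# --- Client side: query by hash(CID); only a holder of the CID can decrypt ---
def lookup(index: dict, cid: bytes) -> bytes:
    blob = index[hashlib.sha256(cid).digest()]  # the indexer never sees the CID itself
    return xor_cipher(blob, cid)

index = {}
publish(index, b"bafyExampleCid", b"/ip4/198.51.100.7/tcp/4001/p2p/12D3KooWexample")
record = lookup(index, b"bafyExampleCid")
```

An observer of this exchange sees only `sha256(cid)` and an encrypted blob; without the original CID they can neither recognize what was asked for nor decrypt the provider record, which is exactly the property described above.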
IPNI is used by a bunch of different projects, such as Lassie, which is the Filecoin retrieval client, and Kubo, which is the most popular IPFS implementation. It's used by the decentralized CDN Saturn. It's used in a lot of places. So basically we need to update the existing clients and integrations so that they use double hashing by default and don't send regular lookups anymore. We also need to work on the so-called writer privacy upgrade. Writer privacy would allow publishers to advertise encrypted data into IPNI; right now that data is still in the open, so we're protecting the privacy of users but not yet of the miners. That would hopefully make IPNI even more awesome and even more usable. If you have any questions or suggestions, or you want to use IPNI or ask something, please reach out to the IPNI Slack channel on the Filecoin Slack. That's it from me, so thank you a lot for listening.

Thank you so much, Ivan. Appreciate it. Moving on, we have Akosh with ConsensusLab.

Hello everyone. My name is Akosh, and I'm here to present what I've been working on in the last few months, which is running the FVM inside Tendermint as an alternative node implementation to Lotus. This is part of the InterPlanetary Consensus effort — what used to be called hierarchical consensus — which is, roughly, recursive side-chains organized under the Filecoin rootnet. Because we can't use the rootnet's expected consensus — you shouldn't be reusing storage power for subnet consensus — we need a different consensus. One implementation is a fork of Lotus which we call Eudico; you've probably seen it presented many times by Alfonso. The other is Fendermint, which is off-the-shelf Tendermint with the FVM put into it. It's all Rust, and I'm the application: I don't call Tendermint, Tendermint calls me. For anyone who isn't too familiar with Tendermint, it's a proof-of-stake BFT protocol.
There is a generic component called Tendermint Core, now renamed — or rather succeeded by — CometBFT. It's very convenient from a developer's perspective because blocks are instantly final: you're only ever moving forward and never have to worry about rolling back your state. It's very easy to reason about, but it doesn't scale to thousands of validators, so you have to do some sampling or let people opt into a subnet; around a hundred validators can run it. And there are nice Rust libraries. What usually comes with this is the Cosmos SDK — Tendermint is the bedrock of the Cosmos ecosystem — but we are not using the Cosmos SDK. The Cosmos SDK is a set of reusable modules for accounts, banking, and transfers; instead, we have the built-in actors and the FVM that we use in Lotus, which completely replaces it. This is the lifecycle of an application: I'm the application, Tendermint calls me, and this is how a block is fed to the application. You get a header, then for each transaction you get a DeliverTx. At the end, you get to change the power table, which is the next set of validators who can produce blocks. Finally, it tells you to commit the changes to the database, so that next time we all remember where we were. It has other methods too: to run the genesis, and to check transactions being added to the mempool. All of this exists without Tendermint having any notion of what a transaction is — in the tutorial it's just key-value pairs, but in our case it's going to be our own messages, and we will use the FVM to interpret them. The ABCI interface is an evolving standard.
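The block lifecycle just described — header, per-transaction delivery, power-table update, commit — can be sketched as a minimal ABCI-style application. This is a hypothetical key-value app in Python, in the spirit of the Tendermint tutorial mentioned above, not Fendermint's actual Rust code; all method names mirror the ABCI shape rather than any real library binding.

```python
# Minimal sketch of the ABCI-style lifecycle: the consensus engine drives
# the application, which only interprets opaque transaction bytes.

class KVApp:
    def __init__(self):
        self.state = {}        # committed state
        self.pending = {}      # changes for the block in flight
        self.validators = []   # the "power table"

    def check_tx(self, tx: bytes) -> bool:
        # Mempool admission: here we just require a "key=value" shape.
        return b"=" in tx

    def begin_block(self, header: dict) -> None:
        # The engine hands us the block header first.
        self.pending = dict(self.state)

    def deliver_tx(self, tx: bytes) -> None:
        # Called once per transaction in the block. Fendermint would hand
        # these bytes to the FVM; our toy app just stores key=value.
        key, value = tx.split(b"=", 1)
        self.pending[key] = value

    def end_block(self) -> list:
        # The chance to change the power table (the next validator set).
        return self.validators

    def commit(self) -> dict:
        # Persist, so that next time we remember where we were.
        self.state = self.pending
        return self.state

app = KVApp()
app.begin_block({"height": 1})
for tx in [b"alice=1000", b"bob=1000"]:
    if app.check_tx(tx):
        app.deliver_tx(tx)
app.end_block()
state = app.commit()   # {b"alice": b"1000", b"bob": b"1000"}
```

The point of the shape is the one Akosh makes: the engine never looks inside `tx`, so the same consensus machinery works whether the application interprets key-value pairs or FVM messages.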
There is this thing called ABCI++, which at the moment adds two methods that are very important for us: they allow us to inspect the transactions that are going to be in the next block, and to inspect them again before we cast a vote on that block. These are going to be very useful as we move towards replacing full-blown messages with CIDs, because CIDs are not immediately available — as the previous, fantastic presentation showed, you can have a CID, but you have to be able to look it up somewhere, and it's not something you can just trust. As for Fendermint itself, this is roughly the architecture: you have Lotus at the top, and then a number of nodes with three layers pushing messages between them. The Tendermint process is one process we have to run, and the Fendermint process is another. Because it's a separate process, we have complete freedom in how we deal with data, so we can use IPLD; Tendermint is not enforcing — or even providing — any storage, so that's completely on us. We can also do our own network communication, because we're not restricted to any particular stack; you can do anything you want. So effectively these subnets are proof-of-stake side chains, with potentially more child subnets under them, and there are two important aspects. One is observing our parent: there are top-down messages coming from the parent, and we need to know when they are final before everybody applies them on the side chain. The other is that we have to agree that the bottom-up messages are available: if someone sends us a checkpoint — because that's how we propagate information up and down — then the parent validators, who are not running the child subnet, can only apply that checkpoint once they understand that the majority of them will do so and that they have the data. Even if one of them doesn't have it, by that time they can retrieve it from their peers, and it's going to work out fine.
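The availability agreement just described — a bottom-up checkpoint, referenced by CID, only gets executed on the parent once a quorum of parent validators attest that they hold the data behind it — can be modeled with a toy sketch. The 2/3 threshold and the data structures here are illustrative assumptions for the sake of the example, not the actual IPC implementation.

```python
# Toy model of the availability vote: a checkpoint CID executes only once
# more than two-thirds of parent validators report they have resolved it.
from fractions import Fraction

QUORUM = Fraction(2, 3)  # assumed BFT-style threshold, for illustration

def can_execute(checkpoint_cid: str, validators: dict) -> bool:
    # validators maps a validator name to the set of CIDs it has resolved
    have = sum(1 for cids in validators.values() if checkpoint_cid in cids)
    return Fraction(have, len(validators)) > QUORUM

validators = {
    "v1": {"bafyCheckpoint1"},
    "v2": {"bafyCheckpoint1"},
    "v3": set(),               # hasn't resolved it yet; can fetch from peers
    "v4": {"bafyCheckpoint1"},
}
can_execute("bafyCheckpoint1", validators)   # True: 3 of 4 have the data
```

This is why the two-phase flow described next matters: the first phase records the intent on chain, and only the second vote — once enough validators have fetched the data — triggers execution.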
Checkpoints are one of those examples where the size can be anything: you can imagine the rootnet having a massive number of subnets under it, and if they have to go through the root to send each other messages, it's difficult to tell up front how many messages there are going to be. So we decided that a checkpoint should only contain a CID pointing to some list of messages. For this, there are mainly two options: one, you send some commitment and then feed the messages in one by one; or, like the previous IPNI presentation suggested, you advertise that you have it — that you're able to serve anything from the subnet — send the CID, and let the nodes come to you. For this to work, there is a two-phase publishing scheme: first we publish the intent, for the checkpoint to be included in the blockchain but not executed, because the parent validators don't have the data yet; then we let them fetch it and vote again that they have it, and that's when it gets executed. This is where those ABCI++ methods are important. For this we implemented a resolver — somewhat similar in spirit to IPNI — using GossipSub and Bitswap to resolve content from anyone in any subnet. With that, this is the architecture: you have your ABCI application, which is Fendermint; you see just bytes, because that's what a transaction looks like to Tendermint, and then via a stack of interpreters you refine it into messages that are closer and closer to something the FVM can actually handle. One of these might be a CID, in which case we hand it over to a pool, to the IPLD resolver, where it can be fetched from the network; and next time, once the data is there, it can go through to the FVM. We have a roadmap — green means done — so we're less than halfway through our roadmap, and at this stage it's just
the FVM plus the IPLD resolver. All of this demo is available on the repo website — the full version is like 50 minutes, so this is a very truncated one. There's a CLI and an RPC client, and that's what I wanted to show you here. I have a demo script that's checked in; it goes through the steps of setting up a genesis file, and we can have a quick look at that. It's actually the genesis of Tendermint, so if you run through it quickly, it has sections for its own consensus parameters, which nobody's interested in, and then we have our own genesis. We can't use Lotus's — we don't run the full Lotus, and we don't run markets unless people want us to — but it has its own accounts and a single validator, because this is going to be a standalone setup. In these two terminals: in the top I'm going to start the Tendermint process, and in the bottom I'm going to start Fendermint. Here it says we're going through the genesis phase — it takes some time because the Wasm needs to be loaded — but now you can see it's producing blocks. With that, I can go back to my other scripts. I've created some keys, so I have Alice and Bob, and for those of you familiar with the FVM, this is the state: I have this CLI to ask what the state of an actor is, and it will tell me the balance, what kind of code it is — EVM or account — and the current state. Then we can do transfers. Let's just do a quick transfer so there's something to look at. There was a transfer, so if I now ask for Bob's balance, it shows that Bob has a thousand tokens, because that's what the transfer did. Then we can deploy FEVM contracts with this: it deployed the FEVM contract and returned a bunch of addresses. This is the delegated address, which we can copy and give to this command, which calls a method on that contract. This isn't something I'd normally do by hand, but if you look at it, this is called a SimpleCoin contract that
I've deployed here. To quickly cover it: this is a Solidity contract which gives the owner 10,000 coins, and it has a few methods, like sending coins and getting a balance (including getting a balance as a view). If I look at the signatures, getBalance is this f8b2-something selector, and that's what we've been calling here. The thing that came back is a hexadecimal encoding; if you decode it, it says 10,000. But this is not very user friendly, so we have another option: driving it programmatically. Here it says 10,000, and you can see these are the JSON payloads that come and go from the actual Tendermint RPC, because that's what you're talking to; we're decoding them just to see what this looks like. This example is a script where you get static typing to interact with your FEVM contract. This is the contract — I actually deploy it again in this script — and the Solidity compiler gave me this ABI. With that I can create a SimpleCoin interface (this part is not done by me, it's another library; I'm just showing that this is a nice way of working with it). Then I say I'm going to connect to Fendermint — actually Tendermint — with my client, read my secret key, which is Alice's in this example, and query what Alice's nonce is, so we can resume and send the next transaction. Then I create a message factory, bind it to the client, and run the thing. Running it means I have a client which I can now use to send transactions, send these ABCI queries, which are read-only things, and do what's called a call. A call looks exactly like a transaction, but it doesn't cost you money — everybody familiar with Ethereum calls knows this: it's like a view or pure function that doesn't need gas, though we can use gas if we want. So in this
example we actually make two calls to the contract: one in a transaction and one not. The one that's not in a transaction just queries the node you're connected to, so you have to trust it, or run it yourself; and nodes might not even let you do this, since it puts load on them. Or you do it in a transaction, in which case you don't have to trust any single node, because there's a quorum — everybody runs it — but you have to pay for it. The script just asserts that the two results are the same; that's the test here. There are also library methods available now to query the state, and they give you back an actual statically typed actor state; or you deploy a contract with a method specific to the FEVM, and it gives you back the return value so you can read the addresses from it. This is where the static typing comes in: if you want to get the balance, you can create a contract binding and call it, and it knows this is going to return a BigInt that it can parse, so you don't have to worry about the hexadecimal ABI-encoded stuff yourself. That's just an example of either calling or invoking a transaction; they take almost exactly the same parameters, except that with a call you can specify where on the blockchain it should run. The way we see this being used is that if you run a subnet, you might want to modify the FVM itself. Some of the people interested in this want to add their own syscalls — they want to connect to an external database and maintain that — more like a Cosmos application, where you do whatever you want and aren't restricted to what the Lotus version of the FVM lets you do. At the moment you can't deploy user-defined Wasm actors there, whereas here you can start your own subnet with the extra Wasm built-ins you want to run, and it's up to you what they do and how they do it. It's a
more lightweight option to explore the FVM. Sorry if I went over time, but thank you very much for listening.

Awesome, thank you so much, Akosh. And then we have Brenda with an intro to Lassie.

Hello everyone, my name is Brenda. I'm a product manager on the bedrock team, so shout out to my fellow bedrockers here — Lauren, DVD and Ivan. Today I'm going to share a bit about Lassie, which is a new retrieval client that some folks on the team — Hannah, Rod and Kyle — started building out back in January. So it's been a couple of months, and I'm really excited to share the progress that's been made, how it works, and how you can talk to other folks in the ecosystem about it. Let's get started. I don't know how you feel when you talk about Filecoin and IPFS, but I think sometimes when I share about it — whether with clients or just my friends or family — it's technically confusing for them. Maybe you're a client, or a consumer, and you're thinking: "Hey, Filecoin and IPFS are pretty cool, but how do I actually know where my data is? If I have it stored with a storage provider on Filecoin, how do I know which one has it? Do I have to track all these things down and remember them myself?" So what if I'm a client and I don't actually know where my data is or which storage provider has it? This is where we're introducing Lassie, a retrieval client that will actually find and fetch your CIDs over the best protocols available on both IPFS and Filecoin. To show a little more about how it works, here's a nifty diagram — shout out to Lauren. On the client side you have the CLI tool, Lassie, and essentially how it works right now is that you give it a CID — in this case there's an example CID here, bafy-dot-dot-dot — and you just ask Lassie: "Hey, I want to fetch this CID." Lassie will go and query IPNI, the InterPlanetary Network Indexer that Ivan shared a bit about earlier, asking who has this CID
and how they are serving it. IPNI will come back with a group of providers — both IPFS and Filecoin providers — telling Lassie which providers they are and what protocols they're serving the content over. Lassie will then go and ask these different providers, "Hey, please provide me this CID," and it will actually race them: whoever returns the fastest is where it gets your data from. There are a couple of different modes you can use Lassie in. The first and most straightforward is the CLI: you download Lassie, run the very simple command lassie fetch with your CID, and it returns your content in CAR file format. You can also use the Go client library, which can be integrated directly into your Go app, and there's an HTTP daemon for integration into non-Go apps, which we'll talk a little more about later. Some pretty neat Lassie features: it's very lightweight, but the big one is that it retrieves seamlessly from both Filecoin and IPFS for you. It will find the content for you, or you can specify where to get the content if you know your provider and how to reach it. It queries all these providers in parallel and returns the data from the fastest source, as I explained earlier. Optionally, you can see detailed progress information; there's a snippet here that shows, step by step, what's happening: it's fetching the CID, it queries the indexer for the CID, here are the candidates found from the indexer, and it's querying all of them — it lists the progress so you can see where your request is. Lassie also fetches and guards your data: all data Lassie returns is in CAR format, so it's verified data, and you know that what you receive is exactly what you asked for. Basically, a data provider cannot
provide you with false data, or something completely different from what your CID asked for, and the output Lassie gives you includes everything you need to verify the content as well. So: Lassie — cute, but it also fetches and guards for you. Really quickly, I wanted to share where Lassie is being used today. In addition to individual end users — such as myself, or other folks at this company using the Lassie CLI — Saturn, the decentralized CDN which many of you have probably heard about, serves its cache misses from Filecoin and IPFS, and it's doing this via Lassie. If you go to Saturn and ask for a particular CID that the Saturn cache nodes don't have, Saturn will actually use Lassie to fetch that content from IPFS and Filecoin. So it's being used today, which is really neat, and you can see some basic stats I pulled earlier today: retrievals to both Filecoin and IPFS are flowing. So if anyone claims to you that retrievals are broken or aren't working — they actually are working. You can see here that a portion of ipfs.io traffic right now is being sent to both storage providers and IPFS nodes: over 113 million successful retrievals across Filecoin and IPFS, over a seven-day period. Out of that, we know for sure there are at least 147,000+ successful retrievals from over 63 unique storage providers — and possibly more, because the way we're separating this right now is purely based on GraphSync (a retrieval protocol) versus Bitswap, and storage providers are also serving retrievals over Bitswap, so it could be more than this number; it's just hard to differentiate beyond protocol choice. So yeah, really exciting — and this is just from one project, Saturn, which is still ramping up and in a test run right now. So I just
want to encourage everyone to try out Lassie and give us feedback. There are some links in the deck where you can see a basic Lassie tutorial, and we have a GitHub repo with more detailed information as well as the HTTP spec; if you have any questions, find us in the Retrieval Help channel. I did want to show you how easy it is to use, so for the purposes of this demo, very simply, I downloaded Lassie and I'm running lassie fetch with this CID. I also downloaded the car tooling and FFmpeg tooling, so I'm extracting the CAR file and then playing it with ffplay. This is a video that one of our teammates, Rod, had uploaded to web3.storage — so let's see what happens... and here's the video playing. Very simple and easy to use. So yeah, let me know if you try it out, find us in Retrieval Help if you have any questions, and that's it for me.

Awesome, thanks so much, Brenda — that was great, and now everyone knows where to find you. I just wanted to thank everyone for attending Mother of All Demo Days, and thank you to all the presenters: great job. For those interested in presenting their demo, our next one will be June 15th, and for anybody who missed it and wants to share with their teams, the recording will be up by the end of the day. But yeah, thank you again everyone, appreciate it. Oh — thanks, Jarge.