Hi everybody, it's Misty. We have presenters from ConsensusLab, CryptoNet, and ProbLab, and we can get started with Matej.

Hello everybody, thank you Misty. Let's start with our demo of reconfiguration of the state machine replication protocol; this is about reconfigurable state machine replication in Mir. Just a quick recap: what is Mir? Mir is a framework for implementing distributed protocols that we are developing at ConsensusLab. It focuses on consensus protocols, it is modular and flexible, and it's available under the link here, which is clickable in the version of the slides that I linked in the Google Doc. It's part of a ConsensusLab Y3 project on scalable consensus that you can also check out. Okay, this was a recap, so I went through it fast.

In this demo I'll be showing reconfiguration. So what is reconfiguration of such a system? I don't know what exactly the backgrounds of the people present are, so here's a quick intro to what that actually means. In a distributed system, by reconfiguration we usually mean dynamically, that is at runtime, changing the set of nodes running a distributed protocol. So we have some nodes, called validators in Lotus, that are executing some agreement protocol, and we want to add nodes while the system is running without disrupting its functioning. We also want to make it possible for other nodes to leave, seamlessly transitioning to new configurations.

Some background on how this works in our system. The state machine application is implemented like this. First, we have the mempool, which stores incoming transactions unreliably. All the transactions coming in from clients are stored in the mempool, but the mempool gives barely any guarantees about whether these transactions survive a node restart or a storage failure; it is a best-effort pool of transactions. From there, transactions make it in batches to the availability component, which in our system is basically reliable transaction storage. It also executes a protocol: when it receives a batch of transactions, it makes sure that enough other nodes reliably store all these transactions, such that it is certain they will be available for anybody who asks for them. When the availability of the stored transactions is ensured, it issues an availability certificate for the batch. But it is not guaranteed that all nodes issue availability certificates in the same order. That's why the certificates go to the ordering component of our system, which establishes a total order over them. This is the core of the consensus protocol: it agrees on the order of the certificates.

At the output of the ordering component we get an ordered sequence of availability certificates, which goes to the execution stage. There, the certificates are transformed back into actual batches of transactions fetched from the availability layer. Each availability certificate corresponds to some transactions; we fetch those transactions, and we know they are available because that's what the certificate attests. We get the actual transactions, including the payloads, and we can execute them. A rough sketch of this pipeline follows below.
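To make the four stages concrete, here is a minimal, self-contained Go sketch of the mempool → availability → ordering → execution flow. All type and function names are hypothetical illustrations of the description above, not Mir's actual API.

```go
package main

import "fmt"

// Hypothetical types sketching the four stages described above;
// Mir's real API differs.
type Tx []byte
type Batch struct{ Txs []Tx }

// An availability certificate attests that a batch is reliably
// stored and retrievable from the availability layer.
type Cert struct{ BatchID string }

type Availability struct{ stored map[string]Batch }

// Certify stores a batch "reliably" and returns a certificate for it.
func (a *Availability) Certify(id string, b Batch) Cert {
	a.stored[id] = b // stand-in for replicating to enough nodes
	return Cert{BatchID: id}
}

// Fetch resolves a certificate back into its batch; the certificate
// guarantees the batch is available.
func (a *Availability) Fetch(c Cert) Batch { return a.stored[c.BatchID] }

func main() {
	avail := &Availability{stored: map[string]Batch{}}

	// Mempool: best-effort pool of incoming transactions.
	mempool := []Tx{Tx("tx1"), Tx("tx2"), Tx("tx3")}

	// Availability: batches are certified as reliably stored.
	cert := avail.Certify("batch-0", Batch{Txs: mempool})

	// Ordering: consensus totally orders the (small) certificates,
	// not the full transaction payloads.
	ordered := []Cert{cert}

	// Execution: fetch each certified batch and execute it.
	for _, c := range ordered {
		for _, tx := range avail.Fetch(c).Txs {
			fmt.Printf("executing %s\n", tx)
		}
	}
}
```

Note the design point the talk makes implicitly: consensus only orders the small certificates, while the heavy transaction payloads travel through the availability layer.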
More in detail, when an availability certificate comes to the execution stage, it comes ordered with respect to other events. Our system works based on what we call epochs; these are not the Filecoin Expected Consensus epochs, but a different notion of epoch. So what we get here is certificates interleaved with new-epoch events, all totally ordered. Then, as I said, we fetch the batches of transactions from the availability layer, and we get another ordered sequence of batches and new-epoch events.

Some of these transactions can be special configuration transactions, and this is what matters for the reconfiguration I'm going to show. These special transactions change the configuration of the system. They are filtered out here, and they are also ordered with respect to epochs. The system maintains some configuration state, and whenever a new epoch starts, the pending configuration transactions take effect. All the components of the system receive an event telling them to reconfigure, which means they need to create connections to the newly joining nodes, possibly close the connections to the old nodes, and do a bunch of other things required for the system to smoothly transition to the next configuration. The transactions that are not configuration transactions are just application transactions; they are assembled into blocks and shipped to actual execution.

So we dynamically change the set of nodes; this is what we will show in the demo, in a chat demo application that I also used last time to show the fault tolerance of the system. Then Dennis will show how this is integrated into Filecoin, and how we can add Filecoin nodes running Mir consensus and still reconfigure.

All right, here I already have four nodes prepared that are running the demo chat application. They have an initial configuration, which you can see here in the argument: a static configuration of four nodes that each of them loads to know how to connect to the others. This is a simple chat demo: I can say hello from one node and the others receive the message. If everybody says hi at the same time, everybody still gets the chat messages in the same order.

I modified the application so that I can have special messages: when you type a special chat message, it is interpreted as a configuration transaction. I just need to tell the system which node will be joining and at which address. I have another node ready here. I run the chat demo, give it a new configuration that already includes the four nodes and itself, tell it to use the libp2p network transport, and tell it that its own ID is four. Each node can be configured at initialization with a static membership file saying, for example, that the node with ID four, which would be this node, is at this IP address and this port, and so on. Let me copy this, because it will be useful. So I just run the newly joining node with the new configuration, and here I can send a special message that starts with "config", and I paste the ID of the newly joining node and its address.
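As an illustration of how such a special chat message could be turned into a configuration transaction, here is a hedged Go sketch. The "config <nodeID> <address>" format and the ConfigTx type are hypothetical, based only on what the demo shows; the actual demo code may differ.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ConfigTx is a hypothetical configuration transaction produced from
// a special chat message of the form "config <nodeID> <address>".
type ConfigTx struct {
	NodeID string // e.g. "4"
	Addr   string // e.g. the address the new node listens on
}

// parseChatMessage returns a ConfigTx if the message is a special
// configuration message, or nil for an ordinary chat message.
func parseChatMessage(msg string) (*ConfigTx, error) {
	fields := strings.Fields(msg)
	if len(fields) == 0 || fields[0] != "config" {
		return nil, nil // plain application transaction
	}
	if len(fields) != 3 {
		return nil, errors.New("usage: config <nodeID> <address>")
	}
	return &ConfigTx{NodeID: fields[1], Addr: fields[2]}, nil
}

func main() {
	for _, m := range []string{"hello", "config 4 /ip4/127.0.0.1/tcp/10004"} {
		tx, err := parseChatMessage(m)
		switch {
		case err != nil:
			fmt.Println(err)
		case tx == nil:
			fmt.Printf("chat: %q\n", m)
		default:
			// In the demo, this transaction is ordered like any other
			// and takes effect at the next epoch boundary.
			fmt.Printf("config tx: add node %s at %s\n", tx.NodeID, tx.Addr)
		}
	}
}
```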
It interprets the message as a new node joining, and it adds the node. This node was complaining for a while that it couldn't find the others, because they hadn't talked to it yet. But now the newly joining node has downloaded the state, which consists of all the messages sent so far; it can send messages here, and all the other nodes receive them. So it's integrated in the system now. Thank you very much, and I'll hand over to Dennis to show how this works in Eudico.

Okay. In this second demo we will use the Eudico client instead of Mir's chat application. The demo will be as follows: we have three Eudico nodes running proof-of-work in the rootnet. Then we will create a subnet running the Mir consensus protocol, initially containing two validators, and they will be creating blocks. After some time, we will add a third node to that subnet, and we will see that this new node is able to create new blocks in the subnet. That was the scenario.

Now we start our script with three Eudico clients running proof-of-work. Here we can see a miner and a daemon for each node. We have started proof-of-work consensus on all nodes, and now we can see that each client is mining blocks and getting rewards; because we have only three nodes, we can see the mined blocks coming in quickly.

Once all nodes have enough tokens to create and join a new subnet, we create a subnet with Mir consensus. It requires at least two validators to start mining in this subnet. Now node zero is joining this subnet, and in the validator address parameter we provide the node's full ID from Mir's perspective, that is, its libp2p identity.
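Dennis's point is that a Mir validator is identified by its libp2p identity. As a minimal sketch, this is roughly how such an identity is derived with go-libp2p (import paths are for recent go-libp2p versions and may differ in older ones):

```go
package main

import (
	"crypto/rand"
	"fmt"

	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
)

func main() {
	// Generate an ed25519 keypair; the public key determines the
	// libp2p peer ID the validator is known by.
	priv, pub, err := crypto.GenerateEd25519Key(rand.Reader)
	if err != nil {
		panic(err)
	}
	_ = priv // the node keeps the private key to authenticate its connections

	id, err := peer.IDFromPublicKey(pub)
	if err != nil {
		panic(err)
	}
	fmt.Println("validator libp2p identity:", id)
}
```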
Then we do the same for the second node, and now we can see that both validators are mining blocks in the subnet with the Mir consensus protocol. After that, we add the third node, and to add this node to Mir, the reconfiguration mechanism is used. We do the same and start mining, and now you can see that the third node is mining blocks.

To prove that the last node has really connected to the network, we will send tokens from the third client to the first client. We will send three tokens and check that the first node gets them. So we provide the address of the first node and send three tokens. Now let's check that the first node has received them; we should see three tokens. Yes. So that's it from my side, thank you very much.

Yeah, thank you. Up next we have Nicolas, and he'll be doing a short overview of the Medusa project.

Hi everybody. Today I'm going to talk about the Medusa project that I've been working on for the last few months at CryptoNet. I'm going to talk about a threshold network and on-chain data access management. What is Medusa? I like to describe Medusa as a toolbox to do many things: programmatic access control, which is the first application we're going to focus on, on-chain timed encryption, and much more. Basically, you can think of it as Medusa giving smart contracts, or any application, access to a private key held by a threshold network; we're focusing on the access-control application today. We have a demo of access control on Goerli that we're going to showcase later on. So let's dive right in.

Let's take a step back: what is the problem exactly? Right now, there is basically nothing private on chain. Everything you send is public. The smart contract acts as a third party, but it cannot hold anything private of its own; it is a third party whose entire state is public. That means the smart contract cannot sign or perform any private operation on chain; it is just attesting and running logic, but not logic that involves private information.

What could we do if we had a smart contract that holds a private key? Many things. One thing, which is the focus of this talk, is programmatic access control. For example: you can access this document if you show you held this identity during this event, or I will give access to this mailing list to all users of a given protocol — any condition you can actually prove in a smart contract works. It's a general concept of on-chain conditions.

So how does this work in the context of Medusa? Medusa is a network of nodes in its own right, which means there are some honesty assumptions: we need more than a threshold, say 50%, of the network to be honest. In that case, the network holds the key without anybody knowing it: no node of Medusa knows the key, nobody knows the key, but the smart contract can use it. When you integrate this design with a blockchain, you get an oracle-like design, similar to Chainlink for those who know it. For example, one operation you can do with the private key is decrypt something: the smart contract sends a decryption request to the Medusa network, and the Medusa network decrypts it and pushes back the plaintext. So you really have to think of Medusa as an extension of the smart contract.

What you can do with it is programmatic access control in a very easy manner. Imagine all the applications that need access control — document sharing, private mailing lists, even a music platform. They delegate the key to Medusa, in some sense: they don't manage it themselves, and they ask the Medusa network to operate on their behalf. Any smart contract can use Medusa; there's no registration, you just say: hey, Medusa, decrypt this thing. It's one call.

There are two modes to Medusa. You can have global decryption, where the Medusa network completely decrypts the ciphertext and pushes it on chain, so you are revealing the message; this is useful for applications like bets or auctions. But you can also, and this is the focus of today's demo, re-encrypt the ciphertext. This is what we need for access-control management like document sharing: we request that the Medusa network decrypt this document specifically for Bob. The contract requests the re-encryption for Bob, and the Medusa network re-encrypts the document toward Bob.
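A minimal Go sketch of how a client might distinguish these two modes. The interface and all names are purely illustrative assumptions, not Medusa's actual API (which, as mentioned later, is Rust-based):

```go
package main

import "fmt"

// Hypothetical client-side view of the two Medusa modes described
// above; names and signatures are illustrative only.
type CiphertextID string

type Medusa interface {
	// Global decryption: the network decrypts the ciphertext and
	// pushes the plaintext on chain (useful for bets, auctions, ...).
	RequestDecryption(id CiphertextID) error

	// Re-encryption: the network re-encrypts the ciphertext toward a
	// recipient's public key; the plaintext is never revealed on chain.
	RequestReencryption(id CiphertextID, recipientPubKey []byte) error
}

// fakeMedusa is a stub so the sketch compiles and runs.
type fakeMedusa struct{}

func (fakeMedusa) RequestDecryption(id CiphertextID) error {
	fmt.Println("network decrypts", id, "and posts plaintext on chain")
	return nil
}

func (fakeMedusa) RequestReencryption(id CiphertextID, pk []byte) error {
	fmt.Println("network re-encrypts", id, "toward recipient key")
	return nil
}

func main() {
	var m Medusa = fakeMedusa{}
	m.RequestDecryption("auction-bids-42")
	m.RequestReencryption("doc-123", []byte("bob-public-key"))
}
```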
So how does it work end to end, as a user? Imagine Alice has a super top secret document. The first thing she does is encrypt it; now she has a ciphertext. The second thing she does is submit the key of the encryption — the encrypted key that was used to encrypt the document — which leads to an event on Medusa. Now the Medusa network is aware that this encrypted document exists somewhere, but it doesn't do anything yet.

Later on — it can be one week, one month, one year afterwards — Bob comes in and says: hey, I want to read the document. He goes to the document-sharing smart contract and says: I want to read. Now the document-sharing platform needs to decide: is Bob authorized or not? This is custom logic; anybody can code their own custom logic. It could be that Bob is more than 18 years old, or holds a given token — it can be anything. If Bob has the right to access the document, then the document-sharing platform asks the Medusa network for the re-encryption. The Medusa network reads the re-encryption request together with the earlier submission event, and it performs a re-encryption: it never decrypts the ciphertext, and at no point in time is the plaintext revealed. Once that is done internally, it pushes the re-encryption on chain to the Medusa smart contract, which pushes it to the document-sharing contract. Bob sees that his re-encryption is ready, downloads it — in the browser, on his command-line interface, whatever — and does a local decryption. Then he can read the document by himself, locally on his computer. And that's the whole workflow.
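The authorization step is deliberately open-ended. Here is a small Go sketch of such a pluggable policy; the types and the over-18 rule are hypothetical illustrations taken from the example above, not anything in the Medusa codebase.

```go
package main

import "fmt"

// AccessRequest is a hypothetical request to read a document.
type AccessRequest struct {
	Requester string
	Age       int
}

// Policy is the custom authorization logic the document-sharing
// contract runs before asking Medusa for a re-encryption. Any
// condition provable in a smart contract could go here.
type Policy func(AccessRequest) bool

func handleRead(req AccessRequest, authorized Policy) {
	if !authorized(req) {
		fmt.Println("denied:", req.Requester)
		return
	}
	// Authorized: ask the Medusa network to re-encrypt the document
	// toward the requester (the plaintext is never revealed publicly).
	fmt.Println("requesting re-encryption for", req.Requester)
}

func main() {
	over18 := func(r AccessRequest) bool { return r.Age >= 18 }
	handleRead(AccessRequest{Requester: "bob", Age: 25}, over18)
	handleRead(AccessRequest{Requester: "eve", Age: 16}, over18)
}
```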
Where are we on this project? Until now it has been only me for the last few months, and we just hired somebody who will help me on the smart contract and backend side, Jonathan. We have a working proof of concept: you can check out the code, which is not completely public yet, so you need to ask me first if you want an invite. It's based on Rust for all the backend, with the contracts in Solidity, and we deployed the proof-of-concept code on Goerli. The nodes communicate over libp2p; we are running the servers ourselves for now. We have a demo, which I'm going to show you shortly.

In the future we want to expand the use cases a bit. Like I said at the beginning, there is access control, but we can also do timed encryption, or more generally witness encryption: upon any condition being met, we reveal something. We can also add privacy on top of the conditions themselves: the conditions for somebody to be allowed to request a decryption could be private as well, so you would not reveal, say, your name or your exact age, but anybody could verify that you meet the condition, for example that you are over 18; MPC could help there. We also have a research axis on extending the scalability of the basic cryptographic primitive we use, which is threshold cryptography. As a first iteration there, we made a proof-of-concept implementation of the DKG, the underlying cryptographic primitive, which can run with many nodes; it may lead to a production system later on, but we still need to decide where we're going.

So this is a very young project, and if you want to hear more and participate — there are plenty of things to do — don't hesitate to contact me. Now I'm going to show you a quick demo.

This is the ACL contract, the Access Control List contract, which lists only reader and writer roles. I write a text, it encrypts it and submits the ciphertext on chain; and there is a reader role, which I'll show you shortly, that will ask to decrypt. This is the main demo contract. Let's say I type a small secret here. Okay, I confirm with MetaMask, and it's done; this is on Goerli, the Ethereum testnet. Now I switch to a reader — I'm just switching keys — and I can see all the ciphertexts and their submission times. I just made this one here, you see; I will ask to decrypt it. Again, it's a transaction on chain. My key is already whitelisted on the Access Control List smart contract, so I am allowed to read. This is a very basic demo, but the idea is that anybody can code their own rules; Medusa is just a toolbox for people to build on. Now we've submitted the transaction and we're waiting for the oracle's result. In the meantime, because it takes a little time since we are on Goerli, I can show you that there are four participants in the network right now: four nodes that hold the private key in Medusa. These are their addresses, and there is the distributed public key of the group. And here it is: the secret I submitted to Medusa decrypts back, so you see I can decrypt from the reader side as well. That's it for me, thank you everyone.

Awesome, thanks so much, Nicolas. And the last demo we have is from Yiannis.

Hi everyone. This is work that has been done in ProbLab, which focuses on protocol benchmarking and optimization. I'm going to start with a little recap of what I said in June on what a provider record is and where it lives. To do that, I'll go through the IPFS design for the content lifecycle: what happens from content publication to content request and retrieval. Assume you have a document; you hash it, and what you put on IPFS, as you all know, is a provider record. The provider record includes the contact details of the publisher as well as the CID of the content being published. The DHT then does some magic and finds the proper node to store the provider record. On the other side, the retrieval side, the requestor has to know the CID out of band. They first ask their immediately connected peers through Bitswap, and if those answers are negative, they go to the DHT and ask for the same CID; the DHT will hopefully do its magic again, end up at the same node, and request the provider record. At that point the requestor has the provider record, so they have the contact details of the provider, which means they can contact the provider, set up a connection, and transfer the data.
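As a concrete illustration of this publish/retrieve flow, here is a hedged Go sketch using go-libp2p-kad-dht. The calls shown (dht.New, Provide, FindProviders) exist in that library, though exact import paths vary across versions; a real client would first bootstrap against the public IPFS network before providing, and the CID below is just an example value.

```go
package main

import (
	"context"
	"fmt"

	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx := context.Background()

	// A libp2p host plus a Kademlia DHT on top of it. In a real
	// deployment you would bootstrap/connect to public peers first.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	kad, err := dht.New(ctx, h)
	if err != nil {
		panic(err)
	}

	// Publishing: announce that this peer can provide the CID; the
	// DHT stores the provider record on the peers closest to the CID.
	c, _ := cid.Decode("bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi")
	if err := kad.Provide(ctx, c, true); err != nil {
		fmt.Println("provide failed (no peers yet?):", err)
	}

	// Retrieval: a requestor who knows the CID asks the DHT for the
	// provider record to learn the provider's contact details.
	provs, err := kad.FindProviders(ctx, c)
	if err == nil {
		for _, p := range provs {
			fmt.Println("provider:", p.ID, p.Addrs)
		}
	}
}
```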
So what is the hypothesis of this work? As we've seen, a provider record is a small record that includes the contact details of the content publisher as well as the CID, and it is published on a number of different nodes in the network. In the previous, simplistic example that was one node, but in reality it is 20. The replication is done because we want the provider record to stay alive in the network: in case some of the nodes go offline, are overloaded, or cannot respond to requests, there will be others that hold the provider record, are findable and online, and can serve it to whoever requests the content. If there is no provider record left in the network — if all of the nodes holding it have gone offline — then the content is unreachable, which is pretty bad.

The hypothesis for this work is the following: we've seen high rates of churn in the IPFS network — up to 70% of peers leave the network within only two hours of joining — which means that many of those 20 peers have good chances of having left the network, leaving only a few replicas of the provider record inside the network. So we wanted to see whether there are cases in the IPFS DHT where provider records are effectively no longer alive, and content is therefore unreachable.

What we did is: Mikel built what is called the CID Hoarder — you can find the URL down there, on GitHub — which is a tool that produces content, produces CIDs and the provider records for those CIDs, stores them on the IPFS DHT, and then monitors the specific nodes holding them to see whether they are still online and whether they are serving the provider record or not. It has several features I won't go into, but that's the main functionality. We ran it over the live network, and we wanted to answer some questions.

One of the main questions is, as I said before: does the record stay alive until the republish time? Provider records are republished every 12 hours to make sure they stay alive in the network: despite churn, republishing finds the next 20 peers that are online at that point in time to replicate the record to. The answer to the question is yes. We see this in a graph where, on the Y-axis, we have the number of nodes where provider records are available, and on the X-axis, the time since the CID was published and the record stored; it goes from 0 to 38 hours. The provider record would normally be republished at the 12-hour point, but the CID Hoarder deliberately does not republish records — that's the whole point. Despite that, we see that the record stays alive on approximately 15 nodes for more than 35 hours, which is a good thing: the current DHT keeps records alive, and content does not become unreachable.

The next question that comes to mind is whether records stay alive only thanks to the Hydra nodes. So we excluded the Hydras from the requests we made to get the provider record, and we found that, excluding Hydras, we still have on average about 12 nodes that keep the record alive for more than 35 hours. Again, great news, because it means we are not really dependent on the Hydras.

What does this mean practically? It means that perhaps we can reduce the value of K — the current replication factor — from 20 down to 15, and we've done experiments on this as well.
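For reference, in go-libp2p-kad-dht the bucket size also serves as the replication parameter K, and it is exposed as a constructor option. A hedged sketch of lowering it from the default 20 to 15, the change the study evaluates (option and method names as in recent versions of the library, which may differ):

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
)

func main() {
	ctx := context.Background()
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}

	// BucketSize sets the Kademlia bucket size, which by default also
	// determines how many peers a provider record is replicated to.
	kad, err := dht.New(ctx, h, dht.BucketSize(15))
	if err != nil {
		panic(err)
	}
	fmt.Println("DHT ready with K=15, peer:", kad.PeerID())
}
```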
So we reduced the replication factor, published CIDs and their provider records, and monitored again how long peers stay online and keep the provider record alive. We see that there is, of course, a small drop, from an average of 15 down to an average of 10 nodes, but these again stay alive for more than 35 hours, which is again great news: it means we can apply this optimization to the IPFS DHT as it is today.

What else does this mean practically? As I said, the republish interval on the IPFS DHT is 12 hours, so perhaps we could consider increasing it, and we found that we can at least double it: in our extensive set of experiments we've seen that records stay alive for at least 35 hours, so even more than double the 12 hours would still be okay.

There is something we need to be careful about here, though. When we publish a CID, we try to find the K = 20 peers closest to the CID in XOR distance. If peers come and go, we need to make sure that the peers chosen at publication time are still the closest ones after 12 or 24 hours, or whatever republish interval we choose. And again, we found that 15 of the 20 closest peers chosen initially are still among the closest ones after more than 35 hours. We see that initially it's around 17, it drops to 16, and then stays stable at 15 nodes. This means that 15 nodes keep the provider record alive up until 32 hours, at which point it goes down to 14. Does this include the Hydras? It does, but again, if we exclude the Hydras, we see a drop of two to three nodes, so it would go from an average of 15 to an average of 12 and stay like that for more than 30 hours.
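Yiannis mentioned choosing the K peers closest to the CID in XOR distance. Here is a small self-contained Go sketch of that metric; it is illustrative only, in the spirit of Kademlia, which compares the hashes of keys and peer IDs bit by bit.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// xorDistance compares two keys by XORing their SHA-256 hashes:
// the provider record for a CID goes to the K peers whose IDs are
// closest to the CID under this metric.
func xorDistance(a, b []byte) []byte {
	ha, hb := sha256.Sum256(a), sha256.Sum256(b)
	d := make([]byte, len(ha))
	for i := range ha {
		d[i] = ha[i] ^ hb[i]
	}
	return d
}

func main() {
	target := []byte("some-cid")
	peers := [][]byte{[]byte("peerA"), []byte("peerB"), []byte("peerC")}

	// Sort peers by closeness to the target key.
	sort.Slice(peers, func(i, j int) bool {
		return bytes.Compare(
			xorDistance(peers[i], target),
			xorDistance(peers[j], target)) < 0
	})
	for _, p := range peers {
		fmt.Printf("%s distance=%x...\n", p, xorDistance(p, target)[:4])
	}
}
```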
Now for the conclusions of the study; a final recommendation will be coming soon. We found that there is definitely significant space for improvement. This builds on the observation that DHT servers — sorry, content providers — are overloaded: they have to run high-CPU machines, consume lots of bandwidth, and so on. This has been a long-standing issue in the IPFS network, so if we can reduce the overhead, it will have quite a significant impact. Roughly, if we go from K = 20 to K = 15, we get about a 25% reduction in overhead, and if we increase the publish interval from 12 to 24 hours, we halve the republish traffic on top of that. It has to be noted that "overhead" here doesn't mean the entire load on those machines, just everything that is provider-record related: sending, receiving, and storing provider records, and so on. We're not aware what percentage of the overall energy consumption of the servers this is, but it is definitely going to be a worthwhile reduction.

We've been working on this as a team, and we have more results than what I presented here. The final report is very extensive: several pages, with many tens more figures and results. You can find it on GitHub, in pull request 16, which is soon to be merged, but if you're eager to find out more now, head there. That's it, thank you. You can get in touch with ProbLab on the IPFS Discord and also on the Filecoin Slack.

Thanks everyone. Cheers.

Well, thank you everybody for attending the Mother of All Demo Days, and thank you to all the presenters: Matej, Dennis, Nicolas, and Yiannis. The next Mother of All Demo Days will be Thursday, October 6th. Thanks everybody.