Hello everyone, welcome to this month's Mother of All Demo Days meeting. I'm excited to share that we have four demos today from drand, NetOps, and Consensus Lab. So we can start with the first on the list, Yolan and Patrick. Welcome to our demo.

So we'll demo our timelock encryption feature, built using the drand League of Entropy network. That's work of the drand team, together with Patrick mostly. Maybe briefly: timelock encryption is the capability of being able to encrypt something toward the future. The idea was initially submitted in 1993 on the cypherpunks mailing list, and at the time, the only way to achieve it was to rely on trusted third parties, such as notaries, to which you would give the encryption key sealed, and they would unseal it when the time had come. Since then, there has been a lot of research on the topic, but most of the solutions either relied on proof of work, a bit like Bitcoin, or they needed something such as a trusted third party or a reference clock. And timelock encryption is really interesting because it can help reduce front-running, mitigate MEV, and so on. It's also possible to encrypt your Bitcoin private key toward the future, let's say two years ahead, so that if you die, your successors would be able to decrypt it in two years if they have the ciphertext. And if you're still alive, you could just transfer the Bitcoin to a new address and it's not lost. Our solution to achieve timelock encryption is to rely on the existing League of Entropy drand network as a reference clock. The drand network run by the League of Entropy is made of over 20 nodes, roughly 23 nodes currently, run by 16 unrelated parties, including big names such as Cloudflare, the Ethereum Foundation, and so on. And the nice thing is that drand is a threshold network which we can trust. So here we can see how drand rounds map to a given time.
So every 30 seconds currently, the League of Entropy issues a new round, and this is perfectly deterministic. We can say that in five minutes there will have been 10 more rounds, and so on. The security of the whole thing relies on the BLS signature scheme, which drand uses. The nice thing with BLS is that it's a pairing-based signature scheme, and that makes it compatible with identity-based encryption. We use identity-based encryption to basically say: okay, we use the future drand signature as the secret key, and we use the message that is going to be signed in the future as the public key. So anybody who knows the message, which is just the round number, knows the public key. But to know the secret key, you need to wait until the network issues the signature for that round. And we are building on top of that to achieve timelock encryption. We already have a working Go client and a Go library. So the CLI tool, I think Patrick can demonstrate it now.

So yeah, as Yolan alluded to, we've got a Go library and a CLI with which we can encrypt and decrypt things for the future. Some points to note here: it supports multiple networks, you can input the duration, and you can also fiddle a bit with the output format that you would like. So let's first take some plaintext message that we'd like to encrypt. I guess: hello Protocol Labs. Very simply, the default settings here will use the drand testnet to encrypt for us. So let's encrypt our file here and set a time, I suppose, of five seconds, so we can decrypt it nice and easily in a moment. Our input file we're going to encrypt into the encrypted data file, which I'll show you is nice and empty, so no one thinks I'm cheating. Let's input the text we had a moment ago. After a brief second, we'll see that in the encrypted data here we have a payload. We can also turn that into armor, which some of you may recognize from other schemes such as PGP.
If we then want to decrypt that with the CLI, that's also possible. We pass in our encrypted data and output it somewhere else, so let's say decrypted data here. We pass in our encrypted data, and hopefully the output of our decrypted data will be exactly as we hoped. Additionally, hot off the press, in fact finished earlier today, we've also worked on a pure TypeScript implementation of timelock encryption. So now you can do this fully in the browser as well. If we copy the ciphertext we got over there, then hopefully, demo gods willing, we will be able to decrypt that. Demo gods not willing, it seems. There you go. We can also do some encryption and decryption right in the browser. Hopefully we will be able to decrypt this using timelock, and this also clears our decrypted data. That's awkward. Unfortunately, the demo gods have not been kind on this day, but very shortly these two libraries will be compatible, and you'll be able to encrypt with one and decrypt with the other. That's all I have on the demo side for now.

Yeah, and the tle tool works just like PGP in that it supports streaming interfaces, so you can pipe data into it, and you can pipe it into other commands and so on. It should be fairly easy to use. Behind the tle command line tool there is the tlock library, which is a Go library that you can use in your projects to achieve the same functionality. The whole thing will be made public next week, on the 12th of August, for DEF CON, because we have a talk that was accepted there. By then the UI should have changed a bit. I can share my screen again. By then the UI should have changed a bit to look maybe more like that. And the JS library and the Go library will both be released. It's currently running on testnet, on the drand testnet.
But we are planning to launch a new unchained network for drand mainnet, for the League of Entropy mainnet, in mid-September. So starting mid-September, you should be able to use timelock encryption in a way that is secure, because the testnet has only maybe three or four parties, so it's not super secure as a threshold network. The mainnet instead has a threshold of 13 out of 23 nodes, so it's fairly secure: you would need to compromise 13 nodes to be able to decrypt anything early, which is quite difficult to achieve in practice. That's it.

All right, we can move on to the next one. Thank you guys. We have NetOps, and Travis Person will be presenting. Let me know when you're ready.

Hey, everyone. I'm going to be showing some of the work we've done over the last couple of months around improving the snapshot service that, I would say, most people use in the Filecoin community, particularly for mainnet. This is something that's been going on for a while. It was originally spearheaded by Riva, who's done an excellent job maintaining the current snapshot solution, which I personally use almost weekly for the work that I do. Now we're taking over the stewardship of this service, and we recently launched a new version of it. Snapshots are a way to join the Filecoin network quickly. You can read about this in the documentation, but basically, snapshots are a small segment of the Filecoin chain that contains enough state information to allow nodes to participate in the consensus mechanism that Filecoin uses. Right now, the Filecoin chain is upwards of 16 terabytes in size if you were to compute it from genesis up until now, and doing so takes roughly 36 days of compute time for every year's worth of chain data produced. We're now coming up on two years, so that's almost 60 to 70 days of compute time if we were to reprocess the chain.
So that's not realistic for users who are trying to get onto the network, or trying to recover from a data disaster if they lose their data store. Chain snapshots enable users to get into the network relatively quickly. The work that I've done is to operationalize this in a way that gives us better guarantees around the availability of the snapshots, and to put alerting and monitoring in place, so we can understand how these snapshots are being produced and how well we're able to keep producing them on a regular basis. So today we announced that these are in a soft-launch phase, providing snapshots for both the mainnet and calibration networks. Just a quick overview of what this looks like. Essentially, we produce snapshots every two hours through a cron job. The cron job produces jobs; these jobs then go off, talk to a set of Lotus nodes that we operate, and perform an export of chain data. We take that chain data and stream it up into S3 at the moment. That data is then made available to users, who can find the latest snapshot by visiting one of these URLs. These URLs redirect users to the actual snapshot that exists. So in this case you can see here (I'm going to move this toolbar): if we make a request to the latest calibration snapshot, we get back a redirection to the actual car file itself. And if we do a curl request here to follow the redirection and download the attachment, we'll actually download the car file itself. One of the improvements we made came from user feedback: the old snapshot system used a concept of a latest object that was the snapshot itself, which you can see right here. So if we did a request for this latest object, we'd actually get back the snapshot contents themselves.
In certain cases, users with slow download speeds could end up with corrupted downloads, because the latest object would change out from underneath them. So instead of having the latest object be the snapshot itself, we redirect users to a static file that represents the snapshot, which can then be downloaded. Lotus handles this automatically: if you put these latest URLs into your Lotus node, Lotus will follow the redirects and download the file itself. We also support the same kind of behavior for checksums. We have SHA checksums, so users who request a snapshot can also pull the SHA sum and verify the integrity of the file they downloaded. As I said, we now provide snapshots for the calibration network, which is kind of a new thing. The software we have can run against any network, so we could even provide snapshots for the butterfly network if we wanted to, but due to its constant resets, it being primarily a development network, that's not something we're necessarily looking to provide. Usually that network is short enough that people can just sync up relatively quickly. If you want more information about this, there's an announcement post in the Filecoin Lotus Help channel at the moment. It links to all this information and to the public Notion page. We also have a pull request open against the Lotus documentation to add a section referencing this new information. This is a soft launch, and then we're going to be looking at deprecating the current existing snapshots that are in the Filecoin chain snapshots fallback bucket. Primarily, this is to give Riva his time back and allow him to go off and do better things. So hopefully we'll be able to take this over and provide a good service to the community.
In terms of improvements, like I said, one of the big things we were looking for is monitoring. We're still working on this, but at the moment we have dashboards that let us keep track of the operation of these systems. For example, this is the mainnet service, where we can see when the last job started, when the next job is scheduled, and how long it's been since the last job ran. These are the jobs themselves in operation, and this time span is how long the job took to execute. We can see the same kind of information in these graphs, showing how long these have been running. Along with that, we can see the nodes these jobs operate against. The system is designed to operate against three or more nodes, and it round-robins between the nodes as much as it can, to allow the nodes to recover after producing a snapshot and to reduce load on any single node. So far, we've had pretty good uptime in terms of our operations. Everything's relatively nominal here, and it's been running for about two weeks in its current deployment. Thank you.

We have next Andre from Consensus Lab.

My name is Andre. I'm a summer PhD fellow at Consensus Lab, and in the next few minutes I will show you how you can take pseudocode from a textbook or a theory white paper and convert it into an actual implementation using the Mir framework. Mir is being developed by the Y3 team, whose Slack avatars you can see at the bottom of the screen. The outline is pretty simple: after a high-level view of Mir, we are going to take a textbook algorithm, implement it in Mir, and run it. The basic abstraction of Mir is a node, and a node consists of modules, which communicate by exchanging events. These events can also be intercepted and later replayed for debugging purposes, but basically it's just an event loop. This is the core of Mir, and we want to keep this core as small and simple as possible.
Most of the complexity is actually implemented on top of this core. Today we'll talk about a specific component called the DSL, which stands for domain-specific language. The goal of this component is to provide better programming abstractions, because the core provides you with an interface to implement a module, but implementing that interface directly is akin to writing software in assembly. So there is a natural conflict between keeping the core as simple as possible and providing the user with nice abstractions. That's why we need this extra layer, which is the DSL. One of the main goals of the DSL, and of Mir in general, is to mimic the way theoreticians write their pseudocode in textbooks and white papers. This snippet of code is taken from the famous textbook by Christian Cachin, Rachid Guerraoui, and Luís Rodrigues called Introduction to Reliable and Secure Distributed Programming. While there is no single universally accepted pseudocode notation, there are some common trends. For example, you don't see people opening TCP connections or marshalling and unmarshalling messages in pseudocode. You are also quite unlikely to see people dealing with mutexes, or concurrency in general, in pseudocode. What you see instead, quite often, is an event-based paradigm where the protocol is basically represented by a set of event handlers which are executed one at a time, without concurrency. This way they can access the shared state without any race conditions, without any problems. Besides normal event handlers, it is also quite common to have condition handlers in pseudocode. So this handler here is executed when this condition over here is satisfied. And this is something quite foreign to most programming languages; at least I've never seen a similar abstraction in a programming language.
Just for context, I'm going to quickly explain what this pseudocode does. It implements an abstraction called Byzantine consistent broadcast. The goal is for a single node, the leader, to be able to broadcast a single message to a fixed set of nodes, and to do so consistently, which means that a Byzantine, malicious leader will not be able to send different messages to different nodes. You can see it as a sort of equivocation-prevention mechanism. Okay, so let us quickly compare the pseudocode to the Mir code and see how they're similar and different. The code in the Mir DSL starts from this function, dsl.NewModule, which creates this handle m, a sort of representation of our module, which we'll use to register handlers. As for the state, we can simply use a local variable. Here is an example of a simple event handler. In the pseudocode, when the leader wants to broadcast some message, it iterates over the list of all processes and sends each of them the message. This is the first step of the protocol. The way we implement it in Mir is very similar. We invoke this UponBroadcastRequest function, which registers a handler for the broadcast request event. The event has some data, and here we check a condition which, in the pseudocode, we could simply leave as a comment: this handler can only be invoked by process s, which is the leader. In the actual code, we probably want to actually check this condition to avoid silly mistakes. Then we also save the data to the state, but that's a minor detail, mostly for convenience. And then we invoke the DSL SendMessage function, which emits an event for sending the message and sends this event to the networking module, which is mc.Net.
The event contains the message that we want to send and the list of nodes to which we want to send it, which in this case is all nodes. So you can see the transformation from here to here, from the pseudocode to the actual code. In this case it can be done almost mechanically, even though the actual code tends to be a little more verbose. And here's a slightly more complicated example. Here, when a node receives some message, it checks some conditions, and if these conditions are satisfied, it creates a digital signature and sends it back to the leader. This is somewhere Mir slightly differs from the pseudocode, because creating a signature is an expensive operation, and recall that we want the protocol implementation to be basically single-threaded. That's why we do such heavy operations asynchronously. Instead of creating a signature in place here and sending it back to the leader, we send a request to the crypto module to create a signature for us. Eventually, the crypto module notifies us that the signature is ready, and only then do we send it to the leader. So the moral here is that sometimes, for the sake of performance, we may take a single event handler in the pseudocode and split it into several event handlers in Mir, but the transformation from here to here is still pretty simple, which is the main goal. And finally, there is a cool thing: the Mir DSL actually supports conditional handlers, which is fun because no real programming language supports them, yet they're quite common in white papers. So yeah. Let me just quickly show you that it actually runs, that it actually works. This is a very toy application. As I said, it's a simple broadcast which allows the first node, the leader, to send a single message to all other nodes, which are represented by the four terminals here.
They're just going to say, "hello protocol labs," for example. And as you can see, the message is delivered. It is done through a Byzantine fault-tolerant algorithm, so the leader will not be able to equivocate. Okay, and finally, I'd like to mention that Mir is still a work in progress. There are a lot of challenges that we need to address. For most of them we roughly know how to address them, but we are still working on that. And I guess that's it from me for today. You can also come and chat with us in the Mir dev Slack channel on the Filecoin Slack. And Sergey is going to present our testing infrastructure in the next session. Thank you.

So I'm going to talk about reproducible integration testing in Mir. I'm Sergey Fedorov from Consensus Lab, one of the developers of the Mir framework. Let me just quickly recap that Mir is a framework for implementing distributed protocols, with a focus on consensus protocols. It's made to be modular and flexible. You can find it on GitHub, and it's part of Consensus Lab's Y3 project, which is also called scalable consensus. The general architecture of Mir is event-centric: there are different modules that can produce and consume events, and basically the node operates by dispatching events from source modules to destination modules. As with any software, we would like to ensure its stability and correctness through proper testing. With distributed protocols, and I think especially with consensus protocols, that is particularly difficult, and the consensus protocol is a critical part of any blockchain or distributed system that uses one. So our goals for integration testing are to ensure stability against different kinds of failures, like crashes, network partitioning, and Byzantine failures, and also to catch implementation bugs.
When we think about integration testing of a consensus protocol, it seems like a good idea to make the testing deterministic, so that if we get a test failure in CI, we can take some kind of random seed, reproduce the test and the failure exactly, and debug it step by step. To achieve that, we need to control concurrency within each node and between the nodes. We would also like to explore different schedules of execution, so we use the ability to run pseudorandom schedules to catch different bugs. What prevents us from achieving reproducibility? The different inherent sources of non-determinism. In our case, these are mostly in the communication between nodes over the network: unpredictable message delays, unreliable message delivery, or out-of-order message delivery. Besides communication between nodes, we can also have non-determinism within a node, because event dispatching between modules happens concurrently, and each node has a local clock that can fire timeouts, which is also non-deterministic. When I talk about integration testing, I mean a scenario where all nodes run on the same machine, even within the same process. So they don't really have to communicate through the network; they can communicate through some stub. But nevertheless, when we run several nodes, we need to run them concurrently, and that concurrency gives us non-determinism that we want to control. So how can we control that non-determinism? Our first step toward that is a simulation engine that I recently implemented within the Mir framework. The core of this engine is the runtime. It counts simulated, logical time, and it executes actions that are scheduled at specific points in this simulated time. So the runtime knows at which instant of time each action should happen.
The action itself happens as if it were instantaneous. The runtime executes the next scheduled action in the simulated timeline and waits until that action is complete. Then it knows there is nothing more to execute at this time and can proceed to the next action scheduled in virtual time. That next action may be scheduled two hours away in virtual time, but in our simulation it happens immediately, as the next step. The runtime has a notion of processes, which help us control concurrency; a process represents running actions within the runtime. Whenever an action bound to a process is executed, the process becomes runnable: it executes some code, then blocks on some operation or sleeps in virtual time, at which point the action is complete and the runtime can proceed to the next scheduled action. Processes can also spawn new active processes; they can virtually fork. One process can spawn a new process, and then both are active, and until they both go to sleep or block, the simulation runtime does not execute the next action. They can also synchronize and communicate with each other, which is achieved by means of channels, the mechanism to synchronize and communicate values between processes. The way this helps us run Mir nodes in reproducible integration testing is that we wrap all the modules of a Mir node: we run the unmodified node core and unmodified module code, but we wrap the modules so that we can take control of module execution and event dispatching. In our case, handling each event happens as if it were instantaneous, but we can also introduce random delays in virtual time to simulate modules taking more time to execute.
Since modules are not supposed to communicate with each other directly, only through the Mir node core, that is perfectly fine: we have full control over event dispatching, and therefore we can schedule module execution with our simulation framework. The execution of modules is reflected as simulation processes, the processes mentioned in the previous slide. We track the events that are generated and consumed by the modules, so we know exactly what to expect from the core: we know how it dispatches events, and we control concurrency through the simulation runtime. We do have to provide two substitutes; earlier I said the modules are not modified, but there are two exceptions. One is the transport module, which implements communication between nodes. We provide a substitute for it, called SimTransport, which implements communication over the simulation channels instead of the real network or normal Go channels, so that we can control message passing between nodes as well as the events in each node. We also have a substitute for the local timer, because obviously we cannot use the real system timer if we run in simulated time; we have to use simulated time there too. In Mir, modules are not supposed to use the system timer directly; instead they emit events targeting a timer module, which can install timers and send back a specified event when a timer fires. We provide a substitute implementation of this timer module that is connected to the simulation and basically uses the simulation runtime to implement the timer in virtual time. The implementation of these Mir-specific parts is located in the deploytest package, but the simulation engine itself is located in a separate package; it's not really coupled to Mir, it's fairly independent. The Mir-specific pieces use the simulation runtime to work.
So this is the code, and I will just run two integration tests, number 4 and number 15. The difference is that number 4 uses real time and number 15 uses virtual time. So the tests run, and at the end we will see the difference in how much time each takes. Number 4, with real time, takes 20 seconds, because it has to wait out all the timeouts, all the things, in real time. Number 15 uses simulated time and runs significantly faster. So it's good in CI, and it also doesn't depend on how fast the virtual machine running the test is. We had some failures before because the virtual machine in CI was sometimes a bit too slow, so real time doesn't really work reliably there, whereas with simulated time it just doesn't matter. That's all I wanted to show. Thank you.

Well, thank you everyone for attending the Mother of All Demo Days, and thank you to all of our presenters. If you're interested in presenting next month, the next demo day will be September 1st. If you have any questions for any of the presenters, feel free to reach out to them. Have a great day, guys.