So, we are dealing with coordination and consistency problems in distributed systems, notably distributed systems in which participants can be malicious. We normally call these Byzantine fault tolerant (BFT) systems, and this is the essence of decentralized systems. I would define decentralized systems, well, not I, but decentralized systems are distributed systems in which you don't trust any single party, any single party can be malicious, and yet you build a system that, with this decentralized trust, still works and satisfies its abstraction. We are broadly looking into all of these problems, but we focus on classical ones such as consensus, scalability, and others that I will come to. So why is this important? Consensus is the bottleneck for scaling decentralized systems. Take Bitcoin, with all its nice properties: it can be the backbone of a money system, the future of money, and I deeply believe it will be, but it alone cannot carry the whole of Web3 or the whole internet's traffic, because it handles about seven transactions per second. The situation gets only slightly better with Ethereum. And if you think about it, whatever consensus protocol you put in, however performant, it is going to be bottlenecked if you follow the architecture in which every validator of a single network validates and processes every transaction. That is intuitively limited by the capabilities of a single machine, so the question is how we scale it. But what are we scaling for? What are the requirements we are catering to? This is what happened when I joined Protocol Labs (PL): the idea was to bring the whole internet to this ecosystem. This is the vision PL has, and I accepted it.
What this means in practice is that we can roughly define the Web3 requirements as follows, and I will spend a bit of time explaining the requirements for which we are designing systems here. The first is Web2-scale throughput: we are talking about billions or even trillions of transactions per second. This already suggests that a single blockchain, however efficient its consensus protocol, is not going to fly. You need horizontal scalability, call it sharding, call it whatever you want, we will dig deeper into these things today, but you need more than one network; you need to partition the state space, and that forces itself into the solution if this is a requirement. Then we have a requirement we call secure global finality. For me, this is one of the key requirements. There is the famous trade-off in these systems between scalability, decentralization, and security, and you cannot simply scale the system and say, "I will forego decentralization and security." That is not how it flies. You need to understand the adversary against which you are designing, and a safe assumption is that your system is probably good if it is secure against nation-state attackers. Imagine powerful nations, we don't need to name many but you can imagine, deciding to attack your system: that is the adversary you consider in the worst-case scenario for the system you are building. Now, as we say, when you design such a system, you prepare for the worst but hope for the best. Most of the time the situation is not so grim, and when conditions in the system are good, you want the system to perform well.
For example, if you are building a decentralized cloud, you want low latency, not only high throughput, for parts of your applications. What kind of latency? That depends on a few things. The hard constraint is the speed of light, so it depends on the geographical distribution of the nodes, but it also depends on the application requirements, and you need to marry the two. We discussed horizontal scalability; that is demand-based. Maybe your system doesn't need to support a billion transactions per second out of the box, but as demand grows, the system should be able to grow with it. Then censorship resistance: this is related to secure global finality and decentralization, but I am highlighting it as a very important requirement in its own right. We need to be censorship-resistant and partition-tolerant. There are impossibility results here, the CAP theorem and the FLP impossibility result for consensus, but you still want, for example, that the part of the system running a metaverse game does not depend on a partition elsewhere in the system: it should be able to make progress if it can, and if it needs to synchronize with other parts of the system, it will do so when the partition heals. So that is the set of extremely high-level requirements. I am going to present, at an equally high level, what we are doing to solve this problem, but it is clearly a mix of conflicting requirements. So we approach the solution iteratively: at some point we optimize for scalability, then we roll back and optimize against a nation-state attacker, and slowly we build the components and combine them into a system that hopefully will eventually satisfy all these properties. Okay, briefly about ConsensusLab. Today you will hear talks from, let's say, the inner members of ConsensusLab.
These are the people who work 100% in the group, distributed throughout the world, but we are broader than that. I am really happy to see many members of the Lisbon and Portuguese academic community here; this is one of the places where BFT research is among the best in the world, and we are fostering this collaboration. One of the points of this summit is for you to see what we are doing, so that we can foster collaborations through grants or other modalities that we can discuss. We collaborate with academic partners, but also industrial partners; I will come back to the events we organize as an example. So we are not working only inside our group, and when we do work inside the group, we work in the open. You can literally read the notes from our last Monday's meeting online, in the clear. All the software we develop is open source or accessible; you don't need to ask us for it, it is all public on GitHub. Our Slack, which we use to communicate among ourselves about technical problems, is public: you can join it today, right now, and follow what we are doing. That is our style of work. As for impact, yes, it is on Filecoin, IPFS, and the Protocol Labs ecosystem, but not only that: the goal is impact on the whole Web3 ecosystem and, in the end, the whole decentralized web. Okay, so ConsensusLab was founded, and now we need to start attacking the problem I hopefully described according to these requirements. We split our activities and projects, roughly (these things interact with each other), into three pillars, if you want, and the first is related to horizontal scalability.
Roughly speaking, one blockchain is not enough: we need many of them, and we need to synchronize them in a certain way to provide scalability, mostly to satisfy the throughput and low-latency requirements. In the second pillar, we zoom into a particular subnet, as we call it, a particular partition of the state space, and we try to optimize the consensus protocol that runs inside that partition. There we do research, for example, on optimizing the throughput of BFT protocols, or, if the subnet runs in a data center, on an FPGA implementation that optimizes latency, and things like that. Those are two of the components; there are others. This second pillar focuses on consensus and total order, but we are also interested in weaker semantics: CRDTs, causal consistency, and the like. We need to fit these topics into the pillars, but we remain interested in them. And of course, parallel execution: current blockchains mostly execute transactions sequentially, based on active state machine replication, where you order the inputs to the state machine and then execute them one by one. That is the classical approach. There is a lot of research in the literature that tries to parallelize execution; then you run into problems with non-determinism and with conflicting transactions, and that is a separate pillar we pay attention to. There we restrict ourselves mostly to the Filecoin VM (FVM), the Wasm-based runtime that we are now deploying on the Filecoin network, and similar ones, but the principles, I would say, apply to other VMs too, not only the FVM.
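The conflict problem just mentioned can be sketched in Go (the language our framework uses). This is a toy illustration, not the FVM's actual scheduler: assuming each transaction declares the state keys it writes, transactions with disjoint write sets can run in parallel, while a conflicting transaction is forced into a batch after the last one it conflicts with, so the agreed total order is preserved deterministically.

```go
package main

import "fmt"

// Tx is a hypothetical transaction with a declared write set.
type Tx struct {
	ID     string
	Writes map[string]bool
}

// conflicts reports whether two transactions write a common key.
func conflicts(a, b Tx) bool {
	for k := range a.Writes {
		if b.Writes[k] {
			return true
		}
	}
	return false
}

// batches groups an ordered transaction list into batches whose
// members are pairwise non-conflicting. Each batch can execute in
// parallel; batches themselves run in order, so execution stays
// equivalent to the sequential order consensus agreed on.
func batches(txs []Tx) [][]Tx {
	var out [][]Tx
	for _, tx := range txs {
		// Find the last batch containing a conflicting transaction.
		last := -1
		for i, batch := range out {
			for _, other := range batch {
				if conflicts(tx, other) {
					last = i
				}
			}
		}
		// Place tx strictly after all of its conflicts.
		if last+1 == len(out) {
			out = append(out, []Tx{tx})
		} else {
			out[last+1] = append(out[last+1], tx)
		}
	}
	return out
}

func main() {
	txs := []Tx{
		{"t1", map[string]bool{"alice": true}},
		{"t2", map[string]bool{"alice": true}}, // conflicts with t1
		{"t3", map[string]bool{"bob": true}},   // independent
	}
	fmt.Println(len(batches(txs))) // t1 and t3 share a batch; t2 waits
}
```

Real systems must also handle read sets and transactions whose footprint is only known after speculative execution; this sketch shows only the scheduling core.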
Those are the three pillars, roughly how ConsensusLab classifies its projects. One of our flagship projects is Interplanetary Consensus (IPC). I will sketch it on a single slide, but after my talk there are three talks that will dig deep into what I am only sketching and hand-waving here. The idea is the following. Take the example of the Filecoin mainnet (the technology we are building is really meant to be portable to other networks as well). The Filecoin mainnet has a certain throughput: a few blocks every 30 seconds, which gives you only so much. You want to scale toward the throughput target I was discussing, so you need to do something. Our solution is essentially spawning subnets: validators or users on the Filecoin mainnet perform a few transactions, invoking a smart contract in which they agree to spawn a subnet. They spawn an L2, call it an L2 subnet, which periodically checkpoints to the mainnet. So it is not running in isolation; spawning the subnet is not a one-off event. Periodically, the child uses the parent subnet to checkpoint critical information there. This helps because, usually, as you go lower in the subnet hierarchy, you get faster performance but less security, and what this checkpointing approach brings you is that you anchor critical information in the parent subnet, thereby leveraging the parent's security. And you can do this recursively, for example, to optimize the latency of a particular application. The Saturn image here refers to the Saturn content distribution network in the Filecoin ecosystem.
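To make the checkpointing idea concrete, here is a hedged Go sketch. The `Checkpoint` shape and `nextCheckpoint` helper are hypothetical simplifications, not the real IPC data structures: every `period` epochs, a child subnet hashes its state, chains the result to the previous checkpoint, and submits it to the parent, so anyone holding the parent's chain can verify the child's history.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Checkpoint anchors a child subnet's state in its parent.
// This is an illustrative shape; real IPC checkpoints carry
// more fields (cross-net messages, signatures, and so on).
type Checkpoint struct {
	SubnetID  string // hierarchical name, e.g. "/root/subnet-a"
	Epoch     uint64 // child epoch the checkpoint covers
	StateRoot string // hash of the child state at this epoch
	PrevCheck string // hash of the previous checkpoint, forming a chain
}

func hash(s string) string {
	h := sha256.Sum256([]byte(s))
	return hex.EncodeToString(h[:])
}

// nextCheckpoint builds the checkpoint a child would submit to a
// checkpointing actor on the parent every `period` epochs.
func nextCheckpoint(prev Checkpoint, state string, period uint64) Checkpoint {
	return Checkpoint{
		SubnetID:  prev.SubnetID,
		Epoch:     prev.Epoch + period,
		StateRoot: hash(state),
		PrevCheck: hash(fmt.Sprintf("%v", prev)),
	}
}

func main() {
	genesis := Checkpoint{SubnetID: "/root/subnet-a", Epoch: 0}
	c1 := nextCheckpoint(genesis, "child-state-at-epoch-100", 100)
	fmt.Println(c1.SubnetID, c1.Epoch) // prints "/root/subnet-a 100"
}
```

Because each checkpoint commits to its predecessor, rewriting the child's history requires rewriting the anchors in the parent as well, which is where the security leverage comes from, and the same construction applies recursively down the hierarchy.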
If the content distribution network needs to agree on certain deals, we might use an L2 network for that, and if we are operating at the data-center level, maybe we spawn another subnet below it to cater for low latency. That is one example. Another example, running in parallel, could be a metaverse L2 supporting gaming, where latency needs to be below 100 milliseconds. Sprinters are disqualified in the Olympic Games if they start less than 100 milliseconds after the gun, because that is the human reaction time, so in gaming this is one of the important latency bounds to meet. So there will be use-case-dependent subnets that you can spawn in parallel, and of course they can talk to each other; they do not live in isolation, and you can have atomic execution of certain operations across subnets. This is the main technical complexity, which Guy and Alfonso will go into in detail later on. So that is the horizontal scaling. Then, when you zoom in: which consensus protocol runs on each of these subnets? For that we are developing what we call the Mir consensus framework, which currently facilitates BFT consensus implementations in Go. Some of you know Mir from our previous work as the name of a high-throughput BFT protocol that eliminates duplicates; this is a bit of a reuse of the name. Mir here is really a framework that lets you write event-based implementations of BFT protocols. In the end, and Matej will talk about this in detail, we want the programmer experience of using the Mir framework to feel like writing the pseudocode of a BFT protocol on paper. That is our ideal goal, and we are hopefully getting there. We have one implementation in this framework.
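The event-based style just described can be illustrated with a toy Go sketch. Everything here (`Event`, `Module`, `run`) is hypothetical and far simpler than the actual Mir framework, but it shows the core idea: protocol logic is a pure handler that consumes one event and emits follow-up events, with no I/O or concurrency inside, which is what lets the code read close to paper pseudocode.

```go
package main

import "fmt"

// Event is a hypothetical protocol event; a real framework would
// use typed events routed between networking, crypto, and
// protocol modules.
type Event struct {
	Kind string
	Data string
}

// Module is the shape of a protocol handler: one event in,
// zero or more follow-up events out, purely functionally.
type Module func(Event) []Event

// run drains an event queue through a handler until quiescence,
// returning the kinds of all processed events in order.
func run(m Module, init []Event) []string {
	var log []string
	queue := init
	for len(queue) > 0 {
		ev := queue[0]
		queue = queue[1:]
		log = append(log, ev.Kind)
		queue = append(queue, m(ev)...)
	}
	return log
}

func main() {
	// Toy "broadcast" module: every Request triggers a Deliver.
	broadcast := func(ev Event) []Event {
		if ev.Kind == "Request" {
			return []Event{{Kind: "Deliver", Data: ev.Data}}
		}
		return nil
	}
	fmt.Println(run(broadcast, []Event{{Kind: "Request", Data: "tx1"}}))
}
```

Because the handler is deterministic and side-effect free, the same protocol code can be unit-tested by feeding it event sequences, which is one of the practical payoffs of this style.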
We have a first implementation that is going to power the initial subnets, and I will talk briefly about it. To implement Interplanetary Consensus, we essentially use two actors, smart contracts in the Filecoin lingo, which run on the Filecoin mainnet and which you can also deploy on other subnets. I will stop here for IPC; you will hear at least one hour of detailed explanations of this figure. Okay, so that was scaling: increasing throughput, lowering latency. But what about the nation-state attacker? What happens to that? Whatever I described on the previous slide does not really help us there. So in ConsensusLab we took a deeper look at Expected Consensus, which runs on the Filecoin mainnet, to understand it. It is approximately a longest-chain-style consensus protocol, except that it builds not a chain but a sort of DAG; that should be the mental model. We looked at certain vulnerabilities (not necessarily exploited attacks), and Sarah will talk about this; we are now proposing a Filecoin Improvement Proposal to patch a particular aspect of it and secure it better. More generally: Filecoin is not a proof-of-stake protocol; miner power is based on the storage capacity miners bring to the network. That is different from proof of stake, but once the power table is built, the protocol looks like proof of stake. The weights in the consensus protocol are built not on staked tokens but on capacity, yet once the table is built, for a period of time, until the protocol reconfigures, you run on these weights exactly as in a proof-of-stake protocol. And what happens in proof-of-stake-style protocols is that you get long-range attacks.
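The point that a fixed power table behaves like stake can be made concrete with a small Go sketch. The `Entry` type and `leader` function are illustrative, not Filecoin's actual election (which uses VRF tickets and can elect several winners per epoch): given a shared random seed, a miner wins with probability proportional to its power, which is exactly the weighted lottery a proof-of-stake protocol runs over tokens.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Entry is a miner's row in a simplified power table: in Filecoin
// the weight is committed storage capacity rather than staked tokens.
type Entry struct {
	ID    string
	Power uint64
}

// leader deterministically picks a miner with probability
// proportional to its power, from a shared seed. The table is
// assumed non-empty with positive total power.
func leader(table []Entry, seed []byte) string {
	var total uint64
	for _, e := range table {
		total += e.Power
	}
	// Derive a pseudo-random ticket in [0, total) from the seed.
	h := sha256.Sum256(seed)
	ticket := binary.BigEndian.Uint64(h[:8]) % total
	// Walk the table until the ticket falls into a miner's slice.
	for _, e := range table {
		if ticket < e.Power {
			return e.ID
		}
		ticket -= e.Power
	}
	return "" // unreachable for a well-formed table
}

func main() {
	table := []Entry{{"m1", 70}, {"m2", 20}, {"m3", 10}}
	fmt.Println(leader(table, []byte("epoch-42")))
}
```

The sketch also shows why long-range attacks carry over: the election depends only on the table and the seed, so anyone who still holds old keys from a past table can re-run past elections on a fork.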
The membership at a certain point in time can, for example, transfer its stake, or capacity, to other nodes (maybe to themselves), then roll back and reuse the old cryptographic keys to fork the network. This is possible in each and every proof-of-stake implementation. There are mitigations with extra assumptions, like "delete your old keys," that rational players have no incentive to follow. So we have also been researching a more permanent solution, which in our world consists of figuring out how to checkpoint Filecoin mainnet state onto Bitcoin. Bitcoin has its own long-range flavor of attack, but it costs a tremendous amount of energy: you cannot costlessly go back into the past; you need to spend more energy than has already been spent on the fork you are trying to take over. Even for a nation-state attacker, that raises the cost of the attack a lot. That is what we are doing. Now, you cannot store much data on Bitcoin, so we are (ab)using Bitcoin's OP_RETURN opcode to store a CID, a content identifier, while the checkpoint data itself stays in Filecoin and IPFS. Sarah will touch on this, but a nice feature is that it lets you, for example, bootstrap the whole decentralized web by having your Raspberry Pi synchronize with Bitcoin first, fetch the checkpoints, get the CIDs, go to the Filecoin network, and fetch the entire state. The setup on which you can run a Bitcoin full node costs about 100 bucks, and from there you can boot the whole web in a secure way, from a critical piece of information that plays a role almost like today's DNS. So we had this work, and that is the summary of this year: we had a lot of publications, and at the beginning of the year we shipped the initial designs of Interplanetary Consensus.
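The OP_RETURN trick is small enough to sketch in Go. `opReturnScript` below builds the raw script bytes for `OP_RETURN <payload>`; the 36-byte payload standing in for a CIDv1 is a placeholder, and constructing and broadcasting the actual Bitcoin transaction is omitted.

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// opReturnScript builds the Bitcoin output script OP_RETURN <payload>.
// 0x6a is the OP_RETURN opcode; payloads of 1..75 bytes use a direct
// push whose opcode equals the payload length. Bitcoin's default
// standardness policy caps the data carrier at 80 bytes, so a CID
// (about 36 bytes for CIDv1 with sha2-256) fits comfortably.
func opReturnScript(payload []byte) ([]byte, error) {
	if len(payload) == 0 || len(payload) > 75 {
		return nil, fmt.Errorf("payload size %d not supported by a direct push", len(payload))
	}
	script := []byte{0x6a, byte(len(payload))}
	return append(script, payload...), nil
}

func main() {
	// Hypothetical 36-byte CID of a Filecoin checkpoint stored on
	// Filecoin/IPFS; only these bytes land on the Bitcoin chain.
	cid := make([]byte, 36)
	script, err := opReturnScript(cid)
	if err != nil {
		panic(err)
	}
	// 2 prefix bytes + 36 payload bytes; prefix is 0x6a 0x24.
	fmt.Println(len(script), hex.EncodeToString(script[:2])) // prints "38 6a24"
}
```

An output carrying this script is provably unspendable, so the checkpoint costs only the transaction fee, and a verifier replays the chain, extracts the CIDs, and fetches the full checkpoints from Filecoin/IPFS by content address.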
What is happening in Q4 is that we are launching the SpaceNet testnet. SpaceNet is based on Lotus, the Filecoin node implementation: we are taking Lotus and enhancing it with the capability to run consensus protocols from the Mir framework, and with IPC, Interplanetary Consensus. We are launching the first network, the root net in IPC terms, in Q4, and in Q1 next year we add the IPC capabilities. At this moment, how do I stand with time? Okay, I am over, but I will finish soon. At this point I would like your attention, because I want to introduce the team members, because I think this is great work that I am enjoying a lot, and I would like to tell you a short story of how it started. After I joined, 15 months ago, very quickly Alfonso, Sarah, and Jorge joined me internally; back in 2021 this was a small team. We had Vivian helping us out as an external advisor, and then in January he joined us, helping us get the Tendermint PoC to power one of the subnets before we switched to Mir. We started switching to Mir when Matej joined us and began developing it, and then Sergey joined, boosting the team's development capacity on the Mir framework. Then we had three excellent summer interns, one of whom you will hear today: Jan, who works on FVM parallelization. We had Andrei, who worked on a DSL for protocol implementation in Mir, and Shwet Shao, who was decomposing Expected Consensus: finding attacks, security analysis, and so on. Then Guy came and basically tore apart everything we had been working on, proposing two things, and you will hear from him today, including the patch, together with Sergey, for fixing the Filecoin mainnet consensus.
Akos, who is on loan to the FVM (Filecoin VM) team, and Wills increased our Rust capacity: we were coding mostly in Go, and we are now adding Rust to our arsenal, and Akos and Wills helped us build that capacity. Finally, we have Alejandro, who is here with us today and joined last week, and Rick, who is also in the audience and joins us in November. This is tremendous growth, and I am really thankful to each and every member of the team. Thank you guys so much; I love working with you, and this has been great work. And yes, we grew a lot. We also have external grants and collaborations, currently with these universities, and we are always looking to extend them. Briefly, some events we organize: Consensus Factory is the event where we partner with other ecosystems, for example Cosmos, Algorand, and the Ethereum Foundation, discussing how each approaches building a scalable Web3 and trying to learn from each other; this is something we started in ConsensusLab. Consensus Day started last year as a virtual event, and this year it is an ACM CCS workshop with publications from researchers at top institutions worldwide; next year the goal is to launch it as a standalone conference. The development roadmap: I think you saw this, so I can safely skip ahead. Actually, you did not see this part: the focus next year is essentially on taking what we are now deploying on the SpaceNet testnet and deploying it on the mainnet. Interplanetary Consensus with the Mir framework is meant to go to the Filecoin mainnet in Q3 next year; by then we will have tested the capabilities we built on the SpaceNet testnet.
That was, briefly, the development roadmap for next year. Our research roadmap, which might interest some of you in the audience more, looks into many different problems. Sarah will pitch decentralized OnlyFans as one of our non-intuitive focuses, because it is an end-to-end use case that traverses the whole of Web3. I gave you requirements and sketched the design of a solution, but does it work? We need a Web2 use case that goes end to end and stresses everything we are building, and everything other teams at Protocol Labs are building, to understand whether it works or not. This will be the implementation of a social network like OnlyFans, but in a decentralized way, and Sarah will take over to discuss it further. Then we have other topics that I will not dive into because of time constraints, but this is exciting research that you can publish at top conferences. That is a goal, though not only publications: we actually want to help grow the Filecoin and Web3 ecosystems even further with new capabilities. Okay, final slide: the ConsensusLab summit agenda. We dive into Interplanetary Consensus first. Guy starts by explaining in detail how it works, and then how we can improve it even further; you will learn what it is and, immediately after, what is wrong with it and how to improve it. Alfonso will dive into the implementation details of our current system; it is not a prototype anymore, it is really a bit more than an MVP. Juan will come to discuss what other teams in the Protocol Labs network are looking into, beyond what I have described for ConsensusLab, and what applications we can run on all of this. Matej will dive deeply into the consensus development framework: how we develop protocols there, event-based programming, and so on.
Sarah will discuss the security work we did on Expected Consensus. Jan and Vivian will talk about parallel execution, the first results of our thread on parallel execution in the FVM, and we conclude with Sarah discussing decentralized OnlyFans. Thank you very much for your attention. It was a pleasure. Okay. Thank you.