Hello, this is Marco, and I will give a brief introduction and overview of our ConsensusLab team week. Later in this video, ConsensusLab team members will give you detailed project updates. We met from the 5th to the 9th of September in Istanbul, in a nice hotel with a nice view; maybe the conditions were not ideal, but in any case we had a good time.

First of all, ConsensusLab has grown considerably: 17 people attended the team week. Thirteen team members were on site and two joined remotely, and we were also joined by Ali from the ecosystem team and by a colleague from the FVM team, so we had 15 people on site plus the two ConsensusLab members joining remotely.

I would say there was a lot of uncertainty among team members prior to this team week. There were discussions around the purpose of hierarchical consensus, which we later renamed to InterPlanetary Consensus, or IPC: what the team focus will be, what the future projects are, and so on. Some team members felt a bit lost. We are at a critical stage of what ConsensusLab is doing: we are trying to productize a year of work, with certain dependencies, and that is a tricky point. The one-line summary is that we largely removed these concerns. We brought all team members onto the same page, discussed the issues in detail, and reached conclusions that allow us to work for at least a quarter without ambiguity and uncertainty.

One pending dependency at that time was BuilderNet; since I am recording this a bit late, I will also give you an update on what happened there. BuilderNet is the testnet, essentially an incentivized testnet, on which we plan to test what we are developing in ConsensusLab: the Mir consensus protocol and InterPlanetary Consensus (hierarchical consensus) subnets and the communication among subnets. That pending dependency was supposed to be resolved the week after the team week, and it was: BuilderNet is happening, and I will cover that on the next slide.

As for what we did in Istanbul, you will hear project updates, not necessarily in the order I list them here: Y3, which covers the efficient consensus protocol for subnets; Y4, which improves the Filecoin mainnet consensus; B3, which is the post-PoC development of hierarchical consensus, now InterPlanetary Consensus, towards production; and G1, the project that explores concurrent execution on the FVM. For many people this was the first face-to-face meeting with so many team members, and we had lots of breaks between sessions which were very productive; you could see team members having water-cooler talks, so things kept being discussed even during the breaks. I am really happy with how this went, both from the team-building side and from the work and execution side. There were a lot of new ideas and brainstorming, especially about what we are going to do in 2023, which I will briefly cover. If you go to our roadmap, you will also find links to detailed notes, so you can see what this was about.
As for key outcomes: we discussed the imminent impact of the ConsensusLab team on Filecoin's Expected Consensus, because we have pending improvements to Expected Consensus as well as a security analysis, the first detailed security analysis of Expected Consensus, I would say. These are products of the Y4 project, and we spent a lot of time on that; you will hear updates on it later.

We also discussed the detailed plan for what is now called IPC. IPC is InterPlanetary Consensus, previously hierarchical consensus, and we confirmed the short-term focus on BuilderNet, the testnet I described, which we want to launch with the FVM team and the Lotus team roughly by November this year, and in any case this year. Certain decisions were supposed to be taken after the team week, and they were; I will give you a short update on that later.

Then we spent some time discussing what we are going to do in 2023. What came out of it is a top-down approach where we start from use cases, broadly social-network or content-dissemination use cases, and after discussing email, Twitter, and other examples, we converged on perhaps looking at a dOSN, which stands for decentralized online social network. That would let us go through our stack, through what Filecoin already has and what ConsensusLab is developing, and try to understand whether we can build such a content-dissemination platform. You will hear more details about this in the overview of the next team week, I guess, because this concerns only 2023; it does not change our plans for 2022 and, I would say, at least Q1 2023.

BuilderNet was the main focus, and here I will give you a short summary. ConsensusLab remains committed to working together with the FVM team on BuilderNet, and Akos from the ConsensusLab team is moving temporarily to the FVM team to help them deliver critical pre-BuilderNet milestones, such as the EVM on the FVM, and later another project related to gas metering on the FVM. What we are postponing until after BuilderNet is the focus on a Rust-based IPC core: we want to move away from the current, I would say, Filecoin-and-Lotus-centric approach that we have had. For BuilderNet we are still figuring out with the Lotus team what we are going to do: shall we merge the IPC-related changes into Lotus, or shall we go with a stripped-down Eudico client for BuilderNet? That is the topic of the design sessions taking place shortly after the team week. For BuilderNet this will still be the code that we developed for Eudico and Lotus, and later on we are going to shift our focus as a group to Rust. Those are the decisions we took during the team week. The plan was to add Mir to BuilderNet as soon as it is ready, but since I am recording this a bit later, I can tell you that, because BuilderNet is at least initially going to be based on proof of authority, which some viewers might know as a classical BFT consensus protocol, we are going to run BuilderNet on Mir right away.
So this BuilderNet allows ConsensusLab to deploy its two key sub-projects that we have been working on for the last year or so: the IPC functionality and the Mir efficient consensus protocol. Thank you very much, and in the rest of this video you will hear the detailed project updates.

Hi everyone, I want to talk about the G1 project, where we look at scalable execution of transactions. As an update, Vivian and I have been working on benchmarking different concurrency control protocols, and we decided to do that on top of the reference Filecoin Virtual Machine (FVM) for two reasons. First, we hope it will give us an idea of how easy or how hard it is to use the existing architecture for concurrent execution. Second, we think it will give us the most accurate results in the benchmarks we will run.

On the road to this goal, we started by implementing naive parallel execution of transactions. Currently a single machine executes one transaction after the other; we have now made it so that there are multiple machines and each can execute transactions in parallel. Of course, in general this does not lead to a deterministic state; that is why concurrency control has to be in place. The next step towards this goal is to capture the memory writes and reads, which will allow us to understand the dependencies between transactions. We have done that by adding a wrapper around the kernel; this wrapper can intercept the syscalls from within the Wasm container, which allows us to track the memory accesses.

What is next for us? We plan on implementing two different approaches and comparing them against each other. The first uses pre-execution: it executes the transactions a first time just to understand the dependencies between them, and then leverages this information to create a so-called fork-join schedule, a data structure that can be added to a block and used by the other nodes in the network to, hopefully considerably, speed up the execution of transactions. The second approach we are thinking of looking at is called Block-STM. Here there is no pre-execution step; instead, execution is done optimistically and leverages the order of transactions inside a block, which we already have. We also discussed the workloads we will use for these benchmarks. We want them to be as realistic as possible so that the results are meaningful. It was suggested that we could look at the Ethereum Virtual Machine, try to understand which transaction types exist and what their proportions are in current use, and use that to derive realistic FVM workloads. Concretely, we will then reuse the message vectors that are currently implemented for FVM testing, which I think will be a convenient way of producing these benchmarks. That's it from me. Thank you.
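To make the G1 approach above a bit more concrete, here is a minimal sketch in Rust, with purely illustrative types (none of this is the actual reference-FVM API): per-transaction read/write sets, as would be captured by a kernel wrapper intercepting syscalls from the Wasm container, and a naive fork-join schedule derived from them, where non-conflicting transactions share a parallel round and conflicting ones are pushed to later rounds.

```rust
use std::collections::HashSet;

/// Illustrative stand-in for a piece of state a message touches
/// (e.g. an entry in the state tree); not an actual FVM type.
type StateKey = String;
type TxId = usize;

/// Read/write sets captured for one transaction, e.g. by a kernel wrapper
/// intercepting syscalls coming out of the Wasm container.
#[derive(Default, Debug)]
struct AccessSet {
    reads: HashSet<StateKey>,
    writes: HashSet<StateKey>,
}

/// Two transactions conflict if one writes a key the other reads or writes.
fn conflicts(a: &AccessSet, b: &AccessSet) -> bool {
    a.writes.iter().any(|k| b.reads.contains(k) || b.writes.contains(k))
        || b.writes.iter().any(|k| a.reads.contains(k))
}

/// Build a naive fork-join schedule: transactions in the same round have no
/// conflicts and can be executed in parallel ("fork"); rounds are executed
/// one after another in block order ("join").
fn fork_join_schedule(accesses: &[AccessSet]) -> Vec<Vec<TxId>> {
    let mut rounds: Vec<Vec<TxId>> = Vec::new();
    for (tx, acc) in accesses.iter().enumerate() {
        // Place tx in the first round after all of its conflicting predecessors.
        let mut earliest = 0;
        for (round_idx, round) in rounds.iter().enumerate() {
            if round.iter().any(|&prev| conflicts(&accesses[prev], acc)) {
                earliest = round_idx + 1;
            }
        }
        if earliest == rounds.len() {
            rounds.push(Vec::new());
        }
        rounds[earliest].push(tx);
    }
    rounds
}

fn main() {
    // Hypothetical pre-execution output for three messages in a block:
    // tx0 and tx1 touch disjoint state, while tx2 reads what tx0 wrote.
    let mut t0 = AccessSet::default();
    t0.writes.insert("actor/A/balance".to_string());
    let mut t1 = AccessSet::default();
    t1.writes.insert("actor/B/balance".to_string());
    let mut t2 = AccessSet::default();
    t2.reads.insert("actor/A/balance".to_string());

    // tx0 and tx1 can run in parallel; tx2 must wait for tx0.
    println!("{:?}", fork_join_schedule(&[t0, t1, t2])); // [[0, 1], [2]]
}
```

A Block-STM-style approach, by contrast, would skip the pre-execution step and detect such conflicts optimistically at execution time, re-executing transactions whose reads turn out to have been invalidated.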
Hello everyone. Today I will give you the updates for project B3 and the outcomes of the discussions at our ConsensusLab week in Istanbul. For those of you who are not aware, in project B3 we are trying to move hierarchical consensus into production.

The first thing we agreed on during this lab week is a complete rename: as we move HC into production, we are changing the protocol name from hierarchical consensus, or HC, to InterPlanetary Consensus, or IPC. It may slip a bit and we may still sometimes say HC, but from now on we will refer to hierarchical consensus as InterPlanetary Consensus, or IPC.

As for what we have been doing this quarter to move HC, or IPC, into production: as you know, there was an MVP of hierarchical consensus as part of Eudico, and in that MVP we were using the legacy VM. As Filecoin is moving to the FVM, the first thing we did this quarter was to start moving all of the actors and processes that we have in HC from the legacy VM to target the FVM, so we rewrote all of the HC (IPC) actors for the FVM. We also wrote a spec so that we could kick off discussions with the community and give a bit more low-level detail of how the protocol works, and we wrote a FIP draft for discussion in the FIPs discussion repo, so that others could start interacting; before, it was just ConsensusLab, the people involved in the design and implementation of the protocol, and as we move into production we want others to give their feedback and their insights on what we are doing.

In parallel, as the team has grown, we are also improving the protocol design. Guy is modeling the protocol theoretically and trying to understand how we can improve its security and performance from a theoretical point of view. We also kicked off a Rust implementation of IPC. In our current MVP, the high-performance consensus for subnets that we are integrating is Mir, and you will hear a few more updates on that from project Y3, which is the one working on Mir; for the Rust implementation we wanted to use one of the high-performance consensus protocols that are out there, which is Narwhal. I will also give you a brief update on that and on all of the work Akos has been doing on the Rust implementation of IPC and the Narwhal integration. Also, Will joined the team recently and has been working on tracing and monitoring for IPC, because as we move into production we want to deploy the protocol on a testnet and we want tools to see what is happening and how people use the protocol, and to have benchmarks on the delay of cross-net messages and so on. So we have been doing a lot of work on tracing the MVP so that we can start getting some numbers on the protocol.

Then there have been a lot of discussions and design ideas. Initially we were really focused on the MVP; now that we have it, we have been writing specs, onboarding people onto the protocol, and trying to discuss the design decisions openly, to see whether we made mistakes or have blind spots, before we move to a production-ready implementation of IPC. One of the things we discussed, and which Guy presented in Istanbul, is a proposal to simplify the IPC protocol, because before we had a complex architecture with subnet-to-subnet cross-net messages and so on.
The first thing Guy realized while theoretically analyzing the protocol is that we could have a simpler protocol that is easier to analyze and still supports all of the features we had so far. Instead of having an overlay of subnets, the new architecture is an account hierarchy throughout the whole system: we restrict cross-net messages to parent-child interactions. So instead of these complex cross-net messages that are hard to reason about, and where the reward model is not clear, Guy's proposal is to have a hierarchy of accounts, where the only way to exchange value or to interact from a subnet with the rest of the hierarchy is through a combination of parent-child interactions. This really simplifies the reward model, the fee model, and a lot of other models in the system. The overall summary of this proposal is that we move from a motorcycle-plane-submarine, as Guy likes to refer to our previous design of IPC, to just a motorcycle: a simple model that still allows us to do all of the things we were doing, but with a clear core. In the end, after Guy presented his proposal, we realized that the number of things that would need to change in the implementation may not be that large. So as Guy keeps exploring and updating the spec towards this new architecture, we will try to implement the changes, so that we can have this simpler model in the first version of the protocol.
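As a rough illustration of the simplified model just described (illustrative types only, not the actual IPC actors or APIs), the sketch below represents subnets as paths in a hierarchy and expresses any cross-net transfer as a sequence of parent-child hops: bottom-up to the nearest common ancestor, then top-down to the destination.

```rust
/// Illustrative subnet identifier: a path of child names from the root,
/// e.g. ["eu", "de"] is a child "de" of subnet "eu", which is a child of the root.
#[derive(Clone, Debug, PartialEq)]
struct SubnetId(Vec<String>);

impl SubnetId {
    fn parent(&self) -> Option<SubnetId> {
        if self.0.is_empty() {
            return None; // the root has no parent
        }
        Some(SubnetId(self.0[..self.0.len() - 1].to_vec()))
    }
}

/// In the simplified model every cross-net transfer is a parent-child hop:
/// funds either move down to a direct child or up to the direct parent.
#[derive(Debug)]
enum CrossNetHop {
    TopDown { from: SubnetId, to_child: SubnetId },
    BottomUp { from: SubnetId, to_parent: SubnetId },
}

/// Route a transfer between two arbitrary subnets as a sequence of
/// parent-child hops: climb from the source up to the common ancestor,
/// then descend to the destination.
fn route(from: &SubnetId, to: &SubnetId) -> Vec<CrossNetHop> {
    // Longest common prefix of the two paths = nearest common ancestor.
    let common = from.0.iter().zip(to.0.iter()).take_while(|(a, b)| a == b).count();
    let mut hops = Vec::new();

    // Bottom-up hops from `from` to the common ancestor.
    let mut cur = from.clone();
    while cur.0.len() > common {
        let parent = cur.parent().expect("non-root has a parent");
        hops.push(CrossNetHop::BottomUp { from: cur.clone(), to_parent: parent.clone() });
        cur = parent;
    }
    // Top-down hops from the common ancestor to `to`.
    for i in common..to.0.len() {
        let child = SubnetId(to.0[..=i].to_vec());
        hops.push(CrossNetHop::TopDown { from: cur.clone(), to_child: child.clone() });
        cur = child;
    }
    hops
}

fn main() {
    let from = SubnetId(vec!["eu".into(), "de".into()]);
    let to = SubnetId(vec!["us".into()]);
    for hop in route(&from, &to) {
        println!("{:?}", hop);
    }
    // Prints: BottomUp eu/de -> eu, BottomUp eu -> root, TopDown root -> us
}
```

The point of the restriction is that every hop is a plain parent-child interaction, so fees and rewards can be accounted for locally at each step instead of across an arbitrary overlay.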
Another parallel stream that we had in IPC is the implementation of IPC in Rust, using Forest, the Rust Filecoin client, as a base and integrating the Mysten Labs implementation of Narwhal, a high-performance consensus protocol used in the Sui blockchain. One of the things we realized while doing this work, and we already knew it from integrating Mir into Lotus, is that there are certain barriers and limitations when integrating this kind of protocol, which has its own mempool operation and its own execution model, into the current architecture of Filecoin clients; as we were doing the same with Narwhal, we realized that we may need to change certain parts of the architecture to accommodate this kind of BFT-like, high-performance consensus protocol in the codebase.

Some of the problems we saw while integrating Narwhal into Forest: there is a mismatch between the Filecoin block and execution model and these BFT-like protocols, because they do not have a concept of time or epoch; there are rounds, but they are somewhat unpredictable, so it is not as straightforward as in a longest-chain protocol. Then there is the problem of gas limits: in longest-chain protocols we have a gas limit per block, and this is not exactly the same for BFT-like protocols such as Mir or Narwhal. We came up with a set of potential workarounds, but the problem is that they may affect performance: we would be writing a lot of glue code that could hurt performance.

After this, we realized that instead of using the current architecture for longest-chain protocols that we have in Forest and in Eudico (Lotus), we need to figure out a way of coming up with an IPC reference client that is ready for any kind of consensus protocol as we build IPC: a reference client, or core client, able to support any consensus algorithm, so that in the same codebase we have a way of implementing any consensus algorithm. As part of this work, we started exploring the best way to reach high performance and accommodate any kind of consensus algorithm in an IPC client. We evaluated what it would take to strip Eudico of all the Filecoin-specific things and come up with an architecture able to accommodate any consensus algorithm; we evaluated what it would take to use Forest, instead of Eudico, as the base for our IPC reference client; we even checked other projects in the ecosystem, like Substrate, to see whether we could have an IPLD-based blockchain that targets the FVM and build the reference client on Substrate; and we considered what it would take to build a new modular blockchain from the existing codebases, that is, to take Eudico or Forest, abstract the syncer behind an interface so that we can have different syncers according to the consensus algorithm, and take the current mempool implementation and abstract it so that we can have different implementations per consensus algorithm, roughly along the lines sketched below. So far, this is what we may be tackling in the midterm: extracting the modules from the existing Eudico or Forest codebases so that we come up with abstractions for a modular blockchain that allows us to implement any consensus for IPC.
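A minimal sketch of what such a consensus-agnostic core could look like, assuming hypothetical trait names (this is not Forest's or Eudico's actual architecture): the syncer, mempool, and consensus engine sit behind interfaces, and only the FVM execution step is shared by all variants.

```rust
/// Illustrative placeholder types; in a real client these would be the
/// chain's block, message, and state types.
struct Block;
struct Message;
struct StateRoot;

/// The only thing the client core assumes about consensus: given a batch of
/// pending messages, it eventually yields the next block to execute. A
/// longest-chain protocol, Mir, or a Narwhal-based protocol could all sit
/// behind this interface.
trait Consensus {
    fn propose(&mut self, pending: Vec<Message>);
    fn next_decided_block(&mut self) -> Option<Block>;
}

/// Mempool abstraction: how messages are gossiped, validated, and selected
/// can differ per consensus protocol (e.g. Narwhal ships its own
/// availability/mempool layer).
trait Mempool {
    fn add(&mut self, msg: Message);
    fn select(&mut self, max: usize) -> Vec<Message>;
}

/// Syncer abstraction: how a node catches up with the network. For a
/// longest-chain protocol this is fork-choice driven; for a BFT-like protocol
/// it can mean fetching decided blocks or certificates from peers.
trait Syncer {
    fn sync_to_head(&mut self) -> Option<Block>;
}

/// Execution is the part shared by all variants: apply a block on the FVM
/// and obtain a new state root.
trait Executor {
    fn apply(&mut self, block: &Block) -> StateRoot;
}

/// A generic node core wired together from the abstractions above.
struct Node<C: Consensus, M: Mempool, S: Syncer, E: Executor> {
    consensus: C,
    mempool: M,
    syncer: S,
    executor: E,
}

impl<C: Consensus, M: Mempool, S: Syncer, E: Executor> Node<C, M, S, E> {
    /// One iteration of the main loop: catch up, propose, execute.
    fn tick(&mut self) {
        if let Some(block) = self.syncer.sync_to_head() {
            self.executor.apply(&block);
        }
        self.consensus.propose(self.mempool.select(100));
        if let Some(block) = self.consensus.next_decided_block() {
            self.executor.apply(&block);
        }
    }
}
```

The design intent is that swapping a longest-chain protocol for Mir or a Narwhal-based protocol only changes which implementations are plugged into the node, not the client core itself.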
So what are the short-term next steps for IPC? First, as the FVM team is building their BuilderNet to test the EVM and native actors, the new capabilities of the FVM, we want to share this testnet, so we will start merging some of our code from Eudico into their experimental branch, so that we can also test on this testnet all of the code and the MVP of HC we have built so far. In this process we want to review our code and move it from MVP code to more production-ready, more resilient code. We also want to finalize the Mir integration into Eudico so that it can be used as part of this setup, either in the rootnet or in subnets, to give BuilderNet the ability not only to test the FVM and user-defined actors but also to deploy new subnets; for this we need to reach feature parity between the FVM actors and the legacy VM actors of IPC. This BuilderNet is going to be our main focus, but as we move towards it, if there is spare time, we want to explore the implementation of IPC as a product: to have this core IPC client that can be used to implement any rootnet or any subnet, with the architectural abstractions that I mentioned. And we want to start onboarding new use cases to IPC. The most straightforward one we have been discussing a lot is onboarding Saturn, with their layer 2s and layer 3s and their geographical architecture, onto IPC subnets, and we also want to test how we could implement scheduling and execution of lambda-style jobs on IPC. So this is briefly everything we discussed around IPC and the short-term future of the protocol. If there are any questions, please let us know; we are more than happy to keep discussing. Thank you very much.

Hello, this is Matej, and I will be talking about modeling state machine replication (SMR) and blockchain systems. First, let me say a few words about the context of this problem. Currently there is a big diversity in agreement protocols, and this is not likely to decrease; it will probably increase even more. There are many families of agreement protocols: BFT-style, longest-chain-style, synchronous, asynchronous, and so on, and they work in very different ways. What happens most of the time today is that when somebody designs a blockchain system, or a state machine replication system in general, they take some particular consensus protocol and center the design of all the other parts of the system around that protocol. This might work for very particular, concrete use cases, but I think there are some problems with this approach. It gets pretty complicated in general: there are many modules, many components of a full-fledged SMR or blockchain system, including execution of transactions, their availability, their reception, the responses to the clients; somebody needs to handle the state, the state needs to be garbage-collected, maybe there needs to be an execution engine, checkpoints need to be made, and so on, and there are many interactions between those components. Very often the implementation turns out in the end to be a big mess. What can be said in general is that we are often lacking modularity and universality in the approach to blockchain and SMR implementations; the implementations tend to be ad hoc, with few reusable components, and very often they end up as one big monolithic system that is hard to maintain, hard to upgrade, and hard to keep up with the progress of research on these systems and protocols. Another very important point, which in my opinion is sometimes a bit underestimated, is that different people have different mental models of how a state machine replication system works. Everybody thinks slightly differently about what basic components such a system consists of, what their roles are, how they interact with each other, and so on, and this sometimes makes discussions about such systems complicated. From my own experience I can say that it sometimes happens that most of the time in a discussion actually goes into getting everybody onto the same page about what we are even talking about and what we mean by the terminology we use.
This is not optimal, and what I propose is the following approach: I would like to devise a clear and useful set of abstractions around the components of a blockchain or SMR system. I think it could be, in a way, similar (not in all aspects, but in some ways) to the OSI model for networking, but for blockchain and SMR systems. In a nutshell, it would be a certain set of abstractions that everybody could refer to when discussing and implementing blockchain systems. That would make understanding such systems easier, which in turn would also make implementation easier. What we could do next is progressively try to define such a model for SMR and blockchain systems, and start by mapping proposed protocols and system designs onto it, especially the Y3 scalable consensus project and the InterPlanetary Consensus project; that could be a way to start. We could keep updating the model as necessary whenever we encounter something that does not quite fit, and when the whole thing stabilizes we could even write a paper about it, maybe next year, and try to publish it somewhere.

I also have the latest update on the scalable consensus project, also called Y3. What do we have so far? We have the Mir-Eudico integration of the consensus module, which means that a Mir-based subnet can now be instantiated and run in Eudico just by booting up Eudico with the correct parameters; this is thanks to Denis, who implemented the integration. We also have reconfiguration, which means that nodes can now dynamically join the system and be added while the system is running. We have plenty of technical advancements; to mention the most important and interesting ones: we have a simple, independent availability layer that is currently being extended to a Narwhal implementation (Andrey has been taking care of this), which means that we have a separate module concerned with the availability of transactions, independent from the ordering, which is done by a separate module that only cares about the order in which transactions are delivered. We have an easy, pseudocode-like way of writing protocols as Mir modules, also thanks to Andrey. We have libp2p-based communication; Denis was the person implementing this part. The next two points are thanks to Sarge: a simple benchmarking tool, which we have not yet used to obtain numbers but are working on right now, and a simulated-time testing engine, which means that we can speed up tests without having to wait for the actual real-time timeouts that sometimes need to occur in the system. One more important thing we have is a clearer view of the system architecture, which is described in the public design document.
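To illustrate the "protocols as modules" and "availability separate from ordering" points above: Mir itself is a Go framework, so the following Rust sketch is only a language-agnostic illustration of the idea, with made-up event and module names, not Mir's actual API. An availability module and an ordering module are written as small event-driven state machines and wired together by a trivial dispatcher, which is also what makes deterministic, simulated-time testing straightforward.

```rust
use std::collections::VecDeque;

/// Illustrative event type exchanged between modules. In a real framework the
/// event set is much richer (network sends, timer events, hash requests, ...).
#[derive(Debug)]
enum Event {
    /// A client submits a transaction payload.
    NewTransaction(Vec<u8>),
    /// The availability module certifies a batch and hands a reference to ordering.
    BatchReady { batch_id: u64 },
    /// The ordering module decides the position of a batch in the final order.
    Deliver { batch_id: u64, seq_nr: u64 },
}

/// A protocol is written as a module: a state machine that consumes one event
/// and emits follow-up events, which makes it easy to write in a
/// pseudocode-like style and to test deterministically.
trait Module {
    fn apply(&mut self, event: &Event) -> Vec<Event>;
}

/// Availability module: batches transactions and announces certified batches,
/// independently of how (or in which order) they will later be ordered.
#[derive(Default)]
struct Availability {
    pending: Vec<Vec<u8>>,
    next_batch: u64,
}

impl Module for Availability {
    fn apply(&mut self, event: &Event) -> Vec<Event> {
        match event {
            Event::NewTransaction(tx) => {
                self.pending.push(tx.clone());
                if self.pending.len() >= 2 {
                    // Form a batch out of the pending transactions (contents elided).
                    self.pending.clear();
                    let id = self.next_batch;
                    self.next_batch += 1;
                    return vec![Event::BatchReady { batch_id: id }];
                }
                vec![]
            }
            _ => vec![],
        }
    }
}

/// Ordering module: only sees batch references and assigns sequence numbers.
#[derive(Default)]
struct Ordering {
    next_seq: u64,
}

impl Module for Ordering {
    fn apply(&mut self, event: &Event) -> Vec<Event> {
        match event {
            Event::BatchReady { batch_id } => {
                let seq = self.next_seq;
                self.next_seq += 1;
                vec![Event::Deliver { batch_id: *batch_id, seq_nr: seq }]
            }
            _ => vec![],
        }
    }
}

fn main() {
    // A single-threaded dispatcher; a real node would route events to modules
    // by type and could run them concurrently or under simulated time in tests.
    let mut modules: Vec<Box<dyn Module>> = vec![
        Box::new(Availability::default()),
        Box::new(Ordering::default()),
    ];
    let mut queue: VecDeque<Event> = VecDeque::from(vec![
        Event::NewTransaction(b"tx1".to_vec()),
        Event::NewTransaction(b"tx2".to_vec()),
    ]);
    while let Some(event) = queue.pop_front() {
        println!("{:?}", event);
        for module in modules.iter_mut() {
            queue.extend(module.apply(&event));
        }
    }
}
```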
So what do we not yet have, and what are the next steps for this project? What we need now is a more robust and better-tested version of everything I just mentioned. In particular, the system needs to be able to recover from crashes; it needs to support weighted voting, to be amenable to proof of stake, proof of space-time, and other kinds of protocols, as well as membership management. We absolutely need to improve the documentation of the system, which is currently lagging behind a lot, and we also want to make the system robust against malicious attacks, or at least the more obvious ones, so that a malicious attacker cannot just come and destroy the system. One big thing we want to do is to deploy our system on the BuilderNet testnet as a subnet, and this is our next milestone. For more details you can look at the roadmap document, which is public and is being revised right now; it will contain the small, low-level details and approximate timelines for this process.

Hi everyone, this is Sarah, giving an update on our project Y4 about Filecoin mainnet consensus. First, some good news: thanks to the amazing work of our intern Trisha, we have a formal security proof for EC (Expected Consensus) that we want to submit to Financial Cryptography 2023. With this work we also proposed some security improvements to EC, and during our team week we decided that we will go ahead with one of our suggestions, which is to replace the broadcast in EC, that is, the block broadcast, with consistent broadcast. Consistent broadcast prevents equivocation by an adversary, so this will greatly strengthen the security of EC. We also made some longer-term plans that are more research-oriented: we want to investigate a new consensus protocol for Filecoin which is an adaptation of Bobtail. Bobtail is a protocol that was presented in the context of proof of work, and Guy made a proposal to adapt it to the proof-of-storage case. So our plan for the next couple of months is to first finalize the details of the protocol and make some design choices, and at the same time see if we can work out a formal proof and understand what security guarantees this protocol can get. Very exciting work coming up. Thank you.
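Finally, picking up the Y4 point about replacing EC's plain block broadcast with consistent broadcast: the toy sketch below (illustrative only, not the actual proposal for EC, and with signature collection stubbed out) shows the basic mechanism. Each replica echoes at most one block per height, and a block counts as broadcast only once it has a certificate of 2f+1 echoes out of n = 3f+1 replicas; since any two such quorums intersect in at least one honest replica, an equivocating proposer cannot certify two different blocks at the same height.

```rust
use std::collections::{HashMap, HashSet};

type ReplicaId = usize;
type BlockHash = u64;

/// A replica "echoes" at most one block per height. The echo here is just the
/// replica id; a real protocol would return an actual signature.
#[derive(Default)]
struct Replica {
    echoed: HashMap<u64, BlockHash>, // height -> block hash this replica echoed
}

impl Replica {
    fn echo(&mut self, id: ReplicaId, height: u64, block: BlockHash) -> Option<ReplicaId> {
        // Never echo two different blocks at the same height.
        if let Some(&prev) = self.echoed.get(&height) {
            if prev != block {
                return None;
            }
        }
        self.echoed.insert(height, block);
        Some(id)
    }
}

/// The proposer collects echoes; the block only counts as broadcast once it
/// has a certificate of at least `quorum` distinct echoes (2f+1 out of n = 3f+1).
fn certify(
    replicas: &mut [Replica],
    height: u64,
    block: BlockHash,
    quorum: usize,
) -> Option<HashSet<ReplicaId>> {
    let mut cert = HashSet::new();
    for (id, replica) in replicas.iter_mut().enumerate() {
        if let Some(sig) = replica.echo(id, height, block) {
            cert.insert(sig);
        }
    }
    (cert.len() >= quorum).then_some(cert)
}

fn main() {
    let n = 4; // n = 3f + 1 with f = 1
    let quorum: usize = 3; // 2f + 1
    let mut replicas: Vec<Replica> = (0..n).map(|_| Replica::default()).collect();

    // An equivocating proposer tries to certify two different blocks at height 10.
    let cert_a = certify(&mut replicas, 10, 0xAAAA, quorum);
    let cert_b = certify(&mut replicas, 10, 0xBBBB, quorum);

    println!("block A certified: {}", cert_a.is_some()); // true
    println!("block B certified: {}", cert_b.is_some()); // false: replicas already echoed A
}
```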