Hello, everyone. This is Marco. We had a very nice ConsensusLab team week in Belgrade, Serbia, with lots of outcomes, and this short presentation is about those outcomes. Just as a reminder: ConsensusLab organizes its projects into three research and project areas. There is the blue project area, where project names start with B, in which we research and develop hierarchical consensus, our version of sharding using subnets. Then we have the yellow area, with project names starting with Y, where we research scalable consensus implementations. And then there is the green one, in which we deal with the parallel execution of smart contracts and arbitrary computation over data. As you will see, this update is organized into those categories: you will first hear updates on the projects in the blue area, based on the outcomes of the sessions we had during the team meeting in Belgrade, then on the yellow area, then on the green area, and then I will also introduce some other sessions that we did. But before that, I want to focus on what ConsensusLab is going to deliver as an MVP of scalable consensus for Filecoin this year. We had a few sessions related to this, and Dragan helped us with them: first a session on FVM alignment, and then a detailed session on testnet and productization. These outcomes were also informed by our session on what comes next in 2023. Also, offline of the team week, George had a detailed discussion with Jenny from the Lotus team on how we can collaborate with them, so I included this in the summary of updates with respect to our MVP. Regarding FVM alignment, Dragan updated us on the latest status of the FVM.
I won't repeat all of it, but for the built-in actors the code is frozen and the production version is moving to July, while FVM programmability is looking at October for its code freeze, and we are planning our work according to the FVM plan. We adjusted a bit, because hierarchical consensus critically relies on two actors that we need to implement in FVM: the Subnet Coordinator Actor (SCA) and the subnet actor. In our first MVP of hierarchical consensus these are Go actors, which we are now porting to FVM actors. The subnet actor will definitely be a user-defined FVM actor, so it needs to wait for user-defined actors to be delivered, and the SCA is currently built-in; we are discussing with the FVM team whether it should stay a built-in actor or also become a user-defined actor. One of the action items of this alignment session is that Alfonso will work with Dragan to become part of the Filecoin early builders program. Then, an update on George's discussion with Jenny and the Lotus team. Towards productization of what we do in ConsensusLab, the Lotus team will appoint an engineer to work with us on backporting Eudico pull requests to Lotus; this Lotus engineer, to be named, will provide us with design and implementation review, and from the ConsensusLab perspective this can start in mid Q3. The Lotus team is currently doing its own prioritization and has this planned for Q4; it might start earlier, but that is up to them, so for now we work with Q4 as a starting point. We also discussed testnet community engagement: the goal is to bring up a dozen SPs, and maybe consider incentive programs, bug bounties, hackathons, and other things related to the testnet.
As for the testnet and productization, and what we're going to focus on this year, we defined a few must-haves, which you're going to hear about in the updates on the individual projects later on, where the speakers after me detail what each of these must-have sub-bullets means. We also included some provisional dates. Our ConsensusLab scaling MVP needs to have: an MVP of cryptoeconomics for hierarchical consensus, which we target for the end of Q3 (you're going to hear updates on this); an MVP of one functional subnet consensus, currently developed in project Y3, which you will hear more about later, with a proposal targeted for the second part of Q3, so August or September; a Eudico testnet, as I already briefly discussed when summarizing the discussions with the Lotus team, targeted for late October; and full compatibility with FVM, which, given that the FVM code freeze moved and we depend on the built-in actors, is going to come after October, so we are shifting our priorities around a bit (for example, we worked on this in Q2, but are now reprioritizing). Specification updates are coming after the synchronization with the Lotus team, so late Q4. There are a few nice-to-haves, which I won't read out, but they include monitoring, visualization, documentation, and other things. These are nice-to-haves this year, but definitely must-haves before a to-be-defined deadline in 2023, when we're going to production. In the following, you will hear updates on the sessions we had during the week on the individual projects from the blue, yellow, and green work areas. You will also hear an update on the H1 retro, the retrospective on the first half of 2022, which we also did a session for. And we also had a very interesting session on what comes next: preparing the ConsensusLab research and development agenda for 2023.
Note that this is not all we are going to do: in the what-comes-next session you will hear what we also plan for community work next year, and apart from the projects we are doing in-house, we have ambitious plans to reach out even more and extend community building. With that, I'm handing over to the ConsensusLab group members to give you updates on the individual sessions we had related to the individual projects. Thank you very much. Hello everyone, I will give you a brief update on the outcomes of our team week work for project B3. Project B3 is the one where we are trying to move hierarchical consensus (HC) to production, and this last quarter was mainly focused on targeting the FVM. Our MVP works with the legacy VM, and we want to be fully compatible with the FVM as we move into production. A high-level list of the milestones we've reached so far: first of all, we have a custom built-in actors bundle, including the SCA. The SCA is the core actor for the operation of HC, and we now have an implementation of it that we can load in any client that runs the FVM. Then we have a reference implementation of the subnet actor. The subnet actor is a user-defined actor, so we've implemented it against FVM milestone 2 (M2). This reference implementation governs all of the policies and the life cycle of subnets, and can be customized by users in order to determine the consensus algorithm and so on when they spawn a new subnet in HC. And finally, we have started the integration and rebase of the FVM runtime with Eudico. We haven't implemented all of the end-to-end mechanics of the protocol yet, but we have a port: if you go to the experimental FVM M2 branch, you'll be able to test loading this custom built-in actors bundle, deploying your own subnet actors, and so on.
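To make the life-cycle policies mentioned above a bit more concrete, here is a minimal Go sketch of the kind of state machine a subnet actor could govern. Everything here is a hypothetical simplification for illustration: the state names, the collateral rule, and all function names are assumptions, not the actual reference implementation's API.

```go
package main

import (
	"errors"
	"fmt"
)

// Status models the lifecycle states a subnet could move through
// (illustrative names, not the actual actor's states).
type Status int

const (
	Instantiated Status = iota // deployed, below minimum collateral
	Active                     // enough collateral, fully operational
	Inactive                   // collateral fell below the minimum
	Killed                     // permanently shut down
)

// Subnet is a toy model of the on-chain state the subnet actor keeps.
type Subnet struct {
	MinStake uint64            // hypothetical policy knob: collateral needed to activate
	Stake    map[string]uint64 // collateral per validator address
	Status   Status
}

func NewSubnet(minStake uint64) *Subnet {
	return &Subnet{MinStake: minStake, Stake: map[string]uint64{}, Status: Instantiated}
}

func (s *Subnet) total() uint64 {
	var t uint64
	for _, v := range s.Stake {
		t += v
	}
	return t
}

// Join adds collateral for a validator and activates the subnet
// once the minimum total stake is reached.
func (s *Subnet) Join(addr string, amount uint64) error {
	if s.Status == Killed {
		return errors.New("subnet is killed")
	}
	s.Stake[addr] += amount
	if s.total() >= s.MinStake {
		s.Status = Active
	}
	return nil
}

// Leave withdraws a validator's collateral; the subnet is demoted to
// Inactive if the total drops below the minimum.
func (s *Subnet) Leave(addr string) error {
	if _, ok := s.Stake[addr]; !ok {
		return errors.New("unknown validator")
	}
	delete(s.Stake, addr)
	if s.Status == Active && s.total() < s.MinStake {
		s.Status = Inactive
	}
	return nil
}

// Kill shuts the subnet down for good once all collateral is withdrawn.
func (s *Subnet) Kill() error {
	if s.total() != 0 {
		return errors.New("collateral still locked")
	}
	s.Status = Killed
	return nil
}

func main() {
	s := NewSubnet(10)
	s.Join("validator1", 6) // total 6 < 10: still Instantiated
	s.Join("validator2", 5) // total 11 >= 10: becomes Active
	fmt.Println(s.Status == Active)
	s.Leave("validator1") // total 5 < 10: demoted to Inactive
	fmt.Println(s.Status == Inactive)
}
```

The point of the sketch is only that "policies and life cycle" means an on-chain state machine whose transitions (join, leave, kill) are guarded by user-chosen parameters; the real actor additionally encodes the chosen consensus algorithm and checkpointing rules.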
But we haven't implemented a lot of the integration of the protocol in Eudico, because we wanted to wait for the FVM milestone 2 code freeze, in order not to have to rewrite a lot of this code. As we wait for that code freeze, we will be focusing on defining the spec for HC, so having a low-level description of the spec and all of the FIPs, in order to pick up the discussion with the community and the core teams that will help us determine all the details and start on the production code. If you're interested in tinkering with this MVP of HC over FVM, we have: a reference implementation of the FVM that includes some new types for the SCA and a Rust implementation of the f4 address, an address that includes information about the subnet as context; a fork of the built-in actors that includes the implementation of the SCA as an additional built-in actor; a repo with the subnet actor reference implementation for FVM; and finally a branch of Eudico that loads these built-in actors and points to all of the right parts. We have also already rebased the FVM into Eudico, so the built-in actors now work in Eudico with the same built-in actors bundle as in Lotus. This means that if you want to use the FVM built-in actors without all of these forks and mechanics, that is already available for everyone to use. As part of B3, we also discussed some of the open problems that we have in the design and that we may have in the future. One of them is censorship resistance.
It wasn't really clear to us how to fix the censorship problem, where a parent prioritizes some children over others, or completely censors cross-net messages from its children, because parents have a lot of power over their children. After a lot of discussion we realized that it is futile to try to fix censorship outright, because we could end up breaking liveness. What we want instead is to make sure that if a subnet is honest, there can't be censorship. To achieve this, as part of related project work (you've already seen an update on that project), we want to explore chain quality as a metric which, if it holds and is good, ensures that there can't be censorship of children. This is the first of the problems; but even if we have this chain quality property and can rule out censorship, we still have the problem of miner, or in this case parent, extractable value. Even with chain quality, miner extractable value, or parent extractable value, is still possible; different schemes to improve on this will be explored mid-term, and they are not something we consider for the first implementation of HC. The other problem, which we already knew about in HC, is related to data availability. As a result of how cross-net messages are propagated and executed in the different subnets when they originate somewhere else, we need data to be resolvable in order for messages to be executed. If we cannot resolve this data, either because it is unavailable or because it comes from a subnet that is malicious and doesn't want to give it to us, we may harm the liveness of the subnet that needs to execute the message.
In the current implementation we have this problem, but to work around it, the parent, which is the one that propagates a message to a child subnet, will check whether the data is available, and whether the message can be executed, before propagating it. If this is not the case after a few retries, then instead of harming the liveness of the subnet, it will discard the cross-net message, flagging that the data is not available, and it will send a message back to the source subnet to notify it that the cross-net message couldn't be executed because the data wasn't available. In any case, in order not to spam the whole hierarchy, we want to give feedback about this failure to the originating subnet. This was the most pressing data availability problem in the current implementation, but there is also data availability related to fraud proofs and state proofs. As part of the cryptoeconomic model and the different ways of incentivizing behavior in subnets, we realized that for many of the fraud proofs or state proofs that we want to be generated in order to report misbehavior, we need data availability. So what we will do is implement a storage interface in subnets, so that any full node or light client in the network can persist the on-chain data required to build these state proofs, in order for it to be retrievable and accessible. We are considering several backends for this persistent storage; the first idea is to be agnostic to the backend and to allow any user or any full node in the subnet to have the on-chain data available for access, so that it is not lost. And hopefully we will be able to leverage the work of other teams, like CryptoNetLab, which has an open problem on the retrievability and availability of data.
Finally, as part of B3, we also discussed the cryptoeconomic model that we've been designing for a few months in discussion with CryptoNetLab. We already have a first draft with the basic building blocks. The discussion in Belgrade was mainly focused on what kinds of detectable misbehaviors we can report in subnets, how reporting would work, and how we can build these proofs. We realized that, in the end, the main misbehaviors that can be detected, and that can happen if we don't have an honest majority in a subnet, are the following. First, equivocation, where there is a deviation in the consensus, depending on the kind of consensus run in the subnet; we discussed the kind of proof that we can build in order to punish the parties involved in the equivocation. Second, invalid state transitions: situations in which the block may be valid, so consensus may be reached, but the state transition from the previous state to the new one is not correct; here too we can build a proof in order to report this kind of misbehavior. As immediate action items, we want to have a first implementation of the reporting of these misbehaviors, and we want to start exploring how the use of zero-knowledge or other succinct proofs can help us with these problems and mitigate, for instance, invalid state transition misbehaviors. We also want to understand the role of payment channels here, because in the conversation we realized that payment channels may have an important role in propagating information, or in making what we could call shortcuts between different points in the hierarchy. That's it from my side, thank you very much, and if there are questions, please let me know. Next, this is the Y4 update. The motivation behind project Y4 is that there have been recent advances in research on blockchain consensus protocols, and hierarchical consensus is coming soon.
We want to investigate whether it would be a good idea to replace EC, the consensus protocol of Filecoin, and maybe design a root consensus protocol that is well adapted to HC, so hierarchical consensus, and to Filecoin in general. For this project, we first want to start by formalizing the security and performance requirements of Filecoin. We have discussed this with the team, and basically we want the consensus protocol to be compatible with Filecoin's storage requirements. This means that WindowPoSt messages should be included in time, even in a period of congestion. Also, it should not be possible for an adversary to censor a WindowPoSt, or to fork the chain for long; that would break the storage security requirements of Filecoin. We also want this protocol to be quite simple, because that is much easier to reason about. We don't need high throughput, because hierarchical consensus will take care of throughput, so the root chain doesn't need that much throughput itself. However, we want to be able to scale to 10,000 nodes: at the moment there are around 4,000 nodes in Filecoin, and in the future there may be more, so we think it is realistic to target a protocol that scales to at least 10,000 nodes. We also don't need very fast finality, again because most of the transactions will be happening in subnets, so on the root chain we don't need that fast finality. And censorship resistance I've already talked about. So we will formalize basically what I've just said, and then we want to look at the literature, especially the literature that has recently come out, and study the trade-offs of the different protocols that have been proposed recently. This will then allow us to make a choice. First: do we actually want to change EC? Maybe we don't; maybe EC is the best choice.
But if not, let's see if something better is out there, or maybe we will have some ideas of our own and want to start a new design from scratch. And then what we would like to do is write formal arguments for the security of the protocol. Hi, this is Matthew with the update on the Y3 project about scalable subnet consensus. First, what we already have is basically the Y3M2 milestone, which we finished recently. In particular, that means we have a proof-of-concept Mir-Eudico integration, where several parts of the Mir-based consensus component are still missing or stubbed out, but the demo already works. We have crash fault tolerance, in the sense that the system keeps operating even if one of the nodes, or any fraction of nodes up to the tolerated threshold, crashes forever. And we land Mir improvements basically every week. Dennis, Sergei, Andre, and myself are working on the project. What's next on Y3 is the Y3M3 milestone, which consists of a minimum viable product of a Mir-based consensus component for Eudico. In particular, what we are envisioning is: a stable architecture of the consensus component; an efficient Eudico integration without double work (for example, if the Eudico networking component sends some data around, we shouldn't also be sending it around within Mir, as we do now); a Narwhal-based availability layer; a complete consensus protocol implementation with all the necessary features; basic tests and preliminary performance benchmarks; and support for reconfiguration. The immediate next step to achieve this is basically writing the code to implement all of this. That's it on the Y3 project. We also had another session about representing and storing the state of the blockchain, either in a compact form or by storing all the inputs, because currently what most blockchains do is append-only state.
So basically, the state is modeled as an accumulation of all the updates since the genesis block. If the updates, say blocks, have constant size, then with each block the state grows, and the size of the state is linear in the number of blocks. This is great because it is simple, it is universal, and it provides good security guarantees, but it only works with a low data rate. This is perfect for Bitcoin, which stores all the blocks forever: in the particular case of Bitcoin, this gives us some tens of gigabytes per year, which is very feasible to store. However, as we are targeting scalable, high-throughput consensus implementations, this append-only representation of state might be suboptimal, because it can easily result in petabytes per year that have to be stored, and a lot of these updates might not really be useful anymore. So the conclusion is that subnets require support for state compaction. The current implementation of the Filecoin client actually stores all the blocks with all the updates forever. So next, we propose to explore possibilities for deleting old state updates. And that's it from me, thank you very much. Hello everybody, I'm going to give a brief update on G1, the green-area project. What's the current status? We have almost completed a state of the art on deterministic parallel execution. Remember that G1 is about the ability to execute smart contracts in parallel, in order to speed up the execution of the distributed applications that will run on top of a VM. There is basically one take-away message: many different techniques exist to implement deterministic parallel execution. This has actually been a fairly hot research topic for, I would say, about 15 years.
These techniques differ not only in performance, which is obvious, but also in the type of determinism they guarantee, and they all present different trade-offs. Together with my master's students, we studied the differentiation criteria that exist, and here is the list of those we selected: the type of determinism; the scope of determinism; the memory consistency model that the deterministic concurrency (DC) technique assumes; the supported parallel programming models; the supported system models; compatibility with existing hardware and software; the requirements in terms of configuration; and finally, performance. For all of these criteria, there are different choices that can be made by a DC technique, and depending on those choices you might want to apply the technique in a particular context or not. That's the first part of the work done in the G1 area during this period. The second part is about starting to design and implement support for deterministic multithreading for Wasm. The initial prototype will be based on Wasmtime, because it is a fairly popular execution engine and it is the one that is going to be used within the FVM. So what are the next steps? Implementing full support for multithreading in Wasmtime will be the first item. Then the plan is to compare existing DC techniques on concurrent Wasm programs. This obviously includes smart contracts executed within Wasm, but we don't want to restrict ourselves to smart contracts; we also want to study other relevant benchmarks. Finally, with a special focus on smart contract execution, the idea is to find and apply the best DC technique within Wasmtime to support the parallel execution of smart contracts. Thank you. Since we are basically at the end of the quarter, we also decided to use this opportunity to do a retrospective on the first half of 2022, particularly the time since our last meeting.
We tried to determine, as a group, the things we are doing well, as well as the things we want to change for the next cycle. Starting with the good: we have been working in the open more and more. Since the last meeting, for instance, one of the changes we made was to start publishing weekly notes from all of our project meetings. In addition to that, we have been putting out prototypes, papers, demos, and a lot of other material on the work that we're doing. The team has been growing: we just welcomed the first cohort of summer fellows, with three PhD students joining us for the summer for the first time. And for the most part, we have been delivering on time on the roadmaps of the different projects we're working on. The bad: we have, at this point, fewer external collaborations than we would have wanted, so we're still doing a lot of the work internally, and we definitely plan to change this, also to increase our capacity. Occasionally, we've gotten stuck in analysis and decision making; it's not a frequent occurrence, but even so it is something that we need to debug. In terms of changes for the next quarter, we want more collaboration within the team, as well as across the people working on the different projects. We want increased community engagement: this means organizing more events, putting out RFPs for our projects and for other problems that we think are interesting to work on, and also attending other events organized by the community, not just staying in our own bubble. We're also piloting an asynchronous reading club. This is extremely asynchronous, so there is no set schedule; the goal is basically to document papers that we read anyway and make them available for people who are also interested. And we're also piloting asynchronous speed dating. Speed dating is what we call our weekly meeting series.
They're quick-fire, 30-minute meetings, but even then we'll try to make them async as well, to accommodate the new time zones that are joining the team soon. Finally, we had quite an active and somewhat controversial session on what comes next. We have a roadmap that goes basically until the end of 2022, so at the mid-year point it's about time for us to start thinking about the things we want to do beyond that. This is a long list of problems and projects that I won't just read out, but there are all sorts of things: weaker-than-total-order semantics; analysis of network topology, for instance; front-running prevention (drand just came out with timelock encryption, so that might be an interesting thing to explore); AI-assisted design of consensus protocols is quite an interesting one, probably not something that we'll be doing internally, and also not on the very short-term roadmap, but we may put out some funding programs to support more work in the area. And of course you can read through the rest. We will be developing some of these bullets into actual discussions on our GitHub repo, adding a lot more detail, and of course we invite everyone to participate in those discussions. The last bullet there is also quite important: our roadmap for 2022 mostly ends at delivering production-ready implementations, but our work is not done until things are actually running in the Filecoin mainnet, and that is what we also want to accomplish in 2023. And that was basically the summary of our meeting. Thank you for watching. If you'd like to talk to us, you can reach us on our GitHub repo, where we have an active discussions section and issues for all of our projects and a lot more. You can also check us out on the research website, and finally, email us or come talk to us in the ConsensusLab channel in the Filecoin Slack. Bye.