Cool, Filecoin Master Plan. Who remembers what the next two things are in the Filecoin Master Plan? Start with a massive amount of storage hardware. What's after that? Fill it with useful data. Okay, what's the third? Compute over the data. Awesome. So: hardware, data, compute; hardware, data, compute; hardware, data, compute. Everyone, have it in your head. Easy? Easy to remember. I'm going to say this: if anyone asks us at any of the awesome things happening later this week, we know what's going on. And what's our EngRes strategy, starting from the bottom? What's down here? Keep critical systems running — stewardship — keep critical systems running. I'll take any of them; all accurate. Making sure this is our foundation: if this breaks down, everything breaks down. Things need to stay secure, things need to constantly be growing and scaling to their new usage and adoption, and we need to be running and releasing these systems quickly as well. So this is our framework for improving them over time and growing their adoption. What's after that? And — oh no, don't look at your tables. What's above critical systems stewardship? What's next? What is the wording next to number two? Grow talent in the network, and grow talent on our team. So this is where we are bringing in new capabilities, which might mean hiring new humans onto our team, but it might also mean scaling talent in the network on behalf of other teams. And of course, this is doing our work in a network-native way: events that we are participating in, like all of Lab Week, like IPFS Camp, FIL Lisbon, et cetera, but also hosting our own events so that we can get work done with the teams we need to coordinate with. One more thing there: we now have a number of working groups emerging with their own virtual workshops and so on. That's likely going to double down and grow next year.
So, things like the CoD working group, things like the IPVM working group — a bunch of them. We're finally reaching the scale. A long time ago we tried to generate a lot of these, and it stuck around with a few teams — the IPLD team, the IPFS team, and so on. A few of them have been running strong for years. We're now seeing many more groups adopting that working group structure. So a lot of this growing of team and network might happen through those virtual workshops too. If you want to work on a set of problems with other people in other groups and other ecosystems, consider leaning on one of those structures. What is number three? Number three in our strategy for next year — don't look at your tables. Someone tell me. Storage and retrieval, the core of what we do. Make this robust, make this scale, make this fast. This is performance; this is making sure that user error rates are very low; it's making sure that humanity's information stored in Filecoin is really useful, that people can actually build on top of IPFS and get access to all of that data, and driving adoption with users. Four: compute over data and state — or state and data, whichever way. FVM, new capabilities in the storage network so that we are launching new things on top of FVM, more chain space, things like IPC, and making sure that we actually have Compute over Data (CoD). Cool. So, that's our strategy. It's in front of you, so you can always look at it there. Feel free to take one of these cards home with you to remember it closely. Put it on your wall, stare at it every day, resonate with it. The aim is that we are going to keep this constant. This also shouldn't be surprising — these map exactly to the things we were doing last year: keeping things running, talent funnel, robust storage and retrieval. Literally, we didn't even change the title on that one.
And then compute — this was "breakthroughs," but really many of our breakthroughs are leading to compute and bringing the capabilities we need to make that happen. So, these are our top-level principles that we went over yesterday. Optimize and scale: a big thing there is measuring. I think this is something that a number of teams maybe haven't fully integrated into their roadmap thinking — how we are going to measure our performance, measure our reliability, and get the data we need to accelerate development within our teams and across the network. You'll see that in half a second. Shipping things: I think we're all aware of that. Teams are very excited to ship their stuff to real users, but if you're not actually generating impact with users who are using your stuff, it doesn't count yet. End-user product and UX: we have high goals here. We are on an improvement gradient towards it. Crossing the chasm does not happen overnight — web3 as a whole has not done it yet — but this is what we're building towards, so start building in those deep levels of reliability so that we can upgrade over time and it's smooth for the many different types of clients who want to consume it. And then this is around growing the talent in the network, growing our own capabilities, but doing our work really openly and actively with the community. So, roadmap tools — great, that's the sort of outcome that can help strengthen our collaboration within the network. Reminders: as you think about your team's roadmap, and as we now spend this time together refining it, remember some of these principles. Make sure you're figuring out how to incorporate those goals into your work. Anyways, just like last year, we have high-level OKRs that map one-to-one to each of those goal areas. Now, obviously this was done yesterday, and so it hasn't had enough feedback.
We need to sit down together as org leads to talk about how this work actually translates into very different teams, but I wanted to make it one layer more concrete than the beautiful strategy you have in front of you: what does it look like to actually go forth and achieve this strategy in the next six months? That was about the timeframe we were working with. So, what might some of those goals or sub-bullet points within each of those objectives look like? Starting with critical system stewardship: this is growing the systems safely and robustly. So this is definitely security, and a really, really good burn-down of issues, but it's also a lot of other things. For one — I literally just copied this from the IPFS Kubo roadmap; actually, I think I changed the word to "catalyze" — catalyzing growth of additional IPFS clients and implementations. That's a key part of growth, but it's growth in a network-native, network-oriented way. Creating a robust, automated benchmarking system for libp2p that maps performance gaps. We're doing some of this with Testground right now, running on each PR. But if we're going to reach the level of Chrome or Firefox, or the many other testing suites that really give good feedback to development teams on where their gaps are with users, you need thousands — tens of thousands, I don't know — you need lots of benchmarks running on each of those PRs. That's not just, hey, does it interop? It's: is there a good experience for people who are building their applications on top of our tech? Is it working? Is it fast? I'm sure the libp2p team is like, yes, please — if we're going to make it fast, we need a gradient that we are working against. So let's make that gradient. Again, these are placeholders. We are going to work on these, we're going to refine these, and we're going to make sure we have something good later.
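To make the "benchmarks running on each PR" idea concrete, here is a minimal sketch of the gating logic such a system might apply: compare a PR's benchmark results against a baseline and flag anything that regressed beyond a threshold. The benchmark names, numbers, and 10% threshold are all illustrative assumptions, not real Testground or libp2p output.

```python
def find_regressions(baseline, candidate, threshold=0.10):
    """Return benchmarks where the candidate run is more than `threshold`
    (as a fraction) slower than the baseline. Lower is better for all
    metrics here (latencies/durations in ms)."""
    regressions = {}
    for name, base_ms in baseline.items():
        cand_ms = candidate.get(name)
        if cand_ms is None:
            continue  # benchmark missing from the PR run; could be flagged separately
        slowdown = (cand_ms - base_ms) / base_ms
        if slowdown > threshold:
            regressions[name] = slowdown
    return regressions

# Hypothetical benchmark results for a baseline commit and a PR.
baseline = {"dial_latency_ms": 120.0, "transfer_time_ms": 800.0}
candidate = {"dial_latency_ms": 150.0, "transfer_time_ms": 790.0}
print(find_regressions(baseline, candidate))  # → {'dial_latency_ms': 0.25}
```

In a real CI setup this check would run after each PR's benchmark job and fail the build (or post a comment) when the returned dict is non-empty, giving developers the "gradient" to work against.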
Something, something, IPFS and Filecoin network infra security and uptime — we should define this more clearly. I think making sure that "uptime" is a defined term we're clear about is really important. This might also split into two, but my slide wasn't going to fit. And that's really important — it's very core to this goal — making sure that we actually grow usage and adoption. This is probably going to be something in partnership with Outercore and the ecosystem team. But if we stay static, we die. So we need to be growing; otherwise we're not going to reach Mars, we're not going to reach real user adoption, right? Like, maybe 3,000 repos today are building on top of IPFS on GitHub. Obviously there are more users who are not using GitHub, but that's nowhere near the two million developers writing in Rust; that's nowhere near all of the applications being built on the internet actually building upon IPFS and Filecoin. So we have to grow in order to reach that adoption space. This one is around dogfooding — and actually maybe dogfooding is not quite the right word; it's transitioning our usage from web2 to web3 systems. Now that we have things like Saturn coming online, harnessing that opportunity to cut our costs on centralized infrastructure and replace it with our own infrastructure. Every Filecoin we spend on Saturn delivers multiple kinds of value into the wider ecosystem, versus every dollar we give to AWS is a dollar they then get to go spend on AWS marketing. So let's use our own tools, and let's find ways to be more efficient and more cost-conscious while doing that. I think we also just have a lot of room to cut here, period. We have a lot of latent resources being spent on things we don't need.
Let's use that money for other things — so, thinking of ways to do that. Remember the macro environment: we want to cut costs in a ton of ways, as much as possible, to conserve. Let's both cut down the spending, and, for whatever we do spend, try to spend it in our community so that it's much more useful to us. We want to go fund lots of amazing development teams and developers who are building on our tech; if we flow those resources into our ecosystem, it has lots of benefits for all of us and for the whole PL network as well. There's a lot of work that has been happening over the past couple of months around making sure that we are snapshotting our chain and doing upgrades. Let's make those snapshots actually stored in Filecoin. Let's actually be using our own storage networks and demonstrating how a blockchain should use Filecoin to store its chain state — and ideally bootstrap out of its chain state as well. We had a good conversation about that two days ago, so that's an exciting area we'd love to see us push forward in. And then there are some projects that are not IPFS or Filecoin but are significant, amazing projects that we have built, that we use super heavily, and that are critical to getting our work done. Let's gain broader adoption for those as well. We have awesome web3 infra tools, and we need to be sharing those with the wider community and driving adoption for them too. Placeholder XXX — maybe it's a percentage, maybe it's something else — but actually bringing drand, libp2p, and Testground deeper into the web3 community and gaining usage for them. This is nowhere near complete. Question mark, question mark, question mark. I've probably missed like 12 things.
It was three o'clock in the morning — but this maybe goes a bit deeper. For example, I think I missed a lot of things around releasing on a regular cadence, or making sure that we engage really effectively with our communities. So lots of oversights; this will probably end up much longer, but hopefully it gives you a slightly deeper peek into what stewarding critical systems actually looks like. I'm going to have to speed up because of time, and I want you to get to discussion. Leveling up team and network capabilities: we need to hire, and we definitely need to hire more folks at the leadership level — to help scale folks like me, but also to really help support a lot of the coordination and alignment that needs to happen between teams here. Whether that's investing in tools, or helping make sure that we are better staffed at the people-management layer — that's something we've been working on; we've made some progress, but we still have room to go. So staffing up at that level is definitely a big focus. Burning down bus factor: I think this was on the js-ipfs side — making sure that we don't have teams with a bus factor of less than two. Either we need to staff that next person, so no one is holding the world on their shoulders alone, or we need to make the hard decision that this isn't something we want to invest in at this point in time. Again, I don't think that's the case for js-ipfs, but it's the sort of thing where, if we have a third of a third of a third of a person on something, we're not making that person effective — we're churning them constantly. Is that the thing we want to be spending time on, or can we actually focus better in the short term? And so: cancel or freeze, right? We'll set something down for a period of time.
Not everything will break if you stop it for a moment. But it's also: okay, if you're not going to have people staffed against this, you need to recognize that you're actually taxing other people, who then get pulled in when something critical that we're depending on needs to be improved. That's a level of visibility we need to recognize and account for as we think about how we're allocating resources to projects. Next: taking leadership within these communities and growing the implementer and developer communities. So, supporting network-native development and the work we're doing there — for example, participating in key events, running and spawning key events like IPFS Thing, making sure that we're engaging seamlessly with the network and with the other teams we need to work with, and taking a leadership role in accelerating the number of other developers participating here as well. This one is more around metrics and tools: making sure that we have the capabilities — maybe that's more humans on our team, more folks on the DX side or more folks on the benchmarking side — so that we can also invest much, much more in metrics, automation, visibility into performance, et cetera. And we also have a lot of things we need to ship. Maybe those teams need staffing in order to ship and gain significant adoption. So where we need to staff up teams with, say, more product folks or more designers or other resources with different capability sets — that should fit into this goal. Question mark, question mark — we'll add more stuff to this. Hyperscaling fast retrievals: this should not be surprising. I mean, these are placeholder numbers.
So talk to me if this is completely crazy, but I think it's doable. I mean, I downloaded Station; I did 50,000 Saturn retrieval deals yesterday. It's beautiful. And I think it's totally feasible, once you have a payment system, to achieve a significant number of active retrieval nodes. By the way, I think Bitcoin has 15,000, so this would make us the largest blockchain network in the world by number of nodes — that should be pretty cool. Integrating Saturn into the IPFS gateways, and through that, using it to drive time to first byte down even more. Both of those things are important, but hopefully the synergy between the two, achieving them together, would be amazing. And there's an unfinished sentence again — three o'clock in the morning. Making sure that we measure, measure, measure our retrievals, and that we feed that measurement into reputation. I don't know where Marina is in this room, but I know that's definitely an area of intense discussion happening at the retrieval markets summit tomorrow. This is super important. If we don't have this feedback loop — I was looking at the dashboard Jacob was presenting yesterday of the top reasons retrieval deals are failing right now, and it's access denied, access denied, too many retrieval requests to the storage provider. There are a couple of others as well — I guess access control, like maybe the client told them not to share this data — but we need a lot of visibility into that, and we need it to hone our development roadmap and make our projects actually deliver amazing user experiences. So: measure, measure, measure, and use the feedback loop to help everyone prioritize. Create an incentive gradient for storage providers to behave really awesomely in this ecosystem.
Otherwise, on the user experience side, what's the incentive for them? There's an additional cost in serving retrievals, so why would they? This also obviously drives a lot of work in things like Boost, lotus-miner, and other places. But it also encompasses making sure that we're continuing to onboard new data really effectively, and that we're scaling as that adoption curve increases. We've hit three petabytes per day. We still need to maintain that, but if we're projecting five petabytes per day — maybe even more — next year, we need to be upgrading our tools to help support and drive that increased onboarding rate at a technical level. So Boost obviously has areas to improve to keep making that scale, and lotus-miner also has areas to improve to support that growth. Right now I think we're at 25, 26% of all data being indexed and stored in Boost — I think those are about the same number. Let's try to get those to 75%, maybe. Again, a placeholder number, but 75% six months from now — I guess a little more than six months, eight or nine months from now — would be pretty cool: actually getting that data stored there and then making it accessible to the IPFS gateways, Saturn, and Kubo. And again, there are many more things here that I'm probably not thinking of that aren't in this list. Last but not least: driving adoption for computation. Obviously shipping the FVM, gaining massive adoption from actual developers — developers who are shipping contracts — and making sure those contracts actually get a lot of usage. I initially wrote one billion FIL; maybe that's a little aggressive, but 20 million, 200 million FIL — I could see it.
This is all of the ways in which people might use smart contracts to make new storage markets, or to make persistent storage deals, or to make loans to large storage providers. There's a lot to be transacted there in terms of flowing value into the core capabilities within Filecoin, and the more capabilities you have, the more value can flow in. So let's actually drive that. Total value managed is something that Role came up with as an iteration on total value locked, because we're not DeFi. We don't just want to lock money up in contracts for funsies. We want to actually deploy it into storage deals, into storage provider operations, into collateral in the network. So it doesn't need to be locked in a contract; we want it to be put to work inside Filecoin to generate a lot of amazing development and growth. Something, something, users of L2 Filecoin capabilities — we want to see those capabilities actually getting harnessed in the real world. I would love to see people actually putting up bounties for retrieval incentives, and storage providers having that incentive gradient to make their data accessible. And then we have a lot of things to ship: shipping IPC and driving adoption, shipping Bacalhau and driving adoption, maybe a proof of concept for an IPVM MVP — early stages, to get a first thing out the door around that future breakthrough. These are all part of those interconnected adoption goals. So — I spent too much time on this, but that's an overview of how we might actually break some of these things out, and how the things you're working on in your roadmaps fit into that broader picture. And we'll iterate on these again; feedback is very welcome.
They probably won't be final — we'll present them at the beginning of the year, once we've had time to workshop them and improve them with other people, just like we did at the beginning of this year. So I'm going to go through these once more, just to touch on them again — repetition is good, it helps us all remember things — and I'll annotate with a few more thoughts. On this one — create a robust, automated benchmarking system and so on — we need to spec out what we need here, where we are at the moment, and where we need to grow into. This layer is going to be extremely important, because we're now definitely past the scale where we're being rate-limited by our inability to know the changes we make and the impact those changes have on the rest of the system and the rest of the network. We need to get to a point where, as you're writing a PR, you have a totally automated way of knowing how that PR is going to affect the feature sets and the performance across the network — we need to use computers to tell us how well we're doing. And that's not just for your team; it's for all of the dependent teams, and the teams beyond that, who might be introducing PRs and so on. Our community is now in the many thousands of people collaborating across repos. We're trying to operate with too little infrastructure and too little automation, and we're well past that scale — this is a big investment for us. And then the usage and growth on tech: we have to define which one specifically, within this kind of range. For IPFS specifically: because of a lot of our focus on platform, we didn't focus on IPFS as much, so we sort of delayed the growth and adoption of IPFS.
So now we have to steepen the curve for a while to get back on track. A lot of these growth numbers are based on very long-term goal setting — thinking about growth rates across many years to end up with certain kinds of adoption after, you know, a five-to-ten-year range. So that's why it's steep. On this one, one thing I wanted to add is what I mentioned earlier around workshops and working groups. There's probably a set of areas where there's now a strong need for a working group, especially across the ecosystem; there are many other groups that want to work on these kinds of things. For example, private data: how do you deal with private data properly? That's one key component. There might be a working group around what should go into Filecoin deals and how you reason about other information that needs to be tracked in a deal — for example, whether or not it should be retrievable, whether or not it should be indexed, whether or not it needs to go into Filecoin Green, and so on. Those are requirements that people have circulated around Filecoin deals, and they're coming from many different teams, so there probably needs to be some kind of working group around that. Leaning into the working group structure, I think, will be really helpful. And here I mean open-source-oriented working groups, where you have some page somewhere, or a repo, and some continued way of making progress on specifications and documentation and making decisions as a community. On to this one: this particular end-to-end retrieval measurement, testing, reputation, incentives item.
We also need to accept that this one is a project unto itself — it is not an easy thing. It does not just mean "within our current projects, get end-to-end retrieval testing." It means an additional thing, across the entire network: how do we know that all of the stuff that should be retrievable is actually able to be retrieved from everywhere in the world? How do we test that on an ongoing basis? How do we get performance measurements — with what bandwidth is it retrievable, with what latency is it retrievable? And then, how do we use all of that information to drive incentives in the network? Part of the deal for storage providers is that they don't just have to store the data; they have to serve the data, they have to make it retrievable. So that might feed into coupling with the Filecoin Plus incentives, or just the straight storage-providing incentives. But first we have to create extremely good and robust ways of getting very high-quality data describing where the problems are, then give the network a way to improve on those problems, and only then drive that into the incentives. As we touched on earlier, there's a lot of reliance on Saturn and Station to drive this thinking — this might end up as another tool that we deploy through Station, or that uses Saturn or something like that, to get this measurement. So, getting hundreds of active requests. And by the way, whenever you wonder where the demand for a lot of these tools is: we have our own problems that we need to solve with these things, right? We have demand for computers everywhere in the world to hammer the network and give us these performance measurements, to then help build the thing. So it's very useful to be able to use and dogfood this stuff.
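The measurement-to-reputation loop described above could be sketched very simply: aggregate probe results per storage provider into a score combining success rate and median time-to-first-byte. This is purely illustrative — the scoring formula, the 500ms target, and the equal weighting are assumptions, not anything the retrieval markets or reputation working groups have specified.

```python
from statistics import median

def provider_score(probes, ttfb_target_ms=500):
    """Combine success rate and median time-to-first-byte into a 0-1 score.
    `probes` is a list of (success: bool, ttfb_ms: float | None) samples
    collected by retrieval checkers around the network."""
    if not probes:
        return 0.0
    successes = [t for ok, t in probes if ok and t is not None]
    success_rate = len(successes) / len(probes)
    if not successes:
        return 0.0
    # Full marks if the median TTFB meets the target; degrade proportionally above it.
    speed = min(1.0, ttfb_target_ms / median(successes))
    return 0.5 * success_rate + 0.5 * speed

# Hypothetical probes: three successful retrievals, one failure ("access denied").
probes = [(True, 250.0), (True, 400.0), (False, None), (True, 1000.0)]
print(round(provider_score(probes), 3))  # → 0.875
```

An incentive system could then pay out, or rank providers for client selection, as a function of this score, creating the "incentive gradient" for serving retrievals.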
In driving adoption for computation, some of these things — and there are probably others, like Medusa and a few more — might turn into networks. For example, drand itself: there's been a discussion for a long time — should there be a drand-oriented blockchain network at some point? What's going to happen with the VDFs? There's a bunch of questions like this. There are many projects where it's been, oh, this should be its own L2 or its own network in its own right. We're going to have to deal with that set of questions this year and figure out a good pathway to enabling a lot of these things to become those networks. The FVM — landing the FVM, and having the ability to use smart contracts directly on Filecoin, was one of the big major things. An important question is: how do we think about the crypto-econ of these other L2 networks and create a super robust, strongly reinforcing set of economies? Because with a lot of these, especially if you create a new network, we're booting up a new economy, and when we boot up a new economy, we want to make it very reciprocal — very successful, growing together. One thing I would add here — we should consider adding, to anorth's point from the pitch session yesterday — a very explicit item about connectivity and bridges. Bridges and composability and modularity have been in our slides for years.
There's this slide we use with Filecoin and all of the other blockchains, where we draw all these links across them, and that's been part of the message for a long time — and there have been many projects to drive bridges and adoption and so on — but we now need to make it a strong KR for ourselves: hey, we need to prioritize connectivity and interaction, cross-chain calling and so on, and that might involve data availability layers, to really enable any network out there to store data on Filecoin and use it. Some of the stuff around being able to store the snapshots — where were the snapshots? There we go: storing Filecoin chain state in Filecoin — being able to prototype this in a chain-agnostic way, doing it for a bunch of other chains, and showing how others should do it. This can make Filecoin a strong service that the rest of the blockchain world uses, and it can also serve us as an entry point for some of the data availability questions. It's possible that some fraction of that turns into connectivity as well, letting the many protocols people are designing now decouple the data you need now from the data that needs to be available long-term — those protocols could lean on this for the long-term availability. Cool. I think that's it. We know these are not final. This is a view into the world given what we know today, and one of the things we're doing is getting deep visibility, synthesized from each of the different teams, so we can identify if there are bugs, problems, or areas that need better alignment — and we have time to iterate on those, and time to iterate together, over the rest of this week.
And so we're going to record these. We hope to publish them — if something goes haywire, we can almost certainly deal with it — but we also plan to still make changes, so we know these aren't final; they're drafts. Any questions about that? Maybe you heard my missive: do we want to, sometime today, start getting things into GitHub, or not? No — that's for later. Defer, defer getting things into GitHub. The aim is that this is going to be so easy that you just update whatever roadmap is linked in that EngRes roadmapping doc, and myself or Katalia or someone can go copy-paste from that into GitHub issues and say: hey folks, we turned that doc into these five GitHub issues; you now own these; please continue to update and maintain them over time. And things will get auto-populated. We didn't want to spend our small amount of time together copying and pasting into GitHub issues. Also, the tools are still a work in progress, and I want to make sure we've polished off some of those rough edges. So we will do that after Lab Week, if that's okay with folks. Yeah — and it might give you a good way of thinking about how to structure this. So consider using your team: we've got six-ish milestones, so have each person take one milestone, and you get a feel for how to make it work. Mm-hmm. Yep. That'll be better. Can people start? I mean, is there a place for us to start sticking issues, if people really want to use the tool? Your mileage may vary, but yes. Okay. And you decide whether you want to keep it in your repo, in whatever repo you want to use. Or, if you don't have a place to put it — do we have a roadmaps repo that could be a good place to hold it for random things? I think, yeah, we can find a place for that. I mean, ideally you put it in the repo for your project.
Yeah. So it sounds like you can create issues in your own project roadmap and then link to them; that's the power of the internet. That sounds great, so feel free to create your set of issues. All right. We were all here for the session before, learning about the goals we have for the year and the challenges that we, as an organization and as project teams, need to tackle. Sometimes we hear that it's all working, right? And sometimes we hear things like: we need to hire an L6, or we need to find a team to take on this project, or we need to build a security team to run audits for us and make sure our code is always audited in a timely way. Often these can sound like challenges beyond our scope or reach, something someone else should take on. And if you're an IC, or new to the organization, it might not be obvious how to mobilize resources, conversations, and planning to tackle those challenges. But the reality is that all of those things are within your reach. You are totally empowered to take them on: ask around, figure out how you can contribute and how you can make things better. There are multiple examples of this; people jump into these challenges all the time, helping a team structure their docs or helping a team hire someone, and the list goes on and on. Awesome. So, a reminder: this is our EngRes 2023 strategy. Hopefully everyone knows it by heart by now. We're going to go through quick roadmap presentations from each of the different projects inside EngRes that are building in these areas, starting from the bottom and moving to the top. First up in the critical systems stewardship bucket is the IPFS stewards, but everyone working in that area is welcome. Yes? Can we launch them right into the world? 
Ah, yes, launched into the world. Amazing. First up is Reed, with IPFS. OK, so on the IPFS stewards side, we have four main goals. The first is to strengthen and grow the IPFS contributor community. We need more people pitching in and helping us move IPFS forward, so what can we do as stewards to make that community healthier and bring more people under the umbrella? New contributor guides, good-first-issue guides, and processes for triaging issues and PRs and handling them in a timely way. To support those, we're looking at hiring a couple of additional community engineers. Goal number two is fully transitioning to dynamic content routing. This has been a journey we've been on for a while: switching from using only the DHT to providing a pluggable mechanism for talking to indexers. We want to finish that off by also introducing a way of selecting the defaults, so when a Kubo node, or another implementation, comes up, it can figure out the right content routers to use. That's number two. Number three: catalyze the growth of additional clients and implementations. We want to see an ecosystem of new IPFS implementations continue to proliferate. A few things we're doing there: enabling verifiable retrieval, which supports clients, such as clients running in browsers, that use gateways heavily; changing Kubo from its current model into a library, so it's much easier for people to use Kubo functionality when building new clients for specific scenarios; and really doubling down on the specs, making them first class. When you come in to build a new implementation, we want it to be very concrete what a correct IPFS implementation looks like, and we also want processes in place for evolving those specs over time. 
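Verifiable retrieval, mentioned above, boils down to the client re-hashing what a gateway returns and comparing it against the content address it asked for, so the gateway doesn't have to be trusted. A minimal sketch of that check, using a raw SHA-256 hex digest as a stand-in for full CID parsing (an assumption for illustration only; real CIDs carry a multihash, codec, and version):

```typescript
import { createHash } from "node:crypto";

// Hypothetical stand-in for a content address: just the hex SHA-256
// digest of the content.
function digestOf(bytes: Uint8Array): string {
  return createHash("sha256").update(bytes).digest("hex");
}

// The core of verifiable retrieval: trust the hash, not the gateway.
function verifyRetrievedBlock(expectedDigest: string, body: Uint8Array): boolean {
  return digestOf(body) === expectedDigest;
}

const content = new TextEncoder().encode("hello ipfs");
const addr = digestOf(content); // what the client asked for

console.log(verifyRetrievedBlock(addr, content)); // true: bytes match
console.log(verifyRetrievedBlock(addr, new TextEncoder().encode("tampered"))); // false
```

Because the check is purely local, any client (including one in a browser) can safely fetch from any gateway, which is what makes gateway-heavy implementations viable.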
A lot of that's already in progress. And finally, you know, we can't improve what we can't measure. So we want to prioritize, with urgency, developing a set of KPIs for IPFS: both figuring out what those KPIs should be, and implementing the automation and dashboards necessary to track them. That's it, thank you. By the way, this is the magic that happens with the roadmap tool: you can screen-cap these views and put them in slides. Most of this is just what I've already said, laid out on a timeline. On the ProbeLab side, which I believe is next: we're hopefully going to collaborate a lot with the IPFS team and other teams to look into how we can verify the correct operation of our core protocols. We're going to focus on the DHT and gossipsub for the foreseeable future. The DHT has several parameters and timeouts that might not be optimally configured, so we want to look into that; and gossipsub, as we know, is a very central protocol for Filecoin, so we want to look into how it's performing. As I mentioned yesterday, many of the tools we're building are already open source and documented, but we want to group everything into what we call the continuous measurement infrastructure: keep the tools running, do some data warehousing, and then analyze the data to make it much easier to consume, both for all of you and for the ecosystem and other users. That's another area we want to work on. Area three is the libp2p privacy guarantees I mentioned yesterday. We already have an ongoing project with ChainSafe; we're implementing, and it's almost complete, something called the double hashing approach for IPFS, which provides reader privacy. 
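The double hashing idea can be sketched in a few lines: instead of asking the DHT for a content identifier directly, the client asks for a hash of it, so the DHT servers handling the lookup never learn which CID was requested. This is only an illustration of the principle, not the actual spec (the real proposal also covers details like provider-record encryption):

```typescript
import { createHash } from "node:crypto";

function sha256Hex(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}

// Plain lookup: the DHT key *is* the CID, so servers learn what you want.
function plainDhtKey(cid: string): string {
  return cid;
}

// Double-hashed lookup: the DHT key is a hash over the CID, so servers
// only ever see an opaque key. Providers advertise under the same
// hashed key, so routing still works end to end.
function doubleHashedDhtKey(cid: string): string {
  return sha256Hex(cid);
}

const cid = "bafy-example-cid"; // hypothetical CID string, for illustration
console.log(plainDhtKey(cid) === cid);        // true: lookup reveals the CID
console.log(doubleHashedDhtKey(cid) === cid); // false: opaque key instead
```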
But as you know, with privacy there is no one-size-fits-all, and the community has been very vocal about several other techniques that could improve privacy in libp2p. So, first of all, we want to go talk to the community and investigate which other techniques are available and worth looking into in more detail. That's it from me, thank you. And the next one is... Okay, the first thing the libp2p team will be working on is connectivity, especially bringing browsers closer to the libp2p ecosystem. We already talked about WebTransport yesterday. It works; now we need to use it to enable new use cases. For example, we can upload files from the browser directly to the Filecoin network. That's the kind of thing that is possible now that browsers can connect. We'll also continue our work on WebRTC, making sure that browsers can connect to rust-libp2p nodes as well, and later on we'll also have browser-to-browser connectivity, so that any browser on the libp2p network can connect to any other browser, including hole punching, without any configuration needed. We'll also be focusing on interoperability. libp2p is the foundation for a lot of multi-billion-dollar networks, and we really need to make sure this is a rock-solid foundation and that we don't accidentally break backwards compatibility. So we'll invest in a testing framework that makes sure libp2p stays that reliable. And the last point on this slide is performance, which has two parts. The first is connection establishment. Currently, a lot of our transports are not as fast as they could be at establishing a new connection, which hurts the time-to-first-byte metrics. We'll make sure we are not wasting any round trips anymore and really have very fast handshakes. 
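As a rough illustration of why handshake round trips dominate time-to-first-byte on a fresh connection, here is a back-of-the-envelope model. The round-trip counts below are illustrative assumptions, not measured numbers for any particular libp2p transport:

```typescript
// Time-to-first-byte for a fresh connection, modeled as the handshake
// round trips plus one request/response round trip.
function ttfbMs(rttMs: number, handshakeRoundTrips: number): number {
  return rttMs * (handshakeRoundTrips + 1);
}

const rtt = 80; // ms, e.g. a cross-continent link

// Illustrative comparison: separate transport + security + muxer
// negotiation steps vs. a QUIC-style combined handshake.
console.log(ttfbMs(rtt, 3)); // 320 ms with three setup round trips
console.log(ttfbMs(rtt, 1)); // 160 ms with a single combined handshake
```

The model is crude, but it shows why shaving even one round trip off connection setup translates directly into a faster first byte on high-latency links.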
We'll also add support for better and faster transports, like the QUIC work currently happening in rust-libp2p. The other part of performance is throughput. We want to make sure libp2p is on par with HTTP. We already have measurements showing that libp2p is as fast, but we want a dashboard and continuous measurements to make sure we don't regress on that. Question? "Hey, Martin, quick question. For WebRTC browser-to-browser, do you have any plans for running the STUN or TURN service for that, or using other kinds of public infrastructure?" That's something we still need to figure out. If you're interested in more detailed roadmaps, they all live within the different implementations of libp2p, so just head to the go-libp2p, rust-libp2p, or js-libp2p repository to learn more. This one. Hello, everyone. I'm going to talk about IPFS in JS and what the plan is for the next year or so. Something that seems to happen often is people asking: is anybody working on js-ipfs? Yes, hi, that's me. So clearly we need to do a better job of communicating the state of IPFS in JS, which includes doing blog posts, a talk at IPFS Camp, et cetera, and generally popularizing it, which will hopefully also help with recruitment, which definitely needs to happen. The team right now is incredibly under-resourced, so we want to at least double the team's capacity; please see me after class if you would like to help out. And then the big piece of work: go-ipfs was renamed to Kubo, and we want to make space for other implementations, so js-ipfs will be renamed to Pomegranate. That's a placeholder name; it's not available on npm, so we can't actually use it, but there will be some community voting on names and that kind of thing. 
The idea is basically that, instead of having the enormous monolithic API that was essentially copied from go-ipfs, we double down on the model the web3.storage folks have taken, where you use individual components of the IPFS stack to make a custom version that suits your particular use case. If you don't need IPNS, don't configure IPNS; you don't need the extra dependencies and all that. If you don't need all the esoteric hashing functions, don't use them. Really let people make small, lightweight implementations that speak to very specific use cases. Pomegranate itself will be a toolkit for building this kind of thing, with sensible defaults that let you get started quickly. We want a v1 released in Q1 2023, with a full CI pipeline and network connectivity built on all the work Martin just described in js-libp2p. We're taking a web-first approach, using transports that let you dial Kubo nodes and rust-libp2p nodes directly from the browser, and once there's a minimum level of functionality, we will sunset js-ipfs itself. What's up? Thank you. Hi everyone, quick announcement: we just discovered our team name yesterday. It's IPFS GUI and Tools, which turns into IGNT, which turns into Ignite. We brainstormed this roadmap, which maps to the 2023 strategy, the placemats that are everywhere and that we should all know by heart now, right? I'll go into detail in a second, and there are some dates we have in Notion that aren't in these slides, unfortunately, sorry about that. Our number one priority is to increase developer velocity and decrease onboarding friction. Right now we have a lot of technical debt that just makes things really difficult. 
That requires paying off the tech debt from there not being a GUI team for the past two years, increasing our internal developer velocity, and updating to the latest IPFS ecosystem with all the updates Alex has been making lately. We don't know exactly what X is yet, but we want to increase the adoption and usage of our products, IPFS Desktop and Web UI, by X. Part of figuring out that X is developing and implementing a UX strategy: defining metrics that tell us what success means and how to measure it, plus our user stories, user personas, and things like that. There's a UI refresh for some of our products, which I'll talk about in a second on the next slide. I'll skip over this one, but there's a dream goal, which probably won't happen in 2023, of doing a distributed, web3-hosted version of Web UI instead of a centralized server; talk to us if you're interested in that. Some more specific details on our products. For IPFS Companion, there's an important call-out: Chrome's Manifest V3. I'm not the expert on it, so I'll have to point you in another direction, but let me know if you have questions. We have some important work that needs to be done before June 2023, and some cleanup in IPFS Companion that we're going to be working on, including updating the UI; we have some mocks in place on some GitHub issues. For Desktop and Web UI: they're already pretty tightly coupled with Kubo, since Kubo is the implementation we use, but there are still some artifacts from js-ipfs and other libraries and packages that we need to pull out, so we'll be focusing on a lot of that. There's actually an issue open right now which is a partial blocker for enabling WebTransport by default in Kubo. I think I have it fixed now, Jorropo. 
He bet me that it wouldn't be fixed today, but we'll see. Then there's a diagnostic tooling overhaul, and the public gateway checker. That's not a priority, but there's been a lot of activity around public gateways over the past year, and I think that's an important piece of the community that could really benefit from a little bit of effort. So that's our plan. Lotus and Actors, 2023. Again, we're still trying to keep the network running, not kill the network. If there's an incident, we've got to fix it; that's our top priority. We also want to grow the team. The Lotus team holds much of the domain expertise on Filecoin, so we want to help onboard engineers, maybe technical support engineers, and then ship them to other teams to support other efforts. We're also breaking the team down into three engineering tracks. The first is driving research to production: writing FIPs, shipping FIPs, shipping network upgrades. It's very important that each network upgrade has a great codename meme, so we can have all those balloons. That's our top priority track, I think. Next: when Lotus was first created, it was a prototype to showcase that Filecoin as a concept works, so there are a lot of different components in the Lotus codebase today. As a client implementation, however, we don't really know who our defined user is, because everyone was originally forced to use Lotus. Now we have Forest, we have Venus, we have other implementations in the network. We want to understand what Lotus is as a client implementation, who our users are and what their use cases are, and we want to simplify and clean up the codebase and deliver specific functionality to the different stakeholders of the network. And last but not least, miners: we want to keep the storage providers happy. 
We also want to make sure our storage onboarding is robust enough that lotus-miner sealing and onboarding can handle all the data coming from Boost. That's why we want to modularize lotus-miner, do some re-architecture, and enable easy deployment for large-scale, enterprise-level storage providers. We care most about data-onboarding storage providers, so we want to make sure the software can handle real incoming data rather than just CC sectors. That said, our milestone dates are very much placeholders. At a high level, in 2023 we want to ship a couple of network upgrades. First, we're going to ship Shark. Then we will enable the FVM; actually, yeah, it's February, so that's current: we want to do the FVM upgrade to the network. Then we want to set up the user-programmable storage market by enabling some public APIs in the actors and refactoring the built-in actors, so that other people can deploy user contracts and interact with them. After that, we want to ship more enhanced features so community members can deploy interesting storage markets, for example using Halo proofs, better user controls, and much more. We still have to go down the path of exploring who the users are for the Lotus client implementation, but a couple of things have been defined so far. We want to make sure node operators can deploy a node easily from a snapshot, and that the daemon can start up and manage the chain state. We want to help define a Filecoin client API standard, so that different node applications can talk to any client implementation easily, and maybe explore a light client solution, so that people can build apps that talk to the Filecoin chain from the web browser, and things like that. 
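For a flavor of what a standard client API makes possible from a browser, here is a sketch of building a Filecoin-style JSON-RPC call. `Filecoin.ChainHead` is the conventional method name for fetching the current tipset on Lotus-style node APIs, but whether a given node or endpoint exposes it is an assumption here; the example only shows the shape of the request:

```typescript
// Build a JSON-RPC 2.0 request body for a Filecoin node API call.
function jsonRpcRequest(method: string, params: unknown[], id = 1): string {
  return JSON.stringify({ jsonrpc: "2.0", method, params, id });
}

const body = jsonRpcRequest("Filecoin.ChainHead", []);
console.log(body);

// A browser app would then POST this to a node's RPC endpoint, e.g.
// (illustrative URL):
// fetch("https://example-node/rpc/v1", { method: "POST", body });
```

A light client would layer proof verification on top of responses like this, rather than trusting the node, which is what makes the browser scenario interesting.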
The last one, modularizing lotus-miner, is going to be a huge effort, and we're going to take input from both teams as well. In general, we want to enable a flexible scheduler and a standard sealing manager process, and we also want to make sure that, as people scale, they can still prove their storage through a redundant proving system. There may also be some updatable storage on the roadmap, in collaboration with CryptoNetLab; we'll see if that happens. But that's it for now from the Lotus and Actors team. Filecoin Infra, 2023 priorities. First objective: Filecoin core infrastructure will continue to scale and decentralize. The Lotus lightweight chain snapshot is a new chain snapshot service that the Filecoin Infra team has started to run, and the official launch will be alongside the Lotus Shark upgrade happening at the beginning of November. The second milestone, snapshot artifact source decentralization and redundancy, will land in Q2 2023, to ensure there is always high availability for the snapshots that are critical for new Lotus nodes to join the network in an expedient amount of time. The next area is Lotus bootstrap node and disputer decentralization. Our milestone is to develop impact evaluators and service-level expectations in Q1 2023. We're hoping to get more organizations within the PLN to run bootstrap nodes and disputers, and to track the service levels and uptime of the nodes that are part of the official Lotus binary bootstrap list. For the Lotus gateway API, chain.love, we have a milestone around improved horizontal scaling and a website launch. The website is already launched, and most of our horizontal-scaling design has been deployed; the remaining item is to ensure that our multi-region setup is operating correctly. For the Lotus build artifacts pipeline, we have a milestone around build artifact dashboards and reporting, and that is also happening right now. 
We have our first prototype of a dashboard available in the Protocol Labs Grafana Cloud account; you can check it out. Our second milestone in that area is a Lotus release pipeline re-architecture, with validation and further observability, aimed for Q1 2023. Our second objective is that our web3 GitOps platform will accelerate application productionization; the first milestone there is general availability, launching sometime in Q1 2023. Finally, our third objective is that the next generation of Lotus devnets will lower the bar of entry for creating new devnets in the Filecoin network. The first milestone, devnet deployment automation and tooling for CI integration and easy devnet creation, targets Q2 2023. We're hoping to make the process of creating new devnets so easy that we can spin them up as part of CI, and likewise so easy that other people within the PLN can launch a new devnet and further accelerate protocol and application development within the Filecoin network. That's all, thank you. Hi. For 2023, the Sentinel team's priorities fall into three main areas: Lily, PLDW, and community. I got feedback that I should explain what Lily is: it's basically a Filecoin chain indexer. There are two important milestones for Lily. The first is to be ready for FVM adoption in Lotus and Filecoin; that's an ongoing effort. Second, we have a major change in Lily to extract chain state as IPLD objects, to serve as the ideal format for consumers. We got feedback from our users that Lily is sometimes not very flexible or customizable, so we want to provide this feature to address those requests. As for PLDW, which stands for Protocol Labs Data Warehouse: we want to make full blockchain data available in BigQuery by the end of Q4, and we'll start to include off-chain data in BigQuery as well in early Q1. 
We also want to achieve Filecoin data validation and data-quality checks in our warehouse, probably in Q2 next year. For community, we want to collaborate with important partners like Starboard to run and improve Sentinel and Lily nodes, to collect feedback, and to have more public presence; for example, we're going to have a speaking slot at FIL Lisbon next week to talk about Sentinel software and data. On the data side, we're going to make it public in BigQuery going forward, to see whether it brings more value to our community and enables more awesome stuff to be built on top of our services and data. And yeah, that's it, thank you. For CryptoEconLab: we've recently split into three groups, one for ecosystem, one for layer-two incentives, and one for core protocol, and it's the core protocol working group I'm going to tell you a little about here. This relates to critical systems stewardship of the network; essentially it's anything related to incentives for the core protocol of Filecoin. Concretely, things like onboarding, but more generally: how do we understand, and keep aligned, the growth of the network along the directions we want? On objectives, I can tell you about the first: prepare the gas economy for scalability and feature upgrades. What does this mean? For scalability, we're effectively talking about IPC, and there are very difficult questions there about what the economy should look like in terms of gas and collateral, so this is something critical for the network to look at. Feature upgrades effectively refers to the Filecoin virtual machine: the gas landscape in the future could look very different, and it could change quite quickly. 
So this is something we really need to be on top of: developing the tools to monitor and simulate what gas will look like in the near future, and developing ideas for dealing with a network that might soon have much higher levels of congestion and activity. Another objective is to sustain the health of Filecoin's economy and escalate issues early. There are two parts to this. One is monitoring and detecting icebergs early. The other is a systematic review of essentially all aspects of the core protocol that we look after: things like termination fees, locking, and minting. There's a lot of work to do here, and a lot of these things could be changing, being updated, or at least being opened up for public discussion in Q4 and Q1 next year. Another objective is to develop capacity, to power new research and quickly form views on economic policies. In terms of capacity, there are now three people in the core group, about five people partially contributing, and we're trying to hire engineers. That gives us a lot more capacity to engage on almost all FIPs, I hope, because many things touch on economics. We're also looking to publish a lot more research, hold seminars, and open-source the code and methods we're using. Those are the objectives for 2023. In terms of milestones for the next few months: one big milestone is likely to be rebasing minting to make it depend on QAP. Another milestone we hope to release soon is a public spec with concrete details of what the economic elements of IPC might look like, relating to gas and collateral. 
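On the gas side, Filecoin's base fee already follows an EIP-1559-style rule, adjusting every epoch according to how full blocks are relative to a target, so simulating congestion scenarios largely means iterating that rule. A simplified sketch (the 12.5% maximum per-epoch adjustment mirrors the protocol's parameter, but treat the floor value and the small numbers here as illustrative):

```typescript
// One epoch of an EIP-1559-style base fee update: the fee moves
// proportionally to how far gas usage is from the target, capped at
// +/- 1/8 (12.5%) per epoch, as in Filecoin's mechanism.
function nextBaseFee(baseFee: number, gasUsed: number, gasTarget: number): number {
  const delta = (baseFee * (gasUsed - gasTarget)) / gasTarget / 8;
  return Math.max(baseFee + delta, 100); // illustrative minimum base fee
}

let fee = 1_000_000;
// Sustained full blocks (2x target) push the fee up 12.5% per epoch:
for (let i = 0; i < 3; i++) fee = nextBaseFee(fee, 200, 100);
console.log(Math.round(fee)); // 1423828 after three congested epochs
```

Iterating this over modeled demand curves is the kind of monitoring and simulation tooling described above: it shows how quickly fees compound under sustained congestion, which is exactly the regime the FVM and IPC could create.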
A third milestone is to propose an upgrade, or at least open the discussion on how we might upgrade, Filecoin's gas model to better support the FVM coming next year. Thank you. On the drand front, there are again three distinct areas we're going to focus on. The first is refactoring very central parts of the codebase, so that we can land on mainnet the things we've been developing and are actively developing right now. One of them is timelock encryption, which Yolan talked about yesterday and which you've heard a lot about in demo days and other events. Another is refactoring the most central part of the resharing ceremony in drand, the DKG; a lot depends on it. And if we want to reach a stage where we can expand the League of Entropy and have more members, we need what are called async ceremonies, where we're a bit more flexible about how things run, and certainly a few things need to change there. So maintenance and active development of new features is area number one. Area number two: we think drand is a great service that provides great value, and we want to drive community engagement. We want more clients, more users, and of course more LoE members as well, but primarily this area focuses on making it easier for developers to go and use drand in their own applications. We're seeing lots of new members coming into our drand Slack workspace, thanks to the great public talks the team has given over the last couple of quarters, but we do want things like hackathons, how-to guides, updated docs, and at least one ProtoSchool tutorial, so it's easier for others to onboard. And finally, further engagement with the League of Entropy. This is a group of 16 partner organizations, and growing. 
Right now there is some engagement and some cross-collaboration, but there isn't much collaborative development on drand itself. If every one of the 16 partners did a tiny bit, we would see massive progress and be able to do a lot more, so we want to build on that. Incentives for LoE partners are something that has been discussed over the years, again and again, and we think it's time to do it now. So yeah, that's it. Docs. We support the engineering projects' efforts, creating content as needed for end users. In general, these are the docs projects. First, we're looking for a localization solution for the docs sites of the major projects in the PL stack. We now have metrics, so we know in which regions we have the most visitors to our docs sites, and we want to help those people understand our stack and build on top of the technology we're building. Next, we want to organize and improve the UI and UX of the docs sites, so users can find content easily and have a clear flow for actually building applications using our end-user docs. We also know Filecoin needs an architecture section, because the FVM is introducing a lot of developers to the network, so we want to make sure there's a section for that. For libp2p, everyone is saying we basically have to recreate the docs, so that they actually present all the progress the libp2p project has been making. And the IPFS team, I just heard, might want to focus on enabling more developers to build applications and implementations, so some refactoring of the docs will be needed for that effort too. Next, we want to enable robust OSS contributions, because we cannot do all of that education by ourselves. We only have three people. 
So we want to make sure that people who understand our docs structure can create PRs directly against the docs, and to enable more people to educate other folks into contributing to the stack. In Q4, we want to slowly initiate docs-as-a-service. Basically, what we're looking at is making docs sites modular, so that for any PL or new project that needs a docs site, people can click a button and spin one up. As we move from driving protocols toward productization, we think a lot of teams will be creating new products that need new docs, and we can't have a docs engineer for each product, so we're going to try to make this a self-service platform. That's the high hope for the docs team next year; we hope we can get there. Awesome, that brings us through all of critical systems stewardship: a lot of stuff in there, but all really, really important. Moving on to growing the team and network; anyone who's presenting in this section, please come on up. I can do the first one, which is from Uni, who leads our Starfleet events team, just one of many events teams in the PL network. They have some focus on EngRes-specific events, events specifically targeting how we can help catalyze amazing conversations, growth, and development in these ecosystems. So first: partnering with the PL network events teams, across the entire network, to solidify a 2024 events calendar and do a lot of cross-knowledge sharing about all the events we've run this past year and in the years before, what has worked best, and making sure we level up cohesive knowledge across this really quickly growing team. 
The items within that: getting a 2024 events calendar by the end of November, which would be amazing because we could all actually plan our schedules for next year, and then having a playbook and tips for event planning. The next one is continuing to plan IPFS and other Starfleet-sponsored events, ones where we put a lot of time into content curation, specifically IPFS Thing and IPFS Camp for next year. The goals are to increase attendance at both: IPFS Thing from 90 attendees to 300, and IPFS Camp from 450 to 1,000, which would be amazing; I think those are totally doable. In the same vein, growing the IPFS community: creating a way for regional hosts to own more IPFS community events. This is probably additive to the ecosystem Orbit programs, but maybe focused a little more on disseminating knowledge and best practices; and also hosting IPFS-and-friends pop-ups and dinners around the major events in other ecosystems, so we can have more seamless cross-collaboration with the many different ecosystems that all build on IPFS. They want to launch three regions for IPFS community events and for IPFS-and-friends pop-ups; if you have ideas on which places we should really be present for IPFS, please reach out to Uni and we can help prioritize. Amazing. Next: care about the world. That sounds great: promote less waste in our event-hosting process, be green, that's the TLDR. We all care about the world, but it's also about making sure we're supporting local communities when we host our events, so that there's lots of engagement and collaboration in the regions where we bring these events. 
So: finding more partner organizations, donating a good amount of the supplies so that we don't end up with lots of waste at the end of events or dinners, and especially food-related, making sure leftovers all go to a good place and feed people who are hungry at the end of a great event. And finally, communicate. Events are definitely important when we get people in person, but they're also important after the fact, to harness and disseminate all of the knowledge, decisions, and actions that got committed to in that period of time, for everyone who wasn't able to make it. So leaning into newsletters and other ways of sharing the output of events so that people can build upon that. So that's events. And cool, so I'm gonna talk about the ecosystem working group, that's the leftmost sub-team of CryptoEconLab. The ecosystem solutions group is kind of the input-output layer to the core protocol and layer-two groups, doing communication- and education-type functions from all the good research and findings going on in the other two groups. Our main objectives: we want to establish CryptoEconLab as a defining global leader in cryptoeconomics and kind of be the center of excellence. Another objective is to broadly increase the PLN's understanding of cryptoeconomics. Similar to what Yolan was talking about with security going out and permeating the network, we want everyone to be thinking like an economist. So we want to be producing work that reflects that, and have resources and things like that. A third objective is to ensure that Filecoin governance has effective mechanisms in cryptoeconomics, because a lot of the time, when a FIP or proposal is cryptoeconomic, it has decision-making, it has trade-offs, it has a lot of these things that are core economics, right in the wheelhouse there. And so we want to apply some research and rigor to our processes in governance.
Looking out over the next six months, some of the key milestones associated with those objectives are CryptoEconDay events. We're trying to put a stake in the ground and have these quarterly. So we'd love it if you have events that you want to co-host and have in the same locations; we do a lot of that with the FIL events, like FIL Singapore and FIL Lisbon, but we want to be a stake in the ground for that as well. We want it to be a hub where people come from all kinds of chains and projects in Web3, not just the PLN, and kind of be a knowledge-sharing hub. Another milestone is that we want to start delivering educational workshops towards the goals I was mentioning earlier: workshops that make cryptoeconomic concepts, tools, and templates accessible to everybody in the PLN. We want to mature our publication channels for the content the other two sub-teams are producing: publishing more academic papers, publishing blog posts, and just getting the knowledge out there. And then we also want to specifically publish a state-of-knowledge report on economic governance across Web3, to inform our own governance activities. So thank you. Hello, yeah, IPDX again. As you've probably heard a lot already, Testground is one of the projects that seems to be quite important in our network. So that's certainly our main focus, and it's actually using all of our resources at the moment. We want it to become the distributed and decentralized system testing platform. For that to happen, we think that first we have to re-concentrate on the usability of Testground: we want it to be delightful to use, because otherwise we just won't get users. We put a number up there, but that number is made up and we're going to think about it more; the point is that by concentrating on usability we can grow the number of users of Testground.
Then the second goal would be to allow creating large-scale test plans that scale beyond what a single machine can do, and we want to do that by reviving the Testground-as-a-service effort. For that, we want to be able to run test plans at a much larger scale, which is currently not possible. And finally, we want to meet all the needs that libp2p might have, because we think that through that we will support the other teams that build on top of libp2p, and it will allow us to cover a wide variety of different test cases that we might want to test for. But for all of that to even be feasible, we need Testground to be stable. We need to come up with project management techniques that allow us to track that and make sure we move forward in the right direction. So that's another angle of our work for the next year. But that's not the only thing we work on. We also want to work with you. There are certainly so many opportunities for collaboration, with several teams: UX, security, net ops. We want to talk with you continuously. We want to know what's happening. We want to be able to help. And we also want to highlight that we want to stay heavily user-focused. One of the ideas we have for next year, which we hope hiring might allow us to do, is to embed ourselves within every single team within IP Stewards so that we better understand very specific team needs. But we also want to scale the learnings beyond IP Stewards; that's what we mean by working together with the entire network. And finally, we don't forget about the things we've already developed; we do focus on maintenance as well. So for next year, we also plan major unified CI releases. Probably a lot more as well, but it didn't fit on the slide. If you want to learn more about our specific plans, we have everything public on our Notion page, which covers the general IPDX direction that we want to move in.
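For flavor, here is roughly the shape of a Testground composition file, the manifest that picks a test plan, a test case, and a scale to run it at. This is a sketch only: the plan name, case name, and instance count below are illustrative, not a real plan from our repos.

```toml
# Hypothetical Testground composition (sketch). The field layout follows
# the composition format; the plan, case, and scale here are made up.
[metadata]
name = "large-scale-pingpong"

[global]
plan            = "network"      # which test plan to build
case            = "ping-pong"    # which test case within that plan
total_instances = 100            # the scale target, beyond one machine
builder         = "docker:go"
runner          = "cluster:k8s"  # the runner a Testground service would drive
```

Scaling a run is then mostly a matter of raising `total_instances` and pointing the runner at a bigger cluster, which is exactly the part Testground-as-a-service would take off users' hands.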
And we also have a very specific project roadmap for Testground that's hosted in GitHub in the Testground project. Thank you. Awesome, on to storage and retrieval. Everyone who's in the storage and retrieval group, please come up. This is focused on making sure that we can onboard lots of awesome user data, make it accessible, make it accessible across many different places, and really focus on the speed of retrieving that content as well. Starting with Bedrock. All right, so priorities for 2023. Bedrock considers itself a team of platform teams, trying to focus on three main working groups. One of the big ones is pairing up retrieval protocols in Boost to get reliable and performant retrievals. Big goals for us there, and you can check out the roadmap in Notion; we've got a full plan through the first half of next year. A lot of the work we're gonna be doing is working very closely with the Saturn team: as Saturn starts to fall back to retrieving directly from SPs, we want that to work really, really well. So we're gonna be doing a lot of work to scale that up, and a lot of performance testing there. Then on the indexing side, we want to compete with Web 2 in terms of data discovery. We want you to be able to find content fast: whether it's on IPFS or Filecoin, we want you to get it quickly. And then on the Boost side of things, working closely with the Lotus team on scaling data onboarding. How can we improve that throughput? One of the ballpark numbers we're thinking about right now is: how can we get a single SP to the ability to ingest a petabyte per day? And then, a question we're asking ourselves constantly: how can we enable more compelling stories of data storage on Filecoin?
And so this is us thinking about how we're working with our product managers on Bedrock to reach out to more communities and understand use cases, so that we can make sure all of the fundamentals there are working really, really well. A lot of what we're thinking about now is: who are the people we need to be working with, right? We need to work with the network growth team to talk about SPs and clients. We need to work with retrieval markets on serving content, to unlock Saturn in the future. And we're also gonna be working with compute over data to make sure that, for all of their use cases going forward, they have the storage and retrieval capabilities they need. We're also gonna have a retrieval incentives discussion tomorrow at the Retrieval Markets Summit, and that's gonna unlock a roadmap for the first half of next year on what incentives look like in the network, reputation systems, et cetera, to make sure that SPs also have what they need to serve retrievals, yay. Okay, Retrieval Markets Working Group. So yeah, we've been hearing a lot about Saturn and Station over the last couple of days, but two teams does not a working group make. There are loads more teams out there working on stuff in this space. This leads into number two, growing team and network: we're gonna continue to grow the working group and just communicate the progress better, with more demo days, more meetings, more events. The really big thing we're trying to achieve with retrieval markets is just deploying retrieval networks. Saturn is the one being built by the Retrieval Markets Lab at Protocol Labs, but there are other teams out there, like Meson as well as Titan, and a few others, that are building retrieval networks as well, or DCDNs.
And just like we heard yesterday, it's good to have a few different teams working on the same sort of problems; they might start specializing in different directions, perhaps optimizing for video or the metaverse or other sorts of retrieval journeys. Cryptoeconomic incentives: there are so many question marks around this space. As Jake mentioned, we've got the Retrieval Incentives workshop tomorrow at the Retrieval Markets Summit. That's gonna focus on retrievals from storage providers, but there are also the incentives to make retrieval providers join these networks. There are so many ideas we can explore as part of the Retrieval Markets Lab, and in the Retrieval Markets Working Group we want to explore some of those ideas, improve the concepts, and just see where that takes us. And I think the FVM is gonna unlock a lot of possibilities for us in that space. We've also heard about all these data transfer protocol improvements, WebTransport, WebRTC, some of the stuff the Bedrock team is working on with Bitswap and GraphSync, and how we can maybe do parallel or multi-peer retrievals. So looking into those as a working group is gonna be really important, and that will feed back into the Saturn project and other DCDNs. Yeah, and we're now gonna hear from Saturn and Station as well. All right, thank you, Patrick, ultimate hype man. I'm Ansgar with the Saturn project, and let me catch you up to speed on all the progress we're making going into 2023. So tomorrow is a big day. Tomorrow we launch. What that team, those beautiful people at that table over there, have been working on for the last six months sees the light of day. Going into 2023, the first goal is to grow Saturn to about 200 L1 nodes worldwide. L1 nodes are big, beefy nodes in data centers.
To put that in perspective, 200 nodes puts us in the same realm as performant CDNs like Cloudflare, Amazon CloudFront, et cetera; that's about how many points of presence they have worldwide. So that is goal numero uno. There's a second piece to Saturn's network, though, and that is L2s. The L2s are smaller nodes that run on machines like this one. As you'll hear from the ever-capable Miro in a second, we're building a desktop app called Station that anyone around the world can download, and the first component in that desktop app will be a piece of Saturn's CDN. That's what we're calling an L2. So another goal for Saturn come 2023 is to get the L2 network off the ground. That means getting lots of people all around the world to install Saturn through Station on their machines and start contributing to the network. Now, the third piece here, as Jacob mentioned before: the third goal of Saturn is to accelerate IPFS gateway traffic. We have a huge corpus of requests that lands at the IPFS gateway every single day and every single month, and it's fantastic to see that growth. What we want to do is leverage Saturn's large and growing network to speed that traffic up: not just onboard new users and customers to Filecoin Saturn, but take that existing corpus of traffic and accelerate it for everyone worldwide. Another key piece of this is to try and drive down the infrastructure cost, so that we can lean into the cryptoeconomics of building our own network with Saturn and replace the IPFS gateway's centralized infrastructure with Saturn's. And the final goal for 2023 for our little band of misfits on Team Saturn is to onboard our first customers. Saturn at large wants to be a paid-for service, at scale, with world-class performance, and to do that, we want to bring on customers for whom that is valuable and who will pay for it.
In the beginning, we're bootstrapping the network with a pool of Filecoin to get nodes online and grow our network footprint. But over time, we want customers to come and pay, so that the amounts we pay out in Filecoin for running L1s and L2s are covered not just by that bootstrap pool but by real and growing customer demand. So those are our four goals going into 2023. And next up, Miro. Thank you guys. Ah, hello. I'll keep it short because you're all tired by now. For Station, we have three priorities for the next year. The first one is more short-term, and that's to launch it. We expect to launch in the middle of the year, starting with a beta in March, then ironing out all the details, and then doing a big public launch, with a party, I hope. The more medium-term goal is to add another module to Station. We're talking with the Bacalhau team, so ideally, by the end of the year, you would have a Bacalhau node running in your Station, earning you even more Filecoin. And then for the longer term, we're researching a custom runtime that will make it easier for people to build their own modules. The runtime will take care of things like sandboxing the code, making sure modules aren't using too many resources, and other concerns we have around the security and usability of Station. So that's it. Thank you. If I could, I would have brought my entire team on stage. All right. Storage products. CryptoNet focuses on three different items: research, protocols, and products. Now we're gonna see a little bit of the protocols and products for growing the network and for more robust and resilient storage. We have a lot of FIPs coming up. There's a whole line of them; I'm just gonna mention a few milestones.
There will be more programmable storage; there's the PoRep security FIP; there's SnapDeals, which allows for storing into not only CC sectors; and then there's the technology for the on-chain inclusion proof for Snap deals. So that's for the Filecoin storage protocol, but then there are a lot of storage protocols we would like to work on. One in particular that was mentioned multiple times today is data persistence. It's a twist on data availability that allows Filecoin to offer long-term storage of chain data. We have multiple steps there, starting from requirement gathering down to writing a protocol for pooling storage resources across miners. Today miners act as individuals; it would be great if they could act as a pool, and then we could store in this pool chain data from Filecoin, and at some point checkpoints from Ethereum and so on. The goal is to get this ready by next year. Then, very briefly: yesterday I gave a quick demo of the data wallet. We're still working on this. We have a lot of on-chain storage products, whether it's retrieval or products that other people will build on top of the FVM. The dashboard has been live since the beginning of Lab Week on onchain.storage or retrieve.org; they are two different apps. The goal is to ship the dashboard at the end of the year, and as soon as we have more apps being built on the FVM, the plan is to integrate them. We also plan to build new storage products, such as crowdfunding of storage or perpetual storage, which are things a lot of people have been asking for. This doesn't handle payments yet; after audits and so on, we will enable payments on it. Thank you. Okay, for Bifrost: very simple, three steps. First, make sure we're still running very well. Second, make it better. Third, open-source it, so the community can run Bifrost with the same quality as us; so, improve our quality.
On key milestones: first, we want to make sure we have zero downtime through any Kubo upgrade, from the gateway's point of view. Second is to integrate with the Saturn network to lower our TTFB, the time to deliver the first byte. Third is to provide a new service: a badbits denylist service for the world to use, so that known bad CIDs do not get served from us. The last one is more about how to share our knowledge of how to run Bifrost and allow the community to be part of it. Thank you. Hello. In terms of objectives for dag.house next year: it's been about a year since nucleation as a concept was born, along with the meme and the drinking game and everything, and we've been working towards it. Last year was about getting stability, reliability, and performance. This next year, we're still working towards that, but more from a differentiation and growth angle. We want to grow a ton, targeting 10% weekly user growth now that pricing has been introduced with our product, and then actually do the nucleation thing. There's a lot that goes into landing that plane, because there are internal dependencies with folks like Outer Core and NFT.Storage, as well as proving to external investors that we are a product worth investing in. Some of the key milestones involved in this: we're finishing up our end-to-end fast-read story on Web3.Storage, along with some cost optimizations there, which should land this year. We're rolling out our UCAN-based upload API into the core Web3.Storage and NFT.Storage products, as well as implementing UCAN-based APIs across the board. We'll be implementing metrics so we can better track what we need to inform our efforts. Nucleation is the next big milestone; there might be some stuff in between that we need to do, depending on user feedback and how things are going.
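To make the UCAN idea above concrete: a UCAN is a capability token that says who grants which abilities to whom, and it travels with the request instead of being looked up server-side. Below is a toy Python sketch of that shape. Real UCANs are JWTs signed with asymmetric DID keys and support delegation chains; the shared HMAC secret and the capability names here are stand-ins for illustration only.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # stand-in for a real asymmetric keypair


def issue_token(issuer, audience, capabilities):
    """Build a minimal UCAN-style token: issuer grants capabilities to audience."""
    payload = {"iss": issuer, "aud": audience, "att": capabilities}
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def check_token(token, needed_capability):
    """Verify the signature, then check the token actually grants the capability."""
    body, sig = token.rsplit(".", 1)
    if hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest() != sig:
        return False  # tampered or forged token
    payload = json.loads(base64.urlsafe_b64decode(body))
    return needed_capability in payload["att"]


token = issue_token("did:key:alice", "did:key:bob", ["store/add"])
assert check_token(token, "store/add")
assert not check_token(token, "store/remove")
```

The useful property for an upload API is that the server never needs a session database: the token itself proves what the caller is allowed to do.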
And then down the road, we also want to offer our users consumption-based pricing and really give them the option to utilize the cheap popcorn storage: today they have to store in the places we tell them to; tomorrow, with popcorn at the forefront, we think we can disrupt storage pricing. Thanks. FYI, 10% week-over-week growth is gonna have you grow about 140X in a year (1.1^52 ≈ 142), which would be friggin' awesome, so make it happen. Hello, my name is Raul and I'm gonna give you a roadmap update on the Filecoin Virtual Machine project. We are currently focused on completing and stabilizing EVM compatibility on the FVM. By early December, we will be transitioning the Wallaby bleeding-edge testnet into a stable developer testnet, which we're calling BuilderNet. We will be launching a special developer program that will run all the way through mainnet launch. We project that the FEVM will hit mainnet by February 2023 in a network upgrade that we are codenaming Hygge. We expect to start our scoping exercises for M2.2 of the FVM roadmap by January 2023; this milestone brings user-defined Wasm actors to the Filecoin network. It's hard to say when that upgrade will ship to mainnet, because there are many external factors that will affect it, but we are very tentatively aiming for mid-year 2023. After that, we will switch focus to further programmability improvements, both at the FVM and the protocol layer. Our priorities for 2023 are: first, to keep to our estimated mainnet upgrade date for the FEVM as much as possible, which is February the 8th, 2023. Second, to tighten the scope of FVM M2.2 as much as possible, so we ship simple, incremental, and secure updates to the network. Third, to parallelize the development of FVM M2.2 itself with the development of tooling, SDKs, IDLs, and other components that should accompany Wasm programmability. Fourth, to continue the activation and support efforts for various developer communities through a number of programs, resources, and a lot more.
Fifth, to onboard more FVM core engineers. And sixth, to be ready for upcoming protocol upgrades and network breakthroughs like IPC, retrieval markets, compute over data, and a lot more. Thank you very much. So, roadmap for the FilCrypto team. We're the team that develops the rust-fil-proofs library, a Rust-based software library which implements the Filecoin proofs, including Proof of Spacetime, Proof of Replication, and Empty Sector Update. For the next quarter and into 2023, our top priority is delivering the Halo2 proving system to the Filecoin mainnet. A little background on it: it's a proving system that was developed by the Electric Coin Company, the developers of Zcash. The biggest benefit for us is that it eliminates the trusted setup process of the Groth16 proving system, a process where multiple actors have to contribute randomness to generate the proof parameters, and which takes months to complete end to end for each participant. One thing to note is that Groth16 won't be going away: existing storage providers who are currently using Groth16 will continue to operate as normal. The expectation is that new use cases and newer storage providers will also be able to use Halo2. This, especially the elimination of the trusted setup, should accelerate our ability to deliver new proofs onto the network. For the coming quarter, we're looking at getting functional parity between Halo2 and Groth16, including CPU and GPU acceleration. Also, by the end of this quarter we want to have the initial API available, so that clients, including Lotus and other third-party clients, can begin integration. Moving into Q1 of next year, we're gonna optimize and see what performance it's capable of. Optimization doesn't do any good without being able to benchmark everything, so we want to create a benchmarking dashboard that lets us evaluate performance between Groth16 and Halo2.
And any additional proving systems that come out over the next year and into the future. Another feature targeted for Q1 is recursive proofs, that is, doing proofs of proofs. We also want to get third-party code audits started, as well as the FIP draft, and refine the API based on feedback from Filecoin clients as well as feedback on the FIP and the code audits. For Q2, we want to get everything finalized for production: send the FIP out for approval, and finalize the API to go along with the FIP voting. The biggest change for us will be, I think, in Q3. Up until this point, Filecoin proofs has been a library exclusively for use by Filecoin clients. We want to open it up for proofs in L2 and L3 applications, as well as for use by compute over data, Lurk, the FVM, or off-chain proofs, so that we can develop new proofs and new applications and open up the API to smart contracts. Another thing we're looking into is chain snapshotting: being able to prove that someone is storing a chain snapshot, to enable lighter-weight Filecoin clients. Thank you. So just quickly, I think you saw this roadmap yesterday, but this is our highlight. We split our roadmap into development and research. This is what we are shipping next year, and then I have another slide on more researchy things to fill up the pipeline that we are emptying here. What's missing here is actually the milestone that comes in Q4: we are launching Spacenet, and I have the textual version, which I'll jump to just for a second. Spacenet is a Lotus-based testnet that will enable users to experiment with Mir consensus, the new consensus protocol that we are developing, with full support for the FVM. That launches in Q4, right? And then we are adding IPC in Q1 next year. So you're not yet spawning subnets from Spacenet in Q4; in Q1, you're spawning subnets.
This will allow applications like Saturn and Bacalhau, and others who are interested (come talk to us), to run their own subnet; that's Q1. If you want to try Spacenet itself, you can already do that with us in Q4. And then M3 is the big thing: in Q3, after testing, bug fixing, and performance testing, we go to mainnet. Now, coming back here, we have a few other milestones, notably related to an EC patch. We will publish a FIP for public discussion today, and please join the discussion. This is related to really small but important things that we are planning for Expected Consensus on mainnet. For our research roadmap, this is work in progress, so it's a bit packed. The one I'll call out, which was the highlight of yesterday and which you will hear about again at the ConsensusLab summit, is a high-level, Web2-like use case that is very important for decentralization, because some people are actually unbanked and they cannot do things if they're not in a crypto world. So it's really important, but it also exercises our whole stack. This is what we want to do: for example, can we use subnets, can we use Saturn, as building blocks to implement this? It will exercise our whole stack, and we want to focus on that. Some other things are here, notably relating to growing team and network: for example, we're going to turn ConsensusDay into a full-fledged conference run by Protocol Labs and ConsensusLab. And then we have a bunch of other ideas; we need to organize this a bit. Thanks a lot. So, I started with our team just thinking about what we care about the most from a user's perspective. We want security, we want it to be familiar, we want it to be reliable, and we want the best price-performance. This is going to be the center point of everything you're about to see. We actually have what we think are two roadmaps: one for end users and one for compute providers.
Now, obviously the two play off each other, but it's really important for us to think about the end-user benefits you're going to see at each stage along here. Everything you see here is backed by a lot of thinking and issues and things like that, but what you're about to see is really what the end user will experience, whether you're an end user or a compute provider. So, December 2022: data permanence is going to be powered by Fil+. We're going to make significant performance improvements and ship twice as many examples as we have right now; those examples are underway. So: what can you do with Bacalhau, and how do you make it great? You're also going to have your own dashboards, so you can visualize all your jobs, see how they're working, and see how the network is working as you make your own decisions on when to compute. In March 2023, we'll launch support for Wasm. It's in beta right now, and we feel like we'll get to production by then, meaning you won't even have to containerize your job anymore: you can just hand us code and we'll compile and run it for you. We'll significantly improve reliability with a lot of investment in the transport layer. We're going to do some proofs of concept with other smart contract systems, including the FVM and of course consensus work as it lands, and our hope is to start exploring using the chain for transport rather than what we're doing right now, which is GossipSub. We also really want to focus on developer experience. We want a much faster REPL: right now you have to build, you have to submit, and so on and so forth. We want it to feel very native, so that you make a change in your code and momentarily you're able to see that change. We're also going to hit 1.0 for our API and stability, meaning we will take legacy jobs forward, and that will make the network much more flexible; we'll handle versioning and things like that.
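A property several of these goals lean on (stable APIs, reliability, and eventually verification) is that a Bacalhau-style job is deterministic: same inputs, same outputs. Here is a toy sketch of verification-by-re-execution, where the output is hashed and any verifier who re-runs the job can check the claim. All names are illustrative; this is not the Bacalhau API.

```python
import hashlib
import json


def run_job(job_fn, payload):
    """Execute a job and return its output plus a content hash of that output."""
    output = job_fn(payload)
    # Canonical serialization so the same output always hashes the same way.
    blob = json.dumps(output, sort_keys=True).encode()
    return output, hashlib.sha256(blob).hexdigest()


def verify(job_fn, payload, claimed_digest):
    """Re-execute the job and check the output hash against the claimed digest."""
    _, digest = run_job(job_fn, payload)
    return digest == claimed_digest


def word_count(text):
    """A deterministic toy job: count word occurrences."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts


_, digest = run_job(word_count, "to be or not to be")
assert verify(word_count, "to be or not to be", digest)
```

The interesting part is what determinism buys you: a compute provider's claim about a job's result becomes checkable by anyone with the inputs, which is the hook for the consensus-backed verification discussed for later in the year.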
We're also going to start our grant program then, and what we're calling Bacalhau Season, where we'll support people ramping things up, with prizes behind it and so on. In June 2023, again a lot more developer experience, which we think is really critical, including a lot of what we're calling syntactic sugar: you give us a big job and we'll split it up for you. We'll shard it, we'll map-reduce it, so on and so forth. We think we can do a lot of really cool things here. We'll also be supporting DAGs at that point, and federated reads, including reads across sectors. And then, as you heard from many of the folks, including from Station, for example, we hope to have our rich local clients, so everybody can be a compute provider: you just run it on your machine, and your machine can now participate. Bringing back SETI@home, if I have anything to say about it. And then finally in September (you heard from ConsensusLab; it's funny, I go last and all my surprises are given away, right?) you see consensus and verification. This is gonna rely heavily on the subnets and the incredible work that ConsensusLab is doing, for deterministic jobs: so you will actually have verification of deterministic jobs. We will also have a website integrating with Slingshot and other data sources, so that you can go to a website and pre-select a series of actions you might want to run against a data set. "Oh, it's a very large data set: I want to shard it, I want to parallelize it, I want to whatever" — a kind of no-code experience. We're also planning on supporting arbitrary networking via gateways, so jobs will be able to reach out to the network and execute commands through a throttled gateway. Obviously we'll be exploring decentralized solutions as well, but nobody in the decentralized space has figured out arbitrary networking yet, so we're not optimistic, but we're hoping.
In addition to that, nodes will be able to communicate with each other, so you can deploy multi-node jobs whose nodes talk to each other over the life of the job. That should unlock a number of map-reduce jobs and other jobs of that kind. On the compute provider front: in December 2022, we will launch Fil+ support; you can see that there. The compute provider doing the job will be first in line to win deals, including verified deals, so we're really excited about that. Also a lot of simplified setup work. In March, we're gonna have a unified control plane across all of your nodes. So if you're a compute provider and you have many nodes you want to control, we want to give you metrics, a dashboard, and the ability to turn them on and off without having to log into each one. We also want to work really hard on a partner program for our compute providers, multiple executor support, and our API for the server side, which will also reach 1.0 in March. In June 2023, we'll support additional deal engines. We're also gonna get to BFT 1.3, supporting unreliable nodes and making your job much more reliable, even on an unreliable network. And finally (again, you heard me give it away, or excuse me, you heard the Station team give it away), by September, we hope to launch "anyone can be a compute provider": you run it locally via the rich-client execution. We also plan on either building or, hopefully, integrating with an existing reputation system. And I mentioned already the clusters of jobs you'll be able to run inside a single data center. So, back to you. Hi, so I think now I'm the last. Okay. No? Okay, keep going. Okay, so I'm going to talk about the Layer 2 incentives working group within CryptoEconLab. This group focuses on Layer 2 research for incentives, and what does this mean?
So when you think about that figure that Juan talks about, the chasm between research and product, our team believes that a big, important part of crossing it is actually designing the right business models and incentives to allow for scale. Our team is supporting that: supporting all the incredible new networks, products, and applications building on top of the existing stack, and helping them design the right business models and incentives for scale. For this, we have three main objectives. The first is to continue to support storage growth with the emergence of new applications. Here we can think of applications built on FVM, but also other projects such as Atlas, which is trying to build a business model and Web3-native applications for geospatial data. The second is to make sure that we can build robust and safe markets for retrievals. You've heard from the different retrieval teams that incentives are a big part of it, so we want to support them on that. And finally, we also want to ensure that the compute markets are properly incentivized and that the right business models are set up. In terms of key milestones, we will have three main threads running in parallel. The first is Saturn: we will continue to monitor the L1 network and the incentives we designed, to make sure that everything is working as expected, and we'll start working on strategies to decentralize that structure so it can support many nodes joining the network, in particular the L2 nodes. Then we also have Atlas, which I talked about; here the main milestone will be to ship an MVP and to build a community around geospatial data and research. And finally, we have work that we'll be doing on compute over data, in particular with Bacalhau, supporting that team's initial exploration of incentives. So that's it. Thank you.

Hello, I'm the last one.
So now we're going to look at the parts of CryptoNet that are more forward-looking: computing over the data on Filecoin. Alex's description yesterday, that Filecoin is the storage layer for all the global computers, all the blockchains around the world, can only happen if we can allow them to get access to Filecoin storage. So we're working on this, and this is what we call Filecoin Interop: exporting Filecoin to Web3. This would allow true compute over data not only from blockchains like Ethereum and so on, but also from other projects like Bacalhau, Lurk, and so on. The way we export is by showing a proof of Filecoin's consensus on other chains, and that's the first milestone. The second milestone is that once we have the state of Filecoin on other chains, we can go and read it from those chains. Reading Filecoin state from Ethereum today would be a nightmare, so we need to improve the way Filecoin commits to its state so that it is much simpler to generate proofs. Those are the three steps to get there. Separately, we think computation over data is very important, but it's also very important to have access control for smart contracts, not only on Ethereum but on every chain, and we will be supporting this on every chain, including Filecoin. The idea with Medusa is that you can encrypt any IPFS hash and release the content behind that hash based on some on-chain interaction: you make a payment, you buy an NFT, and so on. Just this week we shipped the Medusa demo on medusa.xyz, so the first milestone is there, and there are many milestones ahead. One is deploying a production-ready testnet on Filecoin as soon as FVM supports events, and then we want to find some early users who would integrate access control into their platforms.
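The Medusa flow described above (encrypt content, release the key only after an on-chain interaction) can be sketched with toy stand-ins. Everything here is illustrative: the XOR "cipher", the `ToyChain`/`ToyMedusa` classes, and the placeholder CID are assumptions for the sketch, not the real Medusa protocol, its API, or its threshold cryptography.

```python
import hashlib

# Toy sketch of a Medusa-style flow: content is encrypted off-chain, and
# the decryption key is released only after an on-chain condition (here,
# a payment) is observed. All names and the XOR "cipher" are
# illustrative only -- not the real Medusa protocol.

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice recovers the input."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class ToyChain:
    """Stand-in for on-chain state: records who has paid."""
    def __init__(self):
        self.paid = set()
    def pay(self, buyer):
        self.paid.add(buyer)

class ToyMedusa:
    """Stand-in for the threshold network holding content keys."""
    def __init__(self, chain):
        self.chain = chain
        self.keys = {}  # content id -> key
    def store(self, cid, key):
        self.keys[cid] = key
    def release_key(self, cid, buyer):
        # The key is released only if the on-chain condition is met.
        if buyer not in self.chain.paid:
            raise PermissionError("no on-chain payment observed")
        return self.keys[cid]

chain = ToyChain()
medusa = ToyMedusa(chain)
secret, key, cid = b"humanity's information", b"content-key", "bafy...toy"
ciphertext = xor_cipher(secret, key)   # what would live on IPFS
medusa.store(cid, key)

chain.pay("alice")                     # the on-chain interaction
k = medusa.release_key(cid, "alice")
assert xor_cipher(ciphertext, k) == secret
```

In the real system the key is never held by one party: a threshold network jointly re-encrypts it toward the buyer, so no single node can leak the content. The sketch only shows the control flow, that decryption is gated on chain state.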
Medusa at the very beginning is going to be a permissioned MPC: the threshold network behind Medusa is going to be centralized, meaning we will pick who the nodes running it are. The goal is to find the right incentives for that, but that incentive scheme comes much later next year. And if you don't like our MPC network, you should be able to pick your own threshold group; that's something we're building toward with Medusa as well. Then there are two other projects that come from the research side. Both Testudo and Vector Commitments improve Filecoin proofs, and there is an early result from Testudo (the graph can't show it here) that we improve 4x over Groth16. This means that hopefully very soon, following the timeline (we do performance prediction, we finalize the paper, we implement the proofs), we will have faster proofs. This could give us faster proving of power, but also faster replica updates, which a lot of people want; if we can go 4x to 20x faster, that would be a big win. And Testudo is a universal trusted-setup SNARK, so it could be used by other projects that want to do compute over data and prove those computations. As for Vector Commitments, I've told you a lot about them on multiple occasions: the idea is to replace Merkle trees as much as possible, for proof of replication but not only there. As I said, having good vector commitments is very important for exporting the Filecoin chain to other chains, so that we can do easy proofs about Filecoin state, and in general, whoever is computing over large amounts of data will have to drop Merkle trees. That's why we think that, if you want to compute over Filecoin state, these two could be very strong components. And that's it.

This is the end of our EngRes Summit. Thank you guys so much for just everything. It's been awesome. I hope you all have had a lot of really good conversations.
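To make the "drop Merkle trees" point concrete, here is a minimal Merkle inclusion proof: committing to a vector of values with a tree root and proving one position with O(log n) sibling hashes. This is the generic textbook construction, not Filecoin's actual proof format; a vector commitment scheme would replace these logarithmic paths with shorter, aggregatable openings.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Commit to a vector of leaves as a binary Merkle tree root."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes along the path from leaf `index` to the root."""
    level = [H(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, leaf_is_right)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, path):
    node = H(leaf)
    for sibling, leaf_is_right in path:
        node = H(sibling + node) if leaf_is_right else H(node + sibling)
    return node == root

leaves = [b"sector-0", b"sector-1", b"sector-2", b"sector-3"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
assert verify(root, b"sector-2", proof)    # proof is log2(n) hashes long
```

The pain point the talk alludes to is exactly this log-sized, non-aggregatable path per opened position: when you open many positions of a huge vector (as proof of replication does), those paths dominate proof size, which is what better vector commitments aim to fix.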
I hope this does not stop now, and that you continue having great conversations. I have notes on groups that I want to talk to more, but thank you so much. Please have an amazing lunch, go hang out, and continue enjoying Lab Week.