Hello, everyone, and welcome to our June PL EngRes working group all-hands. As a reminder, this is our agenda for today. We'll start with an EngRes working group update, and then we have two deep dives, one on the FVM and one on Filecoin chain snapshots. So get excited. As a reminder, we are one of many amazing teams in the Protocol Labs Network, where we drive breakthroughs in computing technology to push humanity forward. We think the internet is one of humanity's superpowers, and we want to equip it with the primitives and foundations that set us on a great trajectory for the pivotal decade, or series of decades, ahead of us. A majority of our work is across these many projects, especially IPFS, libp2p, and Filecoin, but there are many, many other projects that we work on and are building over time. And we're part of some awesome open source communities that help make these things a reality.

Our mission in the EngRes working group is to scale and unlock new opportunities for IPFS, libp2p, and Filecoin. We do that in a number of ways: onboarding amazing talent, driving breakthroughs in protocol utility, and scaling network-native research and development. We are made up of these different EngRes working groups, and they are constantly growing. So if you are excited about the work we're doing here, please reach out; we would love to connect with you. We have a lot of open roles, we work with people across the entire ecosystem, and we also collaborate heavily with many PL Network teams that also have open roles. So let us know if you see something cool you want to work on.

Our strategy for 2022 has four main components. One, we're focused on increasing the talent pool of amazing humans contributing to these open source projects, sharing our knowledge, building network alignment, and building a great developer experience. Then we have two main chunks of feature development work. The first is around robust storage and retrieval across IPFS and Filecoin: data onboarding, data retrieval, data utility and accessibility; really making Filecoin and IPFS do the core of what they were designed to do. The second is around driving breakthroughs in programmability, scalability, and compute: a lot of really exciting things we aim to unlock in the future with compute over state in Filecoin, compute over the data being stored in Filecoin, and a lot onwards from there. And we do all of this with a first and foremost focus on operating these networks and open source projects really effectively: keeping our critical systems running, releasing regularly, working openly, empowering many other teams to get their work done, and avoiding or minimizing tech debt and operational overhead wherever possible.

Last time we talked a little about these upcoming EngRes milestones, and I wanted to point people towards a new Filecoin Core Improvements Roadmap. I believe this is going to be released on the Filecoin Foundation YouTube channel shortly, but I gave a long talk about it at FIL Austin a couple of weeks ago, which highlights three main tracks of development that have been ongoing for a while. First, there's core capacity and data onboarding: making storage work really well, and onboarding the capacity to then store useful data in.
There's a lot that we have done over the last two quarters as an EngRes working group that contributes to that, alongside the wider ecosystem we participate in, and some really exciting future work happening as well. Second, we've been working really hard to bring new functionality around programmability and computation to the Filecoin network, so there's a lot of work happening around the FVM and other computation networks. And third, there's an increased focus on data retrievability: both storage and retrieval, to make all of this data useful and accessible, with some really exciting work coming on retrieval markets, retrievability oracles, and other things you've heard about. So go check that out if you're curious about this roadmap and what's coming. We're working on getting it into a format that folks can add to, so if you're excited about helping with that, help is needed; we'd love to make it a place where everyone can contribute their new milestones.

We also shared some of our high-level goals, trying to break down some of our OKRs and make them a little more concrete. These are some of the goals we're holding ourselves to; we're going to refine them a little more and then try to finalize them for Q3. And now handing it off to Adin for IPFS.

All right, IPFS: trying to make the web work peer-to-peer using content addressing. Finding providers on the network continues to take under half a second, and KPIs like the number of network nodes and the number of open PRs are pretty similar to last month. All right, highlights. There's been a bunch going on this month. go-ipfs 0.13.0 is released. Big features include, from our friends in go-libp2p, hole punching and experimental resource management, as well as some changes around the Gateway API. Speaking of go-ipfs, it now has a new name: Kubo. You will be hearing it around; thank you to everybody who participated in the renaming process. We have many, many years of docs, so there will be many naming updates to come. There are pinning service compliance tests now, which is very exciting; you'll hear more about that later. We have Reframe, a request-response protocol we're using for things like routing. It's cool: you can use it to do delegated routing, and combine it with WebSockets to let a browser make peer-to-peer requests for data without anything weird like WebRTC. Specs: there's an effort to make the specs better. We have a lightweight RFC process, and the HTTP Gateway specs are open; check out the IPFS specs repo for more info. If you've used IPFS Check, it now has an additional home at check.ipfs.network, which will hopefully be easier for people to remember. And we have new tools like Ospinner, a tool for pinning your data to a pinning service.

Upcoming: we have the IPFS þing happening next month, for folks who are deep in the weeds of IPFS and those just getting into it. If you're building an IPFS implementation, we would like to hang out there; there's lots to talk about. Office hours are right after this meeting; if you haven't signed up and you want to go, sign up on the website. And we have some fun experiments underway around augmenting Bitswap to fetch data faster, and allowing for WebAssembly IPLD codecs and ADLs.

Over to Alex for js-ipfs. Hello, everyone. So since our last meeting, what's happened? Well, we shipped js-ipfs 0.63. This went out with the new version of libp2p, which is all built in TypeScript and is ESM-only.
It also has these amazing lightweight peer IDs. Historically, we haven't been able to use peer IDs in the browser because the module pulled in too many crypto dependencies; it was way too heavy. The new one does not do that. So now we can tell the difference between things like peer IDs and multiaddrs using the types, which is lovely; we don't have to pass everything around as strings anymore. Read the blog post: it's got lots of details in it, including how to upgrade. What else has happened? We've shipped a few patch versions with bug fixes. You want to upgrade as soon as possible, and your life will become wonderful and happy and everything will be nice and good.

What's happening next? Well, js-libp2p 0.38. This is similar to the last time I presented this slide. We're going to have way better resource management; a lot of the resource management work has been slipping into our bug-fix releases because nothing has been breaking, which is great, and it means you can take advantage of it right now. We're going to be able to tag peers: you'll be able to say, hey, this peer is important, please reconnect to it if we restart the node. That kind of thing is going to be very, very useful. Yamux has been upgraded from "yamux?" to "yamux!", which means it is almost certainly going to be in the next release, which is amazing. It's been a long time coming, and that's very exciting. Circuit Relay v2 is still a question mark, but that will be coming very soon. That's it. And then, yes, a little graphic: a little nod to what go-ipfs was almost called. But, yes, thank you.

Awesome. Over to libp2p. libp2p, the networking stack used by Kubo and by Lotus and many other projects. Last time we presented the repo consolidation: go-libp2p is now almost a monorepo. What this meant is that we now have all the code in one repo, but we also have all the tests in one repo, so all the flaky tests were consolidated into go-libp2p as well, and we've made a lot of progress de-flaking our tests. Marco has built some nice visualizations that you can see. We've been chewing through those flaky tests; there are still a few of them remaining, but we've made great progress there. On the highlights, for libp2p we've been focusing on the resource manager across the different language implementations. You can see in the graphics that we now have metrics: you can see when the resource manager is blocking resource allocations, and this can help inform setting the right limits. We've also added support for canonical log lines to log misbehaving peers, which can be plugged into tools like fail2ban that server operators use to automatically ban the IP addresses of nodes that are not behaving in ways we want them to behave. Other updates: we've made progress on the WebRTC effort and started a new collaboration with Little Bear Labs; they are helping us out on the specification and on the Go and JavaScript implementations. We've added IP range support to multiaddrs. rust-libp2p shipped not one but two releases, and is redesigning a couple of interfaces. There's a new implementation of libp2p coming up in Swift, which is really exciting. And we've made great progress on the hole punching measurements now that go-ipfs 0.13 has rolled out.
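On the peer tagging mentioned in the js-libp2p plans above: go-libp2p already exposes tagging and protection through its connection manager. Here's a minimal sketch; import paths vary across recent go-libp2p versions, and note that without configuring a real connection manager the default one is a no-op, so this is illustrative rather than a recommended setup:

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
)

func main() {
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Placeholder: in real code the peer ID would come from the
	// peerstore, a multiaddr, or a connection notification. We use
	// our own ID here just to have a valid one in hand.
	important := h.ID()

	// Tag the peer so the connection manager prefers to keep its
	// connections when trimming; a higher weight means more valuable.
	h.ConnManager().TagPeer(important, "important-peer", 100)

	// Or protect it outright so its connections are never trimmed.
	h.ConnManager().Protect(important, "critical")

	fmt.Println("tagged", important)
}
```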
Awesome. I believe we have a video from Rod for IPLD. Hello, a quick IPLD update from me. The main item I have for you is the go-ipld-prime v0.17.0 release. I'd encourage you to go read the changelog if you use go-ipld-prime at all. There are three very minor, potentially breaking changes, though it's unlikely anyone will be affected by them, and there's also three months' worth of work across the whole project. Main items: bindnode, with lots of work hardening and productionising it for rollout in deployed code like the data transfer stack, and lots of work on the schema DSL in particular, so you can now parse almost all of the schema spec really nicely with go-ipld-prime. A couple of other things across the ecosystem: Adin is working on some IPLD experiments in his GitHub; I encourage you to go have a look at that if you're interested. There are some pull requests there where you can see the kind of work he's doing to support codecs and ADLs and some other interesting things. We have a new page on ipld.io called the benefits of content addressing. This might be a good resource to give to people you're talking to about content addressing if they're wondering why on earth you would do this thing. Also, feel free to contribute to that page if you have anything better to say there. And lastly, on the screen you'll see a custom build of IPFS with a new version of go-multibase and a new base encoding called base256emoji, contributed by Geropo. This is rolling out over time; it's in go-multibase and js-multiformats already, and just needs to bubble up through the stack. It's a fun base encoding. It's a really inefficient base encoding, but you can now represent CIDs as emoji. So if you want to have a bit of fun, you could have a look at that. Is it useful? Probably not. Is it fun? Kind of. And that's it for me.
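For the curious, encoding bytes in the new base looks roughly like this from Go, assuming the Base256Emoji constant that shipped with this work in go-multibase:

```go
package main

import (
	"fmt"

	"github.com/multiformats/go-multibase"
)

func main() {
	// Encode arbitrary bytes (e.g. the raw bytes of a CID) in the
	// base256emoji multibase. Wildly inefficient, maximally fun.
	out, err := multibase.Encode(multibase.Base256Emoji, []byte("hello ipfs"))
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // a 🚀-prefixed string of emoji

	// And round-trip it back to bytes.
	_, data, err := multibase.Decode(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // "hello ipfs"
}
```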
IPDX. Hello, developer experience here. Last month we completed our first GitHub permissions audit. We started with the libp2p org. It was quite a big undertaking, but successful in the end: we didn't have to lock anyone out, all the bots are intact, and we are definitely in a better place when it comes to who can do what in the libp2p org. Next we'll be continuing with the IPFS and IPLD orgs. We also set up a self-service process for requesting self-hosted GitHub Actions runners. It's as easy as creating one PR and installing a GitHub app in the org, so come check it out. Testground work is finally picking up, and there is a lot of cool stuff happening: I believe libp2p cross-version testing might be out as early as tomorrow, if I'm not mistaken, libp2p cross-language testing is likely happening next week, and we have a Testground-on-EKS demo scheduled for next week as well. So yeah, a lot of good stuff in the Testground plans. What about next? Next month we are heading to the IPFS þing as well, so come say hi and talk everything developer-experience related. And before that, definitely drop by our office hours, every Monday at 4 PM UTC. See you there. Thanks, Peter.

Over to Filecoin, Jennifer. We're trying to build a decentralized storage network for all kinds of information. The next slide is about some metrics. For total network capacity, we have reached 17.47 EiB, which is quite amazing; that's our QA power, and the raw byte power is 16.5 EiB. For data stored, we are now hitting the big 110 PiB, which is a lot of data; that's a 10 PiB increase from last month. A lot of it is verified data, thanks to Evergreen and the other data programs we'll get more details on later. There are a lot of useful, valuable data sets being stored through these verified deals, including OpenSea with NFTs and the Internet Archive, which you can see in the corner there. The daily data onboarding rate is now 0.767 PiB per day; that is over 100 terabytes every day even just from verified deals, including a lot of Evergreen renewal deals. Just keeping data persisted on the network is a huge effort, and I think it's going well.

Now, for the highlights, the thing we have all been waiting for: FVM M1 is shipping very soon. We finally have a date, July 6th; please mark that on your calendar. We have been testing a lot with the Lotus and Forest teams. We now have 100 percent test coverage on the built-in actors; the actors, rewritten in Rust, are ready to be switched over from the Go-based actors at the upgrade. We have done butterfly testnet testing for over two months, and we have a very long list of checkboxes checked off by the Lotus team and the community. We have also upgraded the calibration testnet, the most mainnet-like testnet we have, on June 16th, last Wednesday, and we started working with ecosystem stakeholders, like exchange partners, to upgrade their nodes and make sure they work really well ahead of the FVM upgrade. A lot of community testing is still ongoing with storage providers and clients; we can never stop there. We are already scoping our work for network version 17. It will be around addressing a lot of the FIPs in the backlog, but mainly working towards enabling storage and retrieval market programmability, so that once FVM M2 is here, a lot of those use cases can be built.

The FVM team has also been moving non-stop towards M2 already; M2 is the one that enables user programmability. We answered a lot of questions in the AMA this Wednesday, and I think we have a link there you can follow up on; there will also be a blog post from the Filecoin Foundation summarizing everything. A lot of early builders are building on testnets with SDKs, tooling, and smart contracts; follow the FVM channel for the latest updates. The FVM EVM repository, this semi-secret project, is now public. Raul is going to give a deep dive on it later, so I'm looking forward to that. The crypto team has been working on Halo2: all the SDK work is completed, we are moving on to API integration and to usability and correctness testing to enable M2, and hopefully we can have it in Filecoin very soon.

Upcoming opportunities: again, we have the nv16 upgrade. We have done a lot of testing and we are all feeling very good about it. However, this is the biggest upgrade since mainnet liftoff, so we would love everyone in the PLN and the Filecoin network to stay very responsive and reactive around the upgrade epoch this time, so that we can be ready for anything that may occur. For Filecoin, we also have Boost. We'll have a detailed update later, but this is the go-to-market software the Bedrock team has been building. They just published their first stable release, which enables a lightning-fast storage deal-making experience, so please go check it out. Again, a lot of FVM M2 opportunities we are looking forward to. We will be at FIL Austin and FIL Toronto again with a lot of fun workshops, including watching a Solidity contract deployed on FVM live, and a miner-focused workshop from Magik.
Come say hi if you are around. I think that's it; more than enough really exciting things happening. Over to our team updates, Jesse first with all of the NetOps work.

We keep checking our TTFB, the time to first byte, which is still around 11 seconds. In the next slide you will hear about what we are going to do to improve it, to reduce the TTFB p95 and the mean time; I think that will be a huge improvement once we implement some of those changes in our network. IPFS Cluster pin uploads keep growing, which is pretty healthy: we now have around 995 million pins in total, and I think that's a great number for us. The ipfs.io gateway requests are around 800 million, holding very steady. We're actually hoping this number slowly gets lower, because we want more people running gateways together with us, instead of us being the only one, or one of the few, running it. The same goes for unique users on our IPFS gateway: still growing steadily, and we're hoping to get more support from the community, with others helping to run gateways in the future. Our network uptime, across drand, the chain.love API, Sentinel, Filecoin infrastructure, and the IPFS gateway, is always close to 100 percent: the IPFS gateway at 99.98 percent, the IPFS bootstrappers at 100 percent. All the key numbers we're tracking are still pretty positive. We're hoping to improve the TTFB and its variance, and to reduce the variance in our IPFS gateway numbers, and we welcome everyone who comes to help us. Thank you.

All right, so this is production engineering. It's a new team we've set up within NetOps, and our remit is to look at the non-functional parts of our software: performance, reliability, security, operability, and all those good things that aren't to do with actual features. Our current project is to look at the time to first byte in the gateway and improve it. We're building new infrastructure to let us do A/B testing, so we can deploy new versions of the gateway side by side with existing versions and do comparisons between them. The first target is garbage collection in go-ipfs, because the current default behavior is to throw away everything that isn't pinned. For a gateway, we want it to act more like a smart cache, where we keep things that are useful in the blockstore for longer. To do that, we're measuring how blocks are fetched, how long they take to fetch from the network, and how frequently they're accessed from the blockstore, and we're going to use that to build a metric that lets us selectively delete things that are easy to recreate and keep things that are hard to fetch and used quite often. We've got a Notion page with a whole bunch of detail about what we're working on at the moment, and there are a whole bunch of new opportunities in this area. Time to first byte is just one thing, and the number of things that affect it is huge: being able to cache better, optimizing the production of certain types of directory listings, or even just looking into how Bitswap is used and tuned within the gateway itself. So there's more than just what I've listed here; go have a look at the Notion page, and if you've got anything you want to suggest and add, feel free to do that as well.
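To make the smart-cache idea concrete, here is a hypothetical sketch of the kind of eviction scoring just described. This is illustrative only, not the gateway's actual metric: blocks that were cheap to re-fetch and are rarely used score low and get evicted first, while expensive, frequently accessed blocks stay cached:

```go
package main

import (
	"fmt"
	"time"
)

// blockStats is the kind of per-block bookkeeping described above:
// how expensive a block was to fetch, and how often it gets used.
type blockStats struct {
	FetchTime   time.Duration // how long it took to fetch from the network
	AccessCount int           // how often it's been read from the blockstore
	LastAccess  time.Time
}

// evictionScore: lower means evict first. Re-fetch cost and access
// frequency raise the score; staleness lowers it.
func evictionScore(s blockStats, now time.Time) float64 {
	ageHours := now.Sub(s.LastAccess).Hours() + 1 // +1 avoids division by zero
	return s.FetchTime.Seconds() * float64(s.AccessCount) / ageHours
}

func main() {
	now := time.Now()
	cheapCold := blockStats{FetchTime: 50 * time.Millisecond, AccessCount: 1, LastAccess: now.Add(-24 * time.Hour)}
	dearHot := blockStats{FetchTime: 8 * time.Second, AccessCount: 40, LastAccess: now.Add(-1 * time.Hour)}
	fmt.Printf("cheap and cold: %.4f\n", evictionScore(cheapCold, now)) // evict
	fmt.Printf("dear and hot:   %.2f\n", evictionScore(dearHot, now))  // keep
}
```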
Awesome, great stuff. Yiannis, tell us about ProbeLab. Hello! So some of you have heard about ProbeLab here or there; just a quick intro. It's a new effort working pretty closely with production engineering and the libp2p and IPFS stewards, with a mission to apply scientific methodologies to the measurement, benchmarking, and optimization of the IPFS protocol stack. The main mission, the main motto if I should say, is that you can't improve what you don't measure. That's the purpose here, and by measuring, we then go on and try to build optimizations with the other teams. Next slide, please. We've got several ongoing projects, which you can see in the bullet point list there: DHT routing table health; working with the libp2p team to measure the success rate of NAT hole punching, the new big thing in libp2p; and Bitswap, which was also just mentioned. We've been to several events: in April, the Paris P2P Festival; we're organizing our own workshop, DINPS, in a few days in Bologna; and then we're going to be at ACM SIGCOMM, where we have also published a paper on the design and evaluation of IPFS, a great read with a very detailed description of how things work at the protocol level. The PL team is small, with Guillem, who joined a couple of months ago, and Dennis, a great external collaborator who accepted our offer, but I should also mention that we have a great, thriving community of outside collaborators who help make all this happen. For opportunities, I would definitely point you to a blog post on the IPFS blog; I'm going to link it in the chat. We've got a Notion page where you can come and contribute, and there's a GitHub repository where, if you want to suggest an idea for a measurement in some part of the protocol stack, feel free to go and suggest it there. That's me. Thank you.

That's it. Michael, tell us what's new with nft.storage and web3.storage. Heyo. Yeah, we still have tons of uploads coming in and our growth is still looking really good. Some roadmap updates: in the next few weeks we'll be swapping Elastic IPFS and go-ipfs Cluster as our primary and secondary stores; Elastic IPFS will become the primary store and what we wait on for the availability guarantee. In early Q3, we'll be shipping the standalone service and library for our new upload interface, which is built entirely on that Elastic provider interface. That's really exciting, and we have a demo of it coming on Monday. The web3.storage gateway is also coming out next month, same with the w3name standalone service, the initial public version of our IPNS service for people to use. Later in Q3, we're looking at a new console for the new services and everything around the new upload interface. Rather than last time, where we built nft.storage and web3.storage in separate sprints on different code bases, we're looking at consolidating the whole code base and having reusable widgets that anybody can plug in for any of the stuff we've built. We also shipped some great libraries: ucanto, for our UCAN-based RPC, and a CARv2 index implementation in JS called cardex, from Alan. We've also got a new logo for dag.house for post-nucleation branding. That's it.

Over to Bedrock, Jacob. Yes, some quick highlights: Boost 1.0 shipped last week, and the blog post shipped today, so you can check it out. Take a read.
We're excited to work on adoption there, and we're currently working with Sentinel to get more metrics to analyze deal ingestion through Boost versus Lotus Markets. Also, not on here because it's hot off the press: PiKNiK shipped their indexer this week. It's now in production, and we'll be working with them to make sure those stay synced. Super exciting to see storage providers running large-scale indexers. In terms of the Bedrock roadmap, a lot of what we're looking at for the second half of the year is to consolidate the work we're doing and focus really hard on our top-line goals. On data onboarding, we're making sure we're doing everything we can to scale up Boost and markets to get to the five petabytes per day goal in Q3. We're also going to consolidate our data transfer streams into Boost and focus heavily on scalability and reliability of retrieval; we really want to get to that goal of data on the network having a 99 percent retrieval success rate. A lot of the work we're doing there right now is scaling up the autoretrieve project and getting a lot more data, so look forward to more metrics and analysis of retrieval. We'll also continue the work between the indexer and retrieval teams on interoperability of IPFS and Filecoin, as well as the performance of IPFS retrieval on the network. Awesome. Great work. Impressive launch.

Over to Patrick for Retrieval Markets. Hey. In the Retrieval Markets working group, we've got 10 grants in progress at the moment, 14 teams in the group, and four more interested in contributing, totaling 50-plus active contributors. The Retrieval Markets team is looking to get a network of retrieval providers deployed, or multiple networks. One such network is Saturn, which has been built by the Saturn team as part of EngRes. Saturn has launched a private mainnet, which now has 29 nodes around the world and has served 15 million retrievals, albeit ones that we've set up ourselves rather than from external places, which is pretty exciting. Highlights: we've got two new hires into the team and one close to the finish line. There is a Retrieval Markets roll-up of the first half of the year, which goes through all the topics we've been looking at. We've shipped a new retrieval.market website, and a web3 CDN performance-comparison dashboard. Then for Saturn, we've shipped a few things around the mainnet: a performance dashboard, an information website, and a dashboard that shows node operators how much they've earned. Thanks very much. Dashboards! I am excited about this. I will send out the deck so we can all look at the slides later.

I believe David flagged that he and Wes were out, but there's lots of excitement on the compute-over-data stream as well. Bacalhau now has public nodes, probably in early testing, has had four releases and two new team members, and I believe they're also starting up a broader group around compute over data, similar to the Retrieval Markets working group. So: some exciting stuff on the roadmap and some exciting opportunities going forward. Feel free to read the slide if you have more questions. Now we're going to jump into our spotlights of awesome new things to highlight, first and foremost the Boost GA release. Brenda, tell us more. Hello, everyone. I'm really excited to share more about Boost.
I know we've been talking about it for a while, but really briefly, for those who forgot what it was or haven't heard of it before: it's a tool for storage providers to easily manage data onboarding and retrieval on the Filecoin network. You can do things like get greater visibility into your deal-making pipeline with a new web UI; there's a really lightweight client for proposing deals, so you don't have to run a full Lotus node; you can also make storage deals with HTTP data transfer, and more. There's lots more in the docs that you can go and click into. So I'm super excited to share that we have launched Boost as of last Wednesday. For those of you who don't know it, please go read about it; the earlier Bedrock update has a link to the blog post as well. Sharing really quickly what's next: we want to push adoption of Boost across storage providers, especially those that are onboarding data, and eventually we do want to have a path to sunset and deprecate the legacy markets. So next we're going to push adoption with storage providers, share more broadly with our ecosystem that we're moving all the markets capabilities to Boost, and get measurable ways to see how this is helping the data onboarding rate. We also want to build and design for scaling Boost: if you know storage providers that have input on this, please point them to the link there on the slide. We have a discussion going on in GitHub, and I think it's one of the biggest pieces of feedback from the storage provider ecosystem, so if you know anyone, please point them there to provide input. And we want to keep developing capabilities from there. So yeah, thanks everyone for the hard work. Shout out to Dirk, Anton, and Arash, who also worked on this; Jake, obviously; and Mayank, who is now a dedicated Bedrock TSE and has been helping us a ton with support, troubleshooting issues, triage, and improving docs. That's been super helpful. Yeah, thanks everyone, and stay tuned for more releases in the future.

Awesome. Thank you, Brenda. Jennifer for the Lotus H2 roadmap. Lotus, what's Lotus, in case anyone doesn't know: it's the reference implementation for the Filecoin network. Our team's mission is basically to keep it going and growing, and to keep improving it so that the Filecoin network can be useful. We want to enable storage providers to provide storage services to the network for all that amazing data, and to enable developers to build their own businesses, applications, and tooling on top of Filecoin via Lotus, so that users can actually use Filecoin for things. To do all that, we have a lot of work ahead. We have our H2 planning, which is now public; I have a link there, so if you want to see all the details, click on that. Let me talk about the Q3 priorities for now, because things keep changing. First, we'll continue working with our friends on the FVM team to make sure that M1 goes really well, maintain it, and be there if anything needs support for M2. We're also going to develop network version 17, which will mainly be work with CryptoNetLab and the protocol teams to enable storage market programmability, to power FVM M2. We're also going to be working on SplitStore, which builds on Vyzo's work to help with chain management; we want to ship that in production for our users. We're going to look at signature domain separation, so that user contracts can be handled securely on the network. We also want to improve our sealing pipeline so that it can scale enough to work with Boost, and maybe build sealing-as-a-service for the storage providers. We want to ship our v1 API with a lot of improvements to our gateway APIs, and also support FVM M2, maybe EVM JSON-RPC APIs, so that developers can deploy their smart contracts and integrate with them. We also want to ship a light client; however, we don't really know how we want to do that yet, so if anyone has previous experience implementing a client for a blockchain, please let us know. We would love to chat with them. We're also going to do a series of tutorials and workshops so that Lotus becomes easier for people to use, because it's a very complicated system and piece of software. A lot of things to do, but we're a relatively small team, so we're hiring a lot: if you know any good EMs, software engineers, or technical support engineers, please send them our way. That's it. Also, one more thing: we are still planning our Q4 priorities, so if you have anything that might need Lotus's attention, please reach out to me, jennijuju, via Slack.

Super great. Follow along closely. Over to Vik for the latest FIP from CryptoEconLab. Hi, everyone. Juan, Molly, and CryptoEconLab have posted a FIP discussion to introduce a sector duration multiplier for longer-term sector commitments. The idea is that since longer-term deals and storage commitments are more in line with the network's goal of storing humanity's most useful data, and because SPs take on increased liquidity and operational risks in storing deals or committing capacity for longer, we want to reward that with a multiplier on their QAP. So, similar to how Fil+ introduced a verified-deals multiplier, which rewards SPs for storing useful data, this FIP introduces a duration multiplier to reward SPs for committing their resources for longer. I've linked the discussion in the deck, so please feel free to provide your thoughts, input, suggestions, or questions. We're continually iterating on this FIP and we want to get it out there soon. Woohoo! Everyone, take a look and add your thoughts and questions. Thanks to the folks who already did.
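To make the mechanics concrete, here is a hypothetical worked example. The actual multiplier function and parameters are exactly what the FIP thread is iterating on, so treat the shape and the numbers below as illustrative only:

```latex
% Suppose quality-adjusted power gets a duration multiplier m(d)
% on top of the existing deal-quality multiplier q:
\[
  \mathrm{QAP} = \mathrm{RawBytePower} \times q \times m(d),
  \qquad m(d) = 1 + k\,(d - 1)
\]
% With the existing Fil+ verified-deal multiplier q = 10, a
% hypothetical slope k = 1, and a 32 GiB sector:
%   d = 1 year:  32 GiB x 10 x 1 = 320 GiB of QAP
%   d = 3 years: 32 GiB x 10 x 3 = 960 GiB of QAP
% i.e. tripling the commitment triples the sector's share of rewards.
```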
Over to Russell for the Pinning Service Compliance Checker. Thanks, Molly. So, everyone, just first up: this is very noisy, I know that, but these GIFs all loop multiple times, so you can watch them as I continue talking through the items. The Pinning Service Compliance Checker: what is it? There are multiple pinning service providers, and we want to make sure that they're all providing the same sort of support to our users. You can read a lot more in the launch announcement about the history and how it came about. But why did we need this? The spec was the only thing for services to base their implementations on, and no conformance client existed, which can exacerbate feature disparity. There is one now, and you can check it out; you can see that disparity in the different reports, where some services are failing compliance and some are passing. So now we can try to align them better. A huge shout-out to Lawrence and Daniel. My time is up.
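For context, the checker exercises the vendor-neutral IPFS Pinning Service API spec. A minimal "add pin" request against a compliant service looks roughly like this; the endpoint, token, and CID here are placeholders:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Placeholders for a real provider's API base URL and access token.
	endpoint := "https://pinning.example.com"
	token := "<YOUR_ACCESS_TOKEN>"

	// Pin object per the spec: cid is required, name is optional.
	body, err := json.Marshal(map[string]string{
		"cid":  "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
		"name": "my-pin",
	})
	if err != nil {
		panic(err)
	}

	req, err := http.NewRequest(http.MethodPost, endpoint+"/pins", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// A compliant service replies 202 with a PinStatus object
	// (requestid, status: queued | pinning | pinned | failed, ...).
	fmt.Println("status:", resp.Status)
}
```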
Awesome. Well, everyone can look at the GIFs and see the amazing work that has been happening here; some great quotes in there as well. And over to Ecosystem. Mosh, tell us about all of the cool stuff that's been happening in the Ecosystem working group. Hi, I'm Mosh from the Ecosystem working group. Our mission is to see the long-term growth of the decentralized web. We're cultivating a wide variety of stakeholders, aligning them with the success of IPFS, Filecoin, and libp2p, and treating them like gold. Okay, so what does that mean? In the past couple of weeks, our team has been really active. The big event this week is NFT.NYC. We're doing events, running community meetups, and doing a lot of business development. So how does going to parties translate into 68 million NFTs stored? Well, a lot of the founder relationship building, and even the technical decision making, is happening in these really, really informal forums. Unlike other verticals where you can do business development over email and Zoom calls, this is all happening in chats and events and 9am dance parties. That's where we build great relationships, then follow up afterward to debug technical integration questions, scaling questions, issues with retrieval performance, or anything like that, and really use our 360 tools to help them succeed. Our focus on this pipeline has yielded, I think, seven major partners, meaning companies or projects with seven-to-ten-figure market caps. And all of those become multipliers that enable lots of other builders and creators to use those platforms to create new content on the decentralized web.

Another thing that's been really exciting is the one-million-dollar grant program announcement for Filecoin Green. This is focused on any sort of software project, instrumentation project, or experiment towards making blockchains verifiably sustainable. And a huge announcement: the Brave wallet now has native Filecoin support, and Brave is the leading edge of web3-native browsers. You can now do all sorts of great things with your Brave wallet, including creating and managing Filecoin wallets, importing from Ledger, and sending and receiving FIL tokens directly from Brave. Really exciting. We have some more things coming up soon: Funding the Commons at the end of this week, and the Outlier and Tachyon accelerator demo days; the links to sign up are there. You can meet some of the most exciting and high-potential startups building companies and businesses on top of our technology. Most of these are remote demo days, so it's easy to dial in or have them playing in the background. And our Orbit ambassador program is hosting a number of events all around the globe. Last but not least, we're building relationships and lining up a bunch of infrastructure and hosted-node providers for Filecoin and the FVM. If you have any needs, requirements, or suggestions for what those hosted-node providers should do, please talk to me, or better yet Eva, so we can build those into the contracts and specifications. Thanks.

So much cool stuff happening around the ecosystem; thank you for the update. Cool, and we are ahead of schedule. We're going to go into our two deep dives, five minutes each, starting with Raul on the FVM. Hey, everybody. This is Raul from the FVM team. So it turns out that the FVM team doesn't really rest a lot.
We've been hard at work shipping M1, which, as Jennifer said, is going live with the Skyr upgrade on July 6th. Just as a reminder, M1 installs the FVM technology on the network and basically transplants all of chain execution into this new Wasm-based runtime. But in the last weeks we've also started working towards M2, which is the milestone most people really care about, because it brings the much-desired feature: user programmability. That is basically the ability to deploy custom contracts and actors to the network. Now, the first kinds of workloads we'll be able to deploy to the network are EVM smart contracts, and these contracts will have the ability to interact with built-in actors. Just as a reminder of how this works under the hood: the FVM is a hypervisor-inspired runtime environment built on Wasm, capable of hosting contracts and programs written for different and diverse runtimes. The goal is to provide seamless interoperability between those kinds of workloads: bridging and translating calls, and making sure that addressing and identity are well covered, cryptography, and so on.

Before we move forward, I wanted to touch on one topic: why are we focusing on EVM programmability first, before native programmability? This is basically driven by the sentiment that we've collected at conferences. It indicates that the community is really eager to build as soon as possible, and they want us to meet them where they are today. That means many of these developers are Solidity and Ethereum developers, and they want to use their existing tools and know-how to just build on Filecoin and get started quickly. Also very important: they stressed to us that the reason they're deploying on Filecoin is actually to be able to use the native Filecoin features. So one thing we're focusing on is providing Solidity libraries, precompiles, and so on, so that EVM smart contracts will be able to interact with built-in actors and utilize and query Filecoin features and state. Another advantage of focusing here is that there's a massive number of contracts in the Ethereum ecosystem that are battle-tested, audited, and production-grade, which will be portable to Filecoin in a very seamless manner. And importantly, these contracts will be able to compose with Filecoin features. These are things like ERC-20 tokens, NFTs, and so on, which are useful primitives that you'll then want to compose with Filecoin-related features: say, for example, proving that the data represented by NFTs stored in or tracked by a particular NFT registry is alive, healthy, and being proven at a specific replication factor, and so on. This doesn't mean we're not going to focus on native programmability. We're going to continue specifying features and the design surface for native programmability so that we have a full picture, but we'll focus on bringing those native-specific features to fruition later, after we bring up the EVM.

One thing I wanted to stress is that this milestone is going to be spec-first. With M1, we shipped a solid set of baseline specs that we're able to work against: FIP-0030, FIP-0031, and FIP-0032.
If you want to go check those out, feel free to go to the FIPs repo. The spec that kicked off all of the EVM and FVM work is an initial parent spec; you'll find the link on this slide, and it's in the FVM specs repo. We've also enumerated all of the technical areas that are going to require further specs. The team is currently in a phase where we're building a prototype and, at the same time, burning through these specs, which will help us continue refining more and more detail. Things like account abstraction: concretely, this will bring us the ability to execute native transactions and transactions issued from Ethereum wallets like MetaMask; logs and events support; things like how we support the EVM DELEGATECALL opcode; and a bunch of other things. So stay tuned. If you're interested in this work, I would advise subscribing to the FVM specs repo and to the FIPs repo, and watching the discussions and the issues.

Now, this is a complex endeavor, so you're probably thinking: wow, this sounds massive. Yes, and we're aware of it. We're also aware of the fact that there are many unknowns that we're probably just not seeing yet. So in order to uncover these unknown unknowns, we embarked on an EVM prototype, and shout out to Karim, a team member on the FVM project, who led the implementation there. We just made it public yesterday, so if you're interested in what's going on there, go to the fvm-evm repo under the filecoin-project organization. As of today, we're able to deploy EVM bytecode and run a SimpleCoin contract that performs state reads and writes. And with that, we've got a surprise for you: a small demo, which Steb is going to do live right now.

Yeah. So what we're going to do here is deploy the EVM bridge actor, then deploy an EVM actor via the bridge actor, using an EVM message. The cool thing about this is that it's literally going to take an actual EVM message, submit it to the bridge actor, and then execute the init code. The actual flow here is going to change a bit on mainnet, because we're trying to reduce the amount of EVM-specific stuff you have to do. What you'd like is to abstract the account so you can just send, effectively, an EVM message to the chain, and the chain will be able to deal with it, but we're not quite there yet. Okay, so let me quickly run the test. This will take a few seconds. If it doesn't work, we can always switch to a pre-run version, but it's kind of fun. So right now, it's trying to deploy the contract. What we can see here: look at the code. Can you guys see the code? It's reasonably visible. So what it did was go here, create the tester, then deploy it; so it constructed the contract. That's what happened here. It signed the transaction using the EVM format, but then it submitted a message, or basically it called the EVM bridge contract with this custom message. This message here is a canonical legacy EVM message: it has a nonce, gas price, gas amount, et cetera, and the input is the contract constructor; this is the EVM message, not the Filecoin message. You can see here that it worked, but let's actually go down and see what actually happened. So if we look at the... there it is. So basically what this did was invoke this actor method here, which went down here. Let's try to find... or it actually... okay, there it goes.
So once it gets through some testing scaffolding, it executes the message. Oh, that's the long thing. Fine. So I think what it actually did, because I can't drill through the code here, let me just reopen the file... what I think it actually did was eventually call this create-contract function here, which takes the signed transaction and constructs... so it uses a bunch of machinery here to actually construct an FVM actor, and then it actually executes the EVM bytecode on chain, inside Wasm. That's all that's happening here. Sorry, I forgot to mention: this here is the bridge actor, and it is actually running as an actor inside Wasm. So actually, let's go back to the top. Sorry. At the top, this is the bridge actor, and this is the definition of the bridge actor. We end up calling into process-transaction to actually execute the transaction inside the Wasm container, inside the FVM. That processing of the transaction is... Then: Steb, unfortunately, we're at time. Are you able to show a message being sent to this contract? Well, so if you look down here, it's... No worries. Yeah, it is working, but I don't think this test does that specifically; sorry, I did not write the test. As far as I understand, it just sends a message to the bridge actor, which then constructs the EVM actor. It doesn't send a message to the EVM actor at all. Got it.

I just wanted to talk about what's next. We have several lines of work opening up while the team works on the prototype. There's a bunch of things we know we need to go through and continue building out in the prototype, and the work we do there is going to feed into the technical design, so there's a really nice feedback loop. We're also working on technical designs; some of the most critical ones are account abstraction, the universal stable f4 address class, and so on. So feel free to tune into the repos for those. There's also going to be a set of EVM-focused community RFPs that we're going to be opening up: things like Solidity libraries and precompiles for interfacing with built-in actors; automated testing and deployment of existing Ethereum contracts, for example from the OpenZeppelin libraries; and a bunch of other things. And also, we're going to be starting a new early-builders cohort, and Ali and Dragan are going to lead much of the charge here, focused on use-case building. The existing cohort, the one that's running now, is focused on tooling; this one is going to be focused on use-case building using existing Ethereum tooling and deploying on the FVM. So that's all from the EVM team today. Thanks a lot.

Awesome. And if you're a potential early builder, look out for sign-ups. We would love to have you as part of this program, making awesome new things on top of the FVM.
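For reference, the "legacy EVM message" shown in the demo carries the standard Ethereum transaction fields. Here's a rough Go sketch of that shape; this is illustrative only, and the prototype's actual types live in the fvm-evm repo and will differ:

```go
package main

import "fmt"

// LegacyEVMTransaction: an illustrative stand-in for the legacy
// Ethereum transaction format the bridge actor accepts.
type LegacyEVMTransaction struct {
	Nonce    uint64
	GasPrice uint64 // denominated in wei on Ethereum; mapped for Filecoin
	GasLimit uint64
	To       []byte // empty for contract creation
	Value    []byte // big-endian big integer
	Input    []byte // for creation: the contract init code
	V, R, S  []byte // secp256k1 signature, used to recover the sender
}

func main() {
	// A contract-creation transaction: To is empty and Input holds the
	// EVM init code, which the bridge executes inside Wasm to produce
	// the contract's runtime bytecode and a new EVM actor.
	tx := LegacyEVMTransaction{
		Nonce:    0,
		GasPrice: 1,
		GasLimit: 10_000_000,
		Input:    []byte{0x60, 0x80, 0x60, 0x40}, // first bytes of typical init code
	}
	fmt.Printf("creates a contract: %v\n", len(tx.To) == 0)
}
```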
And now handing over to Marcus for chain snapshots. Okay. So this is about the Filecoin lightweight chain snapshots. The project abstract: produce the lightweight snapshots that new Filecoin nodes require to join the Filecoin network in less than 24 hours. Without snapshots, it would take weeks or months to sync a new node from genesis on mainnet, so the snapshot service is a critical part of the Filecoin ecosystem. This is an ops project that aims to provide guarantees by operating an HA, fault-tolerant service, implementing monitoring, and committing to service levels. The snapshot service should also be capable of producing snapshots from devnets and testnets, to support rapid node bootstrapping in test environments. This project is led by Travis Person. Riva has been running a snapshot service since the start of mainnet, and the outputs of our first milestone will be in line with that service, which produces a latest snapshot plus timestamped CAR files stored in S3; the NetOps service will take over from Riva. Our goals: guarantee that snapshots are never older than four hours; syncing a Lotus node from a snapshot should take less than two hours; and there should be redundancy in the snapshot service and in the storage and distribution of snapshots. We will reproduce snapshots as they are currently produced, and we will make sure there is monitoring and alerting for the snapshots, with runbooks.

A quick overview of how we're doing it: Travis wrote the Filecoin chain archiver service, which has a node-locker service and an exporter. We're leveraging Kubernetes cron jobs, using open source Helm charts that we've authored, and just using Lotus. In short, we're able to run concurrent snapshots while guaranteeing that each Lotus node is only running one snapshot at a time, to avoid complications. Our progress so far: the work is done and we are now actually producing snapshots on calibration and mainnet. Go check out those snapshots and test them; there are some links here. To get to our M1 soft-launch goal, we need to complete the validation, alerting, and runbooks. The M1 soft launch is scheduled for July 20th; at that point, it will be for experimental use and testing, and Riva's service will continue to run. M2 will be our production launch, targeted for the end of Q3 2022. We'll work on increasing our confidence in those snapshots and their correctness by having continuous validation running and improving the alerting, documentation, and runbooks. At that point, we hope that Riva's service can begin to sunset; we'll need to coordinate with the Filecoin dev teams, and also make sure we do a lot of comms in coordination with the broader network, as the snapshot links are probably baked into a lot of the tooling that's out there.

But I'm saving the best for the end here. You're probably wondering: you're storing snapshots in S3; shouldn't we be storing them on our own web3 stack? We are definitely thinking about that, and I want to highlight some of the challenges. The snapshots are a little under 80 gigabytes in size and are produced every two hours, so that's about one terabyte of data a day, and we also expect the size to increase. So we're talking about a fairly large-scale storage system to support this. Snapshots also become less useful with time, so one feature we would like to see here is a retention period, to keep our storage scale down and also to keep costs down. And I think the most important part: we have some fairly tight goals and service levels around ensuring that Lotus nodes are able to bootstrap as quickly as possible, so we need to make sure that any solution we land on can support the download throughput that will satisfy our goals. This actually makes the snapshot service a fairly good case study.
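The back-of-the-envelope numbers behind that case study, straight from the figures above:

```go
package main

import "fmt"

func main() {
	// From the update: roughly 80 GB per snapshot, one every 2 hours.
	const snapshotGB = 80.0
	perDay := 24 / 2 // 12 snapshots per day
	dailyTB := snapshotGB * float64(perDay) / 1000
	fmt.Printf("daily volume: ~%.2f TB\n", dailyTB) // ~0.96 TB/day

	// A retention window keeps storage bounded; 7 days is a
	// hypothetical choice, not a stated policy.
	fmt.Printf("7-day retention: ~%.1f TB\n", dailyTB*7)

	// Throughput needed to download one snapshot fast enough to leave
	// most of the 2-hour sync budget for Lotus itself; assuming a
	// 15-minute download window for illustration:
	gbps := snapshotGB * 8 / (15 * 60) // gigabits per second
	fmt.Printf("needed throughput: ~%.2f Gbps\n", gbps) // ~0.71 Gbps
}
```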
And we are going to be checking, with benchmarking, whether Kubo and Cluster would be able to satisfy those goals. There's also a whole list of cool tools and services that have come out that we plan to test. We would love your feedback and input on this problem; you can join us in #fil-infra on Filecoin Slack, and we would love to hear some of your proposals on how we could store snapshots on our PL web3 stack. That's the end. Thank you.

Awesome. And that brings us to the end of our all-hands for today. Thank you all so much, especially for keeping us roughly on time, and have a wonderful rest of your day.