Welcome, everyone, to our September PL EngRes All Hands meeting. Thank you all for joining us. We're going to do a quick working group update, highlighting some of the projects that we work on, with some of the teams contributing updates on KPIs, highlights, and things like that. Then we have a number of awesome spotlights on new things that have launched or are launching soon across the teams that contribute to EngRes. And then we'll do two deep dives: one from ProbeLab on NAT hole-punching success rate measurements and one from Fil Infra on the IPFS Operator. So super excited for those two.

As a reminder, we are one of many wonderful teams in the Protocol Labs network, where we drive breakthroughs in computing technology to push humanity forward. We think that the internet is one of humanity's superpowers, and we want to make sure it is built on a robust foundation that empowers all sorts of amazing future breakthroughs, and that we're helping drive those forward effectively. We do that by building up awesome open source protocols and projects. We contribute heavily to IPFS, libp2p, and Filecoin, but we're also constantly finding new opportunities to push this ecosystem forward, building and helping nucleate new projects, and contributing heavily to other open source ecosystems, building things like IPLD, Testground, drand, and many more. Our mission is to scale and unlock new opportunities for IPFS, Filecoin, libp2p, and related protocols. We do this in three main ways: onboarding awesome new developers and contributors into this ecosystem, driving breakthroughs in protocol utility and capability, and scaling all of our work through network-native research, development, and deployment.

We're made up of a ton of different teams that all participate in the EngRes Working Group. If you want to join us, please let me know. We now have some of our first EngRes teams participating from outside the PL Starfleet organization, so excited for that. We also have a lot of open roles, so if you're looking to join one of our teams, please take a look here; the open roles are linked, and you can pull them up on your phone. We have a lot of teams looking for engineering managers, TPMs, product managers, infra engineers, software engineers, research engineers, you name it. We would love to hear more about you, so please reach out and see if you can get involved.

Our strategy as a working group for 2022 has remained constant. We're now almost at the end of Q3 and have made a ton of progress against these items. The first is growing the talent working in this ecosystem and empowering them with great developer experience and UX. The second is making sure that we have robust storage and retrieval across IPFS and Filecoin: helping many groups accelerate their data onboarding, building up reliable tools around retrievals at a lot of different layers of the protocol, and enabling effective developer adoption and usage. We also have a ton of work happening around breakthroughs in programmability, scalability, and compute. This is around FVM, retrieval markets, scaling Filecoin chain consensus, computation over data on Filecoin, and many other breakthroughs that we're working hard on. And we do all of the above while keeping our first and foremost focus on critical network operations, acting as stewards around releasing the open source implementations that push these projects forward. We are just finishing Q3.
So this is going to be the last time we talk about our Q3 goals. As a working group, we had four main foci, graded here. We did really, really well on scaling knowledge and developers and on keeping critical systems running. We did pretty well on driving network breakthroughs: we now have 36, I believe, smart contracts deployed on the FVM testnet, which is awesome, super cool. And we still have room to go on robust, accessible storage. We haven't quite hit our very ambitious goals in this area, though we've made a ton of progress. I think this actually needs to be updated to two pebibytes as our all-time high, and I think our successful retrievals are now back at 250K. So we're still making a ton of progress; we just have a long way to go to hit our ambitious goals in this area. We are also spending a lot of time on the overall Filecoin Core Improvements Roadmap. There's a lot of work right now going into Filecoin network v17, which I think we'll hear an update on later. You'll also get an update on sealing as a service, which the Lotus team has been doing some awesome work on in collaboration with our friends on the Outercore team. And there's also a lot of work happening for future milestones, building momentum around FVM and around the things that will launch on top of FVM in the future. So super exciting. And with that, I'll pass off to, I think, Gus for an IPFS update.

Hey, everyone, it's Gus. For those of you who don't know, IPFS is a peer-to-peer web protocol designed to preserve and grow humanity's knowledge by making the web upgradeable, resilient, and more open. We finally passed 500,000 new nodes in a week, which was a cool milestone, and it continues to grow at a constant rate. We held the line on latency at 400 milliseconds for finding content providers, and we also held the line on PRs opening and closing at similar rates to last month. Product updates: we've got some new specs. We've closed out the _redirects file support, which will be coming in the next Kubo release, I believe. We're working on TAR gateway response formats and a new format for specifying denylists for IPFS gateways. And I believe today Kubo is cutting the first RC for 0.16, which includes IPNS V2 signatures being required, so we're finally starting to chip away at IPNS V1; _redirects file support for gateways; and Reframe routing support, so you can configure your IPFS node, your Kubo node, with fine-grained control over content routing. js-ipfs launched enhanced DoS protection and some pretty significant improvements to Bitswap. And on the hydras, we launched integration with cid.contact, which is a step forward for Filecoin/IPFS interop. We added S3 exports for all the data that the hydras see, since they have a pretty good view of the entire network, so that we can do lots of cool data analysis. For the next month, we're going to be prepping for IPFS Camp and working on roadmaps; that's basically going to take up all of our time. We've also got a new UnixFS spec coming, which is going to unblock some people working on other IPFS implementations, and we're upgrading to the latest libp2p, which has a lot of fun stuff in it like the resource manager, something we've been wanting for years in Kubo. That's it.

Super excited to see Reframe. Speaking of all those libp2p goodies, passing off to the libp2p team: Martin, maybe.

Hello, I'm Martin. For those of you who don't know, libp2p is a peer-to-peer networking stack.
It's used by IPFS, it's used by Filecoin, and by a lot of other projects. A lot has happened recently. First of all, we started revamping our docs site. It had been long neglected, but now Danny has joined us and is helping us rewrite everything and get things up to date; stay tuned for updates there. We'll have a libp2p Day at IPFS Camp in Lisbon on October 30th; it would be great to see many people there. The community call is continuing to happen on a regular basis. Last time there were 16 people on the call. We've never had that many people on the call, so that's really exciting.

What's happening implementation-wise? libp2p has traditionally been very good at connecting standalone nodes, using TCP, using QUIC, using hole punching. That has worked very well. We haven't been very good at connecting browsers to the network, which is pretty important because browsers are used a lot on the web. So we are focusing on two new transports there. One is WebTransport. It's a new protocol under development at the IETF. It's already implemented in Chrome, and since the v0.23 release of go-libp2p, which happened this week, it's also supported by go-libp2p. So what can we do with it? Any Chrome browser running js-libp2p can now connect to any go-libp2p node without any further configuration. It just works out of the box. As I said, this is released in go-libp2p; js-libp2p will release this very soon, and this will just work. The second effort we've been engaging in is WebRTC, which will allow any browser to connect to any other browser on the network. We've been partnering with a company called Little Bear Labs. They've been helping us with writing the spec and writing the different implementations. The Go and JS sides of it are currently in code review, and we're still working on the Rust implementation.

Regarding hole punching, ProbeLab has been doing some exciting measurements, and you'll hear more about that in the deep dive, so I won't say anything more about it here. As I already mentioned, we released go-libp2p v0.23 earlier this week with experimental WebTransport support. We also have better handling for DNS multiaddresses, which we use for WebSockets. Other than that, js-libp2p v0.39 was released. We now finally have yamux support, which is super cool because it will allow us, at some point, to deprecate mplex, which has been causing us so many problems over the years. It also has enhanced DoS protection, introducing all kinds of limits. What's coming up for October: we'll be continuing the work on WebRTC, we'll be launching a website giving an overview of all the different transport options that we now have, and we'll also do some work on libp2p Day and prepare all the talks that we are planning to give there. That's it from my side.
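For a rough sense of what the new transport looks like from the Go side, here is a minimal sketch, assuming the v0.23-era go-libp2p API; the exact listen multiaddr format shown here is indicative and should be checked against the release notes.

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	webtransport "github.com/libp2p/go-libp2p/p2p/transport/webtransport"
)

func main() {
	// Construct a host with the experimental WebTransport transport enabled.
	// The listen multiaddr format is indicative; see the go-libp2p v0.23
	// release notes for the exact form.
	host, err := libp2p.New(
		libp2p.Transport(webtransport.New),
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/udp/4001/quic-1/webtransport"),
	)
	if err != nil {
		panic(err)
	}
	defer host.Close()

	// The printed addresses include generated certificate hashes, which a
	// browser-side js-libp2p node needs in order to dial this host.
	fmt.Println("host ID:", host.ID())
	fmt.Println("listening on:", host.Addrs())
}
```

The interesting property is the browser side: a js-libp2p node in Chrome can dial those printed addresses directly, with no relay or prior configuration.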
Hey, given this list of awesome things shipping, there are going to be some exciting talks in Lisbon, so everyone better go to libp2p Day. Over to Peter for IPDX.

Yeah, I'm Peter from IP Developer Experience. Our team caters to the IP stewards mostly, so we try to empower all the teams that you've seen so far to do their best work. Recently, on the Testground front, we now cover interop testing in libp2p with Testground, between Go and Rust and within the languages themselves. That runs on every PR in both go-libp2p and rust-libp2p. On the GitHub management front, we added a new feature that can automate config fixes, so we can now do things like ensure that every repository in a PL org has a specific setting enabled or disabled. So that's pretty cool. We also established GitHub Management Stewards teams in all orgs with GitHub Management, so there are more people to review your PRs. Developments in GitHub Actions: I know many of you have complained that there is no SSH debugging experience on par with CircleCI in GitHub Actions. We want to help fix that, so if you were missing that, reach out. As for what's coming next, we are driving the Kubo release so that we can improve the release process going forward. So that's pretty cool. We're also working on moving Kubo workflows from CircleCI to GitHub Actions. In GitHub Management, we want to start managing the IPFS examples with GitHub Management. And finally, on the Testground front, we are supporting Bloxico in bringing Testground to EKS, we are going to support Little Bear Labs on browser testing support in libp2p so that we can cover JS as well, and we're going to do more on the libp2p testing side of things. We're also preparing for LabWeek and especially libp2p Day at LabWeek, so we're going to be there. Come find us; we're happy to talk with everyone. Thank you.

Awesome. Over to Filecoin. As it says here, with Filecoin we're trying to build a crypto-backed storage network for humanity's most important data. And we have some metrics to share. The network's storage capacity continues to grow steadily. It's slowing a little bit, but we have a lot of good capacity available for a lot of good data, so if you have data to store, come to the network. The amount of data stored on Filecoin has kept increasing over the past weeks and isn't slowing down at all; we're hitting almost 190 PiB in total data stored on the Filecoin network. We also hit a new daily high of data onboarding at 2.18 PiB, which is super exciting, and gets us closer to the goal of over 5 PiB per day.

Next slide. Molly has mentioned this roadmap; I'm just going to give a quick update on what's going on right now. We have the Shark release, which will be our Filecoin network v17 upgrade. I have a slide later that goes into detail, but the rough timeline is targeting mid-November for the Filecoin mainnet upgrade. There's a lot of amazing work happening on FVM; there's a slide later as well. There's a great conversation going on about the f4 address class, which enables user-deployed actors to be assigned sub-namespaces. FVM is also shipping a new release at least every two weeks; the upcoming one will be able to integrate with Ethereum tooling, and it will be in Lotus very soon. So super excited. The early builders cohort zero has graduated, and we're kicking off early builders cohort one really soon. I really want to mention that there were 63 smart contracts being developed in the last release cycle. So if you or someone you know wants to start building on FVM, head into the FVM channel or the Filecoin builders discussion channel to join the network and deploy your own contract. We have more updates coming on Build-A-Net, which is targeting later this year, and more details will come soon. As Molly mentioned earlier, Lotus has shipped the enablers for sealing-as-a-service in Lotus v1.17.2, which is in testing right now. There's a lot of actual sealing-as-a-service integration work happening right now in that channel; if you are interested, whether you're a storage provider or want to try the service, help us test it out.
Go join that channel as well to get set up. We also have an amazing update from our team on Halo 2: it's now fully integrated into the proofs code and can be used to generate proofs. The team is working on benchmarking, and the current benchmarks show the performance is slightly slower than Groth16, so the team is working on optimizing that. Once Halo 2 proof recursion is released, we will have an even better understanding of the performance aspect of it. The team is also working on integrating GPU field arithmetic into Halo 2 so that proving time can be even faster. If you want to follow this work, go join the proofs channel on the Filecoin Slack. So many exciting milestones being built towards; awesome work, teams. Super, super exciting.

We're going to jump into some more team updates from some of the specific teams working towards this, starting with NetOps. This week, our NetOps update is still around our KPIs. Our p95 TTFB has dropped to around four seconds, and we're still working to push it down even further. We're also working with the Saturn team to see if we can leverage our decentralized CDN to make our performance even better. On pinning, there's still a lot of activity: 532 million pins handled over the last two weeks. Our gateway requests increased a lot because Infura stopped their public gateway. It's a good thing in the sense that it means we can handle more traffic with very good quality, but in the meantime we definitely want to encourage the community to run gateways with us. If you want to run a gateway for the community, let us know; we can help. We want to get our share of the number down while getting the total number up for the whole network. Unique users, at twelve million, also increased a lot, again because Infura stopped their gateway, but we're hoping the community comes in and helps us run gateways to make the IPFS gateway layer really decentralized for the network. Yeah, that's it. Thank you.

Awesome, thank you. Over to Patrick for Retrieval Markets. Hello, Retrieval Markets update. In the Retrieval Market Working Group, we set out in 2022 with an audacious goal: to build a retrieval network that would serve sub-second retrievals of data stored on Filecoin. We're making good progress towards this, but I think that goal is perhaps at risk of not getting done in 2022. I think we just took on a little bit too much, but hopefully by the end of Q1 2023 we should start to connect these retrieval networks to storage providers and really start to see the integration between different parts of the network, and the retrieval times, the time to first byte, tumbling down. The Retrieval Market Working Group has a collection of teams working on different pieces of this overall puzzle, and inside EngRes we have two teams working on particular parts of this too. The Saturn team is working on a decentralized CDN for Filecoin. Currently they're working on what's called the Saturn Sunrise Program, getting the first set of L1 nodes to join the network and give feedback. In time for Lisbon, we hope to have the public launch of L1 nodes so anyone around the world can run these. These are the entry points to the Saturn network that people can run, and they will be running primarily in data centers. There are also the L2 nodes coming down the line later this year and early next year, and those are the nodes that can be run on people's home computers.
And then there's Station, speaking of home computers: a desktop app for Filecoin. In time for Lisbon, we hope to have a closed alpha release where everyone on this call should at least be able to download it, and then by the end of this year, a public release of this desktop app, which will open up so many different opportunities, not just in retrieval markets, but across all parts of the Filecoin ecosystem.

Awesome. You can all reach out to Patrick and the teams on Slack. Over to Vik for CryptoEconLab. Hi, everyone. A quick update on the team: we're continuing to grow. Sean Shrither has just joined us as a research scientist; please welcome him. He's super nice, doesn't bite. The next thing is FIP-0036. The final update here is that it's being put up to a poll on FilPoll right now, and the poll closes September 28th. In line with the poll, the governance team from the Filecoin Foundation has released a governance process, which you can read, that will determine whether or not this FIP will be included in the NV17 upgrade following the results of FilPoll. We have released a public dashboard where you can see running poll results. I've taken a screen grab of that on the right so you can see the various stakeholder groups, how they voted, and their participation rates. This FIP has been pretty contentious. If you've been tuned in, there's been a lot of discussion and debate over the past three months. As a result, and due to the need to find some kind of resolution, the governance team has decided to put this up to a poll. A quick note on why this matters: we have done some public AMAs and released public materials on this, but the short summary is, we believe the FIP is imperative due to the potential state of the global economy coupled with the crypto downturn we are seeing at the moment. This FIP is designed to help stabilize the network by creating incentive structures that protect the network against potential shocks and continued downward movements. We have publicly stated that if this FIP were to be rejected, it may not be considered again until Q2 2023, just due to network upgrade timelines and the availability of teams to continue working on this. So that is an important opportunity cost or trade-off to weigh when making a decision and voting, but the important thing is: regardless of your stance, please vote. It does not matter whether you vote to accept or to reject. These are our viewpoints on why we think it's good, but on a higher level, we think participation in general should be the goal. So regardless of your stance, please vote on the FIP poll, whichever stakeholder group you fall into. I like that idea, Nikola; the results will continue to be publicly available, and that is how this decision will be made. And yes, I agree with Jenny: please do your own research. I've tried to link as many helpful documents as possible. The best place to start is probably the FIP draft itself, and then move on to the various discussion threads, et cetera. The last thing: we understand that this is very complicated. It's not a simple decision, and doing your own research is important. So we are also hosting office hours tomorrow, Friday, 9 a.m. UTC to answer any questions. Please drop in; I'll be there, as will some others, to talk about the FIP. Thank you all. Awesome, go vote.
Big thank you to the GLIF team for making it really easy for people to vote, even using the GLIF wallet or Ledgers or other things like that. So if you're a token holder, you can vote. If you're a storage provider, you can vote. If you are a client storing data on the network, and I know some people have stored their entire backups, you can vote. And if you're a core dev, you can vote. So definitely come and engage; voting is good. Do your research and form your perspective on this, but be heard. Cool. Passing it off to George for Bifrost.

Hey everyone, I'm George from Bifrost. Echoing what was said earlier, we are now up to 1.3 billion requests a week and 12 million unique users, most likely from taking on all the traffic from the now-defunct Infura gateway. Time to first byte is down to four seconds, from seven seconds, I believe, when we last reported it. We've updated the clusters to the latest version, which has a couple of fixes for goroutine leaks and memory leaks, so that should make things a lot better. We're also in the process of migrating to a new disk layout on ZFS, which should result in improved caching and monitoring. Thank you, Matt Gettys, for putting that together for us. Our team has also grown from two to four: welcome, Jeff and Carl. We're actually wrapping up our team week here in Barcelona. Thank you everyone for joining, and thank you to Jesse and Hector for all the help with the logistics and planning. Quick opportunities: we've switched to consistent hashing, which has had a huge impact on time to first byte; we're seeing much better numbers. However, we're starting to identify some hotspots, and I think we can improve the hashing by tweaking the values a little bit, so we're going to test that next week. Bad bits are a bit of a pain point. We're trying to block thousands of CIDs at once, which is what we've been getting in the abuse mailbox lately, and it's causing the pipeline to lock up or create PRs with conflicts, which ends up creating a lot of manual work, so we're looking for a fix for that. We're also working with David Justice to export gateway traffic logs from Elasticsearch and make them queryable for user aggregation, which would also let us trim our Elasticsearch bill, which is currently growing. Find us on Slack or in Notion. That's it.

Awesome, great to see that increased usage without harming performance.
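For anyone unfamiliar with the technique Bifrost mentioned, here is a minimal, self-contained Go sketch of consistent hashing; this is an illustration of the idea, not Bifrost's actual code. Requests for a CID map onto a hash ring of cache nodes, so adding or removing a node only remaps a small slice of the keyspace, and the per-node virtual-node count is the kind of value you can tweak to smooth out hotspots.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strconv"
)

// Ring is a minimal consistent-hash ring: each physical node is hashed
// onto the ring at several "virtual node" positions to spread load.
type Ring struct {
	keys  []uint32          // sorted hashes of virtual nodes
	owner map[uint32]string // virtual-node hash -> physical node
}

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(nodes []string, vnodes int) *Ring {
	r := &Ring{owner: map[uint32]string{}}
	for _, n := range nodes {
		for i := 0; i < vnodes; i++ {
			k := hash32(n + "#" + strconv.Itoa(i))
			r.keys = append(r.keys, k)
			r.owner[k] = n
		}
	}
	sort.Slice(r.keys, func(i, j int) bool { return r.keys[i] < r.keys[j] })
	return r
}

// Lookup returns the node responsible for a key (e.g. a CID): the first
// virtual node clockwise from the key's hash.
func (r *Ring) Lookup(key string) string {
	h := hash32(key)
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= h })
	if i == len(r.keys) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.keys[i]]
}

func main() {
	ring := NewRing([]string{"cache-a", "cache-b", "cache-c"}, 64)
	fmt.Println(ring.Lookup("bafybeigdyrztexamplecid"))
}
```

Raising the virtual-node count (64 here) evens out the distribution at the cost of a bigger ring, which is roughly the kind of tuning the team described.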
Over to our spotlights, starting with the Shark upgrade: Jennifer. On the Shark release, which is the codename of the network v17 upgrade: one of the core contributors to this upgrade pinged me super casually saying, hey, I've been surfing with some sharks on my vacation. And I'm like, oh, great, cool. Anyhow, there are a lot of amazing FIPs that will be included in the network v17 upgrade. We're enabling a beneficiary address for storage providers, which is one step towards better lending markets for SPs; with FVM it will get even better, I believe. We are also resolving a concern about a significant weakening of Filecoin's PoRep security guarantee in this upgrade. And we are focusing on enabling programmable storage markets ahead of the FVM launch; FIP-0041 and FIP-0045 are for that. A special call-out to decoupling Filecoin Plus from the built-in storage market, which is the star of this upgrade. We are making DataCap, and the QA power it brings to a sector, actually be associated with the data itself instead of the deal. So now the client, or anyone who cares about the data stored on Filecoin, can hold the term on the DataCap and extend that term if they want, so that storage providers will store that data for as long as possible on the Filecoin network. It's a huge step towards user-programmable storage markets. And yes, we're introducing our first fungible token contract: the DataCap actor is going to be one of the built-in actors shipped in this release. I linked the fungible token standard and the token contract library in the slide. It's at a very early stage, so we're going to iterate over time; if you have experience or interest in this, please participate in the conversation. I don't want to forget FIP-0044, thanks to which we are enabling metadata authentication for user-deployed actors. We are trying to set a good foundation for a lot of use cases that can be deployed with FEVM before the next network upgrade; that's our goal. As mentioned, FIP-0036 is currently being polled, and the community and ecosystem will decide whether it is accepted. Because of the time sensitivity, the core devs have agreed that if it is accepted, we will include it in the network v17 upgrade, and because it extends the sector's maximum lifetime, we also want to make sure our PoRep lifetime security holds, so we are considering finalizing FIP-0047 in this upgrade as well. As a rough timeline, we're still targeting mid-October for the Calibration upgrade and early-to-mid November for the mainnet upgrade. Huge thanks to everyone for all the development work. Also, shamelessly plugging: we're doing a Data Onboarding Summit at FIL-Lisbon, and the whole team will be there, so if you have any questions about Shark, come find us. Awesome.

Over to Magik for sealing-as-a-service. So yeah, finally, after a whole bunch of discussions and a lot of implementation work, we've landed basically all the code that's necessary on the Lotus side to support sealing-as-a-service. This is basically just a few new APIs. What they allow Lotus to do is import sectors which have been sealed externally, including sectors that have been only partially sealed. So miners can pay some service to do PreCommit 1 and 2 for them, then download those sectors and finish the sealing themselves. This basically allows miners to optimize how they use hardware, and it allows miners to pay other services to do the sealing for them. Maybe in the future we can extend it so that service providers run all the compute needed to run a Filecoin node and you would only store data locally. This could let us deliver the "mine Filecoin with a NAS on your desk" use case; that's not possible right now, but with this work we can get there. The other use cases that can emerge from this work are sealing compute marketplaces, which could morph into more general compute marketplaces later. This is also something that Bacalhau could provide in the future and integrate with. So yeah, there are a lot of very exciting features emerging from this seemingly simple Lotus feature, and a lot of them are possible thanks to the design discussions that have been going on for quite some time. If you want to check out how it was designed, check out that design discussion, and hang out in the Filecoin sealing-as-a-service Slack channel. Yeah, that's sealing-as-a-service.
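To make the shape of that split more concrete, here is a purely hypothetical Go sketch; the type and method names are invented for illustration and are not the actual Lotus APIs. It shows the boundary the new APIs create: the compute-heavy PreCommit phases can run remotely, while the miner imports the result and finishes locally.

```go
// Package sealing: hypothetical sketch only. These interfaces are invented
// to illustrate the sealing-as-a-service split described above; they are
// NOT the actual Lotus API surface.
package sealing

import "context"

// SectorID identifies a sector being sealed (illustrative).
type SectorID struct {
	Miner  uint64
	Number uint64
}

// RemoteSealer is the part a third-party service could run: the
// compute-heavy PreCommit phases (PC1 is sequential and memory-bound,
// PC2 is GPU-friendly).
type RemoteSealer interface {
	PreCommit1(ctx context.Context, sector SectorID, pieces []byte) (pc1Out []byte, err error)
	PreCommit2(ctx context.Context, sector SectorID, pc1Out []byte) (sealed []byte, err error)
}

// LocalMiner is the part that stays with the storage provider: it imports
// the externally (possibly partially) sealed sector and runs the Commit
// phases itself before posting the proof on chain.
type LocalMiner interface {
	ImportSector(ctx context.Context, sector SectorID, sealed []byte) error
	Commit(ctx context.Context, sector SectorID) error
}
```

The design point is simply that the expensive, stateless compute sits behind one interface that can live anywhere, which is what enables the marketplaces mentioned above.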
Awesome. I know people have for a long time wanted to be able to participate in the Filecoin economy with devices with less GPU capacity, so that's super exciting. It's also super exciting for all of those poor proof-of-work miners who no longer have a home in Ethereum, who can maybe come and bring their services here without having to invest in a lot of storage hardware: they can partner with existing SPs and just provide their GPU sealing services to help scale our community, which is awesome. Thank you for working on this, Magik; excited to see it come to fruition. Passing over to Steve, speaking of the merge.

Yeah, you bet. There's not necessarily a new announcement versus last week, but it's worth repeating: libp2p is securing Ethereum's mainnet, a huge milestone. One of the things I love about this is that it was very much a long-term effort. There are talks back from Devcon 2 where David and Juan had already started to beat this drum and talk about how they could integrate. We had great notes and ideas being brainstormed; we connected with Parity to do a Rust implementation of libp2p in 2017 to 2019. We got into the Ethereum networking specification. There was a whole bunch of community management and implementation management work that had to go on, which Raúl and others were helping lead. And there was key new functionality that had to be added, like Gossipsub, which happened in 2020. So it's been a long journey. I love that we were planting seeds six-plus years ago that are now really bearing fruit. Major props to folks in the past like Raúl, David, and Juan, and also to the current team; there's been a lot of work going on behind the scenes to make sure libp2p is secure and ready for the big moment of the merge. And it's been successful. So now we see multiple libp2p implementations helping secure a network of over 400,000 validators. Great job to the team, and excited for more good times ahead.

Awesome. Over to our last spotlight: FVM. So what's happening with FVM? There's a ton of news across the board. I've structured it roughly by the working groups that we have in the FVM team. First, a bunch of updates from engineering. As others have announced previously, we have a new testnet that is live, operated by Factor8 Solutions (that's Patrick). It's a testnet that is updated roughly every week, because we are conducting incremental delivery of the FVM M2.1 roadmap. With the Selenium release, which went out two weeks ago, we had 63 smart contracts deployed on that network. With the new release that went out last week, which is named Copper, we had 23 smart contracts deployed on the network, and we're expecting to put out a new release next week. Hopefully everything will go well with that one, because it's a huge one: it's going to add full Ethereum JSON-RPC support to Lotus, which is then going to unlock a lot of downstream testing and integration effort with native Ethereum tools like MetaMask, Remix, and Foundry, because all of these access functionality of an Ethereum or EVM-compatible network through that JSON-RPC API.
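Once that lands, talking to Lotus should look like talking to any Ethereum node. Here is a minimal sketch using go-ethereum's ethclient; the endpoint URL is a placeholder, not a real Wallaby address.

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Hypothetical endpoint: a Lotus node exposing the Ethereum JSON-RPC
	// API (placeholder URL for illustration).
	client, err := ethclient.Dial("https://wallaby.example.com/rpc/v1")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// A standard eth_chainId call, served by Lotus instead of an
	// Ethereum node; tools like MetaMask make the same calls.
	chainID, err := client.ChainID(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("chain ID:", chainID)
}
```

This is the same call path MetaMask, Remix, and Foundry use, which is why one API unlocks that whole tooling ecosystem.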
We're also pushing for feature completeness of the EVM runtime, which we're codenaming FEVM, by adding support for workloads that are still missing from that implementation and currently panic. Now, there are several architectural changes going into the Filecoin network to support the structural changes needed in the model: things like native addressing for EVM contracts in the Ethereum style, and support for native Ethereum transactions emitted and issued from wallets like MetaMask, WalletConnect, and so on. We want to support those out of the box. So there are two key changes that we're proposing for inclusion in FVM M2.1. The first is the f4 address class, which Jennifer already mentioned a little bit. This is a hierarchical, delegated address class: basically, there will be address managers on chain, where the first one is going to be an Ethereum address manager, and these will be able to provide different addressing identities for actors on chain. To back up a little: the FVM is built as a polyglot runtime that is able to accommodate runtimes from other chains, to make integrating contracts from those chains easier and simpler. In many cases, those runtimes come with implicit assumptions about addressing; this is usually the case. So having a way for the Filecoin protocol to work with those addressing assumptions at a native level is pretty important. That's where f4 comes from. The second is account abstraction, which allows us to validate Ethereum-native transactions within user space, without actually having to modify the protocol; that's why it's important. And then finally, we're also starting the work stream on EVM storage footprint optimization in partnership with ConsensusLab.

Now, on the developer experience front, we had a ton of success with the early builders F0 program, which focused on native tooling. There's a link in there with a blog post on everything that was built. We're graduating the F0 cohort in the next weeks, and we're starting a new cohort, called F1, which is going to focus on EVM and FEVM use cases. We received 55 applications and selected 31 of those, dividing the participants into two groups: the core group, which will get very close support from the FVM team, and the peer builders, who will be more community-oriented. We're also incubating the developer forums and the docs; that's a provisional URL that is hopefully going to move to a filecoin.io domain soon. Matt also published a Twitch stream demoing how to deploy contracts on Wallaby together with Jim Pick. Jim Pick is also on fire: he's putting out a ton of walkthroughs on Observable HQ with every release that we put out. He's the first one to grab each release and just experiment on top of it.
It's just awesome to have that energy among the early builders. On the product front, we're designing what's going to be a new Filecoin network: a canary network with a spin-off economy from Mainnet. This is a network that is going to branch off from Mainnet and basically allow users to experiment with early technology, such as the FVM before it's actually released on Mainnet, using real value. We're planning to launch Build-A-Net, if everything goes well, by November the 8th. This will be just after FIL-Lisbon, LabWeek, and all the events in Lisbon, so you'll hear a lot of drumrolls and a lot of presentations around this to pave the way. Addy is also conducting a ton of interviews across the org to collect input on the use cases that people want to build immediately on FVM. Remember, the goal and the success of FVM is not just shipping the technology, but enabling the building of the use cases that people have been waiting and dying to build for a long time. So we want to make sure that we understand what those use cases are and that we provide guidance to these teams. It's likely that we're going to be putting out solution blueprints for some of these use cases going forward, to provide light guidance on how to build things like compute-over-data networks, L2s, or perpetual storage, and other things that people really care about. Now, if you have a use case and Addy hasn't reached out to you yet, make sure you reach out to Addy so that he gets a chance to put it on the radar for us. This is also very important because a key epic in FVM M2.1 is re-engineering the public APIs of the built-in actors so that they can cater to all those use cases that people want to build.

On the audits front, we're booking two external auditors for FVM and preparing security audits. We're planning to involve the PL network at large to come and vet the code base, so there will be some comms going out about that. We're also assembling an internal red team: if you want to participate in reviewing and auditing the FVM code base as it gets closer to being production-ready, reach out to Dragon and he'll add you to the list of potential auditors. We're also going to invite the security research community at large to participate in reviewing. So we're scouting: if you know really talented security researchers, potentially working in academia or elsewhere, who aren't going to conduct a formal audit but could act as white-hat hackers trying to break a testnet, make sure to speak to Dragon as well so that we record them in our candidate list and reach out to them. As for upcoming launch plans, the current projection for Mainnet is February the 8th, if everything goes to plan, and Build-A-Net, if everything goes to plan, is scheduled to launch on November the 8th, as I said a few seconds ago. We're also preparing our presence in Lisbon, so expect the FVM team to be there to meet with all of you, to chat about everything you want to build, and to do a ton of knowledge transfer as well.

Awesome, super exciting. And we have just enough time, five minutes each, for our two deep dives, starting with ProbeLab: Dennis.
Hi everyone, this is Dennis from ProbeLab. At ProbeLab we measure the performance of web3 protocols, benchmark those protocols against target milestones, and propose improvements. For this particular measurement campaign, we're taking a look at the NAT hole-punching success rate, as Martin already teased earlier in this presentation. You may know that NAT traversal is a quintessential problem in peer-to-peer networks, and currently we rely on relay peers that act as a proxy for all our traffic. Since Kubo 0.13, all peers actually ship with an alternative technique, enabled by default, that allows two peers that are both behind NATs to connect to each other: hole punching via the DCUtR protocol, which is also linked there. DCUtR stands for Direct Connection Upgrade through Relay, and I'll go into the details in a bit. In this measurement project, we want to find out the success rate of this protocol, so how often peers are actually able to connect to each other, and maybe also uncover potential improvements to this technique.

Briefly, about the DCUtR protocol: hole punching in general happens when two peers simultaneously open a connection to each other at their predicted external addresses. In this case, the routers of both peers update their state tables, having seen a packet going out, and once they've seen a packet going out, they also allow packets coming in from the address the packets went out to. If both peers connect to each other simultaneously, they are actually able to establish a TCP or QUIC connection, as you wish. The problem here is the synchronization: both peers need to do it at the same time, and this happens at what we call a rendezvous point. Actually, since Kubo 0.13, all deployed Kubo nodes can act as such a rendezvous point. Max has given a great talk about the whole protocol at P2P Paris earlier this year; I highly recommend checking it out, it's linked down below.

Right, so how do we want to measure the success rate of this protocol? The challenge is: how do we find peers that are behind NATs? The idea is that we want to do a lot of hole punches to a diverse set of peers, but we don't actually know where they are. The main idea is that we deploy a honeypot to attract those peers behind NATs. This honeypot is just a DHT server node that participates in the DHT and is a very stable node. Since it's very stable, we hope that other peers include the honeypot peer in their routing tables, so when peers behind NATs request content from the network, they actually come across this honeypot. If the honeypot detects a peer that supports the DCUtR protocol and is only reachable through a relay peer, which is the indicator that it's behind a NAT, then we save this inbound connection to a database. Then we have a second component, a server, which serves those detected NATted peers to a fleet of clients and also exposes another API to track the hole-punch results. These clients are run, or are supposed to be run, in a diverse set of home networks, and they are just DCUtR-capable libp2p nodes. We have two implementations, one in Rust and one in Go. The clients periodically query the server for NATted peers, perform the hole-punch dance, the DCUtR protocol, and report back the outcome, whether it worked or not.
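To illustrate the honeypot's detection logic, here is a hedged go-libp2p sketch; this is not the actual ProbeLab code, and it assumes the v0.23-era API. It flags inbound connections that arrive over a relayed /p2p-circuit address and whose remote peer advertises the DCUtR protocol.

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	// A stable, publicly reachable host; the real honeypot additionally
	// runs as a DHT server so other peers add it to their routing tables.
	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer host.Close()

	host.Network().Notify(&network.NotifyBundle{
		ConnectedF: func(_ network.Network, conn network.Conn) {
			// Peers reachable only through a relay dial us over a
			// /p2p-circuit address: treat that as the NAT indicator.
			if _, err := conn.RemoteMultiaddr().ValueForProtocol(ma.P_CIRCUIT); err != nil {
				return // direct connection, not interesting here
			}
			p := conn.RemotePeer()
			// Check whether the peer advertises DCUtR support. In real
			// code you would wait for identify to complete first.
			protos, err := host.Peerstore().SupportsProtocols(p, "/libp2p/dcutr")
			if err == nil && len(protos) > 0 {
				// The real honeypot persists (peer ID, addresses) to a
				// database for the hole-punching clients to dial later.
				fmt.Println("detected NATted DCUtR peer:", p)
			}
		},
	})
	select {} // keep the honeypot running
}
```

The two conditions mirror exactly what was described: relay-only reachability as the NAT signal, plus DCUtR support so a punch attempt is even possible.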
And the Rust client is actually implemented by Eleanor and Max, so shout out to both of them. The progress so far: the infrastructure is there, so the architecture that you've just seen is deployed and working, and there are two Grafana dashboards, also linked at the link you can find down below. This is probably the most interesting part: what's the success rate? There are already some results here. Right now we have only four clients deployed, and the success rate for these four clients is, as you can see, around 80%. You can see the time on the x-axis and the success rate on the y-axis. The success rate is around 80%, but it's lower for peers that are in a VPN network. Another improvement suggestion we could make: the protocol actually attempts the hole punch a couple of times, but we found with these preliminary results that if it doesn't work the first time, the second and third attempts also won't work, so we could actually stop the protocol there. And there are many more results for the four clients that I showed here; you can find them at the link under "more results". So what's next? We want to extend the fleet of clients that we've deployed. So this is a call-out for all of you to participate: please check out this Google form. It will just ask about your home network conditions, like which router you have in your home network. You will then receive an API key, and you can download the punchr clients that you've seen earlier, participate, and contribute some data from your own network. And yeah, that's my brief deep dive already. Thank you.

Awesome. Now over to our last deep dive on the IPFS Operator: Corey. Hello, my name is Corey. I'm with the Filecoin Infrastructure team, and this is about the IPFS Operator. The IPFS Operator is a Kubernetes operator designed to help people run an IPFS cluster or set up IPFS nodes in a Kubernetes environment. The key feature is that it is a turnkey IPFS cluster. You can see the GIF working over there on the right; it's going through the entire process required to set up a full-fledged IPFS cluster. You can see that it's quite simple; it boils down to basically one command. The goal of this project is to spread the adoption of web3 and enable it to be more easily run in production environments, particularly by users who run Kubernetes. What we would like is that, for Kubernetes operators, if they have a storage need and are searching for which storage project or product they want to use, IPFS should be right there next to Ceph and the rest when they go through their catalog and wonder which product to select. IPFS should be right in their face, it should work for them, and they should have a good time with it. This project is being developed in partnership with Red Hat, so I've included our GitHub link right there. We have moved recently: the code is now in the ipfs-cluster org. It is the same code and the same project. This is a zoomed-in version just to really hammer home how easy it is to set up. You can see what we've got here: it's a very simple configuration file. You can just use the standard Kubernetes utilities that operators of this type will be very comfortable with and apply it to your cluster.
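As an illustration of what that one-step setup amounts to programmatically, here is a hedged Go sketch using client-go's dynamic client. The group/version/kind and spec fields are guesses for illustration only; the operator's actual CRD schema is defined in the ipfs-operator repo, and in practice you would typically just write the equivalent YAML and apply it with kubectl.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig and build a dynamic client.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A minimal custom resource describing the desired IPFS cluster.
	// All names and fields below are illustrative, not the real schema.
	cluster := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "cluster.ipfs.io/v1alpha1", // illustrative group/version
		"kind":       "Ipfs",                     // illustrative kind
		"metadata":   map[string]interface{}{"name": "my-ipfs"},
		"spec": map[string]interface{}{
			"replicas":    3,      // illustrative field names
			"ipfsStorage": "10Gi", // illustrative field names
		},
	}}

	gvr := schema.GroupVersionResource{
		Group: "cluster.ipfs.io", Version: "v1alpha1", Resource: "ipfs",
	}
	_, err = client.Resource(gvr).Namespace("default").
		Create(context.Background(), cluster, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

The operator watches for resources like this and reconciles the full cluster (pods, services, config) from that single declarative object, which is what makes it "one command".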
A brief overview of how this actually works: this is actually two Kubernetes controllers built into one binary. One of them is the IPFS Cluster operator itself, and the other is a controller for libp2p relays. That one is an optional component you can add in case you would like to use hole-punching services beyond the public ones. But what I want to stress is that this offers a complete package for everything you might need to run your IPFS cluster in Kubernetes, including all the things listed here: configuration, cluster following, all of it straight out of the box. How do I get this thing? As I mentioned on the first slide, I want this to be right in front of your face when you go to select a storage platform. It will soon, although not yet, be featured on OperatorHub, which users of OpenShift in particular will be familiar with. This is a catalog of other Kubernetes operators, and the IPFS Operator will sit in this catalog right alongside things like SAP or Rook. Additionally, and this is mostly for internal use cases at the moment: if you happen to be running on our Weave clusters, IPFS is available as an option, and you can get a cluster built by us that will have the operator pre-installed. There are also some features that we are soon to land. These are not entirely functional if you were to run the code right now, but there are PRs available for them. We would like to have better support for external-facing services: wouldn't it be great if you could take advantage of this one-click installation and then use it to operate your own IPFS gateway? I think so, and we can make that happen. We have some changes coming down the pike that will land before we get to IPFS Camp. Where can I learn more? I've put some screenshots here. Documentation can be found in a number of places now; I've listed these in the comments of this slide if anybody is wondering, but you can always reach us at the IPFS Operator GitHub page. There is also documentation on Read the Docs, and we are featured on Red Hat's Next project, which I have a screenshot of there. There have been a couple of talks that discussed this project: one by myself at the IPFS Thing that happened in Iceland, and my colleague Oleg from Red Hat gave a talk at DevConf 2022. Adding to this list, Oleg and I will be at IPFS Camp in Lisbon, so we will be doing a talk there as well. We will see you there. Thank you so much.

Super, super exciting progress, everyone. Thank you for all of the awesome updates, and excited as we build momentum into actually getting to see so much of the community in person next month. Thank you all, and I hope everyone has a wonderful rest of their September.