Welcome everyone to our August PL EngRes working group All Hands. We have a packed agenda today. We have a very short working group update just sharing some of the updates on KPIs, highlights, and our OKRs for the quarter. And then we have lots of spotlights from Boost, the retrieval dashboard, the SP reputation working group, and a ton of others; I will not list all of them. And then we have a deep dive on the FIL Dev Summits happening in Singapore and Iceland in September, which is only one month away, which is super exciting. And then we should hopefully have some time for Q&A and other questions. As a quick reminder, the PL EngRes working group is a gathering of teams working across the PL network, where we are motivated by building better technology for the internet so that we can secure a robust framework for future breakthroughs, starting with a lot of the amazing Web3 projects that have come out of Protocol Labs over the years. We're really spending a chunk of time on things like IPFS, libp2p, and Filecoin, which we'll hear updates from in a second. Our mission is to scale and unlock new breakthroughs for these projects by driving breakthroughs in protocol utility and capability, scaling network-native research and development, and stewarding and growing open source projects, networks, and communities. We have a whole ton of different working groups. We're excited to welcome the data programs team to our working group, sharing some of their awesome updates expediting data onboarding to the Filecoin ecosystem. Our strategy stays very consistent: a first focus on helping critical systems be stewarded and grow over time, growing the amazing network of teams contributing towards these awesome protocols, and then two main pillars, one around robust storage and retrieval.
So great to have the data programs folks joining us on that side as well, and then one around compute over Filecoin state and data, building on things like the FVM to add new Layer 2 capabilities, additional chain space, and compute over the large amounts of data stored in Filecoin. A couple of updates here. You may have seen that we updated this to a check mark last time: IPC M2 officially exists on Filecoin mainnet, which is super exciting. Sealing-as-a-service is also out there in the wild, getting used by a ton of folks, which is really, really exciting to see, and there's great progress across the board on Saturn, Lotus, Lotus Miner, and Station, plus gearing up for both IPC M3 and the next Filecoin network upgrade, nv21. Now we can do our OKR update, starting with Lauren. Thanks, Molly. On keeping critical systems running, growing, releasing, scaling, and secure: since we're almost halfway into the quarter, most things are yellow. On improving IPFS Gateway debuggability, work is on track; there is an MVP out and iterations are in progress. With respect to landing three high-value FIPs, there are discussions under way for nv21, which is planned for early November, so we'll see what actually ends up landing in that. On the five community bootstrap nodes, we're currently at zero additional, though there are talks for two others, and we are on track on moving users off of chain.love. And then on Filecoin chain robustness, we've launched the state profiling tool, so that is green, and we have two new disputers and two new fault detectors moving towards that KR. And there's an amazing cinematic masterpiece from TippyFlits on how to use the disputers that I recommend everyone check out. Steve, go ahead. Thanks, yeah. So for hyperscaling, accelerating the talent and teams contributing to the PL stack, these KRs are all on track. A few details: CryptoEconLab has been updating their sales framework to better understand prospective clients.
And so they've been going deeper on protocol design, mechanism design, simulation, etc., with these clients, and they sent two work proposals to prospective clients last week. So that's great to see. Congrats to drand, which has landed one of their lighthouse customers, called Proof of Play. And then on the Helia IPFS effort, this covers workstreams across Kubo, Helia (JS), js-libp2p, and go-libp2p, and those are all progressing in different ways. So, so far on track for this objective. Thanks, go ahead, Matthew. I think Matthew is at a team summit today and I soon is out of office, so I am jumping in to help out here on scaling data onboarding and CDN-speed retrievals to drive superlinear adoption. With Lighthouse users, there are two partners lined up for when code is ready for our new project, Motion, the easy Filecoin integration tooling work that's happening across teams, which is really exciting. There's work in progress on unsealing fixes, and so we're on track for that one. Saturn is on track there as well; the team is super close to meeting both latency and correctness goals for M1, which would allow us to start serving the IPFS.io gateway traffic in prod, not just the mirrored traffic that we've been serving for the past couple of months. And finally, web3.storage onboards 100% of historic uploads to w3up; this is also on track. These are in Filecoin Plus deals with a good SLA, and the future time-to-data in Filecoin deals landed as well. For number four, upgrading Filecoin with new L2 capabilities and shardable chain space: we have over 11 million FIL TVL, which is super exciting. So making good progress, over 200K wallets and, I think, just breaking 2,000 unique contracts. So good progress on our FVM adoption and capabilities. We're still, I think, at one end-to-end aggregator, but they have also done some of the IPLD reachability work. So making good progress on some of the later components of the FVM capability side of things.
For IPC, we are on track for a Fendermint-based IPC implementation. Right now, three dev teams have expressed interest in this, prominent early-access dev teams, and a few more are in the works of scoping. And then last but not least, there is a live CoD (compute-over-data) testnet (sorry to steal someone's spotlight thunder) that a number of people were building on during EthCC two weeks ago, which is super exciting. It's still early days: they're not yet advertising for people to join the network in terms of number of nodes, but we're actively submitting early jobs and starting to run additional ML workloads on this new testnet. So I think we're on track for some really exciting momentum in Q4. All right, handing off to IPFS. Yeah, I'll take this. So, IPFS: we're making the web work peer-to-peer with content addressing, so that content can be verified independent of the provider or the transport method. In terms of some of our KPIs, no major callouts here regarding the public IPFS DHT and its network size and performance; no major shifts. We also are looking at IPNI performance, as you can see in the bottom right-hand corner. There are these graphs plus a lot more on probelab.io. But yeah, nothing to get into this time around; these are some of the metrics we look at. In terms of the next slide, this was a quieter month with events and vacations, but there are a few things we want to update on. One is that Kubo 0.22 did ship. This includes IPIP-412, which allows signaling for block order in CARs on trustless gateways. So that's out there. We have the newest go-libp2p, which, when measured in Kubo, was reducing the number of dials by 30% with low to no latency impact. And there are a host of regression fixes, which is unfortunate, but also some important security patches. So if you are a Kubo user, please update.
We've been talking (I think we've mentioned it before) about the major lot of work going on behind the scenes for IPFS Companion, updating it to use the MV3 (Manifest V3) model for web extensions. That's in its final beta right now, so please try it out. We'd appreciate any feedback, because when we push that live to the Chrome Web Store, it does reach 60 to 70,000 monthly active users, so we want to get that right before we do so. And we will do a spotlight to go deeper on this project once it's actually fully shipped. But this is setting the stage for a Kubo-less Companion: not needing to have a Kubo instance running in order for Companion to work well and enhance the browser. On the Helia front, even more examples have been added, covering popular frameworks, bundlers, and testing tools. And the 4EVERLAND group, who have a pinning service and a gateway, have been added to the Web UI and the pinning service compliance suite. So another option there for folks. And we've mentioned probelab.io before, but that website is launched and fully announced (version one of it, I should say). Some things that are coming up: we're overdue on some communication regarding the Kademlia DHT roadmap, and also work just happening generally in IPFS, particularly around areas like HTTP, debuggability, and supporting large blocks. We will get those public comms out next month, before the next all-hands, I should say. And there's been a lot of talk before about the gateway conformance work; we will be wrapping a bow on that so that there is a public dashboard where you can see various gateway implementations and gateway instances in terms of how conformant they are. So that will be coming out. On that OKR item regarding allowing content that's being offered in the browser to be retrievable from nodes like the IPFS.io gateway, we're really pushing forward, and we should have some good progress to share in the next update.
And I guess the other thing I will call out is there's good work happening in libp2p enabling libp2p with HTTP, and we'll be adding that into Kubo so that you can hit the HTTP trustless gateway API over libp2p if you would so like. So yeah, good stuff and more to come. Thank you. Let's switch to libp2p. Thanks, Steve. libp2p is a modular network stack for P2P protocols, and it's striving to be the networking library for all of Web3. Next slide, please. Normally we show our KPIs at this point, but we're going to skip them again this month because we have a whole bunch of new perf and interop metrics coming, so look for September's all hands to have a new, revamped KPI slide. On to libp2p highlights. Like I said, lots of new perf and interop metrics; we've got new dashboards ready to be made public. Steve just touched on it: there's an experimental implementation of libp2p plus HTTP and an exploratory refactor of IPNI; if you want to try it out, ping Marco. On the community front, we've replaced one of the bootstrappers with one written in Rust. We have a new community-contributed Hotline implementation on rust-libp2p, and we're seeing lots of community contributions in JS and Rust. So community engagement is going up and to the right and constantly getting better. I think Marco is going to do a spotlight on our presence at IETF, so we can skip that center column here. We have some sneak previews of the new cool dashboards. And I want to call out that we're getting lots of love for js-libp2p on Twitter with regards to implementations. Two things to highlight: important security fixes (not lots, but important ones) were made in go-libp2p recently. And there was a rewrite of QUIC in rust-libp2p to use Quinn. I've been assured that this gets us close to a 10x improvement in QUIC connection throughput. So a big improvement there, and really, that's pretty much it.
I think the big callout here is that we're getting lots of community contributions, lots of engagement, and the libp2p community is growing. Our last community call had representatives from eight different implementations of libp2p on it. So, on to Filecoin. Filecoin is a crypto-powered decentralized storage network, built to be an efficient, robust foundation for humanity's most important information. Quickly: we are still a very large storage network, at 11.19 EiB of storage capacity in the network. Much of it holds a lot of data; we are now at 1.3 EiB of data stored in Filecoin. We are constantly reaching more than 5 PiB per day from a data onboarding perspective. Again, great job to data programs and every community member onboarding clients to store data on Filecoin. Next slide. Really quick, some highlights. First, Lotus just published its July release, which is out with a lot of improvements for ETH RPC service providers, storage providers, and so on and so forth. One of the highlights I want to call out is that we officially launched the brand new slasher and disputer services. These are essential network services to maintain network security. Filecoin has two mechanisms here: one of them is consensus faults; the other is disputing window PoSts, which are a critical part of Filecoin's proof-of-storage consensus. And we launched services that allow users to run those, detect the bad actors, and call them out. By doing so, not only are you securing the network, but you're also getting rewarded. And the resources for running these services are very low cost. There's a masterpiece by TippyFlits that was shared in the Lotus announcement channel just this morning; please go check it out. The call to action here: if you care about Filecoin's security as a proof-of-storage network, please run one.
And if you know anyone who participates in Filecoin and cares about its health, tell them this exists now and ask them to run one as well. Right now, as of today, we have four active slashers and ten active disputers in the network. By the way, four months ago we only had one slasher in the network, which was concerning, but now we're increasing that number. And I'm here to give everyone a challenge: can we try to get that number tripled by the end of this quarter? That would be amazing. So please help us spread the news. Next up: yes, we finally got a date for the next network upgrade for Filecoin. It's codenamed Watermelon, because it's supposed to be very juicy. It's happening in November. Core devs are still evaluating all the FIPs and figuring out the priority and the scope. Some of the potential highlights are, as mentioned, Synthetic PoRep. We are also working actively on direct data onboarding: from a long-term perspective, we are hoping that with the FVM launch we can have more dynamic, different storage markets evolve on the Filecoin protocol. That's why we want to slowly enable that, rather than only enabling the basic storage market actor, f05. The first step will be allowing data to be directly put into sectors without publishing a storage deal via f05. The short-term impact is that storage providers can save a lot of gas on PublishStorageDeals (PSD) messages, which are the most costly part of data onboarding today. So I'm hoping everyone will love that. We are also working with the drand team to slowly switch to the drand quicknet; thank you, Patrick, for offering the implementation for all this. We are also considering allowing SPs to manage their PoSt deadlines, so that we can reduce the human operational DevOps costs for SPs and they can spend money somewhere else. There are a lot of FIPs ongoing; I have a full list of the links added there. Please join the discussions and provide feedback. And if you want to contribute to building the Filecoin protocol, let me know.
We can use more typing power. Another update: FIL Dev Summit is officially settled, and it's coming; I will have more details to share later. The next one I take as a challenge, but also an opportunity. As mentioned last time, there's a very interesting conversation going on about how we evolve Filecoin Plus, as a principle but also as the programs we run, to incentivize useful storage on the Filecoin network and to provide incentives for people to build tools that ease the data onboarding process onto the Filecoin storage network. Within that, there's actually a lot of interesting conversation on questions like: what is the Filecoin storage network? What is Filecoin's mission? People have different opinions on what a useful storage network is. Some people think a useful storage network is one where you have clients paying for the storage service. And some people, like me, see Filecoin as a decentralized storage network that can offer very cheap storage for datasets that potentially have low funding to pay for very expensive storage services, and I love the fact that we have mechanisms to support that, like storage network incentives. Again, to me, Filecoin is like three years old now. A lot of new people have joined the network, and there are very dynamic business models in the Filecoin network today. People have different opinions, and I love that we are having this vibrant conversation in the community. So if you have something to share, please join the conversation. The whole goal is to build a better Filecoin storage network together. So please, please comment if you have any thoughts. Awesome. Super well said. Moving on to our spotlights, and we have a ton of them, so please help us keep it nice and snappy, starting with Boost V2. Hello, everyone. It's nice to be here. I'm here to share a bit more about Boost V2, which was released just a few weeks ago.
The main change that you'll see in this version is that we've replaced the DAG store with the Local Index Directory, which is powered by YugabyteDB. Essentially, the Local Index Directory manages and stores deal data indices for storage providers. As you can see in the diagrams on the right (hopefully it's not too small), there's information that maps CIDs to piece offsets, as well as sector details for the SP. So when one of their clients comes to request a particular CID, on the Boost side the lookup happens via the Local Index Directory, and then the content can be located and retrieved. There are lots of benefits introduced as well. SPs now have visibility into their storage health, and they can identify flagged pieces which require either unsealing or re-indexing, so SPs get better storage management here. It's also the first step we're taking toward horizontal scalability for SPs, especially as more and more SPs are storing client data and their power grows in the network. And we've also removed the dependency on the DAG store, which lots of storage providers were running into issues with. So thanks everyone for the great work; shout out to Dirk, Anton, Mayank, and the rest of the Bedrock team for all the help with this. And you can read more details in the links provided. Thanks everyone. Awesome. Super, super exciting. Over to Sheenan for the retrieval dashboard. Hello. So, the goal of the retrieval dashboard is to get an overall idea of how Filecoin storage providers are performing in terms of retrievability, and to visualize it in a nice public dashboard. On how it works: we have been running retrieval workers from five different reputation working group members, with more now joining. It covers retrieving from different places across the globe, including mainland China. More information about the reputation working group in the next slide.
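As a rough mental model of the Local Index Directory lookup described above, here is a small Python sketch. The class and field names are illustrative only, not Boost's actual schema, and a toy in-memory dict stands in for the YugabyteDB-backed store:

```python
# Hypothetical sketch of a local-index-directory lookup: map a block CID to
# the piece (CAR) that contains it and the sector holding that piece.
# All names here are illustrative, not Boost's real schema.

from dataclasses import dataclass

@dataclass
class PieceLocation:
    piece_cid: str   # CID of the piece containing the block
    offset: int      # byte offset of the block within the piece
    sector_id: int   # sector where the SP stores the piece
    unsealed: bool   # whether an unsealed copy is available

class LocalIndexDirectory:
    """Toy in-memory stand-in for a database-backed index."""
    def __init__(self):
        self._index = {}

    def add_block(self, block_cid, piece_cid, offset, sector_id, unsealed=True):
        self._index[block_cid] = PieceLocation(piece_cid, offset, sector_id, unsealed)

    def locate(self, block_cid):
        # A retrieval request for a CID resolves to (piece, offset, sector);
        # a piece with no unsealed copy would need unsealing first.
        loc = self._index.get(block_cid)
        if loc is None:
            raise KeyError(f"no index entry for {block_cid}")
        return loc

lid = LocalIndexDirectory()
lid.add_block("bafyblock1", "bafypieceA", 4096, sector_id=17)
loc = lid.locate("bafyblock1")
print(loc.piece_cid, loc.offset, loc.sector_id)  # bafypieceA 4096 17
```

A miss in the index (the `KeyError` path) is the kind of situation the storage-health view would surface to the SP as a piece needing re-indexing.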
And it is doing lightweight HTTP, GraphSync, and Bitswap retrievals. The major use case right now is quality checks for the Fil+ program's large dataset applications, so that a notary can do due diligence by visiting the dashboard. It also offers a per-client view: existing clients like Solana or the Internet Archive can just use this dashboard to see how well their storage providers are doing in terms of retrievability, and new clients can look at it to see whether a provider is good at retrieval. Also, storage providers can take a look at the logs to understand and troubleshoot any retrieval failure, and to see how retrieval performs from other places across the globe. There are a few links below, which link to the repo of this dashboard, as well as the retrieval worker and the public dashboard, so feel free to look at it. Cool. Thanks. So cool. It's also awesome to see all of the metrics in the Fil+ program increasing in terms of percent retrievability over time as well. So it's working; it's happening. Kero, tell us about the SP reputation working group. Hello, everyone. Thanks, Molly. Thanks, Sheenan, for the retrieval dashboard overview. So the retrieval bots, the five members that Sheenan talked about, are five out of the 12 members in the SP reputation working group. The other members include Lassie from Bedrock, Starboard, Filscan, Filrep, Ground Control, and Filfox. So a bunch of us currently contribute to the reputation working group. Currently, there are 60+ unique metrics being collected: reachability data, time to first byte, retrieval methods, and retrieval success rates, as well as non-retrieval-related ones such as location and blocks mined, so both on-chain and off-chain data around SP reputation. Right now, the reputation database is on Polybase. We are super happy to be able to collaborate with Polybase, which is a PLN company.
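As a rough illustration of what one of the retrieval bots above measures per attempt (success and time to first byte), here is a hypothetical probe sketch in Python. None of these names come from the working group's actual code; the transport client is injected as a plain callable so any HTTP, GraphSync, or Bitswap fetcher could slot in:

```python
# Hypothetical retrieval probe: time one retrieval attempt for a CID and
# record success / time-to-first-byte. `fetch` is any callable that returns
# an iterator of byte chunks (standing in for a real transport client).

import time

def probe_retrieval(fetch, cid):
    start = time.monotonic()
    try:
        chunks = fetch(cid)
        first = next(iter(chunks))          # block until the first byte arrives
        ttfb = time.monotonic() - start
        return {"cid": cid, "success": True,
                "ttfb_s": round(ttfb, 3),
                "first_chunk_bytes": len(first)}
    except Exception as err:                 # any failure counts as a miss
        return {"cid": cid, "success": False, "error": str(err)}

# Fake fetcher standing in for an HTTP/GraphSync/Bitswap client.
def fake_fetch(cid):
    yield b"\x00" * 1024

result = probe_retrieval(fake_fetch, "bafyexample")
print(result["success"], result["first_chunk_bytes"])  # True 1024
```

A dashboard like the one described would aggregate many such records per provider and region into success rates and TTFB distributions.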
And right now, we have a bunch of FVM aggregators pretty interested in using our reputation database, one of them being Lighthouse, who is already testing on it. The long-term goal of the reputation working group is to become a DAO, where, as you can see in the diagram on the right, we have a tokenomics mechanism for contributors to earn DAO tokens and for customers of the database to pay in DAO tokens to access the data. That part is still under development. Right now, V2 is launching with Open Data Hack at the end of August, and it is going to be free to access for now, before we have the tokenomics figured out. So feel free to message me on Filecoin Slack (at Carol) if you want access to the working group database. Thank you. Super cool. Thanks so much for sharing. I think we have an async update here. Hello, everyone. This is Steph from Sentinel speaking. Today I'll be talking about Mercury, which is Sentinel's latest initiative, where we try to provide a user-friendly API service serving both the historical and monitoring data that Sentinel offers. Mercury Fil is the first client we created for the Mercury project to validate our idea. It is written in Python and was designed to help analysts and research scientists get their hands on historical Filecoin chain analytics data using the tools they're most familiar with, such as pandas and Jupyter, both within the Python ecosystem. Mercury Fil is available to download from the PyPI registry and can be used by people with a protocol.ai email address. The slide also shows a GIF with a demo of Mercury Fil in a Colab notebook, Google's hosted Jupyter notebook service. I've included a link in the slide for you to try out. When you do, if you have time to share some feedback, please do so, so we can better improve this service for you. Thank you. Awesome. Thanks, Steph, for the remote presentation. Over to RG for data cap quality. Thanks, Molly.
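To illustrate the kind of analysis a client like Mercury Fil is meant to enable, here is a small hypothetical Python sketch. It does not use Mercury Fil's actual API (which isn't shown here); it just computes daily onboarded PiB from mock deal records, the sort of aggregation you would typically do over a pandas DataFrame in a notebook:

```python
# Hypothetical analysis over mock historical chain data: sum deal sizes per
# day and report PiB onboarded. The record shape is invented for this sketch;
# Mercury Fil's real API and schema may differ.

from collections import defaultdict

PIB = 2**50  # bytes in a pebibyte

def onboarded_per_day(deal_records):
    """Sum padded deal sizes (bytes) per day and convert to PiB."""
    totals = defaultdict(int)
    for rec in deal_records:
        totals[rec["day"]] += rec["padded_size"]
    return {day: size / PIB for day, size in sorted(totals.items())}

# Mock records standing in for what an analytics API might return.
records = [
    {"day": "2023-08-01", "padded_size": 3 * PIB},
    {"day": "2023-08-01", "padded_size": 2 * PIB},
    {"day": "2023-08-02", "padded_size": 6 * PIB},
]
print(onboarded_per_day(records))  # {'2023-08-01': 5.0, '2023-08-02': 6.0}
```

In a Colab or Jupyter workflow you would load such records into a DataFrame and plot them, which is exactly the audience the pandas/Jupyter framing above is aimed at.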
More recently, we've been working on defining what quality usage of DataCap means. Right now, it's a combination of three tools: we're looking at retrievability; deal distribution, which is captured by the CID checker bot; and risk and reputation scores, which are currently under development. You can go to quality.datacapstats.io to play with this graph. As Sheenan mentioned, retrievability has been increasing and improving, so we're seeing some upticks there. There's still a chunk of data that is unknown, so we are working on bringing more insight into how DataCap has been used. I highly encourage everyone to check out this website and support us in our efforts to define quality usage of DataCap. Thank you. Awesome. So useful to be collecting good metrics here. Ali, tell us more about Lilypad. Hi, everyone. I'm Ali. I've been helping to build Lilypad, which is developing the infrastructure for a trustless distributed compute network on IPC that will enable internet-scale data processing, AI, ML, and other arbitrary computation from the Filecoin blockchain. It'll also unleash idle processing power and unlock a new marketplace for data and compute via some of our SPs and anyone else who has spare CPUs and GPUs they want to earn money from. Lilypad leverages Bacalhau under the hood; you can think of Lilypad as the incentivized Web3 version of Bacalhau. Kind of like Filecoin leverages crypto-economic incentives to act as a persistence layer for IPFS, Lilypad does the same for Bacalhau. So while they share some code, they are also quite different. We recently released an early testnet that you can try out, called Lalechuza. You can either use it from the CLI or from a smart contract.
And we are also working really hard in the background at the moment to release an IPC version of this testnet. We wanted a testnet out for all of the Paris events, though, so we rushed one out in the meantime that people could use and play around with. Technically, Lilypad is being built with deterministic modules; AI inference modules are currently being enabled with a pay-per-job economic model (obviously we're only on testnet, so it's free). The roadmap is firstly for these deterministic compute jobs, with non-deterministic jobs and the game theory around them being considered for the future. So we want your ideas as well: tell us what you want us to add, and please like us on Twitter, or X, or whatever it's called now; we'd love to hear from you. Thank you. Awesome. Next, Singularity. Shelen? Thanks, Molly. Hey everyone, I'm excited to briefly introduce Singularity V2. As we know, it is the popular data onboarding tool for the network. V1 has prepared about 100 PiB of data in the past year, and we are excited to take this tool even further. V2 started testing just a week ago, and if you want to hear more updates, feel free to join the Singularity V2 channel. The specific benefits Singularity V2 brings include, for example, inline preparation, which could reduce the disk space needed by data preparers. More importantly, the improved dealmaking and preparation modules will serve Project Motion, and eventually serve ISVs even better. V2 also adds a deal monitoring dashboard to help users track deal status, which is a highly requested feature. Shout out to Xinyan, Hanna, and MRC for helping with V2, and if you want to learn more about what V2 will offer, feel free to check out the links and our new dashboard. Thank you. That's awesome. I love dashboards. Everyone go take a look. And I think this is 100, you know, 100 PiB of data.
That is probably one of our highest-volume onboarding tools in the Filecoin network, so great work making it better. Marco, tell us about IETF 117. Hey folks, IETF 117 was in San Francisco a couple weeks ago. The IETF follows many of the same principles we value at PL, and they've been doing it for 40 years. I think there's a lot to learn from them, and I encourage engineers to attend one of these meetings at some point to see how the standards sausage gets made. I've written some notes and takeaways; that's linked there. And I recommend watching (oh no, my text didn't properly show up) Cory Doctorow's presentation at the Decentralization of the Internet Research Group. One quote that I have listed, hidden behind the video: Cory said, "Like all billionaires, Mark Zuckerberg is a policy failure." So if you want to see Cory dunk on the big platforms, I recommend that video. Awesome. We are now over to our deep dive on FIL Dev Summit. Jennifer, do you want to share more about this awesome two-part gathering for the Filecoin developer community? This is our amazing FIL Dev Summit; apparently we have an awesome branding setup these days. This is a two-part event: the first part will happen in Singapore, only a month away, September 12th to 14th, and the second part is in Iceland, September 25th to 27th. We are having two parts because the people building Filecoin are so many and spread all over the world, and we recognize there are challenges for people traveling to one location. So we are having one in Asia and the other one between Europe and North America, so that we can gather most of the developers who are contributing to Filecoin and have our very own dev summit.
The goal here is to build connections: discuss the protocol, discuss tooling, how we can evolve the Filecoin protocol, how we can make the data onboarding experience better, how we can start working on Filecoin scalability with IPC and what kinds of use cases there are, and how we can start with compute over data and running compute jobs on the data stored on Filecoin. And also retrieval: we are a storage network, and you put data in because you want to get data out, so we are also having a retrieval track over there. Obviously, we want to engage closely with the builders building on top of the FVM, and soon on IPC as well, to discuss the data economy and the different kinds of use cases that can be enabled on Filecoin. Having two events is a lot, and it can be a lot of traveling. So the way we are thinking about this is that we are kicking off all the conversations in part one, the Singapore event: start gathering the problems and try to brainstorm ideas on how we can resolve the challenges or bottlenecks that exist on the Filecoin network today. For the second part, we will quickly release all the discussions that happened at the Singapore event to the public, so that people can leave comments and engage offline as well. And then in Iceland, we are going to continue all the amazing conversations that hopefully happened in Singapore and start to align on future directions, the roadmap, and what to build over the next couple of months. For us, there's also the idea of using this opportunity to hack, hopefully leading to launches at LabWeek that we can share with the broader community. Because the two parts are different, we also realize there are different audiences in different locations within the Filecoin community. So, on the right side, we have the track guidelines for folks, as shared over here.
So the Singapore track will be focused on FVM applications and use-case building; again, all the content will be shared online so everyone can have access to it. We also want to engage with the SP community to talk about tooling and the stack, and how we can build amazing things for people providing storage services. That's a focus in Singapore because there is going to be an ESPA event alongside ours; ESPA is an accelerator program for SPs joining the network, and that's why it's there. And in Iceland, we are going to have a specific focus on FVM runtime development: enabling different runtimes on Filecoin, like AquaVM or a Wasm runtime enabling native Rust, so that we can develop native actors. We want to talk about how to build scalability solutions, compute over data, and retrieval solutions on the network. In both locations, we are also going to talk about building tooling to ease data onboarding into the network, and about how Filecoin protocol development can evolve. And we are also going to have a governance track to talk about Filecoin Plus, FIP processes, and how we want to handle network upgrades in the next couple of months. A lot of things! If you have any ideas on tracks you want to lead that are not listed here, we are open for tracks. You will see a huge announcement, I think later today (very soon, very soon), announcing the FIL Dev Summit to the general public as well. On the website linked here, you can apply for the event. It is an application-based event, because we want to make sure we have all the great talent that's focused on development at this event. If you are more of a BD (business development) or marketing person who wants to build the Filecoin ecosystem from that perspective, there is FIL Vegas in October, and also a couple of other events that you can join instead.
Again, we are hoping to have a lot of developers and talent join this one. Please submit talks and tracks that you think can drive meaningful discussions and takeaways along with other teams. Please help us share the news and tell people to apply. I know it's only a month away, and we really, really want to get the news to as many people as possible. So please, please, please help retweet, share the blog post, and just invite people who you think could be valuable for these kinds of conversations and tell them about it. We are also looking for sponsors, either to sponsor the event or to sponsor individuals traveling to these places. So if you have any connections or know anyone willing to sponsor the awesome Filecoin Dev Summit, please let us know. There is a public channel, #fil-dev-summit, in Filecoin Slack; join there, and we are all going to be monitoring that channel. I think I covered most of it. I don't know, Molly, do you have anything to add? Nope, I think that was fantastic. I'm looking forward to seeing many folks who are excited about making the Filecoin network stronger, better, and more capable, and building on top of it, whether that's FVM builders, people building Layer 2 networks, or people adding additional tools and capabilities around this ecosystem. Super exciting. I'm also pumped for the protocol conversations. I am excited to jam with folks on the Venus and Forest teams about how we can simplify our really minimal consensus layer and start scaling Filecoin using IPC, so we can have regional capacity and data onboarding networks within Filecoin itself. So I think it's going to be some really exciting conversations, and big thanks to everyone here who's helping prep awesome tracks, talks, and discussions for that venue. Please go ahead and join us there, and start booking your travel, because I know time's getting tight. Very excited for that venue.
And with that, we're at the end of our deep dive. Thank you all for joining.