Welcome everyone to our May PL EngRes all hands meeting. Agenda is the same as usual: we have our working group updates, we have a couple of spotlights, and then a deep dive into Caboose, so you'll learn more about Caboose in this meeting.

Reminder, what is the PL EngRes working group? We are one of many engineering and research working groups across the Protocol Labs network pushing forward technology to make the world better. This is really harnessing the fact that we think the internet and computing is one of the amazing upgrades humanity has had in the past couple of decades, and many of tomorrow's breakthroughs and exciting new capabilities are going to be built on top of the technology and foundations we are building today. We want them to be resilient, efficient and empowering of human agency. So we work on all sorts of amazing protocols within Web3 and beyond, honestly. A lot of our work goes into IPFS, libp2p and Filecoin, but a number of other protocols as well, such as distributed randomness beacons, data layers for Web3, testing frameworks, retrieval solutions and much, much more. Our mission is to scale and unlock new breakthroughs for IPFS, Filecoin, libp2p and related protocols. We do it in three main ways: driving breakthroughs in protocol utility and capability, which is the engineering and research directly; scaling the way we do our network-native research and development, so that we can bring a larger community along with us and empower many other groups to join us in all of these deep open source improvement projects; and stewarding and growing OSS projects, communities and networks.

Reminder, these are some of the different teams within the PL EngRes working group contributing to the projects we work on. And this is our strategy for 2023. We're not quite halfway through the year, but we're maybe halfway from the last time we spent a lot of time diving through why we're focused on these things, so I wanted to go over it a little more in depth. High-level reminder: we spend a good chunk of our time on critical systems stewardship and growth. These things do not stay functional automatically; software needs repairing, improvement and scaling as more people adopt it. So we are constantly working to help grow, release and scale these systems, growing the teams working across them and enabling many other groups to come into these networks and build new implementations, new improvements and new projects harnessing these protocols. Then we have two core foci for this year. One is robust storage and retrieval: making sure that works really seamlessly across IPFS, Filecoin, libp2p, et cetera, and scaling into additional CDN solutions so that you can have very, very fast retrieval around the world for data stored in these systems. The other is compute over Filecoin state and data: empowering many builders to build on top of Filecoin with smart contracts and with layer twos, or subnets as we're calling them, increasing the chain space available and the tooling available for people to run all sorts of exciting compute networks, so they're harnessing the data in Filecoin for more and more useful outcomes. And I just wanted to tie this back to what we talked about in our kickoff for the year in November at LabWeek.
We were talking about the Filecoin master plan and how this fits into the core three steps that we've been working on. The first step is building the world's largest decentralized storage network. Step two is onboarding and safeguarding humanity's data within that decentralized storage network. And step three is bringing compute to the data and enabling web-scale applications to interact with and build on top of that. So we have spent a chunk of time on building the world's largest decentralized storage network, and we have the world's largest decentralized storage network. We achieved a pretty exciting amount of this growth in year one. We also achieved having the cheapest large-scale storage network in the world and the fastest-upgrading blockchain storage network, and we made a lot of improvements to storing data on Filecoin, making it a lot easier than where we were at day one when Filecoin hit mainnet. Just within the last 12 months, network power has increased by 25%, and you see significant growth both in North America and Asia.

But we also really wanna make sure that as we have this large-scale storage network, we're resilient to all sorts of macro and crypto cyclicality. That's an important component of continuing to hit that top-line goal, because our macro environment is in a different place than it was a year ago at this time. Juan has an awesome talk from about a year ago, I think May 2022, about how to be prepared for crypto winter as a participant in the Filecoin ecosystem; that is a very good rewatch. We absolutely see this sort of cyclicality in other Web3 ecosystems and other proof-of-work environments, so making sure that Filecoin stays resilient to it matters. We are way more resilient and sticky as an ecosystem than things like Bitcoin, which see huge drops in hash rate on top of macroeconomic changes. But making sure that the data stored here is also resilient to those sorts of changes is really important, and there are a lot of groups across the Filecoin ecosystem, with more still needed, that offer resiliency programs for Filecoin storage so that clients don't have to worry about that cyclicality; they can store their data and walk away. Evergreen solves that for Slingshot data. Lighthouse is offering an auto repair, renewal and permanence service on top of Filecoin. The NFT.Storage forever smart contracts do the same for data that was stored by NFT.Storage. There are also options like transferring or getting investment from others, or leasing out your storage provider to other investors or clients; FilPeggy was a smart contract that did that on top of the FVM, and there are a number of others. So that's an important part of continuing to hit master plan step number one.

Step number two: onboarding and safeguarding humanity's data. This has seen amazing growth, 10x in the past year, which is awesome. We're almost at 900 petabytes of live data across 30 million active deals. And that's really thanks to the software getting better, more clients and storage providers getting involved in data storage, and the economics pushing people to store actually useful data instead of just doing proof-of-work mining on empty sectors and never putting the network to work storing humanity's valuable data.
And a lot of work also went into making this much easier with tooling and onboarding ramps. So big, big snaps to all of the folks from the Estuary team working on Filecoin Web Services, the NFT.Storage and web3.storage teams making this easier, and the Filecoin Plus team, who are both working to scale this and running significant programs around quality, making sure that the data that gets Filecoin Plus multipliers is real, valuable, useful data and not fraudulent data or anything like that. And we've seen really amazing examples of large-scale clients coming on top of the Filecoin network. There's CERN's ATLAS project. There's documented war crimes evidence from Ukraine, from the Starling Lab, which has been submitted to a UN court to be referenced, utilizing Filecoin's verifiable proofs. There's dark matter research from UC Berkeley, Holocaust testimonies, many, many more. These are just some of the examples; teams are actively working on supporting Internet Archive and many other groups, who are all becoming the organizations onboarding and safeguarding humanity's data.

And the final step: bringing compute to the data and enabling web-scale apps. We're in the thick of that right now. We have the FVM; it launched just a month and a half ago now, super exciting, a very big step forward for this ecosystem, but it still requires a lot of onboarding work and energy to make it usable and accessible and to help many groups build on top of this new capability within Filecoin. A lot of our work helping make this exist has come to fruition, but now there's a lot of work for the entire ecosystem to harness it and make it useful to others. The CoD Summit has been happening the past two days, part of the work to bring large-scale compute over Filecoin data. There are going to be many of these compute networks in Filecoin, optimizing for different trade-offs, from privacy to verifiability to performance, and that's awesome. There are already multiple in flight; I believe some, like Filswan, are live, and more are coming. So this is a very exciting future for Filecoin. A chunk of this is already unblocked: we already have Waterlily, which is utilizing the Bacalhau network to run AI compute jobs generating Stable Diffusion images for NFTs. But we expect there are going to be many more utilizing the building blocks of Bacalhau, and also taking them further to become Web3 networks. And a very valuable component that they would like to build on top of is things like IPC, which enables you to run your own subnet, your own compute network or L2, smoothly and seamlessly within the Filecoin ecosystem, so that you can create compute incentives for large-scale network creation. A lot of the work that we are doing right now is building towards these building blocks existing.

So I wanted to snapshot our progress on some of these different tracks of work. We've made a ton of progress on compute over data: Bacalhau just hit 1.0 the day before yesterday, I think, timed to the CoD Summit.
There are some additional building blocks we need here so that we can have large-scale compute networks operating on top of the FVM, utilizing InterPlanetary Consensus and building their own Web3 compute networks, but we are making great progress through the set of dependencies to unlock all of those many compute networks existing on top of Filecoin.

On the retrievability side: we have many, I think 2,000 to 3,000, Saturn L1 nodes that exist and are serving retrieval requests. We now have Lassie, a lightweight retrieval client that we did a deep dive on last time. And we are working on opening up Saturn for client usage. This is a big, important, challenging chunk of work, but we're making really amazing progress; the Saturn team, Bedrock team and IP Stewards team are all contributing to the effort, and the DAG House team is also now getting involved as client number two. So we have a chunk of work going here, really productionizing fast, Web2-speed retrievals of data stored on Filecoin. We're also thinking a lot about the incremental way in which we build the retrieval capability, retrievability metrics and retrievability incentives, so that we scale this smoothly across our network's storage and capacity.

I don't believe Sealing-as-a-Service is quite live and available yet, but Supranational and a number of other SPs working to offer sealing as a service within the Filecoin network are working hard on this, and I think are approaching open availability imminently. That's a very exciting option which significantly reduces the cost of sealing a terabyte of storage into Filecoin. There's also a chunk of work happening across Bedrock and Fil Dev on making it much easier to operate a storage provider, so that there is much less operational overhead, and the human effort that goes into storage providing is less "hey, why did it go wrong?" and more "everything is operating as expected, great, I can go back to other things."

And then finally I really want to touch on critical system growth and stewardship, where a ton of our time goes. Amazing work has been done by many folks to release, upgrade and make new capabilities available. We see many new IPFS implementations able to thrive, differentiate and gain new transport capabilities thanks to all of the work across libp2p and IPFS, in terms of making those browser capabilities and transports available, and pulling the guts out of Kubo so that other people can build non-Kubo IPFS implementations utilizing some of those same libraries. Helia is a new JS implementation on the block, able to replace js-ipfs. And a massive amount of work has happened across the entire Fil Dev team and many others involved in Filecoin network releases, some of these solving critical, imminent, high-impact performance or security issues, and others helping bring things like the FVM to the table. We put a ton of our time and energy into that thread, and we really need to celebrate it, because people are making sure that all of the progress we've made so far doesn't regress, which is really big.
And if you wanna track these things in real time, you can track them in Starmap. It's gonna be a little hard to see all of the pretty logos of the amazing things people are working on there, but feel free to comment there and track ongoing progress. Anyways, I just wanted to walk us back through how our strategy ties into the Filecoin master plan and the progress we're making against it, and to celebrate some of the awesome successes we've seen so far this year. With that, I will hand it off to some of the other EngRes leads to talk about our progress on our Q2 OKRs.

Thanks Molly. I'll start with keeping critical systems running, growing, releasing, scaling and secure. On the IPNI reader privacy project, all of the lookups in cid.contact are now double hashed and can be read from the double-hashed store, but we're ramping up those reads slowly. It was 30% at the beginning of the week; I think we're actually much higher now, but in the next couple of weeks that will be 100% of reads from the double-hashed store. On the DHT side, there's been a regrouping there, and we're unlikely to meet the goal for this quarter, though the team is going to reevaluate timing and the plan, and will probably start with a plan for a composable DHT that reader privacy would then be built upon. On the three critical Filecoin network releases, we have already had two, with nv19, so no further update this week. Steve, go ahead.

Great, thanks. For hyper-scaling and accelerating the talent and teams contributing to the PL stack, some great things have happened: three of four network events have been successfully executed, and the fourth, Consensus Day, is coming here in June. I don't have this marked as green yet in terms of fully hitting the target numbers for attendees and views, but they've still been great events, and we'll see where this pans out at the end of the quarter. So that's what's happening on the events side. Regarding other IPFS implementations potentially using Boxo: this effort has definitely slowed a bit in that we haven't been making a lot of proactive evangelizing effort, though I'm making sure we're at least being reactive to folks. There are a couple of targeted things happening here, with a blog post coming, and we've already set up meetings with some users who expressed a lot of interest at IPFS Thing. But this isn't a slam dunk yet, so we're gonna keep chipping away at it. Back to you, Lauren.

Thanks. On "scale data onboarding and CDN-speed retrievals to drive super-linear adoption with lighthouse users", which is quite a mouthful: on Saturn, we have two-plus customers at this point. We have implementation in flight for Rhea, that's customer one; the design phase for DAG House, customer two; and a third customer in the works that's gonna run a trial serving NFTs through Saturn, which is exciting. The P95 time to first byte went from 14 seconds to 4 seconds since we last checked in, which is awesome. On the success rate, from 43% last time we went down to 36%, but actual successful retrievals doubled, from about 54 million to 101 million. Further work is happening, particularly on HTTP retrievals, that will hopefully get the success rate and the numbers up in the next couple of weeks. And on the cost savings for DAG House, the planning for moving over to Saturn is in the works.
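As a toy illustration of the double-hashed lookups mentioned in the IPNI update above: the reader-privacy idea, roughly, is that the client queries with a hash of the multihash rather than the multihash itself, so the index can serve the record without learning which CID the reader wanted. Here's a minimal, hypothetical Go sketch of just that keying step; the real spec has more moving parts, including encryption of the returned provider records.

```go
// Toy sketch of double-hashed lookups for reader privacy. Illustrative
// only; not the actual IPNI wire format or key derivation.
package main

import (
	"crypto/sha256"
	"fmt"
)

// lookupKey derives the only value ever sent to the indexer: a hash of
// the multihash, not the multihash itself.
func lookupKey(multihash []byte) [32]byte {
	return sha256.Sum256(multihash)
}

func main() {
	mh := []byte("\x12\x20...original multihash bytes...")
	key := lookupKey(mh)
	fmt.Printf("query the double-hashed store with %x, not the multihash\n", key)
}
```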
Jason? Yeah, thank you, Lauren. On the objective for upgrading Filecoin with the FVM: we are having our team summit this week, the team is in Boston, and you'll hear the great updates later from Raul. In regards to our metrics, we are really growing the amount of Filecoin managed by FVM contracts: we're at 830,000 as of today. On our 1,000 unique smart contracts target, we are almost there, 920 as of this morning, and we're at almost 80,000 unique wallets through the FVM. There is no further update on IPC, but devnet one was launched, and Bacalhau 1.0 was launched as well. You'll hear the great updates later during this call.

Awesome, thank you everyone. We are about at the middle of the quarter, and wow does it fly by. It's great to see us almost hitting, or having hit, a number of these goals, and then getting really focused on where to spend our time for the rest of the quarter. Yeah, great. Sounds good. Thanks, Molly.

The IPFS stack, just a quick reminder here: this is a suite of specifications and tools where data is addressed by its content, has a verifiability mechanism, and is moved in ways that are tolerant of arbitrary transport methods. In terms of the metric KPIs that we look at: on community activity growth, just a reminder, this is looking at activity within the IPFS GitHub org, and when we say someone is active, we're talking about folks who have made at least three of those actions in that month. So we're trying to filter out drive-bys and count people who are actively engaged. Not a lot to say on those metrics right now. In terms of network sizing and performance, first a couple of call-outs: this has switched over to use the infrastructure the ProbeLab team is actively maintaining, so we know where all the data is coming from and are on top of it. But this is admittedly not telling the whole story of the IPFS network by any means; this is only the, quote, public IPFS DHT. There is a tracking issue on how this gets expanded to other networks, for example nodes that are just engaging with network indexers. So please feel free to click in on that to see where the current thinking is, or leave thoughts. All of the specifics of where this data comes from, and when we start and stop the stopwatches, are linked from the graph details. But yeah, no other major changes to talk about there.

In terms of IPFS implementation highlights, the big thing was IPFS Thing. Many people were involved in it this last month; thank you to all involved, in EngRes and outside of it. You can see some of the stats there in the picture, but lots happened. The great thing is all of the content was recorded; I find the easiest way to quickly index into it is the recap blog post that was put on the IPFS blog, so that link is there. Some EngRes teams also did their own recaps of their takeaways and the actions they're taking, so those are linked too. Molly mentioned Helia earlier, but Alex did a great presentation at IPFS Thing, announcing it and being clear about what the project is trying to achieve, and there's been a lot of focus on improving its documentation and usability. We saw some of what I think of as magic in the way service workers are now resolving IPFS URLs in the browser, and had a great success story at the event where dClimate was able to convert from js-ipfs to Helia over the course of the weekend of the event. On the Kubo side, one thing to call out is it has a new look.
So plan on seeing this logo show up in more places. This camper van motif is really intended to capture what Kubo is probably best suited for: helping the independent self-hoster. Kubo itself had a 0.20 release, which has a lot of improvements on the gateway side, particularly driven by Rhea. It's also using the latest Boxo code under the covers, where we did a lot of repo consolidation. We did get a guide out for the community about how you would customize Kubo, and the pros and cons of the various options. An important thing for people to be aware of about the network is that the Hydras are fully dead. They have run their course: we reduced them to network bridging mode at the end of December, that bridge is now gone, the Hydras are gone, and the EC2 instances have been deleted. I think there was even a request to delete the AWS account. There's a link to the IPFS discussion forum post giving more information, but that has occurred. And, building off a talk given at IPFS Thing, a blog post went out about an event earlier this year where half the DHT nodes were unresponsive and how the network performed: it was resilient, but there are also some actions that we've been taking. In terms of what's up ahead, we have some additional follow-ups from that event that we're gonna be fully handling this next month. In general, a lot of the engineers are focused on further gateway improvements, largely driven by what's needed for Rhea around partial CAR support on trustless gateways; we'll be landing that spec, getting implementations into Boxo and Kubo, and adding the corresponding conformance tests. And on the Helia front, we're gonna be active on the deprecation of js-ipfs and getting a working group together for doing more of this user-focused Helia work. So stay tuned for that in #ip-js, and please raise your hand and jump into that channel if you're interested. Thanks a lot.

Awesome, over to Ignite. Well, thanks Molly. Yeah, so we've done a number of things, but I'll talk about just a few of them; you can read this slide for most of it. A lot of work has gone into IPFS Companion MV3, which is a big architectural change required by browsers, mostly Chrome, for our most popular product. And because Companion is our most popular product, as you can see in the metrics, we wanna get feedback from dogfooders before rolling MV3 out more broadly. So we'll be releasing to the beta channel; it's in review right now, so please help us out. Starmap will be migrating to D3 rendering very soon, and the current functionality will remain available for now at legacy.starmap.site. We've got an issue open with Infra to make that domain name available, and then that will be up. If you haven't tried the D3 rendering on Starmap, please do, and leave some comments on that issue. It's linked in the slide; it's issue number 237. And speaking of js-ipfs deprecation, I saw a question: it is not officially deprecated right now, but we will be working on it, providing a migration guide, things like that. The Ignite team will be helping out with that. And we're looking for project suggestions for tools or libraries that never quite took off with js-ipfs. We wanna show wins for Helia. So if you have examples of tools where you're like, oh, this didn't quite work right, please let us know.
We wanna show those wins with Helia. And then our metrics were added for all projects at the beginning of the year and we're starting to see trends; that's why we only have about three months there. Our first is Companion, at an average of 72,000 monthly users, followed by Desktop at an average of 16,000. And I wanna call out that Desktop does consume Web UI, so Web UI all up, across Kubo, Desktop and the Web UI website, is at about 22,000 users as of April. That's it. Peter?

Hello, IPDX, the Interplanetary Developer Experience team here. We aim to improve the day-to-day for IP Stewards and friends. Last month we were at IPFS Thing, and yeah, we really loved it there. We did two talks, me personally actually, so if you weren't there or haven't seen them, I highly recommend watching how we handle automation in Kubo land and how we test HTTP gateways. We also deployed our custom self-hosted GitHub Actions runners to new repos: go-libp2p, Kubo and rust-libp2p are the three new ones, and it went even better than expected. We've already processed over 18,000 jobs for these new repos, and that's 18 times more than we were doing for Kubo alone. We also saw reductions in mean workflow runtime for all of those repos, by a third, 40% and 75% respectively. The speed-up mostly comes from the fact that on self-hosted runners we are no longer hitting hard limits on the number of concurrent jobs we can run. We didn't neglect gateway conformance either: we implemented the new CAR check API, which we worked on with Eric from the Saturn team, and we also implemented DNSLink support, and both of those make us much better suited to cover the testing needs of all the projects currently iterating in the gateway space. Also, thanks to IPFS Thing, we got to meet Matt from the FVM team; we got together during the event, followed up after, and worked on making sure that the Docker images for the local nets he develops are available in the Filecoin GitHub Container Registry. Okay, so what's next for us? We'll continue working on gateway conformance, that's for sure; in particular, we'll be focusing on test migration from Kubo and support for the decentralized gateway working group. We're also getting talk submissions ready for GitHub Universe, and I'm really excited about that one. We're also working with the libp2p team on automating performance testing, and on enabling new GitHub features for everyone, such as secret scanning, code scanning and private vulnerability reporting. If you want to follow along and see what we're up to, make sure to join the #ipdx channel on Filecoin Slack. Thank you.

Hey look, it's my first all-hands presentation. Hi, mom. So yeah, libp2p. As a quick recap, it's a modular network stack for P2P protocols; I like to think of these as reports from the local plumbers union. The only real update here is that as of IPFS Thing, we now have seven-plus implementations, with the new Zig libp2p implementation. And as usual, we're striving to improve the networking ability and support for all the Web3 projects that use libp2p. Some quick callouts here: if you look at the network size snapshots, unfortunately for April we lost analytics for the ETH beacon chain due to the source going offline, but we were able to reestablish it this past month and the numbers went up. So that's all good news.
We're looking at adding in more numbers from other networks as well, so we can get a broader view of our impact internet-wide. The other thing here, about the GitHub activity: you can see we had a dip there in April. We were scratching our heads about this; we suspected it was IPFS Thing, and sure enough, if you look back in the history, July 2022 was also an IPFS Thing month. So hopefully the numbers will be back up next month; we'll report out and talk about it then. Okay, I want to echo Steve's sentiment about IPFS Thing. I want to thank everybody who showed up and contributed on the libp2p-specific stuff. We had five talks, linked here. We had one workshop, which was really cool: in the span of an hour, people were able to build a fully featured chat app over libp2p in Rust. And like I said, we've added the Zig libp2p implementation to the interoperability tests, and we're hoping to get the JVM and .NET ones online as well. Let's see here: driving forward on our user outreach, we're taking a much stronger look at which organizations have a vested interest in the viability of libp2p, so that we can drive towards long-term governance. I think it's important for us to start encouraging other organizations to have a sense of ownership and control over the direction, along with Protocol Labs. In terms of browser connectivity, this was one of our top-line goals heading into IPFS Thing. I highly encourage everybody to watch this video from Max. A ton of people worked on this; I want to thank everybody, and I can't list them all here, but there were like 13 people who contributed to that demo and talk. It's pretty amazing. And this has become our new unofficial motto: we connect everything, everywhere, all at once. The IPFS Thing recap blog post went out this morning, so if you want to see all the talks and commentary, it's all there. Upcoming: on the implementation side, we're preparing for some releases. The js-libp2p release just came out, 0.45, with some simplifications that make it easier for developers to utilize it across projects. Some key goals along our OKRs for cross-protocol communication went into rust-libp2p. And upcoming is this interesting discussion about what libp2p plus HTTP means. There was a killer demonstration at IPFS Thing where we showed intercepting HTTP requests in a browser and sending them over libp2p, but I think there might also be room for discussion about libp2p opportunistically offloading requests to HTTP instead of over a managed libp2p connection. So if you're interested in any of this, come join our conversations; it's actively being researched and decided right now. And of course we're gonna continue with all of our performance benchmarking; there was another talk about this at IPFS Thing if you wanna see where we're at. I think that's it. Thank you to everybody who continues to support libp2p. Back to you, Molly.

Awesome. Over to Jennifer for Filecoin. Filecoin! Sorry, I can't help but laugh about the mission for Filecoin; it's just such a big statement, but we're actually doing it: Filecoin is targeted to be a decentralized, robust, distributed storage network for humanity's most important information. Next slide. On that note, as a storage network, storage capacity is our most important metric.
As you all can see, the network still has 12.3 exbibytes of raw storage power today. The growth of the network's raw byte power (for those who don't know, RBP stands for raw byte power) is not as fast as it was two years ago. However, the QAP, which measures how much useful data is actually stored on the Filecoin network, is steadily growing, and we are hitting 20.3 exbibytes of QAP, which means almost 900 pebibytes of data are stored on Filecoin today. As Molly mentioned in the chat earlier, if you're curious what kind of data is stored on Filecoin, go to the Filecoin data explorer site; you can see a list of the kinds of data there. Next slide, please.

Another super exciting thing: about two months ago now, we shipped the FEVM and brought user programmability to the network. One thing we are tracking is how much Filecoin is actually managed by all the EVM actors, the EthAccount and placeholder actors, because that's a measurement we can use to see the utility of the FEVM. As of today, over 1.28 million FIL is being held or managed by FVM actors, which is super, super exciting. A lot of it is coming from the bridges and staking protocols people have been building, and maybe even data DAOs in the upcoming future; I think we'll have more data for you later on. It also means we have 920 unique contracts currently deployed, and over a thousand contracts have ever been deployed on Filecoin. So people are actually building applications on top of our storage network. Yay, next slide.

On to Filecoin highlights. I'm gonna keep it short this time, but it has been an exciting three weeks in Filecoin land, in which we shipped yet another network upgrade; as we've gotten used to, in the past six months we literally shipped three network upgrades. The special thing about this one is that it's the first emergency network upgrade we have ever shipped in the past two and a half years, because the Filecoin network has been stable. In the past months, however, we had been experiencing some chain quality challenges on the Filecoin network that were causing storage providers to lose some of the block rewards they could have earned, and node operators were having syncing challenges. We were supposed to ship this upgrade in June; however, people needed it shipped, and we did everything in basically a week. The key fix in this upgrade was introduced by the CryptoNetLab team, especially anorth, Kubuxu and Zen: we improved the market actor by changing the market deal maintenance interval from one day to 30 days, which significantly improved block validation time. As you can see on the right, it went from 20-to-44 seconds down to 1.35 seconds. I really want to explain why that was a dangerous situation to be in: the Filecoin block time is 30 seconds, so when the cron work actually takes way longer than that, the whole block creation and syncing process becomes very dangerous for the network, causing all those problems. For example, the network win count, as you can see in the red box, dropped by a lot, from an average of six blocks to two-to-three blocks for a certain period of time.
That means the chain's capacity was reduced, and storage providers' block rewards were reduced as well, but now we are back to the average win count for the network. It was also a huge deal for our builders, because with all of these chain syncing challenges, node service providers handling a lot of API requests were facing a lot of trouble too. As you can see here, the P99 latency on all the Eth methods has dropped dramatically since the upgrade, and that greatly improved the builder experience. So next up: there's a lot we could still do, but instead of chasing the next big thing, we're taking a step back and slowing down a little bit, because we want to better understand Filecoin's state and build tooling to help us monitor Filecoin's health, so we never have to do an emergency upgrade again and can keep Filecoin alive. That's going to be one of the efforts the Fil Dev team spends time on over the next couple of weeks or months. At the same time, the proofs and Lotus maintainer teams have already started implementing the synthetic PoRep design by the CryptoNetLab team, Luca, Irene and Kuba in particular. It's a new PoRep protocol that can potentially reduce the size of the temporary files saved between the pre-commit and commit process from 400 GiB to 25 GiB. We expect that to be super useful for Sealing-as-a-Service providers with a large pipeline of onboarding sectors. I think that's all from the Filecoin side.

Awesome, over to DAG House. It's David from the DAG House team. I wanted to share that this quarter, and probably next quarter too, our focus is largely going to be on capturing infra cost savings. We've been in growth mode for so long, and that requires scaling up infrastructure and providing good service levels to our users. But at this point we're taking a step back and seeing how we can rely more on Filecoin copies of data, collaborate with Saturn, eliminate double spend, things like that. That's where our heads are at mostly. We're also progressing our w3up project; we're looking to integrate it with NFT.Storage as the full launch for it, and there's some work to be done before we can start doing that: hardening the protocol implementations, getting that data into Filecoin deals via Spade, having better visibility into user behavior, being able to block users, things like that. Some quick highlights: we made some tweaks to our HTTP gateway that reduce HTTP egress by 80%, and Bitswap egress out of Elastic IPFS by 50%, so some good cost savings there. We implemented the partial graph API in our gateways, and we are getting those records into IPNI now. This could represent a big cost win, because 85% of our Bitswap traffic going out of Elastic IPFS is to dweb.link, so moving those reads onto HTTP instead, which is much cheaper for us, could be a big win once we're in production; we don't know the exact number yet, as it depends on Saturn behavior. Also, some progress on the w3up side: we implemented the updated ucanto spec in w3up, and created the spec to start using a UCAN-based Filecoin pipeline for interacting with Spade. And yeah, we're really excited about working with Saturn and moving forward to figure out how to utilize the network and all those great L1s to reduce our infra costs.
Hi everyone, this is Raul from the FVM team, speaking to you from Boston, where the team is having a retreat; we're getting together for the first time after seven months, and during that time we actually shipped the FVM, so it's great to be back together. Just a few updates on our roadmap. We have concluded the "vertically scaling nodes to reduce Eth JSON-RPC latencies" thread, thanks to a number of optimizations the FVM team landed together with the Lotus team in the last few weeks, plus massive improvements coming from nv19, as Jenny said earlier. So that is concluded. We are also moving ahead very quickly with upping our game on metrics: cycle one of our metrics roadmap is already implemented, and we're almost done with cycle two. There's a great dashboard in Sentinel, in Grafana, if you want to take a look; it's called FVM analysis, and that's basically where all our metrics are being added. We're starting a new thread around building efficient historical access to chain data in Lotus. Devs, as well as partners like The Graph, are currently blocked by high latencies, or outright impossibility, when accessing things like old chain receipts or old state roots to conduct indexing, so we're really, really focusing on that. We also started roadmapping exercises for M2.2, refreshing the old roadmaps around shipping user-deployable Wasm actors, and we're aiming for progressive delivery over various network upgrades here. We're also starting a new roadmapping exercise around Data + FVM, covering all the building blocks devs need to build data solutions on top of the FVM: things like aggregation, proof of data segment inclusion, repair, replication and so on. We're also focusing a lot on explorer maturity, working very closely with explorers to land complete feature sets, fix bugs and so on.

One public service announcement: Hyperspace is being deprecated; its end of life is scheduled for the end of May, and we're progressing with moving all developer activity to Calibration net. So all services and things like bridges and dapps that were deployed there are moving to Calibration net. We're also launching a new RFP to continue development of the Filecoin Solidity library. We've really upped our game as well in terms of developer starter kits and tools: there's a new set of Docker images that provides devs with a Filecoin-in-a-box experience, including both a local net and Boost. We also cleaned up our starter kits and added two new ones, one for compute over data and one for data DAOs; those are great tools for devs getting started building on the FVM. And there's an update to the Ledger app incoming that will unify the Ethereum and Filecoin wallets into the Filecoin app. That's currently in review, and Ledger is a little bit backlogged, so it might take a few more weeks, but it's almost out. As to KPIs and highlights: as I said, the team summit is happening in Boston right now. We have 830K FIL right now in smart contracts, and as Jenny mentioned, 1.28 million FIL across EVM actors in general. And there's a new metric we're starting to track, which is total value managed.
What we're actually starting to see, and this is very interesting, is functional lending markets really coming to life on the FVM, which means that just tracking the contract balance of those pools is not enough, because FIL comes in and then goes out as it's lent to storage providers so they can deploy it to the protocol and onboard data and storage. So we're creating new metrics to capture the value harnessed by the FVM, and we think it's somewhere between 1.5 and 1.6 million FIL; we have different readings, so we will strengthen that metric over the next weeks until we have a final number we're confident about. We're tracking our metrics on unique smart contracts and transactions per day as well. And DeFi protocols are progressively adding their metrics to DefiLlama, which is the gold standard for measuring TVL. The Dataverse hackathon started at the top of this month and has 476 hackers working on it, and in terms of opportunities, the Data + FVM roadmap is gonna be a key enabler for many new use cases on Filecoin. So yeah, focusing on that, and sorry for going over.

No worries. Compute over data? Hi, I have so many announcements that I will keep it even shorter than that. Really, the best thing to do is go check out our blog. Bacalhau did hit 1.0. There are so many people inside of PL to thank, from the top, from Juan and Molly, to every team: FVM and Station, and I'm blanking on all of them, IPFS, Filecoin, Filecoin Plus, blah, blah, blah. Everyone helped get us here. You can read tons of stuff on our blog about it. We also had the Compute over Data Summit, 150-plus attendees, also in Boston coincidentally, and so on, so I won't go into all the details. The thing is scaling. We got mentioned in Forbes and SiliconANGLE for the work on Lilypad, which is a combination of Bacalhau plus the FVM, executing on chain, doing generative AI, tons of goodness. Please check out our blog, and I'll keep it short so the next people can go. Thank you.

Thank you. Yiannis? Hello everyone. An update since February: the ProbeLab team is very much on track according to our roadmap, which you can see on Starmap. We have shipped optimistic provide, which is now experimental in Kubo v0.20. The NAT hole punching study has been finalized, and we have a very nice, shiny report out, which you should go and check. The CMI, which, if you haven't heard, stands for Continuous Measurement Infrastructure, is now a thing and is monitoring several of our networks and protocols, and it will be expanding to more, as Steve also mentioned earlier. We're reporting results to probelab.io, which is a new thing; you should just go and check it. Everything there is up to date. The whole thing is still in preview, but we're shooting to have a v1 out by the end of June, so now is the right time to give feedback if you want. And we've got lots of measurement and ongoing work in terms of DHT optimizations; I won't go through them all, but cool stuff is happening with refactoring, with double hashing and everything. A few things on the highlights: we've been at IPFS Thing, of course. We handled and resolved a couple of incidents, with the unresponsive nodes, with the Hydra dial-down, with DHT slowness; exciting and very cool stuff.
We have deployed monitoring on all of PL's websites, and if you want to monitor yours as well, just reach out to us; we have some cool graphs coming out of that. Guillem from ProbeLab took over the stewardship of all things kad-DHT, go-libp2p-kad-dht in particular, so more improvements to come there. And that's it, except that our collaborators submitted two excellent papers to nice academic research venues, so more visibility to come from there. Thank you very much.

Hi everyone. So we have four focus points, I guess. Number one is enabling the Ethereum API on our gateway nodes. Number two is our infra-wide rollout of Lotus 1.22.1; that was actually really, really important for us, as it's resulted in halving our snapshot size, and it's really improved the performance and reliability of the lightweight snapshot service. Skipping to the KPIs for a second: the raw snapshot size has dropped from 140 gigs to 85 gigs, and our snapshot success rate is back up to 97% over a seven-day interval. To give you perspective, a couple of weeks ago it was somewhere around 50%, as you can see from the little graphs in the top right of the slide. So it was a pretty sorry situation, and we're very glad to have that back online. The third point is we've completed our SplitStore rollout. This is another major benefit for us, because it's really reduced our operational overhead. One thing we were contending with was frequent datastore resets for Lotus workloads, and one benefit I'm personally very happy about is that we're getting way fewer pages out of hours to fix that. The datastores were affecting SLAs for snapshots and causing all sorts of problems, so we're really happy to see the improvements there. And finally, thanks to the recent event-horizon change in Lotus, we can now deprecate the raw snapshot: the minimum supported version of Lotus will support ingesting the compressed snapshots, and obviously the compressed snapshots are much smaller and much faster to transfer. So mostly it's been around storage, and we think we can drive costs down by at least a further 20% beyond what we already have. That's going to be our focus for the time being. You can find us, of course, in #fil-infra on Filecoin Slack. Thank you.

Thanks, Leahy. On to our spotlights, starting with Bifrost. Thanks, Molly. We'll keep it super brief. I just wanted to use this moment to share some important work we did in Bifrost recently to update our metrics stack, so we can give you all those useful insights about the ipfs.io gateway operations. The TLDR is that we had simply outgrown our old stack and needed to revitalize it a bit. So we took the opportunity to look at it from the ground up, and managed to get some wins in terms of extensibility and cost reductions too; you can see there it's over 80%, as we managed to leverage some spot instances and things like that, so it's a huge win. Conscious of time, the big TLDR is I just wanted to say thanks to Nikolai and the team for leading the efforts there and all the hard work they did. The one implication is that we'll be sunsetting our old Elasticsearch and Prometheus instances, but there'll be some upcoming announcements to publicize that and give everybody a chance to migrate. That's it for me. Thank you. Awesome.
Big picture: go and check the blog posts on the Compute over Data Summit and watch the recordings, because there were a lot of great talks, and a lot of great tweets you can go retweet from all of the people presenting new compute-over-data solutions and the tooling those solutions use. So congrats, folks, and on to Caboose.

So Caboose is a thick client that we've written over the last couple of months for Saturn, as part of project Rhea. This is going to be relatively quick, but I'll talk a little bit about where it fits in, what we get out of doing this, and why we need a thick client. Let me first give a basic view of what's going on in Caboose. Caboose has two big data structures. One is a pool of the nearby Saturn nodes it wants to send requests to, and it keeps those logically arranged in a consistent hash: based on what CID you're asking for, that hashes somewhere into something that looks sort of like a DHT and chooses one of the nearby Saturn L1s to send the request to. The reason you use consistent hashing there is that it really improves your cache affinity: a node gets a limited slice of the CID space that it is responsible for, and as a result that node can do better at keeping a warm cache for those CIDs. The other data structure we keep is a pool of other nodes that we'll swap in if one of the active nodes starts being bad; so we've got a fallback set of Saturn L1s. As Caboose is getting requests, it's mirroring a small fraction of them, sending a copy to these other pending nodes to get a sense of: are they working right, are they fast, and should we swap one of them in because they're doing better than the current ones?

So what are we doing here? With a normal CDN, you'd generally use DNS to talk to it. But DNS is centrally controlled, right? Someone has to run that DNS, and that DNS has someone's name on it. If we're really building a decentralized CDN, we ideally don't have a single entity that can turn off DNS. So if you imagine that future, in a year or something, we could have the Saturn nodes registered through a smart contract, on chain, in a decentralized way. But you've then got to have a client that finds who's registered and sends the requests there. As long as you've got a thicker software stack, be that a JavaScript service worker or Caboose for Golang clients, it can go out and find the set of active nodes near it in a way that is decentralized and resilient. So we want to provide one layer of abstraction beyond just a direct list, and that's what Caboose does. DNS also has a bunch of caching behaviors that aren't what you want if you're running something that's doing a lot of traffic and a lot of requests. If we're running things like the ipfs.io gateway, where we're sending hundreds or thousands of requests per second, it's sort of unacceptable to wait for the minute-long DNS timeout before we fail over to another working Saturn node. We want to be much more responsive to node failures, and the way we do that is by having that active polling and active management in the client software, to quickly notice a node is not responding and start redirecting the requests that would have gone to it somewhere else.
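To make those two data structures concrete, here's a minimal Go sketch with hypothetical names (not the real Caboose API): requests consistently hash onto the pool of active Saturn L1s, and a node that starts misbehaving gets demoted in favor of a fallback candidate.

```go
// Toy sketch of a Caboose-style node pool: a consistent-hash pick over
// active nodes keyed by CID, plus a fallback pool to swap in on failure.
// Illustrative names only, not the real Caboose API.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

type pool struct {
	active   []string // nodes currently serving requests
	fallback []string // candidates probed with mirrored requests
}

// nodeFor hashes the CID onto the ring of active nodes, so each node sees
// a limited slice of the CID space and can keep a warm cache for it.
func (p *pool) nodeFor(cid string) string {
	h := sha256.Sum256([]byte(cid))
	key := binary.BigEndian.Uint64(h[:8])
	bestNode, bestDist := "", ^uint64(0)
	for _, n := range p.active {
		nh := sha256.Sum256([]byte(n))
		// Wrap-around distance to the node's point on the ring.
		if d := binary.BigEndian.Uint64(nh[:8]) - key; d < bestDist {
			bestNode, bestDist = n, d
		}
	}
	return bestNode
}

// demote swaps a failing active node out for a fallback candidate: the
// "quickly notice a node is not responding and redirect" behavior.
func (p *pool) demote(bad string) {
	for i, n := range p.active {
		if n == bad && len(p.fallback) > 0 {
			p.active[i] = p.fallback[0]
			p.fallback = append(p.fallback[1:], bad)
			return
		}
	}
}

func main() {
	p := &pool{
		active:   []string{"l1-a.example", "l1-b.example", "l1-c.example"},
		fallback: []string{"l1-d.example"},
	}
	fmt.Println(p.nodeFor("bafyexamplecid")) // same CID maps to same node
	p.demote("l1-a.example")
	fmt.Println(p.nodeFor("bafyexamplecid")) // remapped only if its node failed
}
```

Real consistent-hash rings usually give each node many virtual points to even out load; the single point per node here is just to show the cache-affinity idea.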
So custom software there can do better than a standard client. The other point is that DNS doesn't always get you the fastest nodes: it gives you whoever the DNS server thinks is probably in your region. But by doing these active measurements, where on each request we look at how fast it comes back and use that to understand our actual latency to these Saturn nodes, we can have a much more realistic ranking of which nodes are performing well for us. And by having the thick client, we get to do a couple of additional things that are really useful for Saturn. One is the mirroring to find other nodes; that's active software doing that. The other is an amount of challenge and verification of Saturn itself. We can send challenges for CIDs that we know are unique, and then make sure that we get them back, and that they don't also show up in the logs of some other Saturn L1 or in the gateway. So there are types of misbehavior we can detect, and we can then remove the misbehaving nodes as a result. By having this client software, we get validation in both directions, with logging for payments and with making sure that nodes are behaving correctly, and we also get better performance. So that's the thick client that is Caboose for Saturn. Thanks hugely to Arsh and the rest of the Saturn team, and also to everyone involved in the Bifrost gateway for the integration and for working with us on the design of Caboose. We expect to launch it with Rhea relatively soon, so it'll become another part of the interplanetary stack. All right, I think that's what I've got. Awesome, thank you. See you all later. Bye folks.
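One way to picture the challenge idea from the deep dive: content addressing makes responses self-verifying, so the client can hash whatever bytes come back and compare them against the identifier it asked for, flagging nodes that return junk. A minimal sketch, with hypothetical names and bare SHA-256 standing in for real multihash-based CIDs:

```go
// Toy sketch of challenge verification. Hypothetical simplification:
// real CIDs are multihash-based, and real checks happen per block.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyChallenge reports whether the payload a node returned for our
// challenge is the content that actually hashes to the requested digest.
func verifyChallenge(want [32]byte, payload []byte) bool {
	got := sha256.Sum256(payload)
	return bytes.Equal(got[:], want[:])
}

func main() {
	secret := []byte("unique challenge block only this client knows about")
	digest := sha256.Sum256(secret)

	// An honest node echoes the real content back.
	fmt.Println(verifyChallenge(digest, secret)) // true

	// A misbehaving node returns something else: flag it for removal.
	fmt.Println(verifyChallenge(digest, []byte("garbage"))) // false
}
```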