Well, welcome everyone to our June PL EngRes All Hands Meeting. We have these every month; shocking that it's already been a month. Same agenda as usual: we're going to go through our quick working group update with some of our team updates, lots of awesome spotlights this time, and then a deep dive into the awesome improvements the Sentinel team has been making to Lily, which is helping everyone have more visibility into Filecoin network metrics, monitoring and historical information.

Quick reminder, if you're new here: this is one of the many working groups across the PL network. We focus on engineering and research, AKA EngRes. One of the things that really unites us with the rest of the PL network community is a shared belief that the internet is an amazing superpower for all of humanity, and that we need to build a resilient foundation for our information and for the collaborative tools that use the internet. The amazing phase transition that we are a part of as a species should get built upon a robust foundation: one that empowers individual data ownership, data privacy, and connection between individuals, so that things like brain-machine interfaces, AGI and the other breakthroughs that are right around the corner actually build upon a better foundation for computing. We do that by building, starting, growing and contributing to amazing computing projects. Some of the main ones we contribute to are IPFS, libp2p and Filecoin, but we have a number more projects that we work on and build toward: drand, multiformats, IPLD and many others. We think these are making a much more robust foundation for humanity's information and enabling peer-to-peer connectivity between individuals. Our mission for the EngRes working group is to help scale and unlock new breakthroughs for IPFS, Filecoin and libp2p. We do that in three main ways, the first being helping drive breakthroughs in protocol utility and capability directly.
So: working directly on the technology; scaling network-native research and development by doing our work in the open, being highly collaborative with many different groups across the PL network and sharing our learnings and discoveries; and then stewarding and growing open source projects and communities, helping attract and build up more work within this ecosystem, making our projects open and accessible, and trying to make building blocks that other people can harness to build amazing projects on top of. We are broken up into a number of different teams across the PL EngRes working group, working across retrieval markets, FVM, compute over data, CryptoNet, consensus, data retrieval, data storage and many other areas.

This is our 2023 strategy, no changes here. We first and foremost focus on keeping all of the critical systems running and growing smoothly so that they can scale to new adoption. We work on growing the entire network by hosting events, by being accessible stewards who can help train new people in these technologies, and by being highly collaborative to attract more reach to this ecosystem. And then we have two main focus areas for adding a lot more value and utility directly to the technologies in the space. First is helping grow robust storage and retrieval across IPFS and Filecoin. Second is making sure that we can unlock amazing new capabilities on top of Filecoin state and Filecoin data: making it more accessible to new builders, unlocking more utility for data that's been stored in Filecoin, and making sure that we have the change space within these networks to make room for all of this amazing user programmability. And here's our high-level view of some of the breakthroughs across IPFS and Filecoin that we're working on, categorized by some of those different growth areas.
We're heavily focused on some of the new breakthroughs coming, but we definitely continue to build on the successes from earlier this year, especially things like FVM and the early launches of Saturn, which are setting us up with an amazing community of builders on top of Filecoin and an amazing set of node operators in Saturn who can help us convert some of the first clients to this network as well. And of course, we continue to do a lot of work across IPFS, Filecoin and other networks to help with things like network upgrades or new versions that can support new capabilities.

Two exciting new upgrades are in the works for people to look forward to, both hovering around the end of this quarter. One is the IPC MVP actually being deployed on Filecoin mainnet. The ConsensusLab team has been making awesome progress here, working with builders elsewhere in the ecosystem as well: Solidity smart contracts are being built for IPC that can then be deployed on FVM and enable the docking of the IPC testnet, Spacenet, with Filecoin mainnet. So that's a very exciting milestone that we're all working towards right now. The other big endeavor is the collaboration between the Saturn team, the IP Stewards team, the Bedrock team, the Bifrost team and a number of other groups to help the IPFS gateway transition to using Saturn as a decentralized CDN for fetching and serving retrieval requests for data across IPFS and Filecoin nodes to all of the users who are trying to fetch content from the IPFS gateway. That's a big, big endeavor that we are continuing to push on, towards completing that first client use case for Saturn, which is very exciting. So, exciting progress on some of these big bets that we're pushing towards across the whole PL EngRes working group and the various communities that we're a part of as well.
Exciting update, and sorry to steal Jennifer's thunder, but we have crossed an exabyte of total data stored in Filecoin across 1,500-plus clients, including SETI, Internet Archive, Solana, CERN and many other groups who are now storing large data on Filecoin. It's been really amazing to see how the Filecoin data storage pipeline has continued to grow. We hit, I think it was, 5.9 pebibytes onboarded earlier on Monday this week, so we're almost about to hit six pebibytes of data being onboarded per day, which is pretty freaking impressive. That reflects a huge chunk of progress in upgrading the technology, in improving data onboarding tooling, in storage providers really tuning their data onboarding pipelines, and in new clients hearing about and being attracted to this ecosystem to bring their useful data to the space. So that's pretty exciting in terms of the big bet to really make sure that Filecoin is a great home for humanity's most important information.

A lot of our work is going right now into Saturn as a Web3 CDN and helping that scale fast retrievals of data on Filecoin. We have 2,500 CDN nodes, or points of presence, around the world. We're now seeing over 200,000 successful retrievals per week from Lassie. And we have over 10, I think, different storage provider clients that have a 100% retrieval success rate. So we're starting to see that come together as a major way of accessing and retrieving data from the network. In terms of compute over data, harnessing the awesome breakthrough of FVM so that we can bring larger compute networks to make all of the data in Filecoin even more useful, and bring the additional data, the output of those compute jobs, to Filecoin as well: we saw Bacalhau hit 1.0, which we talked about last time. And we have deployed smart contracts on top of FVM that are making compute available to clients within the Filecoin virtual machine, specifically through Waterlily, which is doing Stable Diffusion and generating images.
Those groups are also investigating things like Interplanetary Consensus to help scale their compute networks as L2s, with things like faster block times or more specific incentive systems for each of those more differentiated compute networks. We expect there to be a huge plethora of these networks optimizing for different parts of the compute space. In IPC, there's been awesome progress in getting IPC deployed and then gearing up for Spacenet to dock with Filecoin mainnet, which is the next milestone the team's pushing for here. There's also a lot of work making sure that we're approaching strategic decision making about how to support future IPC subnets with the right sort of node tooling, and that we're predicting the sorts of use cases, be that compute networks, regional sharding retrieval networks, or other use cases, that would want to build on top of Filecoin and on top of a flexible scaling solution: one that helps scale all of these use cases and attracts them to the Filecoin ecosystem with smooth interoperability with storing their data in Filecoin, having all of their data be IPLD content addressed, and bringing all of that utility and usage to the Filecoin ecosystem. I'll hand it off to each of our leads to give a quick update on our OKRs for Q2, which we have about a month left in.

Thanks Molly. On keeping critical systems running and secure, the first KR was unlocking IPFS reader privacy via double hashing for our major content routing mechanisms, namely the DHT and IPNI. On the IPNI side, all data for cid.contact, the most used IPNI node, is now being served via a double-hashed store, so that's great on the reader privacy side. On the DHT side, we recognized we were not going to meet the KR, and we rephrased it to deliver a publicly shared roadmap for DHT improvements, which will include a composable DHT and reader privacy; that's in the works.
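The double-hashing idea mentioned above can be sketched in a few lines. This is a minimal illustration of the concept only, not the actual IPNI or DHT implementation (the production schemes frame and salt the hashes differently): the index stores a hash *of* the content's multihash, so the index operator can serve lookups without learning which CID a client is asking about.

```python
import hashlib

def multihash_digest(content: bytes) -> bytes:
    # First hash: the digest inside a CID, identifying the content itself.
    return hashlib.sha256(content).digest()

def double_hash(mh: bytes) -> bytes:
    # Second hash: the key actually stored in the index. The indexer only
    # ever sees SHA-256(multihash), not the multihash or CID itself.
    return hashlib.sha256(mh).digest()

content = b"hello ipfs"
mh = multihash_digest(content)
key = double_hash(mh)

# A client that knows the CID can derive the same lookup key,
# but the key does not reveal the multihash it was derived from.
assert key == hashlib.sha256(hashlib.sha256(content).digest()).digest()
assert key != mh
```

The design choice being illustrated: lookups stay deterministic (same CID, same key), while a passive observer of the index only sees opaque second-level hashes.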
We're also doing the necessary prep work to refactor go-libp2p-kad-dht and wire it into Boxo and Kubo, so stay tuned there. And then on the Filecoin side, we landed two network improvements, 560 and 561, for making the chain much more stable, and we are not going to be doing another network upgrade this quarter, so those will be the two. Meanwhile, the team is focusing on profiling the chain to uncover any other issues they should be aware of.

Great, yeah, thanks. On hyperscaling and accelerating the talented teams contributing to the PL stack: with the completion of the ConsensusDay event, we've now executed on four distinct events, so well done to all involved there. In terms of attendance and async views, we haven't hit the targeted reach yet, which is why this is not green or marked as complete. On the second item, around Boxo: we're not aware yet of Boxo fully boosting other IPFS implementations outside of EngRes. We haven't done the proactive reach-outs and onboarding of others that we originally planned. We have been giving some white-glove treatment to Outercore Engineering, but we're not sure if that's going to be landing this quarter or not. There are 130-plus packages depending on Boxo, many of which are outside of EngRes, but we really just haven't gotten to analyzing which of those we'd classify as IPFS implementations. That uncertainty is why this is marked as yellow, but we're certainly continuing to deliver functionality there, especially as requested and needed by projects like Rhea and within EngRes.

Thanks Dave. I'll take the data retrievals from CDNs. On the first KR here, as you all know from the Saturn decentralized CDN project, the KR we were initially aiming to hit was five customers onboarded this quarter. And if you recall, last time we were updating this forum, we were at two and trending towards three.
So you might be wondering why we're now saying 1.5, and why I don't look as grumpy as I normally do and as you would expect me to look. The purpose of getting these first real customers is of course to validate and hone the system and make sure it's working properly for them. As you can see in the second KR here, we've had a slight regression in the retrieval success rate from Saturn. The team has made a strategic decision to pause development on new integrations and really focus on making sure that the Rhea integration is working perfectly, humming along beautifully, so that we can nail those KPIs and make sure those initial customers are happy. There is some really heroic work going on here, both from the entire Saturn team, which is now focused on this, and the entire decentralized gateway working group. You can follow along in the public decentralized gateway working group Slack channel if you want to see all of the fun and drama of this exciting endeavor, debugging a very complex system. We believe this is the correct decision and that we are moving in the right direction towards getting the system behaving the way we believe it should, and ultimately really making our users happy.

On the third KR here: one of the reasons we're looking to do this is to reduce our centralized Web2 infrastructure costs by switching over. We do have a lot of positive news here. In advance of Rhea being up in public, the DAG House gateways for dweb.link have already been implemented in production, such that when we make that switchover, 85% of Elastic IPFS bitswap traffic is going to move over to HTTP traffic, which we believe will decrease the variable cost per byte by, actually, 85%.
Simultaneous to this, while it is not related to the rate of rollout, it would be remiss not to call out the amazing reductions in centralized infrastructure costs that both of these teams have made. DAG House recently cut $30,000 a month on the gateways there. And we have a lot of other projects in motion, both on the DAG House team and up the stack, that we expect to see significant returns on in the coming weeks. I'll turn it over to you for exciting updates from Filecoin L2 and compute-over-data land.

Great, thank you. Thank you, Mathieu, for giving context into all of them. Super exciting to hear all the upcoming progress. On the last OKR, upgrading Filecoin with new L2 capabilities, sharding, change space and compute over data: on FVM land, we have great news as we are marching towards the end of this quarter. We have met the original OKRs that we set. We are close to 2.4 million, as of today maybe even close to 2.5 million, FIL managed by FVM contracts. Over 1K unique smart contracts deployed; it's around 1,100, close to 1,100. And we are at 90K wallets as of today. This is the only one where we might be lagging a little behind and might not be able to meet by end of this quarter, but the team is putting in all the effort to make FEVM entry points easy, so hopefully we will accelerate that growth there. On the IPC side, IPC M1 launched, as you all know by now, and M2 is entering audits. And on Bacalhau compute over data, we met our OKR for this quarter earlier this quarter with the 1.0 launch. With that, I will hand it off to Molly.

Awesome, and I will hand it off to the IPFS folks to give us updates there. Cool, all right, yeah, this will be me again. So, talking about IPFS: making the web work peer-to-peer with content addressing, so content can be verified independent of the provider or transport method. So on the next slide here are our KPIs, and a couple of things I want to call out.
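As a quick aside on that verifiability point: the property that content can be checked against its identifier, no matter who served it or over what transport, can be sketched in a few lines. This is a toy illustration using a bare SHA-256 digest rather than a real CID, and `verify` is a made-up helper name:

```python
import hashlib

def verify(expected_digest: bytes, data: bytes) -> bool:
    # With content addressing, the identifier commits to the bytes,
    # so any response can be checked locally; no trusted server needed.
    return hashlib.sha256(data).digest() == expected_digest

original = b"some block of content"
digest = hashlib.sha256(original).digest()

assert verify(digest, original)            # honest provider: accepted
assert not verify(digest, b"tampered!!")   # corrupted or malicious response: rejected
```

This is why it doesn't matter whether bytes arrive via bitswap, HTTP, or a CDN node: the client re-derives the hash and rejects anything that doesn't match.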
The top left slide is network size, but again, this is only the public IPFS DHT. We are tracking how to expand this to other sets of nodes, but no major call-outs to make there, nor in the GitHub community activity. On the right-hand side, network performance: we're looking at time to first provider record, thanks to some work that just recently landed by the ProbeLab team. We're also looking at external client-side performance around IPNI at cid.contact, and that's even getting broken out now by cached and uncached results. Again, this just landed, so we don't have a long history of it and there are maybe some kinks to work out, but this is something we'll be reporting on as one of the other content routing mechanisms within the network.

Turning to some highlights of what happened over the last month. On Helia, our IPFS-in-JavaScript implementation that we've been pouring into: an important migration guide for the js-ipfs deprecation got completed, and it took on an important js-libp2p update which has some of the new transports like WebTransport and WebRTC on by default. The team has also been leaning in on HackFS participation. I don't have any of the numbers yet, but we're really using this as an opportunity to see what is and isn't working for people with Helia, so more to come on that during our next update, I'm sure. On the Go side of the world, with Kubo and Boxo: there was a new Boxo release just cut, and that'll be moving into the Kubo RC which is expected for Monday. Some user-requested things are in there, around friendly error pages and being able to display and traverse dag-cbor previews. There's streaming support in our delegated HTTP routing v1 APIs, and some long-standing resiliency items from events that hit us earlier in Q1; we've finally gotten to fully addressing those, and those fixes will be coming out. I want to give a shout-out to the cross-org effort with Little Bear Labs.
They've been leveraging trustless gateways, and we now have a small patch set of changes that you can add to any Chromium-based browser which will add IPFS protocol handling. So there are proofs of concept you can use, and again, it's quite minimal for anyone who's using Chromium to add this in, so we're excited to see where that goes. Also a shout-out to ProbeLab: some of their measurement work has been getting picked up in other places. In terms of things coming: what's been referred to as IPIP-402, which is around partial CAR support in trustless gateways, particularly needed by Rhea, should be going out tomorrow in Boxo, and then it can get picked up by Kubo, Boost, the Bifrost gateway and anyone else who wants it. A key thing here is that it will have accompanying gateway conformance tests, so we're excited to be holding that kind of standard. There are more routing v1 improvements, as you can see, including IPNS and peer routing. A major refactoring of IPFS Companion, due to browser changes with MV3, will be launching over the next month; our second beta is out there and we certainly welcome feedback before we push this out to many thousands of users. And yeah, there is a lot of work going on right now on the Go side of the world for the DHT refactoring, preparing us for other work to come. So that's what we'll share there, thanks.

Next we're on to IPDX, and we've got a video to play. IPDX has been working tirelessly to enhance our processes and workflows. Recently, we've submitted four great talk proposals for GitHub Universe 2023, each focusing on diverse yet essential aspects such as managing GitHub configuration as code and monitoring GitHub Actions. We've ramped up security by implementing secret scanning and push protection across Protocol Labs. We're also proud to announce that we've enabled code scanning in selected repositories, aiming to extend this across the organization soon.
In terms of collaboration, our team has provided key support to the libp2p team to automate their performance testing. We've also set up Protocol Labs self-hosted GitHub Actions runners for the IPNI project. We've introduced several enhancements to gateway conformance, including the car-check API, DNSLink support and more. Looking ahead, we aim to fully implement code scanning across Protocol Labs, migrate the remaining gateway tests to a more efficient framework, and continue to improve our operations while reducing costs. We're excited for what's to come and appreciate your support as we forge ahead.

Awesome, thank you for that delightful AI TikTok. And to appreciate all of the amazing work the next team is doing, over to libp2p. Thank you, Molly, for that kind introduction. We don't have AI here at libp2p; we just worry about the plumbing that powers everything else being built around here. I wanted to highlight some of our KPIs. We had feedback last month that we should be showing IPFS in our unique node count graph. Previously, we hadn't included it because it's generally much larger than all of the other networks and it tends to squish everything down at the bottom. But it was requested, so we're adding it here so that you can see the total numbers for transparency's sake. One thing I want to point out: when we went to update the numbers for June, we noticed a sharp drop-off that we're certain is due to the number collection system having some issues. We're looking into that right now and have some action items about it, but I can assure you these numbers are not reflective of reality. It's also early in June; the report-out for June is for the previous month. The other thing I wanted to point out, on the community GitHub activity: after previously reporting a sharp drop-off in April, we're back alive and excited again.
We have now confirmed that IPFS Thing causes a sharp decline in our metrics: you can see back in July of 2022 we had a sharp decline, and then we were all at IPFS Thing in April of 2023. So we're calling this the IPFS Thing phenomenon. It's mostly because the entire community shows up there. Some highlights on what happened in the last month. We added a new team member, Sikun Tereshandani, who's out of India; we're really excited to have him on the team. If you missed his intro, I'd like to quickly highlight that he started as a volunteer reviewing pull requests in January. Then in March, I believe, he was given a small Filecoin grant to do some key work on the go-libp2p team, and now he has joined the libp2p team as a full-on member. So the rags-to-riches story, from community volunteer to paid contributor on a project, is true in some cases. We've been doing a lot of work on performance and metrics. We have a dashboard preview up there; it will continue to improve over the next few weeks, and we hope to do a broader introduction to it so that you can read and understand it. The TL;DR on all of this is that QUIC is awesome, and I'll talk more about that in a minute. The thing I'm most excited about this month is that we're seeing strong community engagement now. Numbers are going up across the board, and we're seeing lots of contributions from outside organizations; here's a brief list of some key ones. The one I want to highlight the most is the JVM implementation: the Java version of libp2p has been very active over the last month. It's a collaboration between ConsenSys and Peergos, and there's a long list of closed PRs from the last month that you can look at there. We've been doing our community calls as usual; at the last one we had 15-plus attendees from multiple orgs, which is also a sign of good health for an open source project. And we're currently right in the middle of the HackFS libp2p hackathon.
I'm most proud that we have six mentors hanging out in there that we recruited from across the project. The other thing I really want to highlight is that QUIC is definitely becoming our favorite transport of choice. Martin did some Herculean effort over the last month landing all the QUIC changes into Go's crypto/tls, and that's going to help us improve a lot of the latency issues with libp2p connections. And then later today, Marco will be doing a libp2p-from-the-ground-up session. It's the very first deep dive, or the deep end, I should say, so look for that later today. Awesome, over to Jennifer.

For Filecoin, we focus on getting data in, getting data out, and hopefully getting data computed on top of this distributed storage network. Some very quick follow-ups on our KPIs: the total network storage capacity is still around 12 EiB of raw byte power. As you can see, it's not growing as fast as it used to, but that is because sectors that were onboarded two or three years ago are starting to expire; I should include a graph here. Daily sector onboarding is still around six PiB per day, so we are still getting new storage committed to the network on a daily basis. And out of that six PiB per day, five PiB is actually real data stored in Filecoin deals. As Molly mentioned earlier, we passed one exbibyte of data stored on Filecoin; I shared the link over there where you can see a lot of the data sets on Filecoin, and that's very exciting.

Some quick Filecoin highlights. I forgot to include this, but I do want to give a shout-out that the Lotus team welcomed a couple of new team members as well. Mike, from Outercore Engineering, is now officially joining the Lotus actors team, and we also have Andy joining our team, who is going to help implement improvements to the lotus-miner software.
Some project updates over here. The Proofs team has published some new releases, including an unsealing fix and better APIs for unsealing, so unsealing is going to be easier for storage providers now; it's getting integrated into lotus-miner and will be included in our next release on its way to storage providers. Next, a great effort by Alex and a community contributor, Alex Xu: they landed a deal activation optimization and improved it by 25%. As you may have heard, PSD (PublishStorageDeals) has been a huge cost for storage providers when getting data onboarded to the Filecoin network, and we are trying to get that cost down little by little, or by a lot; this optimization just landed and is waiting for the governance process to include it in the next network upgrade. We also implemented optimized historical data access using a Lotus node. This is a new user requirement on the Lotus node since the FEVM launch, because now we have applications that want historical chain data, way more of it, like Graph nodes and other things, and we have now implemented that, thanks to Frigick from the FEVM team. The lotus-miner team has also been consuming all the improvements from the Proofs team, hopefully optimizing how fast we can serve retrievals with the Boost team for Filecoin. We are also joining HackFS; there are a lot of tracks going on, FEVM, IPC, Saturn, all very exciting, do check it out. And I just learned Saturn payouts are now on FEVM, so that's very cool. Some opportunities: again, we keep looking at how we can bring computation and gas costs down for getting data stored on top of Filecoin. The Proofs team has been doing a lot of benchmarking of SnarkPack for snap deals, so basically we can aggregate the computation, and a collaboration with Supranational, an ecosystem partner, has been delivering a lot of software optimizations.
The miner team is also on track to integrate Synthetic PoRep and get it ready for the nv21 network upgrade, so that folks can save more storage space between PC2 and C1, I believe, and the sealing pipeline can be more robust. Also, in collaboration with ConsensusLab, we started talking to their engineers about IPC client discussions and what the implementation should look like. It's still early-stage exploration from our side, but that's very exciting. I think that's it.

Awesome. Well, let's keep it quick, but let's go into some of our quick team updates and then our spotlights and deep dives, starting with Bedrock. Hi everyone, David, engineering manager on Bedrock. As a reminder, we work at the intersection of Filecoin and IPFS. A few key highlights to share: Boost adoption is going strong; we're almost at 50% of all Lotus market nodes, which is exciting. Also, our indexing coverage for Filecoin deals is over 50% per week, so we should be seeing the overall number of deals indexed on IPNI go up. A few project highlights over the last few months. First off, IPNI, as mentioned earlier, is storing all of the index in double-hashed format, which helps our privacy-preserving efforts. The team is also rolling out a new scalability solution that leverages FoundationDB that we're really excited about. The Boost team has added functionality to serve files over HTTP with booster-http; there are a few blog posts linked there, take a look. And the Lassie team added HTTP retrieval support in Lassie, with an initial integration with dot storage, making sure that's working well for the Rhea project. In terms of what's coming next, the IPNI team is going to continue to stabilize that new infrastructure with FoundationDB, and the Boost team is working on scalability improvements with the Lotus team, as mentioned by Jennifer, as well as migrating to a new database for retrieval scalability.
And the Lassie work stream will continue making improvements for the Rhea project, with all of the different retrieval parts of Saturn as well, improving that success rate in the long run. Awesome, thank you. Patrick for Retrieval Markets.

Hey there, Patrick here from the Retrieval Markets team. First update is from the Station team, where they've been building something called Spark. This is a retrieval checker module; it's currently making retrievals against Saturn, but soon against storage providers. We've actually got a slide on this in the spotlights, so we won't go into too much detail here; as of now, 600K retrievals have been performed. The Saturn team has been focusing pretty much completely on Rhea, and the Rhea M1 milestone is for us to match the IPFS gateway on correctness and latency with 10% of production traffic. We're getting close; we're still a few iterations away from this goal, pulling it all together. On Saturn in general, though, we've got over 2,000 nodes now running as points of presence, which is super cool. The Saturn working group, as we've already heard, has shipped a Saturn payout contract on FVM, and we're going to hear a bit more about that in the spotlights from Amin, who was working on that project. And then across the PLN teams: Magmo is now working with Boost on multi-hop payment channels, which is really cool. And Titan, which is another Saturn-style network based out in China, has launched a testnet for their DCDN, and I think they're integrating with FEVM and also with IPC subnets, so they're doing some really cool stuff. On opportunities: there is a bounty available at HackFS for the most impactful application on Saturn and IPNI; I think that kicked off yesterday. On Spark and reputation data, we'll come back to that in the spotlights. And there's an open role on our team. I'll leave it there. Thank you. Max for CryptoNet.

Yeah, so we have this work by Kuba.
The Filecoin data tools integrated proof of data segment inclusion to enable verifiable deal aggregation in Delta. Then we have a bunch of docs that we updated in the last few weeks: the Synthetic PoRep security audit report, and then the CC sector upgrades. As you know, with FIP 19 it's now possible to inject data into CC sectors; the TL;DR is that upgrading CC sectors looks like the best option for SPs. Then we have a storage faults model, and an overview of the Proof of Spacetime security model. I'd also like to mention the proofofspace.org website; the goal there is really to onboard researchers and engineers into the space, and on July 20 and 21 in Paris there's going to be an event focused on that. And then we have the Filecoin observability proposal, and mainly the SP ROI calculator by Nicola, which is part of that. Awesome, thank you. Thanks. And Eric for drand.

Right on, hi everybody. drand here, the ultimate randomness solution. I'll try to keep it short and sweet. I've got lots of links here for you, but here are the highlights I'd like to call out. The League of Entropy, which is our consortium of volunteer organizations contributing to our threshold crypto network, is healthy and growing. We've added three new members this quarter, and we've received some significant expressions of interest from a variety of large and interesting potential customers such as Microsoft, Amazon, Ubisoft and Mysten Labs. We've created a lot of new material in preparation for HackFS, so we have some great resources for beginner hackers as well as more experienced folks who will be joining us for that day. We have a bounty, like many of the teams do, for drand use cases. And speaking of use cases, we've got a voluminous product-market-fit document that defines several use cases.
For those of you who are more business-inclined, we'd love for you to come check that out and give us some feedback. We recently had a FIP accepted for onboarding Filecoin to the unchained drand network, so that was very exciting. We also have some new work: we've got a new intern, who I should have introduced here; I just realized I forgot to include her name and introduction, but she'll be joining us to work on DKG refactoring. All of our KPIs are awesome. As you can see, we're at 100% uptime, and we've got two billion legitimate requests each month, with a dashboard there for those who want to double-click. We also managed to repel some spammers who, through an incorrect implementation, were at one point generating 80% of the API calls, which was really nasty. So we're working with ProbeLab and some of those folks on that stuff. Opportunity-wise, we're really excited because not only have lots of new customers integrated drand into their products and services, but we've also received expressions of interest from a variety of different clients, primarily in the gaming space, and we're investigating some of those use cases. And we will, of course, be with many of you at EthCC and the Blockchain Oracle Summit in Paris in about a month or so, so we look forward to seeing everybody there. Thanks so much. Awesome, great work, drand team. Let's head over to our spotlights, starting with Nikki from Hackathons. Hi, everyone. I have an exciting update from hackathons land, from Outercore's hackathons team, for you all. Last Friday, we kicked off our fourth annual three-week flagship virtual hackathon, HackFS 2023, with a global cohort of 820 registered hackers. The total prizes add up to a whopping $150,000, of which $60,000 is from Protocol Labs, spread out across all of the teams you see listed on the right. Over 90 projects have already checked in with their progress reports and blockers.
So for all the teams that are supporting the hackathon: head over to your dashboard and check those projects out. You can also contact the hackers on Discord with the information listed over there. Coming up next are project feedback sessions for hackers and live hackathon judging, so if you'd like to be a hackathon judge, sign up using the link listed in the slide. And a huge, huge, huge thanks to Andreas and all the teams for the awesome support for HackFS. Thank you. Thank you, Nikki. Patrick for Spark. Hi there, me again. So Spark, yeah, this is the first module the Station team is working on. The idea here is to do two things: one, get to the first Station module that gives people payouts, and two, make some progress on the retrieval incentives and retrieval reputation space against storage providers. So the idea is that Stations, of which we now have 110 around the world, will be running periodic retrievals against storage providers, measuring them and storing the results, and eventually they'll be rewarded for doing these jobs. So far we've been hitting the Saturn network as a first step, and while we've been doing that, we've integrated Lassie into Zinnia, so we can now hit the SPs using all the hard work on Lassie. The next step is to actually create this thing we call Meridian. Meridian stands for "measure, evaluate, reward", which is impact-evaluator terminology. We want to create an impact evaluator whereby not only Station but also the work we've done on Saturn can reward people for their jobs. It becomes part of a framework where you can just plug in to measure a certain job in the network; you then get evaluated and then rewarded by the smart contract, and that work is kicking off as we speak. Thank you. Over to you for the js-ipfs deprecation. Yeah, great. So js-ipfs has a long history here at PL, with lots of exploration and lessons learned, many of which have now moved into Helia.
And so over the last month, the Helia working group has taken on the work of actually deprecating js-ipfs. First there was a lot of planning before we started disrupting and upending people's lives, then a lot of documenting and communicating in terms of blog entries and a migration guide, which we've gotten a lot of positive feedback on, and then we entered into execution and disruption. The team went through, one by one, about 370 GitHub issues and PRs, often noting whether each has been solved in Helia, is not going to be addressed, et cetera. So all of that work is done. I'm just using this as an opportunity to celebrate the maintainership that went into winding something down gracefully, and ideally all the time and dev confusion we've saved by reducing some of the surface area as we move to Helia. So big thanks to those who came before us, many of whom are still here but in different parts of the org, certainly to Alex Russell and Ashant who did the heavy lifting here, and to Outercore and folks participating in HackFS who have already been giving feedback on how to make this better. We aren't done, done, done yet: there is doc cleanup that needs to occur around the IPFS docs, the js.ipfs.io website, and even ProtoSchool. Those are being tracked and we'll do them, and we're certainly going to be actively improving Helia as a result of the feedback we're getting. But we will archive the repo at the end of this week so that no new issues start showing up in js-ipfs. So again, thanks all for your help getting us to this point. Awesome. Amin. Hello everyone, I'm Amin, one of the engineers on Saturn, and as you might have heard already, we're very excited to announce that Saturn has just deployed decentralized payouts via FVM smart contracts. What does this mean?
It means that Saturn node operators now get their rewards allocated to their f1 address in an FVM smart contract, and they have the ability to query or claim their earnings, as well as access to a dedicated on-chain record of their Saturn reward transactions. We just launched at the beginning of June, and it has been a very successful launch: already 75% of the network has claimed their earnings, and to our knowledge we are the first decentralized CDN to release a decentralized payouts mechanism in this space. So it's kind of cool, but honestly not surprising that we're leading the frontier in that area. As part of our tooling, we developed a CLI and a web application for node operators to claim; you can see some lovely pictures of that on the slide. One cool thing about our CLI is that it offers end-to-end Filecoin-native functionality, meaning you can do everything, deploying, claiming, inspecting your earnings, without the need for an Ethereum address, and you can use it to interact with the FVM. Everything we've worked on is open source and public, and we really tried our best to generalize the tooling so that any team that wants to deploy their own reward distribution mechanism on the FVM can leverage what we have built. And of course, we'd love to chat about that if anyone is interested. And that's it, thank you. Great work. George? All right, my turn now. So yeah, just very briefly going over Consensus Days 23, which we organized this Monday and Tuesday. It was the third edition of Consensus Days, except if you're really old at PL, in which case it was the fourth one, because there was an older event at one point together with the SPC. We did it in '21 as a virtual event, in '22 as an in-person event in LA, and now we're back to virtual. And I think we continued the good trend of the previous editions; we had a whole lot of very interesting talks.
We had 20 accepted talks out of 35 submissions, plus two invited talks: one from Aggelos Kiayias, the chief scientist of IOHK (IOG, Cardano), and one from Zarko, the CTO of Informal Systems. We had participation from all across the industry and all across academia, in Europe, the US, and Asia-Pacific as well. Other numbers: 231 registrations for this year's event. We have been using the consensus channel over time for all of this, and that's up to 347 members, and our mailing list of participants, people who participated in this or previous editions, is now up to 612 members. The YouTube videos aren't up yet; we just have the raw streams that captured the whole event, and those have been viewed 633 times as of now, over the last couple of days. But we will be publishing all of the edited individual talks on YouTube in the coming days, so feel free to follow us on YouTube or Twitter, and I'll also drop links in the chat. Awesome, and that rounds out our spotlights. Over to Birdie and Steph for the awesome Lily performance optimizations. Hi everyone, this is Birdie from Sentinel. Today we'll talk about how we made Lily around 15 times faster. Just for background, Lily is essential software for indexing the Filecoin blockchain; it provides data extraction and analysis capabilities. A few things Lily can do are, for example, extracting blocks, messages, receipts, and actor state changes such as miner sector events, market deals, and FEVM data. The main problem with Lily previously was the infra cost, and that's probably the top complaint from our users. Certain tasks require a lot of resources and time to process the data, and once that time exceeds 30 seconds, which is the duration of an epoch, we can no longer keep up with the chain. In that case, we need to process the tasks in parallel: we need to run multiple Lily nodes in order to keep up with the chain, and this can increase infra cost significantly.
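To make that scaling constraint concrete, here's a back-of-envelope sketch. The 30-second epoch duration comes from the talk; the specific task durations are illustrative assumptions:

```python
import math

EPOCH_SECONDS = 30  # a new Filecoin tipset arrives roughly every 30 seconds


def workers_needed(task_seconds: float) -> int:
    """Minimum number of parallel workers needed to keep up with the chain:
    if one tipset takes longer than an epoch to process, work must overlap."""
    return math.ceil(task_seconds / EPOCH_SECONDS)


# A task taking ~93 s per tipset needs 4 workers to avoid falling behind;
# a 6 s task fits comfortably on a single node.
print(workers_needed(93), workers_needed(6))
```

This is why cutting per-tipset processing time below the epoch duration, rather than adding nodes, is what eliminates the cluster entirely.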
For example, a five-node cluster of Lily nodes will cost more than $10,000 a month, and in the past, one vendor even quoted around $1.5 million for a two-year contract to operate Lily and the database. Here's a diagram of the Lily architecture. Lily is a specialized Lotus node, so it imports libraries from Lotus, IPLD, the actors, et cetera. The main components are the indexer, the processors, and the exporter. And here's a diagram of the distributed worker pattern. As I mentioned, when we need to run multiple nodes we have to set up this infrastructure, which contains a notifier, a message queue, and workers. This pattern scales horizontally, but it gets expensive really quickly. I was surprised when I saw this, that it requires a distributed system just to parse and extract Filecoin data. I was told that it's because the state comparison between epochs is a very expensive operation, although I wasn't convinced, so I decided to look into it further. We started investigating the performance bottleneck, beginning with the most time-consuming tasks, which involve the market deal data. You can see here that the task processing duration for market deal proposals is more than one and a half minutes. So we enabled the tracing feature in Lily, and here's a sample trace for processing one tipset. The total time spent processing one tipset is one minute and 33 seconds, and we spend most of that time in the deal proposal processor. Within the deal proposal processor, you can see we spend almost all of the time doing amt.Diff. What does that mean? An AMT is an IPLD data structure that stores an array of data, and amt.Diff compares the difference between two such arrays. Below that, we can see we also spend quite a lot of time on exporting data, namely the actor states: we spend 20 seconds just to persist the actor state data.
Drilling down further, we realized that we were making one database call per row, which is definitely not right, but that's an easy fix. To understand why the amt.Diff is so expensive, I reviewed the code and API of the AMT library and eventually discovered a bug in the library that was causing unnecessary tree traversal. To explain that, I drew a diagram to demonstrate how we compare two AMTs. An AMT is actually a tree structure: each node holds either links or actual values. Links point to other AMT nodes, and there's a CID associated with each link; the values are only stored in leaf nodes. Here we can think of V1, V2, and so on as deal proposals, so we have an array of deal proposals, and we are comparing the proposals between two epochs: state one from the previous epoch and state two from the current epoch. When we compare the AMTs, we look at the CID associated with each link first. A CID is a content ID, so it's calculated from the content of the subtree it points to. If the CIDs are the same, it means the subtrees have exactly the same values, so we don't have to compare any further. In this case, when we look at the first slot in the root node, we see they're both CID1, so we stop there. Then we look at the other slot and see CID2 and CID5, so we know the content of that subtree is different and we have to find the actual changed value. We do this recursively, and eventually we find that in one of the leaf nodes, the value V6 has changed to V7. OK, so this is how the AMT diff should work. But the bug I discovered is that even when the CIDs are the same, we still traverse down into all the children nodes. This is especially bad because it means we are loading every node from disk into memory, just to find out in the end that the values are the same.
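As a rough illustration of the traversal just described, here is a hedged Python sketch (not the real go-amt-ipld code or API) of diffing two hash-linked trees while pruning subtrees whose CIDs match. The pruning check marked below is exactly what the buggy version skipped, which forced it to load every child node from disk:

```python
import hashlib
import json


def cid(node):
    # Stand-in content ID: hash of the node's canonical encoding.
    return hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()


def diff(old, new, stats):
    """Compare two tree nodes; children are dicts, values live at leaves.
    Returns a list of (key, old_value, new_value) changes."""
    changes = []
    for key in sorted(set(old) | set(new)):
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            # The crucial pruning step: identical CIDs mean identical
            # subtrees, so there is nothing to load or compare below.
            # (The bug: descending here anyway, loading every node.)
            if cid(a) == cid(b):
                continue
            stats["loads"] += 1  # simulate loading a child node from disk
            changes += diff(a, b, stats)
        elif a != b:
            changes.append((key, a, b))
    return changes


# Two epochs of "deal proposals": only V6 -> V7 changed, mirroring the talk.
state1 = {"L1": {"0": "V1", "1": "V2"}, "L2": {"0": "V5", "1": "V6"}}
state2 = {"L1": {"0": "V1", "1": "V2"}, "L2": {"0": "V5", "1": "V7"}}
stats = {"loads": 0}
print(diff(state1, state2, stats))  # [('1', 'V6', 'V7')]
print(stats["loads"])               # 1: the unchanged L1 subtree is never loaded
```

With the pruning check removed, every subtree would be loaded regardless of whether anything changed, which is the high-IO behavior the trace exposed.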
So that explains the high CPU and IO usage whenever a huge state comparison is involved. A similar bug was also found in the HAMT library. The fixes for both libraries were simple. We also optimized the actor state persistence to use a batch insertion mechanism. The results: market deal proposal extraction time was reduced from 100 seconds to 2.5 seconds, actor state persistence time was reduced from 20 seconds to 2 seconds, and the time to process one epoch was reduced from 100 seconds to 6 seconds. This is especially meaningful because once the processing time for one epoch is shorter than 30 seconds, we no longer need a cluster setup; we can now run everything on one single instance. The monthly infra cost is reduced from over $10,000 for a cluster to the cost of a single instance, and I think running Lily nodes is now affordable for everyone who wants to run their own. Next, I'll hand over to Steph to talk about the impact. Yeah, thanks, Birdie. So all of these massive improvements have resulted in very positive feedback from key Lily users such as Angram. They're now able to use Lily and archival snapshots to backfill FEVM data, specifically contracts data. Not only that: since Lily is performant now, we can add more complex processing tasks directly, instead of doing this in a separate data processing pipeline, which is what I did previously to bootstrap it. So thank you to Birdie for baking that into Lily. Now our Lily node operators are able to do that themselves without having to create another service just for extracting contract data. Additionally, Starboard, another one of our key customers, has also been very satisfied with the performance, and they expect that the cost of running Lily nodes will be more affordable for them going forward, which hopefully reduces the dependency between them and us.
Moreover, the reduced batch processing time also means actual financial cost savings, not only for ourselves but also for node operators, and for the batch processing jobs we have in flight at the moment. Here are some resources from our team: Birdie has written a write-up on the performance optimization for people who are interested, and we also have our Notion page, our roadmap, and new RFC documentation we've started to better collaborate with other teams and other PLN companies. You can reach us on the fil-sentinel Slack channel as usual. Thank you. Awesome. Amazing, amazing work, Sentinel team, especially because this really helps increase resiliency across the entire ecosystem. The more people can afford to run these nodes and collect their own chain data, run their own archival historical nodes, and back their own RPC and analytics, the more resilient we are as a network, and the more accessible that data is for people to make informed decisions or back their own services. So phenomenal work, and I'm really excited to see the impact of that, not just on our own budget but on the many different folks who are now able to adopt it. So great work. We're officially at time, but if anyone does have any questions, feel free to drop them in chat or flag them. Otherwise, please do leave comments on the deck so that folks can follow up with you directly and get feedback on presentations for next time. Happy Thursday, everyone. Have a great one.