Happy April, everyone, and welcome to our monthly PL EngRes all hands meeting. We have a healthy working group update of our graded Q1 OKRs, our new Q2 OKRs, and then a number of updates from different teams inside the EngRes working group. We have a whole ton of awesome spotlights, and then a deep dive on Lassie, which is, as you might expect because Hannah's presenting it, freaking phenomenal and picture-tastic, and everyone should be excited. We've saved lots of time for it. So we'll jump right into it. As a reminder, the PL EngRes working group is one of many engineering and research working groups within the PL Network. We are focused on driving breakthroughs in computing technology to push humanity forward. We work within web3 because we think the internet is super freaking cool and a massive superpower for humanity. We want it to run on core web3 primitives that will make it more resilient, more efficient, more enabling of human agency, and a great foundation for some of the very exciting and sometimes terrifying breakthroughs that are coming in the next couple of decades, sometimes sooner than we expect, that we want to see built on a really good foundation. We work across tons of different projects. We especially spend a good chunk of time on IPFS, libp2p, and Filecoin, but also many other projects; you'll hear a number of different updates on drand and a couple of other projects as well that we work on within the PL EngRes working group. Our mission is to scale and unlock new breakthroughs for IPFS, Filecoin, libp2p, and related protocols. We do this in three main ways: driving breakthroughs in protocol utility and capability; scaling our network-native research and development across many different teams and sharing our work openly; and stewarding and growing OSS projects, networks, and communities. Here is our view of our public EngRes Notion, which has all of the information about the different teams within our working group. We've updated it recently to make sure it gives you a good entry point. And each of these groups is doing weekly sitreps, situation reports, so you can keep track of the awesome work they're doing. Our 2023 strategy, if you're tuning in for the first time, is first and foremost to make sure we are stewarding our critical systems and growing them over time, keeping them scaling effectively; that we're growing the wider network that is contributing to these awesome protocols that make up the PL stack; and that we're helping accelerate many different teams and many folks outside of our specific teams. On top of that, we have two main foci for 2023. One is around robust origin retrieval: scaling data onboarding, scaling CDN-speed retrievals, and driving adoption. And two is around compute over data. We get to celebrate the launch of FVM, which is a major step forward on that roadmap, but there are many exciting updates there as well around chain scalability and around bringing compute to data and programming it via things like FVM. We have a public starmap that highlights the different projects EngRes is working on, divided into our three main categories or themes. We have some exciting work that has been happening around actually landing FVM on mainnet, so that one's green. We have a number of projects that were landing at the end of Q1 and a couple that we still need to update or that might be slipping a little bit into Q2.
And so feel free to go check that out if you're curious what we're working on. Here are our graded Q1 OKRs. I'd say overall we did pretty darn good, better than I was expecting. We're middling, so a little bit below our target of 0.7, which is green and counts as success for hitting our OKR goals. We got about a 0.6 around keeping critical systems running and growing. We do have monitoring, but we're not hitting all of our monitoring targets in terms of maintaining really good site functionality SLAs. We're not alerting on these as well as we want to be, so there's some monitoring there that we've added, but we didn't achieve as much as we had desired in this area. We did do a lot of centralized spend cutting. I think we ended up at something around a 25% cut in costs for centralized Web2 infra within this quarter boundary, so from end of December to end of March, which is great, but not our 50% goal. If we had landed a chunk of our Rhea work, we would hope to have been there, but still great progress, and congrats to everyone who's continued to work on that. Within our hyperscaling and accelerating teams contributing to PL stack protocols, we saw a whole ton of amazing new folks get involved in FVM development. Really huge congrats to everyone who is now building in the space or was sharing the onboarding pathways to help many new builders get involved in building the ground floor of this ecosystem. Very exciting. We didn't do quite as well on our own critical open roles. Some of these are making continued strong progress, but we need to get them over the finish line. Our worst OKR, in terms of setting very ambitious goals for ourselves and not being able to land them fully, was around scaling our data onboarding and CDN-speed retrievals. We're still at mirrored traffic for our IPFS gateway adoption of the Saturn CDN, so this isn't yet live traffic that can 100% depend on Saturn; we still have additional work to do there. We ended up at, I think, 746, about 750 petabytes of total data on Filecoin by end of quarter, which is not quite 900, but still really awesome progress. I think we're at 200k successful retrievals per week from Filecoin SPs, which, again, is still super, super awesome progress, but we had high sights for this, which we'll maybe tune back a little bit in Q2 based on what we actually expect to see from our successful Saturn deployments. And finally, our fourth objective, around compute over data. There was some awesome, awesome progress here. We launched FVM, with over 500 unique contracts deployed on mainnet, and knocked that one out of the park. Amazing job, FVM team. In terms of Interplanetary Consensus, I believe technically subnets are live on SpaceNet, but the full IPC launch is coming imminently, so we're not quite there yet on everything, but we made some pretty good progress here. And for compute over data, I think we've proven that we can execute thousands of jobs per day, but we're not quite there yet from a total adoption perspective with active exemplar partners, which is pretty close. So overall, that is our Q1 OKR scoring. We'll have those up on our public Notion for folks to take a look at. And we're now focused on Q2, and so we've updated our KRs for this quarter.
There is some exciting work happening on the critical systems side around adding reader privacy via double hashing, which is going to be an exciting upgrade for content routing within the IPFS ecosystem. There are also a number of critical improvements happening inside of Filecoin around long-term resiliency and security that are going to be landing in some of the network upgrades, so we're making sure those land smoothly. Around hyperscaling talent, we have a lot of exciting events happening, starting with IPFS Thing, which is happening this week. That is an event that a number of folks in EngRes are helping run, but it is a wider community event that we participate in, and there are a number of other awesome events that many folks are helping put on or building out. And so we have goals for making sure those are well attended and get a lot of visibility, so that they're put to good use. We also have a new modular IPFS builder library called Boxo, which has been emerging, and I think the aim is that it actually gains real OSS adoption from new implementations this quarter. For our CDN-speed retrievals, this is definitely an area that's getting a ton of focus in Q2, with Saturn really achieving its production adoption goals. So this is about starting to get additional committed customers with real revenue and fast retrievals of data stored in Filecoin and IPFS, making sure that the retrieval success rate of the underlying components of Saturn is high, and that some of those underlying libraries are serving on the order of 200 million requests per week. Exciting goals; I'm looking forward to seeing it. So that's a high bar. And then really exciting work is going to be happening at the intersection of the IPFS gateway, DAG House, and Saturn, all coming together to harness these technologies to reduce that centralized infra and put our money where our mouth is in terms of the ecosystem adoption of these web3 tools. So very exciting work happening there. On the compute over data side in Q2, we have a lot of growth goals for FVM: around Filecoin actually deployed within smart contracts, the number of unique smart contracts deployed, the number of transactions happening actively within those smart contracts, and then scaling the number of wallets that are making use of all that those smart contracts have to offer. So a really big focus on growth. This is the FVM ascent phase. It is ascending into the heavens. Very exciting. It's also a really big quarter for IPC, Interplanetary Consensus, and the whole ConsensusLab team, deploying on Filecoin mainnet via Solidity smart contracts and also starting to onboard their first major subnet users, which is going to be really exciting. And then for Bacalhau as well, which is aiming to reach 1.0 in May and also add a lot of high-in-demand functionality around having a secure and observable environment for client jobs. So it's going to be a great, great quarter, and high hopes for all of the teams and what we're pushing on. We'll be updating these; they're actually now assigned to DRIs, who are going to be giving you updates on each of these areas within our future all hands. So look forward to those updates starting next time. Cool. And with that, I will hand off to the IPFS team. Cool. Yeah, I'll take that, Molly. Thank you.
Yeah, the IPFS stack, for those who don't know, is a suite of specifications and tools where data is addressed by its contents using an extensible, verifiable mechanism and moved in ways that are tolerant of arbitrary transport methods. This particular update will be lighter, given that many folks on the team have been heads down preparing for the IPFS Thing conference starting in a couple of days, but more will come here next month. A couple of things I'll call out on the KPIs. On the bottom left there, the find latency for new content: the good thing there is we're back down to the levels of near the end of 2022, when we had dialed down the Hydras, which we already knew was going to have a latency hit. So some of the operational events that were affecting the network earlier in the year have been mitigated, and we're glad to see that. There are more improvements and defense mechanisms that we're going to be working on as well, but we've got that back under control. And then also on the community GitHub activity, just a couple of notes here. When we talk about active users, we're trying to filter out drive-bys. So an issue contributor, for example, is someone who created or commented on issues at least three times in a given month, and similar for PR contributions. So it's a little bit worrying, the month-over-month decline in issue and PR contributors. We haven't drilled into that yet. I'm going to be looking at the end of April to see if that's still continuing, and we'll do further analysis to see where that drop-off is occurring and potentially why, but at least flagging it here that we are seeing that in the data. In terms of IPFS, some of the protocol and implementation highlights. Yeah, so specs.ipfs.tech, the website, is live. This is a nice rendering of all the markdown files that the community has been actively working on over the last year, but there have been some exciting contributions to it as well. There's an IPFS principles doc that starts to get into more of, like, what is IPFS? And there's an accompanying blog post on blog.ipfs.tech. Again, this is trying to summarize a lot of the momentum that was put in place last year around IPFS being bigger than one particular implementation. So please, please read that, and that'll continue to be a theme that we build on here at IPFS Thing. Molly mentioned Boxo. This is the collection of IPFS repositories that have been written in Go over the last number of years. We've consolidated that into a monorepo that Kubo is using, along with some other projects like the Bifrost Gateway used in Rhea, and we've bubbled that up to IPFS Cluster and Lotus as well. The aim is enabling users, we hypothesize, to get started in their IPFS journey using Go. And so this v0.8 release has a bunch of these repos all consolidated, plus a lot of docs and tooling to help people upgrade. So that's all been pushed out. Like I said, there is a new IPFS implementation, the Bifrost Gateway, which is being used for Rhea, and so another release there has been done, particularly around a much tighter metrics implementation and being able to trace requests so we can really diagnose where problems are. Kubo has gotten most of its benefits in this latest release from changes that have gone in via Boxo; you can read the release notes there.
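To make the "get started in Go" pitch concrete, here is a minimal sketch of wiring two of the consolidated Boxo components together: an in-memory blockstore with a blockservice on top. The import paths follow the post-consolidation monorepo layout, but treat the exact paths and signatures as assumptions based on that layout rather than a verified snippet:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ipfs/boxo/blockservice"
	"github.com/ipfs/boxo/blockstore"
	blocks "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
)

func main() {
	ctx := context.Background()

	// In-memory datastore, wrapped for thread safety.
	ds := dssync.MutexWrap(datastore.NewMapDatastore())

	// Blockstore on top of the datastore, blockservice on top of that.
	// Passing a nil exchange keeps this local-only (no network fetches).
	bs := blockstore.NewBlockstore(ds)
	svc := blockservice.New(bs, nil)

	// Store a block and read it back by CID.
	blk := blocks.NewBlock([]byte("hello from boxo"))
	if err := svc.AddBlock(ctx, blk); err != nil {
		panic(err)
	}
	got, err := svc.GetBlock(ctx, blk.Cid())
	if err != nil {
		panic(err)
	}
	fmt.Println(got.Cid(), string(got.RawData()))
}
```

If that compiles against the version you're on, swapping the nil exchange for a Bitswap instance is the step that would turn it from local-only into a networked node.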
And Helia is a new JavaScript-based implementation we've been working on. The v1 API was quietly released, and the team has been heads down working on a bunch of examples, porting over examples from the js-ipfs world to show how it can be used. More to come on that, especially during IPFS Thing, but please know that all of us involved are really interested to hear your use cases and any feedback that you have, and I will be sharing more about that again over the next couple of weeks. In terms of what's coming up: again, IPFS Thing, where a lot of presentations will be made, and undoubtedly follow-ups and just empowering of others in their IPFS journey. So that'll be a big focus. Molly mentioned the reader privacy upgrades in the OKRs. These are happening on two tracks. One, the DHT: there's already a rollout plan that's been drafted, but we'll be finalizing that and communicating it with the community. And then with the network indexers, they're making changes on the network infrastructure side, so we need to get the client libraries updated, particularly the routing v1 HTTP API in Boxo, so that it can also be doing reader privacy. So that'll be coming out, and then there are various operational items for Rhea and follow-ups from some of the DHT Q1 events that happened earlier in the year on how we can make the system more robust. Those will be happening before the next update as well. Thanks. Hey, it's Peter from IPDX, Interplanetary Developer Experience. We try to make work for IP stewards a bit nicer. So what was going on with us lately? We finally have a way to monitor GitHub Actions, which we're really excited about, and I'm not going to say anything more about it because we're doing a spotlight about it later. We are also moving forward with the Gateway Conformance testing initiative. We have already ported around 30% of the tests, we are going to continue doing that in the upcoming weeks, and our goal is to port all of them to the new framework. We are already using it in Kubo, Bifrost Gateway, and Boxo on every PR and push to the default branch. What else? We are speeding up CI in many places, most recently Boxo. We deployed our very own self-hosted GitHub Actions runners, which cut down the CI runtime, and in the coming weeks we also plan to do a similar thing for various repositories in the libp2p org, because we started to exhaust our hosted GitHub Actions runner limits there. So GitHub Actions are getting popular, which is another cool thing to see. And to sum it all up, we are going to IPFS Thing, and if you're also there, give us a shout. We are always excited to chat about developer experience. Hi everyone, an update from the libp2p team. So, exploring KPIs for libp2p: the left and right sides are pretty much two dimensions. How does libp2p behave in various networks? We're monitoring that; this is, in general, the Kademlia Exporter project. You can search for that and then explore the many metrics out there. This is, I think, always an interesting discovery. And then on the very right side, given that libp2p is an open source project, we're monitoring how people interact with libp2p online and how they contribute to it, and overall there's a wonderful upwards trend. A couple of general project updates. I would say the biggest one here: we have a new engineering manager, Dave. Dave, welcome to the team; it's wonderful to have you. Then a big project at libp2p in general is performance benchmarking. We have a perf protocol now to test performance between two nodes; there's a toy sketch of the idea below.
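For a flavor of what benchmarking "between two nodes" looks like, here is a toy sketch using two in-process go-libp2p hosts. To be clear, this is not the real libp2p perf protocol (that has its own spec and wire format); the protocol ID and the measurement logic here are illustrative only:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
)

// Illustrative protocol ID only; the real perf protocol is specified separately.
const toyPerfProto = "/toy-perf/1.0.0"

func main() {
	ctx := context.Background()

	server, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	client, err := libp2p.New()
	if err != nil {
		panic(err)
	}

	// Server side: drain everything the client sends.
	server.SetStreamHandler(toyPerfProto, func(s network.Stream) {
		defer s.Close()
		io.Copy(io.Discard, s)
	})

	// Client side: dial the server and open a stream.
	if err := client.Connect(ctx, peer.AddrInfo{ID: server.ID(), Addrs: server.Addrs()}); err != nil {
		panic(err)
	}
	s, err := client.NewStream(ctx, server.ID(), toyPerfProto)
	if err != nil {
		panic(err)
	}

	// Time a 32 MiB upload. A real benchmark would wait for an
	// acknowledgement from the receiver instead of just timing the write.
	payload := make([]byte, 32<<20)
	start := time.Now()
	if _, err := s.Write(payload); err != nil {
		panic(err)
	}
	s.CloseWrite()
	elapsed := time.Since(start)
	fmt.Printf("sent 32 MiB in %s (%.1f MiB/s)\n", elapsed, 32/elapsed.Seconds())
}
```

The actual work described here goes further: a standardized protocol, cross-implementation runs, and provisioning to test across real networks.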
We have provisioning scripts to bring up infrastructure on AWS and then test performance between two nodes across various different networks. We're giving a talk about this at IPFS Thing, so join the measurement track in case you're interested. On the community side, there are in general a lot of interactions around libp2p and, I would say, a growing community. A couple of highlights here: DFINITY exploring libp2p for their state sync protocol, and an organization that wants to build on the new WebRTC browser-to-browser project that we've built. Talking about browser-to-browser and WebRTC in general, browser connectivity is another big project at libp2p. And here we just landed connectivity between two browsers, or in general between two private nodes behind NATs and firewalls where at least one side is within the browser. We have an implementation in JS and a spec merged as well, and we're creating an example app around all of this. So at IPFS Thing, you'll actually be able to chat with each other, but not with a normal chat: a chat where you're directly connected to the other people in the browser. I think this is in the IPFS on the web track, so join there in case you're interested. Cool. And then there are a lot of implementation updates; I won't read them all out. A couple of new releases, so please update. A bunch of work around dial prioritization, which is very important for performance, and then, yeah, more users across the implementations. That's all from my end. Thanks. Hi everyone, I will give some updates about Filecoin. In terms of storage capacity, we can see the trend we've seen since around Q4 last year continue, with a decline in the raw-byte capacity of the network while the quality-adjusted power is still going up a lot, with a lot of Filecoin Plus data being onboarded to the network. From the chart at the bottom left, we can see that the cumulative daily active deals on the network are still trending up and to the right, which is really nice. And on the right, we can see that we actually reached a new all-time high of deals onboarded to the network in a single day, with 5.5 petabytes of data onboarded in 24 hours. We have some FEVM metrics. Now that we have the FEVM on mainnet, we have seen a good amount of Filecoin being held in EVM actors, ETH accounts, and placeholder accounts, with about 423,000 FIL in there currently. We have also seen numerous contracts deployed, with 662 unique contracts on the network from about 170 unique deployers. So yeah, highlights. We launched the FEVM on mainnet, which is quite big. And with that, we have seen increased usage on the network and a lot of new ecosystem services and contracts, like wrapped FIL, staking pools, and Tellor, a price and data oracle. And there's tons more, so check out @Filecoin on Twitter. Then the FIP for the sector duration multiplier was rejected for network version 19; there should be some comms on that as well from the Filecoin Foundation. On the storage provider side, we have seen Supranational open-sourcing new sealing software, which aims to reduce the cost of sealing and also increase its efficiency. So this is super helpful for both current and new storage providers. We also started work on a scalable and more efficient Lotus RPC node cluster for API service providers. And on the 20th of April, the IPC SpaceNet launch is coming, and you will hear a lot more about that later today.
Coming up very soon, we will have a new network upgrade with FIPs focused on security, stability, and performance. The network version 19 upgrade is codenamed Lightning, as it will, among other things, reduce cron usage, which will give us faster block validation times, which is really important for chain quality. Something a bit different about this network upgrade is that we will have a two-stage network upgrade, where the second upgrade, network version 20, is a ghost upgrade that does not have any migrations or code differences. The reason for this two-upgrade approach is to allow the new Window PoSt proof types to be accepted in the first upgrade, while the second upgrade, network version 20, marks the spot where the older proof types will no longer be accepted. This allows for a smooth rollover period during which both proof types are accepted. Timeline-wise, we expect the calibration network upgrade to happen next week, with the ghost upgrade happening on the 24th of April on calibration net. For mainnet, we expect the network version 19 upgrade to happen on the 10th of May, with the network version 20 upgrade happening on the 17th of May. And yeah, that is most of the Filecoin highlights. Now I think it's on to the team updates. Awesome, thank you. Hello everyone, an update from the drand team. For those of you who don't know, drand is a threshold network for generating publicly verifiable, unbiasable random numbers. On the KPI front, our most important KPI is uptime. We've got 100% uptime, which is great, because otherwise Filecoin goes down and everyone's day is ruined. On the roadmap front, we've ticked off a bunch of items already, and some more are pretty close to following. We've released unchained randomness onto mainnet, which enables timelock encryption. We presented that timelock encryption at Real World Crypto two weeks ago; we released the paper on it. We even held a randomness summit there, but that's in the spotlights, so we'll come to that later. Also, we focused a lot on community engagement in the last quarter. We onboarded three new League of Entropy partners. We started bi-weekly office hours, which I invite any and all of you to come along to and ask us lots of questions about drand. We also released seven blog posts, including a few cool tutorials on how to use drand, how to use drand on FVM, how to use timelock encryption, and much, much more. And as a result, community members have built lots of cool things on top of drand. Everyone seems to be a Rustacean these days: we've now got multiple Rust clients. Thibault from Cloudflare developed a cool CLI for doing drand-related stuff and timelock encryption, which the team are using and loving. Also, at an Ethereum hackathon, a team called RNGesus built a drand integration for the EVM, so hopefully we'll see that used in the wild soon. And the Storswift team, who are also a member of the League of Entropy, have finished up a full Rust drand implementation, which is wonderful. Coming up in the next quarter, it's all systems go on FVM. Earlier than expected: we sort of thought, with the team being so busy, that it would be a quarter further along, but here we are. Hopefully it's all systems go. Also, we have an ongoing project with Social Income, a Swiss not-for-profit who provide universal basic income for people in Sierra Leone. They want to use drand to select the participants who will receive it, so that warlords can't siphon all the money to their families.
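As a companion to those "how to use drand" tutorials, here is a minimal sketch of pulling the latest beacon round from drand's public HTTP API. The endpoint is the documented public relay; note that a real client should also verify the round's BLS signature against the group's public key (the official client libraries handle that), which this sketch skips:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Shape of a beacon round as served by the drand HTTP API.
type Round struct {
	Round             uint64 `json:"round"`
	Randomness        string `json:"randomness"`
	Signature         string `json:"signature"`
	PreviousSignature string `json:"previous_signature"`
}

func main() {
	// Public mainnet relay; other League of Entropy relays exist too.
	resp, err := http.Get("https://api.drand.sh/public/latest")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var r Round
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		panic(err)
	}
	fmt.Printf("round %d: %s\n", r.Round, r.Randomness)
}
```

The same shape works against /public/{round} if you need a specific historical round.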
Also, hopefully we get the FRC for timelock encryption accepted for FVM, and then we can start work there. Thank you very much. Awesome stuff. Over to George for ConsensusLab. All right, thank you, Molly. Yeah, so, well, this is our update. The font is a little small; I won't cover everything, but you can read it still. We don't have KPIs on the slide, but we do have our new draft OKRs, which cover the stewardship, the network growth, and the compute aspects, right? And we're already making progress on them: one is complete, another is halfway there. But basically, we intend to continue improving the Filecoin protocol and making it more secure. We're doing some events, and obviously the big thing is really IPC on mainnet. On that note, we did have a slight delay on our M1 milestone, which is launching exactly one week from today, but we're also doing a more complete launch, so that's good. And the rest of the plan hasn't really changed: we're still aiming for June for the mainnet deployments, even though we'll complete our detailed plan after M1 closes next week. In terms of highlights, the more important part: IPC is actually live on SpaceNet already, since March, although without the cross-net messages, and for that reason we've kept things fairly quiet. But again, launch is next week, so we're now going to do a bigger push. I'm already getting into the opportunities. But still, from that first launch, we have 26 subnets already deployed on SpaceNet. Normally we tell people to use a local rootnet, but we also have instructions for using SpaceNet, and people have been doing it. Obviously, these are mostly ephemeral tests; there are no actual applications, which is good, because we're going to reset it. But people are actually trying things out. Consistent broadcast was merged into Lotus master; there's a spotlight on this, so I won't go into detail. And ConsensusDay 23 is taking place on the 5th of June. We received 35 paper submissions, so we exceeded our goal, and this time we also have invited talks from Zarko at Cosmos and Aggelos at Cardano. So we have a very good lineup for the event already, and we'll have a program on the 21st of April once the peer reviews are in. Finally, opportunities. The big one: IPC M1 is launching April 20. We're finalizing the implementation this week, and we'll have a few more days of testing and bug fixing next week. We're doing a big announcement and publicity push. James from the Lotus TSE team, thank you so much for your help on this. And Alfonso is going to talk about this a little bit more in the spotlights. One final note: we do have a fresh proposal from Guy on making the EIP-1559 base fee mechanism more robust against malicious manipulation. That's FIP discussion 686; you can go there and opine. And thank you so much. Awesome. Since we don't have Mike on the call, I'll just read it real fast. Probably the highlight here is that there is going to be a CryptoEcon Day in Austin, I believe something like the end of April, coming up soon. So definitely come attend; there's already a whole ton of folks who are planning to participate. There are also three economic steps planned to bolster the Filecoin economy in Q2, so lots of plans there to keep improving. I know the CryptoEcon team was very involved in the sector duration multiplier.
There was some really great analysis and back-and-forth between BlockScience and CryptoEconLab, and a broader AMA. If you're curious about that, please definitely do watch the recording or dive into any of the analyses. And then finally, CryptoEconLab is also working with a number of other groups across the PL network, consulting with them on design, evaluation, and feedback for various cryptoeconomic proposals that might be building on top of Filecoin or building their own networks within the PL network space. So lots of new prospective clients for CryptoEconLab, which is great. Over to Steph for Sentinel. Hi. So we have daily archival snapshots; since genesis, it's 100% complete. It's one of Hector's last gifts before he went on sabbatical. Our TimescaleDB has low latency and, per our monitoring, no gaps, and we've actually reduced average data latency to less than 30 seconds. The BigQuery historical chain dataset is 11 weeks behind due to capacity issues; we wanted to focus on supporting the FVM launch. For our roadmap, we were able to successfully support the network v18 launch and also onboarded multiple PLN partners to our datasets in BigQuery, such as Starboard, CryptoEconLab, Elementus, CanLabs, and so on. And also Network Goods: for example, yesterday they were able to successfully use archival snapshots for their own work. We are currently preparing for the nv19 upgrade. Some highlights: we have analysis dashboards for Hyperspace and mainnet, and the FVM team is going to take over those metrics going forward, while we continue supporting them. CryptoEconLab used our datasets to complete the storage providers' financial reporting project from genesis until end of year 2022, and our next focus is to deliver these datasets to other companies that need them as well, such as Elementus for Fil+. CryptoEconLab also used BigQuery to analyze gas consumption pre- and post-FEVM launch. And we are now also extracting the full history of FEVM actor balances and counts. Some opportunities: a lot more people are asking for more recent BigQuery data, and that's something we would really like to focus on in the next few weeks, along with some performance improvements in Lily to reduce infra cost for us and the community. Awesome. Thank you, Steph. On to our spotlights. First up: I see Isaac Gusson on here. Happy for anyone else to also jump in, but I just think we should all take a second, unmute, and do a big round of applause for everyone who was involved in the FVM mainnet launch. This technically happened after our last EngRes all hands, so please unmute, and let's all celebrate this awesome milestone. Yeah. Good job. Thank you all so much. This is big and super, super exciting: seeing all of the people who are now building on top of it, lots of shiny logos that are now participating, lots of Filecoin getting deployed, lots of accounts getting created, lots of new applications that are now possible. It's big. Thank you all for being a part of it. Over to Patrick for the randomness summit. Yes, we ran the second edition of our randomness summit in Tokyo, the first being online-only in 2020. Our goal was to interact with people building in the same space as us and get to know some other people working on cool random things. We had 45 attendees from a variety of institutions, notably lots of different ministries of defense, but I can confirm they weren't able to bribe us to break drand, so it's fine.
Some of the cool follow-ups that might be interesting: NIST are actually standardizing randomness beacons and threshold cryptography, so we're hoping to make drand, if not the reference implementation, then the first implementation compliant with the NIST standard, which would be very cool. We're planning to host another randomness summit sometime in the future, this time potentially alongside another slightly more hackery event, so we can encourage more people to actually build on top of drand. All the talks can be found on YouTube, and if you're short on time, which everyone seems to be, Bernardo David from the University of Copenhagen gave a wonderful talk summarizing everything you can imagine about randomness: verifiable secret sharing, VRFs, VDFs, quantum randomness, randomness from the speed of light via satellites; it's all there. So check it out; there's a blog post coming soon. And also, I forgot in the drand update to say the most important thing: we've got a new project lead, Eric, who I've seen is on the call. So welcome, Eric; he's going to do lots of bizdev things. We've already taken drand to space, so hopefully Eric will be able to take drand to the moon and beyond. Thank you very much. Next, over to our IPDX spotlight. Hi, I'm excited to introduce IPDX's solution for GitHub Actions monitoring. As you may know, GitHub currently lacks a comprehensive CI monitoring product, which prompted us to create our own. Our solution is quite elegant: we monitor webhook events, store the raw data in a PostgreSQL database, and use Grafana to generate insightful visualizations. Let's dive into a real-life example. Our main dashboard provides in-depth insights into GitHub Actions workflows and jobs. By default, it displays organization-level information and groups results by repository. Users can easily select specific time series or even reconfigure the entire dashboard to focus on a specific repository. With the flexibility to choose time ranges, granularity, and grouping precision, our tool empowers users to gain unprecedented insights. Thanks to our monitoring solution, you can see that a week ago, Lidel brilliantly optimized the Docker publish workflow in Kubo, reducing its runtime by an impressive 90% by eliminating unnecessary QEMU virtualization. Our GitHub Actions monitoring solution offers comprehensive insights and customization options; it gives our two-person team true superpowers. If you're interested in checking it out, let us know. Awesome, nice update. Next up: Testudo. Hello, hi. This is Matteo from CryptoNet, and what I'm going to put in the spotlight in my minute is the fact that Testudo, our newest SNARK, went open source. There's a repo there and a blog post, which I invite you to check out. It's tiny in the bottom left over here, but I'll also share it in chat. Our design is there, and you can find lots more detail than what I'm going to tell you. If you don't know what Testudo is, in one sentence: it's a SNARK with a very fast prover, very short proofs, and lots of nice features. But the main nice feature, as mentioned, is that it's universal and short. And what does that mean for Filecoin? It means you get easy and fast upgradability whenever you need it. You can do one single universal and easy setup, and it can give you fast proofs. And while proving acceleration techniques are getting better and better, at some point the SNARK design itself is going to matter; we're going to hit a wall with acceleration alone.
Testudo will be ready to plug in when we need that. There are lots of other things I'd like to brag about, but instead I'll say: what's happening next? This was the crowning of eight months of work. CryptoNet is not actively working on this anymore, but we are looking for external collaborators to improve on our implementation. And there's a paper coming out very soon. Thanks. Awesome. Alfonso, IPC. Thank you. This is going to be really quick; I just want to call on everyone to test the subnets. In our repo, the IPC agent, you already have a getting started guide on how to run your own subnet locally or on SpaceNet. It's true that until the 20th we won't release the cross-net message support, so for now you will be able to run different subnets, but not communicate between them. But it would be great if we can start getting some folks to test it and, please, break it. We want to see what is wrong before we release it to the public. If there are any doubts, there's a getting started guide in ConsensusLab's IPC agent repo, but in any case we can probably share the links here. Drop us a message in the IPC dev channel in Slack, and we can guide you through the first steps. The UX is a bit rough, so any feedback is more than welcome; we want to improve that. The first thing we want to improve is UX. The tech is there, but the UX is a bit rough. Thank you. Over to Guy. Yeah, so it turns out Guy is sick, so I'm going to have to do it for him. Current situation: Filecoin is actually vulnerable to a 20% attack, and you intuitively wouldn't expect it to be, because you only have one winning ticket out of five in expectation. But as it turns out, there are circumstances in which, with a single winning ticket, you can generate different blocks. If you generate and send a different block to each validator, you can actually confuse people into preventing convergence and not building the chain, while you keep building your private chain in parallel. So that is the issue we're addressing. The solution is consistent broadcast, which just means that you cannot broadcast or send equivocated blocks to the network. That raises the attack bar to above 40%, and it brings you back to the actually expected situation above. So what does this cost us? The good news is that it costs us barely anything. The only thing we need to do is keep a cache of the blocks we receive in a buffer and wait two to three extra seconds before actually considering a block valid and starting to build on it. That also means that no hard fork is required: this is an entirely client-side change that people can make. And that brings me to the actual announcement, which is the fact that this is already merged into Lotus master and is on its way to production. There is a slight bug that we still need to figure out, but it's done. And so Filecoin is safer. Thank you.
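To make that mechanism concrete, here is a toy sketch of the core bookkeeping. This is not the Lotus implementation, just the idea: remember the first block seen per (epoch, miner) slot, hold delivery for a few seconds, and refuse to build on any slot where a conflicting block showed up in the meantime. All names here are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// A slot is one block-production opportunity: a miner at an epoch.
type slot struct {
	Epoch int64
	Miner string
}

// Detector caches the first block seen per slot and flags equivocations.
type Detector struct {
	mu    sync.Mutex
	first map[slot]string // first block CID seen for each slot
	equiv map[slot]bool   // slots where a conflicting block arrived
}

func NewDetector() *Detector {
	return &Detector{first: make(map[slot]string), equiv: make(map[slot]bool)}
}

// Receive records an incoming block. A second, different block for the
// same slot is an equivocation and taints the slot.
func (d *Detector) Receive(epoch int64, miner, blockCID string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	k := slot{epoch, miner}
	if prev, ok := d.first[k]; ok {
		if prev != blockCID {
			d.equiv[k] = true // same winning ticket, two different blocks
		}
		return
	}
	d.first[k] = blockCID
}

// Deliver is called after the hold-off window; it reports whether the
// block may be considered valid and built upon.
func (d *Detector) Deliver(epoch int64, miner, blockCID string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	k := slot{epoch, miner}
	return !d.equiv[k] && d.first[k] == blockCID
}

func main() {
	d := NewDetector()
	d.Receive(100, "f01234", "bafyBlockA")
	d.Receive(100, "f01234", "bafyBlockB") // equivocation with the same ticket

	time.Sleep(2 * time.Second) // the "two to three extra seconds" buffer

	// Neither copy is delivered: the slot is tainted, so the attack fails.
	fmt.Println(d.Deliver(100, "f01234", "bafyBlockA")) // false
}
```

In the real system the hold-off and cache expiry would be tied to epoch timing; the sketch just shows why the defense costs only a small cache and a short wait.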
Awesome. Over to Ian for ProbeLab. Hi, it's Ian at ProbeLab. I'm going to keep this quick so we can get to Hannah. Can you hear me okay? ProbeLab's mission is to measure the performance of web3 protocols, evaluate them, and propose improvements to their design. To that end, we run a lot of systems that collect data continuously. We crawl the IPFS and Filecoin DHTs, we monitor website performance, and we analyze DHT access patterns and performance. We have quite a lot of data. We want to surface it in a better way, but we also want to surface it in a way that gives context to that data. So what we have put together has just started; it's literally only about two or three weeks old. It's probelab.io: a place for us to publish the data we're working on and collecting, and to give some context around it in terms of the methodology used to collect it, what it means, how to interpret it, and what the limitations are in terms of how it should be viewed. We're putting in data from other systems, some of the stuff we've got in Grafana and Prometheus. And we want your data: if you've got data that you think should be analyzed alongside the stuff we're doing at ProbeLab, then come talk to us. The idea is to bring this all together into kind of a hub, and around that we're going to have all the different things we've already got: we're doing weekly reports, we've got the KPIs, and there's stats.ipfs.network, which is an overarching single point of access for getting to see this data. So come along, have a look. We try to keep it simple, but it's going to expand over time. It's all very much work in progress. So thanks, everyone. Great. And on to our deep dive with Hannah. So I would love to talk to you all about this new tool that you may have seen floating around our various networks, called Lassie. Essentially, what Lassie is, is a retrieval tool. That is the bottom line: it is a thing to retrieve stuff from our networks. It's a new IPFS client that can fetch content from pretty much anywhere. It's written in Go; in that respect, it's a new Go IPFS implementation like Kubo, but we've started from the ground up, and we have a single goal, which is downloading your data from IPFS and Filecoin. It should just work. We basically say: if you want your data, just tell Lassie to fetch it. That's kind of our motto. The origin of Lassie is that as our networks have grown, we've started to see a proliferation of transfer protocols. BitSwap is kind of our bread and butter, we see a lot of GraphSync on Filecoin, and we're starting to see a lot of folks interested in HTTP. This is all very interesting from a programmer perspective, but from a regular person perspective, no one really thinks, what transfer protocol would I like to use to get my data? That's not what people care about; what they want is to get their data. And so Lassie is a multi-protocol retrieval client: it can figure out what data is available over which protocols and figure out the best way to get it. We already support BitSwap and GraphSync, and we're working on a trustless HTTP transfer protocol. And then in the future, maybe we will also add some cool stuff, like this new thing being worked on, what is it called, Bow? Or, if you've heard of them, Carpool or CarSync. And there's another aspect of Lassie, which is that in addition to not worrying about which protocol is used for transfer, we don't want you to worry about how you find your data. Lassie can find data through both the IPFS DHT and IPNI, the network indexer, and we can find content on the Filecoin network. You don't have to know where your data is; we'll just track it down. We can even track down, over BitSwap, some providers who don't put their content in the DHT.
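As an aside on the indexer: IPNI also exposes a plain HTTP lookup, so you can see for yourself who claims to have a CID. Here is a minimal sketch against cid.contact, the public IPNI instance; the path shape follows the IPNI find API, but double-check it against the current docs:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: findprovs <cid>")
		os.Exit(1)
	}
	c := os.Args[1]

	// cid.contact is the public IPNI instance; /cid/<cid> returns a JSON
	// document listing providers that have advertised this content.
	resp, err := http.Get("https://cid.contact/cid/" + c)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // raw provider records; parse as needed
}
```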
And as new content networks appear, we intend to add them to Lassie. So, goodness, if you haven't heard of the network indexer: I think it's the best way to advertise your content; it's super fast and super awesome. So how do you use Lassie? We've designed Lassie from the beginning with basically three main ways to use it. The first is as a command-line executable tool. You can download Lassie already compiled and ready to go, and just run it immediately to fetch data. We've tried to design it like a Unix command, so you can pipe it and compose it with other things, and I'll show you how that works in a moment. The second way you can use Lassie: we have built essentially a mini HTTP server that exposes a trustless HTTP gateway that serves CAR files; there's a sketch of talking to it below. It is much like the spec for the IPFS trustless gateway, but we do some other cool stuff. You can fetch data at a path beneath a CID, and there are a bunch of other things that go a little bit beyond what the current gateway spec is for trustless data, though we are actually working on an IPIP proposal to extend it for trustless gateways in other IPFS implementations. And finally, we've designed Lassie from the start to work easily as a library, so you can easily incorporate it into your Go application to seamlessly add retrieval from IPFS and Filecoin into your Go program. We see this as a sort of long-term Lassie superpower, and we'd love to partner with other organizations and teams who want to integrate Lassie into their systems. Cool, so there are a couple of things that Lassie will not do. We've built some design constraints into it, and we put these in for a specific reason, which is that we want to stay focused on one goal: retrieving data. As a result, Lassie does not store data permanently on your machine. We return a CAR file to you, and we think it's up to you to figure out what you want to do with it. We also will not provide records to the DHT, because, again, we're not holding data permanently: we are not a way to advertise to the IPFS DHT or other content indexing systems. Essentially, Lassie is not designed to be a full-fledged IPFS server node. We are largely stateless. When you run us as an HTTP server, we hold onto a little bit of temporary state just to make some optimizations, but one of our goals from the start is that there shouldn't be a config file or any other file that lives permanently on your system as a result of Lassie. We are essentially a stateless tool: the minute you stop the program running, nothing is left behind. And we think this somewhat artificial design constraint is necessary as a way to keep the program simple, focused, and easy to use for other folks in their programs.
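Here is the sketch promised above for the mini HTTP server mode: a tiny Go client that asks a locally running Lassie daemon for a CID and saves the CAR it gets back. The port is a placeholder (check the daemon's help output for its actual default); the Accept header is the media type the trustless gateway spec uses:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: fetchcar <cid>")
		os.Exit(1)
	}
	c := os.Args[1]

	// Assumes a local Lassie daemon; 8080 is a placeholder port.
	req, err := http.NewRequest("GET", "http://127.0.0.1:8080/ipfs/"+c, nil)
	if err != nil {
		panic(err)
	}
	// Trustless gateway responses come back as a CAR file.
	req.Header.Set("Accept", "application/vnd.ipld.car")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Write the CAR to disk; unpacking and verification can be done with go-car.
	out, err := os.Create(c + ".car")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	n, err := io.Copy(out, resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes to %s.car\n", n, c)
}
```

Because the response is a CAR rather than trusted bytes, the caller can verify every block against the requested CID as it unpacks, which is the whole point of the trustless design.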
Cool, so let's see what Lassie can do. As I said, one of the ways you can use Lassie is as a simple command-line tool. You can tell it to fetch a given CID, and it will return that to you as a data stream that you can pipe to another program. In this demo, we actually fetch a CID and pipe the output directly to the go-car command-line tool, then we use go-car to convert it back to the flat original data file, and then we pipe that into an FFmpeg video player. And this is what you get. Again, we'll see if this actually works. Cool, yes, so here we are. We're typing it in, we're putting in a provider, we're fetching a CID, then we're passing it to go-car, we're passing it to the FFmpeg player, and you magically get a video playing on your screen. It's kind of cool that you can just go from a CID to a video playing. All right, cool, I'm going to stop that because it's super loud. So another thing: while Lassie's pretty new and definitely still in development, Lassie's not really a prototype project. We are already in production: we are the primary retrieval tool for the Saturn network to handle cache misses. So when the Saturn network gets a request for data it doesn't have, it just turns to Lassie and says, find this data. And through the Rhea project, which is the decentralized gateway project, that translates to us downloading about 140 million CIDs a week through Lassie. So we've got a heavy-volume use case, and we're working on optimizing; we've got a whole team working on it. Lassie is also increasingly one of the easiest ways to download data from Filecoin, so we're starting to recommend it to enterprise clients who want to put data on Filecoin and get it back. And, as I said, we have a team working on Lassie: we're continuing to optimize how we select protocols, we're continuing to optimize each protocol implementation, and our goal is to make it faster and better over time. So if you want to integrate Lassie into your project, I'd say go for it. We do have some work to do on documentation, but we're working on that too. Yeah, so if you want to hear more about Lassie and you happen to be at IPFS Thing, you can come by the data transfer track, where we're going to be doing a deep dive into the architecture. You can also just hit me up on Slack; that's probably the easiest way. I'm also happy to meet with folks one-on-one. If you start to use Lassie, give us feedback, file a GitHub issue (I didn't actually include the repo in here, but talk to the Bedrock team, that's the team developing Lassie), or complain to us on Twitter, except we do not have a Twitter account, so you'll have to figure that one out yourself. Yeah, that's the tool; that's what we've got. Yay, thanks. And we're around for questions, if we have any more minutes. No, it's 8:03, never mind. If you have any questions for Hannah, please fly to Belgium and ask her in person at her many Lassie talks at IPFS Thing, or stick them in chat, or go stick them in the GitHub repo that was dropped in chat briefly. Really awesome, and great to see the progress from tons of different groups. Hopefully we'll have a lot more to celebrate and share in this forum a month from now, post IPFS Thing and the many awesome things that are getting shipped and discussed with the whole community there. So happy Thursday for most, Friday for some (yay, team Friday), and happy April, everyone. Excited to be in the throes of Q2. See many of you shortly.