All right, welcome everyone to our March PL EngRes all hands. Excited to talk through a couple of different things: as normal, we have our working group update with some of the top-level KPIs, strategy team updates as well, a ton of spotlights that barely fit on the slide, and then an awesome deep dive from the DAG House team on w3up and what's new in their world, integrating things like UCANs into NFT.Storage and web3.storage, and hopefully setting a path that others can follow as well. So some exciting learnings there. As a reminder, if you are new and watching this for the first time, the PL EngRes working group is one of many engineering and research teams helping cross the chasm and drive breakthroughs in computing to push humanity forward within the PLN network. We are all unified and aligned around the internet being one of humanity's greatest superpowers, and we want to make sure it's built on a robust and resilient foundation that can scale to all sorts of exciting new breakthroughs and also enable human agency as we make some really exciting breakthroughs around things like the metaverse, AGI, brain-machine interfaces, and many other things. A lot of our work goes into protocols like IPFS, libp2p, and Filecoin, but we also help spawn new additional protocols, and we contribute heavily to a lot of the protocols being built across the PLN network. We participate really heavily in these open source communities as stewards, contributors, and folks creating new breakthroughs on top of these protocols as well. So our mission is to scale and unlock new breakthroughs for protocols like IPFS, Filecoin, and libp2p. We do this by driving breakthroughs in protocol utility and capability, scaling network-native research and development throughout the PLN network, and stewarding and growing OSS projects, networks, and communities very openly.
Here are some of the working groups inside of EngRes, some of the teams that help push this forward. And here's our strategy for this year; it stays the same. We have a core foundation of critical systems stewardship and growth, growing the teams and networks across the PLN network that are contributing to the stack of protocols, and then two core focus areas for the year. The first is around robust storage and retrieval: making sure that we have large-scale data onboarding, super fast and resilient retrievals, and adoption of that stack for decentralized storage and retrieval. And then there's a lot of work happening around scalable compute over Filecoin state and data: lots of exciting work around programmability of storage through things like the FVM and compute over data in Filecoin, and also scaling our chain bandwidth and capacity to support all of the exciting permissionless activity happening at the compute layer. We are using starmaps as our tool for the EngRes roadmap. This looks a little different than last time, because I grouped things into themes, which makes it way easier to read. Feedback on this is welcome; I literally did this at like three o'clock in the morning. But this gives us a view into the key themes within EngRes and how we're pushing our work forward. Each of these milestones is actually managed by the team that is pushing it forward, which keeps things very organized, and you can see we have a ton of things landing at the end of March. So it is going to be a very exciting next EngRes all hands; please tune back in when we celebrate, fingers crossed, the amazing set of things that will have shipped. You can see some of the really exciting things happening: we have the FVM being deployed to mainnet next Tuesday, so in less than a week, and we have IPC, Interplanetary Consensus, being deployed on Spacenet, which is their testnet, by the end of the month.
We have a lot of work with SPs offering IPFS Bitswap retrievals to Kubo nodes and others who might be requesting that data from the IPFS network. And we have a really exciting integration between Saturn and the IPFS gateways, which a lot of folks are pushing really hard on, to bring the first (though I'm sure there will be many more in the future) big, exciting CDN customer to Saturn and give the IPFS gateways a run for their money. Sorry, I didn't get through all the other ones, but to highlight our progress on our overall OKRs: we have set pretty aggressive goals for Q1. So far we're maybe 50% on track with our critical systems goals. You'll hear more about our MVP monitoring functionality in a second, but so far, from an SLA perspective, we're maintaining good uptime and releasability. And since December, if we look at the end of February, we've made a 25% decrease in our Web 2 infrastructure costs, which is 50% of the way to our goal of a 50% decrease. That's an additional 25% on top of the reduction we had already made by the end of the year, so we're still making pretty significant progress there, and we still have a lot of headroom to go. We have already reached over 2,000 builders if you count all of the people participating in the FVM early builders program, folks deploying smart contracts, and hackathon participants. And we've had a ton of events thanks to the Orbit events program, so we are super on track with that engagement component for the FVM. We're also filling some of our critical open roles: we filled our EngRes strategy and planning coordination role internally, which is phenomenal, and I think we're on track to fill maybe another one by end of quarter, which would put us about 50% toward that goal. Then we have large-scale storage onboarding and retrievals.
I think about 15% of mirrored IPFS gateway traffic is now being served through Saturn. So in our remaining month we have a chunk of work to do to make progress toward this goal, but this team has been making phenomenal progress, and I'm optimistic we'll get a good percentage of the way toward our high-level target. We're at 637 PiB out of our goal of 900 PiB of data onboarded onto Filecoin, and right now I think we're onboarding about four PiB a day, which is pretty awesome. If we can keep that up we get closer, but still not all the way to 900 PiB total, so we'll probably end up partially achieving that one. And we are not quite yet at 2 million successful retrievals from Filecoin SPs; we have some work to do there as well. And finally, last but not least, our work around programmability and compute. We are on track for our FVM launch, not to steal anyone else's thunder, but it's happening, it's very exciting, and I think we are going to blow this goal of 500 unique contracts out of the water if even a small fraction of our Hyperspace builders come and deploy their contracts on the FVM. That goal will be small fries, so fingers crossed. IPC is on track for their Spacenet upgrade and the launch of subnets there, though I think we still have some user development and adoption to do, so if you want to run an L2 on Filecoin, please come talk to us; we'd love to hear about what you want to build. And then we have a lofty goal of hitting 1,000 jobs per day on compute over data, and I think we have a little bit of work to build up our adoption to meet that, but there are some really exciting launches coming for CoD and Bacalhau, so keep an eye out. And with that, I'll pass it off to the IPFS folks.

I'm going to share a few words on the ultimate content addressable network, otherwise known as IPFS, and some metrics. Next slide, please.
On the top left is the number of unique IPFS nodes seen through the bootstrap nodes of the IPFS DHT. On the top right, this is broken down into DHT servers and DHT clients. There's an uptick in the number of DHT clients and a downward trend in the number of DHT servers, which, however, doesn't necessarily mean that the network size is going down. The dashed black line there shows the number of unique DHT server IP addresses, which suggests the network is actually becoming more stable, because peers are rotating their PeerIDs less frequently. You can see more details about all of that at the link up there on graph details, so please go check it out if you want to know more. On the bottom left, we have the latency to find content on the IPFS DHT using the CLI client. So this is not exactly representative of IPFS as a whole, but rather is a Kubo-related measurement, which we might be replacing very soon to focus on the network side of things. You're also seeing a bump around last month, which was due to a major incident that, thanks to hard work from teams internal and external, we have managed to patch, and now we see the latency going down very quickly. So yeah, that's it for this slide; let's go to the next one. Right, so one of the goals, as was mentioned, has been to monitor website latency as one of the OKRs. On the top you see a matrix where on the x axis you see the website that we're looking at, and on the y axis the region we made the request from. Inside each square is the latency, the time to first byte at the P50, and on the bottom of each square, the small number is the percentage change from last week.
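As a rough illustration of how each cell in that matrix is computed, here is a minimal sketch; the sample data and function names are hypothetical, not the team's actual pipeline:

```python
from statistics import median

def p50(samples_ms):
    """P50 (median) time-to-first-byte over a period's samples, in ms."""
    return median(samples_ms)

def pct_change(current_p50, previous_p50):
    """Week-over-week percentage change shown under each cell."""
    return (current_p50 - previous_p50) / previous_p50 * 100.0

# Hypothetical TTFB samples for one (website, region) cell.
this_week = [120, 150, 130, 110, 500]  # ms
last_week_p50 = 100.0

cell = p50(this_week)                    # 130
delta = pct_change(cell, last_week_p50)  # +30.0%
```

The median is used rather than the mean so that a single slow outlier (like the 500 ms sample above) doesn't dominate the cell.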
So this is not representative of a longer time period; it's work in progress. We're going to have a time series of things as a trend line to see where it goes, but we can already spot problems or behavior that needs attention. On the bottom figure you can see the comparison to HTTP, also work in progress. You will see some blue bars spiking up; this is not due to network or IPFS problems, it's due to some of our own infrastructure. But it's interesting to see the rest of them and the ratio between what you get from Kubo compared to HTTP. Lots of room to improve there, but we can definitely spot problems when something comes up with our websites. You can see more about the state of the IPFS network, and feedback and discussion happens, at the link down on the bottom left. Thanks.

Hey, so yeah, in the protocol and implementation world for IPFS we've had a lot going on. We just shipped the 0.19 release candidate for Kubo, which has some significant resource manager UX improvements and some gateway changes in it. Helia has been chugging along; for people who aren't familiar with Helia, it's a new JavaScript implementation. There's a new gateway binary that we're building; there's a link there to the repo. It's the future for the IPFS.io infrastructure, and we're starting to dogfood it; we've extracted the Kubo gateway code out into a library that we're using for this. There was an experiment done on balancing the buckets in the DHT to improve performance. The TLDR is the performance improvement wasn't as good as we thought it was going to be, but you can read the report there for more information. As Dennis just mentioned, we added website monitoring to the Nebula crawler, which produced the images that you saw in the previous slide. And there was a large network incident that we spent a lot of time resolving.
We have a lot of follow-up work to make the UX better so that doesn't happen again. Coming up, we've got some more specs; there's a gateway Graph API design proposal on the way. We're going to be doing a lot of work on go-libipfs, the purpose of which is to make libraries for building your own IPFS implementation first-class citizens. As part of this, we're going to be refactoring a lot of code in the IPFS org and moving stuff around; there are a lot more details about what we're going to be doing at that link there. We've got some changes to delegated routing to enable streaming responses, which is important for DHTs. And Kubo 0.20 is coming up with a lot of fixes from that incident that Dennis talked about. Helia also has a lot of upcoming work.

Hello, so I'm here in person this time. So, what's up with IPFS developer experience? As many of you know, we've been working hard to migrate Kubo from CircleCI to GitHub Actions, and I'm happy to report that we're just now concluding this process. Our research shows that for Kubo, at least, GitHub Actions is not only cheaper, but also faster and more reliable than CircleCI. As part of the migration process, we've created a comprehensive monitoring solution for GitHub Actions in Grafana, and that solution can easily be deployed to any org or repo in the Protocol Labs realm. So if you're interested in gaining insights into your GitHub Actions, please don't hesitate to reach out; I'm definitely more than happy to help you set it up. As for the future, our main focus right now is IPFS gateway conformance testing, and we are dedicating all of our resources to supporting all the gateway implementers. We are currently on schedule to start using the gateway conformance testing framework that we developed in Kubo, or go-libipfs, as early as next week. So it's exciting times ahead. That's all from me for now, thank you.

Awesome. Over to libp2p.
So, libp2p is the modular networking stack that's powering IPFS, Lotus, and a whole bunch of other networks, implemented in many different languages. Next slide, please. This is our KPI slide; I put version 0.0.1 with an asterisk since this is the very first version. What we're tracking right now, and what I want to show you, is network size: as of yesterday, approximately 58,000 nodes among the networks that we're cataloging. As you can see, this is from a Kademlia exporter dashboard in Grafana that Max from rust-libp2p put together, so those are roughly the numbers that we see there. As you know, after the Merge, libp2p is powering the beacon chain, and all the beacon chain nodes in total sum to about 5,100. Our hope is that we track these metrics every month so we can see how adoption is going across networks. In terms of community activity, we get a lot of contributions from external contributors. Some great news to share: some contributors have been so strong that we've considered hiring them or giving them grants. So hopefully, as we progress, we get more contributions and can retain long-time contributors. Next slide, please. In terms of highlights and general project updates, we've defined our OKRs for H1. I think our main goal is making sure that all the engineering work we do is focused on what creates value for users, and really wrapping that into the OKR process. As I think we shared last month, our interoperability testing efforts are going really strong. We're continuing to invest in testing browser use cases, like adding WebSocket Secure testing, and we're adding tests for browser-to-browser connectivity. One of the gaps in our testing has been more extensive protocol-level testing, so we're starting to address that by adding connectivity tests for Circuit Relay v2, with a lot more coming up. We're also doing some benchmarking work.
If you'd like, you can go to the slides and see the benchmarking protocol we've specified; those benchmarks are being added to the different implementations. The HTTP work is still going strong; definitely check out the video that Marten shared for Move the Bytes. And then there's a lot of community engagement, but the main focus right now that I want to share with you is that in the month of March we want to complete our browser connectivity story. We hope to have the browser-to-browser implementation completed in js-libp2p and the browser-to-server side completed in go-libp2p, and by IPFS Thing we want to showcase this to the world by creating an example app that any new developer can use to quickly launch libp2p nodes and see connectivity across different browsers using the different transport protocols. On the implementation side, we're deprecating Circuit Relay v1 in go-libp2p and js-libp2p, we've had new releases in Go and Rust, and the next js-libp2p release will have Circuit Relay v2. So that's it for libp2p. Thanks.

Over to Filecoin. This is going to be a quick one. To start, Filecoin is a decentralized storage network for storing humanity's most important information. Next slide, please. Very quick KPIs: as everyone knows, Filecoin is one of the biggest storage networks, and I still believe that's the case, in the universe by total storage power. However, just from our network's perspective, we are seeing the raw byte power drop a little bit these days, because a lot of the sectors onboarded about a year ago are starting to expire, and more storage providers are saving their storage for real deal data. Deal sectors take longer to onboard, so network growth is not as fast as before, but we still have a lot of storage in the network. However, deal onboarding is not slowing down at all.
We now have over 600 PiB of Filecoin Plus deal data stored on the network, which is really impressive. And with Saturn and everything else, that data can soon be retrievable and computed over, so that's exciting. Next slide. You can tell I didn't have enough time to actually finish my slides; however, this is literally the highlight of Filecoin: next Tuesday, as Molly mentioned, the FVM is launching. Yay, users can deploy smart contracts on the network! This has literally been in the works for so long, by so many people, that I don't even know what to say, and we are down to six days before the launch. There are a lot of amazing ecosystem initiatives and launches lined up that I cannot speak of right now, but please follow the Filecoin Twitter account and everything else to see all the amazing partnerships going live next week. But yeah, that's it. Super exciting days.

Thank you for all the hard work; I know this is many teams all in crunch mode making sure that this launch goes really, really well, because it's a big one. And I think we'll hear more about some of that progress in a second, but first let's jump into team updates, starting with Bedrock.

So, I'm David Jansky, an engineering manager on Bedrock. We work at the intersection between Filecoin and IPFS, and we have three teams that focus across data storage, discovery, and retrievability. Since it's been a while since we last presented, I'm only going to touch on the highlights, because the teams have made a lot of progress. Starting with the IPNI team, which has been focused on scaling the network indexers, achieving that scalability with advancements to the underlying datastore as well as indexer assignment pools; you can read more on our blog.
The Boost team has added Bitswap support, which means that you can serve Filecoin content over Bitswap, which is pretty exciting, as well as FVM support, which means you can make storage deals with smart contracts for the upcoming FVM launch, which is just as exciting. And the Tornado team released Lassie, an easy-to-use client library to fetch content across IPFS and Filecoin, which really unlocks the potential for Project Rhea and serving gateway traffic from SPs. The team's focus right now is really: how do we unlock that scalability for SPs to serve more retrievals and handle all of that network traffic? That's our focus for the next few months. If you want to find out more, feel free to click on the links or chat in the Q&A. Thanks.

Awesome. Everyone go fetch some CIDs with Lassie. Exciting, exciting days. Retrieval markets!

Hi, Patrick here. This team is working almost exclusively on Station and Saturn. The Saturn network currently has over 1,300 points of presence worldwide. If you look in the background of the slide, you can actually see some dots on a map; those are the actual points of presence live, and at the far left you've got Hawaii, I think. The Saturn network is serving 158 million requests a day. This is synthetic traffic at the moment; as you've seen in previous slides, we're starting to mirror traffic as part of the Rhea program, and it's going to be production traffic very, very soon. And there's a pretty good time to first byte as well. For Station, we've got 5,570 downloads and over 21 million jobs completed. In terms of the roadmap, Saturn milestone one, which is really our work in Q1, is on track. This involves having verifiable payments written into a smart contract on the FEVM, as well as having the first clients onboarded onto Saturn, again with the help of the Rhea program.
On the Station side, we've shipped a wallet and the Zinnia public alpha. I'll mention a little bit more about Zinnia in a second, and Miro is going to demo it in a spotlight in a second too. We've also now got a Station CLI: you don't have to run Station as a desktop app only, it can be run on a server as well. And we're also working to integrate Bacalhau into Station as a module. Highlights: the Rhea program, as you've heard many times, is where we're trying to make data stored on Filecoin available through fast IPFS tooling, to route large amounts of traffic through Saturn, and to save some infra costs at the same time. It's been a real pleasure working with Bedrock, the stewards teams, and Bifrost on this initiative. The Saturn web3 working group is getting ready to move the Saturn payouts onto a smart contract on Pi Day. I've already mentioned the next one, and Zinnia I won't go into because Miro is going to demo this runtime in a spotlight. Opportunities: come join the Station module builders working group if you're interested in Zinnia or Station. And yes, start fetching stuff through Saturn and through Rhea as of today; it's ready to go, just start fetching stuff. That's all for me. Thank you.

Exciting days, and very exciting to get some of these first Saturn clients. Over to Nicola and Max for CryptoNet.

Yeah, so we have the CryptoNet 2023 docket in Notion. We have three main teams working to improve the Filecoin network, and to assess ecosystem and market needs we're running a series of interviews with SPs and others in the ecosystem. Then there's Medusa, which provides a simple, secure, decentralized solution for access control. The big news is that it's now nucleating and will become an independent company later this month; we have around 10 teams using the FVM and building on Medusa. And then lastly we have Retriev.
That's a retrieval insurance protocol, and Estuary is really interested in, and actually excited about, integrating it. And with that I'll hand over to Irene to talk about our protocol updates.

Thank you. On the Filecoin protocol update side, we have work that has been done on improving the mechanics of Filecoin cron, and also on adding the ability to verify data aggregation; I won't give details on these because they will be explained later in this call. Then, we have published a new FIP, Synthetic PoRep. This is a simple change to the PoRep pipeline that allows us to reduce the size of the data that has to be stored during the one hour and 15 minutes of mandatory time between PreCommit and ProveCommit. We think this can bring some nice cost savings, and there is a discussion going on with providers on the FIP discussions page about this; please go ahead and leave your feedback. We also started another discussion about optimistic SnapDeals. This is a way to change the current SnapDeals protocol to make it much cheaper for providers: it can be up to 230 times cheaper for the provider, though it requires some changes on the client side as well. So please go ahead and read that idea too. And please stay tuned, because we will have blog posts coming about all of this that will explain more, and the last news is about a new proving system that is being designed. Thank you.

Definitely go check out the CryptoNet website; it has a lot of really great content, and you can stay on top of everything the team is doing. A great example of network-native working in the open. Cameron, for Bifrost.

Thanks, Molly.
For those of you who don't really know about Bifrost: we're the little team that's responsible for running the IPFS gateways that let our legacy web friends get all the awesome IPFS content that they don't know how to fetch directly. I'll call out a few KPIs about some of the successes the team has had in the last quarter. We've managed not to have any downtime, we've done just a little over 5 billion requests in the last 30 days (that's not a quarterly stat), and a little under four and a half petabytes of data served in the last month. So that's pretty sweet. As you heard from some of the other teams giving updates earlier, we're working quite heavily on the Rhea project and the Saturn integration there, doing work on helping to duplicate the traffic, creating some standards for correctness, defining the metrics, and all sorts of things like that. Also, we've been working with legal on trying to improve the bad bits process; that's something that's a bit clunky and super critical for gateway operators, being able to handle the takedown requests and things that we get so that we basically don't get blocked. On that note, something I'd like to call legal out for: we got alerted to the fact that IPFS was getting blocked in Korea for a while, and they were cool enough to help us get that sorted out through some legal leads, so thanks again for that. And yeah, just a couple of quick shout-outs in the team. Thanks to those who revamped our logging stack, which has been super instrumental in giving us more visibility into what's going on with the Project Rhea work, and apologies to any of our external stakeholders for whom things might have been slightly bumpy along the way while some things weren't quite connected.
Also Mario, for doing some crucial work that helped with the cost reduction process; we've also tied that full circle and automated it, so we've got an Equinix billing exporter that can provide a bit more visibility into our infra spend. And last of all, all the work that George has been doing to help with Project Rhea, the test environment, the traffic mirroring, and all those things; to some degree, what we're doing in the project wouldn't be possible without that kind of apparatus. So thanks for that. And that's all from us.

Awesome stuff. Super excited to see all the cross-team collaboration here. We are now on to our spotlights; a reminder to keep them short so that we save a full 10 minutes for our deep dive, as we have a lot of things to cover. I believe the first one is a video from Miro.

Hello, in today's spotlight I'd like to show you how easy it is to build new modules for Filecoin Station using our new runtime called Zinnia, so that you can measure the performance of your peer-to-peer networks and services from different places all around the world. Let's start by implementing the actual probe, where we dial the ping protocol, send some requests, and measure how long it takes. Then we write this measured data into InfluxDB using their HTTP API for submitting new data, and we can use the fetch API, which you probably know from the browser. Then we put this all together in a loop which chooses a random peer, measures the ping latency, and records the data into InfluxDB. Using this data we can visualize what's going on with our network: you can use InfluxDB dashboards or you can pull the data into Grafana. And that's it; it was only 76 lines of code. You can find the full example on GitHub, and you can learn more about building Station modules in our documentation.
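The loop Miro describes can be sketched as follows. Note that real Zinnia modules are written in JavaScript; this is just a language-agnostic illustration in Python, with hypothetical `ping` and `record` functions injected so the loop logic stays self-contained:

```python
import random
import time

def measurement_round(peers, ping, record, rng=random):
    """One iteration of the module's loop: pick a random peer, measure
    ping latency, and record the data point (e.g. to InfluxDB).

    `ping` and `record` are injected dependencies: in the real module,
    `ping` would dial the libp2p ping protocol and `record` would POST
    a data point to InfluxDB via the fetch API."""
    peer = rng.choice(peers)
    start = time.monotonic()
    ping(peer)  # dial the peer and wait for the ping round trip
    latency_ms = (time.monotonic() - start) * 1000.0
    record({"peer": peer, "latency_ms": latency_ms})
    return latency_ms
```

Running `measurement_round` in a loop yields one data point per iteration, which is the stream of measurements the InfluxDB or Grafana dashboards then visualize.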
And finally, if this is something you can use for your project, please come and join the module builders working group; you can find us on Filecoin Slack.

Awesome. Thank you, Miro. Over to Steve.

Great, yeah, hello. IPFS Thing is coming up quick, April 15 through 19 in Brussels, so not many weeks away. A few things I want to say. First off, anyone is welcome: people working closely on or with the collection of IPFS protocols will be there, including other businesses and providers. This is much broader than people just making commits into the IPFS GitHub org. So, for example, teams like Saturn I think have a lot to benefit from sharing their experience and needs, influencing others, getting feedback, identifying product gaps, etc. And this is more than a place just to present status; it's a place to get work done. Days four and five especially are going to be open for workshops and brainstorm sessions, so please be thinking about how you can leverage this event. There are 10-plus tracks, and over 100 people have registered. You do need to buy a ticket for this, so if you're part of PL EngRes, buy a ticket, but obviously talk with your manager as this will come out of your group's budget. You can also request a hotel room as part of the block that we have; there are messages about it in the PL Slack lobby channel, and please do this soon, ideally this week or next, just to help the organizers out. There will likely be a pre-meeting coming up for those involved from EngRes so that we're aligned and clear on what we're trying to get out of the event for ourselves and for the community. And for anyone watching this from outside Protocol Labs: yes, you need to buy a ticket, but know that there is a scholars program, which offers a fully paid opportunity for individuals from underrepresented communities or with unique circumstances to join the event.
If you have a demo you want to give, a presentation you want to share, or a workshop you want to host, please submit that through the website; that's 2023.ipfs-thing.io. You don't have to have all the details nailed down, but it really helps the organizers get a sense of what's coming, and we'd love to have you there participating. Thanks a lot; look forward to seeing folks soon.

Awesome, hope to see everyone there, it's going to be a great time. Over to Alex for Filecoin risks and resolutions.

So, the Filecoin network has this thing called cron, which is scheduled execution of actor code at the end of every epoch. It's done on behalf of the system, so no external party pays for it, and it performs some very important system maintenance tasks. But we started seeing a lot of work happening in this cron, so much so that it ended up being three times the entire gas target for an epoch's validation, all happening in this unpaid-for, bonus, extra-time execution. This is hurting block validation times, and fast validation is really important for a blockchain network's decentralization, allowing lots of nodes to participate and keep up with the chain, and for chain quality, so that block producers can produce their next block on time after evaluating the previous tipset. We discovered that the built-in storage market is responsible for almost all of this blowout in cron execution, because it offers a very high level of service to its clients: incremental deal payments every day. This is probably far too much service for a built-in, subsidized actor to be offering, particularly since most deals have no payments, so this is largely wasted work.
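To put rough intuition behind the short-term "divide the problem by 30" fix described in a moment: if each deal's settlement is spread over a 30-day cycle instead of a daily one, each epoch's cron only touches about 1/30 as many deals. A toy sketch, not the actual built-in market actor's scheduling logic (the deal population and offset scheme here are made up):

```python
EPOCHS_PER_DAY = 2880  # Filecoin epochs are 30 seconds each

def deals_due(deal_ids, epoch, cycle_days):
    """Deals whose scheduled cron update lands on this epoch, when each
    deal is processed once per `cycle_days`, offset by its ID so the
    work is spread evenly across the cycle."""
    cycle_epochs = EPOCHS_PER_DAY * cycle_days
    return [d for d in deal_ids if d % cycle_epochs == epoch % cycle_epochs]

deals = list(range(172_800))  # toy population of deal IDs

daily = len(deals_due(deals, epoch=0, cycle_days=1))     # 60 deals this epoch
monthly = len(deals_due(deals, epoch=0, cycle_days=30))  # 2 deals this epoch
# per-epoch cron work drops by a factor of 30
```

The total amount of settlement work is unchanged; it is simply amortized over a longer window, which is why this only buys time rather than removing the underlying cost.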
Hopefully we caught this just in time: we were able to detect it, understand what's going on, and propose a fix for the Filecoin network in time to just roll it into our normal release train. The release this will target is network version 19, the planning for which is already underway. And so we've done a short-term fix, which just divides the problem by 30, and that will buy us at least six months to find a more permanent fix for this problem. Ultimately that fix is probably going to be removing this automatic payment processing and putting the built-in market actor down on the same playing field as all the other user-programmed actors that could be markets, which won't have access to this cron, primarily because it's very hard to trust the code that's going to run in it. Thanks to Kubuxu and ZenGround0, who did most of the work on this and have been on this problem for a while. I just happened to get lucky and did the little bit of analysis that discovered it was the market actor. But yeah, expect this to be fixed, and then Filecoin block validation times to drop a lot, in network version 19 sometime in Q2. Great to see that proactive measuring is helping us take early steps and avoid fire drills; a great example of that. We always prefer that versus having to do the fire drill itself. So, over to our next speaker. Oh, sorry. Yeah. Hi. So, small deals: accepting small deals is an issue for storage providers. It's an issue of scale: a medium-sized storage provider would have to accept on the order of a million deals a day to be able to fill up their sealing pipelines, which is why aggregation services showed up some time ago within the Filecoin network, like Estuary and NFT.Storage.
Those aggregation services, while they provide the service of aggregation, the client currently completely trusts that aggregation service, which is fine for as long as those services are trustworthy, which is the case currently. The drawback of that process is the client cannot verify that their data was aggregated correctly, and they cannot show to another party that their data was aggregated correctly within that deal, which is why we created the verifiable data aggregation standard, which produces a proof of data segment inclusion. So, the proof of data segment inclusion ensures correct aggregation of client pieces within the sector, and allows the client to show that proof to a third party or to a contract on chain, which is an important use case on the FVM where, for example, contracts might want to pay for storage of small deals, but that wouldn't be executable today because only a very small number of storage providers will accept small deals. So, the standard itself defines how to aggregate the data, and how to build an index, stored inside the sector, of all the data that was aggregated within the larger deal, such that retrieval is still possible and very easy. We've reached design consensus and the FRC was published; the Go code for proof generation is complete. We're currently working on a Solidity verifier for this proof, such that contracts can verify those aggregated deals on chain. And we will be starting integration with NFT.Storage soon, and most likely with Estuary as well. Thank you. Awesome. Trustless aggregation is a big problem, and it's exciting to see more protocol tools for people to aggregate all of the little data they want to store in Filecoin into nice big chunks that make everyone's life easier. So, great work. Check out the FRC if you want to know more.
Okay, looks like we have a video on the UCAN invocation spec. UCAN, user controlled authorization network, now has an invocation spec, and we put together this interactive Observable document so you can explore it in a more interactive way. It uses several tools, like IPLD schemas to parse and validate schemas on the fly, and it uses the reference implementation to generate data sets from the code snippets. For example, here it showcases the task that this code snippet would generate and the invocation that it would produce. And you can also go look at the whole CAR file, which has a bunch of blocks in it, like the task we saw earlier, the invocation that references it, and the authorization itself. You can also go and modify the code, rerun it, and see how the data sets change. So hopefully this is a more fun way to explore the specification than a wall of text. I also hope you will join web3.storage and IPVM in implementing the specification. Awesome to see. I'm sure we're going to hear a little bit more about the power of UCANs in our deep dive as well, but great to have good explorable specs, and a great example of using Observable for that as well. All right. So today I'm going to talk to you quickly about NFT Forever, where the goal is to preserve off-chain NFT data as a public good. And so we're combining a few new things, FVM, Filecoin and Lotus, plus what is the embryo of an FRC standard, to create a new programmatic deal-making flow. So you can see here what we do: you make a deal proposal by calling a smart contract, and you pay a little gas. Inside the smart contract you have both the escrow, which is the Filecoin, and the datacap itself. That contract then acts as a client and emits an event for a deal proposal onto the blockchain, which is picked up by a storage provider running Boost. They then grab the data out of the payload from the event.
In this case, from NFT.Storage, it's a pre-aggregated CAR file of many NFTs. They'll do the sealing process, create the deal, and then verify it back on chain through the smart contract, saying yes, this is the deal that I want, and then the contract logic takes over. So what we've done here is actually decoupled who's providing the data from who's providing the funding and the datacap, as well as who is going to be picking up and verifying the deal, so you start to see a more organic marketplace forming there. So we're producing a smart contract, and we're going to have some storage providers on Pi Day accepting deals; I think we're targeting at least 70 deals a day for the first few weeks to kind of get it moving. There were a lot of people behind this deal client contract: we have multiple product managers across multiple groups, and then big technical lifts from both Lotus and Boost and some folks like Mike as well. Everybody has their heads down, which is why you heard it through me. And so that is going out next week, accompanying the FVM launch, so let's give these folks a good hand and watch for it. Thanks. Super exciting, showing the power of FVM to take all of these kind of off-chain tools and bring them on chain utilizing this new automation framework, so hopefully things get more verifiable, more automated, and just easier to run and maintain into the future as well with programmable storage. Pretty cool. Over to Jamie for the awesome Countdown to FVM event from last week. Yes. Hi everybody, I'm Jamie with the Outercore events team, here to tell you about our Countdown to FVM event, which took place last week on March 1. It was the day before the ETHDenver conference portion started. It was held in the same venue as the FVM hacker base hosted by the Filecoin Foundation, so we flipped the venue over for the Countdown to FVM event, which was a huge success.
Lots of excitement surrounding the upcoming launch of FVM: it brought in 873 registrations and more than 300 in-person attendees, including devs and investors in the audience. This event was streamed on ETHGlobal TV for virtual attendees and had over 50,000 live stream views, which is huge. It was actually the third largest audience of all events hosted on ETHGlobal TV. There were lots of incredible presentations, panels and more from 34 speakers, and there were 18 projects featured in the early FVM builder showcase. A couple of exciting things to highlight from the FVM team: the client contract deal-making flow is live, with a very big thanks to the FVM, Lotus, Boost and Dredge teams. They did a demo of this at the event, and there's also going to be a recorded workshop shown at the Scaling Ethereum hackathon today at 12 PST on ETHGlobal TV. There are a couple of links there with the recordings of the presentations from the event, the event photos, and a great social reel recapping everything, so be sure to check those out. Thanks to everyone who helped contribute to this being a huge success. Awesome. It was a fantastic event. If you weren't there, go watch the live stream from ETHGlobal, because there's some good content. And this was a component of our overall EngRes presence at ETHDenver, which happened last week in Denver and was a super awesome gathering of tons of groups working across the Ethereum, Filecoin, Layer 2, and many other related ecosystems. We helped host and/or participated in a number of different events: there was a launch party, an FVM social, a crypto weekend day, the Countdown to FVM event that Jamie just told us about, and some awesome dinners organized by the KLDR team. And we also participated pretty heavily in the ETHDenver event itself: we had a booth there, and we had some main stage talks.
We also helped judge the hackathon in many different areas, and saw a lot of amazing folks coming by getting really excited about FVM and how they can make use of it, and also engaging super deeply with the kind of new breakthroughs that are coming out of the PL network and our ecosystem these days. And it was a great gathering point for many different builders from groups like Huddle, Glif, Impossible Cloud, and others who are all harnessing some of these new technologies, so we're excited to collaborate with them as well. And now we have exactly 10 minutes to shoot over to our deep dive on DAG House's w3up API, client and protocols. I'm David, lead for DAG House; we build web3.storage and NFT.Storage, which have grown a bunch in the last year. I won't spend long on this slide, but the point is that it's a reliable, performant hosted IPFS solution that gets your data onto Filecoin. You can see total uploads have grown 80% over the last year. But today's deep dive is about our next chapter as a team and a product. Next slide, please. To meet the needs of our users, we've had to utilize centralized infra providers for performance, reliability and scalability reasons, but the plan has always been to increasingly rely on decentralized infra as it becomes ready, to reduce costs and take advantage of the global network of many independent nodes. Next slide. We had been planning to monetize this year, with the immediate focus for us on adoption with users willing to pay a premium for our services. But given crypto winter, we've pivoted to helping EngRes enable the end-to-end Filecoin story from a user's perspective. This includes building protocols and libraries for developers to take advantage of Filecoin, regardless of whether or not they are web3.storage or NFT.Storage users, and dogfooding these protocols ourselves to progressively decentralize our own infrastructure as the network becomes increasingly mature. Next slide.
So, w3up. We're excited to share details of our new upload protocol, w3up, in today's deep dive and continuing throughout March. You might have heard a little bit about this in Lisbon last year, and it's come a long way since. w3up is a storage protocol, API, and set of clients that allows users to verifiably upload data using their own identity. It's designed as a protocol to be used by any quote-unquote storage service, not just web3.storage and NFT.Storage, but anyone moving data around, especially across permission boundaries. I think it could be really useful for many EngRes and PLN projects. w3up offers a layer of abstraction for an actor to send data to another actor, only it does so in a self-sovereign and verifiable way, bringing IPFS and decentralized authorization protocols to the table. It fills a similar need that quote-unquote S3 compatibility is trying to fill, only it's truly portable top to bottom. It also allows us to progressively decentralize our services as decentralized infra options become viable to fully rely on, without requiring a big code migration from our users' perspective. But it also has a number of immediate benefits, such as faster uploads. Next slide. And here are some of the libraries we've been working on, from the w3up spec, to the protocol and reference implementations, to the libraries built on top, like headless front-end components and the CLI. We're in beta, and users like OpenSea, Tableland and Koi have been trying it out and have had positive feedback about its interface and simplicity and how it just works. And then we have an RC coming up in a few weeks. Next slide. And then in building w3up, we've incorporated our team's learnings from running our hosted IPFS services at scale with competitive performance and reliability, and from talking to a bunch of users. These learnings are most obvious in two categories of lower-level protocols.
I know the word protocols is used generally pretty loosely, but we're heavily relying on what we call, together, the Deep Space Nine protocols, these lower-level protocols. Next slide. And these are the two categories. So first, for data verifiability, we obviously use IPFS, but more specifically, when we can, we send around sets of blocks in CAR files and verify those, rather than transacting block by block. And this is even the case in user-facing situations like uploads and reads. And then for auth, we use UCANs; we've worked a bunch with Fission to design a protocol that services can practically use. This work was roughly shared in the UCAN invocation spotlight earlier. And aside from user-owned identity and the benefits that come with that, UCAN also notably allows permissions to be trustlessly delegated from one party to another. Next slide. And then, hopefully efficiency with verifiability sounds great to you, and taking a step back, we think there's a lot that others in the PLN can utilize from w3up and DS9, especially because they were built to be generic protocols. The goal today is to get you excited enough about them to explore more. And in terms of folks to get you excited, there's no better person than Alan Shaw, so I'll hand things off to him to talk more about the technical bits. Oh no. Okay, David, thank you. We're going to dive in for five minutes; this is going to be a little deep dive on the new architecture we have for web3.storage and NFT.Storage, and we call it w3up. And here it is. So here is the big architecture diagram, but don't worry, we're going to build it up really slowly so it's easier to understand. Well, client side, server side, you understand that sort of stuff. Yeah, we have the client, and it could be the CLI or the client libraries or a web app, but what it does is it does some work locally. And that's on the left-hand side there.
And what it does is it creates a CAR file of some DAG, the thing they want to upload. This is done in a streaming manner, and this is interesting because all of our DAG generation tooling in JS so far has been focused on putting blocks in a block store. We had to create the DAG in memory or on disk, into a block store, and then export it. And this is just slow and either memory-intensive or disk-intensive, depending on how you do it, and in browsers you only get a certain amount of memory to be able to do that sort of thing; you don't want to have another copy of it in memory. We had people complaining about that, so we built these tools to make it easier to just stream stuff up to our service, so that's kind of cool. And the other thing the client does is it signs a UCAN with details that are specific to that upload, and it invokes this storage method, which is store/add. And it's either for their account or for someone else's account, where they've been delegated access to put stuff in it on their behalf, which is amazing. FYI, UCAN, if you didn't know, stands for user controlled authorization network. And so we've been collaborating with Fission on the spec. UCANs are essentially an extension to JWTs, and they allow users to authorize what they do themselves; it's amazing. So anyway, once the UCAN is signed, it gets sent to our w3up API, and we call this a UCAN invocation. The server validates the signatures and the delegation chain in the UCAN, and that ensures that the user has sufficient access to invoke the action, or what it's called in UCAN terms, the capability. In this case, the user is asking to add a CAR file to their storage space.
And so when we send CAR files, we actually address them by a CID, and that is a CAR CID. A CAR CID is just a special CID that is the hash of the entire CAR file, and that hash is baked into a signed URL that is sent back to the user. That URL ensures that the data they upload must hash to the same value as the CAR CID, so that's super cool. And then the user takes that URL and uploads the data to it, the data being the CAR file, and then the upload is complete. The difference here from our old infra, aside from the UCANs, which are huge anyway, is that the CAR goes directly into a bucket; there's no proxying through a worker. They don't send it to us; they send it directly to where it needs to be. And that is effectively a speed increase and a cost reduction, and perhaps most importantly, the upload location doesn't necessarily have to be our service: it could go straight into Saturn or Filecoin, for example. So cool. So then, when the upload is in Elastic IPFS, it's available over Bitswap, for other people to Bitswap as they do. I've talked a bunch of times about how Elastic IPFS works, so I'm not going to repeat it here. I think I did a really good talk at IPFS Camp called "Five Billion Blocks"; if you're interested in Elastic IPFS and how it works, then check that out. The upload process is much faster and more reliable than it was previously, because the CARs are generated in a streaming manner rather than all in memory, and also, by storing directly into the bucket, our per-upload-request chunk size can be like four gigs; it doesn't have to be 100 megabytes anymore. And this just makes stuff a whole lot faster, so super cool. And then you can see the difference in these benchmarks, where for w3up you can see around 40% or faster upload speeds, and then we can do some more optimizations to make this faster.
The second upload here is the same data, and the cool thing about w3up is that because we're using CAR CIDs to address things, and using content addressing properly, you effectively get infinite compression: you don't have to upload the thing again if someone else has already uploaded that CAR file. It just says you're done, so it takes zero time to upload the thing; it's a 100% speed increase. Anyway, you get the idea. Okay, so anyway, back to the diagram; sorry, moving on, as we're short on time. Yeah, the CAR is also sent to our cloud-hosted HTTP gateway and is cached there for fast availability over HTTP. Our gateway is called w3link; you can access it at w3s.link, and it's kind of similar to dweb.link. And so the data gets copied there, and it remains as a CAR at rest, which means that our gateway serves IPFS content-addressed data directly from CAR files, which is kind of cool. We built some awesome indexes: we call one of them Dudewhere and the other one Satnav, and they allow us to serve the IPFS content-addressed data directly from those CAR files. Dudewhere tells you which CAR files your DAG can be found in; it's a mapping from the root CID of the DAG to one or more CAR CIDs. And then Satnav is navigation within your CAR: a Satnav index is a mapping from a CAR CID to all the block offsets within the CAR file. It's actually a CARv2 index, if you know and care about that sort of thing. Anyway, I digress a little bit. So if you want to know a bit more about gateways, then check out ours; it's called Freeway. It's very fast and good fun. Anyway, so pretty soon Spade will be helping us put those CAR files into Filecoin deals with Boost storage providers, and renewing them as well, so that's going to be awesome very, very soon.
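The Dudewhere/Satnav read path described above amounts to a two-level lookup. Here is a toy version with illustrative data shapes; the real Satnav entries are CARv2 index files rather than in-memory maps, but the resolution order is the same.

```javascript
// Sketch of the gateway's two-level read path:
//   Dudewhere: root CID -> CAR CID(s) the DAG lives in
//   Satnav:    CAR CID  -> (block CID -> byte offset/length), CARv2-index style
// Data shapes are illustrative only.
const dudewhere = new Map(); // rootCID -> [carCID, ...]
const satnav = new Map();    // carCID -> Map(blockCID -> { offset, length })

// Given the DAG root being requested and the block we need next,
// find which CAR holds it and where, so we can range-read the CAR at rest.
function locateBlock(rootCID, blockCID) {
  for (const carCID of dudewhere.get(rootCID) ?? []) {
    const entry = satnav.get(carCID)?.get(blockCID);
    if (entry) return { carCID, ...entry };
  }
  return null; // not indexed: fall back to a full CAR scan or another store
}
```

With these two maps populated at upload time, the gateway never has to unpack CARs into a block store: each block read becomes one index lookup plus one byte-range read.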
And throughout this whole process, our verifiable UCAN log store collects these UCANs, and that will later provide us with verifiable transaction receipts. We can also track metrics through the data pipeline by looking at these UCAN logs. You can also see the benefits of using UCANs with user-owned identity in the new architecture: there's verifiability at every step, permissioned interactions, and user-owned, portable identity. Delegatable permissions allow more efficient data pipelines from where the data is sitting. So for instance, if you're an NFT minting tool, you can have your users upload directly to w3up without needing to run a server to proxy the upload. The users don't even need to know that they're uploading to w3up, because they can just be delegated the permission to do it and then send their data where it needs to be; they don't have to register with us or anything, they can just be given access, which is great. Cool, I probably need to skip this bit. I'm going to quickly just talk about the problem this is solving, and if you want, you can look at the slides afterwards. The biggest UX problem we have is how to make public key cryptography tenable for a web2 crowd.
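The delegation story above rests on a simple structural rule: each delegation's audience must be the issuer of the next link, and capabilities can only narrow going down the chain. A toy structural check follows; real UCAN validation also verifies signatures, expiry, and revocation, none of which are modeled here.

```javascript
// Sketch: the structural rule behind UCAN delegation chains.
// chain[0] is the original grant; each later link must be issued by the
// previous link's audience and may only attenuate (never add) capabilities.
function validChain(chain, invoker) {
  for (let i = 1; i < chain.length; i++) {
    if (chain[i - 1].audience !== chain[i].issuer) return false;
    const granted = new Set(chain[i - 1].capabilities);
    if (!chain[i].capabilities.every((c) => granted.has(c))) return false;
  }
  // The party invoking must be whoever the final link delegates to.
  return chain[chain.length - 1].audience === invoker;
}
```

So a service can delegate `store/add` to a minting tool, the tool can re-delegate it to an end user, and the user invokes the upload directly, with no shared keys and no proxy server in the middle.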
We're used to centralized services where you just log in with your email address, and so the problem is, when you switch devices or lose a device, say you drop your phone in the toilet, how do you gain access to your stuff if you lose your private key or don't have access to it from another device? Well, we have a solution for that, and it allows you to gain access to your stuff using just your email. These are the slides I'm just going to breeze past because I don't have enough time to present them, but you can go and look at them afterwards. There we go, cool. All right, so wrapping up: what's hopefully up next for w3up, now that we've built this UCAN-based auth protocol, is that we can run w3up on decentralized infra as well. At DAG House we're doing this a lot with our products, but based on your users' needs, you can use w3up whenever you need to and get the UX benefits that it provides. Some of my favorite ideas that we're hopefully going to get to are writing uploads directly to Saturn L1s or Filecoin SPs, and UCAN validation and receipts on chain using the FVM would be super rad. So I'm really excited for some of the stuff that is just literally opening up for us to take advantage of. Very cool. All right, that's about enough from me and David; I'm really sorry it's taken so long, but if you're interested in this, then please reach out. You can check out our current beta, it's out at the moment, and we're hoping for an RC later this month. Oh, demos: we've got two demo sessions, big demo sessions for deeper dives if you're interested in an actual demo, real usage, and how things work, on March 24th, and w3up on the 31st. We'll record them so you can have them. And yeah, thanks for letting me throw my voice at you for a while. Super exciting, and thanks to everyone who stuck through to the end, and to all of our viewers: dive into some of these links, because UCANs are awesome. I like to pitch them
at every conference; they are a super exciting new technology, and they enable us to do much more of the things that users expect, in a web3-native way. So go and check out all of this stuff more deeply, and come to the IPFS Thing if you want to talk about it more, see even more graphs, and talk to the excited humans behind it in person. So hopefully see you in Belgium in about a month. Thank you everyone, and have a wonderful rest of your day. Cheers, cheers.