We are now on to our spotlights. A reminder to keep them short so that we save a full 10 minutes for our deep dive; we have a lot to cover. I believe the first one is a video from Miro.

Hello! In today's spotlight I'd like to show you how easy it is to build new modules for Filecoin Station using our new runtime, Zinnia, so that you can measure the performance of your peer-to-peer networks and services from different places all around the world. Let's start by implementing the actual probe, where we dial the ping protocol, send some requests, and measure the latency. Then we write this measurement data into InfluxDB using their HTTP API for submitting new data; we can use the fetch API, which you probably know from the browser. Then we put this all together in a loop which chooses a random peer, measures the ping latency, and records the data into InfluxDB. Using this data we can visualize what's going on with our network: you can use InfluxDB dashboards, or you can pull the data into Grafana. And that's it; it was only 76 lines of code. You can find the full example on GitHub, and you can learn more about building Station modules in our documentation. Finally, if this is something you can use for your project, please come and join the module builders working group; you can find us on Filecoin Slack.

Awesome. Thank you, Miro. Over to Steve.

Great, yeah, hello. IPFS Thing is coming up quick: April 15th through 19th in Brussels, so not many weeks away. A few things I want to say. First off, anyone is welcome at this. People working closely on or with the collection of IPFS protocols will be there, including other businesses and infrastructure providers. This is intended to be much broader than people just making commits into the IPFS GitHub org.
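Circling back to Miro's Station spotlight above: the probe, the InfluxDB write, and the loop that ties them together can be sketched in a few lines of JavaScript. This is a rough sketch, not the real module: the `ping` dialer is a placeholder for Zinnia's peer-to-peer facilities, and the org and bucket names in the InfluxDB write call are made up for illustration.

```javascript
// Build one InfluxDB line-protocol record: measurement,tags fields timestamp.
function toLineProtocol(measurement, peer, latencyMs, timestampNs) {
  return `${measurement},peer=${peer} latency_ms=${latencyMs} ${timestampNs}`;
}

// Submit a record through InfluxDB's v2 HTTP write API using fetch.
async function recordLatency(influxUrl, token, line) {
  const res = await fetch(
    `${influxUrl}/api/v2/write?org=my-org&bucket=pings&precision=ns`, // names are illustrative
    {
      method: "POST",
      headers: { Authorization: `Token ${token}` },
      body: line,
    },
  );
  if (!res.ok) throw new Error(`InfluxDB write failed: ${res.status}`);
}

// The main loop: choose a random peer, measure ping latency, record it.
async function run(peers, ping, influxUrl, token) {
  while (true) {
    const peer = peers[Math.floor(Math.random() * peers.length)];
    const start = Date.now();
    await ping(peer); // placeholder for dialing the ping protocol
    const latencyMs = Date.now() - start;
    const line = toLineProtocol("ping", peer, latencyMs, Date.now() * 1e6);
    await recordLatency(influxUrl, token, line);
  }
}
```

Per the spotlight, the full working example on GitHub wires up these same three pieces in about 76 lines.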
For example, teams like Saturn have a lot to benefit from sharing their experience and needs, influencing others, getting feedback, identifying product gaps, etc. And this is more than a place just to present status; it's a place to get work done. Days four and five especially are going to be open for workshops and brainstorm sessions, so please be thinking about how you can leverage this event. So far there are 10-plus tracks, and over 100 people have registered. You do need to buy a ticket for this. If you're part of PL EngRes, by all means attend, but obviously talk with your manager, as this will come out of your group's budget. You can also request a hotel room as part of the block we have; there are messages about it in PL Slack's lobby, and please do this soon, ideally this week or next, just to help the organizers out. There will likely be a pre-meeting for those involved from EngRes so that we're aligned and clear on what we're trying to get out of the event, for ourselves and for the community. And for anyone watching this from outside Protocol Labs: yes, you need to buy a ticket, but know that there is a scholars program, which offers a fully paid opportunity for individuals from underrepresented communities or unique circumstances to join the event. Again, if you have a demo you want to give, a presentation you want to share, or a workshop you want to host, please submit it through the website at 2023.ipfs-thing.io. You don't have to have all the details nailed down, but it really helps the organizers get a sense of what's coming, and we'd love to have you there participating. Thanks a lot; look forward to seeing folks soon.

Awesome, hope to see everyone there; it's going to be a great time. Over to Alex for Filecoin risks and resolutions.

Hi everyone.
So the Filecoin network has this thing called cron, which is scheduled execution of actor code at the end of every epoch. It is done on behalf of the system, so no external party pays for it, and it performs important system maintenance tasks. We started seeing a lot of work happening in this cron, so much so that it ended up being three times the entire target total for an epoch's validation, all happening in this unpaid, bonus, extra-time execution. That impacts block validation times, and fast validation is really important for a blockchain network's decentralization, allowing lots of nodes to participate and keep up with the chain, and for chain quality, so that block producers can produce their next block on time after evaluating the previous tipset. We discovered that the built-in storage market is responsible for almost all of this blowout in cron execution, because it offers a very high level of service to its clients: incremental deal payments every day. This is probably far too much service for a built-in, subsidized actor to be offering, particularly since most deals have no payments, so much of this is wasted work. We were just in time to detect this, understand what's going on, and propose a fix for the Filecoin network that can roll into our normal release train; the release this will target is network version 19, for which planning is already underway. It's a short-term fix which just, you know, divides the problem by 30, and that will buy us a good six months, at least, to find a more permanent fix for this problem.
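One way to read the short-term "divide the problem by 30" fix is batching the market's daily incremental deal payments into 30-day settlement intervals, so cron touches each deal a thirtieth as often. A back-of-envelope sketch, with purely illustrative deal counts (these are not chain measurements):

```javascript
// Filecoin produces one epoch every 30 seconds.
const EPOCHS_PER_DAY = 2880;

// Steady-state settlement updates cron must perform per epoch, assuming
// each active deal is settled once per interval and the work is spread
// evenly across the interval's epochs.
function cronUpdatesPerEpoch(activeDeals, settlementIntervalDays) {
  return activeDeals / (settlementIntervalDays * EPOCHS_PER_DAY);
}

const deals = 50_000_000; // hypothetical active-deal count
const daily = cronUpdatesPerEpoch(deals, 1);   // settle every day
const monthly = cronUpdatesPerEpoch(deals, 30); // settle every 30 days
// The ratio daily/monthly is 30: per-epoch settlement work drops ~30x.
```

The amortized cron load scales with deals divided by the settlement interval, which is why stretching the interval directly buys back validation time.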
Ultimately that fix is probably going to be removing this automatic payment processing and putting the built-in market actor on the same playing field as all the other user-programmed actors that could be markets, which won't have access to this cron, primarily because it's very hard to pay for the code that it runs. So thanks very much to Kubuxu and ZenGround0, who did most of the work here and have been on this problem for a while; I just happened to get lucky and did the little bit of analysis that discovered it was the market actor. But yeah, expect this to be fixed, and Filecoin block validation times to drop a lot, in network version 19 sometime in Q2.

Great to see that proactive measuring is helping us take early steps and avoid fire drills; a great example of that. We always prefer that to having to do the fire drill itself. So, over to you. Oh, sorry. Yeah, hi.

So accepting small deals is an issue for storage providers. It's an issue of scale: a medium-sized storage provider would have to accept on the order of a million small deals a day to be able to fill up their sealing pipelines, which is why aggregation services showed up some time ago within the Filecoin network, like Estuary and web3.storage. While those aggregation services provide the service of aggregation, the client currently completely trusts the aggregation service, which is fine as long as those services are trustworthy, which is the case today. The drawback of that process is that the client cannot verify that their data was aggregated correctly, and they cannot show to another party that their data was aggregated correctly within that deal. That's why we created the verifiable data aggregation standard, which produces a proof of data segment inclusion.
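At its core, a proof of data segment inclusion has the shape of a Merkle-style inclusion proof: the aggregator commits to all segments under one root (the aggregate piece commitment), and anyone can check a sibling path from a segment's commitment up to that root. Here's a toy sketch of that shape; the real standard uses Filecoin's piece-commitment tree hashing, not this deliberately non-cryptographic string hash:

```javascript
// Toy combining hash -- NOT cryptographic, illustration only.
function toyHash(left, right) {
  let h = 0;
  for (const ch of left + "|" + right) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h.toString(16);
}

// proof: array of { sibling, siblingOnLeft } steps from the leaf up to the root.
// Recompute the path and compare against the published aggregate root.
function verifyInclusion(leaf, proof, expectedRoot) {
  let node = leaf;
  for (const { sibling, siblingOnLeft } of proof) {
    node = siblingOnLeft ? toyHash(sibling, node) : toyHash(node, sibling);
  }
  return node === expectedRoot;
}
```

The client only needs their own segment commitment, a logarithmic-size path, and the aggregate root from the deal, which is what makes the proof cheap enough to check in a contract on chain.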
So the proof of data segment inclusion ensures correct aggregation of client data within the sectors, and it allows the client to prove that to a third party or to a contract on chain. That's an important use case in the FEVM where, for example, contracts might want to pay for storage of small deals, but that can't really be executed on today because a very small number of storage providers will accept small deals. The standard itself defines how to aggregate the data and how to build an index, stored inside the sector, of all the data that was aggregated within the larger deal, such that retrieval is still possible and very easy. We've reached design consensus and a draft of the FRC was published; the Go code for proof generation is complete. We're currently working on a Solidity verifier for this proof so that contracts can verify those aggregated deals on chain. And we will be starting integration with web3.storage soon, and most likely with Estuary as well. Thank you.

Awesome. Trustless aggregation is a big problem, and it's exciting to see more protocol tools for people to aggregate all of the little data they want to store in Filecoin into nice big chunks that are easy for everyone to work with. So, great work. Check out the FRC if you want to know more.

All right, looks like we have a video on the UCAN invocation stack. User Controlled Authorization Networks: UCAN now has an invocation spec, and we put together this interactive Observable document so you can explore it in a more interactive way. It uses several tools, like IPLD schemas to parse and validate data on the fly, and the reference implementation to generate datasets from the code snippets. For example, here it showcases the task that this code snippet would generate and the invocation that it would produce.
You can also go look at the whole CAR file, which has a bunch of blocks in it, like the task we saw earlier, the invocation that references it, and the authorization itself. You can also go and modify the code, rerun it, and see how the datasets change. Hopefully this is a more fun way to explore the specification than a wall of text. I also hope you will join web3.storage and IPVM in implementing this specification.

Awesome to see. I'm sure we're going to hear a little bit more about the power of UCANs in our deep dive as well, but it's great to have good explorable specs, and a great example of using Observable for that. All right, NFT Forever.

Yes, hello, Scott here from Philadelphia. Today I'm going to talk to you quickly about NFT Forever, where the goal is to preserve off-chain NFT data as a public good. We're combining a few new things, taking the FVM, Filecoin, and Lotus, plus what is the embryo of an FRC standard, to create a new programmatic deal-making flow. You can see here what we do: you make a deal proposal by calling a smart contract and pay a little gas; inside the smart contract you have both the escrow, which is the Filecoin, and the datacap itself. That contract then acts as a client and emits a deal-proposal event onto the blockchain, which is picked up by a storage provider running Boost. They then grab the data from the payload referenced in the event, in this case from NFT.Storage, which is a pre-aggregated CAR file of many NFTs, do the sealing process, create a whole new deal, and then verify it back on chain through the smart contract, saying yes, this is the CID that I want, yes, this is the deal that I want, and then open it up to logic.
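The storage-provider side of the flow just described, watching for the contract's deal-proposal event, fetching the pre-aggregated CAR payload, and confirming the deal back on chain, might be sketched like this. Every name here is hypothetical; the actual contract ABI, event shape, and Boost integration differ:

```javascript
// Pull the fields a provider needs out of a (hypothetical) deal-proposal
// event emitted by the client contract: the piece CID it escrowed funds
// and datacap for, and where to fetch the payload.
function parseProposalEvent(event) {
  return { pieceCid: event.pieceCid, payloadUrl: event.payloadUrl };
}

// Handle one proposal end to end. `makeDeal` stands in for the sealing
// flow through Boost; `confirmOnChain` stands in for the contract call
// that says "yes, this is the CID and the deal that I want".
async function handleProposal(event, makeDeal, confirmOnChain) {
  const { pieceCid, payloadUrl } = parseProposalEvent(event);
  const car = await fetch(payloadUrl); // grab the pre-aggregated CAR payload
  const dealId = await makeDeal(pieceCid, car);
  await confirmOnChain(pieceCid, dealId);
}
```

The point of the sketch is the decoupling the speaker highlights next: the contract holds the funding and datacap, the payload lives elsewhere (here, NFT.Storage), and any provider listening for the event can pick the deal up.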
So what we've done here is actually decoupled who's providing the data from who's providing the funding and the datacap, as well as who's going to be picking it up, and so you start to see a more organic marketplace forming. We're producing a smart contract, and we're going to have some storage providers on Pi Day accepting deals; I think we're targeting at least 70 deals a day for the first few weeks, to kind of get it moving. There were a lot of people behind this: on the deal client contract we have multiple product managers across multiple groups, and then big technical lifts from both the Lotus and Boost folks, and some folks like me. Everybody has their heads down, which is why you had to suffer through me. So that is going out next week, accompanying the FVM launch; let's give these folks a good hand, and watch for it. Thanks.

It's exciting seeing the power of the FVM taking all of these kinds of off-chain tools and bringing them into this new automation framework, so hopefully things get more verifiable, more automated, and just easier to run and maintain into the future with programmable storage. Pretty cool. Over to Jamie for the awesome Countdown to FVM event from last week.

Hi everybody, I'm Jamie with the Outercore events team, here to tell you about our Countdown to FVM event, which took place last week on March 1st, the day before the ETHDenver conference portion started. It was held in the same venue as the FVM hacker base and hosted by the Filecoin Foundation, so we flipped the venue over for the Countdown to FVM event, which was a huge success. There was a lot of excitement surrounding the upcoming launch of the FVM. It brought in 873 registrations and more than 300 in-person attendees, including devs and investors in the audience. The event was streamed on ETHGlobal TV for virtual attendees and had over 50,000 live stream views, which is huge.
It was actually the third-largest audience of all events hosted on ETHGlobal TV. There were several presentations, panels, and more from 34 speakers, and 18 projects were featured in the early FVM builder showcase. A couple of exciting things to highlight from there: the client contract deal-making flow is live, with a very big thanks to the FVM, Lotus, Boost, and dredge teams. They did a demo of this at the event, and there's also going to be a recorded workshop shown at the Scaling Ethereum hackathon today at 12 PST on ETHGlobal TV. There are a couple of links with the recordings of the presentations from the event, the event photos, and a great social reel recapping everything, so be sure to check those out. Thanks to everyone who helped contribute to making this a huge success.

Awesome, it was a fantastic event. If you weren't there, go watch the live stream from ETHGlobal, because there's some good content. And this was one component of our overall EngRes presence at ETHDenver, which happened last week in Denver: a super awesome gathering of tons of groups working across the Ethereum, Filecoin, layer-2, and many other related ecosystems. We had a number of different events that we helped host and/or participated in: a Launchpad and FVM social, a crypto weekend, the Countdown to FVM event that Jamie just told us about, and some awesome dinners organized by the LDR team. We also participated pretty heavily in the ETHDenver event itself: we had a booth there, we had some main stage talks, and we helped judge the hackathon in many different areas. We saw a lot of amazing folks coming by, getting really excited about the FVM and how they can make use of it, and engaging super deeply with the kinds of new breakthroughs coming out of the PL network and our ecosystem these days.
It's a great gathering point for many different builders from groups like Huddle01, Glif, Impossible Cloud, and others who are all harnessing some of these new technologies, and we're excited to collaborate with them as well.