Welcome everybody to our March EngRes the Gathering. If you haven't been here before, here is our agenda. We'll do a quick welcome. We have project updates from five different projects. We have something like eight spotlights from different things that are launching or giving updates in the ecosystem. And then we have two deep dives and demos: one with Alan on some of the stuff they're working on in W3S, and then from Shashank on DeStor. And then we'll have lots of time. If you want to follow along as we go, this is a QR code to the slide deck I'm currently presenting, so you can make comments or ask questions as you go, which helps people follow up later. As a reminder, what is this gathering? Who are we as humans? We're folks across the Protocol Labs Network working on a whole ton of different open source projects across the entire R&D pipeline, from many different companies and startups who are all building on this: researchers, engineers, protocol developers, and folks who are pushing forward the decentralized web. If you're not as familiar, the Protocol Labs Network works across three main verticals. What many of us work on is securing and establishing digital human rights, working to upgrade the internet with more capabilities like content addressing, verifiable storage, secure peer-to-peer networking, and many, many other capabilities. We also have a core pillar around upgrading economies and governance systems, using better ways of funding public goods and valuing the contributions from the many, many different open source contributors and ecosystems. And then also a breakthrough pillar on developing safe VR, AI, robotics, BCI, and WBE. What is WBE? I asked yesterday because I didn't know what that was: that's whole brain emulation.
So these are really, really cool new breakthrough technologies that are just on the cutting edge of becoming possible, and making sure that we develop a safe forward pathway for these new technologies is really critical, and is what many of these building blocks aim to do for the internet as well. This is a snapshot, just a subset, of the many different projects across that R&D pipeline that folks work on, and our PL Engineering and Research Working Group (this one) aims to support and connect these many different projects with each other, so that we can help ideas that form in the early research stage make it all the way through to large-scale production, affecting the many, many people who interact with technologies built on them. So this is our working group mission: we aim to support and accelerate engineering and research breakthroughs across the PL Network. We want to share the latest breakthroughs, we want to support network-native research, and we want to grow the OSS projects, networks, and communities that work on them. Highlighting a couple of recent launches and a couple of things that are upcoming. Big, big, big snaps for the Fluence team, the IPC team, and many folks who have been involved in FVM over the years for launching the first Filecoin Layer 2. That is a super exciting milestone. We'll hear from folks from the Fluence team in a bit, but huge congrats on that. Very exciting milestone for Filecoin, for Fluence, for IPC, for everyone involved to launch that new network. And if you haven't already, go start playing with it; it is live and ready for people to engage. We also this past month had an awesome set of ecosystem gatherings around ETHDenver. There was Fil Dev Summit #3, libp2p Day, DePIN Day, and many, many other events that I cannot remember the names of off the top of my head, but it was an awesome gathering.
There are some great recordings of talks and updates that are now live that you should go take a look at, and there was a lot of opportunity to sit down and talk with folks as well. If you didn't already see, it was announced and launched at those events that there are retroPGF rounds being run for both Filecoin and libp2p next month, which is super exciting. Filecoin nominations close this weekend, so get on it if you haven't already started thinking about nominating the projects that you are most excited about, depend upon the most, or think have made the biggest impact on the Filecoin ecosystem in the last six months, and make sure they're getting recognized by the community for their amazing work and the public goods they've created. The Filecoin Foundation, around ETHDenver, also released their 2023 annual report, which shares the ecosystem growth we've seen and their enablement efforts to support it. Looking forward to next month, with a lot of events and gatherings coming up as well. DeStor has some exciting stuff that they'll be launching in Hong Kong, I believe, helping accelerate the onboarding pipeline for new Web2 and Web3 users who are onboarding into the DeStor ecosystem and want to figure out how to onboard their data to Filecoin effectively. Textile Basin will get a deep dive in the project section, and they are actively looking for folks who are interested in being POCs in the DePIN space. So if you are a DePIN network looking for ways to store your data on Filecoin, come talk to the Textile team. We also have the next network upgrade for Filecoin, Dragon, which is launching I think on April 11th, though someone correct me if I'm wrong. That's gonna be a big upgrade: it brings a whole ton of new capabilities to the Filecoin network, including the building blocks, at least, for user-programmable markets.
So, ways of building new markets on Filecoin that are more efficient or have different requirements than the current markets implementation. We mentioned this last time, but lots of work is going into bringing fast finality to Filecoin: single-block-time finality, which will enable super fast bridging between different networks, super fast transaction confirmations, and a much better experience overall for all FVM applications and ecosystems. So we're very excited, and would love to talk to people who are aiming to utilize that in Q2, so we can make sure it works really well and there's a whole set of people who can take advantage of it in the early days. And then an early heads-up that IPFS Camp and Fil Dev Summit #4 will be happening in Brussels at the beginning of July, I think July 8th through 13th or so, alongside EthCC. So if you are not already planning to be in Brussels during that time, come hang out with all of the wonderful humans you collaborate with every day at some of those events. It's gonna be a great time. You'll hear more about IPFS Camp in one of the spotlights as well. Cool, speaking of events, there were a ton of phenomenal events; here's a snapshot of pictures. And we have some exciting ones coming up, including NFT NYC, FIL Hong Kong, Funding the Commons San Francisco, and the first public goods lab week for the critical apps network on top of that same Funding the Commons weekend. So check out events.pl.io for all of those upcoming events. There's also a little view into some of the significant events happening at the Protocol Labs network layer. We are almost at PL's 10-year anniversary, which will be a crazy milestone, and awesome to get together all of the founders and early supporters and contributors to the PL network over the years. Pretty, pretty crazy.
There's also this public goods lab week happening in April, and then the Future Realms lab week, focused on AI and neurotech, will be happening in June, co-located with Edge City. So mark your calendars. There's also gonna be a PL meetup in Austin for Consensus, and there are monthly PL lab dinners that are great gatherings to get to know the other people in the PL network in your region, and maybe make some cool new connections or find opportunities to collaborate. All right, and without further ado, I will hand off to the project updates, starting with libp2p. Yay, thanks Molly. So as most of you already know, libp2p is the network at the heart of most of your favorite Web3 projects. For all intents and purposes, this project is the utilities and sanitation crew that makes most of this stuff work. Next slide please, Molly, and I'll make this real fast. We got lots of cool updates. So we finally closed the deal on the retroPGF round from last fall, and we have reached our continuity funding thanks to Optimism's retroPGF. It was huge: we were something like number five on the list of around 600 organizations in terms of how much money we received, and we are so humbled by everybody's support. So thank you. Let's see, we had ETHDenver last month, which was a huge success. We had a compressed schedule; we only did half a day. We had about 230 registrants and about 70 people filtering through the day. We had a ton of talks from a lot of different related organizations, some really, really good ones. We did a roadmapping and SWOT session at the end, which has been very useful in setting direction for 2024. One thing I wanna call out is there's a strong desire to make the libp2p community much more research oriented. Since we already have functional libp2p implementations, I think people are starting to ask: what's next? What can we do with this great tool that we have? There's a link here to our talks from ETHDenver. Please check it out.
And we're hoping to have a presence at IPFS Camp and EthCC Brussels; that is in the works right now. Look for announcements in the next EngRes the Gathering. Let's see, so we have implementation updates in the upper right-hand corner there. There have been a number of releases over the last couple of months, and there are a few pending ones that we'll be talking about next month. I think we have releases coming in for rust-libp2p, js-libp2p, and, the thing I'm most excited about, py-libp2p, which just got revived this morning. We had our first open maintainers call for py-libp2p in two years, so now we have four active maintainers, with meetings that happen every couple of weeks. So things are growing, things are coming, getting bigger and more exciting. And I think we're expecting the nim-libp2p community to have a maintainers call starting sometime next month. The last big announcement, and Molly hinted at this earlier, is that libp2p is partnering with Gitcoin, and we will be having a retroPGF round for contributors and projects in and around the libp2p community next month. So keep an eye out for that and get the nominations in in April. And hopefully we will be able to direct the funds to make libp2p even more useful and provide a lot more capabilities to the downstream projects that rely on us. And that's it, Molly. Thank you. Awesome, thanks Dave. Over to Filecoin. James, I think you're first. I'm actually gonna take this one. So for Filecoin, we are building a decentralized storage network. Next slide, please. Most of the updates actually got sneak-peeked by Molly earlier already, so I'm gonna go really fast. But yeah, last December we shipped the nv21 WaterMelon upgrade for Filecoin, and since then we have been working on nv22, which is the Dragon upgrade coming in April; Molly got the date right. As you probably already know, direct data onboarding (DDO) is coming.
We're switching to the new drand quicknet, which allows us to have faster drand rounds that you can hopefully soon be using from FVM smart contracts. I'm very excited about that, and about building things like timelock encryption use cases in smart contracts. We're also enhancing some of the performance of the network by moving more jobs out of the cron actor, so that the chain is more stable. And we're adding a lot of native actor events into the built-in miner, market, and verified registry (Filecoin Plus) actors, so that toolings can adapt to the DDO changes while still having a lot of visibility into the data onboarding activity on the Filecoin network. Like we mentioned, Dragon already launched on the calibration testnet last month, so please join there and test your stack, and hopefully everything goes very smoothly in April and everyone can enjoy the new features of Filecoin. We have a couple of new integrations. Supranational's SupraSeal optimizations for C2 and PC2 have been fully integrated into the Lotus miner in the recent release, improving sector sealing times by up to 75%, which is a huge improvement for storage providers. So please try that out. A lot of opportunities: we have talked about fast finality a lot, and we actually have a website for that now, so if you haven't checked it out, please do. I will do Curio next. The cryptography team has also been working on a new PoRep that can bring more cost reduction for storage providers. So we're working on NI-PoRep, a different PoRep that fully enables trustless Sealing-as-a-Service. If you haven't checked it out, please join the channel; there's a lot of development work going on right now. We're working with partner teams to bring the implementation to life. Your feedback is very welcome. The FIP is still in draft.
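To make the timelock idea concrete: drand beacons are emitted on a fixed schedule, so a future point in time maps to a deterministic round number, and timelock encryption encrypts to a round whose beacon does not exist yet. Here is a minimal sketch of that round arithmetic. The genesis timestamp below is a placeholder for illustration; fetch the real chain parameters from the drand API before relying on this.

```python
# Hypothetical chain parameters for illustration only; the real values come
# from the drand network's chain info endpoint.
GENESIS_TIME = 1_692_803_367  # unix seconds (placeholder)
PERIOD = 3                    # quicknet emits a beacon every 3 seconds

def round_at(unix_time: int) -> int:
    """drand round number whose beacon is available at `unix_time`.

    Rounds are 1-indexed: round 1 is emitted at GENESIS_TIME.
    """
    if unix_time < GENESIS_TIME:
        return 0  # chain has not started yet
    return (unix_time - GENESIS_TIME) // PERIOD + 1

def target_round(decrypt_after: int) -> int:
    # Timelock encryption targets a future round: the ciphertext can only be
    # opened once the beacon for this round has actually been published.
    return round_at(decrypt_after)
```

The key property is that anyone can compute the target round offline, but nobody can produce the round's randomness early, which is what makes on-chain timelock use cases possible.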
So if you are supportive of the FIP, please let us know in the FIP discussion. Huge shout out to the ops team, who have been taking over a lot of the heavy lifting of the Filecoin public network infrastructure: launching bootstrap nodes and running the calibration testnet. We have a brand new data tab and faucet on the calibration net, and they have been supporting Filecoin snapshots backed by a Forest node, with validation by a Lotus node also running in the network. So if you are using the snapshot, it's actually generated by the Forest node, which is super exciting. Another opportunity: we have been listening to feedback from builders building applications on top of FVM, and a lot of people want support for Ethereum legacy transactions. We opened up a FIP discussion to talk about how we can bring that to Filecoin, so people can integrate with Ethereum toolings like Trust Wallet and MetaMask more easily, with the building blocks they're already familiar with. If you want to see this feature come to the Filecoin network, please again engage in the FIP discussions so you can send a signal to the core devs. Last but not least, I want to give a quick shout out and call out to the Curio team, if you haven't heard about them. They come from the Lotus team, mostly working on the Lotus miner. They are right now nucleating and graduating from the Lotus team, but still working very closely as Lotus maintainers, and they're starting their own organization called Curio that's working on the next generation of the Filecoin SP stack to unleash all of the Filecoin storage potential. The co-founder of the team is Magik, an OG of Filecoin; everyone knows about that. He created Lotus, and he is joined by Andy Jackson, Nicholas, and Mayag, who have all been core team members of the Lotus and Boost teams over the past three years.
So their team will keep maintaining Lotus miner and Boost for the existing Filecoin storage provider users while working on the next generation, AKA a Lotus miner V2 but better and more efficient, which is going to be called Curio. They are going to prioritize the storage provider user experience, and they already have a testnet: the GA launch will be in April, and on calibration you can already test all the DDO features, snap deals, and everything you have been doing with Lotus miner, using Curio storage and the brand new Curio web UI for SPs to manage and monitor your cluster, your pipeline, and your tasks, which has been a highly requested feature from storage providers for a really long time. So please, please, please check it out. I forgot to link the channel, but they do have a public channel in Filecoin Slack, which is #fil-curio-dev. Please join the channel for all the latest development announcements and updates; they are welcoming your feedback. And that's it. Awesome. Tons of updates, and next time we meet we'll already have an upgraded network. Do we have Tom to tell us about the Filecoin retroPGF? We might not have a Tom, so I will stand in and pretend to be Tom. As a reminder, I mentioned this briefly earlier, but we have our first Filecoin retroPGF, or retroactive public goods funding, round that kicked off a couple of weeks ago at ETHDenver. The aim is to close the impact gap between things that are super high impact but maybe don't have ways of capturing other funding routes, and to make sure that public goods are rewarded in the Filecoin ecosystem, so there's a great incentive for everyone to publish, share, and make accessible super useful tools that help push the Filecoin ecosystem forward. If you haven't already, please, please go and nominate projects that are high value. The criteria is that they have to have created value in the Filecoin ecosystem in the last six months.
And I imagine there are many, many things. I think I was already seeing something on the order of 40-plus nominations, maybe more, when I was staring at it a couple of days ago, and I have a lot of things that I wanna add as well. So please take some time. It closes on March 31st; you only have three more days to nominate all of the things. There's gonna be about two million dollars in FIL allocated and distributed to projects; 200k FIL was the specific number. And the aim is to repeat this at least multiple times a year, maybe every six months. Some people are aggressive and think every four months, but we will see. And we're very excited to run this first retroPGF round for the whole Filecoin ecosystem. Over to Textile. Carson? Hello, can everybody hear me? Great. So this is gonna be a little more marketing and salesy than I normally do, especially given this group. But yeah, we got a pretty cool announcement from Textile. You can jump to the next slide, which has got the meat. So Molly actually kind of already scooped this a little bit, but this is actually the first public announcement of some work that we're currently calling codename Basin, or Project Basin. The name is TBD, but we're calling it Project Basin, and some folks on the call have maybe been involved in some of our earlier MVPs here. But we're going to be launching a decentralized object storage system for high-throughput data, I think the first sort of data L2 on Filecoin, and it's built on IPC. I'll get to that in a second, but I'll do all the marketing stuff first. So this has come after a lot of talks and chats with potential users in the ecosystem and outside of our more standard ecosystem in Web3, and even in Web2, and we've gotten a lot of people saying: yeah, decentralized storage is great, but I use S3, can you meet me there?
And our team has spent a lot of time building databases and thinking about data analytics workflows and all the different ETLs, and a lot of them end up leveraging some sort of object-storage-style API. So we kind of have to build that before we can build all the other things we want on top of it; that's the conclusion we've come to. So hey, let's go and build that. But we're not just replicating S3 as a decentralized service or something like that; there's no point in doing that. We're going to do this right from the beginning as a decentralized network. We already have a devnet, a sort of private testnet, running live; we're throwing lots of data at it, tens of thousands of blobs in an object store. And with that comes all the things you expect from a decentralized and cryptographically secure system. So we've got verifiable data integrity: everything is IPLD, everything is consensus based, with some interesting high-throughput tweaks, so you get full proofs end to end for all of the data you store, which you don't get in a more traditional system. And then programmable persistence. Not everybody wants to store all data forever, right? People want to be able to store data hot for a little while and then just forget about it forever, or they want to store a little bit of data for a little while and then archive it to Filecoin. And through leveraging IPC and our experience in the Filecoin ecosystem, that's something we're going to be providing. The Filecoin archiving piece is not fully fleshed out, but the actual working hot layer on IPC is running.
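The "verifiable data integrity" point boils down to content addressing: an object's address is derived from its bytes, so any reader can check what they got back. A minimal sketch of that idea, using plain SHA-256 hex digests as a simplified stand-in for the IPLD CIDs a system like Basin actually uses:

```python
import hashlib

def put(store: dict, blob: bytes) -> str:
    """Store a blob under its content-derived address (simplified CID)."""
    address = hashlib.sha256(blob).hexdigest()
    store[address] = blob
    return address

def get(store: dict, address: str) -> bytes:
    """Retrieve a blob, verifying its integrity against the address."""
    blob = store[address]
    # Re-hash on retrieval: a tampered or corrupted blob can never be
    # silently returned for this address.
    if hashlib.sha256(blob).hexdigest() != address:
        raise ValueError("integrity check failed")
    return blob
```

The decentralized version adds consensus and end-to-end proofs on top, but the retrieval-time check is the same basic guarantee: the address commits to the content.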
We have done a lot of research on cost: what this thing is going to have to cost. And through the power of IPC we can actually control costs quite a lot, in terms of gas fees and total storage costs, and do a bunch of really intelligent lifecycle management to get things off to Filecoin as fast as possible, but no faster, in order to keep those costs down. And one of the things I'm kind of excited about: I don't know how many people here have gotten a surprising AWS bill in the past. We've adopted a different model for payment, basically an account-abstraction-style thing where you pre-fund, and if you use up those funds you're not going to end up with some bill that you can't pay at the end of the day; you just aren't going to be able to add any more data. So you can really control what your costs are well ahead of time, and you have pretty good visibility into that. So for people who are interested in that, come and chat with us. But probably the most exciting thing for folks here is that it's built on IPC. We're leveraging the IPC subnet framework and taking full advantage of its hierarchical design; it scales surprisingly well. Buckets, if you want to use the S3 language, are custom actors with all sorts of really cool custom data structures built in, and will be logically sharded across subnets. So basically, if you want a region, there'll be a subnet for that; if you want your own private bucket where you're just proving the state, that's possible as well. And then the last thing that's maybe of interest to folks here: we have been working very hard on a custom lazy synchronization process that allows us to ingest very large files and have them under consensus, whilst in the background, more slowly but still pretty fast, synchronizing the full state.
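The "no surprise bill" payment model described above can be sketched in a few lines: a pre-funded balance is debited per write, and a write that would exceed the balance is simply refused rather than billed. Names and per-byte pricing here are purely illustrative, not Basin's API.

```python
class PrefundedAccount:
    """Illustrative pre-funded, account-abstraction-style payment model."""

    def __init__(self, price_per_byte: int):
        self.balance = 0
        self.price_per_byte = price_per_byte

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def store(self, blob: bytes) -> bool:
        cost = len(blob) * self.price_per_byte
        if cost > self.balance:
            return False  # write refused; the balance never goes negative
        self.balance -= cost
        return True
```

The design choice is that failure mode: instead of metered billing that can run away, exhausting the balance degrades service (no new writes) with zero financial exposure.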
So you get a very live chain that reflects updates in real time, while very large data gets hydrated into the subnet in the background by all of the peers. And that's working pretty great. We found one consensus issue last week and were able to patch it super quickly. So it's running, this thing is ready to go, and we will be launching it with partners soon. So if you're interested, come and have a chat with us. We're very excited. Awesome. There are some questions in the chat, which feel free to respond to in the meantime, but we'll also have time for Q&A at the end on exactly how Basin is gonna be a part of the Filecoin ecosystem and connect and show proofs of longer-term storage. Feel free to add questions there; Carson can respond, and we'll go more into it later. For now, moving along to IPFS. I think Yiannis, you're first. Yeah, hello everyone. IPFS is the one and only data and content platform of Web3, built on peer-to-peer and content addressing, in case you missed it. Next slide please. Quick update on some of the numbers that our ProbeLab team is providing. Actually a pretty boring month in terms of numbers; nothing moved too much up or too much down. So everything seems to be stable in terms of the number of servers and clients in the Amino DHT and the lookup and publish latencies. You can find a lot more about this at probelab.io, as well as explanations of the experiments we're running and how we're monitoring this data. Something new is the daily P95 time to first byte from the ipfs.io and dweb.link gateways, which is of course super interesting. It seems to be operating as expected; there are a few bumps and ups and downs, but overall performance has been as expected by the team.
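For readers unfamiliar with the metric: a daily P95 time to first byte is the latency that 95% of that day's gateway probes came in at or under. A small sketch of one standard way to compute it (nearest-rank percentile) from raw samples; ProbeLab's exact methodology may differ.

```python
def p95(samples_ms: list[float]) -> float:
    """Nearest-rank 95th percentile of a list of latency samples (ms)."""
    if not samples_ms:
        raise ValueError("no samples")
    ordered = sorted(samples_ms)
    # Smallest value such that at least 95% of samples are at or below it.
    rank = -(-95 * len(ordered) // 100) - 1  # ceil(0.95 * n), 0-indexed
    return ordered[rank]
```

Tracking P95 rather than the mean is deliberate: it surfaces tail latency, which is what users actually feel on a bad request, while staying robust to a handful of extreme outliers.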
Now on probelab.io you can also find these weekly reports for IPFS, with several more metrics on the geographic locations of users, the protocols that are supported, new protocols that have been found in the network, and so on. We're building weekly reports for many other IPFS networks that you're going to find at that same spot in the near future. That's it from me. Thank you very much, and over to Adin, I believe. Yeah, so I'm gonna do my best standing in for Mosh today, but IPFS Camp is live. So as Molly mentioned earlier, it'll be after EthCC, on the 11th to 13th. That's the website to go check out so you can sign up. Early bird pricing is through the second, which is soon, so you should go now, or right after the meeting. And if you have a talk proposal or a new track, or you'd like to sponsor the event, please reach out. As part of growing the project in general, there are some exciting updates around making the public gateway infrastructure for ipfs.io and dweb.link more community owned. This means updates to things like the terms of service and the protocols used by the gateways, and talking with users and forming a user council around the needs of business users and digital human rights users of the gateways, and what they need to see improve, both at the infrastructure layer and at the protocol layer. And there's gonna be a retrospective on the IPFS implementations fund that Mosh is going to be talking about at the Funding the Commons event in the middle of April, and there'll be some reports that come out around that to check out. And that is all for me pretending to be Mosh. Woohoo, thank you, Adin slash Mosh. Over to an update from the Fluence team on their recent launch. Hello, everyone. Hey, Molly, can you switch to the next slide right away? Yeah, I have everything here. So we launched mainnet this month. We are very excited about that.
It gave a lot of stress to the IPC team and to our team as well. But basically we launched the off-chain compute network of compute providers, which is live and running, and we launched the on-chain marketplace built on IPC as a Filecoin Layer 2, and it also works. It didn't die yet and is still live, so we are very excited about that. But every day we're still looking very closely at everything that is happening there. We are slowly onboarding providers, slowly allowing them to add compute capacity and submit proofs, and we are checking that everything operates correctly, that rewards are distributed correctly, all kinds of things. So we've been busy fixing some small bugs and tweaking performance and some economic parameters, but overall everything looks great. And yeah, we announced the launch in Denver, actually, in February. We hosted this DePIN Day and had quite a lot of people there; I was speaking, and there were also a lot of people from the PL Network there, having fun in Denver. We had a cotton candy machine at our booth at ETHDenver, so we offered people the chance to eat some clouds, which was very fun. We also announced these developer rewards, where we basically allocated tokens to a lot of Web3 contributors; a lot of people from the PL Network also received them. And so far we've got 4,500 claims, which is great. It's actually more than we expected out of a total 10K capacity of claims, so basically half of the capacity is already claimed, which is amazing. Yeah, we just keep building and keep looking at this stuff. It works; it's stressful, but it works. So it's been a pretty stressful and exciting month. Awesome. Well, definitely flag if there are ways that we as a community can continue supporting you guys with the exciting mainnet. Pretty awesome. Oh yeah, yeah. Okay, cool. With that we will switch into our spotlights, and I believe I'm the first one, spotlighting the Fil Dev Summit #3 that happened at ETHDenver.
It was one whirlwind day: five tracks, 37 speakers, 128 attendees, and a really awesome gathering of folks. We had tracks focused on people who are new to the Filecoin ecosystem, with the Welcome to Filecoin track introducing different key components. We had deep dives on smart storage networks, with updates from IPC and Basin and many other groups. We had a deeper dive on retrievals and Fil+ DataCap allocators. We had the announcement and initial workshopping around the Filecoin retroPGF, and an awesome track on decentralized compute, storage, and AI, which was awesome and accelerating. So big thanks to everyone who was able to attend. The videos are now live for almost all of these; big thanks to James for all of the editing. If you scan this QR code, you'll be taken to the playlist of all the videos. There were five simultaneous tracks, so no one was able to attend all of them at the same time, and all of those videos are up. I've already watched a whole chunk of the ones I wasn't able to be in, and highly recommend that. Please share these with all of the people who weren't able to make it in person but want to follow back up. And I think I already mentioned that we're gonna have the next FDS4 coming soon; we'll get that up on the Fil Dev Summit website so folks can plan and start getting early bird tickets. Over to Patrick. Hello. Yes, so I'm speaking here about Station, Spark, and Voyager. Firstly, to say that the team behind Station has now created a new company, and it's called Space Meridian. So you can imagine sort of lines through space in between all these Web3 planets; that's the kind of imagery we wanted to capture with this. Our first project, Station, has had a great month. I've actually been out of office this month, but it's been great to see it continue to grow even sort of passively. We're now up to over 10,000 unique participating addresses on Station in March alone.
And this is up by many thousands from February and January. On the Spark protocol, we've made some nice updates this month. We're now able to calculate the retrieval success rate per storage provider for file retrievals. This involved a bit of fiddling around with stuff we could find on chain, and then advertisements going through IPNI, and then matching it back up to the Spark tasking protocol. So we managed to get that all lined up, and that's gonna give us some great insights into retrieval success rate. And just to say, this is about file retrievals, not just piece retrievals, so it's really cool that we're gonna be able to start to see this. The first module that is running on Station is Spark, and the second one is Voyager, and Voyager is sampling and testing the W3S slash Saturn network, as you can see in that nice logo there. Voyager is now making over 500,000 requests an hour to this network to monitor its health, and we're hopefully gonna increase that over the coming weeks. And we're just putting the payment rails in place such that Station operators can earn for doing these Voyager jobs as well. So overall, we're now looking at two modules on Station, for both of which users are soon gonna be paid, and loads more ideas down the track. If anyone would like to speak about similar use cases of sampling, testing, retrieving, and deploying onto the Station DePIN network, then please reach out to us at Space Meridian. Woo-hoo, probably the fastest way to get a whole network of nodes spun up and doing useful stuff. Awesome, and thanks so much for sharing, Patrick. I continue to earn my Station rewards every day; micro-FIL for the win. Cool, moving on to FilPonto. Eva? Hi, my name is Eva, and I've been managing Filecoin chain infrastructure for the past few years at Protocol Labs as part of Outercore.
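Once the on-chain deals, IPNI advertisements, and Spark measurements are matched up, the final per-provider success rate is a straightforward aggregation. A minimal illustration of that last step, with measurements reduced to (provider id, succeeded) pairs; the real Spark pipeline does the heavy lifting in the join before this point.

```python
from collections import defaultdict

def success_rates(measurements: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-storage-provider retrieval success rate from raw measurements."""
    ok = defaultdict(int)
    total = defaultdict(int)
    for sp_id, succeeded in measurements:
        total[sp_id] += 1
        ok[sp_id] += int(succeeded)
    return {sp: ok[sp] / total[sp] for sp in total}
```

A usage example: three measurements across two providers would yield `success_rates([("f01", True), ("f01", False), ("f02", True)])`, giving provider `f01` a 50% rate and `f02` 100%.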
My new creation is FilPonto, which is a new fund and support unit with the mission of making Filecoin more accessible to a technical audience, including web3 builders and companies. My top three activities include: one, chain infra as public goods, like free-tier public RPC, chain indexers and subgraph support, and open source developer tools and libraries; two, helping web3 teams integrate with Filecoin; and three, grants advisory, including day-to-day technical project management of service partners and helping teams figure out how they can grow. I plan to collaborate closely with the Filecoin Foundation's Dev Grants team, with FilOz and other node client teams, with Filecoin TLDR and Ansa Research, and with other teams helping grow the Filecoin ecosystem. FilPonto is currently being finalized by the Open Impact Foundation, and as a new team it plans to be transparent about its finances and operations. And I'm excited to be embarking on this now as a decentralized spoke. Thank you, back to Molly. Thanks, Eva. And definitely, I know many people have gotten to collab with Eva over the years in pushing forward the whole Filecoin infrastructure space, and I'm excited to see it keep accelerating. Oh, cool. If y'all weren't at ETHDenver, or you weren't one of the set of people who was able to come to the Orbit happy hour or hang out during Filecoin talks, you might not have gotten to talk about some of the themes for Filecoin in 2024. I put together this rough presentation, not to overly self-promote, but because I was very curious to get people's thoughts and feedback and to debate different points of view on the top main themes for Filecoin in 2024.
What I posited, and I'd be excited to hear other people's ideas, was that the top three themes were: one, L2s building on Filecoin, exemplified by Fluence, Basin, and many other groups who are building new networks that interoperate with Filecoin storage and build on top of the Filecoin network of storage providers, technical capabilities like IPC, and the deeper DePIN compute ecosystems that are coming into Filecoin land. Number two was building hot, fast storage interfaces for Filecoin, similar to Basin: leaning into having those very responsive UIs that make it much, much easier to integrate with Filecoin storage, and that interoperate smoothly with Filecoin L1 storage deals, in order to make it much more accessible in many other networks or for many different application types. And then number three, scaling the Filecoin economy. There's been a lot of growth in Filecoin FVM TVL, and we'll hear about that in a second, but continuing to scale the DeFi ecosystem, continuing to make improvements at the protocol layer that improve network-wide OPEX, and also onboarding more paying users for Filecoin storage, which I know is something the storage teams are working hard on. And we as a whole ecosystem should continue to push a lot on these top three themes for the year. If you didn't get a chance to watch it, there's a QR code to the talk. Otherwise, I'd be really curious to hear if people have thoughts on top themes for Filecoin that maybe aren't covered here, or a different set that you'd promote. Cool. With that, over to James for Protocol Office Hours. Thanks, Molly. Hey, everybody. This is James from the Fellows team, and I'm going to speak briefly about our new Protocol Office series of sessions. We have two flavors. We have Protocol Office Days, which are longer sessions focused on diving deep into specific protocol topics.
And we have Protocol Office Hours, which are shorter sessions where you can pop in and ask and share anything you'd like. All of these are going to be focused topics. We're hoping to foster collaboration and strategic alignment across all of our ecosystem partners and friends. We've already hosted three sessions: a deep dive on aggregators and a deep dive on market economics, which were both Protocol Office Days. They've been excellent chats and really well attended; both of those ran three hours. And we've had a really good AMA session as well for the upcoming network version 22, which was hosted in the Filecoin Slack in the fil-help channel. It featured a really impressive lineup of Filecoin implementers. We've put everything in Luma, so you can stay up to date with all the events that have happened, and of course those that are coming up as well. Just go to the link there, which is for the org's events. And of course, if you have anything to propose and you wanna meet with the protocol and implementation engineers, anything and everything is welcome. Send us an email at the address at the bottom of the slide there, and we'd be happy to hear from you and arrange something. That's it for me. Thanks. Woo-hoo. All right. On to Sarah for the builder update. Cool, so hi everyone. I'm Sarah. I'm mostly looking at Filecoin DX: basically taking everything awesome that everyone here has built, bringing it to builders, and testing how hackers adopt what we've shipped. So, pretty exciting stuff for FVM. TVL has grown a whole bunch; I mean, look at the placement, that's insane, from 80 to number 18 on the TVL charts. It's grown a lot. At the same time, we've also been growing the number of unique contracts deployed. Last I checked it was about 3,400, so the number's slightly updated. It's been pretty consistent growth since we launched last March, which is awesome.
But that tracks mostly mainnet stuff. On the testnet side, there are a lot of projects testing things out right now, mostly coming from hackathons. Shout out to the architects team, who work really closely with them to bring them through the builders funnel, to see them continue to build and eventually ship onto mainnet, which will then increase the number of unique smart contracts we have deployed. A few cool examples. There's been a lot of excitement around using FVM plus IPC. We've been bringing that into all the hackathons and have updated a lot of our core messaging there, so our bounties are also mostly focused on FVM plus IPC. We've seen some teams use it, actually deploying IPC even to Base, which is super cool, and having Solidity contracts access custom syscalls that they're deploying with the custom IPC subnets they're using. We've also seen it being used for turn-based games, as well as for using storage capabilities on Calibration, calling them from an IPC subnet, which is much, much faster, and then having those requests done on Calibration itself and pushed back onto the subnet. In terms of hackathons, we've been super active in that space, again working really closely with Ruben's and James's teams. We have the ongoing data economy hackathon right now, which is a flagship hackathon for Filecoin. We have close to 400 hackers and about 40 projects that are already self-submitted. It's ending soon, so we're really excited to see what they've built in a month. We've also participated in ETHGlobal London and ETHDenver to make sure that Filecoin is still being kept top of mind. And then for scalability, we're participating pretty heavily in that space, bringing FVM plus IPC together into a few key scalability hackathons. So you might have heard of Scaling Web3 as well as Scaling Ethereum, which are really big hackathons coming right up.
In terms of capabilities, we still see usage of the Lighthouse SDK being super popular: super simple access to storage capabilities from a smart contract. So we're excited to see Basin come out as well, as another option for people to use, because there's a lot of demand in that space. However, one area for growth, I think, is awareness and adoption of compute coordination and on-chain payments with FVM, which is something we can promote a lot more. So we're looking into creating more documentation and easy ways for people to fork those repos and use them. Also, for IPC, we're building some templates for Solidity contracts to access custom syscalls on IPC. That's something teams have had to hack their way around, but we're hoping to provide a really easy template in the next few weeks, as well as GMP, which is now enabled by IPC for single-hop access to contracts that live on Calibration, and all the capabilities they have, from an IPC subnet. So I think that will really supercharge dApp development there. So yeah, those are all things we're looking forward to. Super excited for nv22 too, and all the things being announced, like event logs and fast finality. I think these are things people have asked for in the past few months, so we want to test them out once it ships to mainnet, and we can start doing that with Calibration today. Super cool, thanks Sarah. You're awesome. With that, we are going to go into our demos and deep dives, starting with Alan for local.storage. Cool. So really quick background: last year we made a really big step towards decentralizing web3.storage by leaning into UCANs and designing and speccing protocols that use them for uploading content-addressed data. And I realize not everyone has a good understanding of what UCANs are yet.
But if you think about UCANs as being cryptographic key pairs, decentralized identity, decentralized authorization, verifiable signed requests and responses, and, most importantly, permission delegations, giving someone else the ability to do something without going through any central servers, you're in the realm of what UCAN is. So we added that to web3.storage and shipped it in November last year. But the service in and of itself is still backed by centralized cloud services. So the task now for us is to transition it into a decentralized hot storage network, hopefully using all of the good work we've already done as a base. So what is local.storage? local.storage is a spike just to see if it's even possible to run the entirety of the web3.storage stack locally, or, more specifically, as a node on a distributed network of storage nodes. So this is basically an experiment to see if that's possible with what we have right now. So, okay, demo, let's do a demo. Just one second before I delve in. One thing we did to make this a little bit easier, while we were doing the UCAN work, was to separate our infra code from our implementation code and publish the implementation code to npm. So really, if you wanna build and run another implementation, all you have to do is install the library and then provide the dependencies to talk to whatever backend you happen to be using. So we are using, you know, DynamoDB for a lot of storage stuff; if you wanna use something different, you should quite easily be able to do that. So anyway, should be easy, right? Let's find out. Let's close this up and have a look. There we go. So onto the demo, wish me luck. This is hopefully gonna work. So there's this repo called local.storage in the new W3S GitHub project. I think there's a link on the slides, but I can show it later if you want.
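Alan's one-line summary of UCANs — signed statements that delegate an ability to someone, checkable by anyone without contacting a central server — can be sketched with a plain Ed25519 key pair. This is only an illustration of the idea: real UCANs are a standardized token format with DIDs, expirations, and proof chains, and the payload shape below is invented for the example.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issuer's key pair; its public key is what verifiers trust.
const issuer = generateKeyPairSync("ed25519");

// A "delegation": the issuer signs a statement granting `ability` to
// `audienceDid`. Anyone holding the issuer's public key can verify it
// offline — no central authorization server involved.
function delegate(ability: string, audienceDid: string) {
  const payload = JSON.stringify({ aud: audienceDid, can: ability });
  const sig = sign(null, Buffer.from(payload), issuer.privateKey);
  return { payload, sig };
}

// Check that the delegation really was signed by the issuer and untampered.
function isValid(d: { payload: string; sig: Buffer }): boolean {
  return verify(null, Buffer.from(d.payload), issuer.publicKey, d.sig);
}
```

The key property is that the check is purely cryptographic, which is what lets a space owner hand out upload rights peer-to-peer.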
Once you've cloned it and installed dependencies, you can just start up your own local.storage stack, and it will tell you its decentralized identifier, which is how you address the system, and it will tell you that it's running on port 3000. Which is great, because then we can just go to localhost:3000 and verify that very quickly at its version endpoint, which tells you that it's running, and then you'll see a little log here, so you know that it's running. Now on this side, I'm gonna be using the CLI tool we have for web3.storage, which allows you to upload stuff, create spaces, and things like that. But what I've done is set some environment variables so that it's gonna talk to this local version of web3.storage that we're running. So what I'm gonna do is create a space first, which is just a place where you register your uploads, like this, and I'm just gonna call it test. Because a space is basically a private key pair, it gives you that key pair. I just need to export that and move it somewhere safe so I can recover if necessary. It asks me for an email address, and this doesn't actually send an email yet, but it could quite easily send one just to verify that I'm a real person; it's a really pared-down version of email verification. But as you can see now, we've created a space. This is it, and I'm ready to do some uploading now. So I'm in my favorite folder, the gifs folder, and in this folder I have lots of cool things. Where did that go? Did I get a window there? Don't know where it's gone. I can't show you, anyway. I'm gonna just upload a cool picture, and I can do that using w3 up. I've got loads of cool things in here. I'm gonna put "yes, this is dog" in there, because it's a good one. Yes, there's a dog. There we go.
And with an extra option, it will also tell me the CID of the CAR file that went in, as well as the piece CID, which it calculates on the client, as well as the CID of the content itself. And you can see over here, we get lots of log lines for the uploading. So essentially what happens is: when you put something to web3.storage, you ask it to store something, it sends you back a URL of where you should put it, and that's the opportunity for us to put stuff to a decentralized network. So we put to here, and then we created some content claims. These are basically pieces of signed information about the content that was created, and you can see these transaction commit things happening here. That's because I've swapped out the whole data store: instead of it being DynamoDB or something, it's actually a DAG. The whole of the data store is a DAG, and it's just stored locally on the filesystem. If you wanted to, you could hook up a Kubo node to this and look at the blocks, and it would be compatible, which is kind of fun. And the data store is called Pail, which is just a key-value store, and everything is rooted from this one root CID. So I've got this little tool here called pail which allows me to list out entries in the Pail, and they're just keys and values. And you can see here that we've got some claims, we've got delegations in here, we've got store things. And I can upload another thing, like the classic, the room guardian, let's do that one. Oh, okay. Oh, it's not running. That's why; I'll run it. There we go. So as long as the service is running, then you can do that. And you can do things like w3 ls to list out the things that you've stored.
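The data store Alan shows (Pail) presents itself as an ordered key/value store, even though underneath every entry lives in a DAG rooted at a single CID. Ignoring the DAG encoding, a toy stand-in for that listing interface might look like this; the `listKeys` name and options are assumptions for illustration, not Pail's actual API.

```typescript
// Toy stand-in for an ordered key/value listing over a Pail-like store.
// A plain Map replaces the CID-rooted DAG that the real Pail uses.
function listKeys(
  entries: Map<string, string>,
  opts: { prefix?: string; gt?: string; lt?: string } = {}
): string[] {
  return [...entries.keys()]
    .filter((k) => (opts.prefix ? k.startsWith(opts.prefix) : true))
    .filter((k) => (opts.gt ? k > opts.gt : true))   // exclusive lower bound
    .filter((k) => (opts.lt ? k < opts.lt : true))   // exclusive upper bound
    .sort();
}
```

Because keys are ordered, prefix scans and range scans fall out naturally, which is the LevelDB-style behavior demonstrated next.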
These are the two things that I did, but if we use this pail tool, you can actually give it a prefix. So I can say a prefix of "upload" and just list out the keys that begin with "upload". It's kind of like LevelDB, so you can also give it bounds, like greater than this key and less than another key. Anyway, so it's kind of cool that it's all DAG-based. But the other cool thing is it's also built with this thing called local freeway, which you can also start up. Local freeway uses content claims that are published to the server to figure out what bytes it needs to serve for a particular piece of data. So if we take our first thing, "yes, this is dog", we'll take that IPFS CID and then head on over here to port 9000. Local freeway should be able to serve this; it's pretty quick because it's coming straight from local storage. And you can look at the logs here and see that it's getting blobs, it's getting byte ranges from the blobs' CAR files. So what it's doing is looking at the CID it was asked for, figuring out which CAR it's in, and then getting the block from the CAR by requesting the specific bytes that will satisfy that request, and then serving it to you. And that's basically how the web3.storage gateway, w3s.link, and the NFT.Storage gateway work today. But this is just an example of how this can all work, and it can all be backed by a DAG. And then one other cool thing I was thinking of: if it's backed by a DAG, and other nodes are also backed by a DAG, then why don't we just allow them to talk together and use the same DAG? That could be a thing.
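The lookup freeway performs — CID in, CAR file plus byte range out — can be sketched as a simple index query. The `CarLocation` shape here is a hypothetical stand-in for what the published content claims encode, not the actual freeway or content-claims schema.

```typescript
// Hypothetical record describing where a block's bytes live: which CAR
// file, and at what offset/length within it.
type CarLocation = { car: string; offset: number; length: number };

// Resolve a requested CID to the byte range that satisfies it. The caller
// would then issue an HTTP Range request for bytes=offset..offset+length-1.
function locateBlock(index: Map<string, CarLocation>, cid: string): CarLocation {
  const loc = index.get(cid);
  if (!loc) throw new Error(`no content claim found for ${cid}`);
  return loc;
}
```

The point of the indirection is that the gateway never needs the whole CAR: a claims index plus range reads is enough to serve any individual block.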
I don't know if we should do that yet, it's just an idea, but we could use Merkle clocks for that. We already have an implementation of Merkle clocks; Merkle clocks are used in IPFS Cluster for the data store, and we have a UCAN-based implementation. And the cool thing with UCAN-based Merkle clocks is that, because UCANs are all about delegating permissions, you can effectively delegate to people the ability to change the DAG, or not, which is really cool. Anyway, I've probably gone way over on time, but this has been a fun demo, and I hope you enjoyed it. That was awesome. Thank you so much, Alan. This is super cool. Where should people go to find this and maybe run it themselves and play around with it? I will put the link in the chat. Cool, yeah. Thank you, thank you. Okay, we have one more demo. Feel free to drop if you have to go, but we're gonna finish up for the recording. James, would you mind playing our video from Shashank about eStore? Hello, everyone. I am Shashank, and I am here to give a deep dive into my project, eStore. Our vision with eStore is to mainstream on-chain decentralized storage for public use, so as to become a perfect service for a customer who doesn't want to trust centralized services like Google Drive, etc. with their files, and to provide a UX such that shifting people from Google Drive, etc. to our eStore is completely frictionless. There are evident problems with centralized storage, to which Filecoin is a solution: centralized storage is at the mercy of its owners, and there have been evident cases of data leaks, etc. So that is quite evident. Moreover, there are many issues with currently available solutions to store data on Filecoin as well. Most of them are centralized or go through centralized gateways, and as soon as centralization comes into the picture, privacy is compromised. Moreover, the experience is also not that good.
For example, one instance of centralization is that most currently available providers offer retrieval through pinning on IPFS, not directly from the files that are stored on Filecoin. So that's also an issue; that's another centralized component coming into the picture. One more issue is that there are currently no on-chain solutions available on FVM that store files directly on Filecoin through FVM. Moreover, if someone wants to store files independently, they would have to have DataCap or FIL tokens to do that. And one more issue that comes with Filecoin is that deals are non-perpetual. So for a person coming from web2 looking for a solution where they can store a file and keep it for as long as they want, that's not the case in Filecoin. Yeah, so that's all, and we aim to solve these. First of all, we are a completely decentralized, on-chain solution. By that, I mean that users can come to our dapp to store and manage their files; on the other hand, they can also directly interact with our smart contracts, which have been deployed and also open-sourced, if they don't want to trust our interface. Next, we are providing support for 10-plus chains on eStore, so users can store files on Filecoin from any of those chains. This enables people who don't hold FIL to also store files on Filecoin and manage them through our UX. And the third thing, the easy-to-use interface. As I said, we aim to provide an easy user experience for people to store their files on Filecoin, by which I mean, for example, the renewal plan: as soon as deals are about to expire, there will be a one-click option to renew those deals and keep the files in perpetuity, so as to provide the least amount of friction for people shifting to our services. Last, and an important thing: we have open-sourced all the smart contracts on which eStore runs.
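The one-click renewal flow eStore describes amounts to watching deal end epochs and flagging deals that fall inside a renewal window, so they can be re-proposed before they lapse. A hedged sketch of that selection logic, with an invented `Deal` shape rather than eStore's actual contract interface:

```typescript
// Invented deal record: on Filecoin, a storage deal runs until an end epoch,
// after which the data is no longer guaranteed unless the deal is renewed.
type Deal = { id: number; endEpoch: number };

// Return deals that are still active but will expire within `windowEpochs`
// of the current chain epoch — these are the candidates for renewal.
function dealsNeedingRenewal(deals: Deal[], currentEpoch: number, windowEpochs: number): Deal[] {
  return deals.filter(
    (d) => d.endEpoch > currentEpoch && d.endEpoch - currentEpoch <= windowEpochs
  );
}
```

A client (or a contract triggered by a user's one click) would run this check periodically and re-propose each flagged deal, which is what makes storage feel perpetual to a web2 user.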
This makes it available for builders to build cross-chain dapps and storage dapps on Filecoin through smart contracts, which I believe is the core functionality of FVM. People can use FVM for many purposes, but the difference between FVM and other chains is that it provides compatibility with the underlying Filecoin chain. And also, through open source, we enable transparency and trust in our app. Currently, we have launched our app on testnet, and it's live; you can check it here and give feedback. It's working on testnet because I contacted a miner on testnet, and we have been providing the service there, but mainnet requires a lot of work, mainly because mainnet requires aggregating small deals of a few MBs. We have also been actively working with the Filecoin team towards an on-chain, trustless way of aggregation, so as to remove blockers and make our app feasible on mainnet as soon as possible. I have been working on some ideas as well, which I'll keep working on, and let's hope that we're able to remove those blockers and come to mainnet as soon as possible. So I would love it if someone goes to testnet and tests our app, and I would love any feedback, which you can give me on Slack, et cetera. Woo-hoo, thank you so much, Shashank. And thanks for staying up, I know it's super late for you. I know we're super over time. So thanks everyone so much for the awesome gathering today, and see you all again next month in late April, when we'll have lots more amazing launches to share with folks. Thanks for the phenomenal demos, and everyone have a great rest of your March. Woo-hoo, cheers friends.