As a quick reminder, if you're here for the first time, the PL EngRes Working Group is a collection of amazing teams across the PL network, helping drive breakthroughs in computing technology to push humanity forward. We work across a ton of awesome projects that have been spawned from PL over the years: open source communities like IPFS, libp2p, and Filecoin, but also many more, like drand, multiformats, and others. Our mission is to drive breakthroughs in protocol utility and capability, scale network-native research and development, and help steward and grow these open source projects and networks. We have a ton of teams across this working group focused on different areas of improving these protocols and ecosystems, and these are the four main things we're focused on for the year: first, stewarding critical systems; growing the overall network; making sure that we have a robust storage and retrieval system across IPFS and Filecoin; and bringing compute to Filecoin state and data so people can build awesome things on Filecoin.

A quick view into some of the breakthroughs we're getting close to. This is one of our last all hands of the year, since we hold these monthly, and we've done a ton of things, from shipping the Filecoin Virtual Machine to a lot of work around compute. We have a lot of new retrieval tools; we're gonna hear a little bit more about Caboose later today. We've seen a lot of sealing improvements for adding new capacity to Filecoin, and lots of fundamental protocol upgrades and network versions along the way. Of the things that remain, we have some exciting work happening around IPC, helping L2s boot on top of Filecoin. We have some important retrieval work happening, both on the CDN side and the retrieval checking side, some really useful work on the storage capacity and data onboarding side that we're gonna get a deep dive on later today, and then some important protocol improvements in Filecoin landing through the end of the year. And I'll pass it off to Lauren to give us a deep dive. We're doing preliminary grading of our OKRs; we know we still have a little more time before we fully grade these at the end of the quarter, but tell us where we are on critical systems.

Thanks Molly. On critical systems, improving the IPFS gateway with error codes is complete: go try your favorite error and get back a proper error page. Retries are still in the works. On the three FIPs for the Filecoin economy, all three are in progress: Direct FIL+, now renamed Direct Data Onboarding, and gas lanes both have FIP drafts, and there's ongoing work on the batch balancer. The five community bootstrap nodes item is complete, as is the Filecoin chain robustness work. Steve?

Great, yeah. For hyperscaling and accelerating accounts and teams on the stack: CryptoEconLab is on track. They have two paying clients currently, the second one being drand, so congrats to them, that's great to see. And on the drand front, they've landed two customers, including Proof of Play and SSV.Network, they're finalizing their growth plan and roadmap, largely with those CryptoEconLab inputs, and they've even been accepted into an accelerator program here in the Pacific Northwest. So yeah, a good bonus item for them. On the Helia item, we kind of parked this one. We're not gonna meet the original goal of being able to author content from the browser and retrieve it in other places without relying on preload nodes or pinning services.
We wanted to divert more energy towards reliable retrieval from the browser rather than the authoring story, and to still give an authoring story, you're gonna use pinning services. So that's why this one isn't fully met. On the pinning service side, we've gone through all the pinning services and actually haven't found any that work effectively from the browser, although we are in close communication with Scaleway and they're making updates to support this. So it looks like we'll be on track to hit that adjusted goal by the end of the month, but this was a significant goalpost change, which is why we're marking this as orange. Thanks, how's it going?

Yeah, thanks. Matthew's in transit today, so I'm jumping in for his items. In terms of onboarding third-party integration partners for Project Motion: the Motion alpha is on track for the end of the quarter, but I think third-party integrations might be coming a little after that. So yellow, TBD, we'll see. The unsealing fixes are on the 1.23.4 train, so they've landed and are working their way into a release near you, but a little more work is required on the redundant WindowPoSt testing and improvements that we'd like to see land before we call this green. On the Saturn side, we're not on track to hit our goal by the end of Q3, aka in the next two weeks; we're still working on getting our prod traffic flowing through our new Saturn nodes, but there are some new improvements on tracing to share in this call that can help us get back to that goal, so we're tentatively hopeful and pushing this back to no later than lab week. And finally, on the .storage side, we've already got everything needed around time-to-deal for future data and a strong Spade integration, but w3up is still rolling out in terms of getting historical data onboarded back into the system, so we have a little more work to do there before we can fully call this green.

Hi, Sue. Hi, everyone. On upgrading Filecoin with new L2 capabilities, starting with the FVM: I'll dive into more details during our spotlight. The FVM just marked its six-month anniversary last week, and we have great milestones. On the first line, we reached all the goals (some of them we're missing slightly, but we still have two weeks left). Rather than the 15 million TVL we targeted, we are now around 22 million TVL on DeFiLlama, greatly exceeding our goal. In terms of wallets, we've reached over 630,000 wallets as of this morning. And in terms of unique contracts, we are almost at 2,000 unique contracts, which is slightly below the goal we had set. On new FVM capabilities, we are below target on aggregators; it was an ambitious goal. We have one, Lighthouse, working very well right now, and we're in conversations with multiple aggregators to minimize dependency on any single aggregator. In terms of storage deals, we are at almost 2,000 storage deals as of today. And looking at the technical platform side, all of our epics for foundational changes and related FIPs were deployed in time for the last network upgrade. Kudos to our engineering team; it was a very tight deadline. We are on track in Q4 to bring non-EVM runtimes. There's more on this: please come find us in Iceland for the FVM and runtime track, where you'll have a chance to learn a lot more about what we're planning in the coming quarters and join the discussion. (There's also a tiny sketch of talking to the FVM's Ethereum-style RPC right after this update.)
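To give a concrete sense of what "the FVM as an EVM-compatible platform" means for builders, here's a minimal sketch in Go that talks to Filecoin's Ethereum-style JSON-RPC with the standard go-ethereum client. The endpoint URL is an assumption (a public Glif node at the time of writing), not something stated in the update above; any FEVM-enabled node exposing the Ethereum API should behave the same way.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Assumed public Filecoin JSON-RPC endpoint; substitute your own node.
	client, err := ethclient.Dial("https://api.node.glif.io/rpc/v1")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := context.Background()

	// Filecoin mainnet's EVM chain ID is 314.
	chainID, err := client.ChainID(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("chain ID:", chainID)

	// Query the FIL balance of an illustrative 0x-style address,
	// exactly as you would on any other EVM chain.
	addr := common.HexToAddress("0x0000000000000000000000000000000000000000")
	bal, err := client.BalanceAt(ctx, addr, nil) // nil means latest block
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("balance (attoFIL):", bal)
}
```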
On the IPC front, which Molly also touched on earlier, lots of exciting things are happening. The reason we're orange for the Q4 GA is that we prioritized some of the DX developments, moving them from M3 to M2.5. It was a conscious decision that caused delays in certain aspects of the GA implementation. On the other hand, for early users we're in conversations with lots of teams and getting feedback from them, so we're hoping that goal will be reached by the end of the quarter or early Q4. And lastly, the CUT testnet is deployed. We didn't reach the growth trajectory for users, mainly because the technical work of moving to Go was prioritized over getting users; that was a deliberate prioritization. However, we're ending the quarter with over 500 executions, though under target on total unique external users.

Awesome. Well, a lot of great progress, and we'll finish grading these next time, once we've officially hit the end of the quarter. And with that, I'll pass off to IPFS.

Yeah, great. So with IPFS, we're working to make the web more peer-to-peer with content addressing, so content can be verified independent of the provider or the transport method. So yeah, looking at our metrics here, any major call-outs you want to make? Yeah, there are. We have tracking items to expand some of this, but we're looking at the public DHT, which as mentioned earlier we're now calling Amino (we'll use that word going forward): its size, some of our GitHub activity, and also the latency across different content routing systems like Amino and cid.contact.

Go to the next slide. In terms of updates, there's a new Kubo release, or RC I should say, that they'll be shipping in the next day or two, with the release itself happening next week. The Routing V1 HTTP delegated routing endpoint that's been getting specced out over the last number of months, adding extra aspects of delegated routing like peer routing, et cetera, has now landed in Kubo after going through the spec process. There was research done by ProbeLab on how we could reduce secondary DHT lookups, and those changes are now making their way into production; you'll be able to track the latency improvements from that with the link there. The Trustless Gateway API, which has been getting a lot of work across multiple teams, is now being exposed in Kubo using the new libp2p+HTTP functionality. That's an experimental feature flag in go-libp2p, and now you can use it within Kubo as an experimental feature as well (a minimal sketch of what a trustless gateway request looks like follows at the end of this update). And maybe a call-out for folks: mplex is deprecated. We're no longer enabling it by default, and future releases will remove it entirely, which should be good for the ecosystem as a whole. The final release, like I said, will be next week. More to come about IPFS Companion in the spotlight, so I won't say anything there. Helia has done work to allow remote pinning support; as I mentioned, we're actually not finding great providers that work from within the browser, but the code to verify and test this is now in place. We continue to work on more onboarding examples and on getting into the cadence of producing monthly project reports, looking at who our consumers are, new projects that have been emerging, et cetera; we published the second one of those. And behind the scenes, we're cleaning up IPNS: V2-only records have now landed in both the Go and JS implementations.
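As promised above, here's what the Trustless Gateway idea looks like in practice: fetch raw blocks over plain HTTP and verify them yourself, so the gateway never has to be trusted. This is a minimal sketch in Go; the gateway host and CID are illustrative (the CID is the well-known empty UnixFS directory), and the Accept header follows the Trustless Gateway spec.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"

	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

func main() {
	// Illustrative CID and gateway; any trustless gateway works the same way.
	c, err := cid.Decode("QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn")
	if err != nil {
		log.Fatal(err)
	}

	req, _ := http.NewRequest("GET", "https://ipfs.io/ipfs/"+c.String(), nil)
	// Ask for the raw block rather than a deserialized response.
	req.Header.Set("Accept", "application/vnd.ipld.raw")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	block, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// "Trustless": re-hash the bytes with the CID's own hash function
	// and check that they match the CID's multihash.
	expected := c.Hash()
	decoded, err := multihash.Decode(expected)
	if err != nil {
		log.Fatal(err)
	}
	actual, err := multihash.Sum(block, decoded.Code, -1)
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(actual, expected) {
		log.Fatal("block does not match CID")
	}
	fmt.Printf("verified %d bytes for %s\n", len(block), c)
}
```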
So, things that are coming: this group of people is actively working on how we diversify our funding for the future. We're working on those funding sources and talking with the community about that, so more to come there. Like I said, we're putting a lot of our effort into reliable retrieval from the browser, including having Helia be able to fall back to the HTTP trustless gateway when other peer-to-peer methods aren't working, and with that, a way to continuously monitor how we're doing on reliability. Work on the gateway conformance suite is also coming, so that we have good dashboards showing how different implementations and gateway instances are actually performing in the wild. We're also doing more cleanup of things that haven't been well specified or haven't had corresponding conformance tests, next up being UnixFS, so more to report there. And we're treating the ipfs.io gateway as a proper public utility: getting a landing page in place for it, and finding other ways to further reduce the cost of that fleet through right-sizing and request throttling. More to share, hopefully, in next month's all hands. There are some things happening on the HTTP side as well, but those will get covered in a spotlight too. Thanks.

On to libp2p. Yeah, so the libp2p project is dedicated to a set of specifications for a modular network library, with implementations in many languages, so that it's usable in as many computing environments as possible. Updates on the KPIs for the community: we're continuing to grow slowly. Numbers of contributors are slowly picking up across all implementations, even though we're seeing mixed counts on individual projects, some going up, some going down. Our statistics around the number of peers in the networks have stabilized since the changes that happened earlier this year in May, and we're hoping to see those numbers tick upward as we engage further in our KPI refinement. Next slide, please. A couple of really important call-outs here. libp2p+HTTP has landed in go-libp2p, which brings HTTP semantics as well as an HTTP transport to that library. There's an associated blog post that just went out this morning, I believe, and more to come as other implementations adopt it (a rough sketch of the idea follows below). Other really important things: the rust-libp2p stable release with QUIC support went out, and an Ethereum beacon chain client landed experimental support for the QUIC transport. There were also some changes to crypto/tls usage in quic-go that landed, with an associated blog post. And coming up: WebRTC private-to-private is being worked on right now, so that we can further elevate the browser to being a first-class citizen in peer-to-peer networks, with support in other implementations such as Go, and also AutoNAT v2. Hopefully we'll have more exciting things in the next all hands. I'll hand it over to Filecoin next.
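For a flavor of the libp2p+HTTP idea just mentioned (plain HTTP handlers addressable by libp2p protocol IDs, servable over either a vanilla HTTP transport or libp2p streams), here's a rough sketch in Go. The names are from memory of the experimental libp2phttp package and may not match the current API exactly; treat this as a sketch of the shape, not a reference.

```go
package main

import (
	"log"
	"net/http"

	libp2phttp "github.com/libp2p/go-libp2p/p2p/http"
	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	// Serve a plain HTTP handler under a libp2p protocol ID. The same
	// handler could also be mounted on top of libp2p streams by giving
	// the Host a StreamHost.
	server := libp2phttp.Host{
		// Listen on a normal TCP/HTTP multiaddr (no TLS, hence "insecure").
		InsecureAllowHTTP: true,
		ListenAddrs:       []ma.Multiaddr{ma.StringCast("/ip4/127.0.0.1/tcp/8080/http")},
	}

	// Handlers are registered under protocol IDs, which peers can
	// discover via the well-known protocol map.
	server.SetHTTPHandler("/hello/1", http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello over libp2p+HTTP"))
		}))

	log.Fatal(server.Serve())
}
```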
Thank you, Molly. Filecoin: we're here to be a crypto-backed, decentralized, efficient, robust foundation for humanity's information. Next slide. Every time I say the Filecoin mission, what "decentralized, robust foundation" means is really clear to me from a tech perspective, but sometimes I pause and think: what is "humanity's information"? I had a question mark on that, and at the FIL Dev Summit in Singapore there was actually a great talk refreshing Filecoin's mission that explained what we mean by it. Obviously we want to support public goods and data: all the scientific papers, historical records, and so forth. But humanity's information is really the union of all of humanity's data. My cat's photo is as valuable as some piece of history somewhere, in my humble opinion. Any data generated by humanity is humanity's information, and Filecoin is the decentralized storage network for that. The talk is quite refreshing; I have a link in this slide and would recommend everyone take a look. Next slide.

Quickly, on to Filecoin KPIs: we're still a little over 10 EiB of raw byte power on the network. We didn't store more data on Filecoin compared to a month ago, but our data onboarding is not slowing down; we're still onboarding around 1.5 EiB. The flat total is mostly because some old data stored on Filecoin from the Slingshot era of the early network is expiring, which is why we're not seeing total data stored increase, even though we're not slowing down on storing new data. The next thing, as Sue already mentioned, is very exciting: we hit over 10 million FIL deposited into FVM-based smart contracts just last week, and we celebrated it live in Singapore. It's amazing that we achieved that in six months; kudos to all the DeFi teams, and to Ashrams, Longfei, Sara, Matt, and everyone involved in this whole effort.

Next slide. Some highlights: Lotus v1.23.4 RC1 is out. For anyone who was facing unsealing bugs, those are fixed, and you can actually unseal your data now. We also finally shipped FRC-0051, the EC consistent broadcast FRC, a piece of work done by ConsensusLab over months that we've now implemented. Before, if you had more than about 20% of the storage power of the network, you could potentially attack the network; with this change, that threshold is raised to over 44%, which is a huge improvement. The code needs to be carefully tested (it's a little bit scary), but thanks to Ayush for implementing this step by step, and to the ConsensusLab team for carefully reviewing the code for us. And I just learned the Australian Cardiac Institute is actually storing their data on Filecoin, which is quite awesome. Humanity's information, one more on Filecoin. That's cool.

The FIL Dev Summit in Singapore has wrapped up, and we're going to Iceland next week; Molly is gonna do a recap from Singapore. I'm super looking forward to Iceland to continue and dig deeper. (Molly looks shocked, but Iceland is gonna be awesome.) We're gonna have deeper conversations on solutions, run workshops, and start talking about retrieval incentives, more client data onboarding tooling, and finality, scalability, and computation. So if you're joining us in Iceland, stay tuned; if you're not, we'll also post publicly on the community forum, Slack, Twitter, and so on for you to follow along. Recordings from the FIL Dev Summit in Singapore are coming up too, so stay tuned. And NV21, Watermelon, is coming.
We have the scope finalized; Ayush is gonna give a spotlight on it later. And I think someone already mentioned that we almost launched the Filecoin REST API (not Rust API, REST API, sorry about the typo), which is also very exciting. That being said: a lot of fun in Singapore, but also a lot of hard work that exposed a lot of challenges and opportunities for Filecoin to evolve. High-level themes include fast finality and faster block times so that we can get fast retrieval; a better sealing and data onboarding pipeline; building more allocators for Filecoin Plus datacap so we can get more data onto the network; reducing Filecoin Plus abuse; a lot of interesting conversation about flexible sector commitments and updating sealed sector content, so you seal a sector only once but can put new data into it; very interesting conversations about what kinds of deal abstractions Filecoin needs; how we can build better storage on-ramps; how we can improve the client onboarding experience; and how we can build a sound Filecoin economy. All of those are challenges, but also opportunities for us. That's it.

Awesome, now on to our spotlights. Please, everyone, keep to one minute so we can spend our time on our deep dives. First up: Companion.

Hey everyone, I'm presenting IPFS Companion. IPFS Companion, for those who don't know, is the best way to interact with your local Kubo node (basically any IPFS implementation would work). It's a browser extension that sees 60,000 monthly active users, and it has recently been transitioned to MV3. MV3 is short for Manifest V3, the new standard for browser extensions; the original proposal goes all the way back to 2018, and it landed as an issue for us in January 2019. Now the transition is complete: the release is going out today, and hopefully it will be approved by Google and the other web stores so that it ships automatically to your browsers as soon as possible. If you don't already have Companion installed, please go install it, and report any issues you find. The benefits of this new release are improved performance and responsiveness (one of the issues we've been hearing about from the community; the new way of defining rules makes it faster), better resource utilization (the service worker in the background actually goes to sleep when it's not in use, so it doesn't drain your battery while running in the background), and new enhanced metrics. Eventually this MV3 release will open the door to extension support on phones; Firefox is already testing that out in beta, so eventually we'll have access to IPFS implementations in browsers on phones, which would be nice. For the future, we're working towards embedding Helia in IPFS Companion, so watch out for more updates. That's it.

Awesome, thank you. Next up. Hi. So, very quickly, I'm going to talk about a project we did with Credo Network. They were going through a transition in terms of their utility and tokenomics, and they reached out to us to help them redefine their tokenomics. We essentially did two main things. First, we proposed a new set of tokenomic mechanisms, in particular two new fee models and a staking model to support the introduction of a proof-of-stake system. And second, we created a bespoke model, which we named MechaCredo, that allows us to test, analyze, and fine-tune the parameters of their economy.
And we did a set of analyses with this model. I can't go into a lot of detail here, but if you're curious and want to learn more about this project, I'll leave the link to the Medium posts and to the full set of reports, which go quite deep into each of these things. And if you're in need of this type of analysis and support, reach out to me or to CryptoEconLab in general. Thank you.

Awesome, a great example of helping boot really well-tuned economies that can be sustainable into the future. Exciting for Credo. Ayush, tell us about NV21.

Hi folks, a quick NV21 update covering essentially what's in the upgrade, what's not, and when to expect it. Stuff that's in NV21: this will be our first upgrade in about six months, which is the longest we've gone between upgrades in a little while. The FilCrypto team has Synthetic PoRep, a lighter-weight proof of replication that simplifies the overall sealing process for storage providers; that's expected to go live in this network upgrade and has been a few months in the works. The FVM has four changes that improve and harden the Filecoin Virtual Machine. A lot of this work falls into the category of things that were important for security reasons but were descoped from the original FEVM upgrade earlier this year, because it was safe enough to ship then but we still wanted to go back and robustify it. A lot of it also paves the path towards so-called native actors, or native Wasm actors, which are on the FVM team's roadmap. We have a FIP that originated outside of EngRes, which is exciting, that allows storage providers to move partitions between deadlines; it allows for greater configurability in storage provider operations. Plus some protocol bug fixes and improvements. Things that are not in it: the Direct Data Onboarding FIP that was in the works has been descoped from this upgrade, as has the switch to the new drand network, both of which we're now expecting in NV22. And dates: calibration testnet upgrades October 10th, mainnet upgrade November 7th. I do want to call out a lot of risk here: that November 7th date is very close to lab week, and US Thanksgiving is a couple of weeks later, which makes planning difficult. We need to be very careful about slippage, especially given the travel to Iceland next week and Silveig coming up as well. The FilDev team is definitely working to make sure all of our planning accommodates that, because slippage would be difficult to manage here. So stay tuned for more, and if you're on a team that actively supports network upgrades, whether that's infrastructure or monitoring, we're getting into that window at pace. Thanks.

Awesome. Exciting. Coming soon. Hi, Sue. Let's celebrate the FVM. Yeah, hi everyone. The FVM, as I mentioned earlier, marked its six-month anniversary last week, and here's a tweet about it; feel free to show your love, retweet, comment, like it. Just going over some of the details here: I mentioned FIL deposits earlier in the OKRs; let me touch a little on why we think this is an important milestone for us. 10 million FIL, and as of this morning it's at over 10.5 million, as you can see on the left-hand side from our publicly available Starboard chart.
This metric is important because it shows not only that FVM smart contract deployments are being used and DeFi is growing, but also that our SPs have access to pools of FIL that they can borrow. You can see the FIL borrowed at 7.7 million, almost 8 million; that's a very important metric too. We almost always expect a lag between those two metrics, both a time lag and because our DeFi staking protocols keep some portion of the FIL in reserve before it's borrowed by storage providers. But this is a very important milestone for us, because it shows we're still in our hyper-growth phase, and we're very excited to see how the fall unfolds for us as an FVM platform in DeFi land.

A few other metrics. DeFiLlama TVL has been an important metric we follow, because it compares Filecoin and our platform across all chains. We're ranked 29th (this number shifts; you might see us move between 29 and 31 or 32). Our goal was being top 30, which is a great success, because as you know this ranking also depends on price, which we have no control over or influence on. So despite everything going on, that shows our growth in terms of usage. I also want to touch on our users: almost 2,000 smart contracts deployed, and 200 teams building on the FVM, which is super critical. We have a very high bar for that 200 number: it's not just any project using the FVM, but teams that either got funded by us or PL, or have enough funding to be building serious projects, platforms, and products on the FVM. That also speaks to the usability of our platform. Lastly, as you see on the bottom right, total wallets: this was a metric we felt we needed to improve when we were barely reaching 150K last quarter, and this quarter, thanks also to DeFi improvements and other use cases, we're now over 630,000 wallets. Ayush already touched on the improvements landing on our platform with the coming network upgrade and mentioned the native actors coming. I want to highlight again: come find us in Iceland for the FVM tracks, be part of the conversation and discussion, and share your feedback and your great ideas. There's a lot coming in Q4 and Q1.

Awesome, a great time for all of those 200-plus teams to keep making the FVM platform and runtime better for their needs as well. Will, tell us about the new Rhea tracing.

Sure. This is a cross-team effort. The flame diagram you see is in Honeycomb, and it's actually something that can be generated by any boxo implementation, but the bifrost-gateway one extends the tracing IDs back through Saturn, so that we can also connect it with the sub-spans that come from those Saturn nodes, all the way back. If you go to the next slide, you'll see that we're also getting some of these flame graphs for specific requests in the browser. Every time you access a URL from the Saturn variant of the gateway, you can get timing information about that specific request back, and that's something we can share in-band with any client. You can see here that this retrieval actually came through Bitswap, for instance, and you can see how long it takes for components like Lassie to do their parts of the work. (A minimal sketch of this kind of span propagation follows below.)
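For readers unfamiliar with how this kind of cross-service tracing is typically wired up, here's a minimal sketch using OpenTelemetry for Go. The service and span names are illustrative, not the actual bifrost-gateway or Saturn instrumentation; the point is just how a trace context gets carried across an HTTP hop so sub-spans from the upstream service attach to the same trace.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// fetchBlock issues an upstream request (think gateway -> Saturn node),
// injecting the current trace context into the HTTP headers so the
// upstream service's spans join the same trace.
func fetchBlock(ctx context.Context, url string) (*http.Response, error) {
	tracer := otel.Tracer("example-gateway") // illustrative instrumentation name

	ctx, span := tracer.Start(ctx, "fetch-block")
	defer span.End()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}

	// Propagate trace ID and parent span ID via W3C traceparent headers.
	otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))

	return http.DefaultClient.Do(req)
}

func main() {
	// Use W3C Trace Context propagation (the default global propagator is a no-op).
	otel.SetTextMapPropagator(propagation.TraceContext{})

	resp, err := fetchBlock(context.Background(), "https://example.org/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```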
And so this set of timing breakdowns, along with the more detailed set of labels and the ability to do analysis in Honeycomb, is giving us a lot more visibility into what's happening inside of Saturn, and then provides it as a surface from Saturn to its users. Cool, super useful. And I'm sure this stacks with the Caboose improvements as well, to keep making Saturn and Rhea better. Amin, you wanna tell us what's new in Caboose?

First, I'm just gonna recall what Caboose is, to make sure we're aligned on which Caboose we're talking about. Caboose is a blockstore client, and in Saturn we use it as the primary client for Project Rhea. So eventually, for all the requests that hit ipfs.io, Caboose will be the client talking to Saturn for those requests. One of the core functions of Caboose is optimizing which nodes to select and continuously maintaining an active pool of the best nodes to talk to. The big update here is that we're working on a new adaptive algorithm for Caboose pool management, thanks to some amazing work by Will and Arsh. To briefly explain, there are two big parts to it. First, we're changing the Caboose pool to be dynamic instead of fixed: as long as a node meets the performance criteria we define, it can participate in the pool. That's done and going to production. The second and next step targets how much load hits each node. Previously, we uniformly distributed the CID space amongst the nodes in the pool, which implicitly distributes requests evenly across nodes. We really wanna allow each node to become its true self, so now each node's share of the CID space is weighted by its load capacity. If you look at the diagram on the right, nodes with larger capacity occupy more of the CID space and hence receive more requests (there's a toy sketch of this weighting idea just after the Amino DHT update below). That's putting it in really simple terms; Caboose is really the glue between two very large systems, so there's a lot of work and testing that goes into implementing changes like these. Again, thanks to Will and Arsh. We're expecting these changes to maintain a more optimized node pool and hence really improve retrieval performance. And that's it, thanks.

Awesome. Excited to see how that performs, using all of these tracing metrics to see how it goes. Gui, tell us more about the Amino DHT. Yeah, sure. I'm happy to announce that the public IPFS DHT finally has a name: Amino. And like an amino acid in biology, we believe the Amino DHT will serve as a building block for larger and stronger systems. At ProbeLab, we've also been working on modernizing the Go DHT implementation. The refactor solves multiple long-standing issues and enables easier participation from new contributors. It also makes impactful future changes possible, such as a massive optimization of the reprovide process, as well as the reader privacy upgrade, also known as double hashing. Dennis built a new bootstrap monitoring tool, Boomo, and Boomo has already uncovered bootstrap failures, both in the Amino DHT and in Filecoin, as well as a bug in go-libp2p. It's been out for two weeks and we've already got great results, so thanks to Dennis. ProbeLab also built a new DHT bootstrapper based on the refactored Go DHT code, which is good because it will offer more diversity to the Amino DHT bootstrappers, which have been facing challenges lately. So that's it for ProbeLab.
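To make the capacity-weighted pool idea from the Caboose update concrete, here's a toy sketch of weighted rendezvous (highest-random-weight) hashing in Go. This illustrates the general technique of giving bigger nodes a larger share of the CID space; it is not Caboose's actual implementation, and the node names and capacities are made up.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math"
)

// node is a pool member with a relative capacity weight.
type node struct {
	id       string
	capacity float64 // larger capacity => larger share of the CID space
}

// hash01 maps (nodeID, key) to a pseudo-random float strictly in (0, 1).
func hash01(nodeID, key string) float64 {
	h := fnv.New64a()
	h.Write([]byte(nodeID))
	h.Write([]byte{0})
	h.Write([]byte(key))
	// +1 avoids a zero value, keeping the result strictly inside (0, 1).
	return (float64(h.Sum64()%math.MaxInt32) + 1) / (math.MaxInt32 + 2)
}

// pick implements weighted rendezvous hashing: every node scores the key
// and the highest score wins. A node with 2x capacity receives roughly
// 2x the keys, and adding or removing a node only remaps its own share.
func pick(nodes []node, key string) node {
	best, bestScore := nodes[0], math.Inf(-1)
	for _, n := range nodes {
		score := -n.capacity / math.Log(hash01(n.id, key))
		if score > bestScore {
			best, bestScore = n, score
		}
	}
	return best
}

func main() {
	pool := []node{{"saturn-a", 1}, {"saturn-b", 1}, {"saturn-c", 3}}
	counts := map[string]int{}
	for i := 0; i < 100000; i++ {
		cid := fmt.Sprintf("fake-cid-%d", i) // stand-in for real CIDs
		counts[pick(pool, cid).id]++
	}
	fmt.Println(counts) // saturn-c should get ~3x the keys of a or b
}
```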
And I guess next is the deep dives. Awesome, we have two deep dives, starting with Project Motion. Awesome, hey everyone, I'm Helene. Today I'm very excited to talk about Project Motion, a cross-team effort that hopes to simplify the process of data onboarding. Next slide. Before I dive into Motion, let's start with why we built it in the first place. As we know, getting your large data onto the Filecoin network is not a trivial task; it has many steps, from data extraction to data preparation, deal making, transfer, and data management. So you may wonder: how can we simplify it? Motion is one of the efforts to do just that. Next: what exactly is Motion? It's a software integration and orchestration layer between traditional data storage solutions and Filecoin. Essentially, it aims to abstract away the complexity involved in the operations you just saw. Its API can be used to store and retrieve data from Filecoin and to check and verify the status of data being stored. Next, I want to briefly touch on who Motion is for. The primary user of Motion is the independent software vendor (ISV), who acts as the middle layer between large data clients and storage providers. These ISVs may already offer backup solutions to different clouds or on-prem storage, and are looking to add an additional storage layer on Filecoin to attract more clients. So although Motion doesn't directly touch the client, it benefits clients by providing that simplified solution, and it also benefits storage providers who want to charge a fee and get paid deals from clients. It basically benefits all three personas you see here. Next, why use Motion? Unlike the proprietary alternatives in the market, Motion is completely open source, and it's a one-stop solution that abstracts away the Filecoin-specific internal steps; the knowledge of the Filecoin stack required from you is minimal, and it aims to support large data flows with ease. Next, how does Motion work? Motion operates through what we call a Motion engine, which you can run in a data center or in the cloud; it's deployable via Docker and designed to support large amounts of data. The engine uses existing tools like Singularity, which is a great tool for data preparation, along with its deal-making module, and for the retrieval part we leverage a retrieval library. All in all, it provides that all-in-one interface for you: when an API call is made, Motion packages and sends the data off to storage providers. That's the current design. Next, the work we've done so far. Kudos to the amazing engineers who worked hard on this, like Xin'an, Hanna, Marcy, and Allegra; they all made great contributions. Right now we're on track for our alpha release: we have a Motion test API, an S3 connector that is under ISV testing, and an end-to-end data preparation and deal-making flow that is ready to test. If you're interested, I have a GitHub repo with our full roadmap and upcoming features. We're also set for the beta release, which comes later, around lab week in mid-November, adding features like partial retrieval, robust retrieval, support for larger data uploads, and a plan for payments. (A rough sketch of what talking to a Motion-style API could look like follows below.)
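Since Motion's whole pitch is "one simple API in front of the Filecoin pipeline", here's a rough sketch in Go of what storing a file through a Motion-style HTTP API could look like. The endpoint path, port, and response shape here are hypothetical, chosen for illustration; check the Motion repo for the actual API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Hypothetical Motion engine address and blob endpoint; the real
	// paths and response fields are documented in the Motion repo.
	const motionAPI = "http://localhost:40080/v0/blob"

	data, err := os.ReadFile("backup.tar")
	if err != nil {
		log.Fatal(err)
	}

	// Store: POST the raw bytes; the engine handles preparation,
	// deal making, and transfer to storage providers behind the scenes.
	resp, err := http.Post(motionAPI, "application/octet-stream", bytes.NewReader(data))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		ID string `json:"id"` // handle used later for retrieval and status checks
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("stored blob, id:", out.ID)

	// Retrieval and status would be GETs against the same endpoint,
	// e.g. GET {motionAPI}/{id} and GET {motionAPI}/{id}/status (hypothetical).
}
```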
Next, the call to action. If you're interested in what we've already released publicly, like the roadmap and the testing instructions for the S3 connector, feel free to scan this QR code and check out our public GitHub repo. There's also an ask for the community watching this video: join our private beta program to be first in line for our latest updates. And if you're an SP, we'd love you to participate and give us your feedback. That's all, thank you.

Awesome, thanks so much Helene. Super awesome to see Motion coming together. I'm gonna give a super quick update on the FIL Dev Summit for anyone who wasn't able to join us in Singapore. It was an awesome gathering: we had tracks ranging from core protocol upgrades and evolution, to boosting SP business success, to scaling Filecoin with IPC now and in the future, and many others. Lots of amazing talks, discussions, and workshops to work on new ideas together, some pretty exciting ones. Here's a quick summary of some of the learnings and takeaways from each of the tracks in Singapore.

In the protocol evolution track, there was a lot of talk about fast finality and how it's super important for pretty much everyone who wants to be working with Filecoin today. So, nudge nudge: this is a top priority for many groups as we look into Q4 and our next network upgrade. There were some really interesting designs around a time-stamping chain, which could allow us to have a much faster, lighter-weight PoRep and much faster, lighter-weight retrieval; we have a design we're gonna keep working on in Iceland. Pretty exciting ideas there that could also yield a much faster block time for Filecoin. We talked about deal abstractions and how changing them carries a real cost for everyone who integrates and looks at chain metrics and other things. And we described the concept of "sparkling data": data that doesn't live within a deal but might live within a sector, which is okay, but we want to minimize the sparkling data floating around that's hard to introspect and keep onboarded and useful for folks. We also had lots of interesting ideas about how to make the Filecoin protocol much more lightweight and efficient by decoupling sector commitments from data commitments, maybe not even having deal sectors, really simplifying and reducing the complexity of onboarding a committed-capacity sector and then adding data into existing sectors, so that the whole protocol could be more efficient and streamlined. Some really interesting ideas there; that's just a tease, so feel free to dive into the recap track for an even deeper dive from Nicola.

For scaling data onboarding, we talked a lot about Motion, and there was some initial early feedback and data points coming from the local storage provider community there, plus some really interesting conversations between Singularity and Lighthouse about potential synergies around data onboarding. For (I think this is supposed to be) boosting SP success: some really great action items came out of that, including FIP discussions being proposed around making worker addresses only responsible for block signing, and some new SP implementations and improvements that should make them more interchangeable.
There's also some work the storage teams are gonna take on to change PreCommit1 to make it easier to re-seal unsealed sectors, which will save overhead costs for storage providers, and some good discussion around non-interactive PoRep and the concept of "ice cube" sectors that can be deposited from your ice machine and then loaded into the network at a later time, with some pretty interesting ideas about how that could make sealing-as-a-service and SP bottlenecks a lot more efficient.

In the governance and Fil+ track, we talked about the need for more datacap allocator programs that we can trial, measure, and tune: start depositing datacap to be reallocated by many different systems within the Fil+ program, each tuned for different client needs and different data types, everything that meets our unified definition of all of humanity's information that we want stored on Filecoin, with programs adapted for different constraints. There was also some discussion around the new FIP0001v2 proposal, with changes like having a community guild, maybe some on-chain voting, and maybe ways of measuring R&D investment, so that we can keep evolving the FIP process, especially for things that currently get deadlocked in community debate, to help that process reach an endpoint, or at least be clearer about how our community governance works in those contexts.

Finally, the IPC roadmap track had a lot of excitement for faster block times, and also folks who are excited to use the Mycelium L2 network that the IPC team will be spinning up, so they can take advantage of faster block times there; that was exciting for a number of groups beyond the original IPC subnet builders who also want to take advantage of some of the opportunities there.

And I don't have a slide on this, but a call to action: if you're excited by these things and want to engage, join us in Iceland (fildev.io) for part two of the FIL Dev Summit. We'll be talking about these topics a lot more, and we'll be working to build a community roadmap of what the various teams are going to contribute to make Filecoin stronger, better, and more capable as we look into 2024. So bring your ideas for important upgrades that should be worked on and actively prioritized; on Wednesday afternoon we'll be doing a great session on that. All right, well, thank you all for being a part of this and sharing exciting updates; excited to see a number of you in person next week. Cheers, all.