On to our deep dives. We'll do these a little quickly, but first, Hannah for Data Transfer.

I'm Hannah from Bedrock, and I'm excited to introduce an effort our team is exploring to supercharge the ways we move data around our networks. First of all, I want to help folks understand what we have today. You've probably heard the words Bitswap and GraphSync. I want to talk briefly about what they are and how they're different. Both of these protocols move IPLD data around libp2p networks. The analogy I've been using to help non-programmers understand is this: Bitswap is roughly designed like BitTorrent, while GraphSync is roughly designed like HTTP. That means they shine in different scenarios. Bitswap, like BitTorrent, is good for moving highly distributed content from many peers, where each individual peer might have low bandwidth, like a home computer. GraphSync, like HTTP, works great for downloading data from high-bandwidth servers like storage providers.

The other big difference between the protocols is a historical artifact of how they were built. Bitswap is the bread and butter of IPFS, while GraphSync was written in the course of Filecoin development. This has led to some big differences in the implementations we produce. These differences aren't inherent to the protocols, but they're nonetheless quite significant. go-graphsync supports layers for payments and authorization, while go-bitswap keeps everything free. And not only that, go-graphsync provides multiple layers of control to operators, while go-bitswap has a lot less configurability. In newer situations, like retrieval markets, this has led to a difficult trade-off: it would be nice to reach for either protocol without having to think about what is and isn't supported in terms of payments and authorization. It's a tough trade-off right now.
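To make that design difference concrete, here is a toy Go sketch of the two request styles. This is not the real go-bitswap or go-graphsync API; every name here is illustrative. The point is that a Bitswap-style client only learns a block's links after receiving the block, so it pays at least one round trip per DAG level, while a GraphSync-style selector request lets the server walk the whole DAG in a single exchange.

```go
package main

import "fmt"

// A toy DAG: each "CID" maps to the CIDs it links to.
var dag = map[string][]string{
	"root": {"a", "b"},
	"a":    {"a1", "a2"},
	"b":    {"b1"},
	"a1":   {}, "a2": {}, "b1": {},
}

// bitswapRoundTrips models Bitswap-style fetching: the client discovers
// links level by level, so each level of the DAG costs a round trip
// (this is the optimistic case, with all of a level requested at once).
func bitswapRoundTrips(root string) int {
	trips := 0
	frontier := []string{root}
	for len(frontier) > 0 {
		trips++ // one request/response exchange for this whole level
		var next []string
		for _, c := range frontier {
			next = append(next, dag[c]...)
		}
		frontier = next
	}
	return trips
}

// graphsyncRoundTrips models GraphSync-style fetching: one request
// carries a selector, and the server walks the DAG itself.
func graphsyncRoundTrips(root string) int {
	return 1
}

func main() {
	fmt.Println("bitswap-style round trips:", bitswapRoundTrips("root"))
	fmt.Println("graphsync-style round trips:", graphsyncRoundTrips("root"))
}
```

For this three-level toy DAG the Bitswap-style walk takes three round trips to GraphSync's one, which is exactly the structural head start the next section talks about exploiting.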
Retrieval markets, for example, need multi-peer transfers, but they're also going to need payments eventually. So what do they use? This is what Project Thunder is trying to answer: why not both? We want to make each protocol more powerful and flexible, so it isn't really a choice. I shouldn't have to say: if I build for Filecoin, I use go-graphsync; if I build for IPFS, I'm stuck on Bitswap; or if I use Bitswap, I can't have payments. The AutoRetrieve project you'll hear about next is great for bridging IPFS and Filecoin, but in the long term one shouldn't need to run a server to translate transfer protocols.

And it's not just about making these choices easier. We can actually use one protocol to fill in the gaps in the other. Bitswap sometimes lags behind BitTorrent in performance because BitTorrent starts out with more information about the structure of the data you're downloading. So what if GraphSync could be used to quickly discover that information? How much faster could Bitswap be? These are the kinds of questions we're aiming to answer.

So how are we going to do all this? Well, this is what you're going to get for the five-minute version. No, seriously, I tried to make a super simple architectural guide, and no matter how much I cut it down, the answer will be unsatisfactory unless I take the other deep dives' time, and I'm not going to do that to teammates. Suffice to say, it's complicated. In terms of where we are right now: we have two protocols, and several layers of payments that only work with GraphSync. In our current work, Bedrock is re-architecting the higher-level layers to be fully protocol neutral, while IPFS Stewards are building the hooks in Bitswap to make it possible to support payments. This is complicated, slow work, but you will hopefully see a re-architected go-data-transfer v2. The slide says in a month or so, but I just heard two weeks.
So in two weeks it will be here, but here's a ton more information. You can read the detailed project proposal on the roadmap, read the proposed extensions to Bitswap, and watch a video on how we're re-architecting data transfer, and you can follow progress with the hashtag #data-transfer-interop on FIL Slack. Check the slides to dig into these. I might do a deeper dive for programmers at some point. One last thing: we may not get this work done super soon. These kinds of protocol changes are really hard; they're hard every time we do them. They're long-term investments, and they don't always have super visible immediate wins, but they have very big long-term wins. It's possible our team may need to get reallocated at some point for immediate priorities, but my hope is that we're going to get there, and that we'll invest as an organization in this kind of low-level work to unlock key long-term benefits for our network. That's all.

Awesome, thank you so much, Hannah. On to Will for AutoRetrieve.

So AutoRetrieve, we've mentioned a couple of times. This is one of the stopgaps we're putting in place so that, in the short term, we can make content that's in Filecoin accessible to IPFS and to gateways, and more generally bridge some of the protocol gaps we've got at the moment. It also serves a secondary purpose: it gives us a lot more visibility into the state of retrievals, and lets us work with data programs to help set up the right incentives to encourage storage providers to ramp up their retrieval bandwidth and infrastructure, so they can serve the volume of retrievals we expect to keep growing. So this is running; we recently switched it to a Kubernetes deployment that we can keep running pretty stably. We're working through some ongoing resource management issues so that it's not only running but also serving at high quality.
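At its core, AutoRetrieve (described in more detail in a moment) acts as a block cache between Bitswap requests and Filecoin retrievals. Here is a toy Go sketch of that cache-through flow, with the indexer lookup and the GraphSync fetch from a storage provider stubbed out; the types, fields, and names are my assumptions for illustration, not AutoRetrieve's actual code.

```go
package main

import "fmt"

// Hypothetical stand-ins for the real components: an indexer lookup
// (CID -> storage provider) and a GraphSync fetch from that provider.
type indexer func(cid string) (provider string, ok bool)
type graphsyncFetch func(provider, cid string) ([]byte, bool)

// autoRetrieve models the flow: serve Bitswap requests from a local
// block cache, and on a miss, resolve the CID with the indexer, pull
// the content over GraphSync, and cache it for later requests.
type autoRetrieve struct {
	cache map[string][]byte
	find  indexer
	fetch graphsyncFetch
}

func (a *autoRetrieve) handleBitswapRequest(cid string) ([]byte, bool) {
	if blk, ok := a.cache[cid]; ok {
		return blk, true // cache hit: answer over Bitswap immediately
	}
	provider, ok := a.find(cid)
	if !ok {
		return nil, false // no Filecoin deal known for this CID
	}
	blk, ok := a.fetch(provider, cid)
	if !ok {
		return nil, false
	}
	a.cache[cid] = blk // transient cache, evicted eventually
	return blk, true
}

func main() {
	ar := &autoRetrieve{
		cache: map[string][]byte{},
		find:  func(cid string) (string, bool) { return "f0123", cid == "bafy-example" },
		fetch: func(p, c string) ([]byte, bool) { return []byte("block-data"), true },
	}
	blk, ok := ar.handleBitswapRequest("bafy-example")
	fmt.Println(ok, string(blk))
}
```

The real deployment layers the same shape of logic over libp2p, the network indexer, and storage providers, with a cache on the order of tens to hundreds of gigabytes, as Will describes below.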
You can see some gaps in the success/failure rate where it currently runs out of memory. All this work is thanks to Elijah on the Outercore team and Kyle on the Bedrock team. More generally, what this is going to mean is that when you go to ipfs.io, the request goes back to the big IPFS node that is that gateway. That node is peered, so its Bitswap requests talk to its peers, and one of those peers will be this AutoRetrieve node, which looks like just another IPFS node sitting in the IPFS network. Right now you need to be peered. What that means is that it's currently serving IPFS nodes that are in the DHT server ring, because it automatically connects to them; if you're another IPFS client, you're not getting the full benefit quite yet, because you won't necessarily be connected. Those Bitswap requests will then be seen by AutoRetrieve, which asks the indexer node for those CIDs. When those CIDs are found at a storage provider on Filecoin, it makes a GraphSync request to pull that content locally into its own cache, and it will then advertise that it has those blocks and be able to respond to them over Bitswap. So it acts like a block cache. It keeps a relatively large cache, on the order of tens to hundreds of gigs of blocks that it has pulled from storage providers, but the thought is that these are transient. We can eventually have them running in the same regions as gateway instances, and generally use this as a short-term way, over the next months, to bridge until we get some protocol upgrades. I will leave it there. There's an AutoRetrieve channel on Filecoin Slack.

Awesome, thank you so much. And on to Jennifer for Lotus. A lot of you may know her. The huge Filecoin team has been split up into a lot of smaller teams, and over time we have Boost working on market problems.
The Lotus team has been trying to find our own definition, our own identity, within the whole Filecoin community and ecosystem. So that's what we are sharing with you here today: what our scope is. On the left side, as you can see, we are still a small team. We have eight folks: four engineers and four technical support engineers, who have been super helpful to a lot of the community. Our mission, first, is that we serve the Filecoin network. We ship the protocol along with the other implementation teams. We want to make sure all node operators can run a Lotus node and talk to the network, talk to the chain, and build their applications. Developers are a huge, huge focus among our users. As you may already know, Lotus is slowly stepping back from market development; however, we want to enable folks like the Boost engineers to build the market protocols on top of Lotus. And also, when the FVM is coming, we want to make sure developers have a very good experience, basically enabling a lot of use cases on top of Filecoin. That's why we think developers are a super important community for Lotus to focus on. The other one, needless to say, is storage providers: we need them to get all this data into the network. And there's user support: we want to make sure we maintain a good open source community and help to further build the Filecoin network. Next slide, please.

So that's our mission and scope. How do we ship all these things? We have a bunch of things Lotus is trying to do, and most of you will be curious about this. libp2p and IPLD have been our upstream teams; we have been working very closely with them to get their stack shipped in Lotus, as we are a user of their stack. So how do we ship things today? We ship monthly feature releases, which are always optional releases. They include a lot of new features and changes. We are still shipping go-fil-markets changes, on Boost's behalf.
All those changes go into those releases. But mostly we are focused on maintaining Lotus. We spend a lot of time on bug fixes and paying off tech debt, just to make sure our users can be happy using Lotus in production, stably. We also have mandatory releases, which are for network upgrades. Those are less stable on the timeline, because whenever Filecoin does an upgrade, we do the same. As you can see in the screenshot, we haven't missed a monthly feature release for, I would say, eight months now. Even when we ship a mandatory network upgrade release, we make sure we keep the feature releases going, just so all the development in master keeps shipping.

So how do we get things developed and coordinated into these releases? We have a set of processes. To start with, we begin our day in the team with cats and memes, as you already know; you can see here, these are our Lotus cats. And we have memes going on to make our life a little more fun before getting into the real work. A lot of what our technical support engineering team does is make sure we triage the incoming issues, in GitHub, in Slack, or in GitHub Discussions, within 48 hours, so the team knows there's something that needs to be looked at, confirms whether it's broken, and fits it into our backlog. Next slide, please. There are things where it's like, oh my God, you have to fix that immediately, otherwise the network may die. But a lot of the other things go into the Lotus backlog. So basically our TSE team puts out a weekly triage summary, which feeds into our sprint planning.
Before I get into the sprint planning, I do want to mention another thing we do: quarterly project backlog triage, prioritization, and roadmap planning. Lotus is still kind of a stakeholder in the core development of the Filecoin network, and because we are within PLN, working closely with a lot of other teams, like the other protocol teams, ConsensusLab, and the drand team, we tend to know what's coming on a six-month or one-year horizon. That's why we try to keep everything in our backlog, just to keep everyone informed, including the Filecoin Foundation and other core devs. So we do a quarterly project backlog triage just to help us understand what needs to be in the next network upgrade and start planning.

Beyond that, within the Lotus team we have our bi-weekly syncing session. A lot of other teams are doing amazing work in the ecosystem, and it's hard to keep up, so this is our chance to catch up with that work and understand what may be coming to us, where we'd have to shepherd the shipping of their work. This is where we try to understand the problems that need to be solved and learn about the new work other people are doing, like drand timelock encryption, or sharding, and all those things. After all this planning and backlog feeding, we do our monthly sprint planning. Basically, a week before the code freeze, we pick what we're going to ship in the next release. We make sure we have time for analysis, implementing some low-hanging-fruit features, and working on the projects that are on our roadmap. Next slide. We're almost done.

So those are all the development processes. On community engagement and project management, we also have weekly community updates that we share in the Filecoin Lotus announcements channel.
I would recommend you join that channel to get timely updates from our team. We are also generating triage reports, just to share with you all the feedback we're getting from the Filecoin community in general: what their pain points are, what new use cases people are looking forward to, so that we can unblock them. As you already know, we always have a lot going on. However, we do want to say that we welcome all incoming requests into our backlog. We cannot guarantee when we will get to them, but we commit that we will eventually go through them one by one with you, or with grantees or external teams. It is super helpful if you give us a precise ask: the problem, the issue, the user story, and the pain points. Those help us prioritize all these requests. And if you are running a new project, for example a program like Evergreen or Slingshot, and you need our support to set a good foundation for the program, let us know. If you give us one to three months of lead time, we probably can find time to work with you and be responsive to your participants. The other thing we want to do is onboard and support open source contributors, so if you know any team that would be good for us to collaborate with, please let us know if you want to establish those relationships.

I've said a lot of things, so how can you actually find us? Again, creating an issue is always the way to go. lotus is our GitHub repository, and we are also one of the co-maintainers of the builtin-actors repo. We are very responsive in the public Filecoin Lotus Slack channel, even more responsive than in DMs. But if you want to reach out to our team, to have a meeting, a talk, or a sidebar, you can reach out to me in a DM as well, at @jennijuju. But again, I check the public channel more often.
We do have office hours, but honestly, most of our engineers are just hanging out there, so if you want to talk to us, just join the office hours. Everything I just presented is on the public Notion page; there's a link there. You can see our roadmap, schedule, mission, scope, everything there. And we started our Twitter account early this year, and we're starting to build our own profile there, so follows and likes are highly appreciated. That's that.