Hello everyone, welcome to our May EngRes All Hands meeting. We're gonna start with our working group updates and then we have three deep dives on data transfer, autoretrieve and Lotus. As a reminder, we're one of many research and engineering teams in the Protocol Labs network, where we drive breakthroughs in computing technology to push humanity forward. We really believe that the internet is core to the work we're doing, and building it on top of core content-addressing primitives in web3 is a way to make all of the amazing discoveries and work happening over the next couple of decades grow on a foundation that is resilient, that empowers human agency, and that's gonna set us up for lots of future success. We currently do that mostly focused on a set of core projects, especially IPFS, libp2p and Filecoin, but there are many other core building blocks as well. Our working group mission is to scale and unlock new opportunities for IPFS, Filecoin and libp2p, and we do that in a few critical ways: by growing the network, by driving breakthroughs in the network protocol stack, and by scaling the development and research happening across this network, both ourselves and by participating in the wider ecosystem and helping it grow. This is our set of core EngRes working groups, and we are growing these every single day. If you are excited about the work you hear about here, we are looking for awesome new humans and we have a number of open roles. So please join us via this link — we would be very excited to talk to you and see how we can collaborate on pushing this work forward. Our strategy for 2022 is focused on four areas. First, growing the talent funnel of amazing humans contributing to the PL network stack and all of the projects and protocols that we support, especially around IPFS, Filecoin and libp2p.
The vast majority of the team is focused on making IPFS and Filecoin robust storage and retrieval networks and driving adoption, growth and robustness there. We also have a number of really exciting research and engineering breakthroughs around programmability, scalability and compute in the works right now. And finally, we do all of this with a first-and-foremost focus on making sure that these networks continue to operate smoothly: putting out new core protocol releases, burning down tech debt, and making sure that we can upgrade and push these things forward securely. A quick view into some exciting milestones that folks are working towards in the various working groups. There's been a ton of work on petabyte-level onboarding onto Filecoin: the data programs team has crossed one petabyte a day, and we wanna make sure that's maintained and consistent going forward. The Bedrock team has been doing a ton of work on IPFS/Filecoin interop, launching the network indexers and setting up autoretrieve as a bridge between IPFS requests and requests to storage providers. There's a lot of work happening around retrieval markets to enable fast, CDN-level retrieval of data across the IPFS and Filecoin networks; the retrieval markets working group is pushing on that. The FVM team is pushing to first land FVM M1 and then drive towards unlocking user programmability in FVM M2. The CryptoNet team has a really exciting MVP, already up on an Ethereum testnet, of a Filecoin retrievability oracle that can add additional retrieval guarantees. And the ConsensusLab team is doing some really amazing work toward making sure that we have scalable chain space for all of the exciting work coming out of FVM, CryptoNet and other teams, so that we have the consensus capacity for all of those transactions.
So please check out our public Notion if you want to learn more about any of these projects; all of these teams post weekly sit-reps, so you can see the latest from each of them as these milestones come closer. And now handing it off to Adin for IPFS.

With IPFS, we're trying to make the web more peer-to-peer with content addressing. The network still seems to be doing its thing: we have nodes, people use them, we get more nodes. Content routing is still doing fine — still under half a second for finding latencies. We can keep pushing that down, but for the DHT that seems to be where we're at right now. We've had many PRs opened and closed this month, and there's more to do. On Friday, we had an IPFS implementers workshop, which I'll be talking about later in this call — very exciting, plugging in with people across the ecosystem who are working on IPFS implementations. go-ipfs 0.13 RC1 is out. The amount of changes was so big it did not fit in the GitHub UI — it broke. Some big changes that people will like include libp2p resource management, which has been, I think, five or so years on the request list, and hole punching that works by default, which is also quite a number of years in the making — big thanks to the libp2p folks for that — plus gateway API changes and improvements. The Reframe spec has been implemented; it's in the Hydra nodes and it's in the development indexers, which gives us lots of cool options around how we do delegated routing, and more on that will be forthcoming. And some of you may have seen there's an IPFS collab with Lockheed Martin on seeing how we can send IPFS to space. The major theme of things upcoming is how we support more implementations. There are many; they show up all the time. Yesterday, Jeremy pushed one up called YPFS.
It does things: it works for larger storage nodes and has different sorts of requirements than something like go-ipfs, and we're enabling everyone else through things like specs, delegated routing and Reframe. And for those interested, we're working on renaming go-ipfs. There's an issue in the repo — suggest a name, a good name, not a troll name. If you don't suggest good names, we're gonna go with "banana". So suggest a good name, all right. Awesome, over to Alex.

Hi, so in the JS IPFS stack, what's been going on? Well, since we were last all together, we shipped js-libp2p 0.37. The notable feature there is that it's now ESM-only and the whole thing is written in TypeScript, which gives us a lot more safety as developers and just a nicer experience — but there are practical benefits as well. The bundle is now smaller: 137K, down from 180, because we've managed to drop a bunch of dependencies — things like BigNumber, since there's now BigInt in JavaScript, which we use natively. That kind of thing is really quite exciting. You'll also see the graphs on the slide: there's been a very long-running problem with js-libp2p where, over time, it just eats all the memory. That's the graph on the left, and the graph on the right is the after, which you can see looks a lot better — very much a weight off my mind, I'm very happy about that. We also shipped 0.37.1, which just tidies up a few little extra things around the edges of 0.37. So you should upgrade as soon as possible, it's very, very good. And please tell us the things that are bad — so far no one has told us anything is bad, therefore it is all good. So you should definitely upgrade, yes please. Next up, those versions of js-libp2p are rolled up into js-ipfs. It's ready to go, but it's the end of the day here and it's Friday tomorrow.
Never ship anything on a Friday — so that's going out first thing Monday morning. It is also ESM-only, because that's what happens when you go ESM-only: it just leaks everywhere. But it's the future. You shouldn't be using CJS anymore, because ESM is the module system in JavaScript; everything else is a hack bolted on top in userspace. Anyway, ESM is the way forward, so it's now ESM-only in js-ipfs as well. We also have lightweight PeerIds, which is very exciting. The peer-id module up until this point has dragged a lot of crypto dependencies with it — implementations of all the algorithms we use that aren't available in WebCrypto — which made it incredibly heavyweight and completely unsuitable for use in the browser, for example, where you're generally not doing any of these cryptographic operations. The new version doesn't have any of this baggage, which is wonderful because it means we can now have a proper PeerId type instead of having everything as strings, which is pretty tedious and completely counter to this whole "let's have types for things". And what else? The next version of js-libp2p is gonna have a resource manager very similar to the one shipped in go-libp2p recently — yeah, it's gonna arrive — and Circuit Relay v2. And just a massive thanks to everyone who's helped pull this across the line, particularly the people at ChainSafe, who've done a wonderful job reviewing my — whatever it is — 500-odd PRs. They were very patient, very kind with the approve button, so thank you very much to them. And thanks in general to everyone who's helped out with contributions from the community and all the collabs. So yeah, that's it. I hope you dig it.

Awesome, congrats on the releases. Exciting. Moving on to libp2p.

Hello everyone. libp2p is the networking library for peer-to-peer protocol development. So let's look at some updates from this last month.
In go-libp2p, we've been consolidating repos furiously, and so now we have a bunch of flaky tests: the tests that were flaky but wouldn't run very often are now running on every go-libp2p commit. This graph shows tests that failed but then, when you clicked the retry button with no changes in code, worked — and the bigger the circle, the more times you had to retry before they passed. It was especially bad at the beginning. We still have quite a few flakes and we're committed to fixing them, so expect this graph to look better next month. Overall, the nodes in the network have held pretty steady. Last month was P2P Paris, and we now have the videos of those presentations. There's the long-term view by Juan, which is really good, and there's also a link here to the playlist. I especially recommend checking out Marten's QUIC deep dive — that was really good and made me really excited for QUIC. We had some community calls, and someone presented swift-libp2p, which is quite cool and which I'd recommend checking out. For some technical updates: we now have a draft spec and proof of concept for WebTransport, which is basically QUIC in the browser. This allows browser nodes to connect directly to any libp2p server without it needing a valid, CA-blessed certificate. WebRTC is another path that allows this: we already have WebRTC in the browser, and where this is going is to implement it server-side, so that, again, any browser can connect to any libp2p server. What's especially exciting about these two paths is that, coupled with relays, they will allow any browser to connect to any other browser without a dedicated centralized server — they'll be able to use any relay server and create a direct peer-to-peer connection to any other browser. That will be huge when it works. go-libp2p released a new version, and it's just a consolidation release.
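The flake metric behind that graph can be sketched roughly like this — a minimal illustration, not go-libp2p's actual CI tooling; the test names and log shape are made up:

```python
# Sketch: from an ordered log of CI attempts, find tests that failed and
# then passed with no code change, and count how many retries each needed
# (the "circle size" in the graph described above).
from collections import defaultdict

def retries_until_pass(attempts):
    """attempts: list of (test_name, passed) in run order.
    Returns {test_name: retry_count} for flaky tests only."""
    failures = defaultdict(int)
    flaky = {}
    for name, passed in attempts:
        if passed:
            if failures[name] > 0:   # failed earlier, now passes: flaky
                flaky[name] = failures[name]
            failures[name] = 0       # reset for subsequent runs
        else:
            failures[name] += 1
    return flaky

log = [
    ("TestDial", False), ("TestDial", False), ("TestDial", True),  # 2 retries
    ("TestPing", True),                                            # never failed
    ("TestHolePunch", False), ("TestHolePunch", True),             # 1 retry
]
assert retries_until_pass(log) == {"TestDial": 2, "TestHolePunch": 1}
```

Tests that pass first try never appear in the result, which matches the graph only showing retried-to-green tests.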
And we have some updates coming in rust-libp2p; these two updates are net-negative code, which is really exciting, and they help pave the way for QUIC in rust-libp2p. That's it from the libp2p side. Woo-hoo. Over to IPLD — Rod will be giving us an update.

IPLD highlights: despite losing Daniel, and Eric having a well-earned break, IPLD development continues. We've been working a lot on bindnode recently. Bindnode is part of go-ipld-prime; it's an interaction mode with go-ipld-prime that we're focusing on because it makes working with Go types a lot easier. It's currently used in the latest go-graphsync and go-data-transfer, and we're working through go-fil-markets. When we get that full stack fitted out, we get to use the power of go-ipld-prime throughout, and we simplify a lot of code — there's a lot of IPLD-related complexity in there that we get to throw out. We're also adding support for custom type conversions for complicated cases like TokenAmount, Address and Signature, which are all in Filecoin: these are encoded as byte arrays, but they have complex structure and rules around them. Adding support for those in bindnode lets us work with types that are also using cbor-gen — so you can use cbor-gen and bindnode at the same time and get the benefits of both for whichever situation you want, which is also useful for migrations and testing. So this is going full steam ahead. Other things going on: we've improved go-ipld-prime's IPLD schema DSL parser — it's close to language-complete, with just a few fairly low-priority things left. The FVM team, and in particular Volker, have been working on the Rust DAG-CBOR codec implementation; it's got full spec compliance and is now passing all our test fixtures. The JavaScript CAR library has had a couple of updates — it now has CARv2 read support.
And thanks to the .storage team, and Irakli in particular, there's a new CAR buffered writer for synchronous, in-memory CAR creation — they're doing some really interesting things with small CARs, using CARs as an in-memory transport through the browser, so check that out. Opportunities for you: there's a very active public sync and chat every two weeks that happens across the PL network, with a bunch of people doing interesting things. You can join via Zoom to chat — it's pretty freeform — or you can watch it live on YouTube; details are on GitHub. We're also continuing to evolve bindnode and would be interested to hear about other use cases so that we can test it and push its limits. And that's it for IPLD this time.

I'll jump in here before Molly to quickly talk about the developer experience team. We've been really trying to shore up our operational security, particularly around GitHub. There are years of technical debt in terms of who has access to what repos, et cetera. So we're getting all of our GitHub management into code, in GitHub itself, so that you can create a PR to give people access, you can see the commit history, and we have better auditing on all of that. That whole system has been built out and rolled out to many of our repos. The desired end state is: if you want access to something, you create a PR, the relevant approvers review and merge it, and then an automated workflow executes that grants the permissions. That, like I said, has been deployed in a lot of places. We haven't battened down the hatches in many places yet; we're starting with libp2p, so you might start getting notifications that you're about to lose access. These aren't one-way doors — we'd rather tighten things down and then add people back in as necessary. So that's underway.
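The core of that PR-driven flow is a declarative diff: the repo config in git is the desired state, and the apply workflow computes which grants and revokes get it there. A minimal sketch — the repo names, role strings, and config shape are invented for illustration, not the team's actual tooling:

```python
# Sketch: diff current vs. desired {repo: {user: role}} permission maps
# and produce the grant/revoke/update actions an apply workflow would run.

def plan(current: dict, desired: dict) -> list:
    actions = []
    for repo in sorted(set(current) | set(desired)):
        cur, want = current.get(repo, {}), desired.get(repo, {})
        for user in sorted(set(cur) | set(want)):
            if user not in want:
                actions.append(("revoke", repo, user))          # lost access
            elif user not in cur:
                actions.append(("grant", repo, user, want[user]))
            elif cur[user] != want[user]:
                actions.append(("update", repo, user, want[user]))
    return actions

current = {"libp2p/example-repo": {"alice": "admin", "bob": "write"}}
desired = {"libp2p/example-repo": {"alice": "write", "carol": "read"}}
assert plan(current, desired) == [
    ("update", "libp2p/example-repo", "alice", "write"),
    ("revoke", "libp2p/example-repo", "bob"),
    ("grant", "libp2p/example-repo", "carol", "read"),
]
```

Because the diff is deterministic, re-running the workflow is idempotent: once current matches desired, `plan` returns no actions.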
A lot of the team is focused on Testground right now, working with Bloxico to get us to a place of stability in the test infrastructure, so we can do compatibility testing and performance testing, focused on libp2p first. We'll have more to share on that at the next all hands. The other thing we wanted to flag is that the team is working on self-hosted GitHub Actions runners for a more reliable CI setup. So if that would be of use to your team in any way, please reach out to the IPDX team — they're very responsive on Discord, they've got office hours, et cetera, and they're really here to help people be more productive. Thanks.

Awesome, great to hear. Over to Filecoin — Jennifer.

With Filecoin we're trying to build a decentralized storage network for humanity's most important information. As you can all see, the network's total storage capacity is steadily growing: we're almost at the big 17 EiB of quality-adjusted power, which is very exciting. A lot of that is contributed by the data programs onboarding real data onto the storage network. As of today we're hitting 90.56 PiB, which is about a 20 PiB increase since the last all hands — incredible. As you can see, we're at 0.64 PiB per day of growth. A lot of this is thanks to the Filecoin Plus team and Evergreen, and it's very exciting to see the network actually being used for real data. Next slide, please. For the Filecoin highlights: you've probably heard of it, and you're gonna keep hearing about it for the rest of the year — the FVM. We've been talking about it forever; however, we're finally so close to actually shipping FVM M1 onto the Filecoin network. As of last Thursday, the reference implementation of the FVM has been feature-frozen. We have the RC tag out already, and before the network upgrade we'll be in audit and bug-fixing mode in parallel.
We're never only working on one single thread, so we've already started scoping out FVM M2. For those who don't know, M2 is about unlocking user programmability — user smart contracts on Filecoin — which will enable many more use cases on the Filecoin network. As we scope that out, we're also working with FVM early builders on things like testnets and tooling that can improve the smart contract developer experience as M2 launches later this year. Along with the FVM, we're still working on the built-in actors: for the nv16 network upgrade we're switching from the specs-actors, which are written in Go, to the built-in actors, which are rewritten in Rust. For this big switch a lot of test coverage work is ongoing — a huge shout-out to Zen, who has been amazing in leading this effort with the Forest team, another implementation team in the network, to help us cover the whole code base and ensure the switch lands safely and securely on the Filecoin network. As for Lotus, we are shipping the ship: we're working with all these teams very closely and getting ready for the nv16 upgrade. We're gonna call feature freeze tomorrow, cut the tag and deploy it on the butterfly testnet so that we can start to test the FVM with the community for the first time. A couple of opportunities: with nv16 coming in late June or early July — we'll see how development goes — the bug bounty is still open, and we welcome everyone to help us further harden the code; again, it's a big change to the system. A lot of work is being done by early builders, and there'll be an update on that later. For Filecoin in general, the Forest team has been growing a lot and has been stepping up on the built-in actors effort, so with them joining we'll have more resources for developing the protocol in the future. We're super looking forward to that.
Katie, the governance TPM from the Filecoin Foundation, has started a new FRC — Filecoin Request for Comments — proposal process, so that we can have a governance process for things that aren't consensus-critical but are still good to standardize for the Filecoin protocol; that will include things like markets and API standards. We're still looking for feedback, so if you have any thoughts, click on the link and help us review the process. And again, we have two network upgrades being worked on in parallel for Filecoin, and the community has started to plan out what the next network upgrade should be — so if you're curious what's being proposed, click that link as well. That will be maybe one or two upgrades before FVM M2; we'll see. Awesome, thank you Jennifer. Over to Jesse for NetOps.

For NetOps: we're still tracking our TTFB — the 95th percentile is still around 10 seconds, still performing pretty well. Our IPFS Cluster total pin count has increased to 346 million pins, about a 10% increase from the last time we checked. Our weekly ipfs.io gateway requests are still increasing, at around 955 million requests, and unique ipfs.io gateway users are at 8.2 million. So a lot of the numbers are very stable, a lot of the usage is still increasing, and our network uptime is still pretty good. The drand API, chain.love, Sentinel, FilInfo and ipfs-check are always at 100%. The ipfs.io gateway is at three nines — not five nines, but we're trying to reach four nines and then five nines. I think that's the high-level view from NetOps. Awesome. And an update from Yiannis on drand.

Hello, hello everyone. Quick updates, with a lot of new things actually. We have a shiny new monitoring dashboard for the resharing ceremonies.
If you've been in a drand ceremony before, you'll have noticed there's a lot of manual communication back and forth with every partner to make sure everything is in place before actually executing the resharing and the DKG. Now, thanks to Mario's great work, we have a monitoring dashboard that tells us many of the things we previously had to figure out by contacting partners — for instance, whether they've upgraded to the latest version of drand. You can see a screenshot at the top, which shows a small part of the connectivity map: think of it as every partner being on both the X and Y axes, and if there's a transient failure it needs to be figured out before the ceremony; if everything is good, it's green, and so on. That gives us much more confidence in everything. In the screenshot at the bottom you can see the group size, the threshold, and whether some nodes are falling behind in following the latest randomness, and so on. So that was great news, and we used it to spin up a new network on testnet which includes the new features I've talked about before. The new network is unchained randomness, and the frequency of producing randomness is three seconds. It's testnet, but it was successful — great. We now have, if I can put it that way, a second network running virtually alongside the first one, the default testnet, which wasn't affected by any of this. We had a couple of hiccups with the unchained testnet that we fixed quickly, but this is great news because it takes us to the next level: with unchained randomness we can build timelock encryption on top, but also run more networks with different nodes — non-League-of-Entropy nodes and so on. So we can have a devnet in addition to testnet; it's great. We can do lots of things.
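The three-second frequency matters because drand beacons are addressed by round number, derived from the network's genesis time and period (round 1 is published at genesis, round 2 one period later, and so on). A small sketch of that round arithmetic — the genesis timestamp below is made up, and this is an illustration rather than an official drand client:

```python
# Sketch: map wall-clock time to drand beacon round numbers for a network
# with a given genesis time and period (e.g. the 3-second unchained testnet).

def round_at(now: int, genesis_time: int, period_s: int) -> int:
    """Latest round available at unix time `now` (0 if before genesis)."""
    if now < genesis_time:
        return 0
    return (now - genesis_time) // period_s + 1

def time_of_round(round_no: int, genesis_time: int, period_s: int) -> int:
    """Unix time at which `round_no` is published."""
    return genesis_time + (round_no - 1) * period_s

genesis, period = 1_650_000_000, 3          # hypothetical genesis, 3 s period
r = round_at(genesis + 10, genesis, period) # 10 s after genesis
assert r == 4                               # rounds published at t = 0, 3, 6, 9
assert time_of_round(4, genesis, period) == genesis + 9
```

This is also the hook for timelock encryption: pick the first round whose publish time is at or after the desired decryption time, and encrypt to that round.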
Eventually Filecoin will probably want to transition to that, so we've started an initial discussion with the Filecoin dev team, but lots of work still needs to be done — we don't expect any of that before the end of the year. But yeah, get involved. In terms of opportunities, we're doing a big push to get the word out about drand with blog posts and several public talks; Yolan is pushing for that and was actually out at a conference last week. So great stuff there. As I mentioned, timelock encryption is something we want to develop clients for — it's a great use case for drand, it can be used by different blockchains for several other reasons, and I'm looking forward to having that land. We also want to provide some funds to League of Entropy operators, because providing randomness is a public good, as we all know. We're working on that; we'll have an update in later quarters — just a shout-out here. And where to find us: we've moved out of the PL Slack; there's a drand workspace you can join if you want, and we'll be inviting the community as well — the users that are building on drand, using drand. So great stuff there. I've linked the public page and roadmap, as well as our internal page with weekly updates, so click on those links or get in touch if you want to follow what we're doing. Thank you very much. Awesome. David Choi.

As far as KPIs go, NFT.storage has crossed 65 million uploads, so steady growth is still there, and notice that the volume of data being uploaded to Filecoin is also increasing — which speaks to potentially larger drops and that sort of thing. web3.storage saw some acceleration in growth, somewhat organically, as the team looks to focus more on growing that side of the product.
As far as highlights go, one big update is that we're really digging into testing Elastic IPFS to be able to use it in production — at first rolling it out as a redundant layer alongside Cluster, and eventually being able to rely on it as the primary write layer for our products — and there's a lot of good news there. Thanks, Alan and Bengo, for instrumenting a lot of that testing: very few connection errors, Bitswap errors are really low, we're able to retrieve pretty much everything that we know is in S3, and read speeds are usable for the most part. There are some gaps we're still working with the NearForm team to bridge, but one big effort that has been collaborative across all of EngRes has been getting Elastic IPFS records onto the DHT. A huge shout-out to all those who have been involved with that — we're starting to see records from Elastic IPFS on the DHT, and seeing that percentage climb slowly up from zero. That's super exciting, and we're looking forward to having another example of an IPFS implementation in production. We also released the SuperHot gateway yesterday for NFT.storage's gateway: we now have a way to perma-cache data on the edge, and it's now in private beta. This is a premium feature that we're launching to our users, and we've had some inquiries already. It's a big milestone for our team, both as a big feature launch and as the first monetization feature that we'll be charging for — looking forward to seeing where that goes. We also met in Miami for a few days earlier this month: a lot of productive hacking, and we have videos demoing the hacks, so ping me if you're interested in seeing them — Alan also posted them in #lobby maybe two weeks ago. Lots of good team discussions and good food; it was great to see everyone in person. You can see some photos of our team activity to the right, but unfortunately most of the team did get COVID in Miami.
And literally most of the team — almost all the team — got COVID, so everyone please do be careful out there, perhaps especially if you're visiting Florida. Rough — well, hopefully everyone is feeling better. Jacob, Bedrock.

Yes, so Bedrock — what's going on. Project Lightning has moved on to doing an overhaul of — not GraphSync, sorry — go-data-transfer v2. The team's been working on that; it's gonna lay a lot of the foundation to make the next work we're looking at, for both retrieval from storage providers and interop, much better to work with. You heard some of the go-ipld-prime updates from Rod earlier; that's integrated in there, so a lot of good stuff going on, and we're hoping to see that out in the next couple of weeks. On the storetheindex side, the team is aiming to have all of NFT.storage ingested into the production indexer running at cid.contact by end of week, and is also working with Kenlabs to get short block-cache advertisements in place. Now that we have two indexers — Kenlabs is running one, and we're running one on the Bedrock team — this is gonna let us keep them in sync. We want to add more of that distribution to the network, more than just PL running indexers, because hey, distributed networks. On the Boost side of things, we're releasing the RC — we've already cut a tag for that and we're gonna announce it on Monday so we can get RC testing going — and then we'll release it to production two weeks later. We're gonna start our ramp-up with lots of blog posts and such, so you'll see more information about Boost coming soon. And the team is already starting to look at what the roadmap looks like for scaling: we've been talking with storage providers about how they're gonna scale up to enterprise-level needs, because we really wanna hit this goal of five petabytes per day of onboarding in Q3.
And so we wanna make sure we're supporting that with storage providers. On the reputation side of things, in the next couple of weeks we're working on getting per-provider insights into bandwidth across providers, the retrieval limits they have in place, and aggregations of their deal success and failure rates. This is gonna help us better inform what we're looking at for retrieval incentive structures, as well as general metrics across the network — it's gonna be super helpful; hopefully we'll see that sometime in June. On the KPI side of things, for storetheindex we're approaching 9 billion CIDs ingested, which is super awesome, and we're almost at 90 storage providers currently being indexed, which is great. On autoretrieve: we launched the autoretrieve dashboard, which is really, really helpful for us moving into the retrieval side of things and getting a better lay of the land for the network. Over the past seven days, we've averaged about 20 retrieval requests per second. This is the bridge mentioned earlier that translates between IPFS requests and Filecoin requests. What the team is working on is pulling a lot of the events that come in through the retrieval pipeline, so that we can better understand where failures are happening on the network and spend more time focusing on improving that. So as retrieval markets and interop scale up in the future, we'll be able to better support that from the storage provider layer. Very excited about a lot of that. As for some of the gaps we're looking at: one of the things we're hoping to leverage Boost for, as it gets more rollout, is opt-in metrics for storage deals — because while we've ramped up retrieval metrics, we don't have good insight into throughput on storage deals.
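The retrieval-pipeline event analysis described above boils down to finding the last phase each request reached. A toy sketch — the phase names and event shape are invented for illustration, not autoretrieve's actual schema:

```python
# Sketch: each retrieval emits events as it moves through pipeline phases;
# tallying the furthest phase per request shows where failures concentrate.
from collections import Counter

PHASES = ["indexer-lookup", "query-provider", "transfer", "success"]

def failure_breakdown(events):
    """events: list of (request_id, phase_reached) tuples.
    Returns a Counter of the last phase each request reached."""
    last = {}
    for req, phase in events:
        # keep the furthest phase seen for each request
        if PHASES.index(phase) >= PHASES.index(last.get(req, PHASES[0])):
            last[req] = phase
    return Counter(last.values())

events = [
    ("a", "indexer-lookup"), ("a", "query-provider"), ("a", "transfer"), ("a", "success"),
    ("b", "indexer-lookup"), ("b", "query-provider"),
    ("c", "indexer-lookup"),
]
stats = failure_breakdown(events)
assert stats["success"] == 1          # request a completed
assert stats["query-provider"] == 1   # request b stalled querying the SP
assert stats["indexer-lookup"] == 1   # request c never found a provider
```

A dashboard built on this kind of tally makes it obvious which phase to optimize first as request volume grows.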
So we're gonna be working with storage providers to understand how we can get them to opt into these metrics, so we can gather more information from the storage side of things and improve metrics there. Thanks. Over to Zhua for data programs.

All right, hello everybody. I'm Zhua, representing data programs, with some updates. First and foremost, we're focused on systematically addressing some of the choke points in the client onboarding experience: improving over-the-wire bandwidth by exploring BGP — massively increasing the size and volume of our pipes — and also looking at off-the-wire solutions, using sneakernets, where we ship hard drives, to make sure people don't face all these difficulties getting data onto the network. Deep has been hard at work implementing Fil+ v3 for LDNs, which should significantly speed up the end-to-end LDN time-to-DataCap. That's crucial: if you're a large client, you need to be allocated DataCap, and we need that process to be as seamless as possible. We can now actually verify our progress there by referring to the new graph he shipped this week. We're also improving Fil+ by holding a third round of notary elections, with 50-plus notaries participating starting next week — really great news. All this is contributing to really healthy growth in data being onboarded: we're currently at 89 PiB of verified deals from 800 clients, at a healthy 8% week-on-week growth, which has been pretty consistent over the last two months or so. Over on the client pipeline, where solutions architects across PL, the Filecoin Foundation and the ecosystem are collaborating, we have 60 PiB of ongoing onboarding — 60 PiB representing clients who are onboarding their data — and another 110 PiB from clients preparing for their POCs. So a very healthy pipeline there. A couple of highlights.
The client growth analytics dashboard is now 100% automated, great news. No more CSVs and metadata cleanup. Slingshot V2 crossed 40 PiBs of total data onboarded. That's four times the original goal, and it's on its way to the final milestone of 45 PiBs. And finally, Slingshot Evergreen crossed one PiB of data in deals renewed, which is fantastic. A couple of gaps that we are addressing: we are finding that there are many opportunities in identifying the use cases of data owners and helping them find the right solution. In other words, we're currently lumping them into one big bucket, and we need to figure out how to refine the onboarding solution for all these different types of data owners, who represent different complexities in terms of their data volume and technical know-how. Also, with Evergreen, we're seeing that the average replication factor is about two, while the max replication per CID is set at 10. So for folks who are interested in ensuring the permanence of open data on Filecoin, please refer to Evergreen at filecoin.io and help us increase that replication factor. Finally, coming up next, we have Fil+ Day, hosted on June 7th. Really excited about this. Please, please join us if you can to learn more about the latest news and participate in ongoing discussions. Fil+ is a really major part of data programs and getting verified deals onboarded, so we'd appreciate and love your collaboration and presence there. Thank you so much. Awesome, over to Alex for CryptoEcon. Yeah, this is Alex. I just wanted to go over a few of the highlights of the past month. We've increased our hiring pipeline quite a bit. We've hired five people so far in 2022, and our goal is to hit 10 more: data scientists, software engineers, TPMs, and startup operators. We've reviewed 50 proposed changes to the protocol, and we've increased the number of public presentations and publications. We've done five so far in 2022.
As a team, we wanna ramp that up to 40 as we continue to run these CryptoEcon Day events. We've done one so far, at DevConnect in Amsterdam, and we have three planned now: one for FIL Austin, one at EthCC in Paris, and one at FIL Singapore, plus possibly the Korea Blockchain event. So we're getting a little system down so that we can present what we're doing, let the public know about CryptoEcon, and continue to build that brand and that knowledge hub. On the project side, we're working on hierarchical consensus. That's something we've been working on for weeks, and we're trying to reach alignment with ConsensusLab there. Some initial scoping on Saturn. And Project Atlas, marrying geospatial data with the Filecoin network: we've begun a first phase of ideation on different dApps and other applications we could build, along with a research roadmap and an analytics roadmap. And let's see, we have a new hire, a couple of offers out, and some new hiring theses filed, so we have a lot more candidates in the pipeline. We're being very active in that. Thank you. Awesome. Coming into our spotlights, Jennifer. I'm just gonna use this time to say that this actually involves a lot of teams other than the FVM and Lotus teams, so I just wanna give a shout-out to everyone there: FVM, builtin-actors, Lotus, all of this. Huge shout-out to Travis for helping us set up all these tests so we can ensure secure testing and all those things. A lot of effort is going into internal and external audits: to Voker, who worked on the IPLD part and made sure we actually use IPLD properly in the Filecoin builtin actors; to Kuba on the gas work and a lot of the fuzzing work; to Nemo and Digg on the rust-side proofs; and to Eva and Jagger on the external audits. And we also have external partners like the Filecoin Foundation helping along the way. Just want to give a shout-out to everyone for the upcoming upgrade. Thanks. Ally. Hey, yeah.
I think Jenny and Molly, you've got to most of the technical updates. FVM M1 is in development freeze, so the team is onto mapping milestone two, the big one: programmability. The FVM early builders program is awesome. We've got about 20 builders, and about 10 of them are building tooling and infrastructure. We've just released an umbrella RFP for that and already have two RFPs in: one for a high-level Rust SDK to make it easy for developers in the future, and Zondax is building an AssemblyScript SDK. We've got two others in the works: probably the Hardhat of FVM, from Bloxico, which is really cool, and also a TinyGo SDK, which is a super interesting one. And Jim Pink's been super active here. If you want to have a look at his little actor playground, he's Dockerized and Kubernetes-ified everything and put it on Observable HQ, so you can play around with a few demos he's made on there for FVM as well. We've got a tweet out on some more of the things people have been doing. I'll just post that in the chat. Thank you. Marco. Hello. This is a spotlight on ConsensusLab's Eudico garden. So Eudico is, in botany, the group of plants to which Lotus belongs, and our Eudico is a Lotus fork in which we basically implement hierarchical consensus. If you want to try and play with it, this is Eudico garden: a set of scripts that uses Terraform to deploy Eudico test networks on AWS. Then you can play with whatever we've shipped so far, which is cross-subnet transactions, standard consensus, and basically the hierarchical consensus MVP. We are getting more people very soon, so we are going to do long-running tests on long-running deployments and basically add dashboards and monitoring and a few other things. So try it out. Thank you. Adine. Yeah, me again. There was an implementer's day. It was on Friday. It was really good.
It was about four and a half hours, eight sessions from a lot of people from different companies. About 50 people showed up and were watching, and 1,000 people have looked at the videos so far. If you missed it, check out the videos; they're on the IPFS YouTube. There's an IPFS implementer's Discord channel if you build or are interested in building IPFS implementations, and there's a sync every two weeks on the IPFS community calendar. If you Google the IPFS community calendar, you will find it. A big shout-out to Brendan and everybody who helped make it happen. It's really exciting, and I'm looking forward to seeing this grow. Next, I'm presenting on behalf of CX about the Big Data Exchange, which is a marketplace for storage providers to bid to store charismatic public data sets. Think of the benefits of NFT marketplaces for discovery of amazing content, but for storage providers to discover amazing public-good data sets that they want to help store, onboard, and replicate across the Filecoin network. It paves the way for bringing even more great data to the network, and an even better onboarding experience for clients with really valuable data that we want to persist. They have successfully closed their first auction: 29 terabytes, closed at 16 FIL, paid by a real SP. So awesome to see them hitting that milestone. Go visit bigd.exchange to see how you can sell valuable data and how SPs can bid for the opportunity to store it on the network. Really, really great to see this happening. The team will be at FIL Austin if you want to learn more. On to our deep dives. We'll do these a little quickly, but first, Hannah for data transfer. I'm Hannah from Bedrock, and I'm excited to introduce an effort our team is exploring to supercharge the ways we are able to move data around our networks. First of all, I want to help folks understand what we have today. You've probably heard of the words BitSwap and GraphSync.
I want to talk briefly about what they are and how they're different. Both of these protocols move IPLD data around libp2p networks. The analogy I've been using to help non-programmers understand is this: BitSwap is roughly designed like BitTorrent, while GraphSync is roughly designed like HTTP. That means they shine in different scenarios. BitSwap, like BitTorrent, is good for moving highly distributed content from many peers, where each individual peer might have low bandwidth, like a home computer. GraphSync, like HTTP, works great for downloading data from high-bandwidth servers, like storage providers. The other big difference between the protocols is a historical artifact of how they were built. BitSwap is the bread and butter of IPFS, while GraphSync was written in the course of Filecoin development. This has led to some big differences in the implementations we produce. These aren't differences that are inherent to the protocols, but they're nonetheless quite significant. go-graphsync supports layers for payments and authorization, while go-bitswap keeps everything free. And not only that: go-graphsync provides multiple layers of control to operators, while go-bitswap has a lot less configurability. In newer situations this has led to a difficult trade-off. In retrieval markets, for example, it would be nice to be able to reach for either protocol without having to think about what is and isn't supported in terms of things like payments and authorization. It's a tough trade-off right now. Retrieval markets need multi-peer transfers, but they're also going to need payments eventually. So what do they use? This is the question Project Founders is trying to answer: why not both? We wanna make each protocol more powerful and flexible, so it isn't really a choice. I shouldn't have to say: if I build for Filecoin, I use GraphSync; if I build for IPFS, I'm kind of stuck on BitSwap.
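One way to see the BitTorrent/HTTP analogy is in round trips: a BitSwap-style client only learns a block's links after it receives the block, so every level of the DAG costs another exchange, while a GraphSync-style client sends one selector and the server streams back the whole graph. This toy Go model is purely illustrative (it is not the real go-bitswap or go-graphsync API), but it captures that access-pattern difference:

```go
package main

import "fmt"

// DAG is a toy content-addressed graph: each CID maps to its child links.
type DAG map[string][]string

// fetchBitswapStyle models BitSwap: links are only discovered after a
// block arrives, so each DAG level costs one more want-list round trip.
func fetchBitswapStyle(d DAG, root string) (blocks, roundTrips int) {
	frontier := []string{root}
	for len(frontier) > 0 {
		roundTrips++ // one exchange per frontier of newly discovered CIDs
		var next []string
		for _, cid := range frontier {
			blocks++
			next = append(next, d[cid]...)
		}
		frontier = next
	}
	return
}

// fetchGraphsyncStyle models GraphSync: one selector request, and the
// server walks the DAG itself, streaming every block back in one go.
func fetchGraphsyncStyle(d DAG, root string) (blocks, roundTrips int) {
	roundTrips = 1
	var walk func(cid string)
	walk = func(cid string) {
		blocks++
		for _, child := range d[cid] {
			walk(child)
		}
	}
	walk(root)
	return
}

func main() {
	dag := DAG{
		"root": {"a", "b"},
		"a":    {"a1", "a2"},
		"b":    {"b1"},
		"a1":   nil, "a2": nil, "b1": nil,
	}
	b1, r1 := fetchBitswapStyle(dag, "root")
	b2, r2 := fetchGraphsyncStyle(dag, "root")
	fmt.Println(b1, r1) // 6 blocks in 3 round trips
	fmt.Println(b2, r2) // 6 blocks in 1 round trip
}
```

This is also why "use GraphSync to discover the DAG structure, then BitSwap to fetch from many peers" is an appealing hybrid: the structural knowledge collapses BitSwap's level-by-level discovery.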
Or: if I use BitSwap, I can't have payments. The autoretrieve project you'll hear about next is great for bridging IPFS and Filecoin, but in the long term one shouldn't need to run a server to translate transfer protocols. And it's not just about making these choices easier. We can actually use one protocol to fill in the gaps of the other. BitSwap sometimes lags behind BitTorrent in performance because BitTorrent starts with more information about the structure of the data you're downloading. So what if GraphSync could be used to quickly discover that information? How much faster could BitSwap be? These are the kinds of questions we're aiming to answer. So anyway, how are we gonna do all this? Well, this is what you're gonna get for the five-minute version. No, seriously, I tried to make a super simple architectural guide, and no matter how much I cut it down, the answer will be unsatisfactory unless I take the other deep dives' time, and I'm not gonna do that. That's not what you do to teammates. Suffice to say, it's complicated. In terms of what we're doing right now: we have two protocols, and several layers of payments that only work with GraphSync. In our current work, Bedrock is re-architecting the higher-level layers to be fully protocol-neutral, while IPFS Stewards are building the hooks in BitSwap to make it possible to support payments. This is complicated, slow work, but you will hopefully see a re-architected go-data-transfer V2. The slide says in a month or so, but I just heard two weeks, so in two weeks it will be here. But here's a ton more information: you can read the detailed project proposal and roadmap, which proposes extensions to BitSwap; watch a video on how we're re-architecting go-data-transfer; and follow progress with the data-transfer-interop hashtag on the Fil Slack. You can check the slides to dig into these. I might do a deeper dive for programmers at some point. One last thing.
We may not get this work done super soon. These kinds of protocol changes are really hard; they're hard every time we do them. They're long-term investments, and they don't always have super visible immediate wins, but they have very big long-term wins. So it's possible our team may need to get reallocated at some point for immediate priorities, but my hope is that we're gonna get there, and that we're gonna invest as an organization in this kind of low-level work to unlock these key long-term benefits for the network. That's all. Awesome. Thank you so much, Hannah. On to Will for autoretrieve. So autoretrieve, we've mentioned a couple of times. This is one of the stopgaps that we're putting in place so that in the short term, we can make content that's in Filecoin accessible to IPFS, to gateways, and just more generally bridge some of the protocol gaps that we've got at the moment. It also serves a secondary purpose: it gives us a lot more view into the state of retrievals and lets us work with data programs to help set up the right incentives to encourage storage providers to ramp up their retrieval bandwidth and infrastructure, so that they can serve the volume of retrievals that we're expecting to keep growing. So this is running; we've recently switched it to a Kubernetes deployment that we can keep running pretty stably. We're working through some ongoing resource management work so that it's not only running but also serving at high quality; you can see some gaps in the success/failure rate where it currently runs out of memory. All this work is thanks to Elijah on the Outercore team and Kyle on the Bedrock team. But more generally, what this is going to mean is that when you go to ipfs.io, the request will go back to the big IPFS node that is that gateway. Autoretrieve will be peered with it, so the gateway's BitSwap requests will go out to its peers, and one of those peers will be this autoretrieve node, which looks like a regular IPFS node. Right now you do need to be peered with it. What that means is that it's currently serving IPFS nodes that are in the DHT server ring, because it automatically connects to them. But if you're another IPFS client, you're not getting the full benefit quite yet, because you won't necessarily be connected. Those BitSwap requests will then be seen by autoretrieve, which will ask the indexer node for those CIDs. When those CIDs are found at a storage provider on Filecoin, it will then make a GraphSync request to pull that content locally into its own cache, and then it will announce that it has those blocks and be able to respond to them over BitSwap. So it acts like a block cache. It keeps a relatively large set of blocks in cache, on the order of tens to hundreds of gigs, pulled from storage providers, but the thought is that these are transient. We can eventually have these running in the same regions as gateway instances, and just generally use this as a short-term way, over the next months, to bridge until we get some protocol upgrades. I will leave it there. There's an autoretrieve channel on the Filecoin Slack. Awesome, thank you so much. And on to Jennifer for Lotus. A lot of you may know that at PL, the huge, big Filecoin team has been decoupled into a lot of smaller teams. Over time, we have had Bedrock working on the markets problems, which will be taken care of, I swear. Meanwhile, the Lotus team has tried to find our own definition, our own identity, in the whole Filecoin community and ecosystem. So that's what we are sharing here today with you: what our thinking is. On the left side, as you can see, we're still a small team. We have eight folks, four engineers and four technical support engineers, who have been super helpful for a lot of the community.
Our mission, first and foremost, is that we serve the Filecoin network. We ship the protocol along with the other implementation teams, and we want to make sure all node operators can run a Lotus node, talk to the network, talk to the chain, and build their applications. Developers are a huge, huge focus for our user group. As you may already know, Lotus is slowly stepping back from market development; however, we want to enable folks like the Bedrock engineers to build the market protocols on top of Lotus. And also, with FVM coming, we want to make sure developers can have a very good experience, basically enabling a lot of use cases on top of Filecoin. That's why we think developers are a super important community for Lotus to focus on. The other one, it goes without saying, is storage providers: we need them to get all this data into the network. And finally, user support: we want to make sure that we maintain a good open-source community that helps us further build the Filecoin network. Next slide, please. So that's our mission scope; now, how do we ship all these things? Lotus has a bunch of things going on. Most of you will be curious how we work with the libp2p, IPLD, and FVM teams: we have been working very closely to get their stack shipped in Lotus, as we are a user of their tech. So how do we ship things today? We ship monthly feature releases, which are always optional releases. They include a lot of shiny things, new features. We are still shipping work on go-fil-markets, so all those changes are going into those releases too. But mostly we are focused on maintenance: we spend a lot of time on bug fixes and paying off tech debt, just to make sure our users can be happy using Lotus in production in a stable way. And we also have mandatory releases, which are for network upgrades.
Those are less stable on the timeline, because whenever Filecoin needs to upgrade, we do the same. As you can see in the screenshot, we haven't missed a monthly feature release for, I would say, eight months now. Even when we ship a mandatory network upgrade release, we make sure we keep the feature releases going, just so all the development in master gets shipped. So how do we get things developed and coordinated into these releases? We have a set of processes. First, we start our day in the team with cats and memes, as you already know. You can see here our Lotus cats, and we also have memes going on, to make our life a little bit more fun before getting into the real work. A lot of what our technical support engineering team does is make sure that we triage the incoming issues, in GitHub, in Slack, or in GitHub Discussions, within 48 hours, so the team knows what needs to be looked at, can make sure nothing's broken, and can feed it into our backlog. Next slide, please. There are things where it's like, oh my god, you have to fix that immediately, otherwise the Filecoin network may die. But a lot of the other things go into the Lotus backlog. So basically, our TSE team puts out this weekly triage summary, which feeds into our sprint planning. Before I get into the sprint planning, I do wanna mention another thing we do: quarterly project backlog triage, prioritization, and roadmap planning. Because Lotus is still kind of a stakeholder in the core development of the Filecoin network, and because we are within PLN, working closely with a lot of other teams, like the protocol teams, CEL, ConsensusLab, and the drand team, we kind of know more about what's coming, you know, on a six-month or one-year basis.
And that's why we try to keep everything in our backlog, just to keep everyone informed, including the Filecoin Foundation and other core devs. So we do a quarterly project backlog triage, just to help us understand what needs to be in the next network upgrade and start planning. So that's the quarterly thing. And also, within the Lotus team, we have our bi-weekly thinking session. A lot of other teams are doing amazing work in the ecosystem, and it's hard to keep up, so this is our chance to catch up with the world and understand what may be ready to come to us, since we may have to be the ship vehicle for their work. This is where we try to understand the problems that need to be solved and learn about the new work other people are doing, like drand timed encryption, or sharding, and all those things. After all this planning and backlog feeding, we do our monthly sprint planning. Basically, a week before the code freeze, we pick what we're gonna ship for the next release. We make sure we do the analysis, implement some low-hanging-fruit features, and implement the projects that are on our roadmap. Next slide; we're almost done. So those are all the development workflows. On community engagement and project management, we also have weekly community updates that we share in the Lotus announcements channel. I would recommend you join that channel to get timely updates from our team. We are also generating reports to inform you all of the feedback we're getting from the Filecoin community in general: what their pain points are, what the use cases are, and what new use cases people are looking forward to, so that we can unblock them. So, as you already know, we always have a lot of things going on. However, we do want to say that we welcome all incoming requests into our backlog.
We cannot guarantee when we can get to them, but we commit that we will eventually go through them one by one with you, or with grants or external teams. It would be super helpful if you give us a precise ask: the problem, the issue, the user story, and what the pain points are. Those help us prioritize all these requests. And if you are running a new project or a program, for example Evergreen or Slingshot, and you need our support to set a good foundation for the program, let us know: if you give us one to three months of lead time, we can probably find time to work with you and be responsive to your participants. The other thing we wanna do is onboard and support open-source contributors. So if you know any dev team that could be good for us to collaborate with, please let us know; we want to establish those relationships. That said, how can you actually find us? Again, creating an issue is always the way to go. Lotus is our GitHub repository, or you can go to builtin-actors, where we're one of the co-maintainers as well. We are very responsive in the public fil-lotus-dev channel, even more responsive than in DMs, but if you wanna reach out to our team, have a meeting, have a talk, you can reach out to me in DM as well, at Jenny Juju; but again, I check the public channels more often. We do have office hours, but honestly, just join the Fil office hours; most of our engineers just love hanging out there, so if you want to talk to us, join. Everything I just presented is on the public Notion page: you can see our roadmap, release schedule, mission scope, everything there. And we started our Twitter account early this year and are starting to build our own profile there, so follows and likes are highly appreciated. And that's that.
Awesome, thank you so much. And that is it for our agenda today, so everyone have a wonderful rest of May, and thank you to all of our presenters for those deep dives.