Welcome to our April PL EngRes all hands — excited to give everyone an overview of what we've been up to in the past month. We'll start with our working group update, and then we'll go into a number of deep dives on Boost, data programs and Slingshot, the client growth working group, and a number of upcoming events for our ecosystem that folks should be aware of. So as a reminder, we're one of many working groups in the PL network; we work to drive breakthroughs in computing and technology to push humanity forward. And that's because we think the internet is one of humanity's most valuable technologies, and making it stronger and better, and running on more user-agency-enabling primitives like content addressing, is the most valuable work we can be doing right now. Our mission as the EngRes working group is to scale and unlock new opportunities for IPFS, Filecoin, libp2p, and related protocols, and we do this by onboarding amazing developers, driving breakthroughs in protocol utility and capability, and scaling the research and development happening across the network openly and publicly. So our work encompasses many of these amazing projects — as we mentioned, there's a whole ecosystem of them that we build and help push forward. And we are made up of these 12 — I think we may even have more at this point — teams within the PL EngRes working group, working to drive new breakthroughs, improve storage and retrieval reliability across the pipeline, and generally push this ecosystem forward. We have a number of open roles across our working group.
So if you are looking for an opportunity to join this hundred-plus-person team of amazing humans, please do reach out to us. We have a job board here where you can look at some of the available roles, from engineering managers to software engineers, TPMs, product managers, frontend engineers, research engineers and scientists, data scientists, developer relations engineers, tech writers, community managers, and much more. If you're excited, please come reach out and join our community.

So our strategy for 2022 for the EngRes working group is: first, to focus on the talent funnel, helping more amazing humans onboard onto these programs and help make these protocols better. Second, focusing on robust storage and retrieval across the pipeline — helping many new petabytes of useful data onboard to the Filecoin network, and making sure that robust retrievals happen across IPFS and Filecoin, to really help scale adoption of these critical building blocks. Third, driving breakthroughs in programmability, scalability, and compute — this is a lot of our work around the Filecoin virtual machine (FVM), retrieval markets, consensus scalability, and being able to do compute over data. And finally, and most importantly, making sure that these really critical networks are running smoothly and upgrading reliably, and that we help burn down our technical and operational debt to make it easier and smoother to run them at scale going forward. So handing it off to Adin for IPFS.

Yeah. One of the things we work on, as Molly mentioned, is IPFS — trying to make the web work in a peer-to-peer way; content addressing is one of our big tools for this. We keep an eye on what's going on in the network. The number of peer-to-peer nodes is doing well, content is being found — nothing too surprising going on there. Still about half a second for content routing.
We've been getting a lot of PRs, both from core developers working on stuff and from community members who are sending more and more PRs to help out, which is great. We're trying to get those closed out, so we're getting better and slowly trimming down our backlog there. So let's go next.

All right, what's been going on this month? We had some security fixes go out in go-ipfs 0.12 and 0.11 patch releases. There's also a specification for a protocol called Reframe, which is a request-response protocol that will help us with things like delegated content routing. You'll hear more about that as it ends up in go-ipfs, which will hopefully be coming in the release next month. We just wrapped up our first implementers sync — there are more and more implementations around that are implementing IPFS in their own way, and having a good place to chat with them about protocol improvements and some of the tough problems we collectively face has been nice, and we're just getting started; the community calendar will have more. Yeah, lots of other changes making gateways better — people have been asking for these for a while — along with the libp2p resource manager, which you'll hear about later, and making codecs better. Generally, lots of libp2p things coming. Next month we're also working on data transfer. I'll hand it off to the js-ipfs folks.

Yes — first, the highlight of the js-ipfs stuff. We are in the end game, hopefully, of the conversion to TypeScript, so there's a release candidate available. It was shipped at the end of last month; you can get it right now if you install js-ipfs@next. And so we're making sure that all the components integrate well. The final core component still needs migrating; there's a PR in progress and we're waiting for some reviews on that.
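The delegated content routing idea mentioned in the IPFS update can be sketched in miniature: a constrained node hands provider lookups off to a remote router instead of walking the DHT itself. This is an illustrative sketch only — the class and method names are hypothetical, and the real Reframe protocol is a typed HTTP request-response protocol, not this Python interface.

```python
from abc import ABC, abstractmethod


class ContentRouter(ABC):
    """Anything that can answer 'who has this CID?' queries."""

    @abstractmethod
    def find_providers(self, cid: str) -> list:
        """Return peer IDs believed to hold `cid`."""


class DHTRouter(ContentRouter):
    """A full node doing the (expensive, multi-hop) DHT walk;
    modeled here as a simple in-memory lookup table."""

    def __init__(self, records):
        self.records = records

    def find_providers(self, cid):
        return self.records.get(cid, [])


class DelegatedRouter(ContentRouter):
    """A constrained node (e.g. a browser) that forwards every query
    to a delegate; in Reframe the delegate sits behind an HTTP endpoint."""

    def __init__(self, delegate: ContentRouter):
        self.delegate = delegate

    def find_providers(self, cid):
        # The light node never touches the DHT itself.
        return self.delegate.find_providers(cid)
```

The point of the indirection is that both routers satisfy the same interface, so applications don't care whether routing happens locally or is delegated.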
We also have some other features in the works — we've got implementations of Yamux and Circuit Relay v2, which are hopefully going to be finished at some point in the near future. So what's next? Yeah, we're going to be publishing a roadmap of what's going to happen with js-ipfs. It's going to be the best testing platform to build your distributed apps on, with all of those components you could run — it's going to be incredible, we're super excited. We're also going to be improving DHT support, making sure it's enabled by default and it works. It's enabled by default right now in client mode; the extension will be opening it up in server mode as well. It's going to be amazing — we hope to get it to you very, very soon. Awesome, over to the libp2p folks.

So here's an update on libp2p, the peer-to-peer networking stack. Next slide, please. Okay, this was not supposed to be in the presentation, but okay — we have a bunch of nodes running in different networks: the IPFS network, the Filecoin network, the Kusama network. You can see it's pretty stable, going up and down a little bit. Next slide. The entire libp2p team is currently in Paris, and we'll be giving a few talks tomorrow and on Saturday. The libp2p team introduced a product board at this link here, so you can see what we are currently working on. Regarding our hole punching project, Project Flare: we've rolled out go-libp2p 0.19, where Relay v2 nodes are now automatically discovered, and this will ship with the next IPFS release. These nodes will be able to automatically coordinate hole punches, which is really exciting. And we have a collaboration with Dennis to measure those hole punch success rates, and we'll probably see numbers there pretty soon. Quite excitingly, our team has grown — we have three new team members: Kuzama has joined on the Rust side, Marco is on the Go side, and Melanie is currently in Launchpad. So what's up next?
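Why do hole punches need a coordinator at all? A toy simulation can show the core idea: a typical NAT only admits inbound traffic from an address its inner peer has recently sent traffic to, so a one-sided dial fails, while two coordinated simultaneous dials succeed. This is an illustration of the concept behind libp2p's relay-coordinated hole punching, not the real DCUtR protocol or its API; all names here are made up.

```python
class Nat:
    """A NAT that only admits inbound packets from addresses its inner
    peer has recently sent outbound packets to (a 'pinhole')."""

    def __init__(self, public_addr):
        self.public_addr = public_addr
        self.pinholes = set()

    def send_outbound(self, remote_addr):
        # An outbound packet opens a pinhole for that remote address.
        self.pinholes.add(remote_addr)

    def allows_inbound(self, remote_addr):
        return remote_addr in self.pinholes


def plain_dial(dialer: Nat, listener: Nat) -> bool:
    """One-sided dial: the listener's NAT has never seen outbound
    traffic toward the dialer, so the inbound packet is dropped."""
    dialer.send_outbound(listener.public_addr)
    return listener.allows_inbound(dialer.public_addr)


def coordinated_hole_punch(a: Nat, b: Nat) -> bool:
    """Both sides dial at (roughly) the same moment — coordinated over a
    relayed connection in the real protocol — so each NAT sees outbound
    traffic first and opens a pinhole for the other side."""
    a.send_outbound(b.public_addr)
    b.send_outbound(a.public_addr)
    return a.allows_inbound(b.public_addr) and b.allows_inbound(a.public_addr)
```

The relay's job in the real protocol is exactly that timing step: giving both NATed peers a channel over which to agree on "dial now."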
For go-libp2p, we are continuing the ongoing effort of consolidating our repos. We are working on Testground testing to do more interop testing between different implementations. We are also working on a new protocol to migrate streams from one connection to another connection, which will help us with relay connections. And we've started working on WebTransport — there's a draft spec, and next up will be browser interop. On the Rust side, we're also working on Testground and getting QUIC running in Rust. Awesome.

Hello. Yeah, that's a new slide, a new team. So I'm part of the IPDX, the developer experience team. We came up with this name only two weeks ago — it's quite new. There are two of us, me and Lauren. And what we want to do is to empower PL EngRes to innovate by simplifying the workflows they go through every day. Because there are only two of us, our immediate focus is on the IP stewards, but our vision goes beyond that, and you can read more about it on our public Notion page. And yeah, what are we doing right now? We have two bigger projects on our plate at the moment. One of them is GitHub management: we want to streamline GitHub configuration management so that it's easier, safer, and in general better for everyone involved. We want to drive GitHub configuration management through GitHub PR workflows, and that project is ongoing. We have the infrastructure set up for most of our GitHub organizations right now, and the next step will be to reach out to developers and promote the new way of doing things. The other big project we are involved in is Testground, where we collaborate with Bloxico on the future of Testground, and more immediately we're working closely together with libp2p on interoperability testing. And if you want to find us, our public Notion page is a great source of information on everything we do — we try to be really open about it.
We have a channel in the IPFS Discord called #ipdx, and we're also quite active in the Testground channels. We have weekly office hours on Mondays at 4pm UTC; that event is available on the PL EngRes calendar, so check it out, join, and say hi. And yeah, you can also drop us an email at ipdx@protocol.ai. Thank you.

Awesome, thanks so much Peter. I think Jennifer is currently on a flight, so I'll cover the Filecoin side unless she pops up in this slide. As many folks know, Filecoin is a major endeavor here as well. Looking at our top-level KPIs: total network storage capacity continues to grow smoothly. Deals on Filecoin are actually up by over a million deals since the last time we updated this, which is awesome — growing really quickly — and that also corresponds with data stored on Filecoin. We are about to hit 70, but we're currently at 69.999, which is pretty cool. And we have crossed 60 million NFTs stored on Filecoin as well, which is great. So lots of growth. Part of this also comes with increased use of snap deals, from the Filecoin upgrade that happened at the beginning of March. And with Lotus 1.15.2, which is going out, I believe, next week, the storage provider experience using snap deals has gotten a lot better — a lot of UX fixes happened within the protocol. So that's helping snap deals increase in usage throughout the network, which is great to see. The Lotus team is hard at work on network v16, the Skyr upgrade, with FVM development happening as part of that, which brings a lot of really, really important changes. The Filecoin crypto team is focused on Halo 2 to bring even more recursion to our proofs, which can help us scale chain bandwidth significantly. And then big snaps to the Bedrock team for launching the index provider, which went out in Lotus 1.15.1 — indexer nodes are now indexing new storage providers in the Filecoin network by default, which is great. There's also a new FIP for index providers.
The FVM working group is really busy hardening and auditing all of the FVM work. So please check out the bug bounty program and audits if you want to help contribute to that ongoing auditing work and help harden the FVM, to make sure it can be a really secure and smooth upgrade for the whole network. Also, big snaps to our friends working on Boost, which is the reference implementation of the new markets protocol that's in testing with the XPX program. The challenge for this area is just that there's a ton of development that has to happen in FVM land in order to land this M1 milestone — so check out the tentative timeline that we're keeping up to date, to make sure this can be a really secure upgrade to the network. Big thanks to the Forest team, who have kind of joined forces with the Lotus team to rally together and improve test coverage over the new built-in Rust actors, and to all of the other groups who are getting involved to help bring this FVM milestone across the line. It's a big deal, and a lot of help is happening there. Cool — and passing it off to NetOps for team updates.

Okay, this is NetOps. The KPIs are still looking good — about as good as last time. We are working to improve our TTFB to be less than five seconds, and hoping we can make that happen. The pinned data volume is still increasing; you can see the numbers increasing slowly. Weekly IPFS gateway requests and unique IPFS gateway users are also increasing; we are checking to make sure our cluster can handle it going forward. Our network update: uptime is still fitting our need — 100% for drand; the API, chain.love, 99.9%; IPFS gateway 99.9% as well; for everything else, it's 100%. We're still keeping a check to make sure we can support another boost in the usage of IPFS. Next — George.

Hi everyone, a few Bifrost team updates. Cluster version one has landed and has been rolled out to web3.storage — thank you, Hector, for that.
It features improved memory usage, which allows us to tweak the Bitswap parameters so we can increase transfer speeds — graphs look really good. Like Jesse mentioned, the IPFS gateway has hit 950 million, or close to a billion, requests; P95 time to first byte is around six seconds, and we're working on getting that down to five seconds. Maya has done some excellent work on the synthetic monitor, which should help us investigate potential bottlenecks in some of our regions for the gateways. He has also refactored our metrics and alerts: we have a lot less noise now in the alerts channel, and alerts should actually be actionable going forward. We've also just hooked it up to Opsgenie, so it should be a lot better than it was — at least less noisy. Also, opportunities: we're in the process of upgrading the legacy IPFS and cluster nodes in web3.storage and NFT.storage to use striped disk volumes with XFS, which should all result in better IO — the biggest bottleneck for the clusters and the gateways. We're upgrading to NGINX Plus, which gives us: better health checks (active health checks); latency-based traffic routing, as opposed to number-of-connections routing, which is what we're using now; and active upstream DNS refresh, which will allow us to add more servers in DNS, and the load balancer will automatically discover them and start sending traffic to them. So that should help, and it should also provide us with a ton more internal metrics, especially around caching, which should help us investigate how our caching is doing and how we can improve it. So we're tweaking our caching a little bit: we're introducing cache slicing for more efficient caching and retrieval, and also cache locking, which allows us to do byte-range requests straight to the origin server. Fully cached content should result in better transfer speeds and time to first byte.
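For readers unfamiliar with cache slicing and cache locking, here is a rough sketch of what such an nginx configuration can look like, using the standard `ngx_http_slice_module` and proxy-cache directives. This is an illustration, not the actual gateway configuration — the cache zone name, slice size, and upstream are assumptions.

```nginx
# Hypothetical fragment; "ipfs_gateway_cache" and "origin_upstream" are
# made-up names for illustration.
location / {
    # Cache slicing: split each response into 1 MiB slices, cached
    # independently, each fetched from the origin with a Range request.
    slice              1m;
    proxy_set_header   Range $slice_range;
    proxy_cache_key    $uri$is_args$args$slice_range;

    proxy_cache        ipfs_gateway_cache;
    proxy_cache_valid  200 206 1h;

    # Cache locking: collapse concurrent misses for the same slice
    # into a single upstream request.
    proxy_cache_lock   on;

    proxy_http_version 1.1;
    proxy_pass         http://origin_upstream;
}
```

With slicing, a byte-range request for a large, partially cached file only has to fetch the missing slices from the origin, which is where the time-to-first-byte win comes from.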
And also we're putting together a dashboard in Grafana dedicated to the IPFS clusters, which should help with investigating when nodes are under heavy load. Awesome stuff, thank you. Over to PL Infra.

So one project that we started is GitOps — this is a GitHub-driven way to manage Kubernetes clusters and applications. We have our first onboarding group going through that: storetheindex, Sentinel, and Infra itself. storetheindex is the first one that will have an operational cluster with applications managed through GitOps; Sentinel is next up. We have the chain snapshot service: milestone one, which is snapshots in S3 storage, is nearing completion — big thanks to Travis for his work on that. For api.chain.love, we have 100% uptime, 164 million requests served, and an average of 63 requests per second over the last 30 days. We're also planning a version two, so we'll have some improvements on capability and performance there, and Corey worked on an awesome benchmarking tool so we can actually see if Lotus can support our growth targets for chain.love. The opportunities that we have: GitOps onboarding group number two is scheduled for June, so for any teams that would like to move their application deployment or cluster management to a GitOps workflow, please contact me on Filecoin Slack. Also, for chain.love, we expect to have better metrics on users, so that we can understand usage and instrument rate limiting on a per-key basis. And we also have a project lifecycle overview on our Notion page, so you can check out where our various projects are at, with links to the Notion docs for the projects as well. Awesome, super cool. Let's move on to Sentinel.

Hey there, Forrest here, with the Sentinel team.
As a reminder, the Sentinel team's mission is to provide software and data for monitoring and analysis of the Filecoin chain. Updates: this week we released Lily v0.9 with support for Lotus 1.15, which enables experimental FVM support. We upgraded the core processing pipeline of Lily, which increases throughput and observability — some graphs showing that here: way fewer goroutines, way less data skipped. We added support for chain head tracking and improved gap detection. On the infrastructure side, we are migrating over to Weaveworks Kubernetes deployments, and we updated our Helm charts for better use by the public. And we've migrated our alerts to our Notion page — you can see that there. Coming up next, we're working on v0.10, which is in progress right now. This will support distributed indexing, allowing Lily to be scaled horizontally so that we can process the chain in real time. We've put together an architectural overview and are implementing this as we speak. Over to drand. — I guess no drand right now; maybe next week. Over to Nitro, David Choi.

Hey, yeah. So Nitro — we are the team that builds NFT.storage and web3.storage. Quick highlights for KPIs, as of yesterday. Believe it or not, I used to make slides for a living. But anyway: NFT.storage crossed 60 million uploads, as Molly mentioned — it just continues to grow and grow, and the team is super proud of what it brings to the ecosystem there. One big highlight is that between NFT.storage and web3.storage combined, 350 tebibytes have been uploaded to Filecoin. There are some crazy numbers for overall Filecoin useful data uploads, and we're just a drop in the bucket there. But yeah, if you think about how small your average NFT is, that's a lot of NFTs — so, you know, super happy to see that number growing.
One big highlight is that the IPFS Elastic Provider is live. It's the cloud-native IPFS implementation that we've been developing with NearForm, and it's providing NFT.storage records to the DHT through the indexer nodes. The slide says historical ingest is still in progress here, for referencing historical NFT.storage data, but I just got word that the historical ingest is complete, so we're going to do some testing on the retrievability and speed there and things like that in the coming weeks. This is one part of a general new uploads interface that we've been working on, trying to have more scalable infrastructure for both reads and writes. It's making good progress. We're combining the infrastructure between NFT.storage and web3.storage into a single upload interface, and as a result it's going to be a little bit of time until NFT.storage and web3.storage can directly take advantage of this interface. But we're going to hook niftysave up to it soon, in the coming weeks, so that's super exciting. And then also check out NFT UP: it's a desktop app that Alan created as the easiest way to upload large directories to NFT.storage. So if you know of anyone doing, like, 10,000-PFP drops or NFTs and things like that, with many gigabytes of data in a directory, point them to NFT UP. We've been getting reviews from users and a lot fewer questions from non-engineer artists out there asking how they can use NFT.storage, so huge thanks to Alan and Chris for making that happen. And the team is meeting in Miami next week for an onsite — super excited to see everyone in person. If we are a little bit slower to respond, that's probably why.

Real quick call-out on a challenge we experienced last week: we had our second Google-driven gateway blockage in, I think, probably four weeks or so. This time it was due to them writing a bad regex statement for Chrome Safe Browsing.
This was the same kind of thing that happened previously — they had promised this wouldn't happen again and that we'd never get the whole NFT.storage domain blocked. But, you know, this kind of stuff does happen, and it does illustrate the dangers of too much power on the internet sitting with one body, even with the best of intentions. So we're publishing a blog post on our experience around this tomorrow, and hoping to continue the dialogue from there about running an IPFS gateway in the web2 world. That's it. Awesome, thanks David. Over to Jacob for Bedrock.

Cool. Yeah, so Boost is getting ready for launch next month. We're currently working on integrating with downstream providers, hoping to integrate with them as they roll out support for Boost. Textile already launched support for Boost this week, which is super awesome, so we're going to start testing with folks who've already upgraded to Boost and are early adopters. And on the storetheindex side of things: as Molly mentioned, we launched the index provider in Lotus 1.15.1. So far we've ingested indexes from about 55 storage providers, and the rate fluctuates quite a bit, but we're ingesting billions of index entries per week so far, which has been quite a bit. Rough estimations right now: that's about 10% of deal capacity on the network, and we're rapidly increasing that. With that, some of the work we're doing right now is planning out what the future of scaling looks like, so we can scale this horizontally as well as distribute it. One really nice highlight is that Ken Labs is now running an indexer, so we're getting more organizations running indexers — not just what the Bedrock team is running at cid.contact. We'll get more indexers out into the network and then provide support for everyone to be able to scale horizontally over time, so we can support all the massive growth on Filecoin.
Yeah, and we've also been meeting with retrieval markets, storage providers, and a few other groups to understand retrieval priorities and storage scaling priorities — some really great discussions this week, as well as last week; there's a Boost link later in the deep dive. And as we get ready for Boost launch, we're looking at what we can do after that to help these large-scale enterprise storage providers scale their systems, so we're looking at splitting out the markets process into multiple systems there. So very exciting. And yeah, I think that's most of the stuff. — Yeah, that's many things, and they're all awesome, so thanks so much for sharing. Over to Marco for ConsensusLab.

Yeah, hi all. On the ConsensusLab side, we have wrapped up the initial PoC of hierarchical consensus. We have a bunch of demos — if you follow us on Mother of All Demo Days, we are usually taking up the first half hour as ConsensusLab, presenting demos of the stuff that we're building, if you missed it. We have an accepted paper on the hierarchical consensus architecture at the DINPS workshop. And now we're moving to productization — first things there started in April. We shipped Eudico Garden, which is a set of scripts for managing and deploying Eudico testnets, basically on AWS using AWS and Terraform. We have CI and E2E tests now for Eudico, and we are now moving to integration with the FVM, translating our two actors from being native Filecoin actors to FVM ones. We're also wrapping up this stage of the proof-of-checkpoint protocol — checkpointing the proof-of-stake protocol to Bitcoin. The code name is Pikachu now, and basically we're wrapping up the initial smaller-scale deployment, which uses Bitcoin testnet and Filecoin, and submitting a paper for that next week. And for consensus for Eudico subnets, we essentially have the PoC of Tendermint running as a consensus for Eudico subnets.
And we are now fully staffed to tackle the big project on efficient subnet consensus — there we presented one of the building blocks that we're going to use. On the highlights side, hiring is going super well: Sergey joined at the beginning of April as a research engineer working on the efficient subnet consensus project; one research scientist is joining in July, and one intern is joining in May. In addition, our hiring pipeline is full — we essentially have another three research engineers and one research scientist in the decision stage of hiring. So this is our next stage. We have the papers that I mentioned; we're collaborating intensively with CryptoEconLab on oracles, consensus crypto-econ, fleshing out gas models, and other things. And we are paying a lot of attention to impact and working in the open with the community — we'll also be giving an invited talk. This year ConsensusDay is going to ship as a CCS 2022 workshop; CCS is basically one of the top security conferences in the world, and we have some PC co-chairing duties there. We launched ConsensusLab Discussions — essentially GitHub Discussions — which we are using as a forum to discuss our ideas and improvements to our protocols in the open. One more update on our side: we have also been discussing with the compute-over-data folks the right ways to leverage hierarchical consensus to help compute over data, basically to improve its current state. Let's put it that way. Thank you very much. — If research engineers and research scientists are watching, they should clearly jump in and join all this great work. And off to Irene at CryptoNetLab.

Hi everyone. Sorry for my low, bad voice, but I've got a cold. So today, instead of giving an update on the projects that CryptoNetLab is working on, I want to use my two minutes to show you the prototype of a project: the data retrievability oracle.
The goal of this project is to guarantee retrieval from a decentralized storage network — Filecoin. We know that we have incentives and penalization for providers to guarantee storage, but we do not have these tools for retrieval. So here the goal is to have these tools, and the same guarantees, for retrievability. What makes this very hard to do is that there is no equivalent for retrievability of proof-of-spacetime and the guarantees that proof-of-spacetime gives for storage — we don't have such a proof of delivery for retrievability. So we have to find another solution: for example, we have to overcome this impossibility result with a retrievability protocol that tests whether a file can be retrieved from the provider or not, and in the negative case can slash. In our MVP, the oracle is implemented by a set of referees that are semi-trusted, so we only trust a portion of them. They basically ask again, in a very smart way, for the file, and they have the power to slash (or not) the provider if the provider is not serving the correct file. There are a lot of details and subtleties beyond what I'm telling you, but they are very important for having something that is scalable, doesn't have a big on-chain footprint, and can work at web scale. What I want to do now, since this is live, with a smart contract implementing it live on testnet, is show a video demo of our prototype. Thank you.

This is a quick overview of the prototype we built. On the right we can see the provider, and on the bottom we can see the referees. On the left there's a minimal user interface where you can enter and try for yourself how the deal works. And of course there's the contract. What we can do right now is create proposals and ask for bids. Now we are getting a quick proposal.
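The referee-based oracle described above can be reduced to a small sketch: each semi-trusted referee attempts a retrieval check, and the contract slashes the provider's collateral only if a quorum of referees reports failure. This is a toy illustration of the quorum-slashing idea, under assumed names — it is not the actual MVP smart contract or its API, and it ignores the on-chain and incentive subtleties the speaker mentions.

```python
from dataclasses import dataclass, field


@dataclass
class Provider:
    collateral: int
    files: dict = field(default_factory=dict)  # cid -> content

    def retrieve(self, cid):
        # Returning None models a provider withholding the file.
        return self.files.get(cid)


def referee_check(provider, cid, expected) -> bool:
    """One referee's vote: True means 'retrieval failed'."""
    return provider.retrieve(cid) != expected


def settle(provider, cid, expected, referees, quorum, penalty):
    """Collect referee votes; slash the provider's collateral only if at
    least `quorum` of `referees` report a failed retrieval. Requiring a
    quorum is what makes individual referees only semi-trusted."""
    failed_votes = sum(
        referee_check(provider, cid, expected) for _ in range(referees)
    )
    if failed_votes >= quorum:
        provider.collateral -= penalty
    return provider.collateral
```

In the real system the referees perform independent retrievals over the network rather than sharing one local call, which is exactly why a quorum (rather than any single referee) must agree before slashing.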
Now we can see that everything works automatically. Anyone can try it, just to understand what's going on. Thank you. — Awesome, thank you Irene, it's super awesome and great to see a live demo on testnet.

So now we're into our spotlights on awesome things that have shipped in the past month, starting with Lotus. We still may be missing Jen — she's on a plane — so I can hop in here. This is highlighting the most recent shipped version of Lotus, 1.15.1, which has experimental FVM support and also the indexer provider on by default, and the upcoming 1.15.2, which has a lot of things that have been heavily requested by the storage provider community: window PoSt workers — these are actually window and winning PoSt workers, which help you scale your proving process — an enhanced sealing scheduler, and a lot of improvements to snap deals, which make doing those on the live network much, much easier and much better for storage providers. So there's lots in these latest Lotus feature releases to enjoy. On to Wes for the compute-over-data summit.

Thanks Molly — can you hear me okay? Awesome. We were very fortunate to have a few days in Paris early this month to bring together a number of storage providers and a number of technology partners, plus a group of us from the Bacalhau team, to really talk about what has been done in this space previously and what the history is, starting with a session giving a sense of the history of compute over data. We learned from partners that have tried to implement their own ad hoc solutions to distributed compute over data. We also had folks from the crypto-econ team and a number of other groups weigh in on considerations that we should keep in mind as we start to build this. But really, this was the first time for everyone to get together and form a community around this compute-over-data project.
There was a series of prepared lectures on day one; day two was more of an informal unconference session. There are a lot of considerations we need to start planning for related to security, reproducibility of the data sets, and verification of the data, so we talked about different trade-offs in the architecture. It was a lot of fun, and we were really fortunate to have really smart people there with us. But the biggest takeaway is that we definitely have the interest of the storage providers, and that'll be critical for the compute layer as we build out Bacalhau. A lot of the architecture was built around the concept of pluggability: if we do verification, we want to allow people to plug in their own verification, and we want to support as many different runtimes as possible — things like Wasm are really growing in popularity. So I think we have a good sense of the initial considerations that we want to build out for the next couple of months, and as you can see here, the goal is just to get the system up and running so we can do nice demos. Leading into the next few months, we're going to be focusing more on outbound and getting user validation of the system, highlighting each month one notable research or academic user on the system. And then, as you can see, we've got some longer-term goals here for the next few months after that. But we are off to the races, and if anyone would like to contribute or find out more information, we have the Notion summary linked there, and all of our code is also public on the Filecoin page — we'd love to have your opinions and feedback as we build it out. Awesome. Over to ZX and Alex for the CryptoEconLab summit.

Great, hey, this is Alex for CryptoEconLab. I just want to review what we did at our CryptoEconLab summit — it was the first time that we had all met in person. Our lab, once we add one more person, will have grown 200% in the first three months of the year.
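The "pluggable verification" idea from the compute-over-data update is easy to sketch: the compute layer accepts verification as an interface, so deployments can swap strategies. All names below are hypothetical — this is not the actual compute-over-data codebase, just an illustration of the design pattern under stated assumptions (a deterministic job that can be re-run for checking).

```python
from typing import Callable, Protocol


class Verifier(Protocol):
    """Pluggable verification strategy for a compute job's result."""

    def verify(self, job: Callable, data: bytes, result: bytes) -> bool: ...


class RerunVerifier:
    """Verify a deterministic job by re-executing it locally and
    comparing outputs (expensive but simple)."""

    def verify(self, job, data, result):
        return job(data) == result


class TrustingVerifier:
    """Accept any result — a stand-in for 'no verification'."""

    def verify(self, job, data, result):
        return True


def run_with_verification(job, data, executor, verifier: Verifier):
    """Run `job` on some (possibly remote, untrusted) executor, then
    check the result with whichever Verifier was plugged in."""
    result = executor(job, data)
    if not verifier.verify(job, data, result):
        raise ValueError("verification failed")
    return result
```

The payoff of the interface is that swapping `RerunVerifier` for something cheaper (sampling, attestation, and so on) requires no change to the execution path.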
So, a lot of onboarding and keeping busy there. Now that we have this new capacity, we've decided to split our work streams. We previously had an everybody-works-on-everything approach, largely because some of us were new to the area and needed to catch up, but now we can split things into the different work streams listed right here. We have a lot of work on public goods, consensus, et cetera, and we're working on Project Atlas, a way to marry Filecoin with geospatial processing, which seems to be a very natural marriage. One thing we did do is hold our first-ever CryptoEcon Day summit at Devconnect, with 60 attendees and 11 talks. We scheduled it pretty quickly, and I think we can do a much better job going forward; we're now looking to do a quarterly Filecoin Economics Day at major conferences, so we're working on that. And with that, I will turn it back. Awesome. Good learning there on using summits like this as an opportunity to get to know people who might want to join the team; I feel like that's a good lesson for other teams as well. Thank you. Passing to Raul. This is Raul from the FVM team. The FVM team has had a long and needed colo in Amsterdam for three weeks; it's actually ending tomorrow, so we're still here. This time has been crucial for working through some critical items for the upcoming M1 milestone and for fleshing out the scope and work breakdown for M2. As a result, the gas parameters for FIP-0032 are almost finalized, and an update to the draft FIP is probably coming tomorrow. We also made a ton of progress on various hardening work streams for M1, including the ones listed here. We worked through the NV16 testing and deployment timeline with the Lotus and infra teams, and had many product conversations around the EVM, native scoping, and developer experience prioritization.
We also had the opportunity to connect with several collaborators, including the Fission team, to discuss technical design details for the Filecoin EVM implementation. Awesome. Patrick, retrieval markets. Yes, we've had a retrieval markets colo, and specifically a Saturn colo, this week in Amsterdam as well, with the Saturn team here for one week. On Tuesday we held a retrieval markets workshop and intro to Saturn, with four speakers and over 30 people watching or attending the event. Similar learnings to what Alex mentioned for CryptoEcon Day: we learned a lot about putting on events at the last minute, and there are lots of lessons to apply when doing it again in the future. We've also had the Myel team join for the last two days, which has been great. It's great to have the Saturn and Myel teams in the same place; even though they're building their own networks, they've been breaking bread together and sharing stories of building retrieval networks, so it's all been very happy. One of the takeaways is that we've got a much clearer route towards launch for Saturn. It was also great to hang out with some of the cryptoeconomics team earlier in the week and make progress in that space too. As Jake mentioned, we also had a Bedrock x retrieval markets meeting with the Bedrock team and covered a lot of interesting things. So it's been a great week, and things are much clearer for the future. Great-looking roadmap for the next couple of months as well. Over to Petar for Edelweiss. Hi everyone. I want to make a quick announcement of a developer tool we've built called Edelweiss, an extensible RPC protocol compiler based on IPLD. The tool is now production-ready at its first milestone. It's generally meant to streamline the process of formally defining future protocols as well as legacy ones. It has a lot of features, which you can read about on the GitHub repo, and it's quite flexible.
The current adopters, soon to be in production, are a few projects: some in the IPFS ecosystem, like Hydra and IPFS itself, as well as storetheindex from the Filecoin ecosystem. Feel free to reach out if you want to learn more about it. Thank you. Awesome, great to see that launching. Cool. In the remainder of our time, we want to run through these deep dives, starting with Boost. I'll be quick. So yes, Boost is the new version of the markets process for Lotus. It's going to be a tool for storage providers, along with some client tooling. It supports the existing version of the deal protocol, v1.1.0, and also introduces a new version, v1.2.0, which lets you select the data transfer mechanism and gives us some other neat capabilities. When we were building out Boost, one of the things we wanted to do for storage providers was give them access to more information in their system. This is currently a design mockup for the UI, and what we wanted to showcase is the availability of information. The web UI is really good for small-scale miners that don't have a lot going on; as we've been talking with very large enterprise storage providers, we've found it's very challenging for them at scale. The advantage is that there's a GraphQL endpoint they'll be able to curl and query, and that we can build CLI tooling on top of, so they can start operating at massive scales of tens of thousands of deals. So this is really good, and we can move on to the next slide. With this, as I mentioned, we introduced new data transfer protocols for storage. One of these is HTTP transfers. We have a lot of folks who build CAR files and upload them to servers like S3, to then transfer them over to storage providers. So we built a way to make that into an online deal: now, when you're negotiating, you can say, hey, here's my CAR file and here's the CommP for it, go grab it. The storage provider will immediately download and store that file.
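To make the online-deal flow above concrete, here is a minimal sketch of the idea: the client hands the provider a URL plus a checksum instead of pushing the bytes itself, and the provider pulls and verifies before storing. All names, fields, and the choice of sha256 are illustrative assumptions, not Boost's actual wire format.

```python
import hashlib

def make_online_deal_proposal(payload_cid, car_url, car_sha256):
    # Hypothetical proposal shape: instead of transferring the CAR file
    # over graphsync, the client points the storage provider at a URL.
    return {
        "payload_cid": payload_cid,
        "transfer": {"type": "http", "url": car_url, "sha256": car_sha256},
    }

def sp_fetch_and_verify(proposal, fetch):
    # Storage-provider side: pull the CAR bytes (fetch stands in for an
    # HTTP GET) and verify the checksum before accepting the deal.
    data = fetch(proposal["transfer"]["url"])
    if hashlib.sha256(data).hexdigest() != proposal["transfer"]["sha256"]:
        raise ValueError("CAR checksum mismatch; rejecting deal")
    return data

# Simulated transfer: an in-memory "server" plays the role of S3.
car_bytes = b"\x0aexample-car-file-bytes"
server = {"https://example.com/data.car": car_bytes}
proposal = make_online_deal_proposal(
    payload_cid="bafyexamplepayloadcid",
    car_url="https://example.com/data.car",
    car_sha256=hashlib.sha256(car_bytes).hexdigest(),
)
stored = sp_fetch_and_verify(proposal, server.get)
```

In the real protocol the provider would verify the piece commitment (CommP) included in the deal; the sha256 here just stands in for "verify what you downloaded before storing it."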
And the nice thing, as I mentioned earlier, is that Textile has shipped support for this, which has been really nice, because previously they had to pull in all of this data and serve it to storage providers themselves. Now we've skipped all of that: they can just say, hey, storage provider, the CAR file is over here, go get it and make the deal. We've also seen some 20x speed improvements over the current storage protocol speed in some initial tests, which is really good. We also added support for the HTTP-over-libp2p protocol that's been around in libp2p for a while, which gives us a very lightweight streaming protocol. We've been working with the Estuary team to integrate that into filclient in Estuary, and our goal is to roll it out next week and start testing with some of these storage providers. Throughout this whole process, both Textile and Estuary can fall back to deal protocol v1.1.0, so for folks who haven't upgraded to Boost yet, it's not a problem: they can use the legacy protocol and move on with their day. Next slide. With this, there's boost.filecoin.io. We've got a bunch of docs there and will be adding tutorials as well; we try to provide as much information as possible. We also released some utility commands with Boost. There's a boost client, which folks can use to make deals, check deal statuses, things like that. And there's a boostx utilities command: we discovered that people were hunting around for different tools to build their CAR files, calculate CommP, and all of that, so we tried to bring all of that tooling into Boost to make deal making much easier. For folks who don't know what CIDgravity is, CIDgravity is tooling built on top of the Lotus process that lets folks automatically configure their storage providers to handle certain deal rejections and do certain analysis.
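Tooling like this boils down to an accept/reject rule evaluated against each incoming deal proposal. Here is a tiny sketch of that pattern; the rule fields (client blocklist, size cap, price floor) are hypothetical examples, not CIDgravity's actual configuration schema.

```python
def make_deal_filter(min_price_per_epoch, max_piece_size, blocked_clients):
    # Returns a decision function in the spirit of a deal filter that a
    # storage provider could run against each incoming proposal.
    def decide(deal):
        if deal["client"] in blocked_clients:
            return False, "client is blocklisted"
        if deal["piece_size"] > max_piece_size:
            return False, "piece too large"
        if not deal.get("verified") and deal["price_per_epoch"] < min_price_per_epoch:
            return False, "price below floor for unverified deals"
        return True, "accepted"
    return decide

decide = make_deal_filter(
    min_price_per_epoch=500,
    max_piece_size=32 << 30,          # 32 GiB pieces
    blocked_clients={"f1spamclient"},
)
ok, reason = decide(
    {"client": "f1goodclient", "piece_size": 16 << 30,
     "price_per_epoch": 700, "verified": False}
)
```

The design point is that the rule lives outside the node itself, so operators can change acceptance policy without touching Lotus or Boost internals.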
We've been working with them, and they've already updated to support the latest version of Boost, which gives them additional information to continue building tooling on; one of the things we wanted to do with Boost was allow more extensibility of these underlying processes, and this showcases the support we're providing there. Next slide. There's a link here to the storage provider AMA demo we did last week. We don't have enough time to go through the demo here, but it was about 15 minutes, and you can see Anton going through the whole deal flow process, a lot of what's going on in the UI, and all the new snap deal features in use, which is really great. In terms of rollout, we're going to announce a beta phase for early adopters in the next couple of weeks; we want to finish the integration rollout with Estuary before we roll into that. Then we're aiming for the full launch in mid-May. While prepping for launch, we're also starting to look at what's next: scaling for large enterprise storage providers so they can handle onboarding hundreds of terabytes of deals per day, and planning retrieval support over HTTP and/or Bitswap. So, more to come. If you want to follow along, join the boost channel in the Filecoin Slack. Thanks. Woohoo, thanks so much. Passing over to Deep for Slingshot. Hey everybody, hopefully you can hear me. I'm going to try to keep this quick, because I know we still have at least three more sections, including this one, to get through, and about six minutes to go. Most of you are already familiar with what Slingshot is: a community program for data preparers and storage providers to onboard lots of open, interesting data sets to the Filecoin network. We kicked this off right around mainnet, and as of today we're nearing, actually crossing, 37 petabytes of data onboarded over the course of the program.
We're currently in phase 2.8 of the program, which will likely be the last, if not the penultimate, phase before we transition to something more interesting and unique, which I'll share towards the end of my section. The current focuses are to get up to 40 petabytes of data onboarded, and to bring the roughly 61 unique petabytes up closer to 65; all of that is visible in the data explorer on the website, which is linked here. Last and definitely not least, the focus right now is on improving and raising the quality bar for how data preparers add to the network and to the programs, and on ensuring that the data being onboarded is retrievable. Next slide. The reason this is labeled programs and not Slingshot is that I also want to walk you through some of the other components we've been working on that partner with it, and how they all tie together. The first of these is recovery. You probably heard about the data loss incident in December, which resulted in us setting up a Slingshot Recovery umbrella with two separate programs within it. One is Restore, where we re-source data from outside the network, because we completely lost the replicas we had and needed to go back to the original sources. The other is Repair, where at least one replica of a specific CID is still available within the network, and we wanted an automated self-healing mechanism to identify that replica and incentivize rebuilding the lost replicas. In a similar vein, at the end of March we launched a program called Slingshot Evergreen, an initiative to guarantee the permanence of the data that's been onboarded through the Slingshot program.
As I mentioned, Slingshot has been around for about 18 months, which is also the deal term for many of the deals that started happening at mainnet, so we kicked this off just in time to ensure that the data that was onboarded isn't lost from the Filecoin network. The idea is to ensure that at least 10 thoroughly geo-distributed replicas of this data remain available for the next several years, and ideally forever. That word is definitely loaded, and I'm sure some of you are already buzzing and thinking about the implications with regard to the FVM; absolutely, yes, we're super interested in that. But right now we're doing a bunch of KYC on storage providers that want to participate specifically in replica building for these CIDs, using CIDs as the main mechanism to identify subsets of a data set that are nearing expiry, to ensure the continued availability of that data long term. Next slide. So why am I telling you all this, and what are we working on today? In the last couple of minutes, I just want to touch on a few of the work streams we're prioritizing at the moment. On the recovery front, it's definitely still a work in progress: we've brought roughly six petabytes of data back onto the network, and we still have about that much to go. We're extending some participants in the Restore program with follow-on grants for what we define as hard-to-reach data sets, meaning cases in which the original provider of the data can't provide it to us as easily anymore.
For example, AWS decided to stop subsidizing the hosting of some data as part of their open data registry; calling back to David's comment earlier, it's not great to have centralized decision makers on the internet, and we want to provide an option that exists for the long term, as freely as we can. In certain cases, because of real-life events, some scientific data became hard to obtain: a good example is an organization, I believe in Italy, that's doing a data center migration for their satellite data. We need to wait for that, and the connections aren't super fast, so coordinating with these individual organizations to bring replicas onto the network is compelling and interesting for us to chase down. Second, I mentioned the data set explorer: it's about to get a super cool overhaul, both from a design standpoint and from an accessibility and development standpoint. We're thinking about integration with Project Bacalhau and seeing whether there are ways to remotely trigger compute operations on this data down the line, so lots of interesting thinking is happening on that front. Third, I want to call out the retrieval success rate side of Slingshot. This is a super important metric for us: in every phase, we sample all of the data being onboarded and test it for retrievability. That needs to scale into all the programs, and even beyond them, so we're thinking of ways to provide it as an API; hopefully it can also feed into the kind of analysis CryptoEconLab presented, so they can use some of that data in what they're looking at. And last, I want to chat a little about what we're currently referring to as a data preservation DAO. So, switching to the next slide, I'll show you my little spaghetti diagram. Awesome.
So what is the data preservation DAO, or what do we have in mind for Slingshot v3? I've just talked you through a bunch of different components, and what's interesting about them is that they all come together into a set of services that work really nicely with each other: an engine that pushes data through the network and ensures it's always self-healing in case anything is lost. Specifically, we've got the Slingshot program, which with some evolutions becomes a really nice onboarding mechanism for data sets people are interested in bringing onto the network. We've got Evergreen, which ensures that data is not lost from the network. We've got Restore and its sibling components to ensure that if data is lost, we have mechanisms to self-heal. We have a mechanism to test retrievability, ensuring there's quality in how the data is accessed and that it's available for the people who want it. And then we have an explorer to find and actually use the stored data sets. So we're looking at interesting incentivization mechanisms and at bringing all of these components together, in what we're currently terming a data preservation DAO, to build a machine, hopefully leveraging these components as well as other developments in the network, that onboards useful open data sets to Filecoin forever. Thanks for the time. If you're interested, please reach out; I'd love to pick your brain on your ideas. Sounds amazing, very excited for that. Moving right along to the client growth working group; awesome work, Deep. Hey everybody, my name is Ron Fiedaero, and I'm working with Deep and David on expanding the demand side of Filecoin. Our team is responsible for driving organic adoption of Filecoin by seamlessly onboarding large-scale data, and we'll do that by focusing on utility, demonstrating exactly what users and clients get from our network, and on process, making things more seamless and more frictionless.
And most importantly, on tooling: laying the foundations for a robust, composable pipeline for ingesting data onto the network. On the next slide, I talk a little about what I've cared a lot about since I joined Protocol Labs, which is metrics, metrics, metrics: if you can't measure it, you can't manage it. In particular, I care about the pirate metrics, AARRR: acquisition, activation, retention, referral, and maybe one day revenue. So we're trying to create a team that can measure everything we care about and scale the demand side of our solution. On the next slide, I cover what I've been working on over the past few weeks: connecting key data sources across product, marketing, and the onboarding funnel so we can actually measure our client growth funnel. That means HubSpot, for understanding organic inbound leads through filecoin.io and data.filecoin.io and how our engineers are helping clients onboard to the network, and GitHub, for understanding how clients are going through the DataCap provisioning process. The TLDR is that we've launched an awesome dashboard that brings together all these different data sources and really shows us, week on week and day on day, how the demand side of the network is growing. On the next slide, we start focusing on the key acquisition funnel: how users navigate through each of the key steps, from being aware of our product, to being qualified, to onboarding through a proof of concept, to being happily onboarded clients. Next slide: some of our awesome opportunities. We have a first view into our acquisition funnel, which lets us dive really deep into the different stages of the data onboarding process for large clients and really refine product opportunities.
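Measuring a funnel like this mostly means computing stage-to-stage conversion rates. A quick sketch with made-up numbers (the stage names and counts are illustrative, not real program data):

```python
def funnel_conversion(stage_counts):
    # stage_counts: ordered {stage_name: count}, top of funnel first.
    names = list(stage_counts)
    counts = [stage_counts[n] for n in names]
    steps = {}
    for prev, cur, pc, cc in zip(names, names[1:], counts, counts[1:]):
        # Conversion from each stage to the next (0.0 if the stage is empty).
        steps[f"{prev}->{cur}"] = cc / pc if pc else 0.0
    overall = counts[-1] / counts[0] if counts[0] else 0.0
    return steps, overall

steps, overall = funnel_conversion({
    "aware": 1000,
    "qualified": 200,
    "proof_of_concept": 50,
    "onboarded": 20,
})
```

Step rates show where clients get stuck (in this made-up example the biggest drop is aware to qualified), while the overall rate tracks the end-to-end health of the onboarding pipeline.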
Most importantly, we can also be very open and transparent, within PL and hopefully publicly, about our growth ambitions and how clients are actually finding us and using our product. In terms of challenges: it's really difficult to consolidate and clean data from so many different sources, and we still have to instrument a ton of things, from inbound leads across different sources and stages to quality of service during and after onboarding. We have to ensure our numbers are bulletproof; garbage in, garbage out won't do, so we need to make sure these things are reliable. And finally, we need to separate client acquisition by initiative and really tell who gets more credit, me or Deep. I'm just kidding, but we really do have to understand how individual programs contribute to our growth and be able to point to the ROI of these different initiatives. So, thank you so much for being with me, and please reach out if you have any questions. The dashboard exists right now; it's internal, so just ping me for access details. Thank you so much. Helping people onboard onto Filecoin is a major initiative, and there are so many different ways we can focus on it, so getting better data and visibility into where people get stuck helps us build new tools and improve the product so people can onboard successfully. It's awesome; thank you for getting us that visibility. Thanks, Molly. Cool, a quick index on upcoming events. As you've heard, many of these teams have been getting together over the past couple of months, which has been amazing. We want to get together with the whole community as well and make sure we're engaging with everyone who's viewing this asynchronously. I want to index on two great lists of community events that folks can check out if they're excited about engaging with the Filecoin and Web3 communities. One is all of the amazing hackathons happening across IPFS and Filecoin.
That's at events.filecoin.io or hackathons.filecoin.io; there's a ton happening there, probably ten-plus things per month, so definitely get involved. Also, the Filecoin Foundation and the Filecoin Foundation for the Decentralized Web have a list of upcoming community events, which includes lots of group meetups and other gatherings. So those are two places to find upcoming events if you're looking for them. Two to know about in particular: first, there's Paris P2P, happening this weekend, where the libp2p team is doing a whole ton of presentations. Definitely tune in, and if you're in the Paris area this weekend, please stop by and say hi; it would be super fun to meet more folks in person. FIL Austin is coming up on June 8; a ton of people from across the community will be there, and we'll be having our next colo week for the Launchpad program there too. If you're interested in speaking, there's a form that we'll stick in the YouTube channel as well, and we'd love to see a lot of people in Austin. And finally, IPFS Camp 2022 is happening later this year, July 14 through 17, in Amsterdam. We're super excited to bring back together the amazing community that gathered for IPFS Camp 2019 in Barcelona, so definitely start gearing up: think about what you want to present and the deep design discussions and workshops we want to host to help push the whole IPFS ecosystem forward. Thanks for tuning in. I hope everyone is having an awesome week, and great seeing you all.