So, yeah, most of you are already familiar with what Slingshot is: a community program for data processors, data preparers, and storage providers to onboard lots of interesting open data sets to the Filecoin network. We kicked this off right around mainnet launch, and as of today we're nearing, and actually crossing, 37 pebibytes of data onboarded over the course of the program. We're currently in phase 2.8 of this program, which will likely be the last, or if not the penultimate, phase before we transition to something more interesting and unique, which I'll share towards the end of my section. The current focus is to get this up to 40 pebibytes of data onboarded; we're at about 61-ish unique pebibytes and we'd like to bring that number up closer to 65. All of that is visible in the data explorer we have on the website, which is linked here. Last, and definitely not least, the focus right now is on improving the quality of the work, raising the baseline standard for how current participants add to the network and the programs, and ensuring that the data being onboarded is retrievable. Next slide.

So the reason this is labeled "programs" and not "Slingshot" is that I also want to walk you through some of the other components we've been working on that sit alongside it, and how they all tie together. The first is recovery. You probably heard about the data loss incident in December, which resulted in us putting together a Slingshot Recovery umbrella with two separate programs within it: Restore, where the data needed to come from outside the network, because we completely lost the replicas we had and needed to go back to the original sources; and Repair, where we had at least one replica of a specific CID available within the network and wanted an automated, self-healing mechanism to identify that, fetch that replica, and find a way to incentivize rebuilding the missing copies.

In a similar vein, at the end of March we launched a program called Slingshot Evergreen, an initiative to guarantee the permanence of the data that's been onboarded through the Slingshot program. I mentioned Slingshot has been around for about 18 months; that is also the deal term for many of the deals that started back then on mainnet. So we kicked this off just in time to ensure the data that was onboarded isn't lost from the Filecoin network. The idea is that we ensure at least 10 replicas, thoroughly geo-distributed, are available for the next several years, and ideally forever, and that word is definitely loaded. I'm sure some of you are already buzzing and thinking about implications with regards to the FVM, and absolutely, yes, we're super interested in that. But right now we're doing a bunch of KYC on storage providers that want to participate specifically in the replica building for these CIDs, and using CIDs as the main mechanism to identify subsets of a data set that are nearing expiry, to ensure continued availability of that data long term; a rough sketch of that replica-maintenance loop follows below.
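To make the Repair and Evergreen mechanics above a bit more concrete, here is a minimal Python sketch of the kind of replica-maintenance loop described: check how many healthy replicas of a CID remain, and if the count falls below the target or deals are nearing expiry, fetch the payload from a surviving replica and propose new deals with other providers. Everything here is hypothetical; the helpers (query_active_deals, fetch_payload, propose_deal, pick_providers) and the expiry window are placeholders rather than real Filecoin tooling, and only the 10-replica target comes from the talk.

```python
# Hypothetical sketch of the self-healing loop described above.
# All helpers marked "placeholder" stand in for real chain/market queries
# and deal-making clients; none of this is an actual Filecoin API.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

TARGET_REPLICAS = 10                 # Evergreen target mentioned in the talk
EXPIRY_WINDOW = timedelta(days=90)   # assumed: treat deals expiring this soon as unhealthy


@dataclass
class Deal:
    provider: str      # storage provider holding a replica
    expiry: datetime   # when the deal ends


def query_active_deals(cid: str) -> List[Deal]:
    ...  # placeholder: look up active deals that hold this CID

def fetch_payload(provider: str, cid: str) -> bytes:
    ...  # placeholder: retrieve the data from a surviving replica

def propose_deal(provider: str, cid: str, payload: bytes) -> None:
    ...  # placeholder: make a new storage deal with this provider

def pick_providers(exclude: List[str], count: int) -> List[str]:
    ...  # placeholder: choose geo-distributed, KYC'd providers not already holding the data


def heal(cid: str) -> None:
    deals = query_active_deals(cid)
    if not deals:
        return  # no replicas left on the network: this is the Restore case, not Repair

    now = datetime.utcnow()
    healthy = [d for d in deals if d.expiry - now > EXPIRY_WINDOW]
    missing = TARGET_REPLICAS - len(healthy)
    if missing <= 0:
        return  # enough long-lived replicas; nothing to do

    # Repair: fetch the payload from any remaining replica, then rebuild the rest.
    payload = fetch_payload(deals[0].provider, cid)
    for provider in pick_providers([d.provider for d in healthy], missing):
        propose_deal(provider, cid, payload)
```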
Next slide. And so why am I telling you about all this, and what are we working on today? In the last couple of minutes here I just want to touch on a few of the work streams we're prioritizing at the moment.

Firstly, on the recovery front, it's definitely still a work in progress. We're reaching about six petabytes of data brought back onto the network, and we still have about that much to go. So we're working on extending follow-on grants to some participants in the Restore program for what we define as hard-to-reach data sets. These are cases where the original provider of the data can't provide it to us as easily anymore. For example, AWS decided to stop subsidizing hosting of some of that data as part of their open data registry; calling back to David Choi's comment earlier, it's not great when we have centralized decision makers on the internet, and we want to provide an option that exists for the long term, for free where we can. In other cases, real-life events made some scientific data hard to obtain; a good example is an organization, I believe in Italy, that's doing a data center migration for their satellite data, so we need to wait for that, and their connections aren't super fast. Coordinating with these individual organizations to bring replicas onto the Filecoin network is a compelling and interesting thread for us to chase down.

Second, I mentioned the dataset explorer; it's about to get a super cool overhaul, both from a design standpoint and from an accessibility and development standpoint. We're also thinking about integration with Project Bacalhau and seeing if there are ways we can remotely trigger compute operations on this data somewhere down the road, so lots of interesting thinking is happening on that front.

Third, I want to call out the retrieval success rate side of Slingshot. This is a super important metric for us: in every phase we sample all the data that's being onboarded for retrievability. We think that needs to scale into all the programs, and even beyond the programs, so we're thinking of ways we can provide it as an API; hopefully it can also feed into the kind of analysis CryptoEconLab presented earlier. A hypothetical sketch of that retrieval sampling follows below.
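Since I mentioned exposing retrieval testing as an API, here is a rough sketch of what that sampling could look like. This is purely illustrative: try_retrieve is a placeholder for whatever retrieval client is actually used, and the sample size is an arbitrary assumption, not a real Slingshot parameter.

```python
# Illustrative-only sketch of retrieval sampling: pick a random subset of the
# CIDs onboarded in a phase, attempt a retrieval for each, and report the
# success rate. try_retrieve is a placeholder, not a real retrieval client.

import random
from typing import List

SAMPLE_SIZE = 100  # assumed sample size, not a real program parameter


def try_retrieve(cid: str) -> bool:
    ...  # placeholder: attempt a retrieval (e.g. a retrieval deal or gateway fetch)


def retrieval_success_rate(onboarded_cids: List[str]) -> dict:
    sampled = random.sample(onboarded_cids, min(SAMPLE_SIZE, len(onboarded_cids)))
    successes = sum(1 for cid in sampled if try_retrieve(cid))
    return {
        "sampled": len(sampled),
        "successes": successes,
        "success_rate": successes / len(sampled) if sampled else 0.0,
    }
```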
And last, I want to chat a little bit about what we're currently referring to as a Data Preservation DAO. Molly, if you don't mind switching to the next slide, I'll show you my little spaghetti diagram. Awesome. So what is the Data Preservation DAO, or what do we intend to do with Slingshot v3? I've talked you through a bunch of the different components, and what's interesting is that they all come together to build a set of services that work really nicely with each other: an engine that pushes data through the network, ensures it's always available, and ensures it's self-healing in case it's lost. Specifically, we've got the Slingshot program, which with some evolutions becomes a really nice onboarding mechanism for data sets that people are interested in bringing to the network. We've got Evergreen, which ensures that data isn't lost from the network. We've got Restore and its components to ensure that if it is lost, we have mechanisms to self-heal. We've got a mechanism to test retrievability and ensure there's quality in how the data is accessed and that it's available for the people who want it. And then we have an explorer to find and actually use the stored data sets. So we're looking at interesting incentivization mechanisms and bringing all of these components together in what we're currently terming a Data Preservation DAO, to build a machine, hopefully leveraging these components as well as other developments happening on the Filecoin network, to onboard useful open data sets to Filecoin forever. Thanks for the time. If you're interested, please reach out; I'd love to pick your brain on your ideas.

Ooh, sounds amazing. Very, very excited for that. To move right along: now, for the client growth working group. Awesome work, Deep.

Hey everybody, my name is Ron Fiedero and I'm working with Deep and David on expanding the demand side of Filecoin. Our team is responsible for driving organic adoption of Filecoin by seamlessly onboarding petabyte-scale data. We'll do that by focusing on utility, demonstrating exactly what users and clients get from our network; on process, making things more seamless and frictionless; and, most importantly, on tooling, laying the foundations for a robust and composable pipeline for ingesting data onto the network.

On the next slide I talk a little bit about what I've been caring a lot about since I joined Protocol Labs, which is metrics, metrics, metrics: if you can't measure it, you can't manage it. In particular I care about the pirate metrics: acquisition, activation, retention, referral, and maybe one day revenue. We're trying to create a team that is able to measure everything we care about so we can really scale the demand side of our solution.

On the next slide, I talk about what I've been working on over the past few weeks, which is connecting key data sources across product, marketing, and the onboarding funnel so that we can actually measure our client growth funnel: on HubSpot, understanding the organic inbound leads coming in through largedata.filecoin.io; then understanding how our engineers are helping clients onboard their data to the network; and of course, on GitHub, understanding how clients are going through the DataCap provisioning process. The TL;DR here is that we've launched an awesome dashboard that pulls together all these different data sources and shows us, week on week and day on day, how the demand side of the network is growing. On the next slide, we can actually start focusing on the key acquisition funnel: how users are navigating through each of the key steps, from being aware of our product, to being qualified, to onboarding through a proof of concept, and then becoming happily onboarded clients.

So, next slide, I present some of our awesome opportunities. Woohoo, we have a first view into our acquisition funnel; a toy illustration of that funnel view follows below. This allows us to dive really deep into the different stages of the data onboarding process for large clients and really refine product opportunities. And most importantly, we can be very open and transparent, at PL and hopefully publicly, with our growth ambitions and how clients are actually finding us and using our product.
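As a toy illustration of the funnel view mentioned above, here is a small sketch that turns per-stage client counts into stage-to-stage conversion rates. The stage names follow the steps described in the talk (aware, qualified, proof of concept, onboarded); the example numbers are made up and do not reflect real dashboard data.

```python
# Toy illustration of the acquisition funnel view: per-stage counts in,
# stage-to-stage conversion rates out. Stage names follow the talk; the
# example numbers below are invented for illustration only.

FUNNEL_STAGES = ["aware", "qualified", "proof_of_concept", "onboarded"]


def conversion_rates(counts: dict) -> dict:
    """Fraction of clients that progress from each funnel stage to the next."""
    rates = {}
    for prev, nxt in zip(FUNNEL_STAGES, FUNNEL_STAGES[1:]):
        rates[f"{prev}->{nxt}"] = counts[nxt] / counts[prev] if counts[prev] else 0.0
    return rates


# Example with made-up numbers:
print(conversion_rates({"aware": 400, "qualified": 120, "proof_of_concept": 45, "onboarded": 20}))
# {'aware->qualified': 0.3, 'qualified->proof_of_concept': 0.375, 'proof_of_concept->onboarded': 0.44...}
```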
We do have a couple of challenges. It's really difficult to consolidate and clean the data from so many different sources, and we still have to instrument a ton of things: inbound leads from different sources and in different states, and quality of service during and after onboarding. We have to ensure that our numbers are bulletproof; garbage in, garbage out won't do, so we need to make sure these things are reliable. And finally, we need to separate client acquisition by initiative, so we can really tell who gets more credit, me or Deep. I'm just kidding. But we really do have to understand how programs are contributing to our growth and be able to point to the ROI of these different initiatives. So, thank you so much for being with me. Please reach out if you have any questions; the dashboard exists right now, but it's internal, so just ping me for access details. Thank you so much.

Awesome. Helping people onboard onto Filecoin is a major initiative, and there are so many different ways we can focus on it, so getting better data and visibility into where people get stuck and how we can build new tools or improve the product so they can onboard more successfully is awesome. So thank you for getting us that visibility.