All right, welcome everyone to our July PL EngRes All Hands. There are a lot of exciting things to share from the past month of amazing work across the EngRes team. As usual, we'll start with our working group updates, covering some top-level KPIs, highlights, and team updates, then do spotlights on some awesome things to highlight across the group. After that we have two deep dives: one on network version 17, the next Filecoin network upgrade that is in the works right now, and then a data deep dive from Steph on NetOps, covering some of the new data pipelining work and tooling happening there.

As usual, we, the PL EngRes working group, are part of the awesome Protocol Labs network, where we drive breakthroughs in computing technology to push humanity forward. We really are building the new primitives that we think will let the breakthroughs of tomorrow be built on top of a great foundation for human agency, collaboration, resiliency, and growth. We work on a lot of really awesome projects, especially IPFS, libp2p, and Filecoin, but also many other things like Testground, IPLD, Multiformats, ProtoSchool, drand, and more. We're definitely part of these awesome open source communities, and we participate a lot in helping make them better. Our mission is to scale and unlock new opportunities for these protocols, and we do that in three main ways: onboarding amazing new developers and contributors and supporting them to attack major projects, driving breakthroughs in protocol utility and capability, and working on research and development across the PL network in a network-native way.

These are our 12 EngRes working groups, and there are a lot of open roles across them. If you are looking to get involved, say as an engineering manager, a TPM, an infra engineer, or a product manager, please do reach out to us. We are hiring, and we also really love collaborating with people who want to work on these technologies in a lot of different ways. So check out some of our open jobs here, or use your QR code reader there.

As a reminder, our strategy for EngRes in 2022 has four main components. The first is growing the talent funnel of amazing developers, researchers, and product managers attacking the open problem areas within this domain, and doing our development and work in a very collaborative, open, transparent way that enables a whole network of teams to push these projects forward. The second is enabling robust storage and retrieval across IPFS and Filecoin, so that we have lots of useful data being onboarded onto the Filecoin network while making sure it stays accessible and retrievable, and building amazing applications and velocity on the IPFS side as well. Third, we have some really awesome breakthroughs in programmability, scalability, and compute that are well underway and making amazing progress; these can really level up these systems, unlock brand-new capabilities, and help push these ecosystems forward. And of course, we do all of this work on the great foundation of solid network operations: really good stable releases, testing infrastructure, monitoring, analytics, and uptime, making sure that we run systems, projects, and protocols that are highly reliable and robust. So those are the four main strategy areas. Now, an update on the Filecoin roadmap.
This is something that we've been sharing across the Filecoin ecosystem, and there are a couple of new things to highlight in here. I've added Station, which we are going to hear more about later in the spotlight section, and which is super cool; it actually already has some repos out there that people can take a look at. There's Sealing-as-a-Service, which I don't think was here the last time we presented this slide; that's being worked on across the SP working group and the FilDev team on Lotus, helping enable much more decoupled sealing services within the Filecoin network. There's Moon Landing, which actually kicked off, I think, last week: a new program specifically for helping storage providers really ramp up their data onboarding capabilities and introducing them to all of the tools and best practices. Really, really exciting progress. If you see something missing in this roadmap, please let me know; we're running out of space, especially in data retrievability, but we'll make more space. There are more awesome things that we should be tracking and sharing more widely with the community, so this is definitely something we're participating in heavily.

A reminder of our Q3 goals, now color coded. We are well on track for our goals around hyperscaling the number of knowledgeable and aligned developers improving the PL stack: we're making really good progress, and we are almost at our target number of hires into this working group, which is great. We're also really far exceeding our goals when it comes to supporting a thriving community of participants in the PL stack, so, well green on that one. We're yellow on delivering robust, accessible storage and retrieval: we are still hovering around one PiB of Fil+ data onboarded per day, not yet at our five PiB goal, so a lot more work to do there. But we are well exceeding one million successful Filecoin network retrievals per day; I think Saturn alone is doing something like 10 million per day, so that's really awesome and great to see. On launching network breakthroughs: we have a lot of work happening here, but I think we still need to get more clarity into our roadmap to understand how on track we are. We don't yet have an FVM testnet or smart contracts deployed on that testnet, so we're not yet on track for that goal, and we do not yet have a specific breakthrough targeted to start getting real-world adoption in Q3. A couple have roadmaps that seem well on track for Q4, but we want to understand and get more visibility into that. And then we are doing a great job in terms of having our critical systems running very smoothly: really good continued uptime, handling of security issues, and also supporting many new core protocol implementations. Big, big snaps to the IPFS team on this: there are now, I believe, 21 different IPFS implementations listed in the IPFS documentation. Huge, huge congratulations to the IPFS stewards here and the people who are maintaining Kubo and pushing on other IPFS implementations as well. So that's where we are. And on that note, I'll pass it over to Adin for IPFS.

So, IPFS: we are trying to make the web work peer-to-peer using content addressing. Here are some KPIs that we're tracking. The one that's most noticeable is the increase in outstanding PRs in the Go and GUI repos.
I suspect this is mostly because both the IPFS group working within EngRes and others were stretched pretty thin around the IPFS Thing event, plus some other fun things going on, like moving all of the docs websites and such from ipfs.io to ipfs.tech and renaming Kubo. So we've got to keep that on track and get those numbers down. Some updates: there was the IPFS Thing event, and you'll hear more about that later today. The IPFS specs repo has gotten a bunch of reinvestment; we have a number of open IPIPs, which is the process we're using to shepherd and move along specs, including a few from Cloudflare and Fission. Almost as soon as we started reinvesting in specs, the community started echoing back, so this looks like a good place for us to continue our investment. Kubo 0.14 was released, which was the first release actually named Kubo, with some fun things like delegated routing support. As I mentioned, everything on ipfs.io that's not the gateway has been moving towards ipfs.tech, which protects those sites from issues related to the gateways, which is good. And as Molly mentioned, we have an implementations landing page, with lots more that we're hoping to add going forward, making space and really helping people understand how they can use different IPFS implementations instead of trying to make one thing do it all. Ahead of us: making the specs and IPIP process easier for people to engage with and helping shepherd that along, and making the Reframe API, and the Go side of it, better and easier for people to work with.

Awesome. Over to libp2p; I believe Max has a video for us. libp2p is the modular networking stack of P2P protocols used by Web3, and we'll get an update from Max.

libp2p highlights. On the project side, we are growing the team: we have P3 joining as a technical project manager next week, and then we have JQ joining as a software engineer the week after. On the community side, we have steady interest in the libp2p community call; it's very helpful to see people face to face every other week. And we had our first rust-libp2p community call, discussing the nitty-gritty details of rust-libp2p. On cross-implementation efforts, we currently have two new transports in the works: WebRTC and WebTransport. They allow us, from a browser, to connect to a remote endpoint where we don't trust the TLS certificate of that remote endpoint, which describes pretty much the majority of IPFS nodes out there today. WebRTC work is in the specification phase, based in the libp2p specs repository, driven by us and the community, and we have three implementations of this specification in progress, in Rust, Go, and JavaScript. On the WebTransport side, the underlying protocol is still being specified at the IETF, and it's going really well: we have Martin engaged at the IETF, and he was at the IETF meeting in Philadelphia last week. There's also progress on the quic-go and go-libp2p WebTransport implementations, which is really cool. Then there is a new working group on Episub, led by vyzo; anyone interested in gossipsub in general, or Episub, please check that out. On the implementation side, go-libp2p v0.21 was released, which enables, for example, resource manager limits that scale based on the machine you're running on, plus an allowlist mechanism, which lets certain nodes connect even when a limit has been reached.
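To make that last part concrete, here's a minimal sketch of what wiring up a host with machine-scaled limits and an allowlisted peer can look like in Go. Package paths and option names have shifted between go-libp2p releases, so treat this as illustrative of the general API shape rather than the v0.21 code verbatim; the multiaddr below is a placeholder, not a real peer.

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
	"github.com/multiformats/go-multiaddr"
)

func main() {
	// Scale the default limits to this machine's memory and file
	// descriptor budget, instead of using one fixed set of numbers.
	limits := rcmgr.DefaultLimits.AutoScale()

	// Placeholder address for a trusted peer: connections from here
	// are admitted even when the scaled limits have been reached.
	trusted := multiaddr.StringCast("/ip4/203.0.113.7/tcp/4001")

	rm, err := rcmgr.NewResourceManager(
		rcmgr.NewFixedLimiter(limits),
		rcmgr.WithAllowlistedMultiaddrs([]multiaddr.Multiaddr{trusted}),
	)
	if err != nil {
		panic(err)
	}

	host, err := libp2p.New(libp2p.ResourceManager(rm))
	if err != nil {
		panic(err)
	}
	defer host.Close()

	fmt.Println("host up with scaled limits:", host.ID())
}
```

The design idea is that one binary can ship to machines of very different sizes: the limiter derives connection and memory caps from the hardware it finds, while the allowlist keeps critical peers reachable even under load.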
And then the resource manager received a lot more metrics and a Grafana dashboard, so you can actually monitor all of it. On the rust-libp2p side, v0.46: there's not really anything user-facing, but there are large refactorings in preparation for the upcoming WebRTC and QUIC implementations in Rust. And last, libp2p is a huge community, and it's not only Protocol Labs' work, so I want to highlight nim-libp2p today. The nim-libp2p team is really hardworking: they merged their Yamux implementation, so you can now use nim-libp2p with Yamux next to Mplex. They're also buying more and more into libp2p's hole punching mechanism; for example, they implemented Circuit Relay v2 and are currently implementing AutoNAT. That's all from my side.

Thank you, Max. IPDX, Peter.

Hello. IP Developer Experience here; we are trying to empower PL EngRes to do what they do best through developer experience improvements. Last month, we were at IPFS Thing, where we gave a talk on how to configure GitHub as code; the video will be coming out shortly. We've also had a lot of feedback on Testground, so thank you for that; it was great to hear all the user stories, and we're definitely going to use them. New things happening in Testground: we have go-libp2p cross-version testing on every PR out now, and there have been plenty of stability improvements, so if you haven't checked out Testground in a while, it might be time to go back. On Unified CI, we automated adding new projects: we now detect every Go repository across our PL-owned orgs and propose adding Unified CI there. We also made some improvements regarding NPM publishing from PL orgs, so if you're interested in that, do reach out. On the next things coming up: we have new GitHub management features going live this month, with user-defined actions that will help us automate checks, like making sure all public repos have their default branches protected. A lot more is coming in Testground as well, with libp2p interop testing in Rust and between Rust and Go. And a new release of Unified CI is coming out this month with Go 1.19 support.

Awesome. And drop by their office hours if you want to get involved or have more questions. Over to Jennifer for Filecoin.

If you don't know Filecoin: we are trying to build a crypto-powered decentralized storage network for humanity's most important information. To do that, we are building a huge, maybe one of the biggest, storage networks on the planet right now. We are at 16.86 EiB in raw byte power, and over the past month we pushed up to 18 EiB in quality-adjusted power for the network, which is super exciting. A lot of that effort comes from a lot of data onboarding via Filecoin deals, where we have 140 PiB of deal data, and a lot of it is verified data. I want to point out that the metrics we're showing here are all in raw byte power: there are about 127 PiB of data now being stored on the Filecoin network via verified deals. (That also roughly explains the gap between the two headline numbers: verified deals carry a 10x power multiplier, and 16.86 EiB of raw byte power plus 9 × ~0.127 EiB of extra weight for verified deals comes out to roughly 18 EiB of quality-adjusted power.) So that's super exciting, thanks to a lot of the data programs that other teams in the ecosystem are pursuing. That's great. And we are storing a lot of useful, valuable datasets, as you can see on the graph, and the daily data growth rate is still increasing. It's not that stable, but you know, we're getting there; it's increasing. Some Filecoin highlights from the FilDev team: a lot of feature releases.
The Lotus v1.17.0 release landed with a lot of storage-provider user-experience features, so that providers can maintain their operations and service more easily while serving the network. We have our very first release announcement video, which is linked in the slide; feel free to check it out on the Lotus website. We are also working hard on SplitStore, which is coming in the next release; the code freeze is next Tuesday. SplitStore eases chain datastore management for all node operators using Lotus: as you know, the state and chain size of the Filecoin network grow really fast, and we want operations there to be very easy, so node operators don't have to do a lot of maintenance on their nodes. The FilCrypto team is shipping proofs version v12 in the next Lotus release. Also coming next week, we have a major multi-core SDR improvement that will let storage providers better utilize their hardware resources and seal at full capacity. We have been testing this with storage providers for the past two months, and we have been seeing great results and great feedback, so if you are watching this and you are a storage provider, I would highly recommend you check it out.

Along with the other core dev teams, CryptoNetLab, the FVM team, Forest, Venus, and everyone else, we are figuring out the network v17 upgrade scope and planning for it. We're trying to define a couple of high-value, high-impact FIPs to introduce in the next network upgrade, so that we can better support VM programmability later this year. The latest technical scope can be found in the TPM GitHub repository as well. And there's a very active governance process going on around FIP-0036, which introduces a sector duration multiplier; if you're interested in that, please take a look at the discussion.

Another huge part of the Filecoin network: the FVM. We are working on FVM milestone 2.1 now, which will enable user programmability on the Filecoin network. We are going to start with the FEVM, which is the EVM on top of the FVM. We are forming an FVM working group, led by Raúl, that covers core engineering, product, research, developer experience, network upgrades, all the collaboration, and so on, so that we can have full usability of the FVM upon launch. We are doing a lot of scope planning, designs, benchmarking, demos, and such; if you want to follow along and participate in the progress, join the public FVM channel. And yes, we are coming up with some OKRs that we will share publicly, maybe at the next EngRes All Hands next month.

Coming up again: we are working on nv17, and Alex North will be giving us a deep dive later on what's coming for the Filecoin network. We are working on SplitStore, as mentioned earlier, and FVM 2.1; yes, another one, sorry, there's a lot of overlap here. The other thing I didn't mention there is sealing-as-a-service: we are supporting ecosystem teams, like Filmine and Supranational, to hopefully enable sealing-as-a-service for the whole storage provider community, so that more storage providers can join and onboard data onto the Filecoin network more easily. And also, just a shameless plug: we are doing our very first Lotus friends day, a mini summit in Lisbon, on November 2nd.
Registration will open soon, so please stay tuned for that. Awesome, November 2nd, you heard it here. Going on to our team updates, starting with NetOps. Jesse.

Okay, NetOps update. Our KPI, the P95 TTFB (time to first byte), shows a great improvement from the team; we'll share a little more detail in the next slide. In the highlights, TTFB dropped to 11 seconds, which is almost five seconds less than the previous week, a great improvement. IPFS Cluster pinning update: still a pretty good number, pretty high, and we're trying to make sure we can keep it that way. We're hoping more and more community partners will come in to help; that means our share of the number may dip a little bit, but we don't expect it to change dramatically. IPFS gateway requests are holding at a pretty good number, 800 million a week; still pretty high, pretty good progress. Unique IPFS gateway users is at 6.4, sorry, I should update this one, it is 9.1 million users, a big increase last week. We're also looking into whether any user patterns have changed; it may also partly be a result of our TTFB improving. We'll go into more detail there. That's the update from NetOps.

Yeah, so from the Sentinel side, we currently have the following ongoing projects: data modeling with dbt, and we have also deployed data infrastructure as code, which you can find in the protocol/data-infra repo. And we are migrating from TimescaleDB to BigQuery, so that we will be doing less data warehousing operations work and more actual data analysis work. In terms of hiring, David Gasquez has joined the team as a data engineer, and we are still looking for another data engineer or BI analyst to join the team. Since we have introduced dbt as a data modeling tool, we can now do data modeling and transformations independent of Lily's data models; so if you have any feedback regarding Lily's data models, please feel free to reach out to us in #fil-sentinel on Slack. Querying for chain data is also now simpler, especially if you need data all the way back to genesis, on BigQuery. Just as a note, it's currently still in beta, because we will be reprocessing the chain; I'll talk about this later in the deep dive. And you can also now deploy data pipelines onto Argo: it's language-agnostic and Kubernetes-native, and it uses custom resource definitions, so you don't have to, yeah, learn Python to write data pipelines. And yeah, that's pretty much it.

Awesome, thank you. Get in contact with the team on Slack. Over to George for Bifrost.

Yes, hello. A few more updates on our IPFS gateway. I think the numbers on the previous slide were a bit outdated: we're actually hitting consistently over one billion total requests a week, and have been for a few weeks now, and we're at nine million unique users. We've had to scale out the gateway nodes quite a bit, and we've improved the caching and load distribution across the nodes. The overall time-to-first-byte P95 is down to seven seconds for last week, and so far this week it's 2.4 seconds. Notably, five out of the seven regions where we run nodes are around one second for P95 time to first byte. We're running Kubo 0.14 on those gateway nodes, with the new libp2p resource manager and the goroutine patch from Steven; thank you very much to both Steven and Marco for helping out with tweaking those.
It has resulted in better performance and uptime for the Kubo nodes. Opportunities: we are planning on deploying the load-balancing layer in front of the NFT cluster nodes, which will allow us to more easily swap out nodes and test a new disk layout with ZFS; thank you, Matt, for suggesting that. We also plan on splitting out the bad-bits blocking logic into its own service, so that we can share it across the cluster nodes as well as the gateways, and maybe open it up to others at some point, or at least share the code so that others can easily run this blocking service. Also, we are streamlining the scale-out process by automating the addition of new peers; there's still one manual step right now, where we have to generate the peer IDs, so we're going to automate that. And we're ramping up tracing on the gateway Kubo nodes, which should help us identify potential bottlenecks in the code.

Awesome, great progress, and really amazing to see those improvements in TTFB numbers. It's great to hear. Over to Michael for a DAG House update.

Hey, yeah, it's a little light because everybody's out on vacation, so I'm playing the role of David. Yeah, so the numbers look good. We shipped a lot more Elastic IPFS stuff in and around IPFS Thing; we're very excited about it, so there are some new blog posts, and we've started to do some of the early community engagement there. We've also now completely turned it on in production: we're not waiting on Cluster anymore for responses to send to clients, and we're also turning writes to it off. We're not sending data from Cloudflare Workers to Cluster anymore; we're actually only sending it over to Amazon, and then from Amazon we send it to Cluster, potentially for another backup, and we're looking at other backup solutions too, including putting it back in R2. The finances around what this costs are roundabout and strange, but anyway, that little roundabout effort actually saved us a third of our bill. So we don't depend on Cluster anymore, and our perf numbers are changing pretty dramatically based on that. We also have an open IPNS service coming out, called w3name, that anybody with signed IPNS records can use to host their records and publish them to the network; you don't even need to sign up for an account. And in highlights: Alan wrote this awesome tool called Dagula that everybody should check out. If you want to move DAGs point-to-point between different Bitswap nodes, this is a point-to-point service: it doesn't try to do all the peer connections and management for you; it's actually a programmable way for you to say, connect to these peers and just get this DAG. Yeah, yeah, it is a great name. We've actually got that running in Cloudflare Workers a little bit now, and that's starting to give us some new superpowers in the network, so we're pretty stoked on that. And, people will probably see some fallout from this, but we have a pretty dramatically increased threat of malware in our services this week, and we are responding pretty dramatically to that; expect more news and more action from us in the coming weeks. And that's all we got.

Awesome, thanks for the updates. And definitely go check out all of the presentations from the IPFS Thing; Michael hosted an awesome track there that had some really great presentations like this one. Over to Lauren for Bedrock.

Hello. So, Bedrock is laser-focused on faster and more reliable retrievals.
Part of that: we implemented HTTP piece retrieval to accelerate the Evergreen program, to get more data onboarded faster, and we're exploring SP retrieval generally with that. Meanwhile, on the IPFS-to-Filecoin side, we have Autoretrieve working and collecting metrics, so we can analyze what the bottlenecks are and what's happening in the system. The indexer team is about to launch the updated cid.contact. And the whole team is working on a cross-team collaboration to understand how we can improve retrieval generally, for storage providers and on the IPFS side. Some KPIs: we have 31 storage providers running Boost; we have four indexers running, with 184 nodes sending announcements to those indexers; and we've indexed about six billion CIDs. Opportunities going forward are diving into that Autoretrieve data to understand what the bottlenecks are and fix them, and working in this collaborative working group to improve the whole system.

Awesome, great to have more cross-team collaboration on retrieval; it's important. Retrieval is not something that one particular team owns, it's a whole-system effect, so it's great to see that coming together. Over to Luca for the CryptoNet update.

Currently, CryptoNetLab is working on three main areas. The first one is Filecoin protocol improvement, and here we have storage market programmability and FVM standards, which are two key efforts for a successful ecosystem of FVM applications. We also have several FIPs in flight, and a proposal for network version 17, which Jennifer already mentioned and Alex will talk about in more depth later on. The second working area is on-chain storage products. This effort is driven by the fact that we are convinced we need basic on-chain primitives for Filecoin and for Web3. Retrieval pinning is the one in the most advanced stage; there is a live version on Ethereum, and I will give a really short demo after this slide. Basically, you can pin an IPFS hash and get a retrievability guarantee via a network of referees. We also had the storage metrics effort, which was meant to collect storage performance metrics in order to have reliable data on-chain; that project unfortunately is currently on pause, and there is a post-mortem doc linked in the slide that you can read if you want to know why and where we got stuck. And we are also designing a new range of products that we know, from talking with the community, that people want: one is perpetual storage, one is proving data in the clear, and another is a way to use NFT.Storage straight from a smart contract. The last work area I want to mention is the research effort; the basic research continues to be strong. We had four new papers on vector commitments, and vector commitments, as maybe everybody knows, are a key element for making our SNARKs smaller. On top of that, we have Testudo, a research effort in SNARK land that aims to reduce the proving cost of our proof of replication. As a final remark, we ran a second edition of the CryptoNet tour that we also did last year: basically, we gather problems and suggestions from the community. We've collected all this information and we are building a graph; the effort is not complete yet, but if you want to help out, you are more than welcome to join. Medusa, one of the other projects we are working on, has a demo, and Nicola will show it soon. And on the hiring side, we are actually hiring PMs, TPMs, and software engineers, so just so you know, those are the vacancies, if you're interested.
And if you go to the next slide, there is a short demo of the retrieval pinning service that I wanted to show you. On the client side, you can connect with MetaMask. You can see all the deals that you have active, and you can download the data that is the object of each deal. Basically, you can create a retrievability deal with a network of referees; here are all the conditions that the network of referees wants you to comply with. We have two modes. One is the standard mode: you select a file, it creates a CID, and then you can select conditions like the duration and the collateral, with a lot of guidance along the way. Once you agree with the terms of the network, you can start the retrieval deal. In parallel, we have an expert mode where you can play more freely with all the parameters: you can upload a file and set all the parameters that define the contract, and on top of that you can also supply your own CID if you have one. Where does the retrievability guarantee come from? We have a network of referees: if your file cannot be retrieved, they are in charge of getting it back to you; you click this button and invoke the referees to retrieve your file. And this is it, basically.

Super cool. I know a lot of people are excited to play around with this, so if there's a place where people can get involved, please drop some links somewhere so that we can follow along. Looks really interesting. In the previous slide, there are links for all the projects I mentioned, so you'll also find the community ones there. Awesome, thank you. Marco, for ConsensusLab.

Yeah, so this is a busy slide, because we haven't spoken in a while and we grew a lot. We are around 15 people: 10 LTCs and five part-time people, plus two advisors and three interns, and our hiring pipeline is still going. We shipped the Consensus Factory event, where we had Cardano, the Ethereum Foundation, Cosmos' Informal Systems, us, and Algorand discussing scalability of consensus; it was a very interesting event. ConsensusDay is coming up; we have 25 really high-quality submissions. We're present on a lot of PCs for different conferences, and we awarded two grants. And if you want to check our progress through demos, we present at demo days monthly.

On the roadmap: we have different projects, but we're focusing on three. Hierarchical consensus is our lighthouse project. Progress since last time: we have one of the two key actors, the Subnet Coordinator actor, now as an FVM built-in actor, using M1. We have an FVM implementation of the atomic cross-subnet transaction protocol. We have published a hierarchical consensus specification, and we have a pre-FIP discussion on it; that's what's shipped. What's ongoing: we're working on a hierarchical consensus MVP in Forest, so not only in Eudico and Lotus, but hierarchical consensus in Rust as well, and we're working on testnet infra deployment and a monitoring dashboard. The second project we're focusing on is efficient consensus for subnets; we are on track to deliver an MVP by the end of August. And then we have our third project, on which there are a lot of resources: improvements to Filecoin's expected consensus.
In part, we're working with CryptoNet on this, and there are several research vectors: attack analysis, improvements to the current expected consensus, security proofs of the current expected consensus, but also exploring EC alternatives. We plan to wrap this up around mid-October, with progressive milestones being shipped along the way.

Different highlights: well, we have two lab members who are getting married; I guess they're happy with the work in the group, so they can focus on private life a bit. And we welcomed three LTCs, Akos, Wilson, and Guy, all focusing on hierarchical consensus, so we are basically strengthening the hierarchical consensus team. We also have three summer interns: Dan is working on WebAssembly concurrent execution, Andre is working on the efficient consensus project, and Shredshaw is helping us with the EC improvements work, which I already introduced. For all the stuff that we do (we do everything in the open, like other labbers), go to consensuslab.world; this is our new landing page, which we deployed recently, and it will take you to all things ConsensusLab. We had a team week in Belgrade to detail the roadmap for the second half of the year: KRs, invited talks, papers, and so on. On opportunities: we had a presentation on hierarchical consensus at IPFS Thing, bootstrapping the discussions on HC use cases for content routing. A few inbound opportunities too: people from Sonar approached us; they want to bring the Filecoin ecosystem closer to the Cosmos one, and they're trying to see if they can leverage hierarchical consensus. And basically, we're planning a bigger stakeholder call around deployment strategies for HC, plus a few talks around Lisbon Crypto Week and LabWeek; that's still in the coming-soon stage. Thanks a lot.

Awesome. Great progress here, and congrats for continuing to stay on timeline and roadmap. Over to David for Compute over Data.

Hey, everyone. It's been a couple of sessions, and I'm really excited to present the progress. You know, it's crazy: we wrote our first line of code in February and then threw it all out after our proof-of-concept phase, so really, everything you're looking at here happened from April. We've hit our first three roadmap milestones. As of right now, we're handling multi-sector datasets; we are currently testing against one-terabyte datasets and simulating 10,000 nodes, and you can come look at all our videos and see exactly what's going on. And we are on track: this month, we're going to be adding a Filecoin integration, 10,000-file jobs, and long-running jobs, including ones that survive node restarts, nodes joining and leaving the network, and so on and so forth. All the large jobs are using standard IPLD and Merkle-tree storage of the data, so everything flows directly back in and is native. You can see the target launch metrics that we wrote down when we finished the Compute over Data day in April, and we're green: we've already achieved four out of the eight, nine, excuse me. One is in yellow, the 10,000-node job. You wouldn't want to submit a 10,000-node job; it works, but it takes a very long time, so I'm only giving that a yellow. The other thing that's really spinning up is the compute-over-data activity. We've launched the Compute over Data working group; we already have 11 member projects who have joined and are presenting, and who will also be presenting at the Compute over Data summit in November. Wes will be talking about that momentarily.
And then also: three compute-over-data networks running against IPFS and Filecoin. So that's the big thing there. Keeping us on time: you can see in the highlights that the Compute over Data working group has scaled faster than expected, so I'm really pleased about that. Lots of real-world use cases; you can see all of them there. And our team is continuing to scale up: we have seven people total, with three more in the funnel that we're really excited about. If you have workloads, internal, external, you name it, large data, we want to test our API, our usage, and our scale against your stuff. We don't care, whatever it is: if you want to run compute against your data, we're there. Wasm support: if you ever wanted to try Wasm as an executor against stuff stored on IPFS, we support that right now. You can submit a Python job, we cross-compile it into Wasm, and you can execute it; no need to understand Wasm or JavaScript, we do it for you. So if you want to give that a shot, you're more than welcome. And please do come participate in the Compute over Data working group; every two weeks, we get together and talk about this stuff. And with that, there's a wonderful opportunity: let me hand it over to Wes, who will talk a little bit about it in the next slide.

Awesome, that brings us into our spotlights, and I believe this is first. So, Wes, take it away.

Yes, thank you so much. We're happy to announce that the Compute over Data working group has launched; we've had our third meeting. All this information is available on cod.cloud as well as the YouTube page. The purposes behind the working group are: to create a space for collaboration, since there are many different teams in addition to Bacalhau trying to solve this problem of compute over data; to increase awareness, marketing, and go-to-market, because some of these projects are younger and are attracting user and VC interest; and to foster collaboration, since there are often shared standards between different compute-over-data projects, and we really want to invest in and support those shared standards. We do have the upcoming second round of our Compute over Data summit; it's going to be in Lisbon, November 2nd through 3rd. We very much encourage you to pencil in the dates; we'd love to have you attend. Also, please jump into the Slack channel if you'd like to stay up to date on the developments there. And the next big ask: if you know of any compute-over-data projects that are not part of the community yet, please do send them our way. We do want to grow the community; it's a big ecosystem out there, so we're always looking to add folks who are trying to solve these problems. And again, a big thank you to Patrick and the retrieval working group for all their help getting this started; they definitely paved the way for us. So that's all we have for today. Thank you very much.

Awesome. Over to Ansgar for Saturn.

Good morning, all you beautiful people. So, Saturn is Filecoin's content delivery network. Our mission, every day, every single day, is to make Filecoin fast. So we've got a lot of progress to report. While the network is still in testing, it has been growing, growing, growing: we are now at 44 points of presence globally. That means L1 nodes, nodes running in data centers that end users will talk to first. And we have been loading those nodes: we are now pushing over 80 terabytes and 120 million requests a day. And if that sounds familiar, that's our target; our initial target is the IPFS gateway network load.
So that's the same load. And how are we doing performance-wise on that load? We are 800 milliseconds faster than the IPFS gateway at the 95th-percentile time to first byte; you can see a little graph of that in the upper right. And we're twice as fast as the IPFS gateway at the 50th percentile. And the L2 nodes, this is the next step in the network: L1s will cache-miss to L2s; L2s will run on user desktops in Station (you'll hear about Station from Julian shortly); and L2s will cache-miss to the IPFS network and storage providers. That's forthcoming. Now, what's in the pipeline, what's on our menu next? We want to continue to improve the time to first byte: faster, faster, faster. We want to integrate with the IPFS gateway and see how we can bring Saturn to the existing production load of the IPFS gateway. And thereafter, we are aiming for a public L1 launch. That means anyone, your friends, your family, your mother, your dog, can go run an L1 node in Saturn's network, contribute to that network, and be remunerated in Filecoin for their contributions. And we'd love for you to join in on our little party: you can come jump into the Filecoin Saturn channel on Filecoin Slack, check out the orchestrator, which is the piece of software that monitors the whole network, and monitor our progress on GitHub. And then a huge shout out to the best little team at Protocol Labs, the Saturn team; let's keep cranking. And that's it.

Woo-hoo, awesome. And now the other part of that: Station. Julian.

Hey, I'm Julian. I'm part of the Filecoin Station project. We're building a desktop app for the Filecoin network, which actually spun out of the Saturn project. For users, this means that everyone can participate in the network by running the app, and everyone can earn Filecoin by doing so. It should be easy to install and run, so that we can grow the network as much as possible; you basically shouldn't even notice it's running. For developers, Station is a deployment target. The first module that will be deployed to Station is the Saturn L2, which adds edge caching. In the future, there are going to be a lot of compute-over-data use cases, which can be very interesting, and whatever else you can think of. So we're building this as an open platform, right? And an open platform obviously also needs a good security model; we are still working on that, and we might use IPVM. The platform will also handle resource allocation for you, so that module authors can focus on the actual business logic and don't need to be concerned with using people's machines too much. If you have questions or ideas, please join us on Slack. Thanks.

Awesome. Jesse, a quick update on the offsite.

Hey, okay, Jesse again. So last month we went to Iceland, combining the IPFS Thing with our offsite. We got a lot of learning and knowledge sharing done, which you can see listed here. We will slowly be sharing with the community what we're planning to do, and I think today the data team will share a little bit about the plan for the data platform. You can see we covered a lot of topics there. We also traveled around and went biking; Iceland is a very beautiful country with a lot of activities. So those are the highlights from our offsite. If you're interested in anything we're doing here, or want to join us for the next offsite, please let us know. We also have a hiring page here; if you want to do something with us, please reach out. Awesome. Thank you.
We have a couple more, and then on to deep dives. CryptoEconLab, please, Sean.

Awesome. So we had a very successful CryptoEcon Day in Paris: over 300 registrations and over 100 people attending, so I think that was our biggest one yet. We've been branching out to more non-PL speakers; I think our ratio in Paris was 50% from outside PL and 50% inside PL. And you can see the links on our CryptoEcon Day website, where we keep all the talks, so if anybody wants to watch them, they're there. We've also had two new team members join: Juan Pablo, who joined in Paris, and Shyam, who is joining part-time next week and then full-time in September. Some upcoming things: we have a new CryptoEcon website that Dave is working on, and then we have CryptoEcon Days in Singapore, Bogota, and Lisbon. So if you're interested, please register and come to our events; I think they're going to be great, and Dave has done a great job of finding new people to give talks at each of the events. And then some ongoing projects: we're working on the sector duration FIP, which I think is pretty well known, and Tom and Vik have been doing a great job there; Saturn Aliens; some gas modeling we just started working on; hierarchical consensus; and then Project Atlas. We are also looking to hire a couple of research scientists and three to four software engineers, so if you know anyone who's interested in working on these types of projects, please ping me and I will reach out.

Awesome. The IPFS Thing recap, Steve.

Yeah, awesome. This happened earlier in July in Iceland; there have been different mentions of it already. We had around 80 folks and 30 different projects represented, across the 12 different tracks you can see. There's a lot of recorded video from this that'll be going live early next week, so stay tuned for that. There was emphasis on multiple areas, certainly the various implementations and experimentation. We talked about the spec improvement process, and as soon as that got lit up, we immediately had people engaging with it, which is great to see. And I think a real highlight for me was seeing that a lot of orgs outside of Protocol Labs are really working together in a networked fashion: the IPFS and Wasm track kind of birthed the Interplanetary Virtual Machine working group, which you can join on Discord and Filecoin Slack, and similarly, the Content Addressable Alliance working group has gotten started, which advocates and promotes the usage of content addressing in different places. So, good stuff there. There's more coming here in terms of recap blog posts. Right after this call, there's actually a bi-weekly implementers sync to carry on some of the conversation, and we're going to be creating a monthly builders sync as well. And for folks to be aware: IPFS Camp is coming, where we'll gather the whole community; details on that will be shared on the ipfs.tech blog and in the IPFS newsletter as soon as we have them. But thanks, all, to those involved. Good times.

Awesome. Over to Alex North to tell us about network version 17.

Hi there. So the Filecoin network goes through a few upgrades each year, where we upgrade the L1 protocol. The last one launched on the 6th of July and brought some great new stuff, in particular enabling the FVM as the canonical VM on the network.
While the FVM team is working on user programmability, our next step is to launch a bunch of upgrades to the built-in actors and the built-in protocol, so that when people can write their own contracts and actors, they can do cool things. So the main theme of this is to support utility for user-programmed things: there's a bunch of new capabilities and a bunch of refactoring to make things play more nicely together. This is all part of a big effort towards programmable storage, which is sort of the dream for Filecoin: the network stores your data, and then you can write applications on top of Filecoin, as smart contracts or actors, that can do stuff with that storage. Broker deals, retrieve that data, compute over it, do all kinds of automated replication and renewal, the kinds of things that people expect to be built into the protocol but actually are not, plus all of the finance on top of the storage that sets up the economic incentives for a robust supply of, and demand for, storage.

So there's a big set of proposals in scope for network version 17, and most of these have been in discussion for a long time. I'm not going to talk through each one here, but there's a large set associated with the programmability of the finance, of doing deals, and of the storage itself, which will enable better capabilities for user-programmed contracts once they're enabled. We're not necessarily going to get everything that we'd like into this upgrade, so the edge of the scope here is being negotiated amongst the core devs community at the moment. There's also a smaller set of proposals around network policy. The biggest one here is changing the reward distribution to incentivize longer-term sectors and stabilize storage provider rewards. And associated with that, there is an opportunity to clarify network policy, should we ever discover flaws in the cryptographic security of our proofs. This is something we want to discuss up front: what kind of policy the network would take, so that storage providers have some predictability over their returns and the risks they're taking when they make really, really long-term commitments to their sectors.

We're already a fair way through implementation of these proposals. Some of these are being built by teams outside of Protocol Labs, which is fantastic and something I think we want to continue to expand. The formal FIPs for a couple of these are still being written, but the plans are well known. In some cases, we do the implementation at the same time as the proposal, because we learn things from writing the code that then form part of the spec. But we're well on track for this. So the upcoming work here is: the governance process for the FIPs that are not yet approved needs to run its course, and then implementation continues down a few streams. The CryptoNet team and the Lotus team are all lined up behind these items, and we're aiming to finish writing all the code in the next month or so and then start spinning up the network upgrade process, which works through test nets, integration, and so on, towards a network upgrade sometime October-ish this year.

Awesome, thank you, Alex. Over to Steph for our data update.

Hi, just a short update regarding the work that we've been doing on data. Our biggest goal for Q3 is to make Filecoin chain data fully queryable. What does that mean? Today, historical chain data lives in an S3 bucket.
It's called the Fil Archive; the data is represented as CSVs, and those CSVs are transformed into Parquet files, which can then be queried with the Athena query engine. However, this usually lags by one to two weeks, and the current data lives in TimescaleDB. This makes it really hard to query data spanning, for example, today back to three months ago, and stitch it together with data you might need all the way back to genesis. Obviously, we want to make it easy to do analysis not just from now to three months ago, but from now all the way back to genesis. So how are we going to do that? We are going to unify the historical chain data and the current data into one data warehouse, and we've chosen BigQuery because it has the least overhead when it comes to operations. You can read the proposal in Notion if you want more background and context on why we ended up choosing BigQuery in the end.

Progress so far: we now have a BigQuery project with the historical chain data from the S3 bucket. We simply ingested those CSVs into BigQuery, which won't be the longer-term solution; we just wanted to start data modeling with dbt, which David has worked on, so that transformations will be easier. It's version controlled, and transformations happen in the same data store that the transformed data lives in, so it will be much easier for us to model and massage the data however we need, based on feedback as well, and to iteratively improve the data model as we learn more about the needs of our users. If you want to test out the BigQuery chain data, you can do so in Sisense or Grafana if you choose the temp_bq data source; the screenshot down here shows how you would select it as a data source in Grafana, and you can do the same in Periscope as well.

More exciting news: we have data infrastructure as code deployed, as well as our Argo workflows. What this means is that we can lean into using the containers that have already been created for us by the rest of the PL network to build our data pipelines, instead of having to write language bindings, which is what was done previously, where we had to write Python bindings for Lily. Obviously, more code means more maintenance, and for a very small team of two, we wanted to keep our pipeline work as lightweight and low-maintenance as possible.

So what's next? We will be reprocessing the Filecoin chain to address issues in the existing CSVs. We'll also be doing more data modeling; that's happening tomorrow at 11 a.m. with myself and David, and if you want to join, just feel free to ping us on Slack. And we will also be setting up the existing production pipelines to have BigQuery as a destination. A nice side effect of moving from a relational database to BigQuery is that, because BigQuery is also a data warehouse, we can use it to store other business-relevant data as well. This was another set of issues that we had: people would come to me and ask, hey, why can't I do exploratory data analysis with, let's say, data from GitHub, and try to find some correlations with Filecoin chain data? That was really difficult before, and by migrating all of the data into BigQuery, we hope to address this issue. We will also be adding some data validation and testing with dbt as well.
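As an illustration of what "fully queryable" buys you, here is roughly what querying that chain data could look like from Go with the official BigQuery client library. The project, dataset, and table names below are made-up placeholders (the real tables follow Lily's data models), so check with the team for the actual schema before running anything like this.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	// "example-pl-data" is a placeholder project ID, not the real one.
	client, err := bigquery.NewClient(ctx, "example-pl-data")
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}
	defer client.Close()

	// Hypothetical table following Lily-style models: count how many
	// block headers were ingested at each of the latest epochs.
	q := client.Query(`
		SELECT height, COUNT(*) AS blocks
		FROM ` + "`example-pl-data.filecoin.block_headers`" + `
		GROUP BY height
		ORDER BY height DESC
		LIMIT 5`)

	it, err := q.Read(ctx)
	if err != nil {
		log.Fatalf("running query: %v", err)
	}
	for {
		var row []bigquery.Value
		err := it.Next(&row)
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatalf("reading row: %v", err)
		}
		fmt.Println(row) // e.g. [2162760 5]
	}
}
```

The same SQL works unchanged from Grafana or Sisense once the temp_bq data source is selected, which is the point of unifying everything in one warehouse: one query surface from genesis to the chain head.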
Yeah, if you're interested in any of the work that we're doing, reach out to us in #fil-sentinel, or ping us in the #data channel and talk to us on Slack. If you would like to learn more about how to use BigQuery with Grafana or Periscope, just let us know as well, and we'll try to get you bootstrapped. That's it, thank you.

Awesome, thank you so much, Steph. I definitely think some docs and/or tutorials on how others can make use of that would be super useful as well. Cool, well, that brings us to the end of our time. Unfortunately, we're out of time for Q&A, but thank you all so much for an awesome EngRes All Hands and for the great deep dives, and see you all next month. Cheers, everybody.