Then welcome everyone to our July PL EngRes All Hands Meeting. Similar agenda as usual: we'll have a quick working group update on the various teams in the network and what we're focused on, we have a lot of awesome spotlights today, and then we'll have a deep dive on the Proof of Space days. There was one that happened last week, and one happening next week in Paris at EthCC, so stay tuned for that. As a quick reminder, the PL EngRes working group works within the Protocol Labs Network. We are working on making the internet built on top of user-empowering primitives that help lock the web open and make it a great foundation for some amazing breakthroughs coming in the next couple of years, or couple of decades, depending on who you ask. A lot of our work centers around IPFS, libp2p, and Filecoin, but also focuses on many new exciting breakthroughs in computing that are growing out of this awesome collection of humans and teams that make up the EngRes working group. Our mission is to scale and unlock new breakthroughs in IPFS, Filecoin, libp2p, and related protocols by driving breakthroughs in protocol utility and capability, scaling network-native research and development, and stewarding and growing open source projects, networks, and communities. These are some of the amazing teams and projects that make up the PL EngRes working group, across a whole range of really exciting initiatives, and this is our focus for 2023: stewarding critical systems, growing the network of amazing humans contributing to these efforts, working on robust storage and retrieval for IPFS, Filecoin, and beyond, and enabling compute over Filecoin state and data. We've made a lot of progress on this; you can see a lot of green check marks already, which is really amazing, and we have a couple of really exciting additional projects that we're continuing to work hard at.
And we'll be continuing to flesh out a whole list of launches for Q4 that we'll maybe share in our next EngRes all hands. Some of the really awesome big bets that we've been working on over the past year and a half or so: lots of amazing work on large-scale data onboarding in Filecoin, with great work across a whole number of teams helping clients onboard, improving the tooling, and improving robustness and reliability of both storage and retrieval. We celebrated at the last all hands crossing an exabyte of total active data stored in Filecoin, which is super cool. A lot of work on Saturn as a Web3 CDN, enabling fast retrieval of data on Filecoin, but also caching Web3 data around the world, making it super accessible and easy to use. Really exciting progress as well around compute over data networks, which you'll hear about if you're joining us at EthCC next week: enabling compute L2s to add additional utility to Filecoin data storage and unlock composable data transformation and utility moving forward. And then also a lot of awesome work on Interplanetary Consensus, or IPC, helping unlock breakthrough new applications built on top of Filecoin: a solid blockchain foundation for web-scale applications that can store their data in Filecoin, utilize compute over data networks, and go far beyond that with the scalability they need to really expand, utilize regions, and interoperate across many different subnets. We have crossed the end of Q2; we are officially in Q3. Welcome to July. And so we are going to grade our OKRs from last quarter. Jump in if I'm supposed to let people do that individually; I think we can go either way. Lauren, Matthew, Steve, I think we are all ready to go. Go for it then. I'll jump in. With our very sophisticated OKR algorithm, we determined, on keeping critical systems running, growing, releasing, scaling, and secure:
We were about a 0.6 on the goal of unlocking real privacy via double hashing in the DHT and IPNI. We had to scale back this goal on the DHT side due to other priorities, and the work has bumped into Q3; on the IPNI side, we accomplished this on the server side. All CIDs are now stored double hashed in IPNI, and all requests are being answered from this double-hashed store. When IPNI receives CIDs, they are hashed at that point, and everything comes out encrypted as well. So the remaining work happens on the client side, and there is a proposal right now for what the actual format would look like on the client side, open if anyone wants to comment on it. On the Filecoin network improvements goal, we landed two pretty significant network improvements in nv19: FIP-0060, setting the market deal maintenance interval to 30 days, and FIP-0061, the WindowPoSt grindability fix. And the third achievement, landed right at the end of the quarter and continuing into this one, is state profiling of the chain itself. Steve, go ahead. Yeah, so on hyperscaling, accelerating the talent and teams contributing to the PL stack, the first item was about hosting events. We did have four events successfully hosted. We didn't quite hit our numbers in terms of attendees, both in real life and asynchronously, which is why we didn't give ourselves a full one there. On the second item, regarding Boxo: there are getting close to around 300 packages depending on Boxo at this point. Some of these are IPFS implementations that are moving CIDs around, but many, if not most of them, are projects that were already depending on these IPFS repos before they were consolidated into Boxo, so we don't want to take credit there. And we haven't yet identified cases where projects that were previously using Kubo, or a fork of it, have now switched over.
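For intuition on the double-hashing work described above: the index stores entries under a second hash of the content's multihash, and seals provider records under a key derived from the original multihash, so the server never learns which CID is being looked up. This is only a sketch; the key-derivation labels and the XOR "cipher" are illustrative stand-ins, not the actual IPNI reader-privacy format.

```python
import hashlib

def lookup_key(multihash: bytes) -> bytes:
    # The index stores entries under a second hash of the multihash,
    # so the server never learns which CID a client is asking about.
    return hashlib.sha256(multihash).digest()

def seal(multihash: bytes, provider_record: bytes) -> bytes:
    # Provider records are encrypted under a key derived from the
    # original multihash: only a client that already knows the CID can
    # decrypt. An XOR keystream stands in for a real cipher here.
    pad = hashlib.sha256(b"enc:" + multihash).digest()
    ks = (pad * (len(provider_record) // len(pad) + 1))[:len(provider_record)]
    return bytes(a ^ b for a, b in zip(provider_record, ks))

# Client flow: hash locally, query by the hashed key, decrypt the reply.
mh = b"\x12\x20" + hashlib.sha256(b"example block").digest()
record = b"peer=12D3KooWexample /ip4/203.0.113.7/tcp/4001"
assert len(lookup_key(mh)) == 32
assert seal(mh, seal(mh, record)) == record  # XOR sealing is its own inverse
```

The remaining client-side work mentioned in the update is exactly the left half of this flow: hashing locally before querying, and decrypting what comes back.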
So this endeavor is still ongoing; we haven't hit the goal, and as a result we're not going to claim the key result was fully achieved yet. Matthew, you're next? Yep, on scaling data onboarding, and of course getting the data back out really quickly. Last month we talked about how we made a strategic decision to change some of these goals. We still have the KR here of five-plus Saturn customers onboarded, but if you recall from last time, we made an intentional shift to focus on customer number one, which is Rhea, and getting it right. The team has been making really great progress on that, really honing in on correctness. On the last bullet here, the one thing I wanted to note is that I gave it a 0.6 with an asterisk, because while we have not reduced our centralized costs by switching to Saturn yet, the team has made really amazing progress in cutting our centralized Web2 infrastructure costs. In fact, in the last quarter alone, we cut another $5 million from our annual spend. So from a numbers perspective, this is a 1.0, and we're going to see more gains once we get the savings we thought we were going to get from here. But let's move on. Thank you, Matthew, great updates. On compute over data and state, upgrading Filecoin with new L2 capabilities, we had three main OKRs across FVM, IPC, and CoD. On FVM land, it's a 0.9, because we over-reached on all the metrics here except one of them, wallets. On the amount of FIL managed by contracts, this week we reached 3 million, which is great, and we have very ambitious goals for the upcoming quarter. Unique contracts, we are around 1,400, which is again over our target of a thousand. Average transactions, as of today we are around 150,000 on a trailing 14-day average. This was a learning for us: when we first launched, those numbers were super low when you looked at the benchmarks, which are not very clear or easy benchmarks to find.
So we are definitely learning how to put certain numbers into context and set more ambitious goals. And then on FVM wallets, due to some gaps we are focusing on addressing this quarter, we are, as of today, a little over 100,000; we crossed 100,000 this week. While we exceeded most of the metrics, given that we didn't know how to set them ambitiously enough and we missed one of them, it's a 0.9. On IPC land, there's a great update the team is going to share today, so I don't want to break the news too much, but we reached a great milestone this week; more to come later in this call from the team. Kudos to the IPC team. And on the CoD side, we reached this milestone in May, and there is more to come that we're about to share in Q3; David will also share later in this call what we are planning to do next week in Paris. So overall, these round up to 0.9s and 1.0s. Awesome. And we have set ourselves some new ambitious goals for Q3, so I'll let folks take it away again to share what we're going to be tracking ourselves on for the next three months as we dive into the next chunk of work around these areas. Great. The Q3 goal on keeping critical systems running: the first KR relates to the IPFS gateway and actually having transparency into what the errors are, so we can see what's going wrong and then try to address it. Transparency is the first step. On the Filecoin side, we want to land three high-value FIPs that actually improve the Filecoin economy, so some velocity there on improvements for efficiency and the economy. And then the last two KRs both relate to getting more community members running important infrastructure: we want more community-run bootstrap nodes and community-run WindowPoSt disputers and consensus fault detectors. So we'll be creating a lot of the software to enable that and working with the community to get those running. Steve? Great, yeah.
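As a quick aside on the consensus fault detectors mentioned in that KR: at its simplest, a detector watches for the same miner signing two distinct blocks at the same height. This toy sketch shows only that one fault type; real Filecoin fault checking also covers time-offset and parent-grinding variants and verifies block signatures, none of which is modeled here.

```python
def is_double_fork_fault(b1: dict, b2: dict) -> bool:
    # Simplified consensus fault: the same miner produced two
    # different blocks at the same epoch.
    return (
        b1["miner"] == b2["miner"]
        and b1["epoch"] == b2["epoch"]
        and b1["cid"] != b2["cid"]
    )

a = {"miner": "f01234", "epoch": 3077001, "cid": "bafyExampleA"}
b = {"miner": "f01234", "epoch": 3077001, "cid": "bafyExampleB"}
assert is_double_fork_fault(a, b)       # two blocks, same miner and epoch
assert not is_double_fork_fault(a, a)   # the same block is not a fault
```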
So again, around growing the stack. The first one here is about CryptoEconLab continuing their great trend of being self-sufficient and self-funding; that's what item number one is about. drand is also moving into similar territory, getting validation on their product-market fit, getting that plan out there for how they're going to nucleate, et cetera, and some other key lighthouse customers validating it. And the third one here we ran into recently at HackFS: users think they can offer content from their browser with a JavaScript implementation of IPFS like Helia, and then expect to be able to retrieve that content via a gateway like ipfs.io, and that isn't working today unless you start relying on preload nodes or pinning services. So this is a cross-cutting effort across IPFS implementations and work in libp2p to pull that all off. It's a golden-path use case that we expect people to run into, which we want to get nailed down and working seamlessly. On data onboarding land, with Project Motion, they're going to be looking to get two or more third-party integrations onto this new and easier way to do Filecoin integration. We're also going to be concentrating on SPs in the 4 to 6 PiB range, helping them get unsealing for retrieval working correctly. One that I just mentioned previously: on project Rhea, the initiative is to get to 100% of ipfs.io gateway traffic being served through Saturn by the end of this quarter. And in dot-storage land, the march of w3up into the glorious land of production continues, with 100% of historic uploads available there and stored in Filecoin deals. Thank you. And on compute over data and state land, for FVM we have two main goals. One is to accelerate FVM adoption, building on what we achieved since the Q1 launch. As you see, we are targeting 15 million FIL TVL, by the DeFiLlama definition, this quarter, which is a huge goal.
And we are hoping that will get us to the top 30. As you know, top 30 depends on other dynamics too, but that's the goal; we think we should be there. On the number of wallets, that is still a very important metric that we are carrying over from last quarter: this quarter we are aiming to get to 200,000 wallets and users. And then lastly, unique contracts, which is a great representation of how many unique teams and projects are being deployed on FVM; we are targeting 2,500. The second big goal for FVM is on our product and engineering platform capabilities: we are working on unlocking capabilities that are going to enable multiple runtimes and new use cases on FVM. So this quarter we are looking at data-plus-FVM use cases, getting at least four end-to-end data aggregators that are going to make it much easier for developers to store data with Filecoin and deploy their smart contracts. Related to that, we are aiming for 2,000 storage deals being done through these aggregators, through FVM. And then lastly, on the platform end, some of the biggest milestones we are hoping to achieve this quarter: new runtimes being securely deployed, demonstrating that we will be on track for multiple runtimes in Q4, which is going to help many other teams, including our big bets, achieve growth and accelerate; and Lotus Ethereum JSON-RPC API support for transaction tracing. On the IPC implementation side, as you see, after reaching the big milestone this week, in Q3 we are aiming to launch a robust and audited IPC implementation on calibration net that demonstrates we are on track for a Q4 GA public launch, and to sign up five prominent early-access subnet dev teams to commit and allocate their engineering resources to build on IPC. Here we have CDN, DeFi, gaming, and storage as important verticals we will be aiming to target and achieve commitments in.
Lastly, on the CoD side, we are planning to deploy a live CoD testnet and demonstrate growth through external users on this testnet: 50-plus nodes for the compute jobs, at least 1,000 users, which is a very ambitious goal for the testnet, and 2,000 jobs submitted. And we are hoping to demonstrate with our technical roadmap that we are going to be on track for Q4 GA as well. Awesome, a super exciting and ambitious goal set. If we hit 70% of this, I'm going to be pumped, and I bet the whole ecosystem will be too. So awesome work everyone, and with that, I'll hand it off to IPFS. Yeah, I'll take this. So IPFS: we're making the web work peer to peer with content addressing, so content can be verified independent of the provider or the transport method. Next slide here, about some of our KPIs. Many of these come from probelab.io, but you can get all the details by hitting that QR code. There aren't a lot of major callouts I want to make here. A couple of things I'll say on the right-hand side: there is a slight uptick in DHT find-provider latency. The ProbeLab team is investigating that, so I don't have anything to report on it yet; I would expect we will by our next all hands, but that's ongoing. You will also see a large IPNI latency drop for the p90 uncached case. The Bedrock and network indexer teams did a whole bunch of internal query routing improvements, and they are now seeing the net result of faster lookups, so great job to those folks. Next slide, on the protocol implementation highlights: continuing with project Rhea and other needs, gateways are continuing to get a lot of investment. More will be shared on this, but I do want to note that all of the sharness tests that had been accumulated, sometimes learned the hard way, in Kubo over the years have been moved to the gateway conformance project, so that any gateway can be tested: as long as it has an HTTP endpoint conforming to the gateway spec, we can hit it and test it.
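The "verified independent of the provider" property mentioned above boils down to re-hashing bytes: for a raw block addressed by a sha2-256 CID, any client can check the response from any gateway. A minimal sketch; real CIDs also carry version, codec, and multibase prefixes, omitted here.

```python
import hashlib

def block_matches(block: bytes, expected_sha256: bytes) -> bool:
    # For a raw block addressed by a sha2-256 CID, verification is just
    # re-hashing the bytes and comparing against the digest in the CID.
    return hashlib.sha256(block).digest() == expected_sha256

block = b"hello from a gateway"
digest = hashlib.sha256(block).digest()
assert block_matches(block, digest)            # untampered block verifies
assert not block_matches(b"tampered", digest)  # any mutation is detectable
```

This is why it doesn't matter which provider or transport delivered the bytes: the address itself is the integrity check.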
So that is all done. The work for partial CAR support within trustless gateways has also landed and shipped, and it is now being used in multiple IPFS implementations. With Kubo and Boxo, we did finally ship Kubo 0.21, which was talked about at the last all hands. There were some performance regressions and bugs that needed to get fixed; we've got those all handled, so the release is fully out the door with the features we talked about then. For Boxo itself, the launch milestone that we had set for ourselves is now fully done: the repo consolidation and archiving of around 30 repos has occurred. go-car has been moved back out of Boxo, and we're just depending on the ipld/go-car repo, which is important for users. There were also some dependency clashes that would occur if someone was using Boxo with some of the existing repos, and those have been addressed too, thanks to the various folks involved. And as you can see, Boxo has a logo, so we'll be using that brand going forward. I also want to note that the IPLD Explorer is working again. That's kind of our main, if not only, visualization tool for IPLD data. It is now using Helia; it's no longer depending on js-ipfs or PL preload nodes. So if you had abandoned the tool, or abandoned pointing people to it because it didn't work, please resurrect it and try again, because we've got it working. I also want to call out that the community and the maintainers have been involved in HackFS. Quite a few submissions were made; we added a link to the winners, and there was a lot of good feedback, not all positive of course, but certainly a lot in that direction. It was great to see some of the comments, particularly around Helia and how the work that had been done on the documentation and examples was useful. Coming up, there's more gateway work around being able to signal how you want your blocks ordered within CARs.
So that'll be making its way into Boxo and the conformance tests. The Companion MV3 launch is imminent; we're just wrapping up the last items so we can really be tracking the usage and metrics and handling the migration path, but you can get the beta of that extension if you'd like, and feedback is certainly welcome. As has been alluded to before, there's been a lot of work going on behind the scenes on the DHT and how we get it into a place where we can continue to upgrade and support it even as other content routing systems like IPNI are being invested in. We're getting that publicized with proper milestones; that roadmap will be coming out this month, before the next all hands. There's also a lot of behind-the-scenes code cleanup to pull out the IPFS-specific parts. There'll be a new Kubo release, and the probelab.io website, which has already been live for quite a while but which we've been fine-tuning, will have its v1 announcement; we certainly welcome folks to check it out and let us know if there's anything you need from an IPFS network metrics standpoint. Thanks, on to libp2p. Okay, so libp2p is the P2P networking library for many applications. Next slide please. I have a minute here, so I'm only going to highlight a few things. Max put a lot of work into libp2p performance dashboards; there are great visualizations, check them out at the link. At the latest community call, Iroh showed off how they do hole punching, with lots of cool discussion around QUIC and DERP and NATs. We have a recording out on YouTube, check it out. The next go-libp2p release will enable smart dialing by default. That means we'll have more efficient connections and more efficient use of network resources; for example, you won't start eight connections just to close seven of them half a second later. It already shipped in v0.28, it's just off by default. js-libp2p is a monorepo now.
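The smart dialing mentioned a moment ago can be pictured roughly like this: rank candidate addresses and try them one at a time, rather than racing them all in parallel. The ranking heuristic below is illustrative only, not go-libp2p's actual scoring.

```python
def rank_addrs(addrs):
    # Order candidate addresses by how quickly they tend to succeed:
    # QUIC first, then TCP, relayed circuits last. Heuristic only.
    def score(a):
        if "/quic" in a:
            return 0
        if "/tcp" in a:
            return 1
        return 2
    return sorted(addrs, key=score)

def dial(addrs, try_one):
    # Instead of opening eight connections in parallel and closing the
    # losers, try addresses one at a time in ranked order.
    for a in rank_addrs(addrs):
        if try_one(a):
            return a
    return None

addrs = [
    "/ip4/203.0.113.7/tcp/4001",
    "/ip4/203.0.113.7/udp/4001/quic-v1",
    "/p2p-circuit/p2p/12D3KooWexample",
]
assert rank_addrs(addrs)[0].endswith("/quic-v1")
assert dial(addrs, lambda a: "/tcp/" in a) == "/ip4/203.0.113.7/tcp/4001"
```

In practice the real implementation also falls back to parallel dials after a short delay so that a slow first choice doesn't stall the connection.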
rust-libp2p has a big 0.52 release with a lot of community contributions, and it includes one of my favorites, a WebTransport transport for use in browser environments. libp2p HTTP has a newly redrafted spec, linked in the slide, and we're going to define how HTTP resources are represented in multiaddrs, also known as HTTP paths. All right, that's it, thanks. Awesome, thank you, Marco. We're trying to build a decentralized storage network that can get data in, get data out, and then let you compute over that data: that's Filecoin. Next slide, some quick network stats. The total network storage capacity is still over 10.82 exbibytes. We can see that the growth of the network in RBP is still slowly decreasing these days, as the network goes through some changes I'll talk about in a later slide. However, the data stored in Filecoin deals is constantly growing. We are at 1.19 EiB, which means 10% of the network's storage is now being used to store valuable data. That's very exciting. Next slide, some highlights. First, through the updates: we have launched a Filecoin state viewer. We have a spotlight on it later, so I don't want to go through it in detail, but it's one of the profiling efforts that helps us better understand what's going on in the Filecoin state, to make sure the network doesn't blow up out of nowhere. We also shipped two Lotus releases in the past couple of weeks with a lot of optimizations, including more efficient VM execution paths and a lot of SP software improvements, plus a fix for a critical networking issue with support from the libp2p folks like Marco; we got that out to SPs quickly so they can keep high-uptime nodes. The next one is extremely exciting: Boost v2 was launched with the Local Index Directory. That means Boost is now more than ready to scale and allow SPs to serve the increasing volume of retrieval requests more reliably. So congrats to the team.
From the last update, people probably know the proofs team and the lotus-miner team have been working on synthetic PoRep, and we're about to launch a butterfly testnet for community members and storage providers to test this new proof optimization at the protocol level. We are also working on integrating Supranational's PC2 binary into lotus-miner, which will bring a 17% optimization to the PC2 task for both CC sectors and data sectors, reducing the cost of onboarding storage onto the Filecoin network. I learned today that there's a retrieval bot with reputation scoring launched for the Filecoin network. That means we have moved beyond just getting data into the network and are slowly becoming a storage network that gets data out as well, which is very cool. Again, this is all very new, so there are improvements coming, but it's a good first step. I don't want to say too much about IPC, since there are some exciting updates coming later on. There are some challenges in the Filecoin network today, so I want to briefly touch on that, based on the market and the whole blockchain dynamic these days. We are facing some challenges as a storage network: we are seeing slower storage capacity growth than two years ago, which means the network is now below the baseline, and as a result it's causing lower block rewards in general, and the storage providers who are providing service on the network are potentially facing some sustainability challenges. So there is a very active community discussion going on to identify the problem and to work together as a community on solutions that keep the network sustainable. If you have any thoughts, please join the community discussions. Alongside the overall tokenomics of the network, there is also a lot of discussion around how Filecoin Plus works on the network.
There's a lot of recent discussion on what qualifies for DataCap under Fil+. The Filecoin network is designed to be a foundation for humanity's most important information, and to incentivize useful storage for that data. So we want to make sure that incentive system is effective and ready in the longer term, but in the immediate short term there is a lot of conversation on what datasets are considered useful for humanity, and how we govern the mechanism for choosing which datasets get incentivized by the network versus which clients should be paying for the storage. There's a lot of interesting conversation going on in the Fil+ channel, so if you have any thoughts on how Filecoin should evolve in this respect, please join the discussion. There are also a lot of opportunities we're discovering as a result of the whole profiling effort on Filecoin state and storage capacity, and the accompanying modeling. There are areas where we can reduce the state size or the operational overhead within the Filecoin protocol, to save operational costs for storage providers, but also to make sure Filecoin is going to be scalable and sustainable in the longer term, without any surprises. So we will be focused on that in the next couple of weeks. That's it. Great to be doing lots of modeling, and good that we're seeing opportunities. All right, now handing it off to our awesome spotlight presenters, starting with me. Hello, I am already in Paris, so I'm very much looking forward to seeing many folks from our community at various events at and around EthCC over the next week-plus.
Here is a list of some of the many exciting events where you can meet fellows from the PL EngRes working group and beyond at all sorts of fun gatherings. So if you're going to be in Paris, stop by. If you're not going to be in Paris, send your friends, we'd love to meet them, and make sure you check out the recordings, because many of these are also going to be live streamed and available: Filecoin Unleashed, CryptoEconDay, InfraGardens, Proof of Space Days. So definitely check out the recordings if you can't be there in person; very much looking forward to gathering the community together. Over to Peter. Hello, a quick spotlight from IPDX. So IPDX, we're taking care of developer experience needs within the IPFS stewards teams and beyond. As Steve already mentioned, we reached quite a big milestone in the Gateway Conformance project. I'm not going to reiterate that, but I am going to say how proud we are of this project and how quickly it became really key across PLN and its various initiatives, and that's all thanks to the ownership that Voron provided there. So I just want to thank Voron here. And if you haven't checked out Gateway Conformance yet and you're working around the gateways, please do. Another spotlight from IPDX is around GitHub self-hosted runners: we managed to reduce the operational costs of the solution we produced by six times. We're really excited about that, because it means this project is much better positioned to serve as a solution to the runner bottleneck problems that we often see around PLN. So if you do experience that, please reach out; we'll be able to help, and we'll be able to do it more cheaply now. And if you're interested in more of our work for the next part of the year, our roadmap is up, so make sure to check it out. Thanks. Awesome, congrats. On to the state-size visualization spotlight.
Hi, we're here to demo a tool for visualizing the Filecoin state. The motivation came from a large reduction in the state used by Lotus nodes and Filecoin snapshots in nv19. Before nv19, we saw about 50 gigabytes of extra state, and by using this tool, we found where that data was being used. So we can visualize what happens and understand the network. This is an interactive chart, and we can use it for other things too. Over to you, Mike. Awesome, yep. This will be generated weekly, and we'll get a Slack ping with an IPFS link hosting the data visualization, which will let us continue to monitor the state usage in Filecoin. We're really excited about it and excited about sharing it with you. There we go. Nikolai, tell us about Bifrost. Yeah, hello from the Bifrost team. This month we extended our telemetry platform to support long-term log data retention. We now store the IPFS gateway access logs in Amazon S3, and this enables us and other teams to analyze gateway traffic patterns without running permanent, expensive compute infrastructure. We already use Apache Kafka to ingest logs from all the servers we maintain, so we took advantage of this and rolled out an open source tool called Kafka Connect to handle the data archival and upload to S3. The stored data can be processed using most of the popular data analytics tools, like Google Dataproc or Amazon Athena, or you can even use Python with pandas. All of this enables us to provide a self-service platform for everyone in the organization who wants to perform long-term analysis, like: how is gateway traffic affected by the NFT mania? You can also analyze the content the gateway serves. You can find more information about how to access the data on our public Notion page, and if you have any questions, reach out on the Filecoin Slack. That's it, thank you. Thank you, thank you. Over to Eric.
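As a taste of the self-service analysis the Bifrost setup above enables, here is a stdlib-only sketch of crunching gateway access logs once they've been pulled down from S3. The field names are made up for illustration; the real log schema may differ, and pandas would work just as well.

```python
import json
from collections import Counter

# A few access-log lines as they might land in S3, one JSON object
# per line (NDJSON). Field names here are illustrative only.
raw = """\
{"path": "/ipfs/exampleA", "status": 200, "bytes": 1024}
{"path": "/ipfs/exampleA", "status": 200, "bytes": 2048}
{"path": "/ipfs/exampleB", "status": 504, "bytes": 0}
"""

records = [json.loads(line) for line in raw.splitlines()]
hits = Counter(r["path"] for r in records)
error_rate = sum(r["status"] >= 500 for r in records) / len(records)

assert hits.most_common(1)[0] == ("/ipfs/exampleA", 2)
assert round(error_rate, 2) == 0.33
```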
Everybody, I'm very excited to announce that we've got drand timelock working on FVM in MVP form, so really pumped about that. Patrick and Yolan have been working hard to put this code together, along with some fantastic help from the FVM team and the IPC team, so thanks very much; you folks know who you are. We're really excited about enabling this primitive because it's going to enable more compute-over-data scenarios that involve time-locking files. For example, users can send timelocked data into FVM that gets decrypted at a later point in time. It also enables some more sophisticated commit-reveal interactions, including randomness-based ones, and we're really excited about potential MEV prevention methods, so we're looking into that very closely. Overall, it's a big step for us as a small drand team, and we're really excited to be able to contribute to FVM's success. On top of that, we've got a few demos here. These are more end-user demos; they're not demos of FVM, they're just to get familiar with timelock encryption in case you haven't already seen it. And then lastly, a quick mention: we have an event. Unfortunately, this event is at the same time as CryptoEconDay, so for those of you who aren't hardcore crypto economists, please come check us out over at InfraGardens, which we're co-hosting with a bunch of great PLN partners. We have over 1,000 signups already, but we can only host 500 people at a time, so we're going to be managing things at the door so that we don't overload the venue. But we've secured Molly, and possibly Juan; we'll see. As you can imagine, Juan's schedule in Paris is crazy busy. Thank you, Molly, for being willing to join us for one of our panels, much appreciated. We really look forward to seeing those of you who will be in Paris with us and meeting some of you in person for the first time. So thanks very much. Glad to be there, looking forward to it. Over to Vic.
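For intuition on the timelock flow just described, here is a mock-up of the commit-reveal shape: data is encrypted toward a future round and becomes decryptable only once that round arrives. Real tlock uses identity-based encryption against the drand network's BLS round signatures; the hash-derived keystream below is only a stand-in so the flow is visible.

```python
import hashlib

def encrypt_to_round(plaintext: bytes, round_no: int) -> dict:
    # Stand-in for tlock: derive a keystream from the target round.
    # In the real scheme, only the drand signature for that round,
    # published when the round arrives, can decrypt.
    pad = hashlib.sha256(f"round:{round_no}".encode()).digest()
    ks = (pad * (len(plaintext) // len(pad) + 1))[:len(plaintext)]
    return {"round": round_no, "ct": bytes(a ^ b for a, b in zip(plaintext, ks))}

def decrypt(box: dict, current_round: int) -> bytes:
    if current_round < box["round"]:
        raise ValueError("round not yet reached")
    pad = hashlib.sha256(f"round:{box['round']}".encode()).digest()
    ks = (pad * (len(box["ct"]) // len(pad) + 1))[:len(box["ct"])]
    return bytes(a ^ b for a, b in zip(box["ct"], ks))

# Commit-reveal: a sealed bid nobody (including the bidder) can open early.
box = encrypt_to_round(b"sealed bid: 42 FIL", round_no=1000)
assert decrypt(box, current_round=1000) == b"sealed bid: 42 FIL"
```

The MEV angle follows the same shape: transactions sealed until a target round can't be front-run on their contents.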
Hey, everyone. As Jenny mentioned earlier, the Filecoin network's storage crossed below the baseline in February of this year, which has caused block rewards to decline; this is intended, as per our hybrid minting model. Despite this, QAP growth remains high due to strong storage onboarding, which has resulted in steady and consistent amounts of FIL locked daily via storage collateral requirements. Both of these factors have contributed to Filecoin's daily change in circulating supply hitting an all-time low this month. Some storage provider businesses are struggling in the current environment, but there are some encouraging signs, particularly the DeFi economy that can provide liquidity solutions for SPs, which is another added benefit of having shipped the FVM in March. These economic mechanisms are operating as intended, and the baseline function will continue to incentivize healthy cooperative competition amongst storage providers to onboard storage capacity and power the network. As we look ahead to the Filecoin economic state later this year, one thing to note is that part of the genesis vesting ends late in October, and given current trends, we think minting will likely continue to decrease as network raw byte power decreases. Some good things on the horizon: protocol simplifications and cost optimizations can help entice more onboarding, and looking even a little further out, the baseline might also cross the network's quality-adjusted power, which could provide additional storage onboarding incentives through decreasing storage provider collateral requirements. So in summation, at the moment our assessment is that these economic mechanisms are working as intended, but we will of course continue to monitor these trends as they develop. Awesome. Thank you for the great graphs and for helping us all have visibility into this evolving economy. Over to David for some exciting news with Bravo. Indeed. Thank you all so much.
I feel so lucky to be part of so many great announcements. For us, we have a new launch related to Bacalhau. For those who have not kept up, Bacalhau is the open-source compute platform that works in both Web2 and Web3 scenarios. The Web2 side nucleated out into a company called Expanso, and we have been hard at work on the Web3 version of it, currently codenamed Bravo or Lilypad, depending on where you look. Regardless, it is the Web3 version of Bacalhau, using off-chain execution but on-chain scheduling, verification, and consensus. It will be built on top of IPC. It will offer Solidity execution in a trustless contract context, and it's being built by the same team that built Bacalhau, with tons of deep Filecoin, IPFS, and libp2p integrations. The status: we have Solidity support, we have IPC support as it stands right now, as well as Ethereum support, and we have a running testnet. You can see the sample jobs there down on the right. The first job ever executed against our testnet is from an application called Cowsay, where you provide a string and it draws it back to you. Bravo Network, quickly, is the consensus layer that sits on top of Bacalhau, and I'm happy to answer any questions; I know it's very subtle. Our initial use cases are around generative AI, LLMs, Filecoin data processing, scientific computing, and so on. We are very proud to announce that next week at EthCC we will be launching our very unstable but real testnet, so you can go try this yourself. We are really, really excited; things have moved very, very quickly. We have many presentations next week and the week after, and we already have stable modules for Stable Diffusion, S3 read and write, deterministic WASM, and so on. Lots of stuff moving very, very quickly. We would love your feedback, talks, and so on, and we expect to have lots more announcements in the coming quarters. Awesome.
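To make the cowsay example concrete, here is a hedged sketch of what a job submission payload for a docker-style compute job might look like; the field names and structure are illustrative assumptions for this sketch, not the actual Bacalhau or Bravo API schema.

```python
import json

# Illustrative job spec for a cowsay-style docker job.
# All keys here are assumptions for the sketch, not a documented schema.
job = {
    "spec": {
        "engine": "docker",
        "docker": {
            "image": "cowsay:latest",                 # hypothetical image reference
            "entrypoint": ["cowsay", "hello, web3"],  # the string the cow says back
        },
        "publisher": "ipfs",  # results published as content-addressed data
    }
}
payload = json.dumps(job, indent=2)  # what a client might submit to the network
```

The point of the shape: execution happens off-chain in a container, while the spec and the content-addressed result are what the on-chain scheduling and verification layer reasons about.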
And if you have a little extra bandwidth and want to hack with some folks, there's even AugmentHack, where you can hack around with the new testnet at EthCC. So really, really hot stuff, fresh off the presses, and we look forward to seeing your bugs. And to keep the exciting launch news rolling, Alfonso, take it away. Hello, everyone. I'm proud to announce IPC on mainnet. Yesterday we deployed the IPC smart contracts to mainnet, so you are now able to spawn new subnets with Solidity-based actors. This means you can deploy a subnet and seamlessly send and exchange funds with mainnet from your subnet. If you want to start interacting with it, install the IPC agent and just point it to the two contracts I mentioned; I will share the addresses in Slack so they are copy-pasteable. With the IPC agent pointing there, you will be able to deploy a subnet from mainnet. That being said, we strongly advise testing in calibration first, because this is an alpha release, which means it hasn't been audited and your filecoin may be lost, and we don't want anyone to lose their filecoin. Either way, I really recommend everyone spawn a subnet, see how rough our UX is, give as much feedback as you can, and help us build IPC. And thank you, George, for doing the slides while I was out of office. Thank you, everyone. Super exciting; go play around with that as well. We have fast-shipping dependencies: IPC on mainnet, Bravo using IPC, and all sorts of good stuff. So many things hot off the presses, all using each other. And now over to the team for the proof-of-space days deep dive. Hi everyone. This is Max and Irene from CryptoNet. We had PoRep Day at PL last week, on the 6th of July.
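For intuition about how subnets nest under mainnet, here is a tiny sketch of hierarchical subnet naming: each child subnet is identified by its parent's path plus the address of its subnet actor, which is what lets funds and messages route up and down the tree. The path format and addresses below are illustrative assumptions, not the exact IPC encoding.

```python
# Sketch of hierarchical subnet IDs: parent path + subnet actor address.
# Format and addresses are assumptions for illustration.
def child_subnet(parent: str, actor_addr: str) -> str:
    """Derive a child subnet ID from its parent path and subnet actor address."""
    return f"{parent.rstrip('/')}/{actor_addr}"

root = "/r314"                           # illustrative root network ID
sub = child_subnet(root, "t410abc")      # hypothetical subnet actor address
nested = child_subnet(sub, "t410def")    # subnets can recurse arbitrarily deep
```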
We covered things like new research directions in proofs of space, security models, a survey of previous efforts on proofs of useful space, and what has changed since 2020. And we have a new website that just launched, proofofspace.org, as an educational and outreach site. Some stats: we had almost six hours of talks and conversations, and thanks to Solaris, shout out to them, we now have professionally produced recordings of the individual talks available. We had 21 registrations and up to 40 participants. And yeah, the feedback was really good: 5 out of 5, with good qualitative feedback as well; people reached out. So I'm going to pass it over to Irene to talk about insights and outcomes. Thank you, Max. So first of all, we have two viable paths that we can implement in the short-to-medium term. One is about improving the existing SDR parameters, for example bringing retrieval speed from the hours it takes today down to maybe seconds, using a VDF in combination with the existing PoRep, especially WindowPoSt-style protocols. The other viable path is to replace SDR with something we already know, for example the NSD construction proposed around three years ago, with better parameters. Thanks to some very recent research efforts, we can very likely get better parameters than we thought, and this again could give faster-than-today retrieval and in general improve the status quo, also on security and other efficiency measures, for example on-chain footprint. Then, in parallel and more focused on the medium term, what we really understood is that we want not just seconds for retrieval speed, but maybe milliseconds. And that is probably going to be achieved with a parallel track of focused research on completely brand-new proof-of-space and proof-of-replication constructions.
So: leave the SDR space and the more general graph-labeling area, and find a new construction that can give, by default, faster retrievability with better parameters. In parallel to this, we want to resume all the ideas we have that use a VDF. VDFs are becoming more real every day. Three years ago it was hard to have concrete parameters, testing, and hardware; those things were out of the picture. Today this is changing a lot, so the idea is that we want to resume all these ideas and test them, test them, test them. And last but not least, a very important outcome from PoRep Day is that there is maybe no one solution for all applications. So it's very important to classify the storage applications where we want to see Filecoin active. We need to understand different storage mediums and storage sizes; all these concrete real-world applications need to be classified, and we need to extract from the applications, from the product point of view, the requirements and constraints that we need to put on the next PoRep and PoSt constructions. We are going to work on this with Bedrock and other teams, of course. Next up are the proof-of-space days. Max, do you want to present them? Oh, it's fine. Yeah. We're going to have them July 20 and 21, so please join us. July 20 is going to be a conference day near République, and July 21 is going to be a workshop day on a river barge right in front of Notre-Dame, a beautiful location. We have more than 50 registrations so far. The goal of this, on top of alignment and sprinting on the work that you saw, is also to onboard new engineers and researchers into the space, and we already have quite a few students registered. So yeah, please join us. Awesome. Really great to see us making progress here.
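Since VDFs keep coming up, here is a toy illustration of the sequential-delay idea they rely on, using an RSW-style time-lock puzzle: without the factorization of N, computing x^(2^T) mod N takes T sequential squarings, while the trapdoor holder can shortcut via phi(N). A real VDF additionally needs fast public verification, which this sketch does not provide, and the numbers here are deliberately tiny.

```python
# Toy RSW-style time-lock puzzle (not a full VDF: verification here isn't fast).
p, q = 1009, 1013          # toy primes; real deployments use large RSA moduli
N = p * q
phi = (p - 1) * (q - 1)    # the trapdoor: knowing phi(N) lets us shortcut
T = 1000                   # number of sequential squarings (the "delay")
x = 5                      # puzzle input, coprime to N

# Slow path: T sequential squarings, no known shortcut without phi(N).
y_slow = x
for _ in range(T):
    y_slow = (y_slow * y_slow) % N

# Trapdoor path: reduce the exponent 2^T modulo phi(N) first (valid by Euler's
# theorem since gcd(x, N) = 1), then one fast modular exponentiation.
y_fast = pow(x, pow(2, T, phi), N)
```

The "delay" is the forced sequentiality of the slow path, which is what makes VDF output useful as unpredictable, unbiasable timing in proof-of-space protocols.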
I know the whole community will be really excited about some of the new ideas bubbling out of these conversations around faster PoReps, especially ones that also have faster retrieval and enable lighter-weight, lower-cost sealing to boot. So super, super awesome. Thank you all for pushing on this, and looking forward to seeing you at proof-of-space days.