This is today's schedule. We've done our welcome and now we're about to jump into EngRes impact in 2022. There are four sections, mapping to our four strategy items for 2022: we're going to start with a set of presenters on talent funnel, then storage and retrieval, then programmability and compute, and then critical network operations. We are now going to dive into our EngRes Working Group 2022 impact presentations. Talent funnel is first, so if you're in the talent funnel section, check your slides, keep track of where you are, and maybe come up towards the front. But first, a quick overview of the PL EngRes Working Group. Welcome everyone to our 2022 impact review for the EngRes Working Group. We're going to give a number of lightning impact updates from people across our team. But to start out: what is the PL EngRes Working Group? We are one of the many groups in the Protocol Labs Network, working to drive breakthroughs in computing technology to push humanity forward. We work on this because we think the Internet is one of humanity's superpowers, and that future breakthroughs in computing that extend and empower the Internet need to happen on a really robust, resilient, and empowering foundation for people's information. A lot of the work we do around IPFS, Filecoin, and libp2p is about enabling the digital rights that we think should be embedded directly into the technologies we use and rely upon every single day. These are a few of the projects we work on: obviously IPFS is a place we spend a ton of time, along with libp2p and Filecoin, but we also spend a lot of time on things like IPLD, Testground, and drand, and on supporting the immense network of amazing projects and new research breakthroughs happening across the PL network. Here's a quick view into some top 2022 impact metrics.
libp2p now supports over 300,000 IPFS nodes, five Ethereum consensus clients powering 450,000-plus validator nodes, 4,600 Filecoin storage providers, and 112,000 Polkadot parachain nodes. We've also seen the IPFS network grow immensely, at various times numbering 300,000 to 700,000 nodes, which is crazy, and seen content lookup times drop to around 400 milliseconds, which is pretty amazing. The IPFS gateway has really increased its user adoption too: over 8x in the past year, from roughly 1.5 million users at the beginning of the year to over 12 million today. Just an amazing increase in the number of users and in the number of requests being made every single week. Networks like Filecoin have grown immensely as well: 37% growth in 2022, with North America growing almost 90%, which is fantastic. That's the overall capacity being made available on the Filecoin network. But what's really seen a breakthrough is scaling data onboarding, that is, useful data being stored on Filecoin, the vast majority through the Filecoin Plus program. We now have over 260 pebibytes of live data and 10 million active deals, which is so cool. That is really thanks to the hard work of people here improving the core technologies people are using, and the programs and systems that help them get things like Filecoin Plus datacap. We still have 16 exbibytes of capacity left to fill with useful data, so let's get a move on. I'm sure that's only going to take us a couple more years, but it's really amazing progress and amazing acceleration. You can see where we were in January: the vast majority of the total data stored on Filecoin has landed just since then, which is really cool.
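As a rough back-of-the-envelope on those capacity numbers (my arithmetic, not from the slides; binary units throughout):

```python
# Back-of-the-envelope: how full is Filecoin, given roughly 260 PiB of live
# data and roughly 16 EiB of remaining capacity? Figures are from the talk.

live_data_pib = 260
remaining_capacity_pib = 16 * 1024  # 16 EiB expressed in PiB

total_pib = live_data_pib + remaining_capacity_pib
utilization = live_data_pib / total_pib

print(f"{utilization:.1%} of available capacity holds live data")  # 1.6%
```

Which underlines the point: even 260 PiB of live data barely dents the capacity still waiting to be filled.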
From a network perspective, the work we're doing has also permeated into many other teams and startups, which are building on these core foundations, extending them, and bringing them to users via products and businesses built on things like Filecoin, IPFS, and similar tools. There are now over 500 companies, projects, and dev grants in this ecosystem, which is pretty amazing. Our EngRes Working Group mission is to scale and unlock new opportunities for IPFS, Filecoin, libp2p, and the related PL stack of protocols, including some of the ones I mentioned before. We do this by onboarding amazing humans, all of the people in this room, but also by supporting the amazing open source contributors and groups across the PL network that are trying to upgrade these technologies, and working with them really seamlessly. We also do a lot of core work within the PL network on driving breakthroughs in protocol utility and capability: adding awesome new things like the Filecoin Virtual Machine, retrieval markets, or InterPlanetary Consensus to these networks to help make them better for everyone. And we support these groups by doing network-native research and development: doing our work in the open, publishing our research so that many others can build upon it, and acting in a really collaborative and open way with the whole network of teams, startups, and projects growing in these ecosystems. This was our strategy for 2022, and we're still working on it; it's still only October. We had four critical sections. First was talent funnel: operating in a network-native way where we're really embedding our knowledge and capabilities into every team across the network, externalizing that research knowledge, and growing our team. We can definitely check that one off.
We've definitely grown a ton this past year, and we've also helped support many other teams in their growth over that period: helping teams like Outercore, and helping new teams get started across the PL network, like Iroh and others. That's been a great way to spread that talent funnel goodness. A lot of teams have done a ton of work on developer experience; getting things like Testground up and running on every single libp2p PR is a core way we've been improving that. And finally, externalizing the directions we're heading, and making available lots of RFPs, dev grants, and other ways of aligning ourselves with other groups across the network, shooting for a unified goal we can all work on together. Those are the four main areas of our talent funnel contribution in EngRes. We've also worked on two critical sections of growing capability within the whole IPFS and Filecoin ecosystems. The first is robust storage and retrieval: making sure that a decentralized storage network is actually accessible and robust, and that you can build amazing, immersive, useful applications on top of it. This has been a core focus for this past year, both with data onboarding and with making sure you can actually retrieve and access that data smoothly: you know where it's located, you have reliable data transfer tools, and when you onboard lots of data you can actually make use of it within Filecoin and within IPFS, with a smooth connection between all of the different IPFS nodes operating in the network. This has been, and continues to be, a core focus. We've also done a ton of work on breakthroughs in programmability, scalability, and compute: taking early research ideas and pushing them through early testnets and early releases over the course of the year.
Things like the FVM going from M0 in January to M0.5 in, I think it was March, then M1 in July, and now leading up towards M2. So we're bringing those new breakthroughs all the way from research idea through to real implementation and adoption. That's been a core focus for us, and it's core to what we do, right? We upgrade the networks we work on with new superpowers that many other people can then build upon, creating their own new capabilities to add to these networks as well. And then finally, critical network operations. Obviously a lot goes into keeping these networks robust and scaling to all of their new usage; 12 million weekly gateway users, I think that's what it was, doesn't just happen. There's a lot of work that goes into making that a reality. Keeping our systems running, and not just running but scaling, growing, releasing new versions and improvements, and paying back technical debt from previous years, is a core part of being able to actually achieve any of our goals. Starting from the bottom, that's our foundation for any of the other work we want to do. Okay, so now we're going to jump into each of these areas and highlight the work a little more deeply. These, by the way, were the top-level objectives we set for ourselves, mapping to those areas: scaling the developers on the PL stack, robust accessible storage and retrieval of data, launching network breakthroughs, and keeping critical team and network systems running. So we're going to jump into talent funnel. This is a slide we looked at at the very beginning of the year, for the folks that were with us then, highlighting some of the areas we took initiatives around, like moving our communications to other channels. This is just an overview, though.
I'd really love to invite the people in this area up so they can talk more about their specific parts. I guess I'm first: working in public. Great. That was the first area we took on, making sure we're doing our development openly so that we externalize all of our knowledge and can collaborate more effectively with other teams. Amazing work from Steve and others on moving our EngRes communication channels out of the Protocol Labs Slack and into more open forums like the Filecoin Slack and the IPFS Discord. So, check: we are now working more openly in those areas. We've published over 30 Mother of All Demo Days demos, which is awesome. Those videos are all online and open on YouTube, and anyone can go check them out; there are some really cool ones recently. We have over 4,000 EngRes all-hands views, from people using those all-hands to get a deeper sense of the areas we're working on, and as a conduit for knowledge on how different systems are being architected. And when we're hiring amazing new people, many of them say, wow, I saw the work you were doing here and I wanted to be a part of making that happen, or I wanted to collaborate on this area. So it's also an amazing way to interconnect us with the network and bring awesome new talent into our ecosystems. Finally, we do weekly sitreps from each team, situation reports on what's happening and what their challenges, risks, and opportunities are, and we've published over 500 of those in the past year. Freaking awesome. On to David. Thank you. Hi, everyone, good morning. When it comes to externalizing R&D knowledge, we basically focus on three pillars, the first being events and collabs.
The labs from EngRes have been the most consistent at putting out events and bringing the whole community together to learn about our development, our progress, and also our open problems. We've had events pretty much every month this year; some were our own, others were in partnership with many other conferences that were happening. And thanks to that, a lot of really important projects in the space came to these events and started collaborations with us. The second pillar is research results. If we need to externalize R&D knowledge, we have to have something to externalize, and our researchers have been very prolific for yet another year, publishing many results in top-tier conferences all around the world, and serving on program committees and as reviewers for many other conferences. We have launched multiple RFPs, given out multiple grants, and even had some breakthroughs, as we learned yesterday during Nicolas's presentation, on things we thought to be impossible last year. It's very remarkable. The third pillar is presence and stewardship. It's very important for us to be able to shape the direction in which research is happening in the wider space. That has been done by creating gravitational centers, starting with the websites for each of the labs, which describe all of the projects that are happening and give everyone an update not only on the projects but on their status, the open problems that exist, and the roadmaps, so that other teams in the ecosystem can really plan accordingly. These are a few of the many things that happened; we have a full, complete snapshot. Wow, this is bright.
That snapshot has been shared, and I hope you can all take a look at it at some point, leave some comments, and ask some questions. All of the folks are here, so they're available to answer your questions and discuss more. Thank you. Good stuff. Some other network-native development efforts: we're working first and foremost with a public network of teams, rather than a centralized hierarchy of internal teams that we organize, by getting out more knowledge and incentive structures to grow the overall network. Some of the fruit we saw here: multiple new IPFS implementations, including Elastic IPFS and Iroh, came out, and we have a new libp2p implementation in swift-libp2p. This is a multi-org effort around all the grants and RFPs, certainly with Outercore and the Filecoin Foundation, but a lot of the people in this group are involved in either pitching some of these grants or reviewing them and validating that we actually got the results we wanted. When we say open grants here, these are more openly defined, but they were fully funded and executed grants. Across the Filecoin Foundation with Filecoin Dev Grants, IPFS Dev Grants, and with Teffra Labs and Radius, we had 15 completed grants across IPFS, libp2p, and Filecoin, and multiple still in progress. So a lot of activity here in terms of incentive structures. And then, in terms of giving direction for where we're going, there's been effort all across Filecoin, IPFS, and libp2p around specifications. These are the stats pulled from 2022 for how many of these specs were updated or improvement proposals landed. As you can see, these are all done through GitHub pull requests, usually around 50-plus for each of these projects. And yes, some of these are small, but some are mammoth efforts in terms of community involvement; some have hundreds of comments to actually get them landed.
And you can see it's not just a select few making these happen; it's dozens of people across multiple organizations. And when these land, they very much impact the work that others are doing. So great work to all involved. Thanks. Yeah, great. The staff recruiting team really found their form this year, and it's obvious from the number of new faces in this room that we're in an entirely new world of growing our team and our capabilities compared with the past. Across 2022 so far, three quarters of the way in, there have been more than 1,400 interviews performed within engineering and research, and we've taught, trained, and mentored 120 Launchpad participants to help spread that knowledge faster and better. My favorite part of the impact here is that it's not just a matter of effort: we're improving the systems, the processes, the automation, and the reliability with which we can do this. The recruiting team, in collaboration with the interviewers and hiring managers in EngRes, has delivered new interview training and new interviewing rosters. We have metrics, we have training metrics, we understand the system, and we know how to improve it. So I'm looking forward to even greater things next year as we continue to grow from here. Hi, I'm Peter from the IPDX team. My main focus is improving developer experience within the IP Stewards team through tooling and automation, but we try to think beyond that group, and I hope you were able to see our presence all around PL throughout the year. Actually, this time last year the team didn't even exist yet, so one of the main things we did throughout the year was bootstrap the team and think a lot about team identity and long-term vision. But we also brought improvements throughout the network. One of the first projects we focused on was Unified CI.
We joined Martin and helped with maintenance of the project. Throughout the year we made two major releases, which coincided with major Go version releases, and we merged over 250 PRs updating Unified CI in various repositories, a majority of them merged automatically. But we don't focus only on Go anymore: with Alex, we implemented support for JavaScript CI in Unified CI, and by now that reaches almost 80 repositories across our organizations. The other project we focused on was GitHub Management, which is a way to manage GitHub configuration through code. You might have seen that around in various places already; we've had almost 400 PRs across 10 repositories, and just last week we bootstrapped GitHub Management in two new organizations. So it's clearly getting traction, and that's really exciting. Testground is definitely one of the pillars of our work. We revived the project over the last year and focused heavily on stability, which enabled us to actually start running Testground tests for libp2p in Rust and Go on every PR. And what else? Oh yeah, not bad, at least. We also tried to improve security in various places. One example: we bootstrapped the effort to rotate npm tokens throughout the repositories in all of our orgs, so that we use shared npm tokens not tied to any single developer, which makes the whole setup more secure and scalable. Finally, thank you all for your participation, because without you none of this would be possible, and we hope to see you next year as well. Amazing. So again, we made a lot of improvements across the whole talent funnel area within EngRes, hitting that top strategy goal. Our next focus was robust storage and retrieval. Anyone who's going to talk about storage and retrieval, please come hang out up here. Storage and retrieval is really at the core of what we do, right?
You need to be able to store data in decentralized networks, and you need to be able to retrieve it from many retrieval clients: browsers, IPFS nodes, IPFS Desktop, and all sorts of other amazing Web3-powered tools. This is really core to the mission of Filecoin: if we're going to create a decentralized, efficient, and robust foundation for humanity's information, you've got to have robust storage and retrieval. It's core for IPFS as well. You want data to be accessible to everyone, and you also want it to persist long-term, working seamlessly with long-term verifiable storage networks like Filecoin to make sure data sticks around and is accessible to you long into the future, when you're not necessarily running the original IPFS node you were running at that moment in time. So we had a big focus on enabling seamless data onboarding onto Filecoin and on making retrievals reliable, with immense progress there. I'm sure Jacob will give us a peek into what that looked like at the beginning of the year versus today, and we've been helping support all of the adoption of storage and retrieval across IPFS and Filecoin. Alright folks, so that big happy graph we keep showing, with the curve going up and to the right, is actually the result of many, many weeks of work from the data programs, client growth, SP growth, and many other teams. At some point we felt we had to break physics first in order to get to the scale of onboarding that we have today. And thanks to so many projects, Slingshot Evergreen, Slingshot v3, Filecoin Plus accelerating how it distributes datacap to clients, Data Conservation House, and Big Data Exchange, we managed to grow this year from onboarding 200 tebibytes of data a day to a consistent 2,500 tebibytes, that's 2.5 pebibytes, on a 30-day moving average. This is really impressive; we didn't think it was possible.
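To put those onboarding rates in perspective, here's a rough sketch of the arithmetic (mine, not from the slides) for how long filling the roughly 16 exbibytes of remaining capacity mentioned earlier would take at different daily rates:

```python
# How long would it take to fill ~16 EiB of remaining capacity at a given
# onboarding rate? Rates in PiB/day: ~2.5 PiB/day is today's 30-day moving
# average from the talk; 5.0 PiB/day is a hypothetical doubling.

REMAINING_PIB = 16 * 1024  # 16 EiB expressed in PiB

def years_to_fill(rate_pib_per_day: float) -> float:
    """Years needed to onboard REMAINING_PIB at a constant daily rate."""
    return REMAINING_PIB / rate_pib_per_day / 365

for rate in (2.5, 5.0):
    print(f"{rate:>4} PiB/day -> {years_to_fill(rate):.1f} years")
```

So even at these record rates, "a couple more years" is optimistic unless onboarding keeps accelerating, which is exactly the point of the projects listed above.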
And this is just the beginning of many, many things. We have an ambitious goal of actually getting to five pebibytes a day by the end of the year, and things are looking good, with many projects coming in: Cybertruck breaking the boundary of internet bandwidth by taking data directly to the SPs; CoD (Compute over Data) and Bacalhau enabling a market for dataset derivatives, so new datasets can emerge from the SPs directly; and many more initiatives like Moon Landing, Enterprise Fuel, and so on. Very exciting, thank you. Great. Following up on what Derby just said, I'm talking about Big Data Exchange. We observed that there's a lot of block reward subsidy being given out on the network, and we asked how we can harness that subsidy to incentivize more participants and more clients joining the network. Big Data Exchange uses an auction mechanism to auction off datasets from clients to miners, or the other way around, from miners to clients, depending on the market dynamics. So far the platform has transacted 10% of all the Filecoin Plus deals on the network, and it's growing really rapidly. It's also moving towards being a protocol-style platform where you can build different kinds of front-ends. I think the slide is not updated: there's another auction house, called Data Conservation House, that can use the same global liquidity pool to transact data on the Filecoin network. And at CryptoEconLab we always monitor the network to detect evolving opportunities like this, so if you have new ideas and new possibilities, please come talk to us. Thank you. Hello. I'm here representing the DAG House team, formerly known as Nitro.
The DAG House team has been really hard at work improving the web3.storage platform, growing it, and making content-addressed data available over IPFS and stored in Filecoin deals, the number one storage medium for Web3 data. At this point we're at 170 million-plus uploads; about a year ago that was 15 to 20 million, so it's a huge amount of growth. We have over 85,000 users, including a lot of the largest NFT marketplaces and minting websites, as well as a lot of other cool Web3 projects. Altogether, I think this clocks in at around 800 tebibytes of data, so compared to the 2.5 PiB being onboarded a day, not a lot, but coming from the other side, that's still a lot of NFTs being stored on Filecoin. We also launched our own HTTP gateway back in March, called w3link. It's grown at an over 25% week-over-week clip since then, now serving over a billion requests. And I also wanted to give a shout-out to the creation and deployment of a new IPFS implementation called Elastic IPFS, which drastically improved our service's reliability and scalability. If you haven't checked that out before, it's on GitHub; please check it out. Thanks, everyone. Hello. I'm the too-many-slides guy from Bedrock. If we go back to January, there was no way to find content on Filecoin; you had to know where it was. And if you knew where it was, there was less than a 30% chance of actually getting the data without manual intervention, and our visibility into this was really low. Market tooling was also really limited for SPs, and a lot of the time it again took manual intervention and offline deals to get around it. So where are we at today? Right now, over 27% of Filecoin SPs are announcing their content to indexers, which means data is now discoverable, with an estimated 25% of all Filecoin deals currently being found via the indexers. The cid.contact indexer, which is Bedrock's indexer, is one of six partner indexers, and it indexes over five billion CIDs every week.
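A quick aside on that w3link growth figure: 25% week over week compounds very fast. This is a sketch of the compounding arithmetic, not actual traffic data:

```python
# What sustained week-over-week growth compounds to. Illustrative arithmetic
# only; the 25% rate comes from the talk, the horizon is a round number.

def growth_factor(weekly_rate: float, weeks: int) -> float:
    """Multiplicative growth after `weeks` of compounding at `weekly_rate`."""
    return (1 + weekly_rate) ** weeks

# Roughly half a year of 25% week-over-week growth:
print(f"{growth_factor(0.25, 26):.0f}x")  # about 330x over 26 weeks
```

Sustaining that rate for long is obviously impossible, which is why week-over-week figures this high usually describe a launch period rather than a steady state.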
We're also expanding indexing to IPFS: thanks to support for the Reframe protocol, we already have the IPFS Collab Cluster fully indexed. We also operationalized autoretrieve, which is issuing about half a million requests every week, and we've more than doubled success rates, from 30% to over 65%, in the last three months alone. With this, we also drastically increased visibility. Here you can see Bitswap requests all the way through Graphsync, and we get a full breakdown, which you probably can't see because I'm in the way, of all the errors being listed. So we know who's blocking content, who's rate-limiting content, and where our software needs to be improved. We also launched Boost in June, which has been adopted by over 100 SPs, representing over 440 PiB of quality-adjusted power on the network. And recently, Slingshot v3 has elected to mandate Boost, which is pretty sweet. Boost expanded additional tooling for storage providers: piece retrieval over HTTP, remote copy for storage, improved visibility and logs, and more coming. And if you come to IPFS Camp, you'll be able to see Kubo retrieving directly from Filecoin. Thanks. Hey, hello. This is for the IPFS gateway. If you've never heard about us, that's a great thing: it means we do the work and everything happens silently. These are numbers we've put together to show how important this infrastructure is: time to first delivery went from 70 seconds to 5 seconds, unique users from 2.2 million to 12.8 million, and requests from 0.5 trillion to 1.8 trillion. All of this happened behind the scenes; you probably didn't see it, but you can see the numbers. The team is doing a great job, and hopefully we keep working on this and make it even better. You'll hear more about our roadmap tomorrow. The other data we want to share with the team is how other people use the IPFS gateway; we have a breakdown of the top hundred referrers.
You can see our users come from everywhere, with different purposes. This is kind of proof to us that the gateway work we're doing is really useful for the network and really useful in the real world. Thank you. Hey, I'm Patrick. The Retrieval Markets Lab has been hard at work in 2022 on a couple of projects in particular. The first is Saturn, a decentralized CDN for data stored on Filecoin. Saturn is currently serving, in our testing, 160 million requests a day, with 66 terabytes of bandwidth being shifted as well. Its time-to-first-byte performance is almost twice as fast as the IPFS gateway, which is really exciting, and we're now exploring potential integrations with the IPFS gateway team. Over the last month, we've launched the Saturn Sunrise program, which is 14 external teams running L1 nodes in the Saturn network and working with the Saturn team. We've also got Saturn L2 nodes running in the Saturn testnet on the Station desktop app. That is our second project as part of the Retrieval Markets Lab: Filecoin Station. Filecoin Station is a desktop app for the Filecoin network. It's currently available to download from GitHub, and it immediately sets you up running a Saturn L2 node. It's really about allowing anyone to join the Filecoin economy, not just people who know how to set up a storage provider. We're also exploring other modules to run in Station, including with the Compute Over Data Lab. Station grew out of just being a tool to run a Saturn L2 node, and then it was like, hey, we could actually do loads of stuff with this. It would be a deployment target for anyone looking to deploy any sort of module to a peer-to-peer network of devices running in home networks. All of this will be talked about by members of the Retrieval Markets Lab and others across the Retrieval Markets Working Group on October 27, this Thursday. So please come join the Retrieval Markets Summit. Thank you. Hi, Luka from CryptoNet.
At CryptoNet we've been working on retrievability, and what we got out of it is Retriev.org. Retriev.org is now launched on both an Ethereum testnet and Polygon, and it's a way to get retrievability assurance on your data. Anyone can be a retrievability provider. Clients can make an ad hoc retrievability deal on an IPFS CID, and if the file cannot be retrieved by the client, you can go ask a referee network to either provide the file or slash the provider. We're going to have a presentation at the Retrieval Markets Summit, so feel free to check out Retriev.org and come to that presentation. Thank you. Amazing. Everyone who is going to speak about breakthroughs, please come up to the front; I know there are a lot of you. This is a super exciting piece of the work we do across IPFS and Filecoin: helping steward, grow, and then actually develop, deploy, and productionize new breakthroughs into the networks we work on. This is bringing things like the FVM, new L2 capabilities, consensus improvements, computation, and other things to these networks and the data being stored in IPFS and Filecoin. There's a ton of work here, and we'll let the teams take it away. All right, I'm Raul, and I'm going to cover the FVM very quickly. As you probably all heard yesterday, the FVM project delivers on-chain programmability to the Filecoin network. Looking back at this year, we shipped three major milestones. Milestone 0 and Milestone 0.5 were developer milestones. In Milestone 0 we got the FVM syncing with mainnet for the first time, running the actors code that was powering mainnet at the time. With 0.5 we made running that node possible for anybody in the community, so anybody could stand up a canary node against mainnet. And with M1 we actually shipped the FVM to the live network; we installed the FVM technology, and that happened in July 2022.
Now, behind the scenes this was a massive change. It doesn't look like a big change, because the network continued operating normally, and kudos to all the teams that made that happen, but really this was massive. We were literally changing the engine of a plane while it's flying. We changed over 100,000 lines of code: an entirely new execution path in Filecoin clients, an entirely new actors codebase, new bundling code, a reworked gas model, many repos affected, and three client teams, Lotus, Forest, and Venus, collaborating in the effort. From then onwards we moved on to M2.1, which is the milestone we're working on right now. This milestone ships the EVM runtime as the first runtime on the FVM, where users will be able to deploy smart contracts for the first time. The Wallaby testnet came online just a few days ago, and we managed to conduct our first MetaMask transaction on Wallaby, which is a huge milestone for us. After this we'll move on to enabling the deployment of Wasm actors on the FVM, and from there to further protocol improvements around programmability. Now, sorry about the brightness here again, but I wanted to give you a bit of a view of what happened behind the scenes. This was a massive engineering effort. M2.1 was very unclear at the beginning; it changed scope several times, and we conducted an engineering breakdown and moved from there to a high-level plan. Once we got the scope tightened and locked down, we went on to catalog all the specs we needed to write and all the technical designs we needed to work through.
That led to an incremental delivery plan, and we kicked off Wallaby as a new testnet so we could deliver all of the work happening within M2.1 incrementally — literally every week we're shipping releases, or we try to. We also have a big, growing developer community onboarding onto the FVM, so we launched the FVM forums. With that developer community coming into the FVM, we've run two Foundry programs by now; F1 is running right now. These are early builders deploying use cases on the FVM, and these are some of the teams participating. If you know anybody who is very curious about working with Filecoin and deploying new use cases around programmable storage and so on, let them know about the Foundry and tell them to apply. Thank you. Hi, I'm Alex from CryptoNet. We deliver core protocol improvements, with a big focus on Filecoin this year — in particular on setting up the application development platform, so that when the FVM does enable user-programmable actors and smart contracts, we can build some amazing applications. The capabilities we've been working on and unlocked this year: adding a beneficiary address to the miner actor will allow on-chain collateral lending markets. The updated quality-adjusted power calculation is going to let us update the data in sectors, so that the storage is reusable. We've decoupled the Filecoin Plus mechanics from the built-in storage market, which lays the groundwork for user-programmed storage markets in the future. And a standard authentication method is going to allow smart-contract wallets, actors and other on-chain entities to make deals with the built-in market and future markets.
We've designed a new proof expiration mechanism, which both allows us to increase up-front sector commitments in the future and gives us built-in network recovery mechanisms in case anything ever goes wrong with the proof cryptography, without needing the FVM or other on-chain transactions. We've also designed token standards — and are in the middle of designing NFT standards — to help with interoperability and composability of both native and EVM actors when the FVM opens up for them later on, along with a bunch of other work. And like we've seen before, all these improvements in Filecoin allow a whole new generation of storage protocols to be built on top of Filecoin, and at CryptoNet we're trying to explore the space in front of us. We've seen Retriev.org; we also launched a web3 storage bounty app that lets you make a single MetaMask transaction to have your file stored on Filecoin. So what are the products, and what are the primitives for building those products? We've talked about the storage bounty and retrieval pinning, but there's way more. There are perpetual storage and repair protocols — this is the field of auto-renewal contracts, which we're going to be looking into next. But there is a lot more. Whenever you want to automatically recreate a deal, you want to make sure you pick a good miner, so we need better metrics and better tools for making that happen. And then there's proving data in the clear. There is the Filecoin oracle, which could be exported to other chains so that those chains can see whether data is stored or not. These primitives allow Filecoin storage to be not just the storage market we use today, but way more. I'm just going to give a quick overview.
Unchained Storage is a website we launched for LabWeek, and we're looking into it for making storage more programmable in Filecoin and on other chains. The goal is to offer a dashboard that lets you see, for example: this is my wallet, this is the data I'm storing, these are the different storage products I'm using. Then you can go to a single CID and see who's storing it, who's providing retrieval, who's paying for insurance. And beyond looking at a single CID, eventually you're going to be able to just add a deal on a CID yourself, so you can participate in crowd-storing a file. Hello from the Filecoin crypto team. One big thing we shipped this year was SnapDeals, which was a new proof and included a trusted setup — a lot of work that takes months to do, with shipping data all over the world. We also made things like GPU acceleration more widely available: we pulled tightly integrated code out into a separate library, so other people working in the cryptographic space can use GPU acceleration and easily integrate it into their own things. And of course there are the impactful things that shouldn't have any visible impact, which is what you generally want in a cryptographic protocol: we reviewed our code and found places where there might have been problems. The short version is: if you hash things, hash everything you have, not only parts of it. And then there's Halo2, which you might have heard floating around — we're already at the point where the current proofs have been ported to Halo2. So that's all. Now I will talk about vector commitments. This year we are looking at replacing the old machinery in Filecoin with something new and fancy: vector commitments.
This is an algebraic cryptographic primitive that allows for shorter proofs, faster verification, and functional openings — not only membership proofs for one element, but for many values at once. The main problem in making this practical was finding the right space-time trade-offs. We allow the storage provider to trade some storage for faster computation when opening their values, and this is Muppets, a new scheme that lets you plug any algebraic vector commitment into a tree structure, so we avoid the usual problems there. It uses the same trusted setup as Groth16 — the powers of tau ceremonies — and it's ready to implement right now. We have everything we need described in a paper, so everything is out; we just need people motivated to look into it further for Filecoin. We also have a construction that makes the vector commitment compatible with the SNARKs that prove replication. If you're interested in finding out more, the papers are out — they've been accepted at cryptography conferences — and we have talks about these primitives from our Vector Commitment Day event and at the ZK study club. Currently Filecoin uses Groth16 for the proofs of space, and Groth16 is great because it produces the shortest proofs in the literature and also does very fast — constant-time, actually — verification. However, it has a very slow prover with very expensive arithmetic operations, and if we change anything about the Filecoin proofs, we need to re-run the trusted setup. On the other side of the spectrum, we have SNARKs with fast provers, and the fastest in the literature is Spartan. These have transparent setups, which means everyone can do the pre-processing — there's no secret powers of tau. In order to get the best of both worlds, CryptoNet is trying to combine Spartan and Groth16.
The goal is a universal trusted setup, meaning we run it once, and whatever we change in the Filecoin proofs we don't have to re-run it. We expect the prover, when this is ready, to be between 4 and 20 times faster, with proofs as short as Groth16's and the same constant verification time. And you can see in this graph, from what we've run so far, that Testudo performs much better than the Groth16 prover. Thanks. Hi everyone, I'm talking about Medusa today. This is another thing we're building at CryptoNet. Medusa is an open access-control mechanism: apps can define access control on-chain, using a smart contract that talks to Medusa, and access to the content is enforced by a threshold network. For those of us who know drand, it's very similar, except that when you request content, the Medusa network re-encrypts it towards your key specifically, so you're the only one who can decrypt it. Everything is backed by cryptographic proofs, as you can see in the nice diagram here. Anybody can use it, no worries. It's already live on a testnet; it's not live on the FVM yet, but that's going to come soon, this year we hope. You can visit the website for more information. On the roadmap we also include global public decryption — for those who've heard about drand, it's going to be very similar. And that's about it.
I won't bring everyone up, because we didn't think we could fit all 15 people we've grown to on a slide — but that growth was actually the most important thing for ConsensusLab: we grew from four people at the beginning of the year to 15, in fact within a month, and that was amazing. And we didn't only spend money, we actually did some nice work. We're focusing on two kinds of things: on the research side — publications, conferences and so on, lots of PC committees, we're really paying attention to that — and then actually implementing our ideas to help the Filecoin ecosystem. There we have the two main ideas you've heard by now. This is Interplanetary Consensus (IPC), previously called hierarchical consensus, which will basically allow us to spawn subnets from the Filecoin mainnet, and then we will have a total order in each subnet, as opposed to Expected Consensus on the Filecoin mainnet. So we started Q1 with IPC: we delivered the IPC design and the first MVP, which was in Eudico — Eudico is a fork of Lotus that we use for experimentation — but our two main actors were in Go at the time, so we've slowly been transitioning to user-defined FVM actors, and these now govern IPC. At the beginning of the year, in Q1, a team was also formed for Mir consensus development, and now we have the Mir MVP — we are currently here. We're launching the Spacenet testnet in Q4: this will be the first standalone testnet running Mir consensus with the FVM, so it's FVM-compatible, but the consensus is not Expected Consensus. Then in Q1 we're going to start spawning subnets from that testnet, and we hope to go to mainnet in Q3 next year. So that's one big thread. The other big thread is the Expected Consensus security analysis we conducted, for which we're going to launch a public feedback discussion — I think and hope this very week, because we're ready for that — and we expect the resulting Expected Consensus improvements to land on mainnet in Q1 or Q2 next year.
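As a rough illustration of the hierarchical idea behind IPC — subnets spawned from the Filecoin mainnet, which can themselves spawn children — here is a toy model. The path-style naming and all type names are invented for this sketch; they mirror the design described above, not IPC's actual API.

```go
package main

import "fmt"

// Subnet is a toy model of a network in a hierarchy of subnets,
// as in Interplanetary Consensus. Names and shapes are illustrative.
type Subnet struct {
	Name     string
	Parent   *Subnet
	Children []*Subnet
}

// Spawn creates a child subnet under s, the way IPC subnets are
// spawned from a parent network (ultimately the Filecoin mainnet).
func (s *Subnet) Spawn(name string) *Subnet {
	child := &Subnet{Name: name, Parent: s}
	s.Children = append(s.Children, child)
	return child
}

// Path returns the subnet's position in the hierarchy, root first.
func (s *Subnet) Path() string {
	if s.Parent == nil {
		return "/" + s.Name
	}
	return s.Parent.Path() + "/" + s.Name
}

func main() {
	root := &Subnet{Name: "root"} // stands in for the Filecoin mainnet
	app := root.Spawn("eu").Spawn("app")
	fmt.Println(app.Path()) // prints /root/eu/app
}
```

The point of the hierarchy is that each child can run its own (totally ordered) consensus while anchoring back to its parent.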
Thank you very much. Hi — I also didn't realize I was supposed to have 12 people up here — but I'm Dave, and I lead Compute over Data. Compute over Data is really two things. The first is that we launched a brand-new working group: about 15 companies, about 75 people, working together across the open-source community. So anyone out there is welcome to participate in identifying ways to improve decentralized compute — and it can be Filecoin-specific, it can be just decentralized web, it can be Web2, we are totally good with that. In addition to working together, we're also working on a number of standards — invocation standards, authentication standards, pipelines and so on. We have bi-weekly meetings, it's an extremely active group, and we're pleased with how quickly it's growing. The other thing we're doing is that we launched Bacalhau. I was looking back at the timeline and thought, this can't possibly be correct, but it really is. On January 26 there was nobody working on this in the world — we had just started the repo. Eight weeks later we had our first Compute over Data summit, attended by 75 people, and a proof of concept live in the world. Eight weeks after that we were live: you could actually go out there in public, with no authentication whatsoever, and begin interacting with it. Eight weeks after that you could interact with Estuary, so you can read your data wherever it is on IPFS and perform your compute — and as of next week we will be declaring beta. So we are really, really excited, and we are going like mad. A lot of people ask what you can do with it. You can do Python, you can do R, you can do Go, you can do JavaScript — basically, if you can containerize it, you can run it, and we would love for you to prove us wrong. David Gaska is over there — he wrote a database running on it, which is the craziest thing in the world, and it works.
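To make "if you can containerize it, you can run it" concrete, here is a sketch of assembling a `bacalhau docker run` invocation that mounts an IPFS CID as the job's input. The exact flag names and mount convention here are assumptions from memory, not authoritative — check `bacalhau docker run --help` for the real CLI surface.

```go
package main

import (
	"fmt"
	"strings"
)

// DockerRunArgs assembles the argv for submitting a containerized
// job over data addressed by an IPFS CID. Flag names are assumed;
// treat this as an illustration of the job shape, not the real CLI.
func DockerRunArgs(image, inputCID string, cmd ...string) []string {
	args := []string{
		"bacalhau", "docker", "run",
		"-v", inputCID + ":/inputs", // hypothetical: expose the CID's data at /inputs
		image, "--",
	}
	return append(args, cmd...)
}

func main() {
	// "bafy...exampleCID" is a placeholder, not a real CID.
	argv := DockerRunArgs("python:3.10", "bafy...exampleCID", "python", "-c", "print('hi')")
	fmt.Println(strings.Join(argv, " "))
}
```

The job runs wherever the data lives; the container only sees its mounted inputs and writes results back to IPFS.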
You can use GPUs, you can do sharded jobs — it's up to you. Our goal is to hit V1 by Q2 of next year. That doesn't mean your data isn't already there and being written, but we want to declare API stability, and that obviously requires quite a bit of engineering, so our goal is very early in Q2 of next year. And today, I would argue, it's among the easiest ways to read from IPFS and write to it. If you want to see some absolutely abusive demos, please come to IPFS Camp or our Compute over Data summit, where you'll see me doing absolutely terrible things. And there are your instructions if you want to try it out yourself. Awesome — that's freaking cool. So thank you to all of our breakthrough presenters. Now for really the meat of what we're doing here: driving critical network operations — making sure we keep critical systems running, improving them all the time, ensuring they're secure, and constantly burning down the tech debt and operational debt that makes it hard to keep running these things as they scale massively. These new breakthroughs are launching into the networks, we have massive numbers of new users, and at the same time we also want to be improving the foundations all that work happens upon. This is how we broke down our objective at the beginning of the year: making sure we define, implement and maintain good ownership practices; release often; and keep availability and performance very, very top of mind. And if we don't do this right, nothing else matters — you don't have a system, you've lost your users, you've broken the foundation upon which they're doing their work. So this is really the foundation of everything else we've heard today. Hi, I'm Jennifer from the Lotus and actors dev team. I wish I could have like 20 more people join me up here — but the network is still alive.
There have been no incidents in the past year, so things are doing pretty well. While keeping things running, we've shipped a couple of Filecoin network upgrades, bringing research work into Filecoin production on mainnet. The first was nv15, or SnapDeals, which enables storage providers in the Filecoin network to store client data in the network's committed-capacity sectors. Before, it would take something like 46 hours to do this whole thing; now, if you already have a sector sitting there, as Nicholas shows here, it takes less than 15 minutes to onboard data into it and get a deal sector back. So that's pretty amazing, and you can see the snaps on the network so far — it's not a lot yet, but we are picking up, and as the network tries to onboard more data, we hope people will snap more often. The next one is the one that makes me the most nervous — but again, the network is still alive, with the FVM. This is where we started bringing the Rust pieces into the Filecoin network: we basically started transforming Filecoin into an FVM network, and we switched from the Go spec actors to the Rust built-in actors, which broke some souls, but also made people feel alive again dealing with the code. The Filecoin network is running the Rust built-in actors right now, we have content-addressed actor code CIDs, and we're actually charging gas for the real Wasm execution costs — and, as Raúl mentioned, the programmable FVM comes next, around February. Last but not least, Shark is about to be released, hopefully next month in November, and it's one of the biggest upgrades of the year. As you can see, there are many FIPs included. We're delivering a wave of protocol refinements so that when the FVM is there, people can deploy different things.
So we're doing a lot of the actor refactoring there. The core one is decoupling Filecoin Plus from the market: this is going to be the foundation for different user-programmed storage markets, and it helps incentivize storage providers to store Fil+ data with DataCap for longer, if they wish — like deal renewal in the network. So in this year alone, we're about to have shipped three network upgrades, with three versions of the built-in actors — which are like pre-compiled smart contracts in the Filecoin network — plus the Lotus mandatory releases; a total of 11 FIPs will have landed, and we also landed five protocol security bug fixes and improvements in the network. We don't only do network upgrades; we also have a reference implementation to maintain. So we have two tracks: the Lotus node and lotus-miner, and we also try to support the Filecoin community that uses Lotus. For the Lotus node, we're targeting improvements to the developer and node-operator experience, while lotus-miner is the implementation that about 98% of the network runs. We shipped eight feature releases, monthly, nonstop — and no, that's not what breaks the Filecoin network, I promise; the Filecoin network is still alive. For the Lotus node, we're shipping SplitStore — thanks to Sen, with the groundwork done by Viso — which will enable easier chain management for node operators as the Filecoin state keeps growing. We also have a Lotus node cluster using Raft consensus, which is going to increase node redundancy for storage providers, and we're launching a remote wallet manager that will help them manage their assets and accounts better. On lotus-miner, Magic has been nonstop shipping things to make mining and providing storage services scale — so you can win the rewards while proving the storage, and nothing breaks.
There's also a huge combined effort with CryptoNet that we have to integrate into lotus-miner: we're enabling Sealing-as-a-Service, which means not everyone with storage has to own tons of hardware and do a lot of DevOps to join the Filecoin network — if you have the storage, you can join Filecoin as a storage provider. We're doing a lot of lotus-miner refactoring and re-architecting just to make sure that lotus-miner, as software, can keep up with the network's growth, support more storage providers, and provide an enterprise level of service. We also started a new team, TSE — I call them the chosen experts, but it actually stands for technical support engineers. We have a lot of community members joining the network, and this year we closed almost a thousand issues. I know that's crazy, but it keeps us focused on what's important. We also merged 800 PRs — I don't know how we did that, but I'm pretty sure the number is right, and if not, whatever, it's a lot of work. Our TSE group has launched 30 weekly Lotus newsletters so that all this information gets communicated to our community, and we've been creating a lot of conceptual videos and tutorials to help people understand this very complicated protocol. We've hosted three AMAs, and we're hosting our very first Lotus Day on November 2nd — if you're still in town, please come and talk to the team. We also started our Twitter and YouTube accounts; please follow us, and if you have any questions, talk to the team. We're mostly here. I think that's that. Hi, this is Birdie from the Sentinel team. I wanted to go over some upgrades: we currently have around 20 active alerts leveraging Sentinel data, used to monitor not only Sentinel services but also anomalies happening across the network. So come to us if you want to monitor any real-time anomalies — please let us know.
And for Lily, we've had more than 10 releases to keep up with the ongoing upgrades and improvements. The other major change in Lily is implementing a distributed worker pool for higher-performance chain indexing, and we are currently migrating to CI/CD via GitOps. We also worked closely with one of our power users, Starboard, this year; we hope to have more collaborations, and we'll meet the lead from Starboard during LabWeek and have more discussion later this week. On the data side, we are moving toward a unified data warehouse in Google BigQuery. There are a couple of benefits: the first is faster time-to-value and less operational work for the team, and the second is the possibility of sharing data. We already have some Filecoin chain data in BigQuery, backfilled from our S3 data, and we're also migrating the Redshift data — for example the Fil+ data — into BigQuery. So, going forward, it will be possible to join these kinds of data easily. And last but not least, data pipelines managed with code. Thank you. Hi, everyone. I'm ZX, back again. I lead CryptoEconLab. Our vision is to be an end-to-end R&D lab spanning protocol, product and ecosystem, centered around incentives, markets, scale, product-market fit and mass adoption. Well, I also thought there was only one slide per team, but don't worry, we're hosting two teams. We're going to break our team up into three sub-teams — you'll hear more about them tomorrow — spanning protocol, layer-2 incentives, and ecosystem solutions and involvement. In the past year we've been working on incentive and mechanism design with many different teams: from Interplanetary Consensus subnet incentives, to economics for infinitely scalable blockchains, to programmable storage markets with the markets teams. You'll also hear us giving presentations over the next week, including Atlas and the various unique selling points of storing data on Filecoin.
We also monitor and raise network health issues, which involves many of us here — from risk assessment, monitoring, simulation, modeling and scenario analysis, to putting a ton of data into the system. So we're working with many of you, and there are also design recommendations, plus involvement with governance and FIP execution. Here's our website, and there's also CryptoEcon Day — we'll see you this week. Thank you. Now, the voice of Marcus from far away, where he is probably asleep: network infra scalability. We're working on tutorial and interactive API documentation backed by api.chain.love itself. We have a new Lotus gateway design that allows horizontal scaling and rate limiting. In 2022, api.chain.love was able to sustain over 200 queries per second, and we expect our new design to scale well beyond that. We've been working on a GitOps Web3 platform to accelerate application productionization, offering a low-friction, self-serve GitOps deployment and hosting platform to get apps running and failing faster. We're targeting general availability in Q1 2023. We've upgraded our full-archive-node data stores to 64 terabytes of capacity — we were previously limited to 16, and the current full-archive data store sits somewhere around 19 terabytes — so we should now have the capacity to store the full chain history for another two to three years. We plan to use our automated EBS volume snapshotting to publicly share our full-archive data stores as well, making it easy for others in the PLN to run their own full-archive nodes. We're also planning to scale the core infrastructure that powers the Filecoin network through decentralization: defining impact evaluators and service levels to incentivize other organizations to run bootstrap and seeder nodes, and to ensure quality and uptime standards are being met. Expect a public dashboard tracking the official Lotus bootstrapper nodes that are baked into the Lotus binary.
And finally, we will add some redundancy and decentralization to the artifacts and data we produce for Filecoin: we will store chain snapshots and archives on Filecoin, IPFS, on-premises and in the cloud, along with some of our full-archive data stores. That's it for now, thanks very much. Hello, everyone. I'm Yolan, from the drand team. I'm not even sure the drand team had slides last year at LabWeek, because it has grown tremendously — with two extra people, from two last year, so 100% growth. It's nice. We've been running a higher-frequency network on testnet, with a three-second-round testnet running for over six months now. We were able to launch timelock encryption based on drand back in August; that should come to mainnet in Q4 too, with the updated mainnet network we'll be launching, because it will use unchained randomness, which is also a new thing we introduced this year. Next, we've also increased the observability we have into the drand network, which is a big network of 24 nodes right now — so it's quite difficult to get good metrics from everyone and everything — and now you can see our dashboard on the right here. Behind the scenes, we've been doing a lot of code-base cleaning and bug fixing, and that's been great. We've released a brand-new TypeScript client that doesn't use Wasm this time, so it's way easier to use. And we've onboarded two new League of Entropy members in China: StorSwift and IPFS Force. StorSwift is pretty cool — they've also launched a new relay, and that's really interesting, because the idea, as it was put to me, is that you might run drand nodes inside your own data center, just like you'd have your own NTP server inside your own data center — and that's exactly what's happening with StorSwift, so that's super exciting.
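The beacons mentioned above tick at fixed periods — 3 seconds on the testnet, longer on mainnet — and the current round number follows deterministically from the network's genesis time and period. This is the standard drand round formula; the genesis value in the example below is made up for illustration.

```go
package main

import "fmt"

// CurrentRound computes which beacon round a drand network is on at
// time t (unix seconds), given the network's genesis time and period.
// Round 1 is emitted at genesis; round r at genesis + (r-1)*period.
func CurrentRound(t, genesis, period int64) int64 {
	if t < genesis {
		return 0 // network not started yet
	}
	return (t-genesis)/period + 1
}

func main() {
	genesis := int64(1_650_000_000) // illustrative, not a real network's genesis
	// With a 3-second period, rounds fire at +0s, +3s, +6s, +9s, ...
	fmt.Println(CurrentRound(genesis+9, genesis, 3)) // prints 4
}
```

Timelock encryption builds directly on this: you encrypt towards the round number whose beacon will only exist at the chosen future time.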
Next, we've been trying to grow the community around drand, especially since timelock encryption has much broader applications than public randomness alone. So we've been going to conferences and publishing blog posts, and that's an ongoing effort. And if you want to talk to us — good luck — we've got our own drand workspace in Slack, and you might be able to find us in the NetApps channels. Hey, I'm Gus from the IPFS stewards. We own Kubo, which is the IPFS implementation formerly known as go-ipfs. This year we spent a lot of time getting our release cadence much faster, so we've had six releases since last LabWeek, and we just recently switched over to a five-week release cadence, which is nice. We shipped some things that had been lying around for a while, like UnixFS directory sharding, which I think was four years in the making. Circuit Relay v2 was turned on, which came from libp2p and enables users to have better NAT traversal. We switched block stores from being keyed on CIDs to multihashes, which is a trivial change to make but very non-trivial to roll out. We added block and CAR response formats to the gateways, which is really awesome because it enables verifiable retrieval for clients — you don't have to trust the gateways when you get data from them. We shipped a lot of features from libp2p, including resource management, which was a very long-standing request from users, and WebTransport. And we shipped configurable delegated routing, so that content routing, peer routing and those kinds of routing can be configured and offloaded to out-of-process servers. We also rebooted the specs: the IPFS specs hadn't been well maintained for a couple of years, so we finished the HTTP gateway specs and introduced the IPIP process, which is similar to FIPs but a little more lightweight.
We renamed go-ipfs to Kubo, and lastly we moved all the websites from IPFS.io to IPFS.tech. We also re-architected the Hydra boosters to address some scaling bottlenecks, so they now handle over 100,000 requests per second at peak, and DHT query times — as the graph on the left shows — dropped dramatically when we did that, from 1.3 seconds to an average of 0.4 seconds to resolve a provider record. The number of nodes in the network has grown by up to 120%, to about 540,000. Hi, everyone. So, in JS land, we've had a very exciting year since last LabWeek. We have modernized our code base — more modern than before; you've always got to keep it on the bleeding edge in JavaScript. Definitely bleeding edge: blood everywhere, all the time. We've ported libp2p to TypeScript, which has been fantastic, so you now have types you can rely on, because they're actually generated from the code rather than being hand-written and then hopefully, maybe, matching up. Everything is now ESM-only, because there is no module loading system in JavaScript apart from ESM — CJS is an interesting userland heresy. We've had lots of releases, and we've spent a lot of time solidifying the base and making it a lot more resilient to attacks, particularly because js-libp2p now underpins Lodestar, the TypeScript implementation of Ethereum consensus, which is obviously a very adversarial environment for it to run in. So it's now a lot more stable and secure than it was before. We have collaborations with ChainSafe and Little Bear Labs — these are external contributors. Little Bear Labs are implementing the new WebRTC transport, which is very exciting: we're going to be able to dial Kubo nodes directly from the browser without needing to go via WebSockets and configure certificates and that kind of stuff. So, this is super exciting.
Definitely one of the takeaways from the gathering in Iceland was that the connectivity story from browsers to Kubo nodes could be improved, and this is part of that — so we're very excited about delivering it very soon. We've shipped some new features. We have a DHT implementation — definitely one of the longest-running things. "There's no DHT in JavaScript." There is! You can use it. It's amazing. You should totally use it. Anything that says otherwise is wrong. Please do. We have Yamux now, so we can finally retire mplex — this is also thanks to ChainSafe, who picked up the reins on that one, which is great. And Marko — where's Marko? — has implemented the WebTransport transport, which is another way of dialing. Slightly awkwardly named, but we don't make the standards. WebTransport is another way of dialing a Kubo node directly from the browser, which is fantastic. That's it — that's been the year in js-ipfs. Okay, cool. I'm Russell — SgtPooki — on the IPFS GUI and Tools team. We just got restarted again this year, so I'm going to skip the first part for a second. We've got two full-time engineers now and a UX designer, which is great. You can see in the graph on the left that we've significantly increased the amount of work we're doing — those are changes from a year ago, October 2021, to now. This year we implemented the pinning service compliance tool, which helps show whether pinning service providers meet the pinning spec, and we've used that tool to validate pinning service providers' functionality before bringing them into the WebUI and desktop app. So now we have some default pinning service providers added to the desktop and WebUI apps. We've also added a few features to desktop and WebUI, such as publishing files to IPNS and managing IPNS publishing keys.
There's also a pinning-files UI update, thanks to HAC, great work there. We've also got a process going now, from restarting our engines: establishing a cadence of getting designs done and then implementing them. We have published our roadmap, and we're really looking for feedback, for input from the community on what you want us to work on. There are a lot of different tools and things we could be working on, and a lot of tech debt we need to catch up on, but if there's something of higher priority you want us to work on, let us know. You can see our star history over here: desktop has been steadily climbing. I don't know if you can read these from the back, but there's desktop, companion, Web UI, public gateway checker, and then the pinning service compliance tool is the tiny one slightly higher; I think it's got about seven stars. Desktop is our highest-starred item, then companion and Web UI, and the public gateway checker has seen a spike this year. With gateway usage increasing eight times, we're seeing that reflected in the public gateway checker. That's it. Thanks.

I think we're going to have a humble year this year; we're down to half a person, still working out which half. Half a person and some contracting as well. I've captured here some of the core work, but it's not exhaustive: there's a lot of ecosystem work going on, and notably Dag House is doing some really innovative stuff with IPLD that is not captured here; it would be great to capture that alongside the stuff coming out of core. Worth highlighting: one of the most notable things we've worked on this year is bindnode, which Daniel, who left earlier this year, left us with: a brilliant piece of code in go-ipld-prime. It's our most mature schema-mapping layer. We are in the process of investing in it, and not so much in codegen, and we've done a lot of replacing of old code with bindnode.
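The core idea behind bindnode, binding generic data onto native Go structs via reflection instead of generating per-type code, can be illustrated with a stdlib-only toy. This is not bindnode's actual API (the real library lives in go-ipld-prime and handles schemas, kinds, and errors); the `Deal` shape is a made-up example:

```go
package main

import (
	"fmt"
	"reflect"
)

// bind copies matching keys from a generic map into the fields of the
// struct that out points to, using reflection. No per-type generated
// code is needed, which is the code-size win reflection buys you.
// Toy sketch only: mismatched or missing fields are silently skipped.
func bind(data map[string]interface{}, out interface{}) {
	v := reflect.ValueOf(out).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		if raw, ok := data[t.Field(i).Name]; ok {
			rv := reflect.ValueOf(raw)
			if rv.Type().AssignableTo(t.Field(i).Type) {
				v.Field(i).Set(rv)
			}
		}
	}
}

// Deal is a hypothetical node shape used only for this illustration.
type Deal struct {
	Provider string
	Size     int
}

func main() {
	node := map[string]interface{}{"Provider": "f01234", "Size": 32}
	var d Deal
	bind(node, &d)
	fmt.Printf("%s %d\n", d.Provider, d.Size) // f01234 32
}
```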
We've worked on robustness and productionisation, extended the feature set, and focused on reducing the friction of using it: less boilerplate, much more natural Go interaction. Masih put some numbers together earlier and saw a 10 to 15 times code reduction when you replace codegen with bindnode, across the three major projects he looked at. We also deployed it in the data transfer stack. That's led to simplification, new messaging and management possibilities, and cleaner upgrade paths, so there's some really interesting code going through that data transfer stack. In JavaScript, a few things: schemas have been updated and expanded to support the latest work, we've got brand new transformations, and we've got some experimental IPLD URL work going on, pulling together schemas and ADLs and doing some really interesting stuff with URLs and IPLD. It's a great experiment to look at if you want to see some of the future JavaScript stuff. On multiformats, we had a very long upgrade to multiformats version 10, and we've got a Link interface now, which is really big news, because pretty much everything in JavaScript that touches our stack pulls in this one library, multiformats, for its CID class. The Link interface gives you the option of coding against an interface, so you get compatibility and simpler dependencies. Going forward that will be interesting. And also lots of ESM migration work. Other items: IPLD Patch was introduced, with a Go implementation and a JS implementation, the latter also part of that IPLD URL experiment. Launchpad: I'm pretty proud of this one. When Launchpad kicked off, it kicked off with a complete IPLD curriculum, and I think it's been well received. And obviously we're keeping the lights on and being responsive to vulnerabilities: we've stayed on top of them and made sure things are stable and locked down. So there you go.

Hello, everyone. It's me, then.
I'm going to talk about ProbeLab. What we want to have at ProbeLab is data-driven protocol design. We focus on a few main lines of work. We work on small improvements, things that might seem very easy and small to fix, such as looking at the DHT routing tables and checking whether the information in them is up to date, so that your DHT requests go to the right place. We also look at provider records and some of the magic numbers in there. A provider record is like your little advert to the network when you're publishing content; if it is not in place, your content is not findable, no one can reach it, so it's pretty important. We had some really good results there, and we're even making some improvements which should land in the next Kubo release, or the one after that. We're taking up some big projects as well, such as privacy, which has been a long-standing commitment; we're building one of the solutions, which we hope is going to work well. And we're also doing measurements on hole punching: there is a solution shipped, and all credit goes to the libp2p team, but what we want to do is measure whether it works well, so expect some news in the coming months. We're building the tools to integrate all of these measurements in one place; right now they're in standalone GitHub repositories, and you're more than welcome to go and check them out. And finally, we're interacting a lot with the big research community: we've been at several top-tier conferences, we ran our own workshop back in July, and we've even had first-time contributors to the code base itself. So that's it.

Hi, I'm Hector. I work in NetOps. In NetOps we do a bunch of things, from Sentinel onwards. One of them is IPFS Cluster, which falls within the Bifrost team's responsibilities. The NFT.Storage and web3.storage people were very important this year.
And we had to support them, so they drove a huge amount of improvements into IPFS Cluster, which we use to orchestrate pins on IPFS. One of our biggest clusters is very close to hitting 100 million pins, and we have another 40 million in a separate cluster; I think there's one petabyte of data in one of them, and the other is getting close to that amount. That's what we drove during the year to support that effort. Going forward, we're also working on the IPFS operator. It's the first time we've actually invested in figuring out how to run IPFS on Kubernetes, along with Cluster, and how to make it work well so that nodes have connectivity. We presented a prototype in Iceland at the IPFS Thing, and we're going to present again now at IPFS Camp. And of course, all of this goes together with making content retrievable, which includes gateways; you already saw all those graphs going up. That involves managing a lot of infrastructure and scaling it up, particularly gateways: we added at least four more than we had before, so I think we have over a hundred now. Reducing the time to first byte was already mentioned, and increasing the number of requests we can handle goes along with a lot of improvements in the infrastructure behind it: which disk layouts to use, which technologies, what type of caching to do, how to configure nginx, how to handle failed requests and retries, and so on. All of this is happening behind the scenes so that, hopefully, you don't notice it so much when something doesn't quite work. Thank you.

Hello, I'm Marten. I work on libp2p. Our users had been asking us for the longest time to implement proper hole punching into libp2p; actually, they were surprised that we shipped a library called libp2p without hole punching. So we implemented it and rolled it out. It's live on the IPFS network, and there are more than a thousand relays.
Relay servers are used to do that hole punching, and we've collaborated with ProbeLab, who measured whether it actually works; we have the numbers and they look pretty good. So this is shipped, and it works. The other thing users had been asking us about for the longest time is connecting from the browser to the network, to the IPFS network, for example. The answer we gave them so far was: yeah, we have websockets, but you need a host name for that, and you need a TLS certificate, and you have to set everything up manually; it was just a big pain, and basically nobody did it. We changed that this year. We now have the WebTransport transport, which allows js-libp2p in the browser to connect to any go-libp2p node without any configuration. You just start up your node and js-libp2p can connect to it. So that's pretty cool. We are now working on WebRTC. The first part is rolling out later this year; the second part, browser-to-browser connectivity, will be rolling out early next year, so then two browsers can also talk to each other without any configuration needed. That's pretty cool as well. We launched a website, connectivity.libp2p.io, which explains in detail which protocols different nodes, whether Go nodes, Rust nodes, browser nodes, or Node.js nodes, use to talk to each other. So if you're interested in that, just head to that website. And, as was alluded to a little earlier, go-libp2p is now more resilient against DoS attacks. We have a DoS mitigation guide, which I want to highlight for the general public: it has a lot of good notes on how to architect your application to be resilient against DoS attacks. And for go-libp2p we even have integration with fail2ban, so you can hook things up to automatically ban malicious actors.

Me again. Jenny... no, Juju. Everyone is talking about how fast their team is growing; our team has grown five times bigger than last year.
Last year we had one person, Johnny, to serve the whole Andres docs need, but now we have four full-time engineers and one part-time documentation engineer, tech writers, to enable... I'm not even going to try to pronounce this: DaaS, docs as a service. We rebuilt the team in August and realigned our mission. We want to create user documentation that makes things easy for three-year... not three-year-olds, our protocols are too complicated for that, but easy enough for a seven-year-old to understand and to develop with, building on top of the stack we're building. Even though we've been growing fast, the engineers just keep working faster and shipping new features, and it's still not a big enough team; that's why we can only focus on three projects right now: Filecoin, IPFS, and libp2p. I know there are some IPLD friends here, and there is some support, but if I get more people we'll try to make that happen. The idea is that each docs engineer is embedded within an engineering project team, so that along with feature releases and major product launches the end-user documentation is ready, and people can actually use the amazing features everyone just described. We're also collaborating with a lot of other teams, like the DevRel team and the FVM team, so we'll be shipping some FVM docs this year; the MetaMask integration just works, and I hope we'll have user documentation for smart contract developers starting to integrate things. We're working with the Lotus team so people understand how to run a node, how to join Filecoin, and how to store and retrieve data. For IPFS and libp2p, honestly, my team members know it better than I do, so please go find Danny, James, Johnny and Timo and ask them what's going into the new docs later on. And we're also trying to enable docs as a service, so we are documenting the documentation.
So basically you can find the series of guides in our Notion pages; if you follow them, you should be able to spin up a docs site for your own project if you want to. You don't have to be blocked by our limited resources, so go check it out. We've also deployed something Molly really likes, metrics analysis, to a lot of the major documentation sites. The whole idea is that we want to learn whether people are actually learning what they're trying to learn from our documentation, and if not, we're going to try to improve it. All the metrics are in the settings of each project, so if you're curious whether people are reading our docs or not, go check the metrics out. But I think that's it.

Awesome. So that is our Andres 2022 summary of all of the amazing impact across all of these teams. I think this deserves a huge Minion level