Cool. All right. Well, welcome all. Thanks so much for coming to the third PL EngRes All Hands meeting of 2022. My name is Steve Leppke. I'm one of the engineering managers here on the team, and I'm joined by Jennifer today. We're going to, per usual, do some team updates and also spotlight some projects. And then we'll have a focused time at the end to do a deeper dive, particularly around some of our infra projects. As a reminder for setting context, the PL EngRes group is part of the larger Protocol Labs, whose mission is to drive breakthroughs in computing technology and push humanity forward. Specifically, the PL EngRes group itself is heavily involved with projects you're likely familiar with, like IPFS, libp2p, Filecoin, IPLD, and others. And the mission for our group is to scale and unlock new opportunities for IPFS, Filecoin, and libp2p by onboarding the best developers and contributors in the world, driving breakthroughs in protocol utility and capability, and then scaling network-native research and development across the network. How we do this is currently handled by about 12 different teams taking on different areas. I think last All Hands we shared our public Notion page, so you can click in and read more about these groups, but these are the teams that are responsible for pushing on that mission. We're currently about 95 amazing, strong individuals, but we need to bring in more across all kinds of different roles: engineering management, software engineers, TPMs, PMs, infra, research, data scientists. There are lots of openings, particularly for EngRes, but also in the wider network as a whole. So if you know of anyone or you're interested yourself, feel free to apply. Again, lots of opportunities, and we need more great folks to help make it happen. Specifically, our strategy, related to what I was just talking about, is to grow a wide and robust talent funnel: both in terms of people coming in, but also making sure that the knowledge that has been developing within Protocol Labs over the last number of years is getting outwards and being shared. So that's part one, the talent funnel. Key to all of this is having robust storage and retrieval. We want to be able to get large data in, but also to be able to get it out. That's critical. We spotlighted last All Hands the FVM effort, which is one of our concerted pushes for breakthroughs in programmability, scalability, and compute. But there's other great stuff happening there as well regarding retrieval markets, increasing the scalability of consensus, compute over data, et cetera. And kind of underpinning all of this is that we keep our critical network operations running: the releases of the libraries, keeping the infra for the networks secure and running as well. All of the items of keeping the lights on are key for the other efforts, and to enable others in the ecosystem to build on top. So that's the strategy. And with that, we'll now jump into IPFS. Take us away, Adin. All right, everybody. We'll talk about some of the recent updates on IPFS and how we are progressing on making peer-to-peer networking and retrieving data easier. Some of the metrics that we're tracking: the number of peer-to-peer nodes on the network and how long it takes to find newly added content. Things are still progressing pretty well. A new metric we've added here is around how we're managing PRs and issues on the Go side of things and the graphical interfaces.
We have gotten better recently at closing more issues. We've been making our way through those. There's still a bunch left to do, so if you see some slowdowns, you now know what our backlog looks like, but it's shrinking, which is good. This month we've made some releases: some security releases for go-ipfs, bug fixes for IPFS Desktop, things like keeping the Ukrainian and Russian Wikipedia snapshots pinned and updated, and various spec PRs to make things better, interact with our community, and improve our protocols. Thankfully, Steve's made an awesome project board so that both people working on the team and externally can see what we're up to and what's at the top of our priority list. Coming up: go-ipfs 0.13, where we're embedding the libp2p resource manager and making our gateways better at serving things like CAR files so they can be consumed verifiably. And we're going to better interact with our community on helping them understand the different types of IPFS implementations that are out there, and set up some meetings around those new implementations. Thanks, Adin. So, libp2p: the modular networking stack for peer-to-peer protocols, powering all kinds of different networks. Yeah, okay. So we're monitoring all these many networks already: how is libp2p behaving over there, how are the many different implementations working. If you want to explore that data in general, on the bottom right is a link to a lot of the data we collect. Cool, so one big work item on the libp2p side is hole punching, a.k.a. Project Flare. For that to work in a decentralized fashion, we want all IPFS public nodes to act as relay nodes and thus facilitate hole punching for those clients behind NATs. For that to work for the many clients out there, we actually need a lot of relays, or public nodes, on the IPFS network. So this is a really important metric worth growing in the long term, but it's probably high enough for now for us to move forward in general on Project Flare. On the libp2p community side, we're planning to attend the P2P Paris conference at the end of April, so it will be really cool to see many folks over there. And then this month in Berlin, we went to the ResNetLab meetup, organized by Yiannis, which was really cool, as it's always good to have engineers and research in the same room. All right, so the big effort, as I said, is hole punching, a.k.a. Project Flare. The missing piece on the Go side right now is for clients to discover public relays out there, and thus listen via those relays, and thus facilitate the hole punches. Today that is called AutoRelay, but there are still a couple of missing pieces there. Then we want to make sure we don't regress there going forward, so we need more automated testing around all of this. And then we have a really cool collaboration with Dennis Trautwein from ResNetLab, who's helping us measure success rates on hole punching. This is very similar to Project Flare phase one; you might remember that binary running on your laptop back then. And yeah, that will help tremendously. go-libp2p v0.18 with the resource manager is shipped; that is required for go-ipfs 0.13. And then the greatest news, I think, for our team: we have three software engineers joining the libp2p team. That is huge, and we're really, really excited about that. All right, and I think Steve, you're doing the right side. I know we're kind of over time. The key thing I just want to say is that Testground is moving forward; you can find us on the IPFS Discord where all the action is happening, and we're really excited about that.
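To make the relay-and-hole-punching setup described above a bit more concrete, here is a minimal go-libp2p sketch, assuming the v0.18-era options; the exact option names (EnableRelayService, EnableAutoRelay, EnableHolePunching) have shifted between releases, so treat this as an illustration rather than the canonical configuration:

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// A public node opts in to acting as a circuit relay, so it can
	// facilitate hole punching for clients behind NATs.
	relayHost, err := libp2p.New(
		libp2p.EnableRelayService(), // serve relayed connections to others
	)
	if err != nil {
		panic(err)
	}
	defer relayHost.Close()

	// A NATed client discovers public relays (AutoRelay), listens via
	// them, and then tries to upgrade to a direct connection via hole
	// punching when another peer dials it through the relay.
	client, err := libp2p.New(
		libp2p.EnableAutoRelay(),    // discover and use public relays
		libp2p.EnableHolePunching(), // attempt direct-connection upgrades
	)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	fmt.Println("relay:", relayHost.ID(), "client:", client.ID())
}
```

The point of the sketch is the division of labor: the more public nodes run the relay service, the more NATed clients can be reached, which is exactly why the relay count is the metric worth growing.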
Alex gave a great update last time about the investments he's been making in js-libp2p to get it ready with TypeScript, so that's in its final stages. And there have been some important security fixes, et cetera, going on there, but that's all been publicly disclosed. Good stuff happening in libp2p. Thanks, Max. We'll move on to IPLD. Take it away, Eric. Hello, hello. For everybody who needs a reminder, IPLD is the way we are trying to make a data model for the decentralized web, plus libraries and software to help everyone build better stuff faster. We have a couple of updates for you. We have a new release out of the go-ipld-prime library, v0.16.0. This should be a super easy upgrade, so if you use this library, please do go ahead and bump the number. We've got lots of new stuff, lots of stuff fixed, lots of stuff a little bit easier to use, and it should be completely without breaking changes to the API, because we've started to prioritize that very highly. One of the coolest things, there are many, but one of the coolest things is we now have an interface for big bytes, which can be used by advanced data layouts (ADLs). This will let you work with large blobs of binary data without necessarily loading the full thing into memory immediately. This is going to unlock a ton of features in the future. Some of the other things already advancing hand in hand with this: selectors can signal to invoke an ADL. An ADL might be, for example, UnixFSv1, and we have plug-in code for that now. And another new feature has come along, such as a range clause in the selector system, which can ask for a range of bytes within a larger object. This is wild because it gives us an end-to-end feature where you can ask for parts of a large hunk of data and have somebody stream you back the merkle proof of what happened inside the process of getting there. And you can have this conversation without needing to understand any of the sharding functions or anything that happened on the inside. This is incredible; this has been years of planning. Thanks a lot to Will Scott and the team around him for achieving some of these features recently. We've also got some new patch specs. These are things you can review; they're pretty early and have not landed yet. We are, of course, hiring. I'm talking too fast and running out of time. There are also some new exploration reports on the horizon about a lenses concept, which could further advance the selector system and make it more extensible. If anyone can take a look at that, feedback is highly welcome. Lots of other news that I just don't have time for. See you around. Thanks, Eric.
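To make the byte-range selector idea from the IPLD update concrete, here is a sketch using the go-ipld-prime selector builder. It assumes the v0.16-era API; the builder method names (ExploreInterpretAs, MatcherSubset) are my best recollection of that release and worth double-checking against the actual changelog:

```go
package main

import (
	"fmt"

	"github.com/ipld/go-ipld-prime/node/basicnode"
	builder "github.com/ipld/go-ipld-prime/traversal/selector/builder"
)

func main() {
	ssb := builder.NewSelectorSpecBuilder(basicnode.Prototype.Any)

	// Ask for bytes [0, 1 MiB) of a large object: interpret the node
	// through the "unixfs" ADL, then match only a subset of its bytes.
	// A traversal driven by this selector only touches the blocks needed
	// to prove that range, so a client can verify the merkle proof
	// without knowing how the data was sharded.
	spec := ssb.ExploreInterpretAs("unixfs",
		ssb.MatcherSubset(0, 1<<20),
	)

	sel, err := spec.Selector()
	if err != nil {
		panic(err)
	}
	fmt.Printf("compiled selector: %v\n", sel)
}
```

This is the end-to-end feature Eric described: the selector names the range, and the responder streams back just the blocks that prove it.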
Filecoin, take us away. Hello, everyone. Reporting live from Jennifer's apartment. All right, Filecoin. We have some metrics to start with, I think. Let me get to the next step, perfect. So total network storage capacity is up to just under 16 exbibytes. This is up from 15 and a half the last time we spoke, I think, so another half an exbibyte in a couple of weeks. The total number of deals is up to 2.45 million, up from about 2 million the last time. Data stored has increased by about 15 petabytes since the last time we spoke, up to 56.6 petabytes, 37 of which are verified. So yeah, the amount of verified data on Filecoin is increasing very, very quickly. And we now have 49 million NFTs stored on Filecoin as well; that's 151 terabytes of data. Highlights. The last time we spoke, Snap Deals, which shipped in network version 15, was kind of the big exciting thing. In the time since then, we've had over 750 ProveReplicaUpdate messages, so a lot of data is being introduced through that new path. Since then, the FVM has increasingly become the focus of the various teams working on Filecoin, both within EngRes and outside of it. The exciting update to share is that all major implementations can now sync mainnet using the FVM. And as of this morning, the Venus team, the folks at IPFSForce, announced that they have been sealing new data, proving it, and winning blocks, all using the FVM, which is very exciting. They totally beat us, the Lotus folks, to it, but we're very happy about that. Rust-based actors are being tested and audited for nv16 as well, and a bunch of final details are being finalized for the nv16 upgrade. Coming up: we're scoping and implementing the last few issues for nv16. PoSt workers, which is something that's really been demanded and was initially implemented by the community, is almost ready to be used; it's going through its initial testing phase. The folks over at Bedrock are getting ready to test and ship the Boost and indexer work. And Fil-Proofs is continuing to work on Halo2. Right now it's still within Fil-Proofs, but it'll be getting ready for integration and to go live sometime in the Q2 timeline, as expected there. Lots happening, as always. Awesome, thank you. We're gonna move forward into team updates. First up is NetOps. Hey, everyone, this is Jesse. This is the first time I'm doing this news coverage, and thank you so much for putting everything together. So, NetOps. We have several key KPIs in here. First is our 95th-percentile time to first byte. It's trending down now; as I remember, it was around 32 seconds one month ago. That's a huge improvement from our point of view, so we can move forward and have better performance. For the people who upload and pin data into our NFT cluster and the Web3.storage cluster, you can see the numbers growing, less than 10% I would say, maybe 5%, but that's a lot; a pretty good amount of data is coming in. Gateway requests have increased to 853 million, and gateway usage is at 4.1 million. Also, you can see our network uptime: we saw one incident, on the IPFS.io gateway, so we didn't meet five nines there, only 3.9 nines. So everything looks pretty good. But the reason we use these KPIs is not to show people that we are great at all the traffic numbers. We do want to see the traffic growing, but what we want to see more of is adoption: more people running their own IPFS and Filecoin clusters. We are hoping to see fewer requests coming to us, but the whole network getting richer. That relates to our next page, the things we are working on. Yeah. So NetOps has several updates. First, because we want to build best practices for running IPFS or Filecoin infrastructure, we are working with a vendor who has a lot of experience with enterprise standards, so we can learn from that what we should do. Second, we want to increase adoption, so people can use IPFS and Filecoin.
So we also spent effort on running a Lotus build artifact pipeline, to make sure we can create different builds easily so people can use our builds to run Filecoin on their own. Third is internal: we have Lili running inside our data center. It's built around Lotus, but we want to make sure that, when we present all the data, how we run the data center is world class. So we are working on a refactor to make sure it can scale easily, be more stable, and have better performance. The fourth thing is more on the operations side: if people want to join our network, they need to start from a snapshot. We never had a proper process in place to make sure people can get a snapshot easily and start joining the network, so we are working on making this easier. And the last thing is hiring. We need a lot of people to come help us. Share resumes with me or with Sid, and we will work from there to make the teams better and stronger. Since this is the first time I'm doing this All Hands, I want to give a shout-out to everybody: a lot of people on the team are working very hard behind the scenes. If nothing goes wrong, people don't notice the infrastructure team; it's only when something goes wrong that people become aware of it. That's why we will also have some team introductions after this slide, at the end, where our team members come in to tell people what we are working on and why. I think we are totally aligned with what Protocol Labs wants to build: we want to make sure adoption of IPFS and Filecoin increases in the world, and to make the world a better place where people can have a peer-to-peer network for file storage. Go ahead. So Jesse, just one more thing maybe we should add: we have just signed the Kubernetes GitOps contract to build out a fully automated, next-generation GitOps-based platform. So we're looking forward to that one, and not just NetOps but many other teams across PL can benefit from this. Updates from the Nitro front. NFT.storage is closing in on 50 million uploads. We have some announcements getting ready; we're working with a professional PR agency to get this pushed into a lot of mainstream press, and we would appreciate everyone's help in amplifying this when it comes. I think these announcements will be queued up for Monday; the threshold will probably be crossed over the weekend. Otherwise, yeah, we're seeing continued exponential growth on the NFT.storage side. For a while it was kind of linear on a larger base, but recent efforts like the integrations with Metaplex and Solana have really bolstered the numbers. On the Web3.storage side, continued steady growth. One thing that isn't called out here is that the number of active users continues to increase on the Web3.storage side, and we're excited about being able to give that a little more love as time goes on and things get shored up on the NFT.storage side. Highlights: we'll talk a little more about this in the shipped slides, but a bunch of stuff has shipped in the last few weeks, from the NFT.storage gateway to UCAN delegated uploads to a big redesign of Web3.storage. So, excited to tell you more about that in a little bit.
Heads up to everyone: if you see Notion docs with warning labels on top for Nitro things saying that the doc has moved or is deprecated, it's because we're midway through a Notion migration into our own Notion workspace, along with other nucleation prep happening in the background, so just ping me if you need access to things. And yeah, we gave a few talks at South by Southwest, including for the main event, and we're looking forward to those recordings being up; we'll definitely share them with the group once they're there. And then really quickly, one thing to call out on things moving forward: we're pushing a big work stream around improving uploads, a lot of improvements to our uploads workflow. Right now, users have to generate CAR files in memory on the client side and send that CAR file piece by piece up to the service. We're rethinking this to be more streaming-like and resilient, and to make it easy for users to upload large files; if they have a bad connection, it's no problem, that sort of thing. That's gonna be a multi-week effort, but it brings together a lot of cool things the team has been working on, so we're looking forward to the results, and we'll definitely share more as it gets fleshed out. Hi all, an update from the Bedrock team. We're starting the Boost rollout next week with the SP group and on track for full launch end of April. The indexer project is onboarding .storage collections at the moment; we're on track to be part of the Lotus 1.15.1 release and engaging additional index providers such as Kinetic and Ken Labs and others. RepSys has delivered a demo and an initial integration with Dealbot and Pando, and we're starting to integrate with FilRep. And lightning work is wrapping up: the next release's updates include GraphSync improvements and data transfer stability improvements. Exciting highlights: we've indexed over one billion CIDs and added improvements such as multi-protocol support and rate limiting. The team has posted eight grants on devgrants and attracted about 20 super solid community applications and submissions. The areas covered by these grants range from L1s to Fil+ fraud analysis. Please refer your contacts to apply for these grants if they haven't yet. Our KPIs are under construction; we're investigating some of the dealbot issues from last week. In general, we're working on expanding deal success metrics across additional data sources, hoping to use autoretrieve to get network-wide retrieval metrics, and also looking to create more scalable solutions that can serve more targeted metrics needs, such as data programs. In terms of opportunities: first, on retrieval incentives, we're consolidating work across multiple teams across the company to make sure we're all aligned in what we're building. And we're also rolling out our new storage provider community engagement program for new releases, with the Boost and indexer releases being the first ones to be tested out. Thanks, Bedrock. ResDev? I can jump in with my terrible voice, but I think the biggest data point here is that there's a Compute Over Data summit in Paris, April 3rd through 5th. There are a lot of exciting roadmap updates leading up to that, and then there's going to be a great report after the fact. So, see many of you in Paris in a couple of weeks. Over to you, Patrick. So yeah, retrieval markets, a roadmap update.
The Saturn team have been working hard, and we're getting the first Saturn gateways deployed this week. There's a new retrieval market website going up very soon, with flashy new designs and a bit more ability to add new projects and discover what's going on in this space. Myel have completed the JS GraphSync grant, and they're also working hard on Rust GraphSync, for those who are interested in different variants of GraphSync. And there's also been a grant kicked off with LeewayHertz for a retrieval performance dashboard, which will essentially compare all of the different Web3 CDNs, making requests from different parts of the world and finding out how they're performing. In April, Saturn will start launching some stations, maybe just privately to start off with before going public. We've got a WebRTC-direct grant kicking off with ChainSafe. And the Titan Ultra Network, which is another Web3, or content-addressed, CDN, is completing their phase one research, and that's with the NuWeb group. In May, Saturn is going to, we hope, have its v0 launch, and we also come to the end of the Magmo grant for multi-hop payment channels; I'm sure that will continue into some more work. And in the second half of this year, the FCR proxy payment network, which has been a follow-on project from the Pegasus work last year, will complete, and we'll also be trying to tackle some of the crypto-economic issues around retrieval. So, highlights and KPIs: the team's grown. We're now working with four, soon to be five, people, most of whom are working on the Saturn initiative. And as everyone else has said, there are positions open; there are also positions open in retrieval markets. We're gonna have a team colo in Barcelona between April 25th and 29th, the week after the Amsterdam DevConnect, so let me know if anyone would like to come and join. And yeah, we've launched some Saturn gateways, and we've got 10 grants going with the Retrieval Market Working Group. Challenges or opportunities, I'll let everyone decide which bucket it falls into: there are so many CDNs now for content-addressed data, and the list just keeps growing. So it's gonna be interesting to see how these can work together and find slightly different spaces in the grand Web3 map. And then there are also the crypto-economic challenges of the retrieval network. Very interesting and very challenging. And yeah, that's all. Thank you, Patrick. Into some of our spotlights at the end. As mentioned earlier, Lotus can now also sync mainnet with the FVM using the Rust actors, which is super exciting. This is now landing in Lotus v1.15.1; the RC is getting tested with some of our TSEs. So that's that. Yes, the Venus folks seem to be beating us; they are also on mainnet there, very exciting. So, a first step towards FVM. Slingshot. Hey everybody. We launched a new program within the Slingshot umbrella yesterday, actually, called Slingshot Evergreen. For those of you who have been following, Slingshot has been on a nice long journey since mainnet of onboarding loads and loads of data to the network. I think we're just at about 35 petabytes now in terms of data onboarded, across about 61, I believe, public and open data sets. But of course, 15 months also means some of the initial deals have started to expire, and we wanna ensure that data isn't lost, so that we can reliably host mirrors of those data sets on the network into the long term. So Evergreen is effectively a mechanism, a long-term program, that watches for data in deals that are expiring in the next two months and puts them up for renewal. Storage providers can sign up, get vetted, and then identify specific piece CIDs that they're interested in storing in verified deals for the next year and a half, and we'll automate the deal proposals for them.
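As a rough illustration of the renewal-window arithmetic behind Evergreen (a sketch with made-up types, not the actual service's code; Filecoin epochs are 30 seconds, so roughly 2,880 per day):

```go
package main

import "fmt"

// Illustrative types only; the real Evergreen program works off actual
// on-chain deal state.
type Deal struct {
	PieceCID string
	EndEpoch int64
}

const (
	epochsPerDay = 2880 // 30-second epochs
	windowDays   = 60   // "expiring in the next two months"
)

// expiringSoon returns the deals whose end epoch falls within the
// renewal window, i.e. candidates to be put up for re-proposal.
func expiringSoon(deals []Deal, headEpoch int64) []Deal {
	cutoff := headEpoch + windowDays*epochsPerDay
	var out []Deal
	for _, d := range deals {
		if d.EndEpoch <= cutoff {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	head := int64(1_700_000)
	deals := []Deal{
		{PieceCID: "baga...aaa", EndEpoch: head + 10*epochsPerDay},
		{PieceCID: "baga...bbb", EndEpoch: head + 90*epochsPerDay},
	}
	fmt.Println(expiringSoon(deals, head)) // only the first deal qualifies
}
```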
You should check out the program details on our super swanky website. And I really wanna thank the team for a lot of hard work over the last week to get this out the door. Nailed the timer. Good stuff. All right, the NFT.storage gateway. Hello, it's y'all's lucky day: you get to see my face again. We launched an NFT.storage gateway. The name is a little bit misleading if you come from the world of IPFS gateways, because it's actually a gateway racer. Folks send HTTP requests to the NFT.storage gateway, nftstorage.link, and then we race multiple public gateways by sending requests to them; the first one to get back to us with the data, we serve up to the client. It does come with very aggressive CDN caching: it's built on Cloudflare Workers running on the edge, so we do get to take advantage of a lot of the awesome caching infrastructure out there. We've actually had a 70%-plus cache hit rate so far for content requested through the NFT.storage gateway, really specializing in and doubling down on NFT CIDs. And there have been about 30 million requests in the last seven days, so a lot of folks have already started using it. There are upgrades coming soon: a paid perma-cache, super-hot gateway. Contrary to what you might believe, I did not make this; it was Vasco, with the help of NetOps, and I really appreciate the hard work there. Another thing we shipped, as I mentioned earlier: UCAN delegated uploads. Hugo was the real driver here. There was a problem in NFT.storage previously where we would issue API tokens, but our users couldn't put those API tokens in their end users' browsers, and as a result they would have to put up a proxy server as an intermediate touch point to upload data to NFT.storage. But now you can do it without this proxy server, because of UCANs. They're JWTs where you can have a chain of signatures, signed with folks' DIDs, granting subsets of permissions to subsequent UCAN tokens. So a user can get a UCAN token authorizing them to upload data on the marketplace's behalf directly to NFT.storage. And it's coming to Web3.storage soon; it's gonna be super valuable there as well. We're spinning out a general UCAN microservice to combine all this into one thing, as well as use it in our new uploads flow, as we talked about earlier. So, super excited about that; please test it out if this sounds interesting to you. And finally, there was a big Web3.storage redesign, and it is real drippy. It's nice looking, please check it out. Agency Undone did this for us. Yeah, I mean, not a whole bunch to say other than it looks completely different and super sleek.
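Backing up to the gateway racer for a moment: the production service runs on Cloudflare Workers, but the core race is easy to sketch. Here is a hedged Go version of the idea, with an illustrative gateway list; this is not the production code:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// raceGateways fetches path from several public IPFS gateways in
// parallel and returns the first successful response body, cancelling
// the remaining requests.
func raceGateways(ctx context.Context, gateways []string, path string) ([]byte, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // aborts the losers once a winner returns

	type result struct {
		body []byte
		err  error
	}
	ch := make(chan result, len(gateways)) // buffered: no goroutine leaks
	for _, gw := range gateways {
		go func(base string) {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, base+path, nil)
			if err != nil {
				ch <- result{nil, err}
				return
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				ch <- result{nil, err}
				return
			}
			defer resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				ch <- result{nil, fmt.Errorf("%s: %s", base, resp.Status)}
				return
			}
			body, err := io.ReadAll(resp.Body)
			ch <- result{body, err}
		}(gw)
	}

	var lastErr error
	for range gateways {
		r := <-ch
		if r.err == nil {
			return r.body, nil // first winner is served to the client
		}
		lastErr = r.err
	}
	return nil, lastErr
}

func main() {
	gws := []string{"https://ipfs.io", "https://dweb.link"} // illustrative
	body, err := raceGateways(context.Background(), gws, "/ipfs/bafy.../metadata.json")
	fmt.Println(len(body), err)
}
```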
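Similarly, to make the UCAN delegation shape concrete: a UCAN is a JWT whose claims name an issuer and audience by DID, carry attenuated capabilities, and chain back to parent tokens via proofs. A rough Go sketch of that shape; the field names follow my reading of the UCAN spec, so check the spec for the authoritative layout:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Capability is one attenuated permission, e.g. "can upload to this area".
type Capability struct {
	With string `json:"with"` // resource being granted
	Can  string `json:"can"`  // action being permitted
}

// Claims is the JWT payload of a UCAN. Each delegation is signed by the
// issuer's DID key; Proofs embeds the parent UCANs, forming the chain of
// signatures back to the root authority (e.g. the marketplace's account).
type Claims struct {
	Issuer   string       `json:"iss"`
	Audience string       `json:"aud"`
	Att      []Capability `json:"att"`
	Proofs   []string     `json:"prf"` // encoded parent UCAN tokens
	Expiry   int64        `json:"exp"`
}

func main() {
	// A marketplace delegates a narrowed upload capability to an end
	// user's DID, so the user's browser can upload directly without a
	// proxy server holding the raw API token. DIDs here are made up.
	c := Claims{
		Issuer:   "did:key:zMarketplace",
		Audience: "did:key:zEndUser",
		Att:      []Capability{{With: "storage://marketplace", Can: "upload/*"}},
		Proofs:   []string{"eyJhbGciOi...parent-ucan"},
		Expiry:   1700000000,
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```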
Am I up? Boost. Hello, Boost. Yeah, it's coming; it's so close. We're kicking off testing with SPs next week, which is very, very exciting, and we're on track for a full release in April. We're wrapping up updates to Estuary and FilClient right now, so that when Boost launches, they'll be able to take advantage of it right away. We're also working with Textile to get support into Auctioneer and Bidbot, so all those offline deals will no longer need to be offline. It's gonna be great. So, what is Boost? Well, here are a couple of feature highlights. Boost is the new version of markets for Lotus. It's fully backwards compatible with the current version of Lotus. It's also standalone, so Lotus can release on its own and Boost can release on its own, which is really, really great; they only depend on the Lotus API, which is awesome. We're also launching with support for storage over HTTP. So all those CAR files you have on S3? Guess what: you can just make deals with them directly. We're also launching with a lightweight CLI client for data preparation and storage. So if you ever thought deal-making on Lotus, or deal-making on Filecoin, was too hard, it's gonna get easier very soon. We're also launching with a web UI for storage providers, and you can check out a bunch of the in-progress docs at boost.filecoin.io. Thanks. All right, just a quick update from the FVM Early Builders Program. There has been great work from Ali and Dragon happening here. We had about 100 applications, out of which we accepted 25 teams and individuals. And last week we had an informal meet-and-greet call; 45 folks from many groups joined, including people from Ballas and Mansdow, Polyphene, FilSwan, all of those groups that are listed on the left. We basically went around the room and introduced ourselves, and it felt like there was great energy and enthusiasm, the start of something big. Lots of cool builders from all around the world, and each team spoke about what they're aspiring to build on top of the FVM. Some of the great ideas there were things around reputation systems, cross-chain bridges and L2s, new SDKs, indexing services, a bunch of things, pretty cool stuff. And last but not least, Ali and Dragon also worked on spinning up a Notion website for the program. There's a link in the slide, so go into the slide and check it out. And Ali is also putting together a public directory of all the teams that are participating and what everybody's working on, so that you can spy on the progress and stay updated with everything happening inside the program. So make sure to check it out once it's out. All right, drand is in space. We have been collaborating with CryptoSat for a while, since September of last year, and finally the experiment went live, and it was successful. They actually ran an instance of drand between a node on the ISS and a ground station, and they intend to roll this out to multiple satellites in the future. Essentially, this is going to become a new frontier for more trusted, tamper-proof computation, so to speak. So drand is one of the first early set of protocols that are going to be onboarded into space, and we are looking forward to working with them further. So stay tuned. We also grew our ecosystem collaborations: a couple of weeks ago we were at ETHGlobal's BuildQuest summit on Web3 gaming, and we gave a talk on drand; a lot of folks were excited, and we got a few responses on Twitter as well. So we're building up the ecosystem collaborations going forward. We also grew the League of Entropy, the collection of partners that operate the drand network as a decentralized network: StorSwift joined us as the 15th member, and we have a couple of new members lined up as well, including CryptoSat. And of course, we completed the development of new drand features, which are quite groundbreaking in a sense. It's one of the first randomness beacons that will be unchained, and it will also enable us to run multiple variable-frequency networks: we can run the 30-second network, and in addition we can run lower-frequency networks as needed to support additional use cases. Testing is going to commence next month, and we intend to file a Filecoin FIP and engage with the Filecoin team to understand how best they can make use of the new features we're launching. Timeline is not a concern; it's more about making sure we find the right integration with Filecoin in the future that can make use of the new features. And of course, we are kicking off a number of LoE, League of Entropy, focused projects that are going to make life easier for those 15-odd independent organizations to operate drand. Thanks to Yiannis, Nicola, Yolen, Mario, Will, Hector; it's been an amazing push, and I'm looking forward to taking drand to the next level and growing the LoE as well. Thank you.
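One way to see what variable frequency means in practice: in drand, the beacon round is a pure function of wall-clock time, the network's genesis, and its period, so a 30-second network and a lower-frequency network are the same math with a different period. A small sketch of that round arithmetic (illustrative, not drand's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// roundAt returns the beacon round expected at time t for a network with
// the given genesis time and period. Round numbering starts at 1 at
// genesis; before genesis there is no round yet.
func roundAt(t, genesis time.Time, period time.Duration) uint64 {
	if t.Before(genesis) {
		return 0
	}
	return uint64(t.Sub(genesis)/period) + 1
}

func main() {
	genesis := time.Date(2022, 3, 1, 0, 0, 0, 0, time.UTC) // made-up genesis
	now := genesis.Add(90 * time.Second)

	// Same clock, two frequencies: a 30s network and a 5m network.
	fmt.Println(roundAt(now, genesis, 30*time.Second)) // 4
	fmt.Println(roundAt(now, genesis, 5*time.Minute))  // 1
}
```

Because the round is deterministic, clients on any frequency network can ask for "the randomness at time T" and verify what they get back.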
Hello, hello. It's Yiannis here. I'm coming to report after a great meeting that we had in Berlin with ResNetLab and our collaborators. We gathered everyone physically in Berlin for the second time, and we had great updates from all the teams. We heard lots about what they're doing; there was lots of enthusiasm, lots of breakout sessions, and lots of results. Many libp2p and IPFS stewards joined, so thank you for joining; it's a great way to start working closer together. The primary topic was network measurements, benchmarking, and protocol optimization. I gave a demo at the EngRes demo day on the 10th of March, so watch that recording. We've produced a 14-page report with all of the latest results, and there are links here to find the outcomes of the meeting, which, to go into a little more detail, are a list of items that we want to dive deep into. These include the Flare NAT hole punching that Max and others mentioned previously. We want to better understand the effectiveness and performance of protocols like Bitswap. We want to identify whether there are peers with rotating peer IDs in the network, which might be screwing up some of the content routing processes; unresponsive DHT server nodes; the reliability and effectiveness of the Hydra nodes; and so on. And eventually we want to build what we have a vision for: the IPFS network observatory, where one will be able to look into the network from multiple different perspectives and identify problems and where there is space for improvement. This is the GitHub repository that we're going to be using; there are already several RFMs, requests for measurements, in there. So if you're interested, just follow that repository. Out of all that, it became clear that we need a dedicated team working on protocol benchmarking and optimization. We call that ProbeLab; it's still in formation, and it's going to live within production engineering, which is a new group that is going to be announced very soon, so stay tuned. There are going to be lots of collaborations with the IPFS and libp2p stewards. Again, as was said previously, we have a Notion page where we explain, and have pointers to, everything we're working on. The first-order targets are the NAT hole punching success rate measurement, optimistic provide, and several network measurements for the IPFS observatory.
The main motto here is that we're not measuring networks just for the sake of it; this is not an end in itself. What we want out of this is to identify bottlenecks, find bugs, quantify the available space for improvement, and eventually land protocol optimizations. We are going to have grants for that, milestone grants, so get ready, spread the word, and follow us to see interesting things. We already have some results on NAT hole punching; you can see a snapshot of a Grafana dashboard down on the right-hand side, but there is much more to come. Thank you. Thanks, Yiannis, exciting stuff from ResNetLab. Very cool. So now we get to do the NetOps deep dive. Yeah, from a very high level, NetOps, especially on the backend side, has three different areas: Bifrost, Fil Infra, and Sentinel. From a high level, the picture is like this. I will have a Miro doc to share with everyone interested, and each team will go through the details of what they are doing, the team members, and where we are today. So here I just kind of want to start the conversation; all the hard work that's happened comes from the team. What is Bifrost? What does the Bifrost team do? It does a few things, but mainly it runs IPFS infrastructure at scale. Our biggest project is the IPFS gateway, hosted at ipfs.io, which allows browsers, and tools that speak HTTP, to access content from the IPFS network without having to run their own nodes. It basically provides a canonical way to address IPFS content via HTTP. We also provide the default bootstrap nodes, which are baked into go-ipfs and js-ipfs as a public service, so that other nodes can find each other in the network. And we also run preload nodes, which augment js-ipfs and expose IPFS endpoints that are not available in the browser. In practice, that means js-ipfs clients can add content locally in the browser, then use a preload node to request that CID, effectively caching the data and allowing the browser to be closed and reloaded without losing the data. All right, so why are we doing it? Our motivation is to provide the best-performing infrastructure for others to use, definitely and most importantly for the IPFS gateway, which seems to be the most widely used piece of infrastructure, and to provide best practices, standards, and tools for others who want to run IPFS and IPFS Cluster at scale. How are we doing it? At a high level, the gateways run on bare metal nodes on Equinix. We run in seven data centers, and we have between four and 16 nodes. The reason we run on bare metal and not VMs is that disk IO is very important, and access times are very important, if we want to provide a very fast service. That's why we haven't moved to VMs yet, where most storage is via the network, basically; we're working towards that too. There is a load balancing layer running Nginx that does all of the HTTP-layer balancing and filtering and such, and also a separate IPFS layer upstream from that, which allows us to scale out just the IPFS boxes without having to mess with any load balancing or Nginx issues. On the load balancer layer, we use anycast to route traffic to the data center that is the fewest hops away from the request origin: each load balancer node announces a global BGP route for the IPv4 and IPv6 addresses that we run.
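As a toy illustration of that two-layer split: the real setup uses Nginx and anycast, but this Go sketch shows the idea of a thin balancing layer in front of an independently scalable pool of IPFS gateway boxes, with made-up upstream addresses:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func mustParse(s string) *url.URL {
	u, err := url.Parse(s)
	if err != nil {
		panic(err)
	}
	return u
}

func main() {
	// The "IPFS layer": gateway nodes that can be added or removed
	// without touching the balancing layer. Addresses are illustrative.
	upstreams := []*url.URL{
		mustParse("http://ipfs-node-1:8080"),
		mustParse("http://ipfs-node-2:8080"),
	}

	// The "load balancing layer": a reverse proxy doing simple
	// round-robin across the pool (Nginx does this, plus filtering,
	// in production).
	var n uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			u := upstreams[atomic.AddUint64(&n, 1)%uint64(len(upstreams))]
			req.URL.Scheme = u.Scheme
			req.URL.Host = u.Host
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```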
We use hundreds of metrics, if not thousands by this point, to monitor and alert. As far as uptime goes, we have Pingdom checks and synthetic checks from outside the network. Within our network, we have Nginx metrics on the load balancers: error rates, performance, time to first byte, that kind of thing. We also collect go-ipfs metrics from within IPFS, things such as goroutines, peers, want lists, also time to first byte within IPFS, and OS-level metrics such as IO and CPU, generic things. We follow infrastructure-as-code principles, which means everything we run is managed in GitHub, and we deploy it through CI via Terraform and Ansible playbooks. Our progress so far: we've recently hit one billion requests a week on the ipfs.io gateway. It has since gone down a little bit, but it seems to be going back up, so we're hovering around a billion total requests. And we've hit a time to first byte of around eight seconds for 95% of our users, with 99.9% uptime. What we're going for is five seconds. So we will continue to scale and improve our system to ensure that, for 95% of our users, it doesn't take longer than five seconds to start receiving content from the IPFS network.
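Time to first byte is the headline KPI here. For reference, this is how one can measure it from the client side in Go with net/http/httptrace; a minimal sketch of a synthetic check, not Bifrost's actual probes (the CID is just an example):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

// timeToFirstByte issues a GET and reports how long the server took to
// send the first response byte, the same quantity tracked per gateway.
func timeToFirstByte(url string) (time.Duration, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}

	var start time.Time
	var ttfb time.Duration
	trace := &httptrace.ClientTrace{
		GotFirstResponseByte: func() { ttfb = time.Since(start) },
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	start = time.Now()
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return ttfb, nil
}

func main() {
	d, err := timeToFirstByte("https://ipfs.io/ipfs/<some-cid>")
	if err != nil {
		panic(err)
	}
	fmt.Println("TTFB:", d)
}
```

Run from probes in several regions, the 95th percentile of these samples is exactly the "eight seconds today, five seconds target" number quoted above.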
All right, so, what we do: we operate and monitor core Filecoin network infrastructure, bootstrap nodes, api.chain.love, the stats dashboard, disputers, and we're also a core part of running the devnets, such as calibration net, butterfly net, and interop net. We drive operational improvements in tooling and Lotus, and we support and enable network developers and operators. Our top goals for 2022: the first one is around api.chain.love, which is a Lotus gateway. This is a service that I'm sure many of you have used; it's the default in Lotus Lite, and Lotus Lite is often used as the introduction to Filecoin, so many new users interact with the Filecoin chain through chain.love. It allows you to interact with the chain without running a full Lotus node yourself. We have some very ambitious goals here: we're trying to push it to be able to handle more than 200 requests per second without adding latency to chain syncing, and we're really looking to push Lotus to its limits and develop tooling and improvements in Lotus to reach our goals, because it will be very difficult, I think, with the existing patterns to get there. We also have a goal to run a Lotus chain backup service that produces backups that are never older than eight hours. Reba has done an amazing job since Filecoin launch of running this, and we're hoping to launch a parallel service so he can sunset his and move on to other things. And in general, we're trying to reduce the operational overhead for Fil Infra so we can focus on high-impact projects and not get bogged down in some of the manual toil we've had in the past. So how are we doing it? We are monitoring for high uptime and collecting data for continuous performance tuning. We're automating and improving the deployment of our Lotus core infra, reducing manual toil for our team, and creating and sharing operational tools and resources. We're also supporting a lot of devs: we're providing access to storage provider hardware and storage for five teams in our data center. And we're leading a managed GitOps platform rollout, which is what Sid had mentioned: Weave, which will give EngRes teams autonomous control over their applications and deployment capabilities. We're rolling that out within the next month or so, and we will give you more updates on milestones as we start to work with Weave. So this is a basic diagram showing that we run our current core infra in EKS: we run Lotus and the gateway and the dashboards there, and that's where we're focusing some of our work around deployments and automation. We collect data in Prometheus, visualize it in Grafana, and then we also monitor that data and page and send alerts. Our progress so far: we're super pumped about chain.love, because we're seeing a pretty huge increase in usage. The weekly average is currently at 45 requests per second, which is a 57% increase from 40 days ago, and we have four nines of uptime for that same duration as well. We have three regions where we operate our clusters, with nine bootstrap nodes across them, so it's easy for anyone to join the Filecoin network. And for the stats dashboard, we have 99.8% uptime in 2022; we would like to get that to three nines, but we're super close. What's next? We have the Filecoin chain snapshot service: it's in its planning phase, and our first milestone is to provide snapshots in S3, which is the current functionality. We're also hoping to push on HA and scalable Lotus, because we need to ensure that we can keep chain.love up and meeting demand, and we would really love to see this service grow super useful to the whole network. And we have a recent Lotus build artifact improvement plan; there's a link to it there, and you can read more about it, but the general idea is to increase the build success rate and the usability of the Lotus packages and images that get built as part of the CI pipeline. And that's all, thank you. Thanks, Kerenan. Next up, Sentinel. Hey everyone, I'm Hector. Sentinel is another of these things inside NetOps, and our goal is to guide the success of Protocol Labs technologies through data monitoring. We're especially focused on everything that happens on the Filecoin chain. That involves doing a bit of everything: writing software and running it in our infrastructure, doing a bunch of data warehousing, but also doing monitoring and analysis of the Filecoin chain, dashboards, other things. Our main objective is that this chain data is complete and reliable, meaning it corresponds to what is actually on chain; that we're able to query that data really fast, as soon as it's produced by the chain; and that we can extend those queries over the whole length of the chain, which is when it starts becoming a large amount of data. Of course, it's not only for us internally; it's also for the community to build upon. That means we need to make not only the software but also the data we extract available for reuse by the community, so they can run their own analysis. And we have to keep all of this running while Filecoin keeps taking great steps and making progress at a great pace. This is a very simplified diagram of the Filecoin data extraction pipeline. We have Lili, which I will speak about a little more in a moment, which is the application that extracts the data from the chain. We push that into a database, in this case TimescaleDB, and we have an additional data pipeline which essentially stores the whole archive of data on S3 buckets and makes it available through Athena, et cetera. So it's very simple here, but there's a little bit more complexity when you look inside those boxes.
One of the main applications we write and maintain is Lili. Lili is a wrapped Lotus node that watches the chain, and on every epoch, every 30 seconds, when the new tipset happens, it extracts everything that happens in it: messages, chain economics, sectors that have been committed, deals, et cetera. Everything is extracted as structured data into a database. The idea is that we have this running and following the chain perpetually, but also that we're able to use it to reprocess the chain, or extract data from previous moments in the chain. That happens when you were not running during a certain time, or when you introduced a bug, or when someone wants to do something else with the data, or wants to process a chain which is not mainnet, et cetera. This is the diagram of the architecture; I think this is the thing we're aiming to move to when we want to scale horizontally and so on. Right now it is similar, but things are more contained in a single application, a single daemon. The main thing that worries us for Lili is that, with the FVM and with the growth of the Filecoin chain, there will be way more things happening on chain, which will mean way more data to extract, and that extraction always needs to happen within the 30 seconds that every epoch takes, because otherwise you're going to fall behind. Therefore, we need to find ways to parallelize extraction as much as possible, so that we can scale indefinitely and always have fresh data for analysis. We've made lots of progress; we're at a very stable moment now. We're able to process the chain in time, all the data goes into dashboards and into the database, all the data goes into the archive, and our database and our archives are available to the partners that we support, who build their own applications. For example, Starboard has made really nice public stats panels with graphs about Filecoin, for the community to make Twitter threads about Filecoin data and so on, and this is all powered by the work that we do in Sentinel.
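To make that 30-second budget concrete, the natural shape is to fan extraction tasks out in parallel under a per-epoch deadline. A hedged sketch using golang.org/x/sync/errgroup; the task names are illustrative, and this is not Lili's actual task runner:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

// Task extracts one category of chain data for a tipset. In Lili the
// task set (messages, chain economics, sectors, deals, ...) is
// configurable; these names are stand-ins.
type Task struct {
	Name string
	Run  func(ctx context.Context) error
}

// extractTipset runs all tasks in parallel and fails if the batch
// cannot finish within one 30-second epoch, i.e. before the next
// tipset arrives and the indexer starts falling behind.
func extractTipset(ctx context.Context, tasks []Task) error {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	g, ctx := errgroup.WithContext(ctx)
	for _, t := range tasks {
		t := t // capture loop variable
		g.Go(func() error { return t.Run(ctx) })
	}
	return g.Wait()
}

func main() {
	tasks := []Task{
		{"messages", func(ctx context.Context) error { return nil }},
		{"chain_economics", func(ctx context.Context) error { return nil }},
		{"sectors", func(ctx context.Context) error { return nil }},
	}
	if err := extractTipset(context.Background(), tasks); err != nil {
		fmt.Println("epoch extraction failed:", err)
	}
}
```

Splitting tasks this way is also what lets the extraction scale out across workers as FVM activity grows the amount of per-epoch data.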
And Steph is gonna talk about this one. Hi, I'm Steph. I'm part of the Sentinel team, and I'm mostly focused on the data ingestion and data analysis side. We want to provide data to everybody in the PL network so that you can make more informed decisions in your everyday work. Our goal is to build a data platform that enables anyone in the PL network to perform their own analysis, so that you wouldn't have to, let's say, come to me and ask for a query; ideally, you would be able to do that yourself. That's what we're working towards. How are we currently doing it? We're creating a source of truth with a pipeline that gathers data from varied sources. It's not only chain data, but also data from our Airtable CRM, Twitter mentions, stuff from Elasticsearch for the community and events folks, and we also enable the warehouses for the EngRes group and other systems, so that people can use their self-service tool of choice, whether that's Periscope or Looker or Observable. Yeah, so currently there are already a lot of dashboards available in Periscope, and data warehouses such as Redshift are already available, given the correct credentials. If you want access, just talk to me and we can set something up for you. What's next? We need to automate a lot of existing manual processes. For example, our archiving operation is currently triggered by a user: somebody logs into an EC2 instance, runs a bunch of commands, and then the archiving process is performed. Ideally, this should be a pipeline triggered by a cron job instead of a person having to do it. We would also like a well-defined self-service data roadmap, so that hopefully by this year, people can ask questions of our data warehouse and get answers directly. Awesome, thanks a lot, NetOps, for the deep dive. Thanks to everyone who presented and for tuning in this week. We'll be back again in four weeks. Hope everyone has a wonderful rest of the day.