Hello and welcome everyone to our first PL EngRes all hands of the year. We are a little overdue since February, but we're excited to share all of the new things that have been happening in EngRes land. Here's our agenda. We're going to start with our update; we have a lot of good spotlights for you. And then we have two deep dives: one on all of the awesome cost optimizations that have been happening to the various web2 infra that we use to power things, and one on the results of the December NAT hole punching month. So excited to make time for those. As a reminder, the PL EngRes working group is part of the PL network, where we drive breakthroughs in computing technology to push humanity forward. We think that the internet is one of humanity's greatest superpowers, and that building an awesome, solid, reliable foundation for humanity's information is super critical to do now, especially because there are so many breakthroughs coming; we're already seeing how fast AI can take over the world. And we want to make sure that these are built on solid foundations that are robust to all sorts of issues, centralized control, and other problems like that. We have the pleasure of working on a ton of amazing projects that are open source and of being part of these open source communities and ecosystems. We especially spend a chunk of our time on IPFS, libp2p, and Filecoin, but there are many other protocols and networks that we spend time on as well, contribute towards, and help flourish. And so we're always seeing and helping support more groups getting started in the PL network as well. Our mission is to scale and unlock new breakthroughs for IPFS, Filecoin, libp2p, and related protocols. We do that by helping drive new breakthroughs in protocol utility and capability, scaling the great collaborative open-network-native research and development that happens across the PL network.
And also helping steward and grow our open source projects, networks, and communities to new heights. We have a whole set of different teams within the PL EngRes working group, and you can get in touch with all of them in our public Notion. And this is our strategy for 2023. It should be pretty familiar for folks who saw it in December, but it's new and updated for this year. Our base layer is keeping critical systems stewardship and growth. We want to be growing, because if you are static, you are dead. And so we need to make sure that the amazing open source communities we contribute to are always on a solid growth trajectory. We also look at the overall contribution and the teams contributing towards these projects, supporting many groups to come in, build, scale, and launch new applications and businesses that collaborate and increase the total net value created here. And then we have our two big bets for 2023, which look very familiar. One is robust storage and retrieval across IPFS and Filecoin: scaling our data onboarding and scaling our retrievals to CDN speeds, with lots of adoption from lighthouse users whom we can shine the lighthouse on to show the value they are getting from IPFS and Filecoin. And second, compute over Filecoin state and data: upgrading Filecoin with lots of new L2 capabilities, helping scale the chain space for all of those capabilities, and also bringing compute to the data, and a lot of stuff there. Diving into robust storage and retrieval: this is our storage and retrieval lifecycle. If you saw the EngRes summit it should be pretty familiar. We have storage clients who are storing data with storage providers, who then interface with retrieval providers and retrieval clients, and our focus this quarter in particular is making sure that the retrieval part of that lifecycle works, and works well.
And so there's actually a very exciting project called Rhea, which is focused on robust retrievals, collaborating across the IPFS gateway, Saturn, Boost, the network indexer, and a lot of the underlying data transport components and modules across IPFS and Filecoin, to make sure that this storage and retrieval lifecycle is connected smoothly and that we complete the circle: data stored by, say, Filecoin SPs is accessible to IPFS gateway clients using Saturn for those fast lookups. That also helps us support all of the amazing growth happening in Saturn land and bootstrap that business, and it helps with our cost cutting efforts: we reduce what we're sending to folks like AWS and Equinix and instead get to reroute those funds and resources to our web3-powered Saturn nodes in our distributed retrieval market. So lots of great work happening there; big thanks to Bedrock, Saturn, IPFS, Bifrost, and a number of other teams for pushing that forward. We have a couple of big initiatives in the second area. One of them is FVM, which is unlocking a whole ton of new things, like storage provider DeFi loans and staking, and programmable storage that can lead to perpetual, long-term (maybe infinitely long-term) deals into the future, plus a huge plethora of different projects that builders are making through various hackathons. So lots of new capabilities coming there. The other thing that folks are building on top of FVM is computation over data: the ability to have layer twos that harness the L1 of Filecoin and Filecoin's programmable state to spawn additional components optimized for new capabilities. One new capability we're very excited about is bringing large-scale compute over Filecoin data. This is all the great work that the Bacalhau team is doing.
There's also lots of opportunity for more compute networks to come and build on top of those reusable open source components, to build bespoke compute networks that optimize for different points in Juan's triangle. And then there's some really awesome work happening by the InterPlanetary Consensus team to bring shardable chain space, and the ability to create subnets where you can transfer state between subnets, to the Filecoin network and beyond. So very exciting work happening there; those are some of our big bets in this area. Here is a draft set, probably fresh for many folks, of our Q1 OKRs. The objectives are things we hold constant quarter to quarter: keeping our critical systems running, accelerating the teams contributing to the stack, scaling data onboarding, and upgrading Filecoin with those new capabilities, so that maps to our overall strategy. Here are some of the KRs. First, we definitely have an effort that we are scaling up around better monitoring the functionality of the IPFS network, with better understanding of, say: nodes might be online, but are they serving the content, and are they offering, say, IPFS-powered websites the sort of performance needed to serve their end users effectively? That is a focus within our overall goal of maintaining good uptime, and we want to add it as a requirement to achieving our uptime and security goals. We also had a KR last year on cutting our infra spend on centralized web2 infra services. We overachieved on that goal: our target was a 30% cut, and we did closer to 40-plus. So our goal for this quarter is to cut it by 50%. We're already almost halfway to this goal; I think we're at about a 20% cut in our centralized infra spend, which is awesome.
And some of our work around the decentralized gateway project would hopefully get us the rest of the way. We would still count this goal achieved even if we spend the same amount but have rerouted it from centralized spend on AWS and Equinix to spend on Saturn and all of the Saturn decentralized providers, because that is staying within our technologies and ecosystem, within web3, which we think is better. In terms of hyperscaling the talent and teams contributing to the stack: there has been awesome work on engaging all sorts of awesome builders with FVM. And so a goal, which I think we're already on track for, is engaging 2,000 builders through 20-plus events to start building on or with FVM. I think that's a high mark for the FVM team, but doable. We're also making sure that we're scaling and filling some of the critical open roles within teams that are really hitting inflection points this quarter and next quarter, and that we're preparing them for the sorts of scale, adoption, usage, and hopefully business success where they would then have members of their teams to help with that scale. In terms of scaling data onboarding: this is our decentralized gateway goal, to have the majority (I think the goal the other team is using is 90%, but at least 50%) of IPFS gateway traffic served by Saturn by end of quarter. And the 900 petabytes of total data on Filecoin is a shared goal with the Outercore team, who are also pushing to reach that by end of quarter. And then we're definitely going to be contributing to the number of successful retrievals: we've hit a high of about 1 million successful retrievals per week from Boost, but we're looking at 2 million successful retrievals per week by the end of the quarter. And finally, our core objective of upgrading Filecoin state, data, and compute.
We have our FVM launch, which we are gearing up for, with a goal of over 500 unique contracts deployed on mainnet. That's a last-minute upgrade: we've seen so many unique contracts deployed on Hyperspace, the current developer testnet, that I think we can hit 500, so I've upped the goals on y'all. There's also a milestone around InterPlanetary Consensus launching subnets on Spacenet, their ongoing IPC testnet, and gaining some users actually starting to make use of IPC on Spacenet to test it out. And finally, for compute over data: they're already running active jobs on Bacalhau, but they aim to reach 1,000 jobs per day by the end of the quarter, with five-plus exemplar partners. So we have some good goals, a lot of them launch-related, that we are going to aim to hit by end of quarter; we'll let you know how we do. Our roadmap has now moved to StarMap, so if you would like to see it and interact with all these different GitHub issues, you can go here to see more. You can see a number of milestones that have been accomplished since around the end of Q4 and the beginning of January. I think this one, the Zinnia project launching a wallet in Filecoin Station, landed yesterday maybe, so update your Station nodes if you haven't already. And we have a number of exciting launches happening this quarter. We have free retrievals from Boost, which is happening February 23. We have SPs gaining actual adoption, enabling Boost and starting to offer it by end of March. We have our FVM mainnet launch coming up in the middle of March, our Saturn integration by end of March, a milestone around drand mainnet, IPC deployed on Spacenet, and the Zinnia Station runtime public alpha. And I believe this one is Saturn for CDN customers by end of quarter. So a lot of really exciting milestones that we are launching here.
Feel free to add more to our StarMap roadmap as we go. And I'll hand it off to our IPFS folks to tell us more about what's happening there. IPFS, as you know, is the peer-to-peer gateway to the decentralized web. It serves content over a network of peers, and the content is content-addressed, of course, instead of location-addressed as is the case with current internet technologies. We believe there is a lot of power in these technologies, so we're doing our best to make them as fast as, and even faster than, current technologies. So a little bit on the KPIs here. We looked into the network size, which has grown: if you see the last few bars towards the beginning of this year, we've hit almost 500,000 unique IPFS network nodes in the last week alone. I think it's a little bit less, around 490,000. But in the context of the first objective that Molly mentioned before, we tried to look into the detail of that, to figure out how many of those are DHT servers in the network and how many are DHT clients, to start getting a little more clarity about stability in the network and how nodes are interacting with each other. On the top left we see the bars over time, and on the top right we see the DHT servers versus clients for the last five weeks, so basically for 2023. Of course there was a dip during the Christmas period, when the network size shrank a little (people had other things to do), but it's picking up again. We're seeing that there is a fair number of client nodes, which is staying stable over time, and the server nodes have also increased and have kept stable for the last couple of weeks.
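The content-addressing idea described above can be sketched in a few lines: the address is derived from the bytes themselves, so any peer can verify what it received, no matter where it was hosted. This is a hedged illustration only; real IPFS CIDs wrap the digest in multihash and multibase encodings with a codec, not a bare SHA-256 hex string as shown here.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Illustrative only: derive the "address" from the content itself.
    # Real IPFS CIDs use multihash/multibase plus a codec prefix.
    return hashlib.sha256(data).hexdigest()

a = content_address(b"hello interplanetary world")
b = content_address(b"hello interplanetary world")
c = content_address(b"different bytes")

assert a == b  # same content, same address, wherever it is fetched from
assert a != c  # different content yields a different address
```

The practical upshot is that a client can ask "who has this address?" rather than "what does this server say?", which is what makes serving content from any peer safe.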
We're going to keep monitoring that, but we're also going to start providing other characteristics of the network, such as stability, uptime, and how many nodes can actually serve content and be useful to the network. Which leads us to network performance: the find latency, which you see on the bottom left. There has been an increase in the latency to find content in the IPFS network, particularly through the DHT. This is because of several events and incidents that have happened over the last couple of months. We're looking into that, and I can confirm that this latency is going back down this week; it's not reported here, but we're going to start seeing normal levels of latency again. There were several operators that had misconfigured their nodes, some due to the resource manager, some due to other reasons. After a little effort, we've seen things going back to normal, thanks to the IPFS team that provided the right recommendations to them. But we're going to start providing more detailed KPIs for the performance there. You can find all of this in the Notion page linked in the title of the graph. Finally, in terms of community, we're seeing a small uptick in the number of users getting involved, in terms of PRs and issues being discussed on GitHub and so on. We hope this is going to keep up, and we're doing our best to serve everyone's requests. Thanks, Yiannis. Gus, tell us the highlights. Okay, so yeah, we've got a bunch of improvements on the protocol and implementation side of IPFS. On the cost saving side, we decommissioned the bulk of the work that the Hydras were doing. They're still running, but they're not caching content for the DHT anymore, so we ended up saving around $30,000 a month from that. There was a slight latency increase of about 13%, but we're going to keep working to drive that back down.
We released Kubo 0.18, and the biggest change there is shipping HTTP delegated routing with the cid.contact indexer enabled by default. So now Kubo will query both the DHT and cid.contact when it performs its content routing resolution. We started shipping go-libipfs. It's still a work in progress, but this is a library that contains components for building your own IPFS implementations. We think that most applications should be building their own binaries and not using Kubo, because Kubo is turning into a kitchen sink, and the result is that it doesn't serve anyone very well. So we're really focusing a lot of our effort now on extracting stuff from Kubo and putting it in go-libipfs, to empower the community to build their own implementations. Helia, the new JavaScript implementation, had its first demo day. Big shout out to Ignite for helping us work on the JavaScript Kubo RPC client. And we finally finalized the HTTP delegated routing spec; you can see that in IPIP-337. And there's a new Luma group for IPFS events, at the link there, lu.ma/ipfs. This includes things like the content routing working group, office hours, js-ipfs, compute over data, etc. And then coming up this month, there's a new gateway binary we're working on as part of Rhea, called bifrost-gateway; the link to the repo is there. It's building on top of go-libipfs: we've extracted the gateway code out of Kubo, put it into go-libipfs, and we're reusing it in this new gateway binary. Awesome, great to see. Over to Russell for Ignite. Great, thanks. Major update here: all of our projects now have telemetry. We have metrics for WebUI; the Kubo WebUI item mentioned there just means we're getting metrics for users rendering WebUI from Kubo, which is coming when 0.19 is released. A big call out is that the telemetry has switched to opt-out where a project had telemetry previously, but a lot of the projects didn't have any. So now we have a lot of metrics.
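To make the delegated routing update above concrete: a hedged sketch of how a client builds the lookup request defined by the HTTP delegated routing spec (IPIP-337), which specifies `GET /routing/v1/providers/{cid}` against an indexer such as cid.contact. The helper name and the example CID below are illustrative, not taken from Kubo's code.

```python
from urllib.parse import quote

def providers_url(base: str, cid: str) -> str:
    """Build an IPIP-337 delegated routing lookup URL.

    The spec defines GET /routing/v1/providers/{cid}; the response is a
    JSON document of provider records the client can then dial directly.
    """
    return f"{base.rstrip('/')}/routing/v1/providers/{quote(cid)}"

# Hypothetical CID, purely for illustration.
url = providers_url(
    "https://cid.contact",
    "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
)
print(url)
```

Because the protocol is plain HTTP, routing can be delegated to any indexer endpoint without the client participating in the DHT at all, which is exactly what makes it attractive for light clients and browsers.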
As you can see here, we're primarily just trying to get a count of users initially. Our metrics will be opt-out going forward, and we'll stay away from user-identifiable metrics, PII, anything like that. But yeah, that's the major call out there. From the data you can see in this bottom left graph that Companion is dwarfing the rest of our users. So we have a clear picture that Companion needs to be our priority, and it will be going forward; we'll make sure to focus on improving Companion and targeting that. I've got a link to the dashboard we have in Notion, where I've got some of these charts rendered, and a link to the public Google spreadsheet where users can see our daily active users and some of these other charts; I'm just pulling the last 90 days from Countly and publishing them to the Google spreadsheet automatically. And then for StarMap, we've actually changed the name from starmaps.app to starmap.site, so the link shared earlier might redirect you to starmap.site. Don't be afraid; the redirect should be working, but we'll use starmap.site going forward. There have been some UI updates and significant speed improvements. On initial render we're pulling the data from GitHub, so it's going to be slow if there are a ton of issues, but on that second render it should be almost instantaneous: any children issues you've pulled will be cached. There's a stale-while-revalidate caching strategy, so the cache keeps catching up to the latest from GitHub; you might have to refresh twice, but the data should always end up up to date. And then, yeah, bug fixes. We're working on the D3 migration, but I've also looked into a list view for starmap.site, so let me know if you feel strongly one way or the other. I'm kind of waffling on which to prioritize, but they're both coming. And then another call out: IPFS Companion metrics.
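The stale-while-revalidate behavior described above, where you "might have to refresh twice", can be sketched roughly like this: serve whatever is cached immediately, and revalidate the entry so the next request sees fresh data. A minimal single-threaded sketch with a hypothetical `fetch` callable, not StarMap's actual implementation (a real one would refresh off-thread or in a background task).

```python
import time

class SWRCache:
    """Minimal stale-while-revalidate cache sketch."""

    def __init__(self, fetch, max_age=60.0):
        self.fetch = fetch      # callable: key -> fresh value
        self.max_age = max_age  # seconds before an entry counts as stale
        self.store = {}         # key -> (value, fetched_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            # Cold cache: the only case where the caller must wait.
            value = self.fetch(key)
            self.store[key] = (value, time.monotonic())
            return value
        value, fetched_at = entry
        if time.monotonic() - fetched_at > self.max_age:
            # Serve the stale value now; revalidate for the next caller.
            # (A real implementation would do this refresh asynchronously.)
            self.store[key] = (self.fetch(key), time.monotonic())
        return value
```

This is why a second refresh shows newer data: the first stale hit returns the old value while triggering the revalidation that the next hit then benefits from.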
We've got user counts here, but we don't really know how many users using Companion are actually benefiting from it: how many IPFS URLs are they visiting per day, things like that. We really want to see whether Companion is actually useful, or whether these 60,000-plus Companion users are only using IPFS like 1% of the time; we'd like to get a better picture of that. That's it for us. Peter, IPDX. Hi, it's the IP developer experience team. We're here to help IP stewards be more effective through tooling and automation. We're currently finishing up the Kubo release automation and Kubo CircleCI-to-GitHub-Actions migration projects. We're also in the process of rolling out a new unified CI release, which comes with the Go 1.20 upgrade. Finally, we're extremely excited to announce we're starting on the gateway conformance testing initiative. As for the last month, the main highlight is that we're handing off Testground to the Celestia team. Throughout 2022 Testground grew immensely: we engaged with over 50 community members spanning more than seven companies, accepted many contributions, and made the project feel alive again. Now we're stepping down as the Testground maintainers, because IP stewards are currently focused on types of testing that can be achieved more effectively using other means. We still believe in the project, and we do expect to be back; when that time comes, we're sure the project will be in an even better place. Celestia has the expertise, passion, and resources to take Testground to the next level. In the meantime, we'll stay involved through the monthly Testground implementers sync. See you there. That was pretty magical; well done automating yourselves out of presenting at all hands, IPDX team. We are impressed and will take a page out of your book. I hear you have many, many potential users for your all hands automation, so great work and keep it up. Over to libp2p. Hi, this is Marco from libp2p.
The networking library for peer-to-peer applications. Okay, so we're continuing to work on HTTP as a request-response protocol for libp2p. If you're interested in using HTTP and getting the benefits of libp2p, reach out to us; we're looking for people to work with on this. We shipped interoperability testing, which tests that all the different implementations can communicate with each other over every supported transport, muxer, and security channel, and we're also doing browser testing there. On the community side, we published our browser-to-server blog post, and we did a talk in Boston on connectivity. On the connectivity side, we have shipped and completed the work for WebTransport in go and js. WebRTC is shipped in js and rust; go is very close. We started a spec for browser-to-browser, which will let two browser peers talk to each other. There are a lot of updates in the implementations, and I don't have enough time to go into all of them, but definitely check out the links. And that's it. Awesome dashboard of all of your green test results. Now into Filecoin land. I was low-key stressing out for the past 30 seconds working on my robotic voice, but anyhow: welcome to Filecoin, the decentralized, and hopefully the world's largest, storage network for humanity's information. So, some stats on that story. The network's total storage capacity is unfortunately going down a little bit these days. We are still super huge though: we are at 13.81 EiB of total network raw byte capacity. The decrease is caused by many reasons: onboarding is going a little slower due to the macro economy, and we also have a lot of early-days storage starting to expire from the network. However, with the many initiatives coming into Filecoin, we're hoping we can re-support the ecosystem and have more storage onboarded to the network in 2023.
That being said, more and more storage is being used to store useful data on the Filecoin network, which is very exciting. We're at 531 PiB of total data stored on the network, and, more excitingly, we are very stably storing more than 2 PiB per day of new data, so we're getting closer to our goal these days; that's very good. I have something new for you this time. As you have heard, there's a new thing called FVM coming to the Filecoin network, enabling user programmability on the network, and we have a brand new developer-focused testnet called Hyperspace. So far the network is one month old, and we already have around 23,000 contracts deployed, and within that around 3,700 unique contracts, which is very exciting. And we are at 17,000 unique Ethereum accounts on this network; again, a one-month-old network that has 17,000 accounts already. One thing that blows my mind: there's one single contract that has triggered around 200,000 invocations. I'm super curious what that contract is about, but that's just something exciting to share. A couple of highlights. "When does FVM launch to mainnet?" is the biggest question I have been getting recently, and we finally have a date: it's happy Pi Day, March 14th. We are hoping to launch FVM onto Filecoin mainnet via the nv18 Hygge upgrade. There are six FIPs that will be included in this network upgrade. And if you don't know this yet, the chain ID on Ethereum for Filecoin mainnet is actually 314. I'm not saying we planned this out, but, you know, happy Pi Day. And as mentioned, we shipped a new developer testnet; there's a fil-hyperspace discussion channel if you want to help us test, and if you want to just play around and deploy some application on the network, please do.
Next, the Lotus team. We're also looking at how we should re-architect or improve the Lotus miner to help SPs further scale their systems, and we're working with the Boost team to make sure the whole data onboarding process can be more robust. And yes, Lotus markets is now officially end-of-life as of January 31st, and Boost is now the go-to markets implementation and SP tooling for data onboarding and dealmaking. We have 7% of SPs running Boost; most of the dealmaking SPs are already running Boost and onboarding data with it on a daily basis, which is very cool. And there's a new program called Golden Retriever, to help us make sure that the data stored on Filecoin is actually available for other humans. CryptoEconLab is also proposing a new FIP for a sector duration multiplier, to incentivize longer storage commitments. Not only do we want to align the network even more with its mission, together with the storage providers; we also want to make sure that the network can support them, and thank them for their service on the network. Hence this sector duration multiplier. The FIP is in draft: if you have any questions or feedback, please head to the FIP discussion and let us know. We now have a couple of team updates from FVM, compute over data, ProbeLab, and Fil Infra. Raul, take it away. Hi everybody. So we launched Hyperspace just last month; you already saw the stats around number of contracts deployed and so on. It's very exciting. It launched less than a month ago, and three days after launch we concluded a full workflow to upgrade it. This is, by the way, a developer testnet that is not resettable: we're aiming for no resets unless there is a disaster. It's an upgrade-only network.
And this is really critical for developers, because if you remember, the previous leading-edge network, which was called Wallaby, reset basically every week or every two weeks, and that halted development progress; developers really don't like that. So it's really important to have this testnet, which is basically driven by the FVM team. Very excited about that. We have been hard at work writing FIPs and specifying this massive change that we're introducing to the network. There are five FIPs, and all of them have been accepted by now, if you want to read more about how this works internally. There is a new address class. There is the capability to emit actor events now. There's of course an EVM runtime actor. You can learn everything about how we support Ethereum accounts, addresses, and transactions, and how that is going to evolve into account abstraction in the future. And some gas updates have also been introduced. Check out those FIPs. As for what's coming next: we are in the final stretch, where literally this week is T-minus five weeks to mainnet launch. We're preparing the launch sequence here; it feels like those early days of mainnet launch, a lot of revival of that. At this stage, we are code freezing the last release prior to the final release. That is the code-freeze release candidate: it will be released on Valentine's Day, the Hyperspace testnet will upgrade to it the next day, and the Calibration net will upgrade to that release on February 21st. We are projecting Pi Day for mainnet, as Jennifer already said, and it may or may not have something to do with the chain ID. But yeah, there's a nice pi-oriented scheme there. And I thought it would be worth highlighting the massive change in terms of lines of code.
This is a network upgrade that involves over 100K lines of code introduced across ref-fvm, builtin-actors, and Lotus. So it's a huge, huge code base change that we're introducing here. It has been audited by two external auditors, which was very, very useful, and it has been audited by an internal red team as well. We have launched the bulletproofing initiative, a pre-mainnet crowdsourced audit where we reached out to individual security researchers and other security and audit firms that didn't make it in as the main auditors, and they have been going through the code base. There are rewards here, and they've been filing some interesting issues; some of them have definitely come across real issues that we managed to fix. So that was a great initiative. We're also working on ecosystem preparations. I like to describe this launch as not just the FVM launch: we're not just launching the technology to the network, we're introducing programmability as an entire new capability to the network, and we only do this once. There are going to be more FVM upgrades, and there's going to be a native development environment and runtime and so on, but those are incremental improvements over this capability that we only get to introduce once. So the really key thing here is making sure that the ecosystem of existing partners, tools, applications, and products running on Filecoin is well adapted: they do not break, of course, but they're also prepared for a programmable world. This entails exposing not just pretty application front-ends, but also making functionality available through APIs and potentially adding oracles on chain, such that smart contracts can access their data and features. So we're working with things like miner reputation services, wallets, Fil+ tools.
We're also looking at adding oracles, bridges, DeFi tools, and so on; there's a lot of work going on there. There's also a hackathon whose finale is literally tomorrow. Sarah is going to talk more about that in the next slides, I think, so I won't cover it much. And then bulletproofing, which I already went through. So yeah, that's pretty much it; sorry for going over my one minute. It's exciting stuff; we're all very pumped for FVM. So yeah, we'll get a spotlight from Sarah in a moment about what you can do next. Compute over data. Hi everyone. I'll try to keep it short, but that FVM stuff is fantastic. Since we last spoke, we had our CoD Summit: over 800 attendees, 200 in person, 600 on the live stream. We launched the beta on stage in November, and since then we have seen a huge uptick in the number of jobs, with people out there in the public actually executing jobs on the network. Super awesome. We have over 100,000 aggregate jobs on the public network, and we're at 30,000 jobs a month, so we're quite excited. We've released a whole bunch of new technology: we support external networking, for example for pulling in packages and other things while you're compiling; private cluster support; and our Python SDK. And just this week we were able to launch a streaming prototype that included eventing and live camera capture fed back through Bacalhau jobs, which we're really excited about. One thing we did want to highlight: you can see on the right a double-horned unicorn (belying the name) generated by Stable Diffusion. That is the first object generated via a smart contract: an EVM contract pushing through to Bacalhau and coming back with a result. So there you go, the first such artifact in the world.
We have another, unfortunately code-named, project — Lilypad, not associated with Lily — coming later this month, which will allow anyone to pay FIL for jobs, including NFTs, executed via contracts through Bacalhau. We also have an on-prem offering to allow people to do this using IPFS Cluster and other on-prem services, and Project Amplify, which will automatically wrap and augment existing data on Filecoin. And then lots more that you can see there. I'll try to keep it short — that's all of Compute over Data. Pretty darn cool. I like your double unicorn, and I'm very excited to see Bacalhau coming to FVM — things are reinforcing and pushing each other forward. On to Fil Infra. Okay. So we launched a new Lotus lightweight snapshot service at the end of last quarter. We have been receiving up to 50,000 mainnet snapshot downloads per week, and we also now provide snapshots on the calibration network. We've also introduced compressed snapshots for faster download times — one of the challenges with snapshots is that they continue to grow, so having a compressed snapshot should make life a bit easier. And we've cut costs by moving from S3 to R2 for storing the snapshots. We also have some updates on the Lotus Gateway side. We launched a website called chain.love that has Lotus Lite docs, an interactive explorer, and more. It's all backed by our gateway, which is api.chain.love — you should check it out, there's pretty cool stuff there. And just in general, an update on our gateway: we had 99.88% uptime over the last year, so close to three nines, and we service 20 million queries per week. That's a lot of nodes that don't need to sync the chain and can leverage our Lotus Gateway instead. And the last update is that the gateway is now also a load-balanced service across the Americas, Europe, and Asia, where it was an Americas-only deployment before.
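The gateway mentioned above speaks the standard Lotus JSON-RPC API, which is how nodes use it without syncing the chain themselves. A minimal sketch of building a `Filecoin.ChainHead` request follows — the method name is from the public Lotus API, but the exact endpoint path (`/rpc/v0` on api.chain.love) is an assumption here.

```python
# Minimal sketch of querying a Lotus Gateway over JSON-RPC.
# "Filecoin.ChainHead" is a standard Lotus API method; the endpoint
# path below is an assumption for illustration.
import json
import urllib.request


def chain_head_request(endpoint: str = "https://api.chain.love/rpc/v0") -> urllib.request.Request:
    # Build (but do not send) a JSON-RPC 2.0 request for the chain head.
    payload = {
        "jsonrpc": "2.0",
        "method": "Filecoin.ChainHead",
        "params": [],
        "id": 1,
    }
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


# Actually sending it requires network access:
# with urllib.request.urlopen(chain_head_request()) as resp:
#     head = json.load(resp)["result"]
#     print(head["Height"])
```

Any JSON-RPC-capable client can be pointed at the gateway the same way, which is what lets light clients skip chain sync entirely.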
So you should see better performance from Europe and Asia now. On to future opportunities: we're going to be supporting the next upgrade — it's critical that we're on top of that, that we help the Lotus dev team, and that we keep our core network infrastructure running smoothly during that upgrade. We also have an opportunity for anyone that is interested in running bootstrap nodes: please reach out to the Fil Infra team and we'd be glad to help you work through that process. We're always hoping to further decentralize our core network infrastructure, so reach out if you have interest. Thank you. Awesome. And our last team update, from ProbeLab. Hello again. Lots of news from ProbeLab. We're pushing forward with all things measurements — very important to know what's going on in the network, as mentioned several times already — and we're really glad that people are reaching out to us to ask for more of these. Although we don't have capacity to serve all of the requests, we're definitely keeping a backlog of them. We've recently transitioned from the NetOps team, which we were part of, to IPFS Stewards, because it makes sense to work closer with the IPFS Stewards team — not only on the measurements, but also on the optimizations that get done based on those measurements. We ran the NAT hole punching measurement campaign in December — a big thank-you to everyone who participated; Dennis is going to give a deep dive a few minutes from now. We led the Hydra dial-down events after we figured out that the performance boost that we were getting from the Hydras was not as great as anticipated — we ran experiments and made some pretty graphs to show what's going on. We're still monitoring the impact that the Hydra dial-down events resulted in.
And alongside the Hydra dial-down reports, we've shipped seven other technical reports with lots of nice graphs, such as the ones that you see on the right-hand side — pretty technical, pretty detailed, not really the kind of 30-second elevator blog post read. So if you're into that sort of thing, head there. We're also working on a spec for reader privacy in IPFS and libp2p. This is super, super exciting — there has been lots of work to even write the spec, and I know it's going to be used by several teams in the PL network and in our stack. The spec is still being put together, so if you want to contribute, go to the pull request shown there. In terms of opportunities: we're posting weekly reports on the state of the IPFS network at stats.ipfs.network, and we want to enhance what you'll see there with a lot more — that's part of our milestones for 2023. Thunderdome is going to be used as a pre-release tester for Kubo, among other things, which is really great — a great tool to put to good use. And finally, not so much a future opportunity as a past one: IPFS Camp seems ages ago, but we've got lots of recordings of the things that we've been working on, so make sure to watch those for all of the updates, at least up until a couple of months ago. We're in Slack and the IPFS Discord at probe-lab, and we've got a Notion page with a project board for all of our projects, plus the GitHub repository where the action is happening. Now a lightning-speed run through our spotlights, starting with Space Warp. Okay, challenge accepted. So for FVM — talking about Space Warp and how you can get ready for it — if you haven't heard of Space Warp, it's the launch program that we're running with Ecosystem. TL;DR, we had a huge hackathon that just ended; the finale is this Friday. If you want to tune in and see what teams have made it to the finals and what people are building, you can do so at the link here.
We've had a lot of great content — both demos and solutions have come up from the hackathon itself. You can click on all those things to check it out. I will call out the huge piece, and thank you to all the teams who participated — Bacalhau, Medusa, Estuary as well — working together with us to get demos and integrations out and allowing hackers to test them. We also realized that dealmaking context is a huge piece of the developer experience for folks to understand, so we're going to work a lot more on the documentation moving forward. The other piece here is that if you want to collaborate on FVM, we have the early builder demo showcases that are running all the way to launch. If you want to participate, you can DM me to join the FVM Foundry F1 channel to get the agenda. When you participate in there, you can share your project, or you can have the demos that you're working on shared with other hackers there to test out. So there's a lot of building going on right now — if you want to participate in that and get your project tested, go to the link over here. This will be an internal resource page for folks here. As much as possible, we hope that you ask questions and build openly, and if you have any internal questions that you're not sure about, you can ask in the FVM internal channel, which is private — you can ping me to join it. But yeah, 30 seconds: tune in tomorrow. On to Birdie. Hi, this is Birdie from the Sentinel team. We want to share a service we created to generate daily archival-grade snapshots for the mainnet chain, synced from genesis. We currently have 900 snapshots, totaling 33 TB of data. So what are archival-grade snapshots? They include the state tree, messages, and receipts for the 2880 epochs that they cover. Why are they important? First, they allow initialization of nodes or Filecoin clients at any specific date.
Second, our team uses those snapshots to extract an index of full chain data. We are currently at 90% completeness of the data processing, and we've made it queryable in BigQuery; we are starting to onboard people both internally and externally. For some high-priority tables we do have full data — for example, we provide messages and gas outputs to Cryptio to help with auditing work for storage providers. Third, those snapshots serve as a full chain history backup beside the existing full archive nodes. It's possible to reconstruct a full node from those snapshots, and with that, the possibility of storing the full Filecoin chain on Filecoin/IPFS is greatly simplified. We already have partners showing interest in working on this; we can share more in the future. Thank you. Woo-hoo, great progress. Station. Hello. Station, your gateway to the Filecoin economy, is now shipping with a built-in wallet. Before, you had to put in a FIL address from an external wallet, but now it just sets one up for you automatically, which is super slick. The way you use it: you go in and you get your station address, and as you earn FIL for completing jobs in the Filecoin economy, it starts to tally up and your total goes up. We want to treat it as sort of a hot wallet, so every so often you'll transfer that FIL out to a more secure or permanent wallet. You can see on this wallet page there are a lot of transactions coming in, and we're leaning on the Glif APIs here. If you want to find out more, head to fillstation.app to download. And shout out to Julian Gruber, who's led this initiative, and Miro as well. NB: there are no modules yet which are actually going to be paying out FIL, but there are a lot of module builders who are really interested in building these modules and having a peer-to-peer network set up on people's home devices. We've just kicked off a working group with a Slack channel.
So if you would like to get involved, please get in touch with us and join the working group to create all these modules, which are going to start participating in various different ways in the Filecoin economy. How's that? Deal-client contracts — Trinage. Hello. With FVM coming online, we decided to build some generic client contracts which would serve as templates for a lot of the use cases we want to see built on Filecoin. Examples are data DAOs, perpetual storage, and storage automation. We also developed tooling on the SP side to claim bounties, and enhanced the deal-making protocol to initiate deals with contracts, which will make deal-making more programmatic. I want to say thank you to Anton and the Boost team, who are working to build this solution in a more productionized manner and bring it to SPs. Some of the positive things to come out of this work: it highlighted a lot of the gaps we had in the protocol and drove some protocol refinements, and it also produced tooling which we have available today on both the dev and the SP side. We've also seen several devs use these contracts as templates to build their own functionality as part of HackFEVM and the Space Warp hackathon, so that's been very encouraging to see. If anyone is interested in learning more about this, we have created multiple tutorials with several links to documentation, and we hope to create more; you can find us at the where-to-find-us box. I do want to give a shout out to Zenground0 for initiating this project, to Irene and Luca from the CryptoNet team who've been engaging in protocol research for this, and to Jennifer for holding the project together. As for what's next, we want to use these building blocks to store Filecoin on Filecoin, which hopefully serves as a strong demonstration of what can be built on our network. Epic. Cool. We are on to our deep dives. Let's start with NetOps. This is a deep dive on the infra cost optimizations that we've been doing.
Our goals are to reduce infrastructure spend across all of Protocol Labs; clarify the costs per team by deprecating shared accounts and establishing regular cost reporting; ensure maximum efficiency through right-sizing and removing unused infra; and establish cost baselines and provide best practices and tooling to keep costs low moving forward. We established a working group that meets weekly, and they've been working very hard for months now to cut costs. On the AWS side we've cut costs by 60%, and on Equinix by 22%. Some of the highlights: we've been deprecating a really old account called the Filecoin staging account. It's existed for over five years and it was kind of a catch-all for everything to do with Filecoin. I'm not sure if anyone even knows what go-filecoin is, but we found a couple of nodes that were still running from there — that's pretty old school. We established a migration plan for any production services that are running there, then we got rid of all the known unused infra, and then we enforced a tagging schema and performed a scream test, which got rid of all the orphaned and unclaimed infrastructure with some really good results. Thanks to all who helped us out with that. On the Equinix side, the focus was on the gateways. There were actually some bandwidth optimizations that are good to share with the whole network: the migration to Kubo and to the resource manager brought significant savings on our egress costs, and then we moved some of the infra to a reservation. The last bit I wanted to share was the adoption of Cloud Custodian. It's a Python-based tool that was really helpful for all this — it helps with reporting and enforcing tags, and we were able to schedule our entire scream test with it. So for any team that's looking to cut their cloud costs, I would really suggest that tool. On to next steps: we've now reached a baseline, I think, where the costs are clear for each team.
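The tagging-schema-plus-scream-test idea above boils down to: flag every resource missing the required ownership tags, stop it, and see who complains. Cloud Custodian expresses this as YAML policies; here is a hedged Python sketch of just the core check, with an invented tag schema and inventory format.

```python
# Sketch of the idea behind the tagging schema + "scream test": any
# resource missing required ownership tags gets flagged as a candidate
# to stop. The tag names and inventory shape are invented for
# illustration; in practice Cloud Custodian policies automated this.
REQUIRED_TAGS = {"team", "project"}  # hypothetical schema


def untagged(resources: list[dict]) -> list[str]:
    """Return IDs of resources missing any required tag."""
    flagged = []
    for res in resources:
        if not REQUIRED_TAGS <= set(res.get("tags", {})):
            flagged.append(res["id"])
    return flagged


inventory = [
    {"id": "i-001", "tags": {"team": "netops", "project": "gateway"}},
    {"id": "i-002", "tags": {"team": "netops"}},  # missing "project"
    {"id": "i-003", "tags": {}},                  # orphaned
]
print(untagged(inventory))  # → ['i-002', 'i-003']
```

Anything on that flagged list gets stopped rather than deleted first, so a forgotten-but-needed service only screams instead of disappearing.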
So some of the next steps, now that we have that data, are to establish long-term savings plans; we're looking to establish best practices to keep our costs down on a long-term basis; and finally, to provide best practices and tooling to keep costs low for other companies in the PL network as well. I think it's really important that we help the whole ecosystem and do as much as possible to share this work that we're doing. So thanks to all those that helped out, and also thanks to the working group, because it's been a lot of effort and we hope to continue it. Thank you. Huge work here — big, big thanks to everyone. Really, everything that we're saving in our infra cost optimizations frees up more for the work that we're doing elsewhere, which is all the better. So thank you, Marcus and others, so much for all that hard work. Over to Dennis for our NAT hole punching results. All right, hi everyone. This is Dennis from ProbeLab; I'll try to speed-run my tiny presentation. We at ProbeLab ran the NAT hole punching month, so first of all, thanks to everyone who participated. What were the project's goals? With NAT hole punching in general, we want full connectivity among all nodes of the libp2p network despite NATs and firewalls, and with this specific project we wanted to gather information to guide protocol optimizations and also some implementation details — and we got a lot of data for that. The measurement campaign ran from the first of December, officially until the first of January this year; I analyzed the data that we gathered until the 10th of January.
In a little more detail: we had almost 300 API keys generated that could, in theory, have submitted data, but after some data deduplication and so on, we ended up with 154 clients deployed. Those 154 clients hole punched 47,000 peers around the world and contributed 6.25 million data points, which we are now analyzing — or which I have started analyzing. On the right-hand side you can see where the clients were deployed and where the remote peers resided, so we basically probed the whole network. Here we have time on the x-axis and the success rate on the left y-axis, and the color of each dot indicates the individual network in which the client was deployed. Basically what you can see here is that if we probe the whole world — all the other peers in the network — we get a hole punching success rate of around 70%. I will get to this in a bit; this highly depends on the network conditions, but we have around 70%. The faint red line in the background is the number of data points that were submitted each day across all the clients we had deployed, which at peak was 35,000, and which dropped after I sent out the email that the hole punching campaign was done. So what are the insights so far? I am still working on the final report. The success rate is around 70%, but as I said, since we are probing the whole world, it depends on the networks of the clients as well. However, that is the success rate that a peer would experience right now if they used the hole punching protocol today to connect to a random other peer in the network.
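The per-network success rates in the plot just described come down to grouping outcome records by the client's network and dividing successes by attempts. A simplified sketch, with made-up data (the real campaign had 6.25 million points):

```python
# Simplified sketch of the aggregation behind the success-rate plot:
# each data point records the client's network and whether the hole
# punch succeeded; we group by network and compute the success rate.
# The sample data below is invented for illustration.
from collections import defaultdict


def success_rates(points: list[tuple[str, bool]]) -> dict[str, float]:
    hits, totals = defaultdict(int), defaultdict(int)
    for network, ok in points:
        totals[network] += 1
        hits[network] += ok  # bool counts as 0 or 1
    return {net: hits[net] / totals[net] for net in totals}


points = [
    ("home-dsl", True), ("home-dsl", True), ("home-dsl", False),
    ("vpn", False), ("vpn", True),
]
print(success_rates(points))  # home-dsl ≈ 0.67, vpn = 0.5
```

Aggregating the same records over all networks at once is what gives the single ~70% headline number.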
Then some other insights: we expected QUIC to be more successful than TCP, but we couldn't verify this — they seem to be similarly successful with regard to hole punching. On IPv4 versus IPv6: IPv6 has a much lower success rate, and we are still not sure if this is a measurement artifact or an actual problem in the implementation or the protocol. Interestingly, success is not round-trip-time dependent. If you know the hole punching protocol a little better: we are synchronizing both peers, and we thought that the round-trip time would play a big role in the success rate, which we couldn't verify — and that is good, because it means we don't need to optimize for it. Then, as expected, peers on VPNs have a worse success rate, but we have another protocol improvement lined up for that. And this graph at the bottom is quite insightful: we try the hole punch three times, but we found that if a hole punch was successful, with 97.6% probability it succeeded on the first attempt. This means we should optimize the protocol to change strategy on the second and third attempts, to increase the odds there. There is also another protocol improvement we are proposing based on feedback from the FOSDEM talk that Max and I gave in Brussels last weekend. And there are some implementation-level issues: discussions are still going on around some weird data points we have, which may or may not indicate bugs — we are still discussing that. In terms of next steps, I am working on the final report — I have already started; there are many more graphs, many things to look into, a lot of angles on the data — so feel free to have a read. Those are the next steps. Thanks a lot. Awesome, thank you so much — we got lots of really great data, we have some awesome cost optimizations for infra, we have the FVM hackathon closing out tomorrow (you should go to that demo), and we have a great set of launches coming for the end of the quarter. Thanks to everyone for an awesome all-hands and an awesome beginning to Q1. Excited to see the progress when we get together next — thanks y'all for staying late, and have a wonderful rest of your day.