Awesome. With that, excited to kick it off and welcome you all to our first edition of EngRes the Gathering. We have our agenda for the day. We're going to jump quickly into some project updates from various different projects across the Protocol Labs network. Then we have some spotlights; please keep these to a minute each so that we can get through them. And then we have some demos and deep dives, one from the high-availability Lotus provider. So get excited. Next slide, please. The focus for this group is to bring together engineering and research contributors to the PL EngRes working group so that we can share awesome work happening across projects such as Filecoin, IPFS, libp2p, and more, along with exciting updates across the research and development pipeline, and new launches or notable discoveries or learnings that we can share back with this community. The next slide highlights some of the amazing community projects being built in the Protocol Labs network. We do a lot of collaboration with each other and are excited to help make each other successful. Awesome. And this is our mission as the PL EngRes working group: we want to help support and accelerate those breakthroughs by creating venues that enable sharing updates, supporting network-native research and development, and growing OSS projects, networks, and communities. And we definitely want to create great interconnections. Thank you to Dali for this awesome EngRes the Gathering poster, which really speaks to the badass community of collaborators who can use this venue to share updates, and then we're excited to kick off deeper-dive discussions and collabs from there. So please keep comments, questions, and other things rolling in the chat. For the content here, we're going to be covering lots of amazing projects across the entire research and development pipeline.
So expect some stuff that's very early-stage and speculative that folks can build on, and expect some stuff that's later-stage, highlighting new releases or new functionality that affects a large-scale production audience. Cool. And with that, one last call, or request, for folks here. We do our best to spotlight upcoming upgrades, releases, and launches. We're building out the 2024 calendar right now, so if you have an upcoming release, upgrade roadmap, or similar, you can ping them to me on Filecoin Slack. I would love to include them in our overall roadmap for next month, for our next EngRes Gathering. So shoot them my way. Thanks. And with that, I think we're headed into projects, starting with IPFS. First is me. Yes. Yeah. So Shipyard. Interplanetary Shipyard is a new entity that formed recently, coming out of PL. We work on IPFS and libp2p implementations and on measurements of them. You can find us in Filecoin Slack or in the various IPFS working groups (dApps, Helia, et cetera). Some recent work: Helia, Kubo, and Boxo have a number of releases. You can see there are a number of pinning-related changes, including named pins, which have been requested for a long time in Kubo, plus auth protections for the RPC API, and upgrades in Helia as well. We have Rainbow and Waterworks. Waterworks is sort of public goods for things like gateways and delegated routing, now with custom implementations that better support that. And we have some upcoming work. If you have needs from IPFS or libp2p tooling, give us a shout and we'll chat. Hello. On the ProbeLab side of things, there's an update for IPFS. We gather several metrics on the IPFS network, and around mid-December we saw a large spike in the number of nodes that entered the network.
That was due to a misconfiguration in another network, and it caused the node count, as you see in this top-left graph here, to shoot up to more than 300,000 nodes, which was not the natural growth of IPFS. The problem was that, as you see in the next graph, most of these nodes appeared as offline, so they were difficult or impossible to reach, which in turn meant that the vast majority of nodes, the largest percentage, were appearing offline. And in the bottom set of graphs, we see that this affected lookup latency, which had a spike towards mid-December, and caused a severe latency increase when publishing content to the network. So, of course, that was not great news, but we worked together with the developers of that project, who jumped on this straight away, and we fixed the issue. We see that numbers went back down, and things are looking normal. You can find all of the metrics that we're gathering at probelab.io, as well as the links to each one of those graphs in the slides. Thanks for that, and over to Mosh. Hi. So back in November, we announced that the IPFS and libp2p projects were going to take the next steps in project maturation and form independent entities and operations outside of PL. So I'm here with a couple of updates on that. The IPFS core cell is formed, and the first grants are being distributed. We have a refreshed list of public working groups for the project. The dApps working group started getting together last fall; Adin is the chair of that. The comms working group recently booted up and had its first meeting last week; Jackson Dame is the chair. Browsers and Standards is chaired by Robin Berjon. You can go to Luma for a relatively complete list, and we're working on getting a more comprehensive index on GitHub for all these working groups. And then also, save the date: we are planning the big IPFS community event for the year, the second week of July in Brussels, aligned with EthCC.
The design will be sort of a hybrid of IPFS Camp and IPFS Thing, with a target of 50% existing IPFS community (implementers, maintainers, and tooling builders) and 50% users. So that includes anyone listening out there. Please save the date as well. Thanks. And over to libp2p. Hi, it's me again. libp2p, the ultimate modular networking stack, powers most Web3 projects and networks. Starting with go-libp2p: the next big step, as you know, is WebRTC, which has been in the works for a while and is a big, big thing. When this lands, NAT'd nodes anywhere in the network that currently have to go through a relay won't have to do that anymore, and they will be able to connect with browsers directly. And, you know, that is going to be a game changer, because browser connectivity is going to be stable for good. Now, to achieve that, there are a couple of small things to do. We need to finalize a PR on the FIN packet that needs to go out before closing the data channels, to avoid leaving the other side hanging. And of course, resource usage is very important and needs to be tested before WebRTC comes out of the experimental mode it is in right now. The second big thing is AutoNAT v2. As we know, AutoNAT has been in production for years, but there are some shortcomings, which we have talked about elsewhere. What needs to be done to improve that: we want to allow testing reachability of individual addresses, avoid amplification attacks, and provide a verification mechanism for successful dials. More details are in the links on the slide. And as for the future of go-libp2p, we want to implement error codes.
So you can think of those as HTTP status codes, but for libp2p. Right now, when an error occurs, the remote party just sees the connection close, which is not great because it leaves us in a state where we don't know what happened. With error codes we will at least know, and then we can go and fix it so it's not there the next time. So that's great, but it's not going to come until towards the end of Q1. On js-libp2p, there is a lot to be shared, and this set of bullet points is not doing it justice. I'm going to point to the blog post (I can put it in the chat as well), but we are going to do a spotlight in the next gathering if there is space, so stay tuned for that. And finally, on rust-libp2p: the AutoNAT v2 implementation is almost complete. There are some reviews pending (the link is there) and it's close to being done. rust-libp2p is the most active implementation. Shipyard right now doesn't have a full-time maintainer for rust-libp2p, so if you want to contribute, help is more than welcome, but make sure to contact us or, you know, stay connected in the well-known channels for rust-libp2p. We're going to be reorganizing the community calendar, and we'll keep up with that so that anyone interested can get involved. That's it. Thank you once again. All right. libp2p community, that is me. Thank you, Yanis. Before I start, I wanted to add to Yanis' point about rust-libp2p. I'm the closest thing to a maintainer right now, so I'm filling in a little bit on the rust-libp2p project, but we are looking for a volunteer maintainer. If anybody wants to step up, I'd be happy to help them come in and learn how to manage the project from the start. So, as Mosh pointed out, libp2p is going out on its own. As of November, we've moved out and organized in the community. There are a lot of really good things about the libp2p community right now.
libp2p obviously is in a very strong market position; there's something like $300 billion in market cap that rests on libp2p. It's at the heart of many of our favorite projects, and as you can see from the graph in the lower right-hand corner, all of our numbers are slowly going up and to the right in terms of community participation. As of right now, the libp2p community is well funded. Our grant program is coming online, and it will be supporting community operations on an ongoing basis. There are some things we could improve on. As we move out to being an independent project, I would like to see us include all implementations on an equal footing; we will be highlighting all of them here as they release, and incorporating new projects and things like that. I like Molly's timeline of releases; I think we need one for libp2p. So what can you expect in 2024? As Yanis pointed out, the calendar is getting reorganized. We're going to be booting up, or re-establishing, community calls on the regular. The community calendar is going to be fairly open: if you have something libp2p-related, like a local meetup or anything like that, reach out to us. We'd be happy to highlight it, help you market it, and get users there. We're going to attempt to participate in events worldwide on a voluntary basis. We're looking for speakers who want to get read into a speakers group so that we can have coverage globally and raise the profile of the project. The monthly newsletter and blog is also looking for people who want to write pieces or highlight their projects; more on that coming soon. All community-wide announcements will be in the discuss.libp2p.io discussion forums as well as in the GitHub discussions for the various projects, so just watch there for announcements. This is really becoming something where the community needs to step up and take ownership.
I'm happy to organize and run meetings, get everybody to show up, and get us going in the same direction. As for changes in the structure of the project, the last column over there: who now? Our community chair at this point is Raul; you can get him at raulk on GitHub, and he's also Raul K. on the Filecoin Slack. I am the community architect, a term I borrow from the Linux Foundation. You can get me at dhuseby on GitHub; I'm also Dave (libp2p community) on Filecoin Slack. Then we also have our community project manager and security chair, Prithvi Shahi. He's p-shahi on GitHub and also that on Filecoin Slack. That's it for libp2p. Let's make 2024 a real growth year. We're trying to build Filecoin, the decentralized storage network for Web3, but also a robust, efficient foundation for humanity's information that's currently living in Web2. Next slide, please. A quick update from Lotus. We just published our latest release, v1.25.2, I believe last week. There are a couple of critical bug fixes and improvements towards the syncing issues that some operators are seeing in this release. We highly recommend everyone update to this release as soon as possible. If you have any questions, please reach out to us in the #fil-lotus-help channel. We also have a very exciting new Lotus provider alpha out in this release. This brings us high-availability PoSt workers, and the team is currently adding the sealing pipeline to this new single binary that will power the base for providing storage on the network. I believe we have a deep dive later, so I won't go into too much detail here. We also recently integrated the Supranational PC2 binary, which allows storage providers to perform a PC2 task in as little as 2.5 minutes; that will significantly improve everyone's sealing rate. If you haven't checked it out yet, please do. We also have our first upgrade of 2024 coming up, codenamed Dragon. I believe that's because it's the Year of the Dragon in China.
I think that's why; Caitlyn can correct me if I'm wrong. The code freeze is coming up on January the 30th. All the implementation teams are working towards our goal, and we're hoping to launch the upgrade on mainnet on March the 8th. We have a little bit more detail on what's going to be shipped in this upgrade in the spotlight. We just got the gas numbers for the actor events that will power the network monitoring tools, and the gas numbers are looking really, really good, so we're expecting to have that in the upcoming upgrade. Also, there are new FIPs coming up. Recently we just passed last call for SuperSnap, coming from the proofs team, which will help people make snap deals even faster and more cost-efficient. Irene and Luca also just opened the FIP draft for non-interactive PoRep, which will simplify the sealing pipeline; we're hoping to launch that in the future, and hopefully we will see Sealing-as-a-Service and an active CC sector market. Many teams are working on bringing fast finality to mainnet; we're hoping to launch that gradually into the network sometime in Q2 or Q3. If you want to follow the work and contribute, help us audit the consensus and the code, and find out what's going on. I also opened two FIPs recently. The first one is converting the mining reserve account from a multisig to a keyless account, so that the network is more decentralized and protocol improvements can be made via FIPs. We're also proposing to deprecate the existing ProveCommitSector method in favor of the new ProveCommit methods that are coming up, and proposing to make proof-commit aggregation make sense again, to free up more chain bandwidth for other activities and also reduce system cron usage. Last but not least, the CEL team has opened an updated activation-timing suggestion for the lower bound of the sector initial pledge, just to make sure there's no delay.
I feel bad I didn't include any events and things like that, so I'm just going to do a quick call-out: we are hosting a protocol forum, kind of an open office-hours call, in February, just to break down the upcoming network upgrades for community members. So if you have any questions, stay tuned and join us later on. We are hoping to see you at the next conference. If you have any ideas, reach out to me at jennijuju, or follow me. We would love to hear from you. We've got a brand-new website; please check that out and let us know what you think. We're very excited to now be the main snapshot provider since the previous snapshot service was sunset. We've seen a big jump in usage, and things seem to be holding stable and looking good. This is kind of a theme for us: we're definitely looking at both increasing the features of Forest as well as starting to participate more actively in running infrastructure within the network. Within Forest, we're really focusing on making sure that we have stable core functionality. Along with that, we're also starting to run boot nodes. We are implementing a new version of the snapshot service, which will hopefully have some additional resiliency built in, and we're also starting to operate a few calibnet miners, which will be Lotus at this point in time. We're also working on the network upgrades and implementing fast finality. Certainly for us, I think the real focus for this year is growing the usage of Forest. If anybody is interested in running a Forest node, please do reach out to us. We're just coming up, interestingly, on the second year of Bacalhau. The first line of code was written in February of 2022, and it's been phenomenal growth since then. Next slide, please. Our progress has really been on the technical side. We've gotten so much customer feedback and usage, and interesting things are happening. Bacalhau hit 1.0 in May of last year.
We've had two significant releases since then: 1.1 in the fall and 1.2 in December. Tons of activity: private IPFS clusters, simpler configuration of nodes, TLS support, tons of GPU support, and our web UI, as well as integration of more traditional storage mechanisms like S3 buckets and cloud buckets. Coming in Q1, at the end of this quarter, we have a bunch of things around authentication and authorization (this has been in demand from a lot of corporate folks who want to integrate us with their LDAP and other solutions), as well as a lot more simplicity around offline execution, which helps with our consensus-based solutions. You can see our community growth there; the line goes up and to the right. It's been super cool to see people from the external community plug in and be a part of our overall network. Next slide, please. The biggest thing for us is just continuing to listen to users. We had some really nice use cases and presentations. One was really cool: using us on submarines, which is neat, in non-combat situations, but still using us, which is really fun. We were able to present at KubeCon, at re:Invent, at Open Source Summit in Japan, and at Web Summit. It's really exciting that people are seeing enough value for us to be presenting at these locations and talking about all the goodness that is Bacalhau, IPFS, compute over data, and so on. Things are going really well. If you have partnerships or places where we can plug in to help your projects out, we couldn't be more excited. Back to you, Molly. Awesome. I think that's a wrap for our project updates. Now we have a couple of spotlights on recent or upcoming launches so that everyone's aware of cool stuff. Over to Sierra, I believe, from CryptoEconLab. We have been investigating the state of the Filecoin network since the start of the new year, and we had a milestone where QAP crossed under the baseline back on December 17th, 2023.
We wrote a report detailing what this means for the network, for storage providers, and so on. Just to give a couple of quick highlights: when QAP goes below the baseline, it triggers a switch in the pledge mechanism, and so pledge is now forecasted to decrease. This is actually an antifragile mechanism built into Filecoin to encourage more onboarding, because as pledge gets cheaper, your FIL-on-FIL returns increase, which should incentivize people to onboard more power and invest in the network. This is technically a good thing, in the sense that we have improving macroeconomic conditions combined with increasing returns in Filecoin, so we hope to see more investment into the network. For a more detailed analysis, you can see the report that's linked here, and I can provide it in the comments as well. That's all I have. This was a demo done in partnership with Lockheed Martin, about synchronizing data between a ground station and a satellite in both directions. The basic premise was just to prove IPFS usable in a satellite situation at all. We only implemented the parts that were actually needed for this particular project, so that would be UnixFS files and raw blocks, with a custom protocol that only sent the minimum data we needed, because in this situation bandwidth really is at a premium. We also did some things to reduce dependencies and reduce binary size at build time, in case you need to do an update to an already-deployed satellite. But it was successful: we got files of various sizes sent in both directions, and that opens the door for possible future development. Thanks for making it happen and for sharing it. There's a blog post as well that people can read to learn more. That was such a cool demo and a hard act to follow. Everyone check out the blog post and all the links. Hi, I'm back. I'm here to share that libp2p and IPFS were both recognized in the latest round of Optimism RetroPGF.
libp2p was the number-five vote-getter (it's a direct dependency of Optimism), and IPFS was the number-21 vote-getter. So, maybe unpacking the process a little bit, because I know folks are curious: voting was done by about 200 badge holders. A lot of them published ranked lists that were then fed into other badge holders' selections. So each badge holder could vote through whatever method and inputs they wanted, but a lot of them ended up publishing their own stack rankings ahead of the vote and then incorporating each other's. The application form was super short; I think there were two sections for paragraphs and then a lot of lines for links. So the projects' reputations, websites, and GitHubs really spoke for themselves, and I think that's all due to the incredible communities behind these projects, not only today but for the past 10 years since they were first created. I'm sure many of you are curious what's going to happen to these funds. They're going to be managed by the core cells and go to a combination of rewarding prior contributors, rewarding their own upstream project dependencies, and then (if I can speak for the IPFS core cell) the majority of it will go to continuing to support the technical and social substrate for these projects. So thank you to everyone past, present, and future who is contributing to these, and also to Robin and Dave for submitting the application and getting it over the line. Thanks, Mosh. Over to Patrick for Station. Hey there, Patrick from Station. Station allows anyone to join the Filecoin and Web3 economy, so please tell your friends who are potentially not as technical as you are to download the Station app and get started. We launched Station with payouts just before LabWeek last year.
So, beginning of November: at that point we had around 10 stations, and you can see that we grew steadily through November and then reached 10,000 Station nodes at the beginning of this year, which is a nice New Year's surprise. So the Station network is growing fast, which is really exciting, and as you can see on that map, there are stations all over the place; we even got one in Greenland. The first module running on Station is called Spark, and Spark samples retrievals from Filecoin storage providers. You can see that in the last seven days we've tested 502,000 different CIDs, and these CIDs are taken from LDN Fil+ storage deals, so they're the ones which are supposed to be publicly and quickly retrievable. We're performing loads of retrievals, and we can dial that up or down based on what we think is the right amount of retrievals to sample the network. At the bottom we can see the retrieval success rate, which we would like to improve; that's the whole point of Spark. We're trying to get this graph to go up to a really nice level. Some team updates: since we launched Spark, we've learned a lot about building smart contracts on the FVM, and we've actually massively reduced the gas costs, which are now almost negligible, which means a higher percentage of the FIL goes to the Station operators behind Spark. We're building a public Spark dashboard so that everyone can see all the stats behind this protocol, and we're also working on a second paying module to follow Spark. Reach out if you have any questions. Awesome. Next, I think we have NV22. Hey everybody, this is going to be a quick summary of all the goodies landing soon in Filecoin network version 22, codenamed Dragon. First up is FIP-0063, which will allow Filecoin to leverage drand's new QuickNet network and its three-second unchained randomness beacons.
FIP-0074 will remove cron-based automatic deal settlement, which currently accounts for a huge percentage of network costs via the market actor; 0074 addresses this unsustainable cost. FIP-0076 is direct data onboarding: direct data commitment into sectors, with significantly reduced expense and gas costs, using a new data onboarding pipeline, and it brings us one big step closer to an L2 storage market. FIP-0083 brings enhanced external monitoring of crucial network information, such as DataCap allocation and sector lifecycle transitions. There's lots more info on the slides, and all of the blue FIP numbers are clickable links, so please take a quick look and join the conversation. The current upgrade timeline is on the right, where you'll also find a few links to other potential projects that we're really looking forward to this year. On the upgrade timeline: last call has already passed, on the 15th of January; code freeze is the 30th of January; the calibnet upgrade is the 20th of February; and mainnet is the 18th of March. That's it for me. Thanks for listening, and over to Galen. Awesome. Coming in from Filecoin Foundation and Filecoin Plus governance, we have an update on the round-five notary election cycle. For context: we're supporting people building more specialized pathways to DataCap. We're moving away from the single-LDN, large-dataset world that you may be familiar with, and into lots of different teams building their own pathways that are more specialized to their clients, their type of data, and how they're doing distribution.
You can see more from the LabWeek talks linked there, and the process is also spelled out pretty clearly in the GitHub README. But a big, big update is that there's a hard deadline of January 20th for submission of your Airtable responses. So if you are building a pathway to DataCap, or a friend or a team that you know of wants to be giving out DataCap to clients, January 20th is that deadline. At this point we have 81 GitHub issues that have been opened, with 72 Airtable responses, so 72 out of those 81 applications have been submitted. If you have questions, come find us in Slack and check out those recordings. Thank you; I'm 13 seconds over. Awesome. We are now in our deep-dive section, and we'll hopefully have a little bit of time for Q&A at the end as well. Andy, I'll hand it over to you to tell us about high availability with Lotus provider. Hey everybody. So this is an update on what is typically called the Lotus miner team; we're branded Curio now, and we're working on replacing lotus-miner and lotus-worker with a clustered solution called Lotus provider. The neat thing about this is that, due to the modified greedy work-distributor algorithm we've got in play here, we can have any of the machines you see on the right go down and still have 100% uptime for answering PoSts, scheduling the work, and even responding to HTTP requests on the remaining up nodes. It also offers durability, so that even partially worked tasks get completed, and load balancing. This might sound a bit complicated, bringing clustering to something that used to be hub-and-spoke, so on the next slide we've got a video of literally all it takes to make this happen. In this demo, I will show the three steps for migrating to Lotus provider. I'm setting up a single Yugabyte database for demonstration purposes, but users can set up multiple Yugabyte databases in a cluster to enable high availability.
You can find the install links for YugabyteDB in our documentation. First, let's configure the sector index to live in the Yugabyte database instead of in-memory in the Lotus miner process. In your Lotus miner config.toml file, go to the Subsystems section, locate the EnableSectorIndexDB config, and set it to true. At the bottom of the config file, you can also see that we have a new section called HarmonyDB. Set the host to wherever your Yugabyte database is located; in our case here, it's localhost. We can now restart the Lotus miner process to make these changes take effect. On restart of the process, the sector index will be initialized in YugabyteDB, and we can confirm that it's working by running a lotus-miner proving compute window PoSt command. In the final step, we will migrate window PoSt to the Lotus provider and completely disable window PoSt computation and scheduling on the Lotus miner, so that the Lotus provider process takes care of it. First, we need to stop the Lotus miner process. Then we need to get permissions from the Lotus chain daemon that we want to connect the Lotus provider to. After that, we can run the migration with lotus-provider config from-miner --to-layer base. This will migrate all the needed configs from your Lotus miner to Lotus provider. Before we can start the processes back up again, we need to disable window PoSt entirely on the Lotus miner process by setting DisableBuiltinWindowPoSt = true in the Lotus miner config.toml file. After that, you can restart the Lotus miner process and the Lotus provider process. We can confirm that the Lotus provider process is now able to schedule and compute window PoSt by running the Lotus provider test window PoSt task command and seeing that it inserts a window PoSt task into the database, which it picks up and computes. So, to expand on that: window and winning PoSt are available today in production in the v1.25.2 release of Lotus.
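For reference, the config.toml changes narrated in the demo look roughly like the sketch below. This is based on the demo narration only: the key names (EnableSectorIndexDB, HarmonyDB, DisableBuiltinWindowPoSt), the section placement, and the localhost value are as spoken in the demo, so verify them against the Lotus provider documentation for your release.

```toml
# lotus-miner config.toml (sketch; key names and sections as narrated in the demo)

[Subsystems]
  # Step 1: move the sector index out of the miner process and into the
  # shared Yugabyte-backed HarmonyDB, so cluster nodes can all read it
  EnableSectorIndexDB = true

[HarmonyDB]
  # Point at your Yugabyte database; a single localhost node is the demo
  # setup, while a multi-node cluster is what enables high availability
  Hosts = ["localhost"]

[Proving]
  # Step 3: disable built-in window PoSt so the lotus-provider cluster
  # schedules and computes it instead of the miner process
  DisableBuiltinWindowPoSt = true
```

With these set, restarting lotus-miner picks up the shared sector index, and the lotus-provider processes take over window PoSt scheduling and computation.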
And lastly, we're planning to bring sealing over, but certainly it's getting more complicated, with a bunch of separate tasks partially owning the scheduling work. So, to make things easy, we have added a GUI for visualizing what the cluster is doing, and that should make things a whole lot easier to operate. Thanks. Awesome. Thanks for the deep dive and demo. Thanks everyone so much for attending. We got through it a little bit faster than expected, which is awesome. If you have other demos or deep dives for our next monthly gathering, please send them to EngRes Admin and we can get you added and set up to help present your work. We're excited to showcase any research breakthroughs, new engineering work, or exciting launches, so please fill out the linked form or email us at EngResAdmin at protocol.ai. If anyone has any questions or Q&A, we're happy to stay on for a little while longer; otherwise, we're excited to go see what you all are building and showcase it next month, making the future better day by day.