Hey, everyone. I'm Amin and I'll be presenting the metrics dashboard that I built for the Saturn team. For those of you who don't know, the Saturn team is part of Filecoin retrieval markets, and we're building a decentralized CDN to improve IPFS gateway performance, which is currently painfully slow. Part of achieving this is building a monitoring tool, and that's key for a few reasons: to monitor the vitals of the CDN, such as latency, requests served, and so on; to track our goal of beating the IPFS gateway, since a dashboard is a great way to quantitatively and visually track that effort; and to help debug errors and test the network's response to new optimizations as they launch. The dashboard runs on Grafana and is currently in production. One of the first graphs we see is time-to-first-byte, a metric that measures the time between a request for a resource and when the first byte of the response begins to arrive. Here we're basically racing the IPFS and Saturn time-to-first-byte values, and it's not really a race anymore, because thanks to the engineers on the Saturn team, the Saturn CDN is now making the IPFS gateway look like Internet Explorer; it's clearly outpacing it at this point. The cache rate graph measures how many content requests Saturn's cache is able to fill compared to how many requests it receives, and one good takeaway from this graph is that Saturn's cache does not miss very often.

Hi, everyone. My project is Outercore Plus. It's a 24/7 hub where all of our events can live, focused on Outercore events but also some of our community events. It's meant to be a virtual experience: even though you can't attend live, you can still go in, feel connected to the community, and not just sit behind Zoom.
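The two Saturn metrics described earlier, time-to-first-byte and cache hit rate, can be sketched with nothing but the standard library. This is an illustrative sketch, not Saturn's real instrumentation: the function names are mine, and a production dashboard would normally derive TTFB from CDN or server logs rather than a client-side timer.

```python
import http.client
import time

def measure_ttfb(host: str, path: str = "/", use_tls: bool = True) -> float:
    """Seconds from sending a GET until the response status line arrives.

    A rough client-side proxy for time-to-first-byte; real dashboards
    usually take this from server or CDN logs instead.
    """
    conn_cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = conn_cls(host, timeout=10)
    try:
        start = time.monotonic()
        conn.request("GET", path)
        conn.getresponse()  # returns once the first response bytes arrive
        return time.monotonic() - start
    finally:
        conn.close()

def cache_hit_rate(hits: int, requests: int) -> float:
    """Share of requests the cache was able to fill, as on the cache rate graph."""
    return hits / requests if requests else 0.0
```

Comparing `measure_ttfb` against two different gateway hosts for the same content is essentially the "race" plotted on the dashboard.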
So this is the menu bar that shows whenever you're in the environment, and it outlines what you can do while you're in there. You can watch live streams, watch on-demand recordings, talk to other people in there, and network with the community and other PL folks who might be in there at the same time. There are also some fun elements: a photo booth, and you can get PL swag. You can also go to the exhibit hall and get resources on IPFS, Filecoin, Filecoin Green, and our community as well. This is the exhibit hall; it gives a picture of what it could look like. It's supposed to have the look and feel of being inside a trade show: these would be booths, and that's where all the content would live. I also put a Hackathons banner up, because we have a lot of Hackathons that coincide with our events, and it would be really good to have that connection there as well. If you were to click on one of the booths, this is what you would see on the inside: a sizzle reel playing when you walk in, and on the right-hand side you can click on our previous events and at any time go in and watch all the videos related to that event. You can also contact us, schedule a meeting, all those sorts of things.

Cool. So what's the problem I'm addressing? Even if somebody has the ability to learn, but doesn't have an idea they want to build on, where do they go? They could go to the ecosystem websites or look at GitHub repos, but there's no single place right now for the PL network where all of these ideas can be found together. I'll first walk you through all of the data that I've put together, and then I'll show you the website that presents all of this data to the end user. I have collected data from the IPFS ecosystem dashboard, the Filecoin ecosystem dashboard, and all the Hackathon winners that we have. These are from 2022.
Awesome IPFS, which has all of its information on GitHub. And then there's also the PL network Airtable, which overlaps with some of these but has some unique companies as well. All of this is real data; this is not demo data. So if you're a builder looking for ideas, you end up here and you can try to find the idea you want to build on based on all of these filters. Say I want to look at all the past Hackathon winners: I have all of their information, their project names, what they were building, and whether there's a GitHub repo that's available open source, so they can actually just start contributing. The different categories I have so far include where the idea originally sits (and I'll find a way to link that in each card as well) and what stage the idea is in.

So our solution is to rely on the drand network to achieve time-lock encryption. We use drand mostly as a reference clock, because drand beacons are currently issued every 30 seconds, with a new test network issuing every three seconds, and each beacon has a round number. So you can really use the different beacons to map to a given time: given a round number, we know exactly when it will be issued. The main idea is that we use the BLS signature scheme, which is already used by drand, together with identity-based encryption schemes that are compatible with the pairings used in BLS, so that the BLS signature from a drand beacon serves as the secret key, and the round number, which is in the future, essentially serves as the public key. I've got a very short demo, for which I will share my screen. We click the encrypt button; it works out what round the message should be encrypted for and does the magic. We get this lovely ASCII-armored output here, which, if I clear the plaintext, we should now be able to decrypt back to the "hello launchpad" we put in before.
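The round-to-time mapping the scheme relies on can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not the tlock implementation: I assume round 1 is published at the chain's genesis time and each subsequent round one period later; the real genesis and period values come from whichever drand chain you target.

```python
import math

def round_after(t: float, genesis: float, period: float) -> int:
    """Smallest beacon round published at or after unix time t.

    Assumes round r is published at genesis + (r - 1) * period,
    so round 1 lands exactly at genesis. Encrypting "toward the
    future" means picking the round whose publication time is at
    or after the moment you want decryption to become possible.
    """
    if t <= genesis:
        return 1
    return math.ceil((t - genesis) / period) + 1

def time_of_round(r: int, genesis: float, period: float) -> float:
    """Publication time of round r under the same assumptions."""
    return genesis + (r - 1) * period
```

With a 30-second period, encrypting for "tomorrow" just means computing `round_after(now + 86400, genesis, 30)` and using that round number as the public key; decryption becomes possible only once the network publishes that round's BLS signature.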
Of course, anyone can try this to make sure I'm not cheating, or even suggest some plaintext to try. If we wanted to encrypt something further toward the future, let's put it to tomorrow, we would get a very different ciphertext, and attempting to decrypt it will hopefully not work. Yeah, it's too early to decrypt it. What's still to do is obviously to make it look a lot prettier; this UI is showing my art skills to the max, unfortunately. Once we release at DevCon next week, all the code will be live and you can all have a look and enjoy it. Thank you, everyone, for tuning in.

So when I first started chatting with the IPFS working group, they mentioned this idea of including a diagnostics and/or metrics view within the web UI and desktop app. I set out to conduct some user research with the goal of delivering a research-backed dashboard UI for IPFS web and desktop that would surface the necessary metrics and diagnostics for IPFS users, so that they could find and understand information about their node and optimize performance. This is just an idea of what a dashboard could look like. Ignore the data, but one really important feature is simply providing more information; you really want to focus on storytelling here so that people know what they're looking at. Some customization is great too, but again, this is dependent on the data that we have. A lot of what you'll see here, though it is dummy data, is based on existing information that we have. So something like a DHT check, if we could integrate it into the web UI or desktop app, would be something users are looking for.

Nice seeing everybody. Let's talk about Moon Landing. The big goal here is to bring as many SPs to our ecosystem as possible, have them store actual data, and grow the network as quickly as possible. We're doing that through Project Moon Landing by raising awareness around Slingshot and Evergreen.
These are two programs to onboard as much data as possible: mostly public datasets through Slingshot, while the Evergreen program will renew all those deals in the future. Also during the program, we're going to introduce SPs to each other so they can matchmake in the future when they need to store data replicas themselves. So it's an ongoing process of educating them on everything that's possible in onboarding data. In the back here, you can see the Moon Landing website that we put up. We have a landing page now where we talk about the program details and the incentives for SPs: why you should participate in Moon Landing and what you're getting out of it. The rest are some slides from the presentation we gave last week in Vegas. There's good traction; we got a lot of questions after the presentation from SPs that are new to the ecosystem about our milestones, including our requirements for onboarding as much data as you can into our ecosystem.

Hello, everyone. My project for this launchpad cohort is to migrate the current content of the launchpad curriculum to a new platform. For this platform we have chosen Hugo. Hugo is a static site generator that's widely used across the PL network; for instance, the Lotus docs and the Filecoin docs are built on Hugo. I've been working on several design changes. As you can see here, this landing page contains all the different sections and dynamically fetches the number of lectures and tutorials that we have. We also have this other feature: based on whether you're a shallow or deep resident, the site shows different content for the different personas that we have in launchpad. The idea in the upcoming weeks is to gather feedback on this site, improve it, and include more features.
These will probably be quizzes or other features that might be useful for upcoming launchpad residents.

So the project that I worked on for launchpad is a SHAP explainability tool for machine learning models. Essentially, if you have a bunch of data and you need to figure out what's going on in it, rather than looking through dashboards, hunting and pecking, and scrolling through spreadsheets, you can let a machine learning model ingest that data and tell you what's important. Step one is to train a machine learning model: you come up with a hypothesis of what might organize this data, and so you have input variables and output variables. Step two is applying the SHAP explainability package. Running this explainability process essentially goes through the machine learning model and zooms in on all of the data points. The orange surface is the winding parametric surface created by your machine learning model to weave through and fit the various data points. What you're doing is zooming in on each data point to create a more well-defined, simpler model; a linear model is one way to do it, and SHAP does it slightly differently, but the goal is a simpler model, shown in this picture as the blue plane, that you can explain better. What you get from that is feature importance: how much each input variable contributes to the prediction of the output variables. On this graph, you can look at it and say, oh, the biggest drivers in this hypothetical example are input variables one and two, but input variable eight is not a very big driver. And that can give you a lot of good insights into the data.

So the first project was actually taking a look at what it would take to build a more enterprise-ready, for want of a better term, storage system around IPFS.
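The feature-importance idea behind the SHAP workflow described above can be sketched from first principles. This is a toy exact Shapley-value computation, not the SHAP package itself (which uses much faster approximations); the function names and the linear toy model are mine, chosen only to make the attribution idea concrete.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x against a baseline.

    Features outside the coalition are reset to their baseline value.
    Exponential in the number of features, so only feasible for tiny
    toy models; SHAP approximates this efficiently for real ones.
    """
    n = len(x)
    phi = [0.0] * n

    def value(coalition):
        z = [x[j] if j in coalition else baseline[j] for j in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                s = set(subset)
                # Marginal contribution of feature i to this coalition.
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi
```

For a linear model the attribution collapses to `w_i * (x_i - baseline_i)`, which is a handy sanity check: a feature with weight zero (like "input variable eight" in the talk's example) gets zero importance.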
I also wanted to align with Web3 ideals, and so make sure that we weren't just re-implementing some old Web2-style stuff and that we're still on the right path, even though we're learning from 50 years of the storage industry. Then I go into some specific details around how to implement some of this: the missing components, what we might need to develop to build out a system like this. As I said, IPFS gives us content addressability; it gives us a way to refer to the content, and if the content changes, then the address we use to retrieve it changes, as we all know. Now, even on your laptop, the Mac or Linux PC in front of you, or Windows for those who are using that, if you open up a file, write some data, and then save that file, you don't have any guarantees that the data is actually still there or that it has been written correctly. Will it still be there tomorrow? If I go back in a year's time, is that data still in one piece? Then there are other problems that are a lot harder to know about and detect, things like unrecoverable read errors and bit rot. The answer to that could be: we'll just make more copies. Instead of three copies, we'll make ten. But now you're talking about 10x the storage cost for storing one amount of data, which for small amounts of data may not be a problem, but at large scale, that can be a real problem.
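The silent-corruption problem above is exactly what content addressing guards against: a CID is derived from a hash of the content, so corrupted bytes no longer match their address. A minimal sketch of the same idea with plain checksums, assuming nothing beyond the standard library (the function names are illustrative):

```python
import hashlib

def checksum(path, algo="sha256"):
    """Hex digest of a file's contents, streamed in 1 MiB chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected):
    """Detect silent corruption (e.g. bit rot) against a recorded digest."""
    return checksum(path) == expected
```

Verification like this tells you a copy has rotted; it doesn't repair it. That is why the naive fix is more replicas, with the 10x cost the talk mentions, and why real storage systems reach for cheaper redundancy schemes instead.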