Hey folks. Hello, hello everyone. We will give folks a couple of minutes to join. I'll be on silent with my video off for a second, or else the first five minutes of Show Me What You Got ends up being me randomly talking. So. Hey. I always get excited when former residents join. It's lovely to see everyone's faces, but I always get so excited to see Diego and Wes here. Oh, it's wonderful. Good to be back. Yeah. Good to see you. Likewise. And I like your background too, by the way. That's really cool. Thank you. Wes, every time I think about Lisbon, I just think about bacalhau being cod in Portuguese. Indeed. I know. Hopefully we can have some fresh salted codfish. Yeah. And celebrate compute over data. Yeah, hopefully. Oh, thanks for, yeah. All Hands went over. Okay. Yeah, we'll give folks probably another three or four minutes before we go ahead and get started. Since All Hands ran over, give folks time to maybe grab some more coffee, get excited, do some jumping jacks. You know. Awesome. Jay's here. Hello. Hi. And Shelby, oh my God. It's like all my favorite people in one room. Okay. So I'm going to go ahead and get started. I see mostly residents here that will be presenting, and knowing that All Hands runs late or is running a little bit over, I'm just going to start with the logistics of how things go, for folks who are here to watch and mentors who are here as well. Thank you so much for being here for my residents. We will go in the order that the presentations are set in. There are two ways to go before your turn: either I can share my screen and you just tell me "next slide," or, if you are demoing something or would rather have control over that, just have the deck up and you can share your screen as well. I'll ask everyone beforehand. All right. Sound good? One more thing: we have some folks that need to go a little bit earlier and a little bit later because of timing, so I just ask on the resident side, please don't change what order you're going in during the session. Does that sound good? Okay. Awesome. Great. Can everyone see my screen? Wonderful. So welcome everyone who's here for the Show Me What You Got demo day, and also welcome to everyone who is maybe in Singapore and watching this a couple hours later after it's posted. Really appreciate you all being here, both live and also the folks who are going to watch this async as well, which I know a lot of you will be. This cohort was our largest cohort ever, which is really exciting. We had 42 residents, which is wild, and about 25 folks attended Colo Week. So today we'll be giving a brief overview of the program, as we always do, and then we'll be going through the projects. We have some really amazing projects for you. From my end, it's always great to see them go from "I don't know what my project's going to be" to "I think we could do this too — what if we did all of the things?" to an actual minimum viable product, which is always exciting. These projects were done over a six-week time period. The first three weeks of that were spent really deep diving into a lot of our major products here at Protocol Labs: that's IPFS, libp2p, IPLD, and Filecoin. And then we had Colo Week, where we transitioned from that async, traditional learning into projects.
And one of the reasons we do this is so folks who are coming into the network really get to get their hands dirty with the code, dig in, and start making an impact really early on in their work, which is absolutely amazing. And we get projects from all over the ecosystem, from network partner teams to the IPFS Stewards to folks who are doing projects on something that maybe they won't have an opportunity to work on later but is a really good thing to do cross-functionally. It's great to see these all come together. And we've got some demos for you as well. Okay, Katie, could you exit and reload, because we just put the voting QR codes on the slides. So exit and reload the slides. Sure. Just hit refresh on the browser for the... You're muted. Shouldn't be. Also, up here, like Lindsay just said, we get to vote. Voting is open for everyone. So please stay, watch all of these projects and demos, and vote for your favorite. Vote for the one you think has the best name. Vote for what you think has the largest impact or the greatest bug fix as well. So definitely do that. You can only vote once, but you know, it's very important. And again, residents, mentors, community members, everyone is welcome to vote for the different projects. So just a quick overview of Launchpad. Again, it's a six-week onboarding program designed to train and develop technical talent across the Protocol Labs network. We have folks participating that are network partners; we also have folks that are working in the EngRes working group or the Outercore working group. And our goal is to scale a lot of different projects and a lot of different technologies, as well as onboard folks into Web3 and build really strong cross-team and cross-working-group bonds throughout the network. We were really lucky this cohort: we got to go to very sunny Palo Alto. So now I'm going to go ahead and play this video. Let me just make sure the audio will come through correctly. Go ahead and play this. Absolutely amazing learning happened during that Colo Week, and lots of connections were made, which is always great to see. And then shout out to our Launchpad team that is here: Hannah, Carla, Lindsay, our new cohort manager Dave, and our curriculum team as well, Marco and Annal, who do absolutely phenomenal work. Shout out to all of them. And also shout out to the video production team for making this for us — we had Jared out there for a couple of days with us, which was great. And here we just have some additional pictures. Again, lots of deep learning happening that week. Colo Week is always a really intense week, but an absolutely wonderful week, and it really gives folks the tools to transition, again, from more of that curriculum-based learning into the project-based learning that we now get to see the results of. So let's go ahead and get started with Show Me What You Got. First up, we have Joao. Joao, would you like me to continue to share my screen, or would you like to share your screen? Yeah, you can continue to share your screen. That should be okay. All right, so here we go. My project is about QUIC for Saturn. Next slide. QUIC, basically, is the transport protocol for HTTP/3. And Saturn is the decentralized content distribution network on Filecoin, which is the team I'm working on, and it's currently using HTTP/2 and NGINX. And basically we wanted to try something more interesting.
Excellent. This is mostly because time to first byte is a key metric for us, so lowering this value matters a lot for us, and HTTP/3 should be a good way for us to make improvements on these fronts. So that's what we set out to assess. Next slide. So we started by benchmarking. This was actually a team effort; basically, different team members touched different parts. So there was benchmarking everything and making sure that the improvements we were looking for were actually possible. Then we researched the browser side, because we need the browser to actually initiate these HTTP/3 connections correctly, and that was a bit challenging in some aspects. Then the integration. And then we started monitoring everything and distinguishing between HTTP/2 traffic and HTTP/3 traffic. Excellent. So here's my demo. It's like the shortest ever. So there you go. There's a curl hitting Saturn on a particular CID. And there you go: it's HTTP/3, as you can see on the first line. Next slide. But not everything worked the way we expected. We didn't really get better time-to-first-byte performance, which is kind of unexpected. Actually, things stayed mostly the same for the metrics we care about, except for the p50 time to first byte, which actually dropped by around, I would say, 20%, which was unexpected. You can see it on the chart there. There's this time series, the blue one; this is actually a ratio between our times and the IPFS gateway's ones, so here more is better. So that drop was unexpected for us; we didn't really know why it was happening. Next slide. And so there were a few conclusions we came up with. The first one was that, well, the implementation looked correct to us, but there could be some performance improvements to be made in NGINX's QUIC implementation, which is currently beta quality, as you can see on the bottom right side — there is a disclaimer there. But we actually asked the developers beforehand and they said, well, there are many people using this in production already, so it should probably be okay. Then we also considered that there could be other network confounding factors here, because most of the web today is TCP-optimized, whereas QUIC runs over UDP. So there could be other things at play here. Well, more recently we took a different view on this, because that wasn't good enough for us; there had to be something else. And so, next slide. We tried replacing the SSL library we had ended up using to get HTTP/3 working, which is called BoringSSL, and we went back to OpenSSL. And that actually improved our performance again. I mean, it hasn't recovered completely as of this moment — we're still assessing — but there's a noticeable improvement. You can see, again, the blue time series: there's a dip there, marked by the first red arrow — that's where the deploy happened and things dropped a bit — and then it only got fixed at the next red arrow, and things increased again. So that's pretty much it. We're still working on this, so if anyone has any insights into HTTP/3 and possible things we could be looking into, we'll be glad to hear from you. That's it. Wonderful job, Joao, and the entire Saturn team.
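As a rough illustration of the kind of comparison described above (not part of Joao's demo), here is a minimal Go sketch that measures approximate time to first byte over HTTP/3 versus a plain TCP-based request, assuming the quic-go library; the import path and the exact client type vary between quic-go versions, and the gateway URL and CID below are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"

	"github.com/quic-go/quic-go/http3" // import path/type name varies by quic-go version
)

func main() {
	// Placeholder: any HTTP/3-capable gateway or CDN endpoint serving a CID.
	url := "https://saturn-node.example.net/ipfs/bafybeigdyrexamplecid"

	// HTTP/3 client (QUIC over UDP) and a default TCP-based client for comparison.
	h3 := &http.Client{Transport: &http3.RoundTripper{}}
	tcp := &http.Client{}

	for name, c := range map[string]*http.Client{"http/3": h3, "http/1.1-2": tcp} {
		start := time.Now()
		resp, err := c.Get(url)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		// Reading the first byte of the body approximates time to first byte.
		if _, err := io.ReadAll(io.LimitReader(resp.Body, 1)); err != nil {
			fmt.Println(name, "read error:", err)
		}
		fmt.Printf("%s: proto=%s ttfb≈%v\n", name, resp.Proto, time.Since(start))
		resp.Body.Close()
	}
}
```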
Right after every presentation, we'll give folks a couple of seconds to ask questions. Does anyone have any questions for Joao? Moving on. We now have the data onboarding website, a customized journey for clients who want to store on Filecoin. Would you like to share your screen, or would you like me to click through? Yeah, I want to share my screen, given I have a wireframe demo. Thank you. Awesome. Can you see my screen? Cool. Hey, everyone. Yeah. Great to be here. So today I want to walk you through the journey of building the data onboarding website, specifically the roadmap and also the wireframe demo. So let's first start with the objective of this website. Right now, the problem we keep seeing is that when people ask us where to store their data on Filecoin and how exactly to do it, there is no single best place to find that information. So the objective for this website is really threefold. First, let people know why Filecoin — why it is a better solution than your current storage, whether that's centralized storage or public cloud — and really showcase the cost-benefit comparison of Filecoin. Next is to show the what and the how: as a user, we want to help you identify the right data onboarding tools and attach a short, sweet video on how, so essentially we customize the journey based on your needs. And lastly, we want to use this website as an opportunity to build an open community. So if you want to store data on Filecoin, rather than letting the information sit siloed in Slack, we want to have an open forum so people can support each other directly. And eventually we might even have some data onboarding wizard that can help you in this journey. So with this objective in mind, over these couple of weeks, this is basically what I did so far. Initially, I worked with some folks on our team, the client growth team, to develop the planning documentation — essentially source the content and assess the content gaps, and see what kind of information we could already leverage from, say, the main Filecoin website. And now I'm at the wireframing and content review phase, and I'm happy to show the wireframe demo very quickly later. The next step is to gather preliminary feedback for this website and really work with our customers, clients, and some storage providers to get their feedback while we are still developing the mockup. And then, aiming for October, I want to work with our web development team to actually code it — develop the backend — and test it both internally and externally, with the goal in mind that we want to ship it at FIL Lisbon. So now let's take a look at the wireframe demo — let me open the Figma, hope it will work. Awesome. So here is the preliminary website. As you can see here, as a potential user or potential client, the first hero page you see is the key value prop of storing on Filecoin — essentially verifiability, flexibility, cost-effectiveness. So it's no longer just a fancy kind of blockchain technology; this can really add value to a business. And then you get to a selection wizard, right? If you are an enterprise, you click enterprise, and then you tell us how much data you want to store — maybe you want to store more than one terabyte. Then, what's your frequency, right? Retrieval frequency.
If it's yearly, then we will also ask you how many people want to access it, and essentially recommend a product for you. So that's the whole point: to customize your journey and really recommend the right product for you. This website is still in development, so you can see essentially we will keep adding more and more content — working with the actual product owners for those products and having a short and sweet video to explain how to use each one. And if you want to get in touch with our experts when you want to onboard a large dataset, then we will also offer solution architect help directly from this website. So yeah, that's pretty much it. Also, we would love to add social proof and testimonials from the organizations we are already working with, to add a trust layer to our solution as well. So yeah, that's all I have so far, and I'm really excited to see how these things play out next month and eventually get more feedback once it's getting closer to a more complete state. Thank you. Any questions? Awesome. Thank you so much. And I also can't wait for next month as well. Next up we have Interplanetary Specs. Would you like me to share my screen, or would you like to share yours? Yeah, please, if you could share. Yeah, I got you. Hey everybody. So nice to see everybody again, even if it's over Zoom. This is the Interplanetary Specs project that Reid and I worked on. Next slide, please. So just to give you a quick recap of what we're trying to address with our Launchpad project: I'm on the libp2p team, Reid is on IPFS. And a lot of the work that our teams do on a day-to-day basis involves writing specifications and writing implementations according to specifications. So these specs are really critical, and they only become more important as our ecosystem evolves, as more organizations rely on IPFS and libp2p to build their stack. For example, Eth2 is something that was six years in the making, and now after the Merge they're powering their entire networking layer with libp2p. It's also really important because additional implementations get written in different languages — for instance, libp2p has so many new ones being written, in Nim, in Swift, what have you. However, we're kind of in this position where the current specifications don't meet critical needs for a number of different reasons. One of the things is that the specifications themselves have inconsistent versioning or lifecycle management. So as a new user or someone who wants to consume the specs — say you're an implementer — it sometimes is unclear how much you can trust a spec: even though it may be checked into the repo, upon reading it's still not 100% clear. Specs across organizations like IPFS or libp2p, and even within a single repo, are not really standardized, so it becomes harder to gauge how comprehensive a specification is. And then the content within the specs themselves — even though they may be following different formats, those specs are also in varying states of completion. So those are some of the issues. If we hope to power the Web3 ecosystem with IPFS, we also want to make sure these protocols are well specified, so that people can rely on them and have a certain sense of accuracy and trust.
So another thing that we're hoping to address is change management. We want to have a single place to record and track proposed changes, and also introduce an inclusive process to triage and evaluate proposals to specifications that brings in the wider ecosystem. The IPFS Stewards team does a great job of evaluating spec changes, but eventually we want to make sure that the wider community is also involved, similar to how FIPs work today. Next slide, please. All right. So I'll jump in. Again, I'm Reid. The approach we've taken with this has been to really take a step back and look at: what is the ideal way we want the spec repos to look, how we want the specs to look, how we want things to be structured, and what is the process we want for evolving things moving forward. Those are the things we've really been focused on getting in place. So first is defining a comprehensive spec versioning and lifecycle system. This is a way for us — we've got a lot of specs already out there, and we've got new specs coming in — to track and make sure that we understand what's in draft, what's published, and things like that. Second is to define a standard repo structure for the spec repositories that just enforces best practices; not a lot more to say on that. Third is to create a base template for specifications, so that we really define what a good specification looks like through that template. New specs that come in will pick up that template by default, and then, as we'll talk about later, we need to go back through and take existing specs and move them over to this. And then finally, as time goes on, the protocols are going to evolve, so we need a change process that involves all the relevant stakeholders to consider proposed changes to these protocols and eventually either approve or reject them. I want to call out that we're not starting from scratch with a lot of this. libp2p has a great start on a spec versioning and lifecycle system; we are taking it and evolving it somewhat, but it is a baseline for us to go from. Similarly, IPFS recently started the IPIP, the InterPlanetary Improvement Proposal process, which is modeled off of what Filecoin is doing, and we're taking that and starting to evolve it, as we'll talk about in a minute. Next slide, please. Yeah, so this slide is talking about point number three, which is a base template for any new spec that gets introduced, whether it's in libp2p or in IPFS — something that can be extensible across projects and across repositories. What it really seeks to do is address concerns that have been raised by the IPFS Stewards, engineers, and the community. You'll see that I've put in issue links; some of these are open issues that have been around for a long time. But we're hoping that by standardizing the specification template, a newcomer or a new implementer can come in and say, okay, I know exactly what state this specification is at today. And it outlines a structure that is followed and repeated across other specifications, so you have nice uniformity across projects. It also addresses some things that people have raised, like adding a glossary — there are new terms that get defined when you're writing new specifications for new protocols, and so you want to call those out as well.
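To make the idea of a standardized, machine-readable spec header concrete, here is a purely hypothetical sketch in Go of the kind of metadata such a template could carry; the field names and maturity labels are illustrative, not the actual template the team is defining.

```go
package main

import "fmt"

// Maturity is an illustrative lifecycle stage for a specification.
type Maturity string

const (
	WorkingDraft            Maturity = "working-draft"
	CandidateRecommendation Maturity = "candidate-recommendation"
	Recommendation          Maturity = "recommendation"
	Deprecated              Maturity = "deprecated"
)

// SpecHeader is a hypothetical, tooling-friendly header for a spec document.
type SpecHeader struct {
	Title    string
	Maturity Maturity
	Editors  []string
	Revision string // e.g. a date or version of the spec text
}

func main() {
	s := SpecHeader{
		Title:    "Example wire protocol", // placeholder name
		Maturity: WorkingDraft,
		Editors:  []string{"alice", "bob"},
		Revision: "2022-09-01",
	}
	fmt.Printf("%s [%s] rev %s, editors: %v\n", s.Title, s.Maturity, s.Revision, s.Editors)
}
```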
Really what we're hoping to do is make the specifications not only consumable by implementers or people who are really in the weeds with libp2p and IPFS, but also approachable to new people. So adding a glossary to the specs — something like that is going to be pretty welcomed by the community. This is just an example of something we're going to do for point number three, and we have others as well, but we'll get into that later. So next slide, please. So this, again, is the improvement process, the RFC process for specs. As I mentioned, we've recently introduced this for IPFS. One of the things we've recently done as part of our project is documenting and proposing a set of changes to that process, which builds a much more structured, but still lightweight, consensus process to ensure that proposals are fairly considered, with input from stakeholders and the rest of the ecosystem, and just making sure that the wheels keep moving, right? So if you propose something, you know that there is a certain window of time in which it will get considered and move through the process and either be given an up-or-down vote or feedback to modify. So there are a lot of next steps for us in this process. There's a bunch of pull requests currently open against the IPFS specs IPIP process, and we want to triage those into the process and keep that moving, and then set up the working group touch points that are called for in our proposed changes to the process. The next steps after that would be, as we get things refined a bit on the IPFS side, moving this process over to libp2p and then exploring what it means, and whether it's needed, to scale this to cover IPLD, multiformats, and some of the other projects. Next slide, please. Yeah, so our next steps really are to work with our teams and the communities to commit to the processes we've defined and the structure of the documentation. We really want to make sure everybody's involved. Reid and I have been working on this, but we also really need to incorporate feedback from our teams. So what you're seeing here is just our thoughts; our next step is to involve a much wider group of stakeholders. We're planning to build and drive adherence to the versioning and the templates we've described above, identifying documents, deltas, and changes across repositories, and really lead the effort to triage and clean things up within our respective teams. Triaging the open issues against either the IPIPs or the specs repos — we've already started doing this with our team leads. And then lastly, institute the RFC and change management process in the respective communities — like we mentioned before, something that's very inclusive and takes into consideration what the community also thinks about this. So that's it for our presentation. Any questions? All right, great job. Really excited, and I love the idea of adding in that glossary — I think that is very much needed and will be wonderful — and I love the collaboration across different projects as well. All right, next we have Rita with the Community Events Playbook. I do see Rita here. Rita, would you like to present, or is the Wi-Fi still a little wonky? All right, I am going to go ahead. We got you. I can hear you.
Yeah, it's still wonky. If you could press play right there, please. I got you. The Community Events Playbook is a step-by-step guide to planning an event that attracts the desired community. What are community events? Community events are gatherings that aim to bring together specific audiences in ways that enrich their knowledge and grow the community. How do I plan an event? This playbook will answer that question. Each event cycle will begin with a proposal (who, where, and why), a plan (how), execution, delivery, and a follow-up. Next steps. In building the Community Events Playbook, I started with our Notion page. The Community Events Playbook starts by, again, defining community events and walking you through all phases of event planning, from the pre-event planning stage right down to follow-up. So we go: pre-event planning — establishing who's the driver, who are the decision makers, who are the contributors, and what's the history of this event. Have we done this event before? What were the schematics of the event? What were the logistics? And what were the results? All of that information goes into your initial pre-planning process, because this will help drive communication, this will help determine the planning process, this will help drive so many efforts in getting the desired results. From the pre-event planning stage, we go right into event planning, starting with the proposal: what is the event location, format, time, date, the description of the event — all of those things get addressed during the initial proposal process. The planning phase, phase two, or step two, is establishing communication channels — weekly planning calls, Slack channels — selecting a venue, looking at the venue research database that we've newly established, and also setting up the site visit checklist to help once a venue has been determined as a choice. The next step is to conduct the site visit to ensure that the venue actually meets our needs for the event. From the planning process, we go right into execution and delivery. That involves setting reminders, notifying your attendees — communication, communication, communication. Event execution is all about communication, whether it be your internal staff, vendors, sponsors, or your attendees: you can never under-communicate. This section will get fleshed out a little bit more, as we are always discovering new SOPs that work for each event. In the post-event process, that's where we do our follow-up. What are our attendees saying about the event? Was this event considered successful? Did we meet our KPIs for the event? All of that gets addressed in the post-event process. And this would not be a playbook without having some additional resources, like our event brief, more event planning 101, as well as reviews of some of our past events. We will always link to past events so that one can see, you know, what the tracks looked like, what worked in Austin versus Toronto. These things will all be fleshed out in the Community Events Playbook. Going back over to our slide deck: what are our next steps? Continuing to uncover more and more SOPs and continuing the development of our Notion page. And that is the Community Events Playbook. Thank you for your time. Rita, shout out to you, because I know you recorded this at an event where the Wi-Fi was a little wonky. If anyone has any questions for Rita, feel free to type them in the chat or Slack her as well. Thank you.
Next, we have Sam Kara with Decoupling the Dataset Explorer from Slingshot. Hi. Next slide, please. Ah, okay. So, the Dataset Explorer is basically a portal for discovering and exploring open datasets that have been uploaded to the network through the Slingshot program. At the moment there are some challenges. Tracking all the uploaded datasets is not easy — it's not easy. Searching these datasets is also a challenge; actually, that capability doesn't exist. And then, very importantly, if you are a researcher or you want to consume these datasets, understanding their properties right now is a big challenge. Retrieval is also functionality that we would like to have on the portal that doesn't exist. And finally, there is no mechanism to measure community engagement with the datasets. Next slide. So, the proposed solution, ideally, is to rebrand the Dataset Explorer — decouple it, or separate it, from Slingshot — and have it evolve into its own product. The idea is to rebrand it — probably just change the logo and the URL — and have it hosted as a separate service, as opposed to being part of the Slingshot service the way it is right now. Then enhance it to a point where we are able to support active state on chain, as opposed to the archived data right now. Then there are minor UI enhancements that we intend to do, which will bring in the ability to filter datasets nicely using various properties from the CAR metadata, and also bring in discoverability by exposing more properties of the datasets. The next thing we hope to achieve is to build a mechanism to foster community engagement: just collect feedback from people — what are folks saying about a particular dataset, what challenges somebody faced, and things like those. Then leverage the IPFS gateway to support retrieval and downloading of files directly from the Dataset Explorer. And then, finally, integrating it with Bacalhau to support remote execution of code on the data where it is stored. The idea is that at some point — not if, because at some point it will happen — we're going to look for ways to monetize it, and it is going to have much more value if somebody can execute code on the stored datasets and even save the output of whatever analysis they would have been doing — if it is training an ML model or something like that, save it back. So that's it. All right, wonderful. Next we have Ivan with Indexing IPFS. Again, folks, just vote for your projects along the way as well. But take it away, Ivan. Cool. So, okay, let me share. Okay, you can stop sharing. So, right. Katie, can you please let me share the screen? Should be good now. I guess that's done. Okay. Can you see it, guys? Right. Okay. So just to recap the problem: IPFS nodes use the DHT for content routing, which is decentralized but can be slow to propagate updates, requires multiple hops to find a single piece of content on IPFS, and can just generally be slow. Filecoin, at the same time, benefits from faster lookups via the network indexers, which are really fast at finding the providers that have a certain piece of content.
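As a concrete illustration of the indexer lookups Ivan is describing (not part of his demo), here is a minimal Go sketch that queries a network indexer's find endpoint over HTTP; the endpoint shape is assumed from the public cid.contact service, and the CID is a placeholder.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Placeholder CID; cid.contact is the public network indexer for the
	// Filecoin/IPFS ecosystem (endpoint shape assumed from its public find API).
	cid := "bafybeigdyrexamplecid"

	resp, err := http.Get("https://cid.contact/cid/" + cid)
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// A 200 response carries JSON listing the providers (peer IDs and multiaddrs)
	// that have advertised this CID; 404 means the indexer has no record of it.
	fmt.Println("status:", resp.Status)
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```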
So recently, as I mentioned on the All Hands before, IPFS introduced the Reframe protocol, which basically allows IPFS nodes to delegate their routing to some external system. So now, basically, those who are running the nodes can decide for themselves how they want to advertise their content. And what that allows us to do is connect to Kubo nodes via the Reframe protocol, capture all the data that these IPFS nodes have, and advertise it out to the indexers. Then, through these indexers, people will be able to find this content on the IPFS nodes — so IPFS content can be available alongside the Filecoin content too. I will show the demo in a minute; I just want to discuss the next steps first. So first of all, Kubo's Reframe support is at a very early stage at the moment — only release candidate one, which was cut today, actually supports it — so it's really early days. We can start trialing the indexing of IPFS on smaller nodes of the IPFS Cluster that's managed by Protocol Labs, and then, once this is proven to be successful and once we polish the bugs on both the Kubo and indexer side, we can work with the IPFS team on having the PL-run IPFS Cluster provide its index to us — to us meaning to the indexers. So yeah, and now let me try to show you the demo. Just bear with me: I'm showing this live, nothing is prerecorded, and Kubo, as I mentioned, is using a forked version, so it might be a bit fragile, but I hope everything works. I'm running three things currently. At the bottom left of my screen is the Kubo node, which I had to tweak a little bit to support Reframe properly — that's now supported in release candidate one. Above it is the indexer that I'm running locally; that's basically what's going to provide us with the information about where content can be found. And to the right of the indexer, I am running the index provider. This is basically the bridge between the Kubo nodes and the indexer itself. I also have a browser window open that is pointing to my local Kubo node. So let me try to open a text editor, type "hello people," save it to the desktop, and then go and try to import this file. Once it's imported, you can see it started capturing different things: basically the index provider received the advertisement from the IPFS node and then advertised it on to the indexer. Now what I can do is take the CID of the uploaded file — copy the CID — go here, and use the indexer command line to find this CID, hopefully. So, find. And there you go: it shows where this content can be found. And this also works with more complex pieces of content. For example, I uploaded a zip file earlier, which is 23 megabytes. I can look inside — it consists of a lot of chunks — so I can take a sub-chunk of it, for example, and try to do a find on it. It also shows up. Yeah. So basically, that concludes my demo. All right. Excellent. It's always, I know, nerve-wracking when you are live demoing, but that went off without a hitch, which is great. And next up we have — I lost my list — Ravi and Dave, who will be talking about Heterodox Searching for Web3. Hi everybody. Great presentations so far. Ravi and I are going to be doing something a little bit different from Ivan's there.
The goals of our project — and we can move to... I'm sorry, hold on. Okay. Do you want to go back to sharing your screen? Thank you. Cool. Next slide, please. So we had two goals here. We're going to get into this in the next slide in a second, but as you probably have all heard in the last few weeks, there's been record-high interest in opportunities within the PL network. Ravi and I began discussing the channels through which PL sources candidates, and decided that it might be helpful to create a tool where recruiting team members can click and find candidates through new means or new channels. And if we go to the next slide, we'll just build on that slightly. So the current state: I think August was the record-high number of applicants into the PL network, and the prediction shared by Ian was that September is going to surpass that. Ravi and I really focused on the idea that in Web3 there are a lot of talented individuals who exist outside the traditional hiring avenues, and the question that we posed while working through this was: how best can PL access the periphery where communities of these potential candidates reside? Ravi's going to take over here to explain a little bit about this chart and a bit more about what we did over the past few weeks. Ravi, you're muted. Ravi, are you there? I didn't realize I was on mute. Sorry. As you can see from the graph, applications are surpassing every other channel, and applicants are good, but if you look at the percentage of qualified applicants, it's really low. If you can go to the next slide, please. So how do we find the best talent for us? We really wanted to challenge ourselves to think outside, as Dave said, the traditional channels, be it LinkedIn or inbound applicants. There are numerous channels available out there, so we set a target that we are going to find at least 30 channels. As of now we have completed nine different channels for finding candidates, starting with Kaggle. This is where we basically learned about custom search engines — which these days are called Programmable Search Engines — to find this talent for any open role within PL or the PL network. To start with, we played with Kaggle. There are a lot of competitions going on, there's a lot of discussion, but that is not of interest to us; we are rather interested in finding the high-potential candidates on Kaggle. So we needed to define a custom search engine in a way that lets us extract information at the user level — where we have the handle and we should be able to find exactly who the user is. We also worked on a lot of refinements on the users we are finding: if we want to find users from a specific geographical location, we can do that; if we want to find users from a specific diversity pillar, we can do that. We also built one on ResearchGate, another great place to find good research candidates. This is where we again were able to extract the user profile rather than the discussions. We provide a refinement for finding users from top CS grad programs in the world — the MITs, Stanfords, and CMUs of the world. We were also able to provide another refinement on ResearchGate from a diversity standpoint. Again, there are other different channels, including GitLab and GitHub — most of the time GitHub is there, but how do you take your recruiting or sourcing to the next level beyond just finding a user profile: is this person contributing?
Where exactly is this person located? How do I find their email address to reach out? Can I relate this GitHub profile to any other social profiles available? Can I relate this GitHub profile to a LinkedIn profile? We were able to accomplish all of this, and I'll show you in a second how we did that too. Again, Google Scholar and Behance — because one of the other biggest needs from a network standpoint is finding a lot of cool designers — so we were able to provide some magic on Behance with a custom search engine to find candidates. Facebook: with the recent takedown of Graph Search because of the Cambridge Analytica case, how do you find candidates on Facebook? This is where I probably spent most of my time out of the nine resources listed on the slide, but we were able to inspect the HTML code and figure out how to find the users we are really going after. Plus, diversity is always one of the biggest focus areas for any company, including at PL and across the PL network, so we were able to define another search engine specifically focused on finding candidates who are, for example, women, Hispanic, or African American, and you can provide any sort of refinement on top of that — like, if you want an African American candidate from this location, from these colleges, all those refinements are available. Now, all of these are separate custom search engines. Rather than giving somebody 10 or 15 search engines, why don't we combine the functionality of all the search engines into one toolkit? Katie, if you can go to the next slide. So this is just an example of how to configure the custom search engines. Next slide, please. This is how we did the GitHub parsing — we were able to find candidates from there, we were able to locate the profile, and from there, right inside, we got all the social profiles, including email addresses. Next slide, please. This is the toolkit we created. It's a website, a pretty simple website we created on Google Sites. All the custom search engines are available there, so everything is housed in one place. And as I said, we are probably going to add another 20 more resources. It's publicly available for anybody to use, and we kept it really simple, in a way that it's not just a recruiting professional who can use it — anybody, even outside of recruiting, should be able to click and find candidates. Next slide. We also talked about OSINT and image searching. This is how we solved the problem: we used the algorithms behind the two biggest search engines available in the market today, and we were able to find totally different sets of candidates for one search string. So if you talk about increasing your productivity, you can 2x your productivity right away — if you look, the results on the left and the results on the right are totally different. Next slide, please. It is not finished; we have a lot of things to do right now. One of the other things we really want to go after is mapping: we are looking into solutions — we know there are solutions available that map talent by company and industry — but how do we combine all those things into one toolkit? That is what we are working on right now. We are also working on sorting out ways to find candidates from Discord and Telegram, because we just don't want to blast positions there; rather, we want to go directly to the users.
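For anyone curious what driving one of these programmable search engines from code looks like, here is a minimal Go sketch against the Google Custom Search JSON API; the API key, the engine ID (cx), and the example query are placeholders, and this is an illustration rather than the toolkit Ravi and Dave built.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

func main() {
	apiKey := os.Getenv("GOOGLE_API_KEY")    // placeholder: from the Google Cloud console
	engineID := os.Getenv("SEARCH_ENGINE_ID") // placeholder: the cx of your Programmable Search Engine
	// Example query: public Kaggle user profiles mentioning a skill.
	query := `site:kaggle.com "machine learning"`

	u := "https://www.googleapis.com/customsearch/v1?key=" + url.QueryEscape(apiKey) +
		"&cx=" + url.QueryEscape(engineID) +
		"&q=" + url.QueryEscape(query)

	resp, err := http.Get(u)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Pull out only the title and link of each result.
	var out struct {
		Items []struct {
			Title string `json:"title"`
			Link  string `json:"link"`
		} `json:"items"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, it := range out.Items {
		fmt.Println(it.Title, "-", it.Link)
	}
}
```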
And the last two things — we are out of time here — but the recruiting team is already doing some amazing, innovative approaches to this stuff, and just to highlight a couple of things you will see in Lisbon: Dave is running a Friends of PL recruiting event, the idea being, can we bring talented people together at a specific event related to recruiting — it's kind of a networking thing, promoting the PL network and then from there attracting candidates. And one of the ongoing conversations that Ravi and I had, and that is going on in the recruiting team, is how we can incentivize members of the PLN to encourage their connections to join. I think we are out of time. Thanks for your patience. Awesome. Okay, next up we have Koii, who will be talking about DAGs for decentralized identity on IPFS and Web3.Storage. Hey folks. For anybody watching that doesn't know me, I'm Alex from Koii. I suppose you can keep sharing my screen and click through it, that's okay. Yeah. Sorry, we're at a conference this week, so most of the team is unavailable, unfortunately, but I'll give you guys a quick once-over, and there's a link to the technical video here at the end that Raj made. So if anybody wants to see all the details of how to deploy a Koii task and what the intricacies look like under the hood, we can go through that as well — it's pretty long for this, so if anybody wants to watch it themselves, I'll drop a link in the chat. So the general idea here was: we have a faucet for Koii that we've been using for a while, and we wanted to make it, number one, more user friendly — to give people sort of an identity, so they have something that they can continue using as they work with Koii in the future. We wanted to store that on IPFS and have all the attestations live on IPFS without any on-chain smart contract anywhere. So the whole thing runs on a Koii task, as we call them, which is sort of a bespoke consensus network that you can write in JavaScript, and the data for this DID system that we've created all lives as kind of a DAG on IPFS. We also wanted to make sure that this was fully decentralized. So while we're using Web3.Storage, we set that up as a default within Koii tasks: when somebody runs a Koii task node and they select this particular task, they'll be prompted to create an API key for Web3.Storage, which hopefully will drive some users their way, but also means that anybody who's writing a task can then use that as a de facto way of storing things. So all of the tasks that people write in the future will hopefully take advantage of the fact that many of our nodes will have these Web3.Storage keys, and that should mean that lots of stuff will end up getting up to IPFS. We'd also like to open up some of the standards so that people can use this for a variety of other things related to other Protocol Labs projects; we'll probably publish the full DID spec as well, so it's nice to use. Just to give a quick once-over of how the faucet works: there are sort of two main steps here. The first one is people have to get assigned a DID, and the second part is that once they have the DID, they have to get attestations to prove that they have, say, a Twitter account or a phone number or an email connected to it. That last part is the most important, because that's how we prove that they're human, and that's why we can actually issue tokens from our faucet.
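As a rough sketch of what a task node could do with one of those operator-supplied Web3.Storage keys, here is a minimal Go example that pushes a small payload to the Web3.Storage HTTP API; the endpoint and response shape are as assumed here (check the current Web3.Storage docs before relying on them), and the token and payload are placeholders.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// The API key the task prompts the node operator for (placeholder env var).
	token := os.Getenv("WEB3_STORAGE_TOKEN")
	// Hypothetical attestation payload to be stored on IPFS via Web3.Storage.
	payload := []byte(`{"did":"did:example:123","handle":"miley"}`)

	req, err := http.NewRequest("POST", "https://api.web3.storage/upload", bytes.NewReader(payload))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// On success the service responds with JSON containing the CID of the stored data.
	fmt.Println("status:", resp.Status)
}
```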
And one of the other big things under the hood here is that Koii's attention tracking game also uses these DIDs as a spam prevention mechanism. We previously had these deployed on Arweave; it looks like the cost basis is a lot lower with this setup, so we'll probably be using Filecoin in the future for this. Yeah. Without further ado, essentially how this goes. We explained this in the first one, but we have our user Miley — let's imagine Miley Cyrus comes through Launchpad and she wants to get approved with a DID. She comes and gets a DID; that's just kind of issuing something and putting it onto IPFS. Once that's up there, then people can start issuing attestations to her, or she can request an attestation. So one way to request an attestation would be saying, I want to verify my Twitter handle. So I'll post something on Twitter — say a little hash or something that's a unique signature — and then people can go and verify that her Twitter has that hash on it and they can issue attestations. Except they won't do it personally; they'll do it automatically using their Koii node. Lots of these Koii nodes can also be configured to have an API key for Twitter. So, one API key for Twitter, one for Web3.Storage: they create that attestation, upload it to Web3.Storage, and then submit it to one of the task nodes to kind of click everything together, and then it gets added into our index DAG. So we started building this out. The faucet's now under construction — it has been for a while, actually — but now that we've got the DID standard pretty much together, it's going to get a bit of an upgrade. The way this will work is that every time someone verifies a new form of identity, they'll get more tokens from the faucet. So the first thing that they do, say it's Twitter, we'll give them their first one token. Once they get that one, the next one will give them double that, and then every time they add another attestation, it doubles again. We're hoping to add a lot more beyond this, but to start with, Twitter, Discord, email, and phone will be the main ones. Email and phone we're going to be doing from Koii, because it's kind of hard to decentralize those at the moment, but gradually we're hoping to add more and more types of verification here that can kind of increase what people are doing. For example, if Launchpad wanted to attest that certain people have been through Launchpad, they could issue an attestation following the same framework, and that would give people a little bit more of a verified identity and humanness. And we can probably add some reputation specs for that, so that their attention is worth more points. Again, all of this gets put onto IPFS via Web3.Storage, and this is most of it. Here's a quick overview of what the actual DAG structure for this looks like. You've basically got a whole bunch of these little signed payloads. We've never built a DAG before, so this is kind of our hacked-together implementation; we'll probably continue refining the process over time. The general concept, though, is that each of these little snippets of JSON here — for the non-technical people — is a little payload getting uploaded to IPFS, and each one of them is signed by whoever is submitting it. So the first one gets signed by you as the user: you generate a key, sign your DID with that key, and that gets uploaded via one of the Koii task nodes, so you don't even have to worry about talking to IPFS or Filecoin.
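To make the signed-payload idea concrete, here is a hypothetical Go sketch of a DID registration payload plus a later diff, both signed with the same ed25519 key and folded into a "latest" view the way the task nodes are described as doing; the field names and DID format are illustrative, not Koii's actual schema.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// Payload is a hypothetical shape for the signed snippets described above:
// a base DID registration or a later diff, signed by the same user key.
type Payload struct {
	Type      string            `json:"type"`      // "register" or "diff"
	DID       string            `json:"did"`
	Fields    map[string]string `json:"fields"`    // e.g. {"twitter": "@miley"}
	Signature string            `json:"signature"` // over the rest of the payload
}

// sign fills in the Signature field over the JSON encoding of the payload.
func sign(priv ed25519.PrivateKey, p *Payload) {
	p.Signature = ""
	body, _ := json.Marshal(p)
	p.Signature = base64.StdEncoding.EncodeToString(ed25519.Sign(priv, body))
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	reg := &Payload{Type: "register", DID: "did:koii:example123", Fields: map[string]string{}}
	sign(priv, reg)

	diff := &Payload{Type: "diff", DID: reg.DID, Fields: map[string]string{"username": "miley"}}
	sign(priv, diff)

	// A task node would upload each payload to IPFS (via Web3.Storage), then fold
	// all payloads carrying a valid signature from the same key into one current DID.
	latest := map[string]string{}
	for _, p := range []*Payload{reg, diff} {
		for k, v := range p.Fields {
			latest[k] = v
		}
	}
	out, _ := json.Marshal(latest)
	fmt.Printf("public key: %x\nlatest DID state: %s\n", pub, out)
}
```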
Someone else is going to make sure it gets uploaded, because there's a second incentive model for them: they're going to earn some Koii tokens for running their node, and if their node completes a bunch of these registration uploads, then their node is going to get more points towards getting a reward every 12 hours. If they want to do something like change that DID, then they're going to upload a diff. So say I wanted to add a username to my account: I would submit a new payload like this, and as long as it's got the same signature, then we can aggregate all of this together. Then what the Koii task nodes are going to do, in order to maintain this index, is keep all of the payloads that have been submitted and basically compute an output of that to give you the latest version of the DID, which is what the API that we provide will show. So this output DID doesn't actually exist anywhere as a complete payload, but the nodes know how to reconstruct it because they're all sort of playing the same game, and they're competing to create this index faster to get the rewards. So on an ongoing basis, they're all trying to compute this index and submit a hash of the index. This allows us to continuously update the index without worrying that anything's being left out, because if there's a new payload that's been uploaded to IPFS and it's not in the index, then that node won't be able to get the reward. So each one is always trying to find more payloads that are signed by that user. And if payloads are not signed by the user and they end up in the index, then the node gets audited and we take away their stake. As for the attestations, they work kind of the same way. As the owner of my DID, I can request an attestation, say for Twitter, so I submit one of these payloads. Once that's up there, the task nodes will see the request, and they'll all kind of scurry off to go query Twitter, grab the data from my account, and make sure that the hash I posted to Twitter actually lines up with my account ID and everything. If it's successful and it follows the proper process, then they'll submit one of these attestation payloads, which goes into the task nodes again, and they add that into the index. And then finally there's a master index that sits on top of all of this. And the cool thing about this that's really neat is that instead of having all of this live in a smart contract on Ethereum or Polygon or something like that, this master index is basically just one hash that gets updated once a day. So it increases the throughput for something like this a lot compared to trying to do everything in a smart contract, while we still have a lot of the verification and trustlessness that we would expect from decentralized systems. This is why we set up Koii tasks in the first place, but I think IPFS is going to be a really good area for us to work on this with, and we're hoping to publish a lot of these as standards so that other people can follow this design process and make good use of the system. Yeah, so just to quickly show how that works — I think this slide was in there before — you have a desktop node where you run your tasks, so you just kind of click to select them, and then you set a stake to run them on your computer.
When you start running one of them, it's just going to look for these kinds of outstanding things that are happening, pull that data down, upload it into IPFS, do all that stuff. There's a configuration screen in here as well that you use to set your API keys, like Web3.Storage, and we have some prompts, so when you go to run a new task that requires a Web3.Storage API key, it'll give you a little link to say, hey, here's how you set one of those up. And there are some console outputs here, the main thing being this GET request in Postman — it's pretty hard to read from here — but basically there's an API on each one of these nodes that allows people to query the node itself and ask for the data about the DID user. So we're kind of clustering these together, and we're working on some routing technology as well that'll sit on top of this and make it even more useful. I guess the last thing to show would be over here: you've got the different kinds of endpoints that are supported. Each node will open up a register endpoint; it'll also have the ability to look up DIDs based on their ID, give attestations, and see the pending attestations, all that kind of stuff. So it's just all these little nodes running around trying to upload things to IPFS and verify them and that kind of thing. So, more to come on this. I will drop the video in here, because it's quite long, so if anybody wants to watch it, give it a shot and let us know what you think. All right, great. Next up we have Chris, who will be talking about Fil+ program infographics from growth marketing. Yep, can you share your screen, Katie? That'd be great. I'll be really quick, guys. My team's really focused around storage provider awareness: attracting more storage providers into our ecosystem and ensuring that they have all the information they need before they come into the ecosystem, before they get too in-depth in the conversations. And one of the topics that they're always asking about, that's top of mind, is the Fil+ program. Can you go to the next slide, please, Katie?
So what we did is we reached out to a vendor that I used to work with in previous roles at different companies, called CPR Interactive, out of the Bay Area, and they do a really good job of creating really easy-to-understand infographics — that's one of the things they do anyway; they do other things too. But the point is that they were able to learn all about our Fil+ program and put it into an interactive infographic. These are just screenshots right now of what you would see if you looked at the graphic on your phone or your tablet, and it's really easy to see here, right: you click the plus button and it opens up more detail about each particular reward or incentive that you have as a storage provider. If you go to the next slide, I can show you more of what the interactiveness of this will be. Can you click on that link there? The graphic should be linked — Katie, who inserted that graphic — right there, and the password is all lowercase, the word "vision." And then you will be able to click here, guys; you'll be able to scroll down a little bit, you'll be able to play this animation — scroll down a little bit more, I'm sorry, further down, right there, there you go — it's already finished, so now you can see what's happening: as the user comes to this page, we're asking them to click step by step and understand what a data owner does, what a notary's role in this whole process is, and what storage providers are going to do. Scroll down just a little bit longer, a little bit more, right there, there you go, and hit play again, there you go. And so it's going to show, in an interactive manner, what actually happens. You don't have to click anything; it's going to do it all by itself — it's just a video recording, is all it is. But what this interactive graphic will allow users and visitors to do is actually see all the steps that happen in the Fil+ program and how the storage provider gets block rewards as a result of all this. And so we're hoping that this increases the understanding of new potential storage providers coming onto the network, and that's it — I told you I'd be really fast. All right, I have a feeling this might be added to the Launchpad curriculum very shortly. Awesome, great job. So next we have Sean, who will be talking about long-haul testing of Filecoin proofs. Perfect. Hi, so my name is Sean, I work on the Filecoin proofs team. We maintain the library which does all the proofs for Lotus and Filecoin — so sealing, proof of spacetime, proof of replication. Go ahead and go to the next slide. Right, so the Build Back project was a project that previously existed; when I joined the team I was tasked with figuring out ways to improve the way it was run. So as I said previously, this does long-haul testing of sector sealing as well as proof of replication and proof of spacetime, making sure those parts of the code are robust. There were a few problems with what we were doing. One problem was that tests can take several hours to complete and they require really expensive server hardware, and a lot of native cloud solutions have limitations on the hardware they can run on and how long you can actually use their container-based or virtualized solutions, if you're running on CircleCI. The other problem was that, to get around that, the tests were running on servers in the Protocol Labs data center, and they had to be deployed with scripts, they were hard to monitor, and we had to have separate orchestration in order to pass back test results or notify of test failures.
So there was a lot of manual plumbing that had to be built: a lot of scripts, a lot of cron jobs, a lot of combing through logs on the server to analyze test failures. The solution I proposed was to orchestrate everything using a feature of the major cloud CI platforms called self-hosted runners. Basically, with a self-hosted runner you install the provider's agent on your own server, and that agent handles communicating back with the cloud-based infrastructure. So you install their agent, and then you can do all your orchestration and scheduling of tests from one centralized location. The benefits are that you don't have to build your own notification mechanism, you don't have to build your own UI, and you can use their easy-to-use format, where everything is basically defined in one YAML file, and take advantage of a centralized place for everything. The next slide shows a one-slide demo of the user experience, or developer experience. This is a demo of a test run. At the top level in GitHub you can see we have our badges, and those badges can be placed on any web page; in this case they're displayed from our GitHub repo for the filecoin-proofs project. You can see there's one badge that's passing, and then there's a failure here on the other CircleCI badge. So a developer or someone in the community who wants to see the health of the project clicks on this, it jumps through to the CircleCI dashboard, and we can see there's a test failure. These are the three jobs that are running: we have a memory-leak job, which actually runs on a cloud-based container and does basic memory-leak testing of the filecoin-proofs library, making sure it's not wasting memory somewhere and is cleaning up after itself. The other two jobs, the GPU test and the CPU test, are offloaded to our data center, where one runs on a GPU-based server with an NVIDIA card and the other runs on a strictly CPU-based server doing all of our sealing tests. You can see in the second screenshot there's a failure in the GPU test, and you can just click through to that and see all the tests that ran. In the third graphic you can see the test durations I mentioned earlier: our lifecycle test for a 32 GiB sector, second one down, has a nine-hour runtime. A bit further down you can see there was a failure, which is why the badge is showing up red, in the 64 GiB test, and if you expand that in the CircleCI logs you can see the failure was that the device ran out of storage space, so it wasn't a failure in the test itself. But it's really easy: with just a few mouse clicks you get a top level where you see whether things are green or red, you can click through, open the logs, and everything's nicely formatted. Everything is centralized, with email notifications if something fails, so it's much easier to orchestrate than having tests individually deployed to individual servers. For next steps, we're looking at using IPFS for some of this large file storage; a lot of these are 64 GiB parameter files that need to be moved around, so we want to try dogfooding our own stack to handle the intermediate storage for the bigger parameter files we're using here. That's all I got.
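To make the "one YAML file" point concrete, a scheduled CircleCI workflow targeting a self-hosted runner can look roughly like this; the resource class name, schedule, and test command are placeholders for illustration, not the team's actual configuration.

```yaml
version: 2.1
jobs:
  gpu-long-haul:
    machine: true
    resource_class: my-org/gpu-runner   # hypothetical self-hosted runner class
    steps:
      - checkout
      - run:
          name: Sealing lifecycle tests (long-running)
          no_output_timeout: 12h        # allow multi-hour runs without being killed
          command: ./scripts/run_lifecycle_tests.sh   # placeholder test entry point
workflows:
  nightly-long-haul:
    triggers:
      - schedule:
          cron: "0 2 * * *"             # run once a night
          filters:
            branches:
              only: master
    jobs:
      - gpu-long-haul
```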
All right, great. So next we have the FVM forum with Matt and Zach, and I believe Zach will be presenting. Zach, would you like me to keep sharing my screen, or would you like to share yours? Yeah, you can keep sharing it, thanks Katie. Fantastic presentations so far; it's been exciting to see how everyone grew from when we all met in Palo Alto, it's pretty inspiring. So I'm Zach, I work as part of the FVM DX team; Matt is also working here, and Sarah, so that's the full FVM DX team so far. If you missed the Q&A or need to get caught up on what the FVM is: it's the Filecoin Virtual Machine, and it allows developers to create smart contracts, or external actors, and deploy them onto the Filecoin blockchain to run, with access to actors and storage contracts as a native primitive. Next slide, please. Currently the FVM, and the FEVM, which is an Ethereum-compatible runtime on top of the FVM, are in development, and most of the developers who have questions ask them in Slack. Slack is terrible for discoverability, so a lot of people come in and ask the same couple of questions over and over again. So in the interim, before docs, we want a place where developers can go to ask questions and also read other people's questions, to prevent all of this extra noise. The solution is simply to create a forum, a place for both topics, very similar to the Discourse forums that IPFS already has. And again, this is a precursor to docs: we're going to use the questions and the articles being written on the forum to guide what information we want in the FVM docs when they get released. Next slide, please. It's already live, there's already some content on there, we even have some external content, which is exciting, and a lot more is about to come online. We're waiting for the Iron release of the FVM, which should be next week hopefully, and that adds Ethereum JSON-RPC capabilities, so people will be able to go into Remix, write code in Solidity, and deploy straight to the testnets. You can check it out right now: anyone here can go and ask questions about the Filecoin Virtual Machine at fvm.discourse.group. It's a temporary URL for now; we're going to try to get a more official filecoin.io URL. But yeah, please, if you have any questions about the FVM at all, ask them there. I'm sure a million other people have the same question, and it'll be great to see some content in there. Next slide, please. This is a basic doc we're using right now to organize our content. As you can see, there's a lot planned, and we're prioritizing it based on first principles: explainers first, and as the Iron release comes out we'll have a bit more demos around that point, since it'll be a lot more developer-friendly than it is now. So that's where we're going with content. Next slide, please. Some other future plans: again, we're going to move it to a more official URL; right now we're thinking something like fvm.forum.filecoin.io, to keep it as a subdomain within the filecoin.io domain. We want to link to it from the FVM homepage, so developers can go to the homepage and see everything they need, and eventually the docs will also be on there; that's part of the whole grand plan. We want more community-curated content, so after this presentation and Show Me What You Got we'll be announcing it in the Slack channel that's open to the public, hoping to get some more of those
Foundry developers and others creating content on these forums, and maybe some more integration with Slack and Discord. That's really it for the FVM forums; again, if you have any questions or anything about the FVM, go there first and ask, and me, Matt, or Sarah will be happy to come in and help out. And if there are any questions, feel free. All right, and also a shout-out to Dragan, who came and spoke to this cohort about the FVM and some of the stuff coming up on their future roadmap. Next up we have Waleed, who will be talking about Bacalhau and a monitoring canary. Yeah, thanks, thanks Katie. So I'm Waleed, I'm one of the engineers on the cloud team working on compute over data. If you go to the next slide: Bacalhau is a network that allows users to submit computation jobs that get executed on a trustless network of nodes. A job gets orchestrated, nodes can bid for job execution, some bids get rejected and some accepted, and the end result is made available to the caller using IPFS. Bacalhau is still under development; it's still a new network with very limited traffic, and what we're missing today is a way to continuously monitor and test the network and the different APIs that Bacalhau provides. I would like to have visibility whenever there's any degradation in availability, performance, or anything else that can go wrong. So what I've been working on is a canary that continuously runs different test scenarios against the different APIs that we have. What we have today is a canary that publishes metrics and dashboards and triggers alarms whenever those metrics breach defined thresholds, and I've also integrated these alarms with Slack, so a notification is published to a dedicated channel subscribed to by the operators and maintainers of the project, or whoever wants to join and is interested in knowing the health of the service. If we go to the next slide, here's how the canary works under the hood. The canary is implemented using AWS Lambda, so we have different Lambda functions and each one runs a specific test scenario, whether that's testing certain APIs, such as listing jobs or submitting a new job, or testing different configurations, such as submitting with and without concurrency. The Lambda function calls back to our network, submits the job or calls the API, and then publishes metrics about that specific execution. These metrics include latency and the success and failure rate, which allows us to build a dashboard and create alarms, and those alarms eventually integrate with the Slack channel. If we go to the next slide, we see the end result: on the left-hand side is our canary dashboard, with different graphs for different APIs or different test functions, and on the right-hand side is our Slack channel, where we get notified whenever an alarm is triggered and again whenever it goes back to normal. If we go to the next slide, what I want to call out is that the project is reusable. The whole thing is implemented as infrastructure as code, using the AWS CDK to create the infrastructure, and it also comes with a pipeline, using AWS CodePipeline, to automatically and continuously build any changes to the canary. That includes changes to resources, such as adding a new alarm, changing the dashboard, or modifying a threshold, all of which gets automatically deployed, as well as changes to the Lambda functions, such as adding a new test scenario or changing the implementation of a test scenario.
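As a rough sketch of the shape of one such canary test scenario, here is a Lambda handler that times a single API call and publishes latency and success metrics to CloudWatch. The endpoint, metric namespace, and metric names are hypothetical placeholders, not the actual Bacalhau canary code.

```go
package main

import (
	"context"
	"net/http"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatch"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatch/types"
)

func handler(ctx context.Context) error {
	start := time.Now()
	// Hypothetical endpoint; the real canary exercises Bacalhau's job APIs.
	resp, err := http.Get("https://api.example.com/jobs")
	latency := time.Since(start)
	success := 0.0
	if err == nil && resp.StatusCode == http.StatusOK {
		success = 1.0
	}
	if resp != nil {
		resp.Body.Close()
	}

	// Publish one latency datapoint and one success/failure datapoint.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	cw := cloudwatch.NewFromConfig(cfg)
	_, err = cw.PutMetricData(ctx, &cloudwatch.PutMetricDataInput{
		Namespace: aws.String("Canary/ListJobs"), // placeholder namespace
		MetricData: []types.MetricDatum{
			{
				MetricName: aws.String("Latency"),
				Unit:       types.StandardUnitMilliseconds,
				Value:      aws.Float64(float64(latency.Milliseconds())),
			},
			{
				MetricName: aws.String("Success"),
				Value:      aws.Float64(success),
			},
		},
	})
	return err
}

func main() { lambda.Start(handler) }
```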
The project is fairly well documented in our GitHub repo, so if somebody is interested in implementing this to monitor their own service, please do reach out and I'm happy to help. Finally, a general recommendation: most teams invest in implementing integration tests, test cases that only get executed on demand, whenever there's a new release or a new deployment. We're missing a lot of value there, in the sense that with a slight refactoring, or with proper use of dependency injection, it should be fairly feasible to have the exact same test suite running in both modes: on demand as part of your release pipeline, and at the same time continuously and periodically as a canary hitting your production network, test network, or whatever stages you have in your pipeline. That's it. All right, thank you so much. Next up we have Juan, who is going to be talking about gas consumption in the Filecoin network. Unfortunately Juan had a 15-hour flight delay, so he went ahead and recorded his talk; we'll take a couple of minutes and watch that. I am a research slash data scientist from CryptoEconLab, and today I will be presenting my Launchpad project, entitled "Analysis, modeling and simulation of gas consumption in the Filecoin network." So let's get to it. Recall that the Filecoin network uses an EIP-1559-like mechanism, in the sense that the total amount paid for the gas consumed by a message is given by the product of a base fee, measured in FIL per gas unit, and the number of gas units spent. The base fee is adjusted dynamically according to the equation on the slide, where b_t represents the base fee at an epoch t, G_t represents the gas consumption at epoch t, and G* represents the gas target, taken as half of the maximum block size. Given that in periods of high congestion the base fee increases, miners are incentivized to either delay their messages or wait for the network to be decongested, as otherwise they would pay a larger amount for gas. Similarly, this same mechanism incentivizes miners to include more messages when the network is decongested. Furthermore, this mechanism makes the network resilient against spam attacks: since the network load increases during a spam attack, maintaining full blocks of spam messages for an extended period becomes impossible for an attacker due to the increasing base fee. Thus, if at a given epoch the gas consumption G is smaller than the target gas consumption G*, we say that the network is not congested, and as such the base fee at the next block decreases; conversely, if the gas consumption is higher than the gas target, then the network is congested and the base fee increases. Clearly, understanding the behavior of b_t as a dynamic process is important for modeling and testing different mechanisms in the network. There is a caveat, however: the gas consumption process G_t is itself a random process, depending on several unobservable quantities, such as demand, which are difficult to model or even measure. This in turn makes modeling and analyzing G_t, and hence the base fee b_t, a non-trivial task. Motivated by this, the aim of this project is, first, to obtain key insights on the statistical behavior of gas consumption in the network; second, to develop probabilistic, data-driven models for this gas consumption; and third, to develop a toolset to simulate this behavior, in the hopes that it can be used for other projects in research or the wider ecosystem.
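For reference, here is a reconstruction of the update rule described in the talk (the equation itself appeared only on the slide), together with the normalized gas-usage variable used below; the 1/8 adjustment factor is the standard EIP-1559-style value and is an assumption here rather than something read off the slide:

$$ b_{t+1} = b_t\left(1 + \frac{1}{8}\,\frac{G_t - G^*}{G^*}\right), \qquad \tilde g_t = \frac{G_t - G^*}{G^*} \in [-1,\, 1], $$

so that $b_{t+1} = b_t\,(1 + \tfrac{1}{8}\tilde g_t)$: a positive $\tilde g_t$ (congestion) pushes the base fee up, and a negative $\tilde g_t$ pushes it down.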
To do this, we analyzed gas consumption on an epoch-by-epoch basis as well as on a message-by-message basis. We used Sentinel to query historical chain data, reported at every epoch, from July 22nd to August 22nd, 2022, which corresponds to over 88,000 data points. In particular, we focused on the following data models in Sentinel: first, derived gas outputs, a dataset that contains gas data related to the execution of each message; we also used the message gas economy dataset, which contains aggregate gas data across messages in an epoch; and we used the parsed messages and chain consensus datasets, which provide additional relevant data related to the chain. Let us now present some key insights from our analysis; the link to the full report is shown at the end of this video. For simplicity, we focus our modeling on the normalized expression for gas consumption, g-tilde, since the base-fee dynamics can also be written in terms of it. A positive value of g-tilde represents high gas usage, meaning the network was congested, and conversely a negative g-tilde means low gas usage, signaling that the network was decongested. We begin by examining the statistical behavior of the gas dynamics. Here we plot the time series of g-tilde in the top left, its histogram in the top right, its autocorrelation function, a measure of how strongly correlated g-tilde is with itself as a time series, in the bottom left, and its empirical cumulative distribution function in the bottom right. From here, one can infer that periods of high congestion are more likely, as shown by the peak in the histogram as well as in the time series, since g-tilde is not as often close to negative one as it is to one. In addition, the autocorrelation plot suggests that there is only weak correlation between measurements of g-tilde, suggesting that, roughly, the random process becomes statistically independent of its history every five or so epochs. If we look at the periods of very high and very low congestion, and at the distribution of the times between these periods, we see that their distribution can be well approximated by an exponential distribution, each with a different rate, as shown in the figure. In particular, by fitting exponential distributions to these inter-arrival times, it can be inferred that high-congestion peaks happen roughly every 25 blocks, while low-congestion peaks tend to happen on average every 50 or so blocks. The fact that high-congestion peaks happen twice as often as low-congestion peaks also agrees with the proportion of time that g-tilde spends in a high-congestion versus low-congestion state, measured as 4% and 2% of the time, respectively. We were also able to infer that the gas consumption process seems to be invariant across time scales: in the figures we plot the process g-tilde observed every 10 epochs, corresponding to 5 minutes, on the top, and every 120 epochs, corresponding to an hour, at the bottom, and notice that the statistical properties stay fairly similar across these time scales.
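As a small worked example of the exponential fit just described (an editor's sketch, not the project's code, which is planned as a Python package): the maximum-likelihood rate of an exponential distribution is simply the reciprocal of the mean gap between peaks, so "a peak roughly every N blocks" can be estimated like this.

```go
package main

import "fmt"

// interArrivalRate returns the MLE rate of an exponential distribution fitted
// to the epoch gaps between successive congestion peaks, plus the mean gap
// ("a peak roughly every N blocks").
func interArrivalRate(peakEpochs []int64) (rate, meanGap float64) {
	if len(peakEpochs) < 2 {
		return 0, 0
	}
	var total float64
	for i := 1; i < len(peakEpochs); i++ {
		total += float64(peakEpochs[i] - peakEpochs[i-1])
	}
	meanGap = total / float64(len(peakEpochs)-1)
	return 1 / meanGap, meanGap
}

func main() {
	// Hypothetical epochs at which g-tilde exceeded a "very high congestion" threshold.
	peaks := []int64{10, 32, 61, 84, 112, 133, 160}
	rate, mean := interArrivalRate(peaks)
	fmt.Printf("mean gap ~ %.1f epochs, exponential rate ~ %.3f per epoch\n", mean, rate)
}
```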
We now shift our attention to gas consumption by message. On the left we plot, in log scale for visibility, the average block proportion taken up by each type of message, grouping the messages into two categories (one shown in orange), and on the right we plot the mean proportion for these groups. As we can see, the control-plane messages dominate the block space, as they can be up to a couple of orders of magnitude larger than the data-plane ones: on average, data-plane messages account for about 5% of the gas spent, while control-plane messages account for the remaining 95%. Here we present the top 10 messages by gas consumption. If you're familiar with the inner workings of Filecoin, you probably won't be too surprised to see that the top three messages are PreCommitSector, ProveCommitSector, and SubmitWindowedPoSt; these three messages make up about 87% of the average gas consumption. Looking at the correlation plot among these top 10 messages, there does not seem to be any strong correlation among them, except for a mild statistical correlation among the messages previously mentioned. Can we use this data to come up with mathematical models to simulate gas consumption? The answer is yes. In fact, we came up with several models of increasing mathematical sophistication, ranging from histogram sampling to Markov-chain stochastic processes, in order to recreate the random behavior of the gas process. Here we show the time series and histograms for the simulated data from the different models, both at the block level, shown at the top, and at the message level, shown at the bottom. Notice how the histograms and time series of the top plots resemble those of the measured data quite well; these histograms again resemble the ones shown at the beginning. We also used more advanced statistical methodologies, such as hidden Markov models, to model the unobservable demand process that drives gas consumption up or down. Here we fitted such a model to identify five possible demand states: very low, low, medium, high, and very high demand. Once such a model has been fit, one can use it to simulate both the demand and the gas process, as shown in the figures at the bottom, where the process jumps between each of those five demand states. This talk was heavily condensed from the report shown in the first reference. We're currently working towards publishing a Python package with the modeling code, in the hopes that it can be of use to the wider community; if you're curious about it, you can DM me for more info. Thank you so much for your time.
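A minimal sketch of the simplest of the models just described, histogram sampling, combined with the base-fee update rule from earlier: draw a normalized gas usage value from an empirical distribution and push it through the update. The bin weights and starting fee below are made-up placeholders, not the fitted values from the report, and the actual modeling code is planned as a Python package; this is just an illustration.

```go
package main

import (
	"fmt"
	"math/rand"
)

// sampleGTilde draws a normalized gas-usage value in [-1, 1] from a toy
// empirical histogram. The bin centers and weights are placeholders.
func sampleGTilde(r *rand.Rand) float64 {
	bins := []float64{-0.9, -0.5, 0.0, 0.5, 0.9}
	weights := []float64{0.05, 0.15, 0.30, 0.25, 0.25} // sums to 1
	u := r.Float64()
	cum := 0.0
	for i, w := range weights {
		cum += w
		if u <= cum {
			return bins[i]
		}
	}
	return bins[len(bins)-1]
}

func main() {
	r := rand.New(rand.NewSource(42))
	baseFee := 100.0 // arbitrary starting base fee
	for epoch := 0; epoch < 10; epoch++ {
		g := sampleGTilde(r)
		baseFee *= 1 + g/8 // EIP-1559-style update with a 1/8 adjustment factor
		fmt.Printf("epoch %2d: g~=%+.2f  base fee=%.2f\n", epoch, g, baseFee)
	}
}
```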
All right. Next up we have the marketplace for SPs, which is really exciting. Would you like me to keep sharing my screen, or would you like to present? You can share your screen. Okay. Hello everyone. So, the problem we're solving: I'm from the SP and clients team on the ecosystem growth side, and the problem we have right now is that we're getting a lot of feedback from storage providers that they do not know where the one-stop shop is to get all the resources or information they need, whether that's the programs we have, or the lenders and vendors adding resources available in the market. Every time we meet up in person we list out the new programs coming up and people take a photo, but they're not really going back to track what's going on or how to join them. So that's the pain point for storage providers, and for clients as well. As I'm listing here, the problem we're solving is, first, that we want to make sure everyone can find everything ecosystem-related in one place, and second, that SPs can start and grow by having a one-stop shop to get the resources they need. How does it work? Realistically, we're thinking of creating a place involving all the PMs with their products, plus solution builders and vendors, and making it easy for them to add their program, product, or resource by filling out an application in the marketplace, which would then be approved and validated by the PL team. The opportunity, as I mentioned earlier, is to create a marketplace, you could say a website, for SPs to find resources, programs, vendors, and the ecosystem. Next slide, please. This is the first version of the software architecture we have in mind. Basically it's a place including program A, B, C, vendors X, Y, Z, or different solutions, and we direct SPs and clients by their size: for example, we have Slingshot, Evergreen, and so on, and based on the SP's current size or their goal we can lead them to the programs or resources that fit them, and from there to each individual website or landing page for each program. That's the idea. But in the meantime we also realized that, compared to SPs, the clients have similar needs. Not exactly the same, because they're not using the exact same resources, but in the end they need to meet each other: the storage provider stores data for clients, and the client is looking for storage providers to store their data. So, next slide please: we had a conversation with the data onboarding team a few weeks ago, and we decided that besides the SP side there's potential to really collaborate with the clients team, which is Shilin and Joao's team. We have our main website launching right now, sp.filecoin.io; if you have time you can jump on it, and the main resources for SPs already exist over there, but it doesn't include everything they need. So instead of serving only SPs, we're thinking of creating a two-track direction: the provider side, and the client side for storing real data. Based on their size there could be multiple programs, resources, and vendors, and those could potentially be listed and matched to SPs; and based on their needs, their goals, and their qualifications we'd be able to match the SP with the client's needs, so in the end they can meet each other. I drew a little line right here as an example: if there's a medium-sized storage provider willing to store data for clients, and they're big enough, they have the capacity, and they have the collateral, and a client is looking for qualified SPs to store real data for them on a regular basis, this system would be able to match them and direct them to each other. The additional benefit is that right now we have challenges verifying each SP, because there are about 4,000 nodes in the ecosystem, and an SP could have multiple miner IDs under one SP, so it's a big challenge for us to collect the real information and verify who the SPs are. This way we'll be able to, first of all, track
who signs up for the program and wants to join the marketplace, and second of all, when we're matching SPs and clients we'll be able to validate and verify the miner IDs against the SP account as well. So it's really beneficial for both sides: on the ecosystem side we can actually see who is involved, track who the real SP behind the miner IDs is, and we can also match clients with the right SPs to store their data, which in the end is going to improve ecosystem growth. Our next steps, next slide please: first of all, we need to figure out the scope of the project. At first we thought about this as a project for SPs, but we realized the client side needs something similar, so we're thinking they should work together either way, because one is the provider and the other is the one storing. So we're thinking about how to scope this project together with the client side, and also break it down into different phases, because it's going to take a little while to reach the final goal of the marketplace. We might start with short-term solutions and then upgrade step by step for different functions, instead of making the one-stop shop one huge project; we need to break it down into small pieces and finish each goal. Second is determining the area of ownership: at this point we're basically working on one project between two teams, so we need to clarify who is going to be the lead, who is responsible for which part, and how often we're going to sync, weekly or bi-weekly, and how that's going to work. Third, we need to define the opportunities and challenges and find possible solutions, because when we're matching SPs and clients there are going to be a couple of problems along the way, especially identifying which SP and which client should be matched by qualification, so we need to define possible solutions for those challenges. Next, we need to set up a timeline and review the project on a weekly basis; in order to push the project forward we need more regular working meetups or calls, so the project moves forward consistently. Next, we need to sync with the data onboarding team, which is Joao's team, and determine the best way to collaborate with each other. Last, we still need to complete a project proposal for the internal team; funding-wise we need more support, and we also need to find the right team member for the web development side, and we might need to look for a third-party contractor to finish the web design. But yeah, I think this is a good step to start. That's it. All right, awesome. Next up we have knowledge graphs with Alex. Hello Alex, would you like me to keep sharing my slides, or would you like to share yours? You can keep sharing yours. All right. Okay, so with the Protocol Labs network knowledge graph we seek to enable access to integrated information within and between the Protocol Labs network members and participants. The way we will do this is by defining and implementing the structures, processes, and
systems necessary to do so. We can go to the next slide. Okay, so the Protocol Labs network is growing, and we need systems to help the network scale. We've seen this a lot recently, where we can't actually find some piece of information or connect the dots of who is doing what. Some examples: as a member of the Outercore organization, you would probably want to find existing reports easily and quickly. As a member of Starfleet, you would probably want to see and understand how the projects connect and how they align to the strategic goals of the organization. As a member of the Spaceport team, you would probably want to quickly understand who needs help and quickly assist those teams and founders. As a member or participant of the Protocol Labs network, you probably want a good view of the information and helpful resources available to you. And another example: as a member of the network funding team, you would probably want to quickly scan for valuable projects that you want to support and that would be valuable to the network. So, to do this, you can go to the next slide. Okay, so we have a lot of information available, a lot of different resources, a lot of projects, a lot of contributors, and the data comes from everywhere and it's not always up to date. We have data across different platforms: for version control we have data on GitHub, and some on GitLab, although that is less common in our ecosystem; for CRM we use HubSpot, Airtable, Notion, it's all over the place, and we use GitHub for some of that as well; for communication there are a lot of options that we actually use, like Discord, Telegram, and Zoom, so we have a lot of data scattered around those channels; and for knowledge bases we have a lot of information in Google Docs, Notion, Confluence, which is less common for most of us, and Coda. With all that data available, we have to start thinking about how we can actually answer questions like: how do the different organizations within the network interplay, and how do they connect to the goals of the network? Which projects in the network collaborate with each other, and for how long have they been collaborating? Those are questions that are useful for insight into where you want to go with the network and where you're going; whether you like it or not, understanding where you're going is helpful. You can go to the next slide. Okay, so how can we get those insights and actually extract knowledge from all the data? The data, as I said, is all over the place, and to get any kind of insight we first have to integrate it and do some reasoning around it: inferences, relationships, and so on. As previously mentioned, we have challenges there, because the number of platforms and services on offer is growing exponentially, and teams are using whatever is best for their work and workflow; each platform has different APIs, different data models, different everything. So you have to work around those challenges and figure out a way of fixing that. Okay, you can go to the next one. So what's the solution? The backend layer is the knowledge graph; this is where the knowledge graph comes in. A knowledge graph is basically data with context and integration: you take the data from the data sources, do some data processing and integration, run some pipelines, normalization, standardization, and then you connect entities from different
data sources to a single, common language and common schema, so you can actually make connections and understand the relationships between those data points. Then, within the knowledge graph, since the data is already integrated, you can enrich it using other data platforms, and then you expose it; we will expose the knowledge graph as a GraphQL API so it can be consumed from a user interface, so you can build web apps, integrate this data into the different data platforms, and build dashboards, analytics, and so on. Okay, you can go to the next one. This is the first prototype of the data modeling for the graph. It's very basic for now, but it can expand with time: you have a person who has a role and a skill and may be within a team, and a person or an organization may need a service or someone with a certain type of skill. That's where the knowledge graph can run some graph algorithms to produce recommendations and matchmake those opportunities, and we can also use it to identify collaboration opportunities between network members, and so on. You can go to the next one. Okay, so the next steps would probably be a data platform, so you have a user interface to better manage the metadata, data models, schemas, and so on. You would probably want a feature for collaborating on and sharing those data packages and data models, with some sort of forking like GitHub does. We can compose the graph as a federated supergraph that can be enriched and extended from subgraphs. And we would also probably do some ecosystem integrations, for example with IPLD to handle the data models, schemas, and mappings, and Bacalhau for the data processing pipelines. I guess that's it. I want to thank Masi for the help and support, and thanks for the opportunity and for all the support from the Launchpad team and everyone in the ecosystem. All right, awesome. Yeah, and like Lindsay said, knowing those numbers is actually incredibly helpful, so that's knowledge we'll all take with us. Next up I have Bunny Slope with Anastasia, Bryn, Caitlin, Caitlin, and Megan. Hi everyone. I just want to say that all of these projects so far have been so amazing; it's a really tough act to follow all of these incredible projects, but I will make an attempt. Our project is entitled Bunny Slope. The rest of my team is having a blast in NYC right now, so I'm the lone wolf here to present our project, and I will try to do it justice. Our goal with this project was to create a living resource destination for Filecoin and IPFS, with use cases and learning content. Much of our team is working with the Filecoin Foundation and the Filecoin Foundation for the Decentralized Web, and in my work with the Filecoin Green team we're interacting a lot with people who are not adept in this Web3 environment and maybe don't have the knowledge that a lot of us in this space have, even with words like "decentralized": what does that actually mean? So we're interacting with a lot of people who don't have that kind of background knowledge, and I know for me, in my interactions with those folks, I'm constantly sending links and resources. This project stemmed from that need: we have all this information across all of our resources, so much information, and we don't really have one
good place that aggregates all of it. So that's what this project stemmed from. Really, if Web3 wants to scale, if Filecoin, IPFS, and all the things we're working on want to scale beyond where we're at right now, we need to enable knowledge in a really accessible, low-barrier manner. So that is where Bunny Slope comes in: it is an intro to Filecoin, to all of the different use cases we currently have, and some learning content to get people's feet wet in this world. Next slide. This originally started as a way to aggregate case studies: a lot of the case studies and use cases for Filecoin right now are just living in different blog posts, in press releases, and in the brains of people who are part of the Filecoin network. So we first started by taking those case studies and aggregating them into a dashboard, and actually, Katie, would you mind if I share my screen so I can walk through that? Sorry, and I want to apologize for all my tabs in advance. Here we go: we started by taking all of the use cases that we know of and aggregating them into this Airtable. You'll see a lot of the work that the Filecoin Foundation and the Filecoin Foundation for the Decentralized Web are doing, a lot of the projects they're working on, and some of the projects we're working with on the Filecoin Green team, aggregated into this Airtable. Our hope with aggregating them into this type of interface, into Airtable, is that we can create something that doesn't necessarily have to be maintained by just us: people can come in, fill out this form, enter information about their project, and we can keep this as a living, breathing thing, because it is really useful to have these use cases, again, for people who maybe don't understand what the use case for decentralized storage could be; this gives a really good picture of all the different work going into it. So we have short descriptions of all the different case studies, links for more information, the status of the projects (you'll see a lot of these are still ongoing), the types of data being stored, as well as the current size of the data sets, which is something I find really interesting, especially working on the environmental side of things: I want to know how much environmental data is being stored, so it's really cool to keep track of that and have a repo that holds those things. So that is one aspect of what we worked on. We also aggregated a bunch of onboarding resources, again for a less technical audience; there's tons of information out there, which is amazing, but it's not always easily findable or accessible. So we took the case studies and the resources we aggregated in that Airtable and moved them into a Notion page, and the thing I think is most beautiful about this is that you can pick how deep you want to go, because with all of these concepts you could spend days going down the rabbit hole finding all this information. This is a really good intro, and you can dig deeper and deeper and deeper. Our hope is that it can have levels to it: if you just want basic reading, here's one resource; if you want to really dive into a three-hour-long talk, or some of the YouTube videos
that we have, that is another option as well. So we took the case studies and the resources from that Airtable, and again we want to keep the Airtable system so this stays something living and breathing, and now we have this Notion page with some of these resources: basic definitions and information. We're still building on this, this is our MVP, but we have all the case studies aggregated here, with more information from the different blog posts, so the sources that were living just on blog posts before are now aggregated into something a little easier. I also did a piece on the environment, a 101, because our work at Filecoin Green sits at an interesting intersection of Web3 and the environment: there are people who know a lot about Web3, people who know a lot about the environment, and only some people in that in-between space, so we aggregated some of that information in here. We have a glossary that I'm building, which our team in a recent meeting said would be really useful to have, and I said, perfect, I'll throw it into my Launchpad project. So again, it's a really nice way to aggregate all the different projects we're working on and all the different information for people to interact with. As far as next steps: we need to finish cleaning up the Notion page and linking all of those resources in there, and then we want to come up with an action plan to maintain the hub with new use cases and updated learning content. Again, we don't want this to be something that we have to constantly maintain ourselves, but rather something everybody can build together, and so we also need to get buy-in from teams to keep this a living resource. So thank you, and sorry if I went a little over. Wonderful, thank you so much. Next up we have Sergei, talking about Piranha. Hi everyone. I apologize for the background noise; I've had a really long morning with late flights. Can you hear me well? Yes? Okay. If you can share the slides please. Yeah, so my project is called "Organizing knowledge sharing for the Filecoin community." I'll start with an observation about how knowledge is being shared and managed in the Filecoin community today, and it applies not only to Filecoin; this is what we observe for the majority of Web3 projects today. Most of the community knowledge is exchanged between members in various messengers like Discord, Telegram, and Slack. Filecoin has a Slack, Protocol Labs has a Slack, there are Discord channels for the Protocol Labs network, there are some Telegram channels related to Filecoin, so all of those communities are communicating in siloed channels that are not searchable. The knowledge stored there is not structured and it's not curated by the community, so basically it's not really usable, but those channels store lots and lots of information. We did an analysis of the Filecoin Slack and found 380 channels today, which is very impressive. So why does this happen, why is there no good tool that solves the problem of knowledge sharing? We believe there's simply no tool today that satisfies the needs of Web3 communities that are growing very fast and that are more complex than a typical organization that is
providing just a single product. It's usually a collaboration of many teams working together, each team pretty independent, with the resources of each team also maintained independently, so we didn't see any solution that would work well with distributed organizations, large organizations like Filecoin. Can we go to the next slide please? For someone who is not familiar with Piranha, our mission is to build an effective knowledge-base protocol specifically focused on Web3 communities. The protocol itself is fully decentralized, built on blockchain and on Filecoin and IPFS; all of the content is stored in a distributed way and owned by the community itself. It also provides various incentives for users to contribute, in the form of a token that will be launched later and various NFTs that are rewarded to users, and we are planning a collaboration with the Koii network to reward users with attention tokens as well. So we're trying to align with the incentives in Web3 communities and follow the community philosophy, and what's most important is to satisfy the community's needs for knowledge sharing. Can we go to the next slide please? Originally we thought we could just start a single community for Filecoin on Piranha and create an environment where resources could be stored and community members could come in, exchange questions, discuss things, and post tutorials, but we quickly realized that the Filecoin organization is not that simple. We started analyzing the various projects and resources within the Filecoin network and already discovered over 200 projects and resources, so it definitely doesn't fit into one category. After we compiled the list of resources, we realized they need to be broken down into categories, because one single community is not enough.
There are different teams managing those projects and resources, and so on, so after compiling those resources we broke them down into categories. But the way we want to break it down, instead of having separate communities, is to have those communities live under one umbrella of the Filecoin network. Can you go to the next slide please? So we've already started building this out on our protocol. The plan itself is broken down into communities: for the Filecoin network we are planning a master community that we are calling, basically, Filecoin Network. Can you go to the next slide please? This master community would include posts and resources from all of the communities related to the Filecoin network; it's a kind of parent level that aggregates all of the resources we find related to the Filecoin network. Next slide please. And under that parent community we see various categories of sub-communities dedicated to specific sub-areas within Filecoin itself: IPFS, various projects related to IPFS like libp2p and IPLD, separate communities dedicated to storage providers, a community related to Fil... All right, while we have more folks joining in, Sergei, if you just want to wrap up real quick. Yeah, yeah, so I'm basically done. The next step for us is to finalize the changes on our side, on the protocol side, to support the various levels of communities, and to set up all of the resources we compiled: aggregate them, populate them, and organize them into that structure. Okay, awesome. Next up, last but certainly not least, we have Julian, who is going to be talking about speeding up libp2p. Hi everybody, I'm Julian. I worked with the libp2p team. Maybe everybody already knows that libp2p is the module that ties things together. Do you want to present the slides? Yeah, thank you. So this is the piece of software, or library, that ties every host together, and it carries the traffic of IPFS. One sticking point of our IPFS network is that it is too slow; I mean, it's not too slow, but speed and latency have always been a sticking point for us. So the goal of this project is clear: we want to reduce our latency and speed up the network. Can we go to the next slide please? This is a high-level overview of what happens in the libp2p module itself when we try to establish a connection between two different nodes. Basically, we establish a network connection, like a TCP connection, and then on top of that TCP connection we add a lot of different stuff. The first thing we add is security, because we don't want to send our traffic in clear text, so we secure it with encryption. We use the commonly used security protocols TLS or Noise; you can see we support both of them and choose one. The first step, as you can see in the multistream selection on the second layer, is that it selects a security protocol, either TLS or Noise, and after that we do the security protocol handshake, meaning we negotiate encryption keys and what kind of algorithm we'll use to encrypt the traffic. After the handshake is done, we run another round of multistream selection to do multiplexer selection, because on top of the security we also want to reuse the connection: we don't want to establish a new connection every time we want to send a piece of traffic between two nodes, because libp2p supports different applications and different modules.
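To make those pieces concrete, this is roughly how a go-libp2p host is configured with the two security protocols and a stream muxer that multistream-select then negotiates. This is an editor's sketch against the public go-libp2p options, not the project's patched code, and import paths vary by go-libp2p version.

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	yamux "github.com/libp2p/go-libp2p/p2p/muxer/yamux"
	noise "github.com/libp2p/go-libp2p/p2p/security/noise"
	libp2ptls "github.com/libp2p/go-libp2p/p2p/security/tls"
)

func main() {
	// Configure the host with two security protocols (TLS, Noise) and a stream
	// muxer (yamux). These are the things negotiated over multistream-select
	// during connection setup, as described above.
	h, err := libp2p.New(
		libp2p.Security(libp2ptls.ID, libp2ptls.New),
		libp2p.Security(noise.ID, noise.New),
		libp2p.Muxer("/yamux/1.0.0", yamux.DefaultTransport),
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()
	fmt.Println("host created:", h.ID())
}
```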
So multiplexing enables different applications to reuse the same connection: we exchange another round of multistream selection for the multiplexer and then instantiate a multiplexer on top of that, so we end up with a multiplexed, secured connection between two nodes that we can now use. Now we want to speed that up. The proposal the group had is to collapse the multiplexer selection and the security handshake, which is marked by the blue shade here, into one step rather than two. That saves us one round trip, because we save the extra negotiation marked by the final two yellow labels; we don't have to do that anymore. If we can piggyback the multiplexer selection onto the security handshake, we can do that, and the way we do it is by making use of the early-data support in some of the security protocols, in this case TLS or Noise. Can we go to the next slide? I can show some results there. Thank you. This is what we did: I inserted the multiplexer selection information into the TLS handshake. The log lines starting with angle brackets are what I inserted into the library to show what's actually happening; they're not going to be in the production code, they're just for demonstration purposes here. You can see that when we do the TLS negotiation we already have the information about the two multiplexer choices, in this case the mplex and yamux multiplexers. We feed that into the TLS handshake, and after that is inserted you can see that once the TLS negotiation is done, the protocol selected here is the mplex multiplexer. On the top is the server and on the bottom is the client, and you can see that we can select the multiplexer protocol by piggybacking on the TLS negotiation. So if you think about the IPFS network as a geographically distributed network with thousands of nodes, and you envision a piece of data needing to traverse the network and get distributed to its destination, you can see that the end result is very noticeable for users, because the reduction will be in seconds; users get their data and get a faster response. Yeah, that's what we had here, that's all I have. Thank you so much. All right, great job. So I won't keep you folks any longer, I know we're a couple of minutes over. Congratulations everyone, the cohort is officially launched; absolutely amazing projects and learning over these past six weeks. And again, for folks who are joining async because you're in Singapore, thank you so much for watching this later. Moving forward, for my residents and folks who are still here: please vote for Best in Show, and tell other folks to vote too. We have different categories, and this is a QR code, just like maybe some of you used when voting for American Idol back in the day. Additionally, we are going to have a learning credential, which is really exciting: building off of Web3 and some of the themes around here, if you complete the post-test you get this learning credential, so please, please, please complete that and get your Launchpad learning credential from CERTI. Additionally, it is up to you now: one of the best parts of Launchpad is the fact that we don't do this alone. The reason this program is successful is that we have a lot of folks who volunteer time, resources, and knowledge to support people as they come into the network. So as you move forward in your journey in
the network, please, please, please think of ways to give back and be a strong advocate for Launchpad. Don't forget the time your mentors spent supporting you; think about being one. I will drop the sign-up link to become a mentor in the chat as well; our next cohort starts on Monday, so really soon. Additionally, for those going to Lisbon, we will have the Launchpad social, and we invite all of you to attend; we will also be at IPFS Camp. For my current residents: tomorrow we will be revealing Best in Show at the retro, so please, please, please join that as well. Congratulations everyone, thanks so much. You're welcome. Great job everybody. Yay. Yes, and a huge, huge shout-out to the rest of the Launchpad team, which does an absolutely phenomenal job: again, that's Marco, Hannah, Annal, Lindsay, and Dave, our new cohort manager, and Carla as well, who definitely keeps us all sane. So have a great day everyone. Thanks, see you guys. Bye.