Well, welcome to Show Me What You've Got. We're going to run through a high-level overview of what we'll be seeing and sharing today, and a bit of background on what we've been up to in the past six weeks, which got us here. There's a little agenda. First of all, as we work through today: this has been shared with the cohort in Slack, and we'll proceed in the order in which the slides have been arranged, which is the order people uploaded their content. Like I mentioned, some people have other places to be and it's getting late in some regions of the world, so we'll stick to that order. I'll share my screen, and if you'd like to share yours, just let me know and you can take over sharing responsibilities; I think we have one or two demos scheduled, so some people will be sharing their screens and working through what they've created. In the top right-hand corner of the screen at the moment you'll see a QR code, and that'll appear a few times throughout the presentation. What that is: there are some awards that are going to be given out based on the work that's happened and the presentations we're going to see today. If you scan that QR code (it'll come up again at the end of the presentation, and maybe once or twice before then), you'll get a list with options to vote for a variety of awards, from best technical presentation to best collaboration to most creative. The winners will be announced tomorrow in our final weekly sync of the cohort. This cohort we had 34 residents, I believe, from four or five different organizations, spanning regions from South Korea across North America all the way to Western Asia and the Middle East. We met in Lisbon for CoLo Week, and we'll see a really short but very cool video recap in a second. The first three weeks of our six-week cohort were mostly focused on learning the content (IPFS, IPLD, libp2p, and Filecoin) before meeting in Lisbon. I've jumped into a bit of what Launchpad is right there, but for those of you joining from across the network who maybe weren't involved in the last cohort, or heard about Launchpad in Lisbon at one of the events: Launchpad is a six-week, full-time onboarding program designed to train, develop, and match technical talent at scale with opportunities in Web3 across the Protocol Labs network. We do scaled hiring and onboarding, and we build community. The images at the bottom there are the wonderful team that I get to work with who make this happen; some of them are on the call here. If I just run through the names quickly (the team is growing), from the top left down to the bottom right we've got Christian, Molly, Brooke, Carla, Walker, Lindsay, Snow, Katie, Hannah, myself, Enol, and Marco. None of this would be possible without the contributions of those team members, so when you see them, thank them. Drop them a message on Slack to say how wonderful the events in Lisbon were, how meaningful the interactions were, or how good the quality of the curriculum was. This is cohort six, and we started cohort seven this week as well, so it's been a bit of a whirlwind for the people whose faces are on screen there. We're excited to host this event today and share what's been created with the network. Here's one of the photos from Lisbon.
There's the group that managed to jump in Ubers to shuttle between events and hotels, and this might have been the day where we had the most people at one time, so thanks everyone for managing your time and trying to make the sessions. Without further ado, let's watch a little recap of CoLo Week in Lisbon. I hope the audio will come through; let's see how it goes. Did you try resharing and then clicking the checkbox that says share computer audio? Maybe it's just music, though. Sorry if the audio was sharing poorly there, and obviously transmitting video over this seems to work better for some than others; we'll share the link so you can enjoy that as loud as you want. A little smooth video. Are we back to the presentation? Just to make sure that I'm not sharing something irrelevant here, can I get a thumbs up? Okay, I see some nods, awesome, thanks. All right, let's move on to Show Me What You've Got. We're going to run through this; like I said, if you'd like to share your screen, just let me know. Yeah, let's do it. Sarah is up first. First of all, a huge thank you to the Launchpad team. It was an amazing experience, and I was at a dinner party the other day and someone asked me what decentralized storage is, and I gave an answer that actually sounded like I knew what I was talking about, so thank you to you guys. And it was so nice to meet everyone in person. What a wonderful experience. For my project, I thought that I would do something that's directly related to my role. I joined recently as the managing editor on the Spaceport team, and part of my scope, basically, is to help increase the cadence of content on the Protocol Labs blog. When I joined, it would be updated once every maybe four to six months; there would be a flurry of posts and then nothing for a little while again. So I wanted to help introduce a bit of structure, and we thought to do that with a low lift at the beginning. PL also has an amazing archive of all of these talks that have happened through Funding the Commons and things like LabWeek and PL Summit, so we thought to repurpose some of that through the written word. We created a series called The Transcriptions. The idea is to take some of these talks and do kind of executive summaries or key takeaways, so that someone who's really busy and doesn't have time to watch an hour-long talk on, you know, a really technical deep dive can take a glance, skim through something, and then go through a full transcript that includes video timestamps to take you to a section of the video that you might want to see. So goal number one was to identify around 10 talks to begin with, and that includes topics that are more introductory, things like the importance of public goods funding, and then moving along to more technical things eventually, things like the FVM. One of the challenges with that: there are so many talks, and it's difficult to play favorites. Of course every team thinks that the talks they're curating are the most important ones, and we want to make sure we choose a nice selection. So we had to meet with different teams, talk to them about which talks we should highlight, and then create a structure. We aim to publish every other week until the end of the year, and then starting in January ramp up to weekly.
One of the learnings was that the talks are super long, but they're full of value and insights. So how do you take something that's an hour long where so much of it is interesting? Rather than just straight transcripts, we did an executive summary at the beginning that includes key takeaways, top insights, maybe really interesting quotes, and then have the complete transcript below for whoever wants more. The next step, which we're excited about: The Transcriptions covers a lot of talks that deal with theory or high-level topics, and we thought to complement that with interviews with founders from within the PL network who are, you know, putting practical questions to some of these theories. So it's kind of like introducing a concept and then the people who are applying it. We're going to do that through a mix of written content and video, and that should be launching in January. It's a lot of work, and we're bringing on writers and copy editors, who are contractors, to help us out. So we'd love your support: if you read the content, click on the links so that we up our page views. But most importantly, give us feedback; if you attended a talk, or you hear one that you think is really fascinating or important, please reach out to me. The next slide just shows you a sample of the layout and design, the kind of branding look and feel that we have, which translates across social media as well. This is part of a wider plan to establish thought leadership in this space, so that when people think about Web3, blockchain, and some of these important things we're working on, they think of Protocol Labs. So that's it for my project, and thank you so much for your time. Thanks again for everything; this was such an amazing experience for me. That's awesome, thanks Sarah. Looking forward to seeing that develop even further over the next few months. Lucky's up next. Just a reminder, there's that QR code; it'll appear throughout, so you'll have multiple opportunities to vote. Probably best to wait till the end, but you can open it up and have it handy if you'd like. Yeah, would you like me to keep sharing my screen? Yes, please. Thank you. Okay, and you can take it away. Great. Hi everyone, good to connect again. I'm back in Canada and the cold is beginning to set in, but hopefully I will survive my first winter in Canada. So, as we have been discussing, and as I've been like a governance champion across this cohort: we know that the Filecoin Foundation does the governance work for the Filecoin ecosystem, and at the moment we don't have a publicly available roadmap that shows our record of work and what Filecoin Improvement Proposals we have. At the moment, most of the governance work is around Filecoin Improvement Proposals, where members of the Filecoin ecosystem can propose technical or non-technical changes to be included in a network upgrade, which is how things should happen within Filecoin. And at the moment, people are not sure how to engage with the Filecoin Improvement Proposal process. They're not sure where they can find the information as to what the current FIPs are (I'll call them FIPs to make it easier): what FIPs do we have available, what do they talk about, how can I be part of the governance process? So what I'm doing at the moment is identifying and creating a publicly available ledger that shows what FIPs we have.
Importantly, it shows who is championing each FIP and what stage that FIP is at, so that anyone can publicly go in and check that information, and so that you also know a bit about the trajectory of the network, some of the work our developers are working on, and what improvements we're going to be seeing going forward. What I've learned so far is that governance is a critical aspect of any open-source protocol, including Filecoin. It's not very popular at the moment, but the conversations are happening, which is great. It shows that the network is maturing, because as the network matures, governance becomes a political thing and people all have different opinions and ideas as to how the network can improve. The roadmap should be useful in that we can see what improvements to look out for in the next year, when we're hoping to see these proposals land in a network upgrade and at what time, and how you can engage and read more about what these proposals are. One of the challenges that I had during CoLo Week was identifying the best place to house this roadmap so that it's publicly available; so, any colleagues who are good at, you know, managing roadmaps or resources or tooling like this, please reach out to me so we can discuss how best to publicize this roadmap. At the moment, I am building that out. I have mapped out all the work across quarters for 2023, but it's still an internal document that I'm not ready to share yet, maybe until two weeks' time, but I can share a link where you can take a sneak peek at what improvement proposals we have at the moment. Next slide, please. So like I said, the roadmap will cover FIPs, the Filecoin Improvement Proposals; it will cover ecosystem development and engagement with members of the community. The FIPs section should show, publicly, what FIPs are in flight, so that anyone can read up on them and understand them, especially technical colleagues or anyone who is just curious about what's going on. Part of that work is also to cover governance workstreams: FIPs are not the only workstream the governance team at the Filecoin Foundation is doing. We also have events and activities we're hoping to launch to publicize governance and to make governance popular amongst the ecosystem, and that's also one of the things you'll find in that roadmap. That's the next slide. Progress so far: like I said, I have drafted the first roadmap. I will share a link where you can have a sneak peek, but it's not publicly available yet. There you'll see some of the Filecoin Improvement Proposals that will go into the next network upgrade, Shark, which is good, as well as some other Filecoin Improvement Proposals that are being drafted and worked on by several technical colleagues at the moment. So you can read up and know what they are all about, and hopefully they can go into the next network upgrade next year. Next steps: again, I need to find the platform to house this roadmap so that it's easily accessible by everyone, regardless of technical background or proficiency. Yes, and I think that's it from my side. Thank you so much. Thanks Lucky, I'm excited to see where that one goes as well.
And Lucky gave an unconference session in Lisbon, which was cool; we got to learn a little bit more about the work of governance. Allison's up next, from Network Goods. I'll hand it over to you, Allison. Hi everyone, good morning, afternoon, evening, or middle of the night; I can't believe the time zones that we have here. It was so nice to be in Lisbon at the same time. So, I am the people ops lead for Network Goods. And for those who made it through everything, which I believe you have because you're here, you'll note that in the Launchpad curriculum there was a single presentation by Matt, our head of network funding, to talk about Network Goods as an organization and specifically network funding. And I wasn't satisfied with that. So I'm putting together a curriculum element for my team. You can move to the next slide for some background. Network Goods was really created as the evolution of the PL research team, which no longer exists as the same entity, and it became this combination of research, moving more towards meta-research, as well as funding for public goods. Our mission is to engineer tools and opportunities for revolutionary coordination systems. Not a lot of people know about us here at PL or in the PLN, and we really want to up our PR within the network; public relations, for those who don't know the shorthand. So I'm putting together, and have started, this learning journey, and I can give you the objectives on the next page, I believe. These still need to be refined; I will probably talk to some of our expert learning and development minds, including everyone who works at Launchpad here. But right now the plan is to have seven learning objectives, moving from an understanding of just what network goods are, and public goods, and how they're similar but slightly different, to being able to conceptualize and understand all of the different parts of our organization, which includes network funding, which you've all learned about, but also network research and research acceleration, which are other areas, and understanding a whole bunch of projects that we do on the Network Goods team. Ideally, there would be a high-level understanding of some of our main projects; if anyone was able to attend Funding the Commons in Lisbon, you heard a lot about Hypercerts, for example. I'm not going to read through each of these; I think you can do that async. But that's really the overall picture. Yes, everyone has a different definition of PR; that's right, pull requests. I clearly am not an engineer, but thank you, Marco, for sharing that with me; I'll be more careful with my acronyms. Moving to the next slide, I can talk about the roadmap itself. I'm still at the very, very beginning of this, and that's really around planning and content discovery. I joined the company about six weeks ago myself, so learning all this stuff is new to me as well as trying to build out a learning curriculum to explain it to others. So right now I'm in the stage of collecting artifacts, transcribing, watching all the videos, and really trying to get my head around the content itself. Ideally, looking at Q1 next year, I'll be able to put together a skeleton framework for the curriculum itself and start to pilot it with those who are experts in what we do, i.e., the Network Goods team.
The following quarter, the plan is to iterate and finalize that and then pilot it with folks outside of Network Goods to make sure that it is accessible for anyone, regardless of their familiarity with the content going in. Hopefully by May, I'm hoping we'd be able to go live on Launchpad itself and really make this part of the curriculum that everyone going through this program will be able to participate in. And from there, this would be adapted and expanded further for our new Network Goods team members; we're hoping to onboard a whole bunch in 2023. Also, we really want to increase awareness not just within the PLN but external to PL, to our network of researchers and funders and everyone we're trying to create this movement around for funding public goods, through work such as Funding the Commons, which I encourage you all to watch if you weren't able to attend. On the next slide, I believe I go into the stage that I'm in, so I covered that already. Some of the challenges: as I mentioned, I'm new myself, so this is all new material for me to learn and understand and then be able to educate around. And LabWeek, and prepping for LabWeek, definitely put a pin in my ability to progress on this project, so there's renewed focus on it now that we're back, or now that I'm back, rather. And that's the plan going forward. Thanks, Allison. Yeah, LabWeek threw a curveball into a lot of these projects, which makes it all the more impressive when we see all the work that you've created and are sharing with us today. So extra big props to all of you for coming up with these cool ideas and seeing them closer to fruition, if not all the way, while managing CoLo Week and LabWeek, et cetera. Daniel's up next; I think I saw him on here. Are you happy to have me share the screen, Daniel, or do you want the screen? I will share the screen at a certain point; I think right now you can share it. OK, sounds good. OK, great. So thank you very much, guys. As everyone knows, I'm an unmatched candidate, so I don't have a specific team that I was working with on my project, though I do have interests, and I will get to those later on. My interests are basically related to crypto-econ and to cryptography. Those were the two talks that I'd say impacted me the most from that part of the curriculum: the talk from Vic and, especially, the talk from Professor Rosario Gennaro. So at the end of the day (and something weird is happening there on the screen), my project is related to these two fields in the following way. I want to introduce you to Lendi. Lendi is a platform that allows Filecoin storage providers to borrow FIL against the future rewards of their Filecoin storage deals. The reason this might be needed, and this is something that came from my conversations with CryptoEcon, is that storage providers need working capital, but they might not themselves have a lot of assets to actually back a loan. They've already committed resources to buy their hardware, plus some collateral to secure their Filecoin deals. So what equity do they have? Their Filecoin deals, right? But those are future rewards that they will receive; they're not assets that they really own yet.
It's not FIL that they already own, but FIL that they will own in the future. So the idea of Lendi is to allow this future FIL income to become collateral for loans. How it works is that the storage provider has to register on the platform, and this works by providing Lendi their storage provider ID and the BLS key that proves they own that storage provider ID; that's a cryptographic signature. Then the Ethereum address is linked to that specific storage provider ID, and the Lendi platform has access to see all the Filecoin deals that that specific storage provider has. The other part is that, once registered, they can lock those Filecoin storage deals. When they lock a deal, the future rewards of that deal cannot be used for any purpose other than paying the loans they have taken on the platform. That's how they lock the collateral, basically. Obviously, that collateral doesn't materialize until the storage deal expires and they get the rewards. So basically that's it: you lock your deals, that provides you collateral, and then you can borrow a certain percentage of the value of that collateral, everything in FIL, obviously. I put 80% here, but that's not written in stone. And that's the way it works. What it's assuming is that there is an oracle service that can link their Ethereum addresses to a storage provider ID, and also that the oracle can work within the Filecoin ecosystem to get all the storage deals they have and lock the collateral. Now I want to take the screen to go to the proper demo. Yeah, I'll stop sharing and you should be able to share. Great. So let's go here. It's difficult to see here. It's here. Sure. Can you see my screen? Yes, we can. So basically here it is. Of course, right now it says user not detected, because I have not linked my MetaMask, so I will sign in to Lendi right now. Since I already linked my account, it's going to work. But the oracle doesn't really work inside Filecoin yet; this is all running on Polygon, and I'm just mocking the oracle, so it's not actually touching any Filecoin testnet state. But basically this is the idea. So I will link this account, connect, and I'm getting my data. Here it is: this is a provider ID, an address in the Filecoin ecosystem, and I have already linked it, so this button is disabled. But if I weren't linked, I could have clicked this and provided my BLS key and my provider ID, and then the oracle is supposed to verify that I control it through the BLS signature. Then here I can see my actual storage deals. I've locked only one of these deals, this one for 633 FIL, with this deal ID, and I can borrow against this future reward here. Here it's assuming that I have already borrowed 100 FIL against that. I could go lock other storage deals, and that would allow me to borrow even more, but I will not be able to touch those funds until I pay those debts. So basically that's the way it's supposed to work. I will return to the presentation to just go to the assumptions part. OK, next one. OK, this one; the previous one, previous one. So the actual stack for the application is Solidity and Foundry for the smart contract side, and React for the front end.
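As an editorial aside, the register, lock, and borrow flow Daniel describes maps naturally onto three contract calls. Here is a minimal sketch in Python with web3.py; the RPC endpoint, contract address, ABI, and method names (registerProvider, lockDeal, borrow) are hypothetical stand-ins, since the real interface lives in his repo.

```python
# Hypothetical sketch of the Lendi flow: register a storage provider, lock a
# deal's future rewards as collateral, then borrow against them.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example"))   # the demo ran on Polygon
acct = w3.eth.account.from_key("0x" + "11" * 32)      # storage provider's EVM key

LENDI_ABI: list = []  # elided; hypothetical ABI matching the three calls below
lendi = w3.eth.contract(
    address=Web3.to_checksum_address("0x" + "00" * 20),  # placeholder deployment
    abi=LENDI_ABI,
)

provider_id = "f01234"    # Filecoin storage provider ID
bls_signature = b"..."    # proves control of provider_id; verified by an oracle

# 1. Register: link this EVM address to the storage provider ID, so the
#    platform (via the oracle) can see the provider's storage deals.
lendi.functions.registerProvider(provider_id, bls_signature).transact({"from": acct.address})

# 2. Lock a deal: its future rewards can now only repay Lendi loans.
lendi.functions.lockDeal(42).transact({"from": acct.address})

# 3. Borrow some fraction (about 80% in the demo) of the locked value, in FIL.
lendi.functions.borrow(w3.to_wei(100, "ether")).transact({"from": acct.address})
```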
The code you can find here in this repo; you can download it and play with it. The front end is complete, but it has to be fully integrated with the smart contract, which is not done yet, even for the mock part that doesn't actually touch Filecoin. Next one. And this is how it relates to my interests, right? Obviously, a lending platform is a very important topic of research for CryptoEcon; that's something they want to research as well. And the process of making this oracle work, being able to get the Filecoin state, and not only get it but maybe modify it through a smart contract, is one of the projects that CryptoNet is actually working on. So there are a lot of synergies with those teams: with the protocol-level part, which is building how the data is going to be presented as storage primitives, and working with those storage primitives on other blockchains, and also with the Medusa team, which is building the compute side, a zero-knowledge way of sharing that data. So it's kind of serendipitous that at the end I ended up working on a project that is very related to the teams I want to work with in PL. So that's basically it; the repo has everything else, and you can play with it. Thank you very much. Thank you, Daniel. Super exciting, and thanks for sharing that demo. I think we have a few more demos coming up shortly. Bo's up next; I think Bo's here, and I think Bo is also going to be sharing a demo later on. Bo, do you want me to start and then transfer at the point of the demo? Bo here? Might need to do a little rearranging, if... OK, no worries. We can move forward and come back to Bo. Let me just see who's up next, and I'll move Bo's presentation a little later. Is Dennis here? Yeah, cool. Hi, everyone. Yeah, I also have a demo for everyone, so maybe I will just take over screen share. Yeah, no problem. OK. You cannot start screen share? No... can I? No, it's working. Can you see my screen? Yeah, we can. Great. OK, perfect. So in my Launchpad project, I was actually working on something that's related to what my team is doing. If you remember, we want to run a measurement campaign on decentralized NAT hole punching, and in this Launchpad project I was trying to ease the onboarding of participants in this measurement campaign. And whoops. OK, hang on. Here we go. For that, I built this little menu bar tool, which you can see here at the bottom, and yeah, just as I said, it's to reduce onboarding friction. There are some details around how we handle API key requirements and so on. So let me give some brief context again. In a peer-to-peer network, it's actually very important to have full connectivity among all peers in the network. But the internet, as it's built right now, is actually tailored towards the client-server model, so you're not easily able to just connect to your neighbor or to another person in the same room; this is just how the internet is built right now. The people on the libp2p team built this specific hole punching protocol, which allows peers, up to a certain percentage of the time, to connect directly despite all these NATs and firewalls and so on. In this measurement campaign, I want to measure the success rate of this protocol and how well it works.
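In rough pseudocode, each participating client repeats a simple measure-and-report loop. The sketch below is an illustration only: attempt_hole_punch(), discover_peers(), and the reporting endpoint are hypothetical stand-ins, since the real client is a native menu bar app that speaks libp2p directly (the hole punching protocol there is DCUtR) rather than plain HTTP.

```python
# Illustrative loop for a hole-punch measurement client. Only the shape of
# the campaign is real: punch a random peer, report success or failure,
# repeat at a gentle cadence so the client stays lightweight.
import random
import time

import requests

REPORT_URL = "https://punchr.example/api/results"  # hypothetical endpoint
API_KEY = None  # optional; only needed if you want a personalized analysis

def discover_peers() -> list[str]:
    """Hypothetical: fetch dialable peer IDs for the campaign."""
    raise NotImplementedError

def attempt_hole_punch(peer_id: str) -> bool:
    """Hypothetical stand-in for a libp2p direct-connection upgrade attempt."""
    raise NotImplementedError

while True:
    peer = random.choice(discover_peers())
    result = {
        "peer": peer,
        "success": attempt_hole_punch(peer),
        "timestamp": time.time(),
    }
    headers = {"Authorization": f"Bearer {API_KEY}"} if API_KEY else {}
    requests.post(REPORT_URL, json=result, headers=headers)
    time.sleep(300)  # one attempt every few minutes
```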
To measure that, we need as many people as possible to run this client, which will just do a hole punch to a random other peer and then report back whether it worked or not. For that, yeah, I developed this little menu bar application, which is actually pretty easy to install, and I want to showcase that in this short demo here. So if you head to the repository page here, which I will drop into the chat later on, you can scroll down. I think, if I remember correctly from our Launchpad CoLo week, most of you are on a Mac, so you just decide: if you're on a newer Mac, you download the M1/M2 version, and if you're on an older Mac, the Intel version. I'm on a newer one, so I will just download the application here. It got downloaded here to the left, and then I can just double-click it, install it to my Applications folder, start it up, and click Open. Now it's asking me for an API key. If you want a customized analysis of the data that you're contributing to the research project, you can request a personal API key here, but this is not necessary; you can just press Continue. Then it asks if you want it to launch on startup; I click Yes, in my case. And then you can see this little icon here on the top right, and it's already started. So now it's just sitting there, running, and doing all the hole punching stuff in the background. It takes very few resources: I think it's just around 2% of CPU, 100 megabytes of memory, and not much bandwidth at all, maybe a lightweight website's worth of bandwidth every few minutes. And yeah, that's basically it; you shouldn't actually notice that it's sitting there. I also refined some of our dashboards, which show how everything works. These are some technical details here, or some performance measurements. So, a few minutes ago it was at seven; this is the number of active clients in the network right now, so there are just eight clients running at the moment. Just before I started my own client, it was at seven, so I'm the eighth one now. It's working. I would highly appreciate it if many of you would just download this little client application and leave it running. Our plan is, you don't need to do it now, but our plan is to have as many people as possible signed up, or at least having downloaded this application, by December, so that we have as many people as possible running this client throughout December and we gather a lot of data to do an analysis on. This is also a research project; we want to write a scientific publication from that data. I will drop some links after this demo here, and I hope you can sign up and participate. Well, yeah, I think that's it already from my demo. Thanks, Dennis. I think you might have 27, maybe 26, new signups after that. Nice plug there, and please share. And I see in the chat there's a request for sharing all the previously mentioned resources, so let's continue to do that as we go through. Thanks, everyone. Derek, I see Derek here; I'm not sure if Phillip is on as well, but Spice, say hi guys, take it away. Yeah, all right, sweet. So what we did for our project was look at how we can use some of the Web3 ecosystem to help some of the processing that we use to gather information for some of our client base.
So we looked at building on Bacalhau to improve the data that we're getting from Web3. Move on to the next slide and I'll give you a little highlight here. A little about Spice: we're an early-stage startup, and we're also a Protocol Labs portfolio company. One of the reasons we chose to work with Bacalhau is that we are part of the Compute over Data working group, so we're trying to help keep that moving forward; we presented during meeting number six of the Compute over Data working group. And we just launched in April of this year, so we're still early stage and building out. All right, next slide, please. Basically, what we were looking at here is one of the problems with data: egress of data is very expensive. We're trying to solve that a bit by using decentralized compute to run compute jobs closest to where the data is. For our project, what we did is we used Spice to get the Bored Ape Yacht Club collection, get the owner of one specific Bored Ape, look at the number of NFTs that that specific owner had within their collection, and then take that owner's collection and turn it into a collage. Not a super fancy thing, but it's an example of taking a bunch of data and using the processing power of Bacalhau to turn it into something interesting, I guess. The demo here is about six minutes, so feel free to play it at one-and-a-quarter speed or something to speed it up a little bit. If you just go to the next slide, you can go ahead and click play there. This is Phillip presenting at the Compute over Data summit. Awesome. And since there were some audio issues earlier: if there are audio issues with this again, please just let me know and we can try to troubleshoot. For this demo, I'm going to take the Bored Ape Yacht Club NFT collection. So this is the address here that I... Can you go full screen? Yeah, audio is perfect. I'll run this query in just a few seconds; I'll be able to get all 10,000 of the Bored Ape NFTs, and I can find the current owners of all 10,000 in just a few seconds using Spice. Let's say that, you know, I'm really interested in this one, 5306, and I want to find all of the NFTs that the current owner of this NFT has in their collection and make a collage out of that. The way I would do that in Spice is I would say, okay, I'm only interested in token ID 5306, run that query, and we get back the owner information. Yeah, so we got this back. So we can take the current owner of the NFT, and then, with the same dataset, our NFT owners dataset, I can query: what are all the NFTs that this current owner possesses? I'll run that query in Spice; it'll take a few seconds and then I'll get it back. There should be around 17 that I found earlier. Normally, what you would do is use this interface to explore the data that you're interested in; we have a dataset reference here of all the different datasets for the chains that we support. Once you find the query that you're interested in, you would put it into your application using our SDKs, and that's how you would integrate Spice into your application. But for the purposes of this demo, I'm just gonna download this as a CSV, and we'll be able to look at that data here directly.
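For reference, the two lookups Phillip runs boil down to a pair of SQL queries against an NFT-ownership dataset. The table and column names below are illustrative approximations rather than Spice's actual schema, and run_query() stands in for whichever SDK or HTTP call you'd really use; the BAYC contract address is the real one.

```python
# Sketch of the demo's two queries: find who owns Bored Ape #5306, then list
# everything that owner holds. Schema names are approximations only.
BAYC = "0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D"  # Bored Ape Yacht Club

def run_query(sql: str) -> list[dict]:
    """Hypothetical stand-in for a Spice SDK or HTTP query call."""
    raise NotImplementedError

owner_rows = run_query(f"""
    SELECT owner
    FROM eth.nft_owners
    WHERE token_address = '{BAYC}' AND token_id = 5306
""")
owner = owner_rows[0]["owner"]

owned = run_query(f"""
    SELECT token_address, token_id
    FROM eth.nft_owners
    WHERE owner = '{owner}'
""")
print(len(owned))  # around 17 in the demo
```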
The way this demo is gonna work is: I just showed using Spice to get the NFTs that are owned by a wallet, but next I need to call the token URI. In Ethereum, there is a tokenURI function on the smart contract; if I call that function with the token ID that I'm interested in, I get back this IPFS link that I then need to pull from. If I actually show you what's at this IPFS link, so I'll just resolve this, this is the content ID of that link from the token URI, and what's in here is just a JSON file. I mentioned before that data is moving off chain; this is one example where the image link is not actually stored on chain. It's a metadata file that points to the image link. So we actually need to do two things: we first need to run a job on Bacalhau that parses the metadata out of all of these different metadata JSON files, and then, once we have the actual IPFS links that hold the image data, we can run a second job that goes and creates our collage for us. You can see here the token address is in the first column on the left, and the token ID is the second one. What I'll do here to run the Bacalhau job is: I have a little helper script that gets the metadata URIs by calling the Ethereum smart contract. So this is going and calling the Ethereum smart contract; we filter out the ones that are not on IPFS right now and just work with the ones that are on IPFS. You see here that some of these are files within a folder stored in IPFS, so we actually need to go and resolve all of the content IDs for those. I'll do that now. This is resolving these into the actual IPFS content IDs and then dumping what I need to pass into Bacalhau as these volume arguments; these are the commands we need to mount this data into Bacalhau so that the job can access it. Then I'll come in and run this first job, which will actually parse out the metadata. In this parse-metadata script, all we're doing is scanning all the files that are in this directory and then extracting the image URI that I showed here. So we're extracting this image URI for all of the images, and it's written to this file called image URIs. Now the job has completed, so I can take these image URIs and move on to the next step. Now I've got all the links to IPFS for all the images that I'm gonna create the collage out of. Now I'm actually gonna run a Bacalhau job that assembles these into a collage, and I'll go ahead and write that now. Generate the volume args, and we're gonna use the image URIs that we collected. Similar to before, we have these commands that will actually allow us to attach this data to the Bacalhau job. So I'll run another Bacalhau job here and pass in the command that will actually create the collage. Cool. The way this script runs is very similar to the first one: it's gonna loop through all of the images that have been mounted into that directory, format them into the same size, and then output this collage called collage.jpg. And so now we have here the collage of all of the NFTs from the data that we got from Spice. One of our goals at Spice is to integrate the Bacalhau experience directly into the product.
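As a brief aside, the on-chain and off-chain halves of that pipeline are easy to picture in code. Here is a small sketch of the tokenURI call and the metadata fetch, using web3.py (v6-style names) and a public IPFS gateway; the RPC endpoint is a placeholder, while the contract address and the ERC-721 "image" metadata convention are real.

```python
# Look up the metadata URI for Bored Ape #5306 on chain, then pull the
# metadata JSON from IPFS and extract the image link, as in the demo.
import requests
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example"))  # placeholder mainnet RPC

ERC721_METADATA_ABI = [{
    "name": "tokenURI", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "string"}],
}]
bayc = w3.eth.contract(
    address=Web3.to_checksum_address("0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D"),
    abi=ERC721_METADATA_ABI,
)

uri = bayc.functions.tokenURI(5306).call()       # e.g. "ipfs://<cid>/5306"
cid_path = uri.removeprefix("ipfs://")

meta = requests.get(f"https://ipfs.io/ipfs/{cid_path}").json()
image_uri = meta["image"]                        # also an ipfs:// link
```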
And so, building on that goal, you might have something that looks more like this, where all the stuff that I just showed you is contained within Spice, and all you would need to do is upload a processing job that needs to combine both on-chain and off-chain data, and then we would handle the scheduling and running of it for you. Okay, all right, next slide. All right, so yeah, again, that was Phillip presenting at the Compute over Data summit; there's a full recording, and that was a chopped-up version for time constraints here. So feel free to jump on there if you want the more in-depth version. But basically, like I mentioned, we're looking at integrating Bacalhau into our system so we can more easily make the experience for customers trying to query IPFS data and Web3 data one integrated experience, using Bacalhau to run those compute jobs next to where the data is stored. Here are some places you can find us; click on the links there on the slide once it's made available. And that's it. Any questions? All right, thank you. Thanks, Derek. And yeah, if you have a link to that video, could you drop it? I think the chat would appreciate it. Will do. Yeah. Anjuman's up next. He wasn't able to make it to the presentation day, so we've got a recording; I'll try to play this at 1.25 speed as well. We're a little under halfway through the presentations, so just to remind you, we'll try to keep to that five-to-seven-minute presentation time for those of you coming up. Let's take a look at this presentation. Hi, I'm Anjuman, and I'm on the Collabs team in the Outercore ecosystem working group. This presentation is a thought experiment on how the IPFS stack can be used to make satellite sensor transmissions more reliable. Just to level set: satellites can be in three possible orbits, low, medium, and geosynchronous, and you can see from the diagram down below that those are three very different altitudes above ground. The most famous satellites that we know about, the Starlink constellation, Hubble, the International Space Station, and other defense satellites, are in low Earth orbit, anywhere between zero and 2,000 kilometers above ground. This is very different from geosynchronous satellites, which are about 38,000 kilometers above the Earth's surface. Low orbit satellites have certain advantages and disadvantages. Some of the advantages: very low latency, since they're closer to Earth, so a round-trip ping will probably be around 20 to 40 milliseconds; it costs less to put them there in the first place; and they're extremely fast, completing an entire orbit in anywhere between 90 and 120 minutes. And because they're closer to Earth, the sensors on board can acquire much higher resolution data of the Earth's surface. Some of the disadvantages have to do with the fact that they are so fast: since a satellite is closer to the ground, when it zooms by a ground terminal at high speed, there might not be enough time for it to transmit the relevant data that has been requested. The other is that there is plenty of credible evidence that these satellites can be disrupted by nation-state actors; Russia, China, and India have all demonstrated de-orbiting satellites via missile launches. And this slide just shows how short an ideal flyby can actually be, based on just the geometry of the Earth.
The diagram on the left is a cross section of the sky looking up from a ground station, and you can see that even when a satellite goes nearly directly over top of us, the pass can be as short as 15 minutes. In a far less ideal scenario, where it's off over the horizon, it could be as little as eight minutes, and that's assuming we can even see it over a mountain or building or something like that. So let's do some really rough math to estimate how much imagery or sensor data we can actually transmit in a 10-minute flyby. If we look at some of the bands these low Earth orbit satellites use, we can roughly estimate around a GB of transmission for a 10-minute flyby. And if we look at the data the satellite is actually acquiring: a five-by-five-kilometer image with 16-bit color and half-meter resolution, which is by no means the best, is about 0.2 gigs. What that means is that in a 10-minute flyby, we can transmit five of these images. That's not ideal; some of these satellites collect data over seven different wavelength spectrums, and there are definitely going to be collects larger than five by five kilometers. So a lot of companies are using satellite interconnects across a constellation to get longer coverage even when the collecting satellite is not in view; Starlink and military satellites are doing this with radio or laser connections. And so, kind of a spoiler alert: once you start squinting at what an interconnected satellite constellation looks like, it starts looking a lot like an IPFS swarm. Getting to the punchline, we can actually use different components of the IPFS stack in a way that makes transmission more reliable and lets all of these units, the ground stations and satellites, speak the same language. For example, with IPLD we could have a custom geolocation-based IPLD spec that chunks the overall imagery into smaller grid sizes. These could then be serialized into CAR files if they need to be grouped together; for example, I might want to see a shipyard together with the water that's nearby, because it is extremely relevant to see what ships have been launched. libp2p is a great protocol because I am talking over radio, over an optical laser, over a fiber optic backhaul on the ground, and between all these peers, so libp2p is a great solution for maintaining those connections. IPFS allows us to broadcast relevant CIDs down to the ground, and then we can have the ground station keep polling the swarm overhead for relevant blocks and CIDs. And the cool thing is that if these blocks are constructed smartly, even before I've received the entire Merkle DAG, I can kick off data processing for the blocks I have received, which is especially important for low-latency cases in defense. For example, Brazil is trying to use satellite imagery to figure out where there are fires and send teams within two hours to go, you know, contain them; you don't have time to acquire the entire image, run a full processing pipeline, and then send them. And finally, because the orbit times are so low, there are a lot of collects over the same area across time, and IPFS is actually a great solution for versioning those across spectral wavelengths and across time. What that means is that if I'm a ground station and I only have the bandwidth to receive one image, IPFS could be an easy way, for a certain tasking, to receive just the latest one instead of the old ones.
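The back-of-the-envelope numbers hold up: 5 km at 0.5 m per pixel is 10,000 pixels per side, so 100 million pixels per image, and at 16 bits (2 bytes) per pixel that is about 200 MB, or 0.2 GB; a roughly 1 GB pass budget therefore carries about five images. The grid-chunking idea is also easy to sketch as a toy model. In the code below, sha256 hex digests stand in for real CIDs and acquire_tile() is a hypothetical sensor read; real IPLD would use proper content addressing and a codec.

```python
# Toy model of a geolocation-keyed IPLD layout: slice one 5 km x 5 km collect
# into 1 km x 1 km tiles, address each tile by hash, and link the tiles from
# a small root node keyed by grid coordinate.
import hashlib
import json

def block_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:16]  # mock content ID

def acquire_tile(x_km: int, y_km: int) -> bytes:
    """Hypothetical sensor read for one 1 km x 1 km tile."""
    raise NotImplementedError

blocks = {}
root = {"collect": "2022-11-18T10:00Z", "band": "visible", "tiles": {}}
for x in range(5):
    for y in range(5):
        tile = acquire_tile(x, y)
        cid = block_id(tile)
        blocks[cid] = tile
        root["tiles"][f"{x},{y}"] = cid  # grid coordinate -> block link

root_cid = block_id(json.dumps(root, sort_keys=True).encode())
# A ground station that only needs one grid cell can fetch just that tile's
# block from the swarm, and can start processing before the full DAG arrives.
```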
And finally, here's a graphical view of how IPFS... yeah, here's a graphical view of how IPFS can be used by a ground station to kick off ground processing while the satellite is connected to the ground station, and to continue processing while the satellite is disconnected, because the ground station can still request blocks from the swarm. And finally, when the satellite reconnects with another ground station, my original one can actually peer over the very high-bandwidth backhaul on the ground. So this is just a quick story about how IPFS can be used in a satellite constellation. I'll probably be sharing some of these thoughts with the browsers and platforms team, and I'm excited to see if we can integrate these approaches into our partnerships. Good luck. Thank you. Cool. That one was a little different. Satellites. Awesome. Can we move forward? There we go. Yeah, wow. "This presentation was out of this world." I like that, James. Very cool. And yeah, just again, the QR code's there as we go through. Jonathan's up next. For anybody who was at the Angra's dinner: this one got a lot of applause. Or, if this isn't the one, maybe not; I'll let Jonathan jump in here and explain more. Yeah, it's kind of similar. They're two separate ones, but you'll see at the end that they're coming together, sort of, in the works. Cool. Would you like to share your screen, or do you want me to continue sharing? Yeah, if I can share, that would be great. Okay, where is it? Okay. So hi, everyone. I'm Jonathan. I'll be talking about a demo application I made called OnlyFiles. OnlyFiles uses a protocol called Medusa, which I'm working on under the CryptoNet team. Great. From a very high level, we can start with a set of problems and a set of solutions. The first problem is a personal problem, I guess a goal of mine, which is that I needed to build an application to showcase Medusa. Medusa is a protocol that's geared towards developers; developers would be the users of Medusa, who would then go build applications for end users. So for me, it's important to, I guess, dogfood my own stuff so I can better understand the developer experience and build better tooling for future developers building on Medusa. The solution here is this demo application, OnlyFiles, and OnlyFiles allows you to sell access to content using decentralized protocols rather than centralized platforms; the decentralized version of OnlyFans, let's say. And it's pretty relevant, because sometimes we might ask ourselves why we are doing all this work to build decentralized protocols: they can be kind of expensive, the user experience can be not great, so sometimes it doesn't make as much sense. But I think for this problem it makes a lot of sense, because even just in the last year we've seen a lot of deplatforming of various people on different social media platforms, but especially on OnlyFans, and with sex workers in particular. How they get deplatformed or censored is through, I guess, a few means. Primarily you have payments in Web 2, which are run by a handful of big players like Visa and MasterCard and PayPal, and more or less those payment processors have arbitrary control to block payments and generally censor financial transactions. But of course, in Web 3 we have a solution for that.
Blockchains, sort of starting with Bitcoin, enabled a peer-to-peer payments network, so we have a solution there, or at least a piece of the solution. Now, going back to the problems, we also have issues with storage and access. For a lot of these user-generated content platforms, there's a company that runs and builds the platform. You upload the content to them, but essentially, after you upload it, it's really not fully yours anymore: they store it in their databases, and the terms of service probably say that they can do whatever they want with it, which obviously can be problematic. In Web 3, we have Filecoin, which is an amazing open storage network where anyone can upload files and anyone can provide storage for files, as long as the deals and the crypto-economic conditions are met. And related to storage, we have access control: who can access those files, right? In Web 2, there's a policy for who can buy and sell content, or who can upload and view content, and they may tell you what it is, maybe in the terms of service or other places, but you can't view the code that controls it; you can't really verify it. Moreover, like I mentioned earlier, there's a policy for who can do what, but ultimately the company has ultimate control over that, and if they want to view the data and do what they want with it, they can. And so this is where Medusa comes into the mix: Medusa is a decentralized access control network. Basically, anyone can create rules for who can view their content, and they can have a guarantee that no one else is going to see that content, assuming that the network is operating properly and the right crypto-economic conditions secure the network. Great. I'll just go over the design of Medusa very briefly; this is the general architecture of how the system works. On the left, you have client applications; OnlyFiles is an example, but you could also have private mailing lists for NFT holders, or document sharing. Basically, in each one of these applications there's some content that you want to control access to, and you want to put those rules on chain, or somewhere that's transparent. With OnlyFiles, the rules are: I upload content and set a price, and if you pay for it, then you get to see it. With the mailing list, it's: if you own an NFT, you can see the posts on the mailing list. And with document sharing, you could think of a decentralized Google Drive, where maybe I can submit a proof that I have an @protocol.ai email address and that lets me see all the documents within the company or the organization. In the middle, you have this Medusa contract, which can live on many different blockchains; essentially, the contract is where you send requests and receive results back from the Medusa network. And on the right, you have the network itself: many different nodes running the network, and basically they all have shares of a private key. If a valid request comes in, you need a majority, a threshold, of those nodes to each compute a partial result, a partial decryption, for example; those partial results can be aggregated together and the result sent back on chain.
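To make the "shares of a private key" idea concrete, here is a self-contained toy in Python: plain Shamir secret sharing over a prime field, where any t of n shares reconstruct the secret. It only illustrates the threshold trust model; Medusa's actual scheme does threshold decryption, combining partial results without ever reassembling the key in one place.

```python
# Toy t-of-n secret sharing: split a secret into n shares so that any t of
# them recover it, while fewer than t reveal nothing about it.
import random

P = 2**127 - 1  # prime modulus for the field

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    # Random degree-(t-1) polynomial with the secret as its constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, n=9, t=5)  # e.g. 9 Medusa nodes, threshold 5
assert combine(random.sample(shares, 5)) == 123456789
```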
Of course, though that aggregated result is public and anyone can see it, only the individual it's intended for can actually use the result to go and view some data. Great. With OnlyFiles, I kind of mentioned a lot of this already, but the idea is: you have secret content, you upload it, you set a price, and whoever pays that price can see it. The tools we're using are Filecoin to store the encrypted content; a blockchain, some blockchain, for this demo I'm using the Arbitrum testnet, but it could be any of them; and on that blockchain, you deploy a smart contract which sets the rules for who can access your data, and if payment is a part of that, you can use the blockchain for that too. And then we have Medusa, which controls the re-encryption, or the unlocking, of content based on payment being received. Okay, so now I'll go into the demo, which I tried to show before and it didn't work, but it should work this time. Fingers crossed. So, okay, here we are. Actually, if anyone wants to play with this as well, I'll pop it in the chat; I think I'll do it after. Yeah, here it is, so you can go use it there. First thing I'll mention is there's a faucet here. I think it's kind of difficult to get testnet ETH on Arbitrum, so I've set up a faucet; please don't abuse it, there's not really any rate limiting on it, but basically if you connect your wallet, you can click the faucet and it'll send you 0.01 testnet ETH, and that should be enough to use the application. So let me just refresh the page. Now I'm not logged in, not connected. I can connect my wallet. Great, I can sign in. What happens when you sign in is that you sign a message, and then we can use that signature to basically derive your key, your sort of Medusa identity. This is nice because it means that we don't store any keys anywhere; you don't have to store any keys. As long as you have your Ethereum private key, you can use that to use Medusa. So here we have a form, and this is all very rough looking, but basically you have a form to upload your content. I have some unlocked content already here, and then down below you get these listings where you can pay to unlock content. So let's upload something. I have this Stable Diffusion, kind of iguana-looking thing; let's say I want to sell that. So I'll put in a price, and we'll make it pretty cheap; maybe one more zero there. And then the description: this is, like, "fluffy iguana from Stable Diffusion AI thingy." And then I'll click to sell my secret. It'll take a second: it's encrypted, it's uploaded to IPFS, and now it's asking me to sign a transaction to register that content with Medusa. So now it's registered, and I should be able to scroll down and see it for sale down here. Okay, great, here's my cool iguana. You know, you can see it on IPFS, but you're just going to see an encrypted blob, or really you're not going to see anything, because it's trying to render an image. But I can click to unlock it; I'll pay the fee with a little bit of gas, and then it should show up here. It's decrypting it at the moment; I should have a better animation there to give a better idea, but give it a second, and there it is. So please play with the demo; break it.
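The sign-in trick Jonathan describes, deriving a key from a wallet signature so nothing has to be stored, can be sketched in a few lines. The eth_account library and its deterministic (RFC 6979) signatures are real; hashing the signature into a "Medusa identity" secret, and the login message text, are simplifying assumptions for illustration.

```python
# Derive a deterministic secret from an Ethereum wallet signature: the same
# wallet signing the same message always yields the same signature, so the
# derived key never needs to be stored anywhere.
import hashlib

from eth_account import Account
from eth_account.messages import encode_defunct

wallet = Account.create()  # stand-in for the user's MetaMask account

message = encode_defunct(text="Sign in to OnlyFiles")  # hypothetical login text
signed = Account.sign_message(message, private_key=wallet.key)

medusa_secret = hashlib.sha256(bytes(signed.signature)).hexdigest()
```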
And this is sort of a rough proof of concept, but we'll see where it could be heading in the future. I think the future is this dOnlyFans project that was presented at the Angra's dinner in Lisbon. How these two things come together is that this problem of providing a decentralized platform for people to buy and sell content is much bigger than just the access control and the storage, because there are other, more difficult, more social or human problems to solve as well. Things like content discovery, which would include maybe a reputation system, being able to follow people, being able to search for content, and having content suggested to you; how do you do that in a decentralized way? It's an interesting problem. Privacy is another issue: Medusa allows you to control access to private content, but you can still see transaction metadata when you use Medusa. So someone could still see that I paid for content from someone else; they won't know what it is that I got, but they could still see that that transaction happened. That's probably something, especially in this context, that we would want to improve or find a solution for. Content moderation, things like banned content, is difficult, because some things quite obviously should be banned from the platform, but there's also a gray area, and coming up with a good way to reach social consensus about what should be moderated and what shouldn't is also a very difficult problem. And then abuse avoidance: how do we avoid things like hate speech, and also things like content theft, where maybe someone takes someone else's content and uploads it as their own? That's obviously a problem, but it's an interesting research question; maybe there are ways to do cryptographic watermarking on content. So that's interesting, but again, a very difficult problem. This is where the dOnlyFans collaboration comes in: dOnlyFans is a project that's being researched in ConsensusLab. We realized, okay, obviously we need some sort of access control, and Medusa is perfect for that. So the evolution of this demo, the next steps, is basically to integrate Medusa with the dOnlyFans subnet on Filecoin and then continue to build out the proof of concept and see where it goes from there. But that's all I've got. Thank you all for listening, and if you have any questions, Slack me and we can set up a call. So yeah, thank you. Thanks a lot, Jonathan. Very cool stuff. Yeah, these are really impressive. We've got James up next, and I'll start to throw a five-minute warning into the chat, just to keep us on schedule in case people have things they need to be at shortly after this. James, do you want me to manage screen share on this, or would you like to start with that? I may throw in a quick demo at the end, time permitting, but I think I have no screenshots. Cool. All right, hey, everybody. James here. If you recall, I'm a technical writer, so unsurprisingly my stuff relates to technical writing, and the IPFS docs specifically, which is the doc set that I'm gonna be starting on here at Protocol Labs. So if you could move to the next slide, please.
So the overall idea here was to do a couple of mini-projects that were more or less related to the IPFS docs and technical writing in general, as well as the larger initiative on the team I'm on, which is docs as a service for the larger Protocol Labs network. So what did I do? I created, how do you say, recreated the IPFS docs on a Hugo static site, as opposed to VuePress, which is what we currently use. It's really just a wireframe, not meant to actually be a functional documentation set. I created a tutorial template to be used for quick tutorial creation, and I played around with some CI/CD tools that are specific to technical writing, like Vale and markdownlint. I didn't get to integrating SourceCred like I wanted to, but it's still something I'm interested in. All right, so why did I do this? One big thing that was really helpful about Lisbon was IPFS Camp. At IPFS Camp, I was walking around just talking to folks, developers, real people, I guess, building on IPFS. I sat in the community circle that was led by someone from the Kubo team, got a lot of great feedback there, and basically dumped all of that into a Notion doc, which folks can look at if they're interested. It's just a bunch of feedback about the IPFS docs experience. My experience in Launchpad was really helpful too for thinking about what's the best way to organize content in IPFS. I wanted to test Hugo because it has some features that VuePress doesn't. And I'm always a big fan of automation. There are certain things in technical writing that are just not fun to do, like combing a Markdown document for spelling errors, definitely not my favorite thing. And this all relates back to docs as a service. So we'll move on to the different parts of this. Dave, if you could, next slide please. All right, so the first thing I did here was set up an IPFS docs wireframe on Hugo. I want to give a big shout out to my colleague Johnny, who's not here. For those of you who were in Lisbon, Johnny gave the IPFS Desktop walkthrough. And while Johnny was in Lisbon, he started working on a GitHub repo to quickly spin up a Hugo doc site template, essentially, for the docs as a service initiative. You can check it out whenever you like; we love feedback. So I served as a guinea pig for that. It was really great: I was able to spin up a website in about 10 minutes. It has a lot of stuff already templatized, is pretty easy to use, and has a lot of great features baked in. Some things are specific to Hugo, like relative references (relrefs) as opposed to manual links. Basically, Hugo won't build if the relrefs aren't working, whereas VuePress will build with broken links, so that's a quality issue. It has nice themes. Menus are automatically created for every single page based on the header depth. There are things like shortcodes, so you can create tabbed views, which I'm a big fan of. And then Johnny started working on commands to automatically create top bars, sidebars, and page menus. As part of this project, I actually added another command to create a tutorial template, which I'll talk about later. So if you could, next slide, please. All right, so the next part of this was just thinking about the information architecture. I will say I've worked in other technical writing jobs, primarily for closed-source software with one implementation, essentially, of that software.
Thinking about information architecture for IPFS is a lot different for me, because we have all these different tools. There are all these different implementations, Kubo and js-ipfs being two of them, that don't necessarily have feature parity. And then there's the whole decentralized nature of Protocol Labs, like the team developing, I think it's Iroh, if I heard that correctly at IPFS Camp, the Rust implementation, and I know there's interest in developing others. So just thinking about how do we organize all of that content? Do we need multiple sites? Do we need one site? What's the best way to lay out menus? What's the best way to make sure that people can get to the information they're looking for as quickly as possible without getting lost or frustrated? So, the feedback I got in Lisbon, my Launchpad experience, random conversations with peers in the couple of weeks that I've been here, and experience from previous jobs have led me to the thought that, okay, maybe we can look at different ways to do the information architecture of the site. So I'll hopefully show this later, if I have some time. The approach I took for a new layout on Hugo was, first, what I'll call three-dimensional navigation. By that I mean: if you remember the IPFS docs site, currently there's a sidebar with sub-items, and then some of the pages linked from those have menus, but not all of them do. So what I tried to enforce here was a top bar laid out in logical categories, which are described below. Each of those top bar items goes to an overview page with a sidebar of logical subcategories. And then there are menus automatically on every single page, basically created as a function of the header depth. So yeah, menus by default. I was also trying to start thinking about the user persona. Am I a developer? Am I somebody trying to implement the protocol? Am I a total noob, like myself, who doesn't really know anything about Web3 and is just trying to understand what the heck IPFS is? I tried to demonstrate the use of tabs over linear reading. I'll give an example of this. When you set up IPFS Desktop, you can set it up on Windows, Mac, or Linux, right? In the current documentation, that's a linear read: different sections, and you have to scroll through or click down to the section you're looking for. With tabs, pretty self-explanatory, you just click the Windows tab and you'll only see that content, therefore avoiding content you don't need or want to see. The idea with the landing pages was that those should essentially serve as the directories for the top bar categories, to filter readers to the place they need to go. The top bar items I created were basics, reference, how-tos, tutorials, and community. I actually took inspiration from the Filecoin docs for this. Basics is pretty self-explanatory: conceptual overviews, quick starts, a brief overview of community stuff. Reference was based on engineering team feedback. Basically, the idea is to just have the HTTP gateway API reference in there and then possibly link out to, like, a Kubo-specific site or a js-ipfs site, although, disclaimer, that's still very much up in the air; we're going to be having those conversations for a while. The how-to page basically breaks down actions you can perform with IPFS, like adding a file, right? And then you have a tab to view how do I add a file in js-ipfs, Kubo, IPFS Desktop, and so on and so forth.
And then tutorials, pretty self-explanatory. So next one. All right. So I mentioned... Sorry to interrupt, we're just going to have to keep moving a little quicker if we can, just to get through in the interest of time. Sure, sure. So real quick, Johnny's doc site allows for the quick creation of templates, through a feature Hugo calls archetypes. As you can see from the screenshot right there, you just run a command and it'll create a tutorial template. That was part of my project. I'll link to the repo at the end, so if you want to take a look at it, you can look at it there, but it just automatically spits out a Markdown document with a pre-formatted tutorial structure, and the creator of that tutorial can fill in the blanks. Next slide, please. All right. So just talking about automation: there are a lot of rules for Markdown formatting that vary across sites, things like GitHub, Hugo, stuff like that. And in the technical writing world, there are style guides. Nobody wants to actually remember this stuff; it's really difficult to remember. So Dave, if you could skip to the next slide. The answer is automation. There are a couple of tools, markdown-link-check, markdownlint, and Vale, that I tested out here. If you could skip to the next slide, please. The first one, markdownlint, is pretty self-explanatory: it checks Markdown structure and formatting, spaces, things like that, against a predefined set of rules. You can configure it yourself. markdown-link-check, also very straightforward: it checks for bad links and returns an error if a bad link is found. Next one. This is my favorite tool right here, Vale. Basically, it allows you to programmatically assess a Markdown document against a style guide, like the Microsoft style guide. It spits out a bunch of warnings, and it's completely customizable and configurable. You can combine different style guides, different rule sets, things like that, or create your own. So for example, you'll see an error in there that says, did you really mean Filecoin? In a custom version of this, we could have a set of, how do you say, allowed words, so Filecoin wouldn't return an error, things like that. Then there are things that are specific to writing. If anybody remembers this from school, I don't remember half this stuff, like passive voice versus active voice, it returns suggestions for that. Another great thing here that I'm a fan of: at the top you'll see these weird numbers and statistics. If you're familiar, I think we have a few former teachers here, you may have heard of the Flesch-Kincaid grade level. It automatically runs measures like that against the Markdown document, which is potentially useful if you're trying to write intro-level material versus, like, a spec. If the spec is at a college reading level, probably not a huge issue, but if the basics material is over, like, a sixth-grade or eighth-grade level, maybe that's pointing to, okay, let's rephrase this. If you could skip to the next slide, please. So yeah, the, oh, sorry. So just to wrap it up, the lessons learned here, a couple of things. I'm a big fan of Johnny's project, the docs starter repo. We're going to continue iterating on that; there are a lot of fun things we can do with it, and we'd love for people to play around with it. Switching to Hugo, I think, definitely has some benefits; I mentioned them earlier, things like relrefs, menu creation, themes. A lot of folks at Protocol Labs already know Hugo pretty well.
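(To make the automation piece concrete: markdownlint also ships a Node API, so checks like the ones James describes can run from a script or a CI job. A minimal sketch in TypeScript; the file path and the rule choices here are illustrative.)

```ts
import markdownlint from "markdownlint";

// Lint one docs page against the default rule set, with one rule adjusted.
const results = markdownlint.sync({
  files: ["docs/how-to/add-a-file.md"], // illustrative path
  config: {
    default: true, // enable the standard rules
    MD013: false,  // e.g. don't enforce a maximum line length
  },
});

// Prints "file: line: rule description" for each violation; empty if clean.
console.log(results.toString());
```

Vale plays the same role in CI but runs as a standalone binary over the docs directory, with rule sets like the Microsoft style guide dropped into its configured styles path.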
And then just running through all this gave the docs team some good data for the docs as a service initiative. Information architecture, like I mentioned, is definitely a non-trivial problem; we're going to be talking about that for a while with engineering teams and different folks. One big question, once again, is single site versus multiple sites. Stay tuned for updates on that. The tutorial template: I'm a big fan of things like that. You can imagine we could expand that idea to other types of content so that you can automatically create them, and then we can get community contributions that are within a set of guardrails, essentially. You have your template, fill in the blanks, boom. And then lastly, if you don't want to think about writing things: the idea is, hey, here's a Markdown linter, a link checker, and a style guide checker, so that when you're writing things, if you're not a technical writer, which you don't need to be, that's the whole point, you have all of these tools to help you out as you write. One of my big goals in the next month or so is to start customizing all of those for Protocol Labs. I probably don't have time to do a demo, but if anybody's interested, just ping me after and I'm happy to show you the repo and stuff like that, because I'd love to get feedback, and we'll definitely be iterating on this in the technical writing team. So thanks, everybody. Thanks, James. That was great. I like Marco's idea; we might have a James and Launchpad collaboration in the future. Bo's up next. And yeah, sorry, James, I hate to interrupt there and remind you about time, but we are going to be cutting it a bit close. So just a reminder to all that we'll try to hit the five-minute mark, and I'll put a note in the chat once you get there. Bo, do you want me to keep sharing, or would you like to take over? No, please, it'd be great if you can do that, better than me messing around with it. So, quick background here. My name is Bo Berkshire. I work on the Mosaic Working Group, which is in the services, or Spaceport, team at PL. Our mission on the Mosaic side is really this idea of building a marketplace, and the idea that we're going to do it on Web3 principles so that it can help PL attract the best and brightest minds of Web3. There are obviously two sides of this. One is trying to help those teams build better and faster. Where the vision gets more complicated is, you know, there are marketplaces out there, but if we do it right, we can make sure that the other side is also incentivized to really be successful. And I'll let you guys read the vision on your own. If you can go to the next slide, Dave, I want to explain a little about this idea of matching versus marketplace versus ecosystem. So if you think about matching: what a lot of marketplaces do that aren't very, how should I say, effective from a business model standpoint is they just match up supply and demand. You have supply on one side and demand on the other, you match them up, that's liquidity. A great example here is Upwork. They amassed a massive number of freelancers and had a lot of demand, but they didn't actually add a lot of value outside of that matching. And so what happens is they get disintermediated. There's no point in me doing my second and third project on Upwork once I've already met the person, I trust them, and I like them.
So I just peel off there, and then the freelancer's happy because they get more money, and I'm happy because I don't have to use their system for communication. So that's where a lot of these businesses get stuck and die: okay, we match, but they don't do much else. The next level up is a marketplace, where you're actually adding a lot of value in the process before and after a match. The great example here is Airbnb. As a client who's looking for a house, they help me not only find the house, but they do some quality control in there. For both sides, they provide a legal framework, a contract, payment processing, and then obviously on the supply side, insurance and protection of your asset. So there's lots of other value they're adding. The key idea here is that before and after the match, they're doing value-add activities. The next level beyond that is an ecosystem, where you get beyond just those two parties and get other individuals involved to add value as well. A great example would be developers. If I have service providers on one side and PLN and PL teams on the other side, what about developers who are building tools that could help them work better together? What if I could get them involved in the relationship? What if I could get token holders involved, who care about the value of the token, to really play an active role in governance and make sure that we're building an ecosystem that everybody is involved in and has a say in? And as I looked around, the best examples I could find are ecosystems that have done a great job of building out a framework where everybody's incentives are aligned. So that gets into our goal, which is to build a similar ecosystem: not just a marketplace, but a true ecosystem where, if I'm a service provider, I'm not just trading my hours for money. I'm actually getting some stake in there, some equity stake, some value beyond that that I care about. And then we're trying to do that same thing for clients, for ecosystem partners, for token holders, and for developers. Next slide, please. So, learnings so far: this is way more complex than we originally thought. As just one example of that second point, there are a lot more stakeholders than we realized. We identified service providers originally because that's who we were working with. But as an example, at one agency I talked to, it turns out my contact at the agency is actually an independent contractor who is basically full-time for them, but on the side is also a developer building a tool on the Web3 stack to help build websites faster. So there's an example of an agency contact who's actually an independent contractor who's also a developer stakeholder. It's just one example of the complexity here that's much bigger than we originally realized. A second learning we've had so far is that the idea, the vision on that earlier slide, really resonates with the service provider stakeholders. They are very frustrated with this life of, hey, I'm going to trade an hour of my time for a dollar and that's it, I get nothing else out of it. So it's becoming more apparent that if we can build this, if we can overlay a marketplace with this ecosystem model and the incentive alignment, there is interest there. Next slide, please. So, the roadmap, very quickly. We're still in the planning stages.
We started jumping into the mapping and realized that we needed to slow down, step back, and get into the planning, because it's just bigger and there are a lot more steps than we realized. So our hope is in Q1 to identify the stakeholders and really interview them, talk with them, and understand what they each want. Then in Q2, really build out the alignment map, to try to align each stakeholder and map out those relationships and what happens between each one. I think Q3 is going to be pretty pivotal in terms of modeling. There are organizations out there that do pretty extensive modeling, and we're hoping to work with one of them. One of the things I didn't understand upfront was the importance of figuring out how people can game an incentive model and trying to identify ways to prevent that abuse. And then the goal in Q4 is to actually overlay that incentive model onto the marketplace that we build between now and then. Next slide, please. So, current status: like I mentioned, still in the planning stages. The biggest challenge I've got is just time, and specifically the catch-22 of priorities. On the team itself, we're currently actively trying to build a marketplace where we're matching up supply with demand and adding value along the way. That in and of itself is a full-time job, and it often feels more urgent than this idea of creating an incentive model, which is still a very amorphous idea. The irony is that if we do that right, it's actually more valuable in some ways than just building a marketplace, because an ecosystem is a bigger vision and adds more value for everybody involved, but it's the harder one to keep moving forward little by little. So definitely feeling that catch-22 challenge right now. That's all I've got. Thanks, Bo. Sounds great, and that was very efficiently delivered too. Very, very cool. Brian, would you like to take over screen share? Oh, and just a reminder there, the QR code: you'll see it throughout, but scan that, and we'll come back to it at the end. I'll give a final reminder so you can vote while things are fresh. Brian, do you want to screen share, or do you want me to? I'm going to demo at the end, so. You want to take it now then? Sure, I'll take it now. Go ahead, take it now. I'm going to go pull the presentation up. Ah, do you know what slide we're on, Dave? Ah, there we go. You're on 65, I believe. Yeah. I'm not sure how to present here. Is it in the top right? Top right, okay, cool. There you go. Or should I just let you do this? That's okay. All right. Okay, so we're going to talk about meta-transactions and some important things to mention around them. They're based on EIP-2771; EIPs are Ethereum Improvement Proposals. Essentially what you're doing is signing a message from one user, and in that message the user specifies who they trust to relay that transaction to a smart contract to be settled. So the forwarder is paying for that transaction, and this has all been worked out in a secure way through this EIP. The important part to know here is that the smart contract verifies that the forwarder is sending the right signature, and that signature has the original sender's address in it along with the forwarder's address. Through that, you can verify that this is a secure transaction: the original signer meant to do this. Another EIP to mention here is EIP-712, which covers a lot of this.
It adds security and a good user experience, because typically with transactions, when they pull up in MetaMask or any other wallet, the data is just one long hexadecimal string, and it's not very readable. That's what EIP-712 does: it actually presents a message inside your wallet before you sign it, so that you know what you're signing. It does that through a base signing domain. And this is a security practice where, essentially, the wallet will check that, one, you're talking to the right contract. If you see number four there, it's the verifying contract: if you're talking to the wrong contract, the wallet will actually indicate that and warn you against it. Another really important one is chain ID. The thing you really have to worry about with these types of transactions is typically replay attacks, which means the signature can be used over and over again. Chain ID prevents replay attacks on other chains: some of these transactions would, or could, be valid on different blockchains that use the same wallet and signing schemes, unless you add the chain ID in there, and then the wallets will not allow that transaction to go through. There are some other things in the domain signing, but that's the initial part of the 712 signature. The other part is defined by each transaction. This is the custom data you see in the message here. For this, we've got the owner, which is the original signer; we've got the trusted relayer, which is who I'm trusting to pay for my transaction; and then we have the nonce. The nonce prevents replay attacks in the same contract, because what you do is bump the nonce. So if it's at zero, or in this case at seven, once the transaction goes through, it goes to eight. That invalidates that signature and that transaction from ever happening again. Yeah, I think I'm just going to... This is a simplified version of the contract here. The most important part is that we pass in the signature, which is the three parameters here, we recover the user from that, and then we check that against the owner. And that is essentially how the contract maintains the security of the original signer's transaction being paid for by another person or entity. Yeah, so, I'm using libp2p WebRTC to make these connections in the app. This is just a breakdown of some of the code and handlers; I won't stay on that much. And now I'll pull up the demo. I've got two windows open here: one running on port 3000, one on 3001. What we're going to do here is start some libp2p nodes. The only thing you really have to know about this app is that I'm submitting an address to the contract, and I'm submitting it through this relayer. So if I'm registering, I'm going to start a node, and this one is called the dialer in libp2p. And then this node is going to be the listener: it's going to listen for the data that I'm passing it. So this will load up. They each have their own node, they each have their own PeerID. You would do this offline: if David and I were doing this transaction together, I would pass him this information on Discord, and then I would connect to his node. And let's just see if this... let me pull up the video I recorded earlier, because I don't think it's running right.
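(For reference, here's roughly what that typed-data signing and off-chain verification can look like. A minimal sketch using ethers v5; the domain values and the ForwardRequest fields mirror the owner/relayer/nonce message shown in the demo but are illustrative assumptions, not Brian's actual contract.)

```ts
import { ethers } from "ethers";

// Illustrative EIP-712 domain. chainId and verifyingContract are what let the
// wallet warn about wrong-chain or wrong-contract signatures.
const domain = {
  name: "MinimalForwarder",
  version: "1",
  chainId: 421613, // e.g. an Arbitrum testnet; assumption for illustration
  verifyingContract: "0x0000000000000000000000000000000000000000", // placeholder
};

// Custom struct matching the message shown in the demo: owner, relayer, nonce.
const types = {
  ForwardRequest: [
    { name: "owner",   type: "address" },
    { name: "relayer", type: "address" },
    { name: "nonce",   type: "uint256" },
  ],
};

async function signAndVerify(signer: ethers.Wallet, relayer: string, nonce: number) {
  const value = { owner: await signer.getAddress(), relayer, nonce };

  // ethers v5 exposes EIP-712 signing as _signTypedData.
  const signature = await signer._signTypedData(domain, types, value);

  // What the contract does on-chain, done off-chain here for illustration:
  // recover the signer from the signature and check it matches the owner.
  const recovered = ethers.utils.verifyTypedData(domain, types, value, signature);
  if (recovered !== value.owner) throw new Error("bad signature");

  // On-chain, the contract would also check msg.sender against the trusted
  // relayer and bump the stored nonce, invalidating the signature after one use.
  return signature;
}
```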
Brian, maybe you could also share this video recording in the chat. Yeah, I will. I recorded this just before the talk, just in case this happened. So here we are. We're adding the PeerID and the connection address. And let me just skip forward here. OK, here we go. OK, so you'll see that the message passed over here, and this is now waiting for me to send the signature. So I'm going to go over to the app on the left, I'm going to grab my wallet here, and that is going to be my trusted relayer. We're going to sign this transaction, so we should see that pop up over here. And you can see that image from earlier: we've got the trusted relayer, and the owner is this user's wallet right here. So we've created a signature, and I'm just showing you what that looks like right there. That's the signature we're going to pass to the smart contract. First, we pass that signature over to the user on the left. Once they receive it, they will sign and pay. So basically, this user on the left is paying for this transaction to go through and sending that signature we saw earlier to the smart contract. And I think I'm just showing here that the signature was received by the user on the left, so it's the same signature. Then we pay for that, and you can see it settled right there. So yeah, that's it. All the communication is done through libp2p, and you're basically passing data from one peer to another in order to facilitate a meta-transaction. That's great. Thanks, Brian. And yeah, if you could share those resources in the chat, people can check it out. Or even drop a link to that video into the actual presentation, so if anybody goes back and checks it later, they can see the recording. But it's nice to have the resource too. Good stuff. I'll take over screen share from you then, and we'll move forward. I think up next we've got a video from Po-Chun, who couldn't make it today. I'm going to play this at 1.25x playback speed, and let me know if the audio is not working. Hey, Dave, the audio is a little tough. Can you try resharing and clicking the checkbox that says share with computer audio? Yeah, unfortunately, that would require me to log out. I can do that if somebody can manage the screen sharing. We could even move forward onto the next presentation and come back to Po-Chun. Well, I tried to do that earlier, and there's some permissions issue. Should we do that? Yeah, we can move on, and I will try to reshare. Yeah. Great. Thanks a lot, Katie. We'll have someone else go while I get it set up. Yeah, let me click forward here. Dan and Elliot are next. Sure. Yeah, I guess we can jump in. Yeah, thanks, guys. All right. Well, yeah, I'm Elliot, and I did this collab with Dan. I'm helping to lead the Ignite engineering team.
So that's IPFS GUI and tools, and that includes IPFS Desktop and Web UI. Our project is an IPFS Search integration into those applications. And yeah, let's go to the next. So IPFS Search is a way to discover content on the distributed web. When you start using IPFS, there's not an easy way to find out what is available there. By the way, as I'm sure you all know, Web UI is what IPFS Desktop is built on, and these apps are the primary entry points for new IPFS users. It's very easy to just run IPFS Desktop: you get a Kubo node automatically, and you can start interacting with IPFS right away. In these apps right now, you have to kind of know a CID or have a way to look up what you want. But with IPFS Search, you can actually discover new IPFS content, old content, any content. It's really a search engine that tries to index everything that's in the DHT, everything that's on the network. And that enables you to better appreciate the value of all the data that exists on IPFS. So as mentioned, I'm Elliot. I did this together with Dan, and a special thanks to Russell, Frito, Matias, Lytle, and Julia for their help as well. I think I'll pass it over to Dan. Hey, thanks, Elliot. Yeah, you can go to the next slide too, I think. So this was kind of covered as well, but basically, this idea was born from, I'm pretty sure, an interaction between the IPFS Search team and Russell in Lisbon, about trying to expand the reach of IPFS Search. So, actually, can you go to the next slide? OK. Yeah, I guess instead I'll just do a quick demo, so I can take over. But yeah, basically, the goal here was to create a proof of concept for integrating IPFS Search into the current Web UI. For some reason I can't share. You should be able to share now, Dan. OK. OK. And you can see Chrome, hopefully. But yeah, so basically, for those of you who haven't seen it, and I hope everyone at this point has seen the IPFS Web UI, this is just the main page. What we did as a v0 is add a new tab to the nav, essentially spinning up an IPFS search within the Web UI. Everything behind the scenes here is powered by IPFS Search's API, and what's happening right now is we're searching the index of IPFS for, in this case, NASA. What we can do here is explore the CID, which already exists within IPFS Web UI. This should pull up, hopefully. Yeah. And then if we go back to search, we can also link out to the IPFS Search details page. So hopefully this should spin up ipfs-search.com's detail page, if it works. My computer is super slow, but let's see. Yeah. So basically, it's a pretty simple demo and proof of concept, but we were able to essentially pull IPFS Search into the Web UI. And I guess, next steps... Katie, I guess you can share again; I'll stop sharing. Yeah, I can take over. Or Dave. I was going to say, I know Frito had some questions as well; he jumped in specifically to ask you, Dan, about that. OK. Yeah. Basically, for us, the next steps are really just collaborating further with everyone we've been working with over the last week and cleaning up the UX and UI. And there's a lot of functionality IPFS Search has in their application that we could try to bring into ours as well. Like pagination, and you can play media right from ipfs-search.com: you could queue up an entire playlist of audio and listen to music that's stored on IPFS.
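(For the curious, the integration talks to IPFS Search over its public HTTP API. A minimal sketch of that kind of call in TypeScript; the endpoint path and response fields are assumptions based on ipfs-search.com's API at the time, not the exact code in the Web UI branch.)

```ts
// Runs in a browser or Node 18+, which provide fetch globally.
interface SearchHit {
  hash: string;   // the CID of the matched content
  title?: string;
  type?: string;  // e.g. "file" or "directory"
}

async function searchIpfs(query: string, page = 0): Promise<SearchHit[]> {
  const url =
    `https://api.ipfs-search.com/v1/search?q=${encodeURIComponent(query)}&page=${page}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  const body = await res.json();
  return body.hits as SearchHit[]; // assumed response shape
}

// e.g. searchIpfs("nasa").then(hits => console.log(hits[0]?.hash));
```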
So there's a lot of really cool things that can be done in the future here. Yeah. I don't know if, Elliot, you had any other comment? Yeah, I think that kind of covers it. We did have a fun discussion on GitHub about the future of IPFS Search and decentralizing it more, using libp2p, ways to make sure that users can access it even if there's, for example, DNS censorship. And then, yeah, just deeper integration and improving the way users actually do a search in Web UI. Thank you. Yep, thanks. Thanks, Dan and Elliot. Sorry, I don't really have questions, but it was very nice to see this, Dan and Elliot, and I'm looking forward to collaborating more on it. Thanks. Yeah, thanks for all the help throughout the last week. Sure. Yeah, thanks. And a perfect live example of some of the really cool collaboration that comes from these projects. And thanks, Frito, for helping the guys out. I think we've got the sound sorted, so we'll give Po-Chun's video another try here. Again, let me know if not, and we can give it a third attempt. Hello, everyone. This is Po-Chun. I'm going to talk about my Launchpad project around hacking on Lily's data storage and performance improvements. In case people don't know, Lily is a wrapped Lotus node designed specifically for indexing the Filecoin blockchain. Here's the current architecture. You can see there's a Lily notifier that's syncing data from the Filecoin network and inserting tasks into a task queue, which are then consumed by a set of Lily workers. A task defines how we extract, transform, and load blockchain data into a destination, which is usually our data warehouse. There are a couple of downsides to the current design. First of all, since every worker node is an independent Lotus node, when we want to add a new worker to the pool, we need to wait for it to fully sync with the Filecoin network. Second, since each worker node is doing the network syncing as well as the data extraction, we need a pretty high-end hardware spec to handle the workload. This makes running many workers expensive. So, to make this design more scalable and easier to maintain, I thought about a new architecture proposal. Here's the new design. Instead of each node using a local disk as its data store, we use a distributed data store that's shared across the Lily notifier and the Lily workers. The Lily notifier is responsible for syncing the Filecoin network data into the data store, and the Lily workers only need to focus on extracting data from the distributed data store. This makes the Lily workers stateless, more lightweight, and easier to scale up. Also, the distributed cache can be shared across all the nodes, and both the cache and the data store can be scaled up independently. So I implemented a prototype that uses S3 as the distributed data store and Redis as the distributed cache. Then I realized there's a lot of room for performance improvement in the Lily codebase, so I decided to pursue that direction. I looked at the Lily production dashboard to find out what the most expensive tasks are, and we can see that most of the expensive ones are miner-sector related. When I looked at the tracing for a certain sector-event task, I noticed that in some cases there would be one miner taking a lot of time to extract data. When I investigated further, I noticed it's because the miner has a lot of sectors, more than two million in this case.
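(The fix described next replaces repeated two-way merges of sector states with a single multi-way merge. As a generic illustration of the idea, not Lily's actual Go code, here's a minimal multi-way merge over pre-sorted lists in TypeScript.)

```ts
// Merge k sorted arrays in one pass, instead of folding them together with
// repeated two-way merges, which re-copies the accumulated output each fold.
function multiMerge<T>(lists: T[][], cmp: (a: T, b: T) => number): T[] {
  const heads = lists.map(() => 0); // index of the current head of each list
  const out: T[] = [];
  for (;;) {
    let best = -1;
    for (let i = 0; i < lists.length; i++) {
      if (heads[i] >= lists[i].length) continue; // list i is exhausted
      if (best === -1 || cmp(lists[i][heads[i]], lists[best][heads[best]]) < 0) {
        best = i;
      }
    }
    if (best === -1) return out; // all lists exhausted
    out.push(lists[best][heads[best]++]);
  }
}

// e.g. multiMerge([[1, 4], [2, 3], [0, 5]], (a, b) => a - b)
//      returns [0, 1, 2, 3, 4, 5]
```

A production version would pop heads from a min-heap to make each selection O(log k), but even this linear-scan sketch writes every element exactly once, which is the main win over iterated pairwise merging.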
So what I did to the code is make it more performant. The trick is, instead of doing repeated two-way merges, I use a multi-way merge to combine all the sector states at the end. This significantly cut down the runtime: the task went from 50 seconds to 30 seconds. Another performance fix was to get rid of an actor-code mapping in the code. Lily tries to construct an actor-code lookup table for every epoch; the actor code is just a code that indicates the type of an actor. For every epoch, Lily will loop through the state tree, usually around 1.5 million actors, to build the lookup table. This process takes 40 seconds without any caching and 10 seconds with state-store caching. However, once we build that lookup table, it's only used a couple hundred times within a task, which just doesn't justify the cost of building it. After I removed the lookup table, the task runtime went from 48 seconds without caching, or 16 seconds with caching, down to 8.5 seconds. Right, that's all I have. Thanks. Cool, thanks, Po-Chun. Sorry he wasn't able to make it; I'm glad he was able to share his video. I think Marco's going to take over screen share, as he has what was a bit of a late submission from Sarah. Would you like me to continue through the deck first? I was able to add it to the Google Drive, to the deck, so you can just go on to the next slide. All right. Should be there or not? I know she uploaded some slides but had some technical issues with video. Yeah, I can take over. It's all good for me. Yeah, go ahead. I think we just have a few more after this, for anybody concerned about us running over time-wise. Great, can you see my screen all right? Yeah. Plus audio? Yeah, that's good. So, from the FVM DX team today: the early builders dashboard that I've been working on. Really quickly, for those of you who have not heard about the Early Builders program for FVM, it's a little bit different in the sense that it's focused on early builders building with the product as it's being incrementally delivered, rather than building on a finished product. This is just a really quick capture of how the program runs. I'm just going to leave it here, but TL;DR, they come in, they go through weekly check-ins with us, and then eventually they graduate. There's not a hard, explicit outcome that we want them to deliver a product on FVM at the end of the cohort, but we highly encourage it, and we bring them a lot of resources and bring them much closer to the PLN resources to do so. Coming out of it, they then go straight into launching their product, or helping us run community calls if they're more of a community member than a team, and/or they join the builders funnel. So what are the useful metrics in this case? The goals of the dashboard are to capture a snapshot of how the program is going, probably on a biweekly or monthly cadence; that's something we're trying to figure out now. It then helps us optimize the Early Builders program for deployment on FVM, because at the end of the day, we want them to deploy FVM-compatible products; we want to see actors on the network. And so how do we make sure that progress is moving along, and how can we use a dashboard to capture that?
Also making sure that they have a great developer experience, because they are going to be the best advocates for us moving on to product launches, the FVM launches. So we need to make sure that that's going well, and also to capture the value of the program as a whole. Challenges that I faced building this: the first was defining metrics. The team had a bit of a huddle to come together, but it was challenging to know what we should be measuring that would be useful, especially when there's not a hard expected outcome at the end of the program. Lack of visibility into the teams' progress was also a challenge, because teams communicate in very different ways and have very different transparency comfort levels. So we might not actually know what they're building, but how do we know that they're progressing along? Also, figuring out automation for the dashboard: a lot of this is very subjective information, and sometimes word of mouth, right? So a learning is to have really strong relationship building and communication with these teams, making sure they feel they can share what they're doing with you, because you'll never find out from their GitHub or website alone. We also tried crowdsourcing team inputs, but a learning for us is that it introduces a lot of inconsistency. It gives us a sense, but then we also have to research and make sure it's consistent, so that when readers read it, they get value out of the dashboard as a whole. Next up, we're looking at the update cadence for the dashboard, sharing the dashboard with relevant stakeholders and seeing how it provides value, and then being agile with that, and, as FVM develops, changing the metrics we capture. So I'll do a really quick demo of the dashboard over here. Okay, cool. Here you can see total teams is about 89 as of now, 89 active teams. We have about 20 projects deployed on FVM as of today, so you can tell that's maybe something we should be nudging a little bit more on. We have 14 teams that are funded; that's something we're looking at for sustainability beyond the program, whether they're tied in with a dev grant or into the builders funnel. This gives a sense of the percentage of use cases, so our FVM product team especially will know which use cases are key. You'll see, yeah, data DAOs are a pretty big use case, and maybe that's something we should prioritize, and then of course there are many more over here, and you know who to ask to help build out your solution blueprints. Over here is estimated engagement. This is highly subjective, based on someone like me, a program lead, giving a score for how I think engagement has been going at the weekly check-ins versus on the Slack channels and so on. So it's just a sense, but it tells me, okay, maybe I want to shift teams more toward the threes and fours rather than the twos, and for those teams that are at a two, how do I ping them to make sure they're doing okay and staying engaged, right? And then testing, over here, is more for our product team and our engineering team. For stuff that's going out, we make sure that's captured, and if anything that needs to be tested is not on here, we know we need to start having a conversation and asking for volunteers.
And then lastly, around expertise: getting a sense of the languages everyone's working with, so we know what to prioritize when we're building our SDKs, and if we need experts to test certain things, we know who to reach out to. So yeah, that's mostly the demo, thanks. Awesome, yeah, Dave, take it away. Cool, thanks, Sarah, and Marco for your help there. That was recorded at like 2 a.m. her time; she was up late doing it. Denise is up next. Denise, would you like me to continue sharing, or would you like to share? I can actually share my screen. Cool. All right. So just some quick background on me: I am on the Spaceport team at Protocol Labs. We provide services and resources to teams within the Protocol Labs network, from events like Lab Week to onboarding processes into the network, and sharing all of the different resources that Protocol Labs has to help these teams grow more quickly. One common question we get quite a bit is that it's difficult to find people who can answer a particular question about a PL project or a PL team, especially on the technical side. For example, if I have a question on libp2p, who do I ask, where do I go? For me, I have the amazing Launchpad team here, but if I'm brand new to the network and haven't done Launchpad yet, what do I do? The solution we have for now is the PLN directory and office hours. You'll see some screenshots here. For those who have not seen the directory yet, I highly recommend you check it out; I'll put it in the chat after this call. It's a place to see all the teams within the Protocol Labs network, as well as who's a member, what their role is, and their contact information, even, for some people, a direct link to their calendar to set up office hours. So in this case, you can search for teams in the search bar up here. Let's say I want to look for libp2p: I'd get a result for the libp2p stewards team, which has recently been added. You can see their website, their Twitter, a little bit more about them, and, more importantly, some of the members. I will caveat that this is an incomplete list and still a work in progress. But looking here, I can much more easily find that Steve is the lead for this team, and he actually has an office hours link available, so I can schedule a quick 15-minute chat to reach out and say, hey, here's the question that I have. These are the teams that have now been added to the PLN directory, on the left-hand side: the Launchpad team, thanks everyone for this, as well as most of the engineering teams. The reason I wanted to focus on the engineering working groups is that these are the kinds of questions my team doesn't have a quick answer to. Working groups that will be added by the end of this month are all listed here. And the v2 vision is that each working group will have a full list of members added, and that each working group will also have a preferred contact method listed, so that messages that don't need to go to individuals can go to a shared message board or email and get responded to more quickly. A quick ask for this group: if you see your working group here, please check it out at plnetwork.io/directory. You can use that same search bar, and if something looks off, please use the request-to-edit button. And if you have any feedback, please share it with me at spaceport-admin@protocol.ai. Thanks, guys. Thanks, Denise.
Super helpful resource to keep up with all the changes and where everything's located. Yeah, actually, Denise, if you could drop that link into the chat, that'd be great as well. We've got Yuri up next. He was not able to make the call today. Oh, wait, Yuri is on the call, but he submitted a video recording, which I will play now; he doesn't have a mic today. No worries, Yuri, we've got your presentation here. And I think there are two more after Yuri. Hello, everyone. My name is Yuri. I am a software engineer from Peeranha. First, a little about my main project. Most Web3 projects today exchange most of their community knowledge between users in different messengers, like Discord, Telegram, and Slack. That information is not searchable and doesn't have a structure; basically, it's not usable, but those channels store lots and lots of information. For example, we did an analysis of the Filecoin Slack channels and found 380 of them, which is impressive. We didn't see a typical solution to this problem for big Web3 organizations like Filecoin. For anyone not familiar with Peeranha, our mission is to build an effective knowledge-base protocol specifically focused on Web3 communities. The protocol itself is fully decentralized, built on the blockchain using Filecoin and IPFS. All the content is stored in a distributed way and owned by the community itself, and it also provides different incentives to contribute, in the form of tokens and different NFTs as rewards. And we are now working on a collaboration with the Koii network to reward users with attention tokens as well. So during Launchpad, I was working on community documentation. Peeranha gives various Web3 communities the opportunity to create a separate subdomain dedicated exclusively to that community, with its own topics for discussion and its own rules of conduct. Previously, there was only an FAQ page, which was not a flexible enough tool to introduce new users to the philosophy of a particular community. And we had an idea to implement a dynamic documentation system, similar to GitBook but decentralized. We have now finished the documentation menu, with IPFS indexing via The Graph, and moderators and administrators of a community can create or edit the whole documentation section with only one transaction. The main problem with the previous version was the necessity of sending a transaction each time you created or edited any documentation item; it took too long, and creating complex documentation was too hard. In the current version there is an editing mode: all changes are saved to local storage, and only after publishing is the JSON document with the new documentation saved to IPFS, with its hash stored on the blockchain. On this slide, you can see how the documentation menu looks on the page, and here is the editing mode. After clicking save-to-draft, you can see how the documentation looks without sending a transaction. You can add text, change titles, do anything you want: create new posts, edit old posts, and also change the item order. Now, a little bit about the format. It was also pretty challenging to create The Graph parser for such a big JSON structure. In the left image, you can see how the documentation object looks on the front-end side, as a draft, before being sent to IPFS. The right image is the same JSON object, but parsed by The Graph.
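(A minimal sketch of the editing flow Yuri describes: drafts live in local storage, and publishing uploads one JSON document to IPFS and records only its hash on-chain in a single transaction. All names here are illustrative, not Peeranha's actual SDK.)

```ts
import { ethers } from "ethers";

interface DocItem {
  title: string;
  content: string;
  children?: DocItem[];
}

// Edits accumulate locally; no transaction is sent per change.
function saveDraft(doc: DocItem[]): void {
  localStorage.setItem("docs-draft", JSON.stringify(doc));
}

// Publishing: one IPFS upload plus one on-chain transaction for the whole tree.
async function publish(
  ipfsAdd: (json: string) => Promise<string>, // returns an IPFS hash/CID
  contract: ethers.Contract                   // assumed setDocumentation(hash) method
): Promise<void> {
  const draft = localStorage.getItem("docs-draft");
  if (!draft) throw new Error("nothing to publish");
  const hash = await ipfsAdd(draft);     // whole documentation tree in one upload...
  await contract.setDocumentation(hash); // ...and a single transaction for the hash
}
```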
You can also see that the main item information, like the documentation content itself, is packed to IPFS as well; in this structure we keep only the hash. So very soon this functionality will be deployed to production and will be used by community moderators. And that's all I wanted to say. Thank you very much. Thanks, Yuri. Thanks for sharing that. Very cool to see Peeranha in another cohort and building upon your tech in Launchpad. Snow's up next. Snow, would you like me to continue sharing the screen? Sure, that'd be great. So hi, my name is Snow. I started off with a project on what scaling Notion would look like for Launchpad in 2023. To be completely honest, if you can go to my first slide, I started not knowing anything about Notion, and I purposely picked this tool as part of one of my responsibilities because I really did want to learn. Basically, my starting goal was: what is Notion going to look like when we have multiple cohorts running in parallel, and how can we make it a more accessible, super useful tool for all residents and cohorts going forward? In the beginning, I was noticing some challenges: this is a big learning curve for me, it's a new project, and, being a contractor, there were some permission issues I was running into. But actually, as I've gone through, and we can go to the next slide, I have started researching, reading, and making progress on what's going on. The thing I kept hearing from my team and from others is automation: how can we make Notion work for us in Launchpad? How can we automate things? Because we're going to be growing, having multiple cohorts, more residents, even doubling those numbers in 2023 for the goals that we have. And as I was looking through, I realized things are happening really fast, and features are coming out all the time, so this is a constant learning opportunity for me to find out what's new. For example, recurring templates is something that just came out; I found videos about it this month, in November. It's not quite as easy as a recurring-task feature yet, but it's something Notion is hoping to make a building block for the future. In the bottom corner you can see there are templates: you can upload a template and have it come out at a certain date, but I'm wondering what that would look like in the future for the full-on tasks we're doing and using for Launchpad. Next slide, please. And so, as I think about it more: what does Launchpad need automated as we scale? As I've been going through this, especially having joined the first cohort as a resident participant and kind of an observer, and now, with V7 starting this week, actually jumping into the tasks and tools I was made responsible for, it's the resident profiles, the resident checklist, the templates we're continuously using. What happens when we have more than one cohort and they're not running in parallel? How do we make sure that all makes sense? And how do we move from manual copying and pasting to formulas, discovering new features, and automation? So that's all I've got. That's great, thanks for sharing, Snow. And everybody in here has engaged with the Launchpad Notion pages, so this is very useful and helpful for current and future cohorts.
And for the last presentation of cohort V6's Show Me What You've Got, we've got Robert. Robert, would you like me to continue sharing, or would you like to share your screen? Yeah, it'd be great if you could share. So hello, everyone. I know we're over time, so I will keep it brief. It's amazing to see so many great projects, and I know I'll be using many of them. Even more impressive is how many are in the demo stage already; I'm very, very impressed. So my name is Robert. I manage the Orbit program here at the Filecoin Foundation. What I'm looking to do is automate many of the internal Orbit processes and also demonstrate a number of Filecoin virtual machine use cases. So maybe we go to the next slide. For those of you who don't know, the Orbit program is the Filecoin community ambassador program. There are over 70 ambassadors, in nearly as many countries, all over the world. And more or less, Orbit's been misdiagnosed as an events program because we spend money on events, but these ambassadors not only host events, they translate documentation and publish articles about Filecoin in their home languages, they build on Filecoin, a number of different things that they're involved with. It's really amazing to see how the program has grown: it started in January, and the participation keeps going up and up and up. Okay, maybe the next slide. So on the previous slide, you saw a bunch of charts. When I came to the Orbit program, everything was being done manually. More or less, people would submit their event briefs via email, we would transcribe their proposals into spreadsheets, and there are a number of situations where we have to email invoices and contracts. It's just a totally out-of-control, very time-consuming process, and you can imagine, if we want to scale, the Orbit staff should have to do none of that. So we are transitioning to a program, or software, called Airtable. More or less, we spent a couple of weeks mapping out the process, from applying as a volunteer to join Orbit, all the way through getting an event approved and submitting the receipts. This is more or less the automated process that Airtable is going to manage, including sending out automated emails, invoices, contracts, and all of the like. Let's go to the next slide. So, where this is relevant for the Filecoin virtual machine is that these Orbit members actually have a ranking, and we're trying to pit them against one another in competition; there's some gamification to keep them active. What we want to do is use Orbit as a demonstration of three Filecoin virtual machine use cases: reputation, rewards, and voting. So, more or less, what's going to happen is we have Airtable, that's the logo all the way on the right of your screen, basically managing all the processes, and we get really amazing data from Airtable. We're then going to use another piece of software, which is coincidentally called Orbit, not to confuse everyone, and that is where we're going to track volunteer participation and member rankings. And then we're going to get that into a Filecoin virtual machine smart contract somehow. Okay, next slide, and last slide. So, the Filecoin virtual machine smart contracts: we are in the design stage. This is more or less what the process looks like.
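(Since this is still being designed, here is one hypothetical shape the flow Robert outlines could take, sketched in TypeScript: participation data in, a rank out, and the rank pushed to an on-chain tracker keyed by address. The weights, thresholds, and the setRank call are all invented for illustration.)

```ts
// Participation data as it might arrive from Airtable exports.
interface Participation {
  address: string;       // the member's wallet address
  events: number;
  articles: number;
  translations: number;
}

// Made-up scoring weights and rank thresholds, purely illustrative.
function computeRank(p: Participation): number {
  const score = p.events * 3 + p.articles * 2 + p.translations;
  if (score >= 50) return 4;
  if (score >= 25) return 3;
  if (score >= 10) return 2;
  return 1;
}

// Push each member's rank to a hypothetical on-chain rank tracker.
async function syncRanks(
  members: Participation[],
  setRank: (addr: string, rank: number) => Promise<void> // e.g. a contract call
): Promise<void> {
  for (const m of members) {
    await setRank(m.address, computeRank(m)); // ranks 3+ could unlock perks/voting
  }
}
```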
So we'll have the input participation data from Airtable, and that will get processed in the Orbit software, which will compute the ranking of the Orbit members. We'll then have some type of smart-contract rank tracker that associates that rank with the member's Filecoin address. And then, more or less, two things will happen. First, we'll have some type of token system where members receive tokens for ranking up, and they can send those tokens to a foundation wallet to redeem some type of perk. For example, if you get to the third-highest rank, you unlock the Foundation paying for you to come to a Fil event. So maybe there's some type of NFT that represents that perk, which we send them for reaching that rank; they send it to the foundation wallet to redeem it, and then we send them the compensation to pay for their flight and hotel and all of this stuff. So that's reputation and rewards; that's how that would work. And then there's this idea of a yearly Orbit summit, and we thought maybe the highest-ranked people would be able to vote on which Orbit member will host the yearly summit. We actually have an anonymous-election smart-contract voting machine that I use as a homework assignment when I'm teaching software classes, such as the one we did over the summer for CoRise, but this works on Ethereum. It's really interesting because it involves some cryptography that actually hides people's votes. We want to migrate this over to the Filecoin virtual machine, and more or less we'll have some type of automated process where, for people at rank three and rank four, when we want to generate an election, their wallet addresses will automatically come into the election and be earmarked as valid addresses to vote. And then they will vote, using their private keys, on whatever it is they're voting on. So those are the three use cases we're looking to build. I'm hoping to do it by the Filecoin virtual machine global hackathon that's happening at the end of November. The Airtable piece, luckily, is in the build stage with our partners; we have a third-party consultant group that's helping us build that out. The rewards and reputation are in the design stage, and the voting we need to migrate from Ethereum to the Filecoin virtual machine. So I just want to thank Sarah from the FVM team, who has been very generous with her time. This has been maybe the most unlikely of collaborations, between an ambassador program and the Filecoin virtual machine team, but I will definitely be leaning on her greatly to bring this to life. And I also want to thank Elijah Jasso, who was a TA in a course I used to teach, and who brought the voting machine to life. Okay, thank you so much. I'm looking forward to delivering this in the next couple of weeks. Very cool, thanks, Robert. Cool to see those stats about where in the world events are taking place. Thank you, everybody. Thank you so much. And that brings cohort V6 to a close, with your final, well, almost your final, Launchpad tasks complete. With that, we have a cohort being launched into the network. You're going to go forth and do more wonderful things, and the presentations we saw today were an incredible mix of creations and ideas and talent from across the world. I'm looking forward to seeing what everyone's
been working on over the past six weeks, and what you work on moving forward. Before we throw those graduation caps, there's one last task, and that is to take out your phones, scan that QR code, and throw your votes in. As if you don't have enough swag from Lisbon, the winners will be receiving more swag. There are the categories on the left: contributions, technical contributions, collaborations, most valuable. We'll tally these today, share them tomorrow during our final weekly sync, and then we'll bid you farewell into the ecosystem. But we will definitely be calling you back. We have such a cool collection of talents in here, and in order to build Launchpad and improve it, we'll probably be calling upon your talents in future cohorts. I do want to once again thank all of the Launchpad team for their work over the past six weeks. And, I didn't mention it earlier, a big shout out to all the mentors, who played an integral role, as they always do, in guiding our cohort residents through the six weeks. And, graduates (is that a word? people who are going to graduate), we will likely ask you to be mentors in the future too, possibly. Lastly, if you completed all the quizzes at the end of each section of the curriculum, and have now completed your project presentations, look out for an email that should be coming your way once those tasks are all complete and projects have been shared like they were today. And I think that brings our Show Me What You've Got to a close. Another reminder to vote, and now it's up to you. Thank you, everybody. Have a great weekend, have a great day, see you later. Thanks, everyone. Bye.