Well, welcome to Show Me What You've Got. We're going to run through a high-level overview of what we're going to be seeing and sharing today, and a bit of background on what we've been up to in the past six weeks which got us here. There's a little agenda. First of all, as we work through today: this has been shared with the cohort in Slack, and we'll proceed in the order in which the slides have been arranged, which is the order people uploaded their content. Like I mentioned, some people have other places to be and it's getting late in some regions of the world, so we'll stick to that order. I'll share my screen, and if you'd like to share yours, just let me know and you can take over sharing responsibilities; I think we have one or two demos scheduled, so people will be sharing their screens and working through what they've created.

In the top right-hand corner of the screen you'll see this QR code, and it'll appear a few times throughout the presentation. There are some awards that are going to be given out based on the work that's happened and the presentations we're going to see today. If you scan that QR code (it'll come up again at the end of the presentation, and maybe once or twice in between), you'll get a form with options to vote for a variety of awards: best technical presentation, best collaboration, most creative, and so on. The winners will be announced tomorrow in our final weekly sync of the cohort.

In this cohort we had 34 residents, I believe, from four or five different organizations, spanning regions from South Korea across North America all the way to Western Asia and the Middle East. We met in Lisbon for colo week, and we'll see a really short but very cool video recap in a second.
The first three weeks of our six-week cohort were mostly focused on learning the content — IPFS, IPLD, libp2p, and Filecoin — before meeting in Lisbon. I've jumped into a bit of what Launchpad is right there, but for those of you joining from across the network who maybe weren't involved in the last cohort, or heard about Launchpad in Lisbon: Launchpad is a six-week, full-time onboarding program designed to train, develop, and match technical talent at scale with opportunities in Web3 across the Protocol Labs network. So we've got scaling, hiring, onboarding, and we build community. The images at the bottom there are the wonderful team that I get to work with that make this happen; some of them are on the call here. Running through the names quickly (the team is growing), from the top left down to the bottom right we've got Christian, Molly, Brooke, Carla, Walker, Lindsay, Snow, Katie, Hanna, myself, Enol, and Marco. None of this would be possible without the contributions of those team members, so when you see them, thank them: drop them a message on Slack to say how wonderful the events in Lisbon were, how meaningful the interactions were, or how good the quality of the curriculum was. This is cohort six, and we started cohort seven this week as well, so it's been a bit of a whirlwind for the people whose faces are on screen there. We're excited to host this event today and share what's been created with the network. Here's one of the photos from Lisbon: the group that managed to jump in Ubers and shuttle between events and hotels. This might have been the day we had the most people in one place at one time, so thanks to everyone for managing your time and trying to make the same sessions. Let's move on to Show Me What You've Got; like I said, if you'd like to share your screen, just let me know.
Yeah, let's do it. Sarah is up first.

Hi everyone. First of all, a huge thank you to the Launchpad team. It was an amazing experience. I was at a dinner party the other day and someone asked me what decentralized storage is, and I gave an answer that actually sounded like I knew what I was talking about, so thank you. And it was so nice to meet everyone in person; what a wonderful experience. For my project, I thought I would do something directly related to my role. I joined recently as the managing editor on the Spaceport team, and part of my scope is to help increase the cadence of content on the Protocol Labs blog. When I joined, it would be updated maybe once every four to six months, and then there would be a flurry of posts and then nothing for a while again. I wanted to help introduce a bit of structure, and we thought to start with a low lift. Protocol Labs has an amazing archive of talks that have happened through Funding the Commons and things like LabWeek and PL Summit, so we thought to repurpose some of that through the written word. We created a series called The Transcriptions, and the idea is to take some of these talks and produce executive summaries and key takeaways, so that someone who's really busy and doesn't have time to watch an hour-long, really technical deep dive can take a glance, skim through, and then go through a full transcript that includes video timestamps to take you to the section of the video you might want to see. Goal number one was to identify around 10 talks to begin with, including topics that are more introductory, like the importance of public goods funding, and then eventually moving along to more technical things, like the FVM.
One of the challenges is that there are so many talks, and it's difficult to play favorites; of course every team thinks the talks they're curating are the most important ones, and we want to make sure we choose a nice selection. So I had to meet with different teams, talk to them about which talks we should highlight, and then create a structure: we aim to publish every other week until the end of the year, and then, starting in January, ramp up to weekly. One of the learnings was that the talks are super long but full of value and insights — so how do you condense something that's an hour long where so much of it is interesting? Rather than just straight transcripts, we lead with an executive summary that includes key takeaways, top insights, and maybe really interesting quotes, and then have the complete transcript below for whoever wants more. The next step, which we're excited about: The Transcriptions covers a lot of talks that deal with theory or high-level topics, and we thought to complement that with interviews with founders from within the PL network who are, you know, putting practical implications to some of those theories. So it's kind of like introducing a concept and then the people who are applying it. We're going to do that through a mix of written content and video, and that should be launching in January. It's a lot of work, and we're bringing on writers and copy editors, who are contractors, to help us out. I would love your support: if you read the content, click on the links so that we up our page views, but most importantly, give us feedback. If you attend a talk, or hear of one you think is really fascinating or important, please reach out to me. And this shows you a sample of the layout and design, the kind of branding look and feel we have, which translates across social media as well.
The idea is that this is part of a wider plan to establish thought leadership in this space, so that when people think about Web3, blockchain, and some of these important things we're working on, they think of Protocol Labs. That's it for my project, and thank you so much for your time. Thanks again for everything; this was such an amazing experience for me.

That's awesome. Thanks, Sarah. Looking forward to seeing that develop even further over the next few months. Lucky's up next. Just a reminder, there's that QR code; it'll appear throughout, so you'll have multiple opportunities to vote. It's probably best to wait till the end, but you can open it up and have it handy if you'd like.

Hi everyone. Good to connect again. I'm back in Canada and the cold is beginning to happen, but hopefully I will survive my first winter in Canada. As we've been discussing, and as I've been the governance champion throughout this cohort: we know that the Filecoin Foundation is the governance steward for the Filecoin ecosystem, and at the moment we don't have a publicly available roadmap that shows our record of work and what Filecoin Improvement Proposals we have. For background, most of the governance work at the moment is around Filecoin Improvement Proposals, where members of the Filecoin ecosystem can propose technical or non-technical changes to be included in a network upgrade — changes that should happen within Filecoin. At the moment, people are not sure how to engage with the Filecoin Improvement Proposal process, and they're not sure where to find the information on what the current FIPs are (I'll call them FIPs to make it easier): what FIPs we have, what they're about, and how to be part of the governance process.
So what I'm doing at the moment is identifying and creating a publicly available roadmap that shows what FIPs we have, what their content is, who is championing each FIP, and what stage it's at, so that anyone can go in and check that information, and also learn a bit about the trajectory of the network, some of the work our core developers are doing, and what improvements we'll be seeing going forward. What I've learned so far is that governance is a critical aspect of any open source protocol, including Filecoin. It's not very popular at the moment, but the conversations are happening, which is great; it shows that the network is maturing, because as a network matures, governance becomes something of a political thing, and people have different opinions and ideas about how the network can improve. The roadmap is useful that way: we can see what improvements to look out for in the next year, when we're hoping to see these proposals land in a network upgrade and at what time, and how you can engage and read more about what these proposals are. One of the challenges I had during colo week was identifying the best place to house this roadmap so that it's publicly available, so if any colleagues on the team are good at managing knowledge hubs or resources or tooling like this, please reach out to me so we can discuss how best to publicize it. At the moment I am building that out: I have mapped out all the work across 2023, but it's still an internal document that I'm not ready to share yet, until maybe two weeks' time. But I can share a link where you can take a sneak peek at what improvement proposals we have at the moment. Next slide, please.
Like I said, the roadmap will cover FIPs, the Filecoin Improvement Proposals, and it will cover ecosystem development and engagement with members of the community. The FIPs section should show what is publicly available — what FIPs are in flight that anyone can read about and understand, especially technical colleagues or anyone who is just curious about what's going on. The roadmap also covers the governance workstreams beyond FIPs, because FIPs are not the only workstream the governance team at the Filecoin Foundation is doing; we also have events and activities we're hoping to launch to publicize governance and make it popular in the ecosystem. So that's also one of the things you'll find in the roadmap. You can go to the next slide. Progress so far: like I said, I have drafted the first roadmap, and I'll share a link where you can have a sneak peek, but it's not publicly available yet. You'll see some of the Filecoin Improvement Proposals that will go into the next network upgrade, Shark, as well as some other Filecoin Improvement Proposals that are being drafted and worked on by several technical colleagues at the moment, so you can read up on what they're all about, and hopefully they can go into the next network upgrade next year. Next steps: again, to find the platform where we can house this roadmap so that it's easily accessible by everyone, regardless of technical background or proficiency. And I think that's it from my side. Thank you so much.

Yeah, exciting to see where that one goes as well. And Lucky gave an unconf session in Lisbon, which was cool; we got to learn a little more about the work of governance. Allison's up next, Network Goods.

Good morning, afternoon, evening, middle of the night. I can't believe the time zones that we have here; it was so nice to be in Lisbon at the same time.
So, I am the people ops lead for Network Goods. For those who made it through everything — which I believe you have, because you're here — you'll note that in the Launchpad curriculum there was a single presentation by Matt, our head of network funding, to talk about Network Goods as an organization, and specifically about network funding. I wasn't satisfied with that, so I'm putting together a curriculum element for my team. You can move to the next slide.

As some background: Network Goods was really created as the evolution of the PL research team, which no longer exists as the same entity. It became this combination of research, moving more towards meta-research, as well as funding for public goods, and our mission is to engineer tools and opportunities for revolutionary coordination systems. People know about us here at PL and in the PLN, and we really want to up our PR within the network — public relations, for those who don't know the shorthand. So I'm putting together, and have started, this learning journey, whose objectives are on the next page, I believe. These still need to be refined, and I will probably talk to some of our expert learning and development leads and everyone who works at Launchpad here, but right now the plan is to have seven learning objectives, moving from understanding what network goods and public goods are and how they're similar but slightly different; to how research and funding work together; to being able to conceptualize and understand all the different parts of our organization, which includes network funding, which you've all learned about, but also network research and research acceleration, which are other areas spanning a whole bunch of projects we do on the Network Goods team. So there would be a high-level understanding of some of our main projects.
If anyone was able to attend Funding the Commons in Lisbon, you heard a lot about Hypercerts, for example. I'm not going to read through each of these; I think you can do that async, but that's really the overall picture. (Yes, everyone has a different definition of PR — that's right, pull requests. I clearly am not an engineer. Thank you, Marco, for sharing that with me; I'll be more careful with my acronyms.) Moving to the next slide, I can talk about the roadmap itself. I'm still at the very, very beginning of this, which is really planning and content discovery. I joined the company about six weeks ago myself, so learning all this stuff is new to me as well, while trying to build out a learning curriculum to explain it to others. Right now I'm in the stage of collecting artifacts, transcribing, watching all the videos, and really trying to get my head around the content itself. Ideally, looking at Q1 next year, I'll be able to put together a skeleton framework for the curriculum and start to pilot it with those who are experts in what we do, i.e., the Network Goods team. The following quarter, the plan is to iterate and finalize it, and then pilot it with folks outside of Network Goods to make sure it's accessible for anyone, regardless of their familiarity with the content going in. And then, hopefully by May, we'll be able to go live on Launchpad itself and make this part of the curriculum that everyone going through this program will participate in. From there, the plan is for this to be adapted and expanded further for new Network Goods team members; we're hoping to onboard a whole bunch in 2023.
But also, really, to increase awareness not just within the PLN but also externally, to our network of researchers and funders and everyone we're trying to create this movement around — funding public goods through our work, such as Funding the Commons, which I encourage you all to watch if you weren't able to attend. On the next slide, I believe, I go into the stage that I'm in, which I covered already. Some of the challenges: as I mentioned, I'm new myself, so this is all new material for me to learn and understand and then be able to educate around. And also LabWeek, and prepping for LabWeek, definitely put a pin in my ability to progress on this project. So there is a new focus on it now that we're back — or now that I'm back, rather — and that's kind of the plan going forward.

Okay, thanks Allison. Yeah, LabWeek threw a curveball into a lot of these projects, which makes it all the more impressive when we see all the work that you all have created and are sharing with us today. So extra big props to all of you for coming up with these cool ideas and seeing them closer to fruition, if not all the way, despite colo week and LabWeek and so on. Daniel's up next; I think I saw him on here.

Okay, great. So thank you very much, guys. As everyone knows, I'm an unmatched candidate, so I don't have a specific team that I was working with for my project, though I do have interests, which I'll go into later on. My interests are basically related to cryptoecon and to cryptography. Those were the two talks that I'd say impacted me the most from the last part of the curriculum: the cryptoecon talk and, especially, the talk from Professor Rosario Gennaro. So that brings me to my project (and something weird is happening there on the screen).
At the end of the day, my project is related to these two fields in the following way. I want to introduce you to Lendi. Lendi is a platform that allows Filecoin storage providers to borrow FIL against the future rewards of their Filecoin storage deals. The reason this might be needed — and this is something that came from my conversations with cryptoecon — is that storage providers need working capital, but they might not have it themselves. They need a lot of assets to actually back a loan, but they've already committed resources to buy their hardware, plus collateral to secure their Filecoin deals. So what equity do they have? Their Filecoin deals, right? But those are future rewards that they will receive; it's not FIL they already own, it's FIL they will own in the future. The idea of Lendi is to allow this future Filecoin income to become collateral for a loan. How it works: the storage provider registers on the platform by providing Lendi their storage provider ID and a BLS signature — a cryptographic signature — that proves they own that storage provider ID. Their Ethereum address is then linked to that specific storage provider ID, and the Lendi platform has access to see all the Filecoin deals that storage provider has. The other part is that, once registered, they can lock those Filecoin storage deals. When they lock a deal, the future rewards of that deal cannot be used for any purpose other than paying the loans they have taken on the platform. That's how they lock the collateral — though obviously that collateral doesn't materialize until the storage deal expires and they receive the rewards.
So basically that's it: you lock your deals, that provides your collateral, and then you can borrow a certain percentage of the value of that collateral — everything in FIL. I put 80% here, but that's not written in stone. And that's the way it works. What it's assuming is that there is an oracle service that can link an Ethereum address to a storage provider ID, and it also assumes that oracle can work within the Filecoin ecosystem to get all the storage deals they have and lock the collateral. Now I want to switch screens to go to the proper demo. (Yeah, I'll stop sharing and you should be able to share. Great.) So let's go here. Can you see my screen? Yes, we can. So here it says "user not detected" because I have not linked my MetaMask, so I'll sign in to Lendi right now. Since I already linked my account, it's going to work — but the oracle doesn't really work inside Filecoin; this is everything working on Polygon and I'm just mocking the oracle, so it's not actually touching any Filecoin state. But this is the idea. I'll link this account, and I'm getting my data. Here is the storage provider ID, an address in the Filecoin ecosystem. I've already linked it, so this button is disabled; but if I weren't linked, I could click it and provide my BLS key and my provider ID, and the oracle would then verify through the BLS signature that I control it. Then here I can see my actual storage deals. I have locked only one of these deals, the one for 4,633 FIL with this deal ID, and I can borrow against those future rewards here; it's assuming I've already borrowed 105 FIL against that.
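As a rough sketch of the accounting just described — lock a deal's future rewards, then borrow up to a loan-to-value ratio against them — here is a minimal, self-contained model. All names, the 80% ratio, and the example figures are illustrative assumptions, not Lendi's actual contract code (which is in Solidity).

```python
from dataclasses import dataclass

LOAN_TO_VALUE = 0.80  # illustrative; the speaker notes 80% is "not written in stone"

@dataclass
class StorageProvider:
    provider_id: str           # e.g. "f01234" (hypothetical Filecoin SP ID)
    eth_address: str           # linked to the SP ID via a BLS-signature proof
    locked_rewards: float = 0.0  # future FIL rewards from locked deals
    debt: float = 0.0            # FIL already borrowed

    def lock_deal(self, future_rewards_fil: float) -> None:
        """Pledge a deal's future rewards as loan collateral."""
        self.locked_rewards += future_rewards_fil

    def borrow_capacity(self) -> float:
        """FIL still available to borrow against locked collateral."""
        return max(0.0, self.locked_rewards * LOAN_TO_VALUE - self.debt)

    def borrow(self, amount: float) -> None:
        """Take out a loan; rejected if it exceeds remaining capacity."""
        if amount > self.borrow_capacity():
            raise ValueError("insufficient locked collateral")
        self.debt += amount

sp = StorageProvider("f01234", "0xabc")
sp.lock_deal(4633.0)   # lock a deal worth 4,633 FIL in future rewards
sp.borrow(105.0)       # borrow 105 FIL against it
print(sp.borrow_capacity())  # remaining headroom, about 3601.4 FIL
```

The key invariant is the one the demo describes: locked rewards can only service platform debt, so capacity is always collateral times the ratio, minus outstanding debt.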
I could go and lock other storage deals, and that would allow me to borrow even more, but I won't be able to touch those funds until I pay those debts. So basically that's the way it's supposed to work. I'll return to the presentation to go to the assumptions part. (The next one... okay, this one — the previous one, previous one.) The actual stack for the application is Solidity and Foundry on the smart-contract side, and React for the front end. The code can be found in this repo, so you can download it and play with it. The front end is complete, but it has yet to be fully integrated with the smart contract, even with the mocked part, which is not actually dealing with Filecoin. Next one. And this is where it connects to my interests: obviously a lending platform is a very important topic of research for cryptoecon; that's something they want to research as well. And the process of making this oracle work — being able to read the Filecoin state, and not only read it but maybe modify it through smart contracts — is one of the projects that CryptoNet is actually working on. So there are a lot of synergies with those teams: with the protocol-level part, which is building how the data is presented via storage primitives and working with those storage primitives on other blockchains; and also with the Medusa team, which is building the compute side, a zero-knowledge way of sharing this data. So it's kind of serendipitous that I ended up working on a project that is very related to the teams I most want to work with inside PL. Right now everything is mocked, but you can play with it. Thank you very much.

Thank you, Daniel. That was very exciting, and thanks for sharing that demo. I think we have a few more demos coming up shortly.

Hi everyone.
Yeah, I also have a demo for everyone. Okay, so for my Launchpad project I was working on something related to what my team is doing. If you remember, we want to run a measurement campaign on decentralized NAT hole punching, and in this Launchpad project I was trying to ease the onboarding of participants in this measurement campaign. (Oops. Okay, hang on. Here we go.) For that I built this little menu bar tool, which you can see here at the bottom — as I said, to reduce onboarding friction — and there are some details around API key requirements and so on. But let's give some brief context first. In a peer-to-peer network it's very important to have full connectivity among all peers in the network, but the internet as it's built right now is tailored towards the client-server model, so you're not easily able to just connect to your neighbor or to another person in the same room. That's just how the internet is built right now. The people on the libp2p team built a specific hole punching protocol which allows peers to connect directly, despite all these NATs and firewalls and so on, with some success rate. In this measurement campaign we want to measure the success rate of this protocol and how well it works, and for that we need as many people as possible to run this client, which will just do a hole punch to a random other peer and then report back whether it worked or not. For that I developed this little menu bar application, which is pretty easy to install, and I want to showcase that in this short demo. If you head to the repository page here, which I'll drop into the chat later on:
You can scroll down and — if I remember correctly from our Launchpad colo week, most of you are on a Mac — pick accordingly: if you're on a newer Mac, download the M1/M2 version, and if you're on an older Mac, the Intel version. I'm on a newer one, so I'll download that application here. It got downloaded here to the left; I can double-click it, install it to my Applications folder, and start it up. Click open, and now it's asking me for an API key. If you want a customized analysis of the data that you're contributing to the research project, you can request a personal API key here, but this is optional, so you can just press continue. Then it asks if you want it to launch on startup; I click yes, in my case. And then you can see this little icon here on the top right, and it's already started. So now it's just sitting there, running and doing all the hole punching stuff in the background. It's taking very few resources: I think around 2% of CPU, 100 megabytes of memory, and maybe a lightweight website's worth of bandwidth every few minutes — not much. You shouldn't actually notice that it's sitting there. I've also refined some of our dashboards, which show how everything works; these are some technical details here, or some performance measurements. A few minutes ago this was at seven: this is the number of active clients in the network right now, so there are just eight clients running at the moment, and just before I started my own client it was at seven, so I'm the eighth one now — it's working. I would highly appreciate it if many of you would download this little client application and leave it running.
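To make the measurement concrete: each client attempts hole punches against random remote peers and reports success or failure, and the campaign's analysis boils down to aggregating those reports into success rates. Here is a minimal sketch of that aggregation; the report shape and all names are illustrative assumptions, not the team's actual pipeline.

```python
from collections import defaultdict

def success_rates(reports):
    """reports: iterable of (client_id, remote_peer_id, succeeded: bool).
    Returns (overall_rate, {client_id: rate}) over all hole-punch attempts."""
    total = ok = 0
    per_client = defaultdict(lambda: [0, 0])  # client_id -> [ok, total]
    for client_id, _remote, succeeded in reports:
        total += 1
        per_client[client_id][1] += 1
        if succeeded:
            ok += 1
            per_client[client_id][0] += 1
    overall = ok / total if total else 0.0
    return overall, {c: s / t for c, (s, t) in per_client.items()}

# Four attempts from two clients: three punches succeeded, one failed.
reports = [
    ("client-a", "peer-1", True),
    ("client-a", "peer-2", False),
    ("client-b", "peer-3", True),
    ("client-b", "peer-4", True),
]
overall, per_client = success_rates(reports)
print(overall)                 # 0.75
print(per_client["client-a"])  # 0.5
```

This is why the number of active clients matters: more clients means more (client, remote-peer) pairs, and a tighter estimate of the protocol's real-world success rate across different NAT and firewall setups.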
Our plan — you don't need to do it now — is to have as many people as possible signed up, or at least having downloaded this application, before December, so that we have as many people as possible running this client throughout December and can gather a lot of data to analyze. This is also a research project: we want to write a scientific publication from that data. I'll drop some links after this demo, and I hope you can sign up and participate. Well, I think that's it for my demo.

Thanks, Daniel. I think you might have 27 — maybe 26 — new signups after that. Nice plug. Please share, and I see in the chat there's a request for sharing all previously mentioned resources, so let's continue to do that as we go. Spice, say hi guys, take it away.

All right. So what we did for our project was look at how we can use some of the Web3 ecosystem to help with some of the processing we're doing to gather information for some of our client base. We looked at building on Bacalhau to improve the data that we're getting from Web3. Move on to the next slide and I'll give you a little highlight. A little about Spice: we're an early-stage startup, and we're also a Protocol Labs portfolio company. One of the reasons we chose to work with Bacalhau is that we're part of the Compute over Data working group, so we're trying to help keep that moving forward; we presented during working group meeting number six for Compute over Data. We just launched in April of this year, so we're still early-stage and building out. Next slide, please.
Basically, what we're looking at here is this: one of the problems with data is that egress of data is very expensive, so we're trying to solve that a bit by using decentralized compute to run compute jobs closest to where the data is. For our project, what we did is use Spice to get the Bored Ape Yacht Club collection, find the owner of one specific Bored Ape, look at the NFTs that owner holds in their collection, and then turn that collection into a collage. Not a super fancy thing, but it's an example of taking a bunch of data and using the processing power of Bacalhau to turn it into something. The demo here will be about six minutes, so feel free to play it at 1.25x speed or so to move it along; if you go to the next slide you can click play. This is Phillip presenting at the Compute over Data working group this summer.

For this demo I'm going to take the Bored Ape Yacht Club NFT collection — so this is the contract address here. (Can you go full size? Yeah, perfect.) Running this query, in just a few seconds I'll be able to get all 10,000 of the Bored Ape NFTs, and I can find the current owners of all 10,000 in just a few seconds using Spice. Let's say I'm really interested in this one, 5306, and I want to find all of the NFTs that the current owner of this NFT has in their collection and make a collage from that. The way I'd do that in Spice is to say: okay, I'm only interested in token ID 5306, and run that query to get back the owner information. So we got this back, and we can take the current owner of the NFT, and then, with the same dataset — our NFT owners dataset — I can query: what are all the NFTs that this current owner possesses?
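The two lookups just described — find the owner of token 5306, then find everything that owner holds — reduce to two simple queries over an owners table. Here is a sketch using an in-memory SQLite table as a stand-in; the table name, columns, and sample rows are illustrative guesses, not Spice's actual schema or SDK.

```python
import sqlite3

# Mock of an "NFT owners" dataset with a tiny in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE nft_owners (token_address TEXT, token_id INTEGER, owner TEXT)"
)
conn.executemany(
    "INSERT INTO nft_owners VALUES (?, ?, ?)",
    [
        ("0xbayc", 5306, "0xowner1"),  # the token we care about
        ("0xbayc", 42, "0xowner1"),    # same owner's other ape
        ("0xbayc", 7, "0xowner2"),     # someone else's
    ],
)

# Lookup 1: who currently owns token 5306?
(owner,) = conn.execute(
    "SELECT owner FROM nft_owners WHERE token_id = ?", (5306,)
).fetchone()

# Lookup 2: every NFT that owner holds in the same dataset.
tokens = [
    row[0]
    for row in conn.execute(
        "SELECT token_id FROM nft_owners WHERE owner = ? ORDER BY token_id",
        (owner,),
    )
]
print(owner, tokens)  # 0xowner1 [42, 5306]
```

In the real demo these queries run against Spice's chain-indexed datasets and would then be embedded in an application via their SDKs; the shape of the two-step lookup is the same.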
So I'll run that query in Spice. It'll take a few seconds, and then I'll get back — there should be around 17 that I found earlier. Normally, once you've used this interface to explore the data you're interested in — we have a dataset reference here of all the different datasets for the chains we support — and once you've found the query you want, you'd put it into your application using our SDKs, and that's how you would integrate Spice into your application. So I'm just going to download this as a CSV, and we'll be able to look at that data here directly. The way this demo works is: I just showed using Spice to get the NFTs that are owned by a wallet, but next I need to call the token URI. In Ethereum there's a tokenURI function on the smart contract, so if I call that function with the token ID I'm interested in, I get back this IPFS link that I then need to fetch. And I'll actually show you what's in this IPFS link — I'll just resolve it. So this is the content ID of the link from the token URI, and if I show you what's in here, it's just a JSON file. I mentioned before, you know, data moving off chain — this is one example where the image link is not actually stored on chain; there's a metadata file that points to the image link. So we actually need to do two things: first, run a job in Bacalhau that parses the metadata out of all of these different metadata JSON files; and then, once we have the actual IPFS links that hold the image data, run a second job that will go and create our collage for us. And you can see here the token address is the first column on the left, and the token ID is the second.
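The tokenURI-to-image hop described above — contract returns an `ipfs://` link, which resolves to a JSON metadata file, which in turn points at the image — can be sketched like this. The CID and metadata below are made-up placeholders; a real version would call `tokenURI(tokenId)` through an Ethereum client and fetch the JSON over IPFS:

```python
import json

def ipfs_to_gateway(uri: str, gateway: str = "https://ipfs.io/ipfs/") -> str:
    """Rewrite an ipfs:// URI into an HTTP gateway URL."""
    assert uri.startswith("ipfs://")
    return gateway + uri[len("ipfs://"):]

# Stand-in for what tokenURI(5306) might return (hypothetical CID).
token_uri = "ipfs://QmMetadataCID/5306"

# Stand-in for the JSON stored at that CID: note the image itself lives
# behind *another* IPFS link, exactly as in the demo.
metadata = json.loads('{"image": "ipfs://QmImageCID/5306.png"}')

print(ipfs_to_gateway(token_uri))
print(ipfs_to_gateway(metadata["image"]))
```

This is the off-chain indirection the two Bacalhau jobs have to unwind: one job to parse out the `image` field, one to fetch the images and build the collage.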
So what I'll do here to run the Bacalhau job: I have a little helper script that will get the metadata URIs by calling the Ethereum smart contract — so, you know, this is going out and calling the Ethereum smart contract. We filter out the ones that are not on IPFS right now and just work with the ones that are, and you see here that some of these are files within a folder stored in IPFS, so we actually need to go and resolve all of the content IDs for those, which I'll do now. We start by resolving these into the actual IPFS content IDs, and then dumping what I need to pass into Bacalhau into these volume arguments — these are the commands we need to mount the data into Bacalhau so that the job can access it. Then I'll run the first job. This will parse out the metadata: in this parse-metadata script, we're just scanning all the files that are in this directory and extracting out that image URI I showed here — extracting that image URI for all of the images — and writing them to this image URIs file. Cool. So now that job has completed, and I can take out these image URIs. The next step — now that I've got all the IPFS links for all the images I'm going to create the collage out of — is to run a Bacalhau job that will assemble these into a collage, which I'll do right now. I generate the volume args, this time using the image URIs file, and, similar to before, we have these commands that actually let us attach this to the Bacalhau job. So I'll run another Bacalhau job here and pass in the command that will create the collage. Cool. And the way this script runs is very similar to the first one: it's going to loop through all of the images that have been mounted into that directory.
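The parse-metadata step can be sketched as a few lines of Python: scan a mounted input directory of metadata JSON files, pull out each `image` URI, and write them all to one output file. The `image` field name follows common NFT metadata conventions, and the filenames here are made up — the actual demo script may differ:

```python
import json
import pathlib
import tempfile

def parse_metadata(inputs_dir: pathlib.Path, out_file: pathlib.Path) -> list:
    """Extract the 'image' URI from every metadata JSON file in inputs_dir."""
    uris = []
    for path in sorted(inputs_dir.glob("*")):   # one JSON file per token
        meta = json.loads(path.read_text())
        uris.append(meta["image"])              # the nested IPFS image link
    out_file.write_text("\n".join(uris))
    return uris

# Tiny demonstration with two fake metadata files in a temp directory,
# standing in for the IPFS folder Bacalhau would mount at /inputs.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "17").write_text('{"image": "ipfs://QmA/17.png"}')
(tmp / "5306").write_text('{"image": "ipfs://QmB/5306.png"}')

uris = parse_metadata(tmp, tmp / "image-uris.txt")
print(uris)  # ['ipfs://QmA/17.png', 'ipfs://QmB/5306.png']
```

In the actual demo the same logic runs inside a Bacalhau container, with the IPFS CIDs mounted in via the volume arguments mentioned above.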
It will format them all to the same size and then output this collage file, and so now we have here the collage of all of the NFTs from the data that we got through Spice. The goal for Spice is to integrate the Bacalhau experience directly into the product, where you might have something that looks more like this: all the stuff I just showed you is contained within Spice, and all you would need to do is upload an NFT-processing job that needs to combine both on-chain and off-chain data, and we would handle the scheduling and running of it for you. Again, that was Philip presenting at the Compute over Data summit — that was a chopped-up version for time constraints here, so feel free to jump on the full recording if you want the more in-depth version. Yeah, like he mentioned, we're looking at integrating Bacalhau into our system so we can more easily make the experience for customers querying IPFS data and web3 data one integrated thing, using Bacalhau to run those compute jobs next to where the data is stored. Here are some places you can find us — the links there on the slide, assuming they're made available. And that's it. Any questions? Thanks, Derek. Up next — he wasn't able to make it to the presentation day, so we've got a recording; let's take a look at this presentation here. Hi, I'm on the Protocol Labs team in the core ecosystem working group. This presentation is a thought experiment on how the IPFS stack can be used to make satellite sensor transmissions more reliable. So, just to level set: satellites can be in three possible orbits — low, medium, and geosynchronous — and you can see from the diagram down below that these are three very different altitudes above Earth.
The famous satellites we know about — the Starlink constellation, Hubble, the International Space Station, and various defense satellites — are in low Earth orbit, anywhere between zero and 2,000 kilometers above ground. This is very different from geosynchronous satellites, which are about 36,000 kilometers above the Earth's surface. Low-orbit satellites have certain advantages and disadvantages. Among the advantages: latency is very low — they're closer to Earth, so a round-trip ping will probably be around 20 to 40 milliseconds, which is why they're put there in the first place. They're also extremely fast, completing an entire orbit in anywhere between 90 and 120 minutes. And because they're closer to Earth, the sensors on board can acquire much higher resolution data of the Earth's surface. The disadvantages have to do with the fact that they're so fast: when a satellite zooms by a ground terminal at high speed, there might not be enough time for it to transmit the relevant data that has been requested. The other is that there's plenty of credible evidence these satellites can be disrupted by nation-state actors — Russia, China, and India have all demonstrated downing satellites with missile launches. This slide shows how short an ideal flyby can actually last, based just on the geometry of the Earth. The diagram on the left is a cross-section of the sky looking up from a ground station, and you can see that when a satellite goes nearly directly over us, a flyby can be as short as 15 minutes, and in a far less ideal scenario, where it stays off near the horizon, it could be as little as eight minutes — and that's assuming we can even see it over a mountain or a building or something like that. Let's do some really rough math to estimate how much imagery or sensor data we can actually transmit in a ten-minute flyby.
So, if we look at some of the bands these low Earth orbit satellites use, we can roughly estimate around a gigabyte of transmission for a ten-minute flyby. And if you look at the data the satellites are actually acquiring: a five-by-five-kilometer image with 16-bit color and half-meter resolution — which is by no means the best — is about 0.2 gigabytes. What that means is that in a ten-minute flyby we can transmit five of these images. That's not ideal — some of these satellites collect data over seven different wavelength spectrums, and they're definitely doing collects larger than 25 square kilometers. So a lot of companies are using satellite interconnects across a constellation to get longer coverage, even when the collecting satellite is not in view — Starlink and military satellites are doing this via radio or laser connections. And so, kind of a spoiler alert: once you start squinting at what an interconnected satellite constellation looks like, it starts looking a lot like an IPFS swarm. So, getting to the punchline: we can actually use different components of the IPFS stack in a way that makes transmission more reliable and lets all of these units — the ground stations and satellites — speak the same language. For example, with IPLD we could have a custom geolocation-based IPLD spec that chunks the overall imagery into smaller grid sizes; these could then also be serialized into CAR files if they need to be shipped together. For example, I might want to see a shipyard along with the water that's nearby, because it's extremely relevant to see which ships are being launched. libp2p is a great fit because I'm talking over radio, over optical laser, over fiber-optic backhaul on the ground, and between all these peers — so libp2p is a great solution for maintaining those connections.
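The back-of-the-envelope flyby math above can be spelled out. The inputs come straight from the talk: roughly a gigabyte transmittable per ten-minute flyby, and a 5 km × 5 km image at 0.5 m resolution with 16-bit color:

```python
# Rough flyby link-budget math, using the figures quoted in the talk.
GB = 1e9

pixels_per_side = 5_000 / 0.5        # 5 km at 0.5 m/pixel -> 10,000 px
bytes_per_pixel = 16 / 8             # 16-bit color -> 2 bytes
image_bytes = pixels_per_side ** 2 * bytes_per_pixel

flyby_budget_bytes = 1 * GB          # assumed downlink budget per flyby

print(image_bytes / GB)                    # 0.2 (GB per image)
print(flyby_budget_bytes // image_bytes)   # 5.0 images per ten-minute flyby
```

The 0.2 GB per image and five-images-per-flyby figures in the transcript are consistent with each other under these assumptions.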
IPFS allows us to broadcast relevant CIDs down to the ground, and then the ground station can keep polling the swarm overhead for relevant blocks and CIDs. The cool thing is that if these blocks are constructed smartly, I can kick off data processing for the blocks I've received even before I've received the entire Merkle DAG — which is especially important for low-latency cases and defense. Brazil, for example, is trying to use satellite imagery to figure out where there are fires and send teams within two hours to go fight them; you don't have time to acquire the entire image, run it through your compute pipeline, and only then send teams. And because the orbit times are so short, there are a lot of collects over the same area across time, and IPFS is actually a great solution for versioning those across spectral wavelengths and across time. What that means is: if I'm a ground station and I only have the bandwidth to receive one image, IPFS could be an easy way, for a certain tasking, to receive just the latest one and not the old ones. And finally, here's a graphical view of how IPFS can be used by a ground station to kick off ground processing. The satellite has disconnected from the ground station, but the ground station can continue processing — it can still request blocks from the swarm. And when the satellite reconnects with another ground station, my original one can peer over the very high bandwidth backhaul on the ground. So this is just a quick story about how IPFS could be used in a satellite constellation. I'll probably be sharing some of these thoughts with the browsers and platforms team, and I'm excited to see if we can integrate these approaches into our partnerships. Thank you. Cool — that one was a little different. Satellites. Awesome. Can we move forward... there we go. Yeah, wow — "presentations out of this world," I like that, James. Very cool.
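The "process blocks before the whole Merkle DAG arrives" idea can be sketched as a toy simulation. Blocks are content-addressed (a truncated SHA-256 standing in for a real CID) and each one is handled the moment it shows up, in any order, rather than waiting for the full image — purely illustrative; a real system would use IPFS blocks, Bitswap, and proper CIDs:

```python
import hashlib

def cid(block: bytes) -> str:
    """Truncated sha256 digest, standing in for a real CID."""
    return hashlib.sha256(block).hexdigest()[:16]

image_tiles = [b"tile-0", b"tile-1", b"tile-2"]   # grid-chunked imagery
wanted = {cid(t) for t in image_tiles}            # CIDs broadcast to ground

processed = []

def on_block_received(block: bytes) -> None:
    """Verify the block against its CID, then process it immediately."""
    if cid(block) in wanted:
        processed.append(cid(block))  # e.g. kick off fire detection here

# Blocks can arrive in any order, from any peer in the swarm.
for tile in reversed(image_tiles):
    on_block_received(tile)

print(len(processed) == len(image_tiles))  # True: every tile handled incrementally
```

Because each block is independently verifiable by its hash, the ground station doesn't need to trust or wait on any particular satellite to start working.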
And yeah, again, the QR code's there as we go through. Jonathan's up next with this one. I don't know if anybody here was at the EngRes dinner — this one got a lot of applause there. If this is that one... maybe not; maybe I'll let Jonathan jump in here and explain more. Yeah, it's kind of similar — at the end you'll see the two separate ones coming together, sort of in the works. Hi everyone, I'm Jonathan. I'll be talking about a demo application I made called OnlyFiles, which uses a protocol called Medusa that I'm working on under the CryptoNet team. Great. So, from a very high level, we can start with a set of problems and a set of solutions. The first problem is a personal one — a goal of mine, which is that I needed to build an application to showcase Medusa. Medusa is a protocol geared towards developers: developers are the users of Medusa, who then go build applications for end users. So for me it was important to, I guess, dogfood my own stuff, so I can better understand the developer experience and build better tooling for future developers building on Medusa. The solution here is this demo application, OnlyFiles. OnlyFiles lets you sell access to content using decentralized protocols rather than centralized platforms — so, you know, sort of a decentralized version of OnlyFans, let's say. It's pretty relevant, because sometimes we might ask ourselves why we're doing all this work to build decentralized protocols when they can be kind of expensive and the user experience can be not great.
So sometimes it doesn't make as much sense, but I think for this problem it makes a lot of sense, because even just in the last year we've seen a lot of deplatforming of various people on different social media platforms — but especially on OnlyFans, and especially of sex workers. The way they get deplatformed or censored is through, I guess, a few means. Primarily, you have payments: in web2, payments are run by a handful of big players like Visa, Mastercard, and PayPal, and those players have more or less arbitrary control to block payments and generally censor financial transactions. But of course in web3 we have a solution for that — blockchains. That was sort of the initial point of Bitcoin, which enabled a peer-to-peer payments network. So we have a piece of the solution there. Now, going back to the problems: we also have issues with storage and access. For a lot of these user-generated content platforms, there's a company that builds and runs the platform; you upload the content, but after you upload it, it's really not fully yours anymore. They store it in their databases, and the terms of service probably say they can do whatever they want with it — which can obviously be problematic. In web3 we have Filecoin, which is an amazing open storage network where anyone can upload files and anyone can provide storage for files, as long as the deals and the cryptoeconomic conditions are met. Related to storage, we have access control: who can access those files. In web2, there's a policy for who can buy and sell content, or who can upload and view content, and they may tell you what it is — maybe in the terms of service or elsewhere — but you can't view the code that controls it; you can't really verify it.
And moreover, like I mentioned earlier, there's a policy for who can do what, but ultimately the company has control over that — and if they want to view the data, they can. So this is where Medusa comes into the mix. Medusa is a decentralized access control network: basically, anyone can create rules for who can view their content, and they get a guarantee that no one else is going to see that content, assuming the network is operating properly and the right cryptoeconomic conditions secure it. Great — I'll go over the design of Medusa just very briefly. This is the general architecture of how the system works. On the left, you have client applications — OnlyFiles is one example, but you could also have a private mailing list for NFT holders, or document sharing. For each of these applications, there's some content they want to control access to, and you want to put those rules on chain, or somewhere transparent. With OnlyFiles the rules are: I upload content and set a price, and if you pay for it, you get to see it. With the mailing list, it's: if you own an NFT, you can see the posts. And with document sharing, you could think of something like a decentralized Google Drive, where maybe I can submit a proof that I have a protocol.ai email address, and that lets me see all the documents within the company or organization. In the middle, you have the Medusa contract, which can live on many different blockchains; essentially, the contract controls where you send requests to and receive results back from the Medusa network.
And on the right you have the network: many different nodes running together, each holding a share of a private key. If a valid request comes in, you need a majority — a threshold — of those nodes to each compute a partial result, a partial decryption for example; those can be aggregated and the result sent back on chain. And though that result is public — anyone can see it — only the individual it's intended for can actually use it to go and view the data. In OnlyFiles — I've mentioned a lot of this already — the idea is you have secret content, you upload it, you set a price, and whoever pays that price can see it. The tools we're using are: Filecoin, to store the encrypted content; a blockchain — for this demo I'm using the Arbitrum testnet, but it could be any — where you deploy a smart contract that sets the rules for who can access your data, and, if payment is part of that, handles the payment too; and then Medusa, which controls the re-encryption, or unlocking, of content based on payment being received. Okay, so now I'll go into the demo, which I tried to show before and it didn't work, but it should work this time — fingers crossed. Okay, here we are. Actually, if anyone wants to play with this as well, I'll pop it in the chat — here it is, so you can go use it. First thing I'll mention: there's a faucet here. It's kind of difficult to get testnet ETH on Arbitrum, so I set up a faucet. Please don't abuse it — there's not really any rate limiting on it.
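The threshold idea above — key shares held by many nodes, any threshold of which can combine partial results while fewer learn nothing — can be illustrated with plain Shamir secret sharing over a prime field. This is a toy sketch only; Medusa's actual scheme (threshold decryption over ciphertexts, with on-chain aggregation) is more involved:

```python
import random

P = 2**127 - 1  # a Mersenne prime, our field modulus

def make_shares(secret: int, threshold: int, n_nodes: int):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n_nodes + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
shares = make_shares(secret, threshold=2, n_nodes=3)
print(reconstruct(shares[:2]) == secret)  # True: any 2 of 3 nodes suffice
print(reconstruct(shares[1:]) == secret)  # True
```

The security property mirrors the transcript's claim: a single node's share reveals nothing about the key, but any threshold-sized subset can jointly act.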
But basically, if you connect your wallet you can click the faucet and it'll send you like 0.01 testnet ETH — that should be enough to use the application. So, let me just refresh the page. Now I'm not logged in, not connected. I can connect my wallet and sign in. What happens when you sign in is that you sign a message, and we can use that signature to derive your key — your Medusa identity, sort of. This is nice because it means we don't store any keys anywhere, and you don't have to store any keys: as long as you have your Ethereum private key, you can use that to use Medusa. So here we have a form — this is all very rough-looking — a form to upload your content. I have some unlocked content already here, and down below you have the listings, where you can pay to unlock content. So let's upload something. I have this Stable Diffusion iguana-looking thing — let's say I want to sell that. I'll put in a price and make it pretty cheap, one more zero there, and give it a description — something like "iguana from Stable Diffusion, AI thingy." Then I'll click to sell my secret. Now it'll take a second: it's encrypted, it's uploaded to IPFS, and now it's asking me to sign a transaction to register that content with Medusa. So it's registered. I'll scroll down and see it for sale down here. Okay, great — here's my cool iguana. You can see it on IPFS, but you're just going to see an encrypted blob — really, you're not going to see anything; it'll fail trying to render as an image. But I can click to unlock it, pay the fee with a little bit of gas, and then it'll show up here.
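The "sign in to derive your key" flow described above can be sketched as follows: the user signs a fixed message with their wallet, and the app deterministically derives an application key from that signature, so no extra key ever needs to be stored. The login message and the HKDF-style derivation are assumptions for illustration, and the wallet signature is stubbed (real Ethereum `personal_sign` uses ECDSA):

```python
import hashlib
import hmac

LOGIN_MESSAGE = b"Sign in to Medusa"  # hypothetical fixed login message

def wallet_sign(private_key: bytes, message: bytes) -> bytes:
    """Stand-in for a wallet signature; deterministic per key+message."""
    return hmac.new(private_key, message, hashlib.sha256).digest()

def derive_identity(signature: bytes) -> bytes:
    """Derive a 32-byte app key from the signature (HKDF-extract style)."""
    return hmac.new(b"medusa-identity-v1", signature, hashlib.sha256).digest()

sig = wallet_sign(b"user-eth-private-key", LOGIN_MESSAGE)
identity = derive_identity(sig)

# Same wallet + same message -> same identity every session,
# so nothing needs to be stored between logins.
rederived = derive_identity(wallet_sign(b"user-eth-private-key", LOGIN_MESSAGE))
print(identity == rederived)  # True
```

The key design property is the one the speaker calls out: as long as you hold your Ethereum key, your Medusa identity is recoverable on demand.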
It's decrypting at the moment — I should have a better animation there to give a better idea — but give it a second and... there it is. So please play with the demo, break it. This is a rough proof of concept, but we'll see where it could head in the future. I think the future is the decentralized-OnlyFans project that was presented at the EngRes dinner in Lisbon. The way these two things come together is that this problem — providing a decentralized platform for people to buy and sell content — is much bigger than just the access control and the storage, because there are other, more difficult problems to solve as well, more social or human ones. Things like content discovery, which would include maybe a reputation system, being able to follow people, being able to search for content, and having content suggested to you — but how do you do that in a decentralized way? It's an interesting problem. Privacy is another issue: Medusa lets you control access to private content, but you can still see transaction metadata when you use it. Someone can still see that I paid for content from someone else — they won't know what it is I got, but they can see that the transaction happened. That's probably something, especially in this context, that we'd want to improve or find a solution for. Content moderation — things like banned content — is difficult, because some things quite obviously should be banned from the platform, but there's also a gray area, and coming up with a good way to reach social consensus about what should be moderated and what shouldn't is also a very difficult problem.
And then, yeah, abuse avoidance: how do we avoid things like hate speech, and also things like content theft, where someone takes someone else's content and uploads it as their own? That's obviously a problem, but it's an interesting research question — maybe there are ways to do cryptographic watermarking on content — again, a very difficult problem. So this is where the OnlyFans collaboration comes in — that's a project being researched in ConsensusLab — and we realized: obviously, it needs access control, and Medusa is perfect for that. So the evolution of this demo, the next step, is basically to integrate with that subnet on Filecoin, continue to build out the proof of concept, and see where it goes from there. That's all I've got — thank you all for listening. If you have questions, Slack me and we can set up a call. So yeah, thank you. Thanks a lot, Jonathan — very cool stuff. Yeah, these are really impressive. We've got James up next. All right, hey everybody, James here. If you recall, I'm a technical writer, so — surprise — my stuff relates to technical writing and the IPFS docs specifically, which is the doc set I'm going to be starting on here at Protocol Labs. So, if you can move to the next slide, please. The overall idea here was to do a couple of mini projects that were more or less related to the IPFS docs and technical writing in general, as well as the larger initiative on the team I'm on, which is docs as a service for the larger Protocol Labs network. So what did I do? I — how do you say — recreated the IPFS docs as a Hugo static website, as opposed to VuePress, which is what we currently use. It's really just a wireframe, not meant to actually be a functional documentation set.
I created a tutorial template to be used for quick tutorial creation, and I played around with some CI/CD tools that are specific to technical writing, like Vale and markdownlint. I didn't get to integrating source code like I wanted to, but it's still something I'm interested in. All right, so why did I do this? One big thing that was really helpful about Lisbon was IPFS Camp. At IPFS Camp I was walking around just talking to folks — developers, you know, real people building on IPFS. I sat in the community circle that was led by Read from the Kubo team and got a lot of great feedback there. I put that into a Notion doc, which folks can look at if they're interested — it's just a bunch of feedback about the IPFS docs experience. My experience in Launchpad was really helpful too, just for thinking about the best way to organize IPFS content. I wanted to test Hugo because it has some features that VuePress doesn't. And I'm always a big fan of automation — there are certain things in technical writing that are just not fun to do, like combing a markdown document for spelling errors; definitely not my favorite thing. And this all relates back to docs as a service. So we'll move on to the different parts of this, Dave, if you could — next slide, please. All right, so the first thing I did here was set up an IPFS docs wireframe on Hugo. I want to give a big shout-out to my colleague Johnny, who's not here — you know him; he was in Lisbon and gave the IPFS Desktop walkthrough. While Johnny was in Lisbon, he started working on a GitHub repo to quickly spin up a Hugo doc site template, essentially, for the docs as a service initiative. You can check it out whenever you like — we'd love feedback. So I served as a guinea pig for that. It's really great: I was able to spin up a website in about ten minutes, and a lot of stuff is already templatized.
It's pretty easy to use and has a lot of great features baked in. Some things are specific to Hugo: relative references as opposed to manual links — basically, Hugo won't build if the rel refs aren't working, whereas VuePress will build with broken links, so that's a quality issue. It has nice themes; menus are automatically created for every single page based on the header depth; and there are things like shortcodes, so you can create tabbed views, which I'm a big fan of. Johnny also started working on commands to automatically create top bars, sidebars, and page menus, and as part of this project I actually added another command to create a tutorial template, which I'll talk about later. So, next slide, please. The next part of this was just thinking about the information architecture. I will say I've worked in other technical writing jobs, primarily for closed-source software with essentially one implementation of that software. Thinking about information architecture for IPFS is a lot different for me, because we have all these different tools, and all these different implementations — Kubo and js-ipfs being two of them — that don't necessarily have feature parity. And then there's this whole decentralized nature of Protocol Labs — like the team developing, I think it's Iroh, if I heard that correctly at IPFS Camp, the Rust implementation, and I know there's interest in developing others. So: just thinking about how we organize all that content. Do we need multiple sites? Do we need one site? What's the best way to lay out menus? What's the best way to make sure people can get to the information they're looking for as quickly as possible without getting lost or frustrated?
So, I mentioned that I got some feedback in Lisbon; that, my Launchpad experience, random conversations with peers in the couple of weeks I've been here, and experience from previous jobs have led me to the thought that maybe we can look at different ways to do the information architecture of the site. I'll hopefully show this later if there's time. The approach I took for a new layout on Hugo was, first, what I'll call three-dimensional navigation. If you remember the current IPFS docs site, there's a sidebar with sub-items, and some of the pages are linked from the top menus — but not all of them are. So what I tried to enforce here was a top bar laid out in logical categories, which are described below; each of those top-bar items goes to an overview page with a sidebar of logical subcategories; and then there are menus on every single page, created automatically as a function of the header depth — yeah, I mentioned those menus come by default. I also tried to start thinking about the user persona: am I a developer? Am I somebody trying to implement the protocol? Am I a total noob — like myself; I don't really know anything about web3 — just trying to understand what IPFS is? And I tried to demonstrate the use of tabs over linear reading. I'll give an example: when you set up IPFS Desktop, you can set it up on Windows, Mac, or Linux. In the current documentation those are three different sections, so you have to scroll through or click down to the section you're looking for. With tabs — pretty self-explanatory — you just click the Windows tab and you'll only see that content, avoiding content you don't necessarily need or want to see. The idea with the landing pages was that they should essentially serve as directories for the top-bar categories, filtering readers to the place they need to go.
So the top-bar items I created were Basics, Reference, How-tos, Tutorials, and Community — I actually took inspiration from the Filecoin docs. Basics is pretty self-explanatory: conceptual overviews, quick starts, a brief overview of community stuff. Reference was based on engineering team feedback — basically, the idea is to just have the HTTP Gateway API reference in there, and then possibly link out to a Kubo-specific site or a js-ipfs site, though, disclaimer, that's still very much up in the air; we're going to be having those conversations for a while. The How-tos page basically breaks down actions you can perform with IPFS, like adding a file: you have a tabbed view — how do I add a file in js-ipfs, Kubo, IPFS Desktop, and so on and so forth. And then Tutorials, pretty self-explanatory. So, next one. All right, I'm going to have to keep moving a little quicker, just to get through in the interest of time. So, real quick: Johnny's doc site allows for the quick creation of templates — it's something called "kinds" in Hugo. As you can see from the screenshot, you just run a command and it'll create a tutorial template; that was part of my project. I'll link to the repo at the end, so if you want to take a look at it, you can there, but it just automatically spits out a markdown document with a pre-formatted tutorial structure, and you, as the creator of that tutorial, fill in the blanks. Next slide, please. All right, so, on automation: there are a lot of rules for markdown formatting that vary across sites — GitHub, Hugo, stuff like that — and in the technical writing world there are style guides. Nobody wants to actually remember this stuff; it's really difficult to remember. So, Dave, if you could skip to the next slide: the answer is automation. There are a couple of tools — markdown-link-check, markdownlint, and Vale — that I tested out here.
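The core of what a markdown link checker does can be sketched in a few lines: pull every `[text](target)` link out of a document and flag relative targets that don't exist on disk. The real tool (markdown-link-check) also checks HTTP links; this toy version only covers local files:

```python
import pathlib
import re
import tempfile

# Matches the (target) part of [text](target), ignoring #anchors.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def broken_links(md_file: pathlib.Path) -> list:
    """Return relative link targets in md_file that don't exist on disk."""
    bad = []
    for target in LINK_RE.findall(md_file.read_text()):
        if target.startswith(("http://", "https://")):
            continue  # external links skipped in this sketch
        if not (md_file.parent / target).exists():
            bad.append(target)
    return bad

# Demonstration: one good relative link, one broken one.
docs = pathlib.Path(tempfile.mkdtemp())
(docs / "install.md").write_text("# Install\n")
(docs / "index.md").write_text(
    "See [install](install.md) and [missing](nope.md).")

print(broken_links(docs / "index.md"))  # ['nope.md']
```

This is the same class of check Hugo's rel refs enforce at build time; running it in CI catches the links a build wouldn't.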
If you could skip to the next slide, please. The first one, markdownlint, is pretty self-explanatory: it checks markdown structure and formatting, spacing, things like that, against a predefined set of rules, and you can configure it yourself. markdown-link-check is also very straightforward: it checks for bad links and returns an error if one is found. Next one. This is my favorite tool right here: Vale. It lets you programmatically assess a markdown document against a style guide, like the Microsoft style guide, and it spits out a bunch of warnings. It's completely customizable and configurable: you can combine different style guides and rule sets, or create your own. For example, you'll see an error in there that says, "did you really mean filecoin?" In a custom version of this, we could have a set of allowed words, so "Filecoin" wouldn't return an error, things like that. Then there are checks specific to writing itself. If anybody remembers this from school (I don't remember half this stuff), things like passive versus active voice: it returns suggestions for that. Another thing I'm a fan of: at the top, you'll see these numbers and statistics. We have a few former teachers here, so you may have heard of the Flesch-Kincaid grade level. It automatically runs measures like that against the markdown document, which is potentially useful if you're trying to write intro-level material versus, say, a spec. If the spec is college-level reading material, that's probably not a huge issue; but if the basics material is over a sixth- or eighth-grade level, maybe that's pointing to, okay, let's rephrase this. If you could skip to the next slide, please. So, just to wrap it up, the lessons learned here, a couple of things.
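(A quick aside before the lessons learned: the Flesch-Kincaid measure just mentioned is simple to compute. Here is a minimal Python sketch of the formula, using a rough vowel-group syllable heuristic; real tools like Vale's readability styles use more careful tokenization.)

```python
import re

def flesch_kincaid_grade(text):
    """Approximate Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    Syllable counting is a crude vowel-group heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0

    def syllables(word):
        # Each run of vowels counts as one syllable, minimum one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (total_syllables / len(words))
            - 15.59)
```

Short words in short sentences score low (roughly elementary-school level); long, polysyllabic sentences score high, which is the signal you would use to flag "basics" pages that read like a spec.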
I'm a big fan of Johnny's project, the docs starter repo. We're going to continue iterating on that; there are a lot of fun things we can do with it, and we'd love for people to play around with it. Hugo definitely has some benefits, which I mentioned earlier: things like relrefs, automatic menu creation, and themes, and a lot of folks at Protocol Labs already know Hugo pretty well. And running through all this gave the docs team some good data for the docs-as-a-service initiative. Information architecture: like I mentioned, it's definitely a non-trivial problem, and we're going to be talking about it for a while with engineering teams and different folks. The next question, once again, is single site versus multiple sites; stay tuned for updates on that. The tutorial template: I'm a big fan of things like that. You can imagine expanding that idea to other types of content so they can be created automatically, and then we can get community contributions that are within a set of guardrails, essentially: you have your template, fill in the blanks, boom. And lastly, if you don't want to think too hard about the mechanics of writing: the idea is, here's a markdown linter, a link checker, and a style guide checker, so if you're not a technical writer (which you don't need to be; that's the whole point), you have all of these tools to help you out as you write. One of my big goals in the next month or so is to start customizing all of those for Protocol Labs. I probably don't have time for a demo, but if anybody's interested, just ping me after and I'm happy to show you the repo, because I'd love to get feedback, and the technical writing team will definitely be iterating on this. Thanks, everybody. Thanks, James, that was great. I like Marco's idea; we might have a James-Launchpad collaboration in the future.
A bit of background: my name is Bo Brooks, and I'm from the Mosaic working group, which is in the Services, or Spaceport, team at PL. Our mission on the Mosaic side is really this idea of building a marketplace, and building it on Web3 principles so that it can help PL attract the best and brightest minds in Web3. There are obviously two sides to this: one is trying to help those teams build better and faster. Where the vision gets more complicated is, you know, there are marketplaces out there already, but if we do it right, we can make sure the other side is also incentivized to really be successful. You can read the vision on your own; if you go to the next slide, Dave, I want to explain a little about this idea of matching versus marketplace versus ecosystem. A lot of marketplaces, from a business model standpoint, just match up supply and demand: you have supply on one side, demand on the other, you match them up, and that's liquidity. A great example here is Upwork. They amassed a massive number of freelancers and had a lot of demand, but they didn't actually add much value beyond the matching. And so what happens is they get disintermediated: there's no point in me doing my second and third project on Upwork once I've already met the person, I trust them, and I like them. They take it off-platform, and the freelancer is happy because they get more money, and I'm happy because I don't have to use Upwork's system for communication. So that's where a lot of matching businesses get stuck and die: they match, but they don't do much else. The next level up is a marketplace, where you're actually adding a lot of value in the process before and after a match, and the great example here is Airbnb.
As a client looking for a house, they help me not only find the house but also do some quality control. For both sides they provide a legal framework, a contract, and payment processing, and on the supply side, insurance and protection of your asset. There's lots of other value they add; the key idea is that before and after the match, they're doing value-add activities. The next level beyond that is an ecosystem, where you get beyond just those two sides and get other individuals involved to add value as well. A great example would be developers. I have service providers on one side and PLN and PL teams on the other. What about developers building tools that could help them work better together; what if I could get them involved in the relationship? What if I could get token holders involved who care about the value of the token, to play an active role in governance and make sure we're building an ecosystem that everybody is involved in and has a say in? As I looked around, the best example I could find is actually the Point ecosystem; I think they've done a great job of building a framework where everybody's incentives align. That gets to our goal: to build a similar ecosystem, not just a marketplace but a true ecosystem, where if I'm a service provider, I'm not just trading my hours for money; I'm actually getting some stake, some equity, some value beyond that that I care about. And then we try to do the same thing for clients, ecosystem partners, token holders, and developers. Next slide, please. So, learnings so far: this is a way more complex project than we originally thought. Just one example is that second point: there are a lot more stakeholders than we realized. We identified service providers originally, because that's who we were working with.
One example: one agency I talked to, well, it turns out my contact at the agency is actually an independent contractor who is basically full-time for them, but on the side is also a developer, building a tool on a Web3 stack to help build websites faster. So there's an example of an agency contact who is actually an independent contractor who is also a developer stakeholder; one example of how the complexity here is much bigger than we originally realized. A second learning so far is that the idea, or the vision, on that earlier slide really resonates with the service provider stakeholders. They are very frustrated with this life of, hey, I'm going to trade an hour of my time for a dollar, and that's it, I get nothing else out of it. It's becoming more apparent that if we can build this, if we can overlay a marketplace with this ecosystem model and the incentive alignment, there is interest there. Next slide, please. Roadmap, very quickly: we're still in the planning stages. We started jumping into the mapping and realized we needed to slow down, step back, and get into the planning, because it's just bigger and has a lot more steps than we realized. Our hope is in Q1 to identify the stakeholders, really interview them, talk with them, and understand what each of them wants. Then in Q2, build out the alignment map, to try to align each stakeholder and map out those relationships and what happens between each one. Q3 is going to be pretty pivotal in terms of modeling: there are organizations out there that do pretty extensive modeling, and we're hoping to work with one of them. One of the things I didn't understand up front was the importance of figuring out how people can game your incentive model, and trying to identify ways to prevent that abuse. And then the goal in Q4 is to actually overlay that incentive model onto the marketplace that we build between now and then.
Next slide, please. Current status: like I mentioned, still in the planning stages. The biggest challenge I've got is just time, and specifically a kind of catch-22 of priorities. On the team itself, we're currently actively trying to build a marketplace, matching up supply with demand and adding value along the way; that in and of itself is a full-time job, and often feels more urgent than this idea of creating an incentive model, which is still a very amorphous idea. The irony is that if we do that right, it's actually more valuable in some ways than just building a marketplace, because an ecosystem is a bigger vision and adds more value for everybody involved; it's just the harder one to keep moving forward little by little. So I'm definitely feeling that catch-22 challenge right now. That's all I've got. Thanks, Bo, that was very efficiently delivered too; very cool. Okay, so we're going to talk about meta-transactions, and some important things to mention about them. They're based on EIP-2771 (EIPs are Ethereum Improvement Proposals). Essentially, you're signing a message from one user, and in that message the user specifies who they trust to relay that transaction to a smart contract to be settled; the forwarder then relays and pays for that transaction. This has all been worked out in a secure way through this EIP. The important part to know here is that the smart contract verifies that the forwarder is sending the right signature, and that the signature has the original sender's address in it, along with the forwarder's address. Through that, you can verify that this is a secure transaction the original signer meant to make. Another EIP to mention here is EIP-712.
EIP-712 adds security and a good user experience, because typically when transactions pop up in MetaMask, or any other wallet, the data isn't presented in a readable way: it's just one long hexadecimal string. What EIP-712 does is present a structured message inside your wallet before you sign it, so you know what you're signing. It does that through a domain signature, and this is a security practice: the wallet will check that you're talking to the right contract. If you see number four there, it's the verifying contract; if you're talking to the wrong contract, the wallet will actually indicate that and warn you against it. Another really important field is the chain ID. The thing you really have to worry about with these types of transactions is replay attacks, which means the signature can be used over and over again. The chain ID prevents replay attacks on other chains: some of these transactions could be valid on different blockchains that use the same wallet and signing schemes, but with the chain ID included, the wallets will not allow that transaction to go through elsewhere. There are some other things in the domain signing, but that's the initial part of the EIP-712 signature. The other part is defined by each transaction; this is the custom data you see in the message here. For this we've got the owner, which is the original signer; the trusted relayer, which is who I'm trusting to pay for my transaction; and then the nonce. The nonce prevents replay attacks on the same contract, because you bump the nonce with each transaction: in this case it's at seven, and once the transaction goes through, it goes to eight.
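The verification flow just described (trusted relayer, chain ID, nonce bump) can be modeled in a few lines of Python. This is a toy sketch: the "signature" is a plain tuple standing in for real EIP-712 signing and ECDSA recovery, so only the replay-protection logic is being illustrated, not the cryptography.

```python
class MinimalForwarderModel:
    """Toy model of EIP-2771/EIP-712-style replay protection.
    The signed payload is (owner, trusted_relayer, chain_id, nonce);
    in reality these fields are recovered from a typed-data signature."""

    def __init__(self, chain_id):
        self.chain_id = chain_id
        self.nonces = {}  # owner address -> next expected nonce

    def execute(self, signed, relayer):
        owner, trusted_relayer, chain_id, nonce = signed
        # Chain ID binding: a signature made for another chain is rejected.
        if chain_id != self.chain_id:
            raise ValueError("wrong chain: cross-chain replay rejected")
        # Only the relayer the owner named may submit this transaction.
        if relayer != trusted_relayer:
            raise PermissionError("relayer is not the one the owner trusted")
        # Nonce must match, then is bumped (e.g. 7 -> 8), so the same
        # signature can never settle twice on this contract.
        if nonce != self.nonces.get(owner, 0):
            raise ValueError("bad nonce: signature already used")
        self.nonces[owner] = nonce + 1
        return True
```

Once a nonce is consumed it is bumped, so resubmitting the identical signed payload fails.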
That invalidates that signature, and that transaction can never happen again. This is a simplified version of the contract here; the most important part is that we pass in the signature, which is the three parameters here, recover the user from it, and then check that against the owner. And that is essentially how the contract maintains the security of the original signer's transaction being paid for by another person or entity. I'm using libp2p WebRTC to make these connections in the app. This is just a breakdown of some of the code and handlers; I won't stay on it much. And now I'll pull up the demo. I've got two windows open here: one running on port 3000, one on port 3001. What we're going to do is start some libp2p nodes. The only thing you really have to know about this app is that I'm submitting an address to the contract, through this relayer. So if I'm registering, I'm going to start a node; this is called the dialer in libp2p. And then this node is going to be the listener; it listens for the data I'm passing it. This will load up; they each have their own node, each with their own peer ID. You would do this part offline: if David and I were doing this transaction together, I'd pass him this information on Discord, and then I would connect to his node. And let's just see if this works. Actually, let me pull up the video I recorded earlier. Brian, maybe you could also share the video recording in the chat? Yeah, I recorded this just before the talk, just in case this happened. So here we are: we're adding the peer ID and the connection address. Let me just skip forward here. Okay, here we go.
Okay, so you'll see the message passed over here, and it's now waiting for me to send the signature. I'll go over to the app on the left and grab my wallet here; that's going to be my trusted relayer. We're going to sign this transaction, so we should see that pop up over here, and you can see that image from earlier: we've got the trusted relayer, and the owner is this user's wallet right here. So we've created a signature, and I'm just showing you what that looks like right there; that's the signature we're going to pass to the smart contract. We pass that signature over to the user on the left, and once they receive it, they will sign and pay for it. Basically, this user on the left is paying for this transaction to go through, submitting the signature we saw earlier to the smart contract. I'm just showing here that the signature was received by the user on the left; it's the same signature. And then we pay for it, and you can see it settled right there. So yeah, that's it: all the communication is done through libp2p, and you're basically passing data from one peer to another in order to facilitate a meta-transaction. That's great, thanks, Brian. Dan and Elliot are next. All right, well, I'm Elliot, and I did this collab with Dan. I'm helping to lead the Ignite engineering team, which is IPFS GUI and Tools; that includes IPFS Desktop and Web UI. Our project is an IPFS Search integration into those applications and web apps. Let's go to the next slide. IPFS Search is a way to discover content on the distributed web: when you start using IPFS, there's not an easy way to find out what's available there. And by the way, as I'm sure you all know, Web UI is what IPFS Desktop is built on, and these apps are the primary entry points for new IPFS users.
It's very easy to just run IPFS Desktop: you get a Kubo node automatically and can start interacting with IPFS right away. In these apps right now, you have to kind of already know a CID, or have a way to look up what you want. But with IPFS Search, you can actually discover IPFS content: new content, old content, any content. It's really a search engine that tries to index everything that's in the DHT, everything on the network, and that lets you better appreciate the value of all the data that exists on IPFS. As mentioned, I'm Elliot; I did this together with Dan, and special thanks to Russell, Frito, Mattias, Lytle, and Julia for their help as well. I'll pass it over to Dan. Hey, thanks, Elliot. You can go to the next slide too, I think. This was kind of covered already, but this idea was born from an interaction between the ipfs-search team and Russell in Lisbon about trying to expand the reach of IPFS Search. Actually, can you go to the next slide? Yeah, I guess instead I'll just do a quick demo, so let me take over. The goal here was to create a proof of concept for integrating IPFS Search into the current Web UI. Let me pull up Chrome and hope it works. For those of you who haven't seen it (I hope everyone has at this point), this is the main page of the IPFS Web UI. What we did as a v0 is add a new tab to the nav, essentially spinning up IPFS Search within the Web UI. Everything behind the scenes here is powered by ipfs-search's API, and what's happening is we're searching across all of indexed IPFS for, in this case, "NASA". What we can do here currently is explore the CID, which is functionality that already exists in the IPFS Web UI. This should pull up... hopefully. Yeah. And then if we go back to search.
We can also link out to the IPFS Search details page, so this should spin up an ipfs-search.com detail page, if it works; my computer is super slow, but let's see. Yeah. So it's a pretty simple demo and proof of concept, but we were able to pull IPFS Search into the Web UI. As for next steps, Katie, you can share again; I'll stop sharing. Yeah, I can take over. I know Frito had some questions as well; he jumped in specifically to ask you about that, Dan. Okay. Basically, for us the next steps are really just collaborating further with everyone we've been working with over the last week, cleaning up the UX and UI, and there's a lot of functionality that IPFS Search has in their application that we could try to bring into ours as well, like pagination. From ipfs-search.com you can queue up an entire playlist of audio and listen to music that's stored on IPFS, so there are a lot of really cool things that can be done here in the future. I don't know if you had any other comments, Elliot. Yeah, I think that kind of covers it. We did have a fun discussion on GitHub about the future of IPFS Search and decentralizing it more: using P2P approaches to make sure users can access it even if there's, for example, DNS censorship; deeper integration; and improving the way users actually do a search in Web UI. Thanks, Dan and Elliot. Sorry, I don't really have questions, but it was very nice to see this, and I'm looking forward to collaborating more on it. Thanks. Yeah, thanks for all the help throughout the last week. Sure, thanks. And that's a perfect live example of some of the really cool collaboration that comes from these projects; thanks, Frito, for helping the guys out. I think we've got the sound sorted, so we'll give Po-Chun's video another try here.
Hello everyone, this is Po-Chun. I'm going to talk about my project on Lily data storage and performance improvements. Lily is an app designed specifically for indexing the Filecoin blockchain. Here's the current architecture: a Lily notifier syncs data from the Filecoin network and inserts tasks into a task queue, to be consumed by a set of Lily workers. A task defines how we extract, transform, and load blockchain data into a destination, which is usually our data warehouse. There are a couple of downsides to the current design. First of all, since every worker node is an independent Lotus node, when we want to add a new worker to the pool, we need to wait for it to be fully synced with the Filecoin network. Second, since each worker node is doing the network syncing as well as the data extraction, we need a pretty high-end hardware spec to handle the workload, which makes running many workers expensive. So, to make this design more scalable and easier to maintain, I thought about a new architecture proposal. Here's the new design: instead of using a local disk as the data store, we use a distributed data store that is shared across the Lily notifier and Lily workers. The Lily notifier is responsible for syncing the Filecoin network data into the data store, and the Lily workers only need to focus on extracting data from the distributed data store. This makes the Lily workers stateless, more lightweight, and easier to scale up. Also, the distributed cache can be shared across all the nodes, and both the cache and the data store can be scaled up independently. I implemented a prototype that uses S3 as the distributed data store and Redis as the distributed cache. Then I realized there's a lot of room for performance improvement in the Lily code base, so I decided to pursue that direction instead.
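The proposed split, one notifier syncing into a shared store and stateless workers consuming tasks, can be roughly sketched like this. Names and data shapes here are illustrative stand-ins, not Lily's actual code; the shared dict plays the role of a distributed store like S3.

```python
from queue import Queue

shared_store = {}    # stand-in for a distributed data store (e.g. S3)
task_queue = Queue() # stand-in for the ETL task queue

def notifier_sync(epochs):
    """One notifier syncs chain data once and enqueues ETL tasks,
    instead of every worker running its own fully synced node."""
    for epoch in epochs:
        shared_store[epoch] = f"chain-data-for-epoch-{epoch}"
        task_queue.put(epoch)

def worker_step():
    """A stateless worker: pull a task, read from the shared store,
    and run extract/transform/load. No local node, no sync wait."""
    epoch = task_queue.get()
    raw = shared_store[epoch]
    return f"extracted({raw})"
```

Because workers hold no chain state, adding one to the pool is immediate, which is the scaling property the new design is after.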
I looked at the Lily production dashboard to find out which tasks are most expensive, and most of the expensive ones are miner-sector related. When I looked at the tracing for a certain sector-event task, I noticed that in some cases there's one miner that takes a lot of time to extract data from. Investigating further, I found it's because that miner has a lot of sectors, more than two million in this case. So I changed the code to make it more performant: instead of doing a two-way merge on each iteration, I use a multi-way merge to combine all the sector states at the end. This significantly cut the runtime: the task went from 50 seconds down to 30 seconds. Another performance fix I did was to get rid of an actor-code mapping in the code. Lily tries to construct an actor-code lookup table for every epoch; the actor code is just a code that indicates the type of an actor. For every epoch, Lily loops through the state tree, usually around 1.5 million actors, to build the lookup table. This process takes 40 seconds without any caching and 10 seconds with the state-store caching. However, once we build that lookup table, it's only used a couple of hundred times within a task, which just doesn't justify the cost of building it. After I removed the lookup table, the task runtime went from 48 seconds uncached (16 seconds cached) down to 8.5 seconds. Right, that's all I have. Thanks. Cool, thanks, Po-Chun. Sorry he wasn't able to make it; I'm glad he was able to share his video. I think Marco is going to take over screen sharing, as he has what was a bit of a late submission from Sarah. Yeah, that's good.
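The talk doesn't show Po-Chun's actual diff, but the general idea behind his merge fix, replacing repeated two-way merges with a single multi-way merge, can be sketched generically in Python:

```python
import heapq
from functools import reduce

def two_way(a, b):
    """Naive two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_pairwise(runs):
    # Folding k runs through two-way merges re-copies early elements
    # on every pass: roughly O(n * k) total work.
    return reduce(two_way, runs, [])

def merge_multiway(runs):
    # A single heap-based k-way merge touches each element O(log k)
    # times: this is the shape of the multi-merge optimization.
    return list(heapq.merge(*runs))
```

Both produce the same output; the multi-way version just avoids re-copying, which is where a large win on a miner with millions of sector-state entries would come from.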
So really quickly, for those of you who have not heard about the Early Builders program for the FVM: it's a little different in the sense that it's focused on early builders building with a product that's being incrementally delivered, rather than building on a finished one. This is just a really quick capture of how the program runs; I'm just going to leave it here. TL;DR: teams come in, go through weekly check-ins with us, and eventually graduate. It's not a hard, explicit outcome that we want them to deliver a product on the FVM at the end of the cohort, but we highly encourage it, and we bring them a lot of resources and bring them much closer to the PLN resources to do so. Coming out of it, they go straight into launching their product, or helping us run community calls if they're more of a community member than a team, and/or they join the builders funnel. So what are the useful metrics here? The goals of the dashboard are to capture a snapshot of how the program is going, probably on a biweekly or monthly cadence; we're trying to figure that out now. It helps us optimize the Early Builders program for deployment on the FVM, because at the end of the day we want them to deploy on the EVM-compatible FVM; we want to see actors on the network. So how do we make sure progress is moving along, and how can we use a dashboard to capture that? We also want to make sure they have a great developer experience, because they're going to be our best advocates as we move toward the product launches, the FVM launches, and to capture the value of the program as a whole. A challenge I faced building this was defining metrics. The team came together, but it was a bit challenging to know what we should be measuring that would be useful, especially when there's no hard expected outcome at the end of the program.
Lack of visibility into the teams' progress was also a challenge, because teams communicate in very different ways and have very different comfort levels with transparency. We might not actually know what they're building, so how do we know they're progressing? Figuring out automation for the dashboard is hard too: a lot of this is very subjective information, and sometimes word-of-mouth, right? So one learning is to have really strong relationship building and communication with these teams, making sure they feel they can share what they're doing with you, because you'll never find it all out from their GitHub or website alone. We also tried crowdsourcing team inputs, but the learning for us was that that introduces a lot of inconsistency. It gives us a sense, but we then have to research and verify for consistency, so that when readers look at the dashboard, they get value out of it as a whole. Next up, we're looking at the update cadence for the dashboard, sharing it with relevant stakeholders and seeing how it provides value, being agile with that, and, as the FVM develops, changing the metrics we capture. So I'll do a really quick demo of the dashboard over here. Okay, cool. Here you can see total teams: about 89 active teams as of now. We have about 20 projects deployed on the FVM as of today, so you can tell that's maybe something we should be nudging a little more on. We have 14 teams that are funded; that's something we look at for sustainability beyond the program, whether they're tied in with a dev grant or into the builders funnel. This gives a sense of the percentage breakdown of use cases, so our FVM product team especially will know which use cases are key. You can see, yeah, data DAOs are a pretty big use case, and maybe that's something we should prioritize.
And then of course there are many more over here, and you know who to ask to help build out your solution. Over here is estimated engagement. This is highly subjective: it's based on someone like me, a program lead, giving a score for how I think engagement has been going at the weekly check-ins, on the Slack channels, and so on. So it's just a sense, but it tells us, okay, maybe I want to shift teams more toward the threes and fours rather than sitting at the twos, and for those teams that are at the twos, how do I ping them to make sure they're doing okay and they're engaged, right? Testing, over here, is more for our product and engineering teams: for stuff that's going out, making sure it's captured, and if anything that needs to be tested isn't on here, we know we need to start a conversation and ask for volunteers. And lastly, expertise: getting a sense of the languages everyone's working with, so we know what to prioritize when building our SDKs, and if we need experts to test certain things, we know who to reach out to. So yeah, that's mostly the demo. Thanks. Cool. Thanks, Sarah, and Marco for your help there. Some quick background on me: I'm on the Spaceport team at Protocol Labs. We provide services and resources to teams within the Protocol Labs network, from events like Lab Week to onboarding processes into the network, and we share all the different resources Protocol Labs has to help these teams grow more quickly. One common question we get quite a bit is that it's difficult to find people who can answer a particular question about a PL project or PL team, especially on the technical side. For example, if I have a question on libp2p, who do I ask? Where do I go?
For me, I have the amazing Launchpad team here, but if I'm brand new to the network and haven't done Launchpad yet, what do I do? The solution we have for now is the PLN directory and office hours. You'll see some screenshots here; for those who haven't seen the directory yet, I highly recommend you check it out. I'll put it in the chat after this call. It's a place to see all the teams within the Protocol Labs network, as well as who's a member of each team, what their role is, and their contact information; for some people there's even a direct link to their calendar to set up office hours. So in this case, you can search for teams in the search bar up here. Say I want to look for libp2p: I'd get a result for the libp2p stewards team, which has recently been added. You can see their website, their Twitter, a little more about them, and, more importantly, some of the members. I'll caveat that this is an incomplete list and still a work in progress, but looking here, I can much more easily find that Steve is the lead for this team, and he actually has an office hours link available, so I can schedule a quick 15-minute chat to reach out to him and say, hey, here's the question I have. These are the teams that have now been added to the PLN directory, on the left-hand side: the Launchpad team (thanks, everyone, for this) as well as most of the engineering teams. The reason I wanted to focus on the engineering working groups is that these generate most of the questions that my team doesn't have a quick answer to. The groups that will be added by the end of this month are all listed here.
And the V2 vision for this is that each working group will have a full list of members added, and that each working group will also have a preferred contact method listed, so that messages that don't need to go to individuals can go to a shared message board or email and get responded to more quickly. A quick ask for this group: if you see your working group here, please check it out at plnetwork.io/directory, you can use that same search bar. If something looks off, please use the request-to-edit button, and if you have any feedback, please share it with me at spaceport-admin@protocol.ai. Thanks, Denise. Super helpful resource to keep up with all the changes and where everything's located. Yuri is up next. Oh wait, Yuri is on the call, but he doesn't have a mic today, so he submitted a video recording, which I will play now. No worries, Yuri. We've got your presentation here. Hello everyone. My name is Yuri. I'm a software engineer from Peeranha, and I'll show a small part of my main project. Most web3 products today exchange most of their community knowledge between users in different messengers like Discord, Telegram, and Slack. That information is not searchable and doesn't have a structure; basically, it's not usable, but those channels store lots and lots of information. For example, we did an analysis of Filecoin's Slack channels and we found 380 channels, and this is impressive. We didn't see a typical solution to this problem for web3 organizations like Filecoin. For anyone who is not familiar with Peeranha: our mission is to build an effective knowledge base protocol specifically focused on web3 communities. The network itself is fully decentralized, built on the blockchain using Filecoin. All the content is stored in a distributed way and owned by the community itself, and the protocol also provides different incentives to contribute, in the form of tokens and different entities like rewards.
And we are also working now on a collaboration with the coin network to reward users with attention tokens. So, during Launchpad I was working on community documentation. Peeranha gives various web3 communities the opportunity to create a separate subdomain dedicated exclusively to that community, with their own topics for discussion and rules of conduct. Previously there was only an FAQ page, which was not flexible enough to introduce new users to the philosophy of a particular community, and we had the idea to implement a dynamic documentation system, similar to GitBook but decentralized. Now we have finished the documentation menu and started on the IPFS indexing with The Graph, and moderators and administrators of the community can create or edit the whole documentation section with only one transaction. The main problem with the previous version was the necessity of sending a transaction each time you create or edit any documentation item; it took too long, and creating complex documentation was too hard. There is an editing mode in the current version: all changes are saved in local storage, and only after publishing is the JSON document with the new documentation saved on IPFS, with its hash stored on the blockchain. On this slide you can see how the documentation menu looks on the page, and here is the editing mode. After clicking "save to draft" you can see how the documentation looks without sending a transaction; you can add text, change titles, do anything you want: create new posts, edit old posts, and also change the item order. So, a little bit about the format. It was also pretty challenging to create The Graph parser for such a big JSON structure. On the left image you can see how the documentation object looks on the front-end, in the draft, before sending to IPFS. The right image is the same JSON object, but parsed by The Graph.
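The single-transaction publish flow Yuri describes, drafts accumulating locally, then one JSON document pushed to IPFS with only its hash anchored on-chain, could be sketched roughly like this. This is a minimal Python simulation, not Peeranha's actual code: the in-memory dictionaries stand in for IPFS and the blockchain, and a SHA-256 hex digest stands in for a real CID.

```python
import hashlib
import json

# Mock stand-ins: a content-addressed store (IPFS) and an on-chain record.
ipfs = {}        # hash -> bytes
chain_txs = []   # each publish appends exactly one "transaction"

def ipfs_add(data: bytes) -> str:
    """Store bytes content-addressed; sha256 stands in for a real CID."""
    h = hashlib.sha256(data).hexdigest()
    ipfs[h] = data
    return h

def publish(draft: dict) -> str:
    """Publish an edited draft: per-item content goes to IPFS first,
    the menu structure holds only content hashes, and a single
    transaction anchors the root hash on-chain (versus one transaction
    per edit in the previous version)."""
    menu = []
    for item in draft["items"]:
        content_hash = ipfs_add(item["content"].encode())
        menu.append({"title": item["title"], "content": content_hash})
    root = ipfs_add(json.dumps({"items": menu}, sort_keys=True).encode())
    chain_txs.append(root)   # the single on-chain write
    return root

# Draft edits accumulate locally (localStorage in the real front-end) ...
draft = {"items": [
    {"title": "Philosophy", "content": "Why this community exists."},
    {"title": "Rules", "content": "Code of conduct."},
]}
# ... and publishing all of them costs exactly one transaction.
root_hash = publish(draft)
```

This also mirrors the format point on the slide: the published structure carries only hashes, with the actual documentation content stored separately on IPFS.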
And also you can see that the main item information, like the documentation content, is also packed to IPFS, and in this structure we have only the hash. So very soon this functionality will be deployed to production and will be used by community moderators. That's all I wanted to say. Thank you very much. Thanks, Yuri. Thanks for sharing that. Very cool to see Peeranha in another cohort, building upon your tech in Launchpad. Snow is up next. Hello everyone. There we go. So hi, my name is Snow. I started off with a project of what scaling Notion would look like for Launchpad in 2023. So, to be completely honest, if you can go to my first slide, I started not knowing anything about Notion, and I purposely picked using this tool, or having it as a part of one of my responsibilities, because I did really want to learn. Basically, the goal I started off with is: what is Notion going to look like when we have multiple cohorts running in parallel, and how can we make this a more accessible, super useful tool for all residents and cohorts going forward? In the beginning I was noticing some challenges, you know, this is a big learning curve for me, it's a new project, and, also being a contractor, there were some permission issues that I was having. But actually, as I've gone through, and we can go to the next slide, I have started researching, reading, making progress on what's going on. And the thing I kept hearing again and again from my team and from others is automation: how can we make Notion work for us in Launchpad? How can we automate things? Because we're going to be growing, we're going to be having multiple cohorts, more residents, even doubling those numbers in 2023 for the goals that we have.
And so as I was looking through, I realized things are happening really fast, and features are coming out all the time, so this is a constant learning opportunity for me to find out what is new. For example, recurring templates is something that just came out, that I found videos about this month, in November. It's not quite as easy as a recurring-task feature yet, but it's something Notion is hoping to make a building block for the future. In the bottom corner you can kind of see that you can upload a template and make it come out at a certain date, but I'm wondering what that would look like in the future for making full-on tasks that we're doing and using for Launchpad. Next slide, please. So, as I think about it more: what does Launchpad need automated as we scale? I've been going through this, especially having joined the first cohort as a resident participant and kind of an observer, and now finally, with v7 starting this week, actually jumping into the tasks and tools I was made responsible for. It's the resident profiles, the resident checklist, the templates that we're continuously using. When there's more than one cohort and they're running in parallel, how do we make sure that still makes sense, moving from manual copying and pasting to formulas, newly discovered features, and automation? Solid. That's great, thanks for sharing, Snow. Everybody in here has engaged with the Launchpad Notion pages, so they're useful and helpful for current and future cohorts. And the last presentation of cohort six's Show Me What You've Got: we've got Robert next. So, hello everyone. I know we're over time, so I will keep it brief. It's amazing to see so many great projects, and I know I will be using many of them.
And even more impressive is how many are in the demo stage already; I'm very, very impressed. So, my name's Robert. I manage the Orbit program here at the Filecoin Foundation, and what I'm looking to do is automate many of the internal Orbit processes and also demonstrate a number of Filecoin Virtual Machine use cases. So maybe we go to the next slide. For those of you who don't know, the Orbit program is the Filecoin community ambassador program. There are over 70 ambassadors in as many countries all over the world, and, more or less, Orbit has been misdiagnosed as an events program because we spend money on events, but these ambassadors not only host events, they translate documentation and publish articles about Filecoin in their home languages, they build on Filecoin, a number of different things that they are involved with. It's really, really amazing to see how the program has grown; it started in January, and the participation keeps going up and up and up. Okay, maybe the next slide. So, on the previous slide you saw a bunch of charts. When I came to the Orbit program, everything was being done manually. As people would submit their event briefs via email, we would transcribe their proposals into spreadsheets, and there are a number of situations where we have to email invoices and contracts, and it's just a totally out-of-control, very time-consuming process. If we want to scale, the Orbit staff should have to do none of that, so we are transitioning to software called Airtable. More or less, we spent a couple of weeks mapping out what the process is, from applying as a volunteer to join Orbit all the way through getting an event approved and submitting the receipts. So this is, more or less, the automated process that Airtable is going to manage, including sending out automated emails, invoices, contracts, and all of the like. Let's go to the next slide.
So, where this is relevant for the Filecoin Virtual Machine is that these Orbit members actually have a ranking, and we're trying to pit them in competition, so there's some gamification to keep them active. What we want to do is actually use Orbit as a demonstration of three Filecoin Virtual Machine use cases: we have reputation, rewards, and voting. So, more or less, what's going to happen is: we have Airtable, that's the logo all the way on the right of your screen, basically managing all the processes, and we get really amazing data from Airtable. We're then going to use another piece of software, which is coincidentally also called Orbit. That is where we're going to actually track user participation, like volunteer participation, and their ranking, and then we're going to get that into a Filecoin Virtual Machine smart contract somehow. Okay, next slide, and last slide. So, for the Filecoin Virtual Machine smart contracts, we are in the design stage. This is, more or less, what the process kind of looks like: we'll have the input participation data from Airtable, and that will get processed in the Orbit software, which will then compute the ranking of the Orbit members.
We will then have some type of smart-contract rank tracker that associates that rank with the member's Filecoin address, and then, more or less, two things will happen. First, we'll have some type of token system where they receive tokens on ranking up, and they could send those tokens to a foundation wallet to redeem some type of perk. For example, if you get to the third-highest rank, you unlock the foundation paying for you to come to a FIL event; there's some type of NFT that represents that perk, which we issue to them upon reaching that rank, and then they send it to the foundation wallet to redeem it, and we send them the compensation to pay for their flight and hotel and all of that. So that's, more or less, the reputation and rewards, that's how that would work. And then there's this idea that there would be a yearly Orbit summit, and we thought maybe the highest-ranked people would be able to vote on which Orbit member will host the yearly summit. We actually have an anonymous-election smart-contract voting machine that I use as a homework assignment when I'm teaching software classes, such as the one we did over the summer for CoRise, but this works on Ethereum. It's really interesting because it has some cryptography involved that actually hides people's votes. We want to migrate this over to the Filecoin Virtual Machine, and, more or less, we'll have some type of automated process where the wallet addresses of people at rank three and rank four will automatically come into the election when we generate one, be earmarked as valid wallet addresses to vote, and then they will vote using their private keys on whatever it is they're voting on.
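The rank-tracker and reward pipeline described above could be sketched roughly as follows. This is a Python simulation only; the real version would be an FVM smart contract, and the thresholds, token amounts, and perk rank here are made-up placeholders, not the program's actual values.

```python
# Placeholder parameters (assumptions, not the real Orbit values).
RANK_THRESHOLDS = [0, 10, 25, 50]   # participation points needed per rank
TOKENS_PER_RANK = 5                  # tokens rewarded per rank gained
PERK_RANK = 3                        # rank that unlocks a perk / summit vote

def compute_rank(points: int) -> int:
    """Highest rank whose threshold the participation points meet."""
    rank = 0
    for i, needed in enumerate(RANK_THRESHOLDS):
        if points >= needed:
            rank = i
    return rank

class RankTracker:
    """Associates each member's wallet address with a rank and
    token balance, rewarding tokens only on rank-ups."""
    def __init__(self):
        self.ranks = {}     # address -> current rank
        self.tokens = {}    # address -> token balance

    def update(self, address: str, points: int):
        new_rank = compute_rank(points)
        old_rank = self.ranks.get(address, 0)
        if new_rank > old_rank:
            gained = new_rank - old_rank
            self.tokens[address] = (
                self.tokens.get(address, 0) + gained * TOKENS_PER_RANK
            )
            self.ranks[address] = new_rank

    def eligible_voters(self):
        """Addresses at or above the perk rank, e.g. earmarked as
        valid voters when a summit election is generated."""
        return [a for a, r in self.ranks.items() if r >= PERK_RANK]

# Participation data (which would come from Airtable via Orbit) flows in:
tracker = RankTracker()
tracker.update("0xabc", 30)   # reaches rank 2, earns tokens
tracker.update("0xabc", 55)   # ranks up to 3, earns more tokens
tracker.update("0xdef", 12)   # reaches rank 1
```

The redeem step (sending tokens or a perk NFT to a foundation wallet) is omitted here, but would hang off the same per-address balances.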
So, those are the three use cases we're looking to build. I'm hoping to do it by the Filecoin Virtual Machine global hackathon that's happening at the end of November. Luckily, that is in the build stage with the partners that we have; we have a third-party consulting group that's helping us build it out. The rewards and reputation are in the design stage, and the voting we need to migrate from Ethereum to the Filecoin Virtual Machine. I just want to thank Sarah, who has been very generous with her time; this has been maybe the most unlikely of collaborations, between an ambassador and the Filecoin Virtual Machine team, but I will definitely be leaning on her greatly to bring this to life. I also want to thank Elijah Jasso, who was a TA in a course I used to teach and brought the voting machine to life. Okay, thank you so much, and I'm looking forward to delivering this in the next couple of weeks. Very cool. Thanks, Robert. It was great to see those stats about where in the world events are taking place. Thank you, everybody. That brings our presentations to a close, and your final, well, almost your final, Launchpad tasks complete. And with that, we have a cohort being launched into the network. You're going to go forth and do more wonderful things, and the presentations we saw today were a wonderful mix of creations and ideas and talent from across the network, so really cool to see what everyone's been working on over the past six weeks and will be working on moving forward. Before we throw those graduation caps: one last task, and that is to take out your phone, scan that QR code, and throw your votes in. As if you don't have enough swag from Lisbon, the winners will be receiving more swag. There are the categories on the left: technical contributions, collaborations, most valuable, and more.
We'll tally these today, share them tomorrow during our final weekly sync, and then we'll bid you farewell into the ecosystem, but we will definitely be calling you back. We have such a cool collection of talents in here, and in order to build Launchpad and improve it, we'll probably be calling upon your talents in future cohorts. I do want to once again thank all of the Launchpad team for their work over the past six weeks, and, I didn't mention it earlier, a big shout-out to all the mentors, who played an integral role, as they always do, in guiding our cohort residents through the six weeks. And graduates, is that a word, people who are going to graduate? We will likely ask you to be mentors in the future. Once you've completed all the quizzes at the end of each section of the curriculum, and now having completed your project presentations, you are entitled to an NFT. So look out for an email that should be coming your way once those tasks are all complete and projects have been shared like they were today. And I think that brings our Show Me What You've Got to a close. Another reminder to vote, and now it's up to you. Thank you, everybody. Have a great weekend, and see you tomorrow, actually, I guess, before the weekend, for the announcing of those votes. Thanks again for all your hard work on this. Have a great day.