Welcome to Show Me What You've Got. This is the final culminating event of this cohort. We do have a weekly sync tomorrow, which is just to wrap things up and to share the award winners from the voting that will happen today on the projects being presented. I'll talk a little bit about that in just a second, but let's kick this off. We've got a lot of presentations today, and I'm really excited to see what you've all prepared. So, just a quick welcome here, then we'll get into the project demos. We'll wrap up with some thank-yous (there'll be some thank-yous at the beginning as well), and then you'll graduate. Yay.

All right, here's the team that met in Austin. We had a great time; for those of you who were there, I hope you enjoyed it, and for those who were unable to attend in person, I hope you were able to take away some of the learning and fun by connecting virtually. We're going to do a bit of an overview in case we have people joining who weren't in this cohort, or who haven't been in Launchpad for a little while, maybe they did Launchpad in one of the earlier cohorts. On the right-hand side of the screen you can see a bit of the timeline of how we've gotten to where we are and how the program has developed. As an overview: Launchpad is a four-week onboarding program designed to train, develop, and match technical talent at scale with opportunities in Web3 across the Protocol Labs network. And throughout the six weeks that we've spent together, Launchpad has been working with this cohort, cohort 7, to scale hiring, onboarding, and community-building across the Protocol Labs network. The faces along the bottom of the screen are the wonderful team I get to work with that makes this possible.
Just running across the list to give everyone a quick shout-out: we've got Molly, Brooke, Carla, myself, Christian, Walker, Lindsay, Marco, Katie, Hannah, and Snow. So when you're in touch with them next, thank them for the work they put in on the curriculum, logistics, events, and everything else that made this happen. Before we move on any further, I also want to say another thanks to anyone who volunteered as a mentor. The role that mentors play in Launchpad is an integral part of our program. Thank you for giving up your time to support the current cohort members, guiding them in their learning. We'd love to welcome you back as a mentor in the very near future, but more about that later.

We've got a big group here today and lots of exciting projects to share. We're just going to go in the order of the deck. If there are any emergencies, like I said, I'll try to check Slack on another device, and you can always reach out to Katie. I'm sure there'll be something we'll need to problem-solve, but we'll work through it. Lastly, what was I going to say? Oh yeah. I'm going to continue sharing my screen until told otherwise, so if you'd like to screen share, I'll stop sharing and you can take over; that'll be great for demos or anything on your own device that you'd like to share and control. Just let me know at the start of your segment and I'm happy to hand over the reins. Otherwise, I'll navigate the slides and you can just tell me when to move forward.

All right, let's get into this thing. Let's begin showing what you've got. Oh, one last thing, sorry: that little QR code you see in the top corner of the screen. You can scan it with your phone and it will link you to a Google Form, which is the voting form for, I think, about eight categories of awards.
The award winners will receive swag, and we will announce them tomorrow during our weekly sync. I'll keep reminding you, and at the end of the Show Me What You've Got session there'll be another link you can use to access the voting form. We'll also put it in Slack afterwards, so you can consider your voting options after seeing all of the presentations. So, without further ado, let's see what you've got. Courtney is going to start us off. Courtney, would you like to screen share, or are you happy for me to continue? "You can continue. Thanks, Dave."

So, I regret that I was not able to join all of you in Austin during CoLa Week, so for those of you I haven't had a chance to connect with or meet yet, my name is Courtney Della Rosa, and I am a member of the Spaceport content team. For my project, I'm going to be working on increasing output and doubling subscribers on the PL Substack. Substack is an email newsletter platform that we use to distribute our newsletters. You can go to the next slide.

For those who may not be familiar, these are the newsletters that should have hit your inbox so far if you subscribe to our Substack, and if you don't, you should. All of the LabWeek communications you received, the monthly PLN updates, and most recently the LabDay highlights all live on Substack, so that's my primary area of focus. Here's a quick recap of my project and what I'm going to be working on. The overall goal is to develop and distribute high-value content that grows the PL follower base on Substack, growing the PLN Substack subscriber list by 50% during the first half of 2023 and by 100% by the end of 2023. A few things we've learned from the three newsletters we've done so far: partnering with teams to disseminate information about events creates spikes in subscribers.
We really saw that during LabWeek, as you'll see on the next slide when I show some numbers. Our more valuable content, like funding news and marketing and ops help, and high-quality video content are definite drivers of engagement, shares, and follows. So we're going to be looking to expand our reach beyond the current PL network base, trying to engage other audiences and identifying opportunities to partner with other teams in the network. You can go to the next slide.

Here's a look at the numbers. When I launched the Substack in August, we started with 574 subscribers. We rounded out November with 1,609, and we've maintained some really steady open rates. That's a 182% increase in subscribers since we launched in August, which is really good considering we haven't done a whole lot yet in terms of efforts to promote and expand the subscriber base. So once I implement my project and we take some more initiatives, I definitely think my goals will be attainable. A couple of other learnings: we've seen that increased output is well received when the information is interesting and relevant to the network, so there's a definite need for content that appeals to both technical and less technical audiences. And almost 20% of our Substack visitors click through the links, which is really good. Next slide.

For strategy and next steps: we plan to ship two additional newsletters, increasing our Substack output to weekly, going from two issues a month to four. We'll rebrand our current PLN updates a bit with a greater emphasis on funding news; since we're going to be doing four different newsletters, each with a slightly different twist, we'll spread out the types of information we put in each one. And we're going to work to increase video content as much as possible within the newsletters.
We'll start cross-posting more. We've been doing a little of it, but we'll really ramp up cross-posting on all the PL channels our team currently oversees, and continue to identify opportunities to collaborate across the ecosystem to highlight teams, events, individuals, content, resources, etc.

By the end of January, you'll see two new newsletters come out from me. The first is going to be called Web3 in 60, a monthly newsletter distilling topics, concepts, and tools in the Web3 space, covering everything from DeSci and DeFi to Filecoin and things of that nature. This newsletter will include a brief topline, and from there a bullet-point-style format with key concepts, plus a 60-second clip from a founder or subject-matter expert on the topic. Again, going back to collaboration across the ecosystem, this is an opportunity for us to work with other teams in the network who work in these different areas, and also to work with Launchpad on featuring some of the Launchpad curriculum when it makes sense. We will also include a library of resources (videos, blogs, things like that) for folks looking to dive a little deeper into the topics. So that's the first one; you can expect it mid-January.

The second is You Asked, We Answered, a monthly newsletter written in a Q&A format addressing some of the most common questions raised by the PLN. Again, this is an opportunity for collaboration across the ecosystem. We'll be collecting questions that come up a lot from Community Ops, who have very high touch with the network teams, and from other interactions with founders and labbers, even you all. Really, we'll be taking input from anyone who wants to raise a question, and we'll curate the list from there based on the questions we see pop up most commonly.
And again, it's the same concept: answering these questions and providing resources, partnering with those in the network who are experts on these things to provide the answers. This will also provide an opportunity to point readers back to Mosaic as a resource. A lot of the smaller teams out there maybe don't have the resources for a full-on content team or marketing team, and we can use this as a tool to provide some guidance and resources for them. And lastly, it's not on the slide, but I'm also going to be working with the events team and Community Ops to identify different teams in the network, again those smaller teams who might not have the resources to promote their events, so that we can potentially take on a piece of that by helping disseminate valuable information about their events in our newsletter, since we saw how much that helped us around LabWeek. So that's what I'll be working on.

Awesome, thanks, Courtney. It's exciting, that massive increase in subscribers; I wonder if there was a LabWeek influence in there that somehow also aided the expansion. I also just want to say, for everybody: Courtney did a great job of staying within our time limit, which is seven minutes. After about five minutes, I'll give you a polite reminder to start moving towards the end, just so we can squeeze everybody in within an appropriate time frame. Up next we have Brit. Brit, are you happy for me to continue sharing, or would you like to take over? "No, please go ahead." All right.

So, as you know, my name is Brit. I met most of you at LabWeek, or sorry, CoLa Week. I work on research acceleration, and a lot of the work that we do is trying to solve the problems researchers are facing in academia and industry. And one of the biggest problems is journal publishing.
It's a huge thorn in the side of all researchers, so today I have a thought experiment to show you. It's called journal-less manuscript publishing: a conceptual decentralized framework.

So what is the issue with journals, and why are they such a thorn in the side of researchers? First and foremost, journals are a business. It started out with a notable series of journals that would curate and assemble volumes and send those printed volumes to people to read in the academic sphere. Well, now it's 2022 and journals are just PDF sites; everything is a PDF, and there are hardly any actual printed volumes to speak of. And they have just ballooned: there are potentially over 20,000 notable academic journals right now. And because there are so many, and they're so fragmented, a few notable companies have come out on top: Elsevier, Springer, and Taylor & Francis each own 2,700 to 2,900 of these journals. Collectively, their revenue is measured in the billions of dollars; I think Elsevier's profit alone last year was $2 billion, which is just absolutely wild.

One of the biggest ethical concerns with journals right now is that they double-dip. What I mean by that is: in order to get your article into a journal, you have to pay the journal for publication, so you pay them a couple thousand dollars to get your article published, usually priced by page. And then everyone else has to pay to access that journal as well. And it's usually all taxpayer-funded dollars: taxpayers are usually the ones paying the bills for the researchers to publish the work, and they're also paying for the institutions, the students, and the doctors to be able to get that work back. So that's really crappy. I think California, or maybe it was the United States, just published some rulings recently that made taxpayer-funded work open access by default. We'll see how that works out.
I'm sure the journals are very happy about that. But first and foremost, journals are a money-making business.

On top of that, they have an opaque peer-review process. Peer review, if you don't know, is the method through which all academic work is judged to be either crap or good. You go through a peer-review process with other experts in the field, who will, in theory, unbiasedly either approve your work or not, and once it's elevated to approval status it makes it into the journal. In practice, though, that process is complicated, because the peer reviews are anonymous, there are often conflicts of interest, and the reviewers know who you are as the submitter but you don't know who they are. It is not unusual for a peer reviewer to reject a paper because it's from a competing lab and they want to get a step ahead.

Another big thing that sucks is that journals are a measure of prestige. It's all about getting into the best possible, highest-impact-factor journal. It's not a method of getting papers out; it's a matter of prestige. And really the biggest issue is that, because journals are sustained by a business model, the trend is to sustain their reputation and their impact factor. There's no limit on how many PDFs a given journal could publish; a journal could publish every work that meets its criteria for publication, but instead they release only a certain amount, because they need to keep supply low and demand high. So it ends up being a matter of scarcity economics, not a matter of actual academic rigor, which is kind of a nightmare in a world where that should be the only thing that matters.

So, if we go to the next slide: here's what I think could be a pretty cool conceptual framework. It reinvents journals as an interconnected network of manuscripts.
Each manuscript is effectively connected to all other manuscripts through what would otherwise be treated as citations. All of the direct citations of a given paper are its primary nodes, the citations of those citations are the secondary nodes, and so on. And a network like this can be somewhat hands-off: it can be decentralized, stored on servers all over the world, so that no one corporate entity controls it.

Then, how do you get a new manuscript onto the network? It could be through an open peer-review process: if you want to insert your manuscript at a very specific spot on the network, connected to a very specific subset of the manuscripts already there, it can be peer-reviewed by those direct tier-one manuscripts, and then subsequently the tier-twos and tier-threes in a weighted mechanism, until a critical mass of endorsements is reached, at which point the paper is added to the network. And once a paper is on the network, its author is then able to endorse future papers that want to join, so it becomes a self-sustaining model for papers joining the network. In theory, it opens up peer review, allows for more predictability, and lets more labs be part of the peer-review process, as opposed to just the three or so reviewers required in today's model.

Can we go to the next slide? I think I'm running out of time. The next slide is just basic considerations. It's actually not dissimilar to how things already work: today, you don't get published until you go through a peer-review process, and the experts in the field who do the reviewing are already not being paid for it. So we're not asking anything more of people; they're already making $0, so let's continue giving them $0. Maybe there's a monetary way to reward them, but I don't know about that.
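The weighted endorsement mechanism described here could be sketched roughly as follows. This is purely a hypothetical illustration of the thought experiment: the tier weights, the threshold, and the function names are all assumptions, not part of any real system.

```python
# Hypothetical sketch of the weighted endorsement mechanism: reviewers from
# manuscripts closer to the proposed insertion point (tier 1 = directly cited)
# count for more, and the paper joins once endorsements reach a critical mass.
# Tier weights and the threshold below are illustrative assumptions.

TIER_WEIGHTS = {1: 1.0, 2: 0.5, 3: 0.25}

def endorsement_score(endorsement_tiers):
    """Sum endorsements, weighted by each reviewer's tier."""
    return sum(TIER_WEIGHTS.get(tier, 0.0) for tier in endorsement_tiers)

def is_accepted(endorsement_tiers, threshold=3.0):
    """A manuscript is added to the network once the weighted
    endorsements reach the critical-mass threshold."""
    return endorsement_score(endorsement_tiers) >= threshold

# Example: endorsements from three tier-1 and two tier-2 reviewers
reviews = [1, 1, 1, 2, 2]
print(endorsement_score(reviews))  # 4.0
print(is_accepted(reviews))        # True
```

In a sketch like this, requiring endorsements from several tiers (rather than three hand-picked reviewers) is what opens the process up to more labs, as described above.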
And then you could basically create a seed network: you can create the starting framework for the seed network from the corpus as it exists today. Every paper has a list of authors and a list of citations, and you could use those citation maps to create what would be the version-one framework, laying the groundwork for everything afterwards to build on.

Now, collapsing journals into a knowledge network takes away that prestige factor. Being published in Nature, or Science, or one of the big journals is, today, a career-maker; that's how you get a job at Harvard, you have to get your papers into those big journals. So the metric would have to be redefined. The new metric would basically be the number of nodes connected to a primary node: if your paper is really bad and nobody wants to cite you as a source, you're likewise not going to be connected to very many things, and your impact factor is going to be relatively low. If you're the paper that every work cites, like Watson and Crick's DNA paper, your impact factor is going to be through the roof. And I feel like that's actually a more direct measure of your success than "I was published in Nature, don't actually look further, don't actually read the paper, just know that I was published in Nature," which is effectively the unit of currency right now.

Now, like any new thing, the system is only as good as the number of researchers who embrace it. If we were to roll something like this out and researchers hated it and thought it was terrible, it would fall flat on its face. So it needs to reach a critical mass of adoption for it to work, but I don't think this model is any worse than what we have; in fact, I think it's better in pretty much every possible way.
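The redefined impact metric, the number of nodes connected to a paper, is just the paper's in-degree in the citation network, which can be sketched in a few lines. The graph below is hypothetical toy data, and the function name is an assumption for illustration.

```python
# Illustrative sketch of the proposed impact metric: a paper's score is the
# number of manuscripts on the network that cite it (its in-degree in the
# citation graph). The network below is hypothetical toy data.

from collections import defaultdict

def impact_scores(citations):
    """citations maps each paper to the list of papers it cites."""
    scores = defaultdict(int)
    for paper, cited in citations.items():
        scores[paper] += 0        # ensure uncited papers appear with score 0
        for target in cited:
            scores[target] += 1   # each incoming citation raises impact
    return dict(scores)

network = {
    "dna_structure": [],
    "paper_a": ["dna_structure"],
    "paper_b": ["dna_structure", "paper_a"],
    "paper_c": ["dna_structure"],
}
print(impact_scores(network))
# {'dna_structure': 3, 'paper_a': 1, 'paper_b': 0, 'paper_c': 0}
```

Scoring papers directly by incoming citations, rather than by the venue they appeared in, is exactly the shift from journal prestige to a per-paper measure that the talk argues for.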
And I would love to see this rolled out. Technically, I don't know exactly what I'm asking for; I definitely think there's a use for blockchain technology here, it would be really great for keeping an archival record of every paper. Once a paper is on the network, in theory it shouldn't change, so it can be distributed from node to node, or from server to server, among whoever is hosting. The closest thing we have to this right now is a service called Web of Science, and Web of Science is incredibly expensive and totally proprietary, so we would want to be the entities that... and I think I'm out of time, so with that I'm going to say thank you. Any questions? "We probably don't have time for questions right now, but people can follow up with you if they have any. Super cool idea." I've been able to get Zoom up on my phone, so I've been seeing some of the comments in the chat, which is great, a lot of positive feedback coming through. Awesome stuff, thanks a lot.

All right, we've got Chris and Luke up next. Are you happy for me to continue sharing my screen? "Yes." Awesome, you can take it away whenever you're ready.

"Awesome. So, this is very similar to what we presented down in Austin. I'm Chris Williams; I'm a recruiter across the org here at PL, but one of the things we're going to be doing moving forward is recruiting for the PL network. One thing Luke and I have really focused on is creating two forms, for a couple of reasons. One, we want to learn about the company, because we feel it's valuable to understand everything the company does: their mission, their values, how they're connected to Protocol Labs in general, and what they do in their sector. This is going to be vital for us in order to attract the right talent to present to these companies."
And these companies are a little bit different from PL: they come in various sizes, they could be one or two people, they could be 10 or 15 plus, and hiring from zero to three or four people is a different type of hiring than hiring at 10 plus. So we've created a pretty thorough intake form; I wouldn't say it's generic, it goes into a deeper dive in some areas, really aimed at understanding what the company does and what value they add to Web3, so we can help find qualified candidates.

That being said, this also allows the company to learn about us. One thing we're learning as we go through this process is that it's just as valuable for them to learn about us as it is for us to learn about them: what our history looks like, what we've done, where we've recruited and what we've recruited for. That's the company intake overall. Then, once we know about the company and the value they add in their area, we take the next step and go into recruiting for the roles they're looking for. Can you move to the next slide, please?

So, what we've learned throughout this process: recruiting for true startups is going to look a lot different from what we currently do. I know Protocol Labs likes to call itself a startup, and that may be true, but this is going to be true-blue startup recruiting. There's going to be no name recognition or brand recognition, and a pre-established reputation goes a heck of a long way, so that will affect everything, from the messaging to the initial conversation all the way to the close at the end. There's a big focus on selling culture, vision, the founders, founder history, and founder success, so the company intake form really deep-dives into that. The other thing we learned is that we're going to have to pay a lot of attention to detail, really dot our i's and cross our t's.
Every single one of these companies is going to be different, so having a way to quickly and thoroughly get this information out in the open, and have it open and accessible to anybody so we can stay mobile and bounce between these different companies and roles, is going to be key. The forms are linked here if you want to check them out. They're still in beta form; we'd love feedback, and we really want to iterate on them.

What we did this week: Chris and I sat down with our recruiting manager, Mark Careway, and ran through a quick mock intake call where he pretended to be Goldsky. The form worked extremely well; obviously you let the conversation flow, but we were able to get a ton of information, and we got some solid feedback there. We then sent it to one of our sourcers, who's going to act more as a recruiter as we transition into next year and start working on these PLN roles, and they had a great mock intake call too. So it's nice that this form is very transferable, I guess you could say. There are going to be a lot of unknowns for the recruiting team in 2023; we're still working through these questions, and we're really hoping these forms will help us work through all that ambiguity and find success with these PLN companies. Thank you.

"Sorry, what's this thing? Can I make it go away? How did I put that up there? This is fun." Give it a right-click. "Yeah, cool. Thanks." Thanks a lot, Chris and Luke. Recruiting in 2023 might look a little different, and that seems like a fun addition. Walker, are you happy for me to share, or are you going to take over? Is Walker there? "Yep, I'm here." Walker, do you want me to continue sharing my screen, or would you like to share yours? "I do have a demo, so I will share my screen." Okay, I'm going to stop sharing. Are you able to share there, Walker? "Yeah. Which slide are we on?" Down a bit more. There you go.
I think you're on 21. Is that you? "Yeah, this isn't the right deck, Dave. I'll share my screen when I'm ready to do the demo." Okay. Just so you know, I'm accessing the same deck you were on, and if those slides don't look right, we can always move forward to the next presentation and skip you. "Let me see, because I made slides. Let me figure out where they went." Yeah, no problem. It looks like LaChrista is up next, just to give LaChrista a little heads-up that we're moving on. Let's try to share this and see how it works. Oh, back to that again. Okay, let's not do that. Let's do this. LaChrista, whenever you're ready.

"I think my slide starts on the next one. Awesome, thank you so much. Okay, my project is adding a preferred contact method for teams to the PLN directory. I'm part of the engagement working group within Spaceport. Spaceport supports the network through content, events, and engagement, and to help teams engage, we need to help them connect first. One way we do this is by maintaining the PLN directory. You can access the teams page of the PLN directory at plnetwork.io/teams to check it all out. Oh, sorry, that's on the next slide. Okay, perfect.

Through speaking with many founders and PLN teams, we found that they were really struggling to navigate the network and connect with the right folks. So in November, we focused our efforts on adding PLGO working groups to the directory, like Lurk and Bedrock. Now we'd like to add another layer to improve the effectiveness of the directory. Our goal is to give PLN teams and working groups the ability to receive and route inquiries efficiently, taking those off of just the leads and dispersing them within a team, and to provide PLN members with easy access to the correct teams and folks within the network."
Some challenges we've seen: we would like to have authentication in place in the directory before adding multiple contact options. We believe that may be (maybe this is a teaser) active sometime in the first quarter. The update for the preferred method of contact will not be live on the website in the expected timeline; we've been pushed back about three weeks. Through working on the project, we've learned that teams are really interested in listing more than one method of contact, so that folks can direct inquiries based on the topic. The project has prompted teams to review and update their profiles, which has been really exciting; PLN teams are very responsive. We've processed over 60 field edits in the directory in the last couple of days, updating and adding information and making sure teams and folks are assigned to the right working groups, so it's been really exciting. Next slide.

Okay, the progress we've made so far: the new team intake form is live on the directory, so now every new team that comes in can add their preferred method of contact when entering their team information. We've also updated the request-to-edit form for teams to include this field, so folks can go in and add that information themselves; we receive the edits and update the directory. We've completed outreach to 188 teams this week: 45 of those are PLGO teams and working groups, and 143 are PLN teams. 4% of the PLGO teams and working groups have provided their preferred method of contact, and 10% of our PLN teams have. Next slide.

The next steps in the process are to obtain contact information for the 12 remaining network teams where we're possibly missing founder emails or team lead contact information, or where it's incorrect for any folks who have had bounced emails. We'll initiate an outreach campaign to additional leads for those PLN teams if there are co-founders.
So if we've not received a response from any of the team leads within a group, we'll also initiate and track a Slack or Discord campaign, so that we're meeting folks where they are; we want to go where the team leads are and ask for this information in a way that makes it easy to respond. The goal is to reach a completion rate of 70% for PLGO teams and working groups and 30% for PLN teams, just understanding that, given the size of some of these teams, they don't necessarily have a group email or something of that nature. The update to the directory is expected to go live and be viewable as of January 6, 2023; you should be able to see the preferred method of contact where the blue oval is on this image. What I want to ask of you all is to please take a look at your team profiles at the start of the new year. If there's not a preferred method of contact, help me get one added. You can shoot me a quick message on Slack or Discord, via email, or go in and request to edit via the button in the directory. If your working group is missing, reach out; we're happy to help you get it added. That's it.

All right, thanks, LaChrista. Walker, should we go back to your slides, or have they moved? How's that going? "So, when I went to build slides, I didn't have a spot, so I added them later, and I think someone must have come in and added this placeholder for me. My spot in line is a little later, but I'm happy to jump in now if you like." If you're later in the deck, let's just wait until we get to you then, is that okay? "Yep." Okay, cool. We'll continue in the order things are listed, because some people have to hop out early. And if you joined late, just pointing out that the QR code in the top right links you to a form to vote on some fun awards; the winners will get some swag, and we'll reveal them tomorrow. Christian's up next. Christian, are you happy for me to continue sharing? "Yep, you got it." All right, awesome.
"Much like what I mentioned in the previous Show Me What You've Got in Austin, this is very similar. On the next slide, Dave, you'll see that I've started revamping the learning objectives of the PLN curriculum, so you'll see an iteration of phases where we'll begin rolling out these changes, which is another reason to keep coming back to the PL curriculum to see its different states. It's going to be a lot more targeted and focused in terms of what you'll be learning, especially for new onboarding hires. And much like what Chris said about information not living in only one place, the goal of this revamp for the PLN is to have the majority of the resources, especially for the teams, in one location, so that you don't have to keep scrambling around all the different Notion pages and Google Drives, or Slacking everyone to find things. So this is a quick preview.

If you go to the next slide: this is my current iteration of it. It hasn't been committed and pulled yet, but this is what the new curriculum page will look like for the Protocol Labs network section. You'll see it's a bit more targeted and refined in its sections. We're definitely developing more of the research page, especially covering what research is currently happening, and creating more understanding of what the five big groups in the PL organization are. So this is just a teaser for the upcoming iteration that will be released next year. Next slide.

Right now, these are the current steps: I'll be working with the network goods team to create a section on what their services are; a lot of us don't really know what they do, and I myself am still learning. I'm going to be working with Spaceport to deliver updated directories and almanacs and keep all the resources up to date, and the same with the research and grant teams."
There are a lot more research teams in existence at PL, and I just learned that recently, so that can be updated for all of you to come back to the curriculum and check out. And there are some overall curricular changes that we're implementing: an increased focus on non-technical understanding, differentiating technical individuals via tracks, and keeping all the resources updated with the evolving ecosystem of the network. Which also leads me to this: if you're interested in collaborating with me or the curriculum team on updating or adding information, please Slack me or email me; I'm happy to work with you. And you'll probably see a lot of these updates through Courtney's newsletter when we collaborate together; this is a plug for Courtney, and for working together. Utilize this curriculum, and check in next year to see the update. That's all.

Awesome, thanks Christian. Piggybacking off of some of the things that Christian said at the end there: the Protocol Labs Launchpad curriculum that you've interacted with is public and always accessible. So if you ever want to come back to it, a couple weeks or months down the line, it's always there. And we will be linking, at the end of this Show Me What You've Got and through Slack, an end-of-cohort survey, where we really value feedback on things like the curriculum as well as the whole six-week experience. So if you think some area needs some tweaks, or that we need to focus a little bit more on something, please do let us know. We've got a lot of changes coming down the line next year and we'd like to build it out a little more, which links to what Christian was doing for his project. So thanks a lot, Christian. Aziz is up next. Aziz, are you happy for me to continue sharing, or would you like to share? No, you can continue sharing, David. Awesome, take it away. Okay, hello everyone, my name is Aziz. I met quite a few of you during colo week.
As you know, I work in social media; I did a session during colo week on why social is so unique, specifically in crypto, and even more specifically on Twitter. One thing we lack in crypto across the board, and in Web3 in general, is language diversity. You can go to the next slide, Dave, please. So Web3 is very much predominantly English-speaking, which makes sense because we're still in the early days of this industry. And even if we decide to translate, a lot of the jargon isn't really translatable to other languages, because again, it's so new. A lot of it is still manual; we have to coin a lot of the terms, no pun intended. And it's not an easy thing to do, given that we tap into a lot of markets; we are very global in nature. So for my project, I worked with the Filecoin TLDR team. For those of you who don't know, we provide bite-sized, basic content that talks about Filecoin, which is perfect for us to translate into other languages. My approach was to create a landing page for Filecoin TLDR in Arabic, and the main goal is to tap into different markets and help achieve widespread adoption of crypto. But the challenge here, and I want to point this out, is that translation does not necessarily mean localization. You can go to the next slide, Dave, please. The Ethereum Foundation does provide translated versions of their website, but it's not really localized, meaning that if a native speaker of that language reads it, it won't really resonate. I felt the exact same way when I read their Arabic landing page: sure, it's grammatically correct, it's polished and everything, but it doesn't feel localized. It feels very robotic and not human. And that is something that we can change. So the next step would be to create, as I said, Arabic and French landing pages, because sadly I only speak Arabic, French, and English.
But thankfully we have Filecoin TLDR, which launched recently; if you haven't checked that out, I highly recommend you do so. After we come up with those landing pages, the process can easily be streamlined to other languages. And this can help with not only our presence and SEO performance, but also with events. We have ETH Dubai in March, so the Arabic landing page would fit perfectly for that. If we have an event anywhere in the world, we would always have a localized-slash-translated landing page for that location. It would help anyone who's interested in Filecoin to feel that they are welcome and that they resonate with this project, and it would definitely reinforce our mission of being global at heart. And what also helps is that as an organization we're very international; we have people from all over the world who can help with localizing these landing pages. This is still very premature; we're going to build it as we go. But I definitely think that this is something that we need, not only at Protocol Labs but in Web3 in general. And yeah, keep an eye out for our first non-English landing page for Filecoin TLDR. That's it.

Awesome, thanks, Aziz. I think this sounds super valuable. I think we've got people joining this call from four continents today, and I hope to see more of this throughout the PL network in the future. Megan? Hey guys, do you want me to share? Happy to share. You can share. All right. Hi guys, I'm Meg. As you know, I'm on Spaceport working on content and marketing, and my project was to develop a plan to launch Protocol Labs onto TikTok. Next slide. Okay, so why TikTok? At first glance, this app is just littered with dancing teens. There are all kinds of weird challenges, like sleepy chicken.
My grandma has been warning me for like three years, in all-caps text messages, that China's spying on you. It's littered with a lot. So, next slide. So I'll answer the last slide: why TikTok? TikTok has more than 1 billion monthly active users, and really this is a way for us to market and share our content with a really wide audience. I don't know if Chang is on here; we talked this morning, and he says it's basically teen Google. And one of the biggest communities on there is tech, and this is a really fun way for us to tell these stories and have fun with our community. It's also a great way to repurpose our YouTube content. In the next half we'll be doing a series about founders, so we're going to adapt those for TikTok. The three points I want to make, that I think are important, are: it's a really great way for us to develop brand awareness, share our knowledge, and support recruiting; it's a great audience; and it's pretty cheap to make content on there, as you'll see in my slides. Okay, so next slide. So why should we be on there? Oh, you know, I don't think that updated. Okay, well, anyway. We are the experts that are building Web3. As you can see below, I put some hashtags that are trending. There are a ton of influencers there who are just talking about Web3 but have never worked in Web3, never built Web3. So I think it's really important for us to leverage everyone we have who's actually building Web3, explaining Web3, and instead of making it this abstract concept, it's like: this is what we're doing. And we have hundreds of experts; you can see in this call how many talented people we have, and this is just Launchpad. There are just so many stories waiting to be told in a new way.
And, you know, I'm plugging the directory again, but as you go through it you'll see the teams, and within every one of those teams is a different story. So, next slide. Okay, so how are we going to do this? It is very cringe sometimes when brands launch on TikTok. And, as I mentioned before, we're not going to have our founders doing TikTok dances or engaging in any of those weird challenges. So it's like: how do we enter the conversation in a way that's approachable and fun and informative? What's really great about TikTok is you can see how many times these hashtags are used. So within, say, history TikTok, leadership, BookTok, it's really just a way for us to enter the conversation on these trending hashtags. And I think an amazing thing is telling these stories visually. We have so many companies with just tons of stories; Starling Lab is one of them. The middle one we filmed is from the founder series that we will be launching. And then there are other avenues, like BookTok; that's where everyone goes for recommendations, and I basically just filmed that yesterday in my house. As you can see, there's a sock on the floor I left, but it's very low-cost. Next slide, please. Okay. All right, let me just make sure. Okay. And it's also a way, as I showed in the last slide, to sneak into the conversation, but also to really share the depth of our knowledge in Web3. As you saw in recruiting, we are going to be hiring a lot in the next year with all these startups, and it's a way to engage in a fun way. Just to say: Web3, everything. You know, there's an article CNBC just did saying we're the number one remote company; that would be a way to join the conversation, like, hey, that's us.
And I think this really touches so many teams in the ecosystem, whether it's recruiting, Outercore, Spaceport, and others, and it's about being able to have fun with it. Again, I did one yesterday with my dog. And, as Brit talked about, there are a lot of problems with centralization in science currently, and I think this is a way for us to share those stories, and memes are just fun, different ways to do that. Next slide. Okay, and yeah, I think the possibilities for short form are really endless. Explainers do really well on there. There are a lot of ones that are like Web2 versus Web3, so it could be like Zoom versus Huddle. And I think we could share a lot of stuff there; company news and explainers would be great. I'm working with Christian and Launchpad to develop some stuff. And it's a way to have fun, too; sometimes there are trending songs that we can mix up with our video footage, plus updates from within companies, back to Courtney and the newsletter; that's a huge thing too. I just did an example: Cryptosat is closing its seed round and it's going to launch crypto satellites into orbit. That's super cool, and great visually. So, next slide. And so yeah, that's it. Thank you guys, it's been a pleasure being in this cohort, and thank you everyone for coordinating and doing all this; it's been fun. And feel free to email me any suggestions; I am all yours for anything.

Thanks a lot, Megan. You have me wanting to make a TikTok account; that looks fun. And to follow up: Megan mentioned that maybe a slide wasn't as up to date as she thought. If that's the case for other people, I can refresh the presentation during your talk if I realize changes might have been made further down the deck, so just let me know. Carol's up next.
Carol, would you like to share your screen, or do you want me to share mine? No, it's fine. Okay, go ahead. All right, thank you. Hello everyone. I'm not going to share my video feed; I'm a little bit under the weather, but I'm happy to see everyone here. My presentation will be a bit of a recap of what I already presented at the Launchpad on-site. I'm going to share a couple more screenshots and a couple more links to the actual product, so I'll be happy to share those, or you can access them from the slides. Next slide, please. Thank you. As most of you know, I work with the Saturn team, and the first assignment that I picked up was to work on the stats dashboard for the L1 nodes that we're running. It's a strictly technical project: there's a lot of coding, not much researching. It's mostly refactoring, but this project also allowed me to get to know the technologies of Protocol Labs across the stack. It was very much focused on the team's current needs and priorities, and it is both internal and external user-facing. Internally, we're using it to monitor our nodes; externally, we also share this stats dashboard with our clients, the community operators, as everyone knows them. Next slide, please. The motivation for this project was that the current stats page was very limited in technology. It was very MVP-ish when it was created; it was very much a do-and-forget project that then grew during development, so it's just natural that it needs a bit of rework. There were a lot of features missing: no sorting, no searching, no filtering, limited use of tooltips, and no actual help. A lot of our users were asking, why is this red, or why is this an issue? Usually we help out on the community channel, but it would be great to have some more info on what those metrics and stats actually mean.
Technologically, it was also hard to extend, maintain, and modify, because it was just generated output from the backend. With time, the list of nodes grew to more than 300, and this kind of view started to be really hard to follow; it didn't provide the overview that was intended. On the technical side, it was also tightly coupled with the orchestrator, which is one of the servers that we use for aggregating nodes in the Saturn network. The idea was that the orchestrator should be a single-responsibility service, and the dashboard should be deployed to a different subdomain. As another motivation, it also sounded like a good Launchpad project. Like I mentioned, it gave me a lot of insight into different parts of the Protocol Labs stack, mainly service infrastructure. I had to use AWS, and I started using PL libraries, like the Filecoin address library, and some repositories, mostly from the Saturn team. It is also a high-visibility project, since the improvements impact the team and the community, and the actual output of this will be used in production. One of the main reasons was also that it can be finished in a limited time, which is one of the requirements here, so we make sure that we finish the project at some point and move on. It was also low-hanging fruit for a newcomer like me, since it didn't require much knowledge just to start the project, but throughout the project I got a lot of insight into the stack that I was working on. Next slide, please. Quickly: this project is deployed under the address provided at the top of the slide. Visually, it is a little more pleasing than the previous one; it contains all the features that were missing, and it can be played with at this address. It's worth mentioning that it is deployed to IPFS: Fleek.co deploys the project to IPFS nodes, so you can also check what CID it has and access it from any of the gateways.
Next slide, please. On the work so far, from the technical perspective: the code got integrated into the dashboard source that is available on GitHub. It's still a branch; there's an open pull request that should be reviewed and merged as soon as possible. It's deployed both to Netlify and to IPFS via Fleek. Working with Fleek was not without issues, but they were very helpful and upgraded the gateway, so we were able to deploy the project. It contains all the requested features, like sorting, filtering, and searching. It contains live updates; previously we were just refreshing the page and pulling the data every minute from the backend, and right now it refreshes every 10 seconds in the background, and only the changed data is updated. There's also a toggle to disable refreshing if you want to keep looking at the data in its current state, so the data doesn't move around. There's also one heavily requested feature: click-to-share a copy of the current state of the dashboard. You can filter down to a couple of nodes, say you're a community operator and you want to share your nodes with someone over Slack: you just filter them out and copy a long link that contains all the state, and that helps with sharing. It also contains a slide-in with node details that can be extended to present a little more insight into the actual node that you are viewing. In terms of improvements, it has better tooltips that show without delay (previously they were just browser tooltips) and dedicated copy buttons next to the most useful data, so you can copy an operator's email or Filecoin address easily. It also improves the experience with password managers for authentication. Next slide, please. As final steps for this project: I definitely want this to be merged, then deployed to production, and then to get some feedback from community operators and implement any features that are still missing.
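The click-to-share feature Carol describes, a link that captures the dashboard's current filters and sorting, can be sketched roughly like this. This is an illustrative sketch, not the dashboard's actual code; all names and parameters here are hypothetical.

```javascript
// Hypothetical sketch: serialize the dashboard's view state (search,
// filters, sorting) into URL query parameters so the exact view can be
// restored from the link alone. Not the real Saturn dashboard code.
function shareableLink (baseUrl, state) {
  const params = new URLSearchParams()
  for (const [key, value] of Object.entries(state)) {
    params.set(key, Array.isArray(value) ? value.join(',') : String(value))
  }
  return `${baseUrl}?${params.toString()}`
}

function parseLink (link) {
  // Invert the encoding to rebuild the view state on page load
  const url = new URL(link)
  return Object.fromEntries(url.searchParams.entries())
}
```

On load, the dashboard would read these parameters back and re-apply them before rendering, so a community operator pasting the link into Slack shares exactly the filtered view they were looking at.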
So it's going to be much more easily extendable, and we also have a little more space in the node details view, so it's going to be much more pleasant to maintain and extend. Thank you.

Thank you, Carol; sounds like exciting development. Walker, it looks like we're back to you. Is this the correct spot? Okay, do you want to take over screen share then? Yep, I'll go ahead and share. My menu bar has disappeared on me. Stop screen sharing. You guys still see my screen? Yeah. Any suggestions on how I might be able to stop screen sharing? The menu bar has disappeared on me. I'm going to hit that green button and see if that will make it not full screen, and then maybe you can see it. This is fun. You might have to close and rejoin. Yeah, I might have to. Okay, I'm going to do that, and Walker, you should be able to screen share once I leave anyway, so you can continue and I'll be back in a second. Okay, awesome. We'll wait a second for Dave to join.

Rube Goldberg machine: the problem is that we'd like to have some more activities for deep divers during colo week, to get more integrated with the different protocol tools. And it would be nice if it was something that's simple to do, so we can get the beginner programmers, but still interesting to the most advanced. And it should be fun. So I thought of creating something like a Rube Goldberg competition where you split the group into teams of maybe two or three individuals. Ideally, you have people with different language expertise: JavaScript people, Go people, any language, it doesn't matter, because we want to show how interoperable everything is. Then you have each team spend a little bit of time every day just mapping out a crazy Rube Goldberg machine using Web3 tech like libp2p and IPFS. You can even incorporate Web2 stuff; like I did today, I've got a Google search component. And then you integrate as many people's computers as possible.
The neat thing about libp2p is that you can pretty easily have nodes talk to each other, no matter where they're located. And then at the end there's a little competition about who can make the longest or most interesting chain. So I built a little MVP, just to start getting a feel for how this could work. In my little Rube Goldberg machine, Nikolai is actually helping me; he responded to my Slack message, and we ran through this real quick beforehand and it worked. He's using the libp2p chat example application on his computer. I told him he has to specify a topic, so he doesn't need to know my IP address or anything like that, just a topic. The topic was "rube". He starts the chat application, gives it the topic "rube", and connects to my system. Then he can type some search phrase, and the search term gets sent over libp2p to me on computer two here. Then my node will do a Google search for it. I instructed him not to do anything inappropriate, but he's got admin access, essentially, over what gets shown next, because that file gets downloaded to my computer. Then my computer adds the file to IPFS through the Kubo CLI, and my computer starts hosting that file. So I send back to Nikolai an HTTP address that he can put in his browser; it points at an IPFS gateway that he can pull up to view the file that my system is hosting. So we'll see if this works. Okay, here's my terminal where he's connected. Nikolai, go ahead and type in something and hit enter. And if he doesn't, that's okay, I'll pull up what we did in our test. Or it's possible it is connected. Oh, there we got something: drop bear. Okay, so I got the term "drop bear" from him. Here's some logging from my computer: it downloaded correctly, it was added to IPFS, and then I sent him this URL. My computer's pulling it up right now.
Now, since I'm hosting the file on IPFS locally, it's a little slow until it gets cached within the network. An improvement could be to pin this with Pinata or something like that. But it's going to spin for a minute, and eventually it will come up with a picture of a drop bear. So while we wait a few seconds: any questions? I wanted to hook this up to Slack so it could be a little more interactive for everyone, but you need admin approval to add a Slack bot; we'll figure that out for next time. It does this sometimes; the IPFS gateway has timed out, I'll just hit refresh and it'll probably come up. Maybe it won't, but go ahead, Alex, what's your question? Do you have a certain event that you'd want to integrate this into, like a Launchpad project? So I was thinking Launchpad; we do deep dives, for example, where we installed the Lotus node and then hosted a file. There we go, that's the image that came up. So yeah, my idea was that throughout colo week we break off and spend like half an hour every day, so we do a little bit at a time, and that way you can build out a longer, more interesting chain that leverages everyone's expertise. So Nikolai, or anyone, could take this address, put it in a browser, and pull up the scary-looking thing. All right, that's it. Thanks, guys.

Thanks, Walker. Sorry, let me get my screen share going here again. We'll see how this works. Hey, it's doing what it's supposed to do. Yay. Cool. All right, we scroll down here. Miro's up next. Miro, are you happy for me to continue sharing my screen? Yes, please, that would be awesome. And everybody, can you see my screen correctly, just to confirm? I can. Okay, great. Perfect. So for my project I picked a task to find out how we can connect Saturn L1 and L2 nodes using libp2p. Next slide, please.
Saturn is a distributed CDN network, and our goal is to accelerate retrievals from the Filecoin network. The idea is to build many points of presence all around the world, so when a browser client connects to the Saturn network, it connects to an L1 node that is geographically close to the browser, and that should give us the best latency we can get. For each L1 node, we want to have a swarm of L2 nodes in the same geographic region that can serve as a larger L2 cache for content. And finally, if we cannot find the content in L1 or in L2, we go to the storage provider to fetch it from the source. Next slide, please. If you think about the request flow: we have a client, the browser, which connects to the L1 node and asks it for the content. This connection happens over HTTP or HTTPS and is a typical request-response flow. Now, if the L1 node has the content in its cache, it will respond immediately and all is good. If it doesn't have the content in the cache, then it wants to fetch it from an L2 node, and that should also be request-response style: the L1 node asks an L2 node for content, and the L2 node responds back. Now, we want to run L2 nodes as part of Filecoin Station, running on user computers in home networks, which typically means these computers are behind a firewall and cannot be reached directly from the cloud, from outside. We found a way to make this work with HTTP, but it was super tricky. Next slide, please. What we could use is hole punching, to punch a hole in the firewall, but that's not always reliable and it takes time; it needs several requests and responses to establish a hole punch, and that adds latency, which means the request would take longer to finish. So what we want is for the L2 node to initiate a connection to the L1 node.
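The lookup order described here, L1 cache first, then the regional L2 swarm, then the storage provider, can be sketched as follows. This is a toy model of the flow, not Saturn's actual code; every name in it is hypothetical.

```javascript
// Toy model of the Saturn retrieval fallback chain. `l1Cache` is a Map,
// `l2Nodes` is a list of objects whose async fetch(cid) returns the
// content or undefined, and `storageProvider` is the source of truth.
async function retrieve (cid, { l1Cache, l2Nodes, storageProvider }) {
  if (l1Cache.has(cid)) return l1Cache.get(cid)   // fast path: L1 hit
  for (const l2 of l2Nodes) {                     // try the regional L2 swarm
    const content = await l2.fetch(cid)
    if (content !== undefined) {
      l1Cache.set(cid, content)                   // warm the L1 cache
      return content
    }
  }
  return storageProvider.fetch(cid)               // fall back to the source
}
```

The design point is that each fallback level trades latency for coverage: the L1 node is closest, the L2 swarm is a larger nearby cache, and the storage provider always has the data.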
Because that doesn't require any hole punching: the L1 node is on a public IP address. And now, when the L1 node wants to fetch content, we ideally want to send the request in the opposite direction: we want L1 to request content from the L2 node. This is pretty much impossible with HTTP. Next slide, please. And this is where libp2p comes into play, because libp2p gives us a bidirectional channel. When the L2 node dials the L1 node, it establishes a connection over the TCP transport, and once this is established, we can send messages in both directions and request content from the L2 node. It works, and it's very easy to work with. Next slide, please. I prepared a little demo to show you how this works in practice, and I will take over the screen share if I may. No problem, go ahead. On the left side you see my browser window, which is showing the website served by the L1 node; the L1 node is running somewhere in a data center in Frankfurt. I deployed my Node.js code using Docker and the Fly.io service. And you can see there are no L2 nodes connected. So let's pretend I start my Filecoin Station with the Saturn L2 node running; here I will just start a Node.js process. This runs a libp2p node which doesn't listen on any address; you cannot dial this node, it is running in private. But what it does is connect to the L1 node, as can be seen here. Let's check on the L1 node: it is aware of the L2 node. This is another cool property of libp2p: it maintains a list of all peers that are connected. And now, ignoring the error messages, I can start requesting content. In my browser, I send a request to get content from IPFS. Now the L1 node will connect to the L2 node to fetch the content from it.
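The key property here, that a single outbound dial from L2 yields a channel usable in both directions, can be illustrated with a small in-process model. This deliberately uses no real networking or libp2p; every name is hypothetical, and the real system would carry these requests over libp2p streams on a TCP connection.

```javascript
// Toy in-process model of a bidirectional channel. L2 sits behind a
// firewall and dials out once; after that, EITHER side can initiate a
// request over the same logical connection. No real networking here.
function establishConnection (l1Node, l2Node) {
  return {
    // L1 -> L2: impossible over plain HTTP when L2 is behind a firewall,
    // but fine here because the connection already exists
    l1Requests: (cid) => l2Node.serveContent(cid),
    // L2 -> L1: the ordinary direction, same as the original dial
    l2Requests: (cid) => l1Node.serveContent(cid)
  }
}
```

The point of the model is only the asymmetry it removes: with HTTP, the direction of the dial fixes the direction of the requests; with a bidirectional channel, they are independent.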
The L2 node will just contact the IPFS gateway to get the content and return it back, and then it goes all the way back to the browser, and we can see text from an xkcd comic. And it works for images as well. That's it for the demo. And maybe just a note on what's next: I also did a benchmark to understand the performance of using libp2p compared to using HTTPS, and it turns out libp2p in Node.js is 20 times slower than HTTPS. Twenty times. So my next project is to look into these performance issues and try to figure out if we can speed it up. That's all, thank you.

That looks very cool; awesome. I'm going to show my screen again. Sorry, give me one second. Next, it looks like we've got the AV events guide. One second here, James and Patrick; we've been bouncing around hosting responsibilities since I had to hop off, and now the host is my phone, which is not where we want it to be, so let's switch it to my computer. There we go, and now I can get that audio. I'm going to be sharing some audio on this one, I believe. Yes, audio, and there might be an option when you share to optimize for video. Yeah, that's what I'm looking for right now. Optimize for video. Boom. Okay, sound on, all the fun stuff. And is that working? Is there a block on the screen again? There are blocks. Wow. All right. Okay, we're going back to the plan we had going earlier. Sorry about this, everybody; we'll address all this in our guide. Yeah, I don't know why you guys aren't hosting this. All right, let's put that on slideshow. Let's share this. Let's just share everything. Are you on an Apple laptop, Dave? I am, yeah. Okay, it's about to be a broken Apple laptop. Let's see if I can find a quick solution. Yeah, actually, when we switched to yours, I restarted my computer and it was working fine, but once we got Miro's screen share going it seems to have gone back. Anyway, let's keep going with this, Patrick and James.
Hopefully this will work just fine. Yeah. Should I move forward to the next slide, or do you want to? Yeah. And Patrick's not here, and we're video guys, we just made a video, so I'll just mute my thing and it'll speak for itself. Cool. Just confirm that the sound is coming through okay, and we'll go from here.

Hello, Launchpad. Thanks, Anna. Hi, my name is James, and I work on the Outercore events team as AV manager. I have over 20 years working in live events in the AV space. I hope that I can answer your questions today. So presenting your ideas to an audience can be stressful for some, easy for others. As a conference attendee, there's always a chance that certain presentations will either bore or fascinate; there's little folks can do to control that uncertainty. Fortunately, Patrick and I have a great deal of control over how someone looks and sounds, either on stage live or at home on camera. It is our passion to bring out the best in the speakers we work with at PL and beyond. Currently, there are several well-intentioned documents floating around PL that contain conflicting standards and best practices when it comes to running AV for an in-person or virtual conference. Our Launchpad project aims to incorporate the best of those documents, and some of our own findings, into a very simple one-stop-shop AV events guide for both clients and vendors. I will now take it over to Patrick to elaborate on our keep-it-simple approach. Well, thank you, James. Hi, I'm Patrick. I also have about 20 years of experience with live event production and recently joined the Solaris team to help them out with AV. It is true: we want this guide to be very simple and evergreen.
You've probably poked around on the Protocol Labs or Filecoin YouTube channels and seen videos that look very similar to the ones playing behind me. This layout, or look, or format is known within Protocol Labs as the streaming template. In other places it's known as a PiP, or picture-in-picture, and elsewhere it's called a composite: a composition of the video of the presenter as well as the slides they are presenting. And it's a strict standard at Protocol Labs for both live and edited video. Now, some folks have complained about this look as a standard because it lacks dynamics and is uninteresting. Surely we can cut between camera angles and crowd shots, cover Q&A, and all that within a talk? Not so fast. I agree with these objections, but I also understand the reasoning behind this mandate. The format protects the anonymity of the crowd, removes the guesswork for whoever in the world might be editing, maintains consistency across all recorded talks, and ensures that no slide is forgotten. If the end goal is this, our guide to help folks achieve it does not need to be complicated. We'll come back to this in a minute, but first let's kick it over to James to talk about gear. Up until around six years ago, live video production and streaming was expensive and required lots of enormous dedicated hardware, from switchers to mixing consoles to cameras of all shapes and sizes. Several AV vendors are still using this equipment, purely because they invested so much into it not too long ago and folks will still pay to rent it. Well, thanks to the Twitch community, most of the old equipment we hauled from venue to venue has now been replaced by open-source software and extremely affordable hardware that lowered the barrier to entry significantly. What was normally this became this. So now let's jump back to the template for a minute.
If the goal is to be more budget minded, and if this look is the end product we want, our guide will help you achieve this look in the most simple and efficient way possible. Don't get us wrong, we love gear, but we also know what is not required to achieve this end product. You can do a lot with a little. For example, she's not even real. It is true. I am not. Over to you, Patrick. We want our guide to help you and your potential vendors quickly navigate the event lifecycle from pre-production through production and post-production. I'll start with in-person events. For pre-production, we'll go over venue interaction, design requests for slides, audio and video requirements. For production, we'll go over streaming keys, how to deliver those to your AV vendors, design asset loading for your AV vendors, as well as AV checks before the event, and then finally post-production. So asset gathering, all the recordings, all the isolated recordings, as well as editorial requirements, making sure that the editors use the streaming template, as well as YouTube best practices for publishing. Over to James for virtual events. We're all too familiar with video conferencing these days, but with just a few simple considerations, you can have an engaging virtual event, like this one, that stands apart from the typical Zoom meeting. These would include elements of pre-production, like platform selection, AV and internet requirements, presentation workflow. And during the production, things like framing, lighting and sound, remote interactions like breakouts, presentation, media management, and on to the post-production, which includes asset gathering, editing requirements, and best practices for YouTube. Our plan is to get approval from other video folks at PL before launching in Q1 of 2023. Blah, blah, blah, blah, blah. Please stop talking both of you. Okay, that wraps it up for the AV events guide. Stay tuned for more amazing Launchpad projects. Cohort v7 forever. That was awesome.
Thanks, James. Thanks, Patrick. I could see the chat blowing up there. Lots of enjoyment from that one. Great. That was really cool. That'd be a tough act to follow. Let's see who we've got next. I can, do you want me to click through this? Am I in YouTube? Hello, Launchpad. My name is Anna, and I am not part of the Launchpad. My name is Anna, and I am not. Next slide, please. How do I get to the next slide? There we go. Cool. All right. Thanks a lot again for that one. Just a reminder. The QR code in the top right there is for voting for projects, but we can do that at the end after you've seen everything. And we've got a team from Banyan up next. And Alex, I think I saw you on the call. Would you like to, would you like me to continue screen sharing? Yeah, let's just go with that. We have a lot of slides to get through. So yeah, cool. And I'll try to, I'll give you a reminder time wise, if we, if we're getting close. Okay. So at Banyan, we've been working on a compliance framework for miners. And the main thing we're working on right now is a way to publish and attest to auditable compliance chains in a way that, you know, any user can actually go look, figure out compliance, make decisions about where they want to put their data. Next slide. Yeah, so the, like the motivating product idea is that like, any type of enterprise data, but like actual user data, hospital data, like data kept by financial institutions, like anything that is stored in a business setting or using a business setting needs to be stored under some sort of compliance condition. Like this is about 80% of real data, our initial research shows. So without some sort of signaling mechanism on top of Filecoin, we're losing out and like systematically missing out on intake of like 80% of real data. Like there are miners that do individual deals and like get some certain level of compliance themselves.
Like we've heard of miners going, getting SOC 2 compliant and like making deals with clients, but like it's not really a scalable way to address the problem. So yeah, and also if you're, if you're running a smaller miner, like not petabytes on petabytes and petabytes and petabytes, but like maybe a couple of petabytes, maybe even a couple terabytes, like it's very expensive and there's no way to signal your services. So it's not even clear if like becoming compliant is a good investment for you. So the solution that we're thinking of is publishing compliance certificates from the FEVM in a way that's auditable and ingestible by all sorts of APIs. And then we're also doing some research into actually onboarding Filecoin miners onto such a program. Next slide. So the way it works is that a miner would go to an auditor and get certified, which is a very long process. We want to help streamline that in as many ways as possible and we're doing some research into best practices for that. And also building relationships with these auditors. The auditor would mint some sort of certification. So usually, like if you're talking about HIPAA compliance, the usual product of such an auditing process is like some sort of PDF that gets published in some repository managed by the company that hands out these licenses. So like HITRUST is a form of HIPAA certification. They maintain a repository where people they license their certification process to can actually upload their audit reports. And then the auditor would then take these certifications, re-host them and then publish these certs on chain. And then make those, like all that metadata, indexable by some sort of indexing protocol. We're thinking of trying to bring The Graph over to the FEVM. So that would be, which is going to be a hard lift, but we think it'd be good for the ecosystem and also make our end product way more usable. Next slide. So essentially what we're trying to implement is some type of compliance attestation.
So like, rather like, so we are trying to basically implement some sort of statement that like X says that Y says that Z is A, or more concretely, like Banyan says that this auditor says that this miner is SOC 2. So that somebody who trusts Banyan can go off all this route of trust and then trust that the miner that they want to make a deal with is SOC 2. Next slide. Yeah, so so far, our main focus is developing this attester, or like the framework in which either a client or some sort of service like Bakiya would actually ingest this data. For MVP, we're just implementing a very straightforward, a BOS database and like S3 buckets for hosting these certs, exposing all that data through like consumable APIs. For the MVP, we don't have any real miner data on board right now, but maybe we can, but if we're like going, we're trying to get some partnerships going to actually get some miners onboarded. It would be nice if we could expose this as some sort of consumable API, but we're also, we also have like, we're also planning some sort of like really simple interface for you to search and find miners that you want to make deals with. Yeah, and for onboarding right now, it's just a white glove service and consulting on the certification process because it is complicated, but there's a lot of steps that we can collapse just by doing this a lot and by researching the best routes specifically for Lotus and for different types of Lotus implementations. In the future, we're going to move this all onto the FEVM and use The Graph implementation for Web3 exposure. And then we're in the process of researching like a whole suite of compliance products that would help get miners compliant and data onboarded into the network. Next slide. So Sean. Yup. And so kind of the whole goal of, you know, storing this on chain is kind of having this be a lot, you know, like user accessible, right?
Like user accessible, publicly accessible at any time, any moment, and kind of creating a sort of compliance Lego to actually build further, you know, dapps down the road on. And so we're kind of implementing an open source attestation contract where it's sort of like a, like a compliance style, the governor can mint and sort of revoke certifications for miners based on their compliance status. And this compliance status will just mostly be, you know, like publicly accessible, like readable contract-type getters that, you know, setters as well, that like allow people to build off of miners' compliance statuses. And of course, for the governor to actually, you know, set those changes, revoke changes, approve, et cetera, et cetera. And of course, as you can kind of see in the bottom right there in the picture, you know, these contracts are compilable, you know, they've been deployed, and kind of the, what's left here is testing. Next slide please. Okay. You got about a minute left in this one. Okay. We'll make this quick. The Graph is an indexing protocol for querying networks like Ethereum and IPFS. It's just a readable API for on-chain data. And this will be used to index filter options for miners by certification standards, as you'll see later on in the demo. Graph indexers are actually not live on the FEVM right now. Ideally we'd like to build a subgraph, which is an open API that you can access without actually going to the smart contracts to find compliant miners. And we're currently pushing the Filecoin ecosystem to push out indexers. And while we're waiting for this feature, we do have a front end to show. Next. This is a quick demo. Yeah, you can play the video. This is a quick demo. In terms of design decisions for clients, we added in multiple filters that would narrow down their options to their ideal miner.
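To make the contract behavior described above concrete — a governor who can mint and revoke certifications, with public getters anyone can read — here is a rough, hypothetical sketch in Python rather than Solidity. All names (ComplianceRegistry, Attestation, the example miner ID, the "banyan" governor) are illustrative assumptions, not Banyan's actual contract API.

```python
# Hypothetical in-memory stand-in for the on-chain attestation registry.
# Models "X says that Y says that Z is A", e.g. Banyan says this auditor
# says miner f01234 is SOC 2. Not the real contract, just the shape of it.
from dataclasses import dataclass


@dataclass(frozen=True)
class Attestation:
    attester: str   # e.g. "Banyan" (assumed)
    auditor: str    # e.g. "ExampleAuditor" (assumed)
    miner_id: str   # e.g. "f01234" (assumed)
    standard: str   # e.g. "SOC2"


class ComplianceRegistry:
    """Governor mints and revokes attestations; anyone can read status."""

    def __init__(self, governor: str):
        self.governor = governor
        self._active = set()

    def mint(self, caller: str, att: Attestation) -> None:
        if caller != self.governor:
            raise PermissionError("only the governor can mint")
        self._active.add(att)

    def revoke(self, caller: str, att: Attestation) -> None:
        if caller != self.governor:
            raise PermissionError("only the governor can revoke")
        self._active.discard(att)

    def is_compliant(self, miner_id: str, standard: str) -> bool:
        # Public getter: walk the active attestations for a match.
        return any(a.miner_id == miner_id and a.standard == standard
                   for a in self._active)


registry = ComplianceRegistry(governor="banyan")
att = Attestation("Banyan", "ExampleAuditor", "f01234", "SOC2")
registry.mint("banyan", att)
```

A client who trusts the governor would then call something like `is_compliant("f01234", "SOC2")` before making a deal; revoking the attestation flips that answer back to false.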
And for storage providers or miners in this case, we added an open view of other miner profiles so that they can compare their own qualifications with others. This can be a way for miners to increase their certifications if they feel like they're not competitive with other miners. We were very intentional with how the demo was designed and we're hoping that Filecoin can get some indexers out so that we can hopefully release this to production. Awesome. Okay, cool. Thanks a lot, Alex and Sean. Looks very cool. I think we might have had a blank slide here, so I'm going to move past this. And I got a message from Paul that he might be on another call. I just want to scroll through here. We might need to come back to Paul. I've got, actually I've got about six minutes. Perfect. That's right within the timeframe we're looking to operate in. So you're happy to have me continue sharing, Paul? Yeah, let me just double check it on my name on this. Okay, we're good. Yeah, let me give you the quick one too. So we actually have an internal build out going on, at least with some Figma diagrams and some back end integration with Lotus, to build something called enterprise.storage, but it has a very specific product market vertical that it's going after. And we'll see how that evolves, but I had an idea that there are customers out there that fit into the bell curve of data sets that are between 10 and 100 terabytes right now. And there isn't necessarily a fit in our ecosystem for that. They are archival customers. So next slide. I would like to see something built. I will certainly not be building it. I just copied this into Figma, but I basically ripped off Google Flights. But there are people out there consistently that reach out to us that have region specific data storage needs first more than anything else.
So I created an idea that you would just drop into a website, make it interesting for the user, throw up some possible ideas if they wanted to store something under a 12 hour SLA or one copy or more, throw up some ideas on, you know, like Toronto, for example, that's got great natural disaster protection. It's got low latency. It's got Commonwealth data sovereignty laws. And it's got America to protect it. So it doesn't have to have an army, which is also good for the cost structure of running a country there, for keeping the price down. And of course, that's just a joke. But another example is customers in the MENA region have no interest in storing their data outside of MENA because they have Islamic law protection. They can store things that we might not allow in the United States. It's, you know, another great example is they also might have a stable government. Hong Kong is also a great example, where people actually prefer to keep their data in Hong Kong because of anonymity. There are lots of dodgy data sets that get to be stored there, from porn to, you know, more porn. And then you've got other things that Hong Kong favors. They have CCP agreements that allow for the CCP to look at your data, which is the requirement if you want to allow that data to be used in China. Next slide. The example here is it would just be an open market bid where SPs would advertise themselves at a region-specific level, or you could even swap out New York for a miner ID. But in general, I would like to see something built where large SPs are advertising themselves and being able to collect fiat payment, starting off from their regions, because there's a lot of requests for this. And then you could, you can see here, this is obviously just a mockup, but you would be able to see, oh, look, it's actually very cheap to store data in Northern Norway because data centers are cheap there. Power is cheap. Cooling is cheap. Maybe I only care about price more than anything else over region.
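As a rough illustration of the open-market search being described — filter storage offers by region or SLA, sort the rest by price — here is a minimal Python sketch. All offer data, field names, and the `search` helper are invented for the example; nothing here reflects enterprise.storage's actual design.

```python
# Toy marketplace search: filter SP offers by region / max SLA,
# then rank the survivors cheapest-first. Data is entirely made up.
offers = [
    {"sp": "f0111", "region": "Toronto",  "sla_hours": 12, "price": 9.0},
    {"sp": "f0222", "region": "N-Norway", "sla_hours": 24, "price": 2.5},
    {"sp": "f0333", "region": "N-Norway", "sla_hours": 12, "price": 4.0},
]


def search(offers, region=None, max_sla_hours=None):
    """Apply only the filters the client cares about, sort by price."""
    hits = [o for o in offers
            if (region is None or o["region"] == region)
            and (max_sla_hours is None or o["sla_hours"] <= max_sla_hours)]
    return sorted(hits, key=lambda o: o["price"])  # cheapest first
```

The design point is that each filter is optional, matching the "filter of importance" idea: a price-only shopper passes no filters and simply gets the cheapest region first.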
Whether or not anything will get built like this with our other tool, we'll see. I'm happy to see some other people focusing around compliance and other things because those are requests that could fit into this model where, when you go to store data, it's a filter of importance. You may not care whether somebody is compliant because you could just be storing lots and lots of photo albums that are 100 terabytes or more, and who cares if it's compliant data. But it's an idea. Whether or not we'll integrate this into enterprise.storage depends on how complex this would be. But I'm happy to see people talking about maps, grading storage providers, all things like that will eventually come to fruition. It'll probably take a year or more for some of those tools to be vetted out and into production. That's it. Very cool. I could see this being useful. And I'm glad we were able to squeeze you in here. Paul, thanks for attending. Yeah, no, I mean, this is more on me. It was just bad scheduling on my part. Thank you. No problem. Do we have Sean here? Not sure if Sean's on the call at the moment. Sean, are you there? Maybe we'll move past Sean and we can come back if he's able to join later. Okay. Just click through these here. He's got a couple slides. That was actually my slide deck. Oh, that was your slide deck. Oh, sorry, Richard. Oh, sorry. I skipped right ahead. Sorry about that. Let's come back here. Sorry, Richard. You're happy to have me continue sharing now that I've given everyone a preview of your presentation? That's actually the perfect way to do a slide deck. Cool. Sort of what to expect. Yeah. So yeah, my project was sort of two, two parts. I presented last week my design doc on encouraging metrics. And as part of that, I'm also going to kind of improve observability in my own team's project, which was the Bacalhau project. I mentioned that last week. And so I wanted to follow up with where we're at right now. So next slide, please.
So the objective, just to kind of make sure we're all on the same page. Next slide. Yeah, so the objective again is that we want to improve the observability of Bacalhau. Bacalhau again is the compute-over-data team. So we want to improve this by building out our current systems for metrics collection, processing and dashboard generation. There's a bunch of issues that are filed against this. So I'm kind of starting to work towards them. To provide a little background. Next slide. Next slide. Yeah, to provide a little background. Bacalhau has been around for about nine months, and we haven't been able to track metrics in the system so far. Up until this point, we actually haven't had any sort of substantial dashboards to track useful metrics to answer questions like how many unique users are we having, like in total, on a given day, over a period of time. We also haven't had the ability to track sort of how many jobs are people running on our systems, how many jobs are we running per day, how many in total. These would be very nice, so-called vanity metrics. Also it'd be useful to be able to break down our statistics in terms of which jobs we're running, what types of jobs are we running. And, you know, this is necessary for a few reasons. Obviously, first and foremost, it's useful to know sort of the operations of your system, right? This gives you a window, otherwise you're sort of operating in a black box situation where you can't see what's going on. But if you have these metrics, it gives you a way of peeking in at the health of the system. You know, if you're expecting a certain number of jobs to run every day and then all of a sudden it just starts dropping on your dashboards, you can infer probably something isn't healthy with your system.
Of course, another reason that I espoused in my design doc talk was that you do want to track metrics in order to ensure when you roll out features, you're actually able to measure the changes that you say should happen. So if you're claiming you're going to, you know, increase users or you're going to make a particular change so that the number of jobs goes up, you want to have metrics that are already tracking that so you can sort of have a before and after to understand whether, when your feature lands, it actually does what it said on the tin. And of course, tying in with all of this, and also something I promoted in my design doc talk, is to make sure you have the ability to roll back and measure whether this was successful. So, you know, the only thing worse than pushing something out to production and having it cause problems is not knowing whether the bleeding has stopped. You need a way to sort of understand how your system is responding to changes, just in general, rollbacks being the most prominent reason, but there's a lot of reasons you'd want to be aware of sudden changes in your system outside of expected feature changes. So those are the main reasons and that's sort of why we're doing this, the background. So next slide please. Next slide. So here's a high level overview of sort of what we're going to be, or what I'm going to be, designing and putting together. This is sort of a high level view of the metrics pipeline. You can see on sort of the left side I've labeled these as local nodes. These are the currently running jobs in a Bacalhau system. So at the moment we actually have six Bacalhau nodes running. Obviously anyone can run their own nodes, but we always maintain a set of six nodes so that anyone can run whatever they like on our systems, just to test it, kick the tires. These jobs are generating two forms currently of output. One of them is an open telemetry trace.
What this means is every time a job is running, various statistics about how long it spends in certain functions and certain amounts of time and memory, kind of usages of the actual operating of each individual trace as it's executing, is sent to a system we call Honeycomb. We also have a set of logs that we produce that are append only. So what this means is that we actually just keep track of these logs over the entire length of time the node's up; each time these nodes are reset the logs reset. So for this reason the new design will essentially have a system where every five minutes a job on a cloud provider will pull all of our data from Honeycomb and from the CSV files and parse it and normalize it into data that's actually sort of stored in a data directory, so that the data isn't lost if the nodes go down, and we also have the ability to query for particular time periods instead of having to load the entire log file just to query for a particular time frame. So once we've gotten this data parsed and normalized and placed into a data directory, we can actually start to use Prometheus, which is the back end data storage layer for Grafana. We can use this to query the JSON files that we've stored. Just for example, I'm using Google Cloud endpoints and Google Cloud buckets, but we may end up using AWS or some other cloud provider; I'm just most familiar with these at the moment. But yeah, Prometheus is able to query JSON stored in the cloud, so this gives us a nice way of storing our data in a publicly accessible way in a Google Cloud bucket, but also in a format that can be queried by Prometheus to then end up generating our final dashboard.
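As an illustration of the "parse and normalize" step just described — not the actual Bacalhau pipeline code — here is a minimal Python sketch that groups append-only JSON log lines into five-minute buckets, so a later query can load only the window it needs instead of the whole log file. The record fields (`time`, `job_id`, `user`) are assumptions for the example.

```python
# Sketch: normalize append-only JSON logs into five-minute buckets.
# Each bucket would then be written as its own object in the data
# directory / cloud bucket, queryable by time range.
import json
from datetime import datetime


def bucket_key(ts: datetime, minutes: int = 5) -> str:
    """Floor a timestamp to its bucket, e.g. '2022-12-01T10:05'."""
    floored = ts.replace(minute=ts.minute - ts.minute % minutes,
                         second=0, microsecond=0)
    return floored.strftime("%Y-%m-%dT%H:%M")


def normalize(log_lines):
    """Group raw JSON log lines by time bucket."""
    buckets = {}
    for line in log_lines:
        rec = json.loads(line)
        ts = datetime.fromisoformat(rec["time"])
        buckets.setdefault(bucket_key(ts), []).append(rec)
    return buckets


logs = [
    '{"time": "2022-12-01T10:02:11+00:00", "job_id": "a", "user": "u1"}',
    '{"time": "2022-12-01T10:04:59+00:00", "job_id": "b", "user": "u2"}',
    '{"time": "2022-12-01T10:07:30+00:00", "job_id": "c", "user": "u1"}',
]
buckets = normalize(logs)
```

Because the bucket key is part of the stored object's name, a dashboard query for 10:00-10:05 touches one object rather than replaying the entire append-only log.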
So that's sort of the overview of the system that is going to be built out, and the idea of this is that we'll be able to track very low level metrics but also, as part of this pipeline, aggregate them and produce higher level metrics, such as things like active users, whatever active might mean; maybe they log in and use something so many days in a row, or X number of days in a certain sliding window. So these kinds of definitions can be applied at this level, so that we can have very high level statistics as well as low level statistics; you really need both in order to sort of understand your system. Next slide please. Richard, you have about a minute left. Okay, I can get through this, so if you just can kind of go through the next four slides. These were produced by Luke, who is working on our team. We were sort of, I was in Austin last week, but he put this together and we helped work through this. So if you can just go through the slides, these are actual numbers from the initial sort of pipeline that we've run to gather some statistics. Yep, and then the next slide, these are sort of jobs running; it only goes back to October when we were writing log files out. Next slide. But this is just an example of some of the initial data we've been able to extract, and hopefully we'll have a lot more data once we start figuring out what sort of the business metrics we want to track are. Next slide. So next step is essentially to take this pipeline as it was shown and productionize it to generate the graphs that were just shown. So at the moment this is sort of being done by a bunch of goodwill, baling wire and duct tape, but if we want this to be more productionized, that's kind of the next step: how do I take this and make it look like that nice picture with all the redundancies and fallbacks that you'd expect. So that's where we're going, and next slide: everyone don't forget to give yourself a hug today. Awesome, thanks a lot Richard, that was great.
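One way to pin down the "active users over a sliding window" aggregation mentioned in that talk, as a hedged Python sketch: a user counts as active on day D if they ran at least one job in the trailing N-day window ending at D. The event shape and the seven-day default are assumptions, not Bacalhau's actual definition.

```python
# Sketch: aggregate low-level (user, day) events into a high-level
# "active users" metric over a sliding window. Data is illustrative.
from datetime import date, timedelta


def active_users(events, day: date, window_days: int = 7) -> int:
    """Count distinct users with >= 1 event in the window ending at `day`."""
    start = day - timedelta(days=window_days - 1)
    return len({user for user, d in events if start <= d <= day})


events = [
    ("u1", date(2022, 12, 1)),
    ("u2", date(2022, 12, 3)),
    ("u1", date(2022, 12, 8)),
]
```

Varying `window_days` is exactly the "whatever active might mean" knob: the same low-level events yield 7-day, 3-day, or daily active-user counts without touching the pipeline itself.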
All right, we've got WizKid up next. Nishant, are you online? Yeah, I am. I would like to present my own screen though. Okay, cool, I'll stop sharing and you can take over. Okay, let me know if you have any troubles during the screen share. Yeah, we can see you, you can go ahead. Cool, hey folks, I attended Launchpad remotely and it was a blast, I learned a lot of things, and this is a project I worked on, which is ytdl-ipfs, which is a wrapper around youtube-dl. For those of you who don't know, youtube-dl is like a script where you can provide a YouTube URL and download the video for your local consumption, and people use it for all kinds of things, like to archive YouTube videos or share them with each other where internet is not something available. So the goals I set out for this project were basically: build a wrapper around youtube-dl to preserve and share videos on IPFS, that is the decentralized web; basically have a simple video player which could actually play that video; use a document store to save YouTube feeds and content mappings; and then host it on the public internet so that anybody can fetch it locally and then start playing those videos. And I don't have human interaction, so sorry about that. So what I built, let's see a demo, you can try it as we go. This is available on Docker Hub right now. To set this up you'll have to run this command, which is docker pull, then the image name, slash ytdl-ipfs colon latest. You can copy it from the slides itself, but I have it copied here. Now to run it I can just pass it the interactive flag, which is docker run interactive with the image name and version I want to run, and what it will give me is basically, it brings in all the dependencies and everything and gives me a prompt to enter the URL. So this is a video from John Oliver which is about the Qatar World Cup, which he recently posted. He's one of the critically acclaimed anchors I really love, and his videos get taken down all the time by different regimes, so that makes
it a good case. So it downloads it, connects to my local IPFS node, and saves it there, and then gives me a link to the video player which I built, and it's hosted statically. Now I can click on that and it starts playing over IPFS. So if I go to the network tab here, I can see all the content got loaded from the local network. A quick refresh, it'll actually tell me that all the videos are loaded from IPFS.io, but since I'm using IPFS Companion these get loaded from my local node instead of loading from the network. And that's basically the demo I have built out for you. The docker images are live on hub.docker.com, so you can actually go here and check it out for yourself. It's all open source, so you can start pulling these images and start caching videos locally, and let me know if you have any questions around this. But before that, there's scope for improvements here. This does not cover multiple formats; like, YouTube has a lot of video formats that it presents for each video, from anywhere, like to load videos on a 3G network versus like a 5G network they'll do pre-optimized videos; all those formats which users are able to use. So we can download video content in different formats, and this is not happening right now, but that's on my plan to implement.
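The end-to-end flow demoed above — download with youtube-dl, add to the local IPFS node, then play via a gateway — can be sketched roughly like this. The `youtube-dl -o` and `ipfs add -Q` flags are the stock CLI ones, but the player URL shape and the helper names are assumptions, not the project's actual code.

```python
# Rough sketch of the ytdl-ipfs flow as three composable steps.
# Each helper just builds a command / URL; a real wrapper would run
# the commands with subprocess and capture the CID that `ipfs add` prints.

def download_cmd(url: str, out: str = "video.mp4") -> list:
    # youtube-dl fetches the video to a local file...
    return ["youtube-dl", "-o", out, url]


def add_cmd(path: str) -> list:
    # ...then `ipfs add -Q` pins it on the local node, printing only the CID.
    return ["ipfs", "add", "-Q", path]


def player_url(cid: str, gateway: str = "http://127.0.0.1:8080") -> str:
    # The player then loads the content from a (preferably local) gateway;
    # this URL format is illustrative only.
    return f"{gateway}/ipfs/{cid}"
```

Pointing `player_url` at a local gateway rather than a public one matches the speaker's later advice: with IPFS Companion or a local node, playback never leaves the machine.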
Loading over the internet requires a stable gateway, which means it does not work very well if you don't have a local gateway installed. So I would recommend having a local gateway running and providing it to the script or the player so that we can do that; like, IPFS Companion does it by default, like my local gateway is connected here, so make sure you have that running. Somehow the tests are missing; like any MVP, the docker images need a bit of improvement; the connection can get flaky at times, but this works most of the time. The web player here can have more meta information; like, right now I'm just loading the title, but all of the information is available here, so I can just fetch everything from here and show that, but I did not spend too much time on building the UI, just made it practical. Having a faster internet helps, but in reality what I should have been doing is downloading different chunks of those videos and uploading those chunks in a content archive format instead of uploading the file of the video directly. And in future I would also like to support IPNS, which would just make content resolution much easier; right now it shares a hash, which we don't want, so IPNS would make things so much better in this case. I also plan to add an extension, or maybe an extension to IPFS Companion, where if you are on YouTube you get a button, you just click it there and it just archives that video on IPFS for you and gives you a link to the custom video player that you can use. During this project, what I learned was a new web framework, which is just called html. I've not worked with this before, so I spent time learning this. It's a lightweight glue around web components, which is an ECMAScript standard. I'm not sure how many of you are familiar with this, but I think this is really nice and it just produces really small bundles which can be used on the web, so static hosting becomes so much easier. IPFS routing behaves quite differently when
using Companion. That means if I'm loading content over IPFS.io, it automatically gets routed to my local node, but if I'm using it over Cloudflare's IPFS gateway, it does not do that, so there's some inconsistency there which I plan to address sometime later. Adding content to IPFS is quite fast, but retrieving is like really, really slow, so content discovery needs some improvement. Building the mapping between URLs and manifests is really hard; IPNS might be a solution, but I didn't get time to explore that, so I would like to learn more about that. And the IPFS local APIs are somewhat aging with IPFS.js; they are building it again, so hopefully we can have a better API going forward. And don't forget to vote for me, and that's my time. Any questions are really appreciated. Thanks a lot, that was great, and just under the time limit too, which is awesome. I realize we're running a little bit behind here and some people have to jump, but it's recorded so we can come back to it later. We've got about maybe four presentations or five left. I'm going to keep sharing my screen now, and I believe Lerna is up next. Yes, you can keep sharing the screen. Yep, sorry that screen share didn't work, let me go back to the other method, share my desktop and then slide over. Yeah, maybe you can refresh the slides just in case. Yeah, and you can move to the next one. Uh, here we go, this one here. Yeah, yeah, inspired by Chris's Filecoin background. Okay, so I'm going to talk about ChainSafe Files. It's one of the products we have at ChainSafe. We're going to talk about a little bit of the history of the product, then we're going to look at the front end, the back end, and then at the end, if we have some time, there's a little game that I recorded a run-through of that I made for this presentation. So we can move on to the next slide please. So, simply put, ChainSafe Files is an encrypted file storage system. It was launched about two years ago and I put some of the
features on there, so end-to-end encryption. But the idea basically came from working with Protocol Labs and having our Forest implementation; we just started to be interested in the storage space, so we sort of jumped in. It's actually built using something called ChainSafe Storage, which is the back end that we have in Go, and that will support the file system, and it also seamlessly connects with IPFS and Filecoin. So the idea is, if you're an end user and you're not super tech savvy, you use ChainSafe Files and everything is automatically encrypted; if you're a developer, it's more geared for you to go towards Storage. So those are the two separate things. The back end itself, the infrastructure, is exactly the same for Files and Storage; the difference is really just the front end. So we can move to the next slide and I can show you guys a little bit more. So if you were to google ChainSafe Files and you actually sign up, eventually when you log in you'll see the first screen on the left. So you have a few options there: you can log in with your wallet or your email, which is what I usually use, and then as soon as you do, you jump to the screen on the right side. So it kind of looks like an inbox for your email, very straightforward: you click upload, you put something on there, you can create folders, put stuff in your folders, and this will automatically be pinned onto IPFS and also it will be pushed to Filecoin as well. So now that we saw the front end, I'm going to show you guys how it works under the hood, so we can jump to the next slide. I'm not going to go through everything in too much detail. My team lead helped me make this, but this is basically like a service level diagram, just so you guys know. All the cylinders you see there, those circle round things, those are just databases; we use different ones depending on how flexible or loose we need the data structure to be. And then those things poking out on both sides, those lines, those are just exposed APIs.
We'll start with what says "user" at the top. That first box is the user service: it manages logins, user data, and access keys — everything is stored and managed in the user service. A little lower we have files: files is where we manage the buckets and the actual file system, and it points down to billing. Billing is pretty straightforward — it's what it sounds like: it takes care of the subscription plan, so it tells us how much storage a user has and to what extent they can use our file service. Moving on, in the middle we have something called the message bus. All you need to know about that is it's used when something requires changes on multiple levels — the IPFS level or the Filecoin level. For us there are two scenarios. The first is search, at the bottom right: the search service is like when you're searching for something on your computer — you have multiple folders, you forgot where you put something, so you go through different buckets and search for content. The second scenario where we use the message bus is the pinning service: this is where we provide the pinning APIs for users, and it's also the section responsible for replicating data onto IPFS and then pushing that data onto Filecoin. Okay, we can move on from this slide — and you're going to click on what says "link" in the middle.

This is just a quick YouTube video that one of the devs helped me make — a little shout-out to Forest. This is our little guy Tai: he's in the forest, and he finds the box, which is supposed to represent IPFS. I wanted it to have a false bottom that would be IPLD, because IPFS is built on top of it, but we ran out of time. It's just a little game that tells you about the system and about pinning, because people who are not as tech-savvy would always ask me, "why do I need to pin things on IPFS?" or "what is pinning, if I'm already storing it?" As you all know, things get cached, and then there's garbage collection — so if you don't have anything pinned, and you haven't secured anything with a miner or multiple miners on Filecoin, then your content will likely not be there. That's pretty much it for my presentation. Thank you!

Thanks a lot, Lerna, that was very cool. Next is Caleb — oh, there's a hand raised... it was a clap, okay, cool. Caleb, thanks for joining us. Are you happy to have me share the screen? Yes? Okay, cool — go ahead whenever you're ready.

All right. I'm just in mourning, grieving, because Argentina is in the finals of the World Cup — but, you know, Brazilian life. Since I started working in the IPFS world, on web3, I found that in this infrastructure we have different kinds of implementations of IPFS, and I tried to do something different, so I call it Yaqui — I'm sorry, I don't have this kind of creativity for names — yet another Kubo implementation. Next slide, please.

So what do we have so far in the mainstream? We have Kubo, which is the one we're using at Fission. We have IRL, which has been developed by some folks we know, and we're probably going to try it in the future. We have Elastic IPFS, a pretty cool idea — an implementation that's cloud-native. There's IPFS Cluster, also something you can deploy on Kubernetes, and it works very well — I mean, we're not against Kubernetes, but we like to see different things. And more recently we saw a Fargate deployment with IPFS at AWS; they wrote a very cool article. All of these are hyperlinked, so if you click they'll send you to the links — especially that last one, it's a pretty cool tutorial by AWS. Next slide, please.

So what is wrong with K8s — Kubernetes? Sometimes it's too much. At the beginning you have Kubernetes and that's fine, but then you need sidecars, and you'll
need to deal with iptables, with pod affinities, and a lot of other stuff, and it starts to become a heavier infrastructure environment that needs a lot of engineers to maintain. So sometimes it's too much. It's a pretty cool idea and a pretty cool tool — I've used it for years — but sometimes you need something lighter, and not everything should be in containers; some stuff could be running on VMs or micro-environments, something like that. And it's always good to have alternatives for every kind of technology. Kubernetes, I like to say, is kind of the JavaScript of DevOps: everything nowadays is built with JavaScript, which is fine because it works, but sometimes it's interesting to have a different kind of perspective. Next one, please.

So what did I think about? A HashiCorp moment: Nomad. I just wrote that it's "Kubernetes, but not." What's the biggest difference between them? Well, Nomad has this kind of mixed application deployment: it works with single, self-contained agents; you can deploy it in a multi-region federation; and you can use it on VMs and bare metal, not only containers. So it does some of the stuff Kubernetes does, but it also works more as a scheduler and a task manager. It's a pretty cool idea — trying to do things in a different way. Next one, please.

So far you need to work with a few tools — not many, but some — for example Docker and Podman on your local machine just to separate nodes, and maybe Vagrant if you don't want to run things on your own machine; you need to set up an environment before you can start to work on it. We can take advantage of what Nomad gives us — the same stuff that Kubernetes does. It has a consensus algorithm so agents can share resources, and this is pretty good for IPFS: if you are storing things in the infrastructure, you need all of the nodes to connect with each other and share resources, especially in a cluster. So this is a pretty cool advantage. Next one, please.

And what other alternative do we have? There's a tool called wasmCloud — a pretty interesting one, I recommend you check it out. It's very easy to deploy, very easy to install, and it's powered by Rust and Elixir. The folks working on it are putting in some good effort — I don't remember which version they're on right now — but it's a pretty lightweight environment you can run on your own machine, and they just posted a new article on The New Stack about how it can replace Kubernetes. So that's a pretty interesting idea to check out as well. Next one, please.

So what is next? I had a demo prepared for this, but unfortunately live demos always fail, so I'm sorry. I need to develop a solid architecture, like Elastic IPFS — one of the problems I see with that one is that it's very expensive, and I want something cheaper to run. I want to enable it to provide packages, like Nix and Guix — that's a very interesting idea with Nix, and something we are using. Agnostic deployment: it doesn't matter if it's a container, a VM, or bare metal, you can use it anywhere — just download the custom config and that's it. And open observability — I like this topic a lot: Prometheus, Cilium, OTel, something like that — because I think it's missing a little bit in Kubo. You have a Prometheus endpoint on a port, but you don't have a lot of tracing, so it needs a little bit more. That's something I want to try.

The last thing I was working on is issue 1366. It's just to give another alternative when you want to test stuff on Kubo: a very simple, easy fix where you create an in-memory datastore file and add a few fx lines to let people start the daemon with it from the command line when they want to try things out in memory. The only thing I still need to do is the sharness test so it can be merged, but so far it's working well. And that's it — I'm sorry for my
lack of creativity with these slides.

No problem at all — the information was great, and thanks for sharing that with us. Very cool. Akshay is next — is Akshay here? It must be getting late in India; thanks for staying with us. ... No issues, I usually stay up late, so it's fine. Should I share my screen? Okay, cool. Next slide, please.

Hi everyone, my name is Akshay. Powerloom is basically a decentralized data aggregation service for both on-chain and off-chain data, and we rely heavily on IPFS, IPLD, and Filecoin to store this aggregated data. The way we do this is in the form of DAG chains, and these DAG chains usually get very long, because we link to all the previous data to create a history of proofs — it becomes very difficult even for IPFS nodes to maintain all of that. To mitigate this, we usually create segments of these DAG chains and store all of the segment metadata, plus a bunch of other project metadata, on our Redis servers. But that's not a good approach, because the whole idea is to be decentralized. So the goal of my project was to explore new ways to improve the architecture and make Powerloom more decentralized. Next slide, please.

Here's the new architecture I've come up with, and a couple of interesting ideas in it. The first is called the project manager. The idea is that when the full network goes live, new peers will come and join and start snapshotting, and we'll need some way for people to specify new projects. Each project will have a config — think, for example, of the contract address and the chain ID, plus a config describing how to process the data from that smart contract — and all of that will uniquely represent a project. So we can store that config on IPFS, generate a unique CID, and use that CID to identify the project, so that if there is a duplicate project
earlier. It looks like her project was on OKR tracking for network goods. I'm just going to move through this quickly in the interest of time, and people can return to it if they'd like — you all have access to the deck. And Andor — I think Andor, similarly, is not on the call; if you are, please jump in. Again, feel free to check these out. And Torfin — I believe Torfin is still on the call. You there, Torfin?

I'm here. All right — yeah, I made it home. I pivoted, actually: I got pretty excited about a topic that had some practical applications, and I was already investing a lot of time and energy in it. I'm still going to do the stuff I did before, which is onboarding and some Launchpad documentation for running indexer nodes — so I'll still help out with that; if Lindsay's concerned, we're going to take care of her — but this is a much more exciting topic. If you want to click on the next slide: right now we're going through some iterations of evaluating privacy options — really, adding configurability to the storage provider's ability to make deals — that would potentially affect how deals are announced and how much of that is made public, but also the ability to have some privacy by default in the way a lot of the storage and CIDs are saved. Shan had brought to our class a white paper where people were analyzing the traffic patterns of the IPFS and Filecoin networks to make determinations about who's accessing what data, potentially where they're storing it, and, by virtue of that, who's doing deals with which storage providers — which is potentially something they don't want to expose. All of these topics have been coming up in a bi-weekly content routing working group that I created. The meetings of this working group are public, so if anybody would
was really entertained by. It values the benefits and positive sides of decentralization, and not only that: when it looks at our network, it calls out the weaknesses where we're not decentralized — which is a big topic among engineers who work on our products ("this is too centralized; what's a more decentralized way we could do this?") — and it points out ways we could potentially do that. I was really surprised by that. With very little coaxing, it not only recognizes the advantages and challenges of our ecosystem, it also recognizes where to find obscure packages and API calls within our codebase. I'll jump to the cons in a moment to explain why I thought this was really novel, but the advantage is this: if you don't work every day with, for instance, libp2p, you're going to have to read a lot of documentation as an engineer to figure out how to leverage some of the APIs, or how to make some of the package calls, to accomplish what you're hoping to do. I will say that none of the code it generated for me worked immediately out of the box — my Go implementation is kind of screwed up; I'm not an engineer who writes Go, so I'm limping along in comparison to what most engineers would be doing — but the code I was able to generate with this tool immediately recognized all the important points. It recognized where I would want to go to conduct queries, and it produced a logical query that, I could tell, wouldn't take very much finagling to turn into actionable code. I created an entire series of, basically, really creepy deal-hunting tools: I could recreate, for a particular user, which storage providers they were using; I could recreate a history of all the data accessed by a particular user; and I could search for users by storage provider. There are a lot of different ways I could structure this data. Areas I'd say we should be focused on: you could write really rapid function tests, and possibly queries if you needed them. There's also great potential for adding this to your workflow as an engineer — writing unit tests, or possibly even regression tests, pretty quickly, by plugging in your own code and asking it, "what's a quick way I could put unit tests together for this?" So it's a big time saver. It's not going to be perfect, but it gets you so close that you're just playing around with it a little rather than doing all the work.

Then, the things I wanted people to be concerned about. None of the code it generates is in a shippable state — at least, none that I was generating — and it doesn't quite know how to handle all the dependencies, so it does a little bit of bad Go stuff: import cycles and things that junior Go developers probably wrestle with; it was having some of those same problems. Those are easy things to fix. But it also has the potential to deliver to you, very confidently, something that is completely devoid of context. So to use this well, you have to know truly what it is you're trying to accomplish, and whether or not what it's giving you is going to fail miserably.

If you go to the next slide — how are we doing on time? We're getting close to the limit. Okay, click on that Loom link and maybe just start it at about minute six. Should I put this on mute, or do you want the audio from it? No, I'll talk over it. So basically, the conclusion I've come to — and I put this Loom up publicly, so anyone who wants to click that link can go on and
kind of go through my process. Basically, I started with very general queries — what is IPFS, why is it beneficial to use, what are the privacy risks, and if I wanted to recreate this data, how would I go about doing it — and then I got more and more articulate with the explicit requests I was making. I was throwing GitHub repos at it and saying, "if I go to this repo and I want to use this module to accomplish this thing, can you write me a program in Go that does these specific tasks?" And it spat out quite a bit of code. The conclusion I draw at the end of the video is that this code isn't ready to work right out of the box, but I built a complete privacy-research toolbox in a matter of probably two hours that — even as a very inexperienced Go programmer, and not a software engineer by trade — I could get the majority of working in a pretty short amount of time. The fact that I can do that as a TPM speaks a lot to what more experienced engineers would be able to do with it, and I'd highly encourage folks to get in there and start playing with it. For product decision-making — which is what I was originally trying to get at — this is very valuable, because you can start with a stub of a product, just in the ideation phase, with very little investment, and that gets people iterating very quickly. And that'll do it.

Awesome — thanks a lot, Torfin. That was great. I believe this is the last presentation, and these three were unable to attend today; they've linked a Loom video presentation here. Let's see — oh, a minute and 24 seconds... wait, they said play at normal speed, so a minute 40. Let's do that. That'll bring our Show Me What You've Got to a close — unless I'm misspeaking and there's one more after this — but we're getting close to the end here. Let's play it.

How's it going? My name is Nicholas, and I'm representing a team with T and Brendan; we're working on building the PLN CRM. The problem, to explain, is that the Protocol Labs Network hosts a number of regional events run by various teams in the network, and while we're hosting an increasing number of events, the databases of attendees are not connected. We seek to build a CRM that connects all of our contacts and to publicize it within the Protocol Labs Network. We've already made a lot of great progress: combining attendee data from over 70 Filecoin community events and collecting over 9,000 leads; working with legal to draft an agreement that allows us to use attendees' data; and creating our CRM. We've also made HubSpot training documentation so that people within the organization can learn how to use the software. We've created a lot of dashboards so we can understand, on a cumulative and a monthly basis, what types of people are reaching out, their demographics, and what might be most interesting for them to learn. We've created landing pages so that specific topics — like case studies on different storage providers — are readily accessible to our contacts, and we track open rates and click rates on our email flows so we can better understand what types of communications are most valuable to the people we reach out to. Long term, the goal for this project is to make sure we're widening our growth funnel, increasing the potential for growing brand awareness, and finally creating the opportunity for regional-events marketing that's accessible to all the teams within the Protocol Labs Network. We're excited to get your help — talk soon!

All right, thanks to Nicholas, D, and Brendan for that. I think this is their slide, and that brings us to the end of Show Me What You've Got. Thanks so much for hanging out — I know we've gone way over; this was a large cohort, with lots of cool projects to be shared. So, thanks
for hanging out and supporting the last few — and there might be people watching this later, so thanks for tuning in. Just a couple of last pieces: a reminder to vote in the following categories — biggest contribution to existing projects, most impactful technical contribution, most exciting, best presentation, best collaborative effort, most likely to be used, and most valuable for PL.

Anna Claire needs to go — sorry, what? Anna Claire needs to go? Oh, okay. Anna Claire, where was your slide? Actually, I was wondering about you, because I sent you a message — I messaged you back — yeah, I know, and I was like, oh yeah, okay, cool, you're going to go. I remember seeing a slide that had your name on it. I don't have a slide — I don't know how to use PowerPoint — but I thought I could just do a quick demo. Yeah, for sure, no problem. I'll keep it quick, I know. Okay, I'll stop sharing and you can take over. Sorry about that. No, it's okay, I'll be quick — thanks.

So I've been working with Estuary. I just wanted to learn how to be an open-source contributor, so I didn't have one single project — I've just picked up tickets over the weeks. I added a CLI command, then made a branch with lots of CLI commands, and a bash script where you can run a local shuttle. Estuary is a storage client where you can pin data to IPFS and it'll make Filecoin deals, and I made a script where you can run a local version of a node and upload your files really easily, and it'll make deals — and it prioritizes providers based on their proximity to Texas, which I thought was funny. I helped fix a memory leak, and I helped implement the cache for commonly hit endpoints. And then I made this last night — it's in React, so it's not really coding. Can you all see? Okay. Yeah, we can.

All right, here: you can type in a deal ID for any content where the deal was initiated by Estuary, and it will check the status. This is a deal I made last night — verified, reported on the chain. I'm going to add a feature where it tells you the time it was reported and sealed; if the deal isn't going to be found, it's still a little glitchy right now, but I'm working on it. I guess what I got out of this was — I mean, originally I just wanted a job, but I've discovered how cool it is to contribute to open source. I'm really glad I've had an introduction to this; it's always been really intimidating for me. It's given me a lot of confidence in my coding abilities, and I've been really excited about it. So that's my project, I guess.

Awesome — thanks, Anna Claire, glad we got to see that. And thanks for interrupting me at the end there, Katie, for bringing that to my attention — that was great. I'll just hop back into this presentation quickly so we can wrap up and say our final goodbyes — oh wait, here we go again, gotta share the whole thing... there we go, and I'll scroll down to the end.

So, we'll post this in Slack — don't worry about it right now — but if you'd like, you can scan the QR code; this link will be in our Filecoin cohort Slack. We'll be asking you to vote on this today, and we'll share the award winners tomorrow in our final weekly sync. Once you complete the end-of-cohort survey — and if you've completed all the quizzes throughout the curriculum as you worked through it over the past few weeks — you'll be receiving a Launchpad learning credential NFT from Surty. So do it, I guess — complete that and we'll have something coming your way. I believe there might also be something coming
your way if you completed the dev tools activity in Austin — everybody who was there and did that might also be receiving something in the new year, so keep an eye out for that. And just a reminder to vote.

And now it's all up to you. We are super excited to see you all grow in the network and grow in your teams. We'll definitely be in touch — we'd love for you to come back as a mentor (once again, shout-out to the mentors from this cohort), to have you come back and host a Q&A, or even to join us at a colo week if it's convenient to where you're based. Thanks so much — this has been a great few weeks getting to know you and seeing these projects developed to fruition. I look forward to seeing you tomorrow at the weekly sync, where we'll wrap things up, and then that'll bring our cohort to a close. Thanks again for hanging out, sorry that this ran late, have a great day or evening wherever you are, and I'll see you all soon. Thanks again — bye!