Welcome then again to our showcase demo session for cohort V9, the final responsibility that you have related to Launchpad V9, and then you are free into the space. We launch you into the network, as we say at the end of this. We've got a little introduction, then we'll have your project demos, a couple of shout outs at the end, and then you graduate. You'll see already in the right hand corner of your screen there's a QR code that will link you to a voting form. I'll talk a little bit more about this as we go, and I'll remind you of it as we go as well, but that QR code links to a form where you get to vote for a couple of awards. I think the awards come in the form of swag, and they range from most technical contribution, most impactful contribution, best collaborative effort, to most creative project. We'll have time at the end to vote, but this QR code will appear throughout the deck, and so if you see a presentation and you're like, wow, that's super creative, I love it, I'm going to go vote, you can scan that QR code. It'll also be linked on the last slide; I think there's the Google Form link. All right. As a recap, you already know this, but on the left hand side is just a bit about what we have accomplished over the past four weeks. Hopefully we've hit some of the goals that we as a Launchpad team have, where we're trying to scale the Protocol Labs network to new opportunities: that new contributors have joined, that you've been onboarded, and that you feel like we've built a bit of a community here and connected you to the larger Protocol Labs network community. We're now on that list on the right: cohort V9, February 2023. We have 29 participants from across 10 different orgs in the network. For those of you who were with us in Denver, this was our group photo. I think we had nice weather. We had nice weather most of the time there. I know it was a little chilly, but it wasn't that bad.
A couple of snaps from the week that our photographer had shared with us. This might have been just before or just after the group photo, hanging out on the deck at the hotel. We've got a little video here from the week which I'm going to play. I think his name was Chris. I hope I'm not getting that wrong. Craig, thanks Alexa. Craig was there for a few days. That's a fun product. These are great to have. Great memories of a really awesome week together in Denver. All right. Let's begin the showcase. All right. These are the directions that I think you all followed. Don't worry about that three to five slides; I know there were some questions about that. You can fly through as many slides as you can in the allotted five minutes. I'll give you a warning to wrap up around five minutes. Let's get into this. I'm really excited to see how these projects have evolved from when we last shared about them at our show-what-you've-got-so-far, or our showcase so far, I should say, which was the last thing we did in Denver during colo week. Let's start with David's formal proof of source. David, you can jump in here and take it away. Yeah. So this is kind of an extension. We were working on a grant proposal with the FVM to create this reproducible build container, so a way to homogenize builds for source code that's going to be deployed on the FVM. And I was thinking, that's kind of cool, but you have to do that every time, and if you want to verify any source code, it's like a one-off operation that you have to do locally. What if there was a blockchain that did this source code verification and had all the attributes of a blockchain? So it was distributed, it was transparent, it was verifiable.
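As a rough illustration of the kind of check David describes, here is a minimal sketch in Python. It assumes the chain stores a fingerprint of each deployed contract's bytecode and that builds are reproducible; the registry, function names, and the toy "compiler" are all hypothetical stand-ins, not David's actual implementation.

```python
import hashlib

def digest(data: bytes) -> str:
    """Content fingerprint; real systems would use a CID/multihash, this is a stand-in."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical on-chain registry: contract address -> hash of deployed bytecode
chain_registry = {}

def register_contract(address: str, bytecode: bytes) -> None:
    """Record the fingerprint of the bytecode actually deployed at an address."""
    chain_registry[address] = digest(bytecode)

def verify_build(address: str, source: bytes, compile_fn) -> bool:
    """Recompile the source reproducibly and compare against the on-chain record."""
    return chain_registry.get(address) == digest(compile_fn(source))

# Toy "reproducible compiler": any deterministic transform of the source bytes
toy_compile = lambda src: src.upper()

register_contract("f1contract", toy_compile(b"contract code"))
assert verify_build("f1contract", b"contract code", toy_compile)      # source matches chain
assert not verify_build("f1contract", b"tampered code", toy_compile)  # mismatch is flagged
```

The point is that verification becomes a pure recompute-and-compare, so it can run as part of a chain's transaction processing rather than as a one-off local operation.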
So I kind of decided to noodle around on a POC where source code is stored in IPFS. The thinking is: store the source code in IPFS, have the compiled source code stored on the FVM chain, and then, with a CID and an address, be able to say that this source code matches this compiled code. And that's all really technical. If you're not technical, let me give you a little idea. When you're looking at a contract and you're auditing a contract to say, I'm going to use this, I want to make sure it's valid and it does what I want it to do, you view the source code to do that. But there's no real way to say that the source code I'm looking at is the source code that's actually on the chain. And that's extremely problematic, because you could be auditing something that isn't valid. So this is a way of making auditing contracts that already exist on the FVM valid. And this is kind of like a separate blockchain that does this as part of its transaction process. Yeah. So that's it in a nutshell. Maybe we advance the slide once more. So I have a build of this. It's like 80% done, so I can't demo it right now. But it's actually a working blockchain where you submit transactions of the CID and address, it does this formal verification, and then you can query the chain at any time to say, does this source code match up? So you can envision, on the source code, a little badge that says "verified build." That's on the source code itself; it could be a GitHub badge or something like that. So there are a lot of kind of cool ramifications. It's definitely extra architecture. But I also think it would be a cool appendage as the Filecoin virtual machine matures a little bit and looks for additional architectures that can provide verification. Awesome. Thanks, David. We've got Sharpshark and Valeria next. Yeah. Hey guys. That's me. So first of all, let me tell you about Sharpshark.
We help businesses to protect and monetize their textual, visual and other copyrightable content. For that, we create a proof of authorship, track its usage over the internet, let the copyright owner know about possible violations, and help deal with them. For example, if you are Telegraph, that was one of our biggest clients, and you have articles and you want to be at the top in Google search, and you also want to have as many views as possible because you sell ads, we come in handy. You create a proof of authorship that is compatible with European and U.S. legislation, thanks in part to IPFS, with Sharpshark. Then we track whether somebody used your article without backlinks, even if it's rewritten, and let you know if something's wrong. And if you don't like it, we help you to generate a claim, and that claim goes to either the site provider or Google custom search, and they must comply, because we are compatible with the law. When we entered the Protocol Labs network, for us, so basically it's fundamental technology that allows the implementation of copyright law using DLT, so digitally native and fast, it was like: we wanted to write a book, and the English alphabet said, okay, guys, I'm going to support you. So that's what we do. Next slide. Recently, so when we started Launchpad, we had another project in mind, to go to a more trustless and decentralized space using different types of services and stuff, but we stumbled upon other legislative moments and we are still discovering whether we can do that or not. But of course we're working on the product, and these were the three latest things that we implemented. So basically we see ourselves as a means to navigate the tons of content, because, especially with AI-generated content, there will be only more and more of it on the web, and the silly way is to restrict it.
Like, don't use AI, don't use calculators, as with that protest of math teachers when the calculator was invented in the 19th century. Now we just need a means to tell whether a piece of content is quality or not, and this is our vision. So recently, we taught IPFS to understand unicode and to parse and remove markdown. That's super important, because for copyrighted pieces, in order to be copyrighted, the form matters. Also, we taught our application to tell whether content is human- or AI-generated, and we are not against AI-generated content. We just want to show how unique it is. For example, my ChatGPT writes in my tone of voice already, because I talk to it, and that's cool. And also, we recently implemented Creative Commons licensing in our interface, and we think it will come in handy in the future. Next slide. So our future plans, basically: we see that every content website should have a widget to check the originality of content and to protect newly created content, super seamless, so we are going to create a widget for that. And also, we are thinking about creating a marketplace where content that is eligible for distribution can be easily accessed. For example, Midjourney-generated content and the like: it's generated under a Creative Commons license, and we need a convenient place to store it. So we have pretty nice cloud storage that we can use for that. Right now we're calculating whether it's reasonable to do it or not. And yeah, so moving on, there should be another slide and I forgot about it. Our vision is to become an IP oracle, but for content, and yeah, we're moving there. That's great. I think this is going to be a really, it feels like it's going to be a really important part of this sort of verifying copyright and authorship and giving creators the rewards that they deserve. I think it's going to be an interesting space.
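Sharpshark's point that "the form matters" is concrete at the byte level: the same visible text can have different unicode encodings, and a naive hash would treat them as different works. Here is a minimal sketch of form-aware fingerprinting in Python; the function name and the NFC normalization choice are illustrative assumptions, not Sharpshark's actual implementation.

```python
import hashlib
import unicodedata

def authorship_fingerprint(text: str) -> str:
    """Normalize unicode so visually identical text yields one fingerprint,
    then hash it. A real pipeline would store the content on IPFS and keep
    the resulting CID as a timestamped proof of authorship."""
    normalized = unicodedata.normalize("NFC", text)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# 'é' as one composed code point vs 'e' + combining accent look the same
# on screen but differ in raw bytes:
composed = "caf\u00e9"
decomposed = "cafe\u0301"
assert composed != decomposed                       # different code points
assert authorship_fingerprint(composed) == authorship_fingerprint(decomposed)
```

Whether normalization is appropriate depends on the use case: for plagiarism tracking you want the two forms to match, while for strict byte-exact registration you would hash the raw bytes instead.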
There's a member in the next cohort, in V10, who has a law background and works with the Filecoin Foundation. And just yesterday, in Molly's Intro to Protocol Labs, or the Deep Dive into the Protocol Labs Network, the discussion veered towards AI-created artwork, and artwork being created in the style of historically famous or currently famous artists. So the questions arise: does the artist who inspired the AI get rewarded for that creation? It's this whole space of copyright and, like I said, authorship that I think is going to be an interesting one to monitor as it evolves in the near future. So it seems like an exciting space to be in, and an important one as well. Yeah, totally seconded. Yeah, so happy to be here. Would be happy to connect with those guys, because I'm also working with the Filecoin Foundation. We also launched the product fully. Yeah, I can give you an introduction. Cool. Thanks. All right, let's move on to the next one. We've got Matthew next with hash fingerprint visualization. Cool. So for mine, I would say this was like 75% educational for me, 24% project and 1% emoji. So I'm going to go through these super quick. First, a quick reminder: if you rewind a few weeks to when we were doing the IPFS content, we all learned about hashing. Hashing is this mathematical process: you take an arbitrary piece of content or a message, you put it through an algorithm, and you get a value out of a fixed length. If you recall, that's where CIDs come from in IPFS; a CID is just a hash of content. One of the really interesting things about hashing is this thing called the avalanche effect, whereby if you make a very small change to the input, you get a very large change in the output. And so, as a result, you can treat that output as a unique fingerprint of the content.
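The avalanche effect is easy to see for yourself. Here is a quick Python check with SHA-256 (the inputs are arbitrary examples): flipping a single character of the input changes most of the 64 hex digits of the output.

```python
import hashlib

a = hashlib.sha256(b"hello world").hexdigest()
b = hashlib.sha256(b"hello worle").hexdigest()  # last character changed

print(a)
print(b)

# Count how many of the 64 hex digits differ between the two fingerprints;
# for unrelated hashes you'd expect roughly 60 of the 64 to differ.
diff = sum(1 for x, y in zip(a, b) if x != y)
print(f"{diff}/64 hex digits differ")
```

This is exactly why a hash works as a fingerprint: any edit, however small, produces a visibly unrelated output.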
So that's really useful going in the forward direction, from content to a hash, but if you're trying to do the opposite comparison, humans are really bad at recognizing differences. So really quick, unmute and shout out: are these hashes the same or different? Don't cheat, just look at them. I'm going to say this quick. Same. Oh, wait, I see a difference. Okay. Now, same question: same or different? The second one was easier, right? Yeah. So humans are really good at looking at visual pictures, but not good at looking at strings of numbers and letters. So there have been various visual hashes designed for that purpose. The example I gave the previous time is: when I'm cutting and pasting a Filecoin address to transfer some Filecoin, I'm terrified every time that I'm going to get something wrong. How do you know that you're sending it to the right place? And so there've been a bunch of visual hashes designed. One of the things that I think is problematic with these is that you have to have an image library that can generate the image. And a lot of these, by the way, I looked at the open source projects for them, are basically abandoned; they require a version of PHP from 2016 and stuff like that. So I looked back into the past. Some of you have used SSH, and you might have seen this little grid before, which is the ASCII-based visualization for an OpenSSH key. I was curious to see how that worked and wanted to apply it to Filecoin. I'm going to do a super, super fast version of this because it's more technical. The very short version is: you have an arbitrary grid, and you go through all the bytes of the information that you want to fingerprint; in the case of OpenSSH, that's the public key. You look at it two bits at a time, and each bit is a one or a zero, so with two bits you have four possible outcomes.
For every two bits, you go northwest, northeast, southwest, or southeast: you go diagonally in one direction. If you hit the edge, you slide along it. Every time you stop, you leave a marker. And the table at the bottom shows you the ASCII version; in theory, the characters were supposed to get thicker as a cell is visited more. I don't know if it really looks that way, but, you know, that's the intended effect. This is what that looks like going through a fingerprint. I'm going to skip past this in the interest of time, but this is basically how you would do it mathematically. And this is a visualization, that someone other than me made, of what it looks like to generate these. So, do you want to do that for Filecoin? This is how a Filecoin address is calculated. And the portion that we're concerned about is this portion: it is a hash of the public key, and that generates 20 bytes. So if we took those same 20 bytes and ran them through that same process, we could generate that same sort of ASCII-based visual outcome. I wanted to play around with it a bit. So what I did, now getting to the project, was write this library that applies this: you can take an arbitrary dimension, it doesn't have to be 9 by 17, and you can take arbitrary tile sets. So throw in some emoji. I went for some spacey-themed emoji to fit all of the PL, you know, galaxy things. And that's what it looks like. It works-ish. You can take a Filecoin address and get output like that. Here are three different Filecoin f1 addresses; you can see they look fairly different in this thing. It looks nice in dark mode; I think it looks a little better. And here's a mock-up of what that might look like applied to, say, a Ledger device, so you know that you're transferring to the thing you intend to. If you've ever done this procedure and you're trying to read 20 alphanumeric characters on your Ledger, it's not fun. So that's it.
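For reference, the walk Matthew describes is OpenSSH's "drunken bishop" randomart algorithm. Here is a hedged Python sketch of the idea with a configurable grid and tile set; the tile characters, grid size, and example input are illustrative, and details such as bit order may differ from OpenSSH's exact implementation and from Matthew's library.

```python
import hashlib

def drunken_bishop(data: bytes, width: int = 17, height: int = 9):
    """Walk the grid: each 2-bit pair moves diagonally (bit 0 = east/west,
    bit 1 = south/north); hitting an edge slides along it; each stop leaves
    a marker, so cells count how often the bishop visited them."""
    grid = [[0] * width for _ in range(height)]
    x, y = width // 2, height // 2              # start in the center
    for byte in data:
        for shift in (0, 2, 4, 6):              # low-order bit pairs first
            pair = (byte >> shift) & 0b11
            dx = 1 if pair & 0b01 else -1
            dy = 1 if pair & 0b10 else -1
            x = min(max(x + dx, 0), width - 1)  # slide along the edge
            y = min(max(y + dy, 0), height - 1)
            grid[y][x] += 1
    return grid

def render(grid, tiles=" .o+=*BOX@%&#/^"):
    """Map visit counts to tiles; counts past the last tile saturate."""
    return "\n".join(
        "".join(tiles[min(c, len(tiles) - 1)] for c in row) for row in grid
    )

# Fingerprint a 20-byte hash like the one an f1 address is built from (example input)
fp = hashlib.blake2b(b"example public key", digest_size=20).digest()
print(render(drunken_bishop(fp)))
# An arbitrary emoji tile set works the same way:
print(render(drunken_bishop(fp), tiles=["・", "🌑", "🌒", "🌓", "🌔", "🌕", "🚀", "🛸", "⭐"]))
```

Because the walk is deterministic, the same 20 bytes always produce the same picture, which is what makes it usable as a visual fingerprint.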
If I was to go further with it, I would probably want someone with better visual sensibilities than me to tweak the tile set. I picked those emoji really quickly; if you were to pick some different ones and play with them, you could probably get something that looked better. And someone should probably port it to JavaScript so it can be done on a web page. Yeah, that's the short version. And a bonus in the last 10 seconds: as I was doing this, I needed to generate a bunch of, quote unquote, fake, or real, Filecoin addresses. So here's a Filecoin address that I generated. If you enhance that, you'll see the first characters are my user ID. And so, bonus software drop: if you want to make vanity Filecoin addresses for burner addresses, I have some software for that now too. You can grab it there. And if you want to test that it works, the best way is to take that address and send a lot of Filecoin to it, and I promise I'll put it to good use. That's it. Awesome. Thanks a lot, Matthew. That was cool. I liked that mockup you did with the phone and devices, and the project overall, very creative. Lots of positive responses in the chat as you were talking there. Very cool. Okay, next up we have Fillet, the largest collaborative effort of this cohort: Laura, Kyle, Janelle, Naomi, Brandon, and Chris. I will turn it over to that team. Hi, everybody. We are Fillet. This is the great logo that was made by our awesome Chris Garcia. I am Janelle, and our team members are, if you guys want to quickly say hi. Hello. Hello. And so we are Fillet. We want to empower your workforce with Fillet, our Web3 recruitment made easy. Next slide, please. And so at the beginning of the year, the recruitment team transitioned to not only recruit for internal PL roles, but also for our network companies. And putting all the roles together created the world's largest Web3 job board.
And so we really want to expand upon this and combine the best Web2 elements of hiring services like LinkedIn, Hired, and other talent marketplace products, but also keep privacy and data ownership top of mind. We're already trying to work through this with our current applicant tracking system, which is Greenhouse, by sharing candidate authorization forms that allow us to share a candidate's profile across multiple companies. And there are already some crypto-native work platforms that have some niche elements, but we really want to create a platform that can service both the client side, the company side, and the candidate side. We already currently support a bunch of network companies from a recruiting perspective, and everyone does it a little bit differently. And so having an all-in-one applicant tracking system like this could really help us, and other future Web3 companies, navigate recruiting efforts more efficiently. On the other side, this tracking system can also act as a sourcing method, where candidates can actively update their profiles, communicate with recruiters, and keep track of all their interviews in one place without having to search through inboxes with thousands of emails. So we're really just trying to create an all-in-one platform: you know how there's one for productivity, we're trying to create one for recruiting as well. And the way we are thinking of it is, we want to make a platform where Tinder, LinkedIn and AngelList had a baby and it was raised in Web3. All right, this is Laura here. So this is probably one of the roughest Figma boards I've ever created, probably my first and last time making one. But it's pretty straightforward.
We imagine that the homepage the candidates would see is pretty much a list of all of the roles from a bunch of different companies, and they would have the ability to swipe left or right, kind of similar to Tinder. And also, I think one of the big things is to have the most flexible filter options, because people are always looking for different things. Then the profile page is where candidates would include their website, LinkedIn or any of their portfolios; they would also indicate what tech stacks they prefer, and location and salary preferences. And all of these can be hidden or made public by the user. Then, over to the next screenshot, is the applications page, where all of the roles that they applied to, or applications that they started or saved, would all compile in that section. And then next is the calendar section, where they can filter on certain criteria and look at all of the interviews that they're going through. There is also an option where it can show them what stage they're currently at; so if they are in offer stages, it will clearly indicate that. And then, from the recruiting side, we would be able to send out the offer letter through that, and in that system the candidate would be able to open up the offer letter, view it, accept it or decline it, etc. And then down below, to the very left, is the messaging screenshot, where candidates would be able to interact with the recruiters. And also, if there's any inappropriate language exchanged, then they would be able to report it on both ends. And in my imagination, I presume that the messaging function can eventually be disabled so that they can, I guess, avoid reaching out to each other; and if they want to revert that, they would have to go through a whole process. And then there is the company page, where they can follow or unfollow companies.
And they can also expand that company page to see what the company is about, what roles they have open, maybe some information about the company culture, etc. And then the last two screenshots are mostly for the candidate: they have the profile page, and recruiters would also be able to see this profile for sourcing purposes, but only if the user keeps those fields public, of course. And then the profile settings is where they can opt in or out of allowing recruiters to reach out to them if they, for example, swipe left on one of the roles on the home page. It would also give users the ability to hide any of their info and display certain parts of their profile publicly, if that makes sense. All right. So from the client side of this, so a hiring manager or recruiter that's working on a team, things like that, we envision this to be sort of like a LinkedIn company page, with the ability to add rich media, maybe have a showcase on some front page, showcase some specific roles and whatnot. And then, for their actual profile and platform, the client will be allowed to post their roles and spell out what they're looking for, things like that. In addition, we'll look at things like a sourcing platform, so to speak; think about how profiles come up if you were to search for them on LinkedIn, and some of those things that Laura just talked about, as far as the ability to opt in and out of certain parameters that will show up for employers. And just as candidates can star company profiles, employers can star candidate profiles the same way, so then we can create a curated list of matching profiles, as well as go a few degrees off of that.
And then I think Naomi's going to talk a little bit about the workflow and scheduling and things like that. Yeah, thanks for that, Brandon. So, similar to the candidate-facing side, we really want to make it very easy for clients to navigate. So whether they're a recruiter or a hiring manager, they will be able to view candidate profiles, and everything will be on there: LinkedIn, resume, GitHub, recruiter notes, interview stage, where they are in the process. They'll be able to message the candidate directly; so as a recruiter, we could message the candidate directly, or a hiring manager could message a recruiter directly, or the candidate as well. We'll be able to view the different roles the candidate has applied for, view the pipeline of how many candidates are in the process and where they're at in the process, and also have a calendar view of upcoming interviews that the hiring team or the recruiter may have. And then just review applications, and very easily reject or move forward with a candidate, kind of like swiping right or left. And overall, you know, our goal is just to make a platform that's very easy and beneficial for everyone to use: whether you're a recruiter, a candidate or a hiring team, this is going to be the all-in-one platform that we're trying to showcase. So yeah, any questions? I think that's everything, unless anyone else wants to add anything. All right, so now we've got the prototype, but we need builders. So investors, possibly you. Awesome. Thanks, team. Shark Tank, here we go. Awesome. Yeah, I was thinking during that: is there any app-based thing like this, even in Web2? Everything I've seen has always been web based.
I mean, I know you can get LinkedIn and stuff on your phone, there are those apps, but I haven't seen anything as thought out as this in terms of the hiring process, which seems like there's an opportunity there. Cool. I'm excited to see where this goes. All right, following up next, we've got the other collaborative effort of this cohort, which is Antonio and Caitlin with their Web3 Noobs community. Possible investor? Yeah. All right. Hey, everyone. Hi, everyone. I'm Antonio. This is Caitlin with me. And we are Web3 Noobs. We created a community so everyone can learn and collaborate, for people who don't feel too tech savvy, and we just wanted to share some knowledge and collaborate with everyone. We feel everyone can bring something to the table, and sometimes an outside perspective is the best problem solver. So that's why we created this community, so everyone can improve each other. And if you are a technical contributor, don't stop listening. It's not just for noobs. We're hoping to bring in technical folks as well who love to share, collaborate, and teach, so it is a very well-rounded community. Yeah. And we decided to go for a community because we believe it's the best way to learn and improve, to get the results and the transformations that you want, and to achieve new heights with your work. And so, when forming the community, we wanted to make sure it was very beneficial for members; it wasn't just another community to join and not participate in. So the benefits to members will be really strong curated content, coming from third parties and also user-generated, building new connections with people all over the world, not just within the PLN but a broader scope, and then hopefully providing endless opportunities for conversations and collaborations within Web3. Yeah.
And we created some community guidelines, like most of the Web3 communities and even the PL Network: be supportive and share generously. We feel like that's the way to improve everything: be supportive of each other, share your knowledge, learn by listening to what everyone's saying, be constructive with what everyone's doing, and lift each other up. I feel like that's the best way to work. And just the most common one: don't spam, because, you know, that's not helping anyone; let's just keep building. So, there are quite a few options as far as platforms to build on. You have your Discord, which is a great option, but a lot of people are already there, and Slack, Telegram. We wanted to remove this community from the noise and create a more calm, safe, inclusive space for people to collaborate and contribute. And so we landed on Mighty Networks for a few reasons. They have a ton of resources on community building and community design in general, which I think will just make us a stronger community as a whole. They have app functionality and ease of use once you become a member, with your standard capabilities of one-on-one chats, group chats, discussions, course building, polling and surveying the community, live streaming right within the platform, hosting virtual events within the platform, and event management for IRL events. And eventually, we'll talk about this in a minute, if we move towards having a token or an NFT for membership, it does have that capability, which I think helps bridge the gap into the Web3 space. So we do have a quick demo that Antonio will take us through. Yeah, this is the page you get once you're signed up. You get the welcome list on your right; that's just some suggestions that we have for you to get familiar with the platform and with everyone that's already on it. We've got the feed, where there's all the news and all the chats and everything that's happening in the community.
And then we kind of divided it into three spaces: the general, the Web3 Noobs to Web3 Gurus, and the Web3 topics. The general is, just like the name, the general thing that every community and every platform has, like the start-here and the news about the community. Then we've got the Web3 Noobs to Web3 Gurus; that's the more educational part of the community, where everyone can attend workshops and get recommendations on podcasts, books, everything. We've got this example here, the set-up-your-first-Web3-wallet virtual event. And then finally we've got the Web3 topics, where we divide it by fashion, art, music; it can be a lot of topics, and we can keep adding them. And inside each one we can talk about specific themes that you're either working on or need help with, or, I don't know, want to improve something about. So you just share news or whatever you want to do. So, what's next for Web3 Noobs? We're hoping to develop some strong frameworks to make it a very interactive and engaging community. I think the only way this can be successful is if people actually want to jump on the platform, share, and collaborate, so making it easier for people to do that will be one of the first things that we do. And the content: we believe the content is one of the most important things, besides the people. We've got a database of some content ideas that we want to create, and we want to poll the community on them. We want to get the community engaged to create that content, so we want to reach out to the Web3 gurus to host workshops, or just start conversations, or teach something on the platform. And then growing our membership: sharing with our current communities, with the PLN, with our LinkedIn communities, creating a referral program within the platform, and some general marketing initiatives, one of which is creating a Twitter profile to share what's going on within the community.
And in the future, transitioning into a DAO: figuring out how to evolve the community into a DAO and create a token or NFT membership, possibly something that you can transfer to other members or something like that. And the most important thing that we want to achieve is to create a kind of incubator, a way to fund projects and to have non-technical people collaborate with technical people and join the founders of the technologies. So yeah, Janus, if you're a founding member and you want to help us, be a beta tester, and let's all become Web3 gurus; it's still early stages, so that would be great if you would like to join. Thank you. Thank you. Awesome. Yeah. Thanks, Caitlin. Thanks, Antonio. Again, another example of identifying an opportunity, a need; I'm unaware of anything like this that currently exists. So, nicely done, and let's see where this goes. You know, we have people coming through Launchpad all the time who I feel would like to be part of a community like this. And those of you who are coming in, or who did come into Launchpad, feeling on the less technical side of things, this would provide a bit of a community for you to join and feel connected to Web3, and to empathize and grow with that community. So thanks a lot. Up next, we've got Emma. I believe Emma was unable to join; I was just scrolling through here. I'll leave this on the screen for a second so you can read through what her goal is with Polywrap. Is Tamila Lua here? So for my project, I worked on MLOps on IPFS, kind of as an experiment to see how we can improve the machine learning pipeline. So this is a big problem. McKinsey estimates that by 2030, the global impact of machine learning, AI and automation will be more than $13 trillion.
So this has been on everybody's mind, from OpenAI to Google's Bard that was just released. Everybody's thinking about AI and machine learning. And it's only going to have a bigger impact over the next few years to come. But to me, it's not just the economic impact. It's also the impact it'll have on different systems, including business and government, especially when you think about some of these areas like public resources, healthcare, and infrastructure. When we're shooting for innovation in these areas, how can we still have innovation but not sacrifice transparency, so that people can feel like they understand what's happening and the stakeholders understand how these models have been developed? So machine learning has a pretty long life cycle. The average person probably doesn't think about this. But from the time you're starting to collect the data, then you're training your model, you're packaging your model so it can be deployed. And then once it's deployed, you need to monitor it to see how it is doing in production. So it's a pretty lengthy process, and it continues to go on. It's not, you know, a one-stop thing. But the part that I want to focus on for this experiment was the monitoring part. So this would look like: once the model is deployed into production, and as improvements are being made to the model, the everyday citizen would be able to go to an IPFS link and see the changes in the model in real or near-real time. So this is why I circled it here. So the part I want to focus on is public services. This is a really interesting case study to me, especially in California and San Francisco. Just in the last year alone, San Francisco spent more than $171 million just on public libraries. That's one city. So think about how much money is being spent across the U.S. alone, for all the major cities and smaller cities, and how much of these budgets are being allocated basically arbitrarily. 
And there are no real data-driven insights into how this is happening, right? So to me, I think it's too important not to dig into it further and create more transparency, and also have data-driven input into how these decisions are being made. So there's an open source data set that was made available that basically gathers all the patrons' data for checkouts and library usage and the different branches they went to. They anonymize the data and they make it available for anybody who's curious about this data in California. I used this to make a model and then do the next part of the experiment. And in the data set, there were probably more than 40,000 patrons before the data was cleaned. So it was a pretty good-sized training data set. So the experiment that I explored was deploying the model like regular, but then seeing, you know, as you're updating the model, are you able to share the updated model through IPFS? The primary IPFS tool I used was the IPFS Python toolkit, which is pretty new, only a few months old: the HTTP client and the API. And this is just a screenshot of the ranking of models. I'll dig into that a little bit in the video as I'm explaining it. So here's a video of the main parts of the project. The first part of the project was the same as, you know, any other machine learning project: exploring the data, seeing what trends were there, doing some exploratory data analysis. You can see here some of the different data features they had, like the age group of the patron, what was their home library, you know, how many times did they check out items in the library? How many times did they renew? And it seems kind of trivial at first, but one of the features that is really important was age. Age is a big determinant in driving behavior for the patrons, and also their library location. 
This could be a really good indicator in the future for how budgets can be allocated per location for, you know, libraries in a data-driven way, and not just because maybe somebody on a board or committee, you know, likes that location. So I also looked at the feature importance to see, okay, what impact did all these different features have, and also developed the model itself and did, like, a ranking of models through AutoML, and then chose a top model and exported that to see, you know, hey, how will we be able to get this to IPFS? So once the models are developed, you can see here, there's a model leaderboard. I'm going to skip forward a little bit. So, exporting the model. Right here, once the model was saved, you export the model, you know, into your directory of choice, then integrate with IPFS through the IPFS Python toolkit, then schedule it. That's through another library that's available in Python. So once you have your API and all the information inserted, then you can have your model exported on schedule to whatever IPFS node you choose. To me, I think this is potentially a really big improvement for machine learning models: it doesn't just have to be a black box, right? So for the machine learning models that are pretty popular right now, the AI tools that are popular, it's very black box. People don't get to see how the model is improving in real time, what's under the hood. They just know that it works, right? To me, this is just a, you know, beginning solution to what I think could be a completely transparent pipeline for machine learning models. And I want to continue to work on this and have an interactive dashboard to show the model as it's growing. So that, okay, if you're a taxpayer in San Francisco, you can go onto the city hall page and be able to see, okay, this is the model that we're using for the budget right now. 
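To make that export-and-publish step concrete, here is a rough sketch in Python. This is my own illustration, not the speaker's actual code: the function names (`serialize_model`, `add_to_ipfs`) are hypothetical, and it talks to a local Kubo (go-ipfs) node's HTTP RPC API on the default port rather than the IPFS Python toolkit mentioned in the talk.

```python
import json
import pathlib
import pickle
import urllib.request

# Assumed: a Kubo (go-ipfs) daemon running locally with its default RPC port.
IPFS_API = "http://127.0.0.1:5001/api/v0"

def serialize_model(model, out_path):
    """Persist the trained model to disk before publishing.

    Pickle is used here for simplicity; joblib or ONNX export would
    slot in the same way."""
    path = pathlib.Path(out_path)
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return path

def add_to_ipfs(path):
    """POST the saved model file to the node's /add endpoint.

    Returns the CID, which is what you would publish so anyone can
    fetch (and diff) the current model."""
    boundary = "model-upload-boundary"
    body = (
        (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="file"; filename="{path.name}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode()
        + path.read_bytes()
        + f"\r\n--{boundary}--\r\n".encode()
    )
    req = urllib.request.Request(
        f"{IPFS_API}/add",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        # /add streams one JSON object per line; the Hash field is the CID.
        return json.loads(resp.readline())["Hash"]
```

To re-publish on a schedule, as described in the talk, `add_to_ipfs(serialize_model(model, "model.pkl"))` could be wrapped in a cron job or a Python scheduling library.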
And here's the model growing in real time. I think it would make not just governments but also businesses more accountable for how they're driving budgets and how that has an impact on, you know, organizations. So the reasons why this is important: real-time transparency and pipeline openness, increased access for stakeholders, and improved budget allocation with data-driven outcomes. And the goal for this is to have machine learning available for the masses, right? Not just for all the techies and nerds, but for everyday people who want to understand what's happening and how it has an impact on their lives. But yeah, thanks for listening to me. Awesome. Another project that's identified kind of a unique problem and opportunity at the same time. Very cool. Thanks, Tim and Lou. Cool. Jason's up next. Let's play this video. Hey all, sorry I can't be there today, but here's my update on my project. I'm backing up web two data into web three. So the general idea is that we're going to be reading from an S3 event stream and converting that into Filecoin deals, storing that into Filecoin so that, you know, should something happen with the primary site, you can always restore it. There are other cool things you can do there with tiering, you know, kind of treat Filecoin as a Glacier tier and stuff like that. In this situation, we're going to be using Estuary as the buffer and deal maker. And then the big question is, can you get it back? What does that tooling look like and how does that all work? And, you know, my plan for now is to do it all manually. And so I've got something working that's reading off of the event stream here. You can see an example of a long message from it. I'm using MinIO and Kafka to mock out the S3 server. And that then copies that data down using the S3 API and uploads it into Estuary using the Estuary API. And it's filtering right now for just the object puts. It's not handling any tagging updates or deletes or any of that stuff yet. 
It prepends the bucket name onto the key so that it has, you know, some sort of hierarchy there. So when you're going in hunting for the file, you know what file name you're after. And then, yeah, it's pushing it up into Estuary. I've noticed it takes a very long time for deals to complete. When we were there at Cola Week, it took about two days for me to get deals validated on the network. Since then, I've done a couple more and they've gone through much quicker, in the range of hours now. So not days, much better. But, you know, the tooling with Estuary is a little cumbersome. I would like to have a little more control over it. So eventually I'm thinking of pulling that into the process itself, doing the deal-making then just with Lotus directly and generating my own CAR files and buffering into, like, an IPFS light node that just runs in process. And then, you know, here are the questions that we still have left that I haven't gotten to yet. Pulling the data back out: I have a suspicion that it's probably going to be easier to just pull the CAR file down and unpack it manually as well. Write a small little Go program to, you know, say, hey, this is the file that I want out of it and all that kind of stuff. I kind of punted on all the metadata storage. Of course, that's very important if you lose your primary site: you want to make sure all your tags and, you know, if the customer has modified the ETag on the object and stuff, that all gets preserved and comes back with it. So how do you store that? How do you represent that? Doing either a custom IPLD format or just throwing it in a JSON object next to it, pros and cons of each. Figuring out some sort of deployment for this. There are some Helm charts out there already for Estuary and Lotus and stuff, but I really haven't taken a look at them. So we need to figure that out. 
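For illustration, here is a minimal sketch of the event-filtering step Jason describes: keep only the object puts and prepend the bucket name onto the key. The event shape follows the standard S3/MinIO bucket-notification JSON format; the function name is my own, and the actual download from S3 and upload to Estuary are left out.

```python
import json

def puts_to_backup(event_json):
    """Yield (bucket, key, backup_name) for object-put records only.

    `backup_name` prepends the bucket name onto the key so the stored
    copy keeps some hierarchy and the file is easy to hunt down later."""
    for record in json.loads(event_json).get("Records", []):
        # MinIO emits eventName values like "s3:ObjectCreated:Put";
        # tagging updates, deletes, etc. are skipped for now.
        if "ObjectCreated:Put" not in record.get("eventName", ""):
            continue
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        yield bucket, key, f"{bucket}/{key}"
```

Each yielded tuple would then drive the two network steps: fetch the object via the S3 API and push it into Estuary (or, later, straight into Lotus deal-making).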
And then, you know, wrap it all up with some metrics so we get some nice graphing and alerting and all that kind of stuff on it. But that's where I'm at. Again, sorry I couldn't be there, but have a good one later. Awesome. I'm glad that he was able to get that video posted and share what he was working on. Yeah, great name. Might need to, like I said in the chat, maybe we need an award for some of these names. They're pretty creative. And let's move on to Laura. Yes. All right. So I kind of took a Matthew-like approach and started with a little bit of education. You can go ahead to the next slide, Dave. So just to kind of give a little bit of background, I know that you all know that my team has recently launched a new video series called Founders because I simply refuse to stop talking about it. So I won't go too far into that. But what you might not know is kind of the inner workings of a campaign. So I'm going to talk about that for a little bit before getting into my project. So basically for every public-facing video you see, there is a little hamster tunnel wheel of weird videos behind it that you don't see as the consumer. So we're A-B testing almost everything. So we have shorter clips, longer clips, clips with copy that align with like this audience versus this audience. We have a different intro here. In between every time we release a video, we're taking that data and we are seeing how we can optimize the next leg of the campaign. So to take that further, I'm going to go to the next slide, Dave. To get even like kind of into the nitty grittier here, my team's KR1 is to develop and distribute excellent content with the goal of growing our follower bases across PL channels. So these are our actual KPIs for the year. As you can see, we surprised and delighted ourselves by reaching our Twitter growth goal last week for the whole year using this campaign. So what that tells us is that the campaign is working on Twitter. Don't touch it. Let it work. 
Those mechanisms are good. On LinkedIn and YouTube, we have room for optimization. So that's what I focused my project on. And now that we have a month of baseline data from the four videos that we've already released, we can use that data to optimize the next leg of the campaign. Next slide, please, Dave. So the metric that stood out to me when we were looking at this first leg of data is that we have a standout audience among our three audiences. A little more insight here: for this campaign, we have three target audiences. We have Web3 builders; those are developers, people who work in the Web3 space. Then we have founders; these are more like, they're following YC, they're following Mark Rubin, they are achieving product-market fit. And then we have end users, which is kind of an experimental audience that we're testing. And these are people who are somehow connected to institutions like NASA, CERN, MIT, Stanford Research. So that's kind of our experimental audience. But we found that our standout audience here is the founders. And what I mean when I say standout audience is they have the lowest cost per acquisition. So what that tells us is that there are a lot of people in this audience who have not yet converted to a PL user. So they're efficient and cheap to talk to. And then they also have the highest view rate and the highest engagement rate. So this is a marriage made in heaven, because the most efficient, cheap people to reach are the people who are showing the most interest in PL's content. So I thought, how do we make some sort of series that is a little more targeted towards this group that we can test out, that isn't going to cost us anything, because our big production budgets for the rest of the year are already committed? So you can go to the next slide, Dave. 
I was thinking back to when we interviewed all these founders in Lisbon and we asked all of them, if you could give advice to a new founder, what would you say? And then I promptly cut that footage out of pretty much everyone's final interview because it just wasn't flowing correctly with the rest of the information. So I asked our editors, go get that footage and let's make a new little short series so that we can test with this new priority audience. And so we kind of bundled it in two different formats. So we have one format that is more targeted for a LinkedIn paid media campaign. You can see it. It looks a little bit different. And then we're also piloting on YouTube shorts. And obviously we chose these two platforms because they are the ones that we are trying to invest in to make sure that we hit our KPIs for the year. And the reason we want to pilot it on shorts is because it's a new feature on YouTube. And basically any time a platform is rolling out a new feature, you want to test it out because the platform itself is going to prioritize accounts that are interacting with new features because they want users to adopt their new features. So we needed a shorts test anyway. I wanted to test with this new audience. And this basically cost us nothing because we already have a motion agency on retainer. And we're launching on Tuesday. So keep an eye out for this. And then the process starts over. We get data back and we'll keep iterating on it. Awesome. Very cool. Looking forward to seeing the release on Tuesday and where this goes. I really like that data you shared earlier. It sounds like you're crushing it on LinkedIn. Twitter. Twitter is where we're crushing. I'm sure the others will follow shortly. So looking forward to seeing how that develops. Let's see. Up next we've got Ishan. I'm Aslan from Functionland. If you go to the next. So I just want to elaborate on the problem that we are solving with an example. 
Let's say you've been using a service like Google Photos for 10 years, putting your images there. And Google decides to now charge you $140 a year. Switching to another service is not that easy, probably impossible, because right now these service providers lock in our data. They don't provide us an easy way to own our data. It's like the early days of gaming, you know, when you had to switch out the whole Pac-Man machine to add a game. And then these gaming cartridges came along to decouple the game from the machine. So now, if you go to the next slide, we are thinking of a new way to give the ownership of the data back to the users, to decouple the data from the service that we use. This creates competition among service providers; we are no longer locked into Google. A small provider can create a better Google Photos alternative and we can simply switch. It makes it cheaper for users because it increases the competition. And also, now those service providers are no longer locked out of data that sits in Google, so it increases the revenue for them as well. If you go to the next slide: this is what we created, called Crowd Storage, which is powered by a set of protocols we call FULA. So it's a P2P decentralized data storage protocol that allows users to share the unused storage on their devices with each other. We are actually segregating the network into pools. So let's say I'm in Toronto; I join a pool in Toronto. Within this pool there are 200 people, and we share our storage with each other. I back up the data of some people in this pool, and they back up my data. And each piece of a file is backed up with a replication factor of three on the network, so that if a few of them go down, I still have access to my data. Can we go to the next slide? And also, now that we are giving the data back to the users and creating this decentralized network, we can add the power of blockchain to it. 
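The talk doesn't say how FULA actually places those three replicas within a pool, so purely as an illustration of the idea, here is one common way a replication factor of three can be implemented deterministically: rendezvous (highest-random-weight) hashing over the pool's peers. Every detail here is an assumption, not FULA's real placement logic.

```python
import hashlib

def replica_peers(shard_id, pool, replicas=3):
    """Deterministically pick `replicas` distinct peers from `pool`.

    Every node scoring the same shard against the same pool computes
    the same ranking, so no coordination is needed to agree on where
    a shard's copies live."""
    ranked = sorted(
        pool,
        # score each (shard, peer) pair; highest scores win
        key=lambda peer: hashlib.sha256(f"{shard_id}:{peer}".encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:replicas]
```

A nice property of rendezvous hashing for a pool like this: when one peer leaves, only the shards that ranked that peer in their top three move, rather than the whole placement reshuffling.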
So transactions can happen to monetize this network for developers. And the way it monetizes for developers: when a developer creates an application, let's say this photos app, users pay the developer with tokens directly. So, like, no middleman in between: users earn tokens from sharing their storage with each other and pay those developers. And traction-wise, we actually have hardware; we've sold about 900 of those devices globally. So we'll have nodes with more than one petabyte of storage initially when we go live. We are shipping those nodes right now. And we created two dApps, FX Files and FX Photos, which are going to be replacing Google Files and Google Photos. They are available on Google Play and the App Store. And if you go to the next slide: this is our stack. So we are heavily based on IPFS. We use WNFS, which is created by Fission, to actually encrypt the data on the client, shard the data, and send the shards through our protocol, which is based on libp2p and Graphsync, to the backend network, back it up on the backend network, and access it basically on this local pool. And can you go to the next slide? If you play this video, it's a short demo of how FX Photos connects with the backend. Is it playable? And can you go to the settings and maybe just increase the speed to 1.5? Thank you. So yeah, you basically log in with anything; you can even log in with Google. We just use that login to create your DID, like a key. So you log in; that's your DID on the app. You add your backend. Actually, this is now improved; you don't have to add the address like this. This was the previous version. Yeah, we added, like, an announce feature so it automatically picks up your backend. So now you get this interface. This is a mobile application. So we are focused on mobile applications, not browser-based applications. 
And you can see, like, you upload a photo, it gets encrypted and transferred to the backend. Can you go to the next? I'm not sure if that's the last one or... oh yeah. And this is the team. We have senior software developers, and Massey actually, and we have tokenomics leads and graphic designers and hardware designers on the team as well. Thank you. That's awesome. Thanks for sharing. And oh yeah, I didn't know Massey was working on this. He's hosted a number of awesome Q&A sessions for Launchpad in the past, so I'll have to tell him that I heard about this now that I know he's on the team. Thanks for sharing. Andre's up next. I saw earlier he had to hop off. Andre shared a bit of this project idea with us in Denver, a map of technological opportunities. I'll just leave this on the screen for a second; you can read through what his project goal was and some of the challenges. So congratulations, everybody. Thanks again for sharing those. I know that it can seem a little burdensome to be working on a project showcase after, you know, we sort of lose a bit of the momentum post-Cola Week and you're getting pulled in other directions by your teams. But I think it's really awesome to see all the creative contributions and ideas that you're coming up with. And congratulations, you have now reached the end of your Launchpad journey. There are a couple of slides here I'd like to run through just before we wrap. You are entering, launching into the network. Many of you have been in the network for a while, but now you've completed Launchpad. If you also completed all of the pre-tests and end-of-section tests within the curriculum, we're working on rolling out some learning credentials as something tangible that you can take with you to show that you've completed Launchpad and this four-week journey of learning about Protocol Labs, the tech, web three, and the network. 
And once we have the credentials ready to go, those will be shared with you based on the fact that you have completed all the requirements of the program. In terms of awards that you will receive in the near future, please take a second to vote on all of the presentations that we've heard today. These will be announced probably tomorrow. We used to do it at our final weekly sync, but we don't have a weekly sync tomorrow, so they'll be announced on our Filecoin Slack channel. You can vote; I think you log in and you vote once. You can vote for the same presentation for more than one award if you'd like. It's just a drop-down list, and all of the presentations should be on there. If there are any issues accessing the form or you notice something's missing, please Slack me. But awards will be given for the biggest contribution to existing projects, most impactful technical contribution, most exciting project, best presentation, best collaborative effort, most likely to be used, and the most valuable, the MVPL, most valuable for PL. So please take a second to open up the link. Scan that QR code, or in the showcase deck you can click on that "vote here" link at the bottom. And we'll tally those votes up later and announce the winners tomorrow. And now the future is all in your hands. Go forth, continue to be awesome, and create these great contributions that we've seen previews of today. It's been an awesome few weeks working with you all. I hope we stay connected, and I look forward to seeing you online and in person at future network events. So thank you all. Thank you all. Yeah, actually, thanks, Kyle, for mentioning this. Thanks for the thanks. You're all overly generous, but a lot of this praise needs to go to the people who are not in front of the camera, who are on the Launchpad team that makes this possible. You know, I'm just the face here. And so thank you to the team. And I think that brings our showcase to a close. Thanks, everybody. 
Have a great rest of your day. Have a great weekend. And I'm sure I'll see you all somewhere soon, hopefully. Thanks a lot.