Hello, everyone. Welcome to this month's Mother of All Demo Days meeting. Today we're going to have three demos, from Bedrock, Compute Over Data, and ConsensusLab. For those who are new here or haven't attended in a while: once a month, the Starfleet teams get together to share progress on their projects in the format of a demo. Hence, Mother of All Demo Days. First up, we have DVD presenting from Bedrock.

Hi, everyone. I'm David Drujanski, an engineering manager on Bedrock. This is a bit of a meta demo: I'm going to talk about the value of dogfooding and run a group exercise in the second half. The goal is to explain what dogfooding is, why it's valuable, how to do it (it may not always be straightforward for some of the teams here at Protocol Labs), and how we do it on Bedrock specifically. Hopefully by the end you'll understand the value of dogfooding and think about ways to apply it to your team.

So, what is dogfooding? Dogfooding is the process of regularly using your own product, otherwise known as eating your own dogfood, and experiencing it as a real user. I say product because it doesn't have to be software; people have done this for hardware and physical objects too. But at Protocol Labs we focus on software, so we'll lean into those kinds of examples.

Why is this valuable? The goal is to build user empathy, and through that, a better product. You'll be surprised by the number of insights you pick up as you step into the role of a user: taking off the developer hat, the engineering hat, and just using the product, you start to see it in a different way. We've found on Bedrock, and historically, that when you dogfood you'll realize, hey, this bug is really annoying, I should fix it right away.
Or you'll simply understand the overall product better: oh, this is what this feature does, this is why it's useful. On Bedrock we have several teams working on different areas of the tech stack, so by rotating dogfooding tasks we get exposure to other parts of the system and learn how those parts of the codebase work. It doubles as technical knowledge sharing: just by using different parts of the system, you become more familiar with them. Ultimately, it tightens the feedback loop of the development cycle: use the product regularly, get feedback regularly, and you improve it that much faster.

How do you actually dogfood? The ideal way is to use the software on a regular basis. The prime example, and what really popularized dogfooding, is Google: they use it in almost all of their software products. Take search, their biggest product: you use search every day, and if you don't get the result you were looking for, you send that off to the team and they make improvements. The key is having a list of users willing to deal with bugs and with not-yet-production-grade software, so that you get that feedback regularly. Maybe my favorite example is from Apple. When they were designing the iPhone, the design team, for their version of dogfooding, carved blocks of wood and carried them in their pockets to simulate the experience of having the device with them at all times, and built a better product that way. That's an extreme version of dogfooding on the design side, but it shows you don't even need a fully functioning product to test out what it's like to experience it. So that's the ideal.
You could use a product every day. At PL, though, not all of our products are used every day, at least not yet. So how do we simulate that experience, or get that feedback sooner? On Bedrock, we use a team rotation. There are three teams on Bedrock; for every team meeting, one team creates a task that everyone has to complete ahead of time. The task is time-boxed to 10 minutes, and everyone on the team, whether you're an engineer or a product manager, attempts to complete it. Then at the meeting we discuss how it went, collect all the feedback, and the team can use it going forward to improve their product.

The keys here are that the tasks are simple, both to create and to attempt, so that anyone can do them: it doesn't matter whether you're an engineer who knows how the code works or, say, a TPM who just wants to use it. It has to be easy to provide feedback. And maybe the most important thing: failure is okay. What do I mean by that? If the user can't finish the task, that's most likely a problem with the software, not with the user. Getting into that mindset, that we expect this to fail and it's okay if it fails, produces really valuable feedback for the team: it means the task wasn't easy enough, or the software wasn't good enough for the user to complete it. That's expected, and I think it's a good mindset to have when you're dogfooding these versions of the software as we develop them. This slide is a visual map of how that works. Again, we have three teams on Bedrock, and we rotate them: one team creates a task, everyone spends 10 minutes sometime during the week trying to complete it, we discuss in the meeting how it went, we give that feedback to the team, and then it rotates.
So that's how we do it. Okay, we're going to try a really quick group exercise. On the right is the template we use on Bedrock for creating a task: you name the task (this is actually a copy of the latest dogfooding task we did on the team, the demo day edition), set when you want people to complete it by, note who made it, and write the task details. You can see these are pretty straightforward, simple details. Again, 10 minutes.

To run through it: we're going to use a product built on Bedrock by the Tornado team, called Lassie. Its goal as a product is that you give it a CID and it fetches the content. The key distinguishing factor is that it doesn't matter whether the content is on a Filecoin node or an IPFS node; it does its best to find it. We're going to run Lassie's HTTP server so we can test out the HTTP API and make sure we can get content like we would from any other HTTP server. This is actually used in the Rhea project with Saturn, which is why we dogfooded it earlier. I've also listed a few CIDs here, and we're going to try fetching those specific CIDs and see how it goes.

Because of time, I'm going to run through this quickly. First I'd install the latest version of Lassie. I already have it installed, so you won't see any new downloads, but it's there. Then we run the server; I'll run it in the top left corner here. Okay, the server is up, on port 5050. Now I'm going to attempt to get a CID. This is a template, so I'll copy one of the CIDs from below and give it a name; let's call it cid1. Lassie outputs everything in CAR file format, so I'm going to run that.
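For anyone trying this step outside the meeting, the fetch can be sketched in a few lines of Python against Lassie's HTTP interface. This is my own illustration, not part of the demo script: it assumes the Lassie daemon is serving its `/ipfs/{cid}` endpoint on port 5050, as in the demo, and that requesting the CAR media type returns the raw blocks.

```python
# Sketch: fetch a CID as a CAR file from a locally running Lassie daemon.
# Assumes the daemon is listening on port 5050, as in the demo.
from urllib.request import Request, urlopen

LASSIE_BASE = "http://127.0.0.1:5050"  # demo port; adjust to your setup

def car_request(cid: str, base: str = LASSIE_BASE) -> Request:
    # Lassie serves content at /ipfs/{cid}; asking for the CAR media type
    # returns the raw IPLD blocks rather than a decoded file.
    return Request(
        f"{base}/ipfs/{cid}",
        headers={"Accept": "application/vnd.ipld.car"},
    )

def fetch_car(cid: str, out_path: str) -> None:
    # Network call: requires the Lassie daemon to be running.
    with urlopen(car_request(cid)) as resp, open(out_path, "wb") as f:
        f.write(resp.read())

# fetch_car("bafy...", "cid1.car")  # as in the demo's "cid1" step
```

The same request is what `curl` would issue; the Python form just makes the URL and header explicit.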
Now, the cool thing is that in the HTTP server logs you can see how it requested the content and which provider it's actually getting it from, which is pretty neat. And just because we're here, and to make it a better demo, I'll do the same thing for the second one. Okay, this one's a little more interesting: it looked at a few more providers. You can read through the logs if you're interested, and it tells you the overall duration. If I look at my directory, I now have two files.

The next step in the demo, sorry, in the dogfood task, is to view the file. So I'll go ahead and view the CAR file. I'll try the first one. Okay, that's kind of a blob; interesting. Then I'll look at the second one as well. Okay, that one is actually a text file, so it's a little easier to parse. I don't have time right now, but the first one is actually a PNG image on IPFS, and this one is an XML feed for a blog. That's it. Afterwards, I could leave feedback below; that's how we do some lightweight feedback. Ultimately either the user or the team can file these as GitHub issues, and we can start prioritizing them in the backlog after talking about them. Here's some example feedback I wrote previously, and you can see what others on the team wrote: Jacob actually figured out how to open that blob more easily, though it basically requires some CAR expertise, using the go-car library. So that's an example of how we dogfood.

I think that's about it. The only other thing is, again: think about how you can dogfood on your team. It doesn't have to be every day, but if you can do something like a rotation, I think it'll bring a lot of value to the software you build. And I'm happy to help you do that as well.
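Jacob's trick for opening the blob relies on go-car tooling, but the framing of a CAR file itself is simple enough to illustrate. The sketch below is my own illustration, not part of the dogfood task: a CARv1 file starts with an unsigned-varint length prefix followed by a DAG-CBOR header describing `{version, roots}`, and the blocks that follow use the same varint-length framing. This just reads out the raw header bytes; you'd hand them to a CBOR decoder to inspect the roots.

```python
# Sketch: read the varint-framed header that starts every CARv1 file.
import io

def read_varint(stream) -> int:
    # Unsigned LEB128 varint, as used by CAR's length prefixes.
    shift, result = 0, 0
    while True:
        b = stream.read(1)
        if not b:
            raise EOFError("truncated varint")
        byte = b[0]
        result |= (byte & 0x7F) << shift
        if not (byte & 0x80):
            return result
        shift += 7

def read_car_header(stream) -> bytes:
    # A CARv1 file is: varint length, DAG-CBOR header ({version, roots}),
    # then a sequence of varint-length-prefixed blocks.
    length = read_varint(stream)
    header = stream.read(length)
    if len(header) != length:
        raise EOFError("truncated CAR header")
    return header
```

With the real files from the demo, `read_car_header(open("cid1.car", "rb"))` returns the DAG-CBOR header bytes; decoding the blocks after it is where the go-car library earns its keep.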
If you're stuck or not quite sure how to set it up, feel free to use our templates, which are linked in the presentation. We can also jump into a chat sometime and I, or anyone on the team, can help out. So thanks.

Thanks so much for sharing, DVD. Up next we have Wes.

Brilliant, thank you so much. My name is Wes Floyd. I'm a product manager on the Bacalhau team, and I'm presenting our new project, Waterlily, on behalf of a much broader team. Ally Haire is one of our project leads; Simon from our team has been doing a lot of development; and Kai, Luke, and Irina from our team are helping. Really, a lot of the Bacalhau team participated. I just want to call out that there are many more folks doing the work; I happen to be in the best time zone for this session, so I get to take you through the content.

I'm going to take you through three components. First, we'll talk a little about how Bacalhau fits in with the FVM, a brief refresher. Then we'll talk about Lilypad, an important new component we've built between the FVM and Bacalhau, sort of a bridge. And then I'll spend most of the time talking about ethical AI-generated art, a really interesting, novel approach; I think this is one of the first projects ever to compensate artists on chain, not for their work itself through NFTs, but for derivative styles of their work. It's a fun use case, so we'll jump right in.

For a little background: Bacalhau is almost like an L2 on top of the Filecoin chain. The Filecoin chain, and the FVM, is where a lot of our coordination work happens; Bacalhau is an off-chain compute ecosystem.
You can find more information about the architecture at bacalhau.org. Effectively, it can run any compute that can be containerized in a Docker container, or shipped as a WASM binary, in batch mode across the network of Bacalhau machines. We're really trying to get the best of both worlds: the trust and verifiability of on-chain execution, paired with the verifiability of these new off-chain compute systems, for more robust, complicated workloads, like, in this case, model inference for generating art.

Project Lilypad is the bridge between layer one and layer two. Lilypad is effectively two components. One, it's a smart contract on the Filecoin Virtual Machine that listens for events: people who want to invoke a Bacalhau job invoke it through the contract. And two, it's an off-chain daemon that listens for the events triggered through the Lilypad caller smart contract and then actually triggers those Bacalhau jobs. So it's a bit of a bridge at this point; I'm not going to use the word duct tape, I'm going to use the word bridge. Please find out more about Lilypad at the website and on GitHub, where you can see the source code. This is a component we'll potentially open up to broader use cases; we'll talk for a minute at the end about other applications of this technology. If you're thinking about scenarios where you have on-chain workloads, smart contracts that are very compute-intensive but could have more robust capability with off-chain compute, or if you're already doing that off-chain compute today in AWS or GCP but would like it to be more trustless, verifiable, and open, those are exactly the types of use cases we want to help you with. This is a demo from Ally's machine of actually invoking the Lilypad caller Solidity contracts.
In our GitHub repo you'll see lots of examples of how to build your own. Effectively, it enables a user to pay for a job using FIL (or tFIL if you're on testnet) and to specify, as a string input, the spec of the off-chain job they want to run: what's the Docker container name, what specific code do you want to run. Then it invokes it and sends it on chain.

Here's an example. The previous examples we used this for were generating Stable Diffusion images. Stable Diffusion, by the way, for folks not into machine learning, is a framework for generating art: we give it a text prompt, say "generate unicorns and rainbows", and the AI can magically create that art. No human had to draw this, so it's very powerful stuff. But what's interesting, and people have talked about this in the past, is to ask: what if I wanted to generate that, but in the style of Van Gogh, or the style of Pablo Picasso? This style transfer is another layer on top of Stable Diffusion, and it's a really fun application of the two technologies combined. That's what we were working on in the past; you can see some examples of how the AI automatically generates different combinations, all random, all unique each time.

The next thing we built on top of this was Waterlily. The goal is this: there are a lot of underrepresented artists, and a lot of opportunity for them to better monetize their work. So instead of taking their work and selling that work on chain, what if we could train on their style? Say there's a new artist in the space, let's call her Misty, and Misty has a tremendous amount of work, 40 or 50 different pieces in her collection. We're not going to do anything with her copyrighted work itself.
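Since the contract takes the job spec "as a string input", here is a toy sketch of what assembling such a spec could look like. To be clear, the field names (`image`, `cmd`, `prompt`) and the JSON shape are hypothetical placeholders of my own; the actual spec format and contract interface are in the Lilypad GitHub repo.

```python
# Illustrative only: building a job-spec string of the kind Lilypad
# passes on chain. Field names and format here are hypothetical; see
# the Lilypad repo for the real contract interface and spec format.
import json

def build_job_spec(image: str, cmd: list, prompt: str) -> str:
    # The contract accepts the spec as a single string, so we serialize
    # the Docker image name and the code/prompt to run as JSON.
    return json.dumps({"image": image, "cmd": cmd, "prompt": prompt})

spec = build_job_spec(
    "example/stable-diffusion:latest",  # hypothetical image name
    ["python", "main.py"],
    "unicorns and rainbows",
)
```

The point is only that a single opaque string is enough to carry "which container, which code, which input" from the chain to the off-chain daemon.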
We're just going to generate a style: effectively an ML model that represents her style. So when I generate rainbow unicorns, I want to generate them in Misty's style specifically, and I also want a portion, or all, of the payments to go to Misty. That's the impetus behind Waterlily. It's a great way to bring all these different concepts together in one place, with a nice, sort of humanitarian output.

We're going to be launching this project in the next few days; you'll see more information if you visit waterlily.ai. We're still working a few bugs out, and still growing the number of artists we have on the page, but I want to give you a little grounding in what it's going to look like. Here are a couple of fun examples from our internal testing of what it looks like when you apply this style transfer to generated art. We found a public-domain artist from the 1800s who had done lots of drawings of Native Americans, English settlers, and things like that, and we gave it the text prompt "generate a picture of Barack Obama". So this is Barack Obama as if that artist from the 1800s had drawn him, which of course he did not. And here's a generated image of Captain America; again, these are all generated images. In this case, after the work comes back from Bacalhau, we'll be sending the funds to the artists themselves; for public-domain artists we'll probably donate to a charity that aligns with the Filecoin Foundation's mission. A couple of other fun examples: we can even take things that are very messy. These are stills from an artist who did a lot of noise-generated art with music; we took the stills, trained on their style, and now when we say "generate rainbow unicorns", we get rainbow unicorns with that style applied. Some other examples here are from a 1920s artist with some interesting illustrations.
And now, what does it look like as if that artist had drawn Barack Obama? Lots of fun things we can do there; we're really just scratching the surface.

In terms of what's next, we're going to be building on a couple of things. One, we're building on our partnerships in the decentralized science space. We've got some partners who are very interested in bioinformatics pipelines that generate NFTs; the back-end work would go through Bacalhau, and the NFTs would align well with their mission. More to come on that. We also have other partners interested in improving the ability to generate yield in decentralized finance. Rather than just having a smart contract execute a trade on your behalf, with Bacalhau, which can consume large amounts of important, clean information from IPFS and Filecoin, you could run more sophisticated models of how you want to buy, sell, exchange, and create loan contracts within DeFi. There are just lots of interesting areas when you start to combine the power of the FVM with the power of off-chain compute, and we're very excited about it.

If anyone would like to get in touch with us, please reach out. We've got the GitHub information here for these various projects, plus Twitter accounts. I'm available in Filecoin Slack as Wes Floyd. And you can see here that DeveloperAlly is the Twitter contact for Ally, who runs a lot of the project on a day-to-day basis. Thank you for the opportunity to present; that's all I have, and we appreciate it.

Awesome, thanks so much, Wes. I can't wait to see what's next, so please come back and demo again. And last but not least, we have Alfonso.

I'm Alfonso de la Rocha, a research engineer at Protocol Labs in ConsensusLab. What we're mainly working on, and what we're focusing on, is IPC.
The idea behind IPC, as many of you may already be aware, is to scale the Filecoin blockchain horizontally: to run subnets that are able to interoperate with the Filecoin mainnet and anchor their security to it, as a way of deploying new applications with more scalability and new features. Last year we were focused on figuring out how to do this and building an MVP, and this is what it looked like. We had Lotus, everything was a monolith, and running a subnet was as easy as calling a command in Lotus; it would run the subnet for you and you could start interacting with it. The user experience was great, but it had a lot of problems once you wanted to move into production, because if a subnet failed it was really hard to recover: everything was handled with goroutines, and it was a nightmare.

As we move toward production, this is what the architecture looks like, and it's what I'm going to show today. I actually think I demoed this six months ago, but with the old architecture; now we're moving to the production architecture, and I'm going to show how it will look as we move toward mainnet. The idea is that we have a new piece of software. Instead of having everything in the same Lotus instance and handling everything there, we now have independent networks and subnets. The rootnet could be the Filecoin mainnet, and we, or the community, could run different subnets, and there's a piece of software called the IPC agent that orchestrates all of the communication with the different blockchains.
So instead of running a single Lotus that has to know how to interact with every blockchain, which is limiting, because once these subnets start using other technologies, it's really hard to embed every consensus algorithm and every single feature into Lotus, we are decoupling what we call the replicas, the processes that run the blockchains of the subnets, from the IPC agent, which is our IPC client and the tool we use to operate IPC across all of these networks. So if someone wants to run a subnet in IPC that communicates with the mainnet, communicates with some L2, and runs another L3 on top, this is the architecture, the different processes, they would have to run: the IPC agent communicating with nodes, peer implementations, in each of these networks. You can see this allows for a much more decoupled architecture. Before, we were all happy and young, interacting with a single process; now we have a lot of processes, and a lot of overhead to handle.

If you want to look at the code: before, everything was in our fork of Lotus, which we call Eudico. Now there are a bunch of other repos. We have the IPC agent, the client I'm going to show, which is used to orchestrate the different subnet instances our applications want to interact with. Then we have the IPC actors: before, with the legacy VM of Lotus, everything lived there; now we target the FVM. Actually, we're not targeting the FVM yet: everything is in Rust, but we're moving these contracts to Solidity so we can also live in user land on the FVM. Right now, what we do is ship a custom bundle in our subnets that includes our FVM actors compiled to Wasm. So if you want to look at the actors that run all of the on-chain logic for IPC, you can go to this repo.
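To make the rootnet/L2/L3 hierarchy concrete: IPC names subnets hierarchically, with each child's ID extending its parent's path (the demo later shows a subnet under the root with a path like /root/t0102). Here is a minimal sketch of that naming scheme; it's my own illustration, with the root path and address strings as placeholders, not the real SubnetID type from the IPC repos.

```python
# Sketch of IPC's hierarchical subnet naming: a subnet's ID is its
# parent's path extended with the child's actor address. Illustrative
# only; the real SubnetID implementation lives in the IPC repos.

ROOT = "/root"

def child_id(parent: str, actor_addr: str) -> str:
    # A subnet deployed under `parent` is addressed by extending the path.
    return f"{parent}/{actor_addr}"

def parent_of(subnet_id: str) -> str:
    # The parent is everything up to the last path segment;
    # the rootnet itself has no parent.
    if subnet_id == ROOT:
        raise ValueError("rootnet has no parent")
    return subnet_id.rsplit("/", 1)[0]

l2 = child_id(ROOT, "t0102")  # an L2 under the rootnet
l3 = child_id(l2, "t0103")    # an L3 under that L2
```

The agent's job is then to hold one connection per path segment: one node per network on the route from the root down to the leaf subnet.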
And finally we have, of course, Eudico, our fork of Lotus, which includes a new consensus algorithm and does all the heavy lifting on the blockchain side to run each of these different subnets; it's the peer implementation for each subnet. As I was saying, there's a lot more to run now than before, so we're trying to improve the UX. We're working on a lot of documentation, and it would be great if all of you could start testing this and giving feedback. DVD, it's perfect that you did that demo, because we really need to do the same and figure out the right UX for this tech.

With that, I'll jump into my demo. As I was saying, our main process for interacting with IPC is the IPC agent, so the first thing I'm going to do is run my IPC agent. Now the agent is running; it's a really slim process that does all the interaction with the different blockchains. At this point we have nothing: IPC isn't there yet. So we need to run some rootnet and, from there, deploy a subnet. The first thing we do is use this convenient script to run a rootnet. It's a simple script that runs a single-validator rootnet for testing, and it simulates what our interaction with the Filecoin mainnet would look like if we wanted to deploy a subnet over Filecoin. This may take a while, because under the hood it's deploying a Docker container, creating a Lotus daemon, and then running a Mir validator. We're actually simulating not the Filecoin mainnet but our Spacenet testnet, which runs Mir consensus; Mir is a BFT consensus that goes a bit faster. Our subnets ship with this faster consensus; you can configure any block time you want, but we have it set to a one-second block time.
So deploying an actual smart contract should feel faster than it does on the Filecoin mainnet today. Yeah, I should have prepared a joke or something for the wait. In the meantime, let me share the other piece... oh, okay, cool, that was unexpectedly fast. Here you see that we've deployed the rootnet, and the script gives us a bunch of information that our IPC agent needs to interact with it. In this case we only have one node in the rootnet, but we could have more nodes running, or be interacting with the Filecoin mainnet, or whatever other architecture we want. Here it says we're running in this container; that's not important. What is really important is our default wallet, and the token for interacting with our peer implementation in the root. The reason is that we have to give our IPC agent the credentials to interact with the rootnet. In this case, the rootnet is listening on this port, and I'm going to give the agent the token. With this, our IPC agent now knows how to interact with the rootnet, and we can create a new subnet.

When we tell the IPC agent to create a subnet, what we're actually doing is deploying what we call the subnet actor: the smart contract in the rootnet that will govern the operation of the subnet. And... hello, something failed. Okay, I probably didn't copy-paste this correctly. Right, I forgot to reload the config. I should have made a cheat sheet for this. So, once I get the token, I have to put it in my config and reload the config. Now you see that we created the subnet actor that is going to govern the new subnet, and it says we have a new subnet actor where we can run a subnet with ID /root/t0102.
Now we're actually going to run a node for that subnet. We also have a convenient script that shows how this works, but so you can see the logs and what's happening under the hood, I'm going to run the node manually. Here I'm just running a Lotus daemon with the genesis of this new subnet. Next, we initialize the validator for the subnet: we initialize it and import our wallet. And you'll see that right now, if I try to run the validator, it's going to fail. The reason is that I haven't joined the subnet. My node interacts with the parent and has to ask permission from the subnet actor we just deployed, because the parent is the one that governs the subnet. So what we need to do is join the subnet in the parent. To do that, we have to announce our validator address: we take any of the multiaddresses for my validator, and now we join the subnet. From the IPC agent, we say that we want to join this new subnet, that we want to put this amount of FIL as collateral, and this is where we want other validators to find us. We copy that in, and once we've joined the subnet, the parent has us registered as a validator of the subnet. You can see that when we try to start the validator again, now we're part of the subnet: we have the rootnet, where we're mining, and also a new network, the subnet, where we could be running any kind of application. And again, with the IPC agent we can handle the whole life cycle of our subnet.
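The life cycle the demo walks through (deploy the subnet actor, join with collateral, validate while joined, leave and withdraw the stake) can be summarized as a small state machine. This is my own illustrative model of the behavior shown on screen, not the agent's or the subnet actor's actual API:

```python
# Illustrative state machine for the subnet life cycle in the demo:
# a validator joins with collateral, may validate while joined, and
# loses mining rights once it leaves. Not the real agent API.

class SubnetMembership:
    def __init__(self):
        self.joined = False
        self.collateral = 0

    def join(self, collateral: int) -> None:
        # Joining registers us with the subnet actor in the parent and
        # puts up collateral; only then may the validator run.
        if collateral <= 0:
            raise ValueError("collateral required to join")
        self.joined = True
        self.collateral = collateral

    def can_validate(self) -> bool:
        # Mirrors the demo: the validator fails to start until joined.
        return self.joined

    def leave(self) -> int:
        # Leaving withdraws the stake; a running validator then crashes,
        # since it no longer has the right to mine in the subnet.
        if not self.joined:
            raise ValueError("not a member of the subnet")
        refund, self.collateral, self.joined = self.collateral, 0, False
        return refund
```

The "validator crashes after leaving" moment in the demo is exactly the `can_validate()` flag flipping back to false once the stake is withdrawn.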
We could deploy another subnet: for instance, we could create another subnet with its parent in the root. Its ID in this case was t01002, and I could run my own application on my own nodes for that subnet. I can also leave the subnet, and if I leave, I take out my stake. So here in the bottom right, where I'm mining: once I leave the subnet, this validator should crash, because I no longer have the right to mine in that subnet. I leave the subnet, and once the transaction goes through and the stake goes down, the validator crashes, because I no longer have access to mine. So that's the life cycle of a subnet. Sorry about the hiccup; I completely forgot about reloading the config.

We also have conveniences. I showed you how to run a node for the subnet manually, but we're also working on scripts to run it. That would take a long time, so I probably won't do it here, but we can run these subnets in Docker: you specify the subnet you want to deploy a node for, and so on, and all of these processes I've been running locally can be containerized. The idea is that we're trying to figure out the right UX, so once it's ready, we'd really appreciate all the feedback you can give us to make deploying subnets as easy as possible. Thank you.

Awesome, thanks so much, Alfonso. Great demos, everybody; thank you so much. Looking forward to seeing everybody come back next month, which will be April 20th. Thanks, you guys, and have a great day. Thanks for coming.